Intel MPI Library is a high-performance, interconnect-independent, multi-fabric implementation of the industry-standard Message Passing Interface, version 3.1 (MPI-3.1).
Intel MPI uses OFI libfabric as its communication runtime from the 2019 release onwards. Libfabric provides two network providers for InfiniBand: the "verbs" provider and the "mlx" provider. The verbs provider is implemented over the InfiniBand verbs (ibverbs) interfaces, whereas the mlx provider is implemented over OpenUCX. The network provider can be selected at runtime via the FI_PROVIDER environment variable.
To select the verbs provider, set FI_PROVIDER=verbs.
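For example, a minimal shell sketch (assuming Intel MPI 2019's environment has already been sourced, e.g. via its `mpivars.sh` script):

```shell
# Select the libfabric "verbs" provider for MPI jobs launched from
# this shell; Intel MPI reads FI_PROVIDER at startup.
export FI_PROVIDER=verbs
echo "$FI_PROVIDER"   # prints "verbs"
```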
To select the mlx provider, set FI_PROVIDER=mlx.
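Analogously for mlx (a sketch; note that the mlx provider additionally requires a working UCX installation on the nodes, since it is layered over OpenUCX):

```shell
# Select the libfabric "mlx" provider, which routes Intel MPI
# communication through the UCX library.
export FI_PROVIDER=mlx
echo "$FI_PROVIDER"   # prints "mlx"
```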
The following figures depict point-to-point MPI performance with Intel MPI 2019, using the verbs and mlx providers. The measurements were taken with the OSU Micro-Benchmarks on two Azure HBv2 VM instances running the CentOS HPC 8.1 VM image, with Intel MPI 2019 Update 7. The two host nodes are connected to the same leaf InfiniBand switch.
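As an illustration, a two-node latency run of this kind could be launched as sketched below. This is an assumption about the setup, not the exact command behind the figures: the hostnames `node0,node1` and the benchmark binary location are placeholders.

```shell
# Run the OSU point-to-point latency benchmark across two hosts,
# one rank per node (-ppn 1), using the mlx provider.
export FI_PROVIDER=mlx
mpirun -np 2 -ppn 1 -hosts node0,node1 ./osu_latency
```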
This blog lists the configuration options for selecting the InfiniBand-based network providers of Intel MPI 2019 and gives an overview of their performance characteristics. Intel MPI 2019 Update 7 is available in the Azure HPC images and can be deployed through a variety of deployment vehicles (CycleCloud, Batch, ARM templates, etc.). The AzureHPC scripts provide an easy way to quickly deploy an HPC cluster using these HPC VM images.