Mellanox UCX and Open MPI on DARWIN
During early-access testing of the DARWIN cluster, several users reported unexpected crashes of their Open MPI applications. The crashes were accompanied by a running stream of kernel messages:
  [Sat Feb 6 16:51:55 2021] infiniband mlx5_0: create_mkey_callback:148:(pid 0): async reg mr failed. status -12
  [Sat Feb 6 16:51:55 2021] mlx5_core 0000:81:00.0: mlx5_cmd_check:794:(pid 0): CREATE_MKEY(0x200) op_mod(0x0) failed, status limits exceeded(0x8), syndrome (0x59c8a4)
  [Sat Feb 6 16:51:55 2021] infiniband mlx5_0: create_mkey_callback:148:(pid 0): async reg mr failed. status -12
  [Sat Feb 6 16:51:55 2021] mlx5_core 0000:81:00.0: mlx5_cmd_check:794:(pid 0): CREATE_MKEY(0x200) op_mod(0x0) failed, status limits exceeded(0x8), syndrome (0x59c8a4)
  [Sat Feb 6 16:51:55 2021] infiniband mlx5_0: create_mkey_callback:148:(pid 0): async reg mr failed. status -12
  [Sat Feb 6 16:51:55 2021] mlx5_core 0000:81:00.0: mlx5_cmd_check:794:(pid 0): CREATE_MKEY(0x200) op_mod(0x0) failed, status limits exceeded(0x8), syndrome (0x59c8a4)
  [Sat Feb 6 16:51:55 2021] infiniband mlx5_0: create_mkey_callback:148:(pid 0): async reg mr failed. status -12
In the 4.x releases of Open MPI, the low-level InfiniBand BTL driver (openib) has been deprecated in favor of the Unified Communication X (UCX) framework. The Mellanox OFED software stack present on each node in DARWIN ships with a copy of the UCX library, so Open MPI versions that integrate with UCX build those modules by default. Older releases (1.6, 1.8) continue to build and use the openib BTL for low-level InfiniBand communications, and builds with UCX support also build that module, even in the 4.x releases that have deprecated its use.
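Whether a particular Open MPI build includes the openib BTL and the UCX modules can be verified with the ompi_info utility that ships with Open MPI. A quick check looks like the following (the exact component and version strings will vary by build; the output here is abbreviated and illustrative):

  $ ompi_info | grep -E 'MCA (btl|pml)'
  MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.0.5)
  MCA pml: ucx (MCA v2.1.0, API v2.0.0, Component v4.0.5)
  ...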
MCA Defaults
For Open MPI 4.x releases that do build and include the openib BTL module, a warning is produced when a job first begins running:
  --------------------------------------------------------------------------
  By default, for Open MPI 4.0 and later, infiniband ports on a device
  are not used by default.  The intent is to use UCX for these devices.
  You can override this policy by setting the btl_openib_allow_ib MCA
  parameter to true.

    Local host:      <nodename>
    Local adapter:   mlx5_0
    Local port:      1
  --------------------------------------------------------------------------
  --------------------------------------------------------------------------
  WARNING: There was an error initializing an OpenFabrics device.

    Local host:    <nodename>
    Local device:  mlx5_0
  --------------------------------------------------------------------------
Since a UCX library is provided with the OS on DARWIN, it makes sense to disable the openib module by default, both to avoid this message and to ensure the UCX modules are used. This change is easily made in the etc/openmpi-mca-params.conf file that is part of each Open MPI install:
  btl = ^openib
  pml = ucx

  # Never use the IPoIB interfaces for TCP communications:
  oob_tcp_if_exclude = ib0
  btl_tcp_if_exclude = ib0
The openib BTL is disabled, the ucx PML module is selected as the only option, and the InfiniBand IPoIB interface (ib0) is excluded from use for TCP/IP communications (out-of-band signaling, for example).
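The same settings can also be applied per job, without editing the file, since any MCA parameter can be passed on the mpirun command line or exported as an OMPI_MCA_-prefixed environment variable. For example (my_app is a placeholder program name):

  # One-off settings on the mpirun command line:
  $ mpirun --mca btl '^openib' --mca pml ucx ./my_app

  # ...or exported in a job script:
  $ export OMPI_MCA_btl='^openib'
  $ export OMPI_MCA_pml=ucx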
Ongoing Errors
With the MCA defaults outlined above, some applications still experienced the issues accompanied by CREATE_MKEY kernel messages:
  [Sat Feb 6 16:51:55 2021] infiniband mlx5_0: create_mkey_callback:148:(pid 0): async reg mr failed. status -12
  [Sat Feb 6 16:51:55 2021] mlx5_core 0000:81:00.0: mlx5_cmd_check:794:(pid 0): CREATE_MKEY(0x200) op_mod(0x0) failed, status limits exceeded(0x8), syndrome (0x59c8a4)
The error number (-12) corresponds to ENOMEM; examining the mlx5 kernel driver source code, this (together with the status limits exceeded message) implies that the Mellanox hardware could not create an additional memory-mapping record.
The Mellanox ConnectX-6 network interface breaks any message into a series of fixed-size memory mappings, which it tracks in its memory translation table (MTT). The larger the memory region being communicated via RDMA, the more memory-mapping records are necessary. The network interface has a finite amount of memory available for these records, and the kernel messages indicate that the MTT has filled, the interface is still in the process of sending the data represented by the existing records, and the caller should try again. The sequence of events is:
- A large data transmission via RDMA is requested
- The Mellanox network interface's MTT begins to fill with memory-mapping records at some rate X
- Data begins transmitting, and MTT records are completed and removed at a rate Y < X
- Kernel messages are produced with each failed MTT-addition request
- Once all MTT records have finally been added, the network interface processes the remainder and no further kernel messages are produced
On occasion the job would eventually crash, or the node itself would encounter a kernel panic and go offline, so the issue was not a mere annoyance.
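To confirm that a running job is encountering this condition, the kernel ring buffer on the affected node can be watched for the messages shown above while the job executes; a minimal check (assuming permission to read the kernel log on that node):

  # Follow new kernel messages, showing only the mkey failures:
  $ dmesg --follow | grep -E 'create_mkey|CREATE_MKEY'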
Google searches eventually turned up a known bug in all Open MPI releases prior to 4.1.0 with respect to newer releases of UCX: the UCX library began promoting a newer API for creating memory keys (recall CREATE_MKEY in the kernel messages), called NBX, over the earlier NB API. A copy of Open MPI 4.1.0 was built, and one of the applications that was failing reliably (with both 4.0.5 and 3.1.6) was recompiled against it. Subsequent runs no longer failed or produced the kernel messages regarding MTT exhaustion.
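When reproducing this kind of problem, it helps to record exactly which versions are in play; both stacks provide simple version queries (output formats differ between releases):

  $ ucx_info -v        # version of the UCX library on the node
  $ mpirun --version   # version of the Open MPI build in use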
Older Open MPI Releases
The UCX changes implemented in Open MPI 4.1.0 have not been backported to any of the older releases of the software, so this issue persists in them. It arises when attempting large data broadcasts, for example:
  Allocate(Rmat(2,144259970))
  Call MPI_Bcast(Rmat, 2*144259970, MPI_REAL, 0, MPI_COMM_WORLD, mpierr)
The array in question occupies 2 x 144259970 elements x 4 bytes per MPI_REAL = 1154079760 bytes (slightly over 1.0 GiB).
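Until an affected application can be rebuilt against Open MPI 4.1.0 or later, one commonly suggested mitigation is to break a very large broadcast into smaller pieces, so that fewer memory-mapping records are outstanding at any one time. The sketch below is illustrative only: the Bcast_Chunked routine name and the 64 MiB chunk size are invented for this example, not tuned or tested values.

  ! Broadcast a large REAL array in fixed-size pieces instead of one
  ! MPI_Bcast call, limiting how many MTT records are needed at a time.
  Subroutine Bcast_Chunked(Rmat, nElems, root, comm, mpierr)
    Use mpi
    Implicit None
    Real, Intent(InOut)  :: Rmat(*)
    Integer, Intent(In)  :: nElems, root, comm
    Integer, Intent(Out) :: mpierr
    Integer, Parameter   :: chunk_elems = 16777216  ! 64 MiB of MPI_REAL
    Integer              :: offset, count

    offset = 0
    Do While (offset < nElems)
      count = Min(chunk_elems, nElems - offset)
      Call MPI_Bcast(Rmat(offset + 1), count, MPI_REAL, root, comm, mpierr)
      If (mpierr /= MPI_SUCCESS) Return
      offset = offset + count
    End Do
  End Subroutine Bcast_Chunked

For the broadcast above, a call such as Call Bcast_Chunked(Rmat, 2*144259970, 0, MPI_COMM_WORLD, mpierr) would replace the single MPI_Bcast call.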