Some of the content in this area will be in PDF format and may need to be downloaded before being read.
While installing all of CRAN's nearly 17,000 packages against a recent R 4.1.3 build, many packages that depend on rJava would hang while being tested. GDB debugging and analysis of both the C source and the runtime assembly code revealed an interesting problem with GCC 11.2's compilation of the code.
The MPI process-spawning API has not been frequently used on our clusters. A user reported an issue with the Rmpi library and example code that spawns R workers via MPI_Comm_spawn() on the Caviness cluster. The issue was debugged and addressed for all pertinent versions of Open MPI, and is summarized here.
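For readers who have not used that part of MPI: Rmpi's worker-spawning routines are built on the MPI_Comm_spawn() call. The C sketch below shows only the call itself, not any code from Rmpi or the write-up; the ./worker executable name and the worker count of 4 are placeholders chosen for illustration.

```c
/*
 * Minimal sketch of the MPI process-spawning API that Rmpi relies on.
 * The "./worker" executable and the count of 4 are placeholders; Rmpi
 * itself spawns R processes running its own worker script.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    int errcodes[4];

    MPI_Init(&argc, &argv);

    /* Launch 4 copies of ./worker and obtain an intercommunicator
       connecting this process to the spawned group. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &intercomm, errcodes);

    /* A real manager would now exchange messages over intercomm,
       e.g. collectives or point-to-point sends to the workers. */
    printf("spawned 4 workers\n");

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}
```

Whether the spawn succeeds also depends on the launcher and resource manager supporting dynamic process management, which is exactly the area where issues like the one above tend to surface.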
During early-access testing of the DARWIN cluster, several users reported that their MPI jobs crashed unexpectedly in code locations that had worked on previous clusters (like Caviness). The full troubleshooting and mitigation of the issue should be instructive for DARWIN users who build and manage their own Open MPI libraries on DARWIN.
As time goes by, the /dev/shm filesystem on compute nodes can fill with orphaned files. Without swap matching the amount of RAM in the node, these files will begin putting pressure on subsequent applications that run on the node. In Automated /dev/shm cleanup, a method of removing orphaned files from /dev/shm is outlined.
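The whitepaper describes the production cleanup; purely as an illustration of the idea, the C sketch below walks /dev/shm and removes regular files whose owning user no longer has any processes on the node. The "owner has no processes" test (a scan of /proc) and the decision to skip root-owned files are assumptions made for this sketch, not necessarily the policy the actual cleanup applies.

```c
/*
 * Hypothetical sketch of an orphaned-file sweep for /dev/shm.
 * A file is treated as "orphaned" when its owner has no processes
 * left on the node; the real cleanup policy may differ.
 */
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Return 1 if any /proc/<pid> directory is owned by uid. */
static int uid_has_processes(uid_t uid)
{
    DIR *proc = opendir("/proc");
    struct dirent *de;
    if (!proc) return 1;               /* be conservative on error */
    while ((de = readdir(proc)) != NULL) {
        char path[PATH_MAX];
        struct stat st;
        if (de->d_name[0] < '0' || de->d_name[0] > '9')
            continue;                  /* only numeric (PID) entries */
        snprintf(path, sizeof(path), "/proc/%s", de->d_name);
        if (stat(path, &st) == 0 && st.st_uid == uid) {
            closedir(proc);
            return 1;
        }
    }
    closedir(proc);
    return 0;
}

int main(void)
{
    DIR *shm = opendir("/dev/shm");
    struct dirent *de;
    if (!shm) { perror("/dev/shm"); return 1; }

    while ((de = readdir(shm)) != NULL) {
        char path[PATH_MAX];
        struct stat st;
        if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
            continue;
        snprintf(path, sizeof(path), "/dev/shm/%s", de->d_name);
        if (lstat(path, &st) != 0 || !S_ISREG(st.st_mode))
            continue;                  /* plain files only, skip dirs/links */
        if (st.st_uid == 0)
            continue;                  /* never touch root-owned segments */
        if (!uid_has_processes(st.st_uid)) {
            printf("removing orphaned %s\n", path);
            unlink(path);
        }
    }
    closedir(shm);
    return 0;
}
```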
The R statistical computing software can be built atop a variety of BLAS and LAPACK libraries – including its own internal Rblas and Rlapack libraries. Creating alternate builds of R that vary ONLY in the identity of the underlying BLAS/LAPACK implementation can consume extremely large amounts of disk space (and time!). The runtime-configurable R BLAS/LAPACK whitepaper documents the scheme used on our latest HPC cluster to make the choice of library a runtime configurable option.
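The whitepaper spells out the actual scheme; as a generic illustration of what "runtime configurable" can mean, the C sketch below loads whichever BLAS shared library an environment variable names with dlopen() and resolves the Fortran dgemm_ symbol from it. The BLAS_LIBRARY variable name and the example library paths are hypothetical.

```c
/*
 * Illustrative only: choose a BLAS implementation at run time by
 * dlopen()ing the library named in an environment variable and
 * resolving the Fortran dgemm_ symbol from it.  Build with -ldl.
 */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

/* Fortran-style DGEMM signature (all arguments passed by reference). */
typedef void (*dgemm_fn)(const char *transa, const char *transb,
                         const int *m, const int *n, const int *k,
                         const double *alpha, const double *a, const int *lda,
                         const double *b, const int *ldb,
                         const double *beta, double *c, const int *ldc);

int main(void)
{
    /* e.g. BLAS_LIBRARY=/usr/lib64/libopenblas.so  (hypothetical path) */
    const char *lib = getenv("BLAS_LIBRARY");
    if (!lib) lib = "libblas.so.3";        /* fall back to the generic name */

    void *handle = dlopen(lib, RTLD_NOW | RTLD_GLOBAL);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    dgemm_fn dgemm = (dgemm_fn)dlsym(handle, "dgemm_");
    if (!dgemm) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* C = alpha*A*B + beta*C for 2x2 matrices, column-major storage. */
    int n = 2, lda = 2;
    double alpha = 1.0, beta = 0.0;
    double A[4] = {1, 2, 3, 4}, B[4] = {5, 6, 7, 8}, C[4] = {0};
    dgemm("N", "N", &n, &n, &n, &alpha, A, &lda, B, &lda, &beta, C, &lda);

    printf("C[0][0] = %g (via %s)\n", C[0], lib);
    dlclose(handle);
    return 0;
}
```

A similar effect can often be achieved without dlopen() by linking against a generic libblas.so.3 and steering the dynamic linker with LD_LIBRARY_PATH or LD_PRELOAD, which is one way a single build can be pointed at different BLAS/LAPACK implementations.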
The behavior of the Mills cluster's cutting-edge Interlagos processor is studied under multi-threaded and multi-process workloads. The influence of compiler and BLAS/LAPACK library choice is presented.
The nodes on the Mills cluster have 2 or 4 AMD Opteron 6200-series sockets. Each socket is a multi-chip module package with two CPU dies interconnected by a HyperTransport link. Each die is organized as 3 core pairs (Interlagos modules). Thus, to the OS, each socket appears as 12 logical CPUs (a 12-core socket). Resources such as memory and the floating-point units are shared between the cores.
This technical tuning guide is intended for "systems admins, application end-users, and developers on a Linux platform who perform application development, code tuning, optimization, and initial system installation". The document describes resource sharing and its effect on your applications.
The SC11 High Performance Computing Challenge includes the benchmarks:
By default, Matlab uses multiple computational threads for standard linear algebra calculations. Without the option -singleCompThread, it will use libraries tuned to the computational hardware, for example the sunperf library on Solaris (Strauss) and the MKL library on Intel-compatible hardware, including Mills.
To fully use the computational threads you must call the built-in, high-level functions or data-parallel constructs in Matlab. For example, it is easy to write loops to do a matrix multiply, but such loops will not use the tuned, multithreaded library that the built-in matrix multiplication does.
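The same principle holds outside Matlab: a hand-written triple loop performs the identical arithmetic as a library call, but only the library call benefits from a tuned, multithreaded implementation. The rough C comparison below (linked against whatever BLAS is available, e.g. cc -O2 matmul.c -lblas) illustrates that point and is not Matlab code.

```c
/*
 * The naive triple loop and the BLAS call compute the same product,
 * but only the BLAS call uses the tuned, multithreaded implementation.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Fortran DGEMM from whatever BLAS the program is linked against. */
extern void dgemm_(const char *, const char *, const int *, const int *,
                   const int *, const double *, const double *, const int *,
                   const double *, const int *, const double *, double *,
                   const int *);

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    const int n = 1000;
    double *A = malloc(sizeof(double) * n * n);
    double *B = malloc(sizeof(double) * n * n);
    double *C = malloc(sizeof(double) * n * n);
    double alpha = 1.0, beta = 0.0, t;
    int i, j, k;

    for (i = 0; i < n * n; i++) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

    /* Hand-written loops: single-threaded, no blocking or vectorization. */
    t = now();
    for (j = 0; j < n; j++)
        for (k = 0; k < n; k++)
            for (i = 0; i < n; i++)
                C[j * n + i] += A[k * n + i] * B[j * n + k];
    printf("naive loops: %.2f s\n", now() - t);

    /* Same product via the tuned (and typically multithreaded) BLAS. */
    t = now();
    dgemm_("N", "N", &n, &n, &n, &alpha, A, &n, B, &n, &beta, C, &n);
    printf("dgemm:       %.2f s\n", now() - t);

    free(A); free(B); free(C);
    return 0;
}
```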
For Mills, the recommended libraries include Open MPI, ACML, and FFTW. The AMD-recommended compilers include Open64 and PGI. The following document from AMD includes instructions for installing these libraries, but this is not needed on Mills since they are already installed as VALET packages.