NAMD on Caviness
Batch job
An Open MPI Slurm job submission script should be used for NAMD jobs on Caviness; a template can be found in /opt/templates/slurm/generic/mpi/openmpi. Copy the template and edit it based on your job's requirements, following the comments in the openmpi.qs file.
$ vpkg_versions namd

Available versions in package (* = default version):

[/opt/shared/valet/2.1/etc/namd.vpkg_yaml]
namd        Scalable Molecular Dynamics
  2.12      Version 2.12
* 2.13      Version 2.13
  2.13:gpu  Version 2.13 (with CUDA support)
The version marked with * is loaded by default when you use vpkg_require namd. If you plan to use GPUs, make sure you select a GPU variant of the namd package, e.g. vpkg_require namd:gpu, and provide the correct options to namd in the job script:
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} ...
Documentation for namd indicates that +idlepoll must always be used for runs on CUDA devices. Slurm sets CUDA_VISIBLE_DEVICES to the device indices your job was granted, and SLURM_CPUS_ON_NODE to the number of CPUs granted to you. Also, ${UD_MPIRUN} is set up as part of the job script template provided in the /opt/templates/slurm/generic/mpi/openmpi/openmpi.qs file.
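Putting these pieces together, the NAMD-specific portion of a GPU job script might look like the sketch below. The partition name, resource counts, and input file apoa1.namd are placeholders, not values from the template; ${UD_MPIRUN} comes from the openmpi.qs template itself.

```shell
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1            # placeholder GPU request; check cluster docs
#SBATCH --partition=gpu         # placeholder partition name

# Load the CUDA-enabled NAMD variant
vpkg_require namd:gpu

# +idlepoll is required for CUDA runs; Slurm provides
# SLURM_CPUS_ON_NODE and CUDA_VISIBLE_DEVICES for the job.
# apoa1.namd is a placeholder input file.
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} \
    +devices ${CUDA_VISIBLE_DEVICES} apoa1.namd > apoa1.log
```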
Note that the templates in /opt/templates/slurm have changed over time, especially as we learn more about what works well on a particular cluster.