===== Batch job =====

An Open MPI Slurm job submission script should be used for NAMD jobs on Caviness; a template can be found in ''/opt/shared/templates/slurm/generic/mpi/openmpi''. Copy and edit the template based on your job requirements by following the comments in the ''openmpi.qs'' file.
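For example, you might copy the template into your own directory under a name of your choosing (the destination filename below is only an illustration):

<code bash>
$ cp /opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs ~/namd_job.qs
</code>

To see which versions of the ''namd'' package are available, run: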
<code bash>
$ vpkg_versions namd
</code>

The ''*'' version is loaded by default when using ''vpkg_require namd''. Make sure you select a GPU variant of the ''namd'' package if you plan to use GPUs (for example ''vpkg_require namd/2.13:gpu'') and provide the correct options to ''namd2'' in the job script:

<code bash>
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} ...
</code>

Documentation for ''namd'' indicates ''+idlepoll'' must always be used for runs using CUDA devices. Slurm sets ''CUDA_VISIBLE_DEVICES'' to the device indices your job was granted, and ''SLURM_CPUS_ON_NODE'' to the number of CPUs granted to you. ''${UD_MPIRUN}'' is also set up as part of the job script template provided in ''/opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs''.
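As an illustration only (not the complete ''openmpi.qs'' template), the NAMD-specific lines you would add to your copy of the template might look like the sketch below; the package version and the input file name ''config.namd'' are assumptions to be replaced with your own choices:

<code bash>
# Load a GPU variant of the namd package (the version shown is only an example)
vpkg_require namd/2.13:gpu

# Launch NAMD: SLURM_CPUS_ON_NODE and CUDA_VISIBLE_DEVICES are set by Slurm,
# and UD_MPIRUN is defined earlier in the copied openmpi.qs template
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} config.namd
</code>

Remember to also request GPU resources for the job (following the comments in the template) so that Slurm sets ''CUDA_VISIBLE_DEVICES'' for your run.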

<note tip>It is always a good idea to periodically check if the templates in ''/opt/shared/templates/slurm'' have changed, especially as we learn more about what works well on a particular cluster.</note>