====== NAMD on Caviness ======

===== Batch job =====
  
The Open MPI Slurm job submission script template should be used for NAMD jobs on Caviness; it can be found in ''/opt/shared/templates/slurm/generic/mpi/openmpi''. Copy the template and edit it based on your job requirements by following the comments in the ''openmpi.qs'' file.
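For example, assuming you want to keep the copy in your current working directory under the illustrative name ''namd.qs'':

<code bash>
$ cp /opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs namd.qs
</code>

After editing ''namd.qs'' it can be submitted with ''sbatch namd.qs''. The NAMD versions available on Caviness can be listed with ''vpkg_versions'':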
<code bash>
$ vpkg_versions namd

Available versions in package (* = default version):

[/opt/shared/valet/2.1/etc/namd.vpkg_yaml]
namd                 Scalable Molecular Dynamics
  2.12               Version 2.12
* 2.13               Version 2.13
  2.13:gpu           Version 2.13 (with CUDA support)
  2.14               compiled with Intel 2020, Open MPI 4.1.4
  3.0b3              compiled with Intel 2020, Open MPI 4.1.4
  3.0b3:cuda-11.3.1  compiled with Intel 2020, CUDA 11
  3.0b3:cuda-12.1.1  compiled with Intel 2020, CUDA 12
</code>
The ''*'' version is loaded by default when using ''vpkg_require namd''. Make sure you select a GPU variant of the ''namd'' package if you plan to use GPUs, e.g. ''vpkg_require namd/2.13:gpu'', and provide the correct options to ''namd'' in the job script:

<code bash>
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} ...
</code>
The ''namd'' documentation indicates that ''+idlepoll'' must always be used for runs on CUDA devices. Slurm sets ''CUDA_VISIBLE_DEVICES'' to the device indices your job was granted and ''SLURM_CPUS_ON_NODE'' to the number of CPUs granted to you. ''${UD_MPIRUN}'' is set up by the job script template ''/opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs''.
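Putting these pieces together, the NAMD-specific additions to a copy of the ''openmpi.qs'' template for a single-node GPU run might look like the following sketch. The resource requests, the chosen version, and the input/output file names (''mysim.conf'', ''mysim.log'') are examples only; the rest of the template is left as provided.

<code bash>
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --gres=gpu:1

# Load a GPU-capable NAMD build (example version):
vpkg_require namd/2.13:gpu

# ${UD_MPIRUN} is defined earlier in the openmpi.qs template;
# Slurm provides SLURM_CPUS_ON_NODE and CUDA_VISIBLE_DEVICES.
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} \
    +devices ${CUDA_VISIBLE_DEVICES} mysim.conf > mysim.log
</code>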
<note tip>It is always a good idea to periodically check if the templates in ''/opt/shared/templates/slurm'' have changed, especially as we learn more about what works well on a particular cluster.</note>
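One easy way to spot such changes is to compare your copy against the current template, e.g. (assuming the illustrative ''namd.qs'' copy from above):

<code bash>
$ diff namd.qs /opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs
</code>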