====== NAMD on Caviness ======

===== Batch job =====

An Open MPI Slurm job submission script template should be used for NAMD jobs on Caviness; it can be found in ''/opt/shared/templates/slurm/generic/mpi/openmpi''. Copy the ''openmpi.qs'' template and edit it to match your job requirements, following the comments in the file.
<code bash>
$ vpkg_versions namd

Available versions in package (* = default version):

[/opt/shared/valet/2.1/etc/namd.vpkg_yaml]
namd                 Scalable Molecular Dynamics
  2.12               Version 2.12
* 2.13               Version 2.13
  2.13:gpu           Version 2.13 (with CUDA support)
  2.14               compiled with Intel 2020, Open MPI 4.1.4
  3.0b3              compiled with Intel 2020, Open MPI 4.1.4
  3.0b3:cuda-11.3.1  compiled with Intel 2020, CUDA 11
  3.0b3:cuda-12.1.1  compiled with Intel 2020, CUDA 12
</code>
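
For example, once you have copied the template, load the NAMD build you need at the point indicated by the comments in your copy of ''openmpi.qs'' (the version ids below are taken from the listing above; uncomment the one that matches your job):
<code bash>
# Load the default CPU build of NAMD (currently 2.13):
vpkg_require namd
# ...or a specific newer CPU build:
#vpkg_require namd/2.14
# ...or a CUDA build for use on GPU nodes:
#vpkg_require namd/3.0b3:cuda-12.1.1
</code>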

The ''*'' version is loaded by default when using ''vpkg_require namd''. If you plan to use GPUs, make sure you select a GPU variant of the ''namd'' package (e.g. ''vpkg_require namd:gpu'') and provide the correct options to ''namd'' in the job script:

<code bash>
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} ...
</code>

The ''namd'' documentation indicates that ''+idlepoll'' must always be used for runs using CUDA devices. Slurm sets ''CUDA_VISIBLE_DEVICES'' to the GPU device indices granted to your job and ''SLURM_CPUS_ON_NODE'' to the number of CPUs granted on the node. ''${UD_MPIRUN}'' is set up as part of the job script template provided in ''/opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs''.


<note tip>It is always a good idea to periodically check whether the templates in ''/opt/shared/templates/slurm'' have changed, especially as we learn more about what works well on a particular cluster.</note>
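
Putting these pieces together, below is a minimal sketch of the NAMD-specific portion of an edited copy of ''openmpi.qs'' for a GPU run. It is only a sketch: the full template also provides the ''${UD_MPIRUN}'' setup and the Slurm resource requests, and ''apoa1.namd'' here stands in for your own NAMD configuration file.
<code bash>
# Load a CUDA-enabled NAMD build (see the version listing above):
vpkg_require namd:gpu

# +idlepoll is required for CUDA runs; Slurm supplies the CPU count
# and the GPU device indices granted to this job:
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} apoa1.namd > apoa1.log
</code>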

===== Scaling =====

The standard ApoA1 benchmark (''apoa1.namd'') was run with the CPU-only and CUDA builds of NAMD 3.0b3 to illustrate how performance scales on Caviness:
<code bash>
vpkg_require namd/3.0b3
charmrun namd3  +p$SLURM_NTASKS  apoa1.namd > apoa1.log
</code>
{{:software:namd:scaling_namd_cpu.jpg?400|}}
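
Each CPU benchmark run picks up its process count from the Slurm allocation via ''+p$SLURM_NTASKS'', so the core count for a run is whatever ''--ntasks'' was requested for that job; an illustrative (hypothetical) request:
<code bash>
#SBATCH --ntasks=64
#SBATCH --cpus-per-task=1
</code>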

<code bash>
vpkg_require namd/3.0b3:cuda-12.1.1
charmrun namd3 +idlepoll +p$SLURM_CPUS_PER_TASK +devices $CUDA_VISIBLE_DEVICES apoa1.namd > apoa1.log
</code>
{{:software:namd:scaling_namd_gpu.jpg?400|}}
{{:software:namd:scaling_namd_cpu_gpu.jpg?400|}}
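
For the GPU benchmark, ''+p$SLURM_CPUS_PER_TASK'' and ''$CUDA_VISIBLE_DEVICES'' are likewise filled in from the job's allocation; an illustrative (hypothetical) single-GPU request might look like the following (the exact GPU type and partition options required on Caviness may differ):
<code bash>
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --gres=gpu:1
</code>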