====== NAMD on Caviness ======

===== Batch job =====
  
An Open MPI Slurm job submission script should be used for NAMD jobs on Caviness; a template can be found in ''/opt/shared/templates/slurm/generic/mpi/openmpi''. Copy and edit the template based on your job requirements by following the comments in the ''openmpi.qs'' file.
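For example (a sketch; the destination filename ''namd.qs'' is an arbitrary choice):

<code bash>
# Copy the Open MPI template into your working directory, then edit it
cp /opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs namd.qs
</code>

The NAMD versions available on Caviness can be listed with ''vpkg_versions'':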
<code bash>
$ vpkg_versions namd
  
[/opt/shared/valet/2.1/etc/namd.vpkg_yaml]
namd                 Scalable Molecular Dynamics
  2.12               Version 2.12
* 2.13               Version 2.13
  2.13:gpu           Version 2.13 (with CUDA support)
  2.14               compiled with Intel 2020, Open MPI 4.1.4
  3.0b3              compiled with Intel 2020, Open MPI 4.1.4
  3.0b3:cuda-11.3.1  compiled with Intel 2020, CUDA 11
  3.0b3:cuda-12.1.1  compiled with Intel 2020, CUDA 12
</code>
  
The ''*'' version is loaded by default when using ''vpkg_require namd''. Make sure you select a GPU variant of the ''namd'' package if you plan to use GPUs, e.g. ''vpkg_require namd:gpu'', and provide the correct options to ''namd'' in the job script:
  
<code bash>
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} ...
</code>
  
Documentation for ''namd'' indicates ''+idlepoll'' must always be used for runs using CUDA devices. Slurm sets ''CUDA_VISIBLE_DEVICES'' to the device indices your job was granted, and ''SLURM_CPUS_ON_NODE'' to the number of CPUs granted to you. ''${UD_MPIRUN}'' is set up as part of the job script template provided in the ''/opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs'' file.
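Putting these pieces together, the end of a GPU job script derived from the template might look like the following sketch (the ''#SBATCH'' resource requests and the input file ''input.namd'' are placeholders, and the template's own setup code, which defines ''${UD_MPIRUN}'', is elided):

<code bash>
#!/bin/bash -l
#
# Illustrative resource requests only -- size them to your job:
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --gres=gpu:1
#
# ... template setup elided (it defines ${UD_MPIRUN}, among other things) ...

# Load a GPU-enabled NAMD variant
vpkg_require namd:gpu

# Run NAMD on the CPUs and GPU devices Slurm granted to this job
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} input.namd > output.log
</code>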
  
  
<note tip>It is always a good idea to periodically check if the templates in ''/opt/shared/templates/slurm'' have changed, especially as we learn more about what works well on a particular cluster.</note>

===== Scaling =====

The scaling runs below use the standard ApoA1 benchmark (''apoa1.namd''), first on CPUs only:

<code bash>
vpkg_require namd/3.0b3
charmrun namd3 +p$SLURM_NTASKS apoa1.namd > apoa1.log
</code>
{{:software:namd:scaling_namd_cpu.jpg?400|}}

and then with CUDA support:

<code bash>
vpkg_require namd/3.0b3:cuda-12.1.1
charmrun namd3 +idlepoll +p$SLURM_CPUS_PER_TASK +devices $CUDA_VISIBLE_DEVICES apoa1.namd > apoa1.log
</code>
{{:software:namd:scaling_namd_gpu.jpg?400|}}
{{:software:namd:scaling_namd_cpu_gpu.jpg?400|}}
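To collect scaling data like this, one approach (a sketch, not necessarily the procedure used here; ''scaling.qs'' is a hypothetical copy of the Open MPI template containing one of the ''charmrun'' lines above) is to submit the same script at a range of task counts:

<code bash>
# Submit the ApoA1 benchmark at increasing task counts;
# options passed to sbatch override the #SBATCH lines in the script
for n in 1 2 4 8 16 32; do
    sbatch --ntasks=$n scaling.qs
done
</code>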