====== NAMD on Caviness ======
  
[[http://www.ks.uiuc.edu/Research/namd/|NAMD]] (NAnoscale Molecular Dynamics) is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.

===== Batch job =====

An Open MPI Slurm job submission script should be used for NAMD jobs on Caviness; a template can be found in ''/opt/shared/templates/slurm/generic/mpi/openmpi''. Copy and edit the template based on your job requirements by following the comments in the ''openmpi.qs'' file.

To determine the available versions of NAMD installed, use

<code bash>
$ vpkg_versions namd

Available versions in package (* = default version):

[/opt/shared/valet/2.1/etc/namd.vpkg_yaml]
namd                 Scalable Molecular Dynamics
  2.12               Version 2.12
* 2.13               Version 2.13
  2.13:gpu           Version 2.13 (with CUDA support)
  2.14               compiled with Intel 2020, Open MPI 4.1.4
  3.0b3              compiled with Intel 2020, Open MPI 4.1.4
  3.0b3:cuda-11.3.1  compiled with Intel 2020, CUDA 11
  3.0b3:cuda-12.1.1  compiled with Intel 2020, CUDA 12
</code>
  
The ''*'' version is loaded by default when using ''vpkg_require namd''. Make sure you select a GPU variant of the ''namd'' package if you plan to use GPUs (e.g. ''vpkg_require namd:gpu'') and provide the correct options to ''namd'' in the job script:
  
<code bash>
${UD_MPIRUN} namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} ...
</code>
  
The NAMD documentation indicates that ''+idlepoll'' must always be used for runs on CUDA devices. Slurm sets ''CUDA_VISIBLE_DEVICES'' to the device indices your job was granted and ''SLURM_CPUS_ON_NODE'' to the number of CPUs allocated to your job on the node. ''${UD_MPIRUN}'' is set up as part of the job script template provided in ''/opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs''.
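As a quick check, these variables can be printed from inside a running job before launching NAMD; the values reflect whatever Slurm actually granted:

<code bash>
# Show what Slurm granted this job; useful when choosing +p and +devices values
echo "CPUs on this node:   ${SLURM_CPUS_ON_NODE}"
echo "Visible GPU indices: ${CUDA_VISIBLE_DEVICES}"
</code>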

<note tip>It is a good idea to periodically check whether the templates in ''/opt/shared/templates/slurm'' have changed, especially as we learn more about what works well on a particular cluster.</note>

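Putting these pieces together, a trimmed-down single-node, single-GPU job script might look like the following sketch. It is only an illustration: the partition, time limit, and resource counts are placeholders, the ''2.13:gpu'' variant is chosen arbitrarily from the list above, and ''namd2'' is launched directly rather than through ''${UD_MPIRUN}'', so the MPI setup and bookkeeping performed by the full ''openmpi.qs'' template are omitted. For real work, copy and edit the template instead of starting from this sketch.

<code bash>
#!/bin/bash -l
#
# Minimal NAMD GPU job sketch (placeholders throughout; not a replacement
# for the openmpi.qs template)
#SBATCH --job-name=namd_apoa1
#SBATCH --partition=standard        # placeholder; use a partition you have access to
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16          # placeholder CPU count
#SBATCH --gres=gpu:1                # placeholder GPU request
#SBATCH --time=04:00:00             # placeholder time limit

# Load a CUDA-enabled NAMD variant via VALET
vpkg_require namd/2.13:gpu

# Single-process launch; +p and +devices follow what Slurm granted
namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} apoa1.namd > apoa1.log
</code>
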

===== Scaling =====

Scaling results for the ApoA1 benchmark are presented below. Performance improves as the number of CPUs and GPUs increases.

<code bash>
vpkg_require namd/3.0b3
charmrun namd3 +p$SLURM_NTASKS apoa1.namd > apoa1.log
</code>
{{:software:namd:scaling_namd_cpu.jpg?400|}}

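To repeat the CPU measurements at different core counts, one approach is to override the task count at submission time rather than editing the script; ''namd_cpu_scaling.qs'' below is a hypothetical job script wrapping the commands above with the usual ''#SBATCH'' headers:

<code bash>
# Submit the same CPU-only script several times with different MPI task counts;
# charmrun picks the value up through $SLURM_NTASKS (script name is hypothetical)
for n in 16 32 64 128; do
    sbatch --ntasks=$n --job-name=apoa1_cpu_$n namd_cpu_scaling.qs
done
</code>

The GPU runs below use a CUDA-enabled build of NAMD 3.0b3:
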
<code bash>
vpkg_require namd/3.0b3:cuda-12.1.1
charmrun namd3 +idlepoll +p$SLURM_CPUS_PER_TASK +devices $CUDA_VISIBLE_DEVICES apoa1.namd > apoa1.log
</code>
{{:software:namd:scaling_namd_gpu.jpg?400|}}
{{:software:namd:scaling_namd_cpu_gpu.jpg?400|}}
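
For the GPU measurements, the number of devices can be varied in a similar way at submission time. The sketch below assumes a hypothetical ''namd_gpu_scaling.qs'' script wrapping the GPU command above; your site may require a GPU type in the ''--gres'' specification:

<code bash>
# Submit the GPU script with one task, a fixed CPU count for that task, and a
# varying number of GPUs; namd3 finds the devices through $CUDA_VISIBLE_DEVICES
for g in 1 2 4; do
    sbatch --ntasks=1 --cpus-per-task=16 --gres=gpu:$g --job-name=apoa1_gpu_$g namd_gpu_scaling.qs
done
</code>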