The Open MPI Slurm job submission script template should be used for NAMD jobs on Caviness and can be found in ''/opt/shared/templates/slurm/generic/mpi/openmpi''. Copy and edit the template based on your job requirements by following the comments in the ''openmpi.qs'' file.
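For example, you might copy the template into your job directory and edit it there (the destination name ''namd.qs'' below is only an illustration):

<code bash>
# Copy the Open MPI template into the current directory and open it for editing;
# the target file name namd.qs is just an example.
$ cp /opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs namd.qs
$ vi namd.qs
</code>

The NAMD versions available on Caviness can be listed with ''vpkg_versions'':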
<code bash>
$ vpkg_versions namd

[/opt/shared/valet/2.1/etc/namd.vpkg_yaml]
namd                  Scalable Molecular Dynamics
  2.12                Version 2.12
* 2.13                Version 2.13
  2.13:gpu            Version 2.13 (with CUDA support)
  2.14                compiled with Intel 2020, Open MPI 4.1.4
  3.0b3               compiled with Intel 2020, Open MPI 4.1.4
  3.0b3:cuda-11.3.1   compiled with Intel 2020, CUDA 11
  3.0b3:cuda-12.1.1   compiled with Intel 2020, CUDA 12
</code>
The ''*'' version is loaded by default when using ''vpkg_require namd''. Make sure you select a GPU variant of the ''namd'' package if you plan to use GPUs, e.g. ''vpkg_require namd:gpu'', and provide the correct options to ''namd'' in the job script.
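A minimal sketch of such a command line for a CUDA-enabled run is shown below; the input file ''apoa1.namd'' and output name are placeholders, and the exact lines in your copy of ''openmpi.qs'' may differ:

<code bash>
# Sketch of a CUDA-enabled NAMD invocation; apoa1.namd and apoa1.log are placeholders.
# +idlepoll is required for CUDA runs, and +devices restricts NAMD to the GPUs Slurm granted.
${UD_MPIRUN} namd2 +idlepoll +devices ${CUDA_VISIBLE_DEVICES} apoa1.namd > apoa1.log
</code>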
Documentation for ''namd'' indicates ''+idlepoll'' must always be used for runs using CUDA devices. Slurm sets ''CUDA_VISIBLE_DEVICES'' to the device indices your job was granted, and ''SLURM_CPUS_ON_NODE'' to the number of CPUs granted to you. Also, ''${UD_MPIRUN}'' is set up as part of the job script template provided in the ''/opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs'' file.
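These variables are only populated if the job actually requests GPUs. For instance, a submission might look like the following (the partition name and GPU count are placeholders; use values appropriate for your workgroup and allocation):

<code bash>
# Placeholder partition name and GPU count; adjust for your workgroup and hardware.
$ sbatch --partition=standard --gres=gpu:1 namd.qs
</code>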
<note tip>It is always a good idea to periodically check if the templates in ''/opt/shared/templates/slurm'' have changed, especially as we learn more about what works well on a particular cluster.</note>
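One quick way to check is to compare your customized copy against the current template, for example:

<code bash>
# Compare your edited script against the current template to spot updates.
$ diff namd.qs /opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs
</code>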
===== Scaling =====

Scaling results are presented using ApoA1 as an example; performance improved as the number of CPUs and GPUs increased.