===== Batch job =====
  
Any MPI job requires ''mpirun'' to initiate it, and this should be done through the Slurm job scheduler to best utilize the resources on the cluster. If you want to run on more than 1 node (more than 36 cores), then you must initiate a batch job from the head node. Remember that if you only have 1 node in your workgroup, you will need to use the [[abstract:caviness:runjobs:queues#the-standard-partition|standard]] partition to run a job that spans multiple nodes; however, jobs in the standard partition can be preempted, so you will need to be mindful of [[abstract:caviness:runjobs:schedule_jobs#handling-system-signals-aka-checkpointing|checkpointing]] your job.
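
As a minimal sketch (the partition choice, node and core counts, and the ''--requeue'' option below are illustrative assumptions, not taken from the template discussed next), the header of a multi-node job script aimed at the standard partition might include:

<code bash>
#SBATCH --partition=standard   # preemptable partition that permits jobs beyond your workgroup's own nodes
#SBATCH --nodes=2              # more than one node requires a batch job submitted from the head node
#SBATCH --ntasks=72            # for example, 36 MPI ranks on each of two nodes
#SBATCH --requeue              # allow Slurm to requeue the job if it is preempted
</code>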
  
The best results have been found by using the //openmpi.qs// template for [[/software/openmpi/openmpi|Open MPI]] jobs. For example, copy the template and call it ''mympi4py.qs'' for the job script using
  
<code bash>
cp /opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs mympi4py.qs
</code>
  
and modify it for your application. There are several ways to communicate the number and layout of worker processes. In this example, we will modify the job script to specify a single node and 4 cores using ''#SBATCH --nodes=1'' and ''#SBATCH --ntasks=4''. It is important to carefully read the comments and select the appropriate options for your job. Make sure you specify the correct VALET environment for your job, selecting the version of python-mpi appropriate for Python 2 or 3. Since the above example is based on Python 2, we will specify the VALET package as follows:
  
<code bash>
# pick the python-mpi version built for Python 2 ("vpkg_versions python-mpi" lists the available versions)
vpkg_require python-mpi
</code>
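
As a condensed sketch of how these pieces fit together (this is not the full //openmpi.qs// template, and the script name ''mympi4py_example.py'' is an illustrative placeholder), the edited portions of ''mympi4py.qs'' might look like:

<code bash>
#SBATCH --nodes=1
#SBATCH --ntasks=4

# VALET package providing Python 2 with mpi4py built against Open MPI
vpkg_require python-mpi

# launch the Python MPI program; Open MPI's mpirun picks up the Slurm task layout
mpirun python mympi4py_example.py
</code>

The job is then submitted from the head node with ''sbatch mympi4py.qs''.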

When the job completes, the results from the MPI processes appear in the job's Slurm output file; for example, the line printed by MPI rank 3 looks like

<code bash>
[3] [ 0.  2.  4.  6.  8. 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30.]
</code>

===== Recipes =====
If you need to build a Python virtualenv based on a collection of Python modules including mpi4py, then you will need to follow this recipe to get a properly-integrated mpi4py module.

  * [[technical:recipes:mpi4py-in-virtualenv|Building a Python virtualenv with a properly-integrated mpi4py module]]
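
The linked recipe is the authoritative procedure for Caviness. As a very rough sketch of the general idea only (the directory name, the use of ''virtualenv'', and the ''pip'' options below are assumptions, not taken from the recipe), one common approach is to create the virtualenv with the cluster's MPI-aware Python environment already loaded, so that mpi4py compiles against the same Open MPI stack:

<code bash>
# load the MPI-enabled Python first (choose the version as discussed above)
vpkg_require python-mpi

# create and activate a virtualenv (the path is a placeholder)
virtualenv ~/mpi4py-env
source ~/mpi4py-env/bin/activate

# force a source build so mpi4py links against the MPI compilers currently in the environment
pip install --no-binary mpi4py mpi4py
</code>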