software:mpi4py:caviness

===== Batch job =====
  
Any MPI job requires ''mpirun'' to initiate it, and this should be done through the Slurm job scheduler to best utilize the resources on the cluster. Also, if you want to run on more than one node (more than 36 cores), then you must initiate a batch job from the head node. Remember that if you have only one node in your workgroup, you will need to use the [[abstract:caviness:runjobs:queues#the-standard-partition|standard]] partition to run a job across multiple nodes; however, jobs in the standard partition can be preempted, so you will need to be mindful of [[abstract:caviness:runjobs:schedule_jobs#handling-system-signals-aka-checkpointing|checkpointing]] your job.
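For orientation only, the directives such a multi-node job might carry are sketched below. The values are placeholders rather than anything taken from this page's template, and because standard-partition jobs can be preempted, the job script should also handle the warning signal as described in the checkpointing documentation linked above.

<code bash>
# Hypothetical directives for a multi-node job outside your workgroup's own
# node -- the values are placeholders; adjust them for your own job.
#SBATCH --partition=standard    # preemptible standard partition
#SBATCH --nodes=2               # spans more than one node (more than 36 cores)
#SBATCH --ntasks=72             # one MPI rank per core across both nodes
</code>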
  
The best results have been found by using the //openmpi.qs// template for [[/software/openmpi/openmpi|Open MPI]] jobs. For example, copy the template and call it ''mympi4py.qs'' for the job script using
  
<code bash>
cp /opt/shared/templates/slurm/generic/mpi/openmpi/openmpi.qs mympi4py.qs
</code>
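After copying the template, edit ''mympi4py.qs'' to request the resources you need and to make the launch line run your own program. The excerpt below is a hedged sketch of those edits, not the template's actual contents; ''script.py'' is a placeholder for your mpi4py program.

<code bash>
# Hypothetical excerpt of mympi4py.qs -- the real template contains much more setup.
#SBATCH --ntasks=8               # total number of MPI ranks for the job

# ... environment setup (VALET packages, etc.) as described earlier on this page ...

# Launch one Python interpreter per MPI rank.
mpirun python script.py
</code>

The job script is then submitted from the head node with ''sbatch mympi4py.qs''.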
  
Sample output from one of the MPI ranks (rank 3):

<code>
[3] [ 0.  2.  4.  6.  8. 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30.]
</code>

===== Recipes =====
If you need to build a Python virtualenv based on a collection of Python modules including mpi4py, then you will need to follow this recipe to get a properly-integrated mpi4py module.

  * [[technical:recipes:mpi4py-in-virtualenv|Building a Python virtualenv with a properly-integrated mpi4py module]]
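As a rough orientation only (the linked recipe is the authoritative procedure; the package names, versions, and paths below are placeholders), the build generally amounts to loading a matching Python and Open MPI environment before creating the virtualenv and installing mpi4py into it:

<code bash>
# Hypothetical outline -- follow the linked recipe for the supported steps.
vpkg_require python                    # load a Python via VALET (version omitted here)
vpkg_require openmpi                   # load a matching Open MPI

python3 -m venv ~/myenv                # ~/myenv is a placeholder location
source ~/myenv/bin/activate

pip install --no-binary :all: mpi4py   # build mpi4py from source against the loaded Open MPI
pip install numpy                      # plus any other modules your work needs
</code>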