software:mpi4py:farber

  
<code bash>
cp /opt/shared/templates/gridengine/openmpi/openmpi-ib.qs mympi4py.qs
</code>
  
and modify it for your application. Make sure you read the comments in the job script to select the appropriate options: in particular, set ''NPROC'' in ''#$ -pe mpi NPROC'' to the number of cores you want, and understand that you get 1GB of memory per core (''NPROC''). Also, if you request [[abstract:farber:runjobs:queues#farber-exclusive-access|exclusive access]] by using ''-l exclusive=1'', then no other jobs can run on the assigned nodes, giving your job exclusive access to them. Make sure you specify the correct VALET environment for your job, selecting the version of mpi4py that matches Python 2 or Python 3. Since the above example is based on Python 2 and needs mpi4py, we specify the VALET package as follows (a sketch of the job-script lines you would typically edit appears after this code block):
  
<code bash>
vpkg_require python-mpi4py/python2.7.8
</code>
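
For orientation, the portions of ''mympi4py.qs'' you would typically touch look roughly like the sketch below. This is illustrative only: the actual template has its own variable names and ''mpirun'' invocation, so edit the corresponding lines the template already provides rather than pasting this in. The 4-core value and the ''scatter-gather.py'' filename are example choices.

<code bash>
# Illustrative sketch only -- the copied openmpi-ib.qs template already
# contains equivalents of these lines; edit them in place.

# Request 4 MPI slots (you get 1GB of memory per slot):
#$ -pe mpi 4

# Load the VALET package providing Python 2 + mpi4py (this pulls in the
# matching python/2.7.8 and openmpi/1.8.2 dependencies):
vpkg_require python-mpi4py/python2.7.8

# Point the template's launch line at the Python interpreter and your
# script, so the job conceptually runs:
#   mpirun <template-chosen MPI flags> python scatter-gather.py
</code>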
  
For example, to submit the job to the standby queue with exclusive access to the nodes:

<code bash>
qsub -l exclusive=1 -l standby=1 mympi4py.qs
</code>

==== Output ====

The following output is based on the Python 2 script ''scatter-gather.py'' submitted with 4 cores (''#$ -pe mpi 4'') and 1GB of memory per core in ''mympi4py.qs'', as described above:

<code bash>
[CGROUPS] UD Grid Engine cgroup setup commencing
[CGROUPS] WARNING: No OS-level core-binding can be made for mpi jobs
[CGROUPS] Setting 1073741824 bytes (vmem none bytes) on n039 (master)
[CGROUPS]   with 4 cores
[CGROUPS] done.

Adding dependency `python/2.7.8` to your environment
Adding dependency `openmpi/1.8.2` to your environment
Adding package `python-mpi4py/1.3.1-python2.7.8` to your environment
Adding dependency `atlas/3.10.2` to your environment
Adding package `python-numpy/1.8.2-python2.7.8` to your environment
GridEngine parameters:
  mpirun        = /opt/shared/openmpi/1.8.2/bin/mpirun
  nhosts        = 1
  nproc         = 4
  executable    = python
  Open MPI vers = 1.8.2
  MPI flags     = --display-map --mca btl ^tcp
-- begin OPENMPI run --
 Data for JOB [64887,1] offset 0

 ========================   JOB MAP   ========================

 Data for node: n039    Num slots: 4    Max slots: 0    Num procs: 4
        Process OMPI jobid: [64887,1] App: 0 Process rank: 0
        Process OMPI jobid: [64887,1] App: 0 Process rank: 1
        Process OMPI jobid: [64887,1] App: 0 Process rank: 2
        Process OMPI jobid: [64887,1] App: 0 Process rank: 3

 =============================================================
[2] [  8.   9.  10.  11.]
After Scatter:
[0] [ 0.  1.  2.  3.]
[1] [ 4.  5.  6.  7.]
[3] [ 12.  13.  14.  15.]
[3] [  0.   2.   4.   6.   8.  10.  12.  14.  16.  18.  20.  22.  24.  26.  28.
  30.]
[1] [  0.   2.   4.   6.   8.  10.  12.  14.  16.  18.  20.  22.  24.  26.  28.
  30.]
After Allgather:
[0] [  0.   2.   4.   6.   8.  10.  12.  14.  16.  18.  20.  22.  24.  26.  28.
  30.]
[2] [  0.   2.   4.   6.   8.  10.  12.  14.  16.  18.  20.  22.  24.  26.  28.
  30.]
-- end OPENMPI run --
</code>
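
This text ends up in the job's standard output file. Unless the template overrides it with a ''-o'' option, Grid Engine names that file after the job script and the job id, so you can review it once the job finishes; the job id below is a placeholder, not a real value:

<code bash>
# 123456 is a placeholder -- use the job id that qsub reported.
cat mympi4py.qs.o123456
</code>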

===== Recipes =====
If you need to build a Python virtualenv based on a collection of Python modules including mpi4py, follow this recipe to get a properly-integrated mpi4py module (a rough sketch of the general idea appears after the link):

  * [[technical:recipes:mpi4py-in-virtualenv|Building a Python virtualenv with a properly-integrated mpi4py module]]
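
The linked recipe is the authoritative procedure for Farber; the sketch below only illustrates the general idea, and the package versions, environment path, and use of the ''MPICC'' variable are assumptions rather than the recipe itself. The point is to build mpi4py inside the virtualenv against the cluster's own MPI compiler wrapper so that ''mpirun'' and the Python module agree on the MPI library:

<code bash>
# Rough sketch only -- follow the linked recipe for the exact steps.
# Assumes a VALET environment providing Python and a matching Open MPI,
# and that the virtualenv tool is available for that Python.
vpkg_require python/2.7.8 openmpi/1.8.2
virtualenv ~/mpi4py-env
source ~/mpi4py-env/bin/activate

# Build mpi4py against the cluster's mpicc so it links the same Open MPI
# that mpirun will use at job time:
env MPICC=$(which mpicc) pip install mpi4py
</code>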