mpi4py for Farber
MPI for Python (mpi4py) provides bindings of the Message Passing Interface (MPI) standard for the Python programming language, allowing any Python program to exploit multiple processors.
$ vpkg_versions python-mpi4py

Available versions in package (* = default version):
[/opt/shared/valet/2.0.1/etc/python-mpi4py.vpkg_json]
python-mpi4py        Fundamental library for scientific computing
* 1.3.1-python2.7.8  Version 1.3.1 and python 2.7.8
  1.3.1-python3.2.5  Version 1.3.1 and python 3.2.5
  3.0.3-python3.6.3  Version 3.0.3 and python 3.6.3
  python2.7.8        alias to python-mpi4py/1.3.1-python2.7.8
  python3.2.5        alias to python-mpi4py/1.3.1-python3.2.5
  python3.6.3        alias to python-mpi4py/3.0.3-python3.6.3
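To make one of these versions available in your environment, load it with vpkg_require. For example, to load the Python 3 build via its alias shown above:

vpkg_require python-mpi4py/python3.6.3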
Sample mpi4py script
Adapted from the documentation provided by NASA Modeling Guru, consider the following mpi4py script, which implements a scatter-gather procedure:
- scatter-gather.py
#--------------
# Loaded Modules
#--------------
import numpy as np
from mpi4py import MPI

#-------------
# Communicator
#-------------
comm = MPI.COMM_WORLD

my_N = 4
N = my_N * comm.size

if comm.rank == 0:
    A = np.arange(N, dtype=np.float64)
else:
    # Note that if I am not the root processor A is an empty array
    A = np.empty(N, dtype=np.float64)

my_A = np.empty(my_N, dtype=np.float64)

#-------------------------
# Scatter data into my_A arrays
#-------------------------
comm.Scatter([A, MPI.DOUBLE], [my_A, MPI.DOUBLE])

if comm.rank == 0:
    print("After Scatter:")

for r in range(comm.size):
    if comm.rank == r:
        print("[%d] %s" % (comm.rank, my_A))
    comm.Barrier()

#-------------------------
# Everybody is multiplying by 2
#-------------------------
my_A *= 2

#-----------------------
# Allgather data into A again
#-----------------------
comm.Allgather([my_A, MPI.DOUBLE], [A, MPI.DOUBLE])

if comm.rank == 0:
    print("After Allgather:")

for r in range(comm.size):
    if comm.rank == r:
        print("[%d] %s" % (comm.rank, A))
    comm.Barrier()
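Before submitting a batch job, you can sanity-check the script interactively. A minimal sketch, assuming you obtain an interactive session on a compute node with qlogin, that the VALET packages above pull in a matching Open MPI, and a 4-process run:

qlogin
vpkg_require python-mpi4py/python3.6.3
vpkg_require python-numpy/python3.6.3
mpirun -np 4 python3 scatter-gather.py

With 4 processes, N = 16, so after the scatter rank 0 holds [ 0. 1. 2. 3.], and after the all-gather every rank holds the doubled array 0, 2, 4, ..., 30.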
Batch job
Any MPI job requires you to use mpirun to initiate it, and this should be done through the Grid Engine job scheduler to best utilize the resources on the cluster. Also, if you want to run on more than one node (more than 24 or 48 cores, depending on the node specifications), then you must initiate a batch job from the head node. Remember, if you only have one node in your workgroup, then you will need to take advantage of the standby queue to run a job utilizing multiple nodes.
The best results on Farber have been found by using the openmpi-ib.qs template for Open MPI jobs. For example, copy the template to a job script called mympi4py.qs using
cp /opt/templates/gridengine/openmpi/openmpi-ib.qs mympi4py.qs
and modify it for your application. Make sure you read the comments in the job script to select the appropriate options. In particular, if you specify exclusive access with -l exclusive=1, then no other jobs can run on the nodes assigned to your job. Make sure you specify the correct VALET environment for your job, selecting the correct versions of python2 or python3 and Open MPI:
vpkg_require python-mpi4py/python3.6.3
vpkg_require python-numpy/python3.6.3
Lastly, modify MY_EXE to run your mpi4py script. In this example, it would be
MY_EXE="python3 scatter-gather.py"
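Taken together, the edited portion of mympi4py.qs would look something like the sketch below; everything else in the copied template should be left as provided. The 48-core request and the parallel environment name shown here are illustrative assumptions; use the values documented in the template's comments:

# Sketch of the edited lines in mympi4py.qs (values are examples)
#$ -pe mpi 48                # NPROC: total cores requested (PE name per the template)

vpkg_require python-mpi4py/python3.6.3
vpkg_require python-numpy/python3.6.3

MY_EXE="python3 scatter-gather.py"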
All the options for mpirun will automatically be determined based on the options you selected in your job script. Now, to run this job from the head node, Farber, simply use
workgroup -g <<investing-entity>>
qsub mympi4py.qs
or
qsub -l exclusive=1 mympi4py.qs
for exclusive access to the nodes needed for your job.
Remember, if you want to specify more cores for NPROC than are available in your workgroup, then you need to request the standby queue with -l standby=1; keep in mind it has limited run times based on the total number of cores you are using across all of your jobs. In this example, since the job requests only 48 cores, it can run for up to 8 hours using
qsub -l exclusive=1 -l standby=1 mympi4py.qs
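After submission, you can check whether the job is queued or running with the standard Grid Engine status command, for example:

qstat -u $USER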