Building MCFOST on Caviness
Two distinct use cases are described in this document. The first represents a build of the MCFOST program alone; the result of this scheme is the mcfost executable for stand-alone usage. The second integrates the MCFOST program into a Python virtual environment (virtualenv); in this scheme, the mcfost executable can be used as a computational driver under scripted Python code.
The following recipes reflect a user's building and managing MCFOST in their home directory. The environment variable MCFOST_PREFIX will be used throughout and could instead point to a workgroup's storage to create versions shared by all members of the workgroup. The value of MCFOST_PACKAGE will vary: for the stand-alone scheme, mcfost will suffice. For Python virtualenv integrations, something indicative of the nature of that virtualenv is appropriate. For example, a simple virtualenv which includes astropy might be mcfost-astropy, while a Python program using multiple Python modules (like astropy) that models cooling of a stellar body might be stellar-cooling. Finally, code revisions must be balanced against working milestones in the development, so the MCFOST_VERSION environment variable will be used in that regard.
Stand-alone
The recipe begins by setting up a directory to contain all versions of MCFOST to be built in stand-alone fashion:
$ export MCFOST_PACKAGE="mcfost" $ export MCFOST_PREFIX="$(echo ~/sw/${MCFOST_PACKAGE})" $ mkdir -p "$MCFOST_PREFIX" $ cd "$MCFOST_PREFIX"
To build from the head of the master branch of the MCFOST source, it is easier to version the software by compilation date; the actual release version can be aliased to the date-based version after the fact in the VALET package definition. So the next step is to create a version by date, change into that directory, and clone the source:
$ export MCFOST_VERSION="$(date +%Y.%m.%d)"
$ mkdir -p "${MCFOST_PREFIX}/${MCFOST_VERSION}"
$ cd "${MCFOST_PREFIX}/${MCFOST_VERSION}"
$ git clone https://github.com/cpinte/mcfost.git src
$ cd src
The next step is to build the program.
Build Procedure
MCFOST has two components to its build: library dependencies and the mcfost executable. The library dependencies are built by a scripted procedure that downloads source code and produces static libraries that mcfost will link against. Several environment variables are used to communicate the compiler variant and options to the MCFOST build system. The entire procedure can be scripted:
- SWMGR-build.sh
#!/bin/bash -l

vpkg_require intel-oneapi/2022

export MCFOST_INSTALL="${MCFOST_PREFIX}/${MCFOST_VERSION}"
export MCFOST_GIT=1
export MCFOST_AUTO_UPDATE=0
export SYSTEM=ifort
#export MCFOST_XGBOOST=yes

# Start with the library build:
pushd lib
./install.sh
if [ $? -ne 0 ]; then
    exit 1
fi
# Remove the HDF5 fake install root that install.sh created:
[ -d ~/hdf5_install_tmp ] && rm -rf ~/hdf5_install_tmp
popd

# Move to the executable build:
pushd src
make
if [ $? -ne 0 ]; then
    exit 1
fi
# There is no "install" target, so just do it ourselves:
mkdir -p "${MCFOST_INSTALL}/bin"
cp mcfost "${MCFOST_INSTALL}/bin"
popd

# Add a symlink to the utils directory to the install prefix, too;
# we'll set MCFOST_UTILS to point to that:
ln -s "$(realpath --relative-to="$MCFOST_INSTALL" ./utils)" "${MCFOST_INSTALL}/utils"
The library dependencies and mcfost are built with the Intel oneAPI 2022 compiler suite for best performance on Caviness. Parallelization of MCFOST is via OpenMP; there is no support for multi-node parallelism (e.g. with MPI). Save the SWMGR-build.sh script to the src directory (where the procedure left off above), make it executable, and execute it therein:
$ chmod +x SWMGR-build.sh
$ ./SWMGR-build.sh
Adding dependency `binutils/2.38` to your environment
Adding package `intel-oneapi/2022.3.0.8767` to your environment
~/sw/mcfost/2023.03.03/src/lib ~/sw/mcfost/2023.03.03/src
Building MCFOST libraries with ifort
~/sw/mcfost/2023.03.03/src/lib ~/sw/mcfost/2023.03.03/src/lib
--2023-03-03 10:48:05--  http://sprng.org/Version2.0/sprng2.0b.tar.gz
Resolving sprng.org (sprng.org)... 204.85.28.65
Connecting to sprng.org (sprng.org)|204.85.28.65|:80... connected.
   :
Compiling MCFOST for ifort system...........
ifort -fpp -O3 -no-prec-div -fp-model fast=2 -traceback -axSSE2,SSSE3,SSE4.1,SSE4.2,AVX,CORE-AVX2,CORE-AVX512 -fopenmp -o mcfost mcfost_env.o parameters.o constants.o healpix_mod.o sha.o messages.o operating_system.o random_numbers.o utils.o sort.o fits_utils.o grains.o read1d_models.o cylindrical_grid.o spherical_grid.o kdtree2.o elements_type.o Voronoi.o grid.o wavelengths.o stars.o read_DustEM.o Temperature.o density.o read_opacity.o scattering.o read_fargo3d.o hdf5_utils.o utils_hdf5.o read_athena++.o readVTK.o read_idefix.o read_pluto.o coated_sphere.o dust_prop.o molecular_emission.o PAH.o input.o benchmarks.o atom_type.o wavelengths_gas.o broad.o read_param.o dust_ray_tracing.o uplow.o abo.o occupation_probability.o lte.o radiation_field.o thermal_emission.o diffusion.o io_prodimo.o disk_physics.o gas_contopac.o voigts.o opacity_atom.o optical_depth.o ML_prodimo.o output.o mem.o init_mcfost.o io_phantom_infiles.o io_phantom_utils.o mess_up_SPH.o read_phantom.o read_gadget2.o SPH2mcfost.o mhd2mcfost.o dust_transfer.o mol_transfer.o collision_atom.o io_atom.o electron_density.o see.o atom_transfer.o voro++_wrapper.o no_xgboost_wrapper.o mcfost.o -L/home/1001/sw/mcfost/2023.03.03/lib/ifort -L/home/1001/sw/mcfost/2023.03.03/lib/ifort -cxxlib -lcfitsio -lvoro++ -lsprng -lhdf5_fortran -lhdf5 -lz -ldl
~/sw/mcfost/2023.03.03/src
If successful (as above), the ${MCFOST_PREFIX}/${MCFOST_VERSION} directory will be set up with the standard Linux subdirectories (bin, lib) and the compiled targets:
$ ls -l "${MCFOST_PREFIX}/${MCFOST_VERSION}"
total 42
drwxr-xr-x  2 frey everyone  3 Mar  3 11:07 bin
drwxr-xr-x  5 frey everyone  8 Mar  3 11:07 include
drwxr-xr-x  3 frey everyone  3 Mar  3 11:07 lib
drwxr-xr-x 10 frey everyone 15 Mar  3 11:26 src
lrwxrwxrwx  1 frey everyone  9 Mar  3 11:07 utils -> src/utils

$ ls -l "${MCFOST_PREFIX}/${MCFOST_VERSION}/bin"
total 15003
-rwxr-xr-x 1 frey everyone 28472712 Mar  3 11:07 mcfost
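Before wiring up VALET, the new executable can be given a quick smoke test directly by path. This is a minimal check; the -version flag is the same one used later in this document, and exporting MCFOST_UTILS to the utils symlink simply mirrors what the VALET package definition below will do automatically:

$ export MCFOST_UTILS="${MCFOST_PREFIX}/${MCFOST_VERSION}/utils"
$ "${MCFOST_PREFIX}/${MCFOST_VERSION}/bin/mcfost" -version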
With this version of MCFOST built, the next step is to set up a VALET package definition to manage it in the user's environment.
VALET Integration
For a user building and managing MCFOST on their own, the VALET package definition file should be placed in ~/.valet. If the software were being maintained for an entire workgroup, the appropriate directory for the package definition file would be $WORKDIR/sw/valet. The file should be named mcfost.vpkg_yaml. Before editing that file, the values of MCFOST_PREFIX and MCFOST_VERSION must be noted:
$ echo $MCFOST_PREFIX ; echo $MCFOST_VERSION
/home/1001/sw/mcfost
2023.03.03
With those values, ~/.valet/mcfost.vpkg_yaml can be created:
- mcfost.vpkg_yaml
mcfost:
    prefix: /home/1001/sw/mcfost
    description: 3D continuum and line radiative transfer
    url: "https://mcfost.readthedocs.io/"
    default-version: 2023.03.03
    actions:
        - variable: MCFOST_INSTALL
          value: ${VALET_PATH_PREFIX}
        - variable: MCFOST_UTILS
          value: ${VALET_PATH_PREFIX}/utils
    versions:
        2023.03.03:
            description: git master branch as of 2023-03-03, compiled with Intel oneAPI 2022
            dependencies:
                - intel-oneapi/2022
Once created, the syntax can be checked:
[frey@login01.caviness ~]$ vpkg_check ~/.valet/mcfost.vpkg_yaml
/home/1001/.valet/mcfost.vpkg_yaml is OK
   :
When additional versions/variants of the program are built and added to $MCFOST_PREFIX, a new version dictionary can be added to mcfost.vpkg_yaml.
Finally, the package can be added to the environment and the release version can be determined:
$ vpkg_require mcfost/2023.03.03
Adding dependency `binutils/2.38` to your environment
Adding dependency `intel-oneapi/2022.3.0.8767` to your environment
Adding package `mcfost/2023.03.03` to your environment

$ mcfost -version
You are running MCFOST 4.0.00
Git SHA = fe87f0e2c32ffcc897687d1a7a9a90e3ed2f02eb
Binary compiled the Mar 03 2023 at 10:01:05 with INTEL compiler version 2021
Checking last version ...
MCFOST is up-to-date
Ah ha! Version 4.0.00, so let's add an alias to the VALET package definition:
mcfost:
    prefix: /home/1001/sw/mcfost
    description: 3D continuum and line radiative transfer
    url: "https://mcfost.readthedocs.io/"
    default-version: 2023.03.03
    actions:
        - variable: MCFOST_INSTALL
          value: ${VALET_PATH_PREFIX}
        - variable: MCFOST_UTILS
          value: ${VALET_PATH_PREFIX}/utils
    versions:
        2023.03.03:
            description: git master branch as of 2023-03-03, compiled with Intel oneAPI 2022
            dependencies:
                - intel-oneapi/2022
        4.0.00:
            alias-to: 2023.03.03
Having done that, the available versions of MCFOST can be queried:
$ vpkg_versions mcfost

Available versions in package (* = default version):

[/home/1001/.valet/mcfost.vpkg_yaml]
mcfost  3D continuum and line radiative transfer
  4.0.00      alias to mcfost/2023.03.03
* 2023.03.03  git master branch as of 2023-03-03, compiled with Intel oneAPI 2022
Using MCFOST
With the build completed and the VALET package definition created, job scripts can now load a version of MCFOST into the runtime environment:

vpkg_require mcfost/4.0.00
The program is parallelized with OpenMP, so /opt/templates/slurm/generic/threads.qs can be used as the basis for job scripts. Making a copy of that template, the tail end of the script can be altered:
#
# If you have VALET packages to load into the job environment,
# uncomment and edit the following line:
#
vpkg_require mcfost/4.0.00

#
# Do standard OpenMP environment setup:
#
. /opt/shared/slurm/templates/libexec/openmp.sh

#
# [EDIT] Execute MCFOST:
#
MCFOST_ARGS=(-tmp_dir "${TMPDIR:-./}")
if [ -n "$SLURM_MEM_PER_NODE" ]; then
    MCFOST_ARGS+=(-max_mem "$((SLURM_MEM_PER_NODE/1024))")
elif [ -n "$SLURM_MEM_PER_CPU" ]; then
    MCFOST_ARGS+=(-max_mem "$((SLURM_MEM_PER_CPU*SLURM_JOB_CPUS_PER_NODE/1024))")
fi
mcfost "${MCFOST_ARGS[@]}" ...any other flags for this job...
mcfost_rc=$?

# Add any cleanup commands here...

exit $mcfost_rc
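The edited copy is then submitted to Slurm as usual. For example (a sketch only: the script name mcfost_job.qs is a placeholder, and the CPU and memory levels shown here are illustrative and may also be set via #SBATCH directives inside the template):

$ sbatch --cpus-per-task=16 --mem=32G mcfost_job.qs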
Virtualenv Integration
The recipe shown here for integration with a Python virtualenv is a real-world example provided by a Caviness user. As with the stand-alone scheme, it begins with setting up the directory that will hold the software:
$ export MCFOST_PACKAGE="stellar-cooling"
$ export MCFOST_PREFIX="$(echo ~/sw/${MCFOST_PACKAGE})"
$ mkdir -p "$MCFOST_PREFIX"
$ cd "$MCFOST_PREFIX"
For this recipe the new Intel oneAPI compilers will be used. As of March 20, 2023, the MCFOST code included support for the traditional Intel Fortran compiler and the GNU Fortran compiler. When writing this recipe, IT RCI staff added support for the Intel oneAPI Fortran compiler and contributed those changes back to the MCFOST developers; as of March 21, 2023, that code has been merged into the official MCFOST source. Date-based versioning will be used for this recipe as well:
$ export MCFOST_VERSION="$(date +%Y.%m.%d)"
$ mkdir -p "${MCFOST_PREFIX}"
Build the Virtual Environment
At the very least, the conda tool will be necessary in addition to the Intel Fortran compiler. The oneAPI toolchain includes the Intel Distribution for Python, so adding the intel-oneapi/2022 package to the environment satisfies both requirements.
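If the compiler suite is not already present, it can be loaded directly (this is the same package used in the stand-alone recipe; the hdf5 and openmpi packages below will also pull it in as a dependency):

$ vpkg_require intel-oneapi/2022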
The recipe presented here also requires Open MPI and HDF5 support. Since MCFOST is Fortran code, the MPI-IO variant of HDF5 is not usable; therefore:
$ vpkg_devrequire hdf5/1.10.9:intel-oneapi-2022
$ vpkg_require openmpi/4.1.4:intel-oneapi-2022
The virtual environment will require a few modules that are compiled against local libraries:
- mpi4py
- schwimmbad
- image_registration
- FITS_tools
- h5py
- DebrisDiskFM
the following (compiled) modules provided by the Intel channel:
- numpy
- scipy
- astropy
- matplotlib
- zipp
- pluggy
- atomicwrites
and a few standard PyPI modules:
- emcee
- corner
- pyklip
The process follows:
$ conda create --prefix "${MCFOST_PREFIX}/${MCFOST_VERSION}" --channel intel numpy scipy astropy matplotlib pip zipp pluggy atomicwrites
$ conda activate "${MCFOST_PREFIX}/${MCFOST_VERSION}"
$ conda uninstall --force-remove impi_rt
$ pip install --upgrade pip
$ pip install --no-binary :all: --compile mpi4py
$ pip install --compile emcee corner pyklip
$ pip install --no-binary :all: --compile schwimmbad image_registration FITS_tools
$ CC="icx" HDF5_DIR="$HDF5_PREFIX" pip install --no-binary :all: --compile h5py
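A quick sanity check can confirm that mpi4py picked up the Open MPI library and that h5py was linked against the intended HDF5 release (both calls are standard mpi4py and h5py APIs):

$ python -c "from mpi4py import MPI; print(MPI.Get_library_version())"
$ python -c "import h5py; print(h5py.version.hdf5_version)"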
If successful, the DebrisDiskFM module can be built; there is no PyPI or conda channel packaging of DebrisDiskFM, so it must be built from source:
$ mkdir -p "${MCFOST_PREFIX}/${MCFOST_VERSION}/src"
$ cd "${MCFOST_PREFIX}/${MCFOST_VERSION}/src"
$ git clone https://github.com/seawander/DebrisDiskFM.git
$ cd DebrisDiskFM
$ python setup.py install --compile
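The install can be verified with a simple import check (assuming the package registers under the module name debrisdiskfm; adjust the name if the setup script uses a different one):

$ python -c "import debrisdiskfm"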
At this point all of the Python components of the virtualenv have been completed. The next step is to build and install MCFOST into the virtualenv, as well.
Build MCFOST
Akin to the procedure outlined above:
$ cd "${MCFOST_PREFIX}/${MCFOST_VERSION}/src"
$ git clone https://github.com/cpinte/mcfost.git
$ cd mcfost
The procedure is again scripted:
- SWMGR-build.sh
#!/bin/bash -l

vpkg_require hdf5/1.10.9:intel-oneapi-2022

export MCFOST_INSTALL="${MCFOST_PREFIX}/${MCFOST_VERSION}"
export MCFOST_GIT=1
export MCFOST_AUTO_UPDATE=0
export SYSTEM=ifx
#export MCFOST_XGBOOST=yes
export SKIP_HDF5=yes
export HDF5_DIR="$HDF5_PREFIX"

# Start with the library build:
pushd lib
./install.sh
if [ $? -ne 0 ]; then
    exit 1
fi
# Remove the HDF5 fake install root that install.sh created:
[ -d ~/hdf5_install_tmp ] && rm -rf ~/hdf5_install_tmp
popd

# Move to the executable build:
pushd src
make
if [ $? -ne 0 ]; then
    exit 1
fi
# There is no "install" target, so just do it ourselves:
mkdir -p "${MCFOST_INSTALL}/bin"
cp mcfost "${MCFOST_INSTALL}/bin"
popd

# Add a symlink to the utils directory to the install prefix, too;
# we'll set MCFOST_UTILS to point to that:
ln -s "$(realpath --relative-to="$MCFOST_INSTALL" ./utils)" "${MCFOST_INSTALL}/utils"
As in the stand-alone recipe, the SWMGR-build.sh file is saved to the MCFOST source directory and executed from there:
$ chmod +x SWMGR-build.sh
$ ./SWMGR-build.sh
If successful, the mcfost executable will be present in the ${MCFOST_PREFIX}/${MCFOST_VERSION}/bin directory with the other executables associated with the virtualenv:
$ which mcfost
/home/1001/sw/stellar-cooling/2023.03.21/bin/mcfost
Henceforth, additional Python modules can be added using either the conda command or pip. Some care must be exercised to ensure the existing modules are not upgraded or downgraded. Using the virtualenv is straightforward by means of a VALET package definition.
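For example (a sketch; both flags are standard options of the respective tools, and the module name is a placeholder), conda's --freeze-installed flag and pip's --no-deps flag help prevent a new module from replacing packages already in the virtualenv:

# "some-new-package" is a placeholder module name
$ conda install --freeze-installed some-new-package
$ pip install --no-deps some-new-package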
VALET Integration
For a user building and managing MCFOST on their own, the VALET package definition file should be placed in ~/.valet. If the software were being maintained for an entire workgroup, the appropriate directory for the package definition file would be $WORKDIR/sw/valet. The file for this project will be named stellar-cooling.vpkg_yaml. Before editing that file, the values of MCFOST_PREFIX and MCFOST_VERSION must be noted:
$ echo $MCFOST_PREFIX ; echo $MCFOST_VERSION
/home/1001/sw/stellar-cooling
2023.03.21
With those values, ~/.valet/${MCFOST_PACKAGE}.vpkg_yaml can be created:
- stellar-cooling.vpkg_yaml
stellar-cooling:
    prefix: /home/1001/sw/stellar-cooling
    description: 3D continuum and line radiative transfer
    url: "https://mcfost.readthedocs.io/"
    default-version: 2023.03.21
    actions:
        - variable: MCFOST_INSTALL
          value: ${VALET_PATH_PREFIX}
        - variable: MCFOST_UTILS
          value: ${VALET_PATH_PREFIX}/utils
    versions:
        2023.03.21:
            description: build from 4.0.00 source with Python virtualenv
            dependencies:
                - hdf5/1.10.9:intel-oneapi-2022
                - openmpi/4.1.4:intel-oneapi-2022
            actions:
                - action: source
                  script:
                      sh: intel-python.sh
                  success: 0
Once created, the syntax can be checked:
[frey@login01.caviness ~]$ vpkg_check ~/.valet/stellar-cooling.vpkg_yaml
/home/1001/.valet/stellar-cooling.vpkg_yaml is OK
   :
When additional versions/variants of the virtualenv are built and added to $MCFOST_PREFIX, a new version dictionary can be added to stellar-cooling.vpkg_yaml. Each version/variant is usable by means of vpkg_require, as above. In addition to the mcfost command's being present on the path, the virtualenv's Python interpreter is available as python. Python scripts written to make use of this environment should either be executed as an argument to python:
python my_script.py …
or, as an executable itself, should include the following hash-bang on the first line of the file:
#!/usr/bin/env python
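As a closing illustration of this scheme, a Python script in the virtualenv can drive mcfost via the standard subprocess module. This is only a minimal sketch under assumptions: the parameter file name disk.para is a placeholder, the -tmp_dir option is the same one used in the job-script example above, and mcfost is assumed to be on the PATH via vpkg_require of this package.

#!/usr/bin/env python
# Minimal sketch: drive the mcfost executable from scripted Python.
# "disk.para" is a placeholder parameter file name.
import os
import subprocess

def run_mcfost(param_file, extra_args=None):
    """Run mcfost on the given parameter file; return its exit code."""
    args = ["mcfost", param_file, "-tmp_dir", os.environ.get("TMPDIR", "./")]
    if extra_args:
        args.extend(extra_args)
    completed = subprocess.run(args)
    return completed.returncode

if __name__ == "__main__":
    rc = run_mcfost("disk.para")
    raise SystemExit(rc)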