Building VASP 6 on Caviness/DARWIN
Over the years the VASP build system has changed significantly. In version 6, the use of the makefile.include
to encapsulate machine-specific options has improved the portability and reproducibility of the build procedures.
The build procedure outlined herein uses Open MPI on top of the Intel compiler suite in conjunction with CUDA 11 with the target executable destined for use on Volta- and Turing-generation NVIDIA devices.
Directory Preparation
To begin, choose a directory in which the VASP version(s) will be built and installed. To build in your home directory, for example:
[user@login00.darwin ~]$ VASP_BASEDIR=~/sw/vasp
[user@login00.darwin ~]$ VASP_BASEDIR_PRIVS=0700
If you are managing VASP software for your entire workgroup, you could instead use
[user@login00.darwin ~]$ VASP_BASEDIR="${WORKDIR}/sw/vasp"
[user@login00.darwin ~]$ VASP_BASEDIR_PRIVS=2770
If the directory hierarchy does not yet exist, it can be set up using
[user@login00.darwin ~]$ mkdir -p -m $VASP_BASEDIR_PRIVS "${VASP_BASEDIR}/attic"
All VASP source code packages you download should be copied to that attic
directory so they are collocated with the builds and installs of the program:
[user@login00.darwin ~]$ cp ~/vasp.6.1.0.tar.gz "${VASP_BASEDIR}/attic"
Source Preparation
In this example version 6.1.0 of VASP will be built; all sub-programs (NCL, Gamma-only, standard, GPU, GPU NCL) will be created.
The Intel compiler suite is well-documented with regard to building VASP, so there is usually very little reason to try alternative toolchains (like GNU or Portland). Our standard recipes for VASP will entail use of the Intel compilers, the MKL for BLAS/LAPACK/FFTW/ScaLAPACK/BLACS, and Open MPI for parallelism.
We will create a directory to hold our base build of VASP 6.1.0, naming it with the version identifier, 6.1.0. The source is then unpacked therein:
[user@login00.darwin ~]$ VASP_INSTALL_PREFIX="${VASP_BASEDIR}/6.1.0"
[user@login00.darwin ~]$ mkdir -m $VASP_BASEDIR_PRIVS "$VASP_INSTALL_PREFIX"
[user@login00.darwin ~]$ cd "$VASP_INSTALL_PREFIX"
[user@login00.darwin 6.1.0]$ tar -xf "${VASP_BASEDIR}/attic/vasp.6.1.0.tar.gz"
[user@login00.darwin 6.1.0]$ mv vasp.6.1.0 src
[user@login00.darwin 6.1.0]$ cd src
Our current working directory is now the build root for this copy of VASP 6.1.0.
Selecting Machine-Specific Parameters
The VASP 6 build environment requires a makefile.include file to be present in the build root. There are various example files present in the arch subdirectory of the build root. Given our choice of compiler and parallelism (see above), the closest example from which to begin is arch/makefile.include.linux_intel. There will be some changes necessary to tailor it to the Caviness/DARWIN systems:
- VALET sets many environment variables that facilitate reuse of a single makefile.include for various choices of Open MPI, CUDA, etc., rather than hard-coding paths into the file.
- Several Make variables need to be explicitly exported in order for sub-make environments to inherit them properly. In particular, the CUDA generated-code architectures option in makefile.include was not being propagated to the CUDA sub-build, leading it to generate SM30/35 code paths (which this version of CUDA no longer supports).
- Though the -xHost option works fine on Caviness (targeting the login nodes' AVX2 capability level but not the AVX512 capabilities of Gen2 and later nodes in that cluster), it did not work on DARWIN; some SSE-specific code optimizations in the minimax code failed to compile. A more specific architecture needed to be selected (-xCORE-AVX2).
This is the makefile.include that was produced via trial and error:
- makefile.include.darwin
# Precompiler options
CPP_OPTIONS= -DHOST=\"DARWIN-Intel-OpenMPI-CUDA\" \
             -DMPI -DMPI_BLOCK=8000 -Duse_collective \
             -DscaLAPACK \
             -DCACHE_SIZE=4000 \
             -Davoidalloc \
             -Dvasp6 \
             -Duse_bse_te \
             -Dtbdyn \
             -Dfock_dblbuf

CPP        = fpp -f_com=no -free -w0 $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS) $(CPPFLAGS)

FC         = mpifort
FCL        = mpifort -mkl=sequential

FREE       = -free -names lowercase

FFLAGS     = -assume byterecl -w -xCORE-AVX2
OFLAG      = -O2
OFLAG_IN   = $(OFLAG)
DEBUG      = -O0

MKL_PATH   = $(MKLROOT)/lib/intel64
BLAS       =
LAPACK     =
BLACS      = -lmkl_blacs_intelmpi_lp64
SCALAPACK  = -lmkl_scalapack_lp64 $(BLACS)

OBJECTS    = fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d.o

INCS       = -I$(MKLROOT)/include/fftw

LLIBS      = $(LDFLAGS) $(SCALAPACK) $(LAPACK) $(BLAS)

OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o

# For what used to be vasp.5.lib
CPP_LIB    = $(CPP)
FC_LIB     = $(FC)
CC_LIB     = icc $(CPPFLAGS)
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB   = $(FREE)

OBJECTS_LIB= linpack_double.o getshmem.o

# For the parser library
CXX_PARS   = icpc
LLIBS      += -lstdc++

# Normally no need to change this
SRCDIR     = ../../src
BINDIR     = ../../bin

#================================================
# GPU Stuff

CPP_GPU    = -DCUDA_GPU -DRPROMU_CPROJ_OVERLAP -DUSE_PINNED_MEMORY -DCUFFT_MIN=28 -UscaLAPACK -Ufock_dblbuf

OBJECTS_GPU= fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d_gpu.o fftmpiw_gpu.o

CC         = icc
CXX        = icpc
CFLAGS     = $(CPPFLAGS) -fPIC -DADD_ -Wall -qopenmp -DMAGMA_WITH_MKL -DMAGMA_SETAFFINITY -DGPUSHMEM=300 -DHAVE_CUBLAS

##
## vpkg_devrequire cuda/<version> will have setup everything in the
## environment for us (CUDA_PREFIX, PATH, and LD_LIBRARY_PATH)
##
CUDA_ROOT  ?= $(CUDA_PREFIX)
export CUDA_ROOT

NVCC       := nvcc -ccbin=icc $(CPPFLAGS)
CUDA_LIB   := $(LDFLAGS) -lnvToolsExt -lcudart -lcuda -lcufft -lcublas

##
## compute_30,35 dropped from CUDA 11
## compute_60 was Pascal (present on Caviness, not DARWIN)
## compute_70,72 are Volta
## compute_75 is Turing
##
##GENCODE_ARCH    := -gencode=arch=compute_30,code=\"sm_30,compute_30\" \
##                   -gencode=arch=compute_35,code=\"sm_35,compute_35\" \
GENCODE_ARCH    := -gencode=arch=compute_60,code=\"sm_60,compute_60\" \
                   -gencode=arch=compute_70,code=\"sm_70,compute_70\" \
                   -gencode=arch=compute_72,code=\"sm_72,compute_72\" \
                   -gencode=arch=compute_75,code=\"sm_75,compute_75\"
export GENCODE_ARCH

##
## vpkg_require openmpi/<version> will have the pkg-config path
## setup for us to query this argument
##
MPI_INC    = $(shell pkg-config --cflags-only-I ompi-c | sed 's/-I//')
export MPI_INC
It should be saved to the build root as makefile.include. The only change necessary to use this file on Caviness is the -xCORE-AVX2 option in the FFLAGS definition; it can be changed back to -xHost or to -xCORE-AVX512 (to target Gen2 and higher nodes).
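For example, retargeting a copy of the file at AVX512-capable nodes can be done with a quick in-place edit. This is a sketch; the sed invocation and the .bak backup suffix are illustrative, and it assumes you run it from the build root where makefile.include lives:

```shell
# Swap the architecture flag in makefile.include for AVX512-capable nodes;
# sed keeps the original file as makefile.include.bak.
sed -i.bak 's/-xCORE-AVX2/-xCORE-AVX512/g' makefile.include

# Confirm the substitution took effect:
grep -- '-xCORE-AVX512' makefile.include
```

The same substitution works in reverse if you need to return to the AVX2 target.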
Build Environment
The makefile.include above includes comments that reference the VALET commands used to configure the build (and runtime) environment for the copy of VASP being built. Two packages must be added; if you have already added packages to your login shell environment, first roll back to a clean environment:
[user@login00.darwin src]$ vpkg_rollback all
This, of course, does not remove changes you introduce manually in your .bashrc or .bash_profile files; IT RCI strongly discourages making such environment changes in those files, since they produce a very difficult-to-manage shell environment.
From a clean shell environment, add the Open MPI and CUDA packages:
[user@login00.darwin src]$ vpkg_require openmpi/4.1.0:intel-2020
Adding dependency `intel/2020u4` to your environment
Adding package `openmpi/4.1.0:intel-2020` to your environment
[user@login00.darwin src]$ vpkg_devrequire cuda/11.1.1
Adding package `cuda/11.1.1-455.32.00` to your environment
The PATH has been updated such that commands like mpifort and nvcc require no leading path to be properly located by the shell. The CPPFLAGS and LDFLAGS variables reference the myriad paths under the CUDA installation that may be required for compilation. And the CUDA_PREFIX variable contains the installation prefix for that CUDA library:
[user@login00.darwin src]$ which nvcc
/opt/shared/cuda/11.1.1-455.32.00/bin/nvcc
[user@login00.darwin src]$ which mpifort
/opt/shared/openmpi/4.1.0-intel-2020/bin/mpifort
[user@login00.darwin src]$ echo $CUDA_PREFIX
/opt/shared/cuda/11.1.1-455.32.00
[user@login00.darwin src]$ echo $CPPFLAGS
-I/opt/shared/cuda/11.1.1-455.32.00/include -I/opt/shared/cuda/11.1.1-455.32.00/nvvm/include -I/opt/shared/cuda/11.1.1-455.32.00/extras/CUPTI/include -I/opt/shared/cuda/11.1.1-455.32.00/extras/Debugger/include
The makefile.include is constructed to integrate properly with this environment, such that altering the version of either VALET package (openmpi or cuda) should not require any modification to the makefile.include.
Compilation and Linking
At this point, compiling and linking the five variants of the VASP 6.1.0 program is simple:
[user@login00.darwin src]$ make all
This will take some time. If you are tempted to speed up the compilation using Make parallelism, be cautioned: the build system provided by the VASP developers does not have enough dependency information to properly order the compilation of Fortran modules. This results in errors like:
fock.F(3619): error #7002: Error in opening the compiled module file.  Check INCLUDE paths.   [FOCK]
      USE fock
--------^
A natural speed-up to the build is to omit the variants that are unnecessary for your work. If only the standard variant is needed, then build just that one:
[user@login00.darwin src]$ make std
A successful build will produce executables in the bin subdirectory of the build root:
[user@login00.darwin src]$ ls -l bin
total 38372
-rwxr-xr-x 1 user everyone 29892256 Feb 15 11:13 vasp_std
Installation
In order to foster better software management, it is advisable not to use the executables within the build root. Any subsequent attempt to recompile (e.g. to fix an omission in makefile.include) will overwrite an executable that has likely been used for production calculations!
Once the build has completed successfully, copy (install) the executables:
[user@login00.darwin src]$ mkdir -m $VASP_BASEDIR_PRIVS "${VASP_INSTALL_PREFIX}/bin"
[user@login00.darwin src]$ for exe in bin/*; do install -Cv --backup=numbered "$exe" "${VASP_INSTALL_PREFIX}/bin"; done
‘bin/vasp_gam’ -> ‘/home/user/sw/vasp/6.1.0/bin/vasp_gam’
‘bin/vasp_gpu’ -> ‘/home/user/sw/vasp/6.1.0/bin/vasp_gpu’
‘bin/vasp_gpu_ncl’ -> ‘/home/user/sw/vasp/6.1.0/bin/vasp_gpu_ncl’
‘bin/vasp_ncl’ -> ‘/home/user/sw/vasp/6.1.0/bin/vasp_ncl’
‘bin/vasp_std’ -> ‘/home/user/sw/vasp/6.1.0/bin/vasp_std’
The --backup=numbered option ensures that if executables already exist in the install location, they will be renamed with a numbered file extension rather than simply being replaced by the new copy. If, for some reason, an old executable needs to be restored, its backup can be renamed to effect that change. The -C option checks whether the source and destination files differ, and only performs the copy operation if they do.
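As a sketch of such a restoration, the most recent numbered backup of vasp_std could be renamed back into place (the ~1~ suffix shown here is whatever number install actually assigned in your directory):

```shell
# Restore the numbered backup of vasp_std that was created by
# `install --backup=numbered`, replacing the newer (faulty) copy.
cd "${VASP_INSTALL_PREFIX}/bin"
mv vasp_std.~1~ vasp_std
```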
VALET Package Definition
With this version of VASP built, the remaining step is to leverage VALET for setup of the runtime environment when you use the software. VALET automatically recognizes the standard directory layout, so configuring versions/variants of vasp is very straightforward. First, note your installation path:
[user@login00.darwin src]$ vpkg_rollback all
[user@login00.darwin src]$ cd
[user@login00.darwin ~]$ echo $VASP_BASEDIR
/home/user/sw/vasp
Since this build was done in the user's home directory, it is a personal copy of the software and should use a VALET package definition file stored in ~/.valet
[user@login00.darwin ~]$ VALET_PKG_DIR=~/.valet ; VALET_PKG_DIR_MODE=0700
versus an installation made for an entire workgroup, which would store the VALET package definition files in $WORKDIR/sw/valet
[user@login00.darwin ~]$ VALET_PKG_DIR="$WORKDIR/sw/valet" ; VALET_PKG_DIR_MODE=2770
Whichever scheme is in use, ensure the directory exists:
[user@login00.darwin ~]$ mkdir -p --mode=$VALET_PKG_DIR_MODE "$VALET_PKG_DIR"
VALET allows package definitions in a variety of formats (XML, JSON, YAML), but YAML tends to be the simplest, so we will use it here.
Package section
The package section of the definition file includes items that apply to all versions/variants of the software:
vasp:
    prefix: /home/user/sw/vasp
    description: Vienna Ab-initio Simulation Package
    url: "http://cms.mpi.univie.ac.at/vasp/"
The package identifier, vasp, is the top-level key in the document, and the value of $VASP_BASEDIR is the value of the prefix key in this section. The URL and description are taken from the official VASP web site.
Versions
The versions key is used to provide a list of the versions/variants of the software:
vasp:
    prefix: /home/user/sw/vasp
    description: Vienna Ab-initio Simulation Package
    url: "http://cms.mpi.univie.ac.at/vasp/"
    versions:
        "6.1.0":
            description: compiled with Open MPI, Intel compilers, MKL, ScaLAPACK, CUDA
            dependencies:
                - openmpi/4.1.0:intel-2020
                - cuda/11.1.1
The version identifier 6.1.0 is inferred to be the path prefix to the version in question: the package's prefix (/home/user/sw/vasp) with the version identifier appended (/home/user/sw/vasp/6.1.0) is implicit. The implicit behavior can be overridden by providing a prefix key in the version definition: a relative path is appended to the package's prefix, while an absolute path is used as-is.
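To illustrate the override, a hypothetical additional variant installed outside the package prefix might be declared like this (the 6.1.0-test identifier and its /opt/shared path are invented for this example; they are not part of the install performed above):

```yaml
vasp:
    prefix: /home/user/sw/vasp
    versions:
        "6.1.0":
            # no prefix key: implicitly /home/user/sw/vasp/6.1.0
            description: compiled with Open MPI, Intel compilers, MKL, ScaLAPACK, CUDA
        "6.1.0-test":
            # absolute path: used as-is, ignoring the package prefix
            prefix: /opt/shared/vasp/6.1.0-test
            description: hypothetical variant installed outside the package prefix
```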
It is a good idea to specify which version definition should act as the default. This yields the following package definition file
- vasp.vpkg_yaml
vasp:
    prefix: /home/user/sw/vasp
    description: Vienna Ab-initio Simulation Package
    url: "http://cms.mpi.univie.ac.at/vasp/"
    default-version: "6.1.0"
    versions:
        "6.1.0":
            description: compiled with Open MPI, Intel compilers, MKL, ScaLAPACK, CUDA
            dependencies:
                - openmpi/4.1.0:intel-2020
                - cuda/11.1.1
saved at $VALET_PKG_DIR/vasp.vpkg_yaml
.
Checking the definition file
The package definition file can be checked for proper syntax using the VALET command vpkg_check:
[user@login00.darwin ~]$ vpkg_check "$VALET_PKG_DIR/vasp.vpkg_yaml"
/home/user/.valet/vasp.vpkg_yaml is OK
[vasp] {
  contexts: all
  actions: {
    VASP_PREFIX=${VALET_PATH_PREFIX} (contexts: development)
  }
  http://cms.mpi.univie.ac.at/vasp/
  Vienna Ab-initio Simulation Package
  prefix: /home/user/sw/vasp
  source file: /home/user/.valet/vasp.vpkg_yaml
  default version: vasp/6.1.0
  versions: {
    [vasp/6.1.0] {
      contexts: all
      dependencies: {
        openmpi/4.1.0:intel-2020
        cuda/11.1.1
      }
      compiled with Open MPI, Intel compilers, MKL, ScaLAPACK, CUDA
      prefix: /home/user/sw/vasp/6.1.0
      standard paths: {
        bin: /home/user/sw/vasp/6.1.0/bin
      }
    }
  }
}
The file had no errors in its YAML syntax. Notice also that the standard path (bin) is found and noted by VALET!
Runtime environment
To load VASP 6.1.0 into the runtime environment, the vpkg_require command is used:
[user@login00.darwin ~]$ vpkg_require vasp/6.1.0
Adding dependency `intel/2020u4` to your environment
Adding dependency `openmpi/4.1.0:intel-2020` to your environment
Adding dependency `cuda/11.1.1-455.32.00` to your environment
Adding package `vasp/6.1.0` to your environment
[user@login00.darwin ~]$ which vasp_std
~/sw/vasp/6.1.0/bin/vasp_std
The vasp_std command is used without a leading path, which implies that the shell will check the directories in the PATH environment variable for an executable with that name. If a different version/variant of VASP is chosen, the command is still vasp_std, but the shell finds it at a different location. This abstraction (no full paths to executables) makes it easier to alter complex job scripts: simply change the variant of VASP added using vpkg_require.
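As a sketch of how this plays out in a batch job, a Slurm submission script might load the package and invoke the executable by name only. The job name, rank count, and partition below are hypothetical placeholders; substitute the Slurm options appropriate to your cluster and workgroup:

```shell
#!/bin/bash -l
#SBATCH --job-name=vasp-run         # hypothetical job name
#SBATCH --ntasks=36                 # hypothetical MPI rank count
#SBATCH --partition=standard        # hypothetical partition name

# Load VASP (and its Intel/Open MPI/CUDA dependencies) via VALET:
vpkg_require vasp/6.1.0

# No leading path on vasp_std: the shell locates it via $PATH, so
# switching versions means changing only the vpkg_require line above.
mpirun vasp_std
```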