
Installation of WRF/WPS

This guide is loosely based on the information presented in the WRF-ARW Online Tutorial. WRF expects the MPICH library for distributed-memory parallelism, but IT RCI recommends Open MPI on Caviness, so some minor adjustments are necessary. A major difference will be in how the software is organized and used.

As outlined in the software building and management guide, WRF (and accompanying WPS) will be installed in a versioned hierarchy. The base path for the installation will vary by user and use case:

  • Versions managed by a singular user for that user's private usage
    • Home directory, e.g. ${HOME}/sw/wrf
    • Workgroup per-user directory, e.g. ${WORKDIR}/users/«user»/sw/wrf
  • Versions managed for all members of a workgroup
    • Workgroup software directory, e.g. ${WORKDIR}/sw/wrf

In this document the software will be installed for all members of a workgroup. Though the procedure assumes WRF was not previously installed for the workgroup, it has been constructed to work regardless. Procedural differences based on previous installations will be highlighted where appropriate.

To begin, the workgroup is entered and the installation directories are created. In this document version 4.5.2 will be built and installed:

$ workgroup -g «workgroup»
$ WRF_PREFIX="${WORKDIR}/sw/wrf"
$ WRF_VERSION="4.5.2"
$ mkdir -p "${WRF_PREFIX}/${WRF_VERSION}/src"
$ BUILD_DATE="$(date +%Y.%m.%d)"

Note that depending on the choice of base path (as outlined above) the value of WRF_PREFIX will be different. The BUILD_DATE will be used to differentiate some paths and filenames throughout this document. The WRF_VERSION corresponds with WRF itself; it is up to the user to determine the correct version of WPS to accompany the chosen version of WRF.
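For example, a version managed privately in a user's home directory (the first option above) differs only in the value assigned to WRF_PREFIX; the rest of the procedure is unchanged:

$ WRF_PREFIX="${HOME}/sw/wrf"
$ WRF_VERSION="4.5.2"
$ mkdir -p "${WRF_PREFIX}/${WRF_VERSION}/src"
$ BUILD_DATE="$(date +%Y.%m.%d)"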

WRF depends on several external libraries, including the NetCDF and NetCDF-Fortran libraries. Most of the other libraries are available in the OS.

$ vpkg_devrequire netcdf-fortran/4.5.4:intel-oneapi-2023,openmpi
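The vpkg_devrequire command should also populate the compiler and flag variables used later in this document (FC, CC, CPPFLAGS, FCFLAGS, LDFLAGS) along with NETCDF_DASH_FORTRAN_PREFIX, the NetCDF-Fortran installation prefix referenced throughout the build steps. A quick sanity check that the environment is set:

$ echo "${NETCDF_DASH_FORTRAN_PREFIX}"
$ echo "${FC} ${CC}"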

The PNG library must be at least 1.2.50:

$ pkg-config libpng --modversion
1.5.13

The Jasper library must be at least 1.900.1:

$ pkg-config jasper --modversion
1.900.1

The zip library (zlib) isn't needed directly by WRF; it is needed by the NetCDF library against which WRF links. Since NetCDF is not being purpose-built for this copy of WRF (a VALET package is used instead), zlib does not need to be built.

For dependency builds and build-system tests, a directory will be created for that purpose:

$ mkdir -p "${WRF_PREFIX}/${WRF_VERSION}/src/3rd_party"

Any dependencies would be downloaded and built in the 3rd_party directory and installed into the WRF version directory.

The build of the zip library is shown here for illustrative purposes only – it does not need to be built as part of this procedure.

$ pushd "${WRF_PREFIX}/${WRF_VERSION}/src/3rd_party"
$ wget 'https://www.zlib.net/zlib-1.3.1.tar.gz'
$ tar -xf zlib-1.3.1.tar.gz
$ pushd zlib-1.3.1
$ vpkg_require cmake/default
$ mkdir build-${BUILD_DATE}
$ cd build-${BUILD_DATE}
$ CC=icx \
    cmake \
        -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_INSTALL_PREFIX="${WRF_PREFIX}/${WRF_VERSION}" \
        ..
$ make
$ make install
$ popd
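If the illustrative zlib build is actually performed, the header and library should land under the version directory (the include/ and lib/ layout assumed here is zlib's standard CMake install layout):

$ ls "${WRF_PREFIX}/${WRF_VERSION}/include/zlib.h"
$ ls "${WRF_PREFIX}/${WRF_VERSION}/lib/"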

WRF provides two tests to ensure the build environment is likely to succeed. In the 3rd_party directory:

$ cd "${WRF_PREFIX}/${WRF_VERSION}/src/3rd_party"
$ wget 'https://www2.mmm.ucar.edu/wrf/OnLineTutorial/compile_tutorial/tar_files/Fortran_C_NETCDF_MPI_tests.tar'
$ tar -xf Fortran_C_NETCDF_MPI_tests.tar
$ ${FC} ${CPPFLAGS} ${FCFLAGS} -c 01_fortran+c+netcdf_f.f
$ ${CC} ${CPPFLAGS} ${CFLAGS} -c 01_fortran+c+netcdf_c.c
$ ${FC} ${CPPFLAGS} ${FCFLAGS} -o 01_fortran+c+netcdf \
            01_fortran+c+netcdf_f.o 01_fortran+c+netcdf_c.o \
            ${LDFLAGS} -lnetcdff -lnetcdf
$ ./01_fortran+c+netcdf
   C function called by Fortran
   Values are xx =  2.00 and ii = 1 
 SUCCESS test 1 fortran + c + netcdf
$ mpif90 ${CPPFLAGS} ${FCFLAGS} -c 02_fortran+c+netcdf+mpi_f.f
$ mpicc ${CPPFLAGS} ${CFLAGS} -c 02_fortran+c+netcdf+mpi_c.c
$ mpif90 ${CPPFLAGS} ${FCFLAGS} -o 02_fortran+c+netcdf+mpi \
            02_fortran+c+netcdf+mpi_f.o 02_fortran+c+netcdf+mpi_c.o \
            ${LDFLAGS} -lnetcdff -lnetcdf
$ mpirun -np 1 ./02_fortran+c+netcdf+mpi
   C function called by Fortran
   Values are xx =  2.00 and ii = 1 
 status =            2
 SUCCESS test 2 fortran + c + netcdf + mpi

The WRF and WPS source code will be cloned from GitHub into the versioned source directory and the desired versions checked out:

$ cd "${WRF_PREFIX}/${WRF_VERSION}/src"
 
$ git clone https://github.com/wrf-model/WRF.git WRF
$ pushd WRF
$ git checkout v4.5.2
$ git submodule update --init --recursive
$ popd
 
$ git clone https://github.com/wrf-model/WPS.git WPS
$ pushd WPS
$ git checkout v4.5
$ popd

The clone and checkout process will take some time – these are large source code repositories.

For other 4.5.x and newer releases of WRF, alter the value assigned to WRF_VERSION accordingly and check out the corresponding tags in the git repositories.
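For example, the release tags available in each repository can be listed before deciding which to check out (the v4.* pattern is purely illustrative):

$ git -C WRF tag --list 'v4.*'
$ git -C WPS tag --list 'v4.*'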

For GRIB2 output (which relies on the Jasper library) to be enabled in WRF, the configuration system must be altered:

$ cd "${WRF_PREFIX}/${WRF_VERSION}/src/WRF"
$ sed -i 's/I_really_want_to_output_grib2_from_WRF = "FALSE"/I_really_want_to_output_grib2_from_WRF = "TRUE"/' arch/Config.pl
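To confirm the substitution took effect, re-check the setting; the value reported should now read "TRUE":

$ grep I_really_want_to_output_grib2_from_WRF arch/Config.pl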

Two builds will be done: distributed-memory parallelism (MPI) and serial.

For the MPI build, the configure script must be run with the following selections:

  • Option 78: Intel (ifx/icx), "dm"
  • 1: Basic nesting
$ ./clean -aa
$ NETCDF="${NETCDF_DASH_FORTRAN_PREFIX}" \
  JASPERLIB=/usr/lib64 JASPERINC=/usr/include ZLIB="${WRF_PREFIX}/${WRF_VERSION}" \
    ./configure

The configure script produces a configure.wrf file for the build system. A few minor modifications are necessary: the distributed-memory compiler commands are reduced to the plain mpif90 and mpicc wrapper names, and the time prefix is removed from the Fortran compile command so each compile is not timed:

$ sed -i -e 's/^\(DM_FC *= *\)mpi.*$/\1mpif90/' \
         -e 's/^\(DM_CC *= *\)mpi.*$/\1mpicc/' \
         -e 's/^\(FC *= *\)time /\1/' \
         configure.wrf
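The edited lines can be verified before compiling: DM_FC and DM_CC should now name the plain mpif90 and mpicc wrappers, and the FC line should no longer be prefixed with time:

$ grep -E '^(DM_FC|DM_CC|FC)[[:space:]]*=' configure.wrf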

Finally, the code can be compiled. The -j 4 flag allows up to four concurrent compiles to accelerate the process:

$ NETCDF="${NETCDF_DASH_FORTRAN_PREFIX}" \
  JASPERLIB=/usr/lib64 JASPERINC=/usr/include \
    ./compile -j 4 em_real 2>&1 | tee compile-mpi-${BUILD_DATE}.log

Again, a lot of information will be displayed to the terminal as the build proceeds; all of it will also be written to a file with the name compile-mpi-«YYYY.MM.DD».log. If the build is successful, the output will terminate with e.g.

==========================================================================
build started:   Tue Apr  2 16:39:04 EDT 2024
build completed: Tue Apr 2 16:53:00 EDT 2024
 
--->                  Executables successfully built                  <---
 
-rwxr-xr-x 1 user workgroup 67864392 Apr  2 16:53 main/ndown.exe
-rwxr-xr-x 1 user workgroup 67779464 Apr  2 16:53 main/real.exe
-rwxr-xr-x 1 user workgroup 67344608 Apr  2 16:53 main/tc.exe
-rwxr-xr-x 1 user workgroup 71494672 Apr  2 16:52 main/wrf.exe
 
==========================================================================

At this point those executables can be installed outside the source directory to ${WRF_PREFIX}/${WRF_VERSION}/bin for future computational usage:

$ mkdir -p "${WRF_PREFIX}/${WRF_VERSION}/bin"
$ for exe in main/*.exe; do
    install --mode=0775 "$exe" "${WRF_PREFIX}/${WRF_VERSION}/bin/mpi_$(basename "${exe}")"
  done

In the future, the distributed-memory parallel variants of the program will be referenced as mpi_wrf.exe, mpi_real.exe, etc.
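A listing of the bin directory at this point should show the four executables with the mpi_ prefix:

$ ls -l "${WRF_PREFIX}/${WRF_VERSION}/bin"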

Serial build

Since the WPS build makes use of the headers and intermediate libraries produced by the WRF build, a user who wants the "dm" variant of WPS would at this point need to skip ahead to the Building WPS section below and choose the same "dm" variant used to build WRF. However, in this document only the serial WPS will be built, so that step is skipped.

For the serial build, the configure script must be run with the following selections:

  • Option 76: Intel (ifx/icx) "serial"
  • 0: No nesting
$ ./clean -aa
$ NETCDF="${NETCDF_DASH_FORTRAN_PREFIX}" \
  JASPERLIB=/usr/lib64 JASPERINC=/usr/include ZLIB="${WRF_PREFIX}/${WRF_VERSION}" \
    ./configure

As with the MPI build, the configure script produces a configure.wrf file for the build system, and the same minor modifications are applied:

$ sed -i -e 's/^\(DM_FC *= *\)mpi.*$/\1mpif90/' \
         -e 's/^\(DM_CC *= *\)mpi.*$/\1mpicc/' \
         -e 's/^\(FC *= *\)time /\1/' \
         configure.wrf

Finally, the code can be compiled. The -j 4 flag allows up to four concurrent compiles to accelerate the process:

$ NETCDF="${NETCDF_DASH_FORTRAN_PREFIX}" \
  JASPERLIB=/usr/lib64 JASPERINC=/usr/include \
    ./compile -j 4 em_real 2>&1 | tee compile-serial-${BUILD_DATE}.log

A lot of information will be displayed to the terminal as the build proceeds; all of it will also be written to a file with the name compile-serial-«YYYY.MM.DD».log. If the build is successful, the output will terminate with e.g.

==========================================================================
build started:   Tue Apr  2 16:19:28 EDT 2024
build completed: Tue Apr 2 16:33:37 EDT 2024
 
--->                  Executables successfully built                  <---
 
-rwxr-xr-x 1 user workgroup 67864392 Apr  2 16:33 main/ndown.exe
-rwxr-xr-x 1 user workgroup 67779464 Apr  2 16:33 main/real.exe
-rwxr-xr-x 1 user workgroup 67344608 Apr  2 16:33 main/tc.exe
-rwxr-xr-x 1 user workgroup 71494672 Apr  2 16:32 main/wrf.exe
 
==========================================================================

At this point those executables can be installed outside the source directory to ${WRF_PREFIX}/${WRF_VERSION}/bin for future computational usage:

$ mkdir -p "${WRF_PREFIX}/${WRF_VERSION}/bin"
$ for exe in main/*.exe; do
    install --mode=0775 "$exe" "${WRF_PREFIX}/${WRF_VERSION}/bin"
  done

In the future, the serial variants of the program will be referenced as wrf.exe, real.exe, etc.

In the next section WPS will be built, which will make use of the headers and intermediate libraries produced in this step. Do not alter the WRF source directory! E.g. do not remove build artifacts with ./clean.

Building WPS

The WRF 4.5.2 code has been updated to include support for the Clang-based Intel oneAPI compiler suite – the configure options selected in the previous sections reflect that. However, WPS 4.5 does not offer those options, so its build system must be patched to add that support.

Download the patch file to the WPS source directory and apply the patch:

$ cd "${WRF_PREFIX}/${WRF_VERSION}/src/WPS"
$ wget 'https://docs.hpc.udel.edu/_media/software/wrf/intel-oneapi.patch'
$ patch -d arch -p1 < intel-oneapi.patch 
patching file configure.defaults

Only the serial variant of WPS will be built. Run the configure script with the following selection:

  • Option 41: Intel oneAPI "serial"
$ NETCDF="${NETCDF_DASH_FORTRAN_PREFIX}" \
  JASPERLIB="/usr/lib64" JASPERINC="/usr/include" \
  WRF_DIR="${WRF_PREFIX}/${WRF_VERSION}/src/WRF" \
    ./configure

The WRF_DIR provides the WPS build system with the path to the WRF headers and intermediate libraries produced in the previous step.
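As a sanity check before configuring, the intermediate I/O libraries left behind by the WRF build should still be present in that tree; for example, the NetCDF I/O library normally lives under external/io_netcdf (the exact file names depend on the WRF configuration, so treat this as an illustrative check):

$ ls "${WRF_PREFIX}/${WRF_VERSION}/src/WRF/external/io_netcdf/"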

WRF made use of and linked against the NetCDF-Fortran library, but the WPS configure system does not add that library to its link lines. The configure.wps file generated by ./configure must be adjusted accordingly:

$ sed -i 's/-lnetcdf/-lnetcdff -lnetcdf/g' configure.wps
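The change can be confirmed by checking that the Fortran interface library now appears in the link settings:

$ grep -- '-lnetcdff' configure.wps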

At this point WPS can be built:

$ NETCDF="${NETCDF_DASH_FORTRAN_PREFIX}" \
  JASPERLIB=/usr/lib64 JASPERINC=/usr/include \
  WRF_DIR="${WRF_PREFIX}/${WRF_VERSION}/src/WRF" \
    ./compile 2>&1 | tee compile-serial-${BUILD_DATE}.log
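After the compile finishes, the presence of the principal WPS programs can be checked; WPS normally leaves geogrid.exe, ungrib.exe, and metgrid.exe links in the top-level source directory (treat the exact names as conventional rather than guaranteed):

$ ls -l geogrid.exe ungrib.exe metgrid.exe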

If successful, the executables are installed to the ${WRF_PREFIX}/${WRF_VERSION}/bin directory for future computational usage:

$ find . -name \*.exe -type f -exec install --mode=0775 \{\} "${WRF_PREFIX}/${WRF_VERSION}/bin" \; -print
./ungrib/src/g1print.exe
./ungrib/src/ungrib.exe
./ungrib/src/g2print.exe
./metgrid/src/metgrid.exe
./util/src/rd_intermediate.exe
./util/src/height_ukmo.exe
./util/src/avg_tsfc.exe
./util/src/int2nc.exe
./util/src/mod_levs.exe
./util/src/calc_ecmwf_p.exe
./geogrid/src/geogrid.exe

The link_grib.csh script is usually shown in WPS documentation being run from the source directory, but it functions just fine when installed to the same bin directory as the executables:

$ install --mode=0775 link_grib.csh "${WRF_PREFIX}/${WRF_VERSION}/bin"

As with the "dm" WRF executables, the executables associated with a "dm" build of WPS should be installed with the mpi_ prefix.

The following is a substitute for the install command above and applies only to a "dm" WPS build; do not execute both.

$ for exe in $(find . -name \*.exe -type f); do
    echo "$exe"
    install --mode=0775 "$exe" "${WRF_PREFIX}/${WRF_VERSION}/bin/mpi_$(basename "${exe}")"
  done
./ungrib/src/g1print.exe
./ungrib/src/ungrib.exe
./ungrib/src/g2print.exe
./metgrid/src/metgrid.exe
./util/src/rd_intermediate.exe
./util/src/height_ukmo.exe
./util/src/avg_tsfc.exe
./util/src/int2nc.exe
./util/src/mod_levs.exe
./util/src/calc_ecmwf_p.exe
./geogrid/src/geogrid.exe

The link_grib.csh script only needs to be installed once.

Creating a VALET package to describe the WRF software makes it easy to dynamically configure the runtime environment for computational jobs. The location to which the package definition file is installed depends on the base installation path chosen for WRF:

  • Versions managed by a singular user for that user's private usage
    • WRF_VALET_DIR="${HOME}/.valet"
  • Versions managed for all members of a workgroup
    • WRF_VALET_DIR="${WORKDIR}/sw/valet"
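Whichever base path is chosen, make sure the directory exists before the package definition file is downloaded or edited; for the workgroup scenario used throughout this document:

$ WRF_VALET_DIR="${WORKDIR}/sw/valet"
$ mkdir -p "$WRF_VALET_DIR"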

Three variable quantities are needed when authoring the VALET package definition for the WRF software built above. The first two are the base installation directory and the version id:

$ printf "%s\n%s\n" "$WRF_PREFIX" "$WRF_VERSION"
/work/it_nss/sw/wrf
4.5.2

The third variable is the VALET package that was added to facilitate the build:

  • netcdf-fortran/4.5.4:intel-oneapi-2023,openmpi

If no VALET package definition file exists, create the file from the following template (filling in the values indicated by double angle brackets):

wrf.vpkg_yaml
wrf:
    prefix: «WRF_PREFIX»
    description: the Weather Research and Forecasting model
    url: "https://github.com/wrf-model/WRF"

    versions:
        "«WRF_VERSION»":
            description: WRF 4.5.2 and WPS 4.5, Intel oneAPI, Open MPI
            dependencies:
                - netcdf-fortran/4.5.4:intel-oneapi-2023,openmpi
            actions:
                - variable: WRF_SRC_DIR
                  value: ${VALET_PATH_PREFIX}/src/WRF
                - variable: WPS_SRC_DIR
                  value: ${VALET_PATH_PREFIX}/src/WPS
 

This template can be downloaded directly to the WRF_VALET_DIR directory chosen above and then edited:

$ wget -O "${WRF_VALET_DIR}/wrf.vpkg_yaml" \
    'https://docs.hpc.udel.edu/_export/code/software/wrf/caviness.new?codeblock=29'

If a wrf.vpkg_yaml package definition already exists, simply edit that file and add just the new version section:

        "«WRF_VERSION»":
            description: WRF 4.5.2 and WPS 4.5, Intel oneAPI, Open MPI
            dependencies:
                - netcdf-fortran/4.5.4:intel-oneapi-2023,openmpi
            actions:
                - variable: WRF_SRC_DIR
                  value: ${VALET_PATH_PREFIX}/src/WRF
                - variable: WPS_SRC_DIR
                  value: ${VALET_PATH_PREFIX}/src/WPS
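After creating or editing the file, the package definition can optionally be validated before use. Assuming the cluster's VALET installation provides the vpkg_check command for this purpose, the check is simply:

$ vpkg_check "${WRF_VALET_DIR}/wrf.vpkg_yaml"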
 
 

The package is added to the environment:

$ vpkg_require wrf/4.5.2
Adding dependency `szip/2.1.1` to your environment
Adding dependency `hdf4/4.2.16` to your environment
Adding dependency `binutils/2.35.1` to your environment
Adding dependency `gcc/12.2.0` to your environment
Adding dependency `intel-oneapi/2023.0.0.25537` to your environment
Adding dependency `ucx/1.13.1` to your environment
Adding dependency `openmpi/4.1.5:intel-oneapi-2023` to your environment
Adding dependency `hdf5/1.10.9:intel-oneapi-2023,openmpi` to your environment
Adding dependency `pnetcdf/1.12.3:intel-oneapi-2023,openmpi` to your environment
Adding dependency `netcdf/4.9.1:intel-oneapi-2023,openmpi` to your environment
Adding dependency `netcdf-fortran/4.5.4:intel-oneapi-2023,openmpi` to your environment
Adding package `wrf/4.5.2` to your environment

The executables are now available on the shell's path and can be executed as bare commands, e.g. real.exe:

$ which real.exe
/work/it_nss/sw/wrf/4.5.2/bin/real.exe
 
$ which mpi_real.exe
/work/it_nss/sw/wrf/4.5.2/bin/mpi_real.exe
 
$ real.exe
forrtl: No such file or directory
forrtl: severe (29): file not found, unit 27, file /home/1001/namelist.input

In addition, two variables are set pointing to the respective source directories of WRF and WPS:

$ ls -ld $WRF_SRC_DIR
drwxr-sr-x 22 user workgroup 41472 Apr  4 11:31 /work/it_nss/sw/wrf/4.5.2/src/WRF
 
$ ls -ld $WPS_SRC_DIR
drwxr-sr-x 9 user workgroup 41472 Apr  4 13:00 /work/it_nss/sw/wrf/4.5.2/src/WPS
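These variables are convenient when populating a run directory from a job script, since the static table files read by geogrid, metgrid, and ungrib live in the WPS source tree. The sketch below is a minimal, hedged example: the table file names and the default ./geogrid/ and ./metgrid/ search paths follow standard WPS conventions and should be checked against the namelist options actually in use, and the GFS Vtable is just one possible choice.

$ mkdir -p "${WORKDIR}/users/«user»/wps-run"/{geogrid,metgrid}
$ cd "${WORKDIR}/users/«user»/wps-run"
$ # link the static tables from the WPS source tree into the run directory
$ ln -sf "${WPS_SRC_DIR}/geogrid/GEOGRID.TBL.ARW" geogrid/GEOGRID.TBL
$ ln -sf "${WPS_SRC_DIR}/metgrid/METGRID.TBL.ARW" metgrid/METGRID.TBL
$ ln -sf "${WPS_SRC_DIR}/ungrib/Variable_Tables/Vtable.GFS" Vtable
$ geogrid.exe    # run once namelist.wps has been prepared in this directory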