Gaussian on Caviness

A template Slurm job submission script for batch Gaussian jobs is available at /opt/templates/slurm/applications/gaussian.qs. Copy and edit the template based on your job's resource needs; the comments in the template describe each Slurm option and environment variable present. At the very least, the name of your Gaussian input file must be set in the copied script:

<code>
#
# Gaussian input file:
#
GAUSSIAN_INPUT_FILE="h2o.com"
</code>
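
A typical workflow, then, is to copy the template, edit it, and submit it with sbatch. The file names below are only placeholders:

<code>
$ cp /opt/templates/slurm/applications/gaussian.qs h2o.qs
$ nano h2o.qs      # set GAUSSIAN_INPUT_FILE and adjust the Slurm options
$ sbatch h2o.qs
</code>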

The template sources an external script to complete setup of the job environment, including fixup of the input file to account for CPU counts and memory limits imposed by Slurm, and specification of CPU and GPU bindings necessary for Gaussian '16 to utilize GPU offload.

Gaussian '16 introduced the ability to offload some computation to GPU devices. Each GPU associated with the job must have a corresponding CPU dedicated to controlling the GPU and moving data to and from it; thus, OpenMP processor bindings are a necessary part of starting the g16 executable.

The Gaussian job environment script, part of the Caviness Slurm templates package (/opt/shared/slurm/templates/libexec/gaussian.sh), includes code to produce the necessary PGI OpenMP CPU binding and CPU-to-GPU binding strings. These values are communicated to Gaussian via the environment variables GAUSS_CDEF and GAUSS_GDEF, respectively.
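
For illustration only, the two variables use the same syntax as Gaussian's %CPU and %GPUCPU Link 0 commands; the values below are a sketch of what the environment script might generate for an 8-core, 2-GPU allocation, not settings you should normally supply yourself:

<code>
# Illustrative values only -- generated automatically by gaussian.sh:
export GAUSS_CDEF="0-7"        # Gaussian threads bound to CPU cores 0 through 7
export GAUSS_GDEF="0,1=0,1"    # GPUs 0 and 1 controlled by CPU cores 0 and 1
</code>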

By default, the job environment script removes any %NProc, %NProcShared, %CPU, and %GPUCPU Link 0 lines from the input file (as part of the input file fixup); the number of CPUs is then inferred by Gaussian from GAUSS_CDEF. Setting GAUSSIAN_SKIP_INPUT_FIXUP causes the input file to be used verbatim, which could yield errors if those Link 0 directives are present.
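
If you do want to manage those Link 0 settings yourself, the variable can be set in the copied job script; this is only a sketch, assuming any non-empty value enables the skip:

<code>
#
# Use the input file exactly as written -- skip the automatic fixup
# (assumption: any non-empty value enables this behavior):
#
GAUSSIAN_SKIP_INPUT_FIXUP="YES"
</code>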

To use GPU offload, first ensure the job script adds a GPU-enabled version of Gaussian to the job environment:

<code>
#
# Add Gaussian to the environment:
#
vpkg_require gaussian/g16b01:gpu
</code>
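
The gaussian/g16b01:gpu version and variant shown here is only an example; the versions actually installed can be listed with VALET:

<code>
$ vpkg_versions gaussian
</code>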

Next, request GPU resources, either in an #SBATCH directive in the script or on the sbatch command line:

<code>
$ sbatch --gres=gpu:p100:2 ...
</code>
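
The equivalent request embedded in the job script itself would look like the following (the p100 GPU type and the count of 2 are only examples; match them to the resources your job actually needs):

<code>
#SBATCH --gres=gpu:p100:2
</code>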

When displaying GaussView (gview) to a Mac running XQuartz across an SSH tunnel, the following error may occur:

<code>
$ gview
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
gview.exe: xcb_io.c:259: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.
Abort (core dumped)
</code>

The fix described in Using Gaussian Remotely via Mac OS X, which requires altering the XQuartz preferences, eliminates the crash. The enable_iglx flag is documented more extensively in Re-enabling INdirect GLX on your X server.
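
For reference, that preference change is typically made from a Terminal on the Mac and requires restarting XQuartz afterward; this is a sketch, and the preference domain depends on the XQuartz release (older versions use org.macosforge.xquartz.X11, newer ones use org.xquartz.X11):

<code>
$ defaults write org.macosforge.xquartz.X11 enable_iglx -bool true
</code>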
