The University of Delaware owns a site license for Gaussian software. Only cluster accounts using a UDelNet Id are covered by that license and have access to the software described on this page. All users, including HPC guest accounts, are free to install their own copies of Gaussian software, either from binary packages or by compiling from source (Portland Group compilers are available to all cluster users).
A Slurm job submission script template for batch Gaussian jobs is available at /opt/shared/templates/slurm/applications/gaussian.qs. Copy and edit the template based on your job's resource needs; the comments in the template describe each Slurm option and environment variable present.
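For example, the template can be copied into the job's working directory, edited with your preferred editor, and submitted; the file name used here is purely illustrative:

$ cp /opt/shared/templates/slurm/applications/gaussian.qs h2o.qs
$ nano h2o.qs
$ sbatch h2o.qs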
Periodically check /opt/shared/templates/slurm/applications for changes in existing templates, or the addition of new templates, designed to provide the best performance for a particular version of Gaussian on that cluster.
At the very least, the Gaussian input file variable must be altered accordingly:
#
# Gaussian input file:
#
GAUSSIAN_INPUT_FILE="h2o.com"
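The file named by GAUSSIAN_INPUT_FILE is an ordinary Gaussian input file in the job's working directory. Purely as an illustration, a minimal water single-point input could be created as follows (the route section and geometry are placeholder values, not a recommendation); note that it contains no processor or memory directives, since the template's fixup described below supplies those from the Slurm allocation:

$ cat > h2o.com <<'EOF'
#P HF/6-31G(d)

water single point

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

EOF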
The template sources an external script to complete the setup of the job environment, including fixup of the input file to account for CPU counts and memory limits imposed by Slurm, and specification of CPU and GPU bindings necessary for Gaussian '16 to utilize GPU offload.
Gaussian '16 introduced the ability to offload some computation to GPU devices. Each GPU associated with the job must have a corresponding CPU dedicated to controlling the GPU and moving data to and from it; thus, OpenMP processor bindings are a necessary part of starting the g16 executable.
The Gaussian job environment script, part of the Caviness Slurm templates package (/opt/shared/slurm/templates/libexec/gaussian.sh), includes code to produce the necessary PGI OpenMP CPU-binding and CPU-to-GPU-binding strings. These values are communicated to Gaussian via the environment variables GAUSS_CDEF and GAUSS_GDEF, respectively.
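For illustration only, the values produced for a hypothetical job allocated four CPU cores and two GPUs might resemble the following; the actual core and GPU indices are computed by the script from the Slurm allocation:

# Illustrative values only -- the job environment script computes these:
export GAUSS_CDEF="0-3"        # Gaussian threads bound to CPU cores 0 through 3
export GAUSS_GDEF="0,1=0,1"    # GPUs 0 and 1 driven by CPU cores 0 and 1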
As part of the input file fixup, any nproc, nprocshared, cpu, and gpucpu route 0 lines are removed from the input file; the number of CPUs is then inferred by Gaussian from GAUSS_CDEF. Setting GAUSSIAN_SKIP_INPUT_FIXUP causes the input file to be used verbatim, which could yield errors if those route 0 directives are present.
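If you do set GAUSSIAN_SKIP_INPUT_FIXUP, a quick check such as the following can confirm that none of those directives remain in the input file. This is only a sketch: it assumes the directives appear as %-prefixed lines (e.g. %NProcShared), as in typical Gaussian input, and the file name is illustrative:

$ grep -iE '^%(nproc|nprocshared|cpu|gpucpu)' h2o.com

If this prints any lines, either remove them from the input file or allow the fixup to run.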
To use GPU offload, first ensure the job script is adding a GPU-enabled version of Gaussian to the job environment:
#
# Add Gaussian to the environment:
#
vpkg_require gaussian/g16b01:gpu
Next, request GPU resources, either in a #SBATCH
comment in the script or from the command line:
$ sbatch --gres=gpu:p100:2 ...
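The same request can instead be embedded near the top of the job script, alongside the template's other #SBATCH lines:

#SBATCH --gres=gpu:p100:2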
When displaying to a Mac running XQuartz across an SSH tunnel, the following error may occur:
$ gview
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
gview.exe: xcb_io.c:259: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.
Abort (core dumped)
or you may see only a "blank white square" X-window on the display rather than the full set of gview GUI windows.
The fix described in Using Gaussian Remotely via Mac OS X eliminates both the crash and the "blank white square" X-window. The enable_iglx flag is documented more extensively in Re-enabling INdirect GLX on your X server.
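On the Mac, the flag is typically enabled from a Terminal window before restarting XQuartz. The preferences domain shown below (org.macosforge.xquartz.X11) can vary with XQuartz version, so confirm it against the documentation for your installation:

$ defaults write org.macosforge.xquartz.X11 enable_iglx -bool true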