
Running applications

This section uses the wiki's documentation conventions.

The Grid Engine job scheduling system is used to manage and control the computing resources for all jobs submitted to a cluster. This includes load balancing, reconciling requests for memory and processor cores with availability of those resources, suspending and restarting jobs, and managing jobs with different priorities. Grid Engine on Farber is Univa Grid Engine but still referred to as SGE.

The Grid Engine job scheduling system page provides an excellent overview of Grid Engine, which is the job scheduling system used on Farber.

In order to schedule any job (interactively or batch) on a cluster, you must set your workgroup to define your cluster group or investing-entity compute nodes.

See Scheduling Jobs and Managing Jobs for general information about getting started with scheduling and managing jobs on a cluster using Grid Engine.

Generally, your runtime environment (path, environment variables, etc.) should be the same as your compile-time environment. Usually, the best way to achieve this is to put the relevant VALET commands in shell scripts. You can reuse common sets of commands by storing them in a shell script file that can be sourced from within other shell script files.
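For example, you might keep common VALET setup in one file (the file and package names here are illustrative):

    # project-env.sh -- common environment setup, sourced by other scripts
    vpkg_require my_compiler
    vpkg_require my_mpi_library

and reuse it from a job script or another shell script with:

    source project-env.sh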

If you are writing an executable script that does not invoke bash with the -l (login) option, and you want to include VALET commands in your script, then you should include the line:
source /etc/profile.d/valet.sh

You do not need this command when you:

  1. type commands interactively, or source the command file, or
  2. include the lines in a job script file submitted with qsub.
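For example, a minimal executable script that uses VALET might look like the following sketch (the package and program names are illustrative assumptions):

    #!/bin/bash
    #
    # Make VALET available in this non-login shell:
    source /etc/profile.d/valet.sh

    # Load the package(s) your program needs (illustrative name):
    vpkg_require my_package

    ./myprogram "$@"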

The job queues

Each investing-entity's group (workgroup) has owner queues that provide a fixed number of slots matching the total number of cores purchased. If a job is submitted that would use more than the allowed slots, the job waits until enough slots are made available by completing jobs. There is no time limit imposed on owner-queue jobs. All users can see running and waiting jobs, which allows groups to work out policies for managing their purchased nodes.

The standby queues are available for projects requiring more slots than purchased, or to take advantage of idle nodes when a job would otherwise have to wait in the owner queue. Because standby jobs may run on other workgroups' nodes, they have a time limit, and each user is limited to a total number of cores across all of their standby jobs. Generally, users can use 10 nodes for an 8-hour standby job or 40 nodes for a 4-hour standby job.

A spillover queue may be available for the case where a job is submitted to the owner queue while standby jobs are consuming the needed slots. Instead of waiting, the job is sent to the spillover queue to start on a similar idle node.

A spare queue may be on a cluster to make spare nodes available to users, by special request.

Each cluster is configured with a particular job scheduling system. General documentation is available for all job scheduling systems currently in use.

Interactive jobs

All interactive jobs should be scheduled to run on the compute nodes, not the login/head node.

An interactive session (job) can often be made non-interactive (batch) by putting the input in a file, using the redirection symbols < and >, and making the entire command a line in a job script file:

program_name < input_command_file > output_command_file

The non-interactive job can then be scheduled as a batch job.

Starting an interactive session

Remember, you must specify your workgroup to define your cluster group or investing-entity compute nodes before submitting any job, and this includes starting an interactive session. Then use the Grid Engine command qlogin on the login (head) node. Grid Engine will look for a node with a free scheduling slot (processor core) and a sufficiently light load, and then assign your session to it. If no such node becomes available, your qlogin request will eventually time out. The qlogin command results in a job in the workgroup interactive serial queue, «investing_entity»-qrsh.q.

Type

    workgroup -g «investing_entity»

Type

    qlogin

to reserve one scheduling slot and start an interactive shell on one of your workgroup investing-entity compute nodes.

Type

    qlogin -pe threads 12

to reserve 12 scheduling slots and start an interactive shell on one of your workgroup investing-entity compute nodes.

Type

    exit

to terminate the interactive shell and release the scheduling slot(s).
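For example, a complete interactive session might look like this (the it_css workgroup, traine account, and compute node name are illustrative, and Grid Engine's scheduling messages are elided):

[traine@farber ~]$ workgroup -g it_css
[(it_css:traine)@farber ~]$ qlogin
...
[(it_css:traine)@n036 ~]$ exit
[(it_css:traine)@farber ~]$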

Acceptable nodes for interactive sessions

Use the login (head) node for interactive program development including Fortran, C, and C++ program compilation. Use Grid Engine (qlogin) to start interactive shells on your workgroup investing-entity compute nodes.

Grid Engine provides the qsub command for scheduling batch jobs:

Command                                     Action
qsub «command_line_options» «job_script»    Submit a job using the script commands in the file «job_script»

For example,

 qsub myproject.qs

or to submit a standby job that waits for idle nodes (up to 240 slots for 8 hours),

 qsub -l standby=1 myproject.qs

or to submit a standby job that waits for idle 48-core nodes (if you are using a cluster with 48-core nodes like Mills)

 qsub -l standby=1 -q standby.q@@48core myproject.qs
 

or to submit a standby job that waits for idle 24-core nodes (the job would not be assigned to any 48-core nodes; important for consistency of core assignment)

 qsub -l standby=1 -q standby.q@@24core myproject.qs

or to submit to the four hour standby queue (up to 816 slots spanning all nodes)

 qsub -l standby=1,h_rt=4:00:00 myproject.qs

or to submit to the four hour standby queue spanning just the 24-core nodes.

 qsub -l standby=1,h_rt=4:00:00 -q standby-4h.q@@24core myproject.qs

The file myproject.qs contains bash shell commands and qsub statements that specify qsub options and resource requests. The qsub statements begin with #$.

We strongly recommend that you use a script file that you pattern after the prototypes in /opt/templates and save your job script files within a $WORKDIR (private work) directory.

Reusable job scripts help you maintain a consistent batch environment across runs. The optional .qs filename suffix signifies a queue-submission script file.
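For illustration, a minimal job script might look like the following sketch; it is not one of the /opt/templates prototypes, and the package and program names are assumptions:

    #
    # myproject.qs -- a minimal example job script
    #
    #$ -N myproject
    #$ -pe threads 4
    #$ -l h_cpu=1:30:00

    # Set up the runtime environment; substitute the VALET package(s)
    # your program actually needs:
    vpkg_require my_package

    # Run the program on the slots Grid Engine assigned:
    ./myprogram -n $NSLOTS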

See also resource options to specify memory free and/or available, exclusive access, and requesting specific Matlab licenses.

Grid Engine environment variables

In every batch session, Grid Engine sets environment variables that are useful within job scripts. Here are some common examples. The rest appear in the ENVIRONMENTAL VARIABLES section of the qsub man page.

Environment variable   Contains
HOSTNAME               Name of the execution (compute) node
JOB_ID                 Batch job id assigned by Grid Engine
JOB_NAME               Name you assigned to the batch job (see Command options for qsub)
NSLOTS                 Number of scheduling slots (processor cores) assigned by Grid Engine to this job
SGE_TASK_ID            Task id of an array job sub-task (see Array jobs)
TMPDIR                 Name of a directory on the compute node's scratch filesystem

When Grid Engine assigns one of your job's tasks to a particular node, it creates a temporary work directory on that node's 1-2 TB local scratch disk. When the task assigned to that node finishes, Grid Engine removes the directory and its contents. The form of the directory name is

/scratch/[$JOB_ID].[$SGE_TASK_ID].«queue_name»
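Within a batch job script, these variables can be used directly; for example, to stage work on the node-local scratch disk (the file and program names are illustrative):

    # Copy input to the node's scratch directory and run there:
    cp $WORKDIR/input.dat $TMPDIR
    cd $TMPDIR
    ./myprogram input.dat > output.dat

    # Copy results back before the job ends, since Grid Engine
    # removes $TMPDIR when the task finishes:
    cp output.dat $WORKDIR/output.$JOB_ID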

For example, after qlogin, type

    echo $TMPDIR

to see the name of the node scratch directory for this interactive job.

/scratch/71842.1.it_css-qrsh.q

See Filesystems and Computing environment for more information about the node scratch filesystem and using environment variables.

Grid Engine uses these environment variables' values when creating the job's output files:

File name pattern         Description
[$JOB_NAME].o[$JOB_ID]    Default output filename
[$JOB_NAME].e[$JOB_ID]    Error filename (when not joined to output)
[$JOB_NAME].po[$JOB_ID]   Parallel job output filename (empty for most queues)
[$JOB_NAME].pe[$JOB_ID]   Parallel job error filename (usually empty)

For example, a job named myproject with job id 71842 would write its standard output to the file myproject.o71842.

Command options for qsub

The most commonly used qsub options fall into two categories: operational and resource-management. The operational options deal with naming the output files, mail notification of the processing steps, sequencing of a series of jobs, and establishing the UNIX environment. The resource-management options deal with the specific system resources you desire or need, such as parallel programming environments, number of processor cores, maximum CPU time, and virtual memory needed.

The table below lists qsub's common operational options.

Option / Argument      Function
-N «job_name»          Names the job «job_name». Default: the job script's full filename.
-m {b|e|a|s|n}         Specifies when e-mail notifications of the job's status should be sent (beginning, end, abort, suspend). Default: n (never).
-M «email_address»     Specifies the email address to use for notifications.
-j {y|n}               Joins (redirects) the STDERR results to STDOUT. Default: y (yes).
-o «output_file»       Directs job output (STDOUT) to «output_file». Default: see Grid Engine environment variables.
-e «error_file»        Directs job errors (STDERR) to «error_file». The file is only produced when the qsub option -j n is used.
-hold_jid «job_list»   Holds the job until the jobs named in «job_list» are completed. «job_list» may be a comma-separated list of numeric job ids or job names.
-t «task_id_range»     Used for array jobs. See Array jobs for details.

Special notes for IT clusters:

-cwd                   Default. Uses the current directory as the job's working directory.
-V                     Ignored. Generally, the login node's environment is not appropriate to pass to a compute node. Instead, define the environment variables directly in the job script.
-q «queue_name»        Not needed in most cases. Your choice of resource-management options determines the queue.

The resource-management options for qsub have two common forms:
-l «resource»=«value»
-pe «parallel_environment» «Nproc»

For example, putting the lines

#$ -l h_cpu=1:30:00
#$ -pe threads 12

in the job script tells Grid Engine to set a hard limit of 1.5 hours on the CPU time for the job, and to assign 12 slots (processor cores) to your job.

Grid Engine tries to satisfy all of the resource-management options you specify in a job script or as qsub command-line options. If there is a queue already defined that accepts jobs having that particular combination of requests, Grid Engine assigns your job to that queue.

Array jobs

An array job runs the same job script many times as a set of numbered tasks. For each task, Grid Engine sets the environment variable SGE_TASK_ID to the task's sequence number, and its value provides input to the job submission script.

The $SGE_TASK_ID is the key to making array jobs useful. Use it in your bash script, or pass it as a parameter, so your program can decide how to complete the assigned task.

For example, the $SGE_TASK_ID sequence values 2, 4, 6, …, 5000 might be passed as an initial data value to 2500 repetitions of a simulation model. Alternatively, each iteration (task) of a job might use a different data file with filenames of the form data$SGE_TASK_ID (i.e., data1, data2, data3, …, data2000).

The general form of the qsub option is:

-t start_value - stop_value : step_size

with a default step_size of 1. For these examples, the option would be:

-t 2-5000:2 and -t 1-2000
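For instance, a job script for the first example might use the task id like this sketch (the program name and its arguments are assumptions):

    #$ -N sweep
    #$ -t 2-5000:2
    #
    # Each task runs with its own SGE_TASK_ID value (2, 4, ..., 5000)
    # and passes it to the simulation as the initial data value:
    ./simulate $SGE_TASK_ID > run_$SGE_TASK_ID.out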

Additional simple how-to examples for array jobs are available.

Chaining jobs

If you have multiple jobs where you want one job to run automatically after another finishes, you can use chaining. When you chain jobs, remember to check the status of the previous job to determine whether it completed successfully. This prevents flooding the scheduler with jobs that are bound to fail. Here is a simple chaining example with three job scripts: doThing1.qs, doThing2.qs and doThing3.qs.

doThing1.qs
#$ -N doThing1
#
# If you want an email message to be sent to you when your job ultimately
# finishes, edit the -M line to have your email address and change the
# next two lines to start with #$ instead of just #
# -m eas
# -M my_address@mail.server.com
#
# Setup the environment; add vpkg_require commands after this
# line:

# Now append all of your shell commands necessary to run your program
# after this line:
 ./dotask1
doThing2.qs
#$ -N doThing2
#$ -hold_jid doThing1
#
# If you want an email message to be sent to you when your job ultimately
# finishes, edit the -M line to have your email address and change the
# next two lines to start with #$ instead of just #
# -m eas
# -M my_address@mail.server.com
#
# Setup the environment; add vpkg_require commands after this
# line:

# Now append all of your shell commands necessary to run your program
# after this line:

# Here is where you should add a test to make sure
# that dotask1 successfully completed before running
# ./dotask2
# You might check if a specific file(s) exists that you would
# expect after a successful dotask1 run, something like this
#  if [ -e dotask1.log ] 
#      then ./dotask2
#  fi
# If dotask1.log does not exist it will do nothing.
# If you don't need a test, then you would run the task.
 ./dotask2
doThing3.qs
#$ -N doThing3
#$ -hold_jid doThing2
#
# If you want an email message to be sent to you when your job ultimately
# finishes, edit the -M line to have your email address and change the
# next two lines to start with #$ instead of just #
# -m eas
# -M my_address@mail.server.com
#
# Setup the environment; add vpkg_require commands after this
# line:

# Now append all of your shell commands necessary to run your program
# after this line:
# Here is where you should add a test to make sure
# that dotask2 successfully completed before running
# ./dotask3
# You might check if a specific file(s) exists that you would
# expect after a successful dotask2 run, something like this
#  if [ -e dotask2.log ] 
#      then ./dotask3
#  fi
# If dotask2.log does not exist it will do nothing.
# If you don't need a test, then just run the task.
 ./dotask3

Now submit all three job scripts. In this example, we are using account traine in workgroup it_css on Mills.

[(it_css:traine)@mills ~]$ qsub doThing1.qs
[(it_css:traine)@mills ~]$ qsub doThing2.qs
[(it_css:traine)@mills ~]$ qsub doThing3.qs

The basic flow is doThing2 will wait until doThing1 finishes, and doThing3 will wait until doThing2 finishes. If you test for success, then doThing2 will check to make sure that doThing1 was successful before running, and doThing3 will check to make sure that doThing2 was successful before running.

You might also want doThing1 and doThing2 to execute at the same time, and only run doThing3 after they both finish. In this case you will need to change the doThing2 and doThing3 scripts and tests.

doThing2.qs
#$ -N doThing2
#
# If you want an email message to be sent to you when your job ultimately
# finishes, edit the -M line to have your email address and change the
# next two lines to start with #$ instead of just #
# -m eas
# -M my_address@mail.server.com
#
# Setup the environment; add vpkg_require commands after this
# line:

# Now append all of your shell commands necessary to run your program
# after this line:
 ./dotask2
doThing3.qs
#$ -N doThing3
#$ -hold_jid doThing1,doThing2
#
# If you want an email message to be sent to you when your job ultimately
# finishes, edit the -M line to have your email address and change the
# next two lines to start with #$ instead of just #
# -m eas
# -M my_address@mail.server.com
#
# Setup the environment; add vpkg_require commands after this
# line:

# Now append all of your shell commands necessary to run your program
# after this line:
# Here is where you should add a test to make sure
# that dotask1 and dotask2 successfully completed before running
# ./dotask3
# You might check if a specific file(s) exists that you would
# expect after a successful dotask1 and dotask2 run, something like this
#  if [ -e dotask1.log -a -e dotask2.log ];
#      then ./dotask3
#  fi
# If both files do not exist it will do nothing.
# If you don't need a test, then just run the task.
 ./dotask3

Now submit all three jobs again. This time doThing1 and doThing2 will run at the same time, and only when both have finished will doThing3 run. doThing3 will check that doThing1 and doThing2 completed successfully before running its task.
