<booktoc/>

Running applications

This section uses the wiki's documentation conventions.

The Grid Engine job scheduling system is used to manage and control the computing resources for all jobs submitted to a cluster. This includes load balancing, reconciling requests for memory and processor cores with the availability of those resources, suspending and restarting jobs, and managing jobs with different priorities. Grid Engine on Farber is Univa Grid Engine, but it is still commonly referred to as SGE.

The Grid Engine job scheduling system page provides an excellent overview of Grid Engine, the job scheduling system used on Farber.

In order to schedule any job (interactive or batch) on a cluster, you must first set your workgroup to define your cluster group (investing-entity) compute nodes.
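For example, a minimal sketch, assuming your investing-entity group is named it_css (substitute your own workgroup name):

workgroup -g it_css

This starts a new shell with the workgroup set, so subsequent job submissions are associated with that group's resources.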

See Scheduling Jobs and Managing Jobs for general information about getting started with scheduling and managing jobs on a cluster using Grid Engine.

Generally, your runtime environment (path, environment variables, etc.) should be the same as your compile-time environment. Usually, the best way to achieve this is to put the relevant VALET commands in shell scripts. You can reuse common sets of commands by storing them in a shell script file that can be sourced from within other shell script files.
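As a sketch, assuming a hypothetical file name project-env.sh and a hypothetical package name, you could collect the relevant VALET commands in one place:

# project-env.sh: environment shared by compile-time and runtime scripts
vpkg_require gcc

and then source that file from both your build scripts and your job scripts:

source project-env.sh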

If you are writing an executable script that does not invoke bash with the -l (login) option, and you want to include VALET commands in your script, then you should include the line:
source /etc/profile.d/valet.sh

You do not need this command when you

  1. type the commands interactively, or source the command file, or
  2. include the lines in the job script file to be submitted with qsub.
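By contrast, here is a minimal sketch of an executable script that does need the line because it is not started as a login shell (the package and program names are hypothetical):

#!/bin/bash
# Not started with bash -l, so load the VALET function definitions explicitly
source /etc/profile.d/valet.sh
# Add a hypothetical package to the runtime environment
vpkg_require openmpi
./my_program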

Each investing-entity's group (workgroup) has owner queues that allow the use of a fixed number of slots, matching the total number of cores purchased. If a job is submitted that would use more than the allowed slots, the job will wait until enough slots are made available by completing jobs. There is no time limit imposed on owner-queue jobs. All users can see running and waiting jobs, which allows groups to work out policies for managing their purchased nodes.
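For example, once your workgroup is set, a sketch of submitting a job script to your owner queue and checking its state (the script name myjob.qs is hypothetical):

qsub myjob.qs
qstat -u $USER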

The standby queues are available for projects requiring more slots than purchased, or to take advantage of idle nodes when a job would otherwise have to wait in the owner queue. Because other workgroups' nodes will be used, standby jobs have a time limit, and users are limited to a total number of cores across all of their standby jobs. Generally, users can use 10 nodes for an 8-hour standby job or 40 nodes for a 4-hour standby job.
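As a sketch, a standby job is requested with a resource flag at submission time; the resource name standby=1 follows UD's documented Grid Engine convention, but confirm the exact form for your cluster:

qsub -l standby=1 myjob.qs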

A spillover queue may be available for the case where a job is submitted to the owner queue while standby jobs are consuming the needed slots. Instead of waiting, the job will be sent to the spillover queue to start on a similar idle node.

A spare queue may be configured on a cluster to make spare nodes available to users by special request.

Each cluster is configured with a particular job scheduling system. General documentation is available for all job scheduling systems currently in use.
