====== Revisions to Slurm Configuration v2.0.0 on Caviness ======

This document summarizes alterations to the Slurm job scheduler configuration on the Caviness cluster.

===== Issues =====

==== Users submitting large numbers of jobs ====

There are currently no limits on the number of jobs each user can submit on Caviness. Every pending job adds to the work the scheduler must perform on each scheduling cycle:

  * Calculation of the owning user's fair-share priority (based on decaying usage history)
  * Calculation of overall job priority (fair-share, wait time, QOS, partition, and job size)
  * Sorting of all jobs in the queue based on priority
  * From the head of the queue up:
    * Search for free resources matching the requested resources
    * Start execution if the job is eligible and resources are free

The fair-share calculations require extensive queries against the job database, and locating free resources is a complex operation. The more jobs present in the queue, the longer each scheduling cycle takes.

Many Caviness users are accustomed to submitting a job and immediately seeing (via ''squeue'', for example) that it has begun executing. When the queue is filled with a very large number of pending jobs, scheduling cycles take longer and jobs no longer start as promptly.

One reason the Slurm queue on Caviness can see degraded scheduling efficiency when filled with too many jobs relates to the ordering of the jobs, and thus to the job priority. Each job's priority is a weighted sum of several factors:

^Factor^Multiplier^Notes^
|QOS override (priority-access)|20000|standard, …|
|wait time (age)|8000|longest wait time in queue = 1.0|
|fair-share|4000|see ''sshare''|
|partition id|2000|1.0 for all partitions|
|job resource size|1|largest resource request = 1.0|

Next to priority access, wait time is the largest factor: the longer a job sits in the queue, the larger its wait-time contribution grows relative to the other factors.

Taken together, these factors allow a single user to submit thousands of jobs (even with a very small share of purchased cluster resources) that quickly sort to the head of the pending queue due to their wait time. The weight on wait time then begins to prioritize those jobs over jobs submitted by users who have not been using the cluster.
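
For reference, the per-factor contributions to each pending job's priority can be inspected with Slurm's ''sprio'' utility:

<code bash>
$ # Display the configured priority factor weights:
$ sprio -w

$ # Display the weighted priority components (age, fair-share, job
$ # size, partition, QOS) for every pending job:
$ sprio -l
</code>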

===== Solutions =====

==== Job submission limits ====

On many HPC systems, per-user limits are enacted to restrict how many pending jobs can be present in the queue at any one time; once the limit is reached, further submissions are rejected.

It would be preferable to avoid enacting such limits on Caviness.
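
For context, such a limit is usually enforced through a QOS setting. A minimal sketch follows; the QOS name and the limit of 500 jobs are purely hypothetical, and this is not a change being proposed here:

<code bash>
$ # Hypothetical example only:  cap each user at 500 jobs in the
$ # "normal" QOS; sbatch would reject submissions beyond the limit
$ # until existing jobs complete or are cancelled.
$ sudo sacctmgr modify qos normal set MaxSubmitJobsPerUser=500
</code>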

==== Altered priority weights ====

The dominance of wait time in priority calculations is probably the factor contributing most greatly to this problem.

The Slurm documentation also points out that the priority factor weights should be of large enough magnitude to allow a wide range of values to be distinguished within each factor.
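
For reference, the factors in the table above are weighted via the ''PriorityWeightQOS'' (20000), ''PriorityWeightAge'' (8000), ''PriorityWeightFairshare'' (4000), ''PriorityWeightPartition'' (2000), and ''PriorityWeightJobSize'' (1) parameters in ''slurm.conf''; once revised weights are in place, the active values can be confirmed with:

<code bash>
$ # List the priority weight settings the scheduler is actually using:
$ scontrol show config | grep -i priorityweight
</code>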

=== Addition of GRES ===

New Generic RESource types will be added to represent the GPU devices present in Generation 2 nodes.

^GRES name^Type^Description^
|gpu|v100|NVIDIA Volta (V100) GPU|
|gpu|t4|NVIDIA T4 GPU|
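
With these GRES defined, jobs can request a specific GPU type and count at submission time, for example:

<code bash>
$ # Request one V100 GPU on a Gen2 node:
$ sbatch --gres=gpu:v100:1 …

$ # Request one T4 GPU instead:
$ sbatch --gres=gpu:t4:1 …
</code>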

=== Addition of Nodes ===

^Kind^Features^Nodes^GRES^
|Baseline (2 x 20C, 192 GB)|Gen2, …| … | |
|Large memory (2 x 20C, 384 GB)|Gen2, …| … | |
|X-large memory (2 x 20C, 768 GB)|Gen2, …| … | |
|XX-large memory (2 x 20C, 1024 GB)|Gen2, …| … | |
|Low-end GPU (2 x 20C, 192 GB, 1 x T4)|Gen2, …| … |gpu:t4:1|
|Low-end GPU (2 x 20C, 384 GB, 1 x T4)|Gen2, …| … |gpu:t4:1|
|Low-end GPU (2 x 20C, 768 GB, 1 x T4)|Gen2, …| … |gpu:t4:1|
|All-purpose GPU (2 x 20C, 384 GB, 2 x V100)|Gen2, …| … |gpu:v100:2|
|All-purpose GPU (2 x 20C, 768 GB, 2 x V100)|Gen2, …| … |gpu:v100:2|

The Features column is a comma-separated list of tags that a job can match against. For example, to restrict a job to nodes with the ''Gold-6230'' processor feature:

<code bash>
$ sbatch --constraint=Gold-6230 …
</code>

All previous-generation nodes' feature lists will have ''Gen1'' added so that jobs can be directed explicitly at first-generation hardware:

<code bash>
$ sbatch --constraint=Gen1 …
</code>

=== Changes to Workgroup Accounts, QOS, and Shares ===

  * Any existing workgroups with additional purchased resource capacity will have their QOS updated to reflect the aggregate core count, memory capacity, and GPU count.
  * New workgroups will have an appropriate Slurm account created and populated with sponsored users.
  * Slurm cluster shares (for fair-share scheduling) are proportional to each workgroup's fraction of the total purchased resources, so shares will be recalculated to include the Generation 2 purchases (a sketch of the corresponding ''sacctmgr'' updates follows this list).
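
A minimal sketch of what these updates look like for an existing workgroup; the account/QOS name ''some_wg'', the share value, and the TRES quantities are placeholders rather than actual figures:

<code bash>
$ # Hypothetical example:  adjust an existing workgroup's fair-share
$ # weight to reflect its new fraction of purchased resources…
$ sudo sacctmgr modify account name=some_wg set fairshare=120

$ # …and raise its QOS caps to the aggregate core count, memory (MB),
$ # and GPU count it now owns:
$ sudo sacctmgr modify qos name=some_wg set GrpTRES=cpu=400,mem=1536000,gres/gpu=2
</code>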

=== Changes to Partitions ===

The **standard** partition node list will be augmented to:

<code>
Nodes=r[00-01]n[01-55],…
</code>

Various workgroups' partitions will likewise be updated to reflect the Generation 2 nodes they purchased.

=== Network Topology Changes ===

The addition of the new rack brings two new OPA switches into the high-speed network topology; the Slurm ''topology.conf'' will be updated accordingly.
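
Once ''topology.conf'' has been extended for the new switches, the topology the scheduler is actually using can be reviewed with:

<code bash>
$ # Print the switch/node hierarchy Slurm has loaded:
$ scontrol show topology
</code>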

==== Changes to auto_tmpdir ====

The directory removal error message has been changed to an internal informational message that users will not see. Additional changes were made to the plugin to address the race condition itself.

An additional option has been added to the plugin to request shared per-job (and per-step) temporary directories on the Lustre file system:

<code>
--use-shared-tmpdir     Create temporary directories on shared (Lustre)
                        storage (overridden by --tmpdir).  Use
                        "--use-shared-tmpdir=per-node" to create
                        unique sub-directories for each node allocated to the job
                        (e.g. …).
</code>

^Variant^Node^TMPDIR^
|job, no per-node| … | … |
|:::| … | … |
|step, no per-node| … | … |
|:::| … | … |
|job, per-node| … | … |
|:::| … | … |
|step, per-node| … | … |
|:::| … | … |

===== Implementation =====

The auto_tmpdir plugin has already been compiled and debugged/tested.

To make all of these changes atomic (in a sense), all nodes will be put in the **DRAIN** state to prohibit additional jobs' being scheduled while jobs already running are left alone.

Next, the Slurm accounting database must be updated with the following (a sketch of the corresponding commands appears after this list):

  * Changes to cluster share for existing workgroup accounts
  * Changes to resource levels for existing workgroup QOS's that purchased Gen2 resources
  * Addition of new workgroup accounts
  * Addition of new workgroup QOS's
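
A minimal sketch of the account- and QOS-creation steps, assuming a hypothetical new workgroup ''new_wg'' and user ''jdoe'' (all names and values are placeholders); the share and QOS modifications for existing workgroups follow the pattern shown earlier:

<code bash>
$ # Hypothetical example:  create the workgroup account, its QOS,
$ # and attach a sponsored user to the account.
$ sudo sacctmgr add account new_wg Description="new workgroup" Organization=ud Fairshare=40
$ sudo sacctmgr add qos new_wg
$ sudo sacctmgr modify qos name=new_wg set GrpTRES=cpu=80,mem=384000,gres/gpu=1
$ sudo sacctmgr add user jdoe Account=new_wg
</code>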

Adding new nodes and partitions to the Slurm configuration requires the scheduler (''slurmctld'') to be restarted.

The execution daemons (''slurmd'') on the compute nodes must likewise receive the updated configuration files before they can rejoin the reconfigured scheduler.

The sequence of operations looks something like this (on the primary scheduler node):

<code bash>
$ scontrol update nodename=r[00-01]n[00-56],… state=DRAIN reason='…'
$ # …update existing workgroups' cluster shares…
$ # …update existing workgroups' QOS resource levels…
$ # …add new workgroups' accounts…
$ # …add new workgroups' QOS's…
</code>

The updated Slurm configuration must be pushed to all compute nodes:

<code bash>
$ wwsh provision set r03g\* --fileadd=gen2-gpu-cgroup.conf
$ wwsh provision set r03g[00-04] r03g[07-08] --fileadd=gen2-gpu-t4-gres.conf
$ wwsh provision set r03g[05-06] --fileadd=gen2-gpu-v100-gres.conf
$ wwsh file sync slurm-nodes.conf slurm-partitions.conf topology.conf
$ pdsh -w r[00-01]n[00-56],… …
193
$ # …wait…
$ pdsh -w r[00-01]n[00-56],… …
115
$ # …repeat until…
$ pdsh -w r[00-01]n[00-56],… …
0
</code>

Now that all nodes have the correct configuration files, the scheduler configuration is copied into place and both instances are restarted:

<code bash>
$ sudo cp /… /etc/slurm/
$ sudo cp /… /etc/slurm/
$ sudo cp /… /etc/slurm/
$ sudo cp /… /etc/slurm/
$ sudo rsync -arv /etc/slurm/ root@r02mgmt01:/etc/slurm/
$ sudo systemctl restart slurmctld
$ sudo ssh r02mgmt01 systemctl restart slurmctld
</code>
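
Once both ''slurmctld'' instances are back up, a quick sanity check (illustrative) confirms the controllers are responding and the new nodes and partitions are visible:

<code bash>
$ # Verify the primary and backup controllers respond:
$ scontrol ping

$ # Summarize partitions and node states:
$ sinfo --summarize
</code>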

Return the Gen1 nodes to service:

<code bash>
$ scontrol reconfigure
$ scontrol update nodename=r[00-01]n[00-56],… state=RESUME
</code>

And the Gen2 nodes can have their Slurm service started:

<code bash>
$ pdsh -w r03n[00-57],… systemctl start slurmd
</code>
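
The new nodes should then register with the scheduler; their state can be watched until they leave the drained/unknown states:

<code bash>
$ # Show each Gen2 node and its current state (node ranges as above):
$ sinfo -N -n r03n[00-57],r03g[00-08] -o '%N %T'
</code>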

===== Impact =====

No downtime is expected to be required.

===== Timeline =====

^Date^Time^Goal/…^
|2019-09-09| |Authoring of this document|
|2019-10-23|11:…| … |