====== Revisions to Slurm Configuration v1.1.5 on Caviness ======

This document summarizes alterations to the Slurm job scheduler configuration on the Caviness cluster.

===== Issues =====

==== Addition of Generation 2 nodes to cluster ====

At the end of June 2019, the first addition of resources to the Caviness cluster was purchased.  The purchase adds:

  * 70 new nodes of varying specification
  * 2 new kinds of GPU (V100 and T4)
  * Several new stakeholder workgroups

Beyond simply booting the new nodes, the Slurm configuration must be adjusted to account for the new hardware, the new workgroups, and changes to the resource limits of existing workgroups that purchased Generation 2 nodes.
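
Although the exact Generation 2 node specifications and workgroup names are not reproduced in this document, the change amounts to new node, partition, and GRES entries in the Slurm configuration files.  The snippet below is only a sketch of the form such entries take; all host names, counts, memory sizes, and workgroup names are placeholders.

<code>
# slurm.conf (illustrative values only)
# New Generation 2 compute and GPU nodes
NodeName=r03n[00-49] Sockets=2 CoresPerSocket=20 ThreadsPerCore=1 RealMemory=191000 Feature=Gen2
NodeName=r03g[00-03] Sockets=2 CoresPerSocket=20 ThreadsPerCore=1 RealMemory=765000 Gres=gpu:v100:2 Feature=Gen2
NodeName=r03g[04-05] Sockets=2 CoresPerSocket=20 ThreadsPerCore=1 RealMemory=191000 Gres=gpu:t4:1 Feature=Gen2

# Partition for a hypothetical new stakeholder workgroup
PartitionName=new_workgroup Nodes=r03n[00-49],r03g[00-05] MaxTime=7-00:00:00 State=UP

# The new GPU types also need matching gres.conf entries on the GPU nodes,
# e.g.:  Name=gpu Type=v100 File=/dev/nvidia[0-1]
</code>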

==== Minor changes to automatic TMPDIR plugin ====

Our Slurm [[https://github.com/jtfrey/auto_tmpdir|auto_tmpdir]] plugin automatically manages per-job (and per-step) temporary file storage (see the example job script after this list):

  - When a job starts, a temporary directory is created and ''TMPDIR'' is set accordingly in the job's environment
  - Optionally, when a job step starts, a temporary directory within the job directory is created and ''TMPDIR'' is set accordingly in the step's environment
  - As steps complete, their temporary directories are removed
  - When the job completes, its temporary directory is removed
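
As an illustration, a minimal batch script relying on this behavior might look like the following.  Only the ''TMPDIR'' handling reflects the plugin behavior described above; the resource requests, program, and file names are placeholders.

<code bash>
#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --time=01:00:00

# In the batch script, TMPDIR already points at the per-job directory
# created by auto_tmpdir (e.g. /tmp/job_<jobid>); no mkdir or cleanup
# is needed here.
echo "Job TMPDIR is $TMPDIR"
cp input.dat "$TMPDIR/"

# Processes launched by srun see TMPDIR set to a per-step sub-directory
# (e.g. /tmp/job_<jobid>/step_0) unless per-step directories are disabled.
# The argument below is expanded by the batch shell, so it still names
# the job-level copy of the file.
srun ./my_program "$TMPDIR/input.dat"
</code>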

The plugin offers options to:

  - prohibit removal of the temporary directories
  - create per-job directories in a directory other than ''/tmp''
  - prohibit the creation of per-step temporary directories (steps inherit the job's temporary directory)

Normally the plugin creates directories on local scratch storage (''/tmp''), so the temporary files on one node are not visible to the other nodes participating in the job.  Placing ''TMPDIR'' on Lustre would mean all nodes participating in the job could see the same temporary files.  Users could leverage the existing ''--tmpdir=<path>'' flag with their own arbitrary path on ''/lustre/scratch'', but that would spread such files across the file system.  A flag that selects an IT-specified path would keep all shared ''TMPDIR'' storage collocated on the file system.  This also opens the possibility of having that directory (and all content under it) make use of an [[http://wiki.lustre.org/Creating_and_Managing_OST_Pools|OST pool]] backed by faster media (SSD, NVMe) in the future.
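
For example, a user can already obtain a shared, job-wide temporary directory by pointing the existing ''--tmpdir'' flag at a directory of their own on ''/lustre/scratch''; the path and script name below are placeholders, and the target directory is assumed to exist:

<code bash>
# Place the per-job TMPDIR under the user's own Lustre scratch area
# instead of node-local /tmp (path is illustrative only):
sbatch --tmpdir=/lustre/scratch/$USER/tmp myjob.qs
</code>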

Some users have seen the following message in a job output file and assumed their job failed:

<code>
auto_tmpdir: remote: failed stat check of <tmpdir> (uid = <uid#>, st_mode = <perms>, errno = <int-code>)
</code>

Slurm was reporting this as an error when, in reality, the message was the result of a race condition whereby the job's temporary directory was removed //before// a job step had completed and removed its own temporary directory (which is inside the job's temporary directory, e.g. ''/tmp/job_34523/step_1'').  While it is logically an error in the context of the job scheduler, it is not an error from the point of view of the job or the user who submitted the job.

===== Solutions =====

==== Configuration changes ====

==== Changes to auto_tmpdir ====

The directory removal error message has been changed to an internal informational message that users will not see.  Additional changes were made to the plugin to address the race condition itself.

A new option has been added to the plugin to request shared per-job (and per-step) temporary directories on the Lustre file system:

<code>
--use-shared-tmpdir     Create temporary directories on shared storage (overridden
                        by --tmpdir).  Use "--use-shared-tmpdir=per-node" to create
                        unique sub-directories for each node allocated to the job
                        (e.g. <base>/job_<jobid>/<nodename>).
</code>

The resulting ''TMPDIR'' paths are summarized below for a hypothetical job 12345 running on nodes ''r00n00'' and ''r00n01'':

^Variant^Node^TMPDIR^
|job, no per-node|''r00n00''|''/lustre/scratch/slurm/job_12345''|
|:::|''r00n01''|''/lustre/scratch/slurm/job_12345''|
|step, no per-node|''r00n00''|''/lustre/scratch/slurm/job_12345/step_0''|
|:::|''r00n01''|''/lustre/scratch/slurm/job_12345/step_0''|
|job, per-node|''r00n00''|''/lustre/scratch/slurm/job_12345/r00n00''|
|:::|''r00n01''|''/lustre/scratch/slurm/job_12345/r00n01''|
|step, per-node|''r00n00''|''/lustre/scratch/slurm/job_12345/r00n00/step_0''|
|:::|''r00n01''|''/lustre/scratch/slurm/job_12345/r00n01/step_0''|
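
Usage is a single flag at submission time; the sketch below uses a placeholder script name and the ''/lustre/scratch/slurm'' base path shown in the table above:

<code bash>
# Shared TMPDIR on Lustre, common to all nodes in the job:
sbatch --use-shared-tmpdir myjob.qs

# Per-node sub-directories under the shared base path,
# e.g. /lustre/scratch/slurm/job_<jobid>/<nodename>:
sbatch --use-shared-tmpdir=per-node myjob.qs
</code>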

===== Implementation =====

The addition of new partitions, nodes, and network topology information to the Slurm configuration should not require a full restart of all daemons.
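
For reference, the usual Slurm mechanism for re-reading updated configuration files without restarting every daemon is ''scontrol reconfigure''; the commands below are a sketch and may not match the exact rollout procedure used on Caviness.

<code bash>
# After the updated slurm.conf, topology.conf, and gres.conf have been
# distributed to all nodes, ask the running daemons to re-read them:
scontrol reconfigure

# Confirm that the new nodes and partitions are visible to the scheduler:
sinfo --Node --long
scontrol show partition
</code>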

===== Impact =====

No downtime is expected to be required.

===== Timeline =====

^Date ^Time ^Goal/Description ^
|2019-05-22| |Authoring of this document|
| | |Changes made|