An 8 TB permanent filesystem is provided on the login node (head node), mills.hpc.udel.edu. The filesystem's RAID-6 (double parity) configuration is accessible to the compute nodes via 1 Gb/s Ethernet and to the campus network via 10 Gb/s Ethernet. Two terabytes are allocated for users' home directories in /home. The remaining 6 TB are reserved for the system software, libraries and applications in /opt.
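For example, a standard ''df'' query from the login node shows the current size and usage of these filesystems (a minimal sketch; the reported devices and free space will vary):
<code>
df -h /home /opt
</code>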
  
==== Archive ====

Each research group has 1 TB of shared group storage on the archive filesystem (/archive). The directory is identified by the research-group identifier <<//investing_entity//>> (e.g., ''/archive/it_css''). A read-only snapshot of users' files is made several times per day on the disk. In addition, the filesystem is replicated on UD's off-campus disaster recovery site. Daily [[:abstract:mills:filesystems:filesystems#archive-snapshots|snapshots]] are user-accessible. Older files may be restored by special request.

The 60 TB permanent archive filesystem uses 3 TB enterprise class SATA drives in a triple-parity RAID configuration for high reliability and availability. The filesystem is accessible to the head node via 10 Gbit/s Ethernet and to the compute nodes via 1 Gbit/s Ethernet.
  
===== High-performance filesystem =====
  
  
===== Local filesystem =====

==== Node scratch ====

Each compute node has its own 1-2 TB local hard drive, which is needed for time-critical tasks such as managing virtual memory. The system's use of the local disk is kept as small as possible so that most of the disk is available to applications running on the node. For this purpose, a ''/scratch'' filesystem is mounted on each node.
  
===== Quotas and usage =====
Each user's home directory has a hard quota limit of 2 GB.
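To see how much of the 2 GB you are currently using, standard Linux tools such as ''du'' work from the login node (a minimal sketch):
<code>
du -sh $HOME
</code>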
  
==== Archive ====
Each group's archive directory has a quota designed to give your group 1 TB of disk space.
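To check how much of the group's 1 TB is in use, you can run a standard ''df'' query against the group directory (a sketch using the ''it_css'' example group from above; substitute your own <<//investing_entity//>>):
<code>
df -h /archive/it_css
</code>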
==== Lustre ====
  
<note warning>Please use the custom [[abstract:mills:filesystems:lustre#lustre-utilities|Lustre utilities]] to remove files on the Lustre filesystems ''/lustre/work'' and ''/lustre/scratch'', or to check disk usage on ''/lustre/scratch''.</note>
  
==== Node scratch ====
The node scratch filesystem is mounted on ''/scratch'' on each node. There is no quota, and if you exceed the physical size of the disk you will get disk-failure messages. To check your disk usage, use the ''df -h'' command **on the compute node**.

For example, the command
<code>
   ssh n017 df -h /scratch
</code>
shows 197 MB used from the total filesystem size of 793 GB.
<code>
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             793G  197M  753G   1% /scratch
</code>
This node ''n017'' has a 1 TB disk and 64 GB of memory, which requires 126 GB of swap space on the disk.

<note warning>A physical disk is installed on each node and is used for time-critical tasks such as swapping memory. The compute nodes are configured with either a 1 TB or a 2 TB disk; however, the ''/scratch'' filesystem will never span the entire disk. Large-memory nodes need more swap space.
</note>
  
We strongly recommend referring to the node scratch directory using the environment variable ''$TMPDIR'', which Grid Engine defines when you use ''qsub'' or ''qlogin''.
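For example, a job script might stage its files through ''$TMPDIR'' so that heavy temporary I/O stays on the node's local disk. The sketch below is only illustrative: ''my_application'' and ''input.dat'' are hypothetical names, and ''$SGE_O_WORKDIR'' is the standard Grid Engine variable holding the directory the job was submitted from.
<code>
#!/bin/bash
#$ -N scratch-example
#$ -cwd

# Copy the input file into the node-local scratch directory ($TMPDIR).
cp input.dat "$TMPDIR"/

# Work from node scratch so temporary files stay on the local disk.
cd "$TMPDIR"

# Hypothetical application; substitute your own command here.
my_application input.dat > output.dat

# Copy results back to permanent storage before the job ends;
# node scratch is not a permanent filesystem.
cp output.dat "$SGE_O_WORKDIR"/
</code>
Submit the script with ''qsub'' as usual; under ''qlogin'', the same ''$TMPDIR'' variable is available in your interactive shell.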
===== Recovering files =====
==== Home backups ====