abstract:mills:filesystems:filesystems

An 8 TB permanent filesystem is provided on the login node (head node), mills.hpc.udel.edu. The filesystem's RAID-6 (double-parity) configuration is accessible to the compute nodes via 1 Gb/s Ethernet and to the campus network via 10 Gb/s Ethernet. Two terabytes are allocated for users' home directories in ''/home''; the remaining 6 TB are reserved for system software, libraries, and applications in ''/opt''.
  
==== Archive ====

Each research group has 1 TB of shared group storage on the archive filesystem (''/archive''). The directory is identified by the research-group identifier <<//investing_entity//>> (e.g., ''/archive/it_css''). A read-only snapshot of users' files is made several times per day on disk. In addition, the filesystem is replicated at UD's off-campus disaster-recovery site. Daily [[:abstract:mills:filesystems:filesystems#archive-snapshots|snapshots]] are user-accessible; older files may be restored by special request.

The 60 TB permanent archive filesystem uses 3 TB enterprise-class SATA drives in a triple-parity RAID configuration for high reliability and availability. It is accessible to the head node via 10 Gb/s Ethernet and to the compute nodes via 1 Gb/s Ethernet.
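Restoring from a read-only snapshot is normally just a copy out of the snapshot tree. The sketch below is a self-contained demo under stated assumptions: the snapshot mount point (''SNAP_ROOT'') and the ''daily.0'' naming are hypothetical stand-ins created under ''/tmp'' so the example runs anywhere — consult the cluster documentation for the real snapshot path on ''/archive''.

```shell
# Hypothetical sketch of restoring a file from a read-only snapshot.
# SNAP_ROOT and the daily.0 snapshot name are assumptions for this demo;
# the real snapshot location on /archive may differ.
SNAP_ROOT="${SNAP_ROOT:-/tmp/demo_snapshots_$$}"

# --- setup for this self-contained demo: fake one daily snapshot ---
mkdir -p "$SNAP_ROOT/daily.0"
echo "yesterday's data" > "$SNAP_ROOT/daily.0/results.csv"

# Restoring is simply a copy out of the (read-only) snapshot tree:
cp "$SNAP_ROOT/daily.0/results.csv" "/tmp/results_restored_$$.csv"
```

Because snapshots are read-only, an accidental ''rm'' in the live directory never touches the snapshot copy; files beyond the daily snapshots' retention window are what require the special-request restore mentioned above.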
  
 ===== High-performance filesystem =====
  
  
===== Local filesystem =====

==== Node scratch ====

Each compute node has its own 1-2 TB local hard drive, part of which is needed for time-critical system tasks such as managing virtual memory. System usage of the local disk is kept as small as possible so that most of it remains available to applications running on the node; for this purpose, a ''/scratch'' filesystem is mounted on each node.
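A common pattern for node scratch is to stage a job's files onto the local disk, compute there, copy results back to permanent storage, and clean up. A minimal sketch, with assumptions noted in the comments — ''SCRATCH_BASE'' defaults to ''/tmp'' so the sketch runs anywhere, but on a Mills compute node it would be ''/scratch'', and the file names are illustrative:

```shell
# Sketch: stage job files on node-local scratch, then clean up.
# SCRATCH_BASE is /tmp here so the sketch runs anywhere; on a Mills
# compute node it would be /scratch. File names are illustrative.
USER="${USER:-$(id -un)}"
SCRATCH_BASE="${SCRATCH_BASE:-/tmp}"
WORKDIR="$SCRATCH_BASE/$USER/demo_$$"    # $$ stands in for a scheduler job ID

mkdir -p "$WORKDIR"
echo "sample input" > "$WORKDIR/input.dat"          # stage input data

# ... the real computation would run here, entirely inside $WORKDIR ...
tr 'a-z' 'A-Z' < "$WORKDIR/input.dat" > "$WORKDIR/results.out"

# Copy results back to permanent storage, then free the node's local disk
RESULT="${RESULT_DIR:-/tmp}/results_$$.out"
cp "$WORKDIR/results.out" "$RESULT"
rm -rf "$WORKDIR"
```

Working in ''/scratch'' keeps heavy I/O off the shared filesystems; the final ''rm -rf'' matters because node-local scratch is not cleaned up for you between jobs on all systems.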
  
 ===== Quotas and usage =====
 Each user's home directory has a hard quota limit of 2 GB.
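Because the limit is hard, writes fail outright once it is reached, so it is worth checking usage before that happens. A quick sketch using plain ''du'' — the authoritative number is whatever the system's own ''quota'' command (if installed) reports, since that is what the filesystem enforces:

```shell
# Compare current home-directory usage with the 2 GB hard quota.
# Plain `du` is an approximation; the enforced limit is what the
# system's `quota` command reports.
USAGE_KB=$(du -sk "$HOME" 2>/dev/null | cut -f1)   # usage in kilobytes
LIMIT_KB=$((2 * 1024 * 1024))                      # 2 GB expressed in KB

if [ "${USAGE_KB:-0}" -ge "$LIMIT_KB" ]; then
    echo "home directory is at or over its ${LIMIT_KB} KB quota"
else
    echo "using ${USAGE_KB:-0} of ${LIMIT_KB} KB"
fi
```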
  
==== Archive ====
Each group's work directory has a quota that allots the group 1 TB of disk space.
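Since the archive quota is shared by the whole group, it helps to see both the overall headroom and which subtrees are largest. A sketch with ''df'' and ''du'' — ''GROUP_DIR'' defaults to ''/tmp'' so it runs anywhere; on Mills it would be the group's own directory, e.g. the ''/archive/it_css'' example from the text:

```shell
# Inspect usage of the group's 1 TB archive allocation.
# GROUP_DIR is /tmp here so the sketch runs anywhere; on Mills,
# substitute the group's directory, e.g. /archive/it_css.
GROUP_DIR="${GROUP_DIR:-/tmp}"
df -h "$GROUP_DIR" | tail -1                           # filesystem-level headroom
du -sh "$GROUP_DIR"/* 2>/dev/null | sort -rh | head -5 # five largest subtrees
```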
 ==== Lustre ====
  
  • abstract/mills/filesystems/filesystems.1626373712.txt.gz
  • Last modified: 2021-07-15 14:28
  • by anita