===== Home filesystem =====
The 13.5 TiB filesystem uses 960 GiB enterprise class SSD drives in a triple-parity RAID configuration for high reliability and availability. The filesystem is accessible to all nodes via IPoIB on the 100 Gbit/s InfiniBand network.
==== Home storage ====
Each user has 20 GB of disk storage reserved for personal use on the home filesystem. Users' home directories are in /home (e.g., ''/
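A quick way to see how much of that 20 GB reservation is in use is to ask for the size of your home directory. This is a minimal sketch assuming standard GNU tools are available on a login node:

<code bash>
# Total size of your home directory (compare against the 20 GB reservation)
du -sh "$HOME"

# Size, used, and available space of the filesystem that holds /home
df -h "$HOME"
</code>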
===== High-performance Lustre filesystem =====
Lustre is designed to use parallel I/O techniques to reduce file-access time. The Lustre filesystems in use at UD are composed of many physical disks using RAID technologies to give resilience, data integrity, and parallelism at multiple levels. There is approximately 1.1 PiB of Lustre storage available on DARWIN. It uses high-bandwidth interconnects such as Mellanox HDR100. Lustre should be used for storing input files, supporting data files, work files, and output files associated with computational tasks run on the cluster.
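Part of Lustre's parallelism comes from striping a file across multiple storage targets. As a hedged illustration, the standard Lustre ''lfs'' utility can report or request striping; the paths and the stripe count below are placeholder examples:

<code bash>
# Show how an existing file or directory is striped across Lustre storage targets
lfs getstripe /path/to/your/file

# Request that new files in a directory be striped across 4 storage targets
lfs setstripe -c 4 /path/to/your/directory
</code>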
==== Workgroup storage ====
Allocation workgroup storage is available on a [[abstract:
Each allocation will have at least 1 TiB of shared ([[abstract:
Each user in the allocation workgroup will have a ''/
Each allocation will also have a ''/
Please see [[abstract:
**Note**: A full filesystem inhibits use for everyone, preventing jobs from running.
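To help keep a shared filesystem from filling, it is useful to see which directories are consuming the most space. A minimal sketch, assuming GNU ''du'' and ''sort'' are available and using a placeholder path for the workgroup directory:

<code bash>
# First-level directories under a workgroup area, listed smallest to largest
# (replace the path with your own workgroup directory)
du -h --max-depth=1 /path/to/your/workgroup 2>/dev/null | sort -h
</code>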
===== Local filesystem =====
==== Node temporary storage ====
Each compute node has its own 2 TB local hard drive, which is needed for time-critical tasks such as managing virtual memory.
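Because this local drive is shared by the operating system and every job running on the node, it can help to check free local space before staging large temporary files. A minimal sketch, assuming the local drive is exposed under ''/tmp'' (the mount point is an assumption):

<code bash>
# Size, used, and available space of the node-local drive
df -h /tmp
</code>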
==== Workgroup ====
All of Lustre is available
The example below shows 25 TB is in use out of 954 TB of usable Lustre storage.
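A minimal sketch of how overall Lustre usage can be checked from a login node, assuming the filesystem is mounted at ''/lustre'' (the mount point is an assumption) and the Lustre ''lfs'' client utility is available:

<code bash>
# Whole-filesystem summary: size, used, and available space
df -h /lustre

# Per-target breakdown across metadata and object storage targets
lfs df -h /lustre
</code>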
==== Node ====
The node temporary storage
We strongly recommend that you refer to the node scratch by using the environment variable, ''
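A minimal sketch of using such a variable inside a batch job, assuming the variable is named ''TMPDIR'' and treating the program and destination path shown as placeholders:

<code bash>
#!/bin/bash
# Stage input into node-local scratch, run there, then copy results back.
# TMPDIR is assumed to point at a per-job directory on the node's local drive.

cp input.dat "$TMPDIR/"                 # copy input to node-local scratch
cd "$TMPDIR"                            # work on the fast local drive
./my_program input.dat > output.dat     # hypothetical program
cp output.dat /path/to/shared/storage/  # copy results back to home or Lustre
</code>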