
Differences

This shows you the differences between two versions of the page.

abstract:mills:mills [2020-10-26 12:52] – [Ask or tell the HPC community] anita
abstract:mills:mills [2023-08-21 10:50] (current) – [Getting started on Mills (Retired)] anita
Line 6: Line 6:
 For general information about the community cluster program, visit the [[itrc>community-cluster|IT Research Computing website]]. To cite the Mills cluster for grants, proposals and publications, use these [[itrc>community-cluster-templates/|HPC templates]].
  
 +The login (head) node is online to allow file transfer off Mills' filesystems.
 +
 +<note warning>
 +**Monday, August 21, 2023**: The Mills cluster was fully retired and is no longer accessible. Any data present in the /lustre/work, /archive or /home directories on Mills is no longer available.
 +
 +We must finally say goodbye to the Mills cluster. Thank you to all Mills cluster users for your cooperation and contributions to the UD research and HPC communities.
 +
 +For complete details, see [[https://sites.udel.edu/it-rci/2023/07/31/mills-retirement-on-august-21-2023/|Mills Retirement On August 21, 2023]].
 +
 +</note> 
 ===== Configuration =====
  
Line 27: Line 37:
 ==== Compute nodes ====
  
-There are many compute nodes with different configurations.  Each node may have extra memory, multi-core processors (CPUs), GPUs and/or extra local disk space.  They may have different OS versions or OS configurations, such as mounted network disks.  This document assumes all the compute nodes have the same OS and almost the same configuration.  Some nodes may have more cores, more memory or more disk.
-
-The standard UNIX on the compute nodes is configured to support just the running of your jobs, particularly parallel jobs.  For example, there are no man pages on the compute nodes.  Large components of the OS, such as X11, are only added to the environment when needed.
-
-All the multi-core CPUs and GPUs share the same memory in what may be a complicated manner.  To add more processing capability while keeping hardware expense and power requirements down, most architectures use Non-Uniform Memory Access (NUMA).  Also, the processors may share hardware, such as the FPUs (floating-point units).
-
-Commercial applications, and normally your program, will use a layer of abstraction called a //programming model//.  Consult the cluster-specific documentation for advanced techniques to take advantage of the low-level architecture.
+There are no longer compute nodes.
 ===== Storage =====
  