  
Part of the Generation 2 addition to the Caviness cluster was:
  * (2) OST pools, 12 x 12 TB HDDs
  * (1) MDT pool, 12 x 1.6 TB SSDs
The previous components of the Lustre file system were:
  * (4) OSTs, each 65 TB in size
  * (1) MDT, 4 TB in size
  
Bringing the new capacity online will require downtime, primarily because the existing MDT and OST usage levels are so high.  Every directory currently present on the Lustre file system **only** makes use of the existing MDT (mdt0).  Adding a single 16 TB mdt1 to the file system does not effect any change in where metadata is being stored.  //Metadata striping// only takes effect on Lustre directories that are explicitly changed to use both mdt0 and mdt1.  Even so, every file and directory is mapped to one of the MDTs based on its name((The filename is hashed using a 64-bit FNV-1 function, and the hash modulo the number of MDTs (2 in this example) provides the MDT index.)).
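
As an illustration (the directory path here is a placeholder, not taken from the original notes), inspecting and explicitly changing a directory's MDT layout from a Lustre client looks roughly like this; creating MDT-striped directories may require administrative privileges depending on server settings:
<code bash>
# Directories created before the expansion report only MDT index 0
$ lfs getdirstripe /lustre/scratch/some-directory

# Create a new directory whose metadata is striped across two MDTs
# (-c sets the number of MDTs to stripe the new directory across)
$ lfs mkdir -c 2 /lustre/scratch/some-directory/striped-subdir
$ lfs getdirstripe /lustre/scratch/some-directory/striped-subdir
</code>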
  
==== MDT Configuration ====
  
The filename hashing presents a major issue when growing a Lustre file system's metadata capacity:  with a directory striped across two MDTs, nominally 50% of new files will map to mdt0.  Thus, mdt0 will reach capacity well ahead of mdt1, but 50% of filenames will continue to map to mdt0.  These files cannot be created on mdt1: doing so would require metadata regarding where the metadata was stored and obviate the hashing in the first place!  As a consequence, once an MDT that is part of a metadata stripe reaches 100% capacity, a fraction of new files will fail to be created.
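
A toy sketch of that mapping (illustrative only, not Lustre's actual implementation): hash the filename with 64-bit FNV-1, then take the hash modulo the number of MDTs.
<code bash>
# Map a filename to an MDT index as described in the footnote above:
# 64-bit FNV-1 hash of the name, modulo the MDT count.
fnv1_mdt_index() {
    local name="$1" mdt_count="$2"
    # FNV-1 64-bit offset basis 0xcbf29ce484222325, written as its signed
    # 64-bit value because bash arithmetic is signed
    local hash=-3750763034362895579
    local prime=1099511628211
    local i ch
    for (( i = 0; i < ${#name}; i++ )); do
        printf -v ch '%d' "'${name:i:1}"   # byte value of the character
        hash=$(( hash * prime ))           # FNV-1 multiplies first (wraps at 64 bits)...
        hash=$(( hash ^ ch ))              # ...then XORs in the next byte
    done
    # clear the sign bit so the modulus is non-negative (exact for 2 MDTs)
    echo $(( (hash & 0x7fffffffffffffff) % mdt_count ))
}

# With 2 MDTs, roughly half of arbitrary names land on index 0 (mdt0)
# and half on index 1 (mdt1)
fnv1_mdt_index "job-output-000123.dat" 2
</code>
The hash has no knowledge of how full each MDT is, which is exactly why the imbalance described above arises.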
  
Given this information, design tenets for effective use of multiple Lustre MDTs are as follows:
  * Each MDT should be of approximately the same capacity to promote balanced growth
  * The filename hash function must be well-designed (to provide a balanced distribution of hash values)
The second requirement is outside our ability to control (hopefully the Lustre developers did a good job).  The first requirement is by definition not met on Caviness since mdt0 is close to full and the new MDT(s) will be empty.
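
For context, per-target usage is easy to inspect from any client with ''lfs df''; the inode report is the relevant one for judging how close mdt0 is to full:
<code bash>
# Space usage per MDT and OST
$ lfs df -h /lustre/scratch

# Inode (metadata) usage per MDT and OST
$ lfs df -i /lustre/scratch
</code>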
  
The Generation 1 Lustre metadata was configured with a single MDT serviced by a pair of MDS nodes:
The three disks in light blue are //parity data// (for redundancy) and the one disk in light orange is a hot spare.  All 12 disks are 400 GB SSDs; the 8 data disks yield 8 x 400 GB = 3200 GB of usable capacity.  The green connecting line leads to the primary server for the MDT, and the red connects to the failover server.
  
With the hardware added to Generation 2 and the balanced design tenets outlined above, the 16 TB of new metadata storage will be organized as:
  
{{ :technical:generic:caviness_-gen2_mds_mdt.svg?width=450 |Caviness, Generation 2 Lustre metadata}}
  - Finally, all directories under ''/lustre/scratch/altroot'' will be moved back to being under ''/lustre/scratch'' as before.
With the metadata of the new copies being striped across all MDTs, and the Lustre metadata subsystem spreading the copies across the new and old OSTs, the net effect will be to rebalance MDT and OST usage across all devices.
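
A minimal sketch of that copy-and-move sequence for a single directory (the directory name is a placeholder, and the actual maintenance procedure operates on every directory rather than just one):
<code bash>
# Create the alternate root with its metadata striped across both MDTs
$ lfs mkdir -c 2 /lustre/scratch/altroot

# Copy one existing directory tree under the striped root; the copy's metadata
# spreads over both MDTs and its file data over all OSTs
$ rsync -a /lustre/scratch/some-directory/ /lustre/scratch/altroot/some-directory/

# After the copy is confirmed, remove the original and move the copy back into place
$ rm -rf /lustre/scratch/some-directory
$ mv /lustre/scratch/altroot/some-directory /lustre/scratch/some-directory
</code>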

===== Testing =====

All aspects of this workflow were tested using VirtualBox on a Mac laptop.  A CentOS 7 VM (of the same version as is in use on Caviness) was provisioned with Lustre 2.10.3 patchless server kernel modules installed.  This VM was diff-cloned to create three additional VMs, for a total of four server VMs: mds0, mds1, oss0, and oss1.

The four VMs each had a virtual NIC configured on a named internal network (''lustre-net''), and IP addresses were assigned manually in the OS.  Connectivity between the four VMs via that network was confirmed.  LNET was configured manually after boot on each node:
<code bash>
[mds0 ~]$ modprobe lnet
[mds0 ~]$ lnetctl lnet configure --all
</code>

The following VDIs were created:
  * 50 GB - mgt
  * 250 GB - mdt0, mdt1
  * 1000 GB - ost0, ost1
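
For reference, the virtual disks can be created and attached from the VirtualBox host along these lines (the storage controller name and port numbers are assumptions, not taken from the original notes):
<code bash>
# Create a 50 GB VDI for the mgt (sizes are given in MB)
[host ~]$ VBoxManage createmedium disk --filename mgt.vdi --size 51200 --format VDI

# Attach it to the mds0 VM's SATA controller
[host ~]$ VBoxManage storageattach mds0 --storagectl "SATA" \
    --port 1 --device 0 --type hdd --medium mgt.vdi
</code>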

The mgt and mdt0 VDIs were attached to mds0 and formatted:
<code bash>
[mds0 ~]$ mkfs.lustre --mgs --reformat \
    --servicenode=mds0@tcp --servicenode=mds1@tcp \
    --backfstype=ldiskfs \
    /dev/sdb
[mds0 ~]$ mkfs.lustre --mdt --reformat --index=0 \
    --mgsnode=mds0@tcp --mgsnode=mds1@tcp \
    --servicenode=mds0@tcp --servicenode=mds1@tcp \
    --backfstype=ldiskfs --fsname=demo \
    /dev/sdc
</code>
The ost0 VDI was attached to oss0 and formatted:
<code bash>
[oss0 ~]$ mkfs.lustre --ost --reformat --index=0 \
    --mgsnode=mds0@tcp --mgsnode=mds1@tcp \
    --servicenode=oss0@tcp --servicenode=oss1@tcp \
    --backfstype=ldiskfs --fsname=demo \
    /dev/sdb
</code>
The mgt and mdt0 were brought online:
<code bash>
[mds0 ~]$ mkdir -p /lustre/mgt /lustre/mdt{0,1}
[mds0 ~]$ mount -t lustre /dev/sdb /lustre/mgt
[mds0 ~]$ mount -t lustre /dev/sdc /lustre/mdt0
</code>
Finally, ost0 was brought online:
<code bash>
[oss0 ~]$ mkdir -p /lustre/ost{0,1}
[oss0 ~]$ mount -t lustre /dev/sdb /lustre/ost0
</code>
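
At this point the server-side state can be sanity-checked with ''lctl dl'', which lists the local Lustre devices and their status:
<code bash>
[mds0 ~]$ lctl dl     # should show the MGS and the demo-MDT0000 target
[oss0 ~]$ lctl dl     # should show the demo-OST0000 target
</code>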

==== Client Setup ====

Another VM was created with the same version of CentOS 7 and the Lustre 2.10.3 client modules.  The VM also had a virtual NIC created as part of the named internal network (''lustre-net'') and an IP address assigned manually within the OS.  Connectivity to the four Lustre VMs was confirmed and LNET configured manually as above.

The "demo" Lustre file system was mounted on the client:
<code bash>
[client ~]$ mkdir /demo
[client ~]$ mount -t lustre mds0@tcp:mds1@tcp:/demo /demo
</code>

At this point, some tests were performed in order to fill the metadata to approximately 70% of capacity.
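
The exact filler commands were not recorded; something along these lines is sufficient, since empty files consume MDT inodes without using any OST capacity (the directory name and file count are arbitrary):
<code bash>
[client ~]$ mkdir /demo/filler
[client ~]$ for i in $(seq 1 200000); do touch /demo/filler/f${i}; done

# Inode usage per MDT/OST; repeat the loop until the MDT is ~70% full
[client ~]$ lfs df -i /demo
</code>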

==== Addition of MDT ====

The new MDT was formatted and brought online:
<code bash>
[mds1 ~]$ mkfs.lustre --mdt --reformat --index=1 \
    --mgsnode=mds0@tcp --mgsnode=mds1@tcp \
    --servicenode=mds1@tcp --servicenode=mds0@tcp \
    --backfstype=ldiskfs --fsname=demo \
    /dev/sdb
[mds1 ~]$ mkdir -p /lustre/mgt /lustre/mdt{0,1}
[mds1 ~]$ mount -t lustre /dev/sdb /lustre/mdt1
</code>

After a few moments, the client VM received the updated file system configuration and mounted the new MDT.  MDT usage and capacity changed accordingly.  **//This indicated that an online addition of MDTs to a running Lustre file system is possible.//**
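
From the client, the presence of the second MDT can be confirmed with standard ''lfs'' queries, for example:
<code bash>
# Both MDTs should now be listed, each with its own inode counts
[client ~]$ lfs mdts /demo
[client ~]$ lfs df -i /demo
</code>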

Further testing was performed to confirm that
  * by default all metadata additions were against mdt0
  * creating a new directory with metadata striping over mdt0 and mdt1 initially allowed a balanced creation of new files across both MDTs (see the sketch following this list)
  * once mdt0 was filled to capacity, creation of new files whose names hashed and mapped to mdt0 failed; names that hashed and mapped to mdt1 succeeded
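
A sketch of how per-file placement can be observed (not the exact commands from the original tests; directory and file names are placeholders):
<code bash>
# Create a directory striped across both MDTs and populate it
[client ~]$ lfs mkdir -c 2 /demo/striped
[client ~]$ touch /demo/striped/alpha /demo/striped/beta

# Report which MDT index (0 or 1) holds each file's metadata
[client ~]$ lfs getstripe --mdt-index /demo/striped/alpha
[client ~]$ lfs getstripe --mdt-index /demo/striped/beta
</code>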

==== Addition of OST ====

The new OST was formatted and brought online:
<code bash>
[oss1 ~]$ mkfs.lustre --ost --reformat --index=1 \
    --mgsnode=mds0@tcp --mgsnode=mds1@tcp \
    --servicenode=oss1@tcp --servicenode=oss0@tcp \
    --backfstype=ldiskfs --fsname=demo \
    /dev/sdb
[oss1 ~]$ mkdir -p /lustre/ost{0,1}
[oss1 ~]$ mount -t lustre /dev/sdb /lustre/ost1
</code>

After a few moments, the client VM received the updated file system configuration and mounted the new OST.  OST usage and capacity changed accordingly.  **//This indicated that an online addition of OSTs to a running Lustre file system is possible.//**
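
Similarly, the new OST can be confirmed from the client and exercised by striping a test file across all OSTs (the file name is a placeholder):
<code bash>
# Both OSTs should now be listed with their usage
[client ~]$ lfs df -h /demo
[client ~]$ lfs osts /demo

# Create a file striped across every available OST and write some data to it
[client ~]$ lfs setstripe -c -1 /demo/stripe-test
[client ~]$ dd if=/dev/zero of=/demo/stripe-test bs=1M count=512
[client ~]$ lfs getstripe /demo/stripe-test
</code>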