Using Globus on Caviness

Moving personal data onto and off of the Caviness cluster with the Globus file transfer service goes through a staging area on the /lustre/scratch filesystem. This approach is necessary because of how sharing works when guest collection properties are enabled on a collection:

  1. When you authenticate to activate your Caviness collection, Globus generates a certificate that identifies that activation.
  2. All navigation and file transfer requests against your Caviness collection are authenticated by that certificate.
  3. When you share that collection, you are essentially handing your certificate to other Globus users so that they can access your Caviness collection as you.
  4. Thus, any content to which you have access on your Caviness collection is also visible to the Globus users with whom you've shared access.

If we were to simply expose the entirety of /lustre/scratch on Globus, then when you enable sharing you could be providing access to much more than just your own files and directories. This would be a major data security issue, so we instead present a specific sub-directory to Globus: /lustre/scratch/globus/home.

A user requesting Globus-accessible storage is provided a directory (named with the uid number, e.g. 1001) that serves as their staging point:

  • Make files already on /lustre/scratch visible on your Globus endpoint by moving them to /lustre/scratch/globus/home/<uid#> (using "mv" is very fast; duplicating the files using "cp" will take longer); see the command sketch after this list.
  • Copy files from home or workgroup storage to /lustre/scratch/globus/home/<uid#> (using "rsync" or "cp") to make them visible on your Globus endpoint.
  • Please note: do not create symbolic links in /lustre/scratch/globus/home/<uid#> to files or directories that reside in your home or workgroup storage. They will not work on the Globus endpoint.
  • The /lustre/scratch/globus/home/<uid#> directory is writable by Globus, so data can be copied from a remote collection to your Caviness collection.
  • Files copied to /lustre/scratch/globus/home/<uid#> via Globus can be moved elsewhere on /lustre/scratch (again, "mv" is very fast and duplication using "cp" will be slower) or copied to home and workgroup storage.
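
A minimal sketch of the staging workflow, assuming a uid number of 1001 and hypothetical file and directory names (substitute your own):

$ # Stage an existing scratch file by moving it (fast, no duplication):
$ mv /lustre/scratch/myworkgroup/results.tar.gz /lustre/scratch/globus/home/1001/
$ # Copy data from home storage into the staging area:
$ rsync -a ~/dataset/ /lustre/scratch/globus/home/1001/dataset/
$ # After an inbound Globus transfer completes, move the data back out:
$ mv /lustre/scratch/globus/home/1001/incoming.dat /lustre/scratch/myworkgroup/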

PLEASE BE AWARE that all data moved to Caviness consumes storage space on the shared scratch filesystem, /lustre/scratch. We have not currently enacted any quota limits on that filesystem, so it is up to users to refrain from filling it to capacity. Since Globus is often used to transfer large datasets, please be very careful NOT to push the filesystem to capacity.

If the filesystem nears capacity, IT may ask you to remove data from /lustre/scratch, including content present in your Globus staging directory.
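
Before starting a large transfer, you can check the filesystem's free space and how much your own staging directory is consuming with standard commands:

$ df -h /lustre/scratch
$ du -sh /lustre/scratch/globus/home/$(id -u)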

PLEASE REFRAIN from modifying the group ownership and permissions on /lustre/scratch/globus/home/<uid#>. Group ownership is set to the "everyone" group and permissions to 0700 (drwx------) to properly secure your data. Any change to the directory's ownership or permissions could expose your data to other users on the Globus service.
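
You can verify that the directory still has the expected group and mode with "ls -ld"; the username and other fields in the output below are illustrative:

$ ls -ld /lustre/scratch/globus/home/1001
drwx------ 2 traine everyone 4096 Apr  8 10:33 /lustre/scratch/globus/home/1001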

You can find your uid number using the "id" command:

$ id -u
1001

This user's Globus home directory would be /lustre/scratch/globus/home/1001. For convenience, you can append the following to your ~/.bash_profile file:

export GLOBUS_HOME="/lustre/scratch/globus/home/$(id -u)"
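
With the variable set (open a new shell or source the profile first), it serves as a shorthand for the staging path; the file name below is hypothetical:

$ source ~/.bash_profile
$ cp ~/results/output.csv "$GLOBUS_HOME/"
$ ls "$GLOBUS_HOME"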