UK National HPC Service


Using the hold/long term filesystems


There is a clear need for long-term storage of historical data that is both very cheap and still accessible. The hold directory is designed for this purpose.

The hold directory is backed by an offline tape storage device, but data is served through a filesystem on the SAN. Most data will be offline; when a file is requested it is migrated to the online disk storage (which acts as a cache for the offline files) and is then held in a dual state, existing both on disk and on tape.

Data placed in hold will be migrated to the tape system at a later time. Once it exists only on tape, data can take a while to become online again.


There are three locations under /hold:

  • /hold/week/<username>, also known as $HOLDWEEK, is intended for files that need to last for a week.
  • /hold/month/<username>, also known as $HOLDMONTH, is intended for files that need to last for a month.
  • /hold/year/<username>, also known as $HOLDYEAR, is intended for files that have no expiry date. This is the default location, and the environment variable $HOLDIR also refers to it.


  • These directories are covered by a backup policy.
  • There is a purge policy on the hold directories, clearing out files in $HOLDWEEK and $HOLDMONTH once they expire. Files are also regularly cleared from the disk cache after they have been migrated to tape.
  • There is a quota on these directories, which your PI can ask to have increased. There is also a limit on the number of files.


The cache for the hold filesystem is on the SAN and is therefore readily available from all machines. However, a file may still be on tape and not in the cache, in which case it will take a considerable amount of time to retrieve it from tape.

There is a quota on this filesystem in terms of the number of files you can store there, so it is important to tar small files together into a single larger archive.
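For example, a directory of small result files can be bundled into one compressed archive before being copied into hold, keeping the file count down. A minimal sketch (the directory and file names are illustrative):

```shell
# Create some small example files to archive (illustrative only).
mkdir -p results
echo "run 1" > results/out1.txt
echo "run 2" > results/out2.txt

# Bundle them into a single compressed archive: one large file is far
# cheaper on the hold filesystem than many small ones.
tar -czf results.tar.gz results/

# List the archive contents to check everything was captured.
tar -tzf results.tar.gz

# The single archive can then be copied to the hold area, e.g.:
# cp results.tar.gz $HOLDYEAR/
```

To restore the files later, retrieve the archive and unpack it with tar -xzf.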

Data that is only in an offline state takes a considerable time to retrieve. It can be prefetched into a dual state with the dmget command, and you can interrogate the status of files with dmls -l. Because of this delay, it is important not to access offline files during a batch job: doing so will only make jobs take far longer than expected, and they may not finish at all.
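For example, before starting work on a file you might check its state and queue a recall interactively (the filename is illustrative):

```shell
# Show the migration state of the file. Typical states include
# OFL (offline, tape only), DUL (dual state, on disk and tape)
# and REG (regular, on disk only).
dmls -l $HOLDYEAR/bigdata.tar.gz

# Queue a recall so the file is staged back into the disk cache;
# by default dmget waits until the file is online.
dmget $HOLDYEAR/bigdata.tar.gz

# Once dmls reports the file as dual state, it can be read at disk speed.
```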

Xcp is no longer required; historically it was provided for optimal data transfer between machines. Now that most filesystems are globally visible it is no longer needed and can be replaced with the normal cp command; the -D option provides significant performance benefits when moving large files around.

Moreover, we encourage efficient use of the hold filesystem. This includes:

  • Only storing large files, as small files will permanently fill up the cache.
  • Retrieving one large file rather than many small ones: the latency per retrieval can be several minutes, and if the system is busy (for example, when lots of files are requested at once) it can take much longer.
  • Avoiding the use of offline files during a batch job, in favour of prefetching them to other SAN directories such as $SANTMP or /tmp.
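One way to follow the last point is to recall and stage the data before the job is submitted, so the batch script itself only ever touches online copies. A hypothetical sequence (the filenames and the qsub submission command are illustrative):

```shell
# Before submitting: recall the archive from tape into the disk cache.
dmget $HOLDYEAR/input.tar.gz

# Stage a working copy onto fast SAN scratch so the job never waits on tape.
cp $HOLDYEAR/input.tar.gz $SANTMP/
tar -xzf $SANTMP/input.tar.gz -C $SANTMP

# Now submit the batch job; it reads only from $SANTMP.
# qsub run_job.pbs
```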

This page last updated: Thursday, 01-Jun-2006 15:25:28 BST