Overview of CRC Resources

Hardware Overview

We have several clusters, each used for a different purpose. The clusters (with acronyms expanded) are listed below, followed by a short login example:

  1. H2P (Hail to Pitt!) – Our newest cluster, which contains:
    • SMP (Shared MultiProcessor) – Designed for single- or multi-core jobs
    • GPU (Graphics Processing Unit) – Designed for throughput computing on GPUs
    • MPI_OPA (Message Passing Interface) – Parallel computing over the Omni-Path (OPA) fast interconnect from Intel
  2. HTC (High Throughput Computing) – Designed for Health Sciences Research
  3. MPI_IB – Parallel computing over the InfiniBand (IB) fast interconnect from Mellanox
  4. Frank – Legacy H2P
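
Each cluster is reached through its own login nodes. As a minimal sketch, a first session might look like the one below; the hostname h2p.crc.pitt.edu is an assumption for illustration (confirm the current addresses with CRC support), and ketan is the example username used later in this guide.

    # Connect to a cluster's login node with your Pitt credentials.
    # The hostname is illustrative; confirm the current address with CRC.
    ssh ketan@h2p.crc.pitt.edu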

Storage Overview

Three main storage resources are available: ihome, mobydisk, and ZFS.

The ihome storage area is the default home directory for CRC users. The default quota is 75G per user, and ihome is backed up. It is mounted on the login and compute nodes of each CRC cluster. To access ihome, simply go to your home directory on any of the CRC clusters; for instance, /ihome/sam/ketan is the home directory of a user named ketan in a group named sam.
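
As a quick check, the shell session below (run on any CRC login node) shows how the example user above would confirm the location and usage of their ihome directory; the path shown is the example from the paragraph above.

    # Your home directory is your ihome area on every CRC cluster.
    $ echo $HOME
    /ihome/sam/ketan    # example: user "ketan" in group "sam"

    # Report how much of the 75G per-user quota is currently in use.
    $ du -sh $HOME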

Mobydisk is a Lustre-based filesystem provided as fast storage space for CRC users. The default quota is 2T per group. Mobydisk is not backed up. It is mounted on the login and compute nodes of each cluster at /mnt/mobydisk. Check with your PI for access to Mobydisk and for the location of your data, since both depend on your group's computational requirements.
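
For example, you can verify the mount and browse for your group's area as follows; the directory layout under /mnt/mobydisk varies by group, so this is only a sketch and your PI can give you the exact path.

    # Mobydisk is mounted at the same path on every cluster.
    $ df -h /mnt/mobydisk

    # Browse for your group's directory (ask your PI for the exact path).
    $ ls /mnt/mobydisk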

ZFS is an additional, serially accessed storage area for CRC users. The default quota is 5T per group. ZFS is not backed up and is not mounted on the compute nodes; consequently, it is available only on the login nodes of each cluster. The path to your specific ZFS storage area depends on how it was created and resides on one of the two ZFS servers, mounted at /zfs1 and /zfs2. File a support ticket if you need an allocation on ZFS or are unsure of your storage location.
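
Because ZFS paths differ between allocations, one way to locate yours is to look for your group's name under both servers from a login node; the group name sam below is the example used earlier and should be replaced with your own.

    # Run this on a login node only; ZFS is not mounted on compute nodes.
    $ ls -d /zfs1/sam /zfs2/sam 2>/dev/null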

In addition to the above, CRC provides scratch space local to compute and/or login nodes. This space is mounted as /scratch and is available as temporary fast storage while a user's job is running. Note that the contents of scratch are erased as soon as another job starts on a given node. The size of the scratch space varies, typically between 190G (Omni-Path nodes) and 1-3T (SMP nodes).
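
A common pattern is to stage data into /scratch at the start of a job and copy the results back before the job finishes, since the contents are erased once another job starts on the node. The sketch below assumes a batch job running on a compute node; the file and program names are placeholders.

    # Stage input data onto the node-local scratch space.
    cp $HOME/input.dat /scratch/
    cd /scratch

    # Run the computation against the fast local copy (placeholder program name).
    ./my_program input.dat > output.dat

    # Copy results back to permanent storage before the job ends;
    # /scratch is cleared when another job starts on this node.
    cp output.dat $HOME/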

Pictorial Summary

Below is a general picture of the hardware available at Pitt. NTA stands for Non-Traditional Architecture.
