Cluster Hardware Overview

The Pitt Center for Research Computing provides different types of hardware for various advanced computing needs.

The CRC's primary organizational unit for hardware is the "cluster": a set of computers built around a specialized High Performance Computing (HPC) capability, such as the parallelism of graphics processing units or a message-passing architecture. The HPC clusters available to CRC users are:

  • Message Passing Interface (MPI) for highly parallel computing across many machines.
  • High Throughput Computing (HTC) for processing data in large quantities or for long periods of time.
  • Shared Memory Processing (SMP) for efficient exchange and access to data with a common memory space.
  • Graphics Processing Units (GPU) for accelerated computing with GPU applications.
  • Visualization and Interactive Desktop (VIZ) for projects requiring a graphical user interface.

These clusters are further divided into "partitions" of machines with similar hardware specifications (processors, memory, etc.). The individual machines that make up a cluster, and to which users submit their jobs, are called "compute nodes".
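
To make the cluster/partition terminology concrete, the sketch below shows how a batch job might target a specific cluster and partition. It assumes the scheduler is Slurm (the source of the "partition" terminology above) and that the cluster names are the lowercase versions of the names listed here; the account, time limit, and program names are placeholders rather than values taken from this page.

    #!/bin/bash
    #SBATCH --clusters=smp           # which CRC cluster to run on (e.g. smp, htc, mpi, gpu)
    #SBATCH --partition=smp          # partition within that cluster
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=01:00:00          # placeholder walltime
    #SBATCH --account=my_allocation  # placeholder allocation name

    ./my_program                     # placeholder executable

The partition names quoted in the sections below (for example "mpi", "opa-high-mem", "smp-high-mem", "gtx1080") are the values that go in the two directives at the top of such a script.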

Below, you will find the hardware specifications for each cluster and the partitions that compose it.

       

MPI Cluster

MPI nodes are intended for tightly-coupled codes that are parallelized with the Message Passing Interface (MPI) and distributed across many machines, where they benefit from the low-latency Omni-Path (OPA) or InfiniBand (IB) interconnect fabrics. A sample MPI job script follows the hardware listings below.

    InfiniBand Network (IB)

    Dual 24-core Ice Lake CPU (Intel Xeon Gold 6342 @ 2.8 GHz)

    This set of nodes composes the "mpi" partition on the cluster.

    • 136 nodes
    • 48 cores/node
    • 512 GB RAM/node
    • 480 GB NVMe for OS and 1.6 TB NVMe for local scratch
    • HDR200 InfiniBand (200 Gb/sec)
    • 10/25 GbE

    Omni-Path Network (OPA)

    This set of nodes composes the "opa-high-mem" partition on the cluster.

    • 36 nodes
    • 28 cores/node
    • 192 GB RAM/node
    • 256 GB SSD for OS and 500 GB SSD for local scratch
    • Omni-Path (100 Gb/sec)
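
    As referenced above, a minimal sketch of an MPI batch job on the "mpi" partition might look like the following. It assumes Slurm and an MPI module environment; the module name, walltime, and executable are placeholders.

        #!/bin/bash
        #SBATCH --clusters=mpi
        #SBATCH --partition=mpi
        #SBATCH --nodes=2                # two of the 48-core Ice Lake nodes
        #SBATCH --ntasks-per-node=48     # one MPI rank per core
        #SBATCH --time=04:00:00          # placeholder walltime

        module load openmpi              # placeholder; load whichever MPI module the cluster provides

        srun ./my_mpi_app                # srun starts the MPI ranks across both nodes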

     

HTC Cluster

HTC nodes are designed for High Throughput Computing workflows, such as gene sequence analysis and data-intensive analytics, that consist of many largely independent tasks. A sample job script follows the hardware listings below.

    Dual 32-core Ice Lake CPU (Intel Xeon Platinum 8352Y @ 2.20 GHz)

    • 18 nodes
    • 64 cores/node
    • 512 GB RAM/node
    • 2 TB NVMe drive for local scratch
    • 10 GbE

    Dual 32-core Ice Lake CPU (Intel Xeon Platinum 8352Y @ 2.20 GHz)

    These nodes have double the memory of the other Ice Lake nodes.

    • 4 nodes
    • 64 cores/node
    • 1 TB RAM/node
    • 2 TB NVMe drive for local scratch
    • 10 GbE

    Dual 24-core Cascade Lake CPU (Intel Xeon Gold 6248R @ 3.0 GHz)

    • 8 nodes
    • 48 cores/node
    • 768 GB RAM/node
    • 480 GB SSD for OS and 960 GB SSD for local scratch
    • HDR InfiniBand (100 Gb/sec)
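
    Because HTC workloads are typically many independent single-node tasks, a Slurm job array is a natural fit, as in the sketch below. The partition name, input-file naming scheme, walltime, and program are placeholders, not values from this page.

        #!/bin/bash
        #SBATCH --clusters=htc
        #SBATCH --partition=htc          # assumed partition name; check the cluster for the actual one
        #SBATCH --array=1-100            # 100 independent array tasks
        #SBATCH --cpus-per-task=4
        #SBATCH --mem=16G
        #SBATCH --time=08:00:00          # placeholder walltime

        # Each array element processes its own input file (placeholder naming scheme).
        ./analyze sample_${SLURM_ARRAY_TASK_ID}.dat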

     

SMP Cluster

SMP nodes allow for shared memory processing: they are appropriate for programs parallelized with a shared-memory framework (OpenMP, for example) and for users who want to move up from a laptop-style workflow, such as running MATLAB, without changing their programming model. A sample job script follows the hardware listings below.

    Dual 16-core Rome CPU (AMD EPYC 7302 @ 3.0 GHz)

    This set of nodes composes the "smp" partition on the cluster. 

    • 58 nodes
    • 32 cores/node
    • 256 GB RAM/node
    • 256 GB SSD for OS and 1 TB SSD for local scratch
    • 10 GbE

    Dual 12-core Skylake CPU (Intel Xeon Gold 6126 @ 2.60 GHz)

    This set of nodes also composes the "smp" partition on the cluster.

    • 132 nodes
    • 24 cores/node
    • 192 GB RAM/node
    • 256 GB SSD for OS and 500 GB SSD for local scratch
    • 10 GbE

    Dual 6-core Broadwell CPU (Intel Xeon E5-2643 v4 @ 3.40 GHz)

    This set of nodes composes the "smp-high-mem" partition on the cluster.

    • 24 nodes
      • 12 cores/node
      • 256 GB RAM/node
      • 256 GB SSD for OS and 1 TB SSD for local scratch
      • 10 GbE
    • 2 nodes
      • 12 cores/node
      • 256 GB RAM/node
      • 256 GB SSD for OS and 3 TB SSD for local scratch
      • 10 GbE
    • 2 nodes
      • 12 cores/node
      • 512 GB RAM/node
      • 256 GB SSD for OS and 3 TB SSD for local scratch
      • 10 GbE
    • 1 node
      • 12 cores/node
      • 256 GB RAM/node
      • 256 GB SSD for OS and 6 TB NVMe for local scratch
      • 10 GbE

    Dual 16-core Naples CPU (AMD @ 2.40 GHz); 10 GbE

    This node also belongs to the "smp-high-mem" partition on the cluster.

    • 1 node
    • 32 cores/node
    • 1024 GB RAM/node
    • 256 GB SSD for OS and 1 TB NVMe for local scratch
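
    As referenced above, a shared-memory (e.g. OpenMP or otherwise multithreaded) job requests cores on a single SMP node rather than spanning nodes. A minimal sketch, assuming Slurm; the thread count, walltime, and program name are placeholders.

        #!/bin/bash
        #SBATCH --clusters=smp
        #SBATCH --partition=smp
        #SBATCH --nodes=1                # shared memory means a single node
        #SBATCH --ntasks=1
        #SBATCH --cpus-per-task=16       # up to 32 on the Rome nodes, 24 on the Skylake nodes
        #SBATCH --mem=64G
        #SBATCH --time=02:00:00          # placeholder walltime

        export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # match thread count to the cores requested
        ./my_threaded_app                # placeholder executable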

     

GPU Cluster

GPU nodes enable accelerated computing with Graphics Processing Units and are targeted at applications written to exploit the inherent parallelism of general-purpose GPU architectures. A sample GPU job script follows the hardware listings below.

    Dual 6-core Haswell CPU (Intel Xeon E5-2620 v3 @ 2.40 GHz)

    This set of nodes composes the "titanx" partition on the cluster.

    • 7 nodes
    • 12 cores/node
    • 128 GB RAM/node
    • 4 NVIDIA Titan X GPUs/node
    • 12 GB memory per GPU
    • Max of 3 CPUs per GPU

    Dual 6-core Haswell CPU (Intel Xeon E5-2620 v3 @ 2.40 GHz)

    This set of nodes composes the "gtx1080" partition on the cluster.

    • 8 nodes
    • 12 cores/node
    • 128 GB RAM/node
    • 4 NVIDIA GeForce GTX1080 GPUs/node
    • 8 GB memory per GPU
    • Max of 6 CPUs per GPU

    Dual 4-core Skylake CPU (Intel Xeon Silver 4112 @ 2.60 GHz) 

    This set of nodes also composes the "gtx1080" partition on the cluster.

    • 10 nodes
    • 8 cores/node
    • 96 GB RAM/node
    • 4 NVIDIA GeForce GTX1080 Ti GPUs/node
    • 11 GB memory per GPU
    • Max of 6 CPUs per GPU

    Dual 10-core host with NVIDIA K40 GPUs  

    This node is in the "k40" partition on the cluster.

    • 1 node
    • 20 cores/node
    • 128 GB RAM/node
    • 2 NVIDIA K40 GPUs
    • 12 GB memory per GPU
    • Max of 10 CPUs per GPU

    Dual 12-core host with NVIDIA V100 GPUs

    This node is in the "v100" partition on the cluster.

    • 1 node
    • 24 cores/node
    • 192 GB RAM/node
    • 4 NVIDIA V100 GPUs
    • 32 GB memory per GPU
    • Max of 6 CPUs per GPU

    Dual 64-core host with NVIDIA A100 GPUs  

    These nodes are in the "A100" partition on the cluster.

    • 3 nodes
    • 128 cores/node
    • 1 TB RAM/node
    • 8 NVIDIA A100 GPUs/node
    • 40 GB memory per GPU
    • Max of 16 CPUs per GPU

    • 5 nodes with 4 NVIDIA Titan GPUs/node
    • 4 Power9 nodes with 4 NVIDIA V100 32 GB GPUs/node, NVLink
    • 3 X86_64 nodes with 8 NVIDIA A100 40 GB GPUs/node, NVLink
    • 22 X86_64 nodes with 4 NVIDIA A100 40 GB GPUs/node, PCIe (summer 2022)
    • 2 X86_64 nodes with 8 NVIDIA A100 80 GB GPUs/node, NVLink (summer 2022)
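
    As referenced above, jobs on any of these partitions must explicitly request GPUs as well as CPU cores. A minimal sketch, assuming Slurm with generic-resource (GRES) scheduling; the partition choice, module name, walltime, and program are placeholders.

        #!/bin/bash
        #SBATCH --clusters=gpu
        #SBATCH --partition=gtx1080      # or titanx, k40, v100, ...
        #SBATCH --gres=gpu:1             # number of GPUs requested on the node
        #SBATCH --cpus-per-task=6        # stay within the "max CPUs per GPU" limit for the partition
        #SBATCH --mem=32G
        #SBATCH --time=12:00:00          # placeholder walltime

        module load cuda                 # placeholder; load whichever GPU toolkit module the cluster provides

        ./my_gpu_app                     # placeholder executable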

     

VIZ Nodes

VIZ nodes provide a graphical user interface (GUI) and are intended for visualization projects and other interactive, desktop-style work. An example interactive request follows the hardware listings below.

    Dual 14-core Broadwell CPU (Intel Xeon E5-2680 v4 @ 2.4 GHz)

    • 1 node
    • 28 cores/node
    • 256 GB RAM/node
    • 240 GB for OS and 1.6 TB SSD for local scratch
    • 2 NVIDIA GTX 1080 graphics cards
    • 10 GbE

    Dual 12-core Cascade Lake CPU (Intel Xeon 6226 @ 2.7 GHz)

    • 1 node
    • 24 cores/node
    • 192 GB RAM/node
    • 240 GB for OS and 1.9 TB SSD for local scratch
    • 2 NVIDIA RTX 2080 Ti graphics cards
    • 10 GbE
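
    As referenced above, VIZ nodes are usually accessed interactively rather than through batch scripts. A minimal sketch of such a request, assuming Slurm built with X11 support and an SSH session with X forwarding; the "viz" cluster/partition names are assumptions based on the section title, not values stated on this page.

        # Request an interactive shell on a VIZ node with X11 forwarding.
        srun --clusters=viz --partition=viz --cpus-per-task=4 --mem=16G \
             --time=02:00:00 --x11 --pty bash

        # GUI applications launched from the resulting shell display on the
        # local machine through the forwarded X session.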