The CRC provides different hardware types to target different computing use cases. These hardware profiles are grouped under a common cluster name and further divided into partitions that reflect differences in architecture or usage mode.
Cluster Acronym | Full Form of Acronym | Description of Use Cases |
---|---|---|
mpi | Message Passing Interface | For tightly coupled parallel codes that use the Message Passing Interface APIs for distributing computation across multiple nodes, each with its own memory space |
htc | High Throughput Computing | For genomics and other health sciences-related workflows that can run on a single node |
smp | Shared Memory Processing | For jobs that can run on a single node where the CPU cores share a common memory space |
gpu | Graphics Processing Unit | For AI/ML applications and physics-based simulation codes written to take advantage of accelerated computing on GPU cores |
Below are the hardware specifications for each cluster and the partitions that compose it, along with the VIZ and login nodes, as listed in our user manual.
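Because jobs are routed by cluster and partition, it can help to confirm what each cluster exposes before submitting work. The sketch below assumes a Slurm scheduler (consistent with the --constraint columns in the tables that follow) and that the cluster names match the acronyms above; adjust the names if your site differs.

```bash
# List partitions, node counts, cores, memory, and GRES for each cluster.
# Cluster names are assumed to match the acronyms in the table above.
for cluster in smp gpu mpi htc teach; do
    echo "=== ${cluster} ==="
    sinfo --clusters="${cluster}" --format="%P %D %c %m %G"
done
```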
GPU Cluster Overview
The GPU cluster is optimized for computational tasks requiring GPU acceleration, such as artificial intelligence and machine learning workflows, molecular dynamics simulations, and large-scale data analysis.
Key Features
- Designed for high-performance GPU workloads.
- Supports CUDA, TensorFlow, PyTorch, and other GPU-accelerated frameworks.
Specifications
Partition Name | Node Count | GPU Type | GPU/Node | --constraint | Host Architecture | Core/Node | Max Core/GPU | Mem/Node | Mem/Core | Scratch | Network | Node Names |
---|---|---|---|---|---|---|---|---|---|---|---|---|
l40s | 20 | L40S 48GB | 4 | l40s,48g,intel | Intel Xeon Platinum 8462Y+ | 64 | 16 | 512 GB | 8 GB | 7 TB NVMe | 10GbE | gpu-n[55-74] |
a100 | 10 | A100 40GB PCIe | 4 | a100,40g,amd | AMD EPYC 7742 (Rome) | 64 | 16 | 512 GB | 8 GB | 2 TB NVMe | HDR200; 10GbE | gpu-n[35-44] |
a100 | 2 | A100 40GB PCIe | 4 | a100,40g,intel | Intel Xeon Gold 5220R (Cascade Lake) | 48 | 12 | 384 GB | 8 GB | 1 TB NVMe | 10GbE | gpu-n[33-34] |
a100_multi | 10 | A100 40GB PCIe | 4 | a100,40g,amd | AMD EPYC 7742 (Rome) | 64 | 16 | 512 GB | 8 GB | 2 TB NVMe | HDR200; 10GbE | gpu-n[45-54] |
a100_nvlink | 2 | A100 80GB SXM | 8 | a100,80g,amd | AMD EPYC 7742 (Rome) | 128 | 16 | 1 TB | 8 GB | 2 TB NVMe | HDR200; 10GbE | gpu-n[31-32] |
a100_nvlink | 3 | A100 40GB SXM | 8 | a100,40g,amd | AMD EPYC 7742 (Rome) | 128 | 16 | 1 TB | 8 GB | 12 TB NVMe | HDR200; 10GbE | gpu-n[28-30] |
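As an example, a batch job targeting the a100 partition might look like the sketch below. The partition name, --constraint tag, and 16-core-per-GPU limit come from the table above; the job name and script body are placeholders to replace with your own workload.

```bash
#!/bin/bash
#SBATCH --job-name=gpu-example
#SBATCH --clusters=gpu       # submit to the GPU cluster
#SBATCH --partition=a100     # partition from the table above
#SBATCH --constraint=amd     # select the AMD EPYC (Rome) A100 nodes
#SBATCH --gres=gpu:1         # request one A100
#SBATCH --cpus-per-task=16   # stay within the 16-core-per-GPU limit
#SBATCH --time=02:00:00

nvidia-smi                   # confirm the allocated GPU is visible
python train.py              # placeholder for your GPU workload
```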
HTC Cluster Overview
The HTC cluster is designed for data-intensive health sciences workflows (genomics, neuroimaging, etc.) that can run on a single node.
Key Features
- Dedicated Open OnDemand web portal instance
Specifications
Partition | Host Architecture | --constraint | Nodes | Cores/Node | Mem/Node | Mem/Core | Scratch | Network | Node Names |
---|---|---|---|---|---|---|---|---|---|
htc | AMD EPYC 9374F (Genoa) | amd, genoa | 20 | 64 | 768 GB | 12 GB | 3.2 TB NVMe | 10GbE | htc-n[50-69] |
htc | Intel Xeon Platinum 8352Y (Ice Lake) | intel, ice_lake | 18 | 64 | 512 GB | 8 GB | 2 TB NVMe | 10GbE | htc-n[32-49] |
htc | Intel Xeon Platinum 8352Y (Ice Lake) | intel, ice_lake | 4 | 64 | 1 TB | 16 GB | 2 TB NVMe | 10GbE | htc-1024-n[0-3] |
htc | Intel Xeon Gold 6248R (Cascade Lake) | intel, cascade_lake | 8 | 48 | 768 GB | 16 GB | 960 GB SSD | 10GbE | htc-n[24-31] |
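A typical single-node HTC submission might look like the sketch below. The partition, constraint tag, and per-core memory figures come from the table above; the memory request, runtime, and pipeline script are placeholders for your own workflow.

```bash
#!/bin/bash
#SBATCH --job-name=htc-example
#SBATCH --clusters=htc
#SBATCH --partition=htc
#SBATCH --constraint=genoa   # request the AMD EPYC 9374F (Genoa) nodes
#SBATCH --nodes=1            # HTC workloads fit on a single node
#SBATCH --cpus-per-task=8
#SBATCH --mem=96G            # 12 GB/core x 8 cores, per the table above
#SBATCH --time=08:00:00

./run_pipeline.sh            # placeholder for a single-node genomics workflow
```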
MPI Cluster Overview
The MPI cluster enables jobs with tightly coupled parallel codes using Message Passing Interface APIs for distributing computation across multiple nodes, each with its own memory space.
Key Features
- InfiniBand and Omni-Path networking
- Minimum of 2 Nodes per Job
Specifications
Partition | Host Architecture | Nodes | Cores/Node | Mem/Node | Mem/Core | Scratch | Network | Node Names |
---|---|---|---|---|---|---|---|---|
mpi | Intel Xeon Gold 6342 (Ice Lake) | 136 | 48 | 512 GB | 10.6 GB | 1.6 TB NVMe | HDR200; 10GbE | mpi-n[0-135] |
opa-high-mem | Intel Xeon Gold 6132 (Skylake) | 36 | 28 | 192 GB | 6.8 GB | 500 GB SSD | OPA; 10GbE | opa-n[96-131] |
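A multi-node submission honoring the 2-node minimum might look like the sketch below. The node and core counts come from the mpi partition row above; the executable name is a placeholder for your own MPI program.

```bash
#!/bin/bash
#SBATCH --job-name=mpi-example
#SBATCH --clusters=mpi
#SBATCH --partition=mpi
#SBATCH --nodes=2              # the MPI cluster requires at least 2 nodes per job
#SBATCH --ntasks-per-node=48   # one MPI rank per core on the Ice Lake nodes
#SBATCH --time=04:00:00

# srun launches the MPI ranks across the allocated nodes.
srun ./my_mpi_app              # placeholder executable
```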
SMP Cluster Overview
The SMP nodes are appropriate for programs parallelized with a shared-memory framework. These nodes are similar to your laptop, but with many more CPU cores that all share a common memory space.
Key Features
- High-memory partition providing nodes with up to 3 TB of shared memory
Specifications
Partition | Host Architecture | --constraint | Nodes | Cores/Node | Mem/Node | Mem/Core | Scratch | Network | Node Names |
---|---|---|---|---|---|---|---|---|---|
smp | AMD EPYC 9374F (Genoa) | amd, genoa | 43 | 64 | 768 GB | 12 GB | 3.2 TB NVMe | 10GbE | smp-n[214-256] |
smp | AMD EPYC 7302 (Rome) | amd, rome | 55 | 32 | 256 GB | 8 GB | 1 TB SSD | 10GbE | smp-n[156-210] |
high-mem | Intel Xeon Platinum 8352Y (Ice Lake) | intel, ice_lake | 8 | 64 | 1 TB | 16 GB | 10 TB NVMe | 10GbE | smp-1024-n[1-8] |
high-mem | Intel Xeon Platinum 8352Y (Ice Lake) | intel, ice_lake | 2 | 64 | 2 TB | 32 GB | 10 TB NVMe | 10GbE | smp-2048-n[0-1] |
high-mem | AMD EPYC 7351 (Naples) | amd, naples | 1 | 32 | 1 TB | 32 GB | 1 TB NVMe | 10GbE | smp-1024-n0 |
high-mem | Intel Xeon E7-8870v4 (Broadwell) | intel, broadwell | 4 | 80 | 3 TB | 38 GB | 5 TB SSD | 10GbE | smp-3072-n[0-3] |
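A shared-memory (threaded) job requests a single node and as many cores as its threads need, as in the sketch below. The partition name comes from the table above; the thread count and program name are placeholders.

```bash
#!/bin/bash
#SBATCH --job-name=smp-example
#SBATCH --clusters=smp
#SBATCH --partition=smp
#SBATCH --nodes=1              # shared-memory jobs use a single node
#SBATCH --cpus-per-task=16     # threads share one memory space
#SBATCH --time=04:00:00

export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK}"
./my_openmp_app                # placeholder for a threaded (e.g., OpenMP) program
```

For jobs that need more memory than the standard smp nodes provide, target the high-mem rows above instead (for example, --partition=high-mem together with an explicit --mem request).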
TEACH Overview
The TEACH cluster makes a subset of the hardware on the CRCD system available to students and instructors for developing computational workflows around course materials without competing with research-oriented jobs.
Key Features
- Consists of both CPU and GPU hardware
Specifications
Resource Type | Node Count | CPU Architecture | Core/Node | CPU Memory (GB) | GPU Card | No. GPU | GPU Memory (GB) |
---|---|---|---|---|---|---|---|
CPU | 54 | Gold 6126 Skylake 12C 2.6GHz | 24 | 192 | N/A | N/A | N/A |
GPU 1 | 7 | E5-2620v3 Haswell 6C 2.4GHz | 12 | 128 | NVIDIA Titan X | 4 | 12 |
GPU 2 | 6 | E5-2620v3 Haswell 6C 2.5GHz | 12 | 128 | NVIDIA GTX 1080 | 4 | 8 |
GPU 3 | 10 | Xeon 4112 Skylake 4C 2.6GHz | 8 | 96 | NVIDIA GTX 1080 Ti | 4 | 11 |
GPU 4 | 2 | Xeon Platinum 8502+ 1.9GHz | 128 | 512 | NVIDIA L4 | 8 | 24 |
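The table above lists TEACH hardware by resource type rather than by partition name, so the quickest way to see how it is exposed to jobs is to query the scheduler directly. A minimal sketch, assuming the cluster is named teach in Slurm:

```bash
# Show the TEACH partitions, node counts, cores, memory, and GPU GRES.
sinfo --clusters=teach --format="%P %D %c %m %G"
```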
Login Nodes Overview
The login nodes provide access to a Linux command-line interface on the CRCD system via the Secure Shell protocol (SSH).
Key Features
- Load balancing between login nodes to better address usage demands
- Cgroup-based management of system resources
Specifications
Hostname | Backend Hostname | Architecture | Cores/Node | Mem | Mem/Core | OS Drive | Network |
---|---|---|---|---|---|---|---|
h2p.crc.pitt.edu | login0.crc.pitt.edu | Intel Xeon Gold 6326 (Ice Lake) | 32 | 256 GB | 8 GB | 2x 480 GB NVMe (RAID 1) | 25GbE |
h2p.crc.pitt.edu | login1.crc.pitt.edu | Intel Xeon Gold 6326 (Ice Lake) | 32 | 256 GB | 8 GB | 2x 480 GB NVMe (RAID 1) | 25GbE |
htc.crc.pitt.edu | login3.crc.pitt.edu | Intel Xeon Gold 6326 (Ice Lake) | 32 | 256 GB | 8 GB | 2x 480 GB NVMe (RAID 1) | 25GbE |
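Connecting is a single SSH command against one of the hostnames above; the username below is a placeholder for your own CRCD account.

```bash
# General-purpose login (load balanced across login0/login1 behind h2p).
ssh pittuser@h2p.crc.pitt.edu

# HTC login node.
ssh pittuser@htc.crc.pitt.edu
```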
VIZ Overview
The VIZ Login Nodes enable access to an in-browser Linux Desktop environment on the CRCD system.
Key Features
- Load balancing between login nodes to better address usage demands
- Cgroup-based management of system resources
Specifications
Web URL | Backend Hostname | GPU Type | # GPUs | Host Architecture | Cores | Mem | Mem/Core | Scratch | Network |
---|---|---|---|---|---|---|---|---|---|
https://viz.crc.pitt.edu | viz-n0.crc.pitt.edu | GTX 1080 8GB | 2 | Intel Xeon E5-2680v4 (Broadwell) | 28 | 256 GB | 9.1 GB | 1.6 TB SSD | 10GbE |
https://viz.crc.pitt.edu | viz-n1.crc.pitt.edu | RTX 2080 Ti 11GB | 2 | Intel Xeon Gold 6226 (Cascade Lake) | 24 | 192 GB | 8 GB | 1.9 TB SSD | 10GbE |