Computing Hardware
The CRC organizes its hardware into "clusters". A cluster is a set of computers that implements a specialized High Performance Computing (HPC) capability, such as the parallelism of graphics processing units or a message-passing architecture. The HPC clusters available to CRC users are:
- Message Passing Interface (MPI) for highly parallel computing across many machines.
- High Throughput Computing (HTC) for processing data in large quantities or for long periods of time.
- Shared Memory Processing (SMP) for efficient exchange and access to data with a common memory space.
- Graphics Processing Units (GPU) for accelerated computing with GPU applications.
- Visualization and Interactive Desktop (VIZ) for projects requiring a graphical user interface.
These clusters can be further split into "partitions" of machines with similar hardware specifications (processors, memory, etc.). The individual machines that make up the clusters and that users can submit their jobs to are called "compute nodes".
Below, you will find the hardware specifications for each cluster and the partitions that compose it.
MPI Cluster
The MPI nodes are for tightly coupled codes that are parallelized with the Message Passing Interface (MPI) and benefit from low-latency communication over an InfiniBand (HDR200) or Omni-Path (OPA) network. Your job must request a minimum of 2 nodes; a minimal example job script follows the table below.
Partition | Architecture | Nodes | Cores/Node | Mem/Node | Mem/Core | Scratch | Network |
---|---|---|---|---|---|---|---|
mpi | Intel Xeon Gold 6342 (Ice Lake) | 136 | 48 | 512 GB | 10.6 GB | 1.6 TB NVMe | HDR200; 10GbE |
opa-high-mem | Intel Xeon Gold 6132 (Skylake) | 36 | 28 | 192 GB | 6.8 GB | 500 GB SSD | OPA |
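A minimal batch script for the mpi partition is sketched below, assuming Slurm routes jobs by cluster and partition as described above; the job name, wall time, module name, and program name are placeholders to replace with your own.

```bash
#!/bin/bash
#SBATCH --job-name=mpi_example     # placeholder job name
#SBATCH --clusters=mpi             # assumption: jobs are routed by cluster; omit if not needed at your site
#SBATCH --partition=mpi            # default MPI partition (see table above)
#SBATCH --nodes=2                  # MPI jobs must request at least 2 nodes
#SBATCH --ntasks-per-node=48       # one MPI rank per core on the mpi partition
#SBATCH --time=01:00:00            # wall time (HH:MM:SS)

# Load an MPI implementation; the exact module name depends on the local
# module tree, so check `module avail` first.
module load openmpi

# srun launches one MPI rank per requested task across both nodes.
srun ./my_mpi_program
```

Submit the script with `sbatch` and monitor its progress with `squeue`.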
HTC Cluster
These nodes are designed for High Throughput Computing workflows such as gene sequence analysis, neuroimaging data processing, and other data-intensive analytics.
Partition | Architecture | Nodes | Cores/Node | Mem/Node | Mem/Core | Scratch | Network |
---|---|---|---|---|---|---|---|
htc | Intel Xeon Platinum 8352Y (Ice Lake) | 18 | 64 | 512 GB | 8 GB | 2 TB NVMe | 10GbE |
htc | Intel Xeon Platinum 8352Y (Ice Lake) | 4 | 64 | 1 TB | 16 GB | 2 TB NVMe | 10GbE |
htc | Intel Xeon Gold 6248R (Cascade Lake) | 8 | 48 | 768 GB | 16 GB | 960 GB SSD | 10GbE |
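High Throughput workloads on the htc partition are often expressed as Slurm job arrays, where each array task processes one input independently. The sketch below assumes a hypothetical per-sample script named process_sample.sh and an input naming scheme of sample_<N>.fastq; the resource requests are placeholders to tune for your own tool.

```bash
#!/bin/bash
#SBATCH --job-name=htc_array       # placeholder job name
#SBATCH --partition=htc            # HTC partition (see table above)
#SBATCH --ntasks=1                 # each array task is an independent single-task job
#SBATCH --cpus-per-task=4          # cores per array task (placeholder)
#SBATCH --mem=32G                  # memory per array task (placeholder)
#SBATCH --time=12:00:00            # wall time per array task
#SBATCH --array=1-50               # 50 independent tasks (placeholder range)

# Each array task selects its own input file by its array index.
# process_sample.sh is a hypothetical per-sample analysis script.
./process_sample.sh "sample_${SLURM_ARRAY_TASK_ID}.fastq"
```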
SMP Cluster
Nodes that allow for shared-memory processing. SMP nodes are appropriate for programs that are parallelized using a shared-memory framework. They are also a good fit for users who want to move up to a supercomputer while keeping the programming style of their laptop, such as running MATLAB. A minimal example job script follows the table below.
Partition | Architecture | Nodes | Cores/Node | Mem/Node | Mem/Core | Scratch | Network |
---|---|---|---|---|---|---|---|
smp | AMD EPYC 7302 (Rome) | 58 | 32 | 256 GB | 8 GB | 1 TB SSD | 10GbE |
smp | Intel Xeon Gold 6126 (Skylake) | 132 | 24 | 192 GB | 8 GB | 500 GB SSD | 10GbE |
high-mem | Intel Xeon Platinum 8352Y (Ice Lake) | 8 | 64 | 1 TB | 16 GB | 10 TB NVMe | 10GbE |
high-mem | Intel Xeon Platinum 8352Y (Ice Lake) | 2 | 64 | 2 TB | 32 GB | 10 TB NVMe | 10GbE |
high-mem | AMD EPYC 7351 (Naples) | 1 | 32 | 1 TB | 32 GB | 1 TB NVMe | 10GbE |
high-mem | Intel Xeon E7-8870v4 (Broadwell) | 4 | 80 | 3 TB | 38 GB | 5 TB SSD | 10GbE |
high-mem | Intel Xeon E5-2643v4 (Broadwell) | 24 | 12 | 256 GB | 21 GB | 1 TB SSD | 10GbE |
high-mem | Intel Xeon E5-2643v4 (Broadwell) | 2 | 12 | 256 GB | 21 GB | 3 TB SSD | 10GbE |
high-mem | Intel Xeon E5-2643v4 (Broadwell) | 2 | 12 | 512 GB | 42 GB | 3 TB SSD | 10GbE |
high-mem | Intel Xeon E5-2643v4 (Broadwell) | 1 | 12 | 256 GB | 21 GB | 6 TB SSD | 10GbE |
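As referenced above, a shared-memory job runs on a single node and requests several cores for one process. The sketch below is a minimal example for the smp partition; the thread count, wall time, and program name are placeholders.

```bash
#!/bin/bash
#SBATCH --job-name=smp_example     # placeholder job name
#SBATCH --partition=smp            # SMP partition (see table above)
#SBATCH --nodes=1                  # shared-memory jobs use a single node
#SBATCH --ntasks=1                 # one process...
#SBATCH --cpus-per-task=16         # ...with 16 threads (placeholder)
#SBATCH --time=04:00:00            # wall time (HH:MM:SS)

# Most threaded runtimes (OpenMP, many numerical libraries) respect this variable.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

./my_threaded_program
```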
GPU Cluster
Nodes that enable accelerated computing using Graphics Processing Units. GPU nodes are targeted at applications specifically written to take advantage of the inherent parallelism of general-purpose GPU architectures. For small problems, any of the GPUs below will suffice.
Partition: a100. This is the default partition in the gpu cluster and comprises the hardware listed in the table below. To request a particular Feature (such as an Intel host CPU), add the following directive to your job script: #SBATCH --constraint=intel. An example job script is sketched after these partition descriptions.
Partition: a100_multi. This partition supports multi-node GPU workflows. Your job must request a minimum of 2 nodes and 8 GPUs; a multi-node example script is sketched after the hardware table below.
Partition: a100_nvlink. This partition supports multi-GPU computation through 8-way A100s that are tightly coupled through an NVLink switch. The details of our NVIDIA HGX platform are described below. To request a particular Feature (such as an A100 with 80 GB of GPU memory), add the following directive to your job script: #SBATCH --constraint=80g
Partition: gtx1080. 9 nodes with dual socket Intel Xeon Silver 4112 (Skylake, 4C, 2.60GHz base, up to 3.00GHz max boost)
Partition: v100. A single node with dual socket Intel Xeon Gold 6126 (Skylake, 12C, 2.60GHz base, up to 3.70GHz max boost)
Partition: power9. 4 nodes of IBM Power System AC922: dual-socket Power9 (16C, 2.7GHz base, 3.3GHz turbo). Code must be compiled for the Power9 platform in order to work.
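As a concrete illustration of the --constraint directive above, a single-GPU job on the a100 partition might look like the sketch below; the GPU count, CPU count, wall time, module name, and program name are placeholders, and the --gres syntax shown is standard Slurm rather than a site-specific requirement.

```bash
#!/bin/bash
#SBATCH --job-name=gpu_example     # placeholder job name
#SBATCH --partition=a100           # default GPU partition (see table below)
#SBATCH --gres=gpu:1               # request one A100
#SBATCH --constraint=intel         # Feature from the table: run on an Intel host
#SBATCH --cpus-per-task=12         # stays within the 12-core Max Core/GPU limit on Intel hosts
#SBATCH --time=02:00:00            # wall time (HH:MM:SS)

# Load a CUDA toolchain; the exact module name depends on the local module tree.
module load cuda

./my_gpu_program
```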
Partition | Nodes | GPU Type | GPU/Node | --constraint | Host Architecture | Core/Node | Max Core/GPU | Mem/Node | Mem/Core | Scratch | Network |
---|---|---|---|---|---|---|---|---|---|---|---|
a100 | 10 | A100 40GB PCIe | 4 | amd,40g | AMD EPYC 7742 (Rome) | 64 | 16 | 512 GB | 8 GB | 2 TB NVMe | HDR200; 25GbE |
a100 | 2 | A100 40GB PCIe | 4 | intel,40g | Intel Xeon Gold 5220R (Cascade Lake) | 48 | 12 | 384 GB | 8 GB | 1 TB NVMe | 10GbE |
a100_multi | 10 | A100 40GB PCIe | 4 | amd,40g | AMD EPYC 7742 (Rome) | 64 | 16 | 512 GB | 8 GB | 2 TB NVMe | HDR200; 25GbE |
a100_nvlink | 2 | A100 80GB SXM | 8 | amd,80g | AMD EPYC 7742 (Rome) | 128 | 16 | 1 TB | 8 GB | 2 TB NVMe | HDR200; 25GbE |
a100_nvlink | 3 | A100 40GB SXM | 8 | amd,40g | AMD EPYC 7742 (Rome) | 128 | 16 | 1 TB | 8 GB | 12 TB NVMe | HDR200; 25GbE |
gtx1080 | 9 | GTX 1080 Ti 11GB | 4 | | Intel Xeon Silver 4112 (Skylake) | 8 | 2 | 96 GB | 12 GB | 480 GB SSD | 10GbE |
v100 | 1 | V100 32GB PCIe | 4 | | Intel Xeon Gold 6126 (Skylake) | 24 | 6 | 192 GB | 8 GB | 6 TB HDD | OPA; 10GbE |
power9 | 4 | V100 32GB SXM | 4 | | IBM Power System AC922 | 128 threads | 16 | 512 GB | 4 GB | 1 TB SSD | HDR100; 10GbE |
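For the a100_multi partition described above, which requires at least 2 nodes and 8 GPUs, a multi-node job might look like the following sketch; the task layout, wall time, module name, and program name are placeholders.

```bash
#!/bin/bash
#SBATCH --job-name=multi_gpu       # placeholder job name
#SBATCH --partition=a100_multi     # multi-node GPU partition (see table above)
#SBATCH --nodes=2                  # this partition requires at least 2 nodes
#SBATCH --gres=gpu:4               # 4 GPUs per node x 2 nodes = 8 GPUs, the required minimum
#SBATCH --ntasks-per-node=4        # one task per GPU is a common layout
#SBATCH --cpus-per-task=16         # matches the 16-core Max Core/GPU limit for these nodes
#SBATCH --time=08:00:00            # wall time (HH:MM:SS)

# Load a CUDA toolchain; the exact module name depends on the local module tree.
module load cuda

# srun launches one task per GPU across both nodes.
srun ./my_distributed_gpu_program
```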
VIZ Nodes
Nodes equipped with a graphical user interface (GUI), intended especially for visualization projects.
hostname | GPU Type | # GPUs | Host Architecture | Cores | Mem | Mem/Core | Scratch | Network |
---|---|---|---|---|---|---|---|---|
viz-n0 | GTX 1080 8GB | 2 | Intel Xeon E5-2680v4 (Broadwell) | 28 | 256 GB | 9.1 GB | 1.6 TB SSD | 10GbE |
viz-n1 | RTX 2080 Ti 11GB | 2 | Intel Xeon Gold 6226 (Cascade Lake) | 24 | 192 GB | 8 GB | 1.9 TB SSD | 10GbE |