Cluster Hardware Overview

The Pitt Center for Research Computing provides several types of hardware, each suited to a different class of advanced computing need:

  • MPI: Nodes for distributed computing across multiple nodes using the Message Passing Interface. These nodes are intended for tightly coupled codes parallelized with MPI that benefit from the low-latency Omni-Path (OPA) or InfiniBand (IB) interconnect fabrics (a minimal MPI sketch appears at the end of the MPI section below).
  • SMP: Nodes for shared-memory processing. SMP nodes are appropriate for programs parallelized with a shared-memory framework, and for users who want to move up from a laptop to the cluster while keeping the same programming style, for example running MATLAB (a shared-memory sketch appears at the end of the SMP section below).
  • GPU: Nodes for accelerated computing with Graphics Processing Units. GPU nodes target applications written specifically to exploit the inherent parallelism of general-purpose GPU architectures (a GPU-offload sketch appears at the end of the GPU section below).
  • VIZ: Nodes equipped with a graphical user interface (GUI), intended primarily for visualization projects.
  • HTC: Nodes designed for High Throughput Computing workflows such as gene sequence analysis and data-intensive analytics.

 

MPI

Omni-Path Network (OPA)

dual 14-core Broadwell CPU (Intel Xeon E5-2690 v4 2.60 GHz)

  • 96 nodes
  • 64 GB RAM
  • 256 GB SSD
  • Omni-Path

dual 14-core Skylake CPU (Intel Xeon Gold 6132 2.60 GHz)

  • 36 nodes
  • 192 GB RAM
  • 256 GB SSD & 500 GB SSD
  • Omni-Path

InfiniBand Network Partition (IB)

dual 10-core Haswell CPU (Intel Xeon E5-2660 v3 2.6 GHz)

  • 32 nodes
  • 128 GB RAM
  • 256 GB SSD
  • FDR InfiniBand
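
For reference, codes that target these nodes are tightly coupled MPI programs: many processes run across nodes and exchange messages over the Omni-Path or InfiniBand fabric. Below is a minimal, illustrative C sketch of such a program (not a CRC-provided example); the compiler wrapper (e.g. mpicc) and launcher (e.g. srun or mpirun) depend on the MPI toolchain in use.

    /* Minimal MPI sketch: every rank reports itself.                */
    /* Assumed, toolchain-dependent build and launch:                */
    /*   mpicc hello_mpi.c -o hello_mpi && srun ./hello_mpi          */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);               /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank    */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total ranks in the job */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                       /* shut down cleanly      */
        return 0;
    }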

 

SMP

(Most of these nodes have up to 512 GB of shared memory; one large-memory node below has 1024 GB.)

dual 16-core Rome CPU (AMD EPYC 7302 3.0 GHz); 10GigE

  • 58 nodes
  • 256 GB RAM
  • 256 GB SSD & 1 TB SSD

dual 12-core Skylake CPU (Intel Xeon Gold 6126 2.60 GHz); 10GigE

  • 132 nodes
  • 192 GB RAM
  • 256 GB SSD & 500 GB SSD

dual 6-core Broadwell CPU (Intel Xeon E5-2643 v4 3.40 GHz); 10GigE

  • 24 nodes
    • 256 GB RAM
    • 256 GB SSD & 1 TB SSD
  • 2 nodes
    • 256 GB RAM
    • 256 GB SSD & 3 TB SSD
  • 2 nodes
    • 512 GB RAM
    • 256 GB SSD & 3 TB SSD
  • 1 node
    • 256 GB RAM
    • 256 GB SSD & 6 TB NVMe

dual 16-core Naples CPU (AMD EPYC 2.40 GHz); 10GigE

  • 1 node
  • 1024 GB RAM
  • 256 GB SSD & 1 TB NVMe
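
As noted in the overview, SMP nodes run shared-memory parallel programs: all threads of a job live on one node and address the same RAM. The C sketch below uses OpenMP as one common shared-memory framework (an illustrative choice, not the only option on these nodes); compile with an OpenMP-capable compiler, for example gcc -fopenmp, and set the thread count with OMP_NUM_THREADS.

    /* Minimal shared-memory (OpenMP) sketch: threads on one node    */
    /* share the same data and split the loop among themselves.      */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        /* Each thread accumulates a private partial sum; the        */
        /* reduction combines them when the loop finishes.           */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / i;
        printf("partial harmonic sum = %f (up to %d threads)\n",
               sum, omp_get_max_threads());
        return 0;
    }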

 

GPU

  • 7 nodes with 4 NVIDIA Titan X GPUs/node. Each GPU has 12 GB of memory. The host node has 128 GB RAM and dual CPUs with 6 cores/CPU. The maximum number of CPUs per GPU that can be requested is 3.
  • 8 nodes with 4 NVIDIA GTX1080 GPUs/node. Each GPU has 8 GB of memory. The host node has 128 GB RAM and dual CPUs with 6 cores/CPU. The maximum number of CPUs per GPU that can be requested is 6.
  • 10 nodes with 4 NVIDIA GTX1080 Ti GPUs/node. Each GPU has 11 GB of memory. The host node has 96 GB RAM and dual CPUs with 4 cores/CPU. The maximum number of CPUs per GPU that can be requested is 6.
  • 1 node with 2 NVIDIA K40 GPUs. Each GPU has 12 GB of memory. The host node has 128 GB RAM and dual CPUs with 10 cores/CPU. The maximum number of CPUs per GPU that can be requested is 10.
  • 1 node with 4 NVIDIA V100 GPUs. Each GPU has 32 GB of memory. The host node has 192 GB RAM and dual CPUs with 12 cores/CPU. The maximum number of CPUs per GPU that can be requested is 6.
  • 3 nodes with 8 NVIDIA A100 GPUs/node. Each GPU has 40 GB of memory. The host node has 1 TB RAM and dual CPUs with 64 cores/CPU. The maximum number of CPUs per GPU that can be requested is 16.
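
Applications on these nodes are written explicitly for GPU parallelism, most commonly in CUDA or with a directive-based offload model. The C sketch below uses OpenACC directives purely as an illustration (an assumed toolchain; CUDA C/C++ is equally typical here) and requires an OpenACC-capable compiler such as NVIDIA's nvc with the -acc flag.

    /* Directive-based GPU offload sketch (OpenACC): the independent */
    /* loop iterations map onto the GPU's many parallel threads.     */
    #include <stdio.h>
    #define N 1000000

    static float a[N], b[N], c[N];

    int main(void)
    {
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        /* copyin/copyout move the arrays between host and GPU memory. */
        #pragma acc parallel loop copyin(a, b) copyout(c)
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[N-1] = %f\n", c[N - 1]);
        return 0;
    }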

 

VIZ (GUI)

dual 14-core Broadwell CPU (Intel Xeon E5-2680 v4 2.4 GHz)

  • 1 node
  • 256 GB RAM
  • 1.6 TB SSD (/scratch)
  • 2 NVIDIA GTX1080 graphics cards

 

HTC

The HTC cluster is designed to run high throughput computing jobs to support bioinformatics and health science research.

dual 8-core Haswell-EP CPU (Intel Xeon E5-2630 v3 2.4 GHz)

  • 20 nodes
  • 256 GB RAM
  • 256 GB SSD
  • FDR InfiniBand

dual 12-core Skylake CPU (Intel Xeon Gold 6126 2.60 GHz)

  • 4 nodes
  • 384 GB RAM
  • 256 GB SSD & 500 GB SSD
  • FDR InfiniBand

dual 24-core Cascade Lake CPU (Intel Xeon Gold 6248R 3.0 GHz)

  • 6 nodes
  • 768 GB RAM
  • 480 GB SSD & 960 GB SSD
  • 100 Gb/s HDR InfiniBand

dual 32-core Ice Lake CPU (Intel Xeon Platinum 8352Y 2.2 GHz); 64 cores/node

  • 18 nodes
  • 512 GB RAM
  • 2 TB NVMe drive for local scratch
  • 10 GbE

dual 32-core Ice Lake CPU (Intel Xeon Platinum 8352Y 2.2 GHz); 64 cores/node

  • 4 nodes
  • 1 TB RAM
  • 2 TB NVMe drive for local scratch
  • 10 GbE