Computing Clusters and Resources

For a detailed overview and guides to accessing CRC resources, see "Resource Documentation" under User Support. Apply for an account at https://crc.pitt.edu/apply.

Pitt CRC Hardware Resources

The Pitt Center for Research Computing provides several types of hardware for different advanced computing needs:

  • MPI: distributed computing across nodes using the Message Passing Interface (MPI). MPI nodes are intended for tightly coupled codes that are parallelized with MPI and benefit from the low-latency Omni-Path (OP) or InfiniBand (IB) interconnect fabrics; a minimal sketch of this style appears after this list.
  • SMP: shared-memory processing on a single node. SMP nodes are appropriate for programs parallelized with a shared-memory framework, and for users who want to move up to a supercomputer while keeping the programming style of their laptops, for example by running MATLAB. These nodes have up to 512 GB of shared memory; a shared-memory sketch follows the documentation link below.
  • GPU: accelerated computing using Graphics Processing Units (GPUs). GPU nodes are targeted at applications specifically written to take advantage of the inherent parallelism of general-purpose GPU architectures.
  • VIZ: nodes with a graphical user interface (GUI), intended especially for visualization projects.
  • HTC: HTC nodes are designed for High Throughput Computing workflows such as gene sequence analysis and data-intensive analytics.
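
To make the MPI model concrete, here is a minimal sketch in C of the message-passing style the MPI nodes are built for: independent processes (ranks), potentially spread across several nodes, exchanging data explicitly over the interconnect. It is illustrative only; the build and launch commands mentioned in the comments (for example, mpicc plus a batch job script) are assumptions, so check https://crc.pitt.edu/h2p for the supported compilers and MPI modules.

    /* Minimal MPI sketch (C): every rank contributes a value and rank 0
     * collects the sum over the interconnect. The build and launch steps
     * (e.g. "mpicc mpi_sum.c" and a cluster job script) are assumptions;
     * see https://crc.pitt.edu/h2p for the supported toolchains.         */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, sum = 0;

        MPI_Init(&argc, &argv);                /* start the MPI runtime         */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id, 0..size-1  */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes     */

        /* Each rank sends its id; MPI_Reduce combines them on rank 0,
         * moving the data between nodes over Omni-Path or InfiniBand.    */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks, sum of rank ids = %d\n", size, sum);

        MPI_Finalize();
        return 0;
    }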


Find the full documentation for the MPI, SMP, and GPU clusters at https://crc.pitt.edu/h2p.
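
For contrast with the MPI sketch above, the SMP nodes target the shared-memory model: all threads live in one process on a single node and read and write the same RAM. The OpenMP sketch below illustrates that style; the build flag shown in the comment (gcc -fopenmp) is an assumption, so confirm the recommended compiler in the SMP documentation.

    /* Minimal shared-memory sketch (C with OpenMP): threads within one
     * process split a loop and share the node's RAM, which is the model
     * the SMP nodes target. The build command ("gcc -fopenmp smp_sum.c")
     * is an assumption; confirm the recommended compiler with CRC docs.  */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        /* The iterations are divided among threads; the reduction clause
         * safely combines each thread's partial sum in shared memory.    */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1);

        printf("partial harmonic sum = %f (threads available: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }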

MPI (Omni-Path Network)
dual 14-core Broadwell CPU (Intel Xeon E5-2690 v4 2.60 GHz)

  • 96 nodes
  • 64 GB RAM
  • 256 GB SSD
  • Omni-Path

dual 14-core Skylake CPU (Intel Xeon Gold 6132 2.60 GHz)

  • 36 nodes
  • 192 GB RAM
  • 256 GB SSD & 500 GB SSD
  • Omni-Path

MPI (InfiniBand Network)

dual 10-core Haswell CPU (Intel Xeon E5-2660 v3 2.6 GHz)

  • 32 nodes
  • 128 GB RAM
  • 256 GB SSD
  • FDR InfiniBand


SMP
dual 12-core Skylake CPU (Intel Xeon Gold 6126 2.60 GHz); 10GigE

  • 132 nodes
  • 192 GB RAM
  • 256 GB SSD & 500 GB SSD

dual 6-core Broadwell CPU (Intel Xeon E5-2643 v4 3.40 GHz); 10GigE

  • 24 nodes
    • 256 GB RAM
    • 256 GB SSD & 1 TB SSD
  • 2 nodes
    • 256 GB RAM
    • 256 GB SSD & 3 TB SSD
  • 2 nodes
    • 512 GB RAM
    • 256 GB SSD & 3 TB SSD
  • 1 node
    • 256 GB RAM
    • 256 GB SSD & 6 TB NVMe

GPU

  • 5 nodes with 4 NVIDIA Titan GPUs/node
  • 7 nodes with 4 NVIDIA Titan X GPUs/node
  • 18 nodes with 4 NVIDIA GTX1080 GPUs/node
  • 1 node with 2 NVIDIA K40 GPUs/node
  • 1 node with 4 NVIDIA V100 32GB GPUs/node


VIZ (GUI Interface)
dual 14-core Broadwell CPU (Intel Xeon E5-2680 v4 2.4 GHz)

  • 1 node
  • 256 GB RAM
  • 1.6 TB SSD (/scratch)
  • 2 NVIDIA GTX1080 graphics cards

Find the full documentation for VIZ at https://crc.pitt.edu/viz.


HTC

The HTC cluster is designed to run high throughput computing jobs to support bioinformatics and health science research. Find the full documentation for HTC at: https://crc.pitt.edu/htc.

dual 8-core Haswell-EP CPU (Intel Xeon E5-2630 v3 2.4 GHz)

  • 20 nodes
  • 256 GB RAM
  • 256 GB SSD
  • FDR InfiniBand

dual 12-core Skylake CPU (Intel Xeon Gold 6126 2.60 GHz)

  • 4 nodes
  • 384 GB RAM
  • 256 GB SSD & 500 GB SSD
  • FDR InfiniBand