Computing Clusters and Resources

For a detailed overview and guides to accessing CRC resources, see "Resource Documentation" under User Support. Apply for an account at https://crc.pitt.edu/apply.

Pitt CRC Hardware Resources

The Pitt Center for Research Computing provides different types of hardware for different advanced computing needs:

  • MPI: distributed computing across nodes using the Message Passing Interface (MPI). MPI nodes are for tightly-coupled codes that are parallelized with MPI and benefit from the low-latency Omni-Path (OP) or InfiniBand (IB) interconnect fabrics (a minimal MPI example follows the MPI hardware listings below).
  • SMP: shared-memory processing on a single node. SMP nodes are appropriate for programs parallelized with a shared-memory framework, and for users who want to move up from a laptop or workstation without changing their programming style, for example by running MATLAB. These nodes have up to 512 GB of shared memory (a minimal OpenMP example follows the SMP hardware listing below).
  • GPU: accelerated computing using Graphics Processing Units (GPUs). GPU nodes are targeted at applications specifically written to take advantage of the inherent parallelism of general-purpose GPU architectures.
  • VIZ: nodes with a graphical user interface (GUI), intended especially for visualization projects.
  • HTC: high-throughput computing. HTC nodes are designed for high-throughput workflows such as gene sequence analysis and data-intensive analytics.

 

Find the full documentation for MPI, SMP and GPU at https://crc.pitt.edu/h2p.

MPI (Omni-Path Network)
28-core Broadwell Processors

  • 96 nodes
  • 64 GB RAM
  • 256 GB SSD
  • Omni-Path

 

MPI (InfiniBand Network)

20-core Haswell Processors

  • 32 nodes
  • 128 GB RAM
  • 256 GB SSD
  • FDR InfiniBand
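
The MPI nodes listed above target distributed-memory programs: each process (rank) has its own memory, possibly on a different node, and all communication goes through MPI calls over the Omni-Path or InfiniBand fabric. The short C program below is a minimal sketch of that model, assuming a standard MPI installation with the usual mpicc compiler wrapper and mpirun (or srun) launcher; the exact modules and job-submission steps for the CRC clusters are covered in the documentation at https://crc.pitt.edu/h2p.

    /* hello_mpi.c -- minimal MPI "hello world": each rank reports its ID
       and the node it runs on. Assumed (typical) build and launch:
           mpicc hello_mpi.c -o hello_mpi
           mpirun -np 56 ./hello_mpi      (or srun, depending on the site setup)
    */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size, name_len;
        char node_name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks     */
        MPI_Get_processor_name(node_name, &name_len);

        printf("Rank %d of %d running on %s\n", rank, size, node_name);

        MPI_Finalize();                          /* shut the runtime down     */
        return 0;
    }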

 

SMP
24-core Skylake Processors

  • 100 nodes
  • 192 GB RAM
  • 256 and 512 GB SSD

12-core Broadwell Processors

  • 24 nodes
    • 256 GB RAM
    • 256 GB & 1 TB SSD
  • 2 nodes
    • 256 GB RAM
    • 256 GB & 3 TB SSD
  • 2 nodes
    • 512 GB RAM
    • 256 GB & 3 TB SSD
  • 1 node
    • 256 GB RAM
    • 256 GB & 6 TB NVMe
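
The SMP nodes, by contrast, suit programs in which all workers share one node's memory. The sketch below uses OpenMP, one common shared-memory framework, to spawn several threads inside a single process that all see the same address space; it assumes a compiler with OpenMP support (for example gcc with -fopenmp) and says nothing about CRC-specific modules or scheduler settings, which are described in the documentation linked above.

    /* hello_smp.c -- minimal OpenMP (shared-memory) example. Assumed build:
           gcc -fopenmp hello_smp.c -o hello_smp
       The thread count is normally set with the OMP_NUM_THREADS variable. */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        /* Every thread executes this block; unlike MPI ranks, the threads
           all share the node's memory. */
        #pragma omp parallel
        {
            int tid = omp_get_thread_num();        /* this thread's ID         */
            int nthreads = omp_get_num_threads();  /* threads in this region   */
            printf("Thread %d of %d\n", tid, nthreads);
        }
        return 0;
    }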
       

GPU

  • 7 nodes with 4 NVIDIA Titan X Graphics Cards per node
  • 18 nodes with 4 NVIDIA GTX 1080 Graphics Cards per node
  • 5 nodes with 4 NVIDIA Titan Graphics Cards per node
  • 1 node with 2 NVIDIA K40 Graphics Cards

 

VIZ (GUI Interface)
28-core Broadwell Processors

  • 1 node
  • 256 GB RAM
  • 1.6 TB SSD (/scratch)
  • 2 NVIDIA GTX 1080 Graphics Cards

Find the full documentation for VIZ at https://crc.pitt.edu/viz.

 

HTC

The HTC cluster is designed to run high-throughput computing jobs that support bioinformatics and health science research. Find the full documentation for HTC at https://crc.pitt.edu/htc.

16-core Haswell Processors

  • 20 nodes
  • 256 GB RAM
  • 256 GB SSD
  • FDR InfiniBand

24-core Skylake Processors

  • 4 nodes
  • 384 GB RAM
  • 256 GB & 500 GB SSD
  • FDR InfiniBand