Computing Clusters and Resources

For a detailed overview and guides to accessing CRC resources, see "Resource Documentation" under User Support.

H2P Cluster Hardware Resources

H2P (Hail to Pitt) is the main cluster at the Center for Research Computing. H2P supports the following high-performance computing modes:

  • MPI: distributed computing across nodes using the Message Passing Interface (see the sketch below this list)
  • SMP: shared-memory processing on a single node
  • GPU: accelerated computing using Graphics Processing Units
  • VIZ: interactive visualization through a graphical interface
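
As a quick illustration of the MPI mode, the following is a minimal sketch using mpi4py, assuming an MPI-enabled Python environment is available on the cluster (the module and launch command are illustrative, not CRC-specific):

    from mpi4py import MPI  # assumes an MPI-enabled Python build

    comm = MPI.COMM_WORLD      # communicator spanning every rank in the job
    rank = comm.Get_rank()     # this process's ID within the communicator
    size = comm.Get_size()     # total number of ranks, possibly across nodes

    # Each rank reports in; with a multi-node allocation, ranks are
    # distributed across the Omni-Path or InfiniBand fabric.
    print(f"Hello from rank {rank} of {size}")

A script like this would typically be launched with something like mpirun -n 4 python hello_mpi.py, or through the cluster's job scheduler.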

Find the full documentation for H2P at https://crc.pitt.edu/h2p.

MPI (Omni-Path Network)
28-core Broadwell Processors

  • 96 nodes
  • 64 GB RAM
  • 256 GB SSD
  • Omni-Path


MPI (InfiniBand Network)

20-core Haswell Processors

  • 32 nodes
  • 128 GB RAM
  • 256 GB SSD
  • FDR InfiniBand


SMP
24-core Skylake Processors

  • 100 nodes
  • 192 GB RAM
  • 256 GB & 512 GB SSD

12-core Broadwell Processors

  • 24 nodes
    • 256 GB RAM
    • 256 GB & 1 TB SSD
  • 2 nodes
    • 256 GB RAM
    • 256 GB & 3 TB SSD
  • 1 node
    • 256 GB RAM
    • 256 GB & 6 TB NVMe
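
Jobs on these SMP nodes typically exploit all cores of a single node through shared memory rather than a network. A minimal sketch of that pattern using only Python's standard library (the pool size of 24 simply matches the Skylake core count and is purely illustrative):

    from multiprocessing import Pool

    def square(x):
        # CPU-bound work; each worker is a separate process on the same node.
        return x * x

    if __name__ == "__main__":
        # 24 workers to match a 24-core SMP node (illustrative only).
        with Pool(processes=24) as pool:
            results = pool.map(square, range(1_000))
        print(sum(results))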

GPU

  • 7 nodes with 4 NVIDIA Titan X graphics cards per node
  • 18 nodes with 4 NVIDIA GTX 1080 graphics cards per node
  • 5 nodes with 4 NVIDIA Titan graphics cards per node
  • 1 node with 2 NVIDIA K40 graphics cards
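
One common way to confirm that a job actually landed on a GPU node is to query the devices from inside the job. A minimal sketch, assuming a PyTorch installation is available (PyTorch is an assumption here, not part of the hardware listing):

    import torch  # assumes PyTorch with CUDA support is installed

    if torch.cuda.is_available():
        count = torch.cuda.device_count()  # up to 4 per node on these systems
        for i in range(count):
            print(i, torch.cuda.get_device_name(i))
        # Run a small computation on the first GPU to verify it works.
        x = torch.randn(1024, 1024, device="cuda:0")
        print((x @ x).sum().item())
    else:
        print("No CUDA device visible; request a GPU node from the scheduler.")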


VIZ (GUI Interface)
28-core Broadwell Processors

  • 1 node
  • 256 GB RAM
  • 1.6 TB SSD (/scratch)
  • 2 NVIDIA GTX 1080 graphics cards

Find the full documentation at https://crc.pitt.edu/viz.


HTC Cluster Hardware Resources

The HTC cluster is designed to run high-throughput computing jobs in support of bioinformatics and health science research. Find the full documentation at https://crc.pitt.edu/htc.
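
High-throughput workloads are typically many small, independent tasks rather than one tightly coupled computation. The sketch below illustrates that pattern with Python's standard library; on the cluster itself such tasks would more often be submitted as separate scheduler jobs, and process_sample is a hypothetical stand-in for a real analysis step:

    from concurrent.futures import ProcessPoolExecutor

    def process_sample(sample_id):
        # Hypothetical independent task, e.g. one input file or one sample.
        return sample_id, sample_id ** 2

    if __name__ == "__main__":
        samples = range(100)  # stand-in for a list of independent inputs
        with ProcessPoolExecutor() as pool:
            for sample_id, result in pool.map(process_sample, samples):
                print(sample_id, result)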

16-core Haswell Processors

  • 20 nodes
  • 256 GB RAM
  • 256 GB SSD
  • FDR InfiniBand

24-core Skylake Processors

  • 4 nodes
  • 384 GB RAM
  • 256 GB & 500 GB SSD
  • FDR InfiniBand