CRC Resource Descriptions for Writing Proposals

The information below is provided for use in proposals or other documents that call for descriptions of the Pitt CRC computational resources available to researchers.

Text Version

Access to computing hardware, software, and research consulting is provided through the Pitt Center for Research Computing (Pitt CRC). CRC provides in-house high-performance computing (HPC) resources allocated for shared use by campus researchers.

Pitt CRC provides state-of-the-art HPC clusters. The clusters comprise 58 32-core AMD EPYC Rome, 136 24-core Xeon Gold, 29 12-core Xeon E5, 20 16-core Xeon E5, 132 28-core Xeon E5, and 32 20-core Xeon E5 compute nodes, for a total of 10,124 computation-only CPU cores.
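
For reference, the stated total follows directly from the node counts above; a minimal sketch of the arithmetic in Python:

    # Total computation-only CPU cores, from the node counts listed above.
    node_groups = {
        "32-core AMD EPYC Rome": (58, 32),
        "24-core Xeon Gold": (136, 24),
        "12-core Xeon E5": (29, 12),
        "16-core Xeon E5": (20, 16),
        "28-core Xeon E5": (132, 28),
        "20-core Xeon E5": (32, 20),
    }
    total_cores = sum(nodes * cores for nodes, cores in node_groups.values())
    print(total_cores)  # 10124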

There are an additional 33 dedicated GPU nodes with a total of 128 Titan-class or better GPUs. These nodes have 62 GB to 125 GB of shared memory per node. The global storage comprises a 130 TB Isilon home space (which is backed up), two 500 TB ZFS file systems, and a 1.6 PB BeeGFS parallel file system.

The systems are housed at the enterprise-level Network Operations Center (NOC) and are administered jointly with Pitt IT. Pitt IT maintains the critical environmental infrastructure (power, cooling, networking, and security) and administers the cluster operating systems and storage backups. CRC interfaces directly with researchers and provides software installation services, training workshops, and personalized consultation on improving software design/performance and computational workflows.

Detail Version

Overview

Pitt Center for Research Computing provides different types of hardware for different advanced computing needs. The characteristics of each compute cluster are described first, and detailed specifications for each kind of hardware are listed further below.

SMP nodes are appropriate for programs that are parallelized using the shared-memory framework. They are also appropriate for researchers who want to move up from their laptops to more powerful hardware while keeping the same programming style, such as running MATLAB. These nodes have up to 1 TB of shared memory.
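
To illustrate the shared-memory style these nodes support, the sketch below uses Python's standard multiprocessing module on a single node; the worker function and problem size are hypothetical placeholders.

    # Shared-memory-style parallelism on one SMP node (hypothetical workload).
    from multiprocessing import Pool

    def simulate(sample):
        # Placeholder for a real per-sample computation.
        return sample ** 2

    if __name__ == "__main__":
        # Spread the work across 16 of the node's cores.
        with Pool(processes=16) as pool:
            results = pool.map(simulate, range(1000))
        print(sum(results))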

HTC nodes are designed for High Throughput Computing workflows such as sequence analysis and some data-intensive analytics.
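
A high-throughput workflow fans out many independent tasks with no communication between them; the sketch below shows that pattern with Python's concurrent.futures (the input file names and the per-file analysis step are hypothetical).

    # High-throughput pattern: many independent tasks, no inter-task communication.
    from concurrent.futures import ProcessPoolExecutor

    def analyze(sequence_file):
        # Placeholder for a real per-file analysis step.
        return sequence_file, len(sequence_file)

    # Hypothetical list of input files.
    inputs = [f"sample_{i}.fastq" for i in range(100)]

    if __name__ == "__main__":
        with ProcessPoolExecutor() as executor:
            for name, result in executor.map(analyze, inputs):
                print(name, result)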

MPI nodes are for tightly coupled codes that are parallelized using the Message Passing Interface (MPI) and benefit from the low-latency Omni-Path (OP) or InfiniBand (IB) interconnect fabrics.
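
Communication in such codes goes through MPI calls; a minimal sketch using mpi4py (assuming the mpi4py package is available alongside an MPI library) is shown below. It would typically be launched with something like mpirun -n 28 python example.py.

    # Minimal MPI example: every rank contributes a value, rank 0 prints the sum.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    local_value = rank + 1                    # hypothetical per-rank result
    total = comm.reduce(local_value, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} ranks, reduced total = {total}")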

GPU nodes are targeted at applications specifically written to take advantage of the inherent parallelism of general-purpose graphics processing unit architectures.
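
As a minimal illustration of GPU offloading, the sketch below moves an array computation onto a GPU with CuPy; the availability of CuPy (or any particular GPU library) on a given node is an assumption, not a statement about the installed software stack.

    # Minimal GPU offload example with CuPy (requires a CUDA-capable GPU).
    import cupy as cp

    x = cp.arange(1_000_000, dtype=cp.float32)   # array allocated on the GPU
    y = cp.sqrt(x) * 2.0                         # computed on the GPU
    print(float(y.sum()))                        # scalar copied back to the host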

Hardware Specifications

SMP Standard

  • 58 nodes with dual 16-core AMD EPYC 7302 3.0 GHz (Rome)
    • 256 GB RAM
    • 256 GB SSD & 1 TB SSD
    • 10GigE
  • 132 nodes with dual 12-core Intel Xeon Gold 6126 2.60 GHz (Skylake)
    • 192 GB RAM
    • 256 GB SSD & 500 GB SSD
    • 10GigE
  • 24 nodes with dual 6-core Intel Xeon E5-2643 v4 3.40 GHz (Broadwell)
    • 256 GB RAM
    • 256 GB SSD & 1 TB SSD
    • 10GigE

SMP Specialty

  • 2 nodes with dual 6-core Intel Xeon E5-2643 v4 3.40 GHz (Broadwell)
    • 256 GB RAM
    • 256 GB SSD & 3 TB SSD
    • 10GigE
  • 2 nodes with dual 6-core Intel Xeon E5-2643 v4 3.40 GHz (Broadwell)
    • 512 GB RAM
    • 256 GB SSD & 3 TB SSD
    • 10GigE
  • 1 node with dual 6-core Intel Xeon E5-2643 v4 3.40 GHz (Broadwell)
    • 256 GB RAM
    • 256 GB SSD & 6 TB NVMe
    • 10GigE
  • 1 node with dual 16-core AMD EPYC 2.40 GHz (Naples)
    • 1024 GB RAM
    • 256 GB SSD & 1 TB NVMe
    • 10GigE

HTC

  • 4 nodes with dual 12-core Intel Xeon Gold 6126 2.60 GHz (Skylake)
    • 384 GB RAM
    • 256 GB SSD & 500 GB SSD
    • FDR InfiniBand
  • 20 nodes with dual 8-core Intel Xeon E5-2630 v3 2.4GHz (Haswell-EP)
    • 256 GB RAM
    • 256 GB SSD
    • FDR InfiniBand

MPI - OP1

  • 96 nodes with dual 14-core Intel Xeon E5-2690 v4 2.60 GHz (Broadwell)
    • 64 GB RAM
    • 256 GB SSD
    • 100 Gb Omni-Path

MPI - OP2

  • 36 nodes with dual 14-core Intel Xeon Gold 6132 2.60 GHz (Skylake)
    • 192 GB RAM
    • 256 GB SSD & 500 GB SSD
    • 100 Gb Omni-Path

MPI - IB

  • 32 nodes with dual 10-core Intel Xeon E5-2660 v3 2.6 GHz (Haswell)
    • 128 GB RAM
    • 256 GB SSD
    • FDR InfiniBand

GPU

  • 5 nodes with 4 NVIDIA Titan GPUs/node
  • 7 nodes with 4 NVIDIA Titan X GPUs/node
  • 18 nodes with 4 NVIDIA GeForce GTX 1080 GPUs/node
  • 1 node with 2 NVIDIA K40 GPUs
  • 1 node with 4 NVIDIA V100 32 GB GPUs

Technical Support and Funding

The cluster compute nodes were purchased with funds provided by the University and by faculty researchers. The systems are housed at the enterprise-level Network Operations Center (NOC) and are administered jointly with Pitt IT. Pitt IT maintains the critical environmental infrastructure (power, cooling, networking, and security) and administers the cluster operating systems and storage backups. CRC interfaces directly with researchers and provides software installation services, training workshops, and personalized consultation on improving software design/performance and computational workflows. The road map for research computing infrastructure is developed jointly by CRC and CSSD to meet the emerging needs of researchers at the University.

Connectivity between the NOC and the main campus is via two 100 Gbps fibers, and to Internet2 via 100 Gbps. The global storage comprises a 130 TB Isilon home space (which is backed up), two 500 TB ZFS file systems, and a 1.6 PB BeeGFS parallel file system.

This infrastructure is designed for future scaling via additional resources funded by research instrumentation grants, internal University funds, or faculty contributions from grants or start-up funds. 

The clusters run Red Hat Enterprise Linux 6 and 7. A very wide range of major software packages is licensed and installed on the clusters, ranging from quantum mechanics (e.g., Gaussian, Molpro, VASP, CP2K, QMC), to classical mechanics (e.g., NAMD, LAMMPS, Amber), to continuum mechanics (e.g., Abaqus, ANSYS, Lumerical), to genomics analysis suites (e.g., TopHat/Bowtie, CLC Genomics Server).