Pitt CRC Resource Descriptions for Writing Proposals

The information below is provided to use in proposals or other documents calling for descriptions of Pitt CRC computational resources available to researchers.

Text Version

Access to computing hardware, software, and research consulting is provided through the Pitt Center for Research Computing (Pitt CRC) (www.crc.pitt.edu). Pitt CRC provides in-house high-performance computing (HPC) resources allocated for shared use by campus researchers.

Pitt CRC provides state-of-the-art HPC clusters. The clusters comprise 136 24-core Xeon Gold, 29 12-core Xeon E5, 20 16-core Xeon E5, 132 28-core Xeon E5/Gold, and 32 20-core Xeon E5 compute nodes, totaling 8268 computation-only CPU cores.
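
For reference, the core total follows directly from the node counts above. The short Python sketch below simply reproduces that arithmetic; the node and per-node core counts are the ones listed in this document.

    # Reproduce the CPU-core total from the node inventory listed above.
    # Each entry is (number of nodes, cores per node), as stated in this document.
    node_groups = [
        (136, 24),  # 24-core Xeon Gold nodes
        (29, 12),   # 12-core Xeon E5 nodes
        (20, 16),   # 16-core Xeon E5 nodes
        (132, 28),  # 28-core Xeon E5/Gold nodes
        (32, 20),   # 20-core Xeon E5 nodes
    ]
    total_cores = sum(nodes * cores for nodes, cores in node_groups)
    print(total_cores)  # prints 8268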

There are an additional 33 dedicated GPU nodes with a total of 128 Titan-class or better GPUs. These nodes have 62 GB to 125 GB of shared memory per node. The global storage comprises a 130 TB Isilon home space (which is backed up), a 450 TB Lustre parallel file system, two 500 TB ZFS file systems, and a 1.6 PB BeeGFS parallel file system.

The systems are housed at the enterprise-level Network Operations Center (NOC) and are administered jointly with Computing Services and Systems Development (CSSD). CSSD maintains the critical environmental infrastructure (power, cooling, networking) and administers the cluster operating systems and storage backups. CRC interfaces directly with researchers and provides software installation services, training workshops, and personalized consultation on improving software design/performance and computational workflows.

 

Detail Version

Overview
The Pitt Center for Research Computing provides different types of hardware for different advanced computing needs. The characteristics of each cluster type are described first; detailed hardware specifications are listed further below.

SMP nodes are appropriate for programs parallelized using a shared-memory framework. They are also appropriate for researchers who want to move up from their laptops to a supercomputer while keeping the same programming style, such as running MATLAB. These nodes have up to 512 GB of shared memory.
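
As an illustration only (not taken from Pitt CRC documentation), the Python sketch below shows the single-node, shared-memory style of parallelism that SMP nodes target: all workers run on one node and operate on data held in that node's memory. Compiled OpenMP codes and multithreaded MATLAB follow the same one-node, many-core model.

    # Minimal sketch of single-node parallelism of the kind SMP nodes target.
    # Every worker runs on the same node, so no inter-node communication is needed.
    from multiprocessing import Pool, cpu_count

    def square(x):
        # Stand-in for a per-item computation.
        return x * x

    if __name__ == "__main__":
        data = range(1_000_000)
        # Use all cores available on the node (e.g., 24 on an SMP Standard node).
        with Pool(processes=cpu_count()) as pool:
            results = pool.map(square, data)
        print(sum(results))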

HTC nodes are designed for High Throughput Computing workflows such as sequence analysis and some data-intensive analytics.

MPI nodes are for tightly coupled codes that are parallelized using the Message Passing Interface (MPI) and benefit from the low-latency Omni-Path (OP) or InfiniBand (IB) interconnect fabrics.
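
As a hedged sketch (it assumes an MPI library and the mpi4py Python bindings are available, which this document does not state), the example below shows the message-passing pattern these nodes and their interconnects are built for.

    # Minimal MPI sketch: each rank computes a partial result and rank 0
    # gathers the total. The reduction traffic is what the low-latency
    # Omni-Path/InfiniBand fabrics accelerate when ranks span multiple nodes.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank works on its own slice of the problem.
    local_sum = sum(range(rank, 1_000_000, size))

    # Combine the partial sums onto rank 0.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print(total)

Such a script would typically be launched with mpirun (or the scheduler's MPI launcher), one rank per core across the allocated nodes.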

GPU nodes are targeted at applications specifically written to take advantage of the inherent parallelism of general-purpose graphics processing unit (GPGPU) architectures.
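
As an illustrative sketch only (it assumes a CUDA-capable Python stack such as CuPy is installed, which this document does not state), the example below offloads an array computation to a GPU in the NumPy-like style many GPGPU applications use.

    # Minimal sketch of offloading an element-wise computation to a GPU.
    # CuPy mirrors the NumPy API but allocates arrays in GPU memory.
    import cupy as cp

    # Create the data on the GPU and run the kernel there.
    x = cp.arange(10_000_000, dtype=cp.float32)
    y = cp.sqrt(x) * 2.0

    # Bring the reduced result back to the host.
    print(float(y.sum()))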

 

Hardware Specifications

SMP Standard

132 nodes of 24-core Xeon Gold 6126 2.60 GHz (Skylake)
        192 GB RAM
        256 GB SSD & 500 GB SSD
        10 GigE

24 nodes of 12-core Xeon E5-2643v4 3.40 GHz (Broadwell)
        256 GB RAM
        256 GB SSD & 1 TB SSD
        10 GigE

SMP Specialty

2 nodes of 12-core Xeon E5-2643v4 3.40 GHz (Broadwell)
        256 GB RAM
        256 GB SSD & 3 TB SSD
        10 GigE

2 nodes of 12-core Xeon E5-2643v4 3.40 GHz (Broadwell)
        512 GB RAM
        256 GB SSD & 3 TB SSD
        10 GigE

1 node of 12-core Xeon E5-2643v4 3.40 GHz (Broadwell)
        256 GB RAM
        256 GB SSD & 6 TB NVMe
        10 GigE

HTC

4 nodes of 24-core Xeon Gold 6126 2.60 GHz (Skylake)
        384 GB RAM
        256 GB SSD & 500 GB SSD
        FDR InfiniBand

20 nodes of 16-core Intel Xeon E5-2630v3 2.40 GHz (Haswell-EP)
        256 GB RAM
        256 GB SSD
        FDR InfiniBand

MPI - OP1

96 nodes of 28-core Intel Xeon E5-2690 2.60 GHz (Broadwell)
        64 GB RAM
        256 GB SSD
        100 Gb Omni-Path

MPI - OP2

36 nodes of 28-core Intel Xeon Gold 6132 2.60 GHz (Skylake)
        192 GB RAM
        256 GB SSD & 500 GB SSD
        100 Gb Omni-Path

MPI - IB

32 nodes of 20-core Intel Xeon E5-2660 v3 2.60 GHz (Haswell)
        128 GB RAM
        256 GB SSD
        FDR InfiniBand

GPU

5 nodes with 4 NVIDIA Titan GPGPUs/node
7 nodes with 4 NVIDIA Titan X GPGPUs/node
18 nodes with 4 NVIDIA GeForce GTX 1080 GPGPUs/node
1 node with 2 NVIDIA K40 GPGPUs
1 node with 4 NVIDIA V100 32 GB GPGPUs

 

Technical Support and Funding

The cluster compute nodes were purchased with funds provided by the University and by faculty researchers. The systems are housed at the enterprise-level Network Operations Center (NOC) and are administered jointly with Computing Services and Systems Development (CSSD). CSSD maintains the critical environmental infrastructure (power, cooling, networking) and administers the cluster operating systems and storage backups. CRC interfaces directly with researchers and provides software installation services, training workshops, and personalized consultation on improving software design/performance and computational workflows. The road map for research computing infrastructure is developed jointly by CRC and CSSD to meet the emerging needs of researchers at the University.

Connectivity between the NOC and the main campus is via two 100 Gbps fibers, and to Internet2 via 100 Gbps. The global storage comprises a 130 TB Isilon home space (which is backed up), a 450 TB Lustre parallel file system, two 500 TB ZFS file systems for archival, and a 1.6 PB BeeGFS parallel file system.

This infrastructure is designed for future scaling via additional resources funded by research instrumentation grants, internal University funds, or faculty contributions from grants or start-up funds. 

The cluster operating systems are Red Hat Enterprise Linux 6 and 7. A wide range of major software packages is licensed and installed on the clusters, spanning quantum mechanics (e.g., Gaussian, Molpro, VASP, CP2K, QMC), classical mechanics (e.g., NAMD, LAMMPS, Amber), continuum mechanics (e.g., Abaqus, ANSYS, COMSOL, Lumerical), and genomics analysis suites (e.g., TopHat/Bowtie, CLC Genomics Server).