Computational Facilities

Access to computing hardware, software, and research consulting is provided through the Center for Research Computing (CRC) (www.crc.pitt.edu). CRC provides in-house high-performance computing (HPC) resources allocated for shared use by campus researchers. The cluster compute nodes were purchased with funds provided by the University, by faculty researchers, and by an NSF Major Research Instrumentation grant. The systems are housed at the enterprise-level Network Operations Center (NOC) and are administered jointly with Computing Services and Systems Development (CSSD). CSSD maintains the critical environmental infrastructure (power, cooling, networking) and administers the cluster operating systems and storage backups. CRC interfaces directly with researchers and provides software installation services, training workshops, and personalized consultation on improving software design, performance, and computational workflows. The road map for research computing infrastructure is developed jointly by CRC and CSSD to meet the emerging needs of researchers at the University.

Many of the nodes are connected via a fast, low-latency InfiniBand or Omni-Path network fabric to enable efficient distributed parallel runs. Connectivity between the NOC and the main campus is provided by two 100 Gbps fibers, and connectivity to Internet2 is at 100 Gbps. The global storage comprises 130 TB of Isilon home space, 80 TB of standard NFS home space, a 450 TB Lustre parallel filesystem, and a 1 PB ZFS filesystem for archival storage. This infrastructure is designed for future scaling via additional resources funded by research instrumentation grants, internal University funds, or faculty contributions from grants or start-up funds.

The cluster operating systems are Red Hat Enterprise Linux 6 and 7. A wide range of major software packages is licensed and installed on the cluster, ranging from quantum mechanics (e.g., Gaussian, Molpro, VASP, CP2K, QMC), to classical mechanics (e.g., NAMD, LAMMPS, Amber), to continuum mechanics (e.g., Abaqus, ANSYS, COMSOL, Lumerical), to genomics analysis suites (e.g., TopHat/Bowtie, CLC Genomics Server).

Because of the diverse needs of the research community, the University computing facilities are highly heterogeneous, with the various clusters designed to target specific computational workflows. Each cluster's hardware and target users are described below:

  1. Tightly-coupled codes that are parallelized using the Message Passing Interface (MPI) can benefit from the low-latency Omni-Path (OP) or InfiniBand (IB) interconnect fabrics; an illustrative usage sketch follows the node specifications below.

    MPI-OP
        96 nodes of 28-core Intel Xeon E5-2690 2.60 GHz (Broadwell)
        64 GB RAM/node
        256 GB SSD
        100 Gb Omni-Path

    MPI-IB
        32 nodes of 20-core Intel Xeon E5-2660 v3 2.60 GHz (Haswell)
        128 GB RAM/node
        256 GB SSD
        FDR InfiniBand
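
As an illustration of the tightly-coupled pattern these fabrics serve, the following minimal sketch distributes a quadrature-based estimate of pi across MPI ranks and combines the partial sums with a reduction. It assumes the mpi4py Python bindings are available on the cluster (an assumption; they are not named in the software list above), and the file name and launch command are illustrative only.

    # mpi_pi.py -- minimal mpi4py sketch (hypothetical example; assumes mpi4py is installed)
    # Each rank integrates a strided share of [0, 1] for pi = integral of 4/(1+x^2),
    # and the partial sums are combined with an MPI reduction over the low-latency fabric.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()          # index of this MPI process
    size = comm.Get_size()          # total number of MPI processes

    n = 10_000_000                  # total quadrature points
    h = 1.0 / n
    local = 0.0
    for i in range(rank, n, size):  # strided decomposition across ranks
        x = h * (i + 0.5)
        local += 4.0 / (1.0 + x * x)

    pi = comm.reduce(local * h, op=MPI.SUM, root=0)   # combine partial results on rank 0
    if rank == 0:
        print(f"pi ~= {pi:.10f} using {size} ranks")

A job of this form would typically be launched across nodes with an MPI launcher (e.g., mpirun -n 56 python mpi_pi.py for two 28-core MPI-OP nodes); the exact launcher and scheduler directives depend on the site configuration.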

  2. Serial jobs, and programs that are parallelized using a shared-memory framework, can benefit from the SMP cluster, which comprises the following nodes (an illustrative sketch follows the node specifications):

    SMP-Standard
        24 nodes of 12-core Xeon E5-2643 v4 3.40 GHz (Broadwell)
        256 GB RAM
        256 GB SSD & 1 TB SSD
        10GigE

    SMP-Specialty
        2 nodes of 12-core Xeon E5-2643 v4 3.40 GHz (Broadwell)
        256 GB RAM
        256 GB SSD & 3 TB SSD
        10GigE

        2 nodes of 12-core Xeon E5-2643 v4 3.40 GHz (Broadwell)
        512 GB RAM
        256 GB SSD & 3 TB SSD
        FDR InfiniBand

        1 node of 12-core Xeon E5-2643 v4 3.40 GHz (Broadwell)
        256 GB RAM
        256 GB SSD & 6 TB NVMe
        GigE
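
As an illustration of the shared-memory pattern that maps onto a single SMP node, the sketch below fans an independent per-sample computation out across the cores of one node using Python's standard multiprocessing module; the task function and inputs are hypothetical placeholders.

    # smp_pool.py -- shared-memory sketch for a single SMP node (illustrative only)
    # All worker processes run on one node and share its large RAM and node-local SSD,
    # so no inter-node communication is involved.
    import os
    from multiprocessing import Pool

    def analyze(sample_id):
        # Placeholder per-sample computation; a real workload would read data from
        # the node-local SSD and return a summary for this sample.
        return sample_id, sum(i * i for i in range(100_000))

    if __name__ == "__main__":
        samples = range(48)                            # hypothetical independent tasks
        with Pool(processes=os.cpu_count()) as pool:   # one worker per core on the node
            for sample_id, value in pool.imap_unordered(analyze, samples):
                print(sample_id, value)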

  3. High-throughput computing workflows, such as next-generation sequencing assembly and data-intensive analytics, can benefit from the HTC cluster, which comprises the following nodes (a brief sketch of this task-parallel pattern follows the specifications):

    HTC
        20 nodes of 16-core Intel Xeon E5-2630 v3 2.40 GHz (Haswell-EP)
        256 GB RAM
        256 GB SSD
        FDR InfiniBand
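
High-throughput workloads are typically many independent single-node tasks rather than one tightly-coupled job; the sketch below illustrates that pattern by processing a directory of input files in parallel. The directory, file pattern, and per-file analysis are hypothetical placeholders, and in practice each file might instead be submitted as a separate scheduler job.

    # htc_batch.py -- high-throughput sketch: many independent per-file tasks (illustrative)
    # Each input file is handled in its own worker process with no communication between
    # tasks, which is the pattern the HTC nodes are intended for.
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    def process_file(path):
        # Placeholder analysis: count the lines in one input file.
        with open(path) as handle:
            return path.name, sum(1 for _ in handle)

    if __name__ == "__main__":
        inputs = sorted(Path("samples").glob("*.fastq"))   # hypothetical input directory
        with ProcessPoolExecutor(max_workers=16) as pool:  # one worker per core
            for name, lines in pool.map(process_file, inputs):
                print(name, lines)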

  4. Applications written to take advantage of non-traditional architectures, such as NVIDIA GPUs and Intel Knights Landing many-core CPUs, can benefit from the HTA cluster (a short GPU offload sketch follows the node list):

    HTA
        7 nodes with 4 NVIDIA Titan X GPGPUs/node
        8 nodes with 4 NVIDIA GeForce GTX 1080 GPGPUs/node
        1 node with 2 NVIDIA K40 GPGPUs
        8 nodes of Intel KNL
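
As a sketch of GPU offload on these nodes, the example below performs a matrix multiplication on an NVIDIA GPU using the CuPy library; CuPy is used purely for illustration and is not implied to be part of the licensed software list above.

    # gpu_matmul.py -- GPU offload sketch using CuPy (illustrative; assumes CuPy and a CUDA driver)
    # The operands are copied to GPU memory, the matrix product executes on the GPU,
    # and only the result is copied back to host memory for inspection.
    import numpy as np
    import cupy as cp

    a_host = np.random.rand(4096, 4096).astype(np.float32)
    b_host = np.random.rand(4096, 4096).astype(np.float32)

    a_gpu = cp.asarray(a_host)      # host -> device transfer
    b_gpu = cp.asarray(b_host)
    c_gpu = a_gpu @ b_gpu           # matrix multiply executed on the GPU
    c_host = cp.asnumpy(c_gpu)      # device -> host transfer

    print("result checksum:", float(c_host.sum()))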

  5. All of the above workflows are also supported on the Legacy cluster, which comprises the following heterogeneous architectures:

    Legacy

  • 20 nodes with 16-core Intel Ivy Bridge (E5-2650 v2) 2.6 GHz, 64 GB of RAM, 1 TB HDD, and FDR IB.
  • 24 nodes with 16-core Intel Sandy Bridge (E5-2650) 2.6 GHz, 128 GB of RAM, 1 TB HDD, and FDR IB.
  • 82 nodes with 16-core Intel Sandy Bridge (E5-2670) 2.6 GHz. 36 have 32 GB of RAM, 1 TB HDD, connected by FDR IB; 36 have 64 GB of RAM, 1 TB HDD, connected by FDR IB; 8 have 64 GB of RAM, 2 TB HDD, connected by GigE; 2 have 128 GB of RAM, 3 TB HDD, connected by FDR IB.
  • 23 nodes with 48-core AMD Magny-Cours (Opteron 6172) 2.1 GHz CPUs. 2 nodes have 256 GB RAM, 18 have 128 GB RAM, and 3 have 64 GB RAM.
  • 44 nodes with 12-core Intel Westmere (X5650) 2.67 GHz CPUs and 48 GB RAM.
  • 110 nodes with 8-core Intel Nehalem CPUs (2.93 GHz X5570, 2.67 GHz X5550, and 2.27 GHz L5520). 8 have 48 GB RAM, 56 have 12 GB RAM, and 46 have 24 GB RAM.
  • 54 nodes with 64-core AMD Interlagos (Opteron 6276) 2.3 GHz, QDR IB, and 2 TB HDD. 18 nodes have 256 GB RAM; 36 nodes have 128 GB RAM.
  • 4 nodes with 4 NVIDIA Tesla C2050 GPGPUs.
  • 4 nodes with 4 NVIDIA GTX Titan GPGPUs.
  • 1 node with 8-core Intel Sandy Bridge (E5-2643), 128 GB RAM, and 3 TB SSD.
  • 1 node with 12-core Haswell (E5-2620 v3) 2.4 GHz, 128 GB RAM, 2 x 250 GB HDD, and 2 x 800 GB SSD.