Resource Descriptions For Writing Proposals


Useful Resource Information For Writing External Proposals

The information below may be of use to you in drafting proposals or other documents calling for descriptions of Pitt CRC computational resources available to researchers.


Brief Text Description of CRC Resources

Access to computing hardware, software, and research consulting is provided through the Pitt Center for Research Computing (Pitt CRC).

The CRC provides state-of-the-art high-performance computing (HPC) resources allocated for shared use by campus researchers.

The clusters comprise 136 dual 24-core Intel Xeon Gold Ice Lake, 22 dual 32-core Intel Xeon Platinum Ice Lake, 58 dual 16-core AMD EPYC Rome, 136 dual 12-core Intel Xeon Gold Skylake, 29 dual 6-core Intel Xeon E5, 20 dual 8-core Intel Xeon E5, 132 dual 14-core Intel Xeon E5, and 32 dual 10-core Intel Xeon E5 compute nodes, totaling 18,060 computation-only CPU cores.
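As a quick arithmetic check, the node counts and per-node core counts quoted above can be multiplied out and summed; the minimal Python sketch below, using only figures from this description, reproduces the 18,060-core total.

    # Each entry: (number of nodes, cores per processor, processors per node),
    # taken from the compute-node description above.
    node_types = [
        (136, 24, 2),  # dual 24-core Intel Xeon Gold Ice Lake
        (22,  32, 2),  # dual 32-core Intel Xeon Platinum Ice Lake
        (58,  16, 2),  # dual 16-core AMD EPYC Rome
        (136, 12, 2),  # dual 12-core Intel Xeon Gold Skylake
        (29,   6, 2),  # dual 6-core Intel Xeon E5
        (20,   8, 2),  # dual 8-core Intel Xeon E5
        (132, 14, 2),  # dual 14-core Intel Xeon E5
        (32,  10, 2),  # dual 10-core Intel Xeon E5
    ]

    total_cores = sum(nodes * cores * sockets for nodes, cores, sockets in node_types)
    print(total_cores)  # prints 18060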

There are an additional 32 dedicated GPU nodes with a total of 20 NVIDIA V100 and 128 NVIDIA A100 GPUs. The nodes have between 512 GB and 1024 GB of shared memory per node. The global storage comprises a 130 TB Isilon home space (which is backed up), 3 PB of ZFS file systems, and a 1.6 PB BeeGFS parallel file system.

The systems are housed at the enterprise-level Network Operations Center (NOC) and are administered jointly with Pitt IT. Pitt IT maintains the critical environmental infrastructure (power, cooling, networking, and security) and administers the cluster operating systems and storage backups. CRC interfaces directly with researchers and provides software installation services, training workshops, and personalized consultation on improving software design/performance and computational workflows.


Technical Support and Funding

The cluster compute nodes were purchased with funds provided by the University and by faculty researchers, and are housed at the enterprise-level Network Operations Center (NOC), administered jointly with Pitt IT.

Pitt IT maintains the critical environmental infrastructure (power, cooling, networking, and security) and administers the cluster operating systems and storage backups.

CRC interfaces directly with researchers and provides software installation services, training workshops, and personalized consultation on improving software design/performance and computational workflows. The road map for research computing infrastructure is developed jointly by CRC and Pitt IT to meet the emerging needs of researchers at the University.

Connectivity between the NOC and the main campus is provided by two 100 Gbps fiber links, and connectivity to Internet2 is via a 100 Gbps link.

The global storage comprises a 130 TB Isilon home ("ihome") space (which is backed up), 3 PB of ZFS file systems, a 1.6 PB BeeGFS ("bgfs") parallel file system, and 1 PB of iXsystems enterprise storage.

The cluster operating system is Red Hat Enterprise Linux 6 and 7. A wide range of major software packages is licensed and installed on the cluster, ranging from quantum mechanics (e.g., Gaussian, Molpro, VASP, CP2K, QMC), to classical mechanics (e.g., NAMD, LAMMPS, Amber), to continuum mechanics (e.g., Abaqus, ANSYS, Lumerical), to genomics analysis suites (e.g., TopHat/Bowtie, CLC Genomics Server).

This infrastructure is designed for future scaling via additional resources funded by research instrumentation grants, internal University funds, or faculty contributions from grants or start-up funds.