Advanced R Workshop Thursday March 8

Don’t forget that Pitt CRC’s Advanced R Workshop will be held tomorrow, Thursday, March 8, from 1-4 PM in 311A Schenley Place (4420 Bayard St, Pittsburgh, PA 15213). The workshop will cover using the neuralnet package to train neural networks on data sets.


  1. Bring your own laptop.
  2. Install RStudio on your laptop. Download from:

Find more information at
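To give a flavor of the package the workshop covers, here is a minimal sketch of training a small network with neuralnet. The toy XOR data set, formula, and hidden-layer size are illustrative assumptions on our part, not workshop material:

```r
# Minimal sketch (illustrative only): train a small neural network
# with the neuralnet package on a toy XOR data set.
# install.packages("neuralnet")  # if not already installed
library(neuralnet)

# Toy data: XOR truth table (assumed example, not from the workshop)
xor_data <- data.frame(
  x1 = c(0, 0, 1, 1),
  x2 = c(0, 1, 0, 1),
  y  = c(0, 1, 1, 0)
)

# One hidden layer with 3 units; linear.output = FALSE for classification
nn <- neuralnet(y ~ x1 + x2, data = xor_data, hidden = 3,
                linear.output = FALSE)

# Predictions on the training inputs
compute(nn, xor_data[, c("x1", "x2")])$net.result
```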

Refinements to Queueing System Improve Equitable Utilization

Pitt CRC has refined the clusters’ queueing system in a way we believe will improve equitable utilization of compute resources. The key change involves the “quality of service” (QOS) system in Slurm on H2P. Previously, each group had access to a fixed number of cores per cluster; for example, on the smp and high-mem partitions a single group was limited to 840 CPUs. With this change, limits on each cluster and partition can instead be based on wall time. The new QOS will be applied automatically based on the wall time a job requests; users don’t need to take any action.
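Because the QOS is now assigned from the wall time a job requests, it is worth requesting a realistic time limit in your job scripts. A minimal Slurm submission sketch follows; the job name, resource counts, and workload line are illustrative assumptions, while the smp partition is named in the announcement above:

```
#!/bin/bash
#SBATCH --job-name=example        # illustrative job name
#SBATCH --partition=smp           # partition mentioned above
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=02:00:00           # requested wall time; the QOS is
                                  # assigned automatically based on this

# Your workload goes here, e.g.:
# Rscript my_analysis.R
```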

Find more details at

pbdR Available on the H2P Cluster

Dear R users of CRC,

Packages from pbdR (Programming with Big Data in R) are now available on the H2P Omni-Path cluster. These packages allow R users to write MPI-enabled parallel code in R. Example code and a Slurm job definition are available at “/ihome/crc/how_to_run/pbdr” on the cluster. To learn more about pbdR, see [1,2]. Do not hesitate to contact us should you face difficulties using these packages.
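For a quick sense of what pbdR code looks like (this is a minimal sketch, not a substitute for the example at the path above), a pbdMPI “hello world” in which each MPI rank prints its identity might look like this; launch it under MPI inside a Slurm job:

```r
# Minimal pbdMPI sketch: each MPI rank reports its identity.
# Run under MPI, e.g.:  mpirun -np 4 Rscript hello_pbdmpi.R
library(pbdMPI)

init()                                   # initialize MPI
msg <- sprintf("Hello from rank %d of %d",
               comm.rank(), comm.size())
comm.print(msg, all.rank = TRUE)         # print from every rank
finalize()                               # shut down MPI cleanly
```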


H2P GPU Charging Scheme

Dear H2P GPU Users,

If you are not using the GPU cluster on H2P, you can safely ignore this message. We have enabled the full charging scheme on the GPU cluster. The new partitions are:

  1. gtx1080 (default): Charging 1 SU per card hour
  2. titanx: Charging 3 SUs per card hour
  3. k40: Charging 6 SUs per card hour
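To estimate what a job will cost under this scheme, multiply the number of cards by the wall-clock hours and the partition’s rate. A quick sketch of the arithmetic, using the rates listed above (the helper function is ours for illustration, not a CRC tool):

```python
# Estimate service-unit (SU) charges on the H2P GPU cluster.
# Rates in SUs per card hour, taken from the partition list above.
RATES = {"gtx1080": 1, "titanx": 3, "k40": 6}

def su_charge(partition, cards, hours):
    """SUs charged = number of cards * wall-clock hours * partition rate."""
    return RATES[partition] * cards * hours

# Example: 2 Titan X cards for 10 hours costs 2 * 10 * 3 = 60 SUs.
print(su_charge("titanx", cards=2, hours=10))   # → 60
print(su_charge("gtx1080", cards=1, hours=5))   # → 5
```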

Check out H2P Service Units and Queue Information for more details.

Thank you!

The CRC Team