Don’t forget that Pitt CRC’s Advanced R Workshop will be held tomorrow, Thursday, March 8, from 1 to 4 PM in 311A Schenley Place (4420 Bayard St, Pittsburgh, PA 15213). The workshop will cover using the neuralnet package to train neural networks on data sets.
- Bring your own laptop.
- Install RStudio on your laptop. Download from: https://www.rstudio.com/products/rstudio/download/
Find more information at http://core.sam.pitt.edu/node/8150
Pitt CRC has made refinements to the clusters’ queueing system that we believe will improve equitable utilization of compute resources. The key change involves the “quality of service” (QOS) system in Slurm on H2P. Previously, each group had access to a fixed number of cores per cluster; for example, on the smp and high-mem partitions a single group had access to only 840 CPUs. With this change, limits on each cluster and partition are based on a job’s requested wall time. The new QOS is applied automatically from the wall time, so users do not need to take any action.
Find more details at http://core.sam.pitt.edu/node/8145.
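Since the QOS is now assigned from a job’s requested wall time, the `--time` value in your submission matters more than before. A minimal sketch (the script name is a placeholder, and the assumption that shorter wall times map to a less restrictive QOS should be checked against the linked page):

```shell
# Submit a short job with an explicit one-hour wall-time request.
# myjob.slurm is a hypothetical batch script.
sbatch --time=01:00:00 myjob.slurm

# Inspect which QOS Slurm assigned to your jobs, alongside their time limits.
squeue -u $USER -O jobid,qos,timelimit
```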
Dear R users of CRC,
Packages from pbdR (Programming with Big Data in R) are now available on the H2P Omni-Path cluster. These packages allow R users to write MPI-enabled parallel code in R. Example code and a Slurm job definition are available at “/ihome/crc/how_to_run/pbdr” on the cluster. To learn more about pbdR, see [1,2]. Do not hesitate to contact us if you encounter difficulties using these packages.
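A pbdR job definition might look like the following sketch. The node counts, module name, and R script name here are assumptions for illustration only; the authoritative version is the example in /ihome/crc/how_to_run/pbdr:

```shell
#!/bin/bash
#SBATCH --job-name=pbdr_test       # hypothetical job name
#SBATCH --nodes=2                  # illustrative node/task counts
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:10:00

module load r                      # module name is an assumption; check `module avail`

# srun launches one MPI rank per task; each rank runs the R script,
# which would use pbdMPI (init()/finalize()) to coordinate.
srun Rscript hello_mpi.R           # hello_mpi.R is a placeholder script name
```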
Dear H2P GPU Users,
If you are not using the GPU cluster on H2P, you can safely ignore this message. We have enabled the full charging scheme on the GPU cluster. Run crc-sinfo.py to see the new partitions:
- gtx1080 (default): charged 1 SU per card-hour
- titanx: charged 3 SUs per card-hour
- k40: charged 6 SUs per card-hour
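As a worked example of the charging scheme (assuming, as the rates above imply, that billing is linear in cards and hours):

```shell
# Two k40 cards (6 SUs per card-hour) running for 10 hours:
cards=2
hours=10
rate=6
echo $(( cards * hours * rate ))   # prints 120 (SUs charged)
```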
See the H2P Service Units and Queue Information pages for more details.
The CRC Team
Here you can find all the useful directives for your computational needs at CRC. Simply download the PDF.