Dear students, faculty and staff,
We welcome all new and returning students, faculty, and staff, and wish you a productive start to the semester. We are excited to remind you about our Fall 2017 series of workshops. If you haven’t registered yet, please do so via http://core.sam.pitt.edu/Registration_Fall2017_Tutorial
In our upcoming Cluster Training workshop on Wednesday, 9/13, we will answer the following questions:
- What hardware is available at Pitt?
- How can we help your computational workflow?
- How do you access the clusters?
- How do you submit jobs?
In the meantime, please take a look at our Spring 2017 Cluster Training presentation at https://crc.pitt.edu/workshops/. There are no prerequisites for the workshop, but if you are unfamiliar with the Linux environment, please review the crc-cluster-usage presentation in the Spring-2017-workshop folder. Pages 8 to 13 of that presentation will bring you up to speed.
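As a taste of the job-submission topic, here is a minimal sketch of a Slurm batch script. The job name, resource limits, and printed message are illustrative assumptions, not CRC defaults; the workshop will cover the real partitions and limits.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch (values are illustrative assumptions)
#SBATCH --job-name=hello       # a name to find the job in the queue
#SBATCH --ntasks=1             # a single task on a single core
#SBATCH --time=00:10:00        # ten-minute wall-time limit

# The commands below run on the compute node assigned by the scheduler.
echo "Running on $(hostname)"
```

Saved as, say, hello.slurm, this would be submitted with `sbatch hello.slurm`, and its progress checked with `squeue`.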
The Department of Mathematics at the University of Pittsburgh will host a workshop on FreeFem++ given by Professor Frederic Hecht:
What: An Introduction to Scientific Computing using Free Software FreeFem++
Description of the workshop from Professor Hecht:
“I would like it to be possible to solve digitally, in a user-friendly way, the problems modeled by partial differential equations (PDEs) from physics, engineering, computing graphics and recently from the finance-banking sector. This problem is therefore at the interface between applied mathematics, numerical analysis, computer science and the relevant applications (fluid mechanics, electromagnetism, quantum mechanics and stock options in finance).”
Where: Thackeray Hall, Room 427
When: August 22, 2017 (All day) – September 1, 2017 (All day)
Who: Everyone is welcome. However, the current room is not large. It would be helpful to email us (firstname.lastname@example.org or Trenchea@pitt.edu) if you plan to attend so we can estimate audience size and adjust if necessary.
For more information and the detailed schedule, please visit: http://www.mathematics.pitt.edu/node/2052
Dear H2P Users,
The testing period for the new nodes was a success! We need to shuffle around some of the nodes in smp to accommodate the new nodes. The partitions will become:
| Old Name | New Name | Charging Rate |
We will be draining all of the nodes and moving them into the new partitions. Running jobs will not be affected, but any jobs submitted to the test partition will need to be resubmitted. Sorry for any inconvenience.
The CRC Team
Hello H2P Users!
We just added a 2,400-core expansion to the SMP cluster on H2P, and we would like to invite you to do some testing with us. There is a new partition on the smp cluster called test, which we are making available to all H2P users starting today in a “testing” phase. Each new node has:
- Dual-socket, 12-core CPUs @ 2.60 GHz (i.e., 24 cores total)
- 187 GB RAM (DDR4 @ 2666 MHz)
- 500 GB Local SSD Scratch ($SLURM_SCRATCH)
During this period, we are interested in two aspects:
- Precision: Do you get the same answer compared to previous calculations?
- Speedup: How do these calculations perform compared to previous ones?
For jobs submitted to the test partition, the charging of SUs has been turned off! Share a support ticket with your results or any issues.
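A hedged sketch of a batch script aimed at the new partition, staging work through the node-local SSD scratch. The solver and input names are hypothetical, and the fallback to /tmp exists only so the sketch can be dry-run outside the scheduler.

```shell
#!/bin/bash
#SBATCH --cluster=smp
#SBATCH --partition=test
#SBATCH --ntasks=24            # one full node: 2 sockets x 12 cores
#SBATCH --time=01:00:00

# $SLURM_SCRATCH points at the 500 GB node-local SSD during a job;
# fall back to /tmp so the sketch also runs outside Slurm.
SCRATCH="${SLURM_SCRATCH:-/tmp}"
cd "$SCRATCH" || exit 1
echo "working in: $SCRATCH"
# srun ./my_solver input.dat   # my_solver and input.dat are hypothetical
```

Comparing the wall time of a run like this against the same job on the older nodes is exactly the precision/speedup feedback we are looking for.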
If you don’t already have access:
- See Apply for allocation for details about getting access to H2P.
- Are you waiting on a proposal to be reviewed?
- Be patient, we plan to finish the current reviews soon!
The CRC Team
Dear R users of CRC,
Packages from pbdR (Programming with Big Data in R) are now available on the H2P Omni-Path cluster. These packages allow R users to write MPI-enabled parallel code in R. Example code and a Slurm job definition are available at “/ihome/crc/how_to_run/pbdr” on the cluster. To learn more about pbdR, see [1,2]. Do not hesitate to contact us should you face difficulties using these packages.
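To give a feel for the workflow, here is a minimal sketch that writes a pbdMPI “hello world” and shows how such a script is typically launched over MPI. The file name and rank count are hypothetical, and the exact launcher and environment setup on H2P may differ; the example in /ihome/crc/how_to_run/pbdr is the authoritative version.

```shell
#!/bin/bash
# Sketch: generate a minimal pbdMPI hello-world and show a typical launch line.
# hello_pbdr.R is a hypothetical file name.
cat > hello_pbdr.R <<'EOF'
library(pbdMPI)    # pbdR's MPI bindings
init()             # start the MPI communicator
comm.print(paste("rank", comm.rank(), "of", comm.size()), all.rank = TRUE)
finalize()         # shut MPI down cleanly
EOF
echo "launch on the cluster with: mpirun -np 4 Rscript hello_pbdr.R"
```

Each MPI rank runs the same R script; `comm.rank()` distinguishes them, which is the basic pattern for distributing work in pbdR.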
Dear H2P GPU Users,
If you are not using the GPU cluster on H2P, you can safely ignore this message. We have enabled the full charging scheme on the GPU cluster. Run crc-sinfo.py to see the new partitions:
- gtx1080 (default): charging 1 SU per card-hour
- titanx: charging 3 SUs per card-hour
- k40: charging 6 SUs per card-hour
Check out H2P Service Units and Queue Information for more details.
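As a worked example of the rates above (the card count and wall time are made up for illustration): a job holding 2 k40 cards for 10 hours is charged 2 x 10 x 6 = 120 SUs. In shell arithmetic:

```shell
#!/bin/bash
# SUs charged = cards x hours x rate (rate in SU per card-hour).
# Example assumption: 2 k40 cards (rate 6 SU/card-hour) held for 10 hours.
cards=2; hours=10; rate=6
echo "estimated charge: $(( cards * hours * rate )) SUs"
```

This prints `estimated charge: 120 SUs`.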
The CRC Team