Hurry up and register for our workshops!

Dear students, faculty and staff,

Welcome to all new and returning students, faculty, and staff; we wish everyone a productive start to the semester. We are excited to remind you about our Fall 2017 series of workshops. If you haven't registered yet, please do so via http://core.sam.pitt.edu/Registration_Fall2017_Tutorial

In our upcoming workshop on Cluster Training on Wednesday 9/13, we will answer the following questions:

  • What hardware is available at Pitt?
  • How can we help your computational workflow?
  • How do you access the clusters?
  • How do you submit jobs? (See the sketch just below this list.)
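
As a small preview of the job-submission topic, here is a minimal sketch of a Slurm batch script and how to submit it. The partition name, module name, and file names are hypothetical placeholders; the workshop will cover the values appropriate for our clusters.

    #!/bin/bash
    #SBATCH --job-name=hello        # name shown in the queue
    #SBATCH --partition=smp         # hypothetical partition name
    #SBATCH --ntasks=1              # a single serial task
    #SBATCH --time=00:10:00         # ten-minute wall-time limit

    # Load whatever software the job needs (module name is a placeholder).
    module load gcc

    echo "Hello from $(hostname)"

Submit the script with "sbatch hello.slurm" and check its status with "squeue -u $USER".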

In the meantime, please take a look at our Spring 2017 presentation on Cluster Training at https://crc.pitt.edu/workshops/. There are no prerequisites for the workshop, but if you are new to the Linux environment, please take a look at the crc-cluster-usage presentation in the Spring-2017-workshop folder. You can catch up by following the instructions on pages 8 through 13 of that presentation.
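
If you would like a preview before then, these generic shell commands are the kind of basics such an introduction covers (a sketch of common commands, not the presentation's exact content; the directory and file names are placeholders):

    pwd                  # print the current working directory
    ls -l                # list files with details
    cd my_project        # change into a directory
    cat results.txt      # print a file's contents
    man sbatch           # read the manual page for a command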

Cheers,

CRC Team

Workshop on FreeFem++

The Department of Mathematics at the University of Pittsburgh will be hosting a workshop on FreeFem++ given by Professor Frédéric Hecht:
What: An Introduction to Scientific Computing using Free Software FreeFem++
Description of the workshop from Professor Hecht:
“I would like it to be possible to solve numerically, in a user-friendly way, the problems modeled by partial differential equations (PDEs) arising in physics, engineering, computer graphics, and, more recently, the finance and banking sector. This problem therefore sits at the interface between applied mathematics, numerical analysis, computer science, and the relevant applications (fluid mechanics, electromagnetism, quantum mechanics, and stock options in finance).”
Where: Thackeray Hall, Room 427
When: August 22, 2017 (All day) – September 1, 2017 (All day)
Who: Everyone is welcome. However, the current room is not large; it would be helpful to email us (wjl@pitt.edu or Trenchea@pitt.edu) if you plan to attend so we can estimate audience size and adjust the venue if necessary.

For more information and the detailed schedule, please visit: http://www.mathematics.pitt.edu/node/2052

Updates to H2P: Partition Changes After Testing Phase

Dear H2P Users,

The testing period for the new nodes was a success! To accommodate the new hardware, we need to shuffle some of the nodes in smp, and the partitions will change as follows:

  Old Name    New Name         Charging Rate
  test        smp (default)    0.8
  smp         high-mem         2.0
  smp-high    high-mem         2.0

We will be draining all of the nodes and moving them into the new partitions. Running jobs will not be affected, but any pending jobs submitted to the test partition will need to be resubmitted (see the example below). Sorry for any inconvenience.
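
For example, resubmitting a job that previously targeted test only requires pointing it at the new default partition (the script name here is a placeholder):

    # Previously:  sbatch --partition=test my_job.slurm
    # After the change, submit to the new default partition instead:
    sbatch --partition=smp my_job.slurm
    # Since smp is the default, plain "sbatch my_job.slurm" also works.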

Thanks,

The CRC Team

New Hardware Testing Phase

Hello H2P Users!

We have just added a 2,400-core expansion to the SMP cluster on H2P, and we would like to invite you to do some testing with us. There is a new partition on the smp cluster called test, which we are making available to all H2P users in a “testing” phase starting today.

Hardware details:

  • Dual-socket, 12-core CPUs @ 2.60 GHz (i.e., 24 cores per node)
  • 187 GB RAM (DDR4 @ 2666 MHz)
  • 500 GB Local SSD Scratch ($SLURM_SCRATCH)

During this period, we are interested in two aspects:

  • Precision: Do you get the same answer compared to previous calculations?
  • Speedup: How do these calculations perform compared to previous ones?

On test, the charging of SUs has been turned off! Please open a support ticket with your results or any issues you encounter; a sketch of a job targeting the new partition follows.
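
Here is a minimal sketch of a batch script that targets the test partition and stages work through the node-local SSD scratch. The input file and executable names are hypothetical placeholders:

    #!/bin/bash
    #SBATCH --partition=test        # the new, currently uncharged partition
    #SBATCH --nodes=1
    #SBATCH --ntasks=24             # one node's worth of cores
    #SBATCH --time=01:00:00

    # Stage input onto the node-local SSD scratch for fast I/O.
    cp my_input.dat $SLURM_SCRATCH/
    cd $SLURM_SCRATCH

    # Run the benchmark (placeholder executable), then copy results back.
    srun ./my_benchmark my_input.dat > results.out
    cp results.out $SLURM_SUBMIT_DIR/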

If you don’t already have access:

  • See Apply for allocation for details about getting access to H2P.
  • Are you waiting on a proposal to be reviewed?
    • Be patient, we plan to finish the current reviews soon!

Thank you,

The CRC Team

pbdR available on h2p cluster

Dear R users of CRC,

Packages from pbdR (Programming with Big Data in R) are now available for use on the h2p Omni-Path cluster. These packages allow R users to write MPI-enabled parallel code in R. Example code and a Slurm job definition are available at “/ihome/crc/how_to_run/pbdr” on the cluster; a rough sketch of such a job appears below. To learn more about pbdR, see [1,2]. Do not hesitate to contact us should you face difficulties using these packages.
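
The example at “/ihome/crc/how_to_run/pbdr” is the authoritative reference; purely as a hedged sketch of what such a job can look like (node counts and module names are hypothetical placeholders, so check "module avail" for the real ones), a pbdMPI “hello world” might be submitted like this:

    #!/bin/bash
    #SBATCH --job-name=pbdr-hello
    #SBATCH --nodes=2               # hypothetical sizing
    #SBATCH --ntasks-per-node=4     # 8 MPI ranks in total
    #SBATCH --time=00:10:00

    # Placeholder module names; check "module avail" on the cluster.
    module load gcc openmpi r

    # Write a minimal pbdMPI hello-world so the sketch is self-contained.
    cat > hello_pbdmpi.R <<'EOF'
    library(pbdMPI)
    init()
    comm.cat("Hello from rank", comm.rank(), "of", comm.size(), "\n", all.rank = TRUE)
    finalize()
    EOF

    # Launch one R process per MPI rank.
    srun Rscript hello_pbdmpi.R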

[1] https://www.hpcwire.com/off-the-wire/ornl-researchers-bridge-gap-r-hpc-communities
[2] https://rbigdata.github.io

H2P GPU Charging Scheme

Dear H2P GPU Users,

If you are not using the GPU cluster on H2P, you can safely ignore this message. We have enabled the full charging scheme on the GPU cluster. Run crc-sinfo.py to see the new partitions; a worked charging example follows the list:

  1. gtx1080 (default): Charging 1 SU per card hour
  2. titanx: Charging 3 SUs per card hour
  3. k40: Charging 6 SUs per card hour
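
As a worked example of the scheme, a job that holds 2 cards on titanx for 10 hours is charged 2 × 3 × 10 = 60 SUs. Here is a minimal sketch of a script requesting a single gtx1080 card (the executable name is a placeholder):

    #!/bin/bash
    #SBATCH --partition=gtx1080     # charged at 1 SU per card hour
    #SBATCH --gres=gpu:1            # request one card
    #SBATCH --time=02:00:00         # at most 1 card x 2 hours = 2 SUs

    srun ./my_gpu_app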

Check out H2P Service Units and Queue Information for more details.

Thank you!

The CRC Team

1-minute survey on workshop topics

Dear Users,

We are writing to request your participation in a brief survey on workshop topics for Fall 2017 through Spring 2018. We would like your feedback on suggested topics; your responses will help us evaluate the effectiveness of our proposed workshops so that we can design better ones. The survey takes less than one minute to complete. Please submit your entry before June 30.

The CRC Team