Node Configuration

There are 30 compute nodes in total with the following configuration:

  • 8 Xeon Gold 6248R (Cascade Lake) nodes
    • 3.0 GHz, 48 cores/node
    • 768 GB RAM
    • 480 GB SSD & 960 GB SSD
    • 100 Gb/s HDR InfiniBand
  • 18 dual-socket Xeon Platinum 8352Y (Ice Lake) nodes
    • 2.2 GHz, 64 cores/node (2 × 32 cores)
    • 512 GB RAM
    • 2 TB NVMe drive for local scratch
    • 10 GbE
  • 4 dual-socket Xeon Platinum 8352Y (Ice Lake) nodes
    • 2.2 GHz, 64 cores/node (2 × 32 cores)
    • 1 TB RAM
    • 2 TB NVMe drive for local scratch
    • 10 GbE

There are two login nodes that can be used for compilation.

  • Dual-socket Intel Xeon E5-2620 v3 (Haswell)
  • 2.40 GHz, 12 cores (24 hyperthreads)
  • 64 GB 1867 MHz RAM
  • 56 Gb/s FDR InfiniBand

For performance reasons, the following operating system has been chosen for both the compute and login nodes.

  • Red Hat Enterprise Linux 7.6
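
To confirm the release on a given node, the standard Red Hat release file can be inspected:

    cat /etc/redhat-release    # prints the installed RHEL release, e.g. 7.6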

Filesystems

All nodes in the HTC cluster mount the following file servers.

It is important to note that the $HOME directories are shared with other clusters, and configuration files may not be compatible. Please check your .bashrc, .bash_profile, and all other dotfiles if you encounter problems.
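
One way to keep a shared .bashrc portable is to guard cluster-specific settings by hostname. A minimal sketch follows; the htc-* hostname pattern is an assumption and should be adjusted to the actual node names.

    # Guard cluster-specific settings in a $HOME shared across clusters.
    # ASSUMPTION: HTC node names match "htc-*"; adjust to the real names.
    case "$(hostname -s)" in
      htc-*)
        # Settings that should apply only on the HTC cluster,
        # e.g. module loads or aliases, go here.
        ;;
      *)
        # Settings for the other clusters sharing this $HOME.
        ;;
    esac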

  Filesystem   Notes                         Mount
  ihome        backed up                     /ihome
  BeeGFS       no backup                     /bgfs
  ZFS          no backup, 7-day snapshots    /zfs1, /zfs2
  ixSystems    no backup, 7-day snapshots    /ix
  Scratch      compute nodes only            /scratch
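
A common pattern with node-local scratch is to stage input in, compute on the fast local drive, and copy results back to a backed-up filesystem before the job ends. A minimal sketch, assuming a writable per-user directory under /scratch and a hypothetical program my_program:

    # Stage in, compute on fast local scratch, stage out.
    # ASSUMPTION: /scratch/$USER is writable on compute nodes.
    WORKDIR=/scratch/$USER/myjob          # hypothetical job directory
    mkdir -p "$WORKDIR"
    cp ~/inputs/data.txt "$WORKDIR"/      # stage input from $HOME (backed up)
    cd "$WORKDIR"
    ./my_program data.txt > results.txt   # hypothetical program
    cp results.txt ~/outputs/             # stage results out before exit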

Compilers

The GNU 4.8.5 compilers are available in your PATH when you log in. Newer GNU 8.2.0 compilers are available through the module environment.
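
For example, the newer toolchain can be loaded for the current session before compiling. The module name gcc/8.2.0 below is an assumption; running module avail gcc shows what is actually installed.

    # Check the default compiler, then load the newer toolchain.
    gcc --version          # reports 4.8.5 on a fresh login
    module avail gcc       # list the installed GCC modules
    module load gcc/8.2.0  # ASSUMPTION: the exact module name may differ
    gcc --version          # should now report 8.2.0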

Currently, the HTC cluster does not support distributed-memory MPI jobs. Only shared-memory parallel jobs, which run within a single node, are supported.
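
Shared-memory parallelism on a single node is typically done with threads, for example OpenMP. A minimal sketch of compiling and running an OpenMP program follows; omp_hello.c stands in for your own source file.

    # Compile and run a shared-memory (OpenMP) program on one node.
    # ASSUMPTION: omp_hello.c is your own OpenMP source; module name as above.
    module load gcc/8.2.0
    gcc -fopenmp -O2 omp_hello.c -o omp_hello
    OMP_NUM_THREADS=64 ./omp_hello   # Ice Lake nodes offer up to 64 cores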