Node Configuration

There are 20 compute nodes in total with the following configuration:

  • 16 E5-2660 v3 (Haswell) nodes
    • 2.40 GHz, 16 cores
    • 256 GB RAM 2133 MHz
    • 256 GB SSD
    • 56 Gb/s FDR InfiniBand
  • 4 E5-2643 v4 (Broadwell) nodes
    • 3.40 GHz, 16 cores
    • 256 GB RAM
    • 256 GB SSD
    • 56 Gb/s FDR InfiniBand
  • 4 Xeon Gold 6126 (Skylake) nodes
    • 2.60 GHz, 24 cores
    • 377 GB RAM
    • 256 GB SSD & 500 GB SSD
    • 56 Gb/s FDR InfiniBand

There are two login nodes that can be used for compilation.

  • E5-2620 v3 (Haswell)
  • 2.40 GHz, 12 cores (24 hyperthreads)
  • 64 GB RAM 1867 MHz
  • 56 Gb/s FDR InfiniBand

For performance reasons, the following operating system has been chosen for both compute and login nodes:

  • Red Hat Enterprise Linux 7.6

Filesystems

All nodes in the HTC cluster mount the following file servers.

It is important to note that the $HOME directories are shared with other clusters, so configuration files may not be compatible. Please check your .bashrc, .bash_profile, and all other dotfiles if you encounter problems.
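One way to keep a shared $HOME working across clusters is to guard cluster-specific settings by hostname. A minimal .bashrc sketch; the "htc" hostname prefix is an assumption, so adjust it to the actual login-node names:

```shell
# Only apply HTC-specific settings when logged in to an HTC node.
# The "htc*" pattern is an assumption -- match your real hostnames.
case "$(hostname -s)" in
  htc*)
    # HTC-only settings go here (paths, modules, aliases, ...)
    export CLUSTER=htc
    ;;
esac
```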

Filesystem                    Backup policy                   Mount point(s)
ihome                         backed up                       /ihome
BeeGFS                        not backed up                   /bgfs
ZFS                           not backed up, 7-day snapshots  /zfs1, /zfs2
Scratch (compute nodes only)                                  /scratch
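Before writing large files, it can help to check free space on a mount. A minimal sketch, shown against $HOME so it runs anywhere; on the cluster you would pass one of the mount points above instead:

```shell
# Show human-readable usage for a filesystem; on the cluster, replace
# "$HOME" with a mount point from the table, e.g. /bgfs or /scratch.
df -h "$HOME"
```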

Compiler

The GNU 4.8.5 compilers are available on your PATH when you log in. The newer GNU 8.2.0 compilers are available as environment modules.

Currently, the HTC cluster does not support distributed-memory parallel (MPI) jobs; only shared-memory parallel jobs are supported.