Node configuration

H2P is separated into three clusters (an example batch script for selecting a cluster follows the list):

  1. Shared Memory Parallel (smp): Meant for single node jobs.
  2. Graphics Processing Unit (gpu): The GPU cluster, made up of Titan, Titan X, K40, and GTX 1080 nodes.
  3. Distributed Memory (mpi): The multi-node partition. Meant for massively parallel Message Passing Interface (MPI) jobs.
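
Clusters are selected with the -M/--clusters option to sbatch. A minimal sketch of a batch script, with placeholder job name, resources, and workload:

```bash
#!/bin/bash
#SBATCH --clusters=smp        # one of: smp, gpu, mpi
#SBATCH --job-name=example    # placeholder job name
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# Placeholder workload: report which node the job landed on
echo "Running on $(hostname)"
```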

cluster= smp (default)

  • partition= smp (default)
    • 100 nodes of dual-socket 12-core Xeon Gold 6126 2.60 GHz (Skylake)
    • 192 GB RAM
    • 256 GB SSD & 500 GB SSD
    • 10GigE
  • partition= smp (add #SBATCH --constraint=amd to your SLURM script to request this node type exclusively; see the example script after this list)
    • 58 nodes of dual-socket 16-core AMD EPYC 7302 3.0 GHz (Rome)
    • 256 GB RAM
    • 256 GB SSD & 1 TB SSD
    • 10GigE
  • partition= high-mem
    • 29 nodes of dual-socket 12-core Xeon E5-2643v4 3.40 GHz (Broadwell)
    • 256 GB or 512 GB RAM
    • 256 GB SSD & 1 TB SSD
    • 10GigE
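
For example, a sketch of a batch script that requests one of the AMD (Rome) SMP nodes via the constraint above (job parameters and workload are placeholders):

```bash
#!/bin/bash
#SBATCH --clusters=smp
#SBATCH --partition=smp
#SBATCH --constraint=amd      # restrict scheduling to the AMD EPYC (Rome) nodes
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --time=01:00:00

# Placeholder workload
echo "Running on $(hostname)"
```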

cluster= gpu

Make sure to ask for a GPU! (--gres=gpu:N, where N is the number of GPUs you need)
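
For example, a minimal sketch of a GPU batch script that requests a single GPU on the default partition (the nvidia-smi call is just a placeholder workload):

```bash
#!/bin/bash
#SBATCH --clusters=gpu
#SBATCH --gres=gpu:1          # request 1 GPU; use gpu:N for N GPUs
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# Placeholder workload: list the GPU(s) visible to the job
nvidia-smi
```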

partition= gtx1080 (default)

• 10 nodes with 4 GTX 1080 Ti (nodelist= gpu-n[16-25])
• 8 nodes with 4 GTX 1080 (nodelist= gpu-stage[08-15])

partition= titanx

• 7 nodes with 4 Titan X

partition= k40

• 1 node with 2 K40

partition= titan

• 5 nodes with 4 Titan

cluster= mpi

partition= opa (default)

• 96 nodes of 28-core Intel Xeon E5-2690 2.60 GHz (Broadwell)
• 64 GB RAM/node
• 256 GB SSD
• 100 Gb Omni-Path
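
A sketch of a batch script for a multi-node MPI job on this default partition (the executable name is a placeholder):

```bash
#!/bin/bash
#SBATCH --clusters=mpi
#SBATCH --partition=opa       # default MPI partition (100 Gb Omni-Path)
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28  # opa nodes provide 28 cores each
#SBATCH --time=01:00:00

srun ./my_mpi_app             # placeholder MPI executable
```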

partition= ib

• 32 nodes of 20-core Intel Xeon E5-2660 2.60 GHz (Haswell)
• 128 GB RAM/node
• 56 Gb FDR InfiniBand

partition= legacy (nodes moved over from Frank)

• 88 nodes of 16-core Intel Xeon E5-2650 2.60 GHz
• 64 GB RAM/node
• 56 Gb FDR InfiniBand
• Use --constraint=<feature>, where <feature> can be ivy, sandy, or interlagos, to target a specific node type
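
For instance, a sketch of a legacy-partition job that pins the job to one of the node types listed above via --constraint (ivy is used here purely as an illustration; the executable name is a placeholder):

```bash
#!/bin/bash
#SBATCH --clusters=mpi
#SBATCH --partition=legacy
#SBATCH --constraint=ivy      # or sandy / interlagos, per the feature list above
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16  # legacy nodes provide 16 cores each
#SBATCH --time=01:00:00

srun ./my_mpi_app             # placeholder MPI executable
```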