New Nvidia L40S GPU Partition Announcement

Dear CRC Users,

We are excited to introduce the latest expansion to our GPU cluster: the L40S partition. This partition consists of 20 high-performance nodes, each equipped with four Nvidia L40S GPUs with 48 GB of onboard memory per GPU. Here are the full details for the new partition:

Partition:   l40s
Nodes:       20
GPU Type:    L40S
GPUs/Node:   4
Host Arch:   Intel(R) Xeon(R) Platinum 8462Y+
Cores/Node:  64
Mem/Node:    512 GB
Scratch:     7 TB
Node Names:  gpu-n[55-74]

Please be aware that Nvidia L40S GPUs are tailored for AI workloads, 3D model development, and computer-aided engineering simulations; they do not support double precision (FP64). They do, however, outperform Nvidia A100 GPUs on single-precision (FP32) and mixed-precision workloads.
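As a rough illustration of what 48 GB per GPU means in practice, here is a back-of-the-envelope memory check. The 20% overhead factor and the 14-billion-parameter model are illustrative assumptions, not measured values:

```shell
# Rough check: do N parameters at B bytes each (plus ~20% overhead)
# fit in one L40S GPU's 48 GB of memory? Integer math in bash.
GPU_MEM_BYTES=$((48 * 10**9))   # 48 GB per L40S GPU

fits_on_one_gpu() {
  local params=$1 bytes_per_param=$2
  # Bytes per parameter: 4 for FP32, 2 for FP16/BF16 (no FP64 on L40S).
  # The 20% overhead for activations/workspace is an assumption.
  local needed=$(( params * bytes_per_param * 12 / 10 ))
  [ "$needed" -le "$GPU_MEM_BYTES" ] && echo yes || echo no
}

# A hypothetical 14-billion-parameter model:
fits_on_one_gpu 14000000000 4   # FP32: prints "no"  (67.2 GB needed)
fits_on_one_gpu 14000000000 2   # FP16: prints "yes" (33.6 GB needed)
```

Real jobs also need memory for optimizer state and framework overhead, so treat this only as a first-pass estimate before sizing your job.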

You can begin using the new partition by specifying "l40s" in the "partition" field of your submission script. We are also working on integrating it into Jupyter on GPU on OnDemand, so it will be accessible there soon.

Here is a sample SLURM submission template that you can start with:

#!/bin/bash
#SBATCH --job-name=my_new_l40s_job
#SBATCH --cluster=gpu
#SBATCH --partition=l40s
#SBATCH --nodes=1                # node count
#SBATCH --ntasks-per-node=1      # total number of tasks per node
#SBATCH --cpus-per-task=16       # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem=256G               # total memory per node (4 GB per cpu-core is default)
#SBATCH --gres=gpu:1             # number of gpus per node
#SBATCH --time=1-10:00:00        # total run time limit (D-HH:MM:SS)
#SBATCH --error=job.%J.err
#SBATCH --output=job.%J.out
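If you save the template above as job.slurm (the filename is just an example), a quick grep can confirm the script targets the new partition before you submit:

```shell
# Recreate the key directives from the template as job.slurm so this
# check is self-contained (on the cluster, save the full template).
cat > job.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=my_new_l40s_job
#SBATCH --cluster=gpu
#SBATCH --partition=l40s
#SBATCH --gres=gpu:1
EOF

# Both directives must be present for the job to land on the L40S nodes;
# this prints the number of matching lines (expected: 2).
grep -c -E '^#SBATCH --(cluster=gpu|partition=l40s)$' job.slurm

# On the CRC login node, submit and monitor the job with:
#   sbatch job.slurm
#   squeue -M gpu -u "$USER"
```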

Happy Computing!

The CRC Team