NERSC Systems

NERSC is one of the largest facilities in the world devoted to providing computational resources for scientific computing.

Perlmutter

Perlmutter is an HPE (Hewlett Packard Enterprise) Cray EX supercomputer, named in honor of Saul Perlmutter, an astrophysicist at Berkeley Lab who shared the 2011 Nobel Prize in Physics for his contributions to research showing that the expansion of the universe is accelerating.

Perlmutter, based on the HPE Cray Shasta platform, is a heterogeneous system comprising both CPU-only and GPU-accelerated nodes, and is expected to deliver 3-4 times the performance of Cori once installation is complete.

We are in the process of Perlmutter Phase 2 integration (adding CPU-only nodes and upgrading the system network to Slingshot 11). The final system will consist of 1,536 GPU-accelerated nodes, each with one AMD Milan processor and four NVIDIA A100 GPUs, and 3,072 CPU-only nodes, each with two AMD Milan processors. The actual number of nodes available will be in flux during the integration and acceptance of the full system.
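On a GPU-accelerated node, the attached A100s can be enumerated with standard NVIDIA tooling. The following is a minimal Python sketch that shells out to nvidia-smi (available wherever the NVIDIA driver is installed); the expected count of four GPUs is an assumption taken from the node description above, not a value the script learns from NERSC itself.

    import subprocess

    def list_gpus():
        """Query the NVIDIA driver for the GPUs visible on this node."""
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,name,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return [line.strip() for line in result.stdout.splitlines() if line.strip()]

    if __name__ == "__main__":
        gpus = list_gpus()
        for gpu in gpus:
            print(gpu)
        # A Perlmutter GPU node is described above as having 4 NVIDIA A100s.
        assert len(gpus) == 4, f"expected 4 GPUs, found {len(gpus)}"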

Cori

Cori is a Cray XC40 with a peak performance of about 30 petaflops. The system is named in honor of American biochemist Gerty Cori, the first American woman to win a Nobel Prize and the first woman to be awarded the prize in Physiology or Medicine. Cori comprises 2,388 Intel Xeon "Haswell" processor nodes and 9,688 Intel Xeon Phi "Knights Landing" (KNL) nodes. The system also has a large Lustre scratch file system.
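Because the two node types differ sharply in core count, a job can detect at run time which kind of node it landed on. The sketch below uses only the Python standard library; the hardware-thread counts (64 on a Haswell node, 272 on a KNL node) are assumptions based on the published node configurations, not values queried from NERSC.

    import os

    # Assumed hardware-thread counts for the two Cori node types:
    # Haswell: 32 cores x 2 hyperthreads; KNL: 68 cores x 4 hardware threads.
    NODE_TYPES = {64: "haswell", 272: "knl"}

    def detect_node_type():
        """Guess the Cori node type from the number of visible hardware threads."""
        threads = os.cpu_count()
        return NODE_TYPES.get(threads, f"unknown ({threads} threads)")

    if __name__ == "__main__":
        print(f"This node looks like: {detect_node_type()}")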

Cori Large Memory

Cori Large Memory consists of 20 nodes, each with 2 TB of memory and a 3.0 GHz AMD EPYC 7302 (Rome) processor. The nodes are available to high-priority scientific or technical campaigns that have a special need for this hardware. The initial focus is on supporting COVID-19 related research and preparing for the Perlmutter system (which will have a similar AMD processor).
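A quick way to confirm that a job is actually running on one of these large-memory nodes is to check the physical memory visible to the operating system. This is a minimal sketch using the POSIX sysconf interface from the Python standard library; the 2 TB figure comes from the node description above.

    import os

    def physical_memory_bytes():
        """Return the total physical memory reported by the OS (POSIX only)."""
        return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

    if __name__ == "__main__":
        total_tb = physical_memory_bytes() / 1024**4
        print(f"Total physical memory: {total_tb:.2f} TB")
        # A Cori Large Memory node should report close to 2 TB.
        if total_tb >= 1.9:
            print("This looks like a large-memory node.")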

Data Transfer Nodes

The data transfer nodes are NERSC servers dedicated to performing transfers between NERSC data storage resources, such as HPSS and the NERSC Global File System (NGF), and storage resources at other sites. These nodes are managed (and monitored for performance) as part of a collaborative effort between ESnet and NERSC to enable high-performance data movement over ESnet's high-bandwidth 100 Gbps wide-area network (WAN).
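For scripted transfers, a data transfer node can be treated as an ordinary SSH endpoint. Below is a minimal Python sketch that wraps rsync over SSH; the host name dtn01.nersc.gov, the user name, and the paths are illustrative assumptions, and for very large data sets managed tools such as Globus are generally preferred.

    import subprocess

    # Illustrative values; substitute your own user name, DTN host, and paths.
    DTN_HOST = "dtn01.nersc.gov"   # assumed data transfer node host name
    USER = "myusername"            # hypothetical NERSC user name
    REMOTE_PATH = "/global/cscratch1/sd/myusername/results/"  # hypothetical path
    LOCAL_DEST = "./results/"

    def pull_from_nersc():
        """Copy a directory from NERSC storage to the local machine via a DTN."""
        subprocess.run(
            ["rsync", "-av", "--partial",
             f"{USER}@{DTN_HOST}:{REMOTE_PATH}", LOCAL_DEST],
            check=True,
        )

    if __name__ == "__main__":
        pull_from_nersc()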