# Perlmutter Scratch¶

Warning

This page is currently under active development. Check back soon for more content.

Perlmutter Scratch is an all-flash Lustre file system designed for high-performance temporary storage of large files. It is intended to support intensive I/O for jobs actively running on the Perlmutter system. We recommend that you run your jobs, especially data-intensive ones, from the Perlmutter Scratch File System.

## Usage¶

The Perlmutter Scratch File System should always be referenced using the environment variable `$PSCRATCH`. It is available from all Perlmutter compute nodes and is tuned for high performance.
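For example, a per-job working directory can be created under `$PSCRATCH` before launching work from it (a minimal sketch; the directory name `my_job_run` is illustrative, and the `/tmp` fallback exists only so the snippet runs outside Perlmutter, where `$PSCRATCH` is always defined):

```shell
# On Perlmutter, $PSCRATCH is always set; the fallback below is only
# for illustration on systems where it is not.
: "${PSCRATCH:=/tmp/pscratch-demo}"

# Create a per-job working directory on scratch and run from there.
mkdir -p "$PSCRATCH/my_job_run"
cd "$PSCRATCH/my_job_run"

# Launch I/O-heavy work from this directory, e.g.:
# sbatch my_job.sh
```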

## Quotas¶

If your scratch usage exceeds your quota, you will not be able to write to the file system until you reduce your usage.

## Performance¶

The Perlmutter Scratch File System is an all-flash file system. It has 35 PB of storage, an aggregate bandwidth of >5 TB/sec, and 4 million IOPS (4 KiB random). It has 16 metadata servers (MDSes), 274 I/O servers called OSSes, and 3,792 dual-ported NVMe SSDs.

### Default File Striping¶

Perlmutter's Scratch File System uses Progressive File Layout (PFL) to automatically increase the striping of files across OSTs as the files grow. A file's initial data is striped across a single OST, and the remaining data is striped across increasing numbers of OSTs according to this schema:

| Upper Data Threshold | Number of OSTs |
|----------------------|----------------|
| 1 GB                 | 1              |
| 10 GB                | 8              |
| 100 GB               | 24             |
| >100 GB              | 72             |

This should be reasonably efficient for all I/O types. However, if each one of your processes is doing I/O on a separate file (i.e. "file per process"), then you might find better performance by striping your files across only one OST. For more details about striping, please see our Lustre striping page.
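As a sketch, striping can be inspected and adjusted with the standard Lustre `lfs` utility (the directory names here are hypothetical; new files inherit the striping of the directory they are created in, and `lfs` is only available on Lustre clients such as Perlmutter nodes):

```shell
# Show the current layout (including PFL components) of a file or directory.
lfs getstripe "$PSCRATCH/my_run"

# For file-per-process workloads, restrict new files in a directory
# to a single OST (-c 1 sets the stripe count to 1).
mkdir -p "$PSCRATCH/fpp_output"
lfs setstripe -c 1 "$PSCRATCH/fpp_output"
```

Files written into `fpp_output` after the `setstripe` call will use a stripe count of 1 instead of the PFL default.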

## Backup¶

All NERSC users should back up important files on a regular basis. Ultimately, it is the user's responsibility to prevent data loss.

Warning

The scratch file system is subject to purging. Please make sure to back up your important files (e.g. to HPSS).
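For example, results can be archived to HPSS with `htar` or `hsi` (a sketch; the paths and names are hypothetical, and these commands only work on NERSC systems with HPSS access):

```shell
# Bundle a results directory into a tar archive stored directly on HPSS.
cd "$PSCRATCH"
htar -cvf my_results.tar my_results/

# Or copy a single file to HPSS with hsi.
hsi put my_results/summary.dat
```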