PyTorch

PyTorch is a high-productivity Deep Learning framework based on dynamic computation graphs and automatic differentiation. It is designed to be as close to native Python as possible for maximum flexibility and expressivity.

Using PyTorch at NERSC

There are multiple ways to use and run PyTorch on NERSC systems like Perlmutter.

Using NERSC PyTorch modules

The first approach is to use our provided PyTorch modules. This is the easiest and fastest way to get a complete Python + PyTorch environment with all the features supported by the system. On Perlmutter, the modules are named pytorch/{version} and are built with CUDA and NCCL support for GPU-accelerated distributed training. You can see which PyTorch versions are available with module avail pytorch. We generally recommend using the latest version so that you have the latest PyTorch features.

As an example, to load PyTorch 2.0.1, you should do:

module load pytorch/2.0.1
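
After loading the module, you can quickly verify that the environment works and that PyTorch can see the GPUs (run this on a node with a GPU allocated), for example:

import torch
print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a GPU is visible
print(torch.cuda.device_count())  # number of visible GPUs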

You can customize these module environments by installing your own Python packages on top. Simply do a user install with pip:

pip install --user ...

The modulefiles automatically set the $PYTHONUSERBASE environment variable for you, so your custom packages will be available every time you load that module.
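
If you want to confirm where these user installs go, you can inspect the user site prefix from within Python, for example:

import site
# With a pytorch module loaded, this reflects the module-specific
# $PYTHONUSERBASE, which is where "pip install --user" places packages.
print(site.USER_BASE)
print(site.getusersitepackages())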

Installing PyTorch yourself

Alternatively, you can install PyTorch into your own software environments. This gives you full control over the included packages and versions. We recommend using conda as described in our Python documentation, following the appropriate installation instructions at https://pytorch.org/get-started/locally/.

Note that if you install PyTorch via conda it will not have MPI support; however, conda can provide PyTorch with GPU and NCCL support.
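
You can check from Python which distributed backends your installation supports, for example:

import torch
import torch.distributed as dist

print(torch.cuda.is_available())  # GPU (CUDA) support
print(dist.is_nccl_available())   # NCCL backend for multi-GPU training
print(dist.is_mpi_available())    # MPI backend (typically False for conda/pip builds)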

If you need to build PyTorch from source, you can refer to our build scripts for PyTorch in the nersc-pytorch-build repository. If you need assistance, please open a support ticket at http://help.nersc.gov/.

Containers

It is also possible to run PyTorch on Perlmutter in your own Docker containers using shifter. Refer to the NERSC shifter documentation for help deploying your containers.

On Perlmutter, we provide prebuilt images based on NVIDIA GPU Cloud (NGC) containers. They are named like nersc/pytorch:ngc-20.09-v0. Note that the best performance for multi-node distributed training with containers is achieved by using the nccl-2.15 or nccl-2.18 shifter modules (for CUDA 11 and CUDA 12 containers, respectively), along with the default gpu shifter module.

Distributed training

PyTorch makes it fairly easy to get up and running with multi-GPU and multi-node training via its distributed package. For an overview, refer to the PyTorch distributed documentation.
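
As a minimal sketch of the typical data-parallel pattern on a Slurm system like Perlmutter, assuming one process per GPU launched with srun and MASTER_ADDR/MASTER_PORT exported in the batch script (the tiny model below is purely illustrative):

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Global rank, world size, and local rank from Slurm (one task per GPU).
rank = int(os.environ["SLURM_PROCID"])
world_size = int(os.environ["SLURM_NTASKS"])
local_rank = int(os.environ["SLURM_LOCALID"])

# MASTER_ADDR and MASTER_PORT must be set, e.g. in the batch script.
dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)

torch.cuda.set_device(local_rank)
model = torch.nn.Linear(16, 16).cuda()       # toy model for illustration
model = DDP(model, device_ids=[local_rank])

# ... training loop with a DistributedSampler-backed DataLoader goes here ...

dist.destroy_process_group()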

On Perlmutter, as noted in the Containers section above, the best performance for multi-node distributed training with containers is achieved by using the appropriate nccl shifter module for your container's CUDA version, along with the default gpu shifter module.

See below for some complete examples for PyTorch distributed training at NERSC.

Performance optimization

To optimize performance of PyTorch model training workloads on NVIDIA GPUs, we refer you to our Deep Learning at Scale Tutorial material from SC22, which includes guidelines for optimizing performance on a single NVIDIA GPU as well as best practices for scaling model training across many GPUs and nodes.
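
One of the single-GPU techniques covered in that material is automatic mixed precision; a minimal sketch (with a toy model and random data standing in for a real workload) looks like:

import torch

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

for step in range(10):
    optimizer.zero_grad()
    # Run the forward pass in reduced precision where it is numerically safe.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = torch.nn.functional.mse_loss(model(data), target)
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()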

Examples and tutorials

There is a set of example problems, datasets, models, and training code in this repository: https://github.com/NERSC/pytorch-examples

This repository can serve as a template for your research projects, with a flexible layout and code structure. It also demonstrates how to launch data-parallel distributed training jobs on our systems. The examples include MNIST image classification with a simple CNN and CIFAR10 image classification with a ResNet50 model.

We also provide a more lightweight PyTorch template for data-parallel distributed training, with optional Weights & Biases integration for experiment tracking and hyperparameter optimization, at: https://github.com/NERSC/nersc-dl-wandb

For a general introduction to coding in PyTorch, you can check out this great tutorial from the DL4Sci school at Berkeley Lab in 2020 by Evann Courdier.

Additionally, for an example focused on performance and scaling, we have the material and code example from our Deep Learning at Scale tutorial at SC22.

Finally, PyTorch has a nice set of official tutorials you can learn from as well.