

Quantum ESPRESSO is an integrated suite of computer codes for electronic structure calculations and materials modeling at the nanoscale. It builds on the electronic structure codes PWscf, PHONON, CP90, FPMD, and Wannier. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).

Availability and Supported Architectures at NERSC

Quantum ESPRESSO is available at NERSC as a provided support level package. Quantum ESPRESSO 7.x supports GPU execution.

Versions Supported

Perlmutter GPU: 7.x
Perlmutter CPU: 7.x

Use the module avail espresso command to see a full list of available sub-versions.

Application Information, Documentation, and Support

Quantum ESPRESSO is freely available and can be downloaded from the Quantum ESPRESSO home page. See the preceding link for more resources, including documentation for building the code and preparing input files, tutorials, pseudopotentials, and auxiliary software. For troubleshooting, see the FAQ for solutions to common issues encountered while building and running the package. See the Quantum ESPRESSO users forum and mail archives for additional support-related questions. For help with issues specific to the NERSC module, please file a support ticket.

Using Quantum ESPRESSO at NERSC

Use the module spider command to see which versions are available and module load espresso/<version> to load the environment:

nersc$ module spider espresso

nersc$ module load espresso/7.0-libxc-5.2.2-gpu

The preceding command loads Quantum ESPRESSO 7.0 built for GPUs and linked to the LibXC v5.2.2 density functional library.

Sample Job Scripts

See the example jobs page for additional examples and information about jobs. For all routines except pw.x, run Quantum ESPRESSO in full MPI mode (no OpenMP threads), as those routines currently have no efficient OpenMP implementation.

Perlmutter GPU
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH --gpu-bind=map_gpu:0,1,2,3
#SBATCH -C gpu
#SBATCH -t 02:00:00
#SBATCH -J my_job

export SLURM_CPU_BIND="cores"
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
export OMP_NUM_THREADS=1

module load espresso/7.0-libxc-5.2.2-gpu
srun pw.x -npool 2 -ndiag 1 -input


For GPU runs, use one MPI rank per GPU and set OMP_NUM_THREADS=1 to avoid oversubscribing each GPU.
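As a quick sanity check of that layout, the node and task counts from the sample script above can be verified with shell arithmetic (Perlmutter GPU nodes have 4 GPUs each):

```shell
# Sanity arithmetic for the sample GPU job above: total MPI ranks should
# equal total GPUs. Values mirror the script; gpus_per_node=4 matches
# Perlmutter GPU nodes.
nodes=2
tasks_per_node=4
gpus_per_node=4
ranks=$((nodes * tasks_per_node))
gpus=$((nodes * gpus_per_node))
if [ "$ranks" -eq "$gpus" ]; then
  echo "OK: $ranks MPI ranks on $gpus GPUs (one rank per GPU)"
else
  echo "WARNING: $ranks ranks for $gpus GPUs"
fi
```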

Perlmutter CPU
#!/bin/bash
# Run on 4 nodes with 16 MPI ranks per node
# and 8 OpenMP threads per MPI rank
#SBATCH --qos=regular
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=16
#SBATCH -C cpu
#SBATCH -t 02:00:00

export SLURM_CPU_BIND="cores"
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
export OMP_NUM_THREADS=8

# QE parallelization parameters (problem-dependent); set the shell
# variables nimage, npool, nband, ntg, and ndiag before submitting
flags="-nimage $nimage -npool $npool -nband $nband -ntg $ntg -ndiag $ndiag"

module load espresso/7.0-libxc-5.2.2-cpu
srun pw.x $flags -input >& test.out 
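Choosing these parallelization values is problem-dependent. As one common heuristic (a sketch only, with made-up example counts), -npool is set to the largest divisor of the total MPI rank count that does not exceed the number of k-points, so each pool gets an equal share of the ranks:

```shell
# Heuristic sketch: pick npool as the largest divisor of the total MPI rank
# count that does not exceed the number of k-points. Example values only.
nranks=64    # total MPI ranks, e.g. nodes * ntasks-per-node
nkpt=12      # number of k-points in the input file (hypothetical)

npool=$nkpt
[ "$npool" -gt "$nranks" ] && npool=$nranks
while [ $((nranks % npool)) -ne 0 ]; do
  npool=$((npool - 1))
done
echo "npool=$npool"    # prints npool=8 for 64 ranks and 12 k-points
```

Similar divisibility reasoning applies to the other flags; consult the Quantum ESPRESSO parallelization documentation for the meaning of each level.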

Building Quantum ESPRESSO from Source

Some users may be interested in tweaking the Quantum ESPRESSO build parameters and building QE themselves. Quantum ESPRESSO tarballs are available for download at the developers' download page. Our build instructions for the QE module are listed below.

Building on Perlmutter for GPUs

To build Quantum ESPRESSO 7.0 on Perlmutter for GPUs, navigate to the main qe-7.0 directory and execute the following commands:

perlmutter$ export LC_ALL=C
perlmutter$ module load gpu ; module swap PrgEnv-gnu PrgEnv-nvidia ; module unload darshan ; module unload cray-libsci ; module load cray-fftw ; module load cray-hdf5-parallel
perlmutter$ ./configure CC=cc CXX=CC FC=ftn MPIF90=ftn --with-cuda=$CUDA_HOME --with-cuda-cc=80 --with-cuda-runtime=11.7 --enable-openmp --enable-parallel --with-scalapack=no FFLAGS="-Mpreprocess" FCFLAGS="-Mpreprocess" LDFLAGS="-acc"
perlmutter$ make all


The ./configure command may fail to select the desired libraries for linking, so you may want to edit the generated make.inc file before running make all. We recommend linking the following libraries for a Perlmutter GPU build:

BLAS_LIBS      = ${CRAY_NVIDIA_PREFIX}/compilers/lib/
LAPACK_LIBS    = ${CRAY_NVIDIA_PREFIX}/compilers/lib/ ${CRAY_NVIDIA_PREFIX}/compilers/lib/
FFT_LIBS       = $(FFTW_DIR)/ \
               $(FFTW_DIR)/ \
               $(FFTW_DIR)/ \
               ${CUDALIB}  -lstdc++
HDF5_LIBS = $(HDF5_DIR)/lib/ \
                $(HDF5_DIR)/lib/ \
                $(HDF5_DIR)/lib/ \
                $(HDF5_DIR)/lib/ -lz -ldl


For Quantum ESPRESSO version 7.2, which has much more complete GPU-accelerated support, OpenACC must also be explicitly added to the DFLAGS variable in make.inc, in addition to the library edits above.
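As an illustration only (the define list below is a placeholder; keep whatever defines your ./configure run generated and append the OpenACC flag, matching the -acc flag already passed in LDFLAGS above):

```
# make.inc (QE 7.2, illustrative only -- keep your configure-generated defines)
DFLAGS = -D__MPI -D__CUDA -acc
```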

Building on Perlmutter for CPUs

To build Quantum ESPRESSO 7.0 targeting the Perlmutter CPU partition, execute the following commands in the main qe-7.0 directory:

perlmutter$ module load cpu ; module load PrgEnv-gnu ; module load cray-hdf5-parallel ; module load cray-fftw ; module unload cray-libsci ; module unload darshan 
perlmutter$ ./configure CC=gcc CXX=CC FC=gfortran MPIF90=ftn --enable-openmp --enable-parallel --disable-shared --with-scalapack=yes --with-hdf5=${HDF5_DIR}
perlmutter$ make all


The ./configure command may fail to select the desired libraries for linking, so you may want to edit the generated make.inc file before running make all. We recommend linking the following libraries for a Perlmutter CPU build:

FFT_LIBS       = $(FFTW_DIR)/ \
               $(FFTW_DIR)/
HDF5_LIBS = $(HDF5_DIR)/lib/ \
                $(HDF5_DIR)/lib/ \
                $(HDF5_DIR)/lib/ \
                $(HDF5_DIR)/lib/ -lz -ldl


To run the included examples, you may need to modify the prefix and directory paths in the environment_variables file in the main QE directory.
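The variables most often adjusted there are PREFIX, PSEUDO_DIR, TMP_DIR, and PARA_PREFIX. An excerpt with placeholder values (these are examples to adapt, not NERSC defaults):

```
# environment_variables excerpt (placeholder values; adjust for your setup)
PREFIX=/path/to/qe-7.0            # root of your QE tree
PSEUDO_DIR=$PREFIX/pseudo         # where pseudopotentials are stored
TMP_DIR=$SCRATCH/qe_tmp           # scratch space for temporary files
PARA_PREFIX="srun -n 4"           # launcher prepended to each example run
```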

Linking LibXC to Quantum ESPRESSO

If one is interested in linking Quantum ESPRESSO to a pre-existing build of the LibXC density functional library, add the following options to the ./configure command before building Quantum ESPRESSO:

--with-libxc=yes --with-libxc-prefix=<path-to-your-libxc-install-directory>


LibXC must be built using the same Fortran compiler as Quantum ESPRESSO, or the ./configure script will fail to find the LibXC library.
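For instance, a LibXC build matching the ftn compiler wrapper used above might look like the following; the version number and install prefix are examples, not the exact commands behind the NERSC module:

```
perlmutter$ cd libxc-5.2.2
perlmutter$ ./configure CC=cc FC=ftn --prefix=$HOME/libxc/5.2.2-nv
perlmutter$ make -j 8 && make install
```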

User Contributed Information

Please help us improve this page

Users are invited to contribute helpful information and corrections through our GitLab repository.