Quantum ESPRESSO/PWSCF¶
Quantum ESPRESSO is an integrated suite of computer codes for electronic structure calculations and materials modeling at the nanoscale. It builds on the electronic structure codes PWscf, PHONON, CP90, FPMD, and Wannier. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).
Availability and Supported Architectures at NERSC¶
Quantum ESPRESSO is available at NERSC as a provided support level package. Quantum ESPRESSO 7.x supports GPU execution.
Versions Supported¶
| Cori Haswell | Cori KNL | Perlmutter GPU | Perlmutter CPU |
|---|---|---|---|
| 7.x | 7.x | 7.x | 7.x |
Use the `module avail espresso` command to see a full list of available sub-versions.
Application Information, Documentation, and Support¶
Quantum ESPRESSO is freely available and can be downloaded from the Quantum ESPRESSO home page. See the preceding link for more resources, including documentation for building the code and preparing input files, tutorials, pseudopotentials, and auxiliary software. For troubleshooting, see the FAQ for solutions to common issues encountered while building and running the package. See the Quantum ESPRESSO users forum and mail archives for additional support-related questions. For help with issues specific to the NERSC module, please file a support ticket.
Using Quantum ESPRESSO at NERSC¶
Use the `module avail` command to see which versions are available and `module load espresso/<version>` to load the environment:
nersc$ module avail espresso
espresso/7.0-libxc-5.2.2-gpu
espresso/7.0-libxc-5.2.2-cpu
nersc$ module load espresso/7.0-libxc-5.2.2-gpu
The preceding command loads Quantum ESPRESSO 7.0 built for GPUs and linked to the LibXC v5.2.2 density functional library.
Sample Job Scripts¶
See the example jobs page for additional examples and information about jobs. For all routines except `pw.x`, run Quantum ESPRESSO in full MPI mode, as there is currently no efficient OpenMP implementation available.
Cori Haswell
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH -C haswell
#SBATCH -t 02:00:00
#SBATCH -J my_job
export OMP_NUM_THREADS=1
module load espresso/7.0-libxc-5.2.2
srun ph.x -input test.in
Warning
Pay close attention to the explicit setting of `OMP_NUM_THREADS=1` when running in pure MPI mode; this is the optimal setting when you intend to run with MPI tasks only.
Hybrid DFT¶
We have optimized the hybrid DFT calculations in Quantum ESPRESSO (`pw.x`). These changes are described in our Quantum ESPRESSO case study. The following scripts provide the best `pw.x` performance for hybrid functional calculations:
Cori Haswell
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=16
#SBATCH -C haswell
#SBATCH -t 02:00:00
#SBATCH -J my_job
export OMP_NUM_THREADS=8
export OMP_PLACES=threads
export OMP_PROC_BIND=spread
module load espresso/7.0-libxc-5.2.2
srun --cpu-bind=cores pw.x -nbgrp 8 -input test.in
Cori KNL
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=68
#SBATCH -C knl,quad,cache
#SBATCH -t 02:00:00
#SBATCH -J my_job
export OMP_NUM_THREADS=16
export OMP_PLACES=threads
export OMP_PROC_BIND=spread
module load espresso/7.0-libxc-5.2.2
srun --cpu-bind=cores pw.x -nbgrp 8 -input test.in
Tip
For band-group parallelization, it is recommended to run one band group per MPI rank. Keep in mind, however, that it is not possible to use more band groups than there are bands in your system, so adjust the number accordingly if you encounter issues; see the sketch below for one way to size the band groups.
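As a minimal sketch of that sizing rule (the rank and band counts below are illustrative, not values taken from any particular input), you could compute the band-group count in the job script before launching `pw.x`:

NTASKS=8                                     # total MPI ranks, e.g. 2 nodes x 4 tasks per node
NBND=64                                      # number of bands in your pw.x input (hypothetical value)
NBGRP=$(( NTASKS < NBND ? NTASKS : NBND ))   # one band group per rank, capped at the band count
srun --cpu-bind=cores pw.x -nbgrp ${NBGRP} -input test.in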
Note
The new implementation is much more efficient, so you may be able to use far fewer nodes and still obtain the solution within the same wallclock time.
Calculations on Perlmutter¶
Perlmutter GPU
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH --gpu-bind=map_gpu:0,1,2,3
#SBATCH -C gpu
#SBATCH -t 02:00:00
#SBATCH -J my_job
export SLURM_CPU_BIND="cores"
export OMP_PROC_BIND=true
export OMP_PLACES=threads
export OMP_NUM_THREADS=1
module load espresso/7.0-libxc-5.2.2-gpu
srun pw.x -npool 2 -ndiag 1 -input scf.in
Tip
For GPU runs, use one MPI rank per GPU and set `OMP_NUM_THREADS=1` to avoid oversubscribing each GPU.
Perlmutter CPU
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=256
#SBATCH -C cpu
#SBATCH -t 2:00:00
#SBATCH -J my_job
export SLURM_CPU_BIND="cores"
export OMP_PROC_BIND=true
export OMP_PLACES=threads
export OMP_NUM_THREADS=1
export HDF5_USE_FILE_LOCKING=FALSE
module load espresso/7.0-libxc-5.2.2-cpu
srun ph.x -input test.in
Building Quantum ESPRESSO from Source¶
Some users may be interested in tweaking the Quantum ESPRESSO build parameters and building QE themselves. Quantum ESPRESSO tarballs are available for download at the developers' download page. Our build instructions for the QE module are listed below.
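For example, after downloading a release tarball (the file name below is illustrative; use the name of the archive you actually downloaded), unpack it and work from the resulting source directory:

nersc$ tar -xzf qe-7.0.tar.gz
nersc$ cd qe-7.0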
Note
Not all versions are available for all architectures or compilers.
Building on Cori
The following procedure was used to build Quantum ESPRESSO versions >5.4 on Cori. In the root QE directory:
cori$ ./configure
cori$ cp /usr/common/software/espresso/<version>/<arch>/<comp>/make.inc .
cori$ make <application-name, e.g. pw>
where `<version>` specifies the version, `<arch>` the architecture (usually `hsw` or `knl` for Haswell and KNL, respectively), and `<comp>` the compiler (usually `gnu` or `intel`).
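For example (illustrative values only; check which version, architecture, and compiler combinations are actually present under the espresso software directory), a 7.0 Haswell build with the Intel compiler would look like:

cori$ ./configure
cori$ cp /usr/common/software/espresso/7.0/hsw/intel/make.inc .
cori$ make pw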
Building on Perlmutter for GPUs
To build Quantum ESPRESSO 7.0 on Perlmutter for GPUs, navigate to the main `qe-7.0` directory and execute the following commands:
perlmutter$ export LC_ALL=C
perlmutter$ module load PrgEnv-nvidia ; module load cray-fftw/3.3.8.12 ; module load cudatoolkit/11.0
perlmutter$ ./configure CC=cc CXX=CC FC=ftn MPIF90=ftn --with-cuda=$CUDA_HOME --with-cuda-cc=80 --with-cuda-runtime=11.0 --enable-openmp --with-scalapack=no FFLAGS="-Mpreprocess" FCFLAGS="-Mpreprocess" LDFLAGS="-acc"
perlmutter$ make all
Building on Perlmutter for CPUs
To build Quantum ESPRESSO 7.0 targeting the Perlmutter CPU partition, execute the following commands in the main `qe-7.0` directory:
perlmutter$ export CRAYPE_LINK_TYPE=static
perlmutter$ module load gcc/10.3.0 ; module load cray-fftw ; module load cray-hdf5-parallel
perlmutter$ ./configure CC=gcc CXX=CC FC=gfortran MPIF90=ftn --enable-openmp --enable-parallel --disable-shared --with-scalapack=yes --with-hdf5=${HDF5_DIR}
perlmutter$ make all
Tip
To compile Quantum ESPRESSO with the FFTW library, it may be necessary to add the path to the FFTW libraries to the `LD_LIBRARY_PATH` environment variable.
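For instance, assuming the cray-fftw module is loaded and that it sets `FFTW_DIR` to the FFTW library directory (as the Cray PE modules typically do), the path can be appended like this:

perlmutter$ module load cray-fftw
perlmutter$ export LD_LIBRARY_PATH=${FFTW_DIR}:${LD_LIBRARY_PATH}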
Tip
To run the included examples, one may need to modify the prefix and directory paths in the file `environment_variables` in the main QE directory.
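A minimal sketch of that kind of edit in `environment_variables` (the paths below are placeholders; point them at your own build, pseudopotential, and scratch locations):

PREFIX=$HOME/qe-7.0                  # root of your QE source/build tree
BIN_DIR=$PREFIX/bin                  # where pw.x, ph.x, ... were built
PSEUDO_DIR=$PREFIX/pseudo            # pseudopotential directory
TMP_DIR=$SCRATCH/qe-tmp              # scratch space used by the examples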
Linking LibXC to Quantum ESPRESSO¶
If one is interested in linking Quantum ESPRESSO to a pre-existing build of the LibXC density functional library, add the following options to the `./configure` command before building Quantum ESPRESSO:
--with-libxc=yes --with-libxc-prefix=<path-to-your-libxc-install-directory>
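For example, combined with the Perlmutter CPU configure line shown above (the LibXC install path is a placeholder for wherever your LibXC build lives):

perlmutter$ ./configure CC=gcc CXX=CC FC=gfortran MPIF90=ftn --enable-openmp --enable-parallel --disable-shared --with-scalapack=yes --with-hdf5=${HDF5_DIR} --with-libxc=yes --with-libxc-prefix=$HOME/libxc-5.2.2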
Note
LibXC must be built using the same Fortran compiler as Quantum ESPRESSO, or the `./configure` script will fail to find the LibXC library.
Related Applications¶
User Contributed Information¶
Please help us improve this page
Users are invited to contribute helpful information and corrections through our GitLab repository.