Quantum ESPRESSO/PWSCF¶
Quantum ESPRESSO is an integrated suite of computer codes for electronic structure calculations and materials modeling at the nanoscale. It builds on the electronic structure codes PWscf, PHONON, CP90, FPMD, and Wannier. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).
Availability and Supported Architectures at NERSC¶
Quantum ESPRESSO is available at NERSC as a provided support level package. Quantum ESPRESSO 7.x supports GPU execution.
Versions Supported¶
| Cori Haswell | Cori KNL | Perlmutter GPU | Perlmutter CPU |
| --- | --- | --- | --- |
| 7.x | 7.x | 7.x | 7.x |
Use the module avail espresso command to see a full list of available sub-versions.
Application Information, Documentation, and Support¶
Quantum ESPRESSO is freely available and can be downloaded from the Quantum ESPRESSO home page. See the preceding link for more resources, including documentation for building the code and preparing input files, tutorials, pseudopotentials, and auxiliary software. For troubleshooting, see the FAQ for solutions to common issues encountered while building and running the package. See the Quantum ESPRESSO users forum and mail archives for additional support-related questions. For help with issues specific to the NERSC module, please file a support ticket.
Using Quantum ESPRESSO at NERSC¶
Use the module avail command to see which versions are available and module load espresso/<version> to load the environment:
nersc$ module avail espresso
espresso/7.0-libxc-5.2.2-cpu
espresso/7.0-libxc-5.2.2-gpu (D)
nersc$ module load espresso/7.0-libxc-5.2.2-gpu
The preceding command loads Quantum ESPRESSO 7.0 built for GPUs and linked to the LibXC v5.2.2 density functional library.
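To inspect exactly what a given module provides (dependent modules, paths, and environment variables), you can print its definition. The module name below is the GPU build shown above; the exact output depends on the installation:
nersc$ module show espresso/7.0-libxc-5.2.2-gpu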
Sample Job Scripts¶
See the example jobs page for additional examples and information about jobs. For all routines except pw.x, run Quantum ESPRESSO in full MPI mode, as there is currently no efficient OpenMP implementation available.
Cori Haswell
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH -C haswell
#SBATCH -t 02:00:00
#SBATCH -J my_job
export OMP_NUM_THREADS=1
module load espresso/7.0-libxc-5.2.2
srun ph.x -input test.in
Warning
Pay close attention to the explicit setting of OMP_NUM_THREADS=1 in the script above: this is the optimal choice when running in pure MPI mode, i.e. with MPI tasks only and no OpenMP threads.
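To use a script like this one, save it to a file and submit it to the batch system; the filename below is arbitrary:
nersc$ sbatch my_job.sh
nersc$ squeue -u $USER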
Hybrid DFT¶
We have optimized the hybrid DFT calculations in Quantum ESPRESSO (pw.x). These changes are described in our Quantum ESPRESSO case study. The following scripts provide the best pw.x performance for hybrid functional calculations:
Cori Haswell
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=16
#SBATCH -C haswell
#SBATCH -t 02:00:00
#SBATCH -J my_job
export OMP_NUM_THREADS=8
export OMP_PLACES=threads
export OMP_PROC_BIND=spread
module load espresso/7.0-libxc-5.2.2
srun --cpu-bind=cores pw.x -nbgrp 8 -input test.in
Cori KNL
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=68
#SBATCH -C knl,quad,cache
#SBATCH -t 02:00:00
#SBATCH -J my_job
export OMP_NUM_THREADS=16
export OMP_PLACES=threads
export OMP_PROC_BIND=spread
module load espresso/7.0-libxc-5.2.2
srun --cpu-bind=cores pw.x -nbgrp 8 -input test.in
Tip
For band-group parallelization, it is recommended to run one band group per MPI rank. Keep in mind that you cannot use more band groups than there are bands in your system, so reduce the number accordingly if you encounter issues.
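For example, a minimal sketch of this rule inside a job script (assuming the hypothetical variable ranks_per_node matches the --ntasks-per-node setting, and that the resulting count does not exceed the number of bands):
# One band group per MPI rank: nbgrp = nodes * ranks-per-node
ranks_per_node=4
nbgrp=$(( SLURM_JOB_NUM_NODES * ranks_per_node ))
srun --cpu-bind=cores pw.x -nbgrp ${nbgrp} -input test.in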
Note
The new implementation is much more efficient, so you may be able to use far fewer nodes and still obtain the solution within the same wall-clock time.
Calculations on Perlmutter¶
Perlmutter GPU
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH --gpu-bind=map_gpu:0,1,2,3
#SBATCH -C gpu
#SBATCH -t 02:00:00
#SBATCH -J my_job
export SLURM_CPU_BIND="cores"
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
export OMP_NUM_THREADS=1
module load espresso/7.0-libxc-5.2.2-gpu
srun pw.x -npool 2 -ndiag 1 -input scf.in
Tip
For GPU runs, use one MPI rank per GPU and set OMP_NUM_THREADS=1 to avoid oversubscribing each GPU.
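If you want to verify the rank-to-GPU mapping before a long run, one quick check (assuming Slurm exports CUDA_VISIBLE_DEVICES per task when --gpu-bind=map_gpu is used, as in the script above) is to echo the binding from each task inside the same job:
srun bash -c 'echo "rank ${SLURM_PROCID}: CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"'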
Perlmutter CPU
#!/bin/bash
#
# Run on 4 nodes using 16 MPI ranks-per-node
# using 8 OpenMP threads-per-MPI-rank
#
#SBATCH --qos=regular
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=16
#SBATCH -C cpu
#SBATCH -t 2:00:00
export SLURM_CPU_BIND="cores"
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
export OMP_NUM_THREADS=8
export HDF5_USE_FILE_LOCKING=FALSE
# QE parallelization parameters (problem-dependent)
nimage=1
npool=1
nband=16
ntg=1
ndiag=1
flags="-nimage $nimage -npool $npool -nband $nband -ntg $ntg -ndiag $ndiag"
module load espresso/7.0-libxc-5.2.2-cpu
srun pw.x $flags -input test.in >& test.out
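All of the job scripts above assume that a pw.x input file (test.in or scf.in) already exists in the submission directory. As a point of reference only, a minimal self-consistent-field input for bulk silicon could be generated as below; the pseudopotential filename, pseudo_dir, and cutoff are placeholders that must be adapted to the pseudopotential you actually use:
# Hypothetical minimal pw.x input for bulk silicon (placeholder pseudopotential)
cat > scf.in << 'EOF'
&CONTROL
  calculation = 'scf'
  prefix      = 'si'
  outdir      = './tmp'
  pseudo_dir  = './pseudo'
/
&SYSTEM
  ibrav     = 2
  celldm(1) = 10.26
  nat       = 2
  ntyp      = 1
  ecutwfc   = 40.0
/
&ELECTRONS
  conv_thr = 1.0d-8
/
ATOMIC_SPECIES
  Si  28.086  Si.upf
ATOMIC_POSITIONS crystal
  Si 0.00 0.00 0.00
  Si 0.25 0.25 0.25
K_POINTS automatic
  8 8 8 0 0 0
EOF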
Building Quantum ESPRESSO from Source¶
Some users may be interested in tweaking the Quantum ESPRESSO build parameters and building QE themselves. Quantum ESPRESSO tarballs are available for download at the developers' download page. Our build instructions for the QE module are listed below.
Building on Cori
The following bash script was used to build Quantum ESPRESSO versions > 5.4 for both the Haswell and KNL partitions of Cori. To use it, set the SOFTWAREPATH and LIBXCPATH variables to the location where you would like to install the binaries and the location of your LibXC installation, respectively, and set the espresso_version and xc_version variables as appropriate. Then execute the script below from the directory containing the per-architecture QE source directories (the ones entered by the pushd commands):
#!/bin/bash
# Cori HSW/KNL build script example: Quantum Espresso 7.0 linked to libxc 5.2.2
# This installs to two target binary directories:
# ${SOFTWAREPATH}/espresso/qe-${espresso_version}-libxc-${xc_version}-hsw
# ${SOFTWAREPATH}/espresso/qe-${espresso_version}-libxc-${xc_version}-knl
# SET THE FOLLOWING VARIABLES BEFORE RUNNING
SOFTWAREPATH=<path-where-you-want-to-install-binaries>
LIBXCPATH=<path-to-your-libxc-libraries>
espresso_version="7.0"
xc_version=5.2.2
export CRAYPE_LINK_TYPE=static
module load cray-hdf5-parallel
module unload cray-libsci
for version in libxc-${xc_version}-scalapack; do
    for arch in hsw knl; do
        pushd qe-${espresso_version}-libxc-${xc_version}-${arch}

        if [ "${arch}" == "knl" ]; then
            if [ "${CRAY_CPU_TARGET}" == "haswell" ]; then
                echo "Current CRAY_CPU_TARGET=${CRAY_CPU_TARGET}, need to swap to knl"
                module swap craype-haswell craype-mic-knl
            fi
        elif [ "${arch}" == "hsw" ]; then
            if [ "${CRAY_CPU_TARGET}" == "knl" ]; then
                echo "Current CRAY_CPU_TARGET=${CRAY_CPU_TARGET}, need to swap to haswell"
                module swap craype-mic-knl craype-haswell
            fi
        else
            ARCHFLAGS=" -xCORE-AVX-I"
        fi

        # Scalapack linker flags, cannot be found automatically
        export scalapackflags="${MKLROOT}/lib/intel64/libmkl_scalapack_lp64.a -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a ${MKLROOT}/lib/intel64/libmkl_intel_thread.a ${MKLROOT}/lib/intel64/libmkl_core.a ${MKLROOT}/lib/intel64/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -liomp5 -lpthread -lm -ldl"

        # Clean everything up so that no libraries with wrong architecture are around
        make veryclean

        export FC=ifort
        export F90=ifort
        export MPIF90=ftn
        export FCFLAGS="-mkl -O3 -qopenmp"
        export CC=icc
        export CFLAGS="-mkl -O3 -qopenmp"
        export LDFLAGS="${FCFLAGS}"
        export F90FLAGS="${FCFLAGS}"
        export MPIF90FLAGS="${F90FLAGS}"

        ./configure --prefix=${SOFTWAREPATH}/espresso/qe-${espresso_version}-libxc-${xc_version}-${arch} \
            --enable-openmp \
            --enable-parallel \
            --disable-shared \
            --with-scalapack=intel \
            --with-libxc --with-libxc-prefix=${LIBXCPATH} \
            --with-hdf5=${HDF5_DIR}

        # Modify 'make.inc' to link libraries not found by configure
        cp -p make.inc make.inc-prev
        sed -i "s|^HDF5_LIB =|HDF5_LIB = -L${HDF5_DIR}/lib -lhdf5|g" make.inc
        sed -i "s|^F90FLAGS.*=|F90FLAGS = ${ARCHFLAGS}|g" make.inc
        sed -i "s|^FFLAGS.*=|FFLAGS = ${ARCHFLAGS}|g" make.inc
        sed -i "s|^CFLAGS.*=|CFLAGS = ${ARCHFLAGS}|g" make.inc
        sed -i "s|^MANUAL_DFLAGS =|MANUAL_DFLAGS = -D__SCALAPACK|g" make.inc
        sed -i "s|^SCALAPACK_LIBS =|SCALAPACK_LIBS = ${scalapackflags}|g" make.inc

        # Clean up
        make clean

        j=8
        make -j ${j} all
        make -j ${j} want
        make -j ${j} w90
        make -j ${j} gipaw
        make -j ${j} d3q
        make -j ${j} epw
        make install

        popd
    done
done
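Note that the script builds from two separate copies of the QE source tree, one per target architecture, whose names must match the directories entered by the pushd commands. A minimal way to prepare that layout (assuming the qe-7.0 release archive has already been downloaded from the developers' download page; the archive name may differ slightly between releases) is:
tar xzf qe-7.0-ReleasePack.tgz    # typically unpacks to qe-7.0/
cp -r qe-7.0 qe-7.0-libxc-5.2.2-hsw
mv qe-7.0 qe-7.0-libxc-5.2.2-knl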
Building on Perlmutter for GPUs
To build Quantum ESPRESSO 7.0 on Perlmutter for GPUs, navigate to the main qe-7.0 directory and execute the following commands:
perlmutter$ export LC_ALL=C
perlmutter$ module load gpu ; module swap PrgEnv-gnu PrgEnv-nvidia ; module unload darshan ; module unload cray-libsci ; module load cray-fftw ; module load cray-hdf5-parallel
perlmutter$ ./configure CC=cc CXX=CC FC=ftn MPIF90=ftn --with-cuda=$CUDA_HOME --with-cuda-cc=80 --with-cuda-runtime=11.7 --enable-openmp --enable-parallel --with-scalapack=no FFLAGS="-Mpreprocess" FCFLAGS="-Mpreprocess" LDFLAGS="-acc"
perlmutter$ make all
Tip
The ./configure command may fail to select the desired libraries for linking, so one may want to edit make.inc before executing make all. We recommend linking the following libraries for a Perlmutter GPU build:
BLAS_LIBS = ${CRAY_NVIDIA_PREFIX}/compilers/lib/libblas.so
LAPACK_LIBS = ${CRAY_NVIDIA_PREFIX}/compilers/lib/liblapack.so ${CRAY_NVIDIA_PREFIX}/compilers/lib/libblas.so
FFT_LIBS = $(FFTW_DIR)/libfftw3.so \
$(FFTW_DIR)/libfftw3_threads.so \
$(FFTW_DIR)/libfftw3_omp.so \
${CUDALIB} -lstdc++
HDF5_LIBS = $(HDF5_DIR)/lib/libhdf5hl_fortran.so \
$(HDF5_DIR)/lib/libhdf5_hl.so \
$(HDF5_DIR)/lib/libhdf5_fortran.so \
$(HDF5_DIR)/lib/libhdf5.so -lz -ldl
Building on Perlmutter for CPUs
To build Quantum ESPRESSO 7.0 targeting the Perlmutter CPU partition, execute the following commands in the main qe-7.0 directory:
perlmutter$ module load cpu ; module load PrgEnv-gnu ; module load cray-hdf5-parallel ; module load cray-fftw ; module unload cray-libsci ; module unload darshan
perlmutter$ ./configure CC=gcc CXX=CC FC=gfortran MPIF90=ftn --enable-openmp --enable-parallel --disable-shared --with-scalapack=yes --with-hdf5=${HDF5_DIR}
perlmutter$ make all
Tip
The ./configure command may fail to select the desired libraries for linking, so one may want to edit make.inc before executing make all. We recommend linking the following libraries for a Perlmutter CPU build:
FFT_LIBS = $(FFTW_DIR)/libfftw3.so \
$(FFTW_DIR)/libfftw3_threads.so \
$(FFTW_DIR)/libfftw3_omp.so
HDF5_LIBS = $(HDF5_DIR)/lib/libhdf5hl_fortran.so \
$(HDF5_DIR)/lib/libhdf5_hl.so \
$(HDF5_DIR)/lib/libhdf5_fortran.so \
$(HDF5_DIR)/lib/libhdf5.so -lz -ldl
Tip
To run the included examples, one may need to modify the prefix and directory paths in the file environment_variables in the main QE directory.
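For reference, the entries most commonly adjusted in environment_variables are the installation prefix, the pseudopotential and scratch directories, and the parallel launch prefix. The values below are placeholders only; the variable names follow those defined in that file:
# Placeholder values; edit the copy in your own QE source tree
PREFIX=$HOME/qe-7.0
BIN_DIR=$PREFIX/bin
PSEUDO_DIR=$PREFIX/pseudo
TMP_DIR=$SCRATCH/qe_tmp
PARA_PREFIX="srun -n 4"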
Linking LibXC to Quantum ESPRESSO¶
If one is interested in linking Quantum ESPRESSO to a pre-existing build of the LibXC density functional library, add the following options to the ./configure command before building Quantum ESPRESSO:
--with-libxc=yes --with-libxc-prefix=<path-to-your-libxc-install-directory>
Note
LibXC must be built using the same Fortran compiler as Quantum ESPRESSO, or the ./configure script will fail to find the LibXC library.
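If you do not yet have a LibXC installation, a minimal build sketch (assuming the libxc 5.2.2 tarball from the LibXC website and the Cray compiler wrappers of the currently loaded programming environment; adjust the install path) is:
# Build LibXC with the same Fortran compiler wrapper used for QE
tar xzf libxc-5.2.2.tar.gz
cd libxc-5.2.2
./configure --prefix=<path-to-your-libxc-install-directory> CC=cc FC=ftn
make -j 8
make install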
Related Applications¶
User Contributed Information¶
Please help us improve this page
Users are invited to contribute helpful information and corrections through our GitLab repository.