BerkeleyGW¶
The BerkeleyGW package is a set of computer codes that calculates the quasiparticle properties and optical responses of a wide variety of materials, from bulk periodic crystals to nanostructures such as slabs, wires, and molecules. It takes as input mean-field results, such as Kohn-Sham DFT eigenvalues and eigenvectors, computed with electronic structure codes including Quantum ESPRESSO, EPM, PARSEC, Octopus, Abinit, and SIESTA.
Availability and Supported Architectures at NERSC¶
BerkeleyGW is available at NERSC as a provided support level package. Version 3.x includes support for GPUs.
Versions Supported¶
| Cori Haswell | Cori KNL | Perlmutter GPU | Perlmutter CPU |
|--------------|----------|----------------|----------------|
| 2.x          | 2.x      | 3.x            | 3.x            |
Use the `module avail berkeleygw` command to see a full list of available sub-versions.
Application Information, Documentation, and Support¶
BerkeleyGW is freely available and can be downloaded from the BerkeleyGW home page. See the online documentation for the user manual, tutorials, examples, and links to previous workshops and literature articles. For troubleshooting, see the BerkeleyGW Help Forum. For help with issues specific to the NERSC module, please file a support ticket.
Using BerkeleyGW at NERSC¶
Use the `module avail` command to see which versions are available, and `module load <version>` to load the environment:
```
nersc$ module avail berkeleygw
berkeleygw/3.0.1-cpu berkeleygw/3.0.1-gpu (D)
nersc$ module load berkeleygw/3.0.1-gpu
```
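Loading the module should put the BerkeleyGW executables on your search path; a quick check before submitting jobs (the executable names are the standard BerkeleyGW ones, e.g. `epsilon.cplx.x`):

```
nersc$ which epsilon.cplx.x sigma.cplx.x
```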
Sample Job Scripts¶
See the example jobs page for additional examples and information about jobs.
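Each script below can be saved to a file and submitted with `sbatch`; the filename here is only a placeholder:

```
nersc$ sbatch bgw_epsilon.sh
```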
Cori Haswell
```bash
#!/bin/bash
# 2 Cori Haswell nodes, 32 MPI processes per node, 2 OpenMP threads per MPI process
#SBATCH --qos=regular
#SBATCH --time=01:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=2
#SBATCH --constraint=haswell

export OMP_NUM_THREADS=2

srun epsilon.cplx.x
```
Cori KNL
```bash
#!/bin/bash
# 2 Cori KNL nodes, 32 MPI processes, 4 OpenMP threads per MPI process
#SBATCH --qos=regular
#SBATCH --time=01:00:00
#SBATCH --nodes=2
#SBATCH --ntasks=32
#SBATCH --constraint=knl,quad,cache
#SBATCH --core-spec=4

export OMP_NUM_THREADS=4
export ELPA_DEFAULT_omp_threads=$OMP_NUM_THREADS

srun epsilon.cplx.x
```
Perlmutter GPU
```bash
#!/bin/bash
# 2 Perlmutter GPU nodes, 8 MPI processes, 1 GPU per MPI process
#SBATCH -q regular
#SBATCH -C gpu
#SBATCH -t 01:00:00
#SBATCH -n 8
#SBATCH -c 32
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH --gpu-bind=map_gpu:0,1,2,3

export TEMPDIRPATH=$SCRATCH/tmp
export OMP_NUM_THREADS=1
export SLURM_CPU_BIND="cores"
export MPIEXEC="`which srun` "

srun epsilon.cplx.x
```
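As an optional sanity check of this GPU mapping, one can print the device visible to each rank from inside the allocation. This is only a sketch; it assumes Slurm exports `CUDA_VISIBLE_DEVICES` per task when `--gpu-bind` is used:

```bash
# Sketch: report which GPU each MPI rank is bound to
srun --ntasks-per-node=4 --gpus-per-task=1 --gpu-bind=map_gpu:0,1,2,3 \
    bash -c 'echo "rank ${SLURM_PROCID}: CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"'
```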
Perlmutter CPU
```bash
#!/bin/bash
# 8 Perlmutter CPU nodes, 8 MPI processes (1 per node), 24 OpenMP threads per MPI process
#SBATCH -C cpu
#SBATCH -q regular
#SBATCH -n 8
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=256
#SBATCH -t 01:00:00

export HDF5_USE_FILE_LOCKING=FALSE
export BGW_HDF5_WRITE_REDIST=1
ulimit -s unlimited

export OMP_PROC_BIND=spread
export OMP_PLACES=threads
export OMP_NUM_THREADS=24

srun epsilon.cplx.x
```
Tip
In some cases the `epsilon` executable fails while trying to access a file in HDF5 format. To prevent this, add `export HDF5_USE_FILE_LOCKING=FALSE` to the job script.
Building BerkeleyGW from Source¶
Some users may be interested in modifying the BerkeleyGW build parameters and/or building BerkeleyGW themselves. BerkeleyGW can be downloaded as a tarball from the download page. Build instructions are included in the `Makefile` and in `README.md` in the BerkeleyGW main directory. Before building, load the appropriate modules and create a configuration file named `arch.mk` in the BerkeleyGW main directory. Sample configuration files, found in the `config` directory, can be copied into the main directory, edited, and renamed to `arch.mk`. The headers of the sample configuration files also list the recommended modules to load.
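As a sketch of that workflow (the file and directory names below are placeholders; pick the sample in `config` that matches your target system and compiler):

```bash
# Hypothetical setup: copy a sample configuration into the main directory as arch.mk
cd BerkeleyGW-3.0.1             # BerkeleyGW main directory (name depends on the release)
cp config/<machine>.mk arch.mk  # choose the sample matching your machine and compiler
vi arch.mk                      # adjust compilers, flags, and library paths as needed
```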
Building on Cori
The following `arch.mk` file was used to build BerkeleyGW 2.1 on Cori KNL:
```makefile
# arch.mk for NERSC Cori KNL using Intel
#
# Load the following modules before building:
#
# module swap craype-haswell craype-mic-knl && module unload darshan && module load cray-hdf5-parallel && export CRAYPE_LINK_TYPE=static
#
COMPFLAG = -DINTEL
PARAFLAG = -DMPI -DOMP
MATHFLAG = -DUSESCALAPACK -DUNPACKED -DUSEFFTW3 -DHDF5 -DUSEMR3 # -DUSEELPA # -DUSEPRIMME
FCPP = /usr/bin/cpp -C -nostdinc
F90free = ftn -free -qopenmp
LINK = ftn -qopenmp
FOPTS = -fast -no-ip -no-ipo -align array64byte
FNOOPTS = $(FOPTS)
MOD_OPT = -module
INCFLAG = -I
C_PARAFLAG = -DPARA -DMPICH_IGNORE_CXX_SEEK
CC_COMP = CC -qopenmp
C_COMP = cc -qopenmp
C_LINK = CC -qopenmp
C_OPTS = -fast -no-ip -no-ipo -align #-g -traceback
C_DEBUGFLAG =
REMOVE = /bin/rm -f
FFTWPATH =
FFTWLIB = $(MKLROOT)/lib/intel64/libmkl_scalapack_lp64.a -Wl,--start-group $(MKLROOT)/lib/intel64/libmkl_intel_lp64.a $(MKLROOT)/lib/intel64/libmkl_core.a \
$(MKLROOT)/lib/intel64/libmkl_intel_thread.a $(MKLROOT)/lib/intel64/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -lpthread -lm -ldl
FFTWINCLUDE = $(MKLROOT)/include/fftw/
HDF5_LDIR = $(HDF5_DIR)/lib
HDF5LIB = -L$(HDF5_LDIR)/ -lhdf5hl_fortran -lhdf5_hl -lhdf5_fortran -lhdf5 -lz -ldl # -L/global/common/software/m1759/ipm/install/cori_intel_cray-mpich/lib -lipmf -lipm
HDF5INCLUDE = $(HDF5_DIR)/include
PERFORMANCE =
LAPACKLIB = $(FFTWLIB)
TESTSCRIPT = sbatch cori2.scr
```
Note
We recommend using the Intel compiler to build BerkeleyGW on Cori. If building with gcc, use gcc/10.3.0 or earlier, as executables built with gcc/11.2.0 are known to generate incorrect results.
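If you do build with gcc, a minimal sketch of selecting the older compiler before compiling (module versions as named above; check `module avail gcc` for what is installed):

```bash
# Sketch: under PrgEnv-gnu, swap to a gcc version known to work
module swap gcc/11.2.0 gcc/10.3.0   # avoid gcc/11.2.0, which is known to miscompile BerkeleyGW
gcc --version                       # confirm the active compiler
```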
Building on Perlmutter for GPUs
The following `arch.mk` file may be used to build BerkeleyGW 3.0.1 targeting GPUs on Perlmutter:
```makefile
# arch.mk for NERSC Perlmutter GPU build
#
# Load the following modules before building:
# ('PrgEnv-nvidia' MUST be pre-loaded! Modules 'cudatoolkit' and 'gpu' should be loaded by default.)
#
# module load cray-hdf5-parallel ; module load cray-fftw ; module load cray-libsci ; module load python
#
COMPFLAG = -DPGI
PARAFLAG = -DMPI -DOMP
MATHFLAG = -DUSESCALAPACK -DUNPACKED -DUSEFFTW3 -DHDF5 -DOPENACC -DOMP_TARGET
NVCC=nvcc
NVCCOPT= -O3 -use_fast_math
CUDALIB= -lcufft -lcublasLt -lcublas -lcudart -lcuda
FCPP = /usr/bin/cpp -C -nostdinc
F90free = ftn -Mfree -acc -mp=multicore,gpu -gpu=cc80 -Mcudalib=cublas,cufft -Mcuda=lineinfo -traceback -Minfo=mp,acc -gopt -traceback
LINK = ftn -acc -mp=multicore,gpu -gpu=cc80 -Mcudalib=cublas,cufft -Mcuda=lineinfo -Minfo=mp,acc
FOPTS = -fast -Mfree -Mlarge_arrays
FNOOPTS = $(FOPTS)
MOD_OPT = -module
INCFLAG = -I
C_PARAFLAG = -DPARA -DMPICH_IGNORE_CXX_SEEK
CC_COMP = CC
C_COMP = cc
C_LINK = cc ${CUDALIB} -lstdc++
C_OPTS = -fast -mp
C_DEBUGFLAG =
REMOVE = /bin/rm -f
FFTWLIB = $(FFTW_DIR)/libfftw3.so \
$(FFTW_DIR)/libfftw3_threads.so \
$(FFTW_DIR)/libfftw3_omp.so \
${CUDALIB} -lstdc++
FFTWINCLUDE = $(FFTW_INC)
PERFORMANCE =
SCALAPACKLIB =
LAPACKLIB = ${CRAY_NVIDIA_PREFIX}/compilers/lib/liblapack.so ${CRAY_NVIDIA_PREFIX}/compilers/lib/libblas.so
HDF5_LDIR = ${HDF5_DIR}/lib/
HDF5LIB = $(HDF5_LDIR)/libhdf5hl_fortran.so \
$(HDF5_LDIR)/libhdf5_hl.so \
$(HDF5_LDIR)/libhdf5_fortran.so \
$(HDF5_LDIR)/libhdf5.so -lz -ldl
HDF5INCLUDE = ${HDF5_DIR}/include/
```
Building on Perlmutter for CPUs
The following `arch.mk` file may be used to build BerkeleyGW 3.0.1 targeting CPUs on Perlmutter:
```makefile
# arch.mk for NERSC Perlmutter CPU build using GNU compiler
#
# Load the following modules before building:
# ('PrgEnv-gnu' MUST be pre-loaded!)
#
# module load cpu && module unload darshan && module swap gcc/11.2.0 gcc/10.3.0 && module load cray-fftw && module load cray-hdf5-parallel && module load cray-libsci && module load python && export CRAYPE_LINK_TYPE=static
#
COMPFLAG = -DGNU
PARAFLAG = -DMPI -DOMP
MATHFLAG = -DUSESCALAPACK -DUNPACKED -DUSEFFTW3 -DHDF5
FCPP = /usr/bin/cpp -C -nostdinc
F90free = ftn -fopenmp -ffree-form -ffree-line-length-none -fno-second-underscore
LINK = ftn -fopenmp -dynamic
FOPTS = -O3 -funroll-loops -funsafe-math-optimizations -fallow-argument-mismatch
FNOOPTS = $(FOPTS)
MOD_OPT = -J
INCFLAG = -I
C_PARAFLAG = -DPARA -DMPICH_IGNORE_CXX_SEEK
CC_COMP = CC
C_COMP = cc
C_LINK = CC -dynamic
C_OPTS = -O3 -ffast-math
C_DEBUGFLAG =
REMOVE = /bin/rm -f
FFTWINCLUDE = $(FFTW_INC)
PERFORMANCE =
LAPACKLIB =
HDF5_LDIR = $(HDF5_DIR)/lib
HDF5LIB = $(HDF5_LDIR)/libhdf5hl_fortran.a \
$(HDF5_LDIR)/libhdf5_hl.a \
$(HDF5_LDIR)/libhdf5_fortran.a \
$(HDF5_LDIR)/libhdf5.a -lz -ldl
HDF5INCLUDE = $(HDF5_DIR)/include
```
After loading the selected modules and creating `arch.mk` in the BGW main directory, build using the following commands:

```
nersc$ make cleanall
nersc$ make all-flavors
```
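If the build succeeds, the executables for each code are built in the corresponding subdirectories (e.g., `Epsilon`, `Sigma`) and are typically also collected under `bin/` in the source tree; a quick check might look like:

```
nersc$ ls bin/                     # links to the built executables
nersc$ ls Epsilon/epsilon.cplx.x   # per-code build directory
```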
Related Applications¶
User Contributed Information¶
Please help us improve this page
Users are invited to contribute helpful information and corrections through our GitLab repository.