VASP

VASP is a package for performing ab initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane wave basis set. The approach implemented in VASP is based on a finite-temperature local-density approximation (with the free energy as variational quantity) and an exact evaluation of the instantaneous electronic ground state at each MD step using efficient matrix diagonalization schemes and an efficient Pulay mixing.

Access

VASP is available only to NERSC users who already hold a VASP license. To gain access to the VASP binaries at NERSC through an existing VASP license, send your license information to VASP support in Vienna at licensing (at) vasp (dot) at, cc'ing NERSC at vasp_licensing (at) nersc (dot) gov, and request that they confirm your VASP license to NERSC staff.

Note

NERSC does not provide the VASP 4 modules anymore.

It may take several business days from the date your confirmation request is sent to VASP support in Vienna until you actually gain access to the VASP binaries provided at NERSC. If the confirmation takes longer than 5 business days, follow up on the initial email thread with your license information.

When your VASP license is confirmed, NERSC will add you to a unix file group: vasp5 for VASP 5 or vasp6 for VASP 6. You can check whether you have VASP access at NERSC by typing the groups command. If you are in the vasp5 file group, you can access the VASP 5 binaries provided at NERSC; if you are in the vasp6 file group, you can access both VASP 5 and VASP 6.
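For example (an illustration only; the other group names will differ for your account):

cori$ groups
nstaff vasp5 vasp6

If vasp6 appears in the output, you can use both the VASP 5 and VASP 6 modules.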

Modules

We provide multiple VASP builds for users. To see what VASP modules are available:

cori$ module avail vasp
For example, these are the available modules (as of 8/6/2020):

cori$ module avail vasp

---------------------------- /global/common/software/nersc/cle7/extra_modulefiles ----------------------------
vasp/5.4.1-hsw                 vasp/20170323_NMAX_DEG=128-knl vasp-tpc/5.4.1-hsw
vasp/5.4.1-knl                 vasp/20170629-hsw              vasp-tpc/5.4.1-knl
vasp/5.4.4-hsw(default)        vasp/20170629-knl              vasp-tpc/5.4.4-hsw(default)
vasp/5.4.4-knl                 vasp/20171017-hsw              vasp-tpc/5.4.4-knl
vasp/6.1.0-hsw                 vasp/20171017-knl              vasp-tpc/20170629-hsw
vasp/6.1.0-knl                 vasp/20181030-hsw              vasp-tpc/20170629-knl
vasp/20170323_NMAX_DEG=128-hsw vasp/20181030-knl

The modules with "5.4.4" or "5.4.1" in their version strings are builds of the pure MPI VASP code, and the modules with "2017" or "2018" in their version strings are builds of the hybrid MPI+OpenMP VASP code, which are available to NERSC VASP 5 users through the VASP beta testing program. The modules with "6.1.0" are the official release of the hybrid MPI+OpenMP VASP, available to users who have VASP 6 licenses. The vasp-tpc modules (tpc stands for third-party codes) are custom builds incorporating commonly used third-party contributed codes, e.g., the VTST code from the University of Texas at Austin, Wannier90, BEEF, VASPsol, etc. The "knl" and "hsw" suffixes in the version strings indicate builds optimized for Cori KNL and Haswell, respectively. The current default on Cori is vasp/5.4.4-hsw (VASP 5.4.4 with the latest patches), which you can load with

cori$ module load vasp
To use a non-default module, you need to provide the full module name:

cori$ module load vasp/20181030-knl
The "module show" command shows what VASP modules do to your environment, e.g.

cori$ module show vasp/20181030-knl
-------------------------------------------------------------------
/usr/common/software/modulefiles/vasp/20181030-knl:

module       load craype-hugepages2M 
module-whatis    VASP: Vienna Ab-initio Simulation Package
This is the vasp-knl development version (last commit 10/30/2018). Wannier90 v1.2 was enabled in the build.

setenv       PSEUDOPOTENTIAL_DIR /usr/common/software/vasp/pseudopotentials/5.3.5 
setenv       VDW_KERNAL_DIR /usr/common/software/vasp/vdw_kernal 
setenv       NO_STOP_MESSAGE 1 
setenv       MPICH_NO_BUFFER_ALIAS_CHECK 1 
setenv       MKL_FAST_MEMORY_LIMIT 0 
setenv       OMP_STACKSIZE 256m 
setenv       OMP_PROC_BIND spread 
setenv       OMP_PLACES threads 
prepend-path     PATH /usr/common/software/vasp/vtstscripts/3.1
prepend-path     PATH /global/common/cori/software/vasp/20181030/knl/intel/bin 
-------------------------------------------------------------------

This vasp module adds the path to the VASP binaries to your search path and sets a few environment variables. PSEUDOPOTENTIAL_DIR and VDW_KERNAL_DIR point to the locations of the pseudopotential files and of the vdw_kernel.bindat file used in dispersion calculations, respectively. The OpenMP and MKL environment variables are set for optimal performance.
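For example, you can use PSEUDOPOTENTIAL_DIR to assemble a POTCAR file. The sketch below is illustrative only; the potpaw_PBE subdirectory layout is an assumption, so check the actual layout under $PSEUDOPOTENTIAL_DIR:

cori$ # concatenate the PBE potentials for an SiO2 run (order must match POSCAR)
cori$ cat $PSEUDOPOTENTIAL_DIR/potpaw_PBE/Si/POTCAR $PSEUDOPOTENTIAL_DIR/potpaw_PBE/O/POTCAR > POTCAR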

VASP binaries

Each VASP module provides three different binaries:

  • vasp_gam - Gamma-point-only build
  • vasp_ncl - non-collinear spin build
  • vasp_std - the standard k-point build

You need to choose the binary appropriate for your calculation.
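For example, vasp_gam is the right choice only for Gamma-point-only calculations (where it saves time and memory), and calculations with spin-orbit coupling require vasp_ncl. A minimal sketch of picking a binary in a job script follows; the KPOINTS grep is a rough heuristic for illustration, not a robust parser:

# pick vasp_gam for a Gamma-only 1x1x1 mesh, vasp_std otherwise
if grep -qi '^gamma' KPOINTS && grep -q '1 1 1' KPOINTS; then
    BIN=vasp_gam
else
    BIN=vasp_std
fi
srun -n32 -c2 --cpu-bind=cores $BIN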

Running batch jobs

To run batch jobs, you need to prepare a job script (see the samples below) and submit it to the batch system with the "sbatch" command. Assuming the job script is named run.slurm,

cori$ sbatch run.slurm

Please check the Queue Policy page for the available QOS's and their resource limits.

Cori Haswell

A sample job script to run the pure MPI VASP code
#!/bin/bash
#SBATCH -N 1
#SBATCH -C haswell
#SBATCH -q regular
#SBATCH -t 6:00:00

module load vasp
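# 32 MPI ranks on one 32-core Haswell node, one rank per physical core;
# -c2 assigns both hyperthreads of each core to a rank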
srun -n32 -c2 --cpu-bind=cores vasp_std
A sample job script to run the hybrid MPI+OpenMP VASP code
#!/bin/bash
#SBATCH -N 2 
#SBATCH -C haswell
#SBATCH -q regular
#SBATCH -t 6:00:00

module load vasp/20181030-hsw
export OMP_NUM_THREADS=4

# launching 1 task every 4 cores (8 CPUs)
srun -n16 -c8 --cpu-bind=cores vasp_std

Cori KNL

A sample job script to run the pure MPI VASP code

#!/bin/bash
#SBATCH -N 2 
#SBATCH -C knl
#SBATCH -q regular
#SBATCH -t 6:00:00

module load vasp/5.4.4-knl
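# 128 MPI ranks across 2 KNL nodes (64 ranks per node, one per physical core);
# -c4 assigns the 4 hyperthreads of each core to a rank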
srun -n128 -c4 --cpu-bind=cores vasp_std

A sample job script to run the hybrid MPI+OpenMP VASP code

#!/bin/bash
#SBATCH -N 2 
#SBATCH -C knl
#SBATCH -q regular
#SBATCH -t 6:00:00

module load vasp/20181030-knl
export OMP_NUM_THREADS=4

# launching 1 task every 4 cores (16 CPUs)
srun -n32 -c16 --cpu-bind=cores vasp_std

Tips

  1. For better job throughput, run jobs on Cori KNL.
  2. The hybrid MPI+OpenMP VASP code is recommended on Cori KNL for optimal performance.
  3. More performance tips can be found in a Cray User Group 2017 proceedings paper.
  4. Also refer to the presentation slides from the VASP user training (6/18/2019).

Running interactively

To run VASP interactively, you need to request a batch session using the "salloc" command, e.g., the following command requests one Cori Haswell node for one hour,

cori$ salloc -N 1 -q interactive -C haswell -t 1:00:00

When the batch session returns with a shell prompt, execute the following commands:

cori$ module load vasp 
cori$ srun -n32 -c2 --cpu-bind=cores vasp_std
To run on Cori KNL interactively, do

cori$ salloc -N 2 -q interactive -C knl -t 4:00:00

The above command requests two KNL nodes for four hours. When the batch session returns with a shell prompt, execute the following commands:

cori$ module load vasp/20181030-knl
cori$ export OMP_NUM_THREADS=4
cori$ srun -n32 -c16 --cpu-bind=cores vasp_std

Tips

  1. The interactive QOS allocates the requested nodes immediately or cancels your job in about 5 minutes (when no nodes are available). See the Queue Policy page for more info.
  2. Test your job using the interactive QOS before submitting a long running job.

Long running VASP jobs

For long running VASP jobs (e.g., >48 hours), you can use the variable-time job script, which allows you to run jobs of any length. See a sample job script at Running Jobs. Variable-time jobs split a long running job into multiple chunks, so the application must be able to restart from where it left off. Note that not all VASP computations are restartable (e.g., RPA); long running atomic relaxations and MD simulations are good use cases for the variable-time job script.

Running multiple VASP jobs simultaneously

If you need to run many similar VASP jobs, it can be beneficial to run several of them simultaneously in one job script. See a sample job script for bundling jobs at Running Jobs.
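For instance, here is a minimal sketch of bundling four single-node Haswell runs in one script (the directory names dir1 to dir4 are placeholders for your own run directories, and you may need additional srun flags depending on the Slurm version):

#!/bin/bash
#SBATCH -N 4
#SBATCH -C haswell
#SBATCH -q regular
#SBATCH -t 6:00:00

module load vasp

# launch one single-node run per directory in the background, then wait for all
for d in dir1 dir2 dir3 dir4; do
    (cd $d && srun -N1 -n32 -c2 --cpu-bind=cores vasp_std) &
done
wait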

However, the number of jobs you can bundle in a single job script is limited (on the order of ten), because Slurm, as currently implemented, does not handle many or frequent job launches (sruns) in a single job script well while it is serving tens of thousands of other jobs on the system. If you want to run many more similar VASP jobs simultaneously, you can use the MPI wrapper for VASP that NERSC provides, which runs as many VASP jobs as you wish with a single srun invocation. The MPI wrapper for VASP is available via the mvasp module on Cori.

Assume that you want to run 512 VASP jobs simultaneously, each on a single KNL node, and have prepared VASP input files in 512 separate directories. In the directory containing the 512 VASP run directories, create a job script like the one below.

run_mvasp.slurm: a sample job script to run 512 VASP jobs simultaneously on Cori KNL

#!/bin/bash
#SBATCH -J test_mvasp
#SBATCH -N 512 
#SBATCH -C knl
#SBATCH -q debug
#SBATCH -o %x-%j.out
#SBATCH -t 30:00

module load mvasp/5.4.4-knl

#run 512 VASP jobs simultaneously each running vasp_std with 1 KNL node (64 processes)
sbcast --compress=lz4 `which mvasp_std` /tmp/mvasp_std
srun -n 32768 -c4 --cpu-bind=cores /tmp/mvasp_std
Then generate a file named joblist.in, which contains the number of jobs to run followed by the VASP run directories (one directory per line). You can use the gen_joblist.sh script, available via the mvasp modules, to create the joblist.in file.

module load mvasp
gen_joblist.sh

A joblist.in file for this example would look something like the following (the run directory names are illustrative):
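512
job_0001
job_0002
...
job_0512

Then submit the job via sbatch: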

sbatch run_mvasp.slurm

Note

  • Be aware that running too many VASP jobs at once may overwhelm the file system where your job is running. Please do not run jobs out of your global homes.
  • In the sample job script above, the executable was copied to the /tmp file system (memory) of the compute nodes using the sbcast command prior to execution, to reduce the job startup time for large jobs.

Similarly, you can run multiple VASP jobs on Haswell nodes. Here is a sample job script,

run_mvasp.slurm: a sample job script to run 512 VASP jobs simultaneously with 64 Haswell nodes
#!/bin/bash
#SBATCH -J test_mvasp
#SBATCH -N 64 
#SBATCH -C haswell 
#SBATCH -q debug
#SBATCH -o %x-%j.out
#SBATCH -t 30:00

module load mvasp/5.4.4-hsw

#run 512 VASP jobs simultaneously, each running vasp_std with 4 processes (each node runs 8 jobs)
srun -n 2048 -c2 --cpu-bind=cores ./mvasp_std

VASP makefiles

If you need to build VASP yourself, use the makefiles available in the VASP installation directories. For example, the makefile.include file that we used to build the vasp/5.4.4-hsw module is available at,

/global/common/sw/cray/cnl7/haswell/vasp/5.4.4/intel/18.0.1.163/w5vq7o2/ 

Type "module show" with the full module name to find the installation directory.
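For example, a minimal sketch of reusing that makefile.include for your own build (the source directory name and build environment are assumptions; match them to your VASP version and to the compiler/modules used for the NERSC build):

cori$ cd vasp.5.4.4
cori$ cp /global/common/sw/cray/cnl7/haswell/vasp/5.4.4/intel/18.0.1.163/w5vq7o2/makefile.include .
cori$ make std    # or: make all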

Documentation

VASP Online Manual

mpi_wrapper

An MPI wrapper to run multiple parallel VASP jobs simultaneously

About

If you need to run many similar VASP jobs, this wrapper code could be useful for you.

Instructions to use this wrapper

  • Download vasp5.4.4.pl2.tgz from the VASP Portal to your cluster, untar it (e.g., in your home directory), and run the following commands:

 cd vasp.5.4.4.pl2
 git clone https://github.com/zhengjizhao/mpi_wrapper.git
 patch -p0 < mpi_wrapper/patch_vasp.5.4.4.pl2_mpi_wrapper.diff
Then place a working makefile.include file, i.e., one you would use to compile VASP 5.4.4, in the VASP root directory, and run

 make std #or make all
The resulting VASP binaries will be mvasp_std, mvasp_gam, and mvasp_ncl (where m stands for multiple).

  • To run, prepare VASP inputs in separate run directories, and then create a file named joblist.in,
    which contains the number of jobs to run followed by the VASP run directories (one directory per line).

Assume that you want to run 512 VASP jobs simultaneously, each on a single KNL node, and have prepared VASP input files in 512 separate directories. In the directory containing the 512 run directories, generate a job script, e.g., named run_mvasp.slurm, like the one below (e.g., to run on 512 Cori KNL nodes at NERSC with Slurm):

#!/bin/bash
#SBATCH -J test_mvasp
#SBATCH -N 512 
#SBATCH -C knl
#SBATCH -q debug
#SBATCH -o %x-%j.out
#SBATCH -t 30:00

#run 512 VASP jobs simultaneously each running vasp_std with 1 KNL node (64 processes)
srun -n 32768 -c4 --cpu-bind=cores ./mvasp_std

Note

Be aware that running too many VASP jobs at once may overwhelm the file system where your job is running.