# CP2K

CP2K is a quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid-state, liquid, molecular, periodic, material, crystal, and biological systems.
## Availability

| System     | Architecture | Modulefile | Image                            |
|------------|--------------|------------|----------------------------------|
| Cori       | Haswell      | Yes        | `docker:cp2k/cp2k:latest`        |
| Cori       | KNL          | Yes        | `docker:cp2k/cp2k:latest`        |
| Perlmutter | A100         | N/A        | `docker:nvcr.io/hpc/cp2k:v9.1.0` |
| Perlmutter | Milan        | N/A        | `docker:cp2k/cp2k:2022.1`        |
**MPI Support on CPU:** For MPI support with images from Docker Hub, use `dev20220519` or newer image tags.
**MPI Performance on Perlmutter GPU:** The `docker:nvcr.io/hpc/cp2k:v9.1.0` image has MPI support, but multi-node performance may not be optimal due to MPI ABI issues that are being actively addressed.
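Before an image can be used in a batch job, it must be available to Shifter on the system. A minimal sketch, pulling the Perlmutter CPU image from the table above (the same command works for the other tags listed):

```bash
# Pull the CP2K image into Shifter's image registry (only needed once per image)
shifterimg pull docker:cp2k/cp2k:2022.1
```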
## Support

The CP2K Reference Manual provides details on how to set up calculations and the various options available.

For questions about CP2K usage that are not specific to NERSC, please consult the CP2K Forum and the CP2K FAQ.

If you need to make your own customized build of CP2K, the Makefile and build script used to create NERSC's modules are available.
**Tip:** If, after consulting the resources above, you believe there is an issue with the NERSC module, please file a support ticket.
## CP2K at NERSC

### Perlmutter
#### CPU

```bash
#!/bin/bash
#SBATCH --image docker:cp2k/cp2k:2022.1
#SBATCH --nodes 1
#SBATCH --cpus-per-task 2
#SBATCH --ntasks-per-node 128
#SBATCH --constraint cpu
#SBATCH --qos debug
#SBATCH --time-min 5
#SBATCH --time 30

srun shifter --entrypoint cp2k -i H2O-64.inp
```
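Submitting this script with `sbatch` launches 128 MPI ranks, one per physical core of the Milan CPU node, with both hyperthreads of each core reserved for its rank. For example, assuming the script is saved under the hypothetical name `cp2k_cpu.slurm`:

```bash
sbatch cp2k_cpu.slurm
```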
#### GPU

```bash
#!/bin/bash
#SBATCH --image docker:nvcr.io/hpc/cp2k:v9.1.0
#SBATCH --nodes 1
#SBATCH --cpus-per-task 32
#SBATCH --gpus-per-task 1
#SBATCH --ntasks-per-node 4
#SBATCH --constraint gpu
#SBATCH --qos debug
#SBATCH --time-min 5
#SBATCH --time 30

export OMP_NUM_THREADS=16
srun --cpu-bind cores --mpi pmi2 --module gpu shifter --entrypoint cp2k -i H2O-256.inp
```
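This layout gives each of the four MPI ranks one A100 GPU (`--gpus-per-task 1`) and 32 hyperthreads, of which 16 OpenMP threads are used (`OMP_NUM_THREADS=16`, one thread per physical core assigned to the rank). As with the CPU example, the script is submitted with `sbatch`.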
### Cori

NERSC provides modules for CP2K on Cori. The modules provide the MPI-only `cp2k.popt` binary and the MPI + OpenMP `cp2k.psmp` binary used in the examples below. Use the `module avail` command to see which versions are available:

```bash
module avail cp2k
```
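To use a specific version rather than the default, load the fully versioned module name; for example (`cp2k/6.1` appears in one of the examples below, but check `module avail` for what is currently installed):

```bash
module load cp2k/6.1
```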
#### Example - Cori KNL

```bash
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --constraint=knl
#SBATCH --time=300
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=64
#SBATCH --cpus-per-task=4
#SBATCH --core-spec=2

module unload craype-haswell
module load craype-mic-knl
module load cp2k

srun --cpu-bind=cores cp2k.popt -i example.inp
```
#### Example - Cori KNL with OpenMP

```bash
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --constraint=knl
#SBATCH --time=300
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=8
#SBATCH --core-spec=2

module unload craype-haswell
module load craype-mic-knl
module load cp2k/6.1

export OMP_NUM_THREADS=2
srun --cpu-bind=cores cp2k.psmp -i example.inp
```
#### Example - Cori Haswell

```bash
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --constraint=haswell
#SBATCH --time=300
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=2

module load cp2k

srun --cpu-bind=cores cp2k.popt -i example.inp
```
#### Example - Cori Haswell with OpenMP

```bash
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --constraint=haswell
#SBATCH --time=300
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=4

module load cp2k

export OMP_NUM_THREADS=2
srun --cpu-bind=cores cp2k.psmp -i example.inp
```
## Performance

Performance of CP2K varies with system size and run type. Multi-node scaling depends on the amount of work (the number of atoms) per MPI rank, so it is recommended to run a representative test case on different numbers of nodes to see which gives the best performance.
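One way to do this is to submit the same job script at several node counts and compare the timings reported at the end of each CP2K output. A minimal sketch, assuming a hypothetical job script `cp2k_scaling.slurm` that otherwise matches one of the examples above:

```bash
# Hypothetical scaling sweep: override the node count at submission time
# and compare the final CP2K wall times across runs.
for nodes in 1 2 4 8; do
    sbatch --nodes "$nodes" --job-name "cp2k-scale-${nodes}" cp2k_scaling.slurm
done
```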
## User Contributed Information

- User contributions (tips, etc.) are welcome!