Amber (Assisted Model Building with Energy Refinement) is the collective name for a suite of programs designed to carry out molecular mechanical force field simulations, particularly on biomolecules; see Amber force fields. AMBER consists of about 50 programs. Two major ones are:
- sander: Simulated annealing with NMR-derived energy restraints
- pmemd: An extensively modified version of sander, optimized for periodic PME simulations and for GB simulations. It is faster than sander and scales better on parallel machines.
## How to access AMBER
AMBER is supported on Perlmutter via a Docker container image and is run on Perlmutter GPU nodes using Shifter.
To find the available container of AMBER on Perlmutter, type:
```
perlmutter$ shifterimg images | grep 'nersc/amber'
```
The currently supported version of AMBER on Perlmutter is 22.0.
The container is built with the following AMBER executables: pmemd, pmemd.MPI, pmemd.cuda, pmemd.cuda.MPI, pmemd.cuda_DPFP, pmemd.cuda_DPFP.MPI, pmemd.cuda_SPFP, pmemd.cuda_SPFP.MPI, sander, sander.LES, sander.LES.MPI, sander.MPI, sander.quick.cuda, sander.quick.cuda.MPI.
You should choose an appropriate binary for your jobs. The sander and sander.LES binaries are serial; their parallel counterparts are sander.MPI and sander.LES.MPI, respectively.
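For example, a serial run and a parallel run inside the container might look like the following sketch. The file names (mdin, mdout, prmtop, inpcrd, restrt) are generic AMBER conventions used here for illustration, not files provided by NERSC:

```shell
# Serial sander run inside the container (single task):
shifter sander -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt

# Parallel run with 4 MPI ranks using the corresponding MPI binary:
srun -n 4 shifter sander.MPI -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt
```

The same pattern applies to the pmemd binaries; use one of the pmemd.cuda variants for GPU runs.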
## How to run AMBER
There are two ways of running AMBER: submitting a batch job, or running in an interactive batch session. Here is a sample batch script to run AMBER on Perlmutter:
Perlmutter Amber job

```
#!/bin/bash -l
#SBATCH --image docker:nersc/amber_gpu:22
#SBATCH -C gpu
#SBATCH -t 00:20:00
#SBATCH -J AMBER_GPU
#SBATCH -o AMBER_GPU.o%j
#SBATCH -A mXXXX
#SBATCH -N 1
#SBATCH -c 32
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-task=1
#SBATCH --gpu-bind=none
#SBATCH -q regular

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

command="srun -n 4 --cpu-bind=cores --gpu-bind=none --module mpich,gpu shifter <executable> <input>"
echo $command
$command
```
Then submit the job script using the sbatch command; e.g., assuming the job script is named test_amber.slurm:

```
perlmutter$ sbatch test_amber.slurm
```
Please change mXXXX to the project number assigned to your project. The example above uses 1 GPU node on Perlmutter; each GPU node has 4 GPUs. When changing the number of nodes, modify the line #SBATCH -N 1 to the number of nodes you want to run on, and change the -n value in the srun command to the number of nodes times 4. Also replace <executable> and <input> with the binary and the inputs necessary for your run.
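The node-to-task arithmetic above can be sketched as a small shell calculation; NODES is a hypothetical helper variable used here for illustration, not one set by Slurm:

```shell
# Each Perlmutter GPU node has 4 GPUs, so run one task per GPU:
NODES=2                 # the value you would put in "#SBATCH -N 2"
NTASKS=$((NODES * 4))   # the value you would pass to "srun -n ..."
echo $NTASKS            # prints 8
```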
To request an interactive batch session, issue a command such as this one (e.g., requesting one GPU node on Perlmutter):
```
perlmutter$ salloc -N 1 -G 4 -C gpu -t 30 -c 64 -A nstaff -q debug --image=docker:nersc/amber_gpu:22
```
To run your job in an interactive shell, use the following commands:
```
# on the Perlmutter GPU node:
perlmutter$ srun -n 4 shifter <amber exe and other input> ... (more sander command line options)
```
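As an illustration of the kind of input these commands consume, a minimal MD input file (the file passed with sander/pmemd's -i flag, conventionally named mdin) might look like the sketch below. All parameter values are generic assumptions for a short explicit-solvent run, not settings recommended by NERSC:

```
Short MD run (illustrative settings only)
 &cntrl
   imin=0, irest=0, ntx=1,
   nstlim=1000, dt=0.002,
   ntt=3, gamma_ln=1.0, temp0=300.0,
   ntb=2, ntp=1, cut=8.0,
   ntpr=100, ntwx=100,
 /
```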
Further details on using Docker containers at NERSC with Shifter can be found in the Shifter documentation.