ORCA

ORCA is an ab initio quantum chemistry program package that contains modern electronic structure methods including density functional theory, many-body perturbation theory, coupled cluster, multireference methods, and semi-empirical quantum chemistry methods. Its main field of application is larger molecules, transition metal complexes, and their spectroscopic properties. ORCA is developed in the research group of Frank Neese. The free version is available only for academic use at academic institutions.

From Wikipedia

Attention

ORCA requires all users to create an account on its forum before the executables can be downloaded. Moreover, each executable is linked against a specific version of OpenMPI, so a containerized solution is usually ideal for running on Perlmutter.

Link to Orca Forum

Support

ORCA users can seek support via the ORCA forum.

Install ORCA

The steps below describe how to install the static MPI version of ORCA 5.0.4.

Download Compressed Files

The ORCA binaries are precompiled and released as several tarballs available for download from the forum. Because the full package is large (~40 GB), we recommend installing it on our filesystems rather than inside a container image. For long-term use we suggest the /global/common/software filesystem. You may need to request a quota increase first, since the default quotas on this filesystem can be small; you can do so by filling out the Disk Quota Increase Form.
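Before extracting, it is worth checking that all three parts downloaded completely. A minimal sketch, assuming the tarballs sit in the current directory (compare the checksums against any published alongside the downloads on the forum):

# Confirm all three parts are present and compute checksums for verification
ls -lh orca_5_0_4_linux_x86-64_openmpi411_part*.tar.xz
sha256sum orca_5_0_4_linux_x86-64_openmpi411_part*.tar.xz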

Extract Compressed Files to a Single Directory

From the directory containing the downloaded tarballs, extract the contents into a single folder. The following commands do this; replace /your/install/path/orca with the path where you would like to install ORCA.

mkdir -p /your/install/path/orca
for file in orca_5_0_4_linux_x86-64_openmpi411_part[1-3].tar.xz; do
  # Extract each part, then merge its contents into the install directory
  tar -xvf "$file" -C /your/install/path/orca
  mv /your/install/path/orca/"${file%%.*}"/* /your/install/path/orca/
  rm -r /your/install/path/orca/"${file%%.*}"
done

ORCA is precompiled, so once the files have been extracted it is ready to use.
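A quick way to confirm the extraction worked is to check that the main driver sits at the top of the install directory (path assumed from the steps above):

# The orca driver binary should be present and executable
ls -l /your/install/path/orca/orca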

Running ORCA

This build of ORCA is linked against OpenMPI version 4.1.1, and its driver code contains calls to mpirun, so we recommend running it in a container. Note that it is not necessary to call the ORCA executable itself with srun or mpirun. Below is a table of containers suited for running ORCA.

Containers for ORCA

OpenMPI ver.   Repository                   ORCA version
4.1.1          docker.io/stephey/orca:3.0   5.0.4
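If the image is not yet available on Perlmutter, you may need to pull it first. A sketch using Shifter's image tool:

# Pull the ORCA container image listed in the table above
shifterimg pull docker.io/stephey/orca:3.0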

Test Example for Multiprocess Runs

This is an example ORCA input file, named input_parallel.inp, that runs a Hartree-Fock calculation on a water molecule using 16 MPI processes. The %PAL NPROCS 16 directive requests 16 parallel processes, and %MAXCORE 1000 requests 1000 MB of memory per process (16 GB in total).

!HF DEF2-SVP
%PAL NPROCS 16 END
%MAXCORE 1000
%SCF
   MAXITER 500
END
* xyz 0 1
O   0.0000   0.0000   0.0626
H  -0.7920   0.0000  -0.4973
H   0.7920   0.0000  -0.4973
*
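If you want to validate an input before requesting a parallel allocation, a serial run is a reasonable first test. A sketch of the same job with the %PAL block removed (ORCA then runs as a single process):

!HF DEF2-SVP
%MAXCORE 1000
%SCF
   MAXITER 500
END
* xyz 0 1
O   0.0000   0.0000   0.0626
H  -0.7920   0.0000  -0.4973
H   0.7920   0.0000  -0.4973
*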

Single-Node Batch Job

To run the above example on a single node, the batch script below can be used:

#!/bin/bash

#SBATCH -q regular
#SBATCH -N 1
#SBATCH -n 16
#SBATCH -c 16
#SBATCH -t 00:20:00
#SBATCH -C cpu
#SBATCH --image=stephey/orca:3.0

# Prepare Environment
export PATH=/your/install/path/orca:$PATH
export PATH=/usr/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH

# Prepare Working Directory
mkdir -p ORCA_workdir
cp input_parallel.inp ./ORCA_workdir/input_parallel.inp
cd ORCA_workdir

# Run ORCA
shifter /your/install/path/orca/orca input_parallel.inp "--bind-to core"

Notes

  1. With this configuration it is only possible to run on a single node.
  2. Ensure that NPROCS in the input file matches the number of MPI tasks (-n) requested in the job script.
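To submit and monitor the job, something like the following works (the script name is hypothetical; ORCA's output goes to the Slurm output file because stdout is not redirected):

sbatch orca_single_node.sh   # submit the batch script above
squeue --me                  # check the job's queue status
tail -f slurm-*.out          # follow ORCA's output once the job starts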

Multi-Node Batch Job

Warning

The following directions use OpenMPI version 5.0 rather than the version ORCA was built against. This method is not recommended by the ORCA developers.

Although ORCA depends on OpenMPI version 4.1.1, some functionality can be obtained using the OpenMPI version 5.0 module provided by NERSC. Below is an example job script that demonstrates a multi-node run:

#!/bin/bash

#SBATCH -q regular
#SBATCH -N 2
#SBATCH -n 16
#SBATCH --ntasks-per-node=8
#SBATCH -c 16
#SBATCH -t 00:20:00
#SBATCH -C cpu

# Prepare Environment
export PATH=/your/install/path/orca:$PATH
module load openmpi

# Prepare Working Directory
mkdir -p ORCA_workdir
cp input_parallel.inp ./ORCA_workdir/input_parallel.inp
cd ORCA_workdir

# Populate a Node List (ORCA reads <basename>.nodes as its host file)
scontrol show hostnames "$SLURM_JOB_NODELIST" > input_parallel.nodes

# Run ORCA
/your/install/path/orca/orca input_parallel.inp "--bind-to core"

Notes

  1. See warning above regarding use of a higher OpenMPI version than recommended.
  2. Ensure that NPROCS in the input file matches the number of MPI tasks (-n) requested in the job script.
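Before committing to a multi-node batch run, you can confirm which OpenMPI version the NERSC module provides. A sketch for an interactive shell:

module load openmpi
mpirun --version   # reports the OpenMPI version the module provides
which mpirun       # confirms mpirun resolves to the module installation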