
ORCA

Introduction

ORCA is an electronic structure package "with specific emphasis on spectroscopic properties of open-shell molecules" and "features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods."

Licensing and access

2017-09-05: The licensing conditions for newer versions of ORCA (4.x) have changed compared to the old versions. To get access to the latest versions of ORCA, users should read and accept the license conditions and register with ORCA at https://cec.mpg.de/orcadownload/ . ORCA is licensed to single users as well as to research groups. For a research group, the group leader is responsible for his/her group's compliance with ORCA's End User License Agreement.

You first need to register on the ORCA website. Please note that registration with the ORCA Forum (https://orcaforum.cec.mpg.de/) is separate from registration with ORCA itself (https://cec.mpg.de/orcadownload/). Once you have read and agreed to the license conditions by registering on the ORCA website (you do not need to download the program), send or forward the confirmation message from your ORCA registration to support@westgrid.ca with the subject line: ORCA access request (your_WestGrid_username). We will then add you to the UNIX group that controls access to ORCA.

Running ORCA on Lattice and Parallel

ORCA has been installed in a version-specific directory under /global/software/orca. Check there for the version you would like to use.
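
To see which versions are currently installed, you can simply list that directory:

ls /global/software/orca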

Here is a sample batch job script, orca.pbs, which is also available as /global/software/orca/examples/orca.pbs.

#!/bin/bash
#PBS -S /bin/bash

# Sample ORCA script.
# 2011-03-04 DSP
# 2016-04-19 DSP - Updated for version 3.0.3

# In this version, the program will be run in the same directory
# as this script. (No attempt is made to copy files to and from
# storage local to the compute nodes.)

# Specify the ORCA input (.inp) file.
# Note any PAL directives in the file are ignored.
# The number of parallel processes to use
# will be taken from the TORQUE environment.

ORCA_RAW_IN=orca.inp

# Specify an output file

ORCA_OUT=orca_${PBS_JOBID}.out

cd $PBS_O_WORKDIR

echo "Current working directory is `pwd`"

echo "Node file: $PBS_NODEFILE :"
echo "---------------------"
cat $PBS_NODEFILE
echo "---------------------"
NUM_PROCS=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $NUM_PROCS processors."

# Create a temporary input by copying the
# raw input file specified above and then
# appending a line to specify the number of
# parallel processes to use.

echo "Creating temporary input file ${ORCA_IN}"

ORCA_IN=${ORCA_RAW_IN}_${PBS_JOBID}
cp ${ORCA_RAW_IN} ${ORCA_IN}

cat >> ${ORCA_IN} <<EOF
%PAL nprocs $NUM_PROCS
end
EOF

# The orca command should be called with a full path
# and the other executables should be on command PATH.

ORCA_HOME=/global/software/orca/orca_3_0_3_linux_x86-64
ORCA=${ORCA_HOME}/orca
export PATH=${ORCA_HOME}:$PATH

# Define the variable RSH_COMMAND for communication
# between nodes for starting independent calculations
# as described in the user manual, section 3.

export RSH_COMMAND="/usr/bin/ssh"

echo "Starting run at: `date`"
$ORCA ${ORCA_IN} > ${ORCA_OUT}
echo "Job finished at: `date`"

Change the ORCA input file name, orca.inp, to match your own input file and submit the job with qsub. On Lattice and Parallel, we prefer that you use whole nodes by specifying ppn=8 on Lattice and ppn=12 on Parallel, adjusting the number of nodes to suit the parallel scaling of ORCA for your type of calculation. For example, to use two complete nodes on Lattice:

qsub -l nodes=2:ppn=8,walltime=72:00:00 orca.pbs
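
For reference, a minimal ORCA input file might look like the sketch below; the method, basis set, and geometry are arbitrary placeholders, so check the ORCA manual for keywords appropriate to your version and calculation. Note that it contains no PAL directive, since the script above appends one to the temporary copy of the input:

! BP86 def2-SVP Opt

* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.757000   0.587000
H   0.000000  -0.757000   0.587000
*

With, say, 16 processes allocated, the script would append the lines "%PAL nprocs 16" and "end" to the temporary input before running ORCA.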

Running ORCA on Grex

On Grex, the most recent version of ORCA (currently 3.0.1) is available through the standard module command (module load orca). ORCA is installed in a version-specific directory under /global/software/orca. Using ORCA on Grex is generally similar to the method for Lattice/Parallel described above, with the difference that environment modules should be used on Grex to load the necessary dependencies (OpenMPI). Here is a sample batch job script, orca.pbs, which is also available as /global/software/orca/examples/orca.pbs.

#!/bin/bash
#PBS -S /bin/bash

# Sample ORCA script, modified from original Lattice version by DSP.
# 2014-01-06 GAS used correct module and mpiexec calling for ORCA OpenMPI versions
# In this version, the program will be run in the same directory
# as this script. (No attempt is made to copy files to and from
# storage local to the compute nodes.)

# Specify the ORCA input (.inp) file.
# Note any PAL directives in the file are ignored.
# The number of parallel processes to use
# will be taken from the TORQUE environment.

ORCA_RAW_IN=orca.inp

# Specify an output file

ORCA_OUT=orca_${PBS_JOBID}.out

cd $PBS_O_WORKDIR

echo "Current working directory is `pwd`"

echo "Node file: $PBS_NODEFILE :"
echo "---------------------"
cat $PBS_NODEFILE
echo "---------------------"
NUM_PROCS=$PBS_NP
echo "Running on $NUM_PROCS processors."

# Create a temporary input by copying the
# raw input file specified above and then
# appending a line to specify the number of
# parallel processes to use.

echo "Creating temporary input file ${ORCA_IN}"

ORCA_IN=${ORCA_RAW_IN}_${PBS_JOBID}
cp ${ORCA_RAW_IN} ${ORCA_IN}

cat >> ${ORCA_IN} <<EOF
%PAL nprocs $NUM_PROCS
end
EOF

# The orca command should be called with a full path
# and the orca module should be loaded.
module load orca

ORCA=`which orca`

export RSH_COMMAND="/usr/bin/ssh"

echo "Starting run at: `date`"
$ORCA ${ORCA_IN} > ${ORCA_OUT}
echo "Job finished at: `date`"

Change the ORCA input file name, orca.inp, to match your own input file and submit the job with qsub.

On Grex, there are 12 cores per node, and the nodes are connected by a high-speed, non-blocking QDR InfiniBand interconnect. There is usually no need to restrict a job to whole nodes or to a single node; the flexible procs resource specification may give you shorter queuing times. On Grex, memory limits are enforced, so you should specify a sufficient pmem (memory per process) parameter. Quantum chemistry methods, especially ab initio methods and analytical frequencies, often require at least a few GB per process. Some ORCA jobs, such as coupled cluster, CI, or MCSCF computations, also use a lot of disk space, so it is recommended to specify the file resource as well to make sure that these jobs land on large-disk nodes. For example:

qsub -l procs=16,pmem=4gb,walltime=72:00:00,file=30gb orca.pbs
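
Note that ORCA also has its own per-process memory setting, %maxcore, given in MB in the input file. It is advisable to set it somewhat below pmem so that the job stays within the enforced limit; for example, with pmem=4gb you might add the following to your input (the value 3500 is an illustrative choice that leaves some headroom below the 4 GB limit):

%maxcore 3500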

Updated:

2012-02-24 - Removed redundancies between Lattice and Grex instructions, updated Grex instructions.
2014-01-17 - Changed version in Lattice/Parallel example to 3.0.1.
2016-04-19 - Changed version in Lattice/Parallel example to 3.0.3.
2016-06-17 - Removed mpiexec from Lattice/Parallel example.
2017-05-18 - Corrected external links to ORCA Forum and ORCA 4 license.
2017-09-05 - Changed the licensing section to reflect the new license conditions for newer versions.