
ORCA

Description

ORCA is a flexible, efficient, and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods, ranging from semiempirical methods to DFT to single- and multi-reference correlated ab initio methods. It can also treat environmental and relativistic effects.

For more information, visit the ORCA Official Website.
Useful links: ORCA Input Library, ORCA Tutorials

Access

Access to ORCA is granted to users who can show that they have registered with the ORCA Forum to download ORCA:

  1. Register for an ORCA Forum account.
  2. Provide the requested information and accept the End User License Agreement (EULA) containing the terms of registration.
  3. Once registration is complete, several download links will be available. You do not need to download anything to run ORCA on the HPRC systems.
  4. An ORCA registration verification email will be sent to the email address that you used to register for the ORCA Forum.
  5. Send a copy of the ORCA registration verification email to help@hprc.tamu.edu as proof of ORCA registration.

Once we have received your proof of ORCA registration, you will be given access to the HPRC ORCA installs and notified.

License Information

By using ORCA, you are agreeing to the terms and conditions that you accepted when registering with ORCA.

End User License Agreement (EULA) for the ORCA software

Loading the ORCA modules

Grace and FASTER Instructions:

List the versions of ORCA installed:

mla ORCA

List the required module dependencies for a particular ORCA version:

ml spider ORCA/version

Finally, load that ORCA version, with its dependencies listed first, to set up your environment to run ORCA:

ml dependencies ORCA/version
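For example, to load ORCA 5.0.4 (the version used in the Grace and FASTER job files below) together with its dependencies:

ml spider ORCA/5.0.4
ml GCC/11.3.0 OpenMPI/4.1.4 ORCA/5.0.4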

Terra Instructions:

List the versions of ORCA installed:

mla ORCA

Finally, load the desired ORCA version to set up your environment to run ORCA:

ml ORCA/version
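For example, to load the ORCA 5.0.3 build used in the Terra job files below:

ml ORCA/5.0.3-gompi-2021b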

Running ORCA in Parallel

ORCA handles communication with the OpenMPI interface on its own when needed. Unlike many MPI programs, ORCA should NOT be started with mpirun (e.g., mpirun -np 16 orca). Instead, use the !PalX keyword in the input file to tell ORCA to start multiple processes; everything from PAL2 to PAL8 is recognized. For example, to start a 4-process job, the input file might look like this:

! B3LYP def2-SVP Opt PAL4

or, using the %pal block input (which is not limited to 8 processes), to start a 48-process job:

! B3LYP def2-SVP Opt  
%pal  
nprocs 48
end
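A complete minimal input file combines the method line, the %pal block, and a coordinate block. In the sketch below, the geometry (a water molecule, charge 0, multiplicity 1) is only illustrative; make sure nprocs does not exceed the number of cores requested from the batch system:

! B3LYP def2-SVP Opt
%pal
  nprocs 4    # must not exceed the cores allocated by Slurm
end
* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.757000   0.586000
H   0.000000  -0.757000   0.586000
*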

FASTER Example Job file

A multicore (64-core) example: (Updated Apr. 22, 2023)

#!/bin/bash  
##NECESSARY JOB SPECIFICATIONS  
#SBATCH --job-name=orcaJob            # Sets the job name to orcaJob  
#SBATCH --time=2:00:00                # Sets the runtime limit to 2 hr  
#SBATCH --ntasks=64                   # Requests 64 cores  
#SBATCH --ntasks-per-node=64          # Requests 64 cores per node (1 node)  
#SBATCH --mem=250G                    # Requests 250GB of memory per node  
#SBATCH --error=orcaJob.job.e%J       # Sends stderr to orcaJob.job.e[jobID]  
#SBATCH --output=orcaJob.job.o%J      # Sends stdout to orcaJob.job.o[jobID]

# setup your environment to run ORCA 
ml purge                                # purge all modules
ml GCC/11.3.0 OpenMPI/4.1.4 ORCA/5.0.4  # load the module for ORCA

# run ORCA
$EBROOTORCA/bin/orca orcaJob.inp  >  orcaJob.out

exit                                              #exit when the job is done

To submit the job to the queue, use the following command:

[username@FASTER ~]$ sbatch jobscript

Grace Example Job file

A multicore (48-core) example: (Updated Apr. 22, 2023)

#!/bin/bash  
##NECESSARY JOB SPECIFICATIONS  
#SBATCH --job-name=orcaJob            # Sets the job name to orcaJob  
#SBATCH --time=2:00:00                # Sets the runtime limit to 2 hr  
#SBATCH --ntasks=48                   # Requests 48 cores  
#SBATCH --ntasks-per-node=48          # Requests 48 cores per node (1 node)  
#SBATCH --mem=360G                    # Requests 360GB of memory per node  
#SBATCH --error=orcaJob.job.e%J       # Sends stderr to orcaJob.job.e[jobID]  
#SBATCH --output=orcaJob.job.o%J      # Sends stdout to orcaJob.job.o[jobID]

# setup your environment to run ORCA 
ml purge                                # purge all modules
ml GCC/11.3.0 OpenMPI/4.1.4 ORCA/5.0.4  # load the module for ORCA

# run ORCA
$EBROOTORCA/bin/orca orcaJob.inp  >  orcaJob.out

exit                                              #exit when the job is done

To submit the job to the queue, use the following command:

[username@Grace ~]$ sbatch jobscript

Terra Example Job files

A multicore (28-core) example: (Updated Apr. 22, 2023)

#!/bin/bash  
##NECESSARY JOB SPECIFICATIONS  
#SBATCH --job-name=orcaJob            # Sets the job name to orcaJob  
#SBATCH --time=2:00:00                # Sets the runtime limit to 2 hr  
#SBATCH --ntasks=28                   # Requests 28 cores  
#SBATCH --ntasks-per-node=28          # Requests 28 cores per node (1 node)  
#SBATCH --mem=56G                     # Requests 56GB of memory per node  
#SBATCH --error=orcaJob.job.e%J       # Sends stderr to orcaJob.job.e[jobID]  
#SBATCH --output=orcaJob.job.o%J      # Sends stdout to orcaJob.job.o[jobID]

# setup your environment to run ORCA
ml purge                              # purge all modules
ml ORCA/5.0.3-gompi-2021b             # load the module for Orca

# run ORCA
$EBROOTORCA/bin/orca orcaJob.inp  >  orcaJob.out

exit                                  #exit when the job is done

To submit the job to the queue, use the following command:

[username@terra ~]$ sbatch jobscript

A multinode (56-core) example: (Updated Apr. 22, 2023)

#!/bin/bash  
##NECESSARY JOB SPECIFICATIONS  
#SBATCH --job-name=orcaJob            # Sets the job name to orcaJob  
#SBATCH --time=2:00:00                # Sets the runtime limit to 2 hr  
#SBATCH --ntasks=56                   # Requests 56 cores in total
#SBATCH --ntasks-per-node=28          # Requests 28 cores per node (2 nodes)
#SBATCH --mem=56G                     # Requests 56GB of memory per node  
#SBATCH --error=orcaJob.job.e%J       # Sends stderr to orcaJob.job.e[jobID]  
#SBATCH --output=orcaJob.job.o%J      # Sends stdout to orcaJob.job.o[jobID]

# setup your environment to run ORCA
ml purge                              # purge all modules
ml ORCA/5.0.3-gompi-2021b             # load the module for Orca

# run ORCA
$EBROOTORCA/bin/orca orcaJob.inp  >  orcaJob.out

exit                                  #exit when the job is done

To submit the job to the queue, use the following command:

[username@terra ~]$ sbatch jobscript
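Note that ORCA must be invoked with the full path to the orca binary when running in parallel; the job files above do this through the $EBROOTORCA environment variable, which is set when the ORCA module is loaded.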

For further instructions on how to create and submit a batch job, please see the batch processing knowledge base page for the respective cluster.

Using xTB with ORCA

ORCA 4.2.1 supports the semiempirical quantum mechanical methods GFNn-xTB with an IO-based interface to the xtb binary. The otool_xtb wrapper script has been added to the directory that contains the ORCA binaries.

To use ORCA with xTB, you will need to load both the ORCA and xtb modules.

Grace module load example with xtb:

ml GCC/8.3.0 OpenMPI/3.1.4 ORCA/4.2.1-shared xtb/6.2.3

Terra module load example with xtb:

ml ORCA/4.2.1-gompi-2019b-shared xtb/6.2.3-foss-2019b
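With both modules loaded, the GFN-xTB methods can be requested directly in the ORCA input file. A minimal sketch of a GFN2-xTB geometry optimization follows; the XTB2 keyword selects GFN2-xTB, and the water geometry is illustrative:

! XTB2 Opt
* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.757000   0.586000
H   0.000000  -0.757000   0.586000
*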

Using CREST with xTB

The Conformer-Rotamer Ensemble Sampling Tool (CREST) is a utility/driver program for the xtb program. CREST version 2.9 has been installed on Grace and Terra and is available by loading the xtb module.

Grace module load example with xtb:

ml GCC/8.3.0  OpenMPI/3.1.4 xtb/6.2.3

Terra module load example with xtb:

ml xtb/6.2.3-foss-2019b
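A minimal CREST conformer search might look like the following sketch, where struc.xyz is a placeholder for your input geometry, -gfn2 selects the GFN2-xTB level, and -T sets the number of parallel threads:

crest struc.xyz -gfn2 -T 4 > crest.out

The resulting conformer ensemble is written to crest_conformers.xyz in the working directory.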