
VASP (Vienna Ab initio Simulation Package, also known as VAMP) is a mature, well-developed commercial package for solid-state periodic density functional theory calculations "using pseudopotentials and a plane wave basis set". See the VASP web site for further information.

VASP 4.6 has been installed on the WestGrid Bugaboo and Orcinus clusters for the convenience of those researchers who hold a license for the software and do not need to build a custom version. VASP version 5 is available on Bugaboo, Grex, Orcinus and Parallel. Access is restricted to approved users only. Also note that Parallel accounts require a special request. See the Parallel QuickStart Guide for more information.

Requesting Access

Before WestGrid administrators are allowed to permit access to the software for a given researcher, we are required to confirm with our VASP licensing contact that the researcher has been registered with them as a member of a research group with a valid VASP license.

If you are a member of a research group that has purchased a VASP license and would like to use the version built by WestGrid, please contact your VASP licensing contact to request permission to run on WestGrid. After you have obtained their approval, write to WestGrid support to request access. Please include the name of the license holder who is sponsoring you and the name of your department and university. Also, please indicate that it is okay for us to forward your email address to our VASP licensing contact, as this information is used to identify you as a legitimate registered user of the software.

Note that VASP 5 licensees may use older versions of VASP, such as VASP 4.6, but VASP 4.6 license holders may not use more recent versions without upgrading their license.

Running VASP on Bugaboo

Executable files for the non-gamma-point and gamma-point builds of VASP 5.x.x are named vasp5 and vasp5-gamma respectively; vasp5-npt and vasp5-npt-gamma are the corresponding executables for NPT-ensemble runs. The 4.x versions are available as vasp and vasp-gamma. The pseudopotentials for the different elements are located in /usr/local/vasp-5.x.x/potentials and /usr/local/vasp-4.x.x/potentials for VASP 5 and VASP 4 license holders respectively.
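To keep job scripts readable when switching between these builds, the executable name can be selected once at the top of the script. A minimal bash sketch; the GAMMA_ONLY and NPT flags are illustrative assumptions, not VASP inputs:

```shell
# Pick the Bugaboo VASP 5 executable name from two illustrative flags.
GAMMA_ONLY=yes   # gamma-point-only calculation?
NPT=no           # NPT-ensemble run?

EXE=vasp5
[ "$NPT" = yes ] && EXE=vasp5-npt
[ "$GAMMA_ONLY" = yes ] && EXE=${EXE}-gamma

echo "$EXE"
```

With the flags shown, this selects vasp5-gamma; setting both to yes would select vasp5-npt-gamma.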

Running VASP on Grex

The current version installed on Grex is VASP 5.4.1. Note that, unlike VASP 5.2 and earlier, this version includes parallelization over k-points and thus has different memory requirements. To use this kind of parallelism, the number of cores requested for the job should match the number of k-points. The software was built with the Intel Fortran compiler v14 and uses the Intel Math Kernel Library (MKL), including the ScaLAPACK and FFT bindings of MKL.
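Within VASP itself, k-point parallelism is controlled by the KPAR tag in the INCAR input file (the number of k-point groups to treat in parallel). A minimal sketch with illustrative values; choose KPAR to divide the number of cores evenly:

```
! INCAR fragment (illustrative values only)
KPAR  = 4    ! treat 4 k-points in parallel
NCORE = 2    ! cores working on each orbital (optional tuning)
```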

To initialize your environment for VASP, use

module load vasp/5.4.1

There are three executables for VASP 5.4.1: the default vasp, which is a link to vasp_std; vasp_gam, for Gamma-point-only calculations; and vasp_ncl, a fully complex build for non-collinear calculations. The GGA and LDA pseudopotentials shipped with VASP 5 are located in subdirectories of /global/software/vasp5.4.1/pseudo. A sample batch job script is available in /global/software/vasp5.4.1/test/vasp.job. Documentation is placed under /global/software/vasp5.4.1/doc.

The older versions, VASP 5.3.5, 5.3.3 and 5.3.2, are still available on Grex via the corresponding module load commands. Perhaps the most interesting of these is VASP 5.3.5, which includes support for Wannier90. The VASP executables for these older 5.3 releases were named vasp, vasp-gamma and vasp-full respectively.

Running VASP on Parallel

VASP 5.4.1 with some patches has been installed in /global/software/vasp/vasp541/bin. There are three binaries: vasp_gam (gamma-point only), vasp_std (the standard build) and vasp_ncl (non-collinear calculations). The software was built with Intel 12.1 compilers and the corresponding Math Kernel Library (MKL) with ScaLAPACK and FFTW3 interfaces (to the MKL FFT routines). As MKL is multi-threaded, to avoid problems with over-subscribed cores, limit the number of threads to one per MPI process by setting the OMP_NUM_THREADS variable before calling any VASP executable, as shown below (bash example).
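In bash this is a single export, placed before the mpiexec line of the job script:

```shell
# Limit MKL/OpenMP to one thread per MPI process.
export OMP_NUM_THREADS=1
echo "$OMP_NUM_THREADS"
```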


In addition, executables from a build of VASP 5.4.1 supporting VASP Transition State Tools (VTST) have been installed in /global/software/vasp/vasp541_vtst169/bin and binaries incorporating the VASPsol solvation model are in /global/software/vasp/vasp541_vaspsol.

Here is a sample job script:

#PBS -S /bin/bash
#PBS -l nodes=1:ppn=12
#PBS -l mem=23gb
#PBS -l walltime=3:00:00

# Sample script for running VASP 5.4.1 on Parallel.

# You can change the number of nodes requested, but for
# Parallel use ppn=12 and a mem parameter of up to 23 GB per node.

# Choose the version of VASP to use by uncommenting one of these lines:

VASP=/global/software/vasp/vasp541/bin/vasp_std
# VASP=/global/software/vasp/vasp541/bin/vasp_gam
# VASP=/global/software/vasp/vasp541/bin/vasp_ncl

# These are the VASP Transition State Tools versions:

# VASP=/global/software/vasp/vasp541_vtst169/bin/vasp_std
# VASP=/global/software/vasp/vasp541_vtst169/bin/vasp_gam
# VASP=/global/software/vasp/vasp541_vtst169/bin/vasp_ncl

# These are the VASPsol solvation model versions:

# VASP=/global/software/vasp/vasp541_vaspsol/vasp_std
# VASP=/global/software/vasp/vasp541_vaspsol/vasp_gam
# VASP=/global/software/vasp/vasp541_vaspsol/vasp_ncl

# Define an output file name (example name; adjust as needed):

OUTPUT=vasp_${PBS_JOBID}.out

echo "Current working directory is `pwd`"

echo "Node file: $PBS_NODEFILE :"
echo "---------------------"
cat $PBS_NODEFILE
echo "---------------------"

# On many WestGrid systems a variable PBS_NP is automatically
# assigned the number of cores requested of the batch system,
# in which case one could use:
# echo "Running on $PBS_NP cores."
# On systems where PBS_NP is not available, count the lines
# in the node file instead:

CORES=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $CORES cores."

# Set number of OpenMP threads per MPI process to one.

export OMP_NUM_THREADS=1

echo "Starting run at: `date`"

# On most WestGrid systems, mpiexec will automatically start
# a number of MPI processes equal to the number of cores
# requested. The -n argument can be used to explicitly
# use a specific number of cores.

mpiexec -n ${CORES} $VASP > $OUTPUT

echo "VASP finished with exit code $? at: `date`"

The 5.4 release of PAW (LDA and PBE) potentials is available in subdirectories under /global/software/vasp/potentials.
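As a reminder of how these potential files are used: VASP reads a single POTCAR, formed by concatenating the per-element POTCAR files in the same order as the species appear in POSCAR. A sketch using a scratch directory to stand in for the real potential tree (the subdirectory names potpaw_PBE, Ga and As here are assumptions for illustration):

```shell
# Stand-in for the real potential tree under
# /global/software/vasp/potentials (names assumed).
POTDIR=$(mktemp -d)/potpaw_PBE
mkdir -p "$POTDIR/Ga" "$POTDIR/As"
echo "PAW_PBE Ga 01Jan2000" > "$POTDIR/Ga/POTCAR"
echo "PAW_PBE As 01Jan2000" > "$POTDIR/As/POTCAR"

# Concatenate in the same element order as the POSCAR file:
cat "$POTDIR/Ga/POTCAR" "$POTDIR/As/POTCAR" > POTCAR
head -1 POTCAR
```

On the real system the cat command would point at the installed subdirectories rather than a scratch copy.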

Older versions

Here are some remarks regarding older versions of VASP.

VASP 5.3.3 has been installed in /global/software/vasp/vasp533/bin. It was built with the Intel 12.0 compiler and MKL with ScaLAPACK and the FFTW3 interface (to the MKL FFT routines). As MKL is multi-threaded, to avoid problems with over-subscribed cores, limit the number of threads to one per MPI process by setting the OMP_NUM_THREADS variable, as shown below (bash example). This installation predates a system upgrade in November 2013 and is incompatible with the default Open MPI environment. Use the following initialization in your job script to select an older Open MPI version that should be compatible with the VASP 5.3.3 installation on Parallel:

module unload intel
module load intel/12
module load openmpi/old
export OMP_NUM_THREADS=1

Parallel VASP 5.2 executables are available in /global/software/vasp/vasp52/bin, with vasp linked to one build of version 5.2.12. Several variations on VASP 5.2.11 are also there. The current default version was compiled with the Intel 11.1 compiler, using MKL for the linear algebra and the internal VASP FFTs, but that may change without notice. Other builds, which used the Goto BLAS libraries for the linear algebra or FFTW for the Fourier transforms, failed a test example. To be sure which version you are using, give the specific name in your job script, for example /global/software/vasp/vasp52/bin/vasp_5.2.12_intel11.1_mkl_fft3dfurth, rather than the generic name /global/software/vasp/vasp52/bin/vasp.

Running VASP on Orcinus

VASP 4.6 (February 2009 release) and VASP 5.2 have been installed on Orcinus, each with a parallel vasp executable and a serial vasp-gamma executable. The 4.6 versions are available as:




The 5.2 versions are in the VASP5 directory:




2010-02-23: Please note that initial testing of VASP 5.2 showed that the program failed for large tests, probably due to a stack size problem. Several workarounds are available (essentially amounting to setting ulimit -s unlimited in the environment on the nodes on which VASP is run), but it is not yet clear whether this will be addressed at the system level, by recompiling VASP, or by modifying user job submission procedures. Write to us for the latest information.
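The workaround mentioned above amounts to two lines in the job script, run before the VASP executable is launched (bash; whether the hard limit permits "unlimited" depends on the node configuration):

```shell
# Remove the stack size limit in the job environment, if permitted,
# then report the limit now in effect.
ulimit -s unlimited 2>/dev/null || true
ulimit -s
```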

An example batch job submission script for a parallel run can be found on Orcinus at


For a serial run, one can just use the full path to the executable.


2015-01-20: - Clarified initialization requirement for VASP 5.3.3 on Lattice and Parallel.
2015-04-08: - Added OMP_NUM_THREADS=1 to initialization for VASP 5.3.3 on Lattice and Parallel.
2015-10-16: - Updated the Lattice/Parallel section for VASP 5.4.1.
2016-05-19: - Updated the Lattice/Parallel section for VTST support in VASP 5.4.1.
2016-07-19: - Updated the Lattice/Parallel section for VASPsol support in VASP 5.4.1.
2017-01-23: - Added reference to 5.4 PAW potential files on Lattice and Parallel.
2017-03-08: - Added sample batch job script for Lattice/Parallel.
2017-09-04: - Removed references to Lattice as that cluster has been removed from WestGrid.