Amber

Introduction

Amber, together with AmberTools, is a suite of programs for molecular calculations, including molecular dynamics. The main programs for use on WestGrid systems are pmemd and sander. For the types of calculations it can handle, pmemd is generally preferred, as its better parallel scaling and other optimizations make it faster than sander. GPU acceleration is also supported in pmemd on the WestGrid Parallel cluster. See the Amber web site for a description of these and other programs in the Amber and AmberTools packages.

WestGrid has purchased licenses for Amber 10, 11, 12 and 14. Access to the Amber executables is available only to users who have agreed to the license conditions, as described below.

A few points about using Amber on WestGrid systems are given below. Like other jobs on WestGrid systems, Amber jobs are run by submitting an appropriate script for batch scheduling using the qsub command. See the documentation on running batch jobs for more information. Please write to support@westgrid.ca if you have questions about running this software that are not answered here.

Licensing and access

Please review the relevant sections of the license terms at http://ambermd.org/amber14.license.html. The section about fees can be ignored. Then, if you agree, send an email to support@westgrid.ca with the subject line "Amber access requested for your_user_name" and indicate that you have read and can abide by the conditions of use. Your user name will then be added to the wg-amber UNIX group that is used to control access to the software on WestGrid systems.

(Note that WestGrid has purchased a site license, so the parts of the license pages referring to fees and software orders are not relevant to using the software on WestGrid.)

Running Amber on Bugaboo

Amber 12 has been installed on Bugaboo. Add the following line to your job submission scripts to set up the environment before calling Amber commands.

module load amber

(If you are not already a member of the wg-amber UNIX group, you will get an error message about the amber modulefile not being found).
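
For reference, here is a minimal sketch of a Bugaboo job script that loads the module before running Amber. The resource requests, input file names and choice of pmemd.MPI are placeholders to be adapted to your own calculation; the sketch assumes the MPI build of pmemd is launched with mpiexec, as in the Lattice and Parallel examples below.

#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=1:ppn=8
#PBS -l mem=8gb
#PBS -l walltime=01:00:00

# Minimal sketch only: adjust the resource requests and the
# input file names (mdin, prmtop, inpcrd) for your own run.

module load amber

cd $PBS_O_WORKDIR

# Count the cores assigned by the scheduler.
CORES=`/bin/awk 'END {print NR}' $PBS_NODEFILE`

mpiexec -n $CORES pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd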

Running Amber on Grex

Amber 14, together with AmberTools 15, has been installed on Grex under /global/software/amber-15. The previous version, Amber 12, is also available. Note that the new version (amber/15) uses the Intel compilers version 14 and OpenMPI 1.6.5, which are the defaults on Grex; for version 12, which was built with older compilers, the default modules must be purged first ('module purge'). To initialize the Amber environment, do:

module load amber/15

Note: Please do not specify -n or -np options when you are using mpiexec to run the Amber code in your Torque batch scripts.
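
As a sketch of how this fits together in a Torque script, the mpiexec command is simply given no -n or -np option, so that it picks up the processor allocation from the scheduler. The resource requests and input file names below are placeholders to be adapted to your own job and to Grex's node configuration.

#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=1:ppn=12
#PBS -l mem=12gb
#PBS -l walltime=01:00:00

# Minimal sketch only: adjust the resource requests and the
# input file names (mdin, prmtop, inpcrd) for your own run.

module load amber/15

cd $PBS_O_WORKDIR

# On Grex, do not pass -n or -np; mpiexec takes the core count from Torque.
mpiexec pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd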

Running Amber on Jasper

Amber 12 has been installed on Jasper under /global/software/amber/amber12. Initialize the Amber environment using:

module load application/amber/12

Running Amber on Lattice

Amber has been installed on Lattice and Parallel in version-specific subdirectories under /global/software/amber.

There are complete manuals available on Lattice as PDFs in the doc subdirectory for each version.

Please note that WestGrid accounts are not automatically set up on Lattice. Instructions for obtaining an account are in the Lattice QuickStart Guide.

Running Amber 14 on Lattice

Although these notes include references to older versions, Amber 14 is reported to be much faster and incorporates new features and bug fixes, so it should be used unless you have strong reasons to choose an older version.

Amber 14 (serial and Open MPI parallel versions) along with AmberTools 14 have been installed on Lattice under /global/software/amber/amber14p8_at14p21. This has Amber with patches to level 8 and AmberTools 14 with patches to level 21, compiled with Intel 12.1 compilers and using the MKL libraries.

Here is an example batch job script for running the MPI version of pmemd on Lattice.

#!/bin/bash
#PBS -S /bin/bash

# Script for running Amber 14 pmemd.MPI (Open MPI) on Lattice

INPUT=mdin
OUTPUT=mdout
PARM=prmtop
INPCRD=inpcrd

AMBERHOME=/global/software/amber/amber14p8_at14p21
. $AMBERHOME/amber.sh

cd $PBS_O_WORKDIR
echo "Current working directory is `pwd`"

CORES=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $CORES cores."

echo "PBS node file location: $PBS_NODEFILE"
echo "------------------"
cat $PBS_NODEFILE
echo "------------------"

echo "Starting run at: `date`"
mpiexec -n $CORES pmemd.MPI -O -i $INPUT -o $OUTPUT -p $PARM -c $INPCRD
echo "Finished run at: `date`"

If the above script is called amber.pbs, it can be submitted for execution with a qsub command of the form:

qsub -l nodes=1:ppn=8,mem=11gb,walltime=00:10:00 amber.pbs

Note that whole nodes (ppn=8, with mem equal to 10-11 GB per node requested) should be used for Lattice jobs, as mentioned in the Lattice QuickStart Guide.

Running Amber 11 or 12 on Lattice

Amber 11 (serial and Open MPI parallel) has been installed on Lattice under /global/software/amber/amber11+at15. This build is based on AmberTools 1.5. An older build with AmberTools 1.4 is in /global/software/amber/amber11.

Two builds of Amber 12 are available, in /global/software/amber/amber12 and /global/software/amber/amber12_mkl. The latter, which uses the Intel Math Kernel Library (MKL), was about twenty percent faster than the build without MKL in initial testing. Note that the Amber 11 binaries are in a subdirectory called exe, but in the Amber 12 release this was changed to bin. Also, Amber 12 was built with the Intel 12 compiler (rather than the Intel 11 compiler used for previous versions), so the appropriate module must be loaded before running the code, as shown in the example script below.

Here is an example batch job script.

#!/bin/bash
#PBS -S /bin/bash

# Script for running Amber 12 pmemd.MPI (OpenMPI) on Lattice

INPUT=mdin
OUTPUT=mdout
PARM=prmtop
INPCRD=inpcrd

AMBERHOME=/global/software/amber/amber12_mkl
export PATH=$PATH:$AMBERHOME/bin

# Amber 12 was compiled with the Intel 12 compiler, so, set up that environment:
module unload intel
module load intel/12

# The selected version of Amber has not been recompiled since the
# October 2013 system upgrade on Lattice and Parallel.
# As such, it requires an old version of Open MPI
module load openmpi/old

cd $PBS_O_WORKDIR
echo "Current working directory is `pwd`"

NUM_PROCS=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $NUM_PROCS processors."

echo "PBS node file location: $PBS_NODEFILE"
echo "------------------"
cat $PBS_NODEFILE
echo "------------------"

echo "Starting run at: `date`"
mpiexec -n $NUM_PROCS pmemd.MPI -O -i $INPUT -o $OUTPUT -p $PARM -c $INPCRD
echo "Finished run at: `date`"


Running Amber on Parallel

On Parallel, some of the compute nodes have general purpose graphics processing units (GPUs) that can be used to speed up Amber calculations. There is a discussion of the GPU-enhanced capabilities in Amber at http://ambermd.org/gpus/.

Please note that WestGrid accounts are not automatically set up on Parallel. Instructions for obtaining an account are in the Parallel QuickStart Guide.

Lattice and Parallel share /global/software, so most of the description for Lattice above also applies to Parallel. However, there are a few important differences with respect to job submission:

  • Parallel has 12 cores per node, so instead of ppn=8 one should use ppn=12. See the Parallel QuickStart Guide for more information.
  • GPU-enabled nodes are not assigned by default, but have to be requested using TORQUE queue and resource directives, as explained on the WestGrid GPU computations page and illustrated by the example qsub command after this list.
  • The GPU-enabled binaries are the ones that have cuda in the name, for example, pmemd.cuda_SPDP.MPI.
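
For example, a GPU-enabled batch job on Parallel could be submitted with a command along the following lines; the resource values are illustrative only, and a complete GPU-enabled job script is given in the Amber 14 section below.

qsub -q gpu -l nodes=1:ppn=12:gpus=3,mem=22gb,walltime=03:00:00 amber_gpu.pbs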

Running Amber 14 on Parallel

Although these notes include references to older versions, Amber 14 is reported to be much faster and incorporates new features and bug fixes, so it should be used unless you have strong reasons to choose an older version.

Amber 14 (serial and Open MPI parallel versions) along with AmberTools 14 have been installed on Parallel under /global/software/amber/amber14p8_at14p21. As suggested by the name, this has Amber with patches to level 8 and AmberTools 14 with patches to level 21, compiled with Intel 12.1 compilers and using the MKL libraries.

Here is an example batch job script for running the MPI version of pmemd on Parallel.

#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=1:ppn=12
#PBS -l mem=22gb
#PBS -l walltime=03:00:00
# If you increase the number of nodes requested, increase the mem request in proportion.
# Leave the ppn=12 as it is, as this is a per-node value.
# Adjust the walltime value as appropriate.

# Script for running Amber 14 pmemd.MPI (Open MPI) on Parallel
# 2015-01-08 DSP

INPUT=mdin
OUTPUT=mdout
PARM=prmtop
INPCRD=inpcrd

export AMBERHOME=/global/software/amber/amber14p8_at14p21
. $AMBERHOME/amber.sh

cd $PBS_O_WORKDIR
echo "Current working directory is `pwd`"

# $PBS_NP can be used on some systems instead of calculating $CORES
CORES=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $CORES cores."

echo "PBS node file location: $PBS_NODEFILE"
echo "------------------"
cat $PBS_NODEFILE
echo "------------------"

echo "Starting run at: `date`"
mpiexec -n $CORES pmemd.MPI -O -i $INPUT -o $OUTPUT -p $PARM -c $INPCRD
echo "Finished run at: `date`"

If the above script is called amber.pbs, it can be submitted for execution with a qsub command of the form:

qsub amber.pbs

Note that whole nodes (ppn=12, with mem equal to 22-23 GB per node requested) should be used for Parallel jobs, as mentioned in the Parallel QuickStart Guide.

Here is a similar batch job script for running the GPU-enabled version of Amber 14. Note that the name of the Amber executable in this case is pmemd.cuda.MPI. Other changes from the non-GPU example are the resource requests for the gpu queue and the addition of the gpus=3 modifier on the node request. Note also that the maximum walltime limit is 24 hours for GPU-based jobs, compared to 72 hours for CPU-only jobs.

#!/bin/bash
#PBS -S /bin/bash
#PBS -q gpu
#PBS -l nodes=1:ppn=12:gpus=3
#PBS -l mem=22gb
#PBS -l walltime=03:00:00
# If you increase the number of nodes requested, increase the mem request in proportion.
# Leave the ppn=12 and number of gpus=3 as they are, as these are per-node values.
# Adjust the walltime value as appropriate.

# Script for running Amber 14 pmemd.cuda.MPI (GPU-enabled, Open MPI) on Parallel
# 2015-01-08 DSP

INPUT=mdin
OUTPUT=mdout
PARM=prmtop
INPCRD=inpcrd

export AMBERHOME=/global/software/amber/amber14p8_at14p21
. $AMBERHOME/amber.sh

cd $PBS_O_WORKDIR
echo "Current working directory is `pwd`"

# Initialize the CUDA run-time environment for GPU-based calculations
module load cuda

# $PBS_NP can be used on some systems instead of calculating $CORES
CORES=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $CORES cores."

echo "PBS node file location: $PBS_NODEFILE"
echo "------------------"
cat $PBS_NODEFILE
echo "------------------"

echo "GPU file: $PBS_GPUFILE :"
echo "------------------"
cat $PBS_GPUFILE
echo "------------------"
NUM_GPUS=`/bin/awk 'END {print NR}' $PBS_GPUFILE`
echo "$NUM_GPUS GPUs assigned."

echo "Starting run at: `date`"
mpiexec -n $CORES pmemd.cuda.MPI -O -i $INPUT -o $OUTPUT -p $PARM -c $INPCRD
echo "Finished run at: `date`"

If the above script is called amber_gpu.pbs, it can be submitted for execution with a qsub command of the form:

qsub amber_gpu.pbs

Running Amber 11 or 12 on Parallel

As mentioned in the Running Amber 11 or 12 on Lattice section above, these older versions of Amber are also available on Parallel. The Amber 12 batch job script example in that section can be used for non-GPU runs on Parallel, taking into account that Parallel has 12 cores per node, so instead of ppn=8 one should use ppn=12. See the Parallel QuickStart Guide for more information.
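
For example, the Amber 12 script from the Lattice section could be submitted on Parallel with a qsub command along these lines (the memory and walltime values are illustrative, following the whole-node recommendation noted above):

qsub -l nodes=1:ppn=12,mem=22gb,walltime=00:10:00 amber.pbs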

There is also the possibility of using a GPU-enabled version of Amber on Parallel. To do so, in addition to the Intel compiler-related modules shown in the Lattice example script, use module load cuda/4.1 before running pmemd.cuda_SPDP.MPI. The module commands in the excerpt below are specific to versions of the code that have not been recompiled since the October 2013 system upgrade on Parallel. However, rather than trying to use these old versions, we recommend that you use Amber 14 instead. A full example script for the GPU-enabled version of Amber 14 is given above.

module unload intel
module load intel/12
module load cuda/4.1
module load openmpi/old
...
mpiexec -n $NUM_PROCS pmemd.cuda_SPDP.MPI -O -i $INPUT -o $OUTPUT -p $PARM -c $INPCRD



For More Information

Updates:

2015-01-08 - Added Amber 14 section for Parallel.
2014-12-05 - Added Amber 14 section for Lattice.
2013-12-13 - Added module load cuda/4.1 to the Parallel example script.
2013-12-11 - Added module load openmpi/old to the Lattice example script.