NAMD

Introduction

NAMD is a molecular dynamics simulator for large biomolecular systems.

Users are expected to be generally familiar with NAMD's capabilities and input file formats. A user's guide and detailed tutorial notes are available at the NAMD web site. Like other jobs on WestGrid systems, NAMD jobs are run by submitting an appropriate script for batch scheduling with the qsub command. Details of scheduling and job management are explained on the Running Jobs page. See the Software Versions tab, as well as the notes below, for hints about running NAMD on WestGrid clusters.
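For example, if your finished job script were saved as namd_job.pbs (a hypothetical name), it would be submitted with:

qsub namd_job.pbs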

Licensing and access requests

The NAMD license agreement stipulates that "each member of the institution or corporation who has access to or uses Software must agree to and abide by the terms of this license". Therefore, access to the software is allowed only for those users who have read the license agreement on the NAMD web site and can agree to the conditions given there. If you would like to use NAMD on WestGrid systems, please review the license and send email to support@westgrid.ca with the subject line "NAMD access requested for your_user_name", indicating that you have read and can abide by the conditions of use. Also see the VMD entry in the graphics section.

Running NAMD on Bugaboo

We do not have a specific batch job example for running NAMD on Bugaboo, but see the Using Bugaboo section of the Bugaboo QuickStart Guide for hints about running parallel jobs on that system. Your batch job could contain lines of the form:

INPUT=input_file
OUTPUT=output_file
mpiexec namd2 $INPUT > $OUTPUT
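A complete Bugaboo job script might look like the following minimal sketch; the walltime, node, and ppn requests, along with input_file and output_file, are placeholders to adjust for your own simulation:

#!/bin/bash
#PBS -S /bin/bash
#PBS -l walltime=1:00:00,nodes=2:ppn=8

cd $PBS_O_WORKDIR

INPUT=input_file
OUTPUT=output_file

# mpiexec obtains the node allocation from the batch system
mpiexec namd2 $INPUT > $OUTPUT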

Running NAMD on Jasper

In your batch job script for running NAMD on Jasper, include lines similar to the following:

INPUT=input_file
OUTPUT=output_file
module load application/namd/2.10
mpiexec namd2 $INPUT > $OUTPUT
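Putting these together, a minimal Jasper job script might look like the sketch below; the resource requests are placeholders, so check Jasper's per-node core count before choosing ppn:

#!/bin/bash
#PBS -S /bin/bash
#PBS -l walltime=1:00:00,nodes=2:ppn=12

cd $PBS_O_WORKDIR

INPUT=input_file
OUTPUT=output_file

# Load the NAMD 2.10 environment, then run across the allocated processors
module load application/namd/2.10
mpiexec namd2 $INPUT > $OUTPUT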

Running NAMD on Lattice and Parallel

Note: These notes were written several years ago, before the Parallel cluster existed. Lattice and Parallel share file systems, so the same software is visible from both systems. However, since only Parallel has GPU nodes, the GPU-specific versions (the ones with CUDA in the name) should be run only on Parallel, not on Lattice. Also, NAMD 2.9.1 has been installed since the notes in this section were written. See /global/software/namd for the available versions.
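For the CUDA builds on Parallel, NAMD's release notes recommend running with the +idlepoll option so that idle threads poll for GPU results. The following is only a rough sketch, assuming a multicore CUDA build; the installation path is hypothetical, so check /global/software/namd for the actual directory name:

# Hypothetical path; list /global/software/namd for the real CUDA build
BINDIR=/global/software/namd/namd291-cuda

INPUT=alanin
OUTPUT=alanin.out

# +p sets the number of worker threads; +idlepoll is recommended for CUDA builds
$BINDIR/namd2 +p8 +idlepoll $INPUT > $OUTPUT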

Versions of NAMD on Lattice include the final 2.7 release, using direct InfiniBand (ibverbs) libraries, and the 2.7b3 release, using Open MPI over InfiniBand. In general the ibverbs version is recommended, as it performs slightly better and scales to more processors than the Open MPI version. With either version, ppn can be set to a maximum of 8 on Lattice.

NAMD 2.7 (ibverbs)

The following command submits a 1-hour parallel job on 2 nodes with 8 processors per node, based on the script namd_ib.pbs, for the alanin test case distributed with NAMD:

qsub -l walltime=1:00:00,nodes=2:ppn=8 namd_ib.pbs

where namd_ib.pbs contains:

#!/bin/bash
#PBS -S /bin/bash

# Script for running the October 15, 2010 build of NAMD 2.7 (ibverbs) on Lattice
# 2010-11-25 KEW

cd $PBS_O_WORKDIR
echo "Current working directory is `pwd`"

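# $PBS_NODEFILE contains one line per processor slot assigned by PBS;
# counting its lines gives the processor count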
NUM_PROCS=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $NUM_PROCS processors."

INPUT=alanin
OUTPUT=alanin.out

BINDIR=/global/software/namd/namd27
NODELIST=/tmp/charmrun-nodelist.$PBS_JOBID

echo "PBS node file location: $PBS_NODEFILE"
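# charmrun expects each nodelist line in the form "host hostname",
# so prefix every PBS node file entry with the word host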
cat $PBS_NODEFILE | sed -e 's/^/host /' > ${NODELIST}
echo Node file used by Charm++: ${NODELIST}
echo "------------------"
cat ${NODELIST}
echo "------------------"

echo "Starting run at: `date`"

$BINDIR/charmrun +p$NUM_PROCS ++remote-shell ssh \
++nodelist $NODELIST $BINDIR/namd2 $INPUT > $OUTPUT

echo "Finished run at: `date`"

rm $NODELIST

exit

NAMD 2.7 b3 (OpenMPI)

The following command submits a 10-minute parallel job on 1 node with 4 processors, based on the script namd.pbs, for the alanin test case distributed with NAMD:

qsub -l walltime=10:00,nodes=1:ppn=4 namd.pbs

where namd.pbs contains:

#!/bin/bash
#PBS -S /bin/bash
# Script for running NAMD on Lattice
# 2010-10-18 KEW

cd $PBS_O_WORKDIR
echo "Current working directory is `pwd`"
echo "Node file: $PBS_NODEFILE"
echo "------------------"
cat $PBS_NODEFILE
echo "------------------"
NUM_PROCS=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $NUM_PROCS processors."

module load openmpi/old

INPUT=alanin
OUTPUT=alanin.out
BINDIR=/global/software/namd/namd27b3

echo "Starting run at: `date`"
mpiexec -n $NUM_PROCS ${BINDIR}/namd2 $INPUT > $OUTPUT
echo "Finished run at: `date`"

Modify the INPUT and OUTPUT lines in the script to specify your own input and output files. The sample scripts and input files are available on Lattice in the directory /global/software/namd/examples.

Running NAMD on Orcinus

As shown in the sample script below, a module command, module load namd, is used to set up the NAMD environment on Orcinus. See the WestGrid module page for more information on modules.

A sample batch job script, similar to the one shown below, is available on Orcinus at /global/system/info/notes/script-examples/NAMD/job-NAMD-parallel.bash.

#!/bin/bash
#PBS -S /bin/bash
#PBS -N NAMD
# If you are requesting at least 4 cores, use qos=parallel
# If you are requesting fewer than 4 cores, use qos=normal
#PBS -l qos=parallel
# Specify the number of cores (either procs=NN or nodes=nn:ppn=mm),
# the walltime, and the memory per process (pmem)
#PBS -l nodes=2:ppn=8,walltime=10:00,pmem=1gb
# For all other options, see man qsub
#PBS -j oe
# ******************** end of job specification section **********

cd $PBS_O_WORKDIR
echo "Current working directory is `pwd`"

echo "Node file: $PBS_NODEFILE :"
echo "---------------------"
cat $PBS_NODEFILE
echo "---------------------"

NUM_PROCS=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $NUM_PROCS processors."

echo "Starting run at: `date`"

module load namd

# Set the input file for your own simulation
INPUT=stmv.namd

# NAMD output goes to the job's standard output file
# (combined with standard error by the -j oe directive above)
mpiexec -np ${NUM_PROCS} namd2 $INPUT
echo "Job finished at: `date`"
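Because the resource requests are embedded in the #PBS directives, the script can be submitted without additional options:

qsub job-NAMD-parallel.bash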
