HOOMD-blue particle simulation toolbox

Introduction

HOOMD-blue is a general-purpose particle simulation toolbox for molecular dynamics and related particle simulations.

Running HOOMD-blue on Parallel

The HOOMD-blue software requires graphics processing units (GPUs), so it is available only on the Parallel cluster.  Note that Parallel accounts are not set up automatically when a researcher obtains a WestGrid account.  See the Parallel QuickStart Guide for instructions on requesting a Parallel account.

See the GPU computations page for information about requesting GPU-enabled nodes on Parallel.

HOOMD-blue simulations are controlled through Python scripts.  The installation on Parallel was compiled against Python 3.4.1.  To set up the environment for this version of Python and the other dependencies, use:

module load hoomd

in your batch job script before running the hoomd executable.
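
The same command can also be run in an interactive login session to verify the environment before submitting batch jobs.  The short check below is only a sketch; the module version and paths reported will depend on the installation.

module load hoomd
module list           # confirm which hoomd module (and dependencies) were loaded
which hoomd           # show the path of the hoomd executable provided by the module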

Here is a sample batch job script that was used to test HOOMD-blue with an input file, lj_liquid_bmark.hoomd, that is part of the HOOMD-blue distribution.  Note that the ppn value must match the number of GPUs requested per node (3 in this example).

#!/bin/bash
#PBS -S /bin/bash

#PBS -q gpu
#PBS -l nodes=2:ppn=3:gpus=3
#PBS -l mem=46gb
#PBS -l walltime=00:30:00

# Sample script for running HOOMD-blue on Parallel
# 2014-07-28 DSP.

# Test with benchmark file from /global/software/hoomd/hoomd100/share/hoomd/benchmarks
INPUT=lj_liquid_bmark.hoomd
OUTPUT=lj_liquid_bmark_${PBS_JOBID}.out

# Set up the environment to run HOOMD-blue:

module load hoomd

cd $PBS_O_WORKDIR
echo "Current working directory is `pwd`"

echo "Node file: $PBS_NODEFILE :"
echo "---------------------"
cat $PBS_NODEFILE
echo "---------------------"

CORES=`/bin/awk 'END {print NR}' $PBS_NODEFILE`
echo "Running on $CORES cores."

echo "GPU file: $PBS_GPUFILE :"
echo "------------------"
cat $PBS_GPUFILE
echo "------------------"
NUM_GPUS=`/bin/awk 'END {print NR}' $PBS_GPUFILE`
echo "$NUM_GPUS GPUs assigned."

echo "Starting run at: `date`"

# Start one MPI process for each GPU assigned.

mpiexec -n ${NUM_GPUS} --mca mpi_warn_on_fork 0 hoomd $INPUT > $OUTPUT

echo "Program finished with exit code $? at: `date`"

Compared to a run on a single GPU (nodes=1:ppn=1:gpus=1), a speedup of about a factor of 2 was seen when running on 3 GPUs (nodes=1:ppn=3:gpus=3) and about a factor of 3 when running on 6 GPUs (nodes=2:ppn=3:gpus=3).  So, depending on the case, it may not be worthwhile to use more than a single GPU-enabled node.
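
To measure the scaling of your own simulation, the resource request can be varied on the qsub command line, which in TORQUE generally takes precedence over the corresponding #PBS directives in the script.  The lines below are a sketch using the hypothetical script name hoomd_job.pbs from above; keep ppn equal to the number of GPUs per node in each case.

qsub -l nodes=1:ppn=1:gpus=1 hoomd_job.pbs   # baseline: 1 GPU on one node
qsub -l nodes=1:ppn=3:gpus=3 hoomd_job.pbs   # 3 GPUs on one node
qsub -l nodes=2:ppn=3:gpus=3 hoomd_job.pbs   # 6 GPUs across two nodes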

Updated 2014-07-28 - Page created.

System: Parallel
Version: 1.0.0