Elk

Introduction

According to the Elk web page, Elk is an electronic structure code for determining the properties of crystalline solids, based on Density Functional Theory and implemented through a full-potential linearized augmented-plane-wave (FP-LAPW) approach.

Elk is a mixed OpenMP-MPI code, so care should be taken to choose an appropriate combination of MPI processes and OpenMP threads; see the Elk user manual for advice.  The key point there is to use the -pernode flag on the mpiexec command line when running Elk, so that only one MPI process is started on each node:

mpiexec -pernode -n $NODES elk

where NODES is the number of complete nodes requested.  However, in testing Elk on Lattice and Parallel, it was found that the -pernode argument caused the software to crash; a workaround using an explicit host file is described below.

The number of OpenMP threads started on each node is controlled by the OMP_NUM_THREADS environment variable, which would typically be set to the number of cores on a compute node of the system being used.  However, it is easier to set this variable to 1 and run the code in pure MPI mode.
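For example, on a hypothetical allocation of two 8-core nodes (adjust the numbers to your own request), a pure MPI run would look like:

export OMP_NUM_THREADS=1
mpiexec -n 16 elk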

Running Elk on Jasper

To set up the environment (for a bash shell user) to use Elk on Jasper, use

module load application/elk/2.1.22
export OMP_NUM_THREADS=1

If you use the -l procs=n resource request format that is normally used on Jasper, then you cannot use OpenMP parallelism and must set OMP_NUM_THREADS=1 to ensure that you don't use more cores than requested.
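As an illustration, a complete Jasper batch script along these lines might look like the sketch below.  The processor count and walltime are placeholders to adapt to your own job, and mpiexec is assumed to pick up the allocated processors from the batch system, as in the other examples on this page.

#!/bin/bash
#PBS -S /bin/bash
#PBS -l procs=16
#PBS -l walltime=24:00:00

# Set up the Elk environment and disable OpenMP threading.
module load application/elk/2.1.22
export OMP_NUM_THREADS=1

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
mpiexec elk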

Running Elk on Lattice and Parallel

To set up the environment (for a bash shell user) and run Elk on Lattice and Parallel, use

module unload intel
module load intel/12
export OMP_NUM_THREADS=1
export PATH=/global/software/elk/elk-2.1.25/bin:$PATH
mpiexec elk

On Lattice and Parallel you should always request complete nodes (-l nodes=n:ppn=8 for Lattice, -l nodes=n:ppn=12 for Parallel).

Elk was built on Lattice and Parallel using the Intel Math Kernel Library (MKL) as the implementation of the BLAS and LAPACK linear algebra libraries.  MKL can be run in sequential or threaded form: the binary named elk uses the sequential form, while the threaded version is called elk_mpi_threaded_mkl.  We recommend using the sequential version.
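Putting these pieces together, a complete pure-MPI Lattice job script using the sequential binary might look like the following sketch (the node count and walltime are placeholders; for Parallel, change ppn=8 to ppn=12):

#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=2:ppn=8
#PBS -l walltime=24:00:00

module unload intel
module load intel/12
export OMP_NUM_THREADS=1
export PATH=/global/software/elk/elk-2.1.25/bin:$PATH

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
mpiexec elk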

If you really need to run the code in mixed OpenMP/MPI mode, you can do so by creating a custom host file to use with the -hostfile argument to mpiexec:

module unload intel
module load intel/12
# For Lattice use PPN=8. For Parallel use PPN=12.
PPN=8
export OMP_NUM_THREADS=$PPN
# List each node once so that mpiexec starts one process per node.
HOSTFILE="nodefile_${PBS_JOBID}"
uniq $PBS_NODEFILE > $HOSTFILE
export PATH=/global/software/elk/elk-2.1.25/bin:$PATH
mpiexec -hostfile $HOSTFILE elk
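
For example, a Lattice job submitted with -l nodes=2:ppn=8 produces a $PBS_NODEFILE containing sixteen entries, eight per node.  The uniq command collapses these to two lines, so mpiexec starts two MPI processes, one per node, and each process runs eight OpenMP threads.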


Updated 2013-09-05.

For More Information