Software Available on Parallel

Each entry below gives the software title and version, followed by usage notes.
Julia 0.3.7

Julia can be run in a batch job (bash script example) with

export PATH=/global/software/julia/julia037/julia-cb9bcae93a/bin:$PATH
julia --machinefile $PBS_NODEFILE <other arguments>

Note, the default versions of git and Python are too old for some Julia packages (including the package manager) to work. Set up your environment to use newer versions with

module unload intel
module load intel/12
module load python
export PATH=/global/software/git/bin:$PATH
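
Putting these pieces together, a minimal batch script might look like the following sketch (the resource requests, including ppn=12, and the script name myscript.jl are placeholders to adapt to your own job):

#!/bin/bash
#PBS -l nodes=2:ppn=12
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR

# Newer git and Python needed by the Julia package manager
module unload intel
module load intel/12
module load python
export PATH=/global/software/git/bin:$PATH

# Julia 0.3.7
export PATH=/global/software/julia/julia037/julia-cb9bcae93a/bin:$PATH

# Run the Julia script across the assigned nodes
julia --machinefile $PBS_NODEFILE myscript.jl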

Gephi - interactive network graph visualization 0.9.1

See the main WestGrid Gephi page.

HOOMD-blue particle simulation toolbox 1.0.0

See the main WestGrid HOOMD-blue page.

BEAGLE (beagle-lib) 1.0.1

/global/software/beaglelib/beaglelib_1086

Ezys - GPU-accelerated non-rigid 3D medical image registration 1.0.32

Run on GPU-enabled nodes on Parallel.  See main Ezys page for details.  Start the graphical version with:

/global/software/ezys/ezys1032/bin/ezys

In batch job scripts use the console version:

/global/software/ezys/ezys1032/bin/ezysc

GSL 1.16

Both GCC and Intel-compiled versions of the GSL 1.16 library are available, installed in /global/software/gsl/gsl116_gcc and /global/software/gsl/gsl116_intel respectively. Performance testing has not been done, but the Intel compiler usually produces faster code. See the main WestGrid GSL page for more instructions.
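
As an illustration, a C program using GSL could be compiled and linked against the Intel-compiled installation along the following lines (myprog.c is a placeholder; substitute the gsl116_gcc directory if you prefer the GCC build):

icc -I/global/software/gsl/gsl116_intel/include myprog.c \
    -L/global/software/gsl/gsl116_intel/lib -lgsl -lgslcblas -lm -o myprog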

Subversion 1.4.2
Boost 1.55.0

See the lib and include directories under /global/software/boost/boost_1_55_0_gcc.

Use

module load python/3.4.1

if using the Python-related Boost libraries.
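
As a sketch, a C++ program using one of the compiled Boost libraries (Boost.Filesystem chosen here only as an example) might be built with:

g++ -I/global/software/boost/boost_1_55_0_gcc/include myprog.cpp \
    -L/global/software/boost/boost_1_55_0_gcc/lib \
    -lboost_filesystem -lboost_system -o myprog

At run time you may also need to add the lib directory to LD_LIBRARY_PATH so the shared libraries can be found.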

 

Qbox DFT 1.56.2
Subversion 1.6.15

/opt/CollabNet_Subversion/bin/svn

PyMOL - Molecular visualization system 1.7.0.0

See the WestGrid PyMOL page.

Git 1.7.8.3

module load git/

HDF 5 1.8.16

HDF5 1.8.16, built with Intel 12.1.4 compilers, has been installed to /global/software/hdf/hdf5-1.8.16.
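
A C program using HDF5 could be built against this installation roughly as follows (myprog.c is a placeholder; the default Intel 12.1.4 compilers are assumed):

icc -I/global/software/hdf/hdf5-1.8.16/include myprog.c \
    -L/global/software/hdf/hdf5-1.8.16/lib -lhdf5 -lhdf5_hl -lz -o myprog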

HDF 5 1.8.5-patch1
VMD 1.8.7

/global/software/vmd/vmd-1.8.7/bin

VMD 1.9.1

Run VMD with

/global/software/vmd/vmd-1.9.1/bin/vmd

or, for best performance, run on GPU-enabled nodes through VNC as described on the WestGrid Remote Visualization page.

VMD 1.9.2

Run VMD with

/global/software/vmd/vmd-1.9.2/bin/vmd

or, for best performance, run on GPU-enabled nodes through VNC as described on the WestGrid Remote Visualization page.

Ruby 1.9.3p0

/global/software/ruby/ruby193p0

MKL 10.2.6
MKL 10.3.x

module load intel/12

Netpbm 10.35.80

/global/software/netpbm/netpbm-10.35.80

Amber 11

See detailed Parallel instructions.

idb 11.1
Amber 12

See detailed Parallel instructions.

Amber 14

This is version 14 of Amber, with patches to level 8, and AmberTools 14, with patches to level 21, compiled with Intel 12.1 compilers and using the MKL libraries. It has been installed in /global/software/amber/amber14p8_at14p21. See the more detailed Parallel instructions.

nano 2.0.9
Szip 2.1

/global/software/lib

Elk 2.1.25

If NODES is the number of complete Parallel nodes requested, then use:

module unload intel
module load intel/12
export OMP_NUM_THREADS=12
mpiexec -pernode $NODES /global/software/elk/elk-2.1.25/bin/elk

See the Elk user manual for additional advice on using a combination of OpenMP and MPI parallelism.
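
For example, a job requesting two complete 12-core nodes (NODES=2) might look like the following sketch (adjust the walltime and node count to your calculation):

#!/bin/bash
#PBS -l nodes=2:ppn=12
#PBS -l walltime=24:00:00

cd $PBS_O_WORKDIR

module unload intel
module load intel/12

# One MPI process per node, 12 OpenMP threads per process
NODES=2
export OMP_NUM_THREADS=12
mpiexec -pernode $NODES /global/software/elk/elk-2.1.25/bin/elk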

FFTW 2.1.5

Look in /global/software/fftw.

NAMD 2.10

There are two variants of NAMD installed based on the binary distributions provided by the developers:

For the multinode MPI parallel version:

export PATH=/global/software/namd/NAMD_2.10_Linux-x86_64-ibverbs:$PATH

For the multinode MPI and shared memory (SMP) version:

export PATH=/global/software/namd/NAMD_2.10_Linux-x86_64-ibverbs-smp:$PATH

 Note: these are not GPU-enabled versions.
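
These ibverbs binary distributions are normally launched with charmrun rather than mpiexec. A rough sketch of a job fragment, assuming a 2-node, 24-core allocation and a NAMD configuration file named apoa1.namd (both placeholders), might be:

export PATH=/global/software/namd/NAMD_2.10_Linux-x86_64-ibverbs:$PATH

# Build a charmrun nodelist from the PBS node file
echo "group main" > nodelist.$PBS_JOBID
sort -u $PBS_NODEFILE | sed 's/^/host /' >> nodelist.$PBS_JOBID

# 24 worker processes spread across the allocated nodes
charmrun ++p 24 ++nodelist nodelist.$PBS_JOBID namd2 apoa1.namd > apoa1.log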

NAMD 2.11

Note: These are beta versions, b2, not the final release.

There are two variants of NAMD installed based on the binary distributions provided by the developers:

For the multinode MPI parallel version:

export PATH=/global/software/namd/NAMD_2.11b2_Linux-x86_64-ibverbs:$PATH

For the multinode MPI and shared memory (SMP) version:

export PATH=/global/software/namd/NAMD_2.11b2_Linux-x86_64-ibverbs-smp:$PATH

 Note: these are not GPU-enabled versions.

OpenFOAM, OpenFOAM+ and OpenFOAM Extend 2.2.1

See the main WestGrid OpenFOAM page.

Ray 2.3.0

The Ray binary is /global/software/ray/ray230/Ray.

OpenFOAM, OpenFOAM+ and OpenFOAM Extend 2.3.0

See the main WestGrid OpenFOAM page for a sample batch job script.

Ray 2.3.1

The Ray binary compiled for the default maximum k-mer length of 32 is /global/software/ray/ray231/Ray. A second version with a maximum k-mer length of 201 is /global/software/ray/ray231_maxkmer201/Ray. Use the latter only if necessary, as it will use more memory than the version with the more restricted maximum k-mer length.

Python 2.4.3
SUNDIALS (Suite of Nonlinear and Differential/Algebraic Equation Solvers) 2.5.0

The SUNDIALS software has been installed in /global/software/sundials/sundials250.  There is a Makefile for each program in the examples subdirectory that illustrates how programs using the SUNDIALS libraries can be built with Intel compilers and the Intel Math Kernel Library for optimized linear algebra support.
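
As a rough illustration of the pattern used in those Makefiles, a program calling the CVODE solver might be linked with the Intel compiler and MKL along these lines (myprog.c and the exact library list are placeholders; check the example Makefiles for the flags appropriate to the solver you use):

icc -I/global/software/sundials/sundials250/include myprog.c \
    -L/global/software/sundials/sundials250/lib \
    -lsundials_cvode -lsundials_nvecserial -mkl -o myprog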

SCons 2.5.1

SCons 2.5.1 has been installed on Parallel as /global/software/scons/scons-2.5.1/bin/scons.

NAMD 2.7 (ibverbs)
VisIT - general purpose visualization tool 2.7.1

See the main WestGrid VisIT page for a link to instructions for running VisIT remotely through a VNC server for best performance.

Python 2.7.2

Initialize your environment with

module unload intel
module load intel/12
module load python

This Python installation includes NumPy 1.6.1 and Matplotlib 1.1.0.

Python 2.7.9

Version 2.7.9 of Python has been installed for a project that required the bz2 support that was missing in the previous versions of Python installed on Parallel.  However, very few additional modules have been installed for use with this version, so, unless you specifically need bz2 support or can make do with a bare-bones version of Python, please consider using version 2.7.2.  To use version 2.7.9, initialize your environment with:

module load python/2.7.9

NAMD 2.7b3 (OpenMPI)
NAMD 2.7b3 (with PLUMED 1.3, legacy Open MPI)

The namd2 and associated binaries have been installed to /global/software/namd/namd27b3_plumed.

Set up the environment to use a legacy version of Open MPI:

module load openmpi/old

ORCA 2.8.0.2 is also available
CMake 2.8.11

See /global/software/cmake/cmake2811/bin (or links from /global/software/bin).

CMake 2.8.4

/global/software/bin/cmake

ORCA 2.9.0

/global/software/orca/orca_2_9_0_linux_x86-64

NAMD 2.9.1

For information about NAMD installations on Parallel see the main WestGrid NAMD page.

Gerris Flow Solver 2013-12-06

See the main WestGrid Gerris software page.

FFmpeg (video encoding) 20150429

See the main WestGrid FFmpeg page.

Emacs 23.1.1
ORCA 3.0.3

Binaries have been installed in /global/software/orca/orca_3_0_3_linux_x86-64/. See the main WestGrid ORCA page for a sample job script.

LIKWID - Performance analysis tools 3.1.1

The software has been installed under /global/software/likwid/likwid311 but not extensively tested. A typical build of software using the likwid library would use an include directive -I/global/software/likwid/likwid311/include and specify the library with -L/global/software/likwid/likwid311/lib -llikwid.
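
For example, a C program using the LIKWID library might be compiled as follows (myprog.c is a placeholder):

gcc -I/global/software/likwid/likwid311/include myprog.c \
    -L/global/software/likwid/likwid311/lib -llikwid -o myprog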

Xerces-C++ 3.1.1

/global/software/xercesc/xercesc311

Mono (C# environment) 3.10.0

See the instructions for running on Parallel on the main WestGrid mono page.

CPMD 3.13.2

MKL and GotoBLAS versions are available under /global/software/cpmd.

ParaView 3.14.1

/global/software/ParaView/ParaView-3.14.1

CPMD 3.15.3p1

Located in /global/software/cpmd/CPMD-3.15.3.p1_ifort_mpi_mkl_defaultfft

CPMD 3.17.1

The CPMD executable, version 3.17.1 with 7 patches applied, was built with the Intel 14 compiler, Math Kernel Libraries (MKL) and default FFT implementation.  It is /global/software/cpmd/CPMD-3.17.1.p7_ifort14_mpi_mkl_defaultfft/bin/cpmd.x.

FFTW 3.2.2

Look in /global/software/fftw.

bash (default) 3.2.25 or later
FFTW 3.3.4

/global/software/fftw/fftw-3.3.4-gcc447
/global/software/fftw/fftw-3.3.4-intel12
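
A typical compile and link against the Intel-compiled build might look like the following sketch (myprog.c is a placeholder; use the fftw-3.3.4-gcc447 directory with gcc instead if preferred):

icc -I/global/software/fftw/fftw-3.3.4-intel12/include myprog.c \
    -L/global/software/fftw/fftw-3.3.4-intel12/lib -lfftw3 -lm -o myprog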

MuMax3 - Micromagnetic Simulation 3.3b3

Initialize the GPU environment and run MuMax3 with

module load cuda/5.5
/global/software/mumax3/mumax33b3_cuda55/mumax3 input_file

MuMax3 - Micromagnetic Simulation 3.3b4

Initialize the GPU environment and run MuMax3 with

module load cuda/5.5
/global/software/mumax3/mumax33b4_cuda55/mumax3 input_file

MuMax3 - Micromagnetic Simulation 3.4

Initialize the GPU environment and run MuMax3 with

module load cuda/5.5
/global/software/mumax3/mumax34/mumax3 input_file

CMake 3.4.1

Pre-compiled binaries (cmake, ccmake, cmake-gui, cpack and ctest) for CMake 3.4.1 have been installed in /global/software/cmake/cmake341/bin.

Python 3.4.1

Initialize your environment with

module load python/3.4.1

Migrate (Migrate-n) - Population genetics program to estimate migration rates and population sizes 3.6

Migrate has been installed on Parallel as /global/software/migrate/migrate36/bin/migrate-n-mpi.

See the generic Running Instructions section of the WestGrid Migrate page for more information.

MuMax3 - Micromagnetic Simulation 3.8

Initialize the GPU environment and run MuMax3 with

module load mumax3/3.8
mumax3 input_file

MuMax3 - Micromagnetic Simulation 3.9.1

Initialize the GPU environment and run MuMax3 with

module load mumax3/3.9.1
mumax3 input_file

LAMMPS 31Mar17

The 31 March 2017 version of LAMMPS (built with the Intel 12.1.4 compiler with FFTW3 and MKL) has been installed on Parallel as /global/software/lammps/bin/31Mar17/lmp_parallel.

ParaView 4.0.1

The main executable is /global/software/ParaView/ParaView-4.0.1-Linux-64bit/bin/paraview.

METIS 4.0.1

/global/software (bin, lib, include)

GROMACS 4.0.7

See /global/software/gromacs/gromacs407_intel111.

DL_POLY 4.02

See /global/software/dl_poly .

OpenFOAM, OpenFOAM+ and OpenFOAM Extend 4.1

See the main WestGrid OpenFOAM page for a sample batch job script.

NetCDF 4.1.1

/global/software/netcdf-4.1.1

Mono (C# environment) 4.2.1

See the instructions for running Mono on Parallel on the main WestGrid mono page.

HDF 4 4.2.5
ParaView 4.3.1

With an X Window connection, ParaView can be started on Parallel with

/global/software/paraview/ParaView-4.3.1-Linux-64bit/bin/paraview

but is best run on the Parallel GPU nodes with VNC, as described on the WestGrid remote visualization page.

GROMACS 4.5.4

See /global/software/gromacs .

GMT - Generic Mapping Tools 4.5.6

/global/software/gmt/gmt456

GROMACS 4.6

See /global/software/gromacs .

GROMACS 4.6.1

Initialize environment with:

module load cuda/4.1
module unload intel
module load intel/12
module load openmpi/old
source /global/software/gromacs/gromacs461_openmp_mkl_gpu/bin/GMXRC
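
After sourcing GMXRC, a batch job fragment might run mdrun through MPI roughly as follows (the run name mysim and the process count are placeholders, and the mdrun binary name may differ, e.g. mdrun_mpi, depending on the build):

mpiexec -n 24 mdrun -ntomp 1 -deffnm mysim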

GROMACS 4.6.2

Initialize the environment with:

module load cuda/4.1
module unload intel
module load intel/12
module load openmpi/old
source /global/software/gromacs/gromacs462_openmp_mkl_gpu/bin/GMXRC

GROMACS 4.6.3

Version 4.6.3 was installed incidentally as part of a PLUMED request; you may want to skip it and use a newer version. See /global/software/gromacs/gromacs_463_intel1214_mkl_openmp_nogpu_sse41/bin or, for a GPU-enabled version, /global/software/gromacs/gromacs463_intel1214_openmp_mkl_gpu/bin.

GROMACS 4.6.3 with PLUMED 2.0.1

GROMACS 4.6.3, patched for use with PLUMED 2.0.1, has been installed in /global/software/gromacs/gromacs_463_intel1214_mkl_openmp_nogpu_sse41_plumed201/bin. Initialize your environment with

source /global/software/gromacs/gromacs_463_intel1214_mkl_openmp_nogpu_sse41_plumed201/bin/GMXRC

GROMACS 4.6.5

GROMACS 4.6.5 has been installed in /global/software/gromacs/gromacs_465_intel1214_mkl_openmp_nogpu_sse41/bin, with a GPU-enabled version in /global/software/gromacs/gromacs465_intel1214_openmp_mkl_gpu/bin. Initialize your environment with:

source /global/software/gromacs/gromacs_465_intel1214_mkl_openmp_nogpu_sse41/bin/GMXRC

or

module load cuda
source /global/software/gromacs/gromacs465_intel1214_openmp_mkl_gpu/bin/GMXRC

GROMACS 4.6.7

There are two versions of GROMACS 4.6.7 available on Parallel, one compiled with GNU compilers for use with g_mmpbsa and one with Intel compilers for use with PLUMED. Note that these installations do not include support for GPU-based acceleration.

To set up your environment to use the GNU-compiled version use:

module load gromacs/4.6.7-gnu

This version of GROMACS also includes g_mmpbsa (http://rashmikumari.github.io/g_mmpbsa), a tool for estimating binding free energy. After loading the module, two g_mmpbsa-related commands, g_mmpbsa and energy2bfac, are available.

To set up your environment to use the Intel-compiled version use (bash shell example):

module load plumed/2.2.0
BINDIR=/global/software/gromacs/gromacs_467_intel1214_mkl_openmp_nogpu_sse41_plumed220/bin
. ${BINDIR}/GMXRC

 

Gnuplot 4.6p0
Qt 4.8.2

/global/software/qt4/qt4.8.2

LAMMPS 4Dec12

A relatively old version of the software is available as /global/software/lammps/bin/4Dec12/lmp.

Quantum ESPRESSO 5.0.3

Quantum ESPRESSO was installed using Intel 12.1.4 compilers, MKL and FFTW 3.2.2 libraries. Before running ESPRESSO binaries, initialize your environment with:

module load espresso/5.0.3

GROMACS 5.0.4

Single and double precision versions of GROMACS 5.0.4 have been installed on Parallel, including GPU-enabled variants. See the main WestGrid GROMACS page for an example script for Lattice, which can be adapted for Parallel.

Grace/Xmgrace 5.1.22

/global/software/bin/xmgrace

Perl 5.10.1

Perl 5.10.1 is the default on Parallel, but does not have many add-on modules installed.

VASP 5.2.12

Several variants are installed in /global/software/vasp/vasp52.

VASP 5.3.3

This version was built before a system upgrade in November 2013 and requires an old version of Open MPI.  Also, it was built with a multi-threaded Intel Math Kernel Library.  To avoid problems with over-subscribed cores, limit the number of threads to one per MPI process by setting the OMP_NUM_THREADS variable.  Initialize your environment with (bash shell example shown):

export OMP_NUM_THREADS=1
module unload intel
module load intel/12
module load openmpi/old
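
A job could then launch VASP through mpiexec; the exact executable path under /global/software/vasp is not listed here, so the path below is only a placeholder:

# Placeholder path -- check /global/software/vasp for the actual VASP 5.3.3 executable
mpiexec -n 24 /global/software/vasp/vasp53/vasp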

ROOT (data analysis framework) 5.34.32

See the main WestGrid ROOT page.

VASP 5.4.1

VASP 5.4.1 with patches has been installed on Parallel. Binaries with support for VTST Tools and VASPsol are also available. See the main WestGrid VASP page for details.

NEdit 5.5
Allinea DDT 6.0

Add /global/software/ddt/6.0/bin to your PATH environment variable.

NCAR Graphics and NCL 6.0.0

Set NCARG_ROOT to /global/software/ncl/ncl_ncarg_600.

ABAQUS 6.10-1

(U of C only) See /global/software/abaqus

ABAQUS 6.11-1

(U of C only) See /global/software/abaqus

ABAQUS 6.12-2

(U of A only) See /global/software/abaqusua .

(U of C only) See /global/software/abaqus .

NCAR Graphics and NCL 6.3.0

Initialize your environment (NCARG_ROOT and PATH) to use NCAR Command Language 6.3.0 on Lattice with

module load ncl/6.3.0

Please note that this installation provides NCL only. NCAR Graphics does not work due to the Cairo library on Lattice being too old.

ImageMagick 6.7.0.0

/global/software/imagemagick/ImageMagick-6.7.0-0

ImageMagick 6.9.1-10

Usage:

module load imagemagick/6.9.1

This makes ImageMagick available on the compute nodes and replaces an older version on the login node.

pdflib 7.05

/global/software/pdf/pdflib_lite_705

NEURON 7.1

See /global/software/neuron/nrn-7.1. This is the non-graphical version only.

MATLAB Compiler Runtime (MCR) 7.13

For use with standalone applications compiled with the version of the MATLAB compiler distributed with R2010a of MATLAB.

Installed to /global/software/matlab/mcr/v713.

See full instructions for creating standalone applications.
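
A deployed standalone application is typically started through the run_<application>.sh wrapper produced by the MATLAB compiler, with the MCR location as its first argument; for example (the application name and input file are placeholders):

./run_myapp.sh /global/software/matlab/mcr/v713 input.dat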

MATLAB Compiler Runtime (MCR) 7.14

For use with standalone applications compiled with the version of the MATLAB compiler distributed with R2010b of MATLAB.

Installed to /usr/local/matlab-mcr/v714.

See full instructions for creating standalone applications.

MATLAB Compiler Runtime (MCR) 7.15

For use with standalone applications compiled with the version of the MATLAB compiler distributed with R2011a of MATLAB.

Installed to /global/software/matlab/mcr/v715.

See full instructions for creating standalone applications.

MATLAB Compiler Runtime (MCR) 7.16

For use with standalone applications compiled with the version of the MATLAB compiler distributed with R2011b of MATLAB.

Installed to /global/software/matlab/mcr/v716.

See full instructions for creating standalone applications.

MATLAB Compiler Runtime (MCR) 7.17

For use with standalone applications compiled with the version of the MATLAB compiler distributed with R2012a of MATLAB.

Installed to /global/software/matlab/mcr/v717.

See full instructions for creating standalone applications.

 

Vim 7.2.411
NEURON 7.3

See /global/software/neuron/nrn-7.3.

NEURON 7.4

Initialize your environment with

module load python/2.7.9

before running NEURON, which has been installed under /global/software/neuron/nrn-7.4.

Note that this is not a release version of NEURON, but a development snapshot. It was built with InterViews, MPI and Python support.

LAMMPS 7Dec15

The 7 Dec. 2015 version of LAMMPS (Intel 12.1.4 compiler with FFTW3 and MKL, no GPU support) has been installed on Parallel as /global/software/lammps/bin/7Dec15/lmp.

MATLAB Compiler Runtime (MCR) 8.0

For use with standalone applications compiled with the version of the MATLAB compiler distributed with R2012b of MATLAB.

Installed to /global/software/matlab/mcr/v80.

See full instructions for creating standalone applications.

MATLAB Compiler Runtime (MCR) 8.4

Version 8.4 of the MATLAB Compiler Runtime files for use with standalone applications compiled with version 5.2 of the MATLAB compiler (distributed with MATLAB R2014b) has been installed to /global/software/matlab/mcr/v84.  This is intended for use with the GPU-enabled nodes on Parallel.

MATLAB Compiler Runtime (MCR) 9.0

The MATLAB Compiler Runtime files were installed to /global/software/matlab/mcr/v90 for use with standalone applications compiled with the version of the MATLAB compiler distributed with R2015b of MATLAB. See the main MATLAB page for details. Please note that MATLAB is not normally run on Parallel, but can be used for GPU-specific code or for minor data processing tasks associated with large multi-node parallel MPI codes.

64-bit Linux CentOS release 6.4 (Final) - 2.6.32-358.el6.x86_64 #1 SMP
DL_POLY CLASSIC 1.7
OpenFOAM, OpenFOAM+ and OpenFOAM Extend foam-extend-3.1

Initialize your environment with

export FOAM_INST_DIR=/global/software/openfoam
source $FOAM_INST_DIR/foam-extend-3.1_20140710/etc/bashrc

See the main WestGrid OpenFOAM page for a sample batch job script.

Compilers - C, C++ and Fortran GCC 4.4.7

The GCC 4.4.7 compilers are on the default command PATH.

Compilers - C, C++ and Fortran GCC 4.8.4

Initialize your environment to use the GCC 4.8.4 compilers with

module load gcc/4.8.4

Compilers - C, C++ and Fortran GCC 4.9.0 20140406 (experimental)

Initialize your environment to use the GCC 4.9.0 20140406 (experimental) compilers with

module load gcc/gcc-4.9-20140406-beta

Compilers - C, C++ and Fortran GCC 5.1.0

Initialize your environment to use the GCC 5.1.0 compilers with

module load gcc/5.1.0

C++ - GNU GCC, Intel, Portland Group (PGI) compilers GNU 4.4.4
C++ - GNU GCC, Intel, Portland Group (PGI) compilers GNU 4.9 (20140406-beta)

To set up your environment to use the 4.9 (beta) version of the GCC compilers (gcc, g++, gfortran) use

module load gcc/gcc-4.9-20140406-beta

C - GNU GCC, Intel, Portland Group (PGI) compilers GNU 4.9 (20140406-beta)

To set up your environment to use the 4.9 (beta) version of the GCC compilers (gcc, g++, gfortran) use

module load gcc/gcc-4.9-20140406-beta

Fortran - GNU GCC, Intel, Portland Group (PGI) and Absoft compilers GNU 4.9 (20140406-beta)

To set up your environment to use the 4.9 (beta) version of the GCC compilers (gcc, g++, gfortran) use

module load gcc/gcc-4.9-20140406-beta

BLAS In MKL
LAPACK In MKL
Compilers - C, C++ and Fortran Intel 11.1

Initialize your environment to use the Intel 11.1 compilers with

module load intel/11.1

Compilers - C, C++ and Fortran Intel 12.0.2

Initialize your environment to use the Intel 12.0.2 compilers with

module load intel/12

C++ - GNU GCC, Intel, Portland Group (PGI) compilers Intel 12.1-10.319

(icpc - module load intel/2011-sp1.10.319)

Fortran - GNU GCC, Intel, Portland Group (PGI) and Absoft compilers Intel 12.1.2

(ifort - module load intel-2011)

Compilers - C, C++ and Fortran Intel 12.1.4

The Intel 12.1.4 compilers are on the default command PATH, but you can also explicitly initialize your environment to use them with

module load intel

Compilers - C, C++ and Fortran Intel 14.0.3

Initialize your environment to use the Intel 14.0.3 compilers with

module load intel/14.0.3

Compilers - C, C++ and Fortran Intel 16.0.0

Initialize your environment to use the Intel 16.0.0 compilers with

module load intel/16.0.0

sh Link to bash
csh Link to tcsh
ScaLAPACK MKL

ScaLAPACK is available through the Math Kernel Library (MKL).

tcsh NA
Compilers - C, C++ and Fortran PGI 12.1-0

Initialize your environment to use the Portland Group 12.1-0 compilers with

module load pgi/12.1

Compilers - C, C++ and Fortran PGI 14.6-0

Initialize your environment to use the Portland Group 14.6-0 compilers with

module load pgi/14.6

Fortran - GNU GCC, Intel, Portland Group (PGI) and Absoft compilers Portland Group 14.6

Initialize your environment to use the Portland Group 14.6 compilers with

module load pgi/14.6

C++ - GNU GCC, Intel, Portland Group (PGI) compilers Portland Group 12.10

(pgCC - module load pgi)

C++ - GNU GCC, Intel, Portland Group (PGI) compilers Portland Group 14.6

Initialize your environment to use the Portland Group version 14.6 compilers with

module load pgi/14.6

C - GNU GCC, Intel, Portland Group (PGI) compilers Portland Group 14.6

Initialize your environment to use version 14.6 of the Portland Group compilers with

module load pgi/14.6

MATLAB R2011a

(UofC only)

ANSYS simulation software, including Fluent and CFX Release 16.1

A suite of ANSYS Release 16.1 products, including Structural Mechanics, Fluid Dynamics (CFX, Fluent, TurboGrid and CFD-Post) and Composite PrepPost, has been installed on Parallel in /global/software/ansys/ansys16/v161. Look in subdirectories for the specific products. See the main WestGrid ANSYS page for details.

ANSYS simulation software, including Fluent and CFX Release 16.2

A suite of ANSYS Release 16.2 products, including Structural Mechanics, Fluid Dynamics (CFX, Fluent, TurboGrid and CFD-Post) and Composite PrepPost, has been installed on Parallel in /global/software/ansys/ansys16/v162. Look in subdirectories for the specific products. See the main WestGrid ANSYS page for details. This installation is for approved University of Calgary researchers only.

MATLAB Release 2011b (7.13.0)

(U of C only) /global/software/bin/matlab

Java Sun Java SE JRE Oracle HotSpot 1.6.0_23

/global/software/java/jre1.6.0_23

Note: module unload intel will revert the default to GCC gij 4.1.2.

ksh Y (ksh93)