
Jasper

The Jasper cluster is intended for general-purpose serial and MPI-based parallel computing.

Log in to Jasper by connecting to the host name jasper.westgrid.ca using an ssh (secure shell) client.
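
For example, from a terminal on your own machine ('username' here is only a placeholder for your WestGrid username):

ssh username@jasper.westgrid.ca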

As on other WestGrid systems, batch jobs are handled by a combination of TORQUE and Moab software. For more information about submitting jobs, see Running Jobs.
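
For example, a typical workflow is to submit a job script with qsub and then check on it with the TORQUE and Moab status commands (the script name below is only a placeholder):

qsub myjob.pbs     # submit a job script; prints the job ID
qstat -u $USER     # TORQUE listing of your queued and running jobs
showq -u $USER     # Moab view of the queue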

Resource                                           Policy or limit
Maximum walltime                                   72 hours
Maximum number of running jobs for a single user   2880
Maximum number of jobs submitted                    2880
Maximum jobs in Idle queue                          5

Some of the former Checkers compute nodes have been added to the Jasper cluster. The former Checkers nodes have two 4-core Xeon L5420 processors, while the original Jasper nodes have two 6-core Xeon X5675 processors. Please note that serial jobs longer than 12 hours will run on the L5420 (former Checkers) nodes only. Shorter serial jobs may also run on the X5675 nodes; the shorter the walltime of a serial job, the better its chance of running on X5675 nodes. Parallel jobs may run on either type of node. However, jobs with resource requests of the form 'nodes=n:ppn=12' will run exclusively on the X5675 nodes. Alternatively, if you have larger memory requirements that require whole nodes, 'naccesspolicy=singlejob' can be used. For example:

#PBS -l naccesspolicy=singlejob  
#PBS -l nodes=2:ppn=4  
#PBS -l pmem=6000mb
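
For comparison, a request that asks for whole 12-core nodes, and will therefore run only on the X5675 nodes, might look like this (the pmem value is only illustrative):

#PBS -l nodes=2:ppn=12  
#PBS -l pmem=2000mb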

Except for compiling programs and small tests (less than 1 hour, no more than 2 cores), interactive use of Jasper should be through the '-I' option to qsub.
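
For example, an interactive session on a compute node could be requested as follows (the resource values shown are only illustrative):

qsub -I -l nodes=1:ppn=2,pmem=2000mb,walltime=02:00:00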

If you require more than the default disk space, you should apply to the Annual Resource Allocation Call. If you require more than the default file count, you should contact support@westgrid.ca.

Storage Information on Jasper

Directory path: /home
Size: 356 TB
Quota: 1 TB; Files: 500,000
Command to check quota: lfs quota -u <your username> /lustre
Purpose: Home directory. Also serves as a global scratch directory on Jasper. Note: Jasper and Hungabee share the same /home file system.
Backup Policy: Not backed up. Users are encouraged to back up their own data. Silo is available for this purpose.
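
For example, to check your own usage against this quota ($USER expands to your username):

lfs quota -u $USER /lustre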

Program Information on Jasper

Compiling and running programs

The latest Intel compilers are available on Jasper. The compiler commands are icc (C compiler), icpc (C++ compiler), and ifort (Fortran compiler). Please note that modules need to be loaded to use the compilers and MPI. In the sections below, basic use of the compilers is shown for OpenMP and for MPI-based parallel programs. Additional compiler options for optimization or debugging should often be used as well.

OpenMP programs

The Intel compilers support shared-memory parallel programs that use parallel directives from the OpenMP standard. Use the -openmp compiler option to enable this support, for example:

module load compiler/intel/12.1 
icc -o prog -openmp prog.c 
ifort -o prog -openmp prog.f90

Before running an OpenMP program, set the OMP_NUM_THREADS environment variable to the desired number of threads using bash-shell syntax:

export OMP_NUM_THREADS=12

or C-shell (tcsh) syntax:

setenv OMP_NUM_THREADS 12

depending on the shell you are using. Then, to test your program interactively, launch it as you would any other program:

./prog

Here is a sample TORQUE job script for running an OpenMP-based program.

#!/bin/bash 
#PBS -S /bin/bash 
#PBS -l pmem=2000mb 
#PBS -l nodes=1:ppn=12 
#PBS -l walltime=12:00:00 
#PBS -m bea 
#PBS -M yourEmail@address 
cd $PBS_O_WORKDIR 
export OMP_NUM_THREADS=$PBS_NUM_PPN 
./prog

MPI Programs

MPI programs can be compiled using the compiler wrapper scripts mpicc, mpicxx, and mpif90. These scripts invoke the GNU compilers. For the Intel compilers, use mpiicc, mpiicpc, and mpifort. Use the mpirun command to launch an MPI program, for example:

module load library/openmpi/1.6.5-intel 
mpicc -o prog prog.c 
mpirun -np 8 ./prog

After your program is compiled and tested, you can submit large-scale production runs to the batch job system. Here are some sample TORQUE batch job scripts for MPI-based programs. Users are encouraged to use entire nodes for MPI jobs, so requests should be in the form 'nodes=x:ppn=12'. Requests in the form 'procs=n' are still accepted, but these jobs will have lower priority.

#!/bin/bash 
#PBS -S /bin/bash 
#PBS -l pmem=2000mb 
#PBS -l nodes=2:ppn=12 
#PBS -l walltime=12:00:00 
#PBS -m bea 
#PBS -M yourEmail@address 
cd $PBS_O_WORKDIR  
module load library/openmpi/1.6.5-intel 
mpirun ./prog > out

If you need more than pmem=2000mb, you can use 'naccesspolicy=singlejob' to reserve entire nodes as follows:

#!/bin/bash 
#PBS -S /bin/bash 
#PBS -l pmem=4000mb 
#PBS -l nodes=4:ppn=6 
#PBS -l naccesspolicy=singlejob 
#PBS -l walltime=12:00:00 
#PBS -m bea 
#PBS -M yourEmail@address 
cd $PBS_O_WORKDIR  
module load library/openmpi/1.6.5-intel 
mpirun ./prog > out