Jasper

The Jasper cluster is intended for general-purpose serial and MPI-based parallel computing.

Log in to Jasper by connecting to the host name jasper.westgrid.ca using an ssh (secure shell) client.

As on other WestGrid systems, batch jobs are handled by a combination of TORQUE and Moab software. For more information about submitting jobs, see Running Jobs.

Resource                                          Policy or limit
Maximum walltime                                  72 hours
Maximum number of running jobs for a single user  2880
Maximum number of jobs submitted                  2880
Maximum jobs in Idle queue                        5

 

Some of the former Checkers compute nodes have been added to the Jasper cluster. The former Checkers nodes have two 4-core Xeon L5420 processors, while the original Jasper nodes have two 6-core Xeon X5675 processors. If your job needs to run on a specific processor type, please specify the corresponding feature, e.g.:

    #PBS -l feature=X5675

or                                                                         

    #PBS -l feature=L5420

To minimize disruption from this change, jobs that do not specify a feature are temporarily given the X5675 feature by default: until July 2, 2013, such jobs will be scheduled on the original Jasper (X5675) nodes. Starting July 2, 2013, jobs that do not specify a feature may run on either type of node. To make sure your jobs can run on the former Checkers nodes, please submit test jobs with the L5420 feature; because the former Checkers nodes may be less busy, jobs submitted with this feature may start sooner. Please note that after July 2, 2013, serial jobs longer than 12 hours will run only on the L5420 (former Checkers) nodes. Shorter serial jobs may also run on X5675 nodes; the shorter the walltime of a serial job, the better its chance of running on X5675 nodes. Parallel jobs may run on either type of node.

Except for compiling programs and running small tests, interactive use of Jasper should be through the '-I' option to qsub.
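As a sketch, a short interactive session can be requested as follows; the resource values shown are illustrative, not required settings:

```shell
# Request an interactive session (-I) on one node with one
# processor and a one-hour walltime; adjust values to your needs.
qsub -I -l nodes=1:ppn=1,walltime=01:00:00
```

When the scheduler starts the job, you are given a shell on a compute node; exiting that shell ends the interactive job.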

If you require more than the default disk space, you should apply for a RAC allocation. If you require more than the default file count, you should contact support@westgrid.ca.

Storage Information on Jasper

Directory path: /home
Size: 356 TB
Quota: 1 TB; Files: 500,000
Command to check quota: lfs quota -u <your username> /lustre
Purpose: Home directory. Also serves as a global scratch directory on Jasper. Note: Jasper and Hungabee share the same /home filesystem.
Backup policy: Not backed up. Users are encouraged to back up their own data; Silo is available for this purpose.
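The quota command from the table can be run directly for the current user, assuming the Lustre client tools are on your PATH:

```shell
# Check the current user's disk and file-count quota on the
# /lustre filesystem that backs /home (per the table above).
lfs quota -u $USER /lustre
```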

Program Information on Jasper

OpenMP programs

The Intel compilers include support for shared-memory parallel programs that include parallel directives from the OpenMP standard. Use the -openmp compiler option to enable this support, for example:

module load compiler/intel/12.1 
icc -o prog -openmp prog.c 
ifort -o prog -openmp prog.f90

Before running an OpenMP program, set the OMP_NUM_THREADS environment variable to the desired number of threads using bash-shell syntax:

export OMP_NUM_THREADS=12

or C-shell (tcsh) syntax:

setenv OMP_NUM_THREADS 12

according to the shell you are using. Then, to test your program interactively, launch it like you would any other:

./prog

Here is a sample TORQUE job script for running an OpenMP-based program.

#!/bin/bash 
#PBS -S /bin/bash 
#PBS -l pmem=2000mb 
#PBS -l nodes=1:ppn=12 
#PBS -l walltime=12:00:00 
#PBS -m bea 
#PBS -M yourEmail@address 
cd $PBS_O_WORKDIR 
export OMP_NUM_THREADS=$PBS_NUM_PPN 
./prog
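Assuming the script above is saved as openmp_job.pbs (a hypothetical file name), it can be submitted and monitored like this:

```shell
# Submit the job script; qsub prints the assigned job ID.
qsub openmp_job.pbs
# List your queued and running jobs.
qstat -u $USER
```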

MPI Programs

MPI programs can be compiled using the compiler wrapper scripts mpicc, mpicxx, and mpif90, which invoke the GNU compilers. For the Intel compilers, use mpiicc, mpiicpc, and mpiifort. Use the mpirun command to launch an MPI program, for example:

module load compiler/intel/12.1 
module load library/intelmpi/4.0.3.008 
mpiicc -o prog prog.c 
mpirun -np 8 ./prog

After your program is compiled and tested, you can submit large-scale production runs to the batch job system. Here is a sample TORQUE batch job script for an MPI-based program.

#!/bin/bash 
#PBS -S /bin/bash 
#PBS -l pmem=2000mb 
#PBS -l procs=30 
#PBS -l walltime=12:00:00 
#PBS -m bea 
#PBS -M yourEmail@address 
cd $PBS_O_WORKDIR 
module load compiler/intel/12.1 
module load library/intelmpi/4.0.3.008 
mpirun ./prog > out
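Assuming the script is saved as mpi_job.pbs (a hypothetical name), submission works the same way as for any other batch script:

```shell
# Submit the MPI job script; once the job runs, program output
# is written to the file "out" in the submission directory.
qsub mpi_job.pbs
```

Note that no -np argument is given to mpirun in the script: under TORQUE, mpirun can take the process count from the job's "procs=30" resource request.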

 

Compiling and running programs

The latest Intel compilers are available on Jasper. The compiler commands are icc (C compiler), icpc (C++ compiler), and ifort (Fortran compiler). Please note that modules need to be loaded to use the compilers and MPI. In the sections above, basic use of the compilers is shown for OpenMP and for MPI-based parallel programs. Additional compiler options for optimization or debugging should often be used.
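As an illustration, some commonly used Intel compiler options for optimization and debugging are shown below; the specific flag choices are general suggestions, not site requirements:

```shell
module load compiler/intel/12.1
# Optimized build: -O2 enables standard optimizations and
# -xHost targets the instruction set of the build host.
icc -o prog -O2 -xHost prog.c
# Debug build: -g adds debugging symbols, -O0 disables
# optimization so the debugger tracks the source accurately.
ifort -o prog -g -O0 prog.f90
```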