Lattice (University of Calgary research projects only)
***NOTE: On September 1, 2017, Lattice was withdrawn from general WestGrid service. It is now available only to approved University of Calgary-based projects. This is part of a continuing process for handling the retirement of WestGrid systems. Please visit the Migration Process page for more information. Questions regarding Lattice should now be directed to firstname.lastname@example.org, rather than email@example.com.
Lattice is one of a number of local computing resources available to University of Calgary researchers. However, Lattice is an old cluster and should be viewed as a supplement to other, more modern machines. For example, the Compute Canada Cedar and Graham clusters will provide better performance, but job waiting times may be shorter on Lattice. Please write to us at firstname.lastname@example.org if you would like to discuss the advantages and disadvantages of the various alternatives.
The primary intended use for Lattice is MPI-based parallel processing, but it can also be used for serial programs with low memory requirements (<11 GB/job).
As of September 1, 2017, new Lattice accounts are being set up only for researchers associated with the University of Calgary. If you are a University of Calgary researcher and think that the software you would like to run is appropriate for the Lattice cluster, please write to email@example.com with a subject line of the form "Lattice account request (your_username)", requesting an account and mentioning the software you propose to use.
To log in to Lattice, connect to lattice.westgrid.ca using an ssh (secure shell) client.
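For example, from a terminal with an OpenSSH client, a login would look like the following (the username jsmith is a placeholder, not an account from this page):

```shell
# Connect to the Lattice login node; replace "jsmith" with your own username.
ssh jsmith@lattice.westgrid.ca
```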
Batch jobs are handled by a combination of TORQUE and Moab software. For more information about submitting jobs, see Running Jobs.
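As a quick illustration, the standard TORQUE and Moab commands below can be used to check on your jobs after submission (the username and job ID are placeholders):

```shell
# TORQUE's view of your jobs (Q = queued, R = running, C = completed).
qstat -u jsmith

# Moab's view of the queue: running, eligible, and blocked jobs for one user.
showq -u jsmith

# Detailed scheduler diagnosis for one job, e.g. to see why it is not starting.
checkjob 123456
```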
In the past, Lattice was used almost exclusively for large parallel jobs that use whole nodes. This can improve the performance of some jobs and minimizes the impact of a misbehaving job or a hardware failure. Since there are 8 cores per node on Lattice, a ppn (processors per node) parameter of 8 will request that all the processors on a node be used. Also, it is recommended that you ask for 10-11 GB of memory per node requested, using the mem parameter. So, a typical job submission on Lattice would look like:
qsub -l nodes=4:ppn=8,mem=40gb,walltime=72:00:00 parallel_diffuse.pbs
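The script name parallel_diffuse.pbs above is just an example. A minimal PBS script matching that submission might look like the sketch below; the job name, module setup, and executable name (./diffuse) are assumptions for illustration, not names from this page:

```shell
#!/bin/bash
#PBS -N diffuse                      # job name shown in qstat/showq output
#PBS -l nodes=4:ppn=8,mem=40gb       # 4 whole 8-core nodes, ~10 GB per node
#PBS -l walltime=72:00:00            # maximum run time (hh:mm:ss)

# Start in the directory from which qsub was run.
cd $PBS_O_WORKDIR

# Launch one MPI process per requested core (32 here); "./diffuse" is a
# placeholder for your own MPI executable.
mpiexec ./diffuse
```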
When Lattice was withdrawn from WestGrid service at the beginning of September 2017, it was decided to allow other types of jobs on the Lattice cluster, including serial (single-core) jobs. For serial jobs, use nodes=1:ppn=1 and specify an appropriate mem parameter according to the memory requirements of your job. Using mem=1350mb (or less) will allow the job scheduler to pack up to 8 serial jobs per node.
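For example, a serial job small enough to be packed 8 per node could be submitted as follows (the script name serial_job.pbs is a placeholder):

```shell
# Single core and a memory request at or below 1350 MB, so the scheduler can
# pack up to 8 such jobs onto one 8-core node.
qsub -l nodes=1:ppn=1,mem=1350mb,walltime=24:00:00 serial_job.pbs
```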
The following limits are in place for batch jobs submitted to the default queue (that is, if no queue is specified on the qsub command):
|Resource|Policy or limit|
|---|---|
|Maximum walltime (but see below for other walltime-related comments)|168 hours|
|Suggested maximum memory resource request (mem)|11 GB|
|Maximum number of running jobs for a single user|512|
|Maximum cores (sum over all jobs) for a single user|512|
|Maximum jobs in the Eligible (for scheduling) section of the input queue|8|
|Maximum number of jobs in the queue for a single user|1000|
Six to eight nodes are generally reserved for short batch jobs, some with a maximum walltime limit of 3 hours and some with a maximum of 24 hours.
Two to four nodes are reserved for interactive use or short jobs. These can be accessed by specifying the interactive queue and a walltime of less than or equal to three hours on your qsub command line. If you require exclusive access to a node, you can ask for all 8 cores on a node:
qsub -q interactive -l nodes=1:ppn=8,mem=11gb,walltime=03:00:00 job_script.pbs
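If you want an interactive shell on a compute node rather than submitting a script, TORQUE's -I option can be combined with the interactive queue. This is a sketch using the same resource syntax as above:

```shell
# Request an interactive shell on one core of an interactive-queue node;
# once the job starts, your prompt moves to the assigned compute node.
qsub -I -q interactive -l nodes=1:ppn=1,mem=1350mb,walltime=03:00:00
```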
See the Working Interactively section of the Running Jobs page for an alternate method to reserve processors for interactive use, which can be used in cases where you need more than two nodes or a walltime longer than 3 hours.
The login node can be used for short testing and debugging sessions, but a virtual memory limit of 6 GB per process has been imposed to reduce the chance of a user making the login node unusable for others. If you need to test a program that has a process requiring more virtual memory than that, you could use one of the compute nodes.
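You can see the per-process virtual memory limit in effect for your current shell with the standard ulimit builtin:

```shell
# Report the current virtual-memory limit in 1 KB blocks; on the Lattice
# login node this should correspond to roughly 6 GB.
ulimit -v
```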
Storage Information on Lattice (University of Calgary research projects only)
|Directory path|Size|Quota|Command to check quota|Purpose|Backup policy|
|---|---|---|---|---|---|
|/home|20 TB (shared with Breezy and Parallel)|50 GB with a 200,000-file limit, for each individual home directory|Write to firstname.lastname@example.org with a subject line of the form "Disk quota for user your_user_name requested for Lattice"|Use your home directory for files that you want to save for longer than 30 days.|Users are responsible for their own backups.|
|/global/scratch/user_name|300 TB (shared with Breezy and Parallel)|450 GB default for individual users, with a 200,000-file limit; if you need an increased quota, please write to email@example.com|Write to firstname.lastname@example.org with a subject line of the form "Disk quota for user your_user_name requested for Lattice"|Intended only for files associated with running jobs or files waiting for post-processing.|Files older than 30 days are subject to deletion.|
2015-11-09 - Changed file system sizes and quota checking procedure.
2017-09-18 - Noted that Lattice is now available only to U of C-based projects and that it can now be used for serial processing for up to 512 jobs using up to 512 cores.