***NOTE: This system will be defunded on April 1, 2017. Please wait for further instruction from the WestGrid Support Team or read about the Migration Process. If you have questions, email firstname.lastname@example.org.
Breezy is intended for jobs that need more memory per node than can be obtained on other WestGrid clusters (for example, the 48 GB of memory on a Grex node). Large OpenMP-based parallel programs are the expected type of Breezy workload. Serial jobs requiring more than about 4 GB can also be run on Breezy.
Unlike most WestGrid systems, a separate request is required to obtain a WestGrid account on Breezy. If you think the software you would like to run is appropriate for Breezy, please write to email@example.com, using a subject line of the form "Breezy account request (your_username)", to request an account and mention the software you propose to use.
To log in to Breezy, connect to breezy.westgrid.ca using an ssh (secure shell) client. For more information about connecting and setting up your environment, see the QuickStart Guide for New Users.
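For example, from a terminal with an OpenSSH client installed (the username shown is a placeholder):

```shell
# Connect to the Breezy login node; replace your_username with your
# WestGrid username.
ssh your_username@breezy.westgrid.ca
```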
Batch jobs are handled by a combination of TORQUE and Moab software. For more information about submitting jobs, see the general Running Jobs page.
The maximum walltime limit for Breezy jobs is 3 days.
For a single user, the maximum number of jobs in the system at a time is 1000.
Since Breezy is intended for applications requiring large amounts of memory, you will often be expected to specify a TORQUE mem parameter on the qsub command line (or in #PBS directives in the batch job script). Although the memory per node is nominally 256 GB, not quite that much is actually available. Do not specify more than 250 GB for the mem or pmem resource requests, or your job will get stuck in the input queue waiting for memory that will never be available.
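As a sketch, a batch script for a large-memory serial job might request resources like this (the script and program names are hypothetical; adjust the values to your calculation):

```shell
#!/bin/bash
# Illustrative TORQUE directives for a large-memory serial job on Breezy.
#PBS -S /bin/bash
#PBS -l mem=200gb            # stay at or below 250 GB
#PBS -l walltime=72:00:00    # 3-day maximum walltime
#PBS -N bigmem-serial

cd $PBS_O_WORKDIR            # run from the submission directory
./my_serial_program          # hypothetical executable
```

The same request can instead be given on the command line, e.g. `qsub -l mem=200gb,walltime=72:00:00 myscript.pbs`.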
Another intended use for Breezy is multi-threaded single-node applications. For such cases, use a resource request of the form -l nodes=1:ppn=24,mem=250gb, where ppn, the processors per node, is the number of cores required and the memory needed is specified with the mem parameter. Since Breezy compute nodes have 24 cores, that is the maximum number you can specify for ppn. You can use smaller values for ppn and mem as appropriate for your calculation. However, if you are using fewer than 24 cores, it is important to limit the number of threads used by your application to the number of cores requested, so as not to interfere with other users' jobs, which may be assigned to the same node. Often this can be accomplished by setting the OMP_NUM_THREADS variable. See the OpenMP section of the Running Jobs page for an example script.
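A minimal job script along these lines, assuming a hypothetical program name and using 12 of the 24 cores, could look like:

```shell
#!/bin/bash
# Illustrative OpenMP job on one Breezy node using 12 cores.
#PBS -S /bin/bash
#PBS -l nodes=1:ppn=12,mem=120gb
#PBS -l walltime=24:00:00

cd $PBS_O_WORKDIR
# Keep the thread count equal to the ppn value requested above so the
# job does not interfere with others sharing the node.
export OMP_NUM_THREADS=12
./my_openmp_program          # hypothetical executable
```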
Please do not use the ncpus or procs parameters when requesting processors on Breezy. In the rare cases in which multiple nodes are used for a single job, use the -l nodes=n:ppn=24 format to request multiple nodes, where n is the number required.
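For the rare multi-node case, the request can be made directly on the qsub command line, for example (script name hypothetical):

```shell
# Request 2 whole nodes (48 cores in total) for a single job.
qsub -l nodes=2:ppn=24,walltime=24:00:00 myscript.pbs
```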
The Breezy login node may be used for short interactive runs during development. Production runs should be submitted as batch jobs.
Storage Information on Breezy
| Directory path | Size | Quota | Command to check quota | Purpose | Backup Policy |
|---|---|---|---|---|---|
| /home | 20 TB (shared with Lattice and Parallel) | 50 GB | Write to firstname.lastname@example.org with a subject line of the form "Disk quota for user your_user_name requested for Breezy" | Use your home directory for files that you want to save for longer than 30 days. | Users are responsible for their own backup. |
| /global/scratch | 300 TB (shared with Lattice and Parallel) | 450 GB | Write to email@example.com with a subject line of the form "Disk quota for user your_user_name requested for Breezy" | Intended only for files associated with running jobs or waiting for post-processing. | Files older than 30 days are subject to deletion. |
2015-05-30 - Updated purpose and backup policy for /home and /global/scratch file systems.
2015-07-29 - Added a note to clarify that large-memory serial jobs are acceptable for Breezy.
2015-11-09 - Replaced file system sizes and quota checking procedure.