***NOTE: This system will be defunded on August 31, 2017. Please visit the Migration Process page for more information.***

Lattice cluster is intended for large-scale parallel applications that can take advantage of its InfiniBand interconnect.

Unlike most WestGrid systems, Lattice requires a separate request to obtain a WestGrid account. If you think the software you would like to run is appropriate for the Lattice cluster, please write to with a subject line of the form "Lattice account request (your_username)", requesting an account and mentioning the software you propose to use.

To log in to Lattice, connect to using an ssh (secure shell) client.

As on other WestGrid systems, batch jobs are handled by a combination of TORQUE and Moab software. For more information about submitting jobs, see Running Jobs.

Unlike on most other WestGrid systems, we prefer that the syntax "-l nodes=xx:ppn=8" be used rather than "-l procs=yyy" when requesting processor resources on Lattice. Lattice is used almost exclusively for large parallel jobs that use whole nodes. Using whole nodes can improve the performance of some jobs and minimizes the impact of a misbehaving job or hardware failure. Since there are 8 cores per node on Lattice, a ppn (processors per node) parameter of 8 requests all the processors on a node. It is also recommended that you ask for 10-11 GB of memory per node requested, using the mem parameter. So, a typical job submission on Lattice would look like:

qsub -l nodes=4:ppn=8,mem=40gb,walltime=72:00:00 parallel_diffuse.pbs
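The same resource requests can instead be written as PBS directives at the top of the job script itself; here is a minimal sketch, assuming an MPI program started with mpiexec (the script contents and program name are hypothetical):

```shell
#!/bin/bash
#PBS -l nodes=4:ppn=8
#PBS -l mem=40gb
#PBS -l walltime=72:00:00

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR

# Start the MPI program on all 32 requested cores (4 nodes x 8 cores each).
mpiexec ./parallel_diffuse
```

With the directives embedded in the script, the job can be submitted with just "qsub parallel_diffuse.pbs".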

The following limits are in place for batch jobs submitted to the default queue (that is, if no queue is specified on the qsub command):

Resource                                               Policy or limit
Maximum walltime (but see below for other
  comments related to walltime)                        168 hours
Suggested maximum memory request (mem) per node        11 GB
Maximum number of running jobs for a single user       64
Maximum cores (sum for all jobs) for a single user     1024
Maximum jobs in the Eligible (for scheduling)
  section of the input queue                           8
Maximum number of jobs in the queue for a single user  1000
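To see where your jobs stand relative to these limits, the usual TORQUE and Moab query commands can be run on the login node; for example:

```shell
# List your queued and running jobs (TORQUE).
qstat -u $USER

# Show your jobs in the Eligible, Running and Blocked sections (Moab).
showq -u $USER
```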


Six to eight nodes are generally reserved for short batch jobs, some with a maximum walltime limit of 3 hours and some with a maximum of 24 hours.

Two to four nodes are reserved for interactive use or short jobs. These can be accessed by specifying the interactive queue and a walltime of less than or equal to three hours on your qsub command line. If you require exclusive access to a node, you can add naccesspolicy=singlejob, as shown here, or ask for all 8 cores on a node (for example, -l nodes=1:ppn=8):

qsub -q interactive -l walltime=03:00:00,naccesspolicy=singlejob job_script.pbs
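TORQUE can also open an interactive shell on the reserved node directly via the -I flag; for example, to get exclusive use of a whole node for up to three hours by requesting all 8 cores (a sketch of the same request as above):

```shell
# Request an interactive session on all 8 cores of one node for 3 hours.
qsub -I -q interactive -l walltime=03:00:00,nodes=1:ppn=8
```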

See the Working Interactively section of the Running Jobs page for an alternate method to reserve processors for interactive use, which can be used in cases where you need more than two nodes or a walltime longer than 3 hours.

The login node can be used for short testing and debugging sessions, but a virtual memory limit of 6 GB per process has been imposed to reduce the chance of a user making the login node unusable for others. If you need to test a program with a process requiring more virtual memory than that, you could use one of the compute nodes, for example, through the interactive queue described above.
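You can check the per-process virtual memory limit in effect for your current shell with the standard ulimit builtin; on the login node this should correspond to the 6 GB limit (ulimit reports it in kilobytes):

```shell
# Print the virtual memory limit for new processes, in kilobytes
# ("unlimited" if no limit is set).
ulimit -v
```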

Storage Information on Lattice

/home
    Size: 20 TB (shared with Breezy and Parallel)
    Quota: 50 GB, with a 200,000-file limit, for each individual home directory
    To request a quota change: write to with a subject line of the form "Disk quota for user your_user_name requested for Lattice"
    Purpose: use your home directory for files that you want to save for longer than 30 days.
    Backup policy: users are responsible for their own backups.

/global/scratch/user_name
    Size: 300 TB (shared with Breezy and Parallel)
    Quota: the default quota in /global/scratch for individual users is 450 GB, with a 200,000-file limit. If you need an increased quota, please write to .
    To request a quota change: write to with a subject line of the form "Disk quota for user your_user_name requested for Lattice"
    Purpose: /global/scratch is intended only for files associated with running jobs or waiting for post-processing.
    Backup policy: files older than 30 days are subject to deletion.
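Since files in /global/scratch older than 30 days may be removed, it is worth checking periodically how much space you are using and which of your files are at risk; a sketch using standard tools (the path assumes your own scratch directory):

```shell
# Total space used under your scratch directory.
du -sh /global/scratch/$USER

# List files not modified in the last 30 days, which are subject to deletion.
find /global/scratch/$USER -type f -mtime +30
```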

2015-11-09 - Changed file system sizes and quota checking procedure.