WestGrid Computing Facilities
WestGrid works in partnership with Compute Canada to deliver a national platform of computing and data storage resources distributed among several resource provider sites across the country. These systems are connected by high-performance networks (operated by CANARIE and the regional NRENs), so that researchers can access the system which best fits their needs, regardless of where they are physically located.
Which system should you use?
Use the system which best fits your needs, not necessarily the one closest to you. Click on the system names in the table below for more detailed technical information about each system or refer to our Getting Started on the National Systems Guide for a general overview of how to begin using the resources.
Compute Canada's national systems offer expandable and modern data centres with highly qualified personnel. Operations of the new systems are delivered through national teams, drawing upon regional and local expertise distributed across Canada. Please refer to the CC User Documentation wiki for more information on these national systems and services.
|Host Site||System|
|University of Victoria||Arbutus / CC-Cloud|
|Simon Fraser University (SFU)||Cedar|
|University of Waterloo (UWaterloo)||Graham|
|University of Toronto||Niagara|
|École de technologie supérieure (ETS)||Béluga|
|SFU & UWaterloo & ETS||National Data Cyberinfrastructure|
Visit our Support section for help with using or getting started on the systems.
|Machine||Host Site||Cores||Features|
|Breezy||University of Calgary||384||Shared memory|
NOTE: On September 1, 2017, Breezy was withdrawn from general WestGrid service and is now available only to approved University of Calgary-based projects. This is part of a continuing process for handling the retirement of WestGrid systems; please visit the Migration Process page for more information. Questions regarding Breezy should now be directed to email@example.com, rather than firstname.lastname@example.org.
|Bugaboo||Simon Fraser University||4584||Storage, cluster with fast interconnect|
This system was decommissioned in March 2018. Visit the WestGrid Migration page for more information.
|Grex||University of Manitoba||3792||Storage, cluster with fast interconnect|
This system has been defunded. Visit the WestGrid Migration page for more information.
|Hermes/Nestor||University of Victoria||4416||Storage, cluster with fast interconnect|
NOTE: The Hermes and Nestor systems were defunded on June 1, 2017. Researchers affiliated with UVic should contact email@example.com for information about ongoing use of systems and storage at UVic.
Nestor: 288 nodes x 8 cores/node, IBM iDataPlex, Intel Xeon X5550 2.67 GHz, 24 GB/node, non-blocking QDR InfiniBand; 1.2 PB GPFS for home and scratch (shared with Hermes)
|Lattice||University of Calgary||4096||Storage, cluster with fast interconnect|
NOTE: On September 1, 2017, Lattice was withdrawn from general WestGrid service and is now available only to approved University of Calgary-based projects. Questions regarding Lattice should now be directed to firstname.lastname@example.org, rather than email@example.com.
|Orcinus||University of British Columbia||9600||Storage, cluster with fast interconnect|
IMPORTANT NOTE: After March 31, 2019, this system will no longer be supported by WestGrid / Compute Canada. The University of British Columbia has generously offered to continue operating it for opportunistic use by existing WestGrid users only; no new accounts will be created and no new software installed. Priority access will be given to researchers located at or collaborating with the University of British Columbia. Support and maintenance will be provided on a best-efforts basis only, so users should ensure all important data is backed up on another system. Please contact firstname.lastname@example.org for more information or refer to WestGrid's Migration Process page for more help.
|Parallel||University of Calgary||7056||Storage, cluster with fast interconnect, visualization|
This system was defunded on March 31, 2018. Visit the WestGrid Migration page for more information.
Some older WestGrid systems have been removed from general service, typically replaced by more energy-efficient and more capable machines.
|Machine name||Period of Service||Description|
|Silo||Dec. 2008 - Mar. 2017||
Silo was the primary storage facility at WestGrid, with over 3.15 PB (3150 TB) of spinning disk. It was an archival facility and was backed up. There were two main login servers, Silo and Hopper, which shared filesystems.
Disk storage: 4.2 PB raw total, 3.15 PB usable
Tape system: IBM 3584 tape library with LTO drives
Backup software: IBM Tivoli Storage Manager (TSM)
|Hermes / Nestor||July 2010 - June 2017||
Hermes and Nestor were clusters located at the University of Victoria.
|Hungabee||April 2012 - December 2017||
Note: This system was defunded in Fall 2017. Researchers affiliated with the University of Alberta should contact email@example.com for information about ongoing use of local systems. Access is by special request only.
SGI UV1000, NUMA shared-memory system
2048 Intel Xeon E7 cores
16 TB total (shared) memory
NFS: 2x SGI IS5000 storage arrays, available to both Hungabee and Jasper through QDR InfiniBand
|Jasper||April 2012 - December 2017||
Note: This system was defunded in Fall 2017. Researchers affiliated with the University of Alberta should contact firstname.lastname@example.org for information about ongoing use of local systems.
SGI Altix XE cluster: 400 nodes, 4160 cores, 8320 GB of memory
204 Xeon X5675 nodes: 12 cores (2 x 6), 24 GB, 40 Gbit/s 1:1 InfiniBand interconnect
36 Xeon X5675 nodes: 12 cores (2 x 6), 48 GB, 40 Gbit/s 1:1 InfiniBand interconnect
160 Xeon L5420 nodes: 8 cores (2 x 4), 16 GB, 20 Gbit/s 1:1 InfiniBand interconnect
Lustre parallel distributed filesystem, 356 TB, shared with all nodes via InfiniBand