WestGrid Computing Facilities

Introduction

WestGrid's computing facilities are part of Compute Canada's national platform of Advanced Research Computing resources. These facilities are distributed among several resource provider sites and connected by high-performance networks, so users can access the system that best fits their needs regardless of where it is physically located. The systems are built for high-performance computing and offer far more processing power, memory, and storage than a typical desktop.

Which system should you use?

Use the system which best fits your needs, not necessarily the one closest to you. 

See the QuickStart Guide for New Users and the Getting Started on the National Systems Guide for information on choosing the most appropriate system. For more detailed technical information about the differences between the systems, read the information below and in the linked pages for each system.  

National Sites

Compute Canada's new national sites offer expandable and modern data centres with highly qualified personnel. Operations of the new systems are delivered through national teams, drawing upon regional and local expertise distributed across Canada. 

Location | System Name
University of Victoria | Arbutus / GP1 / Cloud-West
Simon Fraser University | Cedar / GP2
University of Waterloo | Graham / GP3
University of Toronto | Niagara / LP (Large Parallel)
SFU & Waterloo | National Data Cyberinfrastructure (NDC - Storage)

Please refer to the CC User Documentation wiki for more information on national systems and services.

Systems

System(s) | Site | Cores | Type | Details
Arbutus | University of Victoria | 7,640 | OpenStack Cloud

Visit the CC-Cloud Resources page on the Compute Canada User Wiki for full system details.
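
Because Arbutus is delivered as an OpenStack cloud rather than a batch cluster, you interact with it through the OpenStack dashboard, command-line client, or SDK rather than a job scheduler. The snippet below is a minimal sketch only, not taken from the CC-Cloud documentation: it assumes the openstacksdk Python package is installed and that your clouds.yaml contains a credentials entry named "arbutus" (a placeholder name), and it simply lists the instances in your project.

    # Minimal sketch: list the compute instances visible to your OpenStack project.
    # Assumes openstacksdk is installed and that clouds.yaml contains an entry
    # named "arbutus" (placeholder) holding your credentials.
    import openstack

    conn = openstack.connect(cloud="arbutus")  # reads credentials from clouds.yaml

    for server in conn.compute.servers():
        print(server.name, server.status)

The same connection object can create, resize, or delete instances; see the CC-Cloud Resources page for the images and flavours actually offered on Arbutus.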

Cedar | Simon Fraser University | 27,696 | Cluster with fast interconnect

See https://docs.computecanada.ca/wiki/Cedar
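
Cedar (like Graham and Niagara) runs jobs through the Slurm workload manager, which is documented on the page above. The sketch below is a generic illustration rather than Cedar-specific documentation: the account string def-someuser and the resource requests are placeholders, to be replaced with the values that apply to your allocation.

    # Generic sketch: submit a job to a Slurm-scheduled cluster such as Cedar.
    # The --account value is a placeholder; use the allocation account and
    # resource requests appropriate to your project (see the Cedar docs).
    import subprocess

    job_script = """#!/bin/bash
    #SBATCH --account=def-someuser
    #SBATCH --time=0-01:00
    #SBATCH --ntasks=1
    #SBATCH --mem-per-cpu=4000M
    echo "Running on $(hostname)"
    """

    # sbatch reads the job script from standard input when no file is given.
    subprocess.run(["sbatch"], input=job_script, text=True, check=True)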

Orcinus | University of British Columbia | 9,600 | Storage, Cluster with fast interconnect

Please note: Orcinus is being minimally supported this RAC 2018 year (until Mar. 31, 2019) as a "bonus" to tide us over while new resources are installed. New software is being installed only on the new systems through the national software distribution service. We recommend that you consider migrating to the new national systems Cedar or Graham. The per-phase core counts listed below are tallied in the short sketch after the list.

  • Phase 1: 384 nodes, 3072 cores
    • 8 cores/node
    • Xeon E5450 3.0GHz
    • 16 GB RAM
    • DDR IB
  • Phase 2: 544 nodes, 6528 cores
    • 12 cores/node
    • Xeon X5650 2.66 GHz
    • QDR IB
  • IB with 2:1 blocking factor
  • Phase 1 and Phase 2 share filesystems but otherwise run as separate systems
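
The per-phase node and core counts above can be cross-checked against the 9,600-core total quoted for Orcinus; a quick tally (not part of the original listing) is shown below.

    # Quick tally of the Orcinus phase figures listed above.
    phases = {
        "Phase 1": (384, 8),   # nodes, cores per node (Xeon E5450, DDR IB)
        "Phase 2": (544, 12),  # nodes, cores per node (Xeon X5650, QDR IB)
    }

    total = 0
    for name, (nodes, cores_per_node) in phases.items():
        cores = nodes * cores_per_node
        total += cores
        print(f"{name}: {cores} cores")

    print(f"Total: {total} cores")  # 3072 + 6528 = 9600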
ownCloud | Simon Fraser University | n/a | Storage
WestGrid portal | n/a | n/a | None

Defunded Systems

System(s) | Site | Cores | Type | Details
Breezy | University of Calgary | 384 | Shared memory

***NOTE: On September 1, 2017, Breezy was withdrawn from general WestGrid service.  It is now available only to approved University of Calgary-based projects. This is part of a continuing process for handling the retirement of WestGrid systems. Please visit the Migration Process page for more information. Questions regarding Breezy should now be directed to support@hpc.ucalgary.ca, rather than support@westgrid.ca.

  • Appro
  • 16 nodes: quad-socket, 6-core AMD 2.4 GHz (24 cores/node, 384 cores total)
  • 256 GB per node
  • Infiniband 4X QDR
  • Dell FluidFS file system
Bugaboo | Simon Fraser University | 4,584 | Storage, Cluster with fast interconnect

This system was decommissioned March 2018. Visit the WestGrid Migration page for more information.

  • Dell
  • 160 nodes: 8 cores, Xeon X5430 with 16 GB/node = 1,280 cores (Infiniband, 2:1 blocking)
  • 254 nodes: 12 cores, Xeon X5650 (212 nodes with 24 GB/node, 32 nodes with 48 GB/node) = 3,048 cores (Infiniband, 2:1 blocking)
Grex | University of Manitoba | 3,792 | Storage, Cluster with fast interconnect

This system has been defunded. Visit the WestGrid Migration page for more information.

  • SGI Altix XE 1300
  • 316 compute nodes
  • 2 x 6-core Intel Xeon X5650 2.66 GHz processors per node
  • 24 nodes have 96 GB, 292 nodes have 48 GB
  • Infiniband 4X QDR
Hermes/Nestor | University of Victoria | 4,416 | Storage, Cluster with fast interconnect

***NOTE: The Hermes and Nestor systems were “defunded” on June 1, 2017. Researchers affiliated with UVic should contact sysadmin@uvic.ca for information about ongoing use of systems/storage at UVic.

Hermes
  • Original nodes: 84 x 8 cores, IBM iDataPlex, Xeon X5550 2.67 GHz, 24 GB/node, 2 x GigE interconnects
  • Newer nodes: 120 x 12 cores, Dell C6100 servers, Xeon X5650 2.66 GHz, 24 GB/node, QDR IB (10:1 blocking)
  • GPFS: 1.2 PB for home and scratch (shared with Nestor)

Nestor
  • 288 nodes x 8 cores/node, IBM iDataPlex, Xeon X5550 2.67 GHz, 24 GB/node
  • QDR IB (non-blocking)
  • GPFS: 1.2 PB for home and scratch (shared with Hermes)

Hungabee | University of Alberta | 2,048 | Shared memory

NOTE: This system was “defunded” in Fall 2017. Researchers affiliated with the University of Alberta should contact research.support@ualberta.ca for information about ongoing use of local systems.

  • Special Request Only
  • SGI UV1000, NUMA Shared-memory
  • 2048 Intel Xeon E7 cores
  • 16 TB total (shared) memory
  • NFS: 2 x SGI IS5000 storage arrays (short-term storage)
    • 8 x Fibre Channel connections direct to the UV1000
    • 50 TB
  • Lustre: 1 x SGI IS16000 array, 355 TB (medium-term storage)
    • Available to both Hungabee and Jasper through QDR IB
Jasper | University of Alberta | 4,160 | Cluster with fast interconnect

NOTE: This system was “defunded” in Fall 2017. Researchers affiliated with the University of Alberta should contact research.support@ualberta.ca for information about ongoing use of local systems.

  • SGI Altix XE, 400 nodes, 4160 cores and 8320 GB of memory
    • 204 Xeon X5675 nodes - 12 cores (2 x 6), 24 GB, 40 Gbit/sec 1:1 Infiniband interconnect
    • 36 Xeon X5675 nodes - 12 cores (2 x 6), 48 GB, 40 Gbit/sec 1:1 Infiniband interconnect
    • 160 Xeon L5420 nodes - 8 cores (2 x 4), 16 GB, 20 Gbit/sec 2:1 Infiniband interconnect
  • Lustre parallel distributed filesystem, 356 TB - shared with all nodes via Infiniband
Lattice | University of Calgary | 4,096 | Storage, Cluster with fast interconnect

***NOTE: On September 1, 2017, Lattice was withdrawn from general WestGrid service.  It is now available only to approved University of Calgary-based projects. Questions regarding Lattice should now be directed to support@hpc.ucalgary.ca, rather than support@westgrid.ca.

  • 512 x 8-core nodes.
    • Intel Xeon L5520 quad core 2.27 GHz
    • 12 GB/node
  • QDR IB (2:1 blocking factor)
Parallel | University of Calgary | 7,056 | Storage, Cluster with fast interconnect, Visualization

This system was defunded on March 31, 2018. Visit the WestGrid Migration page for more information.

  • HP ProLiant SL390
  • 528 x 12 core nodes
    • Intel E5649 (6 core) 2.53 GHz
  • 60 special 12-core nodes with GPUs
    • NVIDIA Tesla M2070s (5.5 GB RAM, Compute Capability 2.0)
  • IB QDR (2:1 blocking factor to reduce costs)
  • Global scratch shared between Breezy, Lattice and Parallel

Retired WestGrid Systems

Some older WestGrid systems have been removed from general service, typically replaced by more energy-efficient and more capable machines.

Machine name Period of Service Description

Silo

Dec. 2008 - Mar. 2017

Silo was the primary storage facility at WestGrid, with over 3.15 PB (3150 TB) of spinning disk. It was an archival facility and was backed up. There were two main login servers, Silo and Hopper, which shared filesystems.

Disk storage: 4.2 PB raw, 3.15 PB usable
  • 600 x 1 TB SATA drives
  • 1800 x 2 TB SATA drives
  • RAID 6
  • 2 pairs of dual IBM/DDN DCS9900 controllers

Tape system: IBM 3584 (LTO) tape library
  • 6 frames capable of holding 6000 LTO tapes
  • 6 LTO-4 drives
  • 6 LTO-5 drives
  • 1780 x LTO-4 tapes (averaging >1 TB/tape with compression)
  • 1400 x LTO-5 tapes (averaging 1.5 TB/tape with compression)

Backup software: IBM Tivoli Storage Manager (TSM)

 Hermes / Nestor

July 2010 - June 2017

Hermes & Nestor were systems located at the University of Victoria.

Hermes
  • Original nodes: 84 x 8 cores, IBM iDataPlex, Xeon X5550 2.67 GHz, 24 GB/node, 2 x GigE interconnects
  • Newer nodes: 120 x 12 cores, Dell C6100 servers, Xeon X5650 2.66 GHz, 24 GB/node, QDR IB (10:1 blocking)
  • GPFS: 1.2 PB for home and scratch (shared with Nestor)

Nestor
  • 288 nodes x 8 cores/node, IBM iDataPlex, Xeon X5550 2.67 GHz, 24 GB/node
  • QDR IB (non-blocking)
  • GPFS: 1.2 PB for home and scratch (shared with Hermes)