WestGrid Computing Facilities

Introduction

The WestGrid computing facilities are distributed among several resource provider sites, with some specialization at each site. The sites are connected by high-performance networks, so users can access the system that best fits their needs, regardless of where it is physically located.

The systems are built for high-performance computing, so they go well beyond what you would find on a desktop. WestGrid provides several types of computing systems, since different users' programs run best on different kinds of hardware: we have clusters, clusters with fast interconnect, and shared-memory systems.
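
The practical difference between these architectures is how a program's parallel parts communicate: on a cluster, separate processes on different nodes exchange messages over the interconnect (e.g. with MPI), while on a shared-memory machine, or within a single node, threads simply share the same memory (e.g. with OpenMP). The short C sketch below illustrates the two models combined in one program. It is not an official WestGrid example; the compiler wrapper and launcher named in the comments (mpicc, mpirun) are typical of MPI installations but vary from system to system.

    /*
     * Minimal sketch: hybrid MPI + OpenMP "hello world".
     * MPI ranks are separate processes that can be spread across cluster
     * nodes (distributed memory, traffic goes over the interconnect);
     * OpenMP threads share memory within a single node.
     *
     * Typical build and launch (names are assumptions, check your system):
     *   mpicc -fopenmp hello_hybrid.c -o hello_hybrid
     *   mpirun -np 4 ./hello_hybrid
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &nranks); /* processes in the job */

        /* Each MPI process runs a team of shared-memory threads. */
        #pragma omp parallel
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }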

Which system should you use?

Use the system which best fits your needs, not necessarily the one closest to you. 

See the QuickStart Guide for New Users for an introduction to choosing the most appropriate system. For more detailed technical information about the differences between the WestGrid systems, read the information below and the linked pages for each system.

National Resources

WestGrid's computing facilities are part of Compute Canada's national platform of Advanced Research Computing resources.  

New National Sites

Compute Canada's new national sites offer expandable and modern data centres with highly qualified personnel. Operations of the new systems are delivered through national teams, drawing upon regional and local expertise distributed across Canada. 

Location | System Name
University of Victoria | Arbutus / GP1 / Cloud-West
Simon Fraser University | Cedar / GP2
University of Waterloo | Graham / GP3
University of Toronto | Niagara / LP (Large Parallel) - in production mid-2018
SFU & Waterloo | National Data Cyberinfrastructure (NDC - Storage)

Please refer to the CC User Documentation wiki for more information on national systems and services.

Migration to the New National Systems

Compute Canada has begun one of the biggest advanced research computing renewals in Canada’s history, replacing several ageing systems with new national systems. For more information on the national migration process and the latest updates to the technology refresh program, please visit the Compute Canada website.

All current WestGrid systems were installed prior to 2012 and will be defunded by April 2018. For details about this process, as well as key information and instructions for WestGrid users, visit our Migration Process page.

WestGrid Systems

System(s) | Site | Cores | Type (details below each entry)
Arbutus | University of Victoria | 7640 | OpenStack Cloud

Visit the CC-Cloud Resources page on the Compute Canada User Wiki for full system details.

Bugaboo | Simon Fraser University | 4584 | Storage, Cluster with fast interconnect

This system will be decommissioned on January 31, 2018. Visit the WestGrid Migration page for more information.

  • Dell
  • 160 nodes: 8 cores, Xeon X5430 with 16 GB/node = 1,280 cores (Infiniband, 2:1 blocking)
  • 254 nodes: 12 cores, Xeon X5650 (212 nodes with 24 GB/node, 32 nodes with 48 GB/node) = 3,048 cores (Infiniband, 2:1 blocking)
Cedar | Simon Fraser University | 27,696 | Cluster with fast interconnect

See https://docs.computecanada.ca/wiki/Cedar

Grex | University of Manitoba | 3792 | Storage, Cluster with fast interconnect

This system will be defunded on March 31, 2018. Visit the WestGrid Migration page for more information.

  • SGI Altix XE 1300
  • 316 compute nodes
  • 2 x 6-core Intel Xeon X5650 2.66 GHz processors per node
  • 24 nodes have 96 GB, 292 nodes have 48 GB
  • Infiniband 4X QDR
Orcinus | University of British Columbia | 9600 | Storage, Cluster with fast interconnect
  • Phase 1: 384 nodes, 3072 cores
    • 8 cores/node
    • Xeon E5450 3.0GHz
    • 16 GB Ram
    • DDR IB
  • Phase 2: 554 nodes, 6528 cores
    • 12 cores/node
    • Xeon X5650 2.66 GHz
    • QDR IB
  • IB with 2:1 blocking factor
  • Phase 1 and Phase 2 share filesystems but otherwise run as separate systems
ownCloud | Simon Fraser University | n/a | Storage
Parallel | University of Calgary | 7056 | Storage, Cluster with fast interconnect, Visualization

This system will be defunded on March 31, 2018. Visit the WestGrid Migration page for more information.

  • HP ProLiant SL390
  • 528 x 12 core nodes
    • Intel E5649 (6 core) 2.53 GHz
  • 60 special 12 core nodes with GPU
    • NVIDIA Tesla M2070s (5.5 GB RAM, Compute Capability 2.0)
  • IB QDR (2:1 blocking factor to reduce costs)
  • Global scratch shared between Breezy, Lattice, and Parallel
WestGrid portal | n/a | n/a | None

List of WestGrid facilities by general type

  • Cluster with fast interconnect
    • Bugaboo, Grex, Orcinus, Parallel
  • Visualization
    • Parallel has special nodes with Graphics Processing Units (GPUs).
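
On GPU nodes such as Parallel's, the cards' Compute Capability (2.0 on the Tesla M2070s) determines which CUDA features a program can use. The short C sketch below, which is not an official WestGrid example, simply asks the CUDA runtime which devices are visible on a node and what they report; it assumes a CUDA toolkit is installed and that the file is linked against the runtime library (e.g. cc gpu_query.c -lcudart, with include and library paths that vary by installation).

    /*
     * Minimal sketch: list the GPUs visible on a node and report their
     * compute capability, using the C API of the CUDA runtime.
     * Assumes a CUDA toolkit is installed; link against libcudart.
     */
    #include <cuda_runtime_api.h>
    #include <stdio.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            printf("No CUDA-capable devices visible on this node.\n");
            return 0;
        }
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s, compute capability %d.%d, %.1f GB memory\n",
                   i, prop.name, prop.major, prop.minor,
                   prop.totalGlobalMem / 1e9);
        }
        return 0;
    }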

Defunded Systems

Retired WestGrid Systems

Some older WestGrid systems have been removed from general service, typically replaced by more capable, more energy-efficient machines.

Machine name Period of Service Description

Silo

Dec. 2008 - Mar. 2017

Silo was the primary storage facility at WestGrid, with over 3.15 PB (3150 TB) of spinning disk. It was an archival facility and was backed up to tape. There were two main login servers, Silo and Hopper, which shared filesystems.

Disk storage: total 4.2 PB raw, 3.15 PB usable
600 x 1TB SATA drives
1800 x 2TB SATA drives
RAID 6
2 pairs of Dual IBM/DDN DSC9900 Controllers

Tape System: IBM LTO 3584 tape library
6 frames capable of holding 6000 LTO tapes
6 LTO4 drives
6 LTO5 drives
1780 x LTO4 tapes (averaging >1TB/tape with compression)
1400 x LTO5 tapes (averaging 1.5TB/tape with compression)

Backup Software: IBM Tivoli Storage Manager (TSM)

 Hermes / Nestor

July 2010 - June 2017

Hermes & Nestor were systems located at the University of Victoria.

Hermes
Original nodes: 84 x 8 core, IBM iDataplex X5550 2.67 GHz, 24 GB/node, 2 x GigE interconnects
Newer nodes: 120 x 12 core, Dell C6100 servers, 2.66 GHz X5650 processors with 24 GB/node, QDR IB with 10:1 blocking
GPFS: 1.2 PB for home and scratch (shared with Nestor)

Nestor
288 x 8 cores/node, IBM iDataplex X5550 2.67 GHz, 24 GB/node, QDR IB non-blocking
GPFS: 1.2 PB for home and scratch (shared with Hermes)