Orcinus

After March 31, 2019, this system will no longer be supported by WestGrid / Compute Canada. The University of British Columbia has generously offered to continue operating this system for opportunistic use by existing WestGrid users only. No new accounts will be created and no new software will be installed. Priority access will be given to researchers located at or collaborating with the University of British Columbia. Support and maintenance of this system will be provided on a best-efforts basis only. Users should ensure all important data is backed up on another system. Please contact arc.support@ubc.ca for more information, or refer to WestGrid's Migration Process page for further help.

Institution: University of British Columbia
Cores: 9600
Hostname: orcinus.westgrid.ca

Technical Specifications

Processors

Phase One of the Orcinus cluster comprises 12 c7000 chassis, each containing 16 dual-density BL2x220 Generation 5 blades. There are 2 compute servers per blade (an A node and a B node). Every node has 2 sockets, each holding an Intel Xeon E5450 quad-core processor running at 3.0 GHz, so each Phase One node has 8 cores sharing 16 GB of RAM. In total there are 3072 Phase One cores (12 chassis × 16 blades × 2 nodes × 8 cores).

Phase Two comprises 17 c7000 chassis, each containing 16 dual-density BL2x220 Generation 6 blades. Again, there are 2 compute servers per blade (an A node and a B node). Every node has 2 sockets, each holding an Intel Xeon X5650 six-core processor running at 2.66 GHz, so each Phase Two node has 12 cores sharing 24 GB of RAM. In total there are 6528 Phase Two cores (17 chassis × 16 blades × 2 nodes × 12 cores).

The total number of cores available is 9600.
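As a quick sanity check, the per-phase and total core counts follow directly from the chassis, blade, node, and per-node core figures quoted above. The short Python sketch below is illustrative only (it is not part of any Orcinus software) and simply multiplies out those figures:

    # Sanity check of the Orcinus core counts quoted above.
    phases = {
        # name: (chassis, blades per chassis, nodes per blade, cores per node)
        "Phase One": (12, 16, 2, 8),
        "Phase Two": (17, 16, 2, 12),
    }

    total = 0
    for name, (chassis, blades, nodes, cores) in phases.items():
        phase_cores = chassis * blades * nodes * cores
        total += phase_cores
        print(f"{name}: {phase_cores} cores")

    print(f"Total: {total} cores")  # 3072 + 6528 = 9600

Running it prints 3072 cores for Phase One, 6528 for Phase Two, and a total of 9600, matching the figures above.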

Interconnect

All Orcinus nodes are connected via an InfiniBand network fabric with a 2:1 blocking factor (1:1 among all A nodes or all B nodes within a single chassis). The Phase One hardware uses a 288-port Voltaire DDR (20 Gb/s) Grid Director 2012 switch, and Phase Two uses a 324-port Voltaire QDR (40 Gb/s) Grid Director 4700 switch. To maintain a single fabric across the entire cluster (and consequently to mount shared file systems), the two switches are linked via a 14-port trunk. However, to ensure that parallel jobs do not run in a mixed InfiniBand environment, the cluster is sectioned into DDR and QDR partitions. (See the "Batch jobs" section of the Orcinus QuickStart Guide.)


2018-04