
Orcinus

Orcinus is being minimally supported for the RAC 2018 allocation year (until March 31, 2019) as a "bonus" system to tide users over while new resources are installed. New software is being installed only on the new systems, through the national software distribution service. We recommend that you consider migrating to the new national systems, Cedar or Graham.

Institution: University of British Columbia
Cores: 9600
Host name: orcinus.westgrid.ca

Technical Specifications

Processors

Phase One of the Orcinus cluster comprises 12 c7000 chassis, each containing 16 dual-density BL2x220 Generation 5 blades. There are 2 compute servers per blade (an A node and a B node). Every node has 2 sockets, each holding an Intel Xeon E5450 quad-core processor running at 3.0 GHz. In total there are 3072 Phase One cores (12 chassis * 16 blades * 2 nodes * 8 cores per node). The 8 cores in a single Phase One node share 16 GB of RAM.

Phase Two comprises 17 c7000 chassis, each containing 16 dual-density BL2x220 Generation 6 blades. Again, there are 2 compute servers per blade (an A node and a B node). Every node has 2 sockets, each holding an Intel Xeon X5650 six-core processor running at 2.66 GHz. In total there are 6528 Phase Two cores (17 chassis * 16 blades * 2 nodes * 12 cores per node). The 12 cores in a single Phase Two node share 24 GB of RAM.

The total number of cores available is 9600.
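The node and core counts follow directly from the per-chassis arithmetic above. The short Python sketch below is illustrative only; all figures are taken from the hardware description and it simply reproduces the per-phase totals and the memory available per core.

    # Illustrative arithmetic only; figures come from the hardware description above.
    def phase_totals(chassis, blades_per_chassis, nodes_per_blade, cores_per_node, ram_per_node_gb):
        nodes = chassis * blades_per_chassis * nodes_per_blade
        cores = nodes * cores_per_node
        ram_per_core_gb = ram_per_node_gb / cores_per_node
        return nodes, cores, ram_per_core_gb

    # Phase One: 12 chassis, 16 blades each, 2 nodes per blade, 8 cores and 16 GB per node
    p1_nodes, p1_cores, p1_ram = phase_totals(12, 16, 2, 8, 16)    # 384 nodes, 3072 cores, 2.0 GB/core
    # Phase Two: 17 chassis, 16 blades each, 2 nodes per blade, 12 cores and 24 GB per node
    p2_nodes, p2_cores, p2_ram = phase_totals(17, 16, 2, 12, 24)   # 544 nodes, 6528 cores, 2.0 GB/core

    print(p1_cores + p2_cores)   # 9600 cores in total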

Interconnect

All Orcinus nodes are connected via an InfiniBand network fabric with a 2:1 blocking factor (1:1 among all A nodes or all B nodes within a single chassis). The Phase One hardware uses a 288-port Voltaire DDR (20 Gb/s) Grid Director 2012 switch, and Phase Two uses a 324-port Voltaire QDR (40 Gb/s) Grid Director 4700 switch. To maintain a single fabric across the entire cluster (and, consequently, to mount shared file systems), the two switches are linked via a 14-port trunk. However, to ensure that parallel jobs do not run in a mixed InfiniBand environment, the cluster is sectioned into DDR and QDR partitions. (See the "Batch jobs" section of the Orcinus QuickStart Guide.)
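As a rough illustration of the link speeds involved, the sketch below estimates the aggregate capacity of the 14-port trunk. The per-port figures are the nominal InfiniBand signalling rates quoted above (the usable data rate after 8b/10b encoding is lower), and it is assumed here, not stated in the documentation, that the inter-switch trunk runs at the slower DDR rate.

    # Rough bandwidth arithmetic; per-port rates are the nominal signalling rates quoted above.
    DDR_GBPS = 20        # per 4x DDR port (about 16 Gb/s of data after 8b/10b encoding)
    QDR_GBPS = 40        # per 4x QDR port (about 32 Gb/s of data after 8b/10b encoding)
    TRUNK_PORTS = 14

    # Assumption: the DDR-to-QDR trunk links run at the slower (DDR) rate.
    trunk_signalling_gbps = TRUNK_PORTS * DDR_GBPS
    print(trunk_signalling_gbps)   # 280 Gb/s nominal across the trunk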


2018-04