Each node in the Nestor and Hermes clusters is an IBM iDataPlex server with eight 2.67-GHz Xeon X5550 cores and 24 GB of RAM. The newer 120 Hermes expansion nodes are Dell C6100 servers with twelve 2.66-GHz Xeon X5650 cores and 24 GB of RAM.
Recently, we decided to virtualize all the Hermes nodes and move them to the cloud. As of November 10, 2016, all the Hermes nodes have been virtualized and the old physical nodes have been turned off. Each virtual node has 28 cores (Intel Xeon CPU E5-2680 v4 @ 2.40 GHz). This move is transparent to users and should not affect their jobs.
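If you want to confirm which flavour of node a job landed on, checking the core count and CPU model string is enough. This is a minimal sketch using standard Linux tools; the expected values are taken from the node descriptions above, and nothing in the commands is specific to our clusters.

```shell
# Sanity check of the current node (standard Linux, no site-specific tools).
# A virtualized Hermes node should report 28 cores and an E5-2680 v4
# model string; the original nodes report 8 (X5550) or 12 (X5650) cores.
nproc
grep -m1 'model name' /proc/cpuinfo
```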
The original 84 Hermes nodes use two bonded Gigabit Ethernet links (2 Gbit/s aggregate bandwidth) to access data on the NFS and GPFS filesystems. The Hermes expansion nodes instead use 4X QDR InfiniBand with a 10:1 blocking factor.
Nestor nodes share data with each other and with the GPFS filesystem over a high-speed InfiniBand interconnect (4X QDR non-blocking connections, giving a 40 Gbit/s signal rate and a 32 Gbit/s data rate).
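The gap between the 40 Gbit/s signal rate and the 32 Gbit/s data rate comes from QDR InfiniBand's 8b/10b line encoding (8 data bits carried per 10 bits on the wire) across the four lanes of a 4X link. The arithmetic can be checked directly:

```shell
# 4X QDR InfiniBand: 4 lanes x 10 Gbit/s signalling per lane,
# with 8b/10b encoding leaving 8/10 of the line rate for data.
lanes=4
per_lane_gbit=10
signal_rate=$((lanes * per_lane_gbit))      # 40 Gbit/s signal rate
data_rate=$((signal_rate * 8 / 10))         # 32 Gbit/s data rate
echo "signal: ${signal_rate} Gbit/s, data: ${data_rate} Gbit/s"
```

This prints `signal: 40 Gbit/s, data: 32 Gbit/s`, matching the figures quoted above.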
1.2 PB of storage is deployed across the clusters through the General Parallel File System (GPFS), a high-performance parallel filesystem that provides both fast data access and fault tolerance to cluster participants. This storage holds user home directories, scratch space for running jobs, and space for installed software. Disk storage (home directories) is backed up, where appropriate, to a dedicated backup system.
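To see how much of that storage your own directories are consuming, ordinary filesystem tools suffice. The sketch below uses only generic commands; whether a GPFS-specific quota tool such as `mmlsquota` is available on the login nodes, and what the scratch path is, are site details not documented here, so treat them as assumptions.

```shell
# Generic usage check for your home directory (no GPFS-specific tools).
# The mount point and any per-user quotas are site-specific assumptions.
df -h "$HOME"    # free/used space on the filesystem holding $HOME
du -sh "$HOME"   # total size of your own files under $HOME
```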