HPC Systems

IBM NeXtScale Cluster | IBM iDataplex Cluster | IBM Power7+ Cluster | IBM BlueGene/Q Cluster | IBM Power7+ BigData Cluster

ada: an IBM (mostly) NeXtScale Cluster

System Name: Ada
Host Name: ada.tamu.edu
Operating System: Linux (CentOS 6.6)
Nodes/cores per node: 845/20-core @ 2.5 GHz IvyBridge
Nodes with GPUs: 30 (2 Nvidia K20 GPUs/node)
Nodes with Phis: 9 (2 Phi coprocessors/node)
Memory size: 811 nodes with 64 GB/node;
34 nodes with 256 GB (DDR3 1866 MHz)
Extra-fat nodes/cores per node: 15/40-core @ 2.26 GHz Westmere;
4 nodes with 2 TB and 11 nodes with 1 TB (DDR3 1066 MHz)
Interconnect: FDR10 fabric based on the
Mellanox SX6536 core switch
Peak Performance: ~337 TFLOPs
Global Disk: 4 PB (raw) via IBM's GSS26 appliance
File System: Global Parallel File System (GPFS)
Batch: Platform LSF
Production Date: September 2014

Ada is a 17,500-core IBM commodity cluster with nodes based mostly on Intel's 64-bit 10-core IvyBridge processors. 20 of the nodes with GPUs have 256 GB of memory. Included in the 845 nodes are 8 login nodes with 256 GB of memory per node, 3 with 2 GPUs, and 3 with 2 Phi coprocessors.
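
As a rough check, the core count above follows from the listed node counts, and the listed CPU peak is consistent with 8 double-precision FLOPs per cycle per IvyBridge core; the sketch below treats that per-core rate as an assumption and leaves out the GPU and Phi contributions.

```python
# Rough check of Ada's listed totals. The 8 double-precision FLOPs/cycle
# per IvyBridge core (AVX) is an assumption; GPU and Phi contributions
# are excluded from the peak estimate.
ivybridge_nodes, cores_per_node, clock_ghz = 845, 20, 2.5
westmere_nodes, westmere_cores = 15, 40

total_cores = ivybridge_nodes * cores_per_node + westmere_nodes * westmere_cores
print(total_cores)  # 17500 -- the "17,500-core" figure

flops_per_cycle = 8  # assumed AVX double-precision rate
peak_tflops = ivybridge_nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000
print(round(peak_tflops))  # 338, close to the listed ~337 TFLOPs
```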

For details on using this system, see the User Guide for Ada.


eos: an IBM iDataplex Cluster

System Name: Eos
Host Name: eos.tamu.edu
Operating System: Linux (RedHat Enterprise Linux and CentOS)
Number of Nodes: 372 (324 8-way Nehalem-based and 48 12-way Westmere-based)
Number of Nodes with Fermi GPUs: 4 (2 with 2 M2050s each and 2 with 1 M2070 each)
Number of Processing Cores: 3,168 (all @ 2.8 GHz)
Interconnect Type: 4x QDR Infiniband (Voltaire Grid Director GD4700 switch)
Total Memory: 9,056 GB
Peak Performance: 35.5 TFlops
Total Disk: ~500 TB via a DDN S2A9900 RAID array
File System: GPFS
Production Date: May 2010

Eos is an IBM "iDataPlex" commodity cluster with nodes based on Intel's 64-bit Nehalem and Westmere processors. The cluster is composed of 6 head nodes, 4 storage nodes, and 362 compute nodes. The storage and compute nodes have 24 GB of DDR3 1333 MHz memory, while the head nodes have 48 GB of DDR3 1066 MHz memory. A Voltaire Grid Director 4700 QDR IB switch provides the core switching infrastructure.
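
The listed 35.5 TFlops peak is consistent with the core count and clock rate if each Nehalem/Westmere core retires 4 double-precision FLOPs per cycle (SSE); that per-core rate is an assumption in the sketch below, and the Fermi GPUs are not counted.

```python
# Eos peak-performance check, assuming 4 double-precision FLOPs/cycle
# per Nehalem/Westmere core (SSE); the Fermi GPUs are excluded.
cores, clock_ghz, flops_per_cycle = 3168, 2.8, 4
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
print(round(peak_tflops, 1))  # 35.5, matching the listed figure
```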

For details on using this system, see the User Guide for Eos.


curie: an IBM Power7+ Cluster

System Name: Curie
Host Name: curie.tamu.edu
Operating System: Linux (RedHat Enterprise Linux 6.6)
Nodes/cores per node: 50/16-core @ 4.2 GHz Power7+
Memory size: 49 nodes with 256 GB/node; 1 node with 128 GB (DDR3 1066 MHz)
Interconnect: 10 Gbps Ethernet
Peak Performance: ~26 TFLOPs
Global Disk: 4 PB (raw) via IBM's GSS26 appliance (shared with Ada)
File System: Global Parallel File System (GPFS) (shared with Ada)
Batch: Platform LSF (shared with Ada)
Production Date: May 2015

Curie is an 800-core IBM Power7+ cluster with nodes based on IBM's 64-bit 16-core Power7+ processors. Included in the 50 nodes are 1 login node with 128 GB of memory and 1 login node with 256 GB of memory. Curie's file system and batch scheduler are shared with the Ada cluster.
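
The listed ~26 TFLOPs fits the same kind of estimate if each Power7+ core delivers 8 double-precision FLOPs per cycle through its fused multiply-add units; that per-core rate is again an assumption.

```python
# Curie peak-performance check, assuming 8 double-precision FLOPs/cycle
# per Power7+ core (fused multiply-add units).
cores, clock_ghz, flops_per_cycle = 800, 4.2, 8
print(cores * clock_ghz * flops_per_cycle / 1000)  # 26.88, listed as ~26 TFLOPs
```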


neumann: an IBM BlueGene/Q (BG/Q) Cluster

System Name: Neumann
Host Name: neumann.tamu.edu
Operating System: RedHat Enterprise Linux 6.6 (login nodes), CNK (IBM's BG Compute Node Kernel), and INK (IBM's BG I/O Node Kernel)
Number of Nodes: 2048 IBM BG/Q nodes
Number of Processing Cores: 32,768 PowerPC A2 (all @ 1.6 GHz)
Interconnect Type: QDR Infiniband
Total Memory: 32TB DDR3 (2048 x 16GB/node)
Peak Performance: ~400 TFlops (2048 x 204.8GF/node)
Total Disk: ~2PB
File System: General Parallel File System (GPFS)
Batch Facility: LoadLeveler
Location: IBM Campus, Rochester, Minnesota
Production Date: estimated Q2 2015
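
The listed per-node figure is consistent with each PowerPC A2 core sustaining 8 double-precision FLOPs per cycle via its 4-wide QPX unit with fused multiply-add; the sketch below treats that per-core rate as an assumption.

```python
# Neumann (BG/Q) peak check, assuming 8 double-precision FLOPs/cycle
# per PowerPC A2 core (4-wide QPX with fused multiply-add).
cores_per_node, clock_ghz, flops_per_cycle, nodes = 16, 1.6, 8, 2048
per_node_gf = cores_per_node * clock_ghz * flops_per_cycle
print(per_node_gf)                 # 204.8 GF/node, as listed
print(per_node_gf * nodes / 1000)  # ~419 TF, in line with the listed ~400 TFlops
```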

crick: an IBM Power7+ BigData Cluster

System Name: Crick
Host Name: crick.tamu.edu
Operating System: Linux (RedHat Enterprise Linux 6.6)
Nodes/cores per node: 25/16-core @ 4.2 GHz Power7+
Memory size: 23 nodes with 256 GB/node; 2 nodes with 128 GB (DDR3 1066 MHz)
Interconnect: 10 Gbps Ethernet
Peak Performance: ~13 TFLOPs
Storage: 23 nodes with 14 TB (raw) each via a SAS expansion chassis of 24 x 600 GB 10K RPM drives
File System: Global Parallel File System - File Placement Optimizer (GPFS-FPO)
Data Processing Software: IBM BigInsights 3.5
Production Date: estimated Fall 2015

Crick is a 400-core IBM Power7+ BigData cluster with nodes based on IBM's 64-bit 16-core Power7+ processors. Included in the 25 nodes are 2 management nodes with 128 GB of memory per node, 1 BigSQL node with 256 GB of memory and 14 TB (raw) of storage, and 22 data nodes with 14 TB (raw) of storage each for GPFS-FPO and local caching. Crick is primarily used for big data analytics.
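
The per-node storage figure follows from the listed drive configuration; the sketch below assumes the reported 14 TB is the 14.4 TB raw capacity rounded down.

```python
# Crick per-node raw storage check: a SAS chassis of 24 x 600 GB drives.
drives, drive_gb = 24, 600
per_node_tb = drives * drive_gb / 1000
print(per_node_tb)       # 14.4 TB raw per node, reported as 14 TB
print(per_node_tb * 23)  # ~331 TB raw across the 23 nodes with storage
```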