Table 1. The HCP computing infrastructure.
Component | Device | Notes |
---|---|---|
Virtual cluster | 2 Dell PowerEdge R610s managed by VMware ESXi | Additional nodes will be added in years 3 and 5. Dynamically expandable using the NIAC cluster. |
Web servers | VMs running Tomcat 6.0.29 and XNAT 1.5 | Load-balanced web servers host the XNAT system and handle all API requests (see the example request below). Monitored by Pingdom and Google Analytics. |
Database servers | VMs running Postgres 9.0.3 | Postgres 9 is run in hot-standby streaming replication mode, enabling high availability and load balancing of read queries. |
Compute cluster | VMs running Sun Grid Engine-based queuing | Executes pipelines and on-the-fly computations that require short latencies. |
Data storage | Scale-out NAS (vendor TBD) | Planned 1-PB capacity will include tiered storage pools and 10-Gb connectivity to the compute cluster and HPCS. |
Load balancing | Kemp Technologies LoadMaster 2600 | Distributes web traffic across multiple servers and provides hardware-accelerated SSL encryption. |
HPCS | IBM system in WU's CHPC | The HPCS will execute computationally intensive processing, including “standard” pipelines and user-submitted jobs. |
DICOM gateway | Shuttle XS35-704 Intel Atom D510 | The gateway uses CTP to manage secure transmission of scans from the UMinn scanner to ConnectomeDB. |
Elastic computing and storage | Partner institutions, cloud computing | Mirrored data sites will ease bottlenecks during peak traffic periods. Elastic computing strategies will automatically detect stress on the compute cluster and recruit additional resources. |
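As a concrete illustration of the API traffic handled by the load-balanced web servers, the sketch below queries an XNAT REST endpoint for the list of accessible projects. It is a minimal example assuming Python with the requests library; the host name, credentials, and printed fields are placeholders based on typical XNAT JSON responses, not details of the deployed ConnectomeDB system.

```python
# Minimal sketch of an XNAT REST query against the load-balanced web servers.
# Host name and credentials are placeholders, not the actual deployment values.
import requests

BASE_URL = "https://connectomedb.example.org"   # placeholder host
session = requests.Session()
session.auth = ("username", "password")         # placeholder credentials

# XNAT exposes project metadata under /data/projects; format=json requests JSON output.
resp = session.get(f"{BASE_URL}/data/projects", params={"format": "json"}, timeout=30)
resp.raise_for_status()

# XNAT wraps query results in a ResultSet/Result structure.
for project in resp.json()["ResultSet"]["Result"]:
    print(project["ID"], project.get("name", ""))
```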
The web servers, database servers, and compute cluster are jointly managed as a single VMware ESXi cluster for efficient resource utilization and high availability. The underlying servers each include 48 GB of memory and dual 6-core processors. Each node in the VMware cluster is redundantly tied back into the storage system for VM storage. All nodes run 64-bit CentOS 5.5.

The HPCS includes an iDataPlex cluster (168 nodes with dual quad-core Nehalem processors and 24 GB of RAM), an e1350 cluster (7 SMP servers, each with 64 cores and 256 GB of RAM), a 288-port QLogic InfiniBand switch to interconnect all processors and storage nodes, and 9 TB of high-speed storage. Connectivity to the system is provided by a 4 × 10 Gb research network backbone.
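To illustrate how work might be queued on the Sun Grid Engine-based compute cluster, the following sketch wraps the standard qsub command from Python. The script path, queue name, and parallel environment are illustrative assumptions, not values taken from the HCP configuration.

```python
# Hedged sketch: submitting a pipeline script to a Sun Grid Engine queue via qsub.
# Queue name, parallel environment, and script path are illustrative assumptions.
import subprocess

def submit_pipeline_job(script_path: str, job_name: str, queue: str = "all.q", slots: int = 4) -> str:
    """Submit a shell script to SGE and return qsub's confirmation message."""
    cmd = [
        "qsub",
        "-N", job_name,             # job name as it appears in qstat
        "-q", queue,                # target queue (site-specific; assumed here)
        "-pe", "smp", str(slots),   # request a parallel environment with N slots (name assumed)
        "-o", f"{job_name}.out",    # stdout log file
        "-e", f"{job_name}.err",    # stderr log file
        script_path,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()    # e.g., "Your job 12345 (...) has been submitted"

# Example usage (placeholder path and name):
# print(submit_pipeline_job("/pipelines/structural_preproc.sh", "struct_preproc_subj001"))
```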