Table 1.
Technical specifications of the MASSIVE high performance computing system.
| M1 AT THE AUSTRALIAN SYNCHROTRON |
| 42 nodes (504 CPU-cores total) in one configuration: |
| 42 nodes with 12 cores per node running at 2.66 GHz |
| 48 GB RAM per node (2,016 GB RAM total) |
| 2 × NVIDIA M2070 GPUs with 6 GB GDDR5 per node (84 GPUs total) |
| 153 TB of fast-access parallel file system |
| 4 × QDR InfiniBand interconnect |
| M2 AT MONASH UNIVERSITY |
| 118 nodes (1,720 CPU-cores total) in four configurations: |
| 32 nodes with 12 cores per node running at 2.66 GHz |
| 48 GB RAM per node (1,536 GB RAM total) |
| 2 × NVIDIA M2070 GPUs with 6 GB GDDR5 per node (64 GPUs total) |
| 10 nodes with 12 cores per node (visualization/high-memory configuration) |
| 192 GB RAM per node (1,920 GB RAM total) |
| 2 × NVIDIA M2070Q GPUs with 6 GB GDDR5 per node (20 GPUs total) |
| 56 nodes with 16 cores per node running at 2.66 GHz |
| 64 GB RAM per node (3,584 GB RAM total) |
| 2 × NVIDIA K20 GPUs per node (9 nodes; 18 GPUs total) |
| 2 × Intel Xeon Phi coprocessors per node (10 nodes; 20 coprocessors total) |
| 20 nodes with 16 cores per node running at 2.66 GHz |
| 128 GB RAM per node (2,560 GB RAM total) |
| 2 × NVIDIA K20 GPUs per node (40 GPUs total) |
| 345 TB of fast-access parallel file system |
| 4 × QDR InfiniBand interconnect |
| Combined, M1 and M2 have 2,224 CPU-cores. |