About Green II

Green II Architecture

Green II: gSTAR and SwinSTAR

This new Swinburne HPC system features a compute component that is a hybrid of traditional x64 processing cores (the CPUs) and graphics processing units (GPUs). The compute nodes are combined with a petascale data store, and the entire system is networked with QDR InfiniBand.

The compute component comprises two facilities: the GPU Supercomputer for Theoretical Astrophysics Research (gSTAR) and the Swinburne Supercomputer for Theoretical Academic Research (swinSTAR). In practice the two facilities are networked together as one system, and both interact directly with a ~3 Petabyte Lustre file system.

Accounts on the system are open to all astronomers at publicly funded institutions in Australia and to all Swinburne staff and students. Time on the facility is split as 40% for national astronomy use and 60% for Swinburne-only use. Up to half of the astronomy time will be allocated through a merit-based proposal scheme judged by the Astronomy Supercomputer Time Allocation Committee (ASTAC), which is a committee of AAL. Calls for proposals will be published on the AAL website and through the Astronomical Society of Australia. The remaining astronomy time will be available through a general access job queue.

Green II Cabinet

The gSTAR Cluster

The purpose of gSTAR is to provide the national astrophysical community with a GPU-based facility for performing world-class simulations and to enable rapid processing of telescope data. Funding for gSTAR is provided by an Education Investment Fund (EIF) grant obtained in co-operation with (and administered by) Astronomy Australia Limited (AAL). It is hosted at Swinburne and operated as a national facility.

The gSTAR hardware is provided by SGI. There are currently 50 standard SGI C3108-TY11 nodes that each contain:

  • 2 six-core Westmere processors at 2.66 GHz
    (each processor is 64-bit Intel Xeon 5650)
  • 48 GB RAM
  • 2 NVIDIA Tesla C2070 GPUs (each with 6 GB RAM).

There are also 3 high-density GPU nodes that have the same CPU capabilities as the standard nodes but each contain 7 NVIDIA Tesla M2090 GPUs. All GPUs perform at greater than 1 Tflop/s (single precision).
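As a rough back-of-the-envelope check, the GPU inventory above can be tallied with a short script. The per-device peak figures are assumptions taken from NVIDIA's published specifications (~1.03 Tflop/s single precision for the Tesla C2070, ~1.33 Tflop/s for the Tesla M2090), not numbers quoted on this page:

```python
# Tally the gSTAR GPU inventory described above.
STANDARD_NODES = 50   # SGI C3108-TY11 nodes, 2x Tesla C2070 each
DENSE_NODES = 3       # high-density nodes, 7x Tesla M2090 each

c2070_count = STANDARD_NODES * 2
m2090_count = DENSE_NODES * 7
total_gpus = c2070_count + m2090_count

# Assumed per-device single-precision peaks (Tflop/s), from vendor spec sheets:
C2070_SP_TFLOPS = 1.03
M2090_SP_TFLOPS = 1.33

peak_sp = c2070_count * C2070_SP_TFLOPS + m2090_count * M2090_SP_TFLOPS

print(f"{total_gpus} GPUs, ~{peak_sp:.0f} Tflop/s aggregate single precision")
```

This gives 121 GPUs in total, consistent with the statement that every device exceeds 1 Tflop/s single precision.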

gSTAR Units

The SwinSTAR Cluster

The purpose of swinSTAR is to provide the Swinburne research community with a world-class HPC facility to enhance research endeavours. It can be seen as the successor to the Green Machine: the CPU-based supercomputer installed at Swinburne in 2007. Time on swinSTAR is mainly for Swinburne staff and students, but gSTAR users also have access (the same account system applies on all nodes).

The swinSTAR hardware is provided by SGI. There are currently 86 standard SGI C2110G-RP5 nodes that each contain:

  • 2 eight-core SandyBridge processors at 2.2 GHz
    (each processor is 64-bit 95W Intel Xeon E5-2660)
  • 64 GB RAM
  • PCI-e Gen3 motherboard.

64 of these nodes contain an NVIDIA Tesla K10 GPU (8 GB RAM, 2 GK104 GPUs).

There are also 4 large-memory swinSTAR nodes that each have 32 CPU cores and 512 GB RAM (no GPUs).
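The swinSTAR node counts above can be aggregated the same way. This is a sketch based only on the figures quoted on this page (2 × 8 cores and 64 GB per standard node, 32 cores and 512 GB per large-memory node):

```python
# Aggregate the swinSTAR CPU and memory figures described above.
STANDARD_NODES = 86    # SGI C2110G-RP5: 2x 8-core Xeon E5-2660, 64 GB RAM
LARGE_MEM_NODES = 4    # 32 CPU cores, 512 GB RAM, no GPUs
K10_NODES = 64         # standard nodes that also carry a Tesla K10

cores = STANDARD_NODES * 2 * 8 + LARGE_MEM_NODES * 32
ram_gb = STANDARD_NODES * 64 + LARGE_MEM_NODES * 512
gk104_gpus = K10_NODES * 2   # each K10 card holds two GK104 GPUs

print(f"{cores} CPU cores, {ram_gb} GB RAM, {gk104_gpus} GK104 GPUs")
```

That works out to 1504 CPU cores and 7552 GB of RAM across the swinSTAR nodes.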

SwinSTAR Units