Hardware

Head Node

g2.hpc.swin.edu.au is the head (login) node. It is intended for submitting jobs with qsub, logging in to the interactive nodes, and transferring data. No processing tasks should be run here.
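For example, data can be transferred to or from the head node with standard tools such as rsync; the username and paths below are placeholders only:

    # Copy a local directory to your space on g2 over SSH.
    # Replace "username" and the destination path with your own details.
    rsync -av --progress ./my_data/ username@g2.hpc.swin.edu.au:/path/to/destination/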

Interactive Nodes

We have four interactive nodes (gstar001, gstar002, sstar001 and sstar002) which you can log in to directly to run short jobs, compile and test code, etc. Please do not use these nodes for long jobs or jobs with large computational requirements; such jobs can instead be run interactively by requesting interactive nodes from the queue. Jobs running on the interactive nodes can use up to 80% of a node's total memory, with a maximum of 4 GB of swap.
All other nodes are accessed via the Job Queue.
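If you need interactive access beyond a quick test, request an interactive session on a compute node through the queue. A minimal sketch, assuming the PBS/Torque-style qsub used on g2; the queue name, core count and walltime are illustrative only (see the queue names in the sections below):

    # Request an interactive session: one core on one sstar node for two hours.
    # Adjust the queue, resources and walltime to match your job.
    qsub -I -q sstar -l nodes=1:ppn=1,walltime=02:00:00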

Compute Nodes

gSTAR Nodes

gstar001-002

These are the only gstar compute nodes available for direct access.
They can be used for short jobs to test and compile code.

Each node contains 12 CPU cores and 2 GPUs (exactly the same as the standard gstar nodes below).

gstar011-058

These nodes are only to be accessed through the job queue (select the gstar queue).

Each node contains two six-core X5650 CPUs, 48GB RAM and two Tesla C2070 GPUs (6GB RAM, 1 Tflops single precision).
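A minimal sketch of a batch script targeting these nodes, assuming PBS/Torque-style directives; the gpus resource syntax and the executable name are assumptions, not confirmed local settings:

    #!/bin/bash
    #PBS -q gstar                    # the gstar queue described above
    #PBS -l nodes=1:ppn=12:gpus=2    # one full node: 12 cores and both GPUs (syntax assumed)
    #PBS -l walltime=01:00:00        # illustrative walltime
    cd $PBS_O_WORKDIR                # run from the directory the job was submitted from
    ./my_gpu_program                 # placeholder for your executable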

gstar101-103

These nodes are only to be accessed through the job queue (select the manygpu queue).

Each node contains two six-core X5650 CPUs, 48GB RAM and seven Tesla M2090 GPUs (6GB RAM, 1.3 Tflops single precision).
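Targeting the manygpu queue works the same way; a sketch of the relevant directives, again assuming PBS-style gpus syntax with illustrative values:

    #PBS -q manygpu                  # the manygpu queue described above
    #PBS -l nodes=1:ppn=6:gpus=4     # example: six cores and four of the seven GPUs (syntax assumed)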

SwinSTAR Nodes

sstar001-003

These are the only sstar compute nodes available for direct access.
They can be used for short jobs to test and compile code.

Each node contains 16 CPU cores (same as the sstar011-030 nodes below). Special hardware configurations on these nodes:

  • sstar001 has an NVIDIA Tesla K20 card.
  • sstar002 has an NVIDIA Tesla K40C card.
  • sstar003 has two Intel Xeon Phi cards (mic0, mic1) and 128 GB memory. Note that the two Xeon Phi cards are in sleep mode by default; you need to ssh mic0 and/or ssh mic1 to wake them (see the sketch after this list).
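A minimal sketch of waking the Xeon Phi cards from a shell on sstar003; running any command over ssh (uptime here is arbitrary) is enough to bring a card out of sleep mode:

    # Run these from sstar003 after logging in.
    ssh mic0 uptime    # wakes mic0 and confirms it responds
    ssh mic1 uptime    # wakes mic1 and confirms it responds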

sstar011-030

These nodes are only to be accessed through the job queue (select the sstar queue).

Each node contains two eight-core E5-2660 CPUs and 64GB RAM.

sstar101-162

These nodes are only to be accessed through the job queue (select the sstar queue and set gpus=1 or 2 in your resource list if you need a GPU).

Each node contains two eight-core E5-2660 CPUs, 64GB RAM and an NVIDIA Tesla K10 (two GK104 Kepler GPUs, 8GB RAM, 4.6 Tflops single precision).
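For example, a job that needs one of the K10 boards might include directives like the following; this is a sketch assuming PBS-style syntax, and whether gpus goes inside the nodes specification or as a separate resource depends on the local configuration:

    #PBS -q sstar                    # the sstar queue described above
    #PBS -l nodes=1:ppn=1:gpus=1     # request one GPU, per the gpus=1 resource noted above
    #PBS -l walltime=04:00:00        # illustrative walltime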

sstar201-204

These nodes are only to be accessed through the job queue (select the largemem queue).

These are SGI UV10 large-memory nodes. Each node contains four eight-core E7-8837 CPUs and 512GB RAM (no GPUs).
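A sketch of requesting a large-memory job on these nodes, assuming PBS-style directives; the memory value is illustrative and should be checked against local limits:

    #PBS -q largemem                 # the largemem queue described above
    #PBS -l nodes=1:ppn=8            # illustrative core count
    #PBS -l mem=256gb                # example request, well under the 512GB per node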

Storage

The storage system provides approximately 3 petabytes of usable disk space, served by a Lustre file system. See the Filesystems page for details. Approximately 200 terabytes are available to non-Swinburne gSTAR users.

SGI InfiniteStorage Nodes

Interconnect

The primary interconnect is a QDR InfiniBand network made by QLogic (now Intel). The fabric has a latency of a few microseconds (user process to user process) and provides approximately 3 GB/s of non-blocking, full fat-tree bandwidth per node.

QLogic InfiniBand Switch