Policies

All users of the Swinburne HPC facilities must agree to abide by the Swinburne IT Systems Acceptable Use policy.

Users must not share their login details or disclose their password for system access. It is also the responsibility of individuals to ensure that their contact details are kept up to date as part of their account information. The principal avenue for communicating important system information will be the email address linked by the user to their account; it is not the responsibility of Swinburne to take additional steps to contact individuals. All users should be aware that information about the level of activity of their account on the system may be observable by other users, and that Swinburne HPC staff may monitor the contents of files on the system (such as job scripts).

In addition, all users agree to abide by the following usage policies.

USA Export Conditions

All hardware is subject to the export control laws and regulations of the U.S. and other countries, including, without limitation, the Export Administration Regulations, the International Traffic in Arms Regulations, and the sanctions regulations of the U.S. Department of Treasury Office of Foreign Assets Control. It is the responsibility of Project and Research Group leaders to be aware of these restrictions and how they relate to users within their group.

UN and Australian Autonomous Sanctions

It is also the responsibility of Project and Research Group leaders to ensure that members of their projects and the research being conducted adhere to Australian sanctions laws. These laws implement the United Nations Security Council sanctions regimes and the Australian autonomous sanctions regimes. More information can be found at the DFAT website. All users of Swinburne HPC facilities must ensure that their research activities have been cleared by their host institution where necessary, prior to any use of the facility.

Resource Usage

It is a condition of use of the Swinburne HPC facilities that users maintain acceptable use of the available resources and make every reasonable effort not to negatively impact the computations of others. At a minimum, users should aim to follow these guidelines:

  • Users shall endeavour to request physical resources that reasonably reflect the needs of their jobs (see the example job script after this list).

  • Parallel jobs should maintain acceptable efficiency and CPU utilisation; approximately 75% should be considered a reasonable minimum.

  • Users shall test and verify all software before deploying it at scale.

  • Users should not use the head node to run any processing tasks.

  • Users should not use the interactive nodes to run long jobs or jobs with large computational requirements. Jobs running on interactive nodes can use up to 80% of the node's total memory, with a maximum of 4 GB of swap.
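
As a rough illustration of the first two guidelines, the sketch below shows a batch script for the Slurm scheduler used on OzSTAR, with resource requests sized to the job, together with one way the resulting efficiency might be checked afterwards. The job name, program, and resource values are placeholders, and the seff utility (a standard Slurm contributed tool) is assumed to be installed.

    #!/bin/bash
    # Illustrative Slurm job script; all resource values are placeholders and
    # should be adjusted to reflect what the job actually needs.
    #SBATCH --job-name=my_analysis
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8      # request only the cores the code can use
    #SBATCH --mem=16G              # request memory close to the expected peak
    #SBATCH --time=02:00:00        # a realistic walltime helps the scheduler

    srun ./my_program input.dat

    # After the job finishes, CPU efficiency can be checked with, for example:
    #   seff <jobid>
    # or from the accounting records:
    #   sacct -j <jobid> --format=JobID,Elapsed,TotalCPU,MaxRSS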

Misuse of the system or repeated underutilisation of requested resources may lead to a temporary restriction of usage rights. It should also be noted that a demonstrated record of efficient usage of the system will be looked upon favourably when allocating future resources. However, it is also recognised that some development work will, by its nature, be inefficient; this will be deemed acceptable provided it is directed at achieving reasonable production resource usage.

Data Storage and Security

Unless otherwise specified, data storage on the Swinburne HPC system is not backed up. While Swinburne HPC staff will take all reasonable steps to preserve the integrity of data, such as using appropriate RAID configurations, Swinburne takes no responsibility for loss of data, including loss due to major disasters such as large-scale disk failure or fire.

As such, it is the responsibility of individual users to take their own measures locally to protect themselves against the loss of valuable data. At the very least, we urge users to store copies of source code, and of any data that would be difficult to reproduce, on systems elsewhere.
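
As one possible approach, data that cannot easily be regenerated might be copied periodically to storage outside the facility, for example with rsync. The username, hostname, and paths below are placeholders only, and the command is a sketch rather than a recommended backup procedure.

    # Copy irreplaceable results from the cluster to local storage (run from
    # the local machine); replace the username, host, and paths as appropriate.
    rsync -avz --progress username@ozstar.swin.edu.au:/fred/myproject/results/ \
        /local/backups/myproject/results/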

If a project approved through the merit allocation scheme has been granted additional storage, availability of this storage is only guaranteed for three months after the compute component of the project ends. It is the responsibility of the project leader to make arrangements to transfer the data in a timely manner.

For usage related to external funding, e.g. ARC, it is the responsibility of the project leader to be aware of (and implement) any data management plans and/or retention policies related to that funding.

Acknowledgements

It is a condition of use that the HPC facility be appropriately acknowledged in any related publications.

For MAS projects on OzSTAR, i.e. those given time through ASTAC:

This work was supported by resources awarded under Astronomy Australia Ltd's ASTAC merit allocation scheme on the OzSTAR national facility at Swinburne University of Technology. The OzSTAR program receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government, and from the Victorian Higher Education State Investment Fund (VHESIF) provided by the Victorian Government.

For general use on OzSTAR:

This work was performed on the OzSTAR national facility at Swinburne University of Technology. The OzSTAR program receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government, and from the Victorian Higher Education State Investment Fund (VHESIF) provided by the Victorian Government.

Reporting Requirements

In addition to any reporting requirements set under merit allocation schemes for large-scale allocations of astronomy-related time, all Project Leaders must agree to supply a project description and a report on work conducted (including outcomes) upon request. All supplied material may be published in related facility reports.

Usage Cost

The facility is intended for academic usage, for which there is no direct cost. For the purposes of documenting in-kind contributions on grant proposals, or otherwise, the value of the resource for academic use is set at $0.07 per CPU (processor) hour.