
Harvesting data from large radio telescope arrays

Mark Stickells, executive director at Pawsey Supercomputing Centre, highlights the work done in Australia to deliver supercomputing facilities that can support the SKA and accommodate the country's research goals.

From its inception, the Pawsey facility has been dedicated to providing supercomputing to Australian scientists and researchers. The facility is also positioned as a major hub for radio astronomy. 

Using investment from the Australian government and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the centre has been able to establish itself and deliver research computing services not only to scientists and researchers across Australia but also to astronomers around the world.

Mark Stickells, executive director, said: 'Pawsey is named after an eminent founding Australian physicist and astronomer, Dr Joseph Pawsey. It was created to support Australia’s investment in the Square Kilometre Array project.'

In 2000, an unincorporated joint venture with key partners was established and named iVEC. In 2009, as part of the Commonwealth Government’s Super Science initiative, iVEC secured $80 million in funding to establish a petascale supercomputing facility and house the computers under one roof, at the Australian Resources Research Centre in Perth.

In 2014, iVEC adopted the name the Pawsey Supercomputing Centre.

In early 2018, the Australian Prime Minister announced that Pawsey would receive a further $70 million in funding to refresh its infrastructure and continue as a vital facility for national science. ‘That point of difference distinguishes us as an HPC facility internationally because we manage the direct ingest of telescope data, some 800 km away from Pawsey but directly ingested into the compute infrastructure,’ said Stickells. ‘We manage that process of high volume data ingest as well as a compute facility for classical computational science that you would expect to find in most HPC facilities that provide support for computational sciences.’

The international Square Kilometre Array (SKA) is headquartered near Manchester, UK, with two telescope arrays being constructed, one in South Africa and one in Western Australia (WA). The West Australian array is underpinned by the Pawsey compute facility. ‘We have a data intensive data ingest requirement that makes us part of the experimental infrastructure for large scale radio astronomy projects,’ Stickells said.

The choice to consolidate the facilities allows managers to better deliver computational resources by splitting the systems into two major HPC clusters. ‘We were able to build a purpose-built common facility about five or six kilometres outside of Perth and build the Pawsey centre and the first compute infrastructure for the first systems, Magnus and Galaxy, as two of the flagship systems,’ added Stickells.

'That was then renamed the Pawsey centre as part of this commitment to radio astronomy as one of the key domains supported by our facility. We are also one of two national facilities – the Pawsey centre in Perth and the National Computational Infrastructure (NCI) in Canberra. Both have different domain focuses but both contribute a portion of their compute resources into the national merit-based scheme,’ Stickells continued. ‘Universities around the country can apply for compute cycles on major projects on both of our resources.’

Stickells explained that Pawsey is in a fairly unusual position, as it receives funding from the Commonwealth government to support its status as a piece of national research infrastructure: ‘They also receive funding from the West Australian government because we are based in WA and there is some direct benefit that flows to WA universities and partners in WA.’

‘The founding universities also contribute to Pawsey directly so it is a complicated funding model, but the universities in WA and CSIRO get some direct access to our systems and get to benefit from that because they share some of the costs,’ added Stickells. ‘Then the national government funds Pawsey and some of our resources get put into the national pool for researchers around the country, and then there is the strategic investment from the state which enables us to support the state’s interest, the SKA and so on.’

Magnus and Galaxy

Pawsey’s two main HPC systems are set up to deliver computational resources for its two main application areas: computational research and radio astronomy.

Magnus is a petascale supercomputer and the only publicly accessible research Cray XC40 in Australia. The system provides high-end computing capability to projects across the entire spectrum of scientific fields that Pawsey and its partners are currently engaged with.

It is a Cray XC40 series system with Intel Xeon E5-2690 v3 ‘Haswell’ processors (12-core, 2.6 GHz), 93 terabytes of memory (64 GB of DDR4-2133 per compute node), the standard Cray Aries interconnect and the Cray Dragonfly network topology. Total computing power is around one petaFLOP.
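As a rough sanity check on those figures, the aggregate memory implies the approximate node count, and the node count in turn bounds the theoretical peak. The short calculation below is illustrative only; the dual-socket node configuration and the 16 double-precision FLOPs per core per cycle (Haswell AVX2 with FMA) are assumptions on our part, not figures from Pawsey.

```python
# Back-of-the-envelope check of the Magnus figures quoted above.
# Assumptions (not from Pawsey): two 12-core Haswell sockets per node,
# and 16 double-precision FLOPs per core per cycle (AVX2 + FMA).

MEM_TOTAL_GB = 93 * 1024   # ~93 terabytes of aggregate memory
MEM_PER_NODE_GB = 64       # DDR4-2133 per compute node
CORES_PER_NODE = 2 * 12    # assumed dual-socket, 12 cores per socket
CLOCK_GHZ = 2.6
FLOPS_PER_CYCLE = 16       # assumed AVX2 FMA throughput per core

nodes = MEM_TOTAL_GB // MEM_PER_NODE_GB
peak_tflops = nodes * CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000

print(f"~{nodes} compute nodes")                      # ~1488 nodes
print(f"~{peak_tflops:.0f} TFLOPS theoretical peak")  # above 1 PFLOP
```

The ‘around one petaFLOP’ quoted above sits below this theoretical ceiling, which is what you would expect if it refers to sustained rather than peak performance.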

Galaxy is the world’s only real-time supercomputing service for telescopes used in astronomy research. The telescopes of the Australian Square Kilometre Array Pathfinder (ASKAP) and the Murchison Widefield Array (MWA) – precursor projects for the Square Kilometre Array (SKA) – would not be able to perform their observations without Galaxy to process their data.

Galaxy is based on a Cray XC30 system with Intel Xeon E5-2690 v2 ‘Ivy Bridge’ processors (10-core, 3.00 GHz) and Intel Xeon E5-2690 processors (8-core, 2.6 GHz), with 31.55 terabytes of memory (64 gigabytes of DDR3-1866 per compute node, 32 gigabytes of DDR3-1866 per GPU node). The system also contains Nvidia K20X ‘Kepler’ GPUs on 64 nodes, with the same Aries interconnect and Cray Dragonfly network topology. Galaxy delivers around 200 teraFLOPS of compute performance.

‘Our current system [Magnus] is approaching end of life, it is a petaflop system that this year dropped off the Top500. But we have installed some new systems in the last few months, we have updated our research cloud and there are several procurements on the go but the key announcement is just a couple of weeks away,’ noted Stickells.

Pawsey’s cloud infrastructure ‘Nimbus’ is an on-site, high-throughput computing (HTC) resource complementary to Pawsey’s large-scale HPC facilities. Nimbus is an integrated, data-intensive platform that facilitates large data workflows and computational tasks, and offers a data analytics capability. It is self-service, allowing researchers to administer the entirety of their software environment, data storage and even the operating system within their own instance.
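The article does not describe Nimbus’s software stack, but research clouds of this kind are typically built on OpenStack, so a researcher’s self-service workflow might look something like the sketch below using the openstacksdk Python client. The cloud, image, flavour and key names are placeholders, not real Nimbus resources.

```python
# Hypothetical sketch of self-service provisioning on a research cloud
# such as Nimbus, assuming an OpenStack-style API (not confirmed in the
# article). All names below are placeholders.
import openstack

# Credentials and endpoint come from a clouds.yaml entry named "nimbus".
conn = openstack.connect(cloud="nimbus")

# Launch an instance the researcher fully controls: operating system,
# software environment and storage are all managed inside the instance.
server = conn.create_server(
    name="bioinformatics-pipeline",
    image="ubuntu-20.04",      # placeholder image name
    flavor="m1.xlarge",        # placeholder flavour name
    key_name="my-ssh-key",     # placeholder SSH keypair
    wait=True,
)

# Attach a volume to hold large workflow inputs and outputs.
volume = conn.create_volume(size=500, name="workflow-data")  # size in GB
conn.attach_volume(server, volume)

print(server.id, server.status)
```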

Stickells also noted that Pawsey is investigating the use of quantum computing and is currently evaluating technologies that could be implemented at the centre in the future.

‘We are exploring having access to an emulator and there are some commercial companies that do that. We are actually on a path to try and work with a company to put computing hardware into our facility and then demonstrate some access and scalability alongside classical supercomputing. That is an objective for us in the next 12 months.’
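Stickells does not name a particular emulator, but the basic idea of emulating quantum hardware on classical infrastructure is straightforward to show: a state vector of 2^n complex amplitudes is evolved by applying gate matrices. The toy example below builds a two-qubit Bell state with NumPy; it is an illustration of the technique, not a description of Pawsey’s planned deployment.

```python
# Minimal illustration of quantum-circuit emulation on classical hardware:
# simulate a two-qubit Bell-state circuit by direct state-vector algebra.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control = qubit 0,
                 [0, 1, 0, 0],                 # target  = qubit 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)            # start in |00>
state[0] = 1.0
state = np.kron(H, I) @ state  # Hadamard on qubit 0
state = CNOT @ state           # entangle the two qubits

print(np.round(state, 3))      # [0.707 0. 0. 0.707] = (|00> + |11>)/sqrt(2)

# The state vector doubles in size with every qubit, which is why
# emulation at scale needs supercomputing-class memory:
print(f"40 qubits ~ {2**40 * 16 / 1e12:.0f} TB of complex amplitudes")
```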

Repurposing for Covid-19

Earlier this year, during the initial outbreak of Covid-19, Pawsey and NCI delivered the Covid-19 Accelerated Access Initiatives, a joint initiative of the NCRIS-supported Australian high-performance research computing capabilities. The two centres joined forces to offer additional computation and data resources to support the national and international research community to acquire, process, analyse, store and share data.

The Covid-19 Special Call was intended to identify and provide resources to research projects directly responding to the pandemic. Covid-19 Special Call projects could apply for high performance computing (HPC), cloud, storage resources and associated expert support across Australia’s national research computation providers.

The call supported research tackling the Covid-19 pandemic, with a particular focus on projects processing or analysing gene sequences, predicting transmission, predicting protein structures, modelling the economic impact of Covid-19 and epidemiological modelling, among other areas.
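Of those workload classes, epidemiological modelling is the simplest to illustrate in miniature. The sketch below is a textbook SIR (susceptible-infected-recovered) model with arbitrary parameters; it does not come from any of the funded projects, which would run far richer models and large parameter sweeps on the allocated resources.

```python
# Toy SIR compartmental model, illustrating the epidemiological modelling
# workload class mentioned above. Parameters are arbitrary placeholders.
N = 25_000_000             # population size (roughly Australia's)
beta, gamma = 0.30, 0.10   # transmission and recovery rates per day
dt, days = 0.1, 180        # time step and simulated horizon

S, I, R = N - 1.0, 1.0, 0.0
peak_infected = 0.0

for _ in range(int(days / dt)):
    new_infections = beta * S * I / N * dt
    new_recoveries = gamma * I * dt
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries
    peak_infected = max(peak_infected, I)

print(f"Peak simultaneous infections: {peak_infected:,.0f}")
print(f"Total ever infected: {N - S:,.0f}")
```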

Stickells said this research will not only accelerate the process of understanding and combatting the virus, but also provide a path to recovery after the pandemic. ‘That is an interesting area for us, I see the industry forecasts and I think many governments will be looking at their own economic recovery plans and what role digital infrastructure and innovation play to support economic recovery as we try to navigate our way through Covid-19,’ stated Stickells. ‘I am really interested in the impact of technology and trying to understand how we can better support communities and work with governments to facilitate scientific discovery.’

Both facilities contributed extensive resources to assist researchers in the fight to overcome Covid-19. Through this initiative, researchers throughout Australia are now working with NCI, which is currently supporting three projects with more than 40 million units of compute time on the Gadi supercomputer, and with the Pawsey Supercomputing Centre, which provides access across five projects to more than 1,100 cores on the newly deployed Nimbus cloud and Topaz.

‘There were two initiatives that came under the same banner. One was our new cloud infrastructure and the other was the supercomputing infrastructure Gadi; both were put under the rapid response process to support researchers in Australia with data processing requirements related to Covid research,’ said Stickells.

‘We have got a couple of really bright HPC research fellows with backgrounds in medical research and bioinformatics. They work with existing users, some in WA and some interstate, and we were able to stand up our new cloud, which acts as an entry point to HPC, and support five or six projects fairly quickly, within a matter of weeks.

‘I think they had four or five projects that were 40 million hours of compute time on their Gadi system. Both of us [Pawsey and NCI] were just responding to projects that were in the pipeline, applying the best resources that we had at the time. We were fortunate that our cloud was available to do this.’
