SKA launches AstroCompute cloud grant programme

Clouds were once the bane of astronomers, but now the cloud is being used to accelerate astronomical research and data processing for the world’s largest radio telescope, the Square Kilometre Array (SKA).

The SKA organisation has teamed up with Amazon Web Services (AWS) to create the AstroCompute cloud grant programme, which aims to address the huge volumes of data the array of telescopes will produce.

Once fully operational, the SKA will produce data at a rate several times that of global internet traffic.

The AstroCompute in the Cloud programme is being launched to accelerate the development of tools and techniques for processing, storing and analysing all this new data once the SKA becomes operational.

Grant recipients will have access to credits for AWS cloud services over a two-year period and up to one petabyte of storage for data contributed by SKA partners, which AWS will make available as a public dataset. Anyone associated with or using radio astronomical telescopes or radio astronomical data resources around the world can apply to the programme.

Tim Cornwell, the SKA organisation architect and administrator of the grants, said: ‘With the SKA, we will be generating more data than the entire internet traffic at any single time. So we’re looking into innovative cloud solutions to help us cope with never-before-seen volumes of data, using techniques that are yet to be invented.’

In its first phase of construction, the SKA will include two telescopes, one consisting of more than one hundred thousand low-frequency antennas, and one with about two hundred large dishes. Supercomputers will translate the enormous volume of raw data coming from the telescopes into a usable form for astronomers.

This system presents significant technological challenges: huge amounts of data must be processed in real time, as observations are expected to run continuously once the system has been optimised to cope with the load. Researchers will need to re-design current methods of storing and processing data in real time, and new algorithms and software will be required, a need that prompted the creation of the AstroCompute programme.
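The difference between store-then-process and real-time processing can be illustrated with a toy sketch. This is not SKA code; the chunk sizes, function names, and the running-mean statistic are invented for illustration. The point is that each chunk of a stream is reduced as it arrives, so memory use stays bounded no matter how long observations run:

```python
def stream_chunks(n_chunks, chunk_size):
    """Stand-in for a telescope data stream: yields successive
    lists of samples rather than one giant array."""
    for i in range(n_chunks):
        yield [float(i * chunk_size + j) for j in range(chunk_size)]

def running_mean(chunks):
    """Reduce an unbounded stream to one statistic incrementally,
    holding only a running total and a count in memory."""
    total, count = 0.0, 0
    for chunk in chunks:
        total += sum(chunk)
        count += len(chunk)
    return total / count if count else 0.0

# Process 4 chunks of 3 samples (the values 0..11) on the fly.
mean = running_mean(stream_chunks(n_chunks=4, chunk_size=3))
print(mean)  # 5.5
```

Real pipelines reduce raw voltages to images or catalogues rather than to a mean, but the pattern of incremental reduction over an arriving stream is the same.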

Jamie Kinney, senior manager for scientific computing at Amazon Web Services, explained that it is often the creative members of the scientific community who find ways to solve these challenges. Kinney said: ‘Through our scientific computing programme, our grants and our public datasets, we’ve found that when researchers have access to the tools and data they need, they find innovative ways of solving big data challenges.

‘The SKA is an ambitious project which presents an unprecedented opportunity to leverage a tremendous amount of data to explore the Universe,’ he concluded.

The project will provide grants, in the form of AWS credits, and up to one petabyte of storage for an AWS public data set. The data set will initially be provided by several of the SKA’s precursor telescopes, including CSIRO’s ASKAP and the MWA in Australia, and KAT-7 (pathfinder to the SKA precursor telescope MeerKAT) in South Africa.

Over time, the data set will grow to the full petabyte using data provided by the other SKA partners. The grants are open to anyone who is making use of radio astronomical telescopes or radio astronomical data resources around the world.

The SKA organisation will administer the grants and will be looking for innovative, cloud-based algorithms and tools able to handle and process this never-ending data stream.

Cornwell concluded by highlighting the increasingly strong links between ‘fundamental research and computing’ and the benefits to society that arise because of the advancements made in computing research.

‘CERN, the European Organisation for Nuclear Research, realised very early they would face a challenge to distribute the amount of data from their experiments to physicists around the world. To solve it, they created the World Wide Web. SKA is the next step,’ said Cornwell.

