NEWS

US researchers link supercomputers to power cosmic simulation

Researchers at the US Department of Energy’s (DOE) Argonne National Laboratory have increased their capacity for cosmological simulation by opening up a link to another research centre - the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UI).

This new approach links supercomputers at the Argonne Leadership Computing Facility (ALCF) and at NCSA to enable computationally demanding and highly intricate simulations of how the cosmos evolved after the Big Bang.

‘You need high-performance supercomputers that are capable of not only capturing the dynamics of trillions of different particles but also doing exhaustive analysis on the simulated data,’ said Argonne cosmologist Katrin Heitmann. ‘Sometimes, it’s advantageous to run the simulation and do the analysis on different machines.’

This link enabled scientists to transfer massive amounts of data and to run two different types of demanding computations in a coordinated fashion – referred to technically as a workflow.
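Conceptually, the workflow pairs a simulation stage on one machine with an analysis stage on another, with the two proceeding in an overlapping, coordinated fashion. The short Python sketch below illustrates that pattern in the abstract only; the stage names and frame handling are placeholders, not the team's actual software.

```python
# Minimal sketch of a two-stage workflow: a "simulation" stage produces frames
# while an "analysis" stage consumes them, so the two kinds of computation run
# in a coordinated, overlapping fashion. All names here are illustrative.
import queue
import threading

frames = queue.Queue()

def simulate(num_steps):
    for step in range(num_steps):
        frame = f"frame_{step:05d}"      # stand-in for real simulation output
        frames.put(frame)                # hand the frame to the analysis stage
    frames.put(None)                     # signal that the simulation has finished

def analyse():
    while True:
        frame = frames.get()
        if frame is None:
            break
        print(f"analysing {frame}")      # stand-in for the real analysis task

sim = threading.Thread(target=simulate, args=(5,))
ana = threading.Thread(target=analyse)
sim.start(); ana.start()
sim.join(); ana.join()
```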

Argonne transferred data produced as part of the simulation directly to the Blue Waters system for analysis. This is no small feat, as the researchers aimed to establish a sustained bandwidth that would allow them to transfer up to one petabyte of data per day.
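For context, sustaining one petabyte per day corresponds to roughly 11.6 gigabytes (about 93 gigabits) per second, as the quick calculation below shows (assuming decimal petabytes).

```python
# Back-of-the-envelope check of the sustained rate implied by 1 PB per day.
petabyte = 10**15                 # bytes (decimal petabyte)
seconds_per_day = 24 * 60 * 60

rate_bytes_per_s = petabyte / seconds_per_day
print(f"{rate_bytes_per_s / 10**9:.1f} GB/s")        # ~11.6 GB/s
print(f"{rate_bytes_per_s * 8 / 10**9:.0f} Gbit/s")  # ~93 Gbit/s
```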

While similar simulations have been conducted before, what sets this work apart is the scale of the computation, the associated data generation and transfer, and the size and complexity of the final analysis. The researchers also tapped the unique capabilities of each supercomputer: they performed cosmological simulations on the ALCF’s Mira supercomputer and then sent huge quantities of data to UI’s Blue Waters, which is better suited to the required data analysis tasks because of its processing power and memory balance.

Typically, cosmological simulations can only output a fraction of the data they generate because of data storage limitations. However, with this new approach, Argonne sent every data frame to NCSA as soon as it was generated, allowing Heitmann and her team to reduce the storage demands on the ALCF file system.
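In effect this is a stream-and-delete pattern: each frame is shipped off-site as soon as it lands on the local file system, and the local copy is then removed. The sketch below only illustrates that idea; the directory path and ship_frame helper are assumptions, not the project's actual transfer tooling.

```python
# Illustrative stream-and-delete loop: transfer each new frame as soon as it
# appears, then delete the local copy to keep file-system usage low.
# The path and ship_frame() helper are hypothetical placeholders.
import os
import time

LOCAL_FRAME_DIR = "/local/scratch/frames"   # assumed output directory

def ship_frame(path):
    """Placeholder for the actual wide-area transfer of one frame."""
    print(f"shipping {path} to the remote analysis site")

def stream_frames(poll_seconds=10):
    seen = set()
    while True:                              # polls indefinitely in this sketch
        for name in sorted(os.listdir(LOCAL_FRAME_DIR)):
            path = os.path.join(LOCAL_FRAME_DIR, name)
            if path in seen:
                continue
            ship_frame(path)                 # send the frame off-site ...
            os.remove(path)                  # ... then free the local storage
            seen.add(path)
        time.sleep(poll_seconds)
```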

The full article relating to this research project – written by Jared Sagoff and Austin Keating – can be found on the Argonne National Laboratory website. 
