
Towering performance by Sequoia


Breaking new ground for scientific computing, two teams of Department of Energy (DOE) scientists have for the first time exceeded a sustained performance level of 10 petaflops (quadrillion floating point operations per second) on the Sequoia supercomputer at the US National Nuclear Security Administration’s (NNSA) Lawrence Livermore National Laboratory (LLNL).

A team led by Argonne National Laboratory used the recently developed Hardware/Hybrid Accelerated Cosmology Codes (HACC) framework to achieve nearly 14 petaflops on the 20-petaflop Sequoia, an IBM BlueGene/Q supercomputer, in a record-setting benchmark run with 3.6 trillion simulation particles. HACC enables cosmologists to simulate entire survey-sized volumes of the universe at high resolution, tracking billions of individual galaxies.

Simulations of this kind are required by the next generation of cosmological surveys to help elucidate the nature of dark energy and dark matter. The HACC framework is designed for extreme performance in the weak scaling limit (high levels of memory utilisation) by integrating innovative algorithms and programming paradigms in a way that adapts easily to different computer architectures.
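To illustrate the weak scaling regime mentioned above: each process keeps a fixed share of the workload, so the total problem size grows with the number of processes, and ideal scaling means runtime stays constant as processes are added. The sketch below is purely illustrative (the function names, particle counts, and timings are hypothetical and are not taken from HACC):

```python
def weak_scaled_problem_size(particles_per_process: int, num_processes: int) -> int:
    """Total particle count when each process keeps a fixed workload.

    Under weak scaling, adding processes grows the problem rather than
    speeding up a fixed-size problem (which would be strong scaling).
    """
    return particles_per_process * num_processes


def weak_scaling_efficiency(t_base: float, t_scaled: float) -> float:
    """Ideal weak scaling keeps runtime constant as processes are added,
    so efficiency is the single-process time over the scaled-run time
    (1.0 means perfect weak scaling)."""
    return t_base / t_scaled


if __name__ == "__main__":
    per_rank = 2_000_000  # illustrative particles per process, not an HACC figure
    for ranks in (1, 1024, 1_000_000):
        print(ranks, weak_scaled_problem_size(per_rank, ranks))
    # Hypothetical timings: 100 s on one process, 110 s on many
    print(weak_scaling_efficiency(100.0, 110.0))
```

A run at trillions of particles, as in the Sequoia benchmark, sits deep in this regime: per-process memory is kept near capacity while the process count scales up.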

The HACC team is now conducting a fully instrumented science run with more than a trillion particles on Argonne’s 10-petaflop Mira, also an IBM BlueGene/Q system.