Petascale computing delivered to NERSC

The National Energy Research Scientific Computing Center (NERSC) in the USA has accepted its first petascale supercomputer. The flagship Cray XE6 system, called ‘Hopper’ in honour of American computer scientist Grace Murray Hopper, is capable of more than one quadrillion floating point operations per second, or one petaflops, and is currently the second most powerful supercomputer in the United States, according to the Top500 list.

‘We are very excited to make this unique petascale capability available to our users, who are working on some of the most important problems facing the scientific community and the world,’ said Kathy Yelick, NERSC director. ‘With its 12-core AMD processor chips, the system reflects an aggressive step forward in the industry-wide trend toward increasing the core counts, combined with the latest innovations in high-speed networking from Cray. The result is a powerful instrument for science. Our goal at NERSC is to maximise performance across a broad set of applications, and by our metric, the addition of Hopper represents an impressive five-fold increase in the application capability of NERSC.’

The Hopper system has already been utilised for many different projects, such as harnessing wind power. Although wind power technology is close to being cost-competitive with fossil fuel plants for generating electricity, wind turbine installations still provide less than one per cent of all US electricity. Because scientists don’t have detailed knowledge about how unsteady flows interact with wind turbines, many turbines underperform, suffer permanent failures or break down sooner than expected.

Since standard meteorological datasets and weather forecasting models do not provide detailed information on the variability of conditions needed for the optimal design and operation of wind turbines, researchers at the National Center for Atmospheric Research (NCAR) developed a massively parallel large-eddy simulation (LES) code for modelling turbulent flows in the planetary boundary layer – the lowest part of the atmosphere, which interacts with the shape and ground cover of the land. Using approximately 16,000 processor cores on Hopper, the NCAR team simulated the turbulent wind flows over hills at unprecedented resolution and improved the scalability of the code to ensure that it will be able to take advantage of peta- and exascale computer systems.

‘The best part of Hopper is the ability to put previously unavailable computing resources toward investigations that would otherwise be unapproachable,’ said Ned Patton of NCAR, who heads the investigation. ‘We seriously couldn’t make the progress we have been without NERSC’s support. We find NERSC’s services to be fantastic and truly appreciate being able to compute there.’