
Kraken hits 2 billionth CPU hour

The Kraken Cray XT5 supercomputer will deliver its 2 billionth CPU hour to open science this month, having achieved a 96 per cent utilisation rate during October.

Funded by the National Science Foundation (NSF) and managed by the University of Tennessee’s National Institute for Computational Sciences (NICS), Kraken is located at Oak Ridge National Laboratory. Kraken is one of the integrated digital resources of the eXtreme Science and Engineering Discovery Environment (XSEDE), successor to NSF’s TeraGrid project.

These noteworthy achievements highlight different aspects of NSF’s largest machine. Delivering 2 billion hours emphasises the long-term success of the machine and the staff at NICS who work to maintain it and aid users in their scientific endeavours.

'Kraken meets the user community’s needs across a broad range of scientific domains and job sizes,' said NICS executive director Sean Ahern. 'NICS provides roughly 60 per cent of all the hours allocated on NSF resources, and we’re able to do this while maintaining extremely high utilization and delivering billions of hours.'

'Other centres with large machines have policies stipulating that projects must use a sizeable portion of the machine to even be considered for an allocation,' said NICS system administrator Troy Baer. 'Kraken runs everything from projects that need one node to teams who would run the entire machine for weeks if we would let them.' This mix of jobs permits consumption of nearly all of Kraken’s 112,896 cores as smaller jobs take advantage of unused nodes in the spaces between larger jobs.
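The pattern Baer describes — small jobs slotting into nodes left idle around large jobs — is commonly known as backfill scheduling. The toy sketch below illustrates the idea with a greedy first-fit pass over a job queue; the node counts and job names are hypothetical, and this is not the actual Kraken scheduler:

```python
# Toy illustration of backfill-style scheduling: small jobs consume
# the nodes left idle by large jobs, pushing utilisation toward 100%.
# Machine size and queue are invented for illustration only.

TOTAL_NODES = 100  # hypothetical machine size


def schedule(queue, free_nodes):
    """Greedy first-fit pass: start every queued job that fits in
    the currently free nodes; leave the rest waiting."""
    started = []
    for job, nodes in queue:
        if nodes <= free_nodes:
            free_nodes -= nodes
            started.append(job)
    waiting = [(j, n) for j, n in queue if j not in started]
    return started, waiting, free_nodes


# One big job takes most of the machine; small jobs backfill the gap.
queue = [("big", 90), ("small-a", 6), ("small-b", 4), ("huge", 95)]
started, waiting, free = schedule(queue, TOTAL_NODES)
print(started)  # ['big', 'small-a', 'small-b']
print(free)     # 0 -> no idle nodes in this toy example
```

Real batch schedulers add reservations and time limits so that backfilled jobs cannot indefinitely delay a waiting large job, but the core space-filling idea is the same.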