Scientific Computing World's Beth Sharp and Dr Tom Wilkie talk exascale in the ISC HPC blog
As we write this blog, drivers are queuing at petrol stations across Britain, panic-buying fuel for their cars and gripped by fear that supplies are about to run out. Ironically, if there is a shortage it will result from the actions of the panic-buyers themselves, triggered by a few ill-considered words from a senior Government minister about a strike by tanker-drivers – a strike which may not happen at all, as the unions and employers have restarted negotiations.
The broader message, though, is stark. The security and the price of energy are major concerns of consumers and political leaders alike. It is therefore no surprise to see that, within the programme of ISC’12, two keynote speeches focus specifically on energy efficiency: on Tuesday, Dr Byungse So, from Samsung, will talk about the role of memory technologies in energy-efficient high-performance computing (HPC); while on Thursday, Professor Dr Arndt Bode, the director of the Leibniz Rechenzentrum, will talk not just about energy efficiency but about extreme energy efficiency with the centre’s SuperMUC machine. ISC’12 also has an entire session explicitly dedicated to ‘Energy-efficient HPC centres’. Intriguingly, the session title adds the rider ‘At what cost?’
We would argue that whatever the cost of energy-efficient HPC centres, the cost of energy-inefficient centres is insupportable. In 1954, Lewis Strauss, chairman of the US Atomic Energy Commission, claimed that nuclear power – fusion rather than fission – would deliver ‘electricity too cheap to meter’. Those days of naïve optimism are long gone – no one today can point to any technology that will stop the inexorable increase in energy costs.
Across the whole spectrum of computing, consciousness of the need for energy efficiency has been growing in recent years. The amount of electricity consumed in the world’s data centres, for example, doubled between 2000 and 2005, but the rate of growth slowed significantly between 2005 and 2010, owing partly to the 2008-9 economic crisis and partly to the increased prevalence of virtualisation in data centres and the industry’s efforts to improve energy efficiency. Google recently chose Hamina in Finland as the site for a new data centre because its position on the Baltic Sea meant that the centre could use sea-water to provide ‘chiller-less’ cooling that is more energy-efficient (and also cheaper!).
In February 2007, several key companies came together to improve the efficiency not just of data centres but also of business computing more generally by forming The Green Grid. This not-for-profit consortium now has 175 members, consisting of end-users, policy-makers, technology providers, facility architects and utility companies.
In HPC, the issue of energy efficiency was recognised earlier. The case for a Green500 list was set out in some detail six years ago, at the 2nd IEEE IPDPS Workshop on High-Performance, Power-Aware Computing in April 2006. The first Green500 list itself was published in 2008, ranking systems according to their ‘Flops per Watt’.
But energy efficiency needs to be considered holistically, taking account of what is done with the ‘waste’ energy after the power has been consumed to do the computational work. This is usually rejected to the environment as heat – Google is essentially dumping its waste heat into the sea at the Hamina data centre. Does this mean that part of the ‘cost’ of energy-efficient HPC centres, to be discussed at ISC’12, will be relocating them to places where cooling is easier and cheaper? A truly energy-efficient HPC centre would find a way to make productive use of the waste energy rather than dumping it in the sea or the air. It might even be possible to make money out of this waste ‘product’ by selling it as district heating or low-grade process heat.
Ultimately, though, it is on the road to exascale that the price of energy will dominate the future of HPC. And again, this is where energy efficiency will turn out to be a benefit rather than a cost to the community. The best performer in the November 2011 Green500 list was the IBM BlueGene/Q machine at the University of Rochester in the USA, with a tally of just over 2000 Mflops/Watt. It is not the fastest machine in the world, coming in at place 29 on the Top500 list. But a straightforward extrapolation of its technology would indicate a power demand of around 500 MW at exascale.
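That extrapolation is simple to check: an exaflop is 10^18 floating-point operations per second, so dividing by an efficiency of roughly 2,000 Mflops/Watt gives about 500 MW. A minimal back-of-the-envelope sketch in Python, using the rounded figure quoted above rather than the exact Green500 entry:

# Rough extrapolation from Green500 efficiency to exascale power draw.
# 2,000 Mflops/Watt is the approximate figure quoted in the text above.
exaflop = 1e18                # floating-point operations per second at exascale
flops_per_watt = 2000e6       # 2,000 Mflops/Watt
power_watts = exaflop / flops_per_watt
print(f"Power at exascale: {power_watts / 1e6:.0f} MW")   # prints about 500 MW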
This is obviously not feasible, so all vendors are looking at how to improve efficiencies such that an exascale system in 2020 should be rated at between 20 and 50 MW. At today’s prices in the USA that would cost somewhere between $20M and $50M a year, which is still beyond the reach of most users. But if exascale machines are made modular, so that the technology can be disaggregated into something smaller, it would be possible to create highly energy-efficient computers in the petaflop domain – petascale performance for the same energy costs as a Linux cluster today. That would be a benefit to everyone in HPC.
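The annual bill follows from the same kind of arithmetic. As a rough sketch (the electricity price of about $0.11 per kWh is our assumption for US industrial rates, not a figure from the ISC programme):

# Annual electricity cost for a machine drawing constant power all year.
# The price per kWh is an assumed round figure for US industrial electricity.
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.11   # dollars, assumed
for power_mw in (20, 50):
    kwh_per_year = power_mw * 1000 * HOURS_PER_YEAR
    cost_millions = kwh_per_year * PRICE_PER_KWH / 1e6
    print(f"{power_mw} MW -> roughly ${cost_millions:.0f}M per year")

At that assumed price, 20 MW comes to roughly $19M a year and 50 MW to roughly $48M, in line with the range quoted above.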
In the Wizard of Oz, Dorothy is advised that only by following the Yellow Brick Road will she get to her destination safely. For the HPC community, the road to a secure future is paved with green bricks.
Beth Sharp is editor, and Dr Tom Wilkie is editor-in-chief, of Scientific Computing World, based in Cambridge, UK.