
HPC User Forum discusses problems with exascale computing

Now in its 10th year, the HPC User Forum held its most recent meeting at the EPFL (Swiss Federal Institute of Technology in Lausanne) on 8-9 October. Organised by users, funded by industry sponsors and administered by the market research company IDC, the forum meets four times per year, typically twice in the US and twice in Europe. Its goal is to provide a common voice to help steer the development of the HPC industry and its applications. Roughly 60 people – HPC users, system administrators and representatives of suppliers to the HPC community – were in attendance, including many of the most experienced HPC experts in the world. These meetings are open to any interested party and are free of charge; dates for upcoming meetings are posted at www.hpcuserforum.com.

Many of the presentations described how major research institutions and government-sponsored programmes are using HPC, and outlined their current hardware and software setups. At this meeting, such talks were given by representatives of the Swiss National Supercomputing Centre, the EPFL's own Blue Brain project, the US National Cancer Institute, CERN, NASA, the US Department of Energy, the Edinburgh Parallel Computing Centre (EPCC), the US National Science Foundation and the PRACE (Partnership for Advanced Computing in Europe) project. Each of these presentations included a section illustrating the challenges these organisations face in building, maintaining and expanding their HPC facilities. In addition, vendor presentations were given by IBM, Microsoft and Altair.

A particularly interesting session was a panel discussion titled 'On Using HPC to Advance Science-Based Simulation'. One panellist opened by arguing that HPC today starts from hardware building blocks that are simply wrong for what the community is trying to do. An especially blunt comment came from a NASA engineer, who stated that with his current software 'exascale is a waste of my time': the radiation methods he uses are not HPC-friendly, although he does need scalability for future efforts. But is the HPC market itself big enough to drive hardware developments? In addition, much large-scale HPC funding typically targets one specific application, which does not necessarily lead to hardware that is generally useful for other applications. Is there a common denominator across applications that hardware vendors can address while still seeing a return on their investments? The group also outlined the problems of handling and analysing the huge volumes of data coming from HPC-based experiments and simulations.

The industry representatives did address these future needs. Without disclosing details of unannounced products, IBM's VP of Deep Computing, David Turek, noted that Moore's Law is essentially dead and that getting to exascale computing will require on the order of 100 million cores, a scale that brings an entirely new set of problems. He extrapolated from 2005-type designs in terms of power, memory, network bandwidth, reliability and I/O bandwidth, and showed that major breakthroughs are needed before exascale computing can become a reality. For example, a 400-teraflop Blue Gene draws roughly 2MW of power; extrapolated to the exascale, that demand grows to around 2GW for a single facility. He did note that IBM is actively working to solve all of these problems. On the software side, Beat Sommerhalder, Western Europe HQ Product Manager for HPC at Microsoft, discussed Visual Studio 2010 and how Parallel LINQ makes it far easier for ISVs to write software that actually takes advantage of multicore architectures.
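
As a rough sense-check on the scaling problem Turek described, the short sketch below runs the naive arithmetic. The ~2MW, ~400-teraflop baseline and the purely linear scaling are assumptions made here for illustration, not figures from his talk; they land in the same multi-gigawatt range as the ~2GW quoted at the meeting.

```python
# Back-of-envelope extrapolation of power demand from a ~400-teraflop system
# to an exaflop system, assuming power scales linearly with peak performance.
# The 2 MW baseline is an assumption used purely for illustration.

baseline_flops = 400e12       # ~400 teraflops
baseline_power_mw = 2.0       # assumed ~2 MW for a Blue Gene of that size
target_flops = 1e18           # one exaflop

scale = target_flops / baseline_flops               # ~2500x more compute
naive_power_gw = baseline_power_mw * scale / 1000   # MW -> GW

print(f"Naive linear extrapolation: ~{naive_power_gw:.0f} GW")
# Prints ~5 GW -- the same order of magnitude as the ~2 GW quoted at the
# meeting, and far beyond what a single computing facility can supply today.
```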
