Widening horizons for high-performance computing
When Scientific Computing World celebrated its 10th anniversary, the pages of the magazine (for it was almost entirely a print-on-paper title in those days) contained very little mention of the emerging field of high-performance computing.
How times have changed! Thomas Sterling and Donald Becker built the first Beowulf cluster at NASA's Goddard Space Flight Center in 1994 – the year in which Scientific Computing World itself was launched. Although the idea was to create a cost-effective alternative to the large supercomputers of those days, the benefits for ‘ordinary’ scientists and engineers were barely perceptible even a decade ago. But Beowulf heralded a switch to commodity off-the-shelf components that has cut costs over the intervening years to the point where high-performance computing is nowadays within the reach of not just major industrial companies, but small and medium-sized enterprises as well.
Simulation and optimisation of engineering designs for automotive and aerospace applications are now done almost entirely in silico. Oil exploration companies find that intensive computation to extract as much information as possible from their data cuts the money that they have to spend on highly expensive exploration activities out in the field.
As Scientific Computing World celebrates its 20th anniversary, it is now a multi-media publication appearing in print and digital formats. And its website, email newsletters, webcasts, and, yes, magazine pages, are filled with stories reflecting the growth in the applicability of high-performance computing outside of the narrow confines of the US National Laboratories and defence establishments around the world.
Around the world, different groups – usually with national or inter-governmental support – are exploring ways of getting to exascale (10^18 floating-point operations per second), which represents a thousand-fold increase over the first petascale computer that came into operation in 2008.
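The scale of that jump can be sanity-checked in a few lines of Python (a trivial sketch; the figures are simply the SI prefixes for FLOP/s, not measured benchmarks):

```python
# SI prefixes for floating-point operations per second (FLOP/s)
PETA = 10**15  # petascale: first reached in 2008
EXA = 10**18   # the exascale target

# the jump from petascale to exascale is a factor of...
factor = EXA // PETA
print(factor)  # 1000 – a thousand-fold increase
```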
As Scot Schultz from Mellanox argues in the following pages, there will not be any single dominant microprocessor architecture for next-generation exascale-class installations, and alternative architectures to x86, such as Power and 64-bit ARM, are already in view.
This makes the interconnects all the more important, for unless they are fast and efficient, the movement of data will be the limiting factor on the speed of computation. The variety of architectures may well appear to pose a problem for compiling a program and getting it to run on different machines, but Doug Miles from PGI sees open source as the remedy, enabling proprietary compiler developers to focus on innovating in higher-level optimisations, parallel programming models, and productivity features.
Giampietro Tecchiolli, CTO of Eurotech, argues that the practical case for exascale computing lies less in the building of such a machine itself than in the fact that progress on the road to exascale will enable cost-effective petascale computing to diffuse yet more widely into science and engineering, with all that that means for improving engineering and design capabilities.
Mark Seager, chief technology officer for the Technical Computing Ecosystem at Intel, has no doubts. In his view, the single most important truth about high-performance computing over the next decade is that it will have a more profound societal impact with each passing year, in areas such as disease research and medical treatment; climate modelling; energy discovery; nutrition; new product design; and national security.
But computing depends on people, not just technology – talented, creative people, as Altair's Sam Mahalingam stresses. So we conclude this overview of HPC with two contemporary initiatives to attract new people and equip them with the skills they need to drive the next wave of evolution in HPC. Jon Bashor looks at the Student Cluster Challenge initiative, and Paul Messina calls for more initiatives like the Argonne Training Program on Extreme-Scale Computing that he has organised.