
Prace: high-performance computing's Cern?


Tom Wilkie reports on some surprising suggestions about the future shape of high-performance computing

China, and not the USA, may well be the first nation to build an Exascale computer, according to Bill Gropp, chairman of the SC13 supercomputing conference and exhibition. Putting it in slightly oblique terms, he told a press conference at the event in Denver, Colorado, in November: ‘I am not sure that the people to get there first will be from within the USA.’

His assessment was shared by Peter Beckman, from the US Argonne National Laboratory, and one of the intellectual drivers of US high-end computing. In an interview in Denver, he remarked that he had just returned from a meeting in China and ‘China has decided that high-performance computing is important for its economy. They have a system where they are very deliberate in making long-term plans with funding and clear goals.’ But the Chinese do not just want ‘a nice machine’, in his view: ‘They want students, post-docs and professors to make progress technically.’

In contrast, and in the aftermath of the recent US Government shutdown and ongoing ‘sequestration’, Beckman pointed out with dry understatement: ‘The USA has not been as good at planning and funding large science and technology programmes over the past few years – even though everyone agrees that science, technology, and medicine is where the US needs to compete.’

His worries about the future development of high-performance computing in the USA were shared by Bill Gropp, who told the SC13 press conference that restrictions by the US National Laboratories on the number of people who could attend the conference had fallen disproportionately on the young, even though these junior scientists were the very ones who might get the most out of the experience of attending. He cited the example of MPI, the standardised, portable message-passing interface designed by researchers from academia and industry to function on parallel computers. That sort of collaboration could come about only at events such as SC13, he maintained, where people physically got together to discuss issues and problems in the subject. ‘I’m hoping for common sense to come back,’ he said.

Constraints of funding also preoccupied Peter Beckman. He pointed out that the Top500 list of the fastest computers in the world reflected not only the available technologies but also the overall funding available – and that funding, as well as energy consumption, was constraining the development of new supercomputers.

He floated the suggestion that high-performance computing should, perhaps, be reconfigured as an internationally funded ‘Big Science’ project, rather like the European Laboratory for Particle Physics, Cern, near Geneva in Switzerland.

Indeed, a prototype already existed, he said, in the form of Prace, the Partnership for Advanced Computing in Europe, which was established to facilitate high-impact scientific discovery and engineering research and development by offering researchers access to world-class computing and data management resources and services through a peer-review process. Prace, Beckman remarked, ‘is a good model. I like it a lot.’

He pointed to the training courses and meetings that Prace organises, and to its summer school in particular – all of which are aimed at bringing scientists together to understand how they can use high-performance computing to achieve the goals of their research. There was some similarity, he said, with the two-week training course in parallel programming that Argonne had organised earlier in the summer. ‘Finding computational scientists who can do both science and computing is hard,’ he said. Although commercial data centres are using high-end computing, the dotcoms just want to recruit computational scientists whereas, in Beckman’s view, high-performance computing needs both them and those who can apply the mathematics to real scientific problems, if HPC is to have a viable future. ‘The community needs to be broader and bigger, and we want more applications to use HPC,’ he said.

Independently of Beckman, Bill Gropp advanced a very similar view: ‘No-one does parallel computing for the fun of it,’ he said. ‘Parallel computing is grim sometimes’ – and with a laugh, he continued: ‘I like doing it.’ But he also stressed the need for computation performance to be aimed at accomplishing science. And, he said, future machines may be more differentiated – with each specifically tuned to a particular application.

Both Beckman and Gropp see future machines as being different from even the fastest on the current Top500. Some of the changes are coming from new technologies: Beckman pointed out that, in an effort to increase efficiency, interconnects are moving onto the chip in designs for future Exascale machines. Other influences are financial and institutional – China is spending on HPC whereas the US and Europe lack a common strategy. But if HPC is to survive, both men were of the same mind: it must serve the end-user scientist and engineer and respond to the demands of science. A transatlantic version of Prace could help to make that happen.

It’s interesting to see that Prace and the US Extreme Science and Engineering Discovery Environment (XSEDE) are already collaborating on many initiatives. Is this the shape of the future of high-performance computing?