Are we running out of codes?

In the second of his series on software, Tom Wilkie discusses whether a lack of application software is hindering the wider use of supercomputing

Hardware manufacturers, governments, and national laboratories are pursuing the technologies to build ever-faster computers, but industry is lagging behind in its adoption of supercomputing.

In his review of the Top500 list of the fastest computers, Erich Strohmaier told ISC’13 that the performance offered by accelerators had risen dramatically, but that ‘the commercial market does not have applications that run well on these systems and so the commercial market is not benefiting from the performance.’ In his keynote address, Bill Dally acknowledged the problem frankly: looking forward to the exascale world, ‘programming is a problem in 2020,’ he warned. He echoed Strohmaier’s verdict, saying that the systems available then will be ‘parallel, hierarchical, and heterogeneous’ but that, today, parallel programming is considered a ‘Ninja speciality’. He warned his audience: ‘The commercial world is not using parallel tools.’ As an indicator of the size of the problem, he remarked that a quick search of LinkedIn had found only about 13,000 people who claimed to be doing parallel programming.

The issue is not a lack of software to make the next generation of computers run efficiently – that is the subject of both commercial and government initiatives – but a lack of application software. There is, after all, little point in building superfast machines if only government defence laboratories are able to take advantage of them. Too few of the codes that actually do the calculations – producing results of interest to scientists and engineers in industry and in academia – are optimised to run on supercomputers. Within Europe, the problem has been recognised at the level of the European Commission itself. Kostas Glinos, head of the e-infrastructure unit at the Commission, told the PRACE seminar at the opening of the conference that ‘only a very few applications today take advantage of petaflop systems. New methods and algorithms must be developed.’

At ISC’13, Intel and HP announced one step towards increasing the uptake of high-performance computing by industrial end-users: they are co-funding a ‘Centre of Excellence’ at HP’s site in Grenoble, France. Ed Turkel, manager of worldwide HPC marketing at HP, said that one focus of the centre would be parallelising and optimising application software ‘because customers do not have the knowledge to do this’. Philippe Trautmann, EMEA sales director for HP, envisaged three sorts of users of the centre: independent software vendors (ISVs) and co-developers who need to optimise the ISVs’ code base; large companies that ‘need to integrate a complex software environment into large hardware’; and small- to medium-sized enterprises.

For some observers, the ISVs appear to be the missing factor in the wider adoption of HPC – both whether they have optimised their software for parallelism and how they price their product. Industrial users have their own in-house clusters but want the option of breaking out ‘into the cloud’ from time to time, to deal with a particularly difficult calculation. For this to make sense – technically as well as economically – the ISVs have to have built that flexibility into both the technical aspects of their software and their pricing structure.

According to Philippe Trautmann, the increasing heterogeneity of computers makes licensing complex for ISVs. For his part, Kostas Glinos conceded that ‘licensing is an issue.’ One end-user may have software from several different ISVs, and the fact that different ISVs may have mutually incompatible licensing models concerns Wolfgang Gentzsch, a strategic consultant for HPC, grid, and cloud computing. Some established ISVs levy an annual fee, but in the new era of occasional supercomputing in the cloud, charging ‘on demand’ or ‘pay per use’ may be more appropriate. Here, Dr Gentzsch thinks, the smaller start-up ISVs may have an advantage, as they are less wedded to annual fees. Gentzsch is co-founder of the UberCloud Experiment, which is designed specifically to solve existing industry end-user problems in current HPC environments (rather than looking to the problems of an exascale future) by identifying issues, such as licensing, that are roadblocks on the path to high-performance computing as a service. The experiment is conducted in three-month ‘rounds’, and the fourth round starts on 20 July 2013.

Michael Resch, director of the High-Performance Computing Centre (HLRS) Stuttgart, also expressed anxiety not only about the lack of software tools in Europe but also about the lack of facilities to train the next generation of parallel programmers or to encourage industrial users to adopt HPC. ‘We are running out of codes,’ he said. ‘The ISVs cannot keep up with the number of cores on the machines.’ But the number of European institutions where ISVs and industrial companies can tap into a reservoir of knowledge about HPC was in decline, he said.

Professor Resch cited the example of the Automotive Simulation Centre in Stuttgart (ASCS) where ISVs and motor car manufacturers could come together with HPC experts. It currently has 23 members: three car manufacturers (Porsche, Opel, and Daimler); 11 software vendors; two hardware vendors; two universities; two research facilities; and three individuals. The ASCS was founded to support application-oriented research in automotive engineering and promote the transfer of know-how from science to industry, particularly in numerical simulation.

Similarly, the Jülich Supercomputing Centre (JSC) has established a new type of domain-specific research and support structure: the Simulation Laboratory. Again, the rationale is explicitly to address the problem that application software is lagging behind HPC hardware developments and to encourage wider use of HPC. According to the deputy director of the Jülich Supercomputing Centre, Norbert Attig, the laboratory offers early access to new hardware, and vendors get access to applications. Four SimLabs were established at JSC initially, in the fields of computational biology, molecular systems, plasma physics, and climate modelling. In October 2012 the Terrestrial Systems SimLab was started, followed by a SimLab in Neuroscience that came on stream early in 2013.

According to Kostas Glinos, the European Commission is aware that expertise needs to be made more widely available through training and consultancy, and he expressed the hope that the European ‘Centres of Excellence’ would help plug the gaps. The Commission’s Centres of Excellence appear to be elaborate entities compared with the more modest commercial venture established by Intel and HP at Grenoble. The Grenoble centre, however, at least has the merit of being real and functioning in the here and now.

