
Parallel programs need new maths

In the first report from last week’s PRACEdays15 conference in Dublin, Tom Wilkie considers why so much Exascale software will be open source and why engineers are not using parallel programs

Disruptive change is needed if software is to take full advantage of next-generation and Exascale machines, delegates to the PRACEdays15 conference in Dublin were warned last week. Researchers and engineers may have to reverse the way in which they approach the task of writing software, by starting from parallelism and only then thinking of the mathematics, said Mark Parsons, director of the UK’s Edinburgh Parallel Computing Centre (EPCC).

There is currently too little focus on usability, he told his audience: ‘It’s not got any easier to use HPC over the past 20 years. The amount of hand-holding we [at EPCC] have to give an SME for it to do its first simulation is enormous.’ Usability is not just an issue of programming languages, he went on, but also of applications.

Open source software may not scale

Reporting on the EU-funded Cresta project to review software and encourage co-design for Exascale, he warned that two major open-source software packages, OpenFOAM and OpenIFS, would not scale to the massively parallel architectures of the future. ‘It will cost many man years to port the codes to Exascale and massive parallelism by 2025,’ he told the meeting.

The Cresta project, which has now completed its work, revealed that OpenIFS ‘is in no way appropriate for Exascale,’ he continued. The community supporting the code had taken the point to heart and was starting to look at how it could restructure the software.

OpenIFS is, despite its name, technically not open source, but perpetual licences are available free of charge to institutions, though not to individuals. The project is led by the European Centre for Medium-Range Weather Forecasts (ECMWF) to provide an easy-to-use, exportable version of the IFS system used at ECMWF for operational weather forecasting.

The situation for OpenFOAM, one of the most widely used CFD codes, is worse, however. ‘It is not even a petascale code. It’s never going to get to Exascale in its current form,’ Parsons said. In fact, the Cresta workers ‘gave up on’ OpenFOAM.

But Exascale software will be open source

The importance of open-source software was underscored at the meeting in comments by Eric Chaput from Airbus, who said that in future the company would be using open-source software as the basis for its engineering simulation work, rather than relying on commercial software from the independent software vendors (ISVs), because of the cost of licences. Commercial software was too expensive even for a company of Airbus’s size, he explained, and partly because of that very size: most licensing models were ‘per user’, and Airbus had so many engineers and researchers carrying out simulations and other tasks that paying for so many licences was more than the company was willing to do.

In any case, according to Lee Margetts, lecturer in computational mechanics at the University of Manchester, the ISVs see their market as desktops and workstations, not HPC, let alone Exascale. Reporting on the results of a survey of 250 companies conducted late last year for NAFEMS, the international organisation set up to foster finite element analysis, he told the meeting that ISVs were moving towards supporting accelerators such as Nvidia GPUs and the Intel Xeon Phi coprocessor. However, rewriting their code to port it to FPGAs or ARM processors, or to cater for features important to Exascale such as fault tolerance and energy awareness, was for many ISVs ‘not on their roadmap. Some vendors don’t know what it is,’ he said.

Most engineers use workstations, not HPC

In paying so little attention to the extreme end of computing, the ISVs are simply following the market, according to his presentation. Margetts’ survey looked at the computing platforms that engineers are using to carry out engineering simulation, and it revealed that the standard workstation and laptop are where most engineers do most of their work.

Those ISVs that are aware of developments in Exascale computing see it as a high-risk area. They see several radically different approaches being taken to the hardware and no clarity as to what will be the final design. The ISVs are not going to deliver Exascale software, he continued, and hence Exascale engineering software needs to be open source.

The EPCC’s Mark Parsons reached a very similar conclusion, based on his view that the first Exascale machines are ‘going to be large and difficult systems’ that would be power hungry and would employ parallelism at 500 million to a billion threads. In contrast, he pointed out, most users are not taking full advantage of the parallelism that already exists today. Even on ARCHER, the UK’s national supercomputer facility at Edinburgh, most jobs were running on a few tens of thousands of cores, not hundreds of thousands, he said.

Software is being left behind

In Parsons’ view: ‘Hardware is leaving software behind, which is leaving algorithms behind.’ In contrast to the relentless changes in hardware, ‘algorithms have changed only incrementally over the past 20 years. Software parallelism is a core challenge. We need to see codes moving forward.’

However, there was a strand of pessimism in Parsons’ presentation. Although he thinks that Europe needs to spend as much on software as on hardware development in its Exascale research programme, ‘I don’t see that money being spent,’ he said. Work was being done on the parallel implementation of algorithms in the USA, but much less effort was being undertaken in Europe.

The meeting heard that the issue is not one of Government or EU funding alone, nor of the researchers: part of the problem is the inherent conservatism of the end-users of applications. Chemists might have confidence, gained over years of use, in codes that scale to 250 nodes, say, but will not trust new codes that scale to thousands of nodes. There is a very similar situation in engineering, Parsons concluded, where companies would stick with, say, Ansys’ commercially available software rather than put their confidence in a new highly scalable code.

This is the first in a series of reports from PRACEdays15, held in Dublin last week. Robert Roe’s report on the role of HPC in the host country, Ireland, can be found here. On a similar theme, Tom Wilkie writes about how supercomputing for small companies can be made simple. European supercomputing policy was set out at PRACEdays15, prompting the question whether a European company will build Europe’s first Exascale computer. In the final report, Robert Roe compares and contrasts the support given for HPC in Japan and in Europe.
