
A new power base in Europe

David Robson explores the latest initiatives driving the development of high-performance computing in Europe

Since the Top500 list of the most powerful supercomputers in the world was first compiled in 1993, Europe has noticeably lagged behind other countries in delivering high-power processing facilities for scientists and engineers, with Japan and the US typically holding nine of the top 10 slots.

This has meant that some European scientists have been reliant on American resources to do their high-level simulations and data processing – a far from ideal situation that has no doubt put strain on the creative process of scientific research.

All this might be about to change, with a new initiative that could finally put Europe firmly on the supercomputing map. The EU-funded PACE project plans to create ‘three to five HPC leadership systems of Petaflop/s performance’ by 2009 – computers that would operate faster than any system in the world today.

Europe has been moving in this direction for some time. Currently, scientists have a couple of options if the processing requirements of their research exceed the power of their institution’s own facilities. Visitor programmes, such as HPC-Europa, coordinate a number of supercomputing centres and encourage scientists to visit them in person, even in another country, to perform their research on site. For more convenient access, the DEISA project connects supercomputing sites across Europe through a high-speed network, allowing scientists to use the different facilities from their desktops.


The supercomputer at the EPCC in Edinburgh, UK.

The application processes for these facilities are largely similar. Scientists submit proposals, just as they would for academic funding. A panel of supercomputing experts assesses the proposals on their academic merit, the likelihood that the work will be published, their processing requirements, and how much the research would benefit from a supercomputing facility. Industrial and commercial organisations can normally use the facilities too, although they typically pay for the privilege.

For almost as long as they have existed, supercomputing centres have invited visitors to benefit from their facilities, but the recent HPC-Europa programme, founded in 2003, has provided a common framework for six key European centres, simplifying the application process. Applications for all the centres go into a common pool, so that the facility best suited to each scientist’s interests can be selected.

Any scientist in the EU, or an associated state such as Iceland, Israel or Turkey, is eligible to apply and visit a centre in the UK, France, Spain, The Netherlands, Italy or Germany. Many visitors do come from further afield, as the scheme has a policy of giving priority to research groups that wouldn’t normally have access to similar facilities. ‘In the first year, 27 per cent of our visitors came from Eastern Europe and Turkey,’ says Catherine Inglis, the programme coordinator for the EPCC centre in Edinburgh, a member of HPC-Europa. ‘We’ve had applications from all [of the 33 eligible countries] apart from Liechtenstein, Iceland and Malta.’

In addition to allowing scientists to use the facilities, the project also pays their travel and living expenses for the typical three-month stay. Each visitor is given two contacts at the centre – a specialist in high-performance computing, and a specialist in their field of research. ‘The idea is to enhance their research,’ says Inglis. ‘It helps them to integrate into the department, which gives them new input and ideas.’

Now that DEISA exists, a scientist could connect to the different centres without leaving home, but that would obviously forgo the element of collaboration that can improve research. Inglis also believes that the HPC contact can help researchers who may not know how to exploit the computers’ power to its full potential. ‘I think there’s more value sitting down with someone for two hours than going through email help desks – particularly if their first language isn’t English.’

However, for researchers with more experience in high-performance computing, DEISA is an attractive option. According to Victor Alessandrini, the director of DEISA, it was built with the purpose ‘to take the leading supercomputing resources in each European country and add value by integration’. The different centres are connected by a high-speed network, built and maintained by an organisation called Dante, which ensures that no bottlenecks appear in the data transfer that could undermine the performance gained from the centres’ high processing power. ‘It’s been operating for three years, and we have succeeded – which was not obvious in the beginning.’ One of the benefits of this system is that, rather than being tied to one facility, research groups can use the computers best suited to the different stages of their work. So the pre-processing of data could be done in Germany, the processing performed in France, and the post-processing carried out in Italy.

The DEISA consortium has 11 members, including the Barcelona Supercomputing Centre (BSC) in Spain and the Leibniz Computing Centre of the Bavarian Academy of Sciences and Humanities in Germany, which host the ninth and tenth most powerful computers in the world, according to the Top500 list. The BSC’s machine now contains 10,240 processors. Sergi Girona, the operations director at the BSC, says that it is actually the largest centre in the world that is open to general research: the eight centres above it on the list have to limit their access to certain groups, so European scientists are fortunate to be able to access this facility remotely.


MareNostrum, the supercomputer at the Barcelona Supercomputing Centre, is housed in a former chapel.

Alessandrini says that the DEISA infrastructure is ‘a major step in the deployment of a real shared European supercomputer’. The next step towards this goal is the PACE programme, due to be implemented by 2010. It will rely on the underlying DEISA infrastructure, but will add a tighter overarching management of the different sites, how they are used and how they evolve. At the moment, each country controls this individually.

The project will be funded by 15 member states, and the money should go towards improving the existing infrastructure, which could include upgrading three or more centres to Petaflop/s processing power. PACE will also oversee the development of massively parallel software to help scientists get the most out of the different sites.

This couldn’t happen at a better time, if European scientific research is to remain competitive with the rest of the world. ‘It’s very important that we develop more competence for supercomputing in Europe,’ says Achim Bachem, a coordinator of PACE. ‘We want more vendors involved in the centre, like IBM, Bull, Cray and Hewlett Packard. In the US, they have lots of programmes that help them to be the leader. It’s important that Europe has independent access, so we’re not depending on technology elsewhere in the world.’

Bachem points out that ‘a supercomputer of today is the laptop of tomorrow’, which is why Europe needs to remain up to date and competitive in the fast-moving world of supercomputing. But that also raises the question of whether these supercomputing centres will ever become redundant. High-performance computing products are now readily available that allow scientists to run their own parallel processing across more than one processor. It is possible that, one day, all of a scientist’s computing needs could be met on their own desktop. Nvidia, for example, historically a provider of graphics processors, has recently branched out into producing parallel processing chips for high-performance computing. Each GPU it produces delivers 500 gigaflop/s, which may already be enough for many high-performance computing problems.

Andy Keane, the general manager of GPU computing at Nvidia, believes that it would be much more efficient for an organisation to solve its problems using its own systems rather than the supercomputing centres, particularly if it has to pay for processing time: ‘With supercomputing centres, they have to schedule the time, and budget for it. It’s much more convenient to give everyone some form of this computing, so they can use the resources right away, which can unlock a lot of creativity.’ However, he agrees that some problems would be too intensive, even with this new technology. ‘Supercomputing centres are safe – they will always exist,’ he says. One thing is certain: simulating the world around us is becoming as important as wet experiments in everything from drug discovery to astrophysics, and high-power supercomputers will soon become a tool as essential as the microscope.