The benefits of a virtual European supercomputer

Several problems in science have been classified as 'Grand Challenges', whose solution is critical for scientific progress and technical advance. Among them are protein folding, turbulence, catalysis at the atomic level, cosmic evolution, modelling of climate and climate change, and drug design. To gain insight into these phenomena, computers far more powerful than any that can be built today are needed. The USA has a tradition of building and installing the most powerful supercomputers, surpassed for a few years only by Japan with the Earth Simulator system. European researchers do not have access to such powerful systems of their own and have relied on the upgrade or re-installation cycles of their individual national services. For Europeans, the situation is now set to improve: the national supercomputing centres are combining their resources in the DEISA project (Distributed European Infrastructure for Supercomputing Applications, www.deisa.org) to jointly deploy and operate a distributed European supercomputer.

The European vision was formulated in the spring of 2002. The idea was to establish supercomputing on a European scale to enhance scientific discovery. The first alliance, of institutions from six European countries, was formed then, and an Expression of Interest was submitted to the European Commission that same spring. A year later, eight leading supercomputing centres had come together to submit an EU proposal. The project started in spring 2004 and, another year later, in spring 2005, the core infrastructure, with four centres from three countries (France, Germany, Italy), was inaugurated during the DEISA symposium in Paris. At the same time, the three remaining large European supercomputing centres dedicated to basic scientific research joined the DEISA consortium, adding not only computing power and technical expertise but also giving the initiative added momentum. With this step, the aggregated computing power, initially 20 Teraflop/s peak performance, is set to surpass the 100 Teraflop/s threshold in spring 2006.

Integrating national supercomputers into a virtual supercomputer naturally leads to a grid infrastructure. To operate a grid infrastructure and provide adequate access to the resources, intermediate software layers called middleware are mandatory. To deploy and operate a production quality grid of supercomputers, reliable and mature middleware components are needed. Through DEISA, the idea was to deploy global file systems across Europe to decrease the complexity of middleware layers normally needed, and to provide transparent data access for scientists throughout Europe. In spring 2005, just one year after the project started, a production-quality global file system spanning Germany, France and Italy was demonstrated.

In June 2005, a real physics application, simulating turbulence effects in magnetic confinement devices, was executed on 256 processors of the supercomputing system in Garching near Munich, Germany.

The application was fed transparently with so-called restart data from a previous run (chained calculations are a standard technique in supercomputing, where projects may take months to complete), read from the disk system of a supercomputer at Orsay near Paris, France. After the calculation in Garching terminated, the restart file was written to a disk system at Juelich, Germany, to allow efficient continuation of the simulation on the Juelich supercomputer, while the results of the completed simulation step were written to the Garching disks.

In a post-processing visualisation step, carried out with computers in Bologna, Italy, output data from the Garching disk systems were fetched transparently, as if they were local. All data traffic occurred implicitly through the application's ordinary I/O (e.g. Fortran90 I/O calls), without any need for manual data transfers by classical grid techniques (via ftp or gridftp), at transfer speeds close to the 1 Gbit/s bandwidth of the DEISA network.
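The practical consequence for application code is that restart and result files on a remote DEISA site can be read and written with ordinary file I/O, with no explicit transfer step. The following is a minimal sketch of that pattern in Python; the mount points, file names and physics routine are hypothetical, and the demonstration described above used Fortran90 I/O rather than Python.

```python
import numpy as np

# Hypothetical mount points of the shared global file system; on a DEISA
# node these would simply appear as local directories, regardless of
# which site physically holds the disks.
RESTART_IN = "/deisa/orsay/turb/restart_0042.npy"    # disks at Orsay
RESTART_OUT = "/deisa/juelich/turb/restart_0043.npy" # disks at Juelich
RESULT_OUT = "/deisa/garching/turb/result_0043.npy"  # disks at Garching

def advance(state: np.ndarray) -> np.ndarray:
    """Stand-in for one chained simulation step (placeholder physics)."""
    return state * 0.99

# Reading the restart file looks like a local read, even though the data
# may reside at another centre; the global file system fetches the blocks
# over the DEISA network transparently.
state = np.load(RESTART_IN)

state = advance(state)

# Writing the next restart file and the results is equally transparent:
# the application just writes, and the data land on the chosen site's disks.
np.save(RESTART_OUT, state)
np.save(RESULT_OUT, state)
```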

With such a global file system in place, the next challenge is load balancing and job re-routing across Europe. The idea is simple: single-site supercomputers must be partitioned to host a variety of supercomputing projects at the same time, with requirements ranging from moderate processor counts (below 100) per job to demands for 512 or 1024 processors. With a re-routing option in place, large resources can be freed at one site for a huge, demanding project, while other sites take the load of the moderate jobs that would otherwise be blocked during execution of the big job, which may take weeks or even months to complete. Scientific communities with geographically distributed members and different kinds of computing needs will particularly benefit from the DEISA supercomputing infrastructure. Proposals have been put forward in which large supercomputing simulations are carried out in a first step, and the large volume of resulting theoretical 'data' is compared with experimental data in a second step.
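As a rough illustration of the re-routing idea described above, the sketch below implements a greedy toy policy: a job is placed on its preferred site if possible, otherwise it is redirected to the site with the most free processors. The site names and capacities are invented, and this is not DEISA's actual scheduling machinery, which is handled by the sites' batch systems and operators.

```python
# Toy model of cross-site job re-routing (hypothetical sites and sizes).
sites = {"garching": 1024, "orsay": 512, "juelich": 1312, "bologna": 512}
free = dict(sites)  # processors currently free at each site

def submit(job_procs: int, preferred: str) -> str:
    """Place a job on its preferred site if possible, otherwise re-route
    it to the site with the most free processors that can still hold it."""
    if free[preferred] >= job_procs:
        free[preferred] -= job_procs
        return preferred
    candidates = [s for s, f in free.items() if f >= job_procs]
    if not candidates:
        return "queued"  # no site can take the job right now
    target = max(candidates, key=lambda s: free[s])
    free[target] -= job_procs
    return target

# A demanding 1024-processor run occupies Garching completely ...
print(submit(1024, "garching"))   # -> garching
# ... so moderate jobs preferring Garching are re-routed instead of blocking.
print(submit(64, "garching"))     # -> juelich (most free processors)
print(submit(96, "garching"))     # -> juelich
```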

One such case is the large cosmological simulations carried out by the VIRGO consortium, which will be compared with experimental data and will also provide theoretical input for the International Virtual Observatory, a global petabyte grid of observed and simulated data. What will be needed is grid access and query enablement as a work package within the EURO-VO initiative, supported by the European Southern Observatory, the European Space Agency, and research institutions from France, Britain, Italy, Germany, Spain, and The Netherlands.

What comes first: an innovative, complex application or a new supercomputing architecture/infrastructure? Do new applications drive the building of a new computer architecture? Or are new computer architectures built first, with the architects only afterwards calling for the development of suitable applications? A simple resolution of this chicken-and-egg problem is that computational innovation seems to need both new applications and new infrastructures, like two feet taking steps forward alternately. Service activities in DEISA move the infrastructure leg; joint research activities and the extreme computing initiative move the application leg. With the infrastructure leg having taken a first major step, the application leg is now taking a big step of its own.

After supporting pioneering applications in joint research activities, the recently created DEISA applications task force is now addressing extreme and complex computing. Examples include applications tuned for extreme parallel scalability, workflows of computational steps, and so-called coupled applications.

Rather than spreading a huge, tightly coupled parallel application across two or more supercomputers, DEISA is able to allocate a substantial fraction of a single supercomputer to a single project. Since the latency between distant sites cannot be overcome, this approach is more efficient for such applications.

An example is the gyrokinetic turbulence simulation code TORB. Here, a new approach to parallelisation, called domain cloning (a supplement to one-dimensional domain decomposition), offers the opportunity to optimise the scaling properties of particle-in-cell codes such as TORB. A further adaptation, focused on clustered symmetric-multiprocessor computers, made it possible to apply the domain cloning concept to thousands of processors. Within DEISA, access to such large systems was made possible. On up to 1,024 processors of the IBM system operated by the European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading, UK, nearly linear speed-up was achieved, and on 2,048 processors the Terascale regime was reached, with 1.6 TBytes of total main memory available and an overall sustained performance of 1.3 TFlop/s, allowing for challenging turbulence simulations. The parallel efficiency was still 82 per cent, corresponding to a speed-up of about 1,680.
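The domain cloning idea can be summarised in a few lines: the spatial grid is decomposed into a modest number of domains, each domain is replicated across several 'clones' that push disjoint subsets of the particles, and the grid quantities deposited by the particles are summed across clones after each step. The sketch below, using mpi4py, shows only that communicator structure and the summing collective; the domain count and the physics routine are placeholders, not the actual TORB implementation.

```python
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()

N_DOMAINS = 8                                # 1-D domain decomposition (hypothetical)
N_CLONES = world.Get_size() // N_DOMAINS     # copies ('clones') of each domain
assert world.Get_size() == N_DOMAINS * N_CLONES

# Ranks that share the same domain form one clone group: they hold copies
# of the same grid slab but push different particles.
domain = rank % N_DOMAINS
clone_comm = world.Split(color=domain, key=rank)

grid_slab = np.zeros(1000)                   # this domain's slice of the field grid
particles = np.random.rand(100_000, 2)       # placeholder particle data for this rank

def push_and_deposit(particles, grid_slab):
    """Placeholder for the particle push and charge deposition of a PIC step."""
    deposit = np.zeros_like(grid_slab)
    # ... real code would scatter each particle's contribution onto the grid here ...
    return deposit

for step in range(10):
    deposit = push_and_deposit(particles, grid_slab)
    # The essential collective of domain cloning: sum the charge deposited by
    # all clones of this domain, so every clone sees the complete source term.
    clone_comm.Allreduce(MPI.IN_PLACE, deposit, op=MPI.SUM)
    # A field solve on the slab would follow, identical on every clone.
```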

In some multi-physics/multi-scale simulations, several computing modules are involved that treat different aspects of the physical phenomena and can proceed in a loosely coupled fashion without significant communication. The following example from climate and environmental sciences illustrates the situation. The aim of the project is to improve the efficiency of combustion and to reduce pollutant emissions in industrial systems (engines, energy production, industrial furnaces, and so on). Largely independent processes can be calculated simultaneously on different resources of the grid system, allowing radiative processes in the combustion, rarely considered in previous work, to be taken into account. Three coupled codes thus describe three physical phenomena: pollutants, combustion, and radiation (a schematic of the coupling loop is sketched below).


First simulations of the impact of radiative processes on flame behaviour. The temperature field is largely modified: the temperature decreases and the field becomes more homogeneous when radiative processes are included.


Coupled application on flame behaviour, consisting of three codes describing the three physical phenomena: pollutants, combustion, and radiation (courtesy of Denis Veynante, EM2C).
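One way to picture this loose coupling is a coordinating loop in which each solver advances independently over a coupling interval and only a few fields (temperature, radiative losses, pollutant concentrations) are exchanged between them. The sketch below is purely schematic, with placeholder solvers and made-up field updates rather than the project's actual codes; in DEISA each solver would run concurrently on a different resource of the grid.

```python
# Schematic of a loosely coupled multi-physics run: three solvers advance
# independently and exchange a small set of fields at each coupling step.

def combustion_step(temperature, radiative_loss):
    """Placeholder combustion solver: returns an updated temperature field."""
    return [t - 0.1 * q for t, q in zip(temperature, radiative_loss)]

def radiation_step(temperature):
    """Placeholder radiation solver: returns the radiative loss per cell."""
    return [1e-4 * t for t in temperature]

def pollutant_step(temperature, pollutants):
    """Placeholder pollutant chemistry, driven by the temperature field."""
    return [p + 1e-6 * t for p, t in zip(pollutants, temperature)]

n_cells = 10
temperature = [1800.0] * n_cells
pollutants = [0.0] * n_cells
radiative_loss = [0.0] * n_cells

for coupling_step in range(100):
    # Each call stands for many internal time steps of one code; the three
    # codes only meet at these exchanges, which is why the coupling is loose.
    temperature = combustion_step(temperature, radiative_loss)
    radiative_loss = radiation_step(temperature)
    pollutants = pollutant_step(temperature, pollutants)
```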

Automated production runs of coupled applications, however, require a co-allocation service that is not yet operational (it is expected in 2006). Nevertheless, since the development and optimisation of such coupled applications take time, DEISA is ready to start considering new application projects in this area.

Another example concerns so-called workflow applications, which include a series of coordinated compute and/or visualisation tasks on different platforms. Especially in areas where a series of well-established applications is intensively used within certain scientific communities, as for example in bioinformatics/genomics (or, in a somewhat less pronounced manner, in materials science), science gateways and support for workflow applications (which can use a different, optimised compute architecture for each step) can provide significant added value: ease of access to applications, ease of application instrumentation, ease of application pipelining, and access to resources while hiding the complex infrastructure behind them. For these functionalities, the European middleware system UNICORE (see www.unicore.org) plays a key role within DEISA.
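A workflow of this kind can be thought of as an ordered list of steps, each bound to the platform best suited for it, with the output files of one step becoming the inputs of the next. The sketch below expresses that idea generically in Python; the site names, commands and file names are invented for illustration, and this is not the UNICORE API, whose job descriptions and client interfaces are documented at www.unicore.org.

```python
# Generic sketch of a multi-site workflow: each step names a target
# platform and a command; outputs of one step feed the next.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    site: str          # which platform runs this step (hypothetical names)
    command: str       # what to run there
    inputs: list
    outputs: list

workflow = [
    Step("align", "bologna-cluster", "run_alignment", ["genome.fa"], ["hits.dat"]),
    Step("simulate", "garching-sp", "run_dynamics", ["hits.dat"], ["traj.dat"]),
    Step("visualise", "juelich-viz", "render_movie", ["traj.dat"], ["movie.mp4"]),
]

def run(workflow, staged_inputs):
    """Dispatch each step in order; real middleware (for example UNICORE)
    would handle job submission, file staging and authentication."""
    available = set(staged_inputs)
    for step in workflow:
        missing = [f for f in step.inputs if f not in available]
        if missing:
            raise RuntimeError(f"step {step.name} is missing inputs: {missing}")
        print(f"submit '{step.command}' to {step.site}")
        available.update(step.outputs)

run(workflow, staged_inputs=["genome.fa"])
```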

DEISA has established a systematic network of cooperation among all major civil supercomputing centres in Europe, deploys and operates a powerful supercomputing grid, and is working hard on system interoperability. A key new task will be application enabling for this powerful supercomputing infrastructure, for the benefit of European researchers and the advancement of science.

Dr Hermann Lederer, from the Garching Computing Centre of the Max Planck Society, is a DEISA task leader for Joint Research Activities in Materials Science and Plasma Physics.


