
Stuck in the middle

There is a smattering of supercomputing sites across central and eastern Europe, with Switzerland housing the region's largest number of HPC sites on the latest Top500 list and Poland housing the area's most powerful supercomputer.

And while most of these supercomputing sites may seem small fry compared with the massive projects under way in nearby countries such as Germany and the UK, the central and eastern European countries are quietly using their HPC power for a wide range of scientific research.

Poland is doing well in the HPC stakes, with one system called Galera (number 45 in the latest Top500 list) being the most powerful supercomputer in central and eastern Europe. Galera is capable of 50 trillion operations per second (50 Tflop/s), runs on 1,344 quad-core Intel Xeon processors and has 5,376GB of main memory.

Scientists at the Gdansk University of Technology and other Polish universities use Galera for a host of research projects in chemistry, physics, engineering, electronics and oceanography. The supercomputer also supports research into substances that could be applied in anti-cancer therapy, aerodynamics work for the aviation industry, and studies of sea tides.

Poland has a total of five supercomputers at research facilities in Warsaw, Poznan, Wroclaw and Krakow and, though not as powerful as Galera, the system at the Wroclaw Centre for Networking and Supercomputing sits at number 318 on the June 2008 Top500 list. One of the main tasks of the Wroclaw centre is to operate and develop the Wroclaw Academic Computer Network (WASK), which comprises 110km of fibre-optic routes and 23 nodes, connecting more than 15,000 computer workstations. WASK is connected through the PIONIER network to the GÉANT European network and to other national and global networks, as well as to the Polish Telecom network (TPnet) and to the Research and Academic Computer Network (NASK).

Moving a little south, a site at Slovenia's Turboinstitut appeared at number 53 on the latest Top500 list with a measured performance of 35.08 teraflops and a theoretical peak of 49.15 teraflops. The Turboinstitut is an independent hydropower facility that offers companies access to its clean-power resources and conducts research into hydropower development.

The Turboinstitut houses its HPC offering in the Ljubljana Supercomputing Centre ADRIA, which focuses on research in computational, life and earth sciences. Many applications are directly or indirectly connected to renewable energy technologies and to simulating the natural world. And, because all processes in nature are unsteady, numerical analyses of such processes have to use unsteady (time-resolved) methods, so powerful computer clusters are needed to perform the millions of calculations required to predict such unsteady processes numerically.
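To give a feel for why such unsteady analyses devour computing time, the short Python sketch below is a purely illustrative toy (it is not Turboinstitut code, and every value in it is invented for the example): it advances a one-dimensional advected pulse through thousands of small, stability-limited time steps, and the cost scales with the number of grid points multiplied by the number of time steps. A realistic three-dimensional turbomachinery case multiplies that by millions of cells and several flow variables, which is what pushes the work onto large clusters.

import numpy as np

# Toy illustration of an unsteady (time-resolved) simulation: a scalar pulse
# advected across a 1D periodic domain with a first-order upwind scheme.
nx, nt = 1_000, 10_000          # grid points and time steps: tiny by HPC standards
dx, c = 1.0 / nx, 1.0           # cell size and advection speed
dt = 0.5 * dx / c               # time step kept inside the CFL stability limit

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse

for _ in range(nt):
    # every time step touches every grid point, so total work is O(nx * nt)
    u = u - c * dt / dx * (u - np.roll(u, 1))

print(f"peak of the advected pulse after {nt} steps: {u.max():.3f}")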

Alpine power

But the Swiss are leading the HPC race in central and eastern Europe, as far as numbers are concerned, with six of the country's facilities appearing in this year's Top500 list, including CERN, home of the much-anticipated Large Hadron Collider. Leaving CERN aside (see article by Paul Schreier on page 41), the École Polytechnique Fédérale de Lausanne (EPFL), on the shores of Lake Geneva (Lac Léman), makes an appearance at number 103 in the latest Top500 list with an IBM BlueGene/L system. The machine comprises four racks totalling 8,192 processors and delivers 22.9 Tflop/s of processing power, which placed it at number eight in the world when it made its Top500 debut in June 2005.

But the BlueGene system is just one of a host of supercomputing facilities at the site, according to Doctor Pierre Maruzewski, the IBM BlueGene/L technical coordinator at Lausanne; the centre also houses several more general-purpose servers (including an SGI Altix 350 SMP machine, an AMD Opteron cluster, an Intel Woodcrest cluster and an IBM Intel Harpertown cluster) and shared HPC platforms. Lausanne has also tapped the previously unused resources of desktop PCs that would otherwise sit idle, with users able to sign up to a desktop grid called Greedy. There are around 650 machines in the grid now, running at about 0.65 Tflop/s, and the system reached a million recovered computing hours in just under two years of operation.
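As a rough sense of scale for those Greedy figures, and assuming for simplicity that the pool stayed at about 650 machines throughout (in practice such a grid grows over time, so these are order-of-magnitude estimates only), a back-of-envelope calculation works out to roughly 1 Gflop/s per desktop and a couple of recovered idle hours per machine per day; the small Python sketch below simply runs that arithmetic.

# Back-of-envelope arithmetic for the Greedy desktop grid figures quoted above.
# The assumptions (constant pool size, a flat two-year window) are simplifications,
# so the per-machine results should be read as rough orders of magnitude.
machines = 650                 # desktop PCs signed up to the grid
aggregate_tflops = 0.65        # reported aggregate throughput, Tflop/s
recovered_hours = 1_000_000    # computing hours recovered
days = 2 * 365                 # "just under two years", rounded

per_machine_gflops = aggregate_tflops * 1000 / machines   # ~1.0 Gflop/s each
hours_per_machine = recovered_hours / machines            # ~1,540 hours each
hours_per_day = hours_per_machine / days                  # ~2.1 idle hours/day

print(f"~{per_machine_gflops:.1f} Gflop/s per desktop")
print(f"~{hours_per_machine:.0f} recovered hours per desktop")
print(f"~{hours_per_day:.1f} recovered hours per desktop per day")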

The scientific research making use of HPC at Lausanne has a very broad scope, spanning several domains of science and engineering: molecular dynamics; a life sciences project known as the Blue Brain Project, which is simulating cortical columns; nanophysics simulations of carbon nanotubes; semiconductor devices; CFD simulations of hydraulic machines; and fusion plasma physics simulations of turbulence.

The Blue Brain Project specialises in introducing simulation-based methods into neuroscience. While it is quite common in engineering disciplines to predict experimental results through simulation, in biology the complexity of the systems involved places heavy demands on both the quantitative data required and the computational power available. Since the project began in July 2005, a successful proof of concept has shown that a completely data-driven model of a piece of brain tissue can be created that encapsulates the level of detail observed in the wet lab.

Professor Henry Markram and Doctor Félix Schuermann, both heads of the Blue Brain Project, told Scientific Computing World: ‘The realistic modelling of nerve cells and their connecting synapses was never before possible at the scale required to model a fundamental building block of the neocortex. The availability of a dedicated BlueGene/L supercomputer, which was listed number eight in the Top500 at the time of acquisition, allowed the Blue Brain Project to model thousands of morphologically complex nerve cells interconnected by tens of millions of synapses, which is needed to be able to compare the results with the experimental data.

‘Scientific visualisation also plays an important role to allow researchers to analyse the results interactively; for this purpose a visualisation supercomputer, an SGI Prism Extreme, has been interconnected with the BlueGene/L.’

Lausanne scientist Professor Laurent Villard's speciality is turbulence simulations of magnetically confined plasmas, in the context of controlled nuclear fusion research. These are first-principles-based or, as they are sometimes called, direct numerical simulations. The numerical codes have been developed in-house, partly in collaboration with the Max Planck Institute. They have been ported to and run on massively parallel platforms such as BlueGene/L and have shown excellent scalability. Another important aspect is the manpower devoted to code development, which amounts to several dozen professional person-years. Having such supercomputing power has helped Swiss research no end, according to Villard, who says: ‘It [HPC] has helped virtually all domains of science and engineering. HPC is becoming an intrinsic part of research in an increasing number of fields, and has also allowed newer, deeper lines of research to be undertaken.’

And the future is also looking bright for the Switzerland-based scientists. Villard and Jean-Claude Berney, who manages the IT services at Lausanne, say that the HPC Steering Committee, managed by Professor François Avellan, has just submitted a proposal for a sustainable HPC strategy at the facility.

This is a kind of ‘step ladder’ approach: at the top, the centre has massively parallel platforms for capability computing, like the BlueGene/L, for which Lausanne hopes there will be a successor. Below that sit centrally managed, general-purpose platforms for capacity computing. Then come shared platforms, supported and operated by several research units, the rationale being more efficient usage and economies of scale compared with the lowest step of the ladder: ‘individual’ platforms attached to a single laboratory.

EPFL is not the only Swiss supercomputing site. The Swiss National Supercomputing Centre (CSCS) houses a Cray XT system, which pops up at number 197, and also helps scientists get their hands on HPC power.

CSCS is the largest supercomputing centre in Switzerland and is managed by the Swiss Federal Institute of Technology in Zurich. The centre collaborates with domestic and foreign research institutions, carries out its own research, and has recently expanded its HPC horizons by signing a memorandum of understanding for a staff-exchange programme with the National Energy Research Scientific Computing Center (NERSC) at the Lawrence Berkeley National Laboratory. Three other Swiss sites all appear in the 400s of the Top500 list: a Dalco cluster called Albert, used by the BMW Sauber F1 team; a BlueGene/L system run by IBM Research in Zurich; and an IBM cluster used for finance.

Less is more?

There are a fair few central and eastern European countries with no HPC facilities in the Top500 list, including Austria, Belarus, Bosnia and Herzegovina, Croatia, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Luxembourg, Macedonia, Slovakia, Turkey and Ukraine.

But these countries still have some supercomputing resources. The Central Institute for Meteorology and Geodynamics (ZAMG) in Vienna, for example, officially launched a vector supercomputer last year. Capable of performing 512 billion arithmetic operations per second, the new NEC supercomputer is 28 times faster than its predecessor, an SGI machine, and is designed to generate weather forecasts more quickly and with greater precision.

Turkey is also starting to fire up its HPC power, having opened a National Center for High Performance Computing in 2004, which is used both by the country's scientists and researchers and for R&D by the corporate world.

So it could be that size does not matter to the central and eastern European countries, with smaller supercomputing sites fulfilling scientists' research needs. Or one could argue that the recurring theme across central and eastern Europe is that its HPC heyday has passed, with many supercomputers that ranked far higher when first unveiled now languishing towards the lower reaches of the Top500 list.

Alternatively, it could just be that the scientists in these countries are hedging their bets and waiting for the pan-European supercomputing infrastructure known as PRACE (Partnership for Advanced Computing in Europe) to get up and running.

PRACE will provide researchers in Europe with access to world-class supercomputers capable of processing data at petaflop rates. Its principal partners will be France, Germany, the Netherlands, Spain and the UK, alongside nine additional general partners, including Austria, Poland and Switzerland from the central and eastern European group. More recently, Turkey has signed the PRACE memorandum of understanding.

Central and eastern European scientists clearly have a range of HPC facilities available to suit their needs and, with PRACE just around the corner, this middle block of countries will soon be upping their HPC game.


