An air of collaboration surrounds the scientific supercomputing projects going on in the north of Europe, with countries keen to take part in the ever-lengthening list of pan-European projects. And Denmark, Finland, Norway and Sweden have even set up their own joint project, known as the Nordic Data Grid Facility, to help coordinate their grid computing resources.
That said, the region’s countries are also keen to boost their own high-performance computing power, with government funding and new facilities popping up like lights in a busy server room to aid research across a variety of scientific disciplines.
All together
The range of pan-European projects helping research facilities within northern Europe buff up their supercomputing power is quite diverse. This acronym-heavy list of projects and groups includes DEISA, EGEE-II, EGI, e-IRG, ESFRI, GEANT, HET and PRACE (see Pan-European HPC Projects at the end of this article).
Kimmo Koski, managing director of the Finnish supercomputing centre CSC, says: ‘We are doing collaboration over country borders in many ways both in the Nordic area and within Europe. Many of the centres participate in the same European projects. Due to the central system in Finland and sufficient critical mass at CSC, we are exceptional since we are partners in all European major grid and HPC projects. In other countries there are often different participants in different projects.’
There are more large-scale projects on the horizon for the north European nations too, as Bengt Persson, director of Swedish supercomputing facility NSC, says: ‘Starting next year, the IS-ENES project will provide an infrastructure for Earth systems modelling, including setting up facilities for easy data access and optimised multi-core codes within this area.’
IS-ENES stands for the Infrastructure for the European Network for Earth System Modelling and brings together around 40 European institutions to develop a common European network for Earth system modelling.
Such collaboration in wide-scale projects is popular across northern Europe, as Koski says: ‘We do collaborate in various EU and other projects, but the collaboration is seldom limited to northern Europe. In most cases the collaboration is Europe wide.’
Netherlands-based centre SARA is also participating as a primary partner in the PRACE project. And the NSC likewise takes part in several international collaborations, including EGEE and PRACE, to help optimise its computing resources, according to Persson.
More specific to the Nordic region is the Nordic Data Grid Facility (NDGF), an alliance between the Nordic countries of Denmark, Finland, Norway and Sweden. The NDGF traces its history back to the end of 2001 and is related to the NorduGrid project. NorduGrid was established in 2000 with a small distributed testbed aimed at providing resources for the anticipated mass production of experimental data from CERN. After a year it became apparent that a larger Nordic facility was required and the NDGF subsequently came into being. It provides HPC resources to a range of scientific areas, as NSC’s Persson says: ‘The Nordic DataGrid Facility is coordinating resources in grid computing for particle physics, bioinformatics, CO2 research, and further disciplines.’
But it seems having the supercomputing kit in place might not be enough to make the area’s HPC projects work, and northern European countries will need to work hard to make sure they get the most out of such widespread projects. CSC’s Koski adds: ‘The collaboration between supercomputing facilities in Northern Europe is intensifying at some speed and it is planned to increase common activities such as changing cycles over country borders etc. The critical factor, however, is not the hardware but competence for computational science, thus the major impact will be gained if we are able to collaborate better, including all stakeholders (infrastructure, experts, code development, research, and so on), to address multidisciplinary scientific and research challenges.’
Nordic variations
Despite the region’s collaborative nature, the Nordic countries have very different approaches to supercomputing, as CSC’s Koski explains: ‘In Nordic countries, Finland has a fully centralised model, Sweden and Norway have distributed models and Denmark has no centre at all (funding is directed to research groups who maintain their own systems themselves).’
Finland has only one supercomputer on the latest Top500 supercomputing list, but this is not due to a lack of interest in HPC; rather, it reflects the unique centralised nature of the country’s supercomputing facilities.
This central hub, the CSC, is the Finnish national IT centre for science and provides a range of services, including high-performance computing, to customers from various disciplines. Koski says: ‘Our main customers are universities and polytechnics, research centres, public sector and industrial R&D, but CSC is not targeted to do research, but to support research.’
CSC is a state-owned non-profit limited company with 170 employees, who in addition to maintaining and developing the systems also provide expert services and consultancy in those areas. The centre has seen a number of scientific successes, as Koski explains: ‘Many of the scientific results by Finnish university researchers using high-end computation have been achieved by using CSC’s services.’
In addition to the CSC, there are smaller computing facilities in Finland’s university research groups that complement the system. Koski says: ‘We are running together, for example, grid projects within Finnish universities to utilise synergy in system management, application installations etc. Some of the university systems offer a few teraflops of capability so they could be called supercomputers – and also in the future smaller systems will be installed in universities in addition to CSC, too.’
And while Finland’s population might seem rather small with around five million inhabitants, the CSC has big plans. ‘The vision of CSC is to be one of the major centres in Europe by 2012, which is a challenging vision for such a small country,’ says Koski. ‘The reason behind the vision is the fact that to be able to be competitive internationally, scientists require efficient collaboration over country borders and access to high-quality infrastructure. Having an efficient centre with high quality services to support Finnish research makes our scientists attractive partners in international collaboration and attracts top-class research wanting to place themselves within Finland.’
Sweden has the greatest number of supercomputers on the Top500 list among the northern European countries (nine in total), but six of these are used by financial institutions and one by a government agency, leaving the country with just two listed machines used for research purposes. The first of these (ranked 39) is called Akka and forms part of HPC2N (High Performance Computing Center North), a national centre for scientific and parallel computing. The centre’s partners include Umeå University, Luleå University of Technology, the Swedish Institute of Space Physics, the Swedish University of Agricultural Sciences and Mid Sweden University.
This collaboration between universities and research institutes forms a network of HPC users who are all using the facilities for very different projects, including large-eddy simulations of manoeuvring ships, simulations of complex magnetic structures and modelling of chemical reactions at mineral surfaces.
The second Swedish supercomputer, coming in just after Akka at number 40, is called Neolith and is housed at the National Supercomputer Centre (NSC). It is the centre’s largest resource and was installed in late 2007. The NSC has several different architectures, including shared-memory machines and clusters dedicated to grid computing, but the most common architecture there is the large cluster system.
NSC was founded in 1989 and provides large-scale computational and storage facilities to researchers at Swedish universities, in collaboration with its partners SMHI (Swedish Meteorological and Hydrological Institute) and Saab.
The centre will continue to expand over the next few years, too, as NSC’s Persson says: ‘We are expanding to meet our users’ needs. We are acquiring more computers in order to have optimal systems at all levels and of different architectures. We are building up facilities for large-scale storage, for which there is an increased demand now, especially within the fields of particle physics, meteorology, and bioinformatics.’
Over in Norway, the Notur project provides the national infrastructure for computational science. The project provides computational resources and services to researchers at Norwegian universities and colleges, and to operational forecasting and research at the country’s Meteorological Institute.
Two Norwegian supercomputers appear on the Top500 list, both of which sit within the country’s universities and partner with the Notur project. At number 47, the University of Bergen has a Cray XT4 system, as well as a Linux cluster. The supercomputing facilities are used for, among other things, research into marine molecular biology, large-scale simulations of ocean processes, climate research, computational chemistry, computational physics, computational biology, geosciences, and applied mathematics. At the University of Tromsø, and at number 61 on the list, is an HP cluster targeted at theoretical and computational chemistry. Supercomputers are also being used to help engineers locate and simulate the country’s oil reserves (see the feature Grid technology strikes Norwegian black gold).
CSC’s new machine room, housing Cray XT4/XT5 machines, in Espoo, Finland. © Deco Media and Up To Point.
Despite Denmark not having any supercomputing centres on the Top500 list, the country is still HPC-savvy with plans afoot to boost its supercomputing power.
The Danish Center for Scientific Computing (DCSC) is a national research infrastructure run under the Danish Ministry of Science, Technology and Innovation to provide scientific computing and high-performance computing, as well as a grid infrastructure, to the country’s researchers.
Some of the research conducted at the centre includes: using supercomputers to create an extensive map of protein complexes involved in diseases such as breast cancer and Alzheimer’s disease; predicting trade and the behaviour of the financial markets; and simulating the behaviour of the atmosphere, oceans and upper soil layers, and the interactions between them, to understand climate change.
The DCSC has been ramping up its HPC efforts, with a recent report on HPC in Denmark, A National High-Performance Scientific Computing Facility, hinting at a wider national HPC scheme. The report reads: ‘It is concluded that Denmark needs a combined decentralised and centralised infrastructure for HPC. Supplementing the existing structure with a centralised infrastructure will greatly support the research and education in scientific computing, the internationalisation of Danish HPC as well as the application of HPC in Denmark to new areas and users.’
Going Dutch
The Netherlands ranks relatively high on the Top500 list, with five supercomputing facilities appearing on the latest countdown, only one of which belongs to an IT services provider, the other four being used for research purposes. Two (ranked at numbers 51 and 68) sit within the University of Groningen and the other two (numbers 73 and 315) are at the SARA (Stichting Academisch Rekencentrum Amsterdam) supercomputing centre.
The University of Groningen hosts a centre for High-Performance Computing and Visualisation, which, as the name suggests, provides scientists and other parties with supercomputing and visualisation facilities. Included in its HPC wares are a 200-node dual-processor HPC cluster used by scientists for heavy computing tasks, a grid cluster that is part of the European grid infrastructure, and a six-rack IBM BlueGene/L supercomputer called Stella that is used by ASTRON, the Netherlands Institute for Radio Astronomy.
Stella is the brain behind a radio telescope called LOFAR (LOw Frequency ARray), which will try to find answers to a host of astronomical conundrums, including looking back to a time when the first stars were forming, mapping the galaxy’s magnetic field and attempting to discover the sources of high-energy cosmic rays.
Michiel van Haarlem, managing director at LOFAR, says: ‘This telescope is an important scientific and technological pathfinder for the next generation of radio telescope – the Square Kilometre Array (SKA).’
Supercomputer centre SARA houses the Dutch national supercomputer, which is available for the country’s universities and academic research centres. There are similarities between the Dutch offering and the Finnish centralised centre too, as CSC’s Koski explains: ‘There is a pretty similar concept to CSC in the Netherlands (SARA), even though there the research network is a separate company, unlike in Finland.’
SARA is in the process of replacing its current IBM pSeries 575 Power5+ system with a Power6 system. The current system is placed at number 73 in the June 2008 Top500 list and constitutes the first half (at 1,536 cores) of the new system (which will contain 3,328 cores). Walter Lioen, group leader of SARA’s High Performance Computing and Visualisation group, says: ‘So the extrapolated Top500 for the full system (end July) will probably be some 51TFlop/s for the Rmax, which would be sufficient for a number 34 ranking in the same June 2008 Top500 list.’
Lioen adds: ‘The HPC facilities at SARA are used by many different scientific disciplines for their day-to-day scientific research. A lot of time is used by theoretical chemists; furthermore, every year we also facilitate a couple of computational grand challenge projects (in the order of magnitude of a million CPU hours each). Since we are talking about the Dutch national supercomputer, a lot of research could not have been performed without having this national facility.’
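For readers curious about the arithmetic, Lioen’s extrapolation can be reproduced by assuming Rmax scales roughly linearly with core count. The short sketch below is illustrative only (real LINPACK results also depend on interconnect, memory and tuning), and the half-system figure it prints is inferred from the quoted numbers rather than stated in the article.

```python
# Back-of-the-envelope reproduction of the Top500 extrapolation quoted above.
# Illustrative only: it assumes Rmax scales linearly with core count, whereas
# real LINPACK performance also depends on interconnect and tuning.
half_cores = 1536   # cores in the installed first half (Top500 #73, June 2008)
full_cores = 3328   # cores in the complete Power6 system
full_rmax = 51.0    # extrapolated Rmax in TFlop/s, as quoted by Lioen

# Implied Rmax of the installed half under linear scaling
# (an inference from the quoted figures, not a number given in the article).
half_rmax = full_rmax * half_cores / full_cores

print(f"Scaling factor to full system: {full_cores / half_cores:.2f}x")  # ~2.17x
print(f"Implied half-system Rmax: {half_rmax:.1f} TFlop/s")              # ~23.5
```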
Fellow Benelux country Belgium has two systems that make appearances in the latest Top500 list, but its supercomputing facilities are more geared towards industrial, rather than academic, research.
Cenaero, the Centre of Excellence in Aeronautical Research, is an applied research centre in Belgium with a focus on the development of multidisciplinary simulation technology for aeronautics. The centre focuses its HPC power on a range of industrial problems, including: virtual manufacturing, for simulating processes like machining, welding or heat treatments; modelling materials and structures to see how they might react to fractures, bonding, structural analysis and damage detection; and fluid mechanics, to attempt to understand complex physical systems.
Green future
The future for northern European supercomputing does look bright, with Koski adding that such northerly countries hold a particular advantage over the southern European supercomputing centres. ‘HPC is increasingly looking forward to more efficient power consumption and green energy,’ he says. ‘The colder climate allows new opportunities in this.’
The northern European supercomputers already seem to be doing their bit for eco-friendly HPC, with many making appearances on the latest Green500 list, which ranks the world’s most energy-efficient supercomputers.
The systems at the Netherlands’ University of Groningen came in at numbers seven and 45 on the list, Umeå University’s cluster which forms part of the HPC2N project is number 16, the Belgian blade system at Cenaero is number 85 and one of SARA’s systems is number 147.
And NSC’s Persson believes the widespread collaboration between centres and countries is the key to the region’s future supercomputing success, as he says: ‘There are many ongoing efforts, and more and more international collaborations, which strengthen the field and help to provide good resources in an efficient manner.’
But it is the spread of supercomputing into more and more scientific disciplines that will also help secure the region’s future, as Persson adds: ‘Europe is well ahead in the supercomputer area. Supercomputing facilities are constantly expanded to meet the increased needs from the scientific community. Supercomputers are also used in many more scientific fields today and can be expected to be so even more in the future.’
Pan-European HPC projects
DEISA: the Distributed European Infrastructure for Supercomputing Applications. DEISA is a consortium of leading national supercomputing centres that deploys and operates a distributed supercomputing environment.
EGEE-II: Enabling Grids for E-sciencE. The project provides researchers in academia and industry with access to grid infrastructure, regardless of their geographic location.
EGI: European Grid Initiative. An effort to establish a sustainable grid infrastructure in Europe.
e-IRG: e-Infrastructure Reflection Group. e-IRG is supporting the creation of a framework for easy and cost-effective shared use of distributed electronic resources across Europe – particularly for grid computing, storage and networking.
ESFRI: European Strategy Forum on Research Infrastructures. The role of ESFRI is to support a coherent approach to policy-making on research infrastructure in Europe and act as an incubator for international negotiations about concrete initiatives. In particular, ESFRI is preparing a European Roadmap for new research infrastructures of pan-European interest.
GEANT2: the seventh generation of the pan-European research and education network.
HET: High-Performance Computing in Europe Taskforce. Established in June 2006 with a mandate to draft a strategy for a European HPC ecosystem.
PRACE: Partnership for Advanced Computing in Europe. The EU FP7 project for building European petaflop computing centres.
Norway-based oil and gas company Statoil ASA has streamlined its supercomputing facilities to help reservoir engineers in its sub-surface division use 3D simulation applications to search for potential oil-bearing structures in the Earth’s crust. These engineers simulate the flow of fluids in oil and gas reservoirs using Eclipse simulators from Schlumberger, which provide numerical simulation techniques for all kinds of reservoirs and all degrees of extraction complexity.
The rising cost of drilling and the high cost of an error in choosing a drilling location was putting the company’s engineers under increasing pressure to produce ever-more accurate results with their modelling efforts. John Hybertsen, principal engineer at Statoil Hydro, says: ‘There is a huge effort in getting more from the reservoirs. In some reservoirs they started out by saying that current production technology would get only about 45 per cent of the oil. But we have aims of 55 per cent or up to 60 per cent, and you don’t do that without controlling the production and knowing what is happening in the reservoirs.’
Statoil was working from four locations in Norway, each with its own local computing infrastructure varying in size from 64 to 400 CPUs. This disparity caused a number of problems, including inconsistencies in engineering processes, performance and reservoir simulation accuracy.
Statoil decided to use grid technology to create a more consistent HPC environment, and in 20 days Platform Computing had implemented a single, division-wide computing grid to address its problems. Christoph Reichert, vice president of HPC sales, EMEA at Platform, says: ‘A key challenge we faced with this project was that we had to integrate separate site locations to create the Platform “Grid”. We had to manage not only the work flow from the users in these locations and their project priorities accessing the resource onto the Grid, but also the smart management of data between locations.
‘In order to address this challenge we ensured that users and administrators could access the grid via a portal, providing them with an easy-to-use interface and feedback on the progress of simulations being run in the Platform Grid System.’
First, the Platform LSF software managing each of the four local clusters was upgraded, and then Platform LSF MultiCluster software was used to tie the four local server clusters into a single computing grid, giving engineers access to the entire set of computing resources, regardless of location.
Second, Platform worked with Schlumberger’s software development team to integrate Statoil’s hardware and software so that users can perform more iterations when carrying out simulations. And, finally, a web portal was rolled out to simplify the submission of jobs on the grid and provide better visibility into the job execution process.
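To make the workflow concrete, the sketch below shows the general shape of scripted job submission to an LSF-managed grid, roughly what such a portal does behind the scenes. It is a minimal illustration, not Statoil’s or Platform’s actual tooling: the queue name and the run_eclipse wrapper are hypothetical placeholders, though the bsub options shown are standard LSF.

```python
# Minimal sketch of scripted job submission to an LSF-managed grid.
# Assumes LSF's standard `bsub` command is on the PATH. The queue name
# "reservoir" and the "run_eclipse" wrapper are hypothetical placeholders,
# not Statoil's or Schlumberger's actual tooling.
import subprocess

def submit_simulation(deck: str, cores: int = 64, queue: str = "reservoir") -> None:
    """Submit one reservoir-simulation run; with LSF MultiCluster, a
    forwarding queue can dispatch the job to whichever site has capacity."""
    cmd = [
        "bsub",
        "-q", queue,             # submission queue (hypothetical name)
        "-n", str(cores),        # number of job slots requested
        "-J", f"sim_{deck}",     # human-readable job name
        "-o", f"{deck}.%J.out",  # output log; %J expands to the LSF job ID
        "run_eclipse", deck,     # hypothetical wrapper launching the simulator
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    submit_simulation("FIELD_MODEL.DATA", cores=128)
```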
As a result of giving users access to greater computing power, Statoil is seeing a significant increase in the amount of simulation work taking place throughout the grid – without having to add more computer hardware.