
Providing an HPC blueprint

With hardware spread across three high-performance computing (HPC) facilities, Germany is a powerhouse in European supercomputing. The oldest of the centres, the High Performance Computing Center Stuttgart (HLRS), is primarily an academic resource but also supports industrial users. 

To help manage the resources at HLRS, the centre has devised a strategy that focuses on sustained application performance. The strategy takes shape both through a series of training programmes, which ensure users have the skills to make their applications run effectively, and through the procurement and continued development of hardware and software at HLRS.

‘All of our users are experts,’ said Professor Michael Resch, director of the HLRS. ‘HLRS has established a concept in the state of Baden-Württemberg called bwHPC, creating a pyramid of power in which users move from local systems through central clusters up to the national supercomputer facility.’ 

The model provides significant organisational and scientific support throughout the various stages of this pyramid, allowing HLRS patrons to use the resources appropriately.

This model is a blueprint for the rest of Germany, said Resch: ‘Users arriving at the top – which is HLRS – have a lot of expertise before they even submit a proposal.’

While training and education are hugely important to HPC, it should be made clear that emulating the bwHPC programme would require a considerable computing infrastructure, one that can take users on the journey from scientist or engineer to HPC expert.

Hardware at HLRS

HLRS’ flagship supercomputer is Hazel Hen, a 185,088-core system (7,712 compute nodes) based on the Cray XC40. With a peak performance of 7.42 petaflops (quadrillion floating point operations per second), Hazel Hen is one of the most powerful HPC systems in the world (position 9 on the TOP500, 06/2016) and the second fastest supercomputer in the European Union.
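A quick back-of-the-envelope check shows how those headline figures fit together. The sketch below assumes the 2.5 GHz clock and 16 double-precision FLOPs per core per cycle typical of the Haswell generation; neither number is quoted in the article, so treat both as assumptions.

```python
# Back-of-the-envelope check of Hazel Hen's published figures.
# Assumed (not stated in the article): 2.5 GHz clock and 16 double-precision
# FLOPs per core per cycle (two 256-bit AVX2 FMA units on Haswell).

TOTAL_CORES = 185_088
COMPUTE_NODES = 7_712
CLOCK_HZ = 2.5e9        # assumed
FLOPS_PER_CYCLE = 16    # assumed

cores_per_node = TOTAL_CORES // COMPUTE_NODES
peak_petaflops = TOTAL_CORES * CLOCK_HZ * FLOPS_PER_CYCLE / 1e15

print(f"cores per node: {cores_per_node}")      # 24, i.e. two 12-core CPUs
print(f"peak: {peak_petaflops:.2f} petaflops")  # ~7.40, near the quoted 7.42
```

The small gap between roughly 7.40 and the quoted 7.42 petaflops would close with a slightly different assumed clock, so the published core count, node count, and peak are mutually consistent.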

The HLRS supercomputer, which began operation in October 2015, is based on the Intel Haswell processor and the Cray Aries network and is designed for sustained application performance and high scalability.

However, as Resch explains, the goal when designing Hazel Hen was not to create the fastest supercomputer in Europe, but to provide a system suitable for specific applications and users: ‘It was by accident that our Cray XC40 is currently the second fastest system in the European Union. We turned down an offer of a theoretically faster system that would have used accelerators.’

This is a recurrent theme for HLRS; the whole infrastructure is designed to support its users. Primarily, this is because its first role is to assist the academic users of Stuttgart – but the mindset and approach have far-reaching consequences for all its users, including those from industry.

Resch said HLRS users are ‘working with codes that do not benefit from the theoretical peak of accelerators but rely heavily on memory bandwidth. For HLRS, the main motivation is the best solution for our users. That means that we widely ignore peak performance and TOP500 ranking.’

Many industrial users face similar problems with memory bandwidth, so their applications would benefit from this architecture just as the academic users do. 
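The roofline model makes the bandwidth argument concrete: a kernel’s attainable performance is the lesser of the machine’s peak FLOP rate and its memory bandwidth multiplied by the kernel’s arithmetic intensity. The sketch below uses purely illustrative numbers, not HLRS measurements, to show why a high-peak accelerator gains little on a bandwidth-bound code.

```python
# Roofline model: attainable = min(peak, bandwidth * arithmetic intensity).
# All figures below are illustrative assumptions, not HLRS measurements.

def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Roofline bound for a kernel with the given arithmetic intensity."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

cpu_node    = {"peak_gflops": 960,  "bandwidth_gbs": 120}  # hypothetical CPU node
accelerator = {"peak_gflops": 4000, "bandwidth_gbs": 250}  # hypothetical accelerator

intensity = 0.2  # FLOPs per byte, typical of a memory-bound stencil kernel

for name, m in (("CPU node", cpu_node), ("accelerator", accelerator)):
    bound = attainable_gflops(m["peak_gflops"], m["bandwidth_gbs"], intensity)
    print(f"{name}: {bound:.0f} GFLOP/s attainable of {m['peak_gflops']} peak")

# The accelerator's ~4x peak advantage shrinks to ~2x, set entirely by
# memory bandwidth; neither device gets anywhere near its peak.
```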

‘HLRS offers a number of systems in addition to Hazel Hen,’ said Resch. ‘There is another “workhorse” based on an NEC cluster. The second system evolved over time and helped us to cover workloads that do not require densely coupled systems and extremely fast networks.’

HLRS also houses several smaller systems, such as a single-cabinet NEC-ACE system. ‘Since we are interested in new technologies, we always have some smaller systems on the floor to test and experiment and learn about new approaches in hardware and software,’ commented Resch.

This abundance of supercomputing facilities puts Germany in a distinctive position: it has three world-class supercomputers with varied architectures, housed in three separate HPC centres under the single banner of the Gauss Centre for Supercomputing (GCS). The three national centres within GCS are the High Performance Computing Center Stuttgart (HLRS), the Jülich Supercomputing Centre (JSC), and the Leibniz Supercomputing Centre (LRZ) at Garching, near Munich.

Resch described the Gauss Centre as an association created ‘to coordinate the activities of the three German national supercomputer centres’ as well as to speak with a ‘single voice towards Europe, our funding agencies and our users.’

‘The three centres are targeting three different user communities that require different architectures,’ said Resch.

Meeting expectations

HLRS was the first German national supercomputing centre, officially established in 1996. Resch explained that, from its inception, HLRS has had two key areas in which it takes a leading role: ‘On the one hand, HLRS takes over the role of engineering HPC centre within the GCS – and, on the other hand, HLRS takes the lead in industrial usage of HPC within GCS.’

However, Resch stressed that first and foremost HLRS is an academic centre with a clear duty to support public research: ‘The focus is on solutions for researchers, with an emphasis on engineering, so the hardware and software we choose typically match the needs of industry. When going into procurement, we consider industrial advice that helps to identify issues that researchers – eager for speed and fast solutions – tend to underrate.’

By choosing varying HPC architectures, each supercomputer is designed to support a different set of user applications. Ultimately, this helps Germany to provide a balanced HPC infrastructure to support as wide an array of users as possible.

A big part of providing a balanced ecosystem is training to support users and prepare them to use the supercomputing facilities available.

‘In more than 20 courses (most of them five-day courses) we train 600 to 800 users every year. We have extended this training to our European customers over recent years as part of our PRACE activities,’ explained Resch.

Resch said this number will increase as HLRS opens a new facility to train industrial customers: ‘We have built a new training centre that will be operational early in 2017.’

For scientific users, Resch confirmed that HLRS is focusing on a concept he refers to as the ‘embedded scientist.’ ‘This concept includes our scientists in the working groups of our key users.’ He explained that HLRS users ‘undertake common projects, research work and publications’ which allows HLRS ‘continuously to support the optimisation process for user software.’

Applications at the heart of HLRS

‘About two thirds of the user research at HLRS is focused on computational fluid dynamics in the widest sense. One of the key projects is an aeroacoustics simulation by colleagues in Aachen, while a second highlight is a regional climate analysis done by colleagues from the University of Hohenheim,’ said Resch.

The aeroacoustic projects, completed in collaboration between HLRS and Aachen University, studied the acoustics of a ducted axial fan and of a jet engine, respectively. Both projects scaled extremely well, using all 92,000 cores of Hazel Hen’s predecessor, ‘Hornet’.

The researchers on the fan acoustics project conducted a large-scale computational aeroacoustics (CAA) simulation to increase the accuracy of predictions of the acoustic field created by a low-pressure axial fan. The goal was a better understanding of the development of vortical flow structures and of the turbulence intensity in the tip gap of a ducted axial fan. The project was completed in approximately 110 machine hours and produced more than 80 TB of data.

The engine research project focused on large eddy simulations of a helicopter jet engine, aimed at analysing the impact of internal perturbations, caused by geometric variations, on the flow field and the acoustic field of the engine. The simulation required 300 machine hours to compute Cartesian meshes of up to one billion cells.

In addition to the engineering work, climate research is also a heavy user of HPC resources at HLRS. One project, carried out in collaboration with the Institute of Physics and Meteorology at the University of Hohenheim, focused on a highly complex climate simulation covering a period long enough to include various extreme weather events in the northern hemisphere.

This project increased the resolution of previous simulations, which in turn allows researchers to predict the weather, particularly extreme weather events, with a higher degree of certainty. The simulation was performed using the Weather Research and Forecasting (WRF) model on 84,000 compute cores of Hornet.
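Resolution increases are expensive in atmospheric modelling, which is why core counts of this order are needed. As a rough rule of thumb, not a WRF-specific figure, halving the horizontal grid spacing doubles the points in each horizontal dimension and, via the CFL stability condition, roughly doubles the number of time steps, so the compute cost grows about eightfold:

```python
# Rough cost scaling for refining an atmospheric model's horizontal grid.
# Illustrative rule of thumb only; real WRF costs depend on many factors.

def relative_cost(refinement: int) -> int:
    """Cost multiplier when horizontal grid spacing shrinks by `refinement`.

    refinement**2 more grid points (two horizontal dimensions), times
    ~refinement more time steps to keep the CFL condition satisfied.
    """
    return refinement**2 * refinement

for r in (2, 4):
    print(f"{r}x finer grid -> ~{relative_cost(r)}x compute")  # 8x, 64x
```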

HLRS is, at its core, an HPC centre committed to supporting its user community. Whether through training and education or through procurement, every decision is made to increase the amount or speed of research that can be conducted using HLRS’ computing resources.

The driving force is application performance, not FLOPS or benchmarks unrelated to the kinds of applications that will run on the system. By applying this approach, together with sufficient training and education, HLRS has created not only a powerful supercomputer but also a community of HPC users who know how to use the latest computing technologies.


