SuperMUC more than doubles performance after upgrade


The Leibniz Supercomputing Centre (LRZ) of the Bavarian Academy of Sciences and Humanities in Garching near Munich, part of the Gauss Centre for Supercomputing (GCS), has announced the upgrade of its flagship supercomputer SuperMUC, with the official opening ceremony held on Monday, 29 June 2015.

The second phase of SuperMUC raises the system's peak performance from 3.2 to 6.8 petaflops, more than doubling the machine's maximum theoretical computing power.

The inauguration of ‘SuperMUC Phase 2’ was attended by key figures in German supercomputing. Dr Ludwig Spaenle, minister of science of the Free State of Bavaria; Stefan Müller, parliamentary state secretary to the federal minister of education and research; Karl-Heinz Hoffmann, president of the academy; Prof. Dr Arndt Bode, director of the LRZ; Martina Koederitz, general manager of IBM Germany; and Christian Teismann, vice president and general manager, global account business at Lenovo, jointly pressed the ‘Red Button’, symbolising the start-up of the system and the expansion of one of the most powerful HPC systems in Europe.

The extension of SuperMUC, an IBM System x iDataPlex system that first became operational in mid-2012, was carried out according to the previously defined system roadmap. 86,016 processor cores in 6,144 Intel Xeon E5-2697 v3 processors were added to the 155,656 cores already available, lifting the maximum theoretical computing power to 6.8 petaflops.
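The figures above can be cross-checked with a back-of-the-envelope calculation. This sketch assumes details not stated in the article: that the Xeon E5-2697 v3 has 14 cores, a 2.6 GHz base clock, and delivers 16 double-precision FLOPs per cycle via AVX2 FMA units.

```python
# Rough consistency check of the Phase 2 numbers reported above.
PROCESSORS = 6_144
CORES_PER_PROCESSOR = 14   # assumption: E5-2697 v3 core count
CLOCK_HZ = 2.6e9           # assumption: base clock frequency
FLOPS_PER_CYCLE = 16       # assumption: AVX2, two FMA units per core

# Total cores added in Phase 2
cores = PROCESSORS * CORES_PER_PROCESSOR
print(cores)  # 86016, matching the article

# Theoretical peak of the Phase 2 extension alone
peak_flops = cores * CLOCK_HZ * FLOPS_PER_CYCLE
print(f"{peak_flops / 1e15:.1f} petaflops")  # ~3.6, i.e. 6.8 minus the earlier 3.2
```

Under these assumptions, the Phase 2 extension alone contributes roughly 3.6 petaflops, which is consistent with the overall peak rising from 3.2 to 6.8 petaflops.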

The performance boost comes with remarkably little additional space: while more than doubling the overall system performance, Phase 2 required only one quarter of the original SuperMUC footprint.

Users will benefit from the fact that, as with SuperMUC Phase 1, the system expansion refrains from using accelerators. Although accelerators can provide huge performance increases, applications must be finely tuned to make the most of these highly parallel devices. Instead, the LRZ decided to stay with the technology it had used previously, which means that applications optimised for the original SuperMUC should run without any major adaptations to the software.

The LRZ supercomputing infrastructure now offers an additional 7.5 petabytes of SAN/DAS storage on GPFS Storage Servers (GSS). By combining IBM's Spectrum Scale technology with Lenovo System x servers, 5 petabytes of data are managed with an aggregated bandwidth of 100 GB/s across the distributed environment.

IBM's hot-water cooling technology was also installed as part of Phase 2, so SuperMUC will continue to provide energy-efficient HPC to its users. Through a network of micro-channels, the cooling system circulates water at 45°C over active system components, such as processors and memory, which means that no additional chillers are needed.

‘Energy efficiency is a key component of today’s computing devices – from smart phones to supercomputers,’ explained Arndt Bode, Chairman of the LRZ. ‘With Phase 2 of SuperMUC, LRZ continues to act as a pioneer in this field as we deliver proof that it is possible to significantly lower the energy consumption in data centres, thus drastically reducing the operating costs.’

Like SuperMUC Phase 1, the LRZ system expansion has been designed for exceptionally versatile deployment. The more than 150 different applications that run on SuperMUC in an average year range from problems in physics and fluid dynamics to a wealth of other scientific fields, including aerospace and automotive engineering, medicine and bioinformatics, and astrophysics and geophysics.

Professor Bode is confident that the newly available system will be of great benefit to scientists in their pursuit of answers to the great scientific questions of our time.