
GCS approves more than 1 billion HPC core hours for simulation projects

As part of its 19th Call for Large-Scale Projects, the Gauss Centre for Supercomputing (GCS) steering committee granted a total of more than 1 billion core hours to 17 research projects.

The research teams represent a wide range of scientific disciplines, including astrophysics, atomic and nuclear physics, biology, condensed matter physics, elementary particle physics, meteorology, and scientific engineering. Scientists awarded computing time will have immediate access to the GCS HPC resources for a period of 12 months.

Of the 17 approved large simulation projects, four will be followed with particular interest, as they will be the first large-scale projects to run on the Jülich Supercomputing Centre's (JSC's) new HPC system, the Jülich Wizard for European Leadership Science, or JUWELS.

The system, which replaces JSC's IBM Blue Gene/Q system JUQUEEN, will consist of multiple, architecturally diverse but fully integrated modules designed for specific simulation and data science tasks. The first module, built on a versatile cluster architecture using commodity multi-core CPUs, is currently being installed at JSC. It comprises about 2,550 compute nodes, each with two 24-core Intel Xeon Skylake CPUs, and about 2% of the nodes will also feature four of the latest-generation NVIDIA Volta GPUs.
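A rough sense of scale can be read straight off these figures. The following minimal sketch (plain arithmetic on the approximate numbers quoted above; the exact configuration may differ) puts the first module at roughly 122,000 CPU cores and around 200 GPUs:

```python
# Back-of-envelope check of the JUWELS cluster-module figures quoted above.
# All values are the approximate numbers from the article, not vendor specs.
nodes = 2550          # compute nodes in the first module ("about 2,550")
cpus_per_node = 2     # Intel Xeon Skylake CPUs per node
cores_per_cpu = 24    # cores per Skylake CPU

total_cores = nodes * cpus_per_node * cores_per_cpu
gpu_nodes = round(0.02 * nodes)  # "about 2%" of the nodes carry GPUs
gpus = gpu_nodes * 4             # four NVIDIA Volta GPUs per GPU node

print(f"CPU cores: {total_cores:,}")            # CPU cores: 122,400
print(f"GPU nodes: {gpu_nodes}, GPUs: {gpus}")  # GPU nodes: 51, GPUs: 204
```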

JUWELS, which will deliver a peak performance of 12 petaflops (10.4 petaflops without GPUs), will go into operation in late June 2018; the time window for researchers using this HPC platform will therefore remain open until June 2019. The projects supported come from fields as diverse as atomic and nuclear physics, condensed matter physics, elementary particle physics, and scientific engineering. ‘We are excited to further develop and implement the modular architecture concept with JUWELS,’ says Professor Thomas Lippert, director of the Jülich Supercomputing Centre. ‘Our next challenge is to work with our users on their applications to use the new system in the most efficient way.’
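The gap between the two peak figures also permits a hedged consistency check on the GPU partition: 1.6 petaflops spread across the roughly 204 Voltas estimated above works out to about 7.8 teraflops per GPU, which matches the double-precision peak commonly cited for an NVIDIA Tesla V100 (assuming the SXM2 variant; the article does not name the exact part):

```python
# Hedged consistency check: how much of JUWELS' 12 PF peak comes from GPUs?
peak_total_pf = 12.0   # peak performance with GPUs (petaflops)
peak_cpu_pf = 10.4     # peak performance without GPUs (petaflops)
gpus = 204             # estimate derived from the node counts above

gpu_share_pf = peak_total_pf - peak_cpu_pf  # ~1.6 PF attributable to GPUs
per_gpu_tf = gpu_share_pf * 1000 / gpus     # ~7.8 TF per GPU (FP64 peak)
print(f"GPU contribution: {gpu_share_pf:.1f} PF, ~{per_gpu_tf:.1f} TF per GPU")
```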

The Leibniz Supercomputing Centre (LRZ) in Garching near Munich will use its SuperMUC system to deliver 340 million core hours for the 19th GCS call. At the same time, LRZ will undergo a major technology shift during the course of this year.

While users can continue to leverage the computing power of the current SuperMUC installations until the end of 2019, they can also gradually migrate their applications to LRZ's new supercomputer, SuperMUC-NG ("next generation"), which is based on Intel Xeon Scalable processors and interconnected via Intel's Omni-Path network. Installation work has already begun at LRZ. SuperMUC-NG is expected to begin operation in late 2018 and will deliver a peak performance of 26.7 petaflops, a five-fold increase in computing power over the current system. The largest allocations on SuperMUC in the current large-scale call support projects in life sciences (75 million core hours), scientific engineering (75 million core hours), and condensed matter physics (60 million core hours).

Of the 1,060 million core hours granted in GCS's 19th large-scale call, more than half will be delivered by Hazel Hen, the Cray XC40 system installed at the High-Performance Computing Center Stuttgart (HLRS). The lion's share of the 580 million core hours allocated on the HLRS supercomputer will support four computationally challenging fluid dynamics projects, a research area in which HLRS has traditionally been very strong. Projects from the fields of elementary particle physics (three projects), biology, and astrophysics complement the scientific work supported by Hazel Hen.

The complete list of the 19th GCS Large-Scale Call projects can be found here.

The application procedure and decision criteria for the GCS Calls for Large-Scale Projects are described in detail here.
