
Europe's DEEP in the heart of Texas

Although SC15, the Supercomputing Conference and Exhibition, is taking place deep in the heart of Texas this year, an impressive array of initiatives from Europe will be on display, particularly those focused on research and development for Exascale computing.

Results from the successful DEEP project will be presented: this innovative prototype for an Exascale architecture is now up and running applications for science and engineering. Another European project, aimed at addressing I/O issues, is just getting underway, and Bull, the European systems vendor, will be showcasing its Exascale solution.

In their article ‘How to make HPC happen in Europe’, published in Scientific Computing World’s HPC Yearbook last month, Leonardo Flores Añover and Augusto Burgueño Arjona of the European Commission discussed how and why Europe should play a leading role globally in HPC, setting out the European programme for Exascale. Some aspects of that programme are covered in our Show Preview, and yet more will be on display in Austin, Texas, from 15 to 20 November, as reported here in the second of our four news reports.

On the commercial side, Bull is launching the Bull sequana X1000, its new range of Exascale-class supercomputers, the forerunners of a generation that will offer a thousand times more performance than current petaflops-scale systems. Bull is now part of the Atos group and, according to the company, the launch confirms Atos’ strategic commitment to developing, through its Bull brand, innovative systems for high-performance computing that can process large volumes of data. The new Exascale-class supercomputer has been designed in close cooperation with key clients, and will also address digital simulation challenges.

Thierry Breton, Chairman and CEO of Atos, said: ‘Digital simulation is an essential tool to accompany the digital transformation of our clients. In this new data-centric era, large processing infrastructures are an essential lever for development and wealth creation. Atos is proud to offer its customers a supercomputer that allows them to reinvent their business model.’

Philippe Vannier, Executive Vice-President Big Data & Security and Chief Technology Officer of the Atos Group, explained that Bull had been working in partnership with the CEA and other strategic clients to develop the new machines. With characteristic élan, Bull claims that its innovative capabilities allow it to deliver a solution that matches the technological challenges of Exascale, even while many other vendors, it says, are still struggling with technical limitations in the Exascale race.

Bull sequana is designed to integrate the most advanced technologies in terms of processors, interconnect, and data storage. It uses Bull’s own eXascale Interconnect (BXI) network, which frees the processors of all communications tasks, and it lowers energy consumption by a factor of 10 compared to the previous generation of supercomputers. Bull is targeting an electrical consumption of 20 megawatts for Exascale by 2020.

Since an exaflops-scale supercomputer will include tens of thousands of components, it has to be highly resilient, and the architecture and packaging of Bull sequana were designed with this objective in mind, including redundancy of critical components, an efficient software suite, and management tools.

Right across Europe, international consortia of researchers are working on Exascale-related projects, drawing on funding from the European Commission. Among the projects on display at SC15 will be the NEXTGenIO project, led by the Edinburgh Parallel Computing Centre (EPCC) in Scotland, an R&D effort to develop solutions to high-performance computing’s input/output (I/O) challenges.

Most high-end HPC systems employ data storage separate from the main computer system. As a result, the I/O subsystem often struggles to deal with the degree of parallelism, simply because too many processors are trying to communicate with slow disk storage at once. This I/O challenge must be addressed if extreme parallelism at Exascale is to deliver the performance and efficiency that users expect from their applications.
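To make the contention concrete, consider checkpointing from thousands of MPI ranks: if every rank issues its own small write, the filesystem is hit by thousands of competing requests, whereas MPI-IO’s collective interface lets the library aggregate them into a few large, well-formed writes. The sketch below is purely illustrative and is not drawn from the NEXTGenIO codebase; the file name and buffer size are arbitrary assumptions.

```c
/* Illustrative sketch: a collective MPI-IO write, where the library can
 * aggregate many ranks' requests before they reach slow disk storage.
 * Not NEXTGenIO code; file name and sizes are arbitrary assumptions. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 1 << 20;                    /* 1 Mi doubles per rank */
    double *buf = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) buf[i] = (double)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "checkpoint.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes its own contiguous slice; the collective call
     * allows the MPI library to combine them into a few large disk
     * requests instead of thousands of small, competing ones. */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, N, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```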

Peter Bauer, head of the Scalability Programme at the European Centre for Medium-Range Weather Forecasts (ECMWF), said: ‘Numerical weather prediction works on very tight schedules to produce forecast model runs and to manage the vast amounts of data involved in individual tasks, post-production and dissemination to users. The solutions that NEXTGenIO offers will greatly enhance efficiency by implementing a much closer link between computing and data management with novel technology. This will enable ECMWF Member and Co-operating States to benefit from enhanced model configurations without excessive data handling costs.’

Over the next three years, the NEXTGenIO project will explore the use of non-volatile memory technologies and associated system-ware developments through a co-design process. The partners are EPCC, Intel, Fujitsu, Technische Universität Dresden, Barcelona Supercomputing Centre, the European Centre for Medium-Range Weather Forecasts, Allinea, and Arctur.

A key output of the project will be a prototype system built around NV-DIMMs by Intel (3D XPoint technology) and Fujitsu. The partners will also develop an I/O workload simulator to allow quantitative improvements in I/O performance to be directly measured on the new system in a variety of research configurations. System-ware developed in the project will include performance analysis tools, improved job schedulers that take into account data locality and energy efficiency, optimised programming models, and APIs and drivers for optimal use of the new I/O hierarchy. Professor Mark Parsons, Executive Director of EPCC and NEXTGenIO Project Coordinator, said: ‘Intel’s 3D XPoint memory is going to impact all areas of computing over the next five years. I am immensely proud that EPCC will lead its development in the HPC arena through the NEXTGenIO project.’
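One plausible way for applications to exploit byte-addressable non-volatile memory in such a hierarchy is to map a file on an NVM-backed filesystem directly into the address space, persisting data with ordinary stores rather than block I/O calls. The generic sketch below illustrates that idea only; it is not an API from the project, and the path /mnt/pmem/state.bin is invented.

```c
/* Generic sketch of using byte-addressable NVM through a memory-mapped
 * file, bypassing the block I/O path. Not a NEXTGenIO API; the path
 * /mnt/pmem/state.bin is a made-up example. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (64 * 1024)

int main(void)
{
    int fd = open("/mnt/pmem/state.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* Map the NVM-backed file directly into the address space. */
    char *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Update state with ordinary stores instead of write() calls... */
    strcpy(region, "iteration 42 complete");

    /* ...then make the update durable. On true persistent memory a
     * cache-line flush would suffice; msync() is the portable fallback. */
    msync(region, REGION_SIZE, MS_SYNC);

    munmap(region, REGION_SIZE);
    close(fd);
    return 0;
}
```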

Also at SC15, the 16 partners involved in the EU-funded DEEP project will present their innovative HPC platform: a 500 TFlop/s prototype system that implements the Cluster-Booster concept and operates with a full system software stack and programming environment engineered for performance and ease of use.

The system is now up and running at the Jülich Supercomputing Centre (JSC) in Germany with a peak performance of 500 TFlop/s and is fully integrated with the hardware and software infrastructure on site. Initial application results clearly show the performance and efficiency potential of the system, and JSC plans to operate the machine for several years to come and make it available to external users.

The DEEP system achieves high density and energy efficiency through Eurotech’s Aurora technology, uses the EXTOLL HPC interconnect, and employs Intel multi- and many-core processors. Porting and optimisation of applications are facilitated by adherence to standards (MPI and OpenMP), and by extending the task-based OmpSs model developed by the Barcelona Supercomputing Centre (BSC). ParaStation MPI, provided as part of ParTec’s ParaStation ClusterSuite, has been turned into a Global MPI, the key system software component linking the Cluster and the Booster.
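To give a flavour of the task-based style that OmpSs pioneered, the sketch below uses the equivalent tasking constructs of standard OpenMP, in which the runtime orders work according to declared data dependences rather than hand-coded phases. It is not DEEP project code, and the two helper functions are invented placeholders.

```c
/* Task-based parallelism with data dependences, in standard OpenMP.
 * This illustrates the style OmpSs pioneered; it is not DEEP code, and
 * compute_halo()/update_interior() are invented placeholders. */
#include <stdio.h>

#define N 1024

static void compute_halo(double *a)    { a[0] = a[N - 1] = 0.0; }
static void update_interior(double *a) { for (int i = 1; i < N - 1; i++) a[i] *= 0.5; }

int main(void)
{
    static double field[N];  /* zero-initialised */

    #pragma omp parallel
    #pragma omp single
    {
        /* The runtime schedules each task when its inputs are ready,
         * instead of the programmer hand-ordering the phases. */
        #pragma omp task depend(out: field)
        compute_halo(field);

        #pragma omp task depend(inout: field)
        update_interior(field);

        #pragma omp taskwait
    }

    printf("field[1] = %f\n", field[1]);
    return 0;
}
```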

The novel Cluster-Booster concept is a heterogeneous architecture that enables applications to run at the right level of concurrency: parts of the code that are highly scalable profit from the throughput of the many-core Booster, while parts with limited scalability benefit from the high per-thread performance of the more conventional Cluster.
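One way such an offload could be expressed in standard MPI is through dynamic process creation, with the launcher placing the spawned processes on Booster nodes. The Cluster-side sketch below shows the general shape; it is illustrative only and not DEEP project code, and the executable name booster_kernel, the process count, and the workload are invented.

```c
/* Illustrative Cluster-side sketch: spawn a highly scalable kernel onto
 * Booster nodes via MPI's dynamic process interface, keeping the less
 * scalable part on the Cluster. "booster_kernel" and the workload are
 * invented for illustration; this is not DEEP project code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Spawn 64 worker processes; on a Cluster-Booster system the
     * launcher would place these on the many-core Booster side. */
    MPI_Comm booster;
    MPI_Comm_spawn("booster_kernel", MPI_ARGV_NULL, 64,
                   MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                   &booster, MPI_ERRCODES_IGNORE);

    /* Intercommunicator collectives: rank 0 acts as the root. */
    int root = (rank == 0) ? MPI_ROOT : MPI_PROC_NULL;

    /* Hand the scalable part of the problem to the Booster... */
    double params[2] = { 1.0e-6, 3.5 };
    MPI_Bcast(params, 2, MPI_DOUBLE, root, booster);

    /* ...and collect the reduced result back on the Cluster, where the
     * less scalable pre/post-processing runs on fast general cores. */
    double result = 0.0;
    MPI_Reduce(NULL, &result, 1, MPI_DOUBLE, MPI_SUM, root, booster);
    if (rank == 0) printf("Booster result: %f\n", result);

    MPI_Comm_disconnect(&booster);
    MPI_Finalize();
    return 0;
}
```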

Eurotech’s Aurora technology achieves tight packaging (the whole system occupies less than two racks) and high energy efficiency through direct liquid cooling. The Booster integrates 384 Intel Xeon Phi nodes communicating over a high-performance 3D torus network based on EXTOLL technology; it was designed by Eurotech in close collaboration with Heidelberg University and Leibniz Supercomputing Centre, under the guidance of Intel, within the ExaCluster Lab at Jülich Supercomputing Centre. DEEP’s Cluster-Booster architecture is relatively complex, so to mask this and make programming easy and familiar for application developers, the project has developed a complete software stack.

The project had a total budget of €18.5 million, and Professor Dr Thomas Lippert, Head of Jülich Supercomputing Centre and Scientific Coordinator of the DEEP project, said: ‘The companies, research institutes, and universities behind the consortium can all be proud of having created a unique system, which is both most generally applicable and also unimaginably scalable. The DEEP Cluster-Booster concept will become part of the future of supercomputing.’
