FEATURE

ISC Show report

Robert Roe reports on developments in AI that are helping to shape the future of high performance computing technology at the International Supercomputing Conference

In 2018, the focus of the International Supercomputing Conference (ISC) shifted from big data analytics towards AI and machine learning, which are now beginning to shape HPC technology development.

Now in its 33rd year, the ISC High Performance conference and exhibition, held in Frankfurt, Germany, set a new attendance record by attracting 3,500 people and 162 companies and research organisations from 59 countries to the event. HPC users and vendors came together over five days to exchange ideas and display new products.

The convergence of digital modelling and simulation, machine learning and AI, data analytics, and exascale computing is driving a new generation of HPC.

The 2018 conference touched on these themes in many sessions, talks and even throughout the exhibition, as new products and services spring up around the continued development of AI in HPC.

25th anniversary of the Top500

ISC also hosted the latest release of the Top500, the twice-yearly list of the world's fastest supercomputers, which this year marked its 25th anniversary. The release was also notable because, for the first time since 2012, a US supercomputer topped the list.

HPC is now feeling the effects of AI development across the spectrum of hardware, as new supercomputers are adapted to take advantage of AI and ML workflows.

‘The new Top500 list clearly shows that GPUs are the path forward for supercomputing in an era when Moore’s Law has ended,’ said Ian Buck, vice president and general manager of accelerated computing at Nvidia. ‘With the invention of our Volta Tensor Core GPU, we can now combine simulation with the power of AI to advance science, find cures for disease and develop new forms of energy. These new AI supercomputers will redefine the future of computing.’

The new leader of the list, ranked by the LINPACK benchmark, is Summit, a US Department of Energy (DOE) supercomputer at Oak Ridge National Laboratory (ORNL). The impact of AI can be felt even here, as the new system comes equipped with more than 27,000 GPUs – six for every two Power9 processors – linked through a Mellanox EDR InfiniBand interconnect. Summit has been heralded as the first supercomputer built around AI, and will be used in a number of projects that apply AI and ML techniques to grand scientific challenges, such as cancer.

Jack Dongarra, professor at the University of Tennessee and Oak Ridge National Laboratory, who co-authors the TOP500 list, said: ‘This year’s Top500 list represents a clear shift toward systems that support both HPC and AI computing. Accelerators, such as GPUs, are critical to deliver this capability at the performance and efficiency targets demanded by the supercomputing community.’

When ORNL launched the Summit supercomputer, Jeff Nichols, ORNL associate laboratory director for computing and computational sciences, commented: ‘Summit’s AI-optimised hardware also gives researchers an incredible platform for analysing massive datasets and creating intelligent software to accelerate the pace of discovery.’

Competition in AI drives computing performance

Nvidia has been at the forefront of AI development for a number of years, and this trend shows no sign of abating. The company continued to showcase hardware and software developments that are driving both the potential of AI and real-world scientific applications.

At ISC, Nvidia showcased its AI solutions and services, including the DGX-2 and the Nvidia GPU Cloud (NGC), alongside talks on the future of GPU technology in HPC and AI, NVSwitch, and the new Oak Ridge supercomputer, Summit.

Users of the Nvidia GPU Cloud can now access 35 deep learning, high-performance computing and visualisation containers from NGC. Containers allow scientists and researchers to deploy applications on the cloud easily and reliably. Over the past three years, containers have become a crucial tool for deploying applications on shared clusters and speeding up work, especially for researchers and data scientists running AI workloads.

Nvidia demonstrated several of these containers, reflecting the new applications released on NGC since SC17 in November last year. The new HPC and visualisation containers include CHROMA, CANDLE, PGI and VMD, in addition to the eight containers, including NAMD, GROMACS and ParaView, launched at the previous year’s conference.

The aim is similar to Nvidia’s other software tool and ecosystem efforts. By providing these containers, the company offers some assurance that an application will work and deliver a certain level of performance on Nvidia hardware. This helps scientists and researchers get set up with AI applications, while also building users’ familiarity and experience with the Nvidia platform.

Intel is also making inroads into the AI market, with several announcements at ISC regarding the company’s plans for future development in both AI and its HPC roadmap. Intel showed applications running on its Xeon Scalable processors for AI workloads. These chips include optimisations targeted at AI/ML, enabling much faster inferencing and training of neural networks.

Intel hosted several talks and demos at its booth, including a brain tumour screening simulation that is trained with high-resolution images using AI running on Intel Xeon Scalable processors.

Intel has also begun developing a software stack for artificial intelligence, which aims to provide optimised libraries for popular AI frameworks across many different AI workloads, such as speech and image recognition, language translation and object detection.

Intel also plans to use its FPGA technology for HPC/AI workloads, particularly in compression, image/object recognition and genome sequencing.

Intel’s efforts to develop an ecosystem around AI have already begun to impact scientific research. Intel and the Institut Curie, a French research centre, announced a partnership in May that aims to use AI in the implementation of bioinformatics tools, pipelines and techniques to improve molecular profiling across both research and clinical oncology.

The research centre will work with Intel to define high-performance computing and artificial intelligence infrastructure. ‘Artificial intelligence holds great promise for medical progress, including genomics,’ said Brian Krzanich, former Intel CEO. ‘The Intel-Institut Curie collaboration is one more example of Intel’s commitment to the development of bold artificial intelligence research for the good of humanity.’

‘Collaborating with Intel, Institut Curie will develop, use and implement innovative bioinformatics technologies to improve time to diagnosis, diagnostic accuracy, targeted treatment recommendations, and provide a better understanding of application needs to develop features that are needed for the healthcare sector,’ said Emmanuel Barillot, head of the Institut Curie Bioinformatics platform and director of the Bioinformatics, Biostatistics, Epidemiology and Computational Systems Research Unit.
