Running a railway - what are the lessons for supercomputing?

Tom Wilkie, editor-in-chief of Scientific Computing World, finds that this year's ISC programme prompts thoughts about the relevance of railways and art to high-performance computing  

At the beginning of this month, a thousand-strong ‘orange army’ – so called because of the distinctive colour of their weather-proof safety clothing – successfully repaired and re-opened the train line that hugs the Devon coast at Dawlish, portions of which had been washed into the sea by the great storms that hit the UK early in February. They had been working night and day, sometimes in atrocious conditions, to repair one of the most beautiful stretches of railway line in the world, designed by Isambard Kingdom Brunel and opened in 1846. But the motivation was not just aesthetic: this mainline track connects London and the far south-west of Britain and, more than a century and a half after its creation, remains an artery vital to the economic health of the nation.

One of the most important uses of supercomputers today is in modelling climate change and weather forecasting. This provides a clear link between this very 21st-century technology and that damaged hardware of the 19th century on the Devon coast. And the programme of ISC’14 does indeed include presentations from Oliver Fuhrer, the HPC lead for Swiss weather modelling, on ‘Weather Prediction and High Performance Computing’ and from George Mozdzynski at the European Centre for Medium-Range Weather Forecasts on the challenges of getting its weather forecast model to Exascale.

But that twisted and bent railway line has a metaphorical as well as a literal significance for high-performance computing, in my view. After 160 years, it still performs a vital economic function. The current time-horizon of HPC is not much more than five to ten years, with Exascale as the goal. But can we ensure that supercomputing embeds itself so deeply into the productive economy as to be indispensable a century from now? And will it be regarded as a thing of beauty, rather than just a technical artefact? And is Exascale the only destination?

Of course, the answer for the research community is that modern science would be impossible without harnessing the most powerful of supercomputers, not just in modelling and simulation but also in processing the data generated in such huge quantities by the international collaborations that now plan and carry out research projects in particle physics and astronomy.

Interestingly, applications of HPC to physics and astronomy are strikingly absent from the programme of ISC’14 published so far. Instead there is a great emphasis on biology and medicine. Shane Corder, from the Oklahoma Children's Mercy Hospital, will discuss ‘Using HPC to decode genomes for customised medicine’. Victor Jongeneel from the US National Center for Supercomputing Applications (NCSA) will consider ‘High-Performance, High-Capacity or High-Throughput Computing? The challenges of genomic big data’; Professor Olivier Michielin from Lausanne will look at ‘HPC-Supported Therapy Development in Oncology’; and Matthias Reumann, from IBM Research Zürich, will look at ‘Multiscale Systems Biology: Big Data Challenges in Supercomputing Enabling Translational Medicine in Cardiology’.

Exascale machines will be expensive to build and to run – the energy constraint is well known and there are ingenious efforts to overcome this obstacle. Alex Ramirez from Barcelona will be considering imaginative hardware options to circumvent the problem, while Robert Wisniewski from Intel will be looking at ‘Advancing HPC software from today through Exascale and beyond’. For national laboratories and Government-funded researchers, this expense will be a constraint but ultimately, if the research needs the compute power, then money will be found for the electricity bill.

However, that railway line was built not by Government but by a commercial company seeking to profit from offering a service to users of its technology. And the existence of that technology encouraged new industries to spring up, and allowed existing ones to expand their market. Even long-established, traditional industries benefited: agriculture and fisheries could now sell their produce from the south-west of Britain fresh in the food markets of London.

Will Exascale machines, which are so expensive to build and to run, have the same appeal to a wide range of industries in the 21st century? Or will it be only the big multi-national corporations, which can afford huge capital outlays, that will be users of this technology? At first sight, indications that this might be the case are visible in the ISC’14 programme, with the contribution by Tate Cantrell, CTO of Verne Global, and Susanne Obermeier, Global Data Centre Manager at BMW, explaining why BMW moved its HPC applications to a data centre in Iceland. There is also a session in the programme on ‘Supercomputers Solving Real Life Problems’ but it too concentrates on the larger-scale applications, not on the widespread diffusion of HPC out into the wider community.

On the other hand, there is the argument, powerfully expressed by the component and system vendors, that the drive to reduce energy consumption at Exascale will have the effect of making Petascale computing cheaper and more accessible to a wide range of companies. Small engineering organisations that never thought they would be able to afford such compute power will, according to this line of reasoning, be able to buy Petascale computing in the future for a price not much greater than they currently pay for a server.

The two-day track on ‘Industrial innovation through HPC’ takes up the challenge of setting out how computer simulations and digital modelling using high-end computing and storage can boost industrial companies’ productivity and competitiveness in the global market. It is explicitly intended to help engineers, manufacturers, and designers understand which tools and methods would help them solve their problems using HPC.

The organisers of this session frankly acknowledge that, in the past, the design of HPC clusters was driven by considerations of the technology itself: CPU, interconnect, and network. Nowadays, for clusters to be useful to a wider range of users, it is necessary to understand the applications that will be run on the cluster just as much as the ‘infrastructure’ technology of the cluster itself.

The train-spotter analogy applies here also: after all, railway lines carry commuter and freight traffic, not just Inter-City Express trains. Speed was not all-important, even in the application of 19th-century technology.

But of beauty, there is as yet no mention in the programme of ISC’14. The European Laboratory for Particle Physics, Cern, near Geneva, has for some years now operated a cultural policy for engaging with the arts. The laboratory believes that ‘particle physics and the arts are inextricably linked: both are ways to explore our existence – what it is to be human and our place in the universe. The two fields are natural creative partners for innovation in the 21st century.’

Perhaps it is time that supercomputer centres also opened their doors to an ‘Artist in Residence’, whose work might grace future meetings of the ISC?
