If we extrapolate the performance trends of the TOP500 list, we can expect to see a 100-petaflops system around 2016. Looking back, the trend has been a thousand-fold performance increase every 11 years – from gigaflops (Cray-2 in 1986) to teraflops (Intel ASCI Red in 1997), and on to petaflops (IBM Roadrunner in 2008). Exascale systems are expected to follow around 2019. But what are the trends in the near future?
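The extrapolation above is simple arithmetic: a thousand-fold increase every 11 years implies a steady annual growth factor, which can be projected forward from any milestone. The sketch below illustrates that calculation in Python; the milestone figures are taken from the text, and the projection is an illustration of the trend, not a measured forecast.

```python
# A thousand-fold increase per 11 years corresponds to an annual
# growth factor of 1000^(1/11), i.e. performance roughly doubles
# every ~1.1 years.
GROWTH_PER_YEAR = 1000 ** (1 / 11)  # ≈ 1.87x per year

def projected_flops(base_flops, base_year, target_year):
    """Extrapolate peak performance from a known milestone year."""
    return base_flops * GROWTH_PER_YEAR ** (target_year - base_year)

# Roadrunner: ~1 petaflops in 2008 (from the text).
petaflops = 1e15
print(f"2016: {projected_flops(petaflops, 2008, 2016):.2e} flops")  # ≈ 1.5e17, i.e. ~150 petaflops
print(f"2019: {projected_flops(petaflops, 2008, 2019):.2e} flops")  # 1e18, i.e. exaflops
```

Eleven years after the 2008 petaflops milestone, the projection lands exactly on one exaflops in 2019, which is the reasoning behind the exascale estimate quoted above.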
We all have a general idea of what the car of the future might look like: above all, it will likely rely more on electricity to cut emissions, and the innovative use of electronics for everything from the drivetrain to convenience features will help manufacturers differentiate themselves. These uses of electricity will make thermal management more important, and that’s just one area where modelling is making large contributions to the development of the car of the future.
Seven years of engine modelling
May 2009 marked a milestone for the Forschungszentrum Jülich and European supercomputing: Jugene, the first supercomputer in Europe with a computing power of more than one petaflops, was inaugurated in Jülich. The Jülich IBM Blue Gene/P system consists of roughly 73,000 quad-core PowerPC 450 processors clocked at 850 MHz and a memory of 144 terabytes. Jugene is the highly-scalable pillar in the Jülich heterogeneous supercomputing concept. The other is Juropa, a best-of-breed cluster computer integrated by the French HPC company Bull, which is used for general-purpose HPC applications.
Imagine HPC hardware that could optimise its own configuration – even down to the architecture of the CPU running the algorithms – based on the application. Such a virtual processor could accelerate often-used algorithms by several hundred per cent compared to a standard CPU. It sounds wild, but it is possible with a type of IC known as an FPGA (field-programmable gate array). The underlying technology has been around for decades, but it has been almost the exclusive domain of engineers writing code in specialised hardware-description languages such as VHDL or Verilog.
When the cost of a bad decision can run to hundreds of millions of pounds, significant pressure exists for oil and gas companies to get their drilling decisions right. Seismic surveys remain the principal tool of the prospecting geophysicist, and while the basics of the technique have remained relatively unchanged over the last couple of decades, the practice of interpreting the resultant data has been, and continues to be, revolutionised by breakthroughs in HPC.
Arguments over whether social sciences can truly be described as ‘science’ are perennial; they have been around much longer than I have, and no doubt will run and run long after I’m gone. What is not in doubt is that they are now at least as dependent on scientific computing methods and resources as their physical science counterparts – one project mentioned here uses an Oracle database courtesy of the National Grid Service (NGS), and its author comments: ‘I really can’t stress enough that the project might have ended if we hadn’t been given access to... the NGS.’
‘Bringing a drug to market is one of the most difficult and costly research and development activities, because of the complex environment in which they are used – the human body.’ So says Trish Meek, director of product strategy, life sciences at Thermo Fisher Scientific. This goes some way to explaining the tremendous amount of resources, time and money pharmaceutical companies pump into R&D. Refining a lead into a marketable drug can take years, and drug companies are constantly looking for ways to streamline the process.