June/July 2009

FEATURE

The future for HPC

If we extrapolate the performance trend of the TOP500 list, we could potentially see a 100-petaflops system in 2016. Looking back, the trend has been a thousand-fold performance increase every 11 years – from gigaflops (Cray-2 in 1986) to teraflops (Intel ASCI Red in 1997) and on to petaflops (IBM Roadrunner in 2008). On that basis, the first exascale systems are expected in 2019. But what are the trends in the near future?
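By way of illustration (this sketch is not from the article), the timeline follows from assuming a simple exponential trend – a factor of 1,000 every 11 years, anchored at one petaflops in 2008. The short Python snippet below, with illustrative names and default values, works through that arithmetic:

import math

# Assumed trend from the milestones cited above: a thousand-fold increase every 11 years.
GROWTH_PER_YEAR = 1000 ** (1 / 11)   # roughly 1.87x per year

def projected_year(target_flops, base_flops=1e15, base_year=2008):
    # Year in which the TOP500 leader would reach target_flops, if the trend holds.
    years_needed = math.log(target_flops / base_flops, GROWTH_PER_YEAR)
    return base_year + years_needed

print(projected_year(1e17))   # ~2015.3 – consistent with a 100-petaflops system around 2015/2016
print(projected_year(1e18))   # 2019.0 – the first exascale system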

FEATURE

Modelling rides into the future

We all have a general idea of what the car of the future might look like: above all, it will likely rely more on electricity to cut emissions, and the innovative use of electronics for everything from the drivetrain to convenience features will help manufacturers differentiate themselves. These uses of electricity will make thermal management more important, and that's just one area where modelling is making large contributions to the development of the car of the future.

Seven years of engine modelling

FEATURE

Far more than petaflops

May 2009 marked a milestone for the Forschungszentrum Jülich and for European supercomputing: Jugene, the first supercomputer in Europe with a computing power of more than one petaflop/s, was inaugurated in Jülich. The Jülich IBM Blue Gene/P system consists of around 72,000 quad-core PowerPC 450 compute nodes (nearly 300,000 cores) running at 850 MHz, with a total memory of 144 terabytes. Jugene is the highly scalable pillar of the Jülich heterogeneous supercomputing concept; the other is Juropa, a best-of-breed cluster computer integrated by the French HPC company Bull and used for general-purpose HPC applications.

FEATURE

Computer – reconfigure yourself!

Imagine HPC hardware that could optimise its own configuration – even down to the architecture of the CPU running the algorithms – based on the application. Such a virtual processor could accelerate often-used algorithms by several hundred per cent compared to a standard CPU. It sounds wild, but it is possible with a type of IC known as an FPGA (field-programmable gate array). The underlying technology has been around for decades, but it has been almost the exclusive domain of engineers writing code in specialised hardware-description languages such as VHDL or Verilog.

FEATURE

Drilling for fuel

When the cost of a bad decision can run to hundreds of millions of pounds, oil and gas companies are under significant pressure to get their drilling decisions right. Seismic surveys remain the principal tool of the prospecting geophysicist, and while the basics of the technique have remained relatively unchanged over the past couple of decades, the practice of interpreting the resulting data has been, and continues to be, revolutionised by breakthroughs in HPC.

FEATURE

Criminal patterns

Arguments over whether social sciences can truly be described as ‘science’ are perennial; they have been around much longer than I have, and no doubt will run and run long after I’m gone. What is not in doubt is that they are now at least as dependent on scientific computing methods and resources as their physical science counterparts – one project mentioned here uses an Oracle database courtesy of the National Grid Service (NGS), and its author comments: [1] ‘I really can’t stress enough that the project might have ended if we hadn’t been given access to... the NGS.’

FEATURE

Data pharming

‘Bringing a drug to market is one of the most difficult and costly research and development activities, because of the complex environment in which they are used – the human body.’ So says Trish Meek, director of product strategy, life sciences, at Thermo Fisher Scientific. This goes some way to explaining the tremendous amount of resource, time and money pharmaceutical companies pump into R&D. Refining a lead compound into a marketable drug can take years, and drug companies are constantly looking for ways to streamline the process.

Feature

Gemma Church finds out how astronomers are using simulations to investigate the extremities of our universe

Feature

Turning data into scientific insight is not a straightforward matter, writes Sophia Ktori

Feature

The Leibniz Supercomputing Centre (LRZ) is driving the development of new energy-efficient practices for HPC, as Robert Roe discovers

Feature

William Payne investigates the growing trend of using modular HPC, built on industry-standard hardware and software, to support users across a range of both existing and emerging application areas

Feature

Robert Roe looks at developments in crash testing simulation – including larger, more intricate simulations, the use of optimisation software, and the development of new methodologies through collaboration between ISVs, commercial companies, and research organisations