
The future for HPC

Performance projections based on the TOP500 list suggest that we could see a 100-petaflops system in 2016. Looking back, the trend has been a thousand-fold performance increase roughly every 11 years – from gigaflops (Cray-2 in 1986) to teraflops (Intel ASCI Red in 1997) and on to petaflops (IBM Roadrunner in 2008) – and exascale systems are therefore expected to appear first in 2019. But what are the trends in the near future?

Going from terascale to petascale HPC systems means that the number of elements (cores, interconnect, storage) within such a system will grow enormously. In the near future we will see clustered multicore systems with core counts in the range of 100,000 to 1,000,000. It is obvious that such highly parallel systems raise questions about parallel software and, especially, about the fault tolerance of hardware components.

But the real problem that every multicore system faces is the bandwidth limitation on memory access. A good target would be one byte of memory bandwidth per flop per core. With Nehalem, Intel is heading in this direction, at roughly half a byte per flop per core.

The growing number of computing components within the hardware architecture means that large efforts must be made to parallelise application programs. It is a fact that parallelisation tools lag far behind the possibilities offered by HPC hardware. Various programming techniques are already in use, such as exploiting data locality, meaning that the data is partitioned into blocks that fit into the CPU's local memory (as sketched below). Parallelisation has to be a focal point of every new program developed to run on a multicore system, and porting existing sequential industrial codes remains an open question.
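
As a simple illustration of the data-locality idea (a minimal sketch rather than code from any particular application; the matrix size N and block size BS are purely illustrative), a matrix multiplication can be tiled so that each block of the operands stays in fast local memory while it is being reused:

#include <stdio.h>
#include <stdlib.h>

#define N  1024   /* matrix dimension (illustrative)                          */
#define BS   64   /* block edge, chosen so a few BS x BS tiles fit in cache   */

/* Tiled matrix multiplication, C = A * B (row-major).
   Each BS x BS tile of A and B is reused from fast local memory many times
   before the loops move on, instead of streaming whole rows and columns
   from main memory on every iteration.                                      */
static void matmul_blocked(const float *A, const float *B, float *C)
{
    for (int ii = 0; ii < N; ii += BS)
        for (int jj = 0; jj < N; jj += BS)
            for (int kk = 0; kk < N; kk += BS)
                for (int i = ii; i < ii + BS; i++)
                    for (int j = jj; j < jj + BS; j++) {
                        float sum = C[i * N + j];
                        for (int k = kk; k < kk + BS; k++)
                            sum += A[i * N + k] * B[k * N + j];
                        C[i * N + j] = sum;
                    }
}

int main(void)
{
    float *A = (float *)calloc((size_t)N * N, sizeof(float));
    float *B = (float *)calloc((size_t)N * N, sizeof(float));
    float *C = (float *)calloc((size_t)N * N, sizeof(float));
    for (int i = 0; i < N * N; i++) { A[i] = 1.0f; B[i] = 2.0f; }

    matmul_blocked(A, B, C);
    printf("C[0][0] = %.1f\n", C[0]);   /* expect N * 1.0 * 2.0 = 2048.0 */

    free(A); free(B); free(C);
    return 0;
}

Compilers can apply this kind of blocking automatically in some cases, but for most real codes the programmer still has to restructure the loops by hand.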

A serious competitor to the multicore CPU is the graphics processing unit (GPU) – graphics cards used for scientific computing. GPUs are suitable only for tasks that amount to number crunching within a highly parallel processing environment.

Today the fastest GPUs from AMD and Nvidia are already in the teraflops range, whereas ordinary multicore chips are only slowly approaching this milestone.

The real problem with GPUs is that they cannot be programmed in the same way as x86, Sparc or Power CPUs. That is why Nvidia supports its GPUs with Cuda (compute unified device architecture), which provides a set of user-level subroutines and allows the GPU to be programmed from standard C or Fortran.
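
To give a flavour of what this looks like in practice (a minimal sketch, not drawn from any real application; the kernel, array sizes and launch parameters are purely illustrative), a Cuda program marks a routine as a GPU kernel and launches it across thousands of lightweight threads from otherwise ordinary C code:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

/* GPU kernel: each of the many lightweight threads handles one element. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;                     /* one million elements */
    const size_t bytes = n * sizeof(float);

    /* Host-side data, plain C as usual. */
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    /* Device-side copies on the GPU. */
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    /* Launch enough 256-thread blocks to cover all n elements. */
    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f\n", hy[0]);            /* expect 3*1 + 2 = 5.0 */

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

On a conventional CPU the same loop would run sequentially or across a handful of cores; on the GPU it is spread over thousands of hardware-scheduled threads, which is where the teraflops come from.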

Nevertheless, the Tsubame supercomputer at the Tokyo Institute of Technology is the first system in the TOP500 list to run Nvidia's Tesla graphics chips (170 Tesla S1070 units), giving 170 teraflops theoretically. In practice, the system reaches 77.48 teraflops and is number 29 in the TOP500 list (November 2008).

For the near future, we expect hardware architectures to combine specialised CPU- and GPU-type cores.

But now to the most challenging problem for HPC: energy consumption. It is well known that the energy consumption of HPC data centres will double within the next four to five years if the current trend continues. HPC manufacturers and data centres have to concentrate on energy efficiency, and the development of HPC systems that reduce energy consumption (50 to 70 per cent of the power is normally used to cool the equipment) is absolutely necessary. A straightforward extrapolation shows that exaflops systems will sit somewhere in the gigawatt range: at the few hundred megaflops per watt achieved by today's most efficient systems, an exaflops machine would draw power measured in gigawatts.

But perhaps we have already crossed the critical threshold of energy consumption in HPC, and further growth in energy consumption will become the limiting factor for future HPC applications.

The growth rate of data volumes is tremendous. With large multicore systems it is not unusual for terabytes to be produced within hours (CERN, for example). In the next few years we will see new developments in the area of solid-state drives (SSDs). Compared with standard hard disk drives they need considerably less power, which is one of their biggest advantages.

With the flexible structure of clustered multicore systems in mind, it is obvious that companies can start with small, low-priced HPC systems and expand to mid-sized or large systems as their budgets and application needs allow. ISVs and their software packages are increasingly jumping on the parallel bandwagon, making it easier for companies to integrate HPC systems into their environments. And last but not least, Microsoft is offering an alternative to Linux with its Windows HPC Server.

We will see systems that set a new flops/watt standard. In 2011 the Sequoia system, developed and manufactured by IBM, will be installed at LLNL and will start running in 2012. It will have 1.6 million Power processors and 1.6 petabytes of main memory, leading to a peak performance of more than 20 petaflops. From a technological point of view, the system architecture is based on a newly developed optical communication technology. As a whole, Sequoia will need only 6MW, resulting in the fabulous value of 3,000 Mflops/watt.

For 25 years, Sun has been guided by the vision that 'the network is the computer'. In 2009, cloud computing could make this vision more relevant than ever. Let's wait and see. At this year's ISC there will be a session on 'HPC & Cloud Computing – Synergy or Competition?', featuring speakers from Google, Amazon, Yahoo, Microsoft, IBM, HP and Sun Microsystems. These experts will present and discuss whether cloud computing will continue to impact the way IT infrastructure is designed and delivered to meet the varied needs of the web, of business and, especially, of HPC users.



