How much faster can China go?

From Leipzig’s legendary literary watering hole, Auerbachs Keller, Tom Wilkie ponders the place of China in supercomputing

Last month, China doubled the speed of the world’s fastest computer. Within three years, it may double the speed again. According to Dr Yutong Lu, director of the System Software Laboratory at the National University of Defence Technology (NUDT), the country plans to build a 100 petaflops machine by the end of the current five-year plan in 2015.

It was, she conceded, a tough target and the pressure was on to achieve it. But her demeanour gave no hint that she thought the goal unattainable.

Some observers believe that China's plans are not confined to the highest-performance machines: the country could soon be exporting its own computers (powered by its own processor chips) to countries such as Brazil and India, supplanting US and European suppliers.

As announced on 17 June at the opening of ISC’13, the fastest system in the world is now the Tianhe-2 (Milky Way 2) supercomputer, developed by the NUDT, which had a performance of 33.86 petaflops on the Linpack benchmark (and a peak performance of 54.9 petaflops). This is nearly double the performance of the next fastest machine: Titan, a Cray XK7 system installed at the US Department of Energy’s (DOE) Oak Ridge National Laboratory, which achieved 17.59 petaflops.

The Tianhe-2 has a combined total of 3,120,000 computing cores across 16,000 nodes, each with two Intel Xeon 'Ivy Bridge' processors and three Xeon Phi coprocessors. Some observers expressed surprise that the Chinese had gone all-out for performance using the first generation of the Intel Xeon Phi, known as 'Knights Corner', rather than waiting for the second-generation 'Knights Landing'. Those later processors will be fabricated on a 14-nanometre process and will be able to act as CPUs in their own right: usable either stand-alone or as PCIe coprocessors, they will not be bound by 'offloading' bottlenecks and they will have integrated on-package memory.
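The figures quoted above can be cross-checked with some back-of-the-envelope arithmetic. A minimal sketch follows; the per-chip core counts (12 cores per Ivy Bridge Xeon, 57 per Knights Corner Phi) are assumptions drawn from publicly reported specifications rather than from this article, which gives only the totals.

```python
# Cross-check the quoted Tianhe-2 totals from the per-node configuration.
NODES = 16_000
XEONS_PER_NODE, PHIS_PER_NODE = 2, 3
XEON_CORES, PHI_CORES = 12, 57  # assumed per-chip core counts

total_cores = NODES * (XEONS_PER_NODE * XEON_CORES +
                       PHIS_PER_NODE * PHI_CORES)
print(total_cores)  # 3120000 -- consistent with the 3,120,000 quoted

# Linpack efficiency: sustained benchmark result over theoretical peak.
LINPACK_PFLOPS, PEAK_PFLOPS = 33.86, 54.9
print(f"{LINPACK_PFLOPS / PEAK_PFLOPS:.1%}")  # 61.7%
```

The roughly 62 per cent Linpack efficiency is typical of accelerator-heavy systems of this era, where sustained performance falls well short of theoretical peak.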

However, China itself developed the interconnect, the operating system, the front-end processors and the software for Milky Way 2, so most of the system's features are Chinese; the Intel components were used only for the main compute part. (The Tianhe-1A system, which was the world's fastest in 2010 and is now tenth in the ranking, uses Nvidia GPUs to accelerate computation.)

According to Peter Beckman of the Argonne National Laboratory, China had a rational strategy for achieving leadership in supercomputing. ‘China has invested in the people,’ he said. ‘Over the past six years, they have invested in the software and interconnects that are home-made. It is software and interconnects that make a supercomputer a supercomputer.’ He said that processor chips entirely developed and made in China are waiting ‘off in the wings’.

Milky Way 2 was built at the NUDT, which is located in the urban area of Changsha, capital of Hunan Province in South-Central China, but it is to be relocated to the National Supercomputer Centre in Guangzhou by the end of the year. More supercomputers are expected out of the NUDT pipeline thereafter.

European supercomputer manufacturers, such as Bull and Eurotech, do not make their own chips – they integrate processors and other components into a functioning system. Only two nations at present, Japan and the USA, have the capability to provide all the components, from chips to software, for a supercomputer. But within possibly as little as three years, Beckman thinks that China may join the club and be exporting its own systems to other countries such as Brazil and India.

During ISC’13, Intel hosted a celebration dinner for the staff of the Chinese National University of Defence Technology. It took place at Auerbachs Keller in Leipzig, famous in German literature as the place where the devil, Mephistopheles, drank with Goethe’s Faust. Given that China could soon be the leading international competitor to the US in selling supercomputing systems, perhaps it stretches the literary allusion too far, but one nonetheless wonders whether a second Faustian pact was being celebrated there this June.
