
Future clears for HPC

The high-performance computing sector finished the year on a high with the announcement in November, by the US Department of Energy, that it was to spend $425 million on developing supercomputers that will leapfrog the international competition and open the way to exascale machines. It went some way to compensate for the news, also announced in November, that for the fourth consecutive time a Chinese system held the top spot in the Top500, the twice-yearly list of the world's fastest supercomputers.

A further boost came in the UK in December, this time on the application of supercomputers rather than on hardware, when the UK Chancellor of the Exchequer announced that, despite cutbacks elsewhere in public spending, the Government intended to spend £113 million on a Cognitive Computing Research Centre at the Hartree Centre, Daresbury, substantially expanding the data-centric cognitive computing research capabilities there. Significantly, the Hartree Centre works in collaboration with IBM, which leads the consortium that will build the two newly announced US supercomputers.

The bid to re-establish US leadership in the field of high-performance computing came in the form of an announcement, by US Secretary of Energy Ernest Moniz, that two machines will be built at a cost of $325 million: one at the Department of Energy's Oak Ridge National Laboratory and one at its Lawrence Livermore National Laboratory. A third machine will be built at Argonne National Laboratory, but that announcement has been deferred.

The new US machines will not come online until 2017 and so, for the moment, the Tianhe-2, a supercomputer developed by China's National University of Defense Technology, has no rivals for its position as the world's top system, with a performance of 33.86 petaflops. In fact, there was little change in the ranking of the world's top 10 supercomputers in the latest edition of the Top500. The only new entry was at number 10: a 3.57-petaflop Cray CS-Storm system installed at an undisclosed US government site.

Although the United States remains the top country, with 231 systems overall, this number is down from 233 in June 2014 and from 265 on the November 2013 list, and is nearing a historical low. In contrast, the number of European systems rose to 130, up from 116 last June, while the number of systems across Asia dropped from 132 to 120. The number of Chinese systems also fell, from 76 in June 2014 to 61, while over the same period Japan increased its number of systems from 30 to 32.

Growth in the aggregate performance of all 500 systems is also lagging, and this is noticeably influenced by the very large systems at the top of the list. Installations of very large systems up to June 2013 counteracted the reduced growth rate at the bottom of the list, but with few new systems at the top of the past few lists, the overall growth rate is now slowing.

In the USA, the joint Collaboration of Oak Ridge, Argonne, and Livermore (Coral) was established in late 2013 to leverage supercomputing investments, streamline procurement processes, and reduce the costs of developing supercomputers. In addition to the $325 million for procurement of the new machines, the Department of Energy also announced approximately $100 million to further develop extreme-scale supercomputing technologies as part of a research and development programme called FastForward 2.

On the Scientific Computing World website in July this year, Tom Wilkie discussed how the Coral programme was a classic example of the way the US Government can push technological development in the direction it desires through its procurement policy. There are (at least) three ways in which governments can force the pace of technological development, his article suggested. One is international research cooperation, usually on projects that do not have an immediate commercial product as their end-goal. A second is funding commercial companies to conduct technological research, thus subsidising, at taxpayers' expense, the creation or strengthening of technical expertise within commercial companies.


 Scientific Computing World's Tom Wilkie

The third route is subsidy by the back door, through military and civil procurement contracts. Use of procurement policies to push technology development in a particular direction has been a consistent – and very successful – strand in US Government policy since the end of the Second World War. Nearly two decades ago, in his book Knowing Machines, Donald MacKenzie, the distinguished sociologist of science based at Edinburgh University, showed how the very idea of a supercomputer had been shaped by US Government policy.

He concluded: ‘Without the [US National] weapons laboratories there would have been significantly less emphasis on floating-point-arithmetic speed as a criterion of computer performance.’ Had it been left solely to the market, vendors would have been more interested in catering to the requirements of business users (and other US agencies such as the cryptanalysts at the US National Security Agency), who were much less interested in sheer speed as measured by flops. This, he pointed out, would have led to a ‘subtly different’ definition of a supercomputer.

The purchasing power of the laboratories was critical, he argued, in shaping the direction of supercomputer development: ‘There were other people – particularly weather forecasters and some engineers and academic scientists – for whom floating point speed was key, but they lacked the sheer concentrated purchasing clout.’


