
Software critical to HPC’s future

In recent years, the HPC server market has been dominated by x86 processors. However, this looks set to change, as a number of new compute architectures from ARM and IBM, specifically designed for HPC, are taking shape and gaining traction across the high-performance computing market.

The HPC market is also seeing a period of invigoration driven by technological developments such as cloud computing and home-grown HPC technology. All of these are interwoven with a shift to a more data-centric model of computing, which focuses on data availability and movement rather than pure computational power.

Finally, governments across the globe are realising the economic benefit that HPC provides, not only in academia and defence, but also in many industry sectors. This has seen HPC centres and governments alike redefine the usage of HPC systems to improve economic competitiveness.

This shift in HPC architectures inevitably has an impact on the market share of the various companies vying for the HPC server market. Steve Conway, research vice president of High Performance Computing at the market-research company IDC, said: ‘More than 80 per cent of the systems sold each year by revenue are based on x86 processors, but there has also been demand for IBM POWER processors and ARM-based processors. There are still vendors that have RISC supercomputers: Fujitsu is the one for the K computer – that is their own version of the SPARC processor that they are using. In China, there are at least half a dozen home-grown processor initiatives.’

Addison Snell, CEO of Intersect360 Research, said: ‘We see a lot of adoption of many-core technologies, whether you think of them as processors, co-processors, or accelerators depending on GPU computing or Xeon Phi’.

The rise of multi- and many-core

The proliferation of compute accelerators and many-core processing technologies enables programmers to pursue parallelism on a scale that was impossible only a few years ago. This drive to extend the parallelism of HPC applications is becoming increasingly important as the scaling of transistor size runs up against the hard limits imposed by materials science. As Moore’s law slows down, programmers and hardware developers alike must increasingly look to gain performance through parallelisation rather than higher clock speeds.
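To make that shift concrete, the minimal sketch below, written in C with OpenMP and purely illustrative rather than drawn from any system mentioned in this article, shows how a simple dot product gains its performance by spreading loop iterations across cores instead of relying on a higher clock frequency.

    /* A minimal, hypothetical sketch: a dot product parallelised with OpenMP.
     * Compile with, for example: gcc -O2 -fopenmp dot.c -o dot
     * The speed-up comes from spreading iterations across cores, not from
     * any increase in clock frequency. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void)
    {
        const long n = 10 * 1000 * 1000;   /* illustrative problem size */
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        double sum = 0.0;

        for (long i = 0; i < n; i++) {     /* initialise the input vectors */
            a[i] = 1.0;
            b[i] = 2.0;
        }

        /* Each thread handles a chunk of iterations; the reduction clause
         * combines the per-thread partial sums, so no manual locking is needed. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += a[i] * b[i];

        printf("dot product = %f using up to %d threads\n",
               sum, omp_get_max_threads());
        free(a);
        free(b);
        return 0;
    }

The same pattern, partitioning independent work and combining partial results, underpins far larger HPC codes; the only route to more performance here is more cores doing useful work in parallel.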

Snell continued: ‘We predicted at the beginning of the year that 50 per cent of high-performance systems would have at least some multi- or many-core components, and I would stand by that prediction now. But we have to get to the surveys at the end of the year to see how close we were.’

This new paradigm relies on software that can fully utilise the parallelism offered by these compute accelerators, which means that users must develop expertise around a specific set of tools for each technology. So far, however, this reliance on more complex programming knowledge and tools has not discouraged HPC users, and the use of accelerators continues unabated.

Conway commented that there had been a slowdown in purchases of HPC systems in 2014-15, but explained that this trend can be misleading because of the relatively small sample size and the large price tag of the top systems. Conway said: ‘In the years 2010 to 2012 particularly, there was unusually large spending on big supercomputers. The next cycle will be coming in 2016-17 and many of those procurements have already been made. The market in any one year can be thrown one way or the other by just a few very large purchases at the top. I think the largest one so far is the K system in Japan, which had a total cost of around $550 million.’

Conway agreed that there had been a lot of interest around architectural innovation. He went on to explain that some of these new architectures are driven by a need to explore more specialised models for next-generation HPC systems. He said: ‘The x86 processor was not designed specifically for high-performance computing, having come up from the desktop and so forth. It is a kind of loose fit for high-performance computing and, because it is a loose fit, it has opened up opportunities for other processor technologies to come and fill the gap.’

However, Conway also stated that the x86 processor would remain the dominant compute architecture for several years to come, a view shared by Addison Snell.

China

In addition to developing technologies to fill gaps in the HPC market, specifically around more data-centric computing, HPC centres in some areas are being asked to wean themselves from a reliance on American HPC technology. This is driving a number of home-grown processor technologies – particularly in China.

Conway stated: ‘x86 processors are not very good for data-level parallelism which is where a lot of these other technologies are trying to fill in. Another thing that is going on in China and other places is a desire to have home-grown technology. That is a real challenge. But it really is a good thing in a way as it is accompanied by a growing recognition of the strategic value of high-performance computing, not just for science but the economy.’

At ISC this year, China presented plans for a Chinese-developed accelerator for use in the Tianhe-2A supercomputer, housed at the National Supercomputer Center in Guangzhou. The project has been spurred on by the US ban on the sale of Intel processors to four of China’s main supercomputing sites. However, Conway stressed that these grassroots development programmes started long before any US embargo on the sale of CPUs.

Conway said: ‘The Chinese started their own technology development efforts as far back as the 1980s, maybe further back. They have been at this for quite a while, with Shenwei and the other initiatives.’

Conway explained that ‘there has been this back and forth because of the Chinese government’, which has issued specific mandates around the development of home-grown supercomputing technology, particularly requiring ‘government affiliated, government controlled sites including banks and so forth, so called critical infrastructure sites, to rid itself of non-Chinese supercomputers.’

‘That has had a big impact on IBM in particular, because IBM has had such a strong presence in China, including those critical infrastructure sites. One of the outcomes of IBM’s sale of its x86 business to Lenovo is that, suddenly, those IBM supercomputers are no longer IBM supercomputers, they are Lenovo – so they can stay,’ concluded Conway. Lenovo is itself a Chinese company, so the deal allowed it to acquire not only IBM’s x86 technology, but also a large installed customer base in China, including installations at these critical infrastructure sites.

Cloud

Cloud has been a hot topic in the HPC industry over the last few years and, while it is not going away, it has so far found only a small user base. Snell commented: ‘Cloud is growing, albeit from a very small base within HPC.’ He explained that while HPC users understand cloud and what it can provide, ‘security is a concern, but I think that data movement is a larger concern.’

‘We see HPC in the cloud growing predominantly through two use cases. One is the true new user of HPC, who has not done it before and is starting out in the cloud as opposed to buying their own resources.’ Snell explained that by its very definition this segment of the market is very low volume and is unlikely to see much growth.

‘The other is the notion of bursting over peak capacity. I run along at 90 to 95 per cent of peak capacity all the time and then, for a short amount of time, I need a lot of additional capacity. It is not worth it for me to buy that capacity because I only need it for a short time, so I go out to the cloud,’ Snell said.

Snell stressed that cloud computing has been adopted more wholeheartedly by some specific groups of users, particularly academia and government. ‘In the commercial markets, pharmaceutical is probably the farthest down this path and potentially finance behind that,’ stated Snell.

While cloud can provide potentially unlimited computing power whenever and wherever it is needed, some adjustments need to be made to encourage adoption specifically within the HPC market. Conway stressed that use of cloud HPC is rising, but questioned how far the technology can be adopted, explaining that it is currently of limited use to much of the HPC market because of the architecture of public clouds.

Conway said: ‘Most of the public clouds are set up to manage embarrassingly parallel workloads, and so users are smart: they will send those embarrassingly parallel workloads off to the cloud and handle the other, less parallel, workloads in house or some other way.’ Embarrassingly parallel problems require little or no communication of results between tasks, which makes them much better suited to a cloud with limited interconnect speeds and I/O throughput.
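To illustrate what that means in practice, the hypothetical C/OpenMP sketch below (an assumed example, not code from any cloud provider) runs a set of independent Monte Carlo estimates of pi: each task has its own seed and exchanges nothing with the others until a final averaging step, which is precisely the property that lets such workloads tolerate the modest interconnect speeds and I/O throughput of public clouds.

    /* A minimal, hypothetical sketch of an embarrassingly parallel workload:
     * independent Monte Carlo estimates of pi. Because the tasks never
     * communicate while they run, they could just as easily be scattered
     * across loosely connected cloud instances.
     * Compile with, for example: gcc -O2 -fopenmp pi.c -o pi */
    #define _POSIX_C_SOURCE 200809L        /* for rand_r() */
    #include <stdio.h>
    #include <stdlib.h>

    #define TASKS   64                     /* independent tasks (illustrative) */
    #define SAMPLES 1000000                /* random samples per task */

    /* One self-contained task: estimate pi from SAMPLES random points. */
    static double estimate_pi(unsigned int seed)
    {
        long hits = 0;
        for (long i = 0; i < SAMPLES; i++) {
            double x = (double)rand_r(&seed) / RAND_MAX;
            double y = (double)rand_r(&seed) / RAND_MAX;
            if (x * x + y * y <= 1.0)
                hits++;
        }
        return 4.0 * (double)hits / SAMPLES;
    }

    int main(void)
    {
        double total = 0.0;

        /* No data is exchanged inside the loop; results are only combined
         * at the end, in the reduction. */
        #pragma omp parallel for reduction(+:total)
        for (int t = 0; t < TASKS; t++)
            total += estimate_pi((unsigned int)(t + 1));

        printf("pi is approximately %f\n", total / TASKS);
        return 0;
    }

A tightly coupled simulation, by contrast, would need to exchange data between tasks at every step, which is where the limited interconnects of public clouds become the bottleneck Conway describes.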

‘If public clouds had architectures that were more friendly to a larger portion of HPC workloads, then inevitably a larger portion of HPC workloads would be sent to the cloud,’ Conway concluded.

Intel vs the world

The increase in the number and scope of compute architectures available to the HPC market will affect Intel. However, although it may lose some market share in the short term, it is in such a dominant position that this is unlikely to affect its overall business.

Snell said: ‘Intel found itself in a strange position where it had a thoroughly dominant market share but people thought of their processors as commodity technology that was not so important to the overall ecosystem. The reality is that this was a critical technology for all of these systems.’

Snell explained that Intel has developed a strategy which centres on integrating functionality from other areas of the HPC hardware stack into its processors, in order to bring these elements closer to the computation. A knock-on effect is that Intel can take a much higher share of the HPC value stream, even if it were to lose a portion of its market share.

Snell said: ‘Intel is capitalising on its position by incorporating more of the value stream and hooking it into its processing architectures. They are integrating the I/O stacks, certain areas of the middleware layer, networking components and the like, in order to have a competitive advantage around the entire Intel ecosystem.’

However, this strategy puts Intel in direct competition with other technology providers in the HPC space, particularly interconnect and accelerator manufacturers such as Mellanox and Nvidia. ‘That strategy threatens people who have competing technologies in that part of the space. Natural allies for OpenPOWER are going to be people who compete in other areas of the value stream,’ said Snell.

Snell stated: ‘What would be interesting would be a situation in which OpenPOWER reaches a 20 to 25 per cent share of the market; you could get a situation where both sides claim a victory.’ He explained a potential scenario where IBM takes a relatively small market share from Intel, but at the same time Intel increases its revenues by capitalising on more of the hardware stack.

‘OpenPOWER would be looking at how much market share it had gained in the ecosystem but, on the other hand, Intel could go from a position where it has 90 to 95 per cent of a market in which it collects 15 per cent of the rent, to having 75 per cent of a market where it collects 30 per cent of the rent.’ By that arithmetic, Intel’s slice of total system spending could actually grow, from roughly 13.5 per cent (a 90 per cent share at a 15 per cent take) to 22.5 per cent (a 75 per cent share at a 30 per cent take).

This leaves Intel in a potentially very strong position, with new technologies built up around an established, dominant processor line. The strength of its CPUs in the HPC market gives Intel a secure foothold as it looks to increase adoption of other technologies, as has been the case with the release of its Xeon Phi coprocessors.

‘From there it can try to build its market share back up again. Intel is in a position where it could really still win, even if it loses a lot of market share initially,’ Snell concluded.

In a volatile, fast-paced industry such as HPC, it is hard to predict exactly how the market will be carved up among today’s various technologies. Increased competition from the likes of ARM and IBM, alongside the established presence of Intel and other technology providers such as Nvidia and Mellanox, will drive technological development across the industry.

Snell explained that, ultimately, there are compelling arguments on either side of the Intel-IBM debate: ‘Intel will argue that there is a lot of benefit to having all of these technologies integrated into a single architecture, while IBM is selecting best-of-breed technologies at every step in the value chain.’ People will choose the architectural mix that suits their workflows on an ongoing basis, he said.

Steve Conway explained that although there is some healthy competition, the lines of battle are not quite as clearly drawn as we would like to think. He said: ‘If you look at OpenPOWER, they position themselves, along with their chief partners Nvidia and Mellanox, as the alternative to x86. That is not really a clean positioning, because x86 processors are often used in conjunction with Nvidia and Mellanox technology. Nevertheless, what you see emerging is a battle of ecosystems.’

Architectures everywhere but not a programmer to code

The main difficulty with the increasing number of architectures available to the HPC community is that they are becoming more specialised in order to differentiate the technology in such a competitive market.

Snell said: ‘You have to have specialised tools for the various architectures. Intel is going to have tools based around the Xeon and Xeon Phi roadmap. CUDA has really been a grass-roots effort started by Nvidia around its own GPUs, but even if you go to some of the outliers like AMD with its own line of accelerated products, they are trying to do a more open-community based initiative with OpenCL. The problem is they do not have nearly the adoption that Intel or Nvidia has, and AMD has been losing that race.’

Ultimately, the success of a new HPC architecture relies on widespread adoption of its programming model by users. Without a sufficiently large user base generating code libraries and developing expertise around that model, an architecture is very unlikely to achieve sustainable success within the HPC market.

The unfortunate knock-on effect of these competing technologies is that it effectively splits the total user base. As the tools get more specialised, users must make a choice about which architecture they will spend time learning and optimising for their specific workloads.

Snell said: ‘AMD hasn’t lost, based on processor technology. Where they have lost is on the ecosystem around the processor.’

Steve Conway explained that there are parallels with the Chinese HPC market, which hosts some of the largest supercomputers in the world: while Chinese centres have the systems, in some areas they lack the programming expertise to utilise the hardware fully.

Conway said: ‘The issue in China is more to do with the size of the user base. China has really increased its capacity dramatically over the last few years in supercomputing, but the user base is still not very large, so a lot of those machines are under-utilised.’ This is a conundrum facing the entire HPC industry, as the number of programmers with HPC expertise is limited. Splitting them across architectural platforms and programming models splinters that pool of skills even further.

In 2014, Intersect360 Research surveyed HPC centres for a report written for the US Council on Competitiveness. The results showed that software scalability was viewed as the largest barrier to scaling applications by a factor of ten.

Conway said: ‘Software scalability will be a bigger and bigger issue, because what has happened over the last 15 years is that compute architectures have gotten extremely compute centric. It was almost always the case that the CPU was the fastest part of the system, but that has become very exaggerated. Now CPUs are not being utilised effectively.’

Conway concluded: ‘So much – more now than ever – is based on software. Leadership will be determined far more by software advances than hardware over the next decade.’


