Modular HPC goes mainstream
The advent of more scalable, industry-standard and lower-cost high-performance computing (HPC) is bringing the technology within reach of new users across a range of industries and smaller organisations.
Instead of being restricted to a handful of universities and government-funded research centres, modular HPC is increasingly feasible for commercial organisations. This is true for small to medium commercial entities, as well as a more diverse range of public organisations such as hospitals, colleges and research institutes.
Manufacturing, engineering, architecture, drug design, personalised medicine, finance, insurance, video analytics, data intensive computing, and market simulations are some of the applications that could drive adoption of modular HPC by new adopters. Industrial IoT, smart cities, smart transportation, smart buildings, engineering and structural visualisation are some of the commercial trends that require high levels of computing performance.
Over time, multiple smaller-scale HPC commercial implementations could greatly outweigh very large research installations in total value.
HPC is getting smaller
Since its inception, HPC has been about purpose-building the fastest, largest, biggest number crunching machines. The industry’s scoresheet is the TOP500, the twice-yearly list of the world’s fastest non-distributed supercomputers.
These huge specialist machines are typically the preserve of national laboratories using them for high-energy physics and other scientific computing and engineering computing.
This emphasis on the fastest, biggest machines is increasingly misplaced. For the last few years, an opposite trend has occurred. It has become easy and affordable to implement smaller scale high-performance systems that can be used at the SME level, or even by start-ups.
This trend is taking high-performance computing out of the public national laboratory and university environments and implanting it in the commercial world alongside enterprise systems. The proliferation of new applications means that, in many cases, these HPC systems are not even being used for scientific computing. Increasingly it is financial services firms, advertising, media and consumer goods companies using high-performance computing for complex financial transactions, video and behavioural analytics, accelerated HD animation, natural language processing, speech and media recognition, data science and deep learning.
Ultimately, it even raises the prospect of smaller on-site HPC systems becoming production systems, working full-time on the same jobs in a fault-tolerant setting aimed at full availability in mission-critical situations, such as hospitals, or edge-computing analytics in manufacturing plants or industrial infrastructure.
The advent of industry standard HPC
What has driven this emerging small and mid-scale HPC market among commercial companies, including mid-sized and smaller firms, is the advent of scalable HPC systems built on industry-standard hardware and software architectures.
Not only has that brought costs down, but the use of industry-standard architectures has made it much easier to apply enterprise skill sets and software to HPC environments.
‘In the past, large HPC systems were very complex and very proprietary,’ says Scott Tease, executive director, hyperscale and high performance computing at Lenovo. ‘Now, if you look at the TOP500 list, you could take any of a number of systems at the top of the list and you could build a cluster that’s only five or six nodes wide, or a 5,000 node cluster, using the same technologies and skill sets.’
‘When you take building blocks made up of standard components that can do pretty much any enterprise task and interconnect them together to create a supercomputer, the entry barrier is much lower. The systems look a lot more like what people are used to dealing with, and they can do a lot more tasks well. They are not single-purpose machines. It is an enormous reduction in the barrier to entry to HPC,’ says Tease.
Smaller HPC fuelling rapid growth
According to vendors, this sector of the market is now the fastest-growing segment in high-performance computing – which makes the emerging market for commercial mid and smaller scale HPC one of the fastest-growing segments across the whole of the global $3.4 trillion IT market.
‘The low end of high-performance computing has been growing faster than the high end of high-performance computing,’ says Scott Misage, vice president and general manager, high-performance computing, Hewlett-Packard Enterprise. ‘According to industry analyst IDC, the low end of the HPC market has been out-pacing the high end for a couple of years now, and both of them are growing faster than generalised IT.’
‘The high end of high-performance computing is largely driven by the public sector, and that is typically highly cyclical. But there are certainly small businesses, whether start-up or mature, that want to capitalise their high-performance computing environments – and they are buying smaller systems,’ says HPE’s Misage.
Some large firms are also establishing small, ad-hoc HPC installations, even when they have existing, larger HPC infrastructures alongside their enterprise systems. A firm that is carrying out a separate, secret R&D project, or a secret joint venture project with business partners, may establish a separate small-scale HPC installation in its own offices, or sometimes off-site. This will be isolated from both the main HPC infrastructure and the company’s enterprise systems in order to maintain secrecy over the project.
According to Nigel Gore, head of product management at British HPC specialist Iceotope, many of the systems they provide are for companies that are engaging in product or market developments that they want to keep secret. Iceotope packs HPC systems into water-cooled units, producing systems designed to blend unnoticed into any normal office or workplace, without requiring any special power systems or air conditioning. The company claims its systems could even be installed in a boardroom. A small HPC cluster can, therefore, be added easily to any normal workspace without the need for adaptation.
Cloud driving adoption of on-site HPC
Cloud implementations of HPC from the likes of AWS, Google, or Microsoft, can be seen as an alternative to small or mid-scale industry-standard HPC systems. For some smaller and mid-sized companies, public cloud HPC can replace the need for their own local on-site HPC installations. But for some companies – big and small – public cloud HPC is accelerating the adoption of smaller on-site HPC clusters at departmental or work-group level. Rather than replace on-site HPC, public cloud HPC is combining with smaller on-site HPC into a new cloud-local HPC model.
This new model can be called ‘hybrid burst-out’ HPC. According to Bart Mellenbergh, director, HPC and big data at Dell EMC EMEA, ‘some companies will reach a point where they have both a cloud-like HPC environment and a small local HPC installation. You can start with only one HPC server, and then you burst out to the cloud to do the computations that you need.’
Smaller, simple-to-manage HPC systems on-site supply a work-group with continual access to mid-range HPC compute performance, together with data-intensive pre-processing and post-processing, data-set munging, selection and other data-related operations. When extremely intense compute performance is required, the local on-site HPC system automatically transfers those tasks to a public HPC cloud instance, then retrieves the result and completes processing locally.
By processing the highly compute-intensive task in a cloud HPC instance, but carrying out all other compute and data handling tasks locally, the hybrid burst-out approach creates a highly cost-effective and agile model for smaller firms and work-groups to implement HPC for either scientific or alternative uses.
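The burst-out decision described above can be sketched as a simple placement policy. The thresholds, job model and function names below are illustrative assumptions for the sake of the sketch, not any vendor’s actual scheduler API; real systems such as batch schedulers apply far richer cost models.

```python
# Minimal sketch of a 'hybrid burst-out' placement policy: mid-range work
# stays on-site; only compute-heavy jobs that are cheap to ship burst out.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    core_hours: float   # estimated compute demand
    input_gb: float     # data that would have to move off-site

LOCAL_CORE_HOUR_BUDGET = 500.0   # assumed capacity of the on-site cluster
EGRESS_GB_LIMIT = 100.0          # beyond this, data movement dominates cost

def place(job: Job) -> str:
    """Return 'local' or 'cloud' for a job under the sketched policy."""
    if job.core_hours <= LOCAL_CORE_HOUR_BUDGET:
        return "local"   # mid-range work stays on-site
    if job.input_gb > EGRESS_GB_LIMIT:
        return "local"   # moving the data costs more than queuing locally
    return "cloud"       # burst out the compute-intensive peak

jobs = [
    Job("pre-processing", 40, 5),
    Job("market-simulation", 5000, 2),
    Job("video-analytics", 2000, 800),
]
placements = {j.name: place(j) for j in jobs}
```

The key design point, echoed by Tease’s comment below on data movement, is that the policy treats data gravity as a first-class cost: a job bursts out only when its compute demand outweighs the expense of shipping its inputs.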
For many types of workload for which firms would consider HPC, public cloud HPC may not be suitable. Smaller on-site local HPC may be much better suited.
‘Movement of significant amounts of data from on-site to off-site is so expensive and so cumbersome that it makes it difficult to move to the public cloud,’ says Lenovo’s Scott Tease. However, Tease sees great potential in the development of private HPC clouds.
For Dell’s Bart Mellenbergh, cloud-based HPC can make sense, based on workload requirements: ‘If you need continuous access to HPC compute, it doesn’t make economic sense to site HPC in the cloud. But if you’re only using it rarely, then it can make sense.’
According to HPE’s Scott Misage, this ability to mix and match different platform models for HPC requirements, depending on workload, makes smaller scale HPC far more cost-effective than before, and is fuelling a surge in market growth.
‘Around five per cent of the low-end of the market are consuming HPC from the cloud,’ says Misage. ‘But there’s a 30 per cent compound annual growth rate, which is very fast compared to the mainstream HPC market, which analysts say is growing at around eight per cent annually.’
It’s worth pointing out that, according to the latest Gartner estimates, the overall global IT market is growing at between -0.5 and 0.0 per cent. PC shipments are reckoned to have fallen by seven per cent.
Enabling individualised medicine
Industries that are adopting smaller-sized HPC systems include finance – one of the pioneers in embracing industry-standard HPC – as well as manufacturing, construction and engineering; even, in the case of one start-up at the STFC site at Daresbury, a small team designing emergency tents.
But two industries that are likely to be transformed by scalable, industry-standard HPC are life sciences and healthcare.
The uptake of HPC among life science firms – even smaller ones and some start-ups – underscores why commercial firms are turning to HPC in place of enterprise or general purpose commercial off-the-shelf (COTS) big data systems.
‘There are many spin-off companies trying to develop unique ways to compare and analyse genomic data,’ says Dell’s Bart Mellenbergh. ‘But places such as Cambridge are going much further. They’re not just looking at the genomic data; they’re looking at multiple sources of data, including medical records, genetic data and other sources of data. This is very data intensive, and is demanding HPC-type capabilities rather than conventional big data processing.’
As scalable, industry-standard HPC use accelerates among life science firms, there is likely to be a knock-on effect on hospitals. With individualised, or personalised, medicine and translational medicine increasingly being adopted within larger hospitals, the demand for cost-effective, scalable HPC systems within healthcare is likely to grow. For both translational and individualised medicine, the falling price of gene sequencers is likely to create increasing demand for HPC to process complex data. The end benefit for patients includes safer, faster treatments for cancer and a host of other aggressive diseases that currently resist conventional medical approaches.
‘It’s relatively easy for hospitals to go out and buy gene sequencers,’ says Mellenbergh. ‘Sequencers now are only a couple of thousand. Personalised medicine is getting much closer. Sequencing is relatively simple, but the back-end processing is hugely complex for personalised medicine, and that demands HPC. There you’ve got hospitals that haven’t a clue what HPC is about, but are just interested in a function. This is a huge shift in how HPC is likely to be used.’
New attitudes to HPC
In many other ways, scalable industry-standard HPC is changing the way HPC is done, how it is programmed, even the culture, attitudes and self-identity of the people who engage in HPC. It’s a shift from an exceptional, science and maths focused world to one where HPC is just another technology for getting answers.
‘This spread of HPC into the commercial space is changing the culture, and skills mix among HPC users,’ says Mellenbergh. ‘The users who are coming to use HPC today do not have a background in it.
‘A couple of decades ago, there was deep technical knowledge of HPC among users. They understood the architecture; they understood how to use the machines. They understood how to parallel program to extract the best out of the systems.
‘With the growth of HPC among commercial organisations, that background doesn’t exist today among an increasing number of users. There is now a large HPC community that wants to use the machines but doesn’t have the technical background. This is especially true in the life sciences.’
There is wide recognition that these new users do not have the same range of programming or systems management skills as HPC’s established scientific users. For HPE’s Scott Misage, the answer is to build solutions tailored to particular industry use cases. These tailored solutions – which pre-package hardware, systems software and applications into pre-tuned bundles – could increase even further the uptake of HPC by commercial companies. ‘We think this solution angle will further drive the usefulness of HPC into broader industries,’ says Misage. ‘We’re doing this in finance, life sciences for sequencing, in interior auto and aero design.’
For Lenovo, part of the solution is to wrap up the complexity of HPC even more, creating portal tools that make it far easier to use the range of powerful open-source tools that are increasingly available for industry-standard HPC platforms.
‘We’re involved in an effort to create a graphical user interface (GUI) to make all those open source tools much easier to use, so there won’t be that steep learning curve and it will allow more people to use these tools,’ says Lenovo’s Scott Tease.
For Dell’s Bart Mellenbergh, the spread of HPC into the commercial arena is changing the mindset of users, away from a focus on the machines and the technology and towards a search for business answers. ‘There’s far less of the attitude: “I’m proud to be doing HPC, I’m proud to be a programmer”, and far more “No, I’m doing biology”,’ says Mellenbergh.