With the race to reach the high-performance computing (HPC) industry’s next supercomputing milestone gathering pace, it’s little surprise that innovation in the HPC and data centre cooling market has stepped up a gear. The efficiency and performance of cooling technology are increasingly important in preventing the operational costs of HPC resources from reaching prohibitive levels.
If proof were needed, the latest figures released by intelligence firm Research and Markets predict strong growth in the global data centre cooling market over the 2017-2025 period, citing the rising number of data centres, particularly in developing regions such as Asia-Pacific, as a major driver. The revolution in IT infrastructure in countries such as China and India, driven by virtualisation, has also boosted market demand.
These findings correlate with a report recently published by Garner Insights, which valued the global data centre cooling industry at approximately $7.28 billion in 2016 and which anticipates a healthy growth rate of more than 15.37 per cent over the 2017-2025 period. The major factor propelling this growth, according to Garner, is the increasing number of SMEs and large enterprises adopting computing technologies, which will drive up power consumption and, in turn, require innovative cooling systems.
In addition, the HPC industry is getting ever closer to exascale, while the escalating volume of data from big data generation, video content and streaming services has reinforced the demands placed on data centres. Garner’s study – Global Data Center Cooling Market by Solution Type, Data Center Size and Regional Forecasts, 2018-2025 – observed a remarkable increase in the bandwidth offered by internet service providers globally. With enhanced technology these providers can deliver greater internet speeds, and the vast amount of data usage means infrastructure must be improved, driving up requirements for cooling technology. Here, some of the best-known cooling technology providers offer insights into the current state and future prospects of cooling technology in the HPC and data centre markets.
The direct approach
CoolIT is a recognised provider of direct contact liquid cooling (DCLC) technology for server manufacturers, with solutions compatible with Hewlett Packard Enterprise, Dell EMC, Intel, Huawei and Supermicro systems, as well as various ODM-direct servers. This approach has served the business well: it recently reported 60 per cent revenue growth for 2017, with data centres seeing the most significant increase, up 75 per cent on the previous year. A strategic partnership with STULZ – a provider of energy-efficient temperature and humidity management technology – has enabled a unique offering with worldwide service capabilities, ensuring customer support regardless of location.
Dell EMC selected the company’s Rack DCLC technology for its high-performance liquid-cooled PowerEdge offering in Q3 2017, and the business says it is on track to set revenue records in 2018 with the launch of five programmes in the first half of the year.
The provider believes its own ongoing growth is proof that DCLC is now being more widely adopted across various HPC verticals and is gaining popularity with hyperscale data centre operators.
Geoff Lyon, CEO and CTO, explains: ‘Over the next five years, we expect to see a significant shift towards widespread adoption, with liquid cooling being recognised as the standard in server and data centre cooling. Key trends that are driving the cooling industry are increased rack density, maximised performance and improved energy-efficiency. Liquid cooling is uniquely positioned to deliver on all three of these demands.
So, what are the benefits over the more traditional air cooling? Lyon elaborates: ‘Leading chip manufacturers continue to push computational boundaries with the release of high TDP processors that simply produce too much heat for air cooling solutions to manage. The properties of liquid, as a heat conductor, make it significantly more effective at removing heat from a processor than air, while using a fraction of the electrical power.
‘DCLC uses the exceptional thermal conductivity of warm liquid to provide dense, concentrated cooling. Through a patented, micro-channel architecture, our technology maximises coolant flow and directs the coolest liquid to the hottest area of the processor first. Coldplates can be as low as 2.4mm in height and are easily integrated into extremely compact, low-profile blade architectures, providing optimal performance.
‘DCLC technology drastically reduces the dependence on banks of fans and on the expensive air conditioning and air handling systems that come with traditional air cooling – not to mention the noise pollution from screaming fans. It also enables liquid cooling solutions that can operate with or without a facility water hook-up, through liquid-to-liquid and liquid-to-air coolant distribution units. Customers employing direct liquid cooling will realise competitive benefits by lowering their cooling costs while increasing compute density and maximising the performance of their servers.’
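Lyon’s point about liquid’s thermal advantage can be sanity-checked with basic sensible-heat arithmetic (Q = ṁ·cp·ΔT). The sketch below uses standard textbook property values and a hypothetical 300W processor load; none of the figures are CoolIT specifications:

```python
# Illustrative comparison of water vs air for removing a fixed heat load.
# Property values are standard textbook figures, not vendor data.

CP_WATER = 4186.0   # specific heat of water, J/(kg*K)
CP_AIR = 1005.0     # specific heat of air, J/(kg*K)
RHO_WATER = 1000.0  # density of water, kg/m^3
RHO_AIR = 1.2       # density of air at ~20C, kg/m^3

def volumetric_flow(q_watts, cp, rho, delta_t):
    """Volume flow (m^3/s) needed to absorb q_watts with a delta_t rise."""
    mass_flow = q_watts / (cp * delta_t)  # kg/s, from Q = m_dot * cp * dT
    return mass_flow / rho

# Hypothetical example: a 300W processor, 10K coolant temperature rise.
q, dt = 300.0, 10.0
water = volumetric_flow(q, CP_WATER, RHO_WATER, dt)
air = volumetric_flow(q, CP_AIR, RHO_AIR, dt)

print(f"water: {water * 1000 * 60:.2f} L/min")  # ~0.43 L/min
print(f"air:   {air:.4f} m^3/s")                # ~0.025 m^3/s
print(f"air needs ~{air / water:.0f}x the volume flow of water")
```

Water’s far higher heat capacity per unit volume is why a trickle of coolant through a micro-channel coldplate can replace a large volume of forced air, along with the fan power needed to move it.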
For customers looking at the options available, it is important to understand what will work most effectively for their individual needs. Lyon continues: ‘We review customer requirements and strategically pair them with the liquid cooling technology that best serves their unique data centre application. From coldplate assemblies through to rack manifolds and coolant distribution units, users are offered a wide range of modular, scalable products that combine to deliver a reliable, complete data centre liquid-cooling solution. Customers can also partner directly with our mechanical engineering and manufacturing teams to design and build custom cooling solutions.’
In the mix
For Motivair, the future brings continued advances in the design of CPUs and GPUs, which are rapidly outpacing traditional approaches to cooling and which will drive cooling system vendors to innovate at a faster rate.
Rich Whitmore, the firm’s president and CEO, said: ‘Previously untapped markets for HPC are fuelling market demand. These systems are being installed at sites that are frequently unprepared for today’s dense compute nodes. Because of this, the applied cooling solution must be flexible in design and scalable in nature.
‘There will absolutely be a mixture of technologies, mainly because there are so many individually unique new uses of HPC and big data.
‘No two customers are the same and no two facilities are the same. The days of HPC systems only being used in large national labs and government sites are over. Motivair provides multiple types of cooling equipment and we listen to customer needs. From there we determine whether they are a candidate for standard chilled door rack cooling systems or coolant distribution units (CDUs). Some customers require a customised solution.’
Whitmore believes that the migration of cooling back to the source, at rack level, is one of the largest changes in cooling technology.
‘Our ChilledDoors have become the gold standard in active rear door heat exchanger technology, offering a rack-agnostic approach, cooling up to 100kW per rack, and the global service and support needed to accommodate the HPC market,’ he said. ‘Motivair’s coolant distribution units provide customers and computer OEMs with an unmatched list of available options, resiliency features and heat removal capabilities. From an in-rack CDU capable of removing 100kW to the newest floor-mount CDU that removes 1.2MW in a 900mm (36”) wide cabinet, we can accommodate both current and future cooling system needs.’
For Green Revolution Cooling (GRC), customers are very much immersed in the future. CEO Peter Poulin believes that liquid cooling addresses several problems facing the industry, with HPC applications increasingly turning to alternative solutions to meet their ever more demanding requirements. The company’s CarnotJet liquid immersion cooling system uses minimal power while reducing the need for energy-intensive chillers and air handlers. The company says the entire cooling system consumes no more than three to five per cent of the IT load’s power, which results in a mechanical PUE (Power Usage Effectiveness) of 1.05 or less. Costs can also be cut, as immersion cooling is ‘almost free’, Poulin says, eliminating the need for raised floors, plenums and airflow engineering.
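The link between the three-to-five per cent figure and the quoted PUE of 1.05 follows directly from the definition of mechanical PUE: mechanical cooling power plus IT power, divided by IT power. A minimal sketch of the arithmetic, using a hypothetical 1MW IT load:

```python
def mechanical_pue(it_power_kw, cooling_power_kw):
    """Mechanical PUE: (IT load + cooling overhead) / IT load."""
    return (it_power_kw + cooling_power_kw) / it_power_kw

# Hypothetical 1MW IT load; cooling overhead at 3% and 5% of IT power.
it_kw = 1000.0
for fraction in (0.03, 0.05):
    pue = mechanical_pue(it_kw, it_kw * fraction)
    print(f"cooling at {fraction:.0%} of IT -> mechanical PUE {pue:.2f}")
# cooling at 3% of IT -> mechanical PUE 1.03
# cooling at 5% of IT -> mechanical PUE 1.05
```

For comparison, a conventionally air-cooled facility whose chillers and air handlers draw 40 per cent of the IT load would come out at 1.40 by the same formula, which is why the cooling overhead fraction is the number to watch.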
‘AMD has announced a new server processor and, in the fall, Intel announced its new Skylake processor,’ he says. ‘Nvidia keeps introducing new processors. All of these are more and more power-intensive, which is creating very high densities. Customers are asking whether we will be able to support densities as high as 80kW per rack by the end of the year. GRC can manage 130kW.’