
Cool solution to hot problem


Asetek has announced that the US Department of Energy’s National Renewable Energy Laboratory (NREL) will install Asetek’s RackCDU direct-to-chip 'hot water' liquid-cooling system as a retrofit to NREL’s Skynet HPC cluster.

As part of this liquid-cooling retrofit, the cluster will be relocated into the new data centre at the Energy Systems Integration Facility (ESIF) in Golden, Colorado, which is designed to be the most energy efficient data centre in the world, with a PUE (power usage effectiveness) of 1.06.
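PUE, the standard data-centre efficiency metric, is simply total facility power divided by IT equipment power, so a PUE of 1.06 means only 6 per cent overhead for cooling, power distribution and other non-IT loads. The sketch below illustrates the calculation; the kilowatt figures are hypothetical, not NREL measurements.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only: a 1,000 kW IT load in a facility
# drawing 1,060 kW overall corresponds to ESIF's 1.06 target.
it_load_kw = 1000.0
total_kw = 1060.0
print(round(pue(total_kw, it_load_kw), 2))  # 1.06
```

For comparison, a conventional air-cooled data centre with a PUE near 2.0 would draw roughly 2,000 kW in total for the same 1,000 kW IT load.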

The data centre at ESIF will use 'warm water' (75°F) liquid cooling to cool servers and to recover waste heat as the primary heat source for the building's office space and laboratories. The higher liquid temperatures used by Asetek's RackCDU (105°F) will improve waste-heat recovery and reduce the data centre's water consumption.

Because of RackCDU’s design, these performance improvements will be achieved without the need for a customised server design. The system will be installed as a drop-in retrofit to existing air-cooled servers and racks.

RackCDU is a hot-water, direct-to-chip, data-centre liquid-cooling system that enables cooling-energy savings of up to 80 per cent and density increases of 2.5x compared with modern air-cooled data centres. RackCDU removes heat from CPUs, GPUs, memory modules and other hot spots within servers and carries it out of the data centre in liquid, where it can be cooled for free using outside air or recycled to generate building heat and hot water.

By retrofitting an existing air-cooled HPC cluster with RackCDU, NREL will reduce the cooling energy required to operate the system, cut water usage in the cooling system, and increase server density within the cluster, reducing floor space and rack infrastructure requirements.

'Ambient water temperature in the hydronic system is a critical factor in data centre efficiency and sustainability,' said Steve Hammond, director of the Computational Science Center at NREL.

'Starting with warmer water on the inlet side can create an opportunity for enhanced waste-heat recovery and reduced water consumption, and in many locations can be accomplished without the need for active chilling or evaporative cooling, which could lead to dramatically reduced cooling costs.'