Unobtrusive power

As a rule, we don’t like ‘idle cycles’, whether queuing up at the supermarket, sitting in a traffic jam or waiting for time on the department server. This is even more the case today, when there is a constant push to get results faster. We do have some impressive compute resources at our disposal, but there is a lot of competition for these departmental or company machines and it’s generally necessary to queue up to use them.

This is what makes deskside HPC so attractive; it’s always at our disposal for immediate use. It might not have the full horsepower of corporate servers with their hundreds or even thousands of cores, but these machines have become so powerful that they can accomplish many of the tasks we are impatiently waiting to get done. For instance, a software developer might want a dedicated system for writing and testing code. Or an engineer might prefer to set up a very large simulation problem, creating its geometry and other settings locally, before submitting the job to a cluster to run the solver, the most compute-intensive part of the project.

Deskside HPC, what we might consider HPC outside of the data centre, has its own special requirements. Such a system must be physically small enough to sit unobtrusively next to a desk or perhaps in a printer room. It must run on power from standard office sockets. And in an office environment, it must be quiet. That last point can be difficult to achieve because of contradictory demands. On the one hand, we want more and increasingly powerful processors in our systems. That, in turn, means more heat is generated and this heat must be removed. In many cases this is done with fans, but if they are too loud, these systems won’t be welcome.

In the deskside HPC domain, it’s also important to make the distinction between a ‘personal cluster’ and a workstation. Today’s workstations might have single or dual processors and perhaps 16 or more cores, but there is no provision for the cores to work together across multiple nodes as in a cluster. When scientists and engineers hit the limits of workstation performance and need flexibility and headroom, they turn to a personal cluster: its many more cores can be put to work on one big job, or on multiple jobs launched in parallel whose results are then combined.
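To make that distinction concrete, here is a minimal sketch of cores ‘working together’ as a cluster, written against the standard MPI library (an illustrative example, not tied to any vendor’s product; it assumes an MPI implementation such as Open MPI and compilation with mpicc). Each process, potentially running on a different node, computes a partial sum, and MPI_Reduce combines the results:

```c
/* Minimal MPI sketch: every rank sums its own slice of 1..n,
 * then MPI_Reduce gathers the partial sums on rank 0.
 * Hypothetical illustration; assumes an MPI library is installed. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

    long n = 1000000, local = 0, total = 0;
    for (long i = rank + 1; i <= n; i += size)
        local += i;                         /* this rank's share   */

    /* Combine the partial sums from every core on every node. */
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over %d ranks: %ld\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Launched with, say, mpirun -np 120 ./sum, the same binary engages every core in a chassis on a single job; a standalone workstation offers no such provision for spanning nodes.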

Market shakeout

Interestingly, you generally won’t find deskside clusters from the traditional big-name HPC suppliers; the market doesn’t yet seem big enough to warrant setting up a product line in this area. For instance, in 2008 Cray brought deskside clusters into the limelight when it introduced the CX1, which supported from one to four blades, including a GPU blade. Since then, however, the product has been taken off the market. Some insights in this regard come from Jeffrey Cachat, now director of global channel sales and business development at Ciara Technologies, who was previously the program manager for the CX1 at Cray.

‘Back then, Cray and Ciara entered into a partnership where Cray took the base technology from Ciara and developed the CX1,’ he says. ‘They tried it for several years and while they learned a great deal about this market, they felt there were better business opportunities with more appropriate margins to focus on.’ Cray has meanwhile launched a midrange initiative with supercomputers starting at $200,000; these XE6m and XK6m systems target the market segments and customers previously served by the CX line.

Oliver Tennert, director of Technology Management and HPC Solutions at Transtec, which builds and also resells workstations and deskside HPC clusters, agrees with Ciara’s assessment and adds some further thoughts. Among the products Transtec marketed was the CX1, but it was not at all successful for the company. ‘We initially had lots of interest in the concept, but there were three major things that held back sales,’ Tennert explains. ‘One was noise. Even with efforts at noise reduction, the box was as loud as a vacuum cleaner, which discouraged its use in an office as intended. Second was the cabling on the rear of the unit, which was somewhat disorganised and not at all straightforward. Third, and most important, was price. Cray essentially took the Ciara unit and added a hefty mark-up, which customers balked at.’

Cray isn’t alone in dropping out of deskside clusters. SGI has pulled back on its marketing of the Octane III, which holds 120 cores and nearly 2TB of memory in a pedestal format. The Octane III was developed by SGI, but the company is no longer manufacturing new units; if a customer wishes to buy one, it comes from the company’s Remarketed Products Group, which resells units SGI has bought back from customers. Why this move? The company now feels there is only a small niche market for semi-powerful deskside HPC systems like the Octane III. In SGI’s view, most end-user needs in this segment are satisfied initially by high-performance desktop units available from the larger PC manufacturers at low prices; when these no longer deliver the performance users need, they leapfrog directly to a departmental cluster. This could also be a reflection of how organisational budgets are broken down.

Big players drop out, niche players pick up the ball

‘The deskside cluster is definitely a niche product,’ Ciara’s Cachat comments, ‘but it’s big enough to be interesting for a smaller company such as ours.’ The company has now introduced the third generation of its product, which Transtec’s Tennert feels has solved the three problems that previously discouraged users. Tennert adds that there does seem to be a market for this class of machine, noting that the rebirth of the CX1 as the Nexxus C series is attracting strong interest among his customers.

A typical workstation has two or four processor sockets, whereas the Nexxus C allows for 20 processors, 120 cores, 16 GPUs and almost 2TB of memory. Measuring 20.1 x 25.7 x 29.7 inches, it holds four trays with up to eight nodes. The operating temperature range is 10 to 30°C and the operating relative humidity is 8 to 90 per cent. Users can select from four modules, one being the NXG600 with as many as four Nvidia C2075 GPUs, each with 448 CUDA cores. The system supports both Microsoft Windows HPC Server 2008 R2 and Linux (Red Hat Enterprise Linux or CentOS). Emphasising the unit’s intended use as a cluster, options include Platform Cluster Manager and Bright Cluster Manager middleware.

Confident that there is indeed a market for personal clusters, T-Platforms recently introduced its T-Mini P system, which is basically a cube on rollers. It resembles the CX1 in that it also has a front-panel touch LCD for basic chassis management and a system-status applet for Windows; Windows HPC Server 2008 R2 is the primary OS and the vendor does not officially support Linux. The chassis accepts one head node (based on either the Xeon E5-2600 or the Opteron 6x00) and either eight ‘skinny’ nodes or four ‘fat’ nodes, likewise with Xeon or Opteron processors; the fat nodes are also large enough to hold Nvidia Tesla GPUs. Note that this unit uses six redundant (N+1) cooling fans.

Liquid cooling for cooling and overclocking

Removing heat from such loaded systems without creating any excess noise presents a true challenge. Manufacturers of ‘big iron’ have long used liquid cooling in large clusters and this technology is now working its way down to the desktop/deskside level. For instance, to address the issue of cooling, Ciara, headquartered in Montreal, has turned to another Canadian company, CoolIT Systems.

That company states that liquid cooling has recently become a requirement for high-end desktop processors with the launch of the Intel Core i7-3900 series. These processors do not ship with cooling solutions, but Intel offers the RTS2011LC liquid cooler, which can be purchased separately. According to CoolIT, this liquid-cooling solution, designed for the enthusiast/gaming market with features such as illumination, may not provide sufficient cooling for an engineering workstation or cluster. CoolIT therefore sells the ECO II which, like most similar liquid-cooling systems, consists of two components. The first is a header that fits on top of the CPU and contains a micro-channel fluid heat exchanger; a pump in the header sends heated fluid to the second major component, an aluminium radiator whose fin pitch and fluid path have been optimised for peak efficiency at both low and moderate fan speeds. Various radiator sizes are available, ranging from 80 x 104 x 25mm to 274 x 120 x 27mm. The datasheet lists acoustic noise of 23dBA.

This liquid cooling does more than just remove heat; because the CPUs run so much cooler, they can be overclocked for enhanced performance. Ciara, for instance, uses this feature to allow what it refers to as ‘safe overclocking’ of CPUs for applications that demand absolute processor speed, such as CAD and finite-element analysis. Nor is this cooling scheme confined to deskside clusters: the firm’s Kronos S900 pedestal workstation comes with dual Intel Xeon X5690 processors, which are spec’d for operation at 3.46 GHz but, with the addition of liquid cooling, run at 4.4 GHz.
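To put that in perspective, 4.4 GHz against the spec’d 3.46 GHz works out to 4.4/3.46 ≈ 1.27, a clock-rate gain of roughly 27 per cent obtained purely from the additional thermal headroom.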

Another supplier of sealed liquid-cooling systems is Asetek, which this summer announced that it has shipped its 1,000,000th sealed liquid cooler, a milestone helped by the fact that the company also provides the technology for Intel’s RTS2011LC cooler, which has proven very popular in gaming PCs. The Asetek system likewise consists of a cold-plate unit for the CPU, including an integrated pump and reservoir, a heat exchanger (radiator) and connecting tubes to transport the liquid. This liquid cooling, says the developer, delivers the equivalent of three metric tons of liquid per minute for every square metre of die area.

An interesting Asetek product is the 760GC Combo Liquid Cooler, which provides cooling heads for two processors: one Nvidia GTX 570 or 580 GPU and one popular AMD or Intel CPU. The firm is also active in the workstation market, supplying liquid cooling for HP’s Z400 and Z800 workstations as well as for several tier-two workstation suppliers.

Sealed liquid cooling from Asetek is also the choice of Boston for its Venom 2300-7T workstation, which comes with dual eight-core Xeon E5-2600 processors alongside Quadro 4000 and Tesla C2075 cards, all in a small midi-tower chassis. It also features 128GB of solid-state storage for the OS (optionally expandable to 400GB) and two 2TB drives (optionally 4TB) in RAID1.

Another workstation company relying on cooling systems from Asetek is Boxx Technologies. Its Model 4050 Xtreme, for instance, uses that method to overclock the Intel Core i7 processor to 4.5 GHz, and features up to two Nvidia Quadro, GeForce or Tesla cards as well as ATI graphics cards.

Because it supports both Nvidia’s graphics GPUs and general-purpose GPUs, Boxx is among the companies presently certified in Nvidia’s Maximus program; other certified companies include Dell, Lenovo, HP and Fujitsu. Maximus technology combines the visualisation and interactive design capability of Quadro GPUs and the high-performance computing power of Tesla GPUs into a single workstation. Tesla processors automatically perform the heavy lifting of engineering simulation computations or photorealistic rendering, which frees up CPU resources for the work they are best suited for – I/O, running the OS and multi-tasking – and also allows the Quadro to be dedicated to powering interactive graphics.
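Maximus handles this division of labour automatically, but the idea is easy to illustrate in ordinary host code. The following hypothetical C sketch (standard CUDA runtime calls, not the Maximus mechanism itself) enumerates the installed GPUs and steers compute work to a device that is not running a display watchdog, typically the Tesla, leaving the Quadro free for interactive graphics:

```c
/* Illustrative sketch only: select a compute GPU (e.g. a Tesla
 * with no display attached) and leave the display GPU (e.g. a
 * Quadro) for graphics. Maximus does this automatically; here
 * the standard CUDA runtime API is used by hand.
 * Compile with nvcc. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA devices found\n");
        return 1;
    }

    int chosen = 0;  /* fall back to the first device */
    for (int i = 0; i < count; i++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* kernelExecTimeoutEnabled is set when a display watchdog
         * governs the device, a hint that it drives a monitor. */
        if (!prop.kernelExecTimeoutEnabled) {
            chosen = i;
            printf("Using %s for compute\n", prop.name);
            break;
        }
    }

    cudaSetDevice(chosen);
    /* ... launch simulation or rendering kernels on 'chosen' ... */
    return 0;
}
```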


