Supermicro MicroBlade servers enable one of the world’s most efficient data centres

Supermicro, a provider of compute, storage and networking technologies, has announced the deployment of its disaggregated MicroBlade systems at one of the world’s highest-density, most energy-efficient data centres.

An unnamed Fortune 100 company has deployed more than 30,000 Supermicro MicroBlade servers at its Silicon Valley data centre, a facility with a Power Usage Effectiveness (PUE) of 1.06, to support the company’s growing need for high-performance computing (HPC).

Compared with a traditional data centre, which can operate with a PUE as high as 1.49, the new data centre achieves an 88 per cent improvement in overall energy efficiency. When the build is complete, the company expects savings of $13.18M per year in total energy costs across the entire data centre.
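The 88 per cent figure is consistent with comparing the energy overhead above the IT load implied by each PUE value, although the article does not state the exact calculation; a minimal sketch under that assumption:

```python
# PUE = total facility energy / IT equipment energy, so the energy
# overhead above the IT load (cooling, power distribution, etc.)
# is (PUE - 1.0) of the IT load.
traditional_pue = 1.49
microblade_pue = 1.06

traditional_overhead = traditional_pue - 1.0   # 0.49
microblade_overhead = microblade_pue - 1.0     # 0.06

# Reduction in overhead energy relative to the traditional facility.
reduction = (traditional_overhead - microblade_overhead) / traditional_overhead
print(f"{reduction:.0%}")  # → 88%
```

On this reading, the new facility eliminates roughly 88 per cent of the non-IT energy overhead of a PUE 1.49 facility, not 88 per cent of total energy use.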

‘With 280 Intel Xeon processor-based servers in a 9-foot rack, and up to 86 per cent improvement in system cooling efficiency, the MicroBlade system is a game changer,’ said Charles Liang, president and CEO of Supermicro. ‘Leveraging our Silicon Valley-based engineering team and global service capabilities, Supermicro collaborated closely with the company’s IT department and delivered a solution from design concept to optimally tuned, high-quality product with full supply chain and large-scale delivery support in five weeks. With our new MicroBlade and SuperBlade, we have changed the game of blade architecture to make blades the lowest in initial acquisition cost for our customers, not just the best in terms of computation, power efficiency, cable-less design, and TCO.’

In addition to energy efficiency, the Supermicro installation is designed to allow multiple upgrades of server components independently, without replacing the entire server. The MicroBlade disaggregated architecture breaks the interdependence between the major server subsystems, enabling independent upgrades of CPU+memory, I/O, storage and power/cooling. Each component can therefore be refreshed on its own schedule to capture Moore’s Law improvements in performance and efficiency, rather than waiting for a single monolithic server refresh cycle.

‘A disaggregated server architecture enables the independent upgrade of the compute modules without replacing the rest of the enclosure, including networking, storage, fans and power supplies, which refresh at a slower rate,’ said Shesha Krishnapura, Intel Fellow and Intel IT CTO. ‘By disaggregating CPU and memory, each resource can be refreshed independently, allowing data centres to reduce refresh cycle costs. When viewed over a three to five-year refresh cycle, an Intel Rack Scale Design disaggregated server architecture will deliver, on average, higher-performing and more efficient servers at lower cost than traditional rip-and-replace models by allowing data centres to optimise adoption of new and improved technologies.’

