Exploring energy efficiency

Energy efficiency is one of the key challenges of modern computing – in an era where even the most efficient supercomputers come with massive energy bills, technology that can help to increase energy efficiency is critical to sustainable HPC development.

The problem, of course, is the trade-off with performance: today's most power-efficient supercomputers are not the most powerful, and the most powerful are not the most power-efficient. Shoubu, the current leader of the Green500, only just breaks into the top 100 most powerful supercomputers in the world at #94 (as of June 2016). Meanwhile, the most powerful supercomputer in the world, China's Sunway TaihuLight (as of June 2016), has a vastly superior Rmax value (93 petaFLOPs versus Shoubu's 1 petaFLOP) but consumes almost 30 times as much power (15.4 MW versus 0.55 MW).

Clearly, the balance of power and performance is a key issue. As we move towards the exascale era of high-performance computing, a significant step forward in power efficiency will be necessary – even for today’s most efficient systems.

This is where Adept enters the frame. Adept is a three-year European project, funded under the European Commission's Seventh Framework Programme, whose sole focus is understanding and improving energy efficiency in both high-performance and embedded computing. Although they are distinct fields, both share a similar goal where efficiency is concerned: high performance for low power and energy usage. HPC developers excel at exploiting parallelism for increased performance, while embedded engineers' fixed energy budgets make them experts at balancing performance with power usage.

Since the project began in September 2013, its major threads have been exploring the implications of parallelism in programming and investigating users' choice of hardware. With this data, we have been developing a number of tools that enable users to quantify power and performance in both software and hardware, and then design a more efficient system. We can also use the tools to predict the performance of a piece of software on a system that may not be available, or does not yet exist – the aim is to take the guesswork out of novel system design.

The tools suite has three major components: a benchmark suite, a power measurement tool, and a power and performance prediction tool.


Benchmarking

The Adept benchmark suite is designed to test systems using simple operations, to determine how efficiently they are carried out. The benchmarks come at several levels of increasing complexity, from simple-operation nano benchmarks up to full small applications. In between sit micro benchmarks, which test functions such as branch and jump statements, function calls, and I/O operations, and kernel benchmarks, which cover basic linear algebra (BLAS) operations (including dot product, matrix-vector and matrix-matrix multiplication), 2D and 3D stencil computations, and a conjugate gradient solver.
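The Adept benchmarks themselves are implemented in compiled languages for a range of programming models, but the basic shape of a kernel-level benchmark harness can be sketched in a few lines. The sketch below (function names are illustrative, not Adept's actual API) times a naive dot product, one of the BLAS-style kernels mentioned above:

```python
import time

def dot(x, y):
    """Naive dot product -- the kind of simple kernel a benchmark suite times."""
    total = 0.0
    for a, b in zip(x, y):
        total += a * b
    return total

def benchmark(fn, *args, repeats=5):
    """Run fn several times and return the best wall-clock time in seconds.
    Taking the minimum of several repeats reduces noise from other processes."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

n = 100_000
x = [1.0] * n
y = [2.0] * n
elapsed = benchmark(dot, x, y)
print(f"dot product of {n} elements: {elapsed:.6f} s")
```

Pairing a timing harness like this with a power meter is what turns a performance benchmark into an efficiency benchmark: the same kernel run yields both a runtime and an energy figure.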

All benchmarks come in a number of common implementations so that they can be used on a wide variety of systems. These cover the parallel programming models OpenMP and MPI, as well as less commonly used models such as UPC and Erlang. A wrapper library has also been implemented to read the values of the RAPL (Running Average Power Limit) counters on supported Intel processors.
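On Linux, the RAPL counters are exposed through the powercap sysfs interface, which is one way such a wrapper can read them. The sketch below is not the Adept wrapper's actual API, just an illustration of the idea; it returns `None` on machines without RAPL support:

```python
from pathlib import Path

# Linux exposes Intel RAPL energy counters through the powercap interface.
# intel-rapl:0 is the first CPU package's energy counter, in microjoules.
RAPL_PKG = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def read_energy_uj():
    """Return the package energy counter in microjoules,
    or None if RAPL is not available on this machine."""
    try:
        return int(RAPL_PKG.read_text())
    except (OSError, ValueError):
        return None

def energy_of(fn, *args):
    """Estimate the energy (in joules) consumed while fn runs.
    Returns None if RAPL is unavailable. Note that the hardware counter
    wraps around periodically; a robust wrapper must handle that."""
    before = read_energy_uj()
    fn(*args)
    after = read_energy_uj()
    if before is None or after is None:
        return None
    return (after - before) / 1e6  # microjoules -> joules
```

RAPL only covers the processor package and memory domains, which is why Adept complements it with the external power measurement system described below.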

The benchmark suite is intended to characterise systems. It works in tandem with the Adept Power Measurement system to build a power profile – a full picture of how the software and hardware of a specific system work together.

Power Measurement

The Adept Power Measurement system implements a fine-grained power measurement infrastructure to discover how every component of a system uses power over a given time. The system can take up to one million samples per second, highlighting power variations that coarser-grained tools will not detect. This enables researchers to study the impact of even small changes in software or hardware on power and energy consumption.

We are also able to measure power from all components – even those outside the power envelope typically covered by other measurement infrastructures. The Adept system reads the current and voltage on the power lines that supply individual components, such as the CPU, memory, accelerator, or disc.
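From paired voltage and current samples, instantaneous power is simply P = V × I, and energy is the integral of power over time. The following sketch (an illustration of the arithmetic, not Adept's measurement code) shows how sampled V/I traces turn into an energy figure:

```python
def energy_joules(voltage, current, sample_rate_hz):
    """Integrate instantaneous power P = V * I over the sampled interval.

    voltage, current: equal-length sample lists (volts, amps)
    sample_rate_hz:   samples per second
    Returns energy in joules.
    """
    dt = 1.0 / sample_rate_hz
    power = [v * i for v, i in zip(voltage, current)]
    # Trapezoidal rule: average adjacent power samples, multiply by dt.
    return sum((p0 + p1) / 2.0 * dt for p0, p1 in zip(power, power[1:]))

# One millisecond of a 12 V rail sampled at 1 MHz (1,000 samples),
# drawing a constant 2 A: energy is approximately 12 V * 2 A * 1 ms = 0.024 J.
v = [12.0] * 1000
i = [2.0] * 1000
print(energy_joules(v, i, 1_000_000))
```

The high sample rate matters here: a spike lasting a few microseconds contributes real energy to the integral, but vanishes entirely if the trace is averaged over coarser intervals.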

Performance and power prediction

The Adept Performance and Power Prediction tool is another major development from the project, and a natural follow-on from the power measurement tools already described. It uses detailed, fast profiling and statistical modelling techniques to examine a software binary and predict how well a given CPU and memory hierarchy will perform, and how power-efficient it will be – even if we do not have access to the system, or the system does not yet exist.

This gives us a powerful tool for the design of new systems, as designers can see exactly how a given software application will run on a theoretical system before it is built. We have focused on improving this tool throughout the project to increase its accuracy, and have created a prototype that lets us explore the design space for smarter, cheaper, and more efficient systems, because a system's performance and power behaviour can be matched to a specific workload.
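Adept's models are far more detailed than this, but the core idea – predicting how a workload behaves on hardware you cannot run it on – can be illustrated with a simple roofline-style estimate. All figures below are invented for the example:

```python
def predict_runtime(flops, bytes_moved, peak_flops, peak_bw):
    """Roofline-style lower bound on runtime: a kernel is limited either
    by compute throughput or by memory bandwidth, whichever is slower."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

def predict_energy(runtime_s, avg_power_w):
    """Energy = average power * time."""
    return runtime_s * avg_power_w

# Hypothetical kernel profile: 1e12 FLOPs, moves 4e11 bytes.
kernel = dict(flops=1e12, bytes_moved=4e11)

# Two hypothetical target systems: (peak FLOP/s, bandwidth B/s, avg watts).
systems = {
    "fast-but-hungry": (5e12, 2e11, 300.0),
    "slow-but-frugal": (1e12, 1e11, 80.0),
}

for name, (pf, bw, watts) in systems.items():
    t = predict_runtime(kernel["flops"], kernel["bytes_moved"], pf, bw)
    e = predict_energy(t, watts)
    print(f"{name}: {t:.2f} s, {e:.1f} J")
```

Even this toy model captures the kind of trade-off the prediction tool exposes: in the example, the slower system takes twice as long but finishes the kernel on roughly half the energy, so the "best" design depends on whether time or energy is the binding constraint.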

Sharing knowledge

The goal of the Adept project is to make it easier for software developers and hardware designers to create more efficient systems, by removing the need for them to guess how software and hardware will perform together. Efficient design of efficient systems – this is the ultimate goal. The Adept tools allow developers to make sensible decisions on new implementations based on real data rather than guesswork, and allow owners of existing systems to see where efficiency can be improved – whether that means updating a power-hungry component or switching to a different, more efficient software implementation.

Advances made by the Adept project are already being used in other projects with energy efficiency aspects, such as the NEXTGenIO and ExaFLOW projects. These are both projects with the exascale goal in mind, and as such require a workable efficiency solution to meet their goals. We have seen interest in the Adept tools suite from a number of institutions and companies who are keen to explore them within their own environments.

We have gathered a large amount of knowledge within the project, as well as the tools required to share this knowledge with others. The Adept project may be ending, but the advances we have made could have significant implications for the high-performance computing community.

Mirren White is a project dissemination and exploitation officer at EPCC
