Taking the initiative
Manoj Nayee, managing director at Boston, discusses the revolutionary impact of GPUs
The need for greater performance in the high-performance computing (HPC) arena is on the rise as more complex computational problems become the focus of research projects. However, it takes the addition of hundreds or thousands of individual nodes to significantly improve the performance of a traditional CPU-only HPC cluster. Although this method is effective, it takes up a lot of valuable space and consumes a great deal of power, making CPU-only supercomputers extremely costly to run. An increasingly popular alternative is the hybrid computing model, in which GPUs (graphics processing units) and CPUs work together to perform HPC tasks in a fraction of the space of a traditional CPU-only cluster.
As highly parallel processors, GPUs have the ability to divide complex computing tasks into thousands of smaller tasks that can be run simultaneously, enabling scientists and researchers to address some of the world’s most challenging computational problems in record time. From climate modelling to advances in medical tomography, hybrid computing is enabling a wide variety of scientific and industrial research projects to progress in ways that were previously impractical, or even impossible, due to technological limitations.
The introduction of hybrid processing represents a dramatic shift in HPC. In addition to improvements in speed, GPUs also significantly increase overall system efficiency, as measured by performance per watt, versus conventional CPU-only clusters. Studies have shown that the hybrid supercomputers in the Top500 list are, on average, almost three times more power efficient than CPU-only systems.
In an effort to make it easier for researchers to take advantage of the capabilities of hybrid supercomputers, the technology industry has recently announced a new open parallel-programming standard known as OpenACC. It enables scientific and technical researchers to adapt their code to run on a hybrid cluster more easily, without the need for in-depth parallel programming knowledge.
The OpenACC Application Programming Interface provides hints, known as directives, that tell the compiler which areas of code can be parallelised, without the need to modify or adapt the underlying code itself. By exposing parallelism to the compiler, directives allow users to quickly accelerate the most performance-critical sections of their applications.
With an OpenACC compiler, programmers gain easier access to the massively parallel processing capabilities of GPUs. As a side benefit, the same directives work on any multi-threaded processor, so code will also run faster on multi-core CPUs. OpenACC compiler directives apply to loops and regions of code in C, C++ and Fortran applications, providing portability across different operating systems, CPUs and GPUs.
One directive-based solution used for hybrid computing is CAPS HMPP Workbench. Using C, C++ and Fortran directives, HMPP Workbench offers a high-level abstraction for hybrid programming that leverages the computing power of GPUs without the complexity associated with GPU programming. HMPP Workbench integrates powerful data-parallel back ends for Nvidia's parallel computing architecture, Cuda, and for OpenCL, dramatically reducing development time.
The OpenACC initiative is anticipated to benefit researchers across a broad range of fields that rely on compute-intensive applications and demand immense processing power, such as chemistry, biology, physics, data analytics, and weather and climate research. Existing compilers from partner companies such as Cray, PGI and CAPS are expected to provide support for the OpenACC standard in the coming months.
The desire for more computational power in the HPC industry will certainly not diminish, as the amount of data and the complexity of research projects are ever increasing. What the OpenACC parallel programming standard brings to the table is an easier way of unlocking the huge potential performance and energy savings of GPU-based accelerators. The directives approach enables users to easily port existing code to perform supercomputing tasks on GPUs. Because users can simply insert hints into the codebase and tap into the power of GPUs within hours rather than days, researchers are now able to tackle some of the world’s most challenging computational problems up to several orders of magnitude faster.