
Promoting freedom of choice in HPC

As HPC clusters and software increase in complexity, it becomes increasingly difficult to switch from one technology to another. AMD is trying to make this process easier with a suite of open-source tools designed to give application developers the freedom to port their applications from the proprietary CUDA framework into C++.

Boltzmann Initiative

This process began in November 2015, when AMD launched the Boltzmann Initiative at SC15. The initial announcement came with the release of a suite of open-source tools: AMD's new Heterogeneous Compute Compiler (HCC); a headless Linux driver and HSA runtime infrastructure for HPC; and the Heterogeneous-compute Interface for Portability (HIP) tool for porting CUDA-based applications to C++.

Jean-Christophe Baratault, senior business development manager for HPC GPU computing at AMD, said: ‘Now we have GPUOpen, a global company-wide initiative, this means that everything, all of our software stacks, are open source. The Boltzmann Initiative is based on three pieces of software: a compiler, a code converter and new drivers.’

As HPC architectures have moved from commodity clusters towards more HPC-specific technologies, programmers have had to manage increasing layers of complexity across the diverse technologies available to HPC users – from accelerators to exotic processors and node architectures.

Specialised technologies such as GPUs can deliver large performance increases, but each one the HPC community adopts introduces further complexity. Moreover, specialised hardware programmed through proprietary frameworks like CUDA effectively locks users into a specific architecture, because of the time and expertise invested in developing and optimising applications for it.

New tools for promoting freedom of choice

Baratault explained that AMD’s motivation for developing these open-source GPU tools stems from an opportunity to remove the complexity that proprietary programming frameworks add to GPU application development.

‘CUDA is an additional programming framework, based on the C++ programming language, which sits on top of C++ applications. This means that you have your legacy code, and whenever you have some data processed by the GPU, you have some CUDA code inside the C++ code. It is feature-rich and well supported, but it is an additional framework, and I would say it is an exotic one,’ said Baratault.

If successful, these tools – or similar versions – could help to democratise GPU application development, removing the need for proprietary frameworks and making the HPC accelerator market much more competitive for smaller players. For example, HPC users could potentially use these tools to convert CUDA code into C++ and then run it on an Intel Xeon Phi coprocessor.

‘We are not telling customers that they must move from CUDA to C++ and AMD hardware. We developed this HIP tool to give users the flexibility. You can keep your CUDA source code, you can keep development tools that you are using, your CUDA stack,’ said Baratault. ‘You just “HIPify” your code; convert it from CUDA to C++.’

New tools for open GPU programming

HIP is a C++ runtime API and kernel language that allows developers to create portable applications. The resulting C++ code can be compiled with AMD’s HCC or Nvidia’s NVCC, using the best compilers and tools on the respective hardware, which gives users more hardware and development-tool options. Additionally, because both CUDA and HIP are C++ dialects, porting from CUDA to HIP is much easier than porting from CUDA to OpenCL.

AMD reports that, in some cases, the tool can convert 100 per cent of an application’s code automatically. However, Baratault stressed that, for the majority of applications, the HIP tool will convert around 95 per cent of the code: ‘You will likely have to do a small amount of manual tuning. That could be 2-3 hours, up to a couple of weeks – it depends on the code.’

AMD also released new HPC drivers alongside the HIP tool. As Baratault explained, this was to address the community’s concerns about AMD’s previous drivers, which were based on consumer GPU drivers rather than being developed specifically for HPC. ‘We have to acknowledge that we were not that good at delivering HPC drivers to the community,’ said Baratault. ‘Our drivers had issues, we fixed them, but they were not really HPC drivers because they were based on our consumer Catalyst drivers. We decided to re-write them to be open source and purely HPC: 64-bit Linux drivers with other HPC-specific features.’

The new AMD tools are not aimed at increasing application performance, although AMD cards may offer better performance per watt in some situations. ‘What we offer is freedom, freedom of choice,’ explained Baratault. ‘Today, you will see more than 300 CUDA-ready applications from ISVs or academics, and thousands and thousands of in-house CUDA codes. This is a big asset, and we think it would send the wrong message if we told users that they should drop CUDA and switch to something different, because CUDA is good, CUDA is very good.’

AMD has already started to engage with the HPC community to encourage the use of these tools. The first project involved CGG, a global geophysical services and equipment company, which was using Nvidia GPUs to process and analyse seismic data for geoscience and oil and gas research.

Employing AMD’s HPC GPU Computing software tools available through GPUOpen, and with the help of AMD, CGG converted its in-house Nvidia CUDA code to OpenCL for seismic data processing running on an AMD FirePro S9150 GPU production cluster.

‘They [CGG] can have petabytes or even zettabytes of raw information that they collect on behalf of national oil companies, then they have to process this raw data to extract something that they can interpret’, said Baratault. ‘Once they have identified a reservoir, then they can find out where to drill – drilling is very expensive, so it is effectively a ‘one shot’ process.’

Baratault explained that CGG was an early adopter of GPU technology because signal-processing algorithms are particularly suited to parallel processing: ‘CGG started back in 2006/7; they were one of the first to jump on Nvidia CUDA technology. CGG tried different architectures, different products, and they decided to port most of their internal codes from pure CPU processing on x86 to GPU using CUDA. Why? Because in 2006/7, CUDA was the only suitable framework that was available.’

In Baratault’s opinion, it was this technology monopoly, established by Nvidia, that motivated CGG to investigate other accelerator technologies: ‘Like any other company, they would like to have double sourcing for key components, and for CGG, GPUs are crucial components.’

By using the HIP tools provided by AMD, CGG could remove its reliance on Nvidia hardware without tying itself to AMD. This is because the HIP tool ports code into C++ rather than a proprietary format – so CGG could just as easily switch from Nvidia GPUs to Intel coprocessors.

‘The reason why they went for AMD hardware, the S9150, is that our boards provide much higher single precision performance, we also have much higher memory bandwidth,’ said Baratault. ‘The speed of communication between the GPU and its memory is very important for this kind of signal processing algorithm.’

Ultimately, AMD believes that by offering freedom of choice to the GPU community it can begin to demonstrate that Nvidia is not the only game in town when it comes to GPU application development in HPC. Now, it is up to the HPC community to embrace these tools if they want to support freedom of choice for GPU development.

‘CUDA is great. Nvidia did a great job, but, if you go with Nvidia you are locked into their solutions,’ concluded Baratault. 

Based on the third generation AMD Graphics Core Next (GCN) architecture, the AMD FirePro S9300 x2 Server GPU is the world’s first professional GPU accelerator to be equipped with high bandwidth memory (HBM), and the first accelerator compatible with all AMD’s GPUOpen Professional Compute tools and libraries. 

The GPU delivers up to 13.9 TFLOPS of peak single-precision floating point performance. Its HBM allows the AMD FirePro S9300 x2 to exceed the competition, with 3.5x the memory bandwidth of Nvidia’s Tesla M40 and 2.1x that of Nvidia’s Tesla K80.
