Supporting science

HPC technology underpins some of the most notable scientific breakthroughs, such as the first image of a black hole, released in April 2019. It is not just HPC where this technology drives innovation, however, as it increasingly permeates artificial intelligence and machine learning.

The first image of a black hole’s event horizon involved an international partnership of eight radio telescopes, with the bulk of the data processing carried out at MIT and the Max Planck Institute in Germany. Processing more than four petabytes of data gathered in 2017, the project demonstrates the scope of science enabled by HPC systems.

Many of the servers involved in the project were provided by Supermicro. Michael Scriber, senior director of server solution management at the company, highlights how the project drew on multiple generations of Supermicro technology: ‘Our involvement really spans a number of years; as you can tell from the white paper, there are a number of generations of products that were involved. If it was a nice, uniform type of thing, it would be much more predictable – to know exactly when things are going to be done.’

‘One of the helpful things with using Supermicro is that our management tools go across all of those generations. Where some people have become rather proprietary about their management tools, our focus has been on open-source applications such as Redfish, for example. With open-source tools you are not stuck with something that is proprietary,’ noted Scriber.
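
As an illustrative sketch of what that openness looks like in practice, the snippet below polls a server’s Redfish service for basic system inventory. The service root and Systems collection paths are defined by the DMTF Redfish standard, but the BMC address and credentials here are placeholders, and the exact properties exposed will vary between firmware generations.

# Minimal sketch of querying server inventory over the Redfish REST API (a DMTF
# open standard). The BMC address and credentials are placeholders.
import requests

BMC = "https://10.0.0.42"      # hypothetical BMC address
AUTH = ("admin", "password")   # placeholder credentials

# The service root and Systems collection paths come from the Redfish specification.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems.get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Model"), system.get("PowerState"), system.get("BiosVersion"))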

Imaging a black hole

Observing with an array of eight radio telescopes around the world in 2017, astrophysicists used sophisticated signal-processing algorithms and global very long baseline interferometry (VLBI) to turn four petabytes of data, obtained from observations of a supermassive black hole in a neighbouring galaxy, into the first image of a black hole’s event horizon.
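
At the core of that processing is the cross-correlation of voltage streams recorded at pairs of telescopes, which recovers the relative delay at which their signals interfere. The toy sketch below illustrates that single step on synthetic data; it is not the EHT pipeline, and the sample length, noise level and delay are invented for the demonstration.

# Toy illustration of the cross-correlation step at the heart of VLBI: recover the
# relative delay between two stations' recordings. Real correlators work on petabytes
# of multi-band voltage data; everything here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
true_delay = 37                                # samples, arbitrary for the demo

common = rng.standard_normal(n + true_delay)   # signal seen by both stations
station_a = common[:n] + 0.5 * rng.standard_normal(n)
station_b = common[true_delay:true_delay + n] + 0.5 * rng.standard_normal(n)

# Cross-correlate via the FFT and locate the peak, which marks the delay.
spectrum = np.fft.fft(station_a) * np.conj(np.fft.fft(station_b))
xcorr = np.fft.ifft(spectrum).real
print("recovered delay:", int(np.argmax(xcorr)))   # expect 37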

However, Scriber notes that the project relied on multiple generations of server technology. With the latest hardware, the team could have processed the data more quickly and cut the time taken to produce such images: ‘We have got some great systems that would have made this task so much easier; so much faster. One of the great things about working at Supermicro is that they are on the very cutting edge.’

Scriber noted that the newly released EDSFF all-flash NVMe system could have enabled the scientists to process the data much faster, and in less rack space. ‘EDSFF is just coming out and it allows you to have much higher capacities. You are talking about half a petabyte of data in a 1U box. By the end of the year this will be a petabyte of flash,’ said Scriber.
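
A back-of-envelope check shows how such density is plausible; the bay count and per-drive capacity below are illustrative assumptions rather than the specification of any particular product.

# Rough check of the 'half a petabyte in 1U' figure. The bay count and per-drive
# capacity are assumptions for illustration, not a product specification.
drives_per_1u = 32       # assumed E1.L bay count for a 1U EDSFF chassis
tb_per_drive = 15.36     # assumed per-drive capacity in TB

total_tb = drives_per_1u * tb_per_drive
print(f"{total_tb:.0f} TB per 1U (~{total_tb / 1000:.2f} PB)")   # ~492 TB, ~0.49 PB

Doubling the assumed per-drive capacity to 30.72TB would take the same chassis to roughly a petabyte, consistent with Scriber’s end-of-year figure.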

‘Sometimes storage is part of that bottleneck because you need to get that data out to the compute nodes so it can be processed,’ added Scriber.

Many academic clusters are built in stages over their lifespan, so having multiple generations of servers in a single installation is nothing new. In fact, it is something that Supermicro expects when it designs its systems.

‘What we have done is to continue generations using that same architecture. So the TwinPro architecture has not gone away, but what we have done is modify that architecture to take advantage of the latest and greatest technologies,’ added Scriber.

‘You didn’t necessarily think years ago, “I am building this system so I can run a black hole project”. You bought it for other reasons, but it is so versatile that you can pull these things together and make this happen,’ Scriber concluded.

Specialised computing resources

HPC users have many options available if they want to go beyond the traditional model of x86 server-based clusters. Heterogeneous systems using GPUs have become popular in recent years, even taking many of the top 10 positions on the Top500.

Another option for more niche tasks is the field-programmable gate array (FPGA), which can be configured specifically for a single application, such as processing signal data from radio telescopes, or image processing for ML and AI applications such as autonomous vehicles.

As John Shalf, department head for computer science at Lawrence Berkeley National Laboratory, describes in the ISC coverage in the last issue of Scientific Computing World, architectural specialisation with FPGAs, or even application-specific integrated circuits (ASICs), could be important in overcoming the bottleneck introduced by the slowdown of Moore’s Law.

Accelerating certain applications and pre-processing data are among the options for the use of FPGA technology. Gidel, for example, an FPGA specialist based in Israel, has been working on compression algorithms and on image-processing applications for autonomous vehicles.

Ofer Pravda, COO and VP of sales and marketing at Gidel, noted that the company has been working with FPGAs across several different markets. ‘Gidel is a company that was established almost 27 years ago.

‘We have been delivering boards based on Intel FPGAs, previously Altera, since day one. Right now we are focusing on several different markets, from HPC to vision.’

The company has developed its own set of tools for writing code and mapping it onto the configurable logic of the FPGA. In 2018, for example, Gidel announced its own lossless compression IP, which used just one per cent of an FPGA board.
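
Gidel’s compression IP itself is proprietary, but run-length encoding gives a flavour of the kind of simple, streamable lossless scheme that maps naturally onto a small slice of FPGA logic. The sketch below is a generic illustration only, not Gidel’s algorithm.

# Generic illustration: a simple run-length encoder, a streamable lossless scheme of
# the sort that suits FPGA logic. This is not Gidel's proprietary compression IP.
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Return (value, run_length) pairs for a byte stream, capping runs at 255."""
    runs = []
    for byte in data:
        if runs and runs[-1][0] == byte and runs[-1][1] < 255:
            runs[-1] = (byte, runs[-1][1] + 1)
        else:
            runs.append((byte, 1))
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([value]) * length for value, length in runs)

sample = b"\x00" * 100 + b"\xff" * 20 + b"\x00" * 80
encoded = rle_encode(sample)
assert rle_decode(encoded) == sample
print(f"{len(sample)} bytes -> {len(encoded) * 2} bytes of (value, run) pairs")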

‘All the IP that we are developing, or that our customers are developing on our boards, actually uses our tools. When we develop JPEG compression, lossless compression or the InfiniVision IP using our tools, we can get a much faster delivery date,’ stated Pravda.

Gidel’s toolkit includes its ProcDev Kit, which helps users automatically build the infrastructure required to support algorithm development by optimising the use of FPGAs, on-board memory resources and FPGA-to-host communication.

The developer’s kit includes the ProcWizard application, an API, HDL examples, software libraries and Gidel IPs. The company also provides tools for synthesis of C++ code and a support package for OpenCL. ‘One of the challenges with FPGAs is how to make them accessible to software engineers. We all know that you have more software engineers than hardware engineers, and you need to see how they can work with your FPGA hardware,’ stated Pravda. ‘OpenCL, for example, is trying to fill that gap and give the software engineer some options to work with, but the price that you have to pay is efficiency. If you go to other methodologies, such as HDL, then you need to have hardware engineers.’
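
For a sense of the software-engineer route Pravda describes, the sketch below uses the pyopencl binding to run a trivial vector-add kernel through the standard OpenCL host API. On an FPGA board the kernel would normally be compiled ahead of time into a bitstream by the vendor toolchain rather than built at runtime as shown here; the example simply illustrates the host-side programming model.

# Minimal OpenCL host-side sketch using pyopencl and a trivial vector-add kernel.
# On FPGA targets the kernel is usually pre-compiled to a bitstream by the vendor
# toolchain; runtime compilation here is just for illustration on any OpenCL device.
import numpy as np
import pyopencl as cl

kernel_src = """
__kernel void vadd(__global const float *a, __global const float *b, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, kernel_src).build()

a = np.arange(1024, dtype=np.float32)
b = np.ones(1024, dtype=np.float32)
out = np.empty_like(a)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)   # enqueue the kernel
cl.enqueue_copy(queue, out, out_buf)                        # read the result back
assert np.allclose(out, a + b)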

‘We are trying to give engineers a way to allocate the board to their IP, not the IP to the board,’ added Pravda.

Reuven Weintraub, founder and CTO at Gidel, noted that the development of these tools allows users to develop applications that maximise the efficiency of FPGA resources.

‘It is possible to use our tools with the HLS (high-level synthesis) and then use some HDL (hardware description language) coding for what takes more logic,’ added Weintraub.

‘This way, the combination of our tools, both the HLS and the HDL, can give you well-optimised development.

‘In these options we are definitely unique in the market. When it is pure OpenCL there are several companies doing that, including Intel and Xilinx, but when we are dealing with optimisation I would say, from the feedback from our customers, that there is a big gap between our tools and the other tools available.’

Another innovation at Gidel is allowing several applications to run on, or access, the same FPGA.

This has potential use cases in autonomous vehicles and in AI and ML, as it can be used to capture or process images and data from multiple sensors. ‘In automotive recording systems, this enables us to take many sensors, whether with or without pre-processing and compression, and then store all the sensor streams together,’ concluded Weintraub.


