ANALYSIS & OPINION
The role of HPC in oil and gas
3 April 2013
Head of the Fraunhofer Competence Center for HPC at the Fraunhofer ITWM, Dr Franz-Josef Pfreundt is scheduled to host a session on oil and gas at ISC'13. Beth Harlen caught up with him to find out more
At ITWM, I founded departments that focus on image analysis, flow in complex media, and high-performance computing (HPC), which aid the development of simulation software. At the university in the late 1980s, we were developing software for re-entry simulations for the European space shuttle and, as you can imagine, this was a very time-consuming process. We were looking for faster systems at a time when the first parallel machines were beginning to emerge, and I do believe that we were the first group in Germany to purchase an nCube parallel machine. Using parallel machines for very compute-intensive simulations is where my interest in HPC began.
My involvement with ISC started in the early 1990s, and I will be chairing the session on oil and gas at this year’s event. ITWM works closely with many oil and gas companies to develop seismic imaging software that aims, among other things, to improve the understanding of subsurface structures. Oil and gas is one of the fields today where huge compute capacity is needed and increasingly being sought. If you look around the industry, you see that companies like Total are investing in HPC systems – such as Pangea, the petaflop system it recently purchased – because the problems they face are so large and compute-intensive that they would be impossible to tackle without such resources.
The session at ISC will focus on the question of what the oil and gas industry is doing with HPC, with secondary topics covering the algorithmic and HPC challenges behind it. The fact that algorithms need to be tailored to certain machines has prompted accelerator discussions in the industry, revolving around FPGAs, GPUs, Intel MIC, etc. We now have to ask what the right CPU for the right algorithm is, and vice versa. As machines become faster, we also have the opportunity to consider whether methods that were too challenging for previous systems can now be applied.
The development of software costs a considerable amount of money and can easily take years before developers have anything productive. Then there is the architecture to consider – so the first step is to look for a mainstream architecture to make the code work, and then determine what new technologies to invest in, depending on the algorithm. One of the main workhorses in seismic imaging today is Reverse Time Migration (RTM). The oil and gas industry is using both CPUs and GPUs to solve this problem. In the isotropic case, GPUs have no advantage, but in stronger anisotropic cases GPUs are faster. GPU architectures keep changing, and the code must change with them, so from a software developer’s point of view it is not an easy choice.
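To make the discussion concrete, the core idea of RTM can be illustrated in a few lines: propagate a source wavelet forward in time through a velocity model, propagate the recorded data backward in time, and cross-correlate the two wavefields at zero lag to form an image. The following is a minimal 1-D acoustic sketch, not production code – the function names, discretisation choices, and single-receiver setup are illustrative assumptions, and real implementations are 3-D, anisotropic, and heavily tuned for the target CPU or GPU.

```python
import numpy as np

def fd_step(p_prev, p_cur, v, dt, dx):
    """One second-order finite-difference time step of the 1-D acoustic
    wave equation p_tt = v^2 * p_xx (illustrative scheme; requires the
    CFL condition v*dt/dx <= 1 for stability)."""
    lap = np.zeros_like(p_cur)
    lap[1:-1] = (p_cur[2:] - 2.0 * p_cur[1:-1] + p_cur[:-2]) / dx**2
    return 2.0 * p_cur - p_prev + (v * dt) ** 2 * lap

def rtm_image(d_obs, wavelet, v, dt, dx, src_ix, rec_ix):
    """Zero-lag cross-correlation RTM image for a single shot.

    d_obs   : trace recorded at rec_ix (nt samples)
    wavelet : source time function (nt samples)
    v       : migration velocity model (n grid points)
    """
    n, nt = len(v), len(wavelet)
    # Forward pass: inject the source wavelet and store the wavefield.
    src_field = np.zeros((nt, n))
    p0, p1 = np.zeros(n), np.zeros(n)
    for it in range(nt):
        p2 = fd_step(p0, p1, v, dt, dx)
        p2[src_ix] += wavelet[it] * dt**2
        p0, p1 = p1, p2
        src_field[it] = p1
    # Backward pass: inject the recorded trace reversed in time and
    # correlate with the stored source wavefield at every step.
    img = np.zeros(n)
    p0, p1 = np.zeros(n), np.zeros(n)
    for it in reversed(range(nt)):
        p2 = fd_step(p0, p1, v, dt, dx)
        p2[rec_ix] += d_obs[it] * dt**2
        p0, p1 = p1, p2
        img += src_field[it] * p1
    return img
```

The storage of the full source wavefield in the forward pass is exactly what makes RTM so memory- and compute-hungry at industrial scale, and the stencil updates inside `fd_step` are the kernels that the CPU-versus-GPU debate mentioned above revolves around.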
Opinions on accelerators can differ greatly – which is why I will be stimulating the discussion during my session at ISC. The views and investments in the oil and gas industry are important not only for scientists and vendors involved in that sector, but for HPC in general. ISC provides a good forum for discussion as there is always enough time to meet people and follow up on discussions that have been sparked by the sessions or presentations.
The combination of conferences and exhibitions works well for me as an attendee, and ISC’13 will be offering an expanded parallel research track, with many scientific papers being presented. I think this is an important development for ISC because it is assimilating elements of scientific conferences while maintaining its focus on academia and application. It’s this exploration of what we as an industry can do with the theoretical that makes trade shows like this so valuable.
The Role of HPC in the Oil & Gas Industry
Wednesday, 19 June 2013
11.30am – 1pm
Hall 2, CCL - Congress Center Leipzig