Visualising the future

Gemma Church examines the different ways visualisation is used to improve everything from cosmology to cardiovascular surgery

In today’s data-heavy world, visualisation is a vital tool to help engineers cut through the noise and ‘see’ the results that matter. But big data is getting bigger and that’s a problem for visualisation, as Jim Jeffers, senior director and senior principal engineer for Advanced Rendering and Visualisation at Intel explains: ‘The challenge of "trying to understand what the data is telling you" continues to grow and will get even harder as we move to exascale. As the sheer size of the data increases, new techniques will be required to isolate what is really important.’ 

This is because, at exascale, the data is so large that ‘you can’t readily move it on and off the compute nodes anymore, even with system memory and parallel file systems, as fast as they are,’ according to Jeffers. ‘So, in-situ visualisation, which is basically computing and visualising simultaneously in real-time, will be a prevalent means to get to new discoveries and take best advantage of the awesome computational capability of this new generation of supercomputers.’
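To illustrate the idea (rather than Intel’s actual software stack), the sketch below contrasts in-situ visualisation with the traditional post-hoc approach: each timestep of a toy simulation is rendered to a small image as soon as it is computed, so the full-resolution field never has to be written to disk. All names and files here are hypothetical.

```python
# A minimal sketch of in-situ visualisation (illustrative only, not Intel's
# stack): each timestep is rendered as soon as it is computed, so the raw
# field never has to be written to disk for later, post-hoc analysis.
import numpy as np
import matplotlib
matplotlib.use("Agg")          # off-screen rendering, as on a compute node
import matplotlib.pyplot as plt

def timestep(t, n=256):
    """Stand-in for one step of a simulation: a drifting Gaussian blob."""
    x, y = np.meshgrid(np.linspace(-3, 3, n), np.linspace(-3, 3, n))
    return np.exp(-((x - np.sin(t)) ** 2 + (y - np.cos(t)) ** 2))

for i in range(100):
    field = timestep(t=0.1 * i)            # compute...
    if i % 10 == 0:                        # ...and visualise in situ
        plt.imsave(f"frame_{i:04d}.png", field, cmap="magma")
        # Only a small rendered image leaves the node; the full field is
        # discarded rather than stored for a separate analysis job.
```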

Over the past five years, Intel has worked with The Stephen Hawking Centre for Theoretical Cosmology at the University of Cambridge on a range of projects, exploring how the universe formed and continues to change, and the gravitational impact on space-time.

Intel has also worked with the German Climate Computing Centre (DKRZ) on the impact of weather and climate on cloud and storm formation, and the Leibniz Supercomputing Centre (LRZ) to investigate the mechanisms of full-body blood flow, to name a few examples in the scientific space.

‘The challenges are sifting through the data and finding that gem, which is often not what you thought you were looking for,’ according to Jeffers. ‘In particular, with the late Stephen Hawking’s team, their use of Intel OSPRay, a component in the Intel oneAPI Rendering Toolkit, has shed significant light on the mysteries of gravitation in the universe, and they have found significant discoveries through visualisation that they didn’t expect, driving new areas of exploration towards a unified theory of "everything".’

For scientific visualisation, Intel OSPRay’s ray tracing capability, included in tools like Kitware’s ParaView, the open-source VisIt, Visual Molecular Dynamics (VMD) and, recently, NCAR’s VAPOR, has ‘changed the game for scientists’, according to Jeffers. ‘We have gone beyond traditional rasterisation to physically based and higher-fidelity rendering aligned with how the human visual system receives and processes light.

‘They are able to see nuances in the data because of the physically based mechanisms and the ability to shine a "light" on the data – and I literally mean turning on a light virtually, which you can do with ray tracing and look deeper at the data. The researchers and engineers we are working with are all saying ray tracing has become key to them fully understanding their data.’
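As a toy illustration of what ‘turning on a light virtually’ means (a deliberately simple ray-cast sketch, not the Intel OSPRay API), the snippet below shades a sphere with a movable light; changing the light direction changes which surface structure is revealed, which is the effect Jeffers describes at production scale.

```python
# Toy example of "shining a light" on data: orthographic ray casting of a unit
# sphere with diffuse (Lambertian) shading. Purely illustrative; tools such as
# Intel OSPRay do full ray tracing of real datasets at scale.
import numpy as np

def render(light_dir, n=200):
    light_dir = np.asarray(light_dir, float)
    light_dir /= np.linalg.norm(light_dir)
    xs, ys = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    img = np.zeros((n, n))
    hit = xs**2 + ys**2 <= 1.0                      # rays that hit the sphere
    zs = np.sqrt(np.clip(1.0 - xs**2 - ys**2, 0, None))
    normals = np.stack([xs, ys, zs], axis=-1)       # surface normals at hits
    img[hit] = np.clip(normals[hit] @ light_dir, 0, None)  # diffuse shading
    return img

front_lit = render([0.0, 0.0, 1.0])   # light along the viewing direction
side_lit  = render([1.0, 0.5, 0.2])   # raking light picks out surface relief
```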

Recently, Intel introduced a feature called particle volume rendering, which is used with HACC (Hardware/Hybrid Accelerated Cosmology Code) scientific simulations, and it plans to bring this feature to the upcoming Aurora exascale system, which is designed to run simulations of the universe at extremely high performance. ‘Cosmology models are among the largest and most complex models in computational research. Our optimisations provide the additional performance researchers need to run applications like HACC and make more accurate predictions about the distribution of billions of galaxies, as well as the underlying distribution of mass and hot gases in the universe,’ Jeffers adds.
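The general idea behind rendering particle data volumetrically can be sketched in a few lines: deposit point particles (as an N-body cosmology run would produce) onto a density grid, then integrate along the line of sight. This is only a conceptual illustration, not Intel’s OSPRay implementation.

```python
# Conceptual sketch of particle volume rendering: deposit particles onto a
# density grid and project it along one axis as a crude emission-only image.
# Illustrative only; not Intel's OSPRay particle volume feature.
import numpy as np

rng = np.random.default_rng(0)
# Fake "clustered" particle positions in the unit cube, standing in for an
# N-body snapshot.
particles = rng.normal(size=(100_000, 3)) * 0.2 + 0.5

grid, _ = np.histogramdd(particles, bins=(128, 128, 128),
                         range=[(0, 1)] * 3)      # nearest-grid-point deposit
projection = grid.sum(axis=2)                     # line-of-sight integration
image = np.log1p(projection)                      # compress dynamic range
```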

Machine matters 

Back on planet Earth, visualisation is replacing the Excel spreadsheets and the PLC simulations engineers traditionally use to interpret data within the worlds of industrial automation, automotive and aerospace.

Chris Harduwar, vice president of business development at Maplesoft, explains why this is the case: ‘An Excel spreadsheet is an example of where things can go wrong; getting the units wrong can result in a catastrophic issue. Also, traditional PLC simulations for automation don’t include any visualisation in the traditional sense, so you have to pay close attention to what’s happening in real time. These challenges cause engineers to spend an excessive amount of time looking at data, interpreting results and double-checking their work,’ Harduwar adds.
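A small illustration of the spreadsheet failure mode Harduwar describes: with plain numbers, a unit mix-up propagates silently, whereas a unit-aware library (pint is used here purely as an example, and is not something mentioned by Maplesoft) either converts compatible units explicitly or raises an error for incompatible ones.

```python
# How a unit mix-up slips through spreadsheet-style arithmetic, and how a
# unit-aware library (pint, used only as an illustration) catches it.
import pint

ureg = pint.UnitRegistry()

# Plain numbers: nothing stops you adding millimetres to metres.
stroke_mm = 250.0
clearance_m = 0.02
wrong_total = stroke_mm + clearance_m        # 250.02, silently meaningless

# Unit-aware: compatible units are converted; incompatible ones raise
# a DimensionalityError instead of producing a wrong number.
stroke = 250.0 * ureg.millimeter
clearance = 0.02 * ureg.meter
total = (stroke + clearance).to(ureg.meter)  # 0.27 metre, converted explicitly
```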

This is where visualisation comes in, helping engineers understand and extract the most important insights from their models and simulations. Brett Chouinard, chief technical officer for Altair’s modelling, visualisation and math-based solutions and strategy, explains: ‘CAE, CFD and design engineers need solutions that can respond dynamically to ramping up/down of simulation and modelling needs, as well as flexible access to simulation-driven design tools both to do their work and to share results.’

Chouinard adds: ‘For CAE and CFD engineers, performance can be impeded by resource limitations, such as hardware constraints and licence availability. Design engineers face complex system design challenges exacerbated by geography and working from home, such as prohibitive data downloads, access to appropriate software and limitations on the information that can be shared between different tools.’

This is where an easy-to-use, cloud-hosted tool can help, providing engineers with a single source of truth as new data comes in.

There are many such solutions available. Altair Access, for example, is an interface for submitting, monitoring and sharing jobs on remote clusters and other resources, including the cloud, with visualisation and collaboration tools. 

Altair One also ‘makes it easy for engineers to access software, HPC and data, and includes applications such as the Altair Drive, which enables engineers to securely upload, access, store and manage data, making it possible to access data from anywhere, on any device, without big downloads,’ Chouinard explains. 

On the HPC side, Altair and Argonne National Laboratory have also collaborated to enhance Altair’s workload management technology, helping Argonne’s Aurora system run workloads at exascale. ‘As a long-time leader in the simulation space, Altair is well versed in the types of challenges engineers face when it comes to deriving meaning from HPC output,’ Chouinard adds.

MapleSim from Maplesoft is also an all-in-one simulation platform where simulations and models ‘come to life’, according to Harduwar, through virtual commissioning, where the model essentially becomes a digital twin connected to the PLC control code. When virtually commissioning a machine, engineers build a model in MapleSim, connect it to the PLC control code and run that code against the model. The results appear in real time, with CAD import options to create a rich visualisation.
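The control loop at the heart of virtual commissioning can be sketched generically: control logic standing in for the PLC code exchanges signals with a plant model once per scan cycle, so behaviour can be tuned entirely in software. Everything below is hypothetical and greatly simplified; MapleSim couples real PLC control code to its models and CAD-based visualisation rather than to toy functions like these.

```python
# Minimal sketch of the virtual-commissioning idea: control logic (standing in
# for PLC code) and a plant model exchange signals every scan cycle, so the
# controller can be tuned against the model before touching the real machine.

def plc_logic(setpoint, measured, previous_command):
    """Toy proportional controller standing in for exported PLC control code."""
    error = setpoint - measured
    return max(0.0, min(1.0, previous_command + 0.05 * error))

def plant_model(position, command, dt=0.01):
    """Toy first-order actuator model standing in for the plant digital twin."""
    velocity = 2.0 * command - 0.5 * position
    return position + velocity * dt

position, command = 0.0, 0.0
for cycle in range(1000):                  # one iteration per PLC scan cycle
    command = plc_logic(setpoint=1.0, measured=position, previous_command=command)
    position = plant_model(position, command)
    # In a real tool, the position would drive a 3D CAD visualisation here.
```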

Recently, for example, Maplesoft worked with an injection moulding company that needed a model of an existing machine, which was out in the field and inaccessible to its engineers. So, the engineers modelled that machine in-house and reproduced the results they were seeing in the field.

They updated the control code and used visualisation components to understand how the model was behaving, and this behaviour matched the real machine. Harduwar explains: ‘They were able to update the speed of that machine and update the performance all on the computer. They loaded it up on the real machine and it produced the exact same results.’

It’s not just scientists and engineers who rely on advanced visualisation tools; such functionality is also important from a consumer point of view. Intel, for example, recently worked with Bentley Motors. Jeffers explains: ‘The overwhelming majority of Bentley automobiles are purchased online, sight unseen. So some of the most expensive cars in the world need the best 3D technology tools, because that expensive car "better be the car that I ordered".

‘With Intel OSPRay, Bentley has created a full 3D manufacturing pipeline, so the models consumers see are the actual manufacturing models rendered in 3D, so that you can select colours and other options. Then, that model goes to the manufacturing floor, all in one workflow.’

Future vision

Going forward, visualisation tools must adapt to the growing scale and complexity of data, providing an easy-to-use and unified solution. Harduwar says: ‘As data increases, so does complexity. So does the importance of interpretation and doing the right thing with the data. MapleSim’s simulation platform can solve various problems. Large amounts of data can be brought in, processed and simulated to produce various results, and the optimised simulation can produce one solid answer, or a suite of answers and possible solutions.

‘We also hear customers talking about predictive maintenance of machines and products out there. Predictive maintenance is a trend we see going forward because MapleSim is backed by the Maple math engine. So, simulations can be performed extremely fast.’

From Chouinard’s perspective, remote working and advancing digitisation also pose increasingly prevalent problems for visualisation tools. He explains: ‘The challenges associated with the quantity, location, transfer and visualisation of engineering data throughout the product development lifecycle will only increase as organisations continue the adoption of cloud computing models and embrace data-heavy artificial intelligence and IoT technologies.’

Chouinard adds: ‘Today’s mobile and distributed workforce will also put the burden on producers of engineering software to meet a high standard of flexibility and compatibility across devices. We expect the trend of the convergence of simulation, artificial intelligence and high-performance computing to continue and even accelerate in coming years.’

The challenge for Intel is to deliver ‘the promise of in-situ visualisation’, according to Jeffers. ‘I/O just won’t keep up. Currently, you take your thousands of nodes, run your simulation – sometimes for weeks or months – save all the data, and then need to allocate time in the future to analyse the data, competing with other important science computations.

‘The issue is, at exascale we have great compute capability and have made great strides in I/O, but there is significant delay from I/O – that’s everything from node-to-node communications, storing from memory to disk, and reading it back out for visualisation,’ says Jeffers. ‘Everyone focuses on the simulation, but it’s just 1s and 0s until you visually analyse it. Making it consumable for scientists to do what they want – which is to make a discovery – is going to require a high percentage of codes going to in-situ processing to deliver that goal.’

Intel is making progress in this area across its many partnerships and collaborations, as Jeffers explains: ‘For example, the Aurora exascale system will allow researchers to do larger, more complex simulations than they have ever done before, and we’re already using the Aurora Early Science Program to provide early access to hardware to test new approaches to visualisation.’

Via a US Department of Energy project called SENSEI, Argonne scientists are also collaborating with other labs and with industry to create a unified approach to in-situ data analysis and visualisation. Jeffers adds: ‘With access to early hardware and the Aurora software development kit, the team has been testing and developing these capabilities in advance of the exascale machine’s arrival. With its in-situ, real-time interactive visualisation capability, researchers will be able to examine the data during the ongoing simulation and extract insights as the simulation progresses. Thus, the system will speed the time to analyse results in what I like to call "real-time discovery",’ he concludes.

As we move to the exascale, visualisation is on the brink of great things, where real-time discovery could pave the way for a new era of scientific and technological breakthroughs – unlocking the promise of visualisation for all, regardless of location, industry or end user.

Visualising vascular surgery

Surgeons can now plan, execute and understand surgeries using intelligent augmentation and machine learning technologies. Cydar Medical’s Cydar Intelligent Maps harness cloud GPU computing, computer vision and machine learning technologies to advance surgical visualisation and improve decision-making in theatre and across the surgical pathway.

Tom Carrell, co-founder at Cydar Medical and vascular surgeon, says: ‘We are all familiar with consumer intelligent map apps such as Google Maps, Apple Maps, and Waze. The concept of intelligent maps for surgery is similar: every new patient’s care would be informed by the care of all previous similar patients and, in turn, each new patient’s care would inform the care of all future patients.'

The key is integrating augmented-intelligence-enabled planning, guidance and outcome analysis of surgery for each patient, connecting that dataset to an anonymised global pool of similar datasets and using augmented intelligence to match cases. In 2021, Cydar EV Maps introduced integrated planning, guidance and outcome analysis that uses augmented intelligence (including 3D and 2D deep learning and computer vision technologies) to connect the data across a patient’s journey into a dynamic, patient-specific model of endovascular (EV) surgery – an EV Map.

Carrell explains: ‘Before surgery, augmented intelligence helps clinical users build a 3D pre-operative map of the surgical plan. Then, during surgery, it automatically fuses the map with the live imaging, continuously checking it and non-rigidly adjusting it to account for real-time postural changes and deformation.

‘After surgery, it helps analyse the post-operative outcome in the context of planned, pre-operative and actual adjusted maps,’ he adds. ‘The Cydar EV Maps cloud platform connects each patient to an anonymised data pool across the EU, US and UK and is learning from every case. The next step is towards intelligent maps for surgery.’
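To give a flavour of the fusion step Carrell describes, in a drastically simplified and purely illustrative form (Cydar’s system performs 3D-to-2D, deformable, machine-learning-assisted registration), the sketch below rigidly re-aligns a pre-computed ‘map’ to a shifted live image using phase correlation from scikit-image.

```python
# Highly simplified illustration of image fusion: rigidly registering a
# pre-computed "map" to a shifted live image with phase correlation.
# This only shows the basic idea of aligning a map to live imaging; it is
# not Cydar's deformable 3D/2D method.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
preop_map = rng.random((256, 256))               # stand-in pre-operative map
live_image = nd_shift(preop_map, (5.0, -3.0))    # live view after patient movement

# Estimate the shift needed to register the map to the live image, then apply it.
offset, _, _ = phase_cross_correlation(live_image, preop_map)
aligned_map = nd_shift(preop_map, offset)        # map re-aligned to the live view
```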


