Taking scientific visualisation to new heights

Chad Harrington, VP of marketing at Adaptive Computing, reflects on visualisation workloads

It doesn't take a pair of rose-coloured aviator glasses to visualise this scenario: imagine you’re an aerodynamics engineer scanning a 3D simulation of your latest concept plane. You pan, zoom and otherwise manipulate the simulation, which visualises airflow around the wings and fuselage. The simulation is enormous, graphically rich and highly compute-intensive, and yet you’re viewing and manipulating it on a three-year-old laptop. What’s more, several of your colleagues are on a conference call with you, manipulating those same pixels in the same session from multiple locations, using typical office PCs and even tablets.

The fact is that this scenario is a reality in leading-edge visualisation environments today. That’s because an integrated solution consisting of HPC workload management software and visualisation software is doing for 3D simulation workloads what Software as a Service did for 2D applications: keeping applications and data together on the server side, where they can be accessed simultaneously in the same session by multiple PCs, thin clients and mobile devices. And compute resources can be used more efficiently than ever before.

For IT managers, visualisation workload optimisation can’t arrive soon enough. Today it’s not unusual for 3D simulations to top out at 50 gigabytes or more. Worse still, they’re getting bigger every year, and workload management issues are following suit. Workstations are becoming obsolete faster than ever before – usually in less than three years – and, of course, licensing, troubleshooting, patching and updating requirements at widely dispersed offices and desks are costly, time-consuming hassles. Network congestion and latency are serious issues, too, as is leakage of proprietary data.

Upfront workstation costs also continue to be a key concern. Deskside workstations feature high-end CPUs, top-of-the-line GPUs, and lots of memory. And they are single-purpose machines. Once a simulation is up and running, the system’s usefulness is tapped out. Adding insult to injury, workstations must be sized for the largest potential simulation job. So, even if small- or mid-sized technical visualisation sessions are the norm, the workstation must be overbuilt to handle the worst-case scenario. Given today’s widely dispersed and mobile workforce – and the need for elasticity when it comes to compute and network resources – the immovable and otherwise rigid deskside workstation is an idea that's gone the way of the zeppelin.

Instead of having expensive, hot, loud, underutilised workstations taking up space under desks, it makes sense to house 3D CAD/CAE and other scientific visualisation applications and data on shared visualisation clusters. Users get the full power of a high-end, GPU-enabled machine – as if it were under their desks – while GPUs and other high-performance compute resources are shared across multiple sessions.

In addition, compute capabilities can be ‘right-sized’ on the fly, and utilisation can be maximised. Rather than clogging networks by transferring source data for full workloads, this approach minimises bandwidth use. Likewise, support, updating and replacement of hardware and software are more efficient, less costly processes that do not affect users nearly as much as when those processes are completed at individuals’ desks. And by keeping data sets closer to the centralised resources, you can minimise security risks by ensuring tighter access control and data security.

Perhaps the most important benefit, however, is the ability to share these visualisations anywhere, using virtually any computing device. Users are able to securely work where it makes the most sense – in conference rooms, at home after work or wherever they are authorised to do so. As a result, workforce collaboration and productivity can improve dramatically.

Adaptive Computing, the largest provider of private cloud management and high-performance computing (HPC) workload management software, has recently added two editions of its Moab HPC Suite that make these capabilities possible. Moab HPC Suite Application Portal Edition and Remote Visualisation Edition are designed to maximise backend processing efficiencies and leverage next-generation access models. By simplifying the collection and interpretation of data, these solutions can help reduce the time it takes to achieve meaningful results.

Many companies have major investments in HPC in the data centre, and, generally, those resources aren’t used at anywhere near capacity. The same can be said for visualisation workstations that are dispersed throughout many companies. But the good news is that those resources can now be brought together in a private technical compute cloud that handles both HPC and visualisation. In many cases, GPUs can be shared between computational and visualisation workloads – dedicated to visualisation workloads during the day and HPC computational requirements through the night. HPC workload management software policies can be put in place to automatically determine which sessions and jobs go where. Those policies can also be set up to establish reservations for higher-priority workloads at specified times of the day or night.
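The day/night GPU-sharing policy described above can be sketched in a few lines. This is an illustrative model only – the pool names, priority threshold and time windows are hypothetical assumptions, not Moab's actual policy syntax:

```python
from datetime import time

# Illustrative day/night GPU-sharing policy, in the spirit of the
# workload-management policies described above. All names and rules
# here are hypothetical, not actual Moab configuration.

DAY_START, DAY_END = time(8, 0), time(18, 0)   # assumed business hours

def route_job(job_type: str, now: time, priority: int = 0) -> str:
    """Return the resource pool a job should run on.

    job_type: 'visualisation' or 'compute'
    priority: jobs at or above the threshold claim a reserved GPU slot
    """
    daytime = DAY_START <= now < DAY_END
    if priority >= 100:
        return "gpu-pool"          # standing reservation for priority work
    if job_type == "visualisation":
        # visualisation sessions get the GPUs during the day
        return "gpu-pool" if daytime else "viz-queue-deferred"
    # computational HPC jobs get the GPUs overnight
    return "gpu-pool" if not daytime else "cpu-pool"

print(route_job("visualisation", time(10, 30)))  # daytime viz session
print(route_job("compute", time(23, 0)))         # overnight HPC batch job
```

In a real deployment these decisions would be expressed as scheduler policies and standing reservations rather than application code, but the routing logic is the same: job class plus time of day determines which shared resources a session lands on.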
