The pros and cons of virtualisation in HPC

Enterprise computing makes extensive use of virtualisation. If introduced properly, Deepak Khosla argues, it can have a useful role in high-performance computing too.

Hardware virtualisation has a lot to recommend it. For enterprise data centres, it can mean far better system utilisation, flexible workload management, and, through consolidation, impressive total cost of ownership and return on investment. Security is enhanced, and power, cooling, and floor-space requirements are reduced. However, virtualisation has its share of problems, especially when applied to the world of high-performance computing (HPC).

What is virtualisation?

Hardware virtualisation is the creation of a number of self-contained virtual servers that reside on a physical server – the host machine. The hypervisor presents a complete hardware platform to each virtual machine (VM). The hardware abstraction made possible by virtualisation means that there is no need to modify the IT infrastructure’s operating system and drivers every time new virtualised hardware capabilities are added to a cluster – a flexibility that bare metal (physical) clusters lack.

Multiple applications can run on the same physical machine, each isolated within its own VM, with fault isolation and security between them.

Virtualisation really took off in 1999, when a group of Stanford University researchers founded VMware and released their first product, based on virtualising the x86 instruction set. Since then the concept has spread from hardware to platforms, and even to networking and storage, as the technology continues to evolve.

A few drawbacks

Although virtualisation continues to make inroads in enterprise and some HPC data centres, it comes with its share of problems.

For example:

  • VM sprawl – Today’s hypervisors make it easy to allocate compute resources and create a VM on an available server in minutes. However, without proper design, planning and management, VMs can breed like Tribbles, overburdening their bare metal hosts and requiring the purchase of new physical servers.

  • Capacity planning – The same caveat applies to network capacity planning, because VM resource demands are dynamic. Without proper capacity planning for the virtualised environment, VM sprawl takes hold and hosts become oversubscribed.

  • Substandard application performance – An underperforming application may be running into VM resource limits, such as too little memory or too few CPU cores, or it may demand very high I/O bandwidth that virtualised drivers, if not fully tuned to the hardware, cannot deliver.

  • Network congestion – As more VMs are packed onto a single server, each wants its share of the network, and a bare metal server with a single NIC port can become overwhelmed.

  • Hardware failures – A server hosting multiple VMs acts as a single point of failure, playing varying degrees of havoc with workloads until failover and restart capabilities kick in.

  • Software licensing – Difficult enough in conventional environments, software licensing becomes truly complicated in the world of virtualisation. ISVs are still modifying their licensing policies to deal with the deployment of multiple VMs and the other quirks of this environment.
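Several of the risks above – sprawl, poor capacity planning, and congestion – come down to simple overcommit arithmetic: how many virtual CPUs have been promised relative to the physical cores underneath. The sketch below illustrates the idea; the host inventory, function names, and the 4:1 threshold are illustrative assumptions, not part of any particular product (real data would come from the hypervisor’s management API).

```python
# Illustrative sketch: flag oversubscribed hosts by their CPU overcommit ratio.
# The inventory format and threshold are hypothetical examples.

def overcommit_ratio(host_cores, vm_vcpu_counts):
    """Total vCPUs allocated to VMs divided by the host's physical cores."""
    return sum(vm_vcpu_counts) / host_cores

def flag_oversubscribed(hosts, threshold=4.0):
    """Return names of hosts whose vCPU:core ratio exceeds the threshold.

    hosts: dict of name -> (physical_cores, [vcpus per VM])
    A 4:1 ratio is a common rule of thumb for general server workloads;
    HPC codes that pin cores usually need close to 1:1.
    """
    return [name for name, (cores, vcpus) in hosts.items()
            if overcommit_ratio(cores, vcpus) > threshold]

hosts = {
    "node01": (32, [8, 8, 8, 8]),          # 32 vCPUs on 32 cores -> 1:1
    "node02": (16, [16, 16, 16, 16, 16]),  # 80 vCPUs on 16 cores -> 5:1
}
print(flag_oversubscribed(hosts))  # -> ['node02']
```

Running such a check regularly across a cluster is one cheap way to catch sprawl before it forces new hardware purchases.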

Virtualisation advantages

Despite the speed bumps mentioned above, the benefits of virtualisation can outweigh the difficulties – in some cases even in the more demanding HPC applications environment.

For example, virtualisation applied to HPC can:

  • Improve business agility by providing the additional compute resources needed to optimise an organisation’s ‘time to science’ or time-to-market, as well as its competitiveness
  • Increase hardware utilisation in cases where the resource usage is low but dedicated to a single team
  • Exhibit high resiliency in dealing with power outages, equipment failure, or other disruptions
  • Enable more users quickly and cost-effectively – virtualisation leverages existing hardware and accommodates a wide range of software environments, such as Red Hat, SUSE, and other Linux distributions. It also supports rapid re-provisioning.

Virtualisation allows users to pilot a non-critical service in a live production virtual environment before deploying the application to the data centre’s primary HPC cluster. They can also create self-contained sandboxes in which to explore new applications, without impacting other users, before committing to live production.

New users with conflicting requirements can become productive quickly, without IT having to purchase dedicated hardware. Virtualisation provides the capability to enforce the sharing of IT resources between multiple groups: instead of the more unstructured, trust-based approach found in many HPC environments, the technology allows IT to guarantee resources to specific user groups as needed.

Enhanced security is another major benefit of virtualisation. The technology allows workloads to be compartmentalised within their own separate VMs, taking full advantage of today’s multicore, heterogeneous HPC systems while supporting high levels of security. Security is delivered as a software-defined service, decoupled from physical devices, so workloads can be scaled and moved without loosening security controls or deploying specialised appliances. Overall, virtualisation allows hardware to be shared while providing fault and security separation between users.

Virtualisation in the cloud

Making virtualised resources available through a public or hybrid public/private cloud has a number of advantages. Cloud-based automatic provisioning gives users on-demand access to compute resources within the guidelines of critical IT and business policies. It also allows IT to handle workload peaks by cloud bursting – an alternative to deploying over-provisioned bare metal environments to cope with unanticipated demand.
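The bursting decision itself is straightforward to reason about: satisfy demand from free local capacity first, then request cloud nodes for the shortfall, up to whatever cap IT policy allows. The following sketch makes that logic concrete; the function name, parameters, and policy cap are hypothetical, not taken from any specific scheduler.

```python
# Illustrative cloud-bursting decision, not a real scheduler: decide how many
# cloud nodes to request when the local cluster cannot absorb queued jobs.
# All names and the policy cap are hypothetical assumptions.

def nodes_to_burst(queued_jobs, jobs_per_node, free_local_nodes, max_cloud_nodes):
    """Number of cloud nodes to provision under a simple bursting policy."""
    needed = -(-queued_jobs // jobs_per_node)      # ceiling division
    shortfall = max(0, needed - free_local_nodes)  # use local capacity first
    return min(shortfall, max_cloud_nodes)         # respect the IT policy cap

# 120 queued jobs at 8 jobs per node need 15 nodes; 6 are free locally,
# so burst 9 cloud nodes -- within a policy cap of 20.
print(nodes_to_burst(120, 8, 6, 20))  # -> 9
```

The cap is what keeps bursting within "the guidelines of critical IT and business policies" rather than becoming an uncontrolled cloud spend.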

By making virtualised HPC compute power available when and where it’s needed, IT can experience greater resource utilisation. This reduces the risk of compute islands being established outside the IT infrastructure by frustrated users who are not getting the access they need to the organisation’s computational resources.

Conclusion

Within both the enterprise and HPC environments, virtualisation is now being brought to bear on complex and demanding workflows. A seemingly simple concept, virtualisation can be difficult to deploy given the complicated mix of technological and policy decisions involved. Anyone contemplating introducing HPC virtualisation into their data centre should get help from a vendor-neutral third-party organisation familiar with the unique demands that HPC makes on this technology. Such consultants can help identify risks and minimise the surprises and delays on the way to a successful implementation.

It’s worth the effort. Properly implemented, virtualisation can provide a level of flexibility, agility and cost-effectiveness that is unmatched by bare metal solutions.

Deepak Khosla is president of X-ISS Inc, which has provided cross-platform management and analytics solutions for high-performance computing and big data for more than 10 years.