
Take HPC to the cloud – in a container

Everyone agrees that HPC is needed to foster innovation and strengthen competitiveness. Why, then, is HPC still reserved for a small community of top experts?

Since the days of the early Beowulf clusters in the late 1990s, the HPC community has been predicting that HPC for the masses was at the door. But the door has remained firmly shut. Beowulfs were so easy to build; grids connected every researcher to powerful supercomputers; and clouds now offer computing on demand at the user’s fingertips. But where are the masses of users? According to the US Council on Competitiveness, fewer than five per cent of manufacturers use HPC servers to run the computer simulations with which they design and develop their products.

It seems that, with every big step forward in computing, we have added one more layer of complexity, one more hurdle, and so we have scared away the average end-user, who has decided instead to stick with their workstation – still regularly frustrated by its performance and memory limitations, but at least thoroughly familiar with how it works.

But there may be a silver lining for HPC – in the cloud. In my view, there are some very good reasons why HPC clouds will follow enterprise clouds, albeit at a distance. Users will get HPC without having to buy and operate an HPC system. They can continue to use their own desktop system for daily design and development work, and then submit larger, more complex, time-consuming jobs to the cloud. Because cloud services are already widely accepted and used at the enterprise level (e.g. for ERP, CRM, and administration), their adoption in a company’s R&D department will seem like a natural progression. Supply-chain partners, whom the big manufacturing companies are encouraging to perform end-to-end simulations on HPC systems in order to reduce failure rates and raise quality across the entire supply chain, should be natural candidates for cloud computing.

Despite these advantages, there are roadblocks, such as complex access processes to clouds; conservative software licensing; losing control over assets in the cloud; slow transfers of large data sets; incompatible clouds; and a jungle of cloud hardware, software, and expertise providers.

One important recent technological development might have the power to change the world of HPC cloud: UberCloud Containers. The UberCloud started in mid-2013 with an open platform called Docker, which can package an application and its dependencies in a virtual container that runs on any modern Linux server. The UberCloud enhanced Docker to make it suitable for technical computing applications in science and engineering.
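To illustrate the packaging idea, here is a minimal sketch of building such a container image programmatically. It uses the Docker SDK for Python against a local Docker engine; the base image, package, and tag are placeholder choices for illustration, not an actual UberCloud recipe.

```python
import io
import docker

# Connect to the local Docker engine (assumes Docker is installed and running).
client = docker.from_env()

# A minimal, hypothetical Dockerfile: the base image and package are placeholders
# standing in for an engineering application and its dependencies. Building from
# an in-memory file object has no build context, so only RUN-style instructions
# are used here.
dockerfile = io.BytesIO(b"""\
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends openmpi-bin
CMD ["mpirun", "--version"]
""")

# Build the image and tag it so that it could later be pushed to a registry.
image, build_log = client.images.build(fileobj=dockerfile, tag="demo-solver:latest")
print(image.tags)
```

In a real container, the application binaries and libraries would be baked into the image in the same way, so the result runs identically on any Linux host with a container run time.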

UberCloud Containers form the basis for the user-friendly online UberCloud Marketplace. They rely on Linux kernel facilities such as cgroups and namespaces, exposed through libcontainer and LXC, which are already part of many modern Linux operating systems. The run-time components needed to launch UberCloud Containers are widely distributed with the Docker platform and do not require additional capital investment. The containers are launched from pre-built images distributed through a central registry hosted by the UberCloud, so software and operating-system updates, enhancements, and fixes become available automatically for the next container launch.
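As a rough sketch of what launching from a registry-hosted, pre-built image looks like in practice, the following again uses the Docker SDK for Python; the registry address, repository name, and solver command are invented placeholders rather than real UberCloud images.

```python
import docker

# Connect to the local Docker engine.
client = docker.from_env()

# Pull a pre-built application image from a central registry. The registry
# address, repository, and tag are placeholders, not real UberCloud images.
client.images.pull("registry.example.com/team/solver", tag="latest")

# Launch a container from that image; the solver command is hypothetical.
container = client.containers.run(
    "registry.example.com/team/solver:latest",
    command="solve --input /data/case1",
    detach=True,   # return immediately and let the job run in the background
)

# Stream the job's output while it runs.
for chunk in container.logs(stream=True):
    print(chunk.decode(errors="replace"), end="")
```

Because the image is pulled from the registry before the launch, any update published to the registry is picked up automatically the next time a container is started.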

The notion of a pre-built image may sound familiar: it has been at the heart of virtualisation, a popular technology for breaking a physical computing environment down into finer logical pieces. Unlike virtualisation, however, UberCloud Containers do not rely on a hypervisor; instead, they share the host operating system’s kernel and application libraries, giving performance comparable to bare-metal installations.
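The difference from a hypervisor can be seen directly: a container reports the host’s own kernel. A small sketch, again with the Docker SDK for Python and the stock alpine image as a stand-in, run on a Linux host:

```python
import platform
import docker

client = docker.from_env()

# Containers share the host kernel rather than booting a guest operating
# system under a hypervisor, so `uname -r` inside the container reports the
# same kernel release as the host (on a Linux host running Docker natively).
host_kernel = platform.release()
container_kernel = (
    client.containers.run("alpine:latest", command="uname -r", remove=True)
    .decode()
    .strip()
)

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
```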

UberCloud Containers are portable: they can run on most infrastructures with minimal modification, and the run-time environment is distributed as open source. The UberCloud manages the contents of the containers and keeps them up to date, which holds installation, tuning, maintenance, and testing costs to a minimum. Engineering applications, tools, and operating systems are constantly being added to the portfolio.

The containers rely on lightweight Linux container technology, so the overhead is low. They start within seconds, with a single command, meaning that end-users receive the resources they need, when they need them. Containers are easy for any Linux user to understand, and IT audits of their components, configurations, and security settings are straightforward to perform.
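The start-up claim is easy to check on any machine with Docker installed. A minimal timing sketch, using the small stock alpine image as a stand-in for a real application image:

```python
import time
import docker

client = docker.from_env()

# Pull the stand-in image first, so the timing below measures container
# start-up rather than the one-off image download.
client.images.pull("alpine", tag="latest")

# Time how long it takes to create a container, run a trivial command in it,
# and tear it down again.
start = time.perf_counter()
client.containers.run("alpine:latest", command="true", remove=True)
elapsed = time.perf_counter() - start

print(f"Container started, ran, and exited in {elapsed:.2f} seconds")
```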

Container technology can reduce or even eliminate many of today’s HPC cloud hurdles, making access to the cloud as easy as accessing a workstation. Packaged in a suitable container like this, HPC will rocket high into the cloud.

Wolfgang Gentzsch is co-founder and president of the UberCloud


