CAE turns to HPC

The CAE industry is no stranger to the use of HPC – but what was traditionally a tool employed by the largest engineering companies, such as the automotive and aerospace manufacturers, is now becoming much more ubiquitous as the barrier to HPC at entry level is lowered.

Although a large-scale on-premise cluster can still be out of the price range of many small or medium-sized engineering companies, they can still leverage the computational capabilities of HPC through cloud or appliance-based systems designed specifically for engineers or designers.

For large-scale users with access to their own HPC infrastructure, software companies are continually optimising their software to better fit today’s highly parallel architectures through partnerships with HPC centres and hardware providers, such as Intel and Nvidia, to tune engineering software for the next generation of HPC architecture. 

Wim Slagter, director of HPC and cloud alliances at Ansys, commented on the importance of HPC for engineering and design. ‘HPC is helping manufacturers cut costs and create new revenue streams because they can design completely new products they had not previously considered. Users can also produce more reliable products and reduce cost in the development cycle,’ said Slagter.

He also stressed that many more engineers and designers have begun to see the potential value of HPC in their workflows, pointing to a survey published by Ansys which found that many users face problems HPC can help solve. The 3,000 respondents were asked to name the biggest pressures on their design activities: about half cited reducing the time required to complete design cycles, and around a quarter cited producing more reliable products, which lowers warranty-related costs. ‘HPC is an enabler of innovation. HPC is a technology that addresses our customers’ top challenges,’ said Slagter.

Reducing the barrier to HPC

Today the benefits of using HPC are fairly well understood by the CAE community, but this does not make HPC systems any cheaper to set up or run. In the past, the cost of provisioning and managing an in-house cluster was prohibitive for all but the largest companies; now, more engineers can gain access to HPC-class computing capabilities through technologies such as the cloud.

For small and medium-sized enterprises in particular, Ansys recommends cloud computing as a way to reduce costs and gain access to HPC computation without the burden of managing a cluster.

‘It is clear that hardware and software enhancements have enabled HPC to deliver more value, but there are also challenges once companies start to deploy HPC. For example, in smaller enterprises specifying, provisioning and managing a cluster is not straightforward. They may lack even the basic in-house IT staff to set up and manage a cluster. These are real challenges, so we need to simplify HPC cluster deployments,’ stated Slagter.

Ansys’s main route to democratising HPC for a wider audience is through partnerships with cloud-hosting providers, HPC hardware manufacturers and supercomputing centres such as HLRS in Stuttgart.

The cloud partners provide not only HPC services but also the back-end infrastructure for customers who lack in-house HPC or IT staff yet still want the ability to increase computational resources quickly. Ansys also has partnerships with hardware providers that supply appliances: pre-configured racks of computational hardware optimised and configured to run Ansys software.

‘The theory of cloud is that those users can deploy these services only when they need it and ... they only have to pay for what they use. Smaller companies are interested in the flexibility, not only in terms of hardware deployment, but also software licences,’ stated Slagter. 

‘Cloud is one aspect that is clearly helping to address these challenges, the other thing is that we have developed HPC appliances with specific partners – out-of-the-box, externally managed clusters. Again, this is aimed at those companies lacking infrastructure or IT staff, the right people to manage and provision a cluster,’ Slagter said.

Through alliances and partnerships, Ansys is aiming to democratise the use of HPC for engineers in smaller firms by providing them with either cloud-based or appliance-based HPC solutions. For users without the infrastructure or budget to support an in-house cluster, these options can provide HPC-class computation at a lower barrier to entry than traditional HPC, as the rough cost sketch below illustrates.
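
As a simple illustration of that trade-off, the sketch below compares the amortised annual cost of an owned cluster with pay-per-use cloud capacity. All of the prices, the cluster size and the utilisation figure are invented assumptions for illustration only; they are not figures quoted by Ansys or any cloud provider.

```python
# Back-of-envelope comparison of on-premise vs. cloud HPC cost.
# All figures are illustrative assumptions, not vendor or Ansys pricing.

def on_premise_annual_cost(capex, lifetime_years, annual_opex):
    """Amortised yearly cost of owning a cluster (hardware plus power/admin)."""
    return capex / lifetime_years + annual_opex

def cloud_annual_cost(core_hours_used, price_per_core_hour):
    """Pay-per-use cost: only the core-hours actually consumed are billed."""
    return core_hours_used * price_per_core_hour

# Hypothetical small-firm scenario: a 256-core cluster busy about 20% of the year.
cores = 256
utilisation = 0.20
core_hours = cores * 8760 * utilisation   # core-hours actually consumed per year

own = on_premise_annual_cost(capex=150_000, lifetime_years=4, annual_opex=20_000)
rent = cloud_annual_cost(core_hours, price_per_core_hour=0.05)

print(f"On-premise: ~${own:,.0f}/year, cloud: ~${rent:,.0f}/year "
      f"at {utilisation:.0%} utilisation")
```

At low utilisation the pay-per-use model tends to win; as utilisation rises the balance shifts back towards owned hardware, which is why the argument is usually framed around flexibility as much as raw price.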

‘These partners work with system integrators to remotely manage HPC clusters for our customers, and those clusters are shipped to the customers but then everything is up and running, Ansys is pre-installed and the system is optimised to run Ansys workloads,’ said Slagter.

‘Gradually HPC resources are becoming more readily available to all engineers and hardware is becoming more affordable and also more powerful than ever before,’ Slagter concluded.

Preparing for the future

Though the largest growth in the use of HPC comes from small and medium-sized companies, CAE software providers like Ansys are still putting a lot of time and effort into tuning their software for large-scale HPC simulations.

Ansys has developed partnerships with HPC centres such as HLRS in Stuttgart and King Abdullah University of Science & Technology (KAUST) in Saudi Arabia. These partnerships allow Ansys to scale its software to test the limits of engineering simulation on some of the largest supercomputers in the world.

For example, in July this year Ansys, Saudi Aramco and KAUST announced that they had set a new supercomputing milestone by scaling Ansys Fluent to nearly 200,000 processor cores. The record represents a more than five-fold increase over the 36,000-core milestone Fluent first reached just three years earlier.

The calculations were run on Shaheen II, a Cray XC40 supercomputer hosted at the KAUST Supercomputing Core Lab (KSL).
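
Records of this kind are usually assessed in terms of strong-scaling speed-up and parallel efficiency relative to a baseline core count. The short sketch below shows how those metrics are computed from wall-clock timings; the timings are invented for illustration and are not the published Fluent or Shaheen II results.

```python
# Strong-scaling metrics: speed-up and parallel efficiency relative to a baseline run.
# The wall-clock times below are invented for illustration, not published results.

def scaling_metrics(baseline_cores, baseline_time, runs):
    """Return (cores, speed-up, efficiency) for each run relative to the baseline."""
    results = []
    for cores, wall_time in runs:
        speedup = baseline_time / wall_time        # how much faster than the baseline
        ideal = cores / baseline_cores             # perfect linear speed-up
        efficiency = speedup / ideal               # fraction of the ideal achieved
        results.append((cores, speedup, efficiency))
    return results

# Hypothetical timings for the same fixed-size CFD model at increasing core counts.
baseline_cores, baseline_time = 36_000, 1000.0     # seconds for the baseline run
runs = [(72_000, 520.0), (144_000, 280.0), (196_608, 215.0)]

for cores, speedup, eff in scaling_metrics(baseline_cores, baseline_time, runs):
    print(f"{cores:>8,} cores: speed-up {speedup:5.2f}x, efficiency {eff:6.1%}")
```

What matters in practice is the efficiency retained at the highest core count, since that determines whether the extra hardware actually shortens time to solution.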

‘We need to deliver better HPC performance and capability. These customers are pushing the envelope; they come up with ever more challenging models that require more computational resources and better performance,’ said Slagter. ‘We are constantly working on the optimisation of our software. We need to provide more parallelism throughout the entire process. We need to continue working with supercomputing partners.’

By partnering with these HPC centres, Ansys can test the limits of its software and implement optimisations that take advantage of the highly parallel nature of today’s leadership-class HPC systems. Without these partnerships the onus would be on Ansys alone to deliver these improvements, a task that is quickly becoming too complex for a single organisation.

‘The computer industry has delivered enormous increases in computing speed at lower cost. As an ISV, we have also made significant improvements regarding parallel performance, robustness and scalability,’ said Slagter. ‘The same is true for GPUs as well. Many of our Ansys products, from structural mechanics to fluid dynamics to electromagnetics, are now taking advantage of GPUs,’ concluded Slagter.

Expanding the capabilities of design engineers

Another fast-growing area of CAE is the use of simulation tools by designers, who can make quick changes to design concepts and see the effect on product performance without having to wait for verification from engineers. Slagter explained that Ansys has been working for a number of years on its design tool, Ansys Discovery Live. The aim of the product is to put simulation tools in the hands of designers in order to accelerate the prototyping of new products.

‘We have recently launched Ansys Discovery Live, a new product that is built completely from scratch on GPUs. It is empowered by the thousands of cores available in a GPU and it provides real-time simulation results that are primarily aimed at designers,’ said Slagter. ‘This was a multi-year development programme where people have drag-and-drop capabilities, but with a built-in simulation that solves in an instant, so people do not have to wait to get their results. For example, if a user changes the CAD, you can immediately see the influence on the flow or on the stresses in the structure.’

While this tool will not directly be used on HPC resources, it does feed into the same design cycle and could reduce the amount of simulation required, as small changes can be made early in the design process without requiring large simulations to verify their performance. 

‘It is not providing the full high-fidelity results that the analysts require, but that is not necessary for designers who want very quick results,’ said Slagter.

This new tool can feed into HPC workflows, as designers may work quickly on a workstation and then pass data to engineering teams that run a full analysis on an HPC cluster. Ultimately, modern CAE requires ISVs to serve several user communities with different requirements, from large aerospace or automotive companies to small engineering firms and design teams, each of which uses engineering software in different ways.

‘It may sound difficult if you are looking for a single solution to meet the requirements of these different types of customers, but that is not what you should do, or at least that is not our HPC strategy,’ said Slagter. ‘We are not working on a single solution that will meet all requirements, because that is impossible. For analysts, we will continue to improve parallelisation further, optimise the software, extend the parallelism throughout the workflow, and work with supercomputing centres to profile and benchmark the software at extreme scale.

‘It is impossible to provide just a single solution, you need an ecosystem. It is becoming more and more important to grow and come up with the right solutions by working with the right partners,’ he concluded.


