
Driving change

As consumer products go, the modern automobile is subject to more design constraints than most. Automobile designers produce works of aesthetic beauty, capable of slicing efficiently and silently through the wind, as their powerful motors turn the world beneath them. Engineers, on the other hand, must make this creativity practical, packing functionality into the constraints of designers’ outlines. These days, automobile manufacturers are expected to prioritise safety and the environment, but many consumers are reluctant to sacrifice form and performance at the altar of sensibility. Blending these contrapuntal elements into a compelling and desirable product is not easy. But how can high performance computing help to marry the creativity of designers with the precision of engineering?

Bringing designs to life

Every component of a new model of car is designed, tested, and deliberated over in silico. Computer-aided engineering (CAE) techniques are used to ensure the functionality of the working components within the car, and of the structural materials involved, while computer-aided design (CAD) techniques are used to model the shape of the car.

At each step in the process of designing a new car, and for each component, there are choices to be made. Decisions about the shape of a 3D car cannot be based on 2D drawings, nor can they easily be made from models displayed on a flat computer screen.

Virtual reality (VR) has played a part in product visualisation within the automotive industry over the past 10 years, and the latest generation of immersive VR is building on this foundation by allowing product development to be accelerated while cutting costs. With the introduction of software capable of directly interpreting a CAD model via a graphics cluster, such as Mechdyne Corporation’s Conduit product, VR has gained a new degree of usefulness in visualisation.

Immersive VR takes the form of CAVE Automatic Virtual Environments (or CAVEs – the term ‘CAVE’ is a recursive acronym). CAVEs are white rooms, wherein data projectors create an image on the walls and floor. Users within the CAVE wear position-sensitive 3D goggles that transmit information about the user’s viewpoint back to the control computer, meaning that the simulated object can be made to seem stationary as the user moves around it. In addition, users may also have some form of interface by which they can interact with the virtual environment or digital prototype. A single projector serves each wall of the CAVE. Conventionally, each of these projectors is controlled by a single PC and GPU, with an additional PC or cluster generating the visualisation.
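The illusion of a stationary object rests on recomputing each wall’s projection every frame from the tracked head position. The sketch below shows the core of that calculation for a single wall, using the well-known ‘generalised perspective projection’ construction; the wall dimensions, coordinate system and use of Python with numpy are illustrative assumptions, and a production CAVE pipeline adds stereo rendering, synchronisation and the accompanying view matrix.

```python
# Minimal sketch: off-axis frustum for one CAVE wall, driven by the
# tracked eye position. Coordinates are in metres, in tracker space.
import numpy as np

def wall_frustum(pa, pb, pc, eye, near=0.1):
    """Frustum extents (left, right, bottom, top) at the near plane.

    pa: lower-left corner of the wall, pb: lower-right, pc: upper-left.
    """
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # wall 'right' axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # wall 'up' axis
    vn = np.cross(vr, vu)                      # wall normal, towards viewer
    va, vb, vc = pa - eye, pb - eye, pc - eye  # eye-to-corner vectors
    d = -np.dot(va, vn)                        # eye-to-wall distance
    return (np.dot(vr, va) * near / d,         # left
            np.dot(vr, vb) * near / d,         # right
            np.dot(vu, va) * near / d,         # bottom
            np.dot(vu, vc) * near / d)         # top

# A hypothetical 3m x 3m front wall, with the viewer standing off-centre:
l, r, b, t = wall_frustum(pa=np.array([-1.5, 0.0, -1.5]),
                          pb=np.array([ 1.5, 0.0, -1.5]),
                          pc=np.array([-1.5, 3.0, -1.5]),
                          eye=np.array([0.5, 1.7, 0.0]))
```

Because the resulting frustum is asymmetric about the view axis, the image looks correct only from the tracked viewpoint, which is precisely why the goggles must report the user’s position.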

Jan Rendek is the chief marketing officer at Scalable Graphics, a French company that specialises in middleware for CAVE systems. He describes the demands made by increased reliance on VR visualisation: ‘As people are making more and more decisions based on VR images, the CAD models need to be more and more complex in order to reflect the reality they represent,’ he says. ‘The problem with traditional systems is that a single workstation does not have enough computing power to be able to move around [these complex models].’ A lack of computing power behind the CAVE can lead to a lag between the observer’s movements and the corresponding adjustment of the image; a delay of more than a few milliseconds can lead to an unpleasant feeling dubbed ‘cyber-sickness’.

The middleware offered by Scalable Graphics addresses this limitation by slicing the image differently. Rather than giving each projector a single workstation, the company’s middleware allows a user to ‘break the one-to-one relationship between projectors and workstations’. In practice, this means that any number of workstations could be devoted to the problem; the image is divided between them in whichever way is most effective, be that by slicing each image up into smaller parts (screen decomposition), or by having each workstation account for a particular object (database decomposition). The operator adds as many workstations as are required, until the model is displayed at the resolution and frame-rate he or she needs.
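As a toy illustration of the two strategies (and emphatically not Scalable Graphics’ actual middleware), screen decomposition might hand each workstation a strip of the final image, while database decomposition hands each workstation a share of the scene’s objects:

```python
# Toy sketch of two ways of dividing rendering work across workstations;
# the function names, tile shapes and object names are assumptions.

def screen_decomposition(width, height, n_nodes):
    """Split one wall's image into horizontal strips, one per node."""
    strip = height // n_nodes
    return [(0, i * strip, width,
             strip if i < n_nodes - 1 else height - i * strip)  # (x, y, w, h)
            for i in range(n_nodes)]

def database_decomposition(object_ids, n_nodes):
    """Assign whole scene objects to nodes round-robin; each node renders
    only its own objects and the partial images are composited afterwards."""
    groups = [[] for _ in range(n_nodes)]
    for i, obj in enumerate(object_ids):
        groups[i % n_nodes].append(obj)
    return groups

tiles = screen_decomposition(4096, 2160, 4)
groups = database_decomposition(['body', 'chassis', 'engine',
                                 'interior', 'wheels'], 2)
```

Adding a workstation then simply means re-slicing into smaller tiles or redistributing objects, which is what lets the operator keep scaling until the target resolution and frame-rate are reached.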

The middleware was released earlier this year, and the company boasts PSA Peugeot Citroën as a customer. ‘The approach gives them the performance they need right now,’ says Rendek, ‘but it also ensures that their systems will scale over time; if PSA begins using a more and more complex [CAD] model, then they can add more workstations to the existing solution’. Rendek believes that another strong point of the Scalable Graphics middleware is that it makes use of commercial, or ‘commodity’ hardware: ‘Commodity hardware evolves pretty quickly, particularly graphics cards, and [CAVE operators] will be able to directly benefit from these improvements’.

PSA uses the VR simulations at all stages of design in order to limit the number of physical mock-ups that must be built for a given car. ‘They can test any number of hypotheses, because they don’t have to build them physically,’ says Rendek. VR is also used to simulate the physical process of production, to determine whether an idea is technically feasible.

Before any prototypes make it to the wind tunnels, computational fluid dynamics (CFD) simulations will have predicted the aerodynamics of the car’s body, allowing the designers to consider the effects of their creativity even as they encode their designs into the simulation. Later, more complicated physical simulations can greatly reduce the costs of crash-testing by allowing designers to wreck virtual cars, even before the first real car is produced.

Combining drawing boards

Displaying simulations is not the only challenge; finding the computing power to process the models also becomes difficult as their complexity increases. A new car is designed by component groups, with a small team taking responsibility for a small part of the car. Each component must be checked against all of its neighbours, in order to ensure that the final product fits together as expected, and that it looks the way the design group wants it to look. Christoph Reichert, EMEA VP of HPC sales at Platform Computing, describes the problems faced when combining the component models: ‘These are huge calculations, because the complexity scales as a power of n [where n is the number of components]. There are more than 5,000 parts in a car, all of which need to fit into the design frame given by the designers.’
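The combinatorics make the point: checking every part against every other part means n(n−1)/2 candidate pairs, so 5,000 parts already imply roughly 12.5 million checks before any detailed geometry is examined. A cheap bounding-box pass is one common first filter; the sketch below is illustrative, with made-up part names and geometry.

```python
# Why pairwise fit-checking explodes: n parts -> n(n-1)/2 candidate pairs.
from itertools import combinations

n = 5000
print(n * (n - 1) // 2)  # 12,497,500 pairs for one full cross-check

def aabb_overlap(a, b):
    """Cheap first filter: do two axis-aligned bounding boxes intersect?
    Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax)), in metres."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return all(a_lo[i] <= b_hi[i] and b_lo[i] <= a_hi[i] for i in range(3))

parts = {  # hypothetical bounding boxes for two door components
    'side_impact_bar': ((0.0, 0.2, 0.00), (1.2, 0.3, 0.10)),
    'door_skin':       ((0.0, 0.0, 0.05), (1.3, 1.0, 0.12)),
}
clashes = [(p, q) for p, q in combinations(parts, 2)
           if aabb_overlap(parts[p], parts[q])]
```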

Platform has several users in the automotive industry, including PSA. The company’s LSF product manages computing resources over a grid, which may be geographically diffuse and may span the company’s entire network. LSF is able to prioritise compute tasks to ensure that the company’s investment in computing capacity is put to good use.
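The idea behind such prioritisation can be sketched in a few lines; the following toy dispatcher is purely an illustration of the concept, not LSF’s interface:

```python
# Toy priority dispatcher: the most urgent job that fits runs first.
import heapq

def submit(queue, order, priority, name, cores):
    # heapq is a min-heap, so negate priority to pop the most urgent job
    heapq.heappush(queue, (-priority, order, name, cores))

queue, free_cores, running = [], 160, []
submit(queue, 0, priority=10, name='crash_sim_front', cores=64)
submit(queue, 1, priority=50, name='urgent_clash_check', cores=16)
submit(queue, 2, priority=10, name='cfd_underbody', cores=128)

while queue and queue[0][3] <= free_cores:   # does the head of the queue fit?
    _, _, name, cores = heapq.heappop(queue)
    free_cores -= cores
    running.append(name)
# running == ['urgent_clash_check', 'crash_sim_front'], 80 cores left idle
```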

Driving commodity computing

To meet ever-increasing passenger safety expectations, structures such as roll cages and side-impact bars reinforce cars at key points. Conversely, to meet ever-increasing demand for fuel-efficient vehicles, a new car must weigh as little as possible. These two criteria are measured against each other at the design stage.

L&L Products has been in the business of thermosetting plastics for more than 50 years, with a particular focus on custom adhesives. For the automotive industry, the company offers a product line called Composite Body Solutions, or CBS. CBS focuses on the structurally important parts of a car, which are usually produced with a tubular cross-section. The properties of these components can be precisely tuned by adjusting the thickness or the composition of the steel involved, but some compositions can be expensive and difficult to work with. Automotive designers can cut costs and weight by placing an additional polymer component inside the steel structure to hold its shape during deformation. In practice, these inserts are usually injection-moulded nylon pieces, which are very cheap to produce, glued into place using L&L’s adhesives. The company uses HPC to design these nylon inserts for its automotive clientele.

Steve Reagan heads the computational modelling team at L&L. He explains that automotive customers share their numerical models with the company, leaving L&L to perform the crash-test simulations and expecting it to design and optimise the relevant inserts. The company runs its own 24-node cluster, using LSTC’s LS-DYNA finite-element solver, although Reagan admits that they will adapt their simulations to use whatever solver the customer uses.

This degree of flexibility does not come cheap to a small company like L&L. A crash simulation has many millions of degrees of freedom. Reagan estimates that a four-million-element simulation can require up to 35 hours on the company’s cluster, and many simulations must be completed in order to optimise a design. As with visualisation, the complexity of the models involved increases constantly, and L&L’s computing requirements look set to increase accordingly. ‘Customers will always have larger and larger models; that’s been the progression, and they’ll continue to move in that direction,’ says Reagan. ‘The number of CPUs that it will take to run that model in a reasonable amount of time will tend to increase.’ L&L is also constrained when it comes to simplifying the model: ‘The customer has built the model, they’ve shared it with us, and they expect us to add some component to it without changing it,’ explains Reagan.

Simplifying the model is not currently a viable option for L&L, and the turnaround time is fixed at two to three weeks. If the company cannot run enough simulations in that time to optimise the additional component sufficiently, a large margin of error must be built in to ensure that safety standards are met. Reagan believes that if his designs are too conservative in this way, the customer is likely to be dissatisfied. The only option remaining to the company, as models grow in complexity, is to increase the computing power to which it has access.
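A rough calculation shows how tight that window is, assuming for illustration that the 24-node cluster is dedicated to a single project and runs one job at a time:

```python
# Back-of-envelope: how many optimisation runs fit in the turnaround window?
hours_per_run = 35           # ~4-million-element crash simulation, as quoted
window_hours = 3 * 7 * 24    # three-week turnaround = 504 hours
print(window_hours // hours_per_run)  # at most 14 sequential iterations
```

Fourteen iterations is not many when optimising a safety-critical component, and any queueing behind other projects reduces the number further; hence the push for more computing power.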

The cluster used by L&L cost $100k, and the LS-DYNA licence costs an additional $75k annually. Reagan recognised that scaling up the company’s on-site cluster would be an expensive option, and he decided to seek computing power from a cloud-based provider, by way of an on-demand model.

Reagan had a close relationship with Altair, and he approached the company with an idea inspired by his Starbucks card. ‘I told them: “I can never convince my wife to leave $500 down at Starbucks each January to pay for my coffee all year, but if Starbucks gives me a little card and says ‘you just come in whenever you want, and pay by the month’, well, I’ll spend way more than $500 over the year.”’ Altair gave him a core/hour price with an agreement to bill him monthly, initially on a trial basis.
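The economics behind that analogy can be sketched with the figures quoted above. The amortisation period and the on-demand price below are invented for illustration, and real on-demand offerings fold software licensing into the hourly rate in various ways:

```python
# Rough break-even between owning a cluster and paying per core-hour.
cluster_cost = 100_000            # one-off hardware cost (quoted above)
licence_per_year = 75_000         # annual solver licence (quoted above)
amortisation_years = 3            # assumption
price_per_core_hour = 0.15        # hypothetical on-demand rate, in dollars

owned_per_year = cluster_cost / amortisation_years + licence_per_year
print(owned_per_year / price_per_core_hour)  # ~722,000 core-hours/year
```

Below that usage, paying by the hour wins; above it, owning wins. That logic is why a hybrid arrangement makes sense.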

Reagan says he will keep the company’s own cluster running, as it provides a low core/hour price; while it does not have enough power to absorb spikes in demand, it is capable of covering the base load.

Whether visualising a design, testing it, or creating it, simulations for the automotive industry are moving to ever-increasing levels of complexity. As Reagan and L&L Products have discovered, there are many ways of meeting these computing demands.


