
Taming turbulence

A quote often attributed to the German physicist Werner Heisenberg captures the complexity of turbulence nicely: ‘When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first.’ A very similar observation has also been credited to Horace Lamb, a British mathematician. The fundamental challenge of turbulence modelling is that turbulent flow appears random and chaotic. So how do you begin to model something so uncertain with any degree of accuracy? The answer is: with great difficulty, as there is no all-encompassing method. Rather, a range of models is required.

One way to characterise turbulence is with the Reynolds number which, within fluid mechanics, distinguishes between different flow regimes. Laminar flow, which is smooth and orderly, occurs at low Reynolds numbers. A large Reynolds number indicates turbulent flow, in which inertial forces produce chaotic swirls and eddies as the fluid or air moves around an object. The process by which laminar flow becomes turbulent, known as laminar-turbulent transition, is difficult to predict and has yet to be fully understood.
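As a rough illustration (the flow speed, chord length and air properties below are assumed values chosen for the example, not figures from the article), the Reynolds number is simply the ratio of inertial to viscous forces, Re = ρUL/μ:

```python
# Illustrative Reynolds number calculation for flow over a wing section.
# All input values are assumed, sea-level-ish numbers chosen for the example.
rho = 1.225      # air density [kg/m^3]
mu = 1.81e-5     # dynamic viscosity of air [Pa*s]
U = 70.0         # flow speed [m/s]
L = 2.0          # characteristic length, e.g. wing chord [m]

Re = rho * U * L / mu
print(f"Reynolds number: {Re:.2e}")   # roughly 9.5e6, well into the turbulent regime
```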

To work around the unpredictability of transition, said Doug Neill, vice president of product development at MSC Software, aerospace companies have spent the past few decades deliberately tripping boundary layers so that the flow is known to be in a turbulent, attached state. Reshaping various surfaces on the aircraft then allows engineers to direct the laminar and turbulent flows. By keeping the flow attached to the wing of a plane, for instance, engineers can ensure a greater level of predictability and stability.

Navier-Stokes equations and beyond

Developed in the first half of the 19th century, the Navier-Stokes equations describe the motion of viscous fluids; they are non-linear, which is what makes them so difficult to solve. If these equations could be solved exactly, in principle there would be no need for turbulence modelling. However, the numerical solution of the Navier-Stokes equations for turbulent flow is extremely challenging and, therefore, a number of approaches have been developed. Reynolds-averaged Navier-Stokes (RANS) simulations, for example, solve a time-averaged form of the flow equations.
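For reference, the incompressible form of the equations can be written as follows (a textbook formulation rather than anything specific to the vendors quoted here):

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0,$$

where $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the density and $\nu$ the kinematic viscosity. The convective term $(\mathbf{u}\cdot\nabla)\mathbf{u}$ is the source of the non-linearity, and RANS methods proceed by averaging these equations in time.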

While this form of time averaging allows engineers to work with a steady-state assumption (meaning that how they arrive at the final answer is not critical, because the solution does not change with time), there are some significant drawbacks to this approach. Fred Mendonca, director of aeroacoustic applications at CD-adapco, explained: ‘By using RANS, you’re making some fairly fundamental assumptions about how turbulence behaves, which can deviate greatly from reality. The most obvious deviation is that turbulence is not a steady phenomenon, and so cannot be treated as such.’ He added that, because time variations are not taken into account, modelling on this basis will always be an approximation.
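The assumptions Mendonca describes enter through the Reynolds decomposition, which splits each flow variable into a mean and a fluctuating part, $u_i = \overline{u_i} + u_i'$. Averaging the momentum equation then leaves an extra unknown term (again, this is the standard textbook derivation, not any one vendor's formulation):

$$\frac{\partial \overline{u_i}}{\partial t} + \overline{u_j}\,\frac{\partial \overline{u_i}}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \overline{p}}{\partial x_i} + \nu\,\frac{\partial^{2}\overline{u_i}}{\partial x_j\,\partial x_j} - \frac{\partial \overline{u_i' u_j'}}{\partial x_j}.$$

The final term, the Reynolds stress, cannot be computed from the averaged quantities themselves and must be supplied by a turbulence model; this closure problem is where the modelling assumptions, and hence the approximations, come in.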

Models based on a RANS approach will accurately predict attached boundary layers, but they are known to struggle with large separations behind airfoils. ‘The ideal solution would be to use large eddy simulations (LES), which is an approach that can potentially provide a significant increase in accuracy for the types of flows that RANS struggles with,’ said Brian Bell, lead technical services engineer at Ansys. ‘But for many years the problem with LES has been that the mesh requirements for realistic Reynolds numbers for aerospace applications simply mean that there’s no possibility of using it for problems with such a large number of grid points.’
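The scale of the problem Bell describes can be made concrete with a back-of-the-envelope estimate. Grid counts for wall-resolved LES are commonly estimated to grow roughly as $Re^{13/7}$ (following Chapman-style arguments); the prefactor in the sketch below is an arbitrary assumption used only to show the trend, not a figure from Ansys:

```python
# Rough, order-of-magnitude sketch of wall-resolved LES grid requirements.
# The Re^(13/7) scaling follows classic Chapman/Choi-Moin style estimates;
# the prefactor C is an assumed constant for illustration only.
def les_grid_points(Re, C=0.1):
    """Very rough estimate of grid points needed for wall-resolved LES."""
    return C * Re ** (13 / 7)

for Re in (1e6, 1e7, 1e8):
    print(f"Re = {Re:.0e}: ~{les_grid_points(Re):.1e} grid points")
```

With these assumed constants the estimate runs into the trillions of cells at flight Reynolds numbers, which illustrates why a pure LES of a full aircraft remains out of reach.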

LES does, however, provide higher fidelity than RANS, and so one approach is to use information from the RANS model as the inflow conditions for a small region. By restricting the number of grid points, engineers can potentially use LES in small, critical areas. Bell added that the generation of turbulence inflow is a very active area of research; although existing methods work well, there is still plenty of room for improvement. The embedded LES model in Ansys’ Fluent solution is able to isolate a small piece of the computational domain. Outside that region, a RANS model with a relatively coarse grid can be used. Upcoming releases of the software will offer the ability to combine transitional modelling capabilities with a scale-resolving approach.
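The inflow-generation problem Bell mentions can be sketched very simply: take the mean velocity and turbulence intensity from the surrounding RANS solution and superimpose fluctuations at the LES inlet. The code below is a deliberately crude illustration of the idea, and none of it reflects the actual method in Fluent; uncorrelated noise like this decays almost immediately, which is exactly why more sophisticated generators, and the research Bell refers to, are needed:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_inflow(u_mean, turb_intensity, n_points):
    """Return one instantaneous inlet velocity sample (illustrative only).

    Real generators (synthetic eddy methods, digital filters) impose realistic
    spatial and temporal correlations; plain uncorrelated noise does not.
    """
    u_rms = turb_intensity * u_mean                        # target fluctuation level from RANS
    fluctuations = rng.normal(0.0, u_rms, size=n_points)   # uncorrelated noise
    return u_mean + fluctuations

print(synthetic_inflow(u_mean=50.0, turb_intensity=0.05, n_points=8))
```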

‘The industry is beginning to recognise that transient modelling is the way forward as it makes no gross assumptions about the characteristics of turbulence,’ CD-adapco’s Mendonca added. ‘Instead, we try to solve these characteristics implicitly from equations rather than the models themselves. There is a lot of effort being made towards integrated solutions with high level physics and integrated processes.’ The faster the industry can get answers with complex turbulence models based on large eddy simulations, the better, he said. 

The hybrid approach

The intense computational demands of large eddy simulations have led to the development of hybrid models that combine both RANS and LES approaches. Academic institutions, research organisations, and commercial companies have been collaborating in recent years on the progression of these models. One example is the EU-funded project, Desider (Detached Eddy Simulation for Industrial Aerodynamics), which brought together 18 partners from the European Union and Russia, representing industry, research institutions and universities, to improve existing computational fluid dynamics (CFD) methods and demonstrate the capabilities of hybrid RANS-LES approaches.
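Detached eddy simulation, the approach named in the project title, typically switches between the two treatments on the basis of a length scale. In the generic formulation (not a description of any one partner's implementation),

$$\tilde{d} = \min\left(d_{\mathrm{RANS}},\ C_{\mathrm{DES}}\,\Delta\right),$$

where $d_{\mathrm{RANS}}$ is the length scale of the underlying RANS model (the wall distance in the original Spalart-Allmaras version), $\Delta$ is the local grid spacing and $C_{\mathrm{DES}}$ is a calibration constant. Near walls the RANS branch is active; away from them the model behaves as a subgrid-scale model and the larger eddies are resolved directly.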

The development group at Ansys actively participated in this project. Brian Bell commented on the merit of the research: ‘These hybrid models make it possible to solve problems at realistic Reynolds numbers with, potentially, orders of magnitude reduction in the computation expense. They make it possible to predict the boundary layer turbulence without the stringent grid resolution requirements of large eddy simulations.

‘These simulations can now be used for applications that are simply not possible to do purely with LES at the level of computing power we have today. Ultimately, engineers can obtain more precise information from simulation.’ Bell warned, however, that these are not mature models and therefore require a lot of understanding regarding their performance and how they should be applied.

A big driver behind projects such as Desider is the demand for higher fidelity and predictive accuracy, without the corresponding computational cost. MSC Software’s Doug Neill has observed that high-fidelity CFD is being used to produce time domain equations for the calculation of noise and fatigue. ‘But where we truly see CFD and turbulence and gust load alleviation coming to bear is in the flight mechanics, such as the control system design,’ he said. ‘The actual vehicle design criteria for gusts and loads are being developed with much more simplified aerodynamics, and that seems to work acceptably, so far. The problem is that the moment you get high-fidelity fluid dynamics you also need a pretty high-fidelity structural model or you start to measure spurious elastic effects.’ Neill continued by saying that, whereas simplified aerodynamics would ignore these, CFD won’t. ‘Then you get unintended interactions that are wrong. It’s not about increasing the fidelity; it’s about getting the right fidelity,’ he added. 

Finding the right solution

All CFD software must, to a certain extent, handle turbulent flow. Choosing the solution that best meets each user’s needs comes down to the breadth of capabilities and the level of technical support on offer. MSC Software began in the aerospace industry and the company’s solutions cover the spectrum from external loads and aero-acoustics to the mechanical behaviour of advanced composite materials. The simulation suite is designed to enable aerospace companies to apply advanced materials and to help them solve regulatory compliance issues. Doug Neill stated that MSC Software is continuing to push its OEM partners in the industry to increase their use of simulation in the certification of aircraft.

CD-adapco’s solution, STAR-CCM+, is able to solve the Navier-Stokes equations with any level of turbulence assumption. The code is integrated in the sense that this very complex physical modelling is built into the menu choices for setting up the physics. The complex geometry of an aircraft, such as the wings and landing gear, can be modelled and then automatically translated into a computational model: the full geometry is taken and then discretised. Once the physics have been solved on this virtual model, the data needs to be processed in order to understand the pressure loading on the system, or the effect of lift and drag on individual components. Each step of the process is integrated so that, if engineers wish to go back and make changes to the geometry, the software will automatically work through the remembered steps.
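As a simple illustration of the post-processing step described above (a generic sketch rather than the STAR-CCM+ workflow or its API), lift and drag can be recovered by integrating the surface pressure over the discretised geometry and normalising the result into coefficients; the reference quantities here are assumed inputs:

```python
import numpy as np

def force_coefficients(p, face_normals, face_areas, rho, U, S_ref):
    """Integrate surface pressure into drag and lift coefficients.

    p            : (N,) face pressures from the CFD solution
    face_normals : (N, 3) outward unit normals of the surface mesh faces
    face_areas   : (N,) face areas
    Assumes the freestream is along +x and lift acts along +z; the viscous
    (skin-friction) contribution is omitted for brevity.
    """
    forces = -(p * face_areas)[:, None] * face_normals   # pressure pushes inward on the body
    F = forces.sum(axis=0)                                # total pressure force vector
    q = 0.5 * rho * U**2                                  # dynamic pressure
    return F[0] / (q * S_ref), F[2] / (q * S_ref)         # (C_D, C_L)
```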

Design optimisation is critical – especially with industry pressures regarding fuel economy and regulatory compliance. Esteco’s platform, modeFrontier, automates the process of executing simulations and then runs optimisation algorithms on top of that. The solution aggregates multiple simulations, changing parameters for the models or for the design configurations, in order to reach the optimal values. Turbulence models need to be calibrated in many situations, and calibration itself is a complex task that requires optimisation techniques. Because each individual calculation is computationally expensive, efficient optimisation is essential to ensuring that models are accurate. ‘Thanks to optimisation the benchmark for modelling has been improved as there are constants that need to be corrected according to experimental variants,’ said Carlo Poloni, CEO of Esteco.
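The calibration Poloni describes can be framed as an optimisation problem: adjust a model constant until the simulation best matches experimental data. The sketch below replaces the expensive CFD run with an invented algebraic surrogate and an assumed measurement, purely to show the structure; a real calibration, in modeFrontier or elsewhere, would wrap the solver itself:

```python
from scipy.optimize import minimize_scalar

def predicted_skin_friction(C):
    """Stand-in for an expensive CFD run: prediction as a function of one
    turbulence-model constant C. Invented surrogate, illustration only."""
    return 0.0025 * (C / 0.09) ** 0.5

experimental_value = 0.0027   # assumed experimental measurement

def calibration_error(C):
    return (predicted_skin_friction(C) - experimental_value) ** 2

result = minimize_scalar(calibration_error, bounds=(0.01, 0.3), method="bounded")
print(f"Calibrated constant: {result.x:.3f}")
```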

Poloni added that uncertainty quantification tools are under continuous development, which means he expects that, in the future, solutions will not only give a number, but will include a confidence interval. This will give the designer the confidence that what they are computing is actually true. ‘There are techniques that work on uncertainty quantifications and there are many European research projects that are focused on this environment,’ said Poloni. ‘One project, UMRIDA, will be launched in September 2013 and it will move this approach from a purely academic level to a technical-readiness level. This will increase the reliability of the computations, further reducing the number of tests needed in the design process.’ He went on to say that finding the right compromise between modelling complexity and computing time, according to what is available in the simulation environment, is critical. 
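A minimal sketch of the idea behind such uncertainty quantification (not the UMRIDA methodology itself): propagate an assumed uncertainty in an input through repeated evaluations and report an interval rather than a single number. The ‘solver’ and the input distribution below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def predicted_drag(U):
    """Placeholder for an expensive simulation: drag as a function of inflow speed."""
    return 0.5 * 1.225 * U**2 * 0.02   # assumed density and reference-area/drag product

U_samples = rng.normal(loc=70.0, scale=2.0, size=5000)   # assumed uncertainty in inflow speed
drag_samples = predicted_drag(U_samples)

lo, hi = np.percentile(drag_samples, [2.5, 97.5])
print(f"Drag: {drag_samples.mean():.1f} N, 95% interval [{lo:.1f}, {hi:.1f}] N")
```

In practice each sample would be a full CFD run, which is why much of the research effort goes into reducing the number of samples needed.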

Expanding the options

One trend to emerge from the increase in technical capability within modern modelling and simulation solutions is that engineers are exploring more of the design envelope and considering a wider range of scenarios. Mark Walker, principal engineer at MathWorks, commented that the less effort it takes to create a representative simulation, and the faster it can run, the broader the range of scenarios that can be considered. ‘Engineers want to explore as many scenarios as they can, but there is a finite amount of development time in which to do this. Within that time they must consider the minimum amount as required by the legislation and then augment that analysis for their own confidence in the design,’ said Walker.

The extent of analysis that simulation enables is increasing. According to Walker, analysis models are being used through many more of the implementation stages, and engineers are running closed-loop simulations to assess how aircraft will behave in general turbulent conditions. Rather than focusing the analysis activity at the beginning of the process, engineers are recognising the value of using these techniques to validate later stages. Anyone running an analysis can use the block diagram language contained in MathWorks’ Simulink, for example, to build reference models and run evaluations as needed.
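The structure of such a closed-loop assessment can be sketched in a few lines: a plant model disturbed by turbulence, regulated by a controller and stepped through time. The example below uses plain Python rather than Simulink, with an invented first-order plant, assumed controller gains and crude random noise standing in for a proper gust model:

```python
import numpy as np

rng = np.random.default_rng(2)

dt, t_end = 0.01, 10.0     # time step and duration [s]
kp, ki = 2.0, 0.5          # assumed proportional and integral gains
tau = 1.0                  # assumed plant time constant [s]
x, integral, target = 0.0, 0.0, 1.0

for _ in range(int(t_end / dt)):
    gust = rng.normal(0.0, 0.2)            # crude turbulence disturbance
    error = target - x
    integral += error * dt
    u = kp * error + ki * integral         # PI control law
    x += dt * ((-x + u) / tau + gust)      # first-order plant plus disturbance

print(f"Final state: {x:.3f} (target {target})")
```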

Walker said there is a trade-off between the level of fidelity and the number of scenarios that can be run, given that high-fidelity models run very slowly. Because this type of assessment has historically been driven through desktop analysis and simulation, the fidelity and performance of simulations are tied to the capabilities of deskside supercomputers. As the processing power of these machines improves, so too will the possibilities afforded by the software tools.



What role does high-performance computing have within turbulence modelling? Beth Harlen reports on one XSEDE project

‘We offer extended collaborative support services under the National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE) project, a federated consortium of service providers with many common infrastructure elements that scientists can use to share computing resources, data and expertise,’ explained Vince Betro, a computational scientist at the University of Tennessee’s National Institute for Computational Sciences (NICS) and a member of XSEDE’s Extended Collaborative Support Service (ECSS). ‘Working with principal investigators, we demonstrate the impact high-performance computing can have on their domain science, and help them to get their code running faster and more efficiently.’

One project was headed by Antonino Ferrante of the William E. Boeing Department of Aeronautics and Astronautics at the University of Washington, Seattle, and focused on the study of droplet-laden flows. Ferrante and his research team accessed HPC resources and expert guidance at NICS and the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign, via the XSEDE project. Although the majority of the code work was done at the University of Illinois, most of the runs were carried out on Kraken, the large capability machine operated by the University of Tennessee. Housed in the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, Kraken is a Cray XT5 that is currently 30th on the Top500 list.

Ferrante’s simulations were particle-based, with each individual particle needing to be mapped and its interactions tracked. To get even close to a realistic simulation, more than one billion particles had to be modelled, and the only way to do that was with the capabilities offered by Kraken. Betro’s role was not only to offer support in understanding the computational fluid dynamics and evaluating the results, but also to deal with the problem of I/O at this massive scale. ‘The machine can handle that scale of calculation, the file system can handle the massive amount of files, but getting the files from the file server to the machine, that’s where we were getting huge bottlenecks,’ said Betro. ‘But we were able to work with Ferrante’s I/O pattern so that it was conducive to running on such a massive machine.’
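The kind of restructuring Betro describes often comes down to aggregating many small writes into fewer, larger ones. The sketch below is a generic illustration of that principle using HDF5 via h5py, not the actual I/O pattern used on Kraken, and the array sizes are deliberately small:

```python
import numpy as np
import h5py

n_particles, steps_per_flush = 100_000, 10   # assumed sizes for the sketch
rng = np.random.default_rng(3)

with h5py.File("particles.h5", "w") as f:
    # One growable, chunked dataset instead of one small file per output step.
    dset = f.create_dataset(
        "positions",
        shape=(0, n_particles, 3),
        maxshape=(None, n_particles, 3),
        chunks=(steps_per_flush, n_particles, 3),
        dtype="f4",
    )
    buffer = []
    for step in range(30):
        buffer.append(rng.random((n_particles, 3), dtype=np.float32))  # stand-in for particle positions
        if len(buffer) == steps_per_flush:
            dset.resize(dset.shape[0] + steps_per_flush, axis=0)
            dset[-steps_per_flush:] = np.stack(buffer)                 # one large write per flush
            buffer.clear()
```

Fewer, larger writes are generally far kinder to a parallel file system than huge numbers of small files, which is the bottleneck Betro describes.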

Betro believes the use of HPC resources has become a necessity for the type of modelling and simulation undertaken by Ferrante and his team, not just because the problems have become larger, but because standard computers simply don’t have the processing power or memory to handle them.

Focusing on the visualisation aspects of the project was David Bock, a visualisation programmer at NCSA. Bock specialised in graphically representing the gigabytes of data being generated by Ferrante and his team. Using custom software written by Bock, rays were shot through the data volume and values were mapped to colour.

Depending on the data values, different structures were visualised, enabling the team to observe particle structures moving along the boundary layer. When the data was first viewed, the range of values was so narrow that nothing could be seen, so further work was done to bring these elements out.
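The rendering approach can be sketched as follows: rays are marched through the data volume, each sampled value is passed through a transfer function that maps value to colour and opacity, and the samples are composited front to back. The code below is a simplified, axis-aligned stand-in for Bock's custom software, with an invented transfer function; narrowing or rescaling that function is essentially the ‘further work’ needed to make faint structures visible:

```python
import numpy as np

def transfer_function(v):
    """Map normalised scalar values to (r, g, b, alpha). Invented for illustration."""
    return np.stack([v, 0.2 * np.ones_like(v), 1.0 - v, 0.05 + 0.3 * v], axis=-1)

def render(volume):
    """Composite axis-aligned rays through a 3D scalar field, front to back."""
    vmin, vmax = volume.min(), volume.max()
    normalised = (volume - vmin) / (vmax - vmin + 1e-12)
    image = np.zeros(volume.shape[1:] + (3,))
    transmittance = np.ones(volume.shape[1:])
    for slab in normalised:                                # march along the first axis
        rgba = transfer_function(slab)
        rgb, alpha = rgba[..., :3], rgba[..., 3]
        image += (transmittance * alpha)[..., None] * rgb  # accumulate colour
        transmittance *= 1.0 - alpha                       # attenuate the remaining rays
    return image

volume = np.random.default_rng(4).random((32, 64, 64))     # placeholder data volume
print(render(volume).shape)                                # (64, 64, 3) image
```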
