
Active flow control

Ken Jansen, Professor of Aerospace Engineering Sciences at the University of Colorado at Boulder, discusses aerospace modelling and simulation

It’s interesting to note that for many years now aerospace companies have been trying to outdo each other, as in any industry, and with rising fuel costs, fuel efficiency has become a big differentiator. The sentiment is that they’ve nearly exhausted what can be done with regard to traditional aeronautics – everything has been refined and refined – and so they are now looking to more novel approaches.

The area of research we’ve been looking at is active flow control, and the question is whether we can, through very small additional air flows in just the right locations, dramatically influence the performance of surfaces like the wings, body, tail and rudder assembly, and thus alter the primary flow in a way that better controls the attitude of the plane. Of particular interest in our study is the rudder, which plays a role in controlling and altering the plane’s trajectory.

The Adaptive Detached Eddy Simulation of a Vertical Tail with Active Flow Control project is a joint effort between the University of Colorado, Rensselaer Polytechnic Institute and The Boeing Company that specifically focuses on the vertical tail and rudder assembly. This is currently a rather large portion of the aircraft, and anything of that size has an opportunity to produce quite a lot of drag – in an ideal scenario we’d be flying slender missiles with no wings or tail, as that would produce the least amount of drag.

One of the reasons why vertical tails are so substantial is that they need to be able to control the plane should one of the engines fail. In that situation, the rudder will be deflected to produce a counter moment to compensate for there being more engine thrust on one side of the airplane and provide pilots with the ability to land. Using aerodynamic flow control techniques, it may be possible to reduce the size of the stabiliser while maintaining similar control authority in the event of an engine failure.

Reducing the size of the stabiliser decreases the weight and the drag of the airplane, which results in considerable savings in fuel costs. The reason a small vertical tail is not currently feasible is that it cannot provide sufficient aerodynamic force. This is due to two factors. First, when deflecting the rudder past a certain angle, or yawing the tail past a certain angle, the flow separates, which means the rudder’s aerodynamic force production reaches a maximum. Second, the aerodynamic performance of the smaller tail may simply not be sufficient – as a result of its airfoil section, planform, rudder deflection limit, and so on – to provide the required aerodynamic forces. In either case, active flow control, using relatively small jets of air, can enhance the aerodynamic performance of the vertical tail and provide the forces required to control the airplane.

In our case, the active flow control jets that are altering the flow are very small and actually driven by piezoelectric disks, like small speakers resonating at 1,600 hertz. They push fluid out and suck it back in 1,600 times per second, and this produces tiny oscillatory jets which, when sized and tuned to the flow and positioned properly, can give the effect we need.
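As a rough illustration, the exit velocity of such an oscillatory jet is often modelled as a simple sinusoid at the drive frequency. The sketch below assumes that form; the 1,600 Hz frequency comes from the discussion above, while the peak velocity is a made-up placeholder, not a value from the project.

```python
import numpy as np

# Hypothetical sketch of an oscillatory (synthetic) jet velocity signal.
# The 1,600 Hz drive frequency comes from the interview; the peak exit
# velocity U_PEAK is an assumed placeholder, not a project value.
FREQ_HZ = 1600.0   # piezoelectric disk resonance frequency
U_PEAK = 50.0      # assumed peak jet exit velocity, m/s

def jet_velocity(t):
    """Jet exit velocity at time t (s): positive = blowing, negative = suction."""
    return U_PEAK * np.sin(2.0 * np.pi * FREQ_HZ * t)

# Sample one actuation cycle (period = 1/1600 s = 625 microseconds).
period = 1.0 / FREQ_HZ
for ti in np.linspace(0.0, period, 9):
    print(f"t = {ti * 1e6:7.1f} us   u_jet = {jet_velocity(ti):+7.2f} m/s")
```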

There are ongoing experiments, but they are expensive, and the first stages are typically at a smaller scale than flight Reynolds number. Flight Reynolds number experiments are prohibitively expensive, so the idea is to first validate our simulations against experiments which have been done at a modest and affordable scale, and then extend them up to the flight Reynolds number scale. Scaling the problem up, and proving that what works in a lab can be made to work on a full-scale airplane, is the crux of this project.
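To give a feel for the gap being bridged, the Reynolds number Re = ρUL/μ can be computed at both scales. The sketch below uses standard sea-level air properties, but the tunnel and flight speeds and reference lengths are illustrative assumptions, not the project’s values.

```python
# Rough illustration of the Reynolds-number gap between a wind-tunnel model
# and a full-scale tail. The speeds and lengths are assumed for illustration
# only; none of them come from the project itself.
def reynolds(rho, u, length, mu):
    """Reynolds number Re = rho * U * L / mu."""
    return rho * u * length / mu

# Standard sea-level air properties.
RHO = 1.225    # kg/m^3
MU = 1.81e-5   # Pa*s

re_lab = reynolds(RHO, u=30.0, length=0.5, mu=MU)      # assumed tunnel speed and chord
re_flight = reynolds(RHO, u=100.0, length=5.0, mu=MU)  # assumed flight speed and chord

print(f"lab-scale Re    ~ {re_lab:.2e}")
print(f"flight-scale Re ~ {re_flight:.2e}")
print(f"gap: roughly {re_flight / re_lab:.0f}x")
```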

What we have is a classic multi-scale problem: we have a massive tail (on the order of 25 feet in span) and we don’t just want to simulate a small section of it. The tail is a pretty complex geometry in that it is swept and tapered, and it also has a deflected rudder and these small jets. At full scale we estimate it will take at least 100 jets to cover the area ahead of the rudder and alter the flow. The jet slit width is about 1mm, so our problem is that we need to resolve very small flow features at the same time as we resolve very large effects.
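A quick bit of arithmetic makes that scale separation concrete: a 25-foot span against a 1 mm slit is a ratio of nearly four orders of magnitude.

```python
# The scale separation described above: a ~25 ft tail span resolved down
# to ~1 mm jet slits.
SPAN_M = 25 * 0.3048   # 25 ft tail span in metres
SLIT_M = 1e-3          # 1 mm jet slit width

print(f"span / slit width = {SPAN_M / SLIT_M:,.0f}")  # ~7,600:1
```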

Our approach is to use adaptive unstructured grids so that we can make the mesh extremely fine in the regions very near the jets, where the small unsteady structures that cause this profound effect occur. At the same time we need to resolve not only the rudder but some distance around it, so we can capture the flow surrounding it. We use a stabilised finite element method on these adaptive unstructured grids. One of the novel aspects is that we’ve worked very hard to make this scale well on large processor counts, and have scaled it up to the full Intrepid machine at Argonne, which has 160k processors, as well as the full JUGENE machine in Jülich, which has 288k processors. We currently also have a case running on the new Mira machine, where we have observed good scaling on 512k processors.
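One common way to drive that kind of refinement is a mesh-size field that is very fine at the jets and coarsens with distance from them. The sketch below illustrates the general idea; the sizes and gradation rate are assumptions for illustration, not the values used in this project.

```python
import numpy as np

# Minimal sketch of a mesh-size field an adaptive scheme could target:
# very fine near a jet slit, coarsening away from it, capped far away.
# All three parameters are illustrative assumptions, not project values.
H_MIN = 1e-4   # target element size at the jet (m), assumed
H_MAX = 0.1    # far-field element size (m), assumed
BETA = 0.2     # gradation rate: size may grow 0.2 m per metre of distance

def target_size(dist_to_jet):
    """Target element size h(d), growing linearly away from the jet."""
    return np.minimum(H_MIN + BETA * dist_to_jet, H_MAX)

for d in (0.0, 0.001, 0.01, 0.1, 1.0):
    print(f"d = {d:6.3f} m  ->  h = {target_size(d):.4f} m")
```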

One of the key aspects of our method is that, in addition to the adaptive treatment of the spatial scales, we also have to handle a broad range of temporal scales – obviously, if a jet cycles 1,600 times per second, we need perhaps 100 time steps per cycle to resolve that well. Our time steps are therefore quite small and yet, because we’re using very fine meshes, they are still rather large compared to what an explicit method would use. We use an implicit method, which allows us to take larger time steps than explicit CFD methods. It does raise the complexity of our algorithms, but we’ve been able to make them scale well, which means we can still turn around multi-billion element simulations in a reasonable time frame.
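The time-step arithmetic implied here is straightforward: a 1,600 Hz cycle resolved with roughly 100 steps per cycle gives a step of a few microseconds.

```python
# Worked arithmetic from the paragraph above: resolving a 1,600 Hz jet
# cycle with roughly 100 implicit time steps per cycle.
FREQ_HZ = 1600.0
STEPS_PER_CYCLE = 100

period = 1.0 / FREQ_HZ          # 625 microseconds per cycle
dt = period / STEPS_PER_CYCLE   # 6.25 microseconds per time step

print(f"cycle period: {period * 1e6:.1f} us")
print(f"time step:    {dt * 1e6:.2f} us")
```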

We’re starting to face some new challenges as we move to the Mira machine that are different from those faced in the first few generations of Blue Gene. The initial design of our software was very good in that, even though it’s an implicit method and we do have to do global synchronisations of all communications, we were very happy to find that once we had the element counts balanced well, the element formation phase scaled perfectly. We had a little bit of a hiccup at the equation solution stage, because balancing the elements in the mesh partition doesn’t necessarily balance the nodes. We addressed this issue by developing new strategies to adjust the partitioning of the mesh. By partition I simply mean splitting up the total mesh into equal-sized pieces for each processor: if we have one billion elements and want to run on 100k processors, we end up with around 10k elements per processor, on average.
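The partitioning arithmetic is simple enough to check directly:

```python
# The arithmetic from the paragraph above: splitting a billion-element
# mesh evenly across 100k processors.
N_ELEMENTS = 1_000_000_000
N_PROCS = 100_000

print(f"elements per processor: {N_ELEMENTS // N_PROCS:,}")  # 10,000
```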

There are some great partitioning tools available that will do a good job of balancing whatever you ask for. If all your work were proportional to the number of elements – as it is for an explicit method, where all you have to do is integrate your elements – balancing the elements would be sufficient for great scaling. We do get great scaling for the element formation because these partitioners do an excellent job, but we found that problems emerge if you really try to compress time: instead of running that billion-element simulation on 100k processors, we move it to 200k processors, and if everything scaled, we would get results twice as fast. If we kept doubling in this way, eventually we would get down to around 3,000 elements per processor and performance began to degrade a little bit. The cause was not that the elements weren’t balanced – that portion was still scaling – but that the number of nodes per processor is not guaranteed to be balanced. You may think that balancing elements would automatically balance nodes, but it doesn’t, because nodes on partition boundaries are duplicated, and how the mesh gets split can eventually lead to an imbalance of nodes.
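A toy example makes the effect concrete. The partition node counts below are invented, but they show how parts with identical element counts can still carry different numbers of nodes once boundary duplicates are included.

```python
# Illustrative sketch (not the project's tool) of why element balance does
# not imply node balance: nodes on partition boundaries are duplicated on
# every part that touches them, and partition shape controls how many
# boundary nodes each part carries. All counts below are made up.
parts = {
    # part id: (elements, nodes including duplicated boundary nodes)
    0: (10_000, 2_050),
    1: (10_000, 2_300),  # same element count, more boundary surface area
    2: (10_000, 2_680),  # a "sliver" part with lots of boundary nodes
}

elems = [e for e, _ in parts.values()]
nodes = [n for _, n in parts.values()]

def imbalance(counts):
    """Max-over-average load imbalance; 1.0 means perfectly balanced."""
    return max(counts) / (sum(counts) / len(counts))

print(f"element imbalance: {imbalance(elems):.3f}")  # 1.000 -- perfect
print(f"node imbalance:    {imbalance(nodes):.3f}")  # > 1 despite equal elements
```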

We developed new software to shift some elements around in a way that preserved the elemental balance and improved the nodal balance. Once we did this we were able to extend our scaling to much lighter loads per processor: for the same mesh we kept increasing the number of processors, and each time we doubled them we reduced the time by a factor of two. This time compression is important because it’s one thing to do a giant simulation, but it’s another to do it fast enough to enable you to try different combinations of parameters and really gain scientific understanding.
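In strong-scaling terms, this is the ideal curve being described: the mesh stays fixed while the processor count doubles and the wall-clock time halves. The baseline time below is an assumed placeholder, not a measured figure.

```python
# Ideal strong-scaling arithmetic from the paragraph above: same mesh,
# doubling processor counts, halving wall-clock time each step.
BASE_PROCS = 100_000
BASE_TIME_H = 24.0  # assumed wall-clock time at the baseline count

procs, t = BASE_PROCS, BASE_TIME_H
for _ in range(4):
    print(f"{procs:>7,} procs -> {t:5.1f} h (ideal)")
    procs, t = procs * 2, t / 2
```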

--

Interview by Beth Harlen   
