Fast cars and supercomputers

Scientists are using high-performance computing to design the fastest car in the world, and the computational fluid dynamics (CFD) modelling methods used could also help surgeons personalise certain procedures.

The Bloodhound SSC Project is aiming to set a new world land speed record of 1,000 miles per hour by 2011, and the challenge at the heart of the project is to create a car capable of reaching such speeds.

But how can anyone design such a supercar? The speeds the vehicle must reach go beyond the capabilities of current wind tunnels, as Professor Oubay Hassan, head of the Civil and Computational Engineering Centre at the University of Swansea, explains: ‘Wind tunnels require a model, air blown at that model and a conveyor belt to move the vehicle at the sorts of speeds it will be travelling at. Although there are conveyor belts available to test vehicles such as F1 cars, for this type of vehicle there is no such wind tunnel, so we had to use CFD instead.’

The type of CFD modelling used by the team at Swansea University relies on HPC to churn through the millions of equations needed to make such a simulation run accurately and quickly, as Hassan explains: ‘We break the surface of the car and the space into small elements, so instead of having a very difficult differential equation to solve, we have a simple set of simultaneous equations to tackle. There are, however, millions of these simultaneous equations and this is where high-performance computing comes in. Without parallel computing, these tools would be useless.’
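The idea Hassan describes, trading one hard differential equation for many simple simultaneous equations, can be sketched at toy scale. The Swansea team uses 3D finite elements on tetrahedral meshes; the sketch below instead discretises a 1D Poisson equation (-u'' = f with u(0) = u(1) = 0) with finite differences, purely to illustrate the principle. The function names and parameters are illustrative, not the team's code.

```python
def solve_poisson_1d(f_values, n, iterations=20000):
    """Jacobi iteration on the n interior unknowns of a uniform grid on [0, 1].

    Each grid point contributes one simultaneous equation:
        (-u[i-1] + 2*u[i] - u[i+1]) / h**2 = f[i]
    A production CFD code assembles millions of such equations; each
    Jacobi-style sweep is then trivially parallel across grid points,
    which is where high-performance computing comes in.
    """
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)           # includes the two boundary values, fixed at 0
    for _ in range(iterations):
        new_u = u[:]
        for i in range(1, n + 1):
            # The equation at point i, rearranged to give u[i] from its neighbours
            new_u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f_values[i])
        u = new_u
    return u

# With f = 2 everywhere the exact solution is u(x) = x * (1 - x)
n = 31
f = [2.0] * (n + 2)
u = solve_poisson_1d(f, n)
midpoint_value = u[(n + 1) // 2]  # u(0.5), which should approach 0.25
```

Jacobi is the simplest possible solver; real CFD codes use far faster iterative methods, but the structure (a huge sparse system of simultaneous equations, swept over in parallel) is the same.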

The simulation works by breaking the surface of the vehicle and the surrounding space into elements, sometimes down to less than one millimetre. The car itself is broken down into a series of triangles, which form the bases of a series of tetrahedrons that model the flow of the surrounding air.

There are around 65 million of these tetrahedrons, which come in varying sizes depending on the distance from the car, as Hassan explains: 'Some of the elements will be less than a millimetre in size and that should be accurate enough when modelling a 12m long car, and its surroundings.'

'But when we model the air flow between 200 and 300 metres away from the car, we do not need to have this size of element everywhere – these smaller elements are only required closer to the car but further away we can use bigger elements.'
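The grading Hassan describes, sub-millimetre elements at the car surface growing to much larger ones hundreds of metres away, can be expressed as a sizing function. The sketch below is hypothetical: the growth ratio and size limits are illustrative guesses, not the Swansea team's actual meshing rule.

```python
def element_size(distance_m, near=0.0008, far=10.0, growth=1.3):
    """Target element edge length (metres) at a given distance from the car.

    Starts at `near` (0.8 mm, i.e. sub-millimetre) on the surface and grows
    geometrically by `growth` per layer of elements, capped at `far` in the
    outer domain. All three parameters are illustrative assumptions.
    """
    size = near
    reach = 0.0
    while reach < distance_m and size < far:
        reach += size        # each layer of elements extends the covered distance
        size *= growth       # and the next layer's elements are a little larger
    return min(size, far)

# Sub-millimetre at the surface, metre-scale hundreds of metres away
sizes = [element_size(d) for d in (0.0, 0.5, 5.0, 250.0)]
```

Grading the mesh this way is what keeps the element count at tens of millions rather than billions: fine resolution is spent only where the flow gradients near the car demand it.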

The design process is iterative, where designers will come up with an initial model of the car, or a part of the car, that they believe will work, which is then simulated to see if it will behave in the way expected at such high speeds.

Hassan explains: 'The initial concept from the designers might be proven wrong in the simulations, for example, the original design used a twin air duct and staggered wheels – but these were refined during the simulations to make the car more stable. We then refine the design after these simulations to make it more and more accurate.'

But it's not just the world of record-breaking cars that is being helped by such simulations; surgeons could use similar methods in the future to work out the best way to perform a non-emergency procedure on a patient. Hassan says: 'We are currently seeing if it is possible in the future to take a scan of a patient and then use that information to model how the blood will perform if certain operations are carried out on specific patients. We would use the same models, with the tetrahedral elements, but we would not be modelling in the air, but in the vessels.'

Hassan adds: 'For example, with valve replacement, there are different types of valves available and the surgeon could look at the model of the blood's performance and work out which valve is best for the patient, or what orientation to use of that valve.'
