
Asking the big questions

Finding sufficient and sustainable sources of energy is one of the most pressing issues facing modern society. The need to reduce greenhouse gas emissions will only become more urgent, fossil fuel resources will become scarcer, and the energy requirements of modern society are unlikely to fall appreciably. Renewable sources such as wind and solar power are meeting some of society’s needs, but many believe that nuclear power will be essential for meeting future demand without relying on fossil fuels.

Designing a nuclear reactor is a difficult process, and convincing governments and the public of both its safety and its economic viability can be a challenge. Gathering experimental data is expensive, and each experiment typically tests a single physical effect and cannot easily be adapted to probe others: one test might examine the motion of cooling fluids in the core, another the irradiation of material around it. Bringing these separate results together to form the basis of a reactor design becomes a challenge in itself.

Andrew Siegel, lead researcher at Argonne National Laboratory, uses high-performance computing to simulate a particular kind of reactor – the sodium-cooled fast-neutron reactor. Whereas most reactors of the past 30 years have used light (i.e. normal) water as their cooling medium, flowing around the fuel rods at high pressure to remove heat and transfer it to turbines, Siegel is interested in a design that uses liquid sodium metal. Water flowing through a reactor core slows down the neutrons that propagate the nuclear reaction, whereas sodium nuclei allow more of the neutrons to retain their high energy – hence ‘fast neutrons’. This approach would use up the fuel more efficiently, meaning that fewer radioactive waste products would remain per unit of energy generated. Furthermore, the reactor could be fuelled with reprocessed plutonium recycled from existing light-water reactors. ‘This is a bigger vision,’ says Siegel, ‘and it takes us very close to the dream of closing the fuel cycle; this reactor could potentially produce more fuel than it consumes.’

‘The predictions of experiments to date were based largely on a theoretical understanding of things like heat transfer, neutron transport and turbulent mixing, augmented by experiments,’ explains Siegel. ‘Before a reactor is built, the designers do all of these things to try to give them an idea of what the limiting factors will be, because it is these limiting factors that determine the economics of the reactor, or the safety of the reactor (the two essentially go hand-in-hand). If you have to put in, as you currently do, lots and lots of engineering judgment on top of the physics-based, very idealised predictions, you begin to erode away your safety margins and to damage the economic case for the reactor.’

By applying multi-physics simulation techniques, Siegel’s team hopes to ‘put tools in the hands of reactor designers’, allowing them to think more creatively across a broader design space – to find new geometric configurations that permit higher outlet temperatures, and so extract more energy, without overheating the core. In essence, they want to remove the guesswork.

There are many aspects of reactor design and safety that need careful and extensive modelling. Siegel’s simulations look particularly closely at the helical wire wrappings around the fuel rods, which both space the rods and create turbulence within the cooling fluid. ‘The more mixing we have, the more energy we can produce while avoiding hotspots,’ explains Siegel. The simulations start with very detailed representations – or ‘meshes’ – of the reactor core. The Navier-Stokes equations for a Newtonian fluid are solved for the coolant, alongside neutron transport equations predicting the distribution of the particles and structural mechanics simulations that examine the integrity of the core. The results of all of these simulations are coupled together.
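In practice, coupling of this kind is often done by exchanging fields on a shared mesh and iterating the individual solvers until they agree. The sketch below illustrates that general pattern in Python; the solver functions, coefficients and tolerance are invented placeholders, not anything taken from the Argonne codes.

```python
import numpy as np

# Minimal sketch of fixed-point (Picard) coupling of placeholder solvers.
# Each "solver" is a stand-in for a real CFD, neutronics or structures code.

def solve_fluid(temperature_in, n_cells):
    """Placeholder CFD step: relaxes towards an assumed coolant temperature profile."""
    return 0.5 * temperature_in + 0.5 * np.linspace(600.0, 800.0, n_cells)

def solve_neutronics(temperature):
    """Placeholder neutron-transport step: power density falls slightly as temperature rises."""
    return 1.0e6 * (1.0 - 1.0e-4 * (temperature - 600.0))

def solve_structures(temperature):
    """Placeholder structural step: thermal expansion factor of core elements."""
    return 1.0 + 1.2e-5 * (temperature - 600.0)

def coupled_solve(n_cells=100, tol=1e-6, max_iter=50):
    """Sweep over the three placeholder physics solvers until the temperature field stops changing."""
    temperature = np.full(n_cells, 600.0)
    for iteration in range(max_iter):
        power = solve_neutronics(temperature)
        expansion = solve_structures(temperature)
        # Feed power and geometry changes back into the fluid solve.
        new_temperature = solve_fluid(temperature + 1.0e-7 * power * expansion, n_cells)
        change = np.max(np.abs(new_temperature - temperature))
        temperature = new_temperature
        if change < tol:
            break
    return temperature, iteration

if __name__ == "__main__":
    T, iters = coupled_solve()
    print(f"stopped after {iters + 1} sweeps; peak placeholder temperature {T.max():.1f} K")
```

The point of the pattern, rather than of the made-up numbers, is that each physics code can remain a separate solver while the coupled loop passes fields between them until they are mutually consistent.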

A design criterion for the new reactors is that they should include passive safety. If there is a surge in reactivity for some reason, the reactor should automatically adjust, as a result of the expansion of its structural elements. In order to understand this in depth, the principle must be tested in simulation, which would not have been possible even a few years ago. ‘To use the simulation to learn something about a device that is yet to be tested is very different to the way in which computers were previously used,’ states Siegel. ‘[In earlier cases] experimental data would be collected in order to build models, and those models had to mimic the data. The models encompassed the state of the art of the experimental device, and the state of our knowledge of how the device works. The case of our simulations is very different; we’re saying “let’s simulate the basic physics, and let the physics tell us something that we didn’t know about the device.”’

With support from the US Department of Energy, the team is currently running its code on the 163,840 cores of Argonne’s IBM Blue Gene/P, and on the 131,072 cores of a Cray XT5 at Oak Ridge National Laboratory – two of the world’s most advanced supercomputing facilities. Even with these resources, knowing what to leave out is still key to successful simulation. When simulating neutronics, the helical wire wraps are insignificant, but in a fluid-dynamics calculation the wrapping is of great importance. ‘When you know what to reduce, you can focus the computing power most effectively,’ says Siegel.

For example, solving the Navier-Stokes equations all the way down to the millimetre scale (the smallest significant length scale under these conditions) is not possible, and would be a ‘zettascale’ problem. Siegel notes, however, that the global results of the simulation are insensitive to how such small scales are modelled; the team obtains accurate results by way of large-eddy simulations, or by taking Reynolds averages.
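The Reynolds-averaged approach Siegel mentions rests on splitting each flow variable into a mean and a fluctuating part; averaging the Navier-Stokes equations then leaves the effect of the unresolved scales in a single stress term that has to be modelled (large-eddy simulation does the analogous thing with a spatial filter rather than an average). In standard textbook notation, not drawn from the article:

```latex
u_i(\mathbf{x},t) = \overline{u}_i(\mathbf{x},t) + u_i'(\mathbf{x},t), \qquad \overline{u_i'} = 0
% Averaging the incompressible momentum equation gives
\frac{\partial \overline{u}_i}{\partial t}
  + \overline{u}_j \frac{\partial \overline{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \overline{p}}{\partial x_i}
  + \nu \frac{\partial^2 \overline{u}_i}{\partial x_j \partial x_j}
  - \frac{\partial \overline{u_i' u_j'}}{\partial x_j}
```

The final term, the Reynolds stress, is where the turbulence model enters; it is this modelling step that spares the team from resolving every small eddy directly.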

The limitations of the simulations stem from the way the physical domain is represented – the ‘meshing’. Siegel explains that discretising the domain into a mesh can introduce artefacts, and treating the mesh as a series of finite volumes also introduces numerical error. ‘You have to be careful to distinguish between numerical error and the real, physical effects that you’re looking for,’ he says. As for increasing the fidelity of the simulation, Siegel states that ‘solving the continuum equations for the entire reactor core is not a sci-fi thing, but it is also not going to happen in the next five years; it’s more likely to be 20-30 years away.’ By the time that does become possible, Siegel and his team hope that this efficient new type of reactor will already be in service.
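The finite-volume treatment Siegel refers to tracks each conserved quantity as a cell average, updated from the fluxes through the cell’s faces; the truncation error of that update is the numerical error that has to be kept distinct from real physics. Schematically, in standard notation rather than anything specific to the Argonne meshes:

```latex
\frac{d}{dt}\,\overline{q}_i
  = -\frac{1}{V_i} \sum_{f \in \partial V_i} \mathbf{F}(q)\cdot \mathbf{n}_f\, A_f
  \;+\; \mathcal{O}(\Delta x^{p})
```

Here $\overline{q}_i$ is the average of the conserved quantity over cell volume $V_i$, the sum runs over the cell’s faces $f$ with areas $A_f$ and outward normals $\mathbf{n}_f$, and $p$ is the order of accuracy of the scheme – the term that shrinks as the mesh is refined.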

Catch a falling star

While power generation based on nuclear fission is already a well-established technology, one energy source for the future could be nuclear fusion. Rather than taking heavy, unstable elements such as uranium and plutonium and releasing energy by breaking their nuclei apart, nuclear fusion takes light elements (isotopes of hydrogen) and releases several times more energy per unit mass of fuel by fusing their nuclei together to form heavier elements such as helium. This is how stars release energy, and it has the potential to provide a nearly inexhaustible supply of energy on Earth.
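The reaction most fusion experiments aim to exploit is between deuterium and tritium, the two heavy isotopes of hydrogen; each fusion event yields about 17.6 MeV, most of it carried away by the neutron:

```latex
{}^{2}_{1}\mathrm{D} \;+\; {}^{3}_{1}\mathrm{T} \;\longrightarrow\;
{}^{4}_{2}\mathrm{He}\,(3.5\ \mathrm{MeV}) \;+\; n\,(14.1\ \mathrm{MeV})
```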

Magnetic confinement is one of the oldest approaches to fusion and, so far, the most successful device for harnessing it on Earth is based on the tokamak principle – ‘tokamak’ being a Russian acronym for ‘toroidal chamber with axial magnetic field’. A reaction vessel surrounded by powerful electromagnets heats and compresses a mixture of gases to the point that they ionise and become a plasma, which can be contained by the magnetic field. There are two tokamak reactors based at Culham, Oxfordshire: the Joint European Torus (JET) and the Mega-Ampere Spherical Tokamak (MAST). The latter is a UK Atomic Energy Authority-funded project, whereas the former is run under the European Fusion Development Agreement.

Currently, the tokamaks use only a few grams of deuterium and tritium fuel at a time, delivered in pellets, and the longest sustained reactions so far have lasted 90 seconds or less. JET is an experimental reactor, and the fusion power it has produced so far has peaked at the equivalent of 90 per cent of the energy put into it. Sustaining fusion for longer periods requires scaling the vessel up in size, which is what is being undertaken at ITER (formerly known as the International Thermonuclear Experimental Reactor) in southern France.

A simulation of the plasma within the tokamak reaction vessel. Tokamak is a Russian acronym for ‘toroidal chamber with axial magnetic field’. Image courtesy of EFDA-JET.

David Robson is an associate consultant at Tessella, and acts as Linux systems manager for both the JET and MAST experiments. As at Argonne, the underlying physics is relatively well understood, but combining several effects requires complicated multi-physics simulation. Some of the simulations Robson oversees are gyrokinetic codes, in which individual particles are tracked around the tokamak. Millions of particles may be tracked in a single simulation, and each of them generates electric and magnetic fields that affect all of the other particles. In other approaches, explains Robson, the plasma is modelled as a conducting fluid, and the equations of magnetohydrodynamics are solved across the tokamak.
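At the particle-tracking end of the modelling spectrum, the elementary operation is pushing a charged particle through electric and magnetic fields. The sketch below uses the standard Boris algorithm in Python purely to illustrate that operation; the gyrokinetic codes Robson describes go much further, averaging over the fast gyro-motion and evolving the fields self-consistently for millions of particles.

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Advance one charged particle by dt using the Boris scheme (illustrative only).

    x, v: position and velocity vectors (3,); E, B: field vectors (3,) at the particle.
    """
    qm = q * dt / (2.0 * m)
    v_minus = v + qm * E                      # half acceleration by E
    t = qm * B                                # rotation vector from B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)  # rotation about B
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + qm * E                   # second half acceleration by E
    return x + v_new * dt, v_new

# Toy usage: a proton gyrating in a uniform 1 tesla field, tracing a helix.
q, m, dt = 1.602e-19, 1.673e-27, 1.0e-9
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
x, v = np.zeros(3), np.array([1.0e5, 0.0, 1.0e4])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q, m, dt)
print("final position (m):", x)
```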

The two models are generally used independently, although there are some codes requiring the use of both, such as those examining effects at the edges of the vessel, where turbulence may have a significant effect. The demanding simulations are carried out on the department’s five clusters, which offer a total of about 800 cores. Robson states that some researchers do manage to get code run on more powerful systems in supercomputing centres, but generally they have to queue for time and pay for it. ‘Hundreds of people log onto our own cluster at any one time, and they don’t have to fight for resources,’ he says.

When it comes to expanding the capacity of its clusters, the group has explored both Cell processors and GPU-based computing. ‘We did have a look at Cell processors, but we believe that they are a little difficult to make progress with, and the fact that many of our counterparts are drawing the same conclusion supported our decision. The tools for development on Cell processors are nowhere near as good as the tools for Cuda development,’ says Robson. ‘Cuda is relatively straightforward, and doesn’t take much time to make an experimental program for GPUs.’

The group has begun looking at ways to integrate Nvidia Tesla systems in order to reduce the time taken for particular tasks; currently, some simulations can run for several months, and Robson hopes to cut this down to a few days. The group has recently taken delivery of a machine with three Tesla C1060 cards, and initial experiments using standard graphics cards have had promising results. ‘It’s still very experimental, and we don’t know if we’re going to get a lot out of it,’ says Robson of the GPU-based approach. ‘There are, however, a lot of people in the fusion community trying this out at the moment. Most of our simulations are written in Fortran, but we have a fair number of people in the group writing code in Java, Perl, Matlab and IDL. The GPUs are C-based, which is a bit of a problem, but we can address this by converting code to C, or by analysing the Fortran code to look for hotspots that can easily be run on a GPU-based system,’ he says. Robson is also aware of compilers from The Portland Group that are capable of running Fortran code on a GPU.
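The workflow Robson outlines – profile the code, identify a compute-heavy kernel, and move just that kernel onto the GPU – can be sketched as follows. The example uses Python with the CuPy library as a stand-in for the CUDA C the group was actually targeting, and the ‘hotspot’ itself is invented purely for illustration.

```python
import numpy as np

try:
    import cupy as cp      # GPU path, if a CUDA device and CuPy are available
    xp = cp
except ImportError:
    xp = np                # fall back to the CPU so the sketch still runs

def field_energy(phi, dx):
    """Hypothetical hotspot: gradient-squared 'energy' of a potential on a 1-D grid."""
    grad = (phi[1:] - phi[:-1]) / dx
    return float(xp.sum(grad * grad) * dx)

# Build a large test array on whichever device is in use.
phi = xp.sin(xp.linspace(0.0, 2.0 * np.pi, 10_000_000))
print("energy:", field_energy(phi, 1.0e-3))
```

The design point is the same one Robson makes: only the array arithmetic inside the hotspot moves to the GPU, while the surrounding program – in the group’s case, large Fortran codes – stays where it is.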

Higgs hunting

While the applications in power generation rely on established physical principles, the scientists at CERN (which originally stood for Conseil Européen pour la Recherche Nucléaire) in Geneva are looking to discover new ones, and they require significant computational resources to do so. When the Large Hadron Collider (LHC) comes back online this winter after repairs, physicists will begin a series of high-energy particle collisions with the ultimate aim of testing our understanding of the fabric of space. One of the better-known things they will look for is the elusive Higgs boson, thought to be the particle that gives matter its mass.

CERN runs a series of clusters around the globe, catering for around 6,500 scientists. These clusters are combined to form a grid of between 40,000 and 60,000 CPUs. Platform Computing provides the organisation with the Platform LSF policy-based scheduling software, which schedules, prioritises and delivers between 70,000 and 100,000 jobs for CERN staff each day. Scientists are able to get their results when they need them, and data centres around the world help provide a 24-hour service, since each is busy at different times of day. When the LHC comes back online, much of the computing power of this grid will be brought to bear on the analysis of the resulting collision data.
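At its simplest, the job of a policy-based scheduler like the one described here is to keep a prioritised queue of work and dispatch each job to whichever resource can best absorb it. The toy below only makes that idea concrete in Python; the job names, sites and placement rule are invented and bear no relation to how Platform LSF is actually implemented.

```python
import heapq

# Toy policy-based scheduling: highest-priority job goes to the least-loaded site.
jobs = [
    # (priority: lower number = more urgent, job name, core-hours requested)
    (1, "higgs-candidate-reco", 512),
    (3, "detector-calibration", 64),
    (2, "monte-carlo-batch-17", 256),
]
heapq.heapify(jobs)

# Current load per site in core-hours (site names purely illustrative).
sites = {"site-A": 0, "site-B": 0, "site-C": 0}

while jobs:
    priority, name, cost = heapq.heappop(jobs)   # most urgent job first
    target = min(sites, key=sites.get)           # simplest possible placement policy
    sites[target] += cost
    print(f"dispatch {name} (priority {priority}) to {target}")

print("final load:", sites)
```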

CERN has used LSF since 1997, when it switched away from mainframe computing. The organisation initially tried to implement a home-grown scheduling solution, but found it more cost-effective to use Platform’s software. Christoph Reichert, VP of HPC sales for EMEA at Platform, and the company’s liaison with CERN, states that ‘many people were, at that time, switching from mainframe computing to client-server computing. The size of the jobs still required a huge amount of time and compute resources, and there are only two ways to gain resources: you can either buy bigger hardware every time you need more power, or you can add clusters one by one. Platform began developing the latter of these solutions in 1993.’

Whether or not the sodium-cooled fast-neutron reactor becomes a commercial reality, whether thermonuclear fusion beats it to the prize of offering the world clean and abundant energy, and whether or not the physicists at CERN succeed in finding the much-hyped Higgs boson, it is certain that none of these impressive projects could have come so far without the HPC resources they have come to depend upon.


