Researchers employ DOE supercomputers to better understand fusion reactor design

A team of researchers at Princeton Plasma Physics Laboratory (PPPL) is using Department of Energy supercomputers to advance our understanding of fusion energy.

Aided by the Summit supercomputer at the US DOE’s Oak Ridge National Laboratory (ORNL) and Theta at DOE’s Argonne National Laboratory (ANL), the researchers used a supervised machine learning program called Eureqa, together with simulations from their XGC code for modelling tokamak plasmas, to derive a new formula for extrapolating from existing tokamak data to the future ITER device.
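
Eureqa performs symbolic regression: it searches over candidate formulas for one that best fits the data. The sketch below illustrates the simpler idea underlying such scaling-law searches, fitting a power law to tokamak data and extrapolating it. It is not the team’s actual workflow, and the variable names, values, and power-law form are purely illustrative.

```python
# Minimal sketch of a scaling-law fit (illustrative only; Eureqa itself
# searches over many candidate formula structures, not just power laws).
import numpy as np

# Hypothetical data from existing tokamaks: poloidal magnetic field
# B_pol (T) and measured divertor heat-load width lambda_q (mm).
b_pol = np.array([0.4, 0.6, 0.8, 1.0, 1.2])
lambda_q = np.array([3.1, 2.0, 1.5, 1.2, 1.0])

# Fit lambda_q = C * B_pol**alpha by least squares in log space.
alpha, log_c = np.polyfit(np.log(b_pol), np.log(lambda_q), 1)
print(f'lambda_q ~ {np.exp(log_c):.2f} * B_pol^{alpha:.2f}')

# Extrapolating such a trend to ITER-like parameters is exactly the
# step the new hidden-parameter formula revises (values illustrative).
b_pol_iter = 1.8
print(f'trend extrapolation: {np.exp(log_c) * b_pol_iter**alpha:.2f} mm')
```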

The team, led by Choong-Seock Chang, head of the multi-institutional, multidisciplinary US SciDAC Center for Edge Physics Simulation (EPSI), then completed new simulations confirming their previous study, which showed that at full power, ITER’s divertor heat-load width would be more than six times wider than the trend seen in current tokamaks predicts. The results were published in Physics of Plasmas.

‘You don’t want to start and stop ITER or a fusion reactor too often to replace this divertor material, so it has to be able to withstand the heat load,’ said Chang. ‘Ideally, we want the hot exhaust particles to hit the surface in a much wider area so that it’s not damaged.’

To ensure the success of future fusion devices—such as ITER, which is being built in southern France—scientists can take data from experiments performed on smaller fusion devices and combine them with massive computer simulations to understand the requirements of new machines. ITER will be the world’s largest tokamak, or device that uses magnetic fields to confine plasma particles inside a doughnut-shaped vessel, and will produce 500 megawatts (MW) of fusion power from only 50 MW of input heating power.
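
Those two figures correspond to a fusion gain, conventionally written Q, of 10: ten units of fusion power out for every unit of heating power in.

```python
# Arithmetic on the figures quoted above: ITER's target fusion gain Q
# is the ratio of fusion power produced to input heating power.
fusion_power_mw = 500
heating_power_mw = 50
print(f'Q = {fusion_power_mw / heating_power_mw:.0f}')  # Q = 10
```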

One of the most important requirements for fusion reactors is the tokamak’s divertor, a material structure engineered to remove exhaust heat from the reactor’s vacuum vessel. The divertor’s heat-load width is the width of the strip along the reactor’s inner walls that must withstand repeated contact with hot exhaust particles.
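
As a concrete illustration, such a width is often characterised as the decay length of an approximately exponential heat-flux profile across the divertor target. The sketch below recovers that decay length from a synthetic profile; it illustrates the concept only and is not the study’s method.

```python
# Recover a heat-load width as the e-folding length of an exponential
# heat-flux profile, q(s) = q0 * exp(-s / lambda_q). Synthetic data.
import numpy as np

s = np.linspace(0.1, 10, 50)     # distance along divertor target (mm)
q = 5.0 * np.exp(-s / 1.5)       # synthetic heat flux (MW/m^2)

# Log-linear fit: log q = log q0 - s / lambda_q.
slope, _ = np.polyfit(s, np.log(q), 1)
print(f'lambda_q = {-1.0 / slope:.2f} mm')  # recovers ~1.5 mm
```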

Using Eureqa, the team found hidden parameters that yielded a new formula, one that not only fits the drastic increase predicted for ITER’s heat-load width at full power but also reproduces previous experimental and simulation results for existing tokamaks. Among the devices newly included in the study were Alcator C-Mod, a tokamak at the Massachusetts Institute of Technology (MIT) that holds the record for plasma pressure in a magnetically confined fusion device, and the world’s largest existing tokamak, JET (the Joint European Torus) in the United Kingdom.

‘If this formula is validated experimentally, this will be huge for the fusion community and for ensuring that ITER’s divertor can accommodate the heat exhaust from the plasma without too much complication,’ Chang said.

Each of the team’s ITER simulations consisted of 2 trillion particles and more than 1,000 time steps, requiring most of the Summit machine and one full day or longer to complete. The data generated by one simulation, Chang said, could total a whopping 200 petabytes, eating up nearly all of Summit’s file system storage.
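
Those figures are consistent with a simple back-of-the-envelope estimate, assuming on the order of 100 bytes of stored field data per particle per step; that per-particle figure is an assumption, as the article does not describe XGC’s output layout.

```python
# Back-of-the-envelope check on the quoted data volume. The bytes-per-
# particle figure is assumed (~a dozen double-precision fields), since
# XGC's actual output layout is not described here.
n_particles = 2e12          # 2 trillion particles per simulation
n_steps = 1000              # time steps per run
bytes_per_particle = 100    # assumed stored bytes per particle per step
total_pb = n_particles * n_steps * bytes_per_particle / 1e15
print(f'~{total_pb:.0f} PB if every step were saved')  # ~200 PB
```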

‘Summit’s file system only holds 250 petabytes’ worth of data for all the users,’ Chang noted. ‘There is no way to get all this data out to the file system, and we usually have to write out some parts of the physics data every 10 or more time steps.’

This constraint has proven challenging for the team, who have often realised that new science lay in data that was not saved during the first simulation.

‘I would often tell Dr Seung-Hoe Ku, “I wish to see this data because it looks like we could find something interesting there,” only to discover that he could not save it,’ Chang said. ‘We need reliable, large-compression-ratio data reduction technologies, so that’s something we are working on and are hopeful to be able to take advantage of in the future.’
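
In the meantime, the workaround Chang describes, writing only every tenth or later time step, can be sketched as below together with a simple precision reduction; the function and file names here are hypothetical and do not reflect XGC’s actual I/O pipeline.

```python
# Hypothetical sketch of strided, reduced-precision output: write only
# every Nth step, and downcast to float32 to halve the bytes on disk.
import numpy as np

SAVE_EVERY = 10  # write only every 10th step, as described above

def save_snapshot(step: int, fields: np.ndarray) -> None:
    """Write a snapshot for this step, or skip it entirely."""
    if step % SAVE_EVERY != 0:
        return  # this step's data is never written to disk
    np.save(f'snapshot_{step:06d}.npy', fields.astype(np.float32))

# Illustrative use inside a (stand-in) time-stepping loop.
for step in range(100):
    fields = np.random.rand(1_000_000)  # stand-in for physics fields
    save_snapshot(step, fields)
```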

Chang added that staff members at both the Oak Ridge Leadership Computing Facility (OLCF) and the Argonne Leadership Computing Facility (ALCF) were critical to the team’s ability to run codes on the centres’ massive high-performance computing systems.

‘Help rendered by the OLCF and ALCF computer center staff—especially from the liaisons—has been essential in enabling these extreme-scale simulations,’ Chang said.

Future research will include more complex physics, such as electromagnetic turbulence in a more refined grid with a greater number of particles, to verify the new formula’s fidelity further and improve its accuracy. The team also plans to collaborate with experimentalists to design experiments to further validate the electromagnetic turbulence results that will be obtained on Summit or Frontier.

This research was supported by the DOE Office of Science Scientific Discovery through Advanced Computing (SciDAC) program.

The full news release is available on the OLCF website.
