
Weathering change: playing the long game

More than 20 years ago in the UK ‘there was a very conscious choice to make a large long-term investment in climate science. This accelerated the development of climate models,’ according to Pier Luigi Vidale[1], professor at the University of Reading, UK, and National Centre for Atmospheric Science (NCAS) senior scientist. Vidale says this investment, along with the attention the UK government has devoted to environmental problems and the efforts it has made to engage the scientific community, is largely why ‘the UK now leads the way in climate modelling’. One such investment was made in 1990 with the opening of the Met Office Hadley Centre, a world-leading research centre for climate science. The dynamics group at the centre, now led by Nigel Wood, is responsible for the development of the ‘dynamical core’ of UK weather and climate models – that is, the scheme that solves the equations of atmospheric motion. Recently, ‘EndGame’, an improved version of the dynamical core of the UK’s ‘Unified Model’, has been completed and adopted operationally. This is set to have a significant impact on the simulation of atmospheric flow.

Development of EndGame began nine years ago and, says Vidale, it was nearly terminated as a project because the community had overly optimistic expectations about when the next generation of dynamical cores would be available. ‘EndGame was viewed very much as a less important interim solution, but, through sheer force of will, the group continued down that path,’ said Vidale. On paper, the scheme the group proposed was more expensive than the next generation was expected to be – but, because of the capabilities of the latest supercomputers and the substantially larger size of the problems currently being addressed, the gain made in scaling and turnaround has proven to be far superior – not to mention that the next-generation models are still roughly a decade away from completion.

‘To be honest,’ Vidale continued, ‘no one was expecting the scalability of the model based on EndGame to be this good, and it turned out that the new dynamical core is ideal for taking advantage of the latest generation of supercomputers.’ The stability of the scheme enables the application to run on a supercomputer for extended periods without the need to restart the model – a common problem in the past, as the chaotic nature of the equations being solved can lead to ‘bad numbers’ (NaNs) that essentially cause the application to fail.
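
As a rough illustration of the failure mode Vidale describes, the minimal Python sketch below shows how a long run can guard against ‘bad numbers’ and fall back to a checkpoint rather than losing the whole simulation. The run_with_nan_guard helper and the step function are hypothetical stand-ins, not part of the Unified Model.

```python
import numpy as np

def run_with_nan_guard(state, n_steps, step, checkpoint_every=100):
    """Advance a toy model state while guarding against 'bad numbers'.

    `step` stands in for one time step of a dynamical core; real models
    are far more complex, but the guard shows why an unstable scheme
    forces frequent restarts on long climate runs.
    """
    checkpoint = state.copy()
    for i in range(n_steps):
        state = step(state)
        if not np.all(np.isfinite(state)):
            # NaNs/Infs appeared: hand back the last checkpoint so the
            # driver can restart from there instead of losing the run.
            return checkpoint, i
        if (i + 1) % checkpoint_every == 0:
            checkpoint = state.copy()
    return state, n_steps
```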

Because the new model’s dynamical core is less damping, it will enable Vidale’s High-Resolution Global Climate Modelling group to simulate more energetic storms with their global climate models, at a standard mesh size of 25km. In 2012, the group was able to simulate category-three hurricanes, and a few category-four storms, with a good degree of accuracy (hurricanes are classified by wind speed on a scale from one to five, with five being the most extreme). The new scheme dissipates less energy, enabling the group to retain more vertical motion and stronger extreme winds, and to simulate category-four hurricanes and a few category-five storms.
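
For reference, the hurricane scale referred to here can be encoded in a few lines; the wind thresholds below (in knots, following the Saffir-Simpson scale as used by the US National Hurricane Center) are not quoted in the article and are included only for illustration.

```python
def saffir_simpson_category(max_sustained_wind_kt):
    """Return the hurricane category (1-5) for a maximum sustained wind
    speed in knots, or 0 for storms below hurricane strength."""
    thresholds = [(137, 5), (113, 4), (96, 3), (83, 2), (64, 1)]
    for lower_bound, category in thresholds:
        if max_sustained_wind_kt >= lower_bound:
            return category
    return 0  # tropical storm or weaker

# A simulated storm peaking at 120 knots would count as category four.
assert saffir_simpson_category(120) == 4
```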

In terms of robustness, one advantage that the UK has is that the same underlying model, the ‘Unified Model’, is used for weather prediction, seasonal prediction and climate. Vidale explains that, because the code is shared, the climate model is tested several times a day, every day, against real data. ‘Weather forecasts are assessed against real observations every six hours, which enables errors in the code to be picked up quickly. Other centres around the world take a different approach where the climate and weather models differ but are compared in terms of climatology and statistics. These are not verified on a daily basis, however.’
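
A minimal sketch of the kind of routine verification described here might look as follows; the verify_forecast helper and the station values are invented for illustration, and operational verification uses far richer statistics than a bias and a root-mean-square error.

```python
import numpy as np

def verify_forecast(forecast, observations):
    """Compare a forecast field with co-located observations and return
    two simple skill measures for this verification cycle."""
    error = forecast - observations
    return {
        "bias": float(np.mean(error)),               # systematic offset
        "rmse": float(np.sqrt(np.mean(error ** 2))),  # typical error size
    }

# Example: six-hourly check of 2 m temperature forecasts (kelvin) at
# three hypothetical stations.
forecast = np.array([285.2, 290.1, 278.4])
observed = np.array([284.8, 291.0, 279.0])
print(verify_forecast(forecast, observed))
```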

Aerosol particles

Elsewhere in the UK, at the Institute for Climate and Atmospheric Science at the University of Leeds, Kirsty Pringle is working in a research group that has developed a global model of aerosol particles. Called GLOMAP, it aims to treat the processes that control and shape aerosol distribution. She explained that, by using GLOMAP, they can understand which processes are important and need to be included in climate models. GLOMAP has also been used to develop a complex aerosol scheme included in the UK Chemistry and Aerosol (UKCA) model, a community climate model that is run within the Met Office’s own system.

Pringle said that, unlike long-lived greenhouse gases that are distributed quite uniformly around the globe, the distribution of aerosol particles is heterogeneous, with the highest concentrations found close to natural source regions, such as deserts or oceans that experience high wind speeds, or close to emission sources such as power plants or road transport. Aerosol particles also undergo different processes in the atmosphere, which can change their properties and the way they interact with solar radiation. ‘Including all these different processes in climate models is extremely challenging, both because it is computationally expensive, and because many of the processes are still not understood well enough to be parameterised in models,’ Pringle remarked.

Pringle says another area of her work has been ‘model uncertainty’, with a focus on atmospheric aerosols: ‘We know model simulations are an approximation of the real system so, even if a model is able to simulate realistic results, there will be an uncertainty associated with it. In some situations the uncertainty may not be important, but in others it may be significant. We can only know for sure by identifying and quantifying it.’

According to Pringle, the climate modelling community has started to address this issue. One approach is to invite every modelling group in the world to perform a fixed set of experiments with their own model. These model inter-comparison projects (MIPs) allow researchers to examine the extent to which different models produce similar results; divergences indicate that there is considerable uncertainty and allow researchers to target future research.
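
The essence of such an inter-comparison can be shown with a toy calculation: collect the same diagnostic from each model and look at the spread. The model names and numbers below are invented purely for illustration.

```python
import numpy as np

# Hypothetical global-mean warming (kelvin) for one fixed scenario, as
# reported by four imaginary modelling groups in an inter-comparison.
results_by_model = {"model_A": 2.1, "model_B": 2.8,
                    "model_C": 3.4, "model_D": 2.5}

values = np.array(list(results_by_model.values()))
mean = values.mean()
spread = values.std(ddof=1)  # a large spread flags structural uncertainty

print(f"multi-model mean: {mean:.2f} K, spread: {spread:.2f} K")
```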

Uncertainty analysis can, added Pringle, also be done with a single model; a well-known example is the ClimatePrediction.net project run by the University of Oxford. The project invited members of the public to run a climate model on home PCs. This allowed the researchers to gather results from thousands of simulations, each of which used a slightly different set-up of the same climate model. Researchers can examine how sensitive the results are to the setting of individual uncertain parameters in the model.

‘This increased focus on understanding uncertainty comes simply from the fact that the climate is a very complex system, so simplifications must be made when designing climate models that are computationally efficient,’ Pringle commented. ‘Models must be efficient if we are to perform hundreds of years of simulations. Quantifying model uncertainty is also important if we are to communicate our findings to policy-makers and the public.’

Pringle’s group ran multiple simulations with GLOMAP, each with a slightly different setting of the uncertain parameters. ‘A statistically robust estimate of model uncertainty requires many thousands of simulations and, unless you use a distributed computing approach (like climateprediction.net does), most groups simply don’t have the computer resources to complete all the simulations,’ said Pringle. To get around this, the group ran a few hundred simulations (rather than thousands) and used a statistical emulator to interpret the results.

The emulator is a statistical package that ‘learns’ from the output of the computer model and can be used to interpolate from the hundreds of runs performed to the thousands of runs needed. ‘The use of statistical emulation in climate modelling is quite new, but is a really useful tool for interpolating model results,’ Pringle explained. ‘In addition to the model simulations used to “train” the emulator we do extra model simulations and use these to check that the emulator is working well.’ 
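
A minimal sketch of this train-validate-predict workflow, assuming a Gaussian-process emulator (here via scikit-learn) and a cheap toy function standing in for GLOMAP, might look like this; the parameter count, sample sizes and kernel are illustrative rather than those of the published work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Stand-in for a few hundred GLOMAP runs: each row is one setting of two
# uncertain parameters (scaled to [0, 1]); y is the output of interest.
def toy_model(params):
    return np.sin(3 * params[:, 0]) + 0.5 * params[:, 1] ** 2

X_train = rng.uniform(size=(200, 2))
y_train = toy_model(X_train)

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                    normalize_y=True)
emulator.fit(X_train, y_train)

# Extra held-out runs check that the emulator reproduces the model...
X_check = rng.uniform(size=(20, 2))
pred, std = emulator.predict(X_check, return_std=True)
print("max validation error:", np.abs(pred - toy_model(X_check)).max())

# ...before it is used to fill in the thousands of parameter settings
# that were never run with the full model.
X_dense = rng.uniform(size=(10_000, 2))
y_dense = emulator.predict(X_dense)
```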

Using this approach, the group identified which of the uncertain parameters within the model had the greatest effect on the aerosol distribution so that, in the future, they can work on trying to constrain the values of these parameters. Pringle noted that a number of technical difficulties were encountered: ‘Although we routinely perform complex model simulations, we had never previously performed so many simulations at once. The emulator cuts down on the number of simulations required, but we still needed more than 200 simulations.’ GLOMAP runs on 16 CPUs and takes about a day to run a full model year.  
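
One simple way to use an emulator to rank parameters is a one-at-a-time sweep, sketched below with a stand-in predictor so that it runs on its own; in practice the predict argument would be the trained emulator, and published aerosol-uncertainty studies generally rely on more rigorous variance-based measures.

```python
import numpy as np

def one_at_a_time_range(predict, param_index, n_dims, n_points=50):
    """Vary one normalised parameter across its range, hold the others
    at mid-range, and report how far the predicted output moves."""
    sweep = np.full((n_points, n_dims), 0.5)
    sweep[:, param_index] = np.linspace(0.0, 1.0, n_points)
    predictions = predict(sweep)
    return float(predictions.max() - predictions.min())

# Stand-in predictor so the sketch is self-contained; an emulator's
# predict method could be passed in instead.
toy_predict = lambda X: np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2
for i in range(2):
    span = one_at_a_time_range(toy_predict, i, 2)
    print(f"parameter {i}: output range {span:.3f}")
```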

The role of HPC

According to Per Nyberg, director of business development at Cray, high-performance computing (HPC) has become a fundamental part of numerical weather prediction and ‘in the age of commodity microprocessors, these models need to be able to use an increasing number of processors in parallel.’ Nyberg believes that the critical element here is scalability because, unless the system is able to scale to use tens of thousands of cores in parallel, complex models cannot be run within their operational windows. ‘All of the major modelling groups around the world have efforts underway to address scalability from an algorithmic perspective as they understand that the world is moving towards millions of cores, and that the algorithms need to keep pace,’ he said. So far the results are very encouraging, and there have been demonstrations of models scaling to hundreds of thousands of cores.
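
Amdahl’s law gives a back-of-the-envelope sense of why scalability dominates at these core counts; the serial fractions in the sketch below are illustrative only.

```python
def amdahl_speedup(serial_fraction, cores):
    """Ideal speedup for a code in which `serial_fraction` of the work
    cannot be parallelised (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (1_000, 10_000, 100_000):
    print(cores, round(amdahl_speedup(0.001, cores), 1))
# Even with only 0.1 per cent serial work, 100,000 cores deliver roughly
# a 990x speedup, about one per cent of the ideal: the algorithms
# themselves have to scale, not just the hardware.
```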

In the UK, an experiment in automated code generation is under way that Pier Luigi Vidale believes will improve portability by enabling scientists to write their algorithms at a very high level. Essentially, the machine code would be generated by a tool, rather than by the scientist, and then submitted to the computational platform. This represents a fundamental change, as it would isolate the scientist from the underlying hardware, and it would eliminate the need to readapt and rewrite code every time a new computer becomes available.
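
The idea can be caricatured in a few lines: the scientist states what to compute at a high level, and a tool emits the low-level loop for whatever machine is being targeted. The sketch below is purely illustrative and does not correspond to the actual UK tooling.

```python
def generate_c_update(field, expression, size_name="n"):
    """Emit a C function that applies `expression` (written in terms of
    x[i]) pointwise to the array holding `field`."""
    return (
        f"void update_{field}(double *x, int {size_name}) {{\n"
        f"    for (int i = 0; i < {size_name}; ++i) {{\n"
        f"        x[i] = {expression};\n"
        f"    }}\n"
        f"}}\n"
    )

# High-level 'science' statement; the generated loop could be retargeted
# at OpenMP, GPUs or another back end without touching this line.
print(generate_c_update("theta", "x[i] + 0.5 * (x[i] - 273.15)"))
```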

In terms of the hardware, one challenge within the climate community is the broad adoption of accelerators. ‘Most weather centres use regular x86 processors, but there is a lot of discussion surrounding GPGPU (general-purpose computation on GPU) and Intel Xeon Phi,’ said Nyberg. The question, he explained, of what a processor will look like five years from now is an open one, and the modelling community must factor in this uncertainty when investing expertise in code development. He believes that the best option is to focus less on particular instantiations of technology – especially from a processor perspective – and more on general parallelisation at all levels, as that will provide a safeguard for algorithmic optimisations.

‘The relationship [between the HPC and numerical weather prediction communities] is very much a symbiotic one,’ added Nyberg. ‘Weather centres push us in HPC to improve our technologies, while our advancements enable them to test their modelling and simulation limits.’

At the end of 2013, Archer, the next generation of the UK’s national HPC facility, will become available, affecting the types of modelling that researchers like Pier Luigi Vidale and Kirsty Pringle are able to conduct. According to Vidale, Archer (Advanced Research Computing High End Resource) promises to be at least twice as fast as HECToR, the Cray XE6 system currently available to the UK academic community, and equivalent to Hermit, installed at HLRS Stuttgart in Germany, which he used during the Upscale project in 2012. Archer, also a Cray machine, will enable Vidale’s group to set up a coupled model that includes both the atmosphere and ocean at very high resolutions. ‘The opportunity we had in Germany meant that we were using observed sea surface temperatures, because running a project like that would have taken longer than the single year allocation would have allowed,’ he explained. ‘A coupled model is very important for long-term climate prediction and projection. Our access to Archer will allow us to run two coupled simulations concurrently and continuously for at least two years.’

Pringle agrees that, in the future, more modelling groups will take the approach of either using multiple models to perform experiments, or performing experiments with multiple different setups of a single model: ‘In this way we can be sure that the results are not strongly sensitive to uncertainties within the model and hopefully this will improve the robustness of our research. There continues to be much public concern about the reliance on climate models for climate science; I hope that, as we really probe and test our models, we can better understand how confident (or not) we are in the results, and this will help us communicate this to the public.’

[1] PL Vidale holds the ‘Willis Chair of Climate System Science and Climate Hazards’



‘The dynamical cores that we use are all developed in the public service, worldwide. We could think of using off-the-shelf solutions, but most of the commercial packages on the market are for fluid dynamics – simulating the flow of air around cars, for example – and while it would be possible to use them, the full weather and climate models may become incredibly expensive once we couple our physics, unless proper investment is made in joint development. Most modern climate models have very interconnected physics and dynamics and that line between the two becomes finer every day. If we were to take a dynamical core from a commercial vendor we would need to make some extensive changes, which is why there is still a tendency to focus on in-house development. This may yet change in the future, depending on the success or failure of next-generation dynamical cores.’ 

Pier L. Vidale, Professor at the University of Reading, UK, and a National Centre for Atmospheric Science (NCAS) senior scientist
