The imperfect storm
Weather prediction seems simple nowadays. You just have to summon Siri and ask it if you’ll need an umbrella or a sunhat.
But behind the scenes, the world of weather prediction is far more sophisticated and diverse. Modelling and simulation tools have appeared in abundance to help deal with the complexity of making predictions for a vast range of weather systems.
Adam Clark, a research scientist at the National Severe Storms Laboratory (NSSL) in the US, explained: ‘Weather is complex. There is no "silver bullet" that can give you a perfect forecast. Perfect weather predictions would require accurately observing every square inch of the Earth’s atmosphere.’
As a result, billions of observations must be incorporated into prediction models. But those observations can be affected by instrumental errors and, to complicate matters further, the prediction models are themselves prone to error, because they are only approximate solutions to the differential equations that describe atmospheric motion.
Clark explained: ‘These multiple error sources have a compounding effect and can grow exponentially. Therefore, forecasters have to use every source of information that is readily available, and then use their own knowledge and intuition based on years of experience to make the best possible prediction.’
Observations are an important source of information for short-term decisions, such as issuing tornado warnings. But as forecasts move to longer timeframes, models play an increasingly important role.
Convection-allowing models (CAMs) are the NSSL’s primary simulation tool for weather prediction. NSSL’s CAMs are weather models that typically run only over the US, at a resolution high enough to depict the storms and storm complexes that lead to hazardous weather. One example of a phenomenon modelled with CAMs is the supercell thunderstorm, a storm with a deep rotating updraft.
Clark explained: ‘Using a model with 32km horizontal grid-spacing, which was typical for National Weather Service (NWS) models until 10 to 20 years ago, a supercell thunderstorm was only sampled by a single grid-point. However, using 4km grid-spacing, which is needed for CAM applications, supercells can be adequately depicted.’
CAMs were first tested as a forecasting tool during collaborative forecasting experiments run by NSSL and sister US-government body the Storm Prediction Center during the mid to late 2000s. Based on their success, the National Weather Service operationalised CAMs in 2014 with the implementation of its High-Resolution Rapid Refresh (HRRR) model.
The HRRR model is a real-time 3km resolution, hourly updated, cloud-resolving, convection-allowing atmospheric model. It assimilates radar data every 15 minutes over a one-hour period. ‘Nowadays, we have enough computer resources to run CAM ensembles, which are a group of several CAMs – typically 10 to 20. In the ensemble, each prediction is obtained with slightly different input, and/or model parameters, which gives a range of forecast solutions. The different solutions can be used to determine which is most likely, and the possible range of outcomes,’ Clark added.
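The idea behind an ensemble can be sketched in a few lines: each member is one forecast, and the collection yields a best estimate, a spread, and event probabilities. Everything below (the member count, the rainfall values, the 25mm threshold) is invented purely for illustration.

```python
import numpy as np

# Hypothetical 10-member ensemble: each member forecasts rainfall (mm)
# at the same grid points, starting from slightly perturbed inputs.
rng = np.random.default_rng(0)
members = 20.0 + 5.0 * rng.standard_normal((10, 4))  # (members, grid points)

ens_mean = members.mean(axis=0)             # best estimate, per grid point
ens_spread = members.std(axis=0)            # forecast uncertainty, per grid point
prob_heavy = (members > 25.0).mean(axis=0)  # fraction of members above 25mm

print(ens_mean.round(1), ens_spread.round(1), prob_heavy)
```

The same arithmetic scales from this toy grid to a full model domain: the probability of an event at a point is simply the fraction of members that predict it.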
Data assimilation is a vital but often challenging aspect of the work done at the NSSL, because the researchers have to provide high-resolution forecasts for short lead times. Clark explained: ‘It basically requires very sophisticated algorithms to stitch together very different observational data sources into an accurate and balanced state that can be input into a model and give an accurate forecast. Data assimilation is one of the big challenges associated with a large initiative at NSSL called Warn-on-Forecast.’
The Warn-on-Forecast (WoF) research programme is tasked with increasing tornado, severe thunderstorm and flash-flood warning lead times. Clark added: ‘In general, one of the biggest problems right now for CAM ensemble systems is that they often do not depict the full range of future outcomes very well. In other words, the weather that actually occurs is too often not forecast by any of the CAM predictions.’
This problem is called ‘under-dispersion’, according to Clark, who added: ‘To fix under-dispersion, there are many areas of ongoing research that involve figuring out how to properly account for model and observational errors, and how to best assimilate different data sources into the model.’
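One simple diagnostic for under-dispersion is to ask how often the verifying observation falls outside the ensemble envelope: for a well-calibrated N-member ensemble this should happen only about 2/(N+1) of the time. The toy sketch below uses entirely synthetic data, with an ensemble whose spread is deliberately too small, to show the diagnostic flagging the problem.

```python
import numpy as np

rng = np.random.default_rng(1)
n_members, n_cases = 10, 5000

# Synthetic 'truth' with unit variance.
truth = rng.standard_normal(n_cases)
# Deliberately under-dispersive ensemble: spread 0.5 vs the truth's 1.0.
ensemble = 0.5 * rng.standard_normal((n_members, n_cases))

# How often does truth escape the ensemble's min-max envelope?
outside = (truth < ensemble.min(axis=0)) | (truth > ensemble.max(axis=0))
print(f"observed miss rate: {outside.mean():.2f}")  # well above expected
print(f"expected miss rate: {2 / (n_members + 1):.2f}")
```

A miss rate far above 2/(N+1), as here, is the signature of under-dispersion that the research Clark describes is trying to eliminate.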
However, initial results from the WoF programme are promising. In 2017, an output from the WoF helped predict a tornado in Elk City, Oklahoma, allowing forecasters to alert the public.
MIT conducts a range of modelling and simulation research in weather prediction. This work focuses primarily on predicting changes in the occurrence of extreme or damaging weather events driven by the slow, decades-long evolution of the continental-to-global scale climate system. Adam Schlosser, a senior research scientist at the Center for Global Change Science and deputy director of the MIT Joint Program on the Science and Policy of Global Change, explained: ‘The challenge is that models of the climate system are unable to resolve the details of many of the extreme events that we consider a threat. They typically occur at very "local" scales (i.e. town, city, county). We bridge this gap by taking advantage of "tell-tale" signs in a number of characteristics in the atmosphere at the larger spatial scales – and we use observations and machine-learning methods to identify what is the "recipe" of these conditions that have to come together to cause the event.’
Schlosser added: ‘We then apply these associations to the climate models’ simulations to see how often they occur now, and how that will change going into the future. We then associate these changes in the occurrence of the large-scale ingredients (again – based on observational evidence) to the risk of change in the extreme event occurrence.’
The team recently conducted a pilot study to investigate the potential impact of extreme heat events on large power transformers (LPTs) in the Northeast US. The cumulative effects of overheating are the most common cause of failure for these units. ‘Our analysis indicated that, for the LPT we selected, the expected changes in the occurrence of the damaging heat waves from (unimpeded) human-induced climate change increased the underlying threat of (incremental) damage by a factor of four by the middle of this century. If the world did everything it could to avoid the human-induced warming, we would still see an increase of a factor of two,’ noted Schlosser.
There are a number of caveats to this study, according to Schlosser. First, the pilot only investigated these effects on one LPT but there are thousands of such systems deployed across the US power grid. ‘We also need to understand better how an incremental damaging heat wave or event impacts the cumulative risk of a catastrophic or premature failure of the LPT,’ Schlosser added.
This would allow more proactive action to be taken to upgrade and replace such systems. ‘We are actively pursuing collaborations and research support to continue our research endeavours along these lines,’ Schlosser concluded.
When we move to longer-term forecasts, global weather prediction models are required, such as the Integrated Forecasting System (IFS) from the European Centre for Medium-Range Weather Forecasts (ECMWF), which produces analyses and forecasts over the medium range.
Peter Bauer, deputy director of research at the ECMWF, said: ‘Any forecast beyond a few days requires global simulations, because all processes are interconnected. Short-range forecasting systems can be limited to regions and draw their boundary conditions from systems like ours.’
‘They then use km-scale models to resolve details of the orography and coastlines. Ideally, longer-range predictions are also run at high resolution, because small-scale processes affect large-scale phenomena and vice versa. However, due to computing limitations, such predictions are usually run at resolutions of 10km or so,’ he added.
Such medium-range forecasts require a high level of expertise and computational power, as Bauer explained: ‘It is a challenge to enhance the physical realism of the system to produce better forecasts, because many processes are either not well understood or difficult to represent in a computer model. Also, the more sophisticated the system becomes, the more computing resources it requires. Computing cost is a limitation in terms of both having sufficiently large computers and supplying the electrical power for running the machine and cooling it.’
The IFS computer model uses a representation of Earth system physics that includes the atmosphere, oceans, sea ice and land surfaces. It also produces the initial conditions that describe the starting point for each forecast, a process that draws on 40 million observations per day.
Each day, the IFS then simulates the weather for the coming weeks in slightly different configurations: as a single 9km forecast up to 10 days ahead, and as an 18km ensemble up to 15 days ahead.
The IFS undergoes a major upgrade once or twice every year. The next upgrade (or cycle) is due in the summer of this year and will significantly improve the physical representation of the transfer of solar and thermal radiation as it passes through, and is reflected within, the Earth’s atmosphere.
Radiative transfer is a continuous and complex process to represent, making it too computationally expensive for the IFS to recompute frequently. Florence Rabier, director-general at the ECMWF, explained: ‘In the next cycle, the IFS will be able to correct its representation of the radiation in the atmosphere every hour for the ensemble prediction, instead of every three hours, because we have significantly improved the efficiency of the code.’
Rabier added: ‘We will also increase the number of observations we use to improve the description we have of the atmosphere, which will improve the accuracy of this physical representation.’
This cycle will also introduce a number of changes to other weather phenomena that are modelled in the IFS. Any such change requires extensive testing to be carried out. While it’s relatively easy to check whether individual changes improve the accuracy of the system, compromises may have to be made when looking at the whole Earth system. Rabier explained: ‘All of the changes may not interact positively when we test them together. Then it’s a judgement call, based on which parameters will degrade. Some parameters may be due for improvement in a later cycle, or may matter more than others.’
The ECMWF also uses probability forecasting to add prediction uncertainty to its forecasts. This is done by running 51 scenarios for each forecast to account for the chaotic nature of the Earth system and modelling uncertainties. Bauer explained: ‘Each forecast can be judged based on the associated uncertainty. Sources of uncertainty are the natural predictability of a given weather pattern (for example, local summer storms are less predictable than stable winter high-pressure situations) but also the imperfection of the initial conditions and the model itself.’
The ECMWF is starting to investigate machine learning techniques ‘for a smarter way of exploiting observational information, for reducing the computational cost of the forecast model, and for more optimal exploitation of the information that is generated by the model,’ according to Bauer.
Machine learning techniques are now being developed to work with the world’s weather and climate prediction systems. For example, a team of researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (LBNL) is developing a deep learning system, called ClimateNet, to understand how extreme weather events are affected by our changing climate.
Deep learning is a subset of machine learning in which useful information is extracted from raw datasets, detecting patterns at multiple levels of abstraction. For this project, a database of 10,000 to 100,000 curated images will be created, in which climate experts have labelled each image to tell the computer what it is looking at.
This database will then be used to train machine learning models to more quickly and accurately identify approximately 10 classes of distinct weather and climate patterns, to help understand and predict how extreme events are changing under global warming.
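In outline, this workflow is standard supervised learning: labelled examples train a model, which then labels unseen fields. The sketch below is not ClimateNet itself; it substitutes a simple nearest-centroid classifier for a deep network, and all of its 'images' are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_classes, n_per_class, pixels = 3, 200, 64

# Each class is a distinct spatial pattern; examples are noisy copies of it.
patterns = rng.standard_normal((n_classes, pixels))
X = np.concatenate([p + 0.5 * rng.standard_normal((n_per_class, pixels))
                    for p in patterns])
y = np.repeat(np.arange(n_classes), n_per_class)

# 'Training': average the labelled examples of each class.
centroids = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

# 'Inference': assign a new, unlabelled field to the nearest class centroid.
new_field = patterns[1] + 0.5 * rng.standard_normal(pixels)
dists = np.linalg.norm(centroids - new_field, axis=1)
print("predicted class:", dists.argmin())
```

A deep convolutional network replaces the centroid step in practice, learning spatial features rather than averaging pixels, but the train-then-label loop is the same.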
The project hopes to address several shortfalls of current pattern detection schemes used by the climate science community. Karthik Kashinath, climate informatics and AI specialist at the National Energy Research Scientific Computing Center (NERSC) at the LBNL, explained: ‘Existing pattern detection schemes and heuristics are not usable on a global scale because the algorithms tend to be designed for specific regions and climate scenarios. However, for the ClimateNet project, we will create a unified machine learning-based global pattern detection model to address these challenges and improve the range of machine learning-based applications.’
Kashinath added: ‘Deep learning is also a highly scalable technique for large data sets, which performs better with larger amounts of labelled training data and larger computing systems. Hence, climate science is well positioned to utilise the true power of deep learning, provided we can develop high-quality labelled data for training, and that is exactly what the goal of ClimateNet is.’
As a result, the ClimateNet project could dramatically accelerate the pace of climate research that requires complex patterns to be recognised in large datasets. Kashinath explained: ‘When it’s up and running, ClimateNet will pull out the interesting patterns, and not have to use the whole dataset to predict the evolution of specific weather phenomena. This will vastly reduce the time it takes climate scientists to test out their hypotheses.’
Kashinath added: ‘If we can accelerate this process, then it will give scientists the time and space to think about the harder problems they need to resolve, when tackling climate change and making weather predictions.’