
Weathering the storm

Things are hotting up in the world of weather and climate forecasting thanks to today’s highly ambitious projects. Some initiatives are building world-class supercomputing facilities. Others are creating digital twins of the Earth. But what impact does this work have on the world of simulation and modelling?

Jon Petch, associate director of weather science at the Met Office, explained: ‘Climate and weather predictions, produced using supercomputing, are ever-increasing in complexity and size as atmospheric physics develops and new data is captured from earth monitoring systems.’

This is where new supercomputing facilities and modelling techniques can help. In April, the Met Office signed a multimillion-pound agreement with Microsoft for the provision of a supercomputer to advance weather and climate forecasting in the UK. The supercomputer will be twice as powerful as any other machine in the country.

It’s a massive step forward for both UK research and the world of weather and climate prediction. Petch explained: ‘This investment is for a 10-year service delivery and includes two substantial increases in supercomputing capacity through two generations of supercomputing implementations, with a return on investment of around 9:1 - resulting in financial benefits totalling up to £13bn for the UK over its ten-year lifespan.’

‘Increased capacity will permit an increase in the detail of both ocean and atmospheric models, allowing a more realistic representation of the large-scale weather systems that drive UK weather,’ according to Petch. ‘It will enable ever more localised climate predictions, ensuring infrastructure, housing, transport networks etc, built today, will be safe from the weather impacts of the future.’

Microsoft Azure’s Supercomputing-as-a-Service will also be used during the project, allowing the Met Office to ‘leverage the best blend of dedicated and public cloud services to provide more accurate predictions to help the UK population and businesses plan daily activities, better prepare for extreme weather, and address the challenges associated with climate change,’ according to a Microsoft spokesperson.

‘We are delighted to be working with the Met Office to deliver what will become the world’s leading climate and weather science supercomputing service. Combining the Met Office’s expertise, data gathering capability and historical archive with the sheer scale and power of supercomputing on Azure will improve forecasting, help monitor and tackle climate change and ensure the UK remains at the forefront of scientific and technological research over the next decade,’ the spokesperson added.

The new supercomputing facility will extend the Met Office’s longer-range predictions, enhancing accuracy and supporting medium-term decision-making for business and industry, and improve its understanding and analysis of climate change, while driving ‘technological innovation by UK business and industry,’ Petch noted.

The supercomputer itself will pack a powerful prediction punch. The Microsoft spokesperson said: ‘The first generation of the supercomputer solution will have a combined total of more than 1.5 million processor cores and over 60 Pflops, otherwise known as 60 quadrillion (60,000,000,000,000,000) calculations per second of aggregate peak computing capacity. Microsoft will also deliver further upgrades in computing capability over the ten years.’
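As a rough back-of-the-envelope illustration of what those headline figures imply, dividing the quoted aggregate peak by the quoted core count gives a per-core number; this is a derived estimate for illustration only, not a published specification of the machine.

```python
# Illustrative arithmetic on the quoted first-generation figures:
# 1.5 million cores and 60 Pflops of aggregate peak capacity.
PEAK_FLOPS = 60e15   # 60 Pflops = 60 quadrillion calculations per second
CORES = 1.5e6        # combined total of processor cores

print(f"Aggregate peak: {PEAK_FLOPS:,.0f} calculations per second")
print(f"Roughly {PEAK_FLOPS / CORES / 1e9:.0f} Gflops of peak capacity per core")
```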

Petch added: ‘Additionally, the increased supercomputing power will allow us to increase the number of model runs we undertake, which will improve assessments of current risks and predictions, undertake rapid attribution of severe weather in relation to our changing climate, and allow the characterisation of future worst cases – all of which is very compute hungry.’

Updating the dynamical core

This is an important point. As prediction ambitions and supercomputers scale up, so do the computational resource and data management challenges.

This issue is something that the Met Office is more than aware of. A project to redesign the Met Office’s dynamical core (the numerical algorithm at the heart of its atmospheric model) for the next generation of supercomputers has just marked its 10th anniversary.

‘This is part of a larger programme to reformulate and redesign our complete weather and climate research and operational/production systems to allow the Met Office and its partners to fully exploit future generations of supercomputers for the benefits of society,’ Petch explained.

‘It covers the atmosphere, land, marine and Earth system modelling capabilities and ranges from observation processing and assimilation, through the modelling components, to verification and visualisation,’ he added.

One of the key parts of a weather or climate model is the dynamical core – the numerical algorithm that solves the equations governing fluid motion. The Met Office’s current dynamical core is known as ENDGame, and it describes the Earth using a latitude-longitude grid.

‘However, it has been known since the late 2000s that this grid will cause problems on future supercomputers, which will rely on spreading the calculations involved in the simulation over ever-increasing numbers of computer processors,’ Petch explained.
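The difficulty Petch refers to is the geometry of a regular latitude-longitude grid: lines of longitude converge towards the poles, so grid cells that are several kilometres wide at the equator shrink to a few tens of metres near the poles, which constrains time steps and makes it hard to spread the work evenly across large numbers of processors. A minimal sketch of that convergence (the roughly 10km equatorial spacing is an assumed example, not the resolution of any particular Met Office model):

```python
import math

# Zonal (east-west) spacing of a regular latitude-longitude grid,
# showing how cells shrink dramatically towards the poles.
EARTH_RADIUS_KM = 6371.0
DLON_DEG = 0.09  # assumed longitude increment, ~10 km at the equator

def zonal_spacing_km(lat_deg: float) -> float:
    """East-west distance between neighbouring grid points at a given latitude."""
    return EARTH_RADIUS_KM * math.radians(DLON_DEG) * math.cos(math.radians(lat_deg))

for lat in (0.0, 45.0, 80.0, 89.9):
    print(f"latitude {lat:5.1f} deg: {zonal_spacing_km(lat):8.3f} km between points")
```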

‘A programme to reformulate and redesign the dynamical core at the heart of our weather and climate model is underway (GungHo), together with a programme to design, develop and implement a new model infrastructure with the specific aim of being as agnostic as possible about the supercomputer architectures (LFRic).’

With these solutions and increased supercomputing capability, the Met Office can ‘make a step change in the level of precision with which it can forecast the impact of severe weather, with city-scale predictions of rainfall, winds and air quality, helping to protect life and property,’ Petch added.

By replacing and increasing its supercomputing capability, the Met Office also hopes to expand its localised climate projections to better inform future climate risk. This includes city-scale projections to ‘enable better investment in infrastructure and adaptation measures to keep people safe,’ said Petch.

Digital Earth

Destination Earth (DestinE), an initiative from the European Union, is another key project, tasked with developing a high-precision model of the Earth to monitor and simulate both natural and human activity.

To create a digital twin of the Earth, a global grid spacing of approximately one to three kilometres between neighbouring simulation (grid) points is required. At these points, as many physical processes as possible are represented from first principles, so that the model can ‘simulate as observed’ and the digital twin can interact seamlessly with other applications and users.
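The scale this implies can be estimated from the Earth’s surface area alone. In the sketch below, the surface area and the grid spacings come from the figures above, while the number of vertical levels is an assumption added purely for illustration:

```python
# Rough estimate of the number of grid columns and grid points a global
# digital twin needs at the 1-3 km spacings quoted for Destination Earth.
EARTH_SURFACE_KM2 = 510e6  # ~510 million square kilometres
LEVELS = 100               # assumed number of vertical levels (illustrative)

for spacing_km in (1, 3):
    columns = EARTH_SURFACE_KM2 / spacing_km**2
    points = columns * LEVELS
    print(f"{spacing_km} km spacing: ~{columns:.1e} columns, ~{points:.1e} grid points")
```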

To achieve this, vast amounts of natural and socio-economic information are required to continuously monitor the health of the planet and support EU policy-making and implementation.

From a data management perspective, the challenges are extensive. Nils Wedi, head of the European Centre for Medium-Range Weather Forecasts’ (ECMWF) Earth Modelling Section, said: ‘To put it in perspective, a single simulation will produce 100 to 200TB per day, similar to today’s entire volume of daily production at the ECMWF.’

Wedi added: ‘We anticipate using the latest developments on federated data access, such as Polytope datacube access of weather data, and federated data lakes, combined with unsupervised learning and data reduction. It is not anticipated to be able to archive native resolution data for longer periods and beyond certain cut-off times raw data will have to be deleted.’
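Some quick arithmetic on the quoted output rate shows why raw data cannot simply be kept indefinitely; the retention periods below are assumptions chosen for illustration:

```python
# Cumulative output at the quoted 100-200 TB per simulated day.
DAILY_TB_LOW, DAILY_TB_HIGH = 100, 200

for days in (30, 365):
    low_pb = DAILY_TB_LOW * days / 1000
    high_pb = DAILY_TB_HIGH * days / 1000
    print(f"{days:3d} days of output: {low_pb:5.1f}-{high_pb:5.1f} PB")
```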

The Polytope datacube is one example of the new processes and technologies being put in place to help manage this data. It stores meteorological datasets in n-dimensional arrays (or datacubes) so data is returned in an accessible format.
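As a generic illustration of the datacube idea, rather than the Polytope interface itself, a meteorological field can be held as a labelled n-dimensional array and sliced along any of its dimensions so that only the subset of interest is returned. The dimension names, sizes and the use of xarray here are assumptions made for the sketch:

```python
import numpy as np
import xarray as xr

# A toy 4-D datacube: time x pressure level x latitude x longitude.
# Names and sizes are illustrative, not an ECMWF schema.
temps = xr.DataArray(
    np.random.rand(4, 3, 181, 360).astype("float32"),
    dims=("time", "level", "lat", "lon"),
    coords={
        "time": np.arange(4),        # forecast steps
        "level": [1000, 850, 500],   # pressure levels in hPa
        "lat": np.linspace(-90, 90, 181),
        "lon": np.arange(0, 360, 1.0),
    },
    name="temperature",
)

# Request only the slice of interest instead of the whole cube:
uk_850 = temps.sel(level=850, lat=slice(49, 61), lon=slice(0, 10))
print(uk_850.shape)  # (4, 13, 11): forecast steps x latitudes x longitudes
```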

The ECMWF is also pursuing a four-part strategy to adapt to new HPC architectures and develop ‘an accelerator-enabled multi-architecture prediction model,’ according to Wedi.

First, ECMWF is introducing single-precision (rather than double-precision) arithmetic in its forecast algorithms. Second, it is adopting platform-specific accelerated libraries for the computationally intensive parts of the model. Third, it is separating data layout and memory placement from the science-driven code development, enabling asynchronous, data-driven programming models and the use of source-to-source translators (via DSL toolchains).

Finally, the ECMWF is developing alternative and novel algorithms, for example partly replacing time-critical code with machine-learned equivalents, or using alternative discretisations that are potentially better suited to emerging HPC architectures.
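The first of these strategies is the easiest to quantify: moving a field from double to single precision halves its memory footprint and, just as importantly, the volume of data that has to be moved through the machine. A minimal sketch, with grid dimensions that are assumptions for illustration rather than ECMWF’s actual configuration:

```python
import numpy as np

# Memory footprint of one illustrative 3-D global field at double vs single precision.
NLEV, NLAT, NLON = 137, 1801, 3600  # assumed grid dimensions, for illustration
n_points = NLEV * NLAT * NLON

for dtype in (np.float64, np.float32):
    gb = n_points * np.dtype(dtype).itemsize / 1e9
    print(f"{np.dtype(dtype).name}: {gb:.1f} GB per field")
```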

‘How we can best use a quantum computer is still to be answered, but it is being researched,’ Wedi said.

Streamlined simulations

There are many efforts to streamline weather and climate change prediction systems. The National Oceanic and Atmospheric Administration (NOAA), for example, is part of a broader community modelling effort called the Unified Forecast System (UFS).

Dr Vijay Tallapragada, chief of the Modelling and Data Assimilation Branch in NOAA’s Environmental Modeling Center, explained: ‘UFS is integrating numerous environmental models into a unified Earth modelling system that will be used to predict weather from local to global domains at time scales from minutes to seasons.

‘This unified system allows better collaboration between NOAA and the extramural science community, and will accelerate the development and integration of innovation into NOAA’s operational weather forecast systems.’

NOAA is migrating towards simplifying the operational production suite by adopting the community-based UFS for all operational applications in the next five years. Both its Global Forecast System (GFS) and Global Ensemble Forecast System (GEFS) have already been migrated to the UFS framework, and the rest of its applications are currently being developed and merged into the same framework to streamline its research and operations.

The Met Office also uses its Unified Model of the atmosphere for both weather and climate applications. ‘Although we have several modelling systems available to us, the Unified Model is key for our weather forecasts and climate predictions,’ Petch added.

To help manage the resulting, demanding workloads, commercial tools are also available. Altair, for example, provides workload management solutions for weather sites around the world, including the National Center for Atmospheric Research (NCAR) in the United States, Australia’s Bureau of Meteorology and the US Naval Research Laboratory.

Sam Mahalingam, Altair’s CTO, said: ‘Sophisticated and well-supported HPC workload management and optimisation are a must for these sites, where HPC downtime, productivity loss and inefficient resource utilisation can threaten critical real-world research.’

At NCAR, Altair’s PBS Professional is already used for workload orchestration on the organisation’s current supercomputer, Cheyenne. PBS Professional and Altair Accelerator Plus will also be used on its new system, Derecho, which is predicted to be one of the world’s Top 25 HPC systems.

‘Features like high-throughput hierarchical scheduling with Accelerator Plus offer six to ten times HPC throughput improvements, as well as better license and resource utilisation, and more flexible scheduler usage models,’ Mahalingam explained. ‘At NCAR, this will help develop and test the Weather Research and Forecasting model for atmospheric research and operational forecasting applications.

‘Other features, such as cloud bursting, which provides massive scalability and flexibility, and key access portals and alerting mechanisms, are also critical to weather research and climate modelling,’ according to Mahalingam.

At Australia’s Bureau of Meteorology, staff at many sites need to monitor the operational environment around the clock. There, an Altair solution that leverages the Cylc workflow engine provides detailed information to staff, allowing them to monitor the supercomputer hardware, Cylc suites and PBS Professional jobs, while reporting status clearly and concisely.

The solution is designed to be modular and general-purpose, so any site can deploy it out of the box, or substitute components they’re more familiar with.

Seamless integration is a sign of things to come in the world of weather and climate prediction, as Mahalingam explained: ‘We expect the use of multi-dimensional HPC such as storage-aware scheduling and hierarchical scheduling, cloud bursting and automated cloud migration, and workload simulation, as well as the use of HPC to propel machine learning applications, will continue to gain traction in the coming months and years.’

Such developments are key to not only futureproof the world of weather and climate forecasting, but also protect our planet. Petch concluded: ‘Predicting the weather and climate has become one of the most important areas of scientific endeavour, and increasing our computing capability is essential if we are to continue to improve our climate predictions and climate change simulations.’


