Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2021 Feb 15; 379(2194): 20200083. doi: 10.1098/rsta.2020.0083

Opportunities and challenges for machine learning in weather and climate modelling: hard, medium and soft AI

Matthew Chantry, Hannah Christensen, Peter Dueben, Tim Palmer
PMCID: PMC7898136  PMID: 33583261

Abstract

In September 2019, a workshop was held to highlight the growing area of applying machine learning techniques to improve weather and climate prediction. In this introductory piece, we outline the motivations, opportunities and challenges ahead in this exciting avenue of research.

This article is part of the theme issue ‘Machine learning for weather and climate modelling’.

Keywords: weather prediction, machine learning, climate modelling


Ever since the development of the first fully programmable electronic digital computers at the end of World War 2, the cutting edge of weather prediction science has lain in the development of accurate numerical representations of the governing dynamical equations of meteorology. Beginning in the 1950s [1], these numerical representations were further developed to allow first-principle simulations of climate. Since then, numerical modelling of weather and climate has dominated meteorological forecasting, to the extent that only 10 years ago prediction using statistical empirical models seemed rather antiquated.

With the recent advances in deep neural networks and other related machine learning techniques, statistical empirical prediction is firmly back in vogue. This raises a number of questions. Will the methods of artificial intelligence replace numerical models? Or, failing that, can they be used to supplement or enhance numerical models?

These were the overriding themes of a workshop held at Corpus Christi College Oxford in September 2019, papers from which are published in this Special Issue.

In order to address the questions posed above, it is worth breaking weather and climate prediction into different timescales.

Now-casting describes the evaluation of the weather and in particular precipitation right now or with a forecast lead time of a couple of hours. Here, conventional predictions are mostly based on observations and advection models with only limited representation of physical processes such as cloud physics or turbulence. Prudden et al. [2] summarize the current state of the field, particularly in the context of machine learning.

Short-range weather prediction describes forecasts for a day or two ahead. On these timescales, one may be interested in the timing of an incoming cold front or in how individual thunderstorm complexes propagate and develop. This is a challenging problem, as such cloud systems require very high-resolution modelling using spatial grids with horizontal spacings on the order of 1 km. Inevitably, given current computational power, this means short-range prediction models are regional rather than global in their domain. The development of such convective cloud systems depends critically on the condensation of moisture, which frequently occurs in updrafts too narrow to be properly resolved even with a 1 km grid [3]. As a result, modelling on these space and timescales remains challenging.

Medium-range prediction, on timescales of a couple of days out to a couple of weeks, has seen perhaps the most dramatic increases in skill of any timescale over past decades [4]. The archetypal phenomenon on the medium-range timescale is the baroclinic wave, whose characteristic scales are well resolved by contemporary global numerical weather prediction models with grid spacings on the order of 10 km. Accurate prediction on these timescales over two generations of baroclinic waves is often possible.

Subseasonal forecasting spans timescales from two weeks to one season. A key source of predictability on this timescale is the Madden–Julian Oscillation [5,6], an eastward-propagating, equatorially trapped oscillation in tropical winds with a timescale of 30–80 days, associated with suppressed or enhanced organized convection. This timescale has seen much attention in recent years with the birth of the Subseasonal to Seasonal (S2S) Prediction Project [7]. Associated with this project are several large databases containing S2S forecasts from operational and research models [7,8], invaluable for potential ML applications.

Seasonal forecasting depends critically on the coupled dynamics of the atmosphere and ocean, and an archetypal phenomenon is the El Niño event in the tropical Pacific Ocean [9,10]. Coupled ocean–atmosphere models show skill in predicting El Niño on seasonal timescales [11]. However, at this range, models must use grid spacings of many tens of kilometres. At these resolutions, models show substantial biases in key fields such as precipitation and wind. These biases arise from the need to parametrize key physical processes, notably those associated with deep convective cloud systems.

Finally, climate-change prediction asks how the statistics of weather change as the greenhouse gas concentration of the atmosphere increases, on timescales of decades and longer. Again, on these timescales, the numerical models (with grid spacings of 100 km) exhibit quite substantial biases whose magnitudes are comparable with the climate-change signal one wishes to predict [12].

With this as background, let us ask the question: could prediction on any or all of these different timescales simply be replaced by AI? One might refer to this as ‘Hard AI’.

Now-casting applications appear to be a good starting point for Hard AI. On these timescales, physical constraints such as conservation laws can be ignored, as errors will not accumulate significantly over a couple of hours. Furthermore, a wealth of Internet-of-Things (IoT) data, for example in the form of mobile phone data, is increasingly available and usable for weather prediction. Assimilating these data using conventional methods will be very difficult due to the large number of measurements and their large errors. In promising recent work, machine learning methods are becoming competitive on short timescales (e.g. [13]). These approaches combine the data assimilation and forecasting problems, directly using observations (e.g. satellites) as inputs to the prediction system.
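The data-driven flavour of such nowcasting can be illustrated at toy scale. The sketch below (entirely synthetic, numpy only; operational systems such as [13] use deep networks on radar and satellite imagery) learns a one-step evolution operator for an advected field purely from example pairs of consecutive "frames", then applies it to an unseen field:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # tiny 8x8 "radar" grid

# Synthetic "radar" pairs: random precipitation fields advected one pixel
# eastward per time step (periodic domain), plus observation noise.
def make_pair():
    f = rng.random((n, n))
    g = np.roll(f, 1, axis=1)  # true advection: shift one column east
    return f.ravel(), (g + 0.01 * rng.standard_normal((n, n))).ravel()

X, Y = zip(*[make_pair() for _ in range(500)])
X, Y = np.array(X), np.array(Y)

# Learn the one-step evolution operator A (frame_t -> frame_{t+1})
# purely from data by least squares: Y ~ X @ A.
A, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Nowcast an unseen field and compare with the true advected field.
f = rng.random((n, n))
pred = (f.ravel() @ A).reshape(n, n)
truth = np.roll(f, 1, axis=1)
err = np.abs(pred - truth).max()
print(f"max nowcast error: {err:.3f}")
```

With enough pairs, the regression recovers the advection operator to within the noise level; real nowcasting replaces the linear map with a convolutional network and real dynamics.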

For short and medium-range predictions, an approach using Hard AI becomes more challenging. Skilful forecasts on these timescales would implicitly require the representation of the equations of motion and the interactions between features of the system (such as clusters of deep convection, gravity waves and mid-latitude jets). It is in principle possible to learn the equations of motion from data using machine learning methods [14–16]; however, challenges with this approach include satisfying conservation principles and maintaining the stability of the simulations. Forecast skill is found to decrease after a few days, such that it is currently difficult to envisage such Hard AI outperforming, and hence potentially replacing, numerical models on medium-range timescales. However, for short-range predictions this approach can produce skilful forecasts [16], indicating potential for Hard AI to compete with numerical models. An alternative approach which has shown some promise on short timescales is to combine analogue forecasting (prediction using a database of past evolving weather patterns) with deep learning [17].
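The idea of learning governing equations from data can be demonstrated on a toy chaotic system. The sketch below (an illustrative assumption, not any of the cited methods) simulates the Lorenz-63 system, then recovers its equations by regressing finite-difference tendencies on a library of candidate terms:

```python
import numpy as np

# Simulate the Lorenz-63 system, a classic chaotic toy for atmospheric flow.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps = 1e-3, 20000
traj = np.empty((steps, 3))
s = np.array([1.0, 1.0, 1.0])
for i in range(steps):  # 4th-order Runge-Kutta integration
    k1 = lorenz(s); k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2); k4 = lorenz(s + dt * k3)
    s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    traj[i] = s

# Central-difference time derivatives, regressed on a library of candidate
# terms [x, y, z, xy, xz, yz] to recover the governing equations.
dsdt = (traj[2:] - traj[:-2]) / (2 * dt)
x, y, z = traj[1:-1].T
lib = np.column_stack([x, y, z, x * y, x * z, y * z])
coef, *_ = np.linalg.lstsq(lib, dsdt, rcond=None)

# Column 0 is dx/dt = -10x + 10y, so the x and y coefficients should be near +/-10.
print(np.round(coef[:2, 0], 2))
```

For a clean, low-dimensional system this works remarkably well; the challenges noted above (conservation, stability, noise) appear when the same recipe meets a high-dimensional, partially observed atmosphere.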

One of the biggest challenges for Hard AI on short and medium-range timescales is the lack of high-quality training data. Training machine learning systems on the output of conventional models could yield gains in high-performance computing efficiency, but not in prediction quality. If model data could be generated at a higher spatial resolution than can currently be run within an operational forecasting window, there would be scope for prediction improvements; however, the computational power required to generate such a training dataset would be large. Training from observations requires consistent, high-quality data, which have only been available since the advent of satellite measurements, beginning with Sputnik 1 in 1957 [18]. This restricts training datasets to O(10^4) days of data. There is perhaps scope to pre-train on existing model data before final training on observational datasets, but it is unclear whether model and observational datasets are close enough for pre-training to be a valuable step.
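The O(10^4) figure follows from simple arithmetic. The sketch below takes the Sputnik 1 launch as an illustrative (and generous) start date; consistent, high-quality satellite records begin considerably later, so the true budget is smaller still:

```python
from datetime import date

# Back-of-envelope size of an observation-based training set: one sample
# per day from the first satellite to the September 2019 workshop.
days = (date(2019, 9, 1) - date(1957, 10, 4)).days
print(days)  # roughly 2.3e4, i.e. O(10^4) daily samples at most
```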

For seasonal predictions, machine learning methods are more likely to beat conventional prediction systems: here, the systematic errors of models are sufficiently large that AI may soon be competitive [19,20]. However, the same challenges of training dataset size persist for the seasonal prediction problem.

Adopting the Hard AI approach on the climate-change timescale is particularly difficult. The problem is inherently one of extrapolation, predicting climates not yet observed, which is a major challenge for machine learning techniques. One key question for climate change is whether cloud feedbacks are positive or negative, and there is no indication from existing data that global cloud cover is increasing or decreasing [21]. Hence a single black-box AI scheme would struggle to give us meaningful projections of the future. Our only hope lies in schemes which fully embrace the laws of physics. These could incorporate AI, but likely as one of many elements rather than as a single entity.

The potential for Hard AI was discussed extensively at the workshop, and no consensus was reached as to whether it is likely to outperform conventional tools for medium-range and climate predictions. In particular, participants with a focus on high-performance computing were optimistic regarding machine learning tools, since their computational power will jump by a large factor in the coming years as chips and computing systems are co-designed towards the needs of deep learning. Furthermore, the pace of development in machine learning techniques is breathtaking, with O(100) papers on artificial intelligence published every day.

If numerical models cannot currently be replaced outright in at least some weather and climate applications, how can machine learning methods still help to improve their predictions? Via what we call 'Medium AI' and 'Soft AI'. Here, the intention is not to replace the entire model, but rather to improve forecasts using tools learned from observations (Medium AI), or to emulate existing kernels of weather forecasting models (Soft AI) to allow for improvements in computing efficiency. Several papers in this Special Issue discuss such applications of AI techniques.

Medium AI requires a sufficient wealth of independent data points for each application. One approach uses machine learning as a postprocessing tool. This includes downscaling grid-scale output to some point-specific location within a gridbox, bias correction, improving the representation of forecast uncertainty, and automating model post-processing such as feature detection for early warning of extreme events. These applications are fairly uncontroversial, as they have a long-standing history, not necessarily for deep learning but certainly for the wider toolbox of machine learning [22,23]. One promising technique is the use of Generative Adversarial Networks (GANs) [24] to increase the spatial resolution of model or satellite data [25].
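A minimal sketch of the bias-correction flavour of postprocessing, in the spirit of classical model output statistics (all data synthetic; the coefficients and error magnitudes are illustrative, and modern systems use the richer toolbox cited above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic station observations, and raw model forecasts carrying a warm
# bias and an amplitude error relative to those observations.
obs = 15 + 8 * np.sin(np.linspace(0, 20, 1000))
raw = 1.2 * obs + 2.5 + rng.normal(0, 0.5, obs.size)

# Model output statistics (MOS): fit obs ~ a*raw + b on a training period,
# then apply the learned correction in the verification period.
A = np.column_stack([raw[:700], np.ones(700)])
a, b = np.linalg.lstsq(A, obs[:700], rcond=None)[0]
corrected = a * raw[700:] + b

rmse_raw = np.sqrt(np.mean((raw[700:] - obs[700:]) ** 2))
rmse_cor = np.sqrt(np.mean((corrected - obs[700:]) ** 2))
print(f"RMSE raw {rmse_raw:.2f} -> corrected {rmse_cor:.2f}")
```

The correction removes the systematic part of the error, leaving only the irreducible noise; deep learning extends the same idea to nonlinear, multivariate corrections.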

A second approach with Medium AI is to replace components of the models with new tools that have been learned from observations or other data sources (such as high-resolution models). This is an attractive proposal, which could transform the representation of poorly understood, complex physical processes in our models. However, it is hindered by the lack of knowledge of the coupling between grid-scale and sub-grid-scale processes. There is a view that parametrization—reducing sub-grid variability to some sort of bulk formula—is an inherently ill-posed concept, which would set an upper bound on the improvements possible from AI [26]. There are also difficulties associated with using observational data (for example from satellites) for the training of parametrization schemes, since data on all vertical levels of the model are required. This is typically only achievable within data assimilation systems, and the signal-to-noise ratio will be challenging when extracting information from observations. Nevertheless, there are some model components which are amenable to a Medium AI approach, particularly those which are based on empirical relationships. One example is the scaling function used within Monin–Obukhov similarity theory to determine interactions between the atmosphere and the surface layer [27].

The Soft AI approach was first used by Krasnopolsky [28] and Chevallier et al. [29] over 20 years ago. By training a neural network on an existing parametrization scheme (e.g. long-wave radiation), one can replace the parametrization with an AI scheme which will be no more accurate than the original, but arrives at the parametrized tendency at significantly reduced computational cost. The saved computational cost can then be reinvested to improve the resolution or complexity of the numerical model, which in principle should improve the accuracy of the model. This approach is, in our view, among the lowest hanging fruit in the applications of AI to weather and climate modelling. It should be noted that this approach still has challenges; for example, Brenowitz & Bretherton [30] found that multi-timestep training was necessary to produce a stable simulation.
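The emulation idea can be sketched at toy scale: a small neural network trained to reproduce a stand-in "radiation" function. Everything here is an illustrative assumption—the target function, network size and hyperparameters—and the network is written in plain numpy rather than the ML frameworks an operational emulator would use:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "parametrization" to emulate: graybody longwave emission as a
# function of temperature (a stand-in for a real radiation scheme).
T = rng.uniform(200, 300, (256, 1))
x = (T - 250.0) / 50.0   # scale inputs to [-1, 1]
y = (T / 300.0) ** 4     # scaled emission, the target to reproduce

# One-hidden-layer MLP trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(8000):
    h = np.tanh(x @ W1 + b1)        # forward pass
    pred = h @ W2 + b2
    d = 2 * (pred - y) / len(y)     # dMSE/dpred
    dW2 = h.T @ d; db2 = d.sum(0)   # backpropagation
    dh = (d @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"emulator MSE: {mse:.2e}")
```

Once trained, evaluating the network is a handful of matrix multiplications, which is the source of the speed-up when the original scheme is expensive.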

Putting all this together, we believe that the real way forward is through a careful assessment of how AI-based schemes and numerical models can be synthesized to allow the production of next-generation high-resolution global weather and climate models running on exascale supercomputers. It can be expected that such models will, as far as possible, be used across all timescales, from days to decades. With climate change affecting the nature and frequency of weather extremes, making many in the world even more vulnerable to natural hazards, the need for such enhanced prediction models has never been greater.

The weather and climate modelling community has a long history in the use of machine learning techniques, such as parameter estimation or the extraction of leading dynamic modes. However, the recent boost in the capabilities of machine learning, in particular deep learning, and the recent advance in HPC resources that enable the use of machine learning on much more complicated problems, are yet to be fully exploited. Doing so requires the development of additional skill and infrastructure within the community.

1. Workshop overview

In addition to presentations and posters, our workshop also included breakout groups. These discussed:

  • How to strengthen the link between physical understanding and machine learning

  • How to improve physical parametrization schemes within models

  • How to improve HPC efficiency of weather and climate models using machine learning

  • How to improve the usability of model output using machine learning

It was evident that there is a need for the community to develop machine learning solutions that are customized to the needs of weather and climate modelling, rather than relying on machine learning solutions mainly developed for other application areas such as image recognition. Such customized solutions would need to be able to impose physical understanding within the solution architecture. This is necessary to build machine learning tools of minimal complexity, to optimize efficiency during training and inference. Furthermore, machine learning solutions for weather and climate applications would need to cope with changes of dynamic regime due to climate change, and would therefore need to remain valid outside the regimes available in the historical training record. In an ideal but unlikely scenario, solutions would be able to accurately extrapolate in addition to interpolating, where standard deep learning thrives.

Some promising machine learning approaches presented at the workshop focused on the correction of a signal rather than the prediction of the signal itself. This provides a natural connection between conventional tools (used to generate the signal) and machine learning tools (used to correct it), and makes solutions more resilient against errors, since the machine learning signal is smaller. Another promising approach is the use of machine learning in uncertainty quantification. This includes both approaches to represent uncertainty within models, for example via multi-bin Markov chains or GANs [31], and the use of machine learning techniques to post-process the output of ensemble simulations [32,33].
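The correction-of-a-signal idea can be illustrated with a toy example (all components synthetic and assumed): an imperfect "conventional model" supplies most of the signal, and a simple regression learns only the residual, so regression errors act on a small correction rather than on the full field:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Truth" and an imperfect conventional model: the model captures the slow
# component of the signal but misses a smaller fast component.
x = rng.uniform(-3, 3, 2000)
truth = np.sin(x) + 0.3 * np.sin(2 * x)
model = np.sin(x)

# Learn only the correction truth - model with a small polynomial fit;
# any fitting error then acts on the small residual, not the full signal.
P = np.vander(x, 8)  # polynomial features up to degree 7
w = np.linalg.lstsq(P, truth - model, rcond=None)[0]
hybrid = model + P @ w

err_model = np.abs(model - truth).max()
err_hybrid = np.abs(hybrid - truth).max()
print(f"max error: model {err_model:.2f} -> hybrid {err_hybrid:.3f}")
```

If the learned correction were switched off entirely, the hybrid would degrade only to the conventional model, which is the resilience property noted above.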

To enable machine learning solutions to take up information from physical knowledge, or at least to obey conservation properties, a number of research projects were presented. Strategies included adjustments to architecture designs, constraint terms in loss functions during training, genetic programming or regression techniques that learn coefficients for terms guided by physical understanding (equation discovery), and the more conventional approach of parameter optimization, for example via approximate Bayesian computation.
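One of these strategies, a constraint term in the loss function, can be sketched as follows. In this illustrative toy (all details assumed), a linear scheme predicts tendencies for three water phases, and an extra penalty on the summed tendency pushes the learned scheme towards conserving total water:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy tendencies for (vapour, liquid, ice): phase exchanges only, so the
# three components of the true tendency sum to zero (water is conserved).
X = rng.normal(size=(200, 5))                # grid-scale predictors
B = rng.normal(size=(5, 3))
B[:, 2] = -B[:, 0] - B[:, 1]                 # conserving ground truth
Y = X @ B + 0.3 * rng.normal(size=(200, 3))  # noisy training targets

def train(lam):
    """Linear scheme trained on MSE + lam * mean((sum of tendencies)^2)."""
    W = np.zeros((5, 3))
    for _ in range(10000):
        P = X @ W
        g = 2 * X.T @ (P - Y) / len(X)                               # MSE term
        g += 2 * lam * X.T @ np.outer(P.sum(1), np.ones(3)) / len(X)  # penalty
        W -= 0.01 * g
    return np.abs((X @ W).sum(1)).mean()     # mean conservation violation

v_free, v_pen = train(0.0), train(10.0)
print(f"violation without penalty {v_free:.3f}, with penalty {v_pen:.4f}")
```

Without the penalty, noise in the training targets leaks into the learned scheme as a spurious source or sink of water; the constraint term suppresses this at negligible cost in fit.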

A number of infrastructure requirements to support machine learning applications within weather and climate modelling were identified during the workshop. These include:

  • To design standard methods that enable researchers to easily link Python and Fortran programs, since most machine learning methods are trained and used within Python code (based on machine learning libraries such as TensorFlow [34] or Keras [35]) while weather and climate models are typically written in Fortran. Such a link between Python and Fortran would be particularly important for machine learning approaches that attempt online learning within a running model. Some progress on this task has been made by [36].

  • To develop benchmark training datasets to allow for a quantitative comparison between machine learning approaches developed within different scientific groups (see also [37]). The benchmark datasets would need to be tiered in complexity, from small datasets for conceptual testing up to the terabyte range for tests of high-performance computing efficiency during both training and inference. For most currently available datasets, the temporal frequency is too coarse to allow for training of machine learning solutions.

  • To make better use of heterogeneous hardware when running weather and climate models. Most current weather forecasting models run on CPU hardware only, whereas machine learning solutions are typically faster on GPU hardware. This may generate tension for weather and climate computing centres with their own supercomputing facilities. Better adoption of heterogeneous hardware for weather and climate models would have benefits outside of machine-learnt kernels as these models feature dense linear algebra which can be further accelerated with low numerical precision [38].

  • To prepare data centres for larger data requests by machine learning users and to provide computing infrastructure that is optimal for training and inference close to the data.

  • To develop vanilla solutions for machine learning in weather and climate applications that reduce the need for excessive screening of all architecture and training options within deep learning. It may even be possible to use and share pre-trained machine learning solutions between groups. In particular, machine learning solutions are required that can work on unstructured grids and with spherical geometry, as well as with the three-dimensional structure of atmosphere and ocean models, where grid spacings change in the vertical.

It is still unclear how the workflow of weather and climate modelling will change in the coming 5–10 years. However, it is very likely that machine learning will be used within many components within the workflow. This will not necessarily make the workflow—which is already very complicated—easier and it will require machine learning resources such as special hardware and staff with experience in the design of machine learning tools. However, the use of machine learning within the workflow will likely allow a number of improvements in results.

This special issue contains a mix of new research and review articles. Most of the articles directly attack the problem of weather forecasting, with many in the community believing that this is a good testing ground before considering the problem of climate prediction. The new research articles mostly focus on what we have dubbed Medium or Soft AI, looking to improve parametrization schemes of weather and climate models or to optimize existing kernels. There are several papers concerning the problem of data assimilation, an important and expensive part of weather prediction. Contributions also take on the subject of probabilistic forecasting, a key approach on all timescales, as well as the exciting topic of machine learning for post-processing (please see Haupt [39] in this issue for an overview of post-processing approaches).

Data accessibility

This article has no additional data.

Authors' contributions

Each author contributed equally to the writing of this introductory paper.

Competing interests

We declare we have no competing interests.

Funding

We gratefully acknowledge generous funding from the sponsors of the Machine Learning for Weather and Climate Modelling workshop, namely the ESiWACE-2 Centre of Excellence, the Oxford Martin School, the Office of Naval Research, Amazon, Vulcan Inc., the Copernicus Atmosphere Monitoring Service (CAMS), the Copernicus Climate Change Service (C3S), and NVIDIA. The ESiWACE-2 project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 823988. Tim Palmer acknowledges funding under the ERC Advanced Grant 741112 ITHACA.

References

  • 1. Harper K, Uccellini LW, Kalnay E, Carey K, Morone L. 2007. 50th anniversary of operational numerical weather prediction. Bull. Am. Meteorol. Soc. 88, 639–650. (doi:10.1175/BAMS-88-5-639)
  • 2. Prudden R, Adams S, Kangin D, Robinson N, Ravuri S, Mohamed S, Arribas A. 2020. A review of radar-based nowcasting of precipitation and applicable machine learning techniques. (http://arxiv.org/abs/2005.04988)
  • 3. Yang J, Wang Z, Heymsfield AJ, French JR. 2016. Characteristics of vertical air motion in convective clouds. Atmos. Chem. Phys. Discuss. 2016, 1–48.
  • 4. Bauer P, Thorpe A, Brunet G. 2015. The quiet revolution of numerical weather prediction. Nature 525, 47–55. (doi:10.1038/nature14956)
  • 5. Waliser D et al. 2009. MJO simulation diagnostics. J. Clim. 22, 3006–3030. (doi:10.1175/2008JCLI2731.1)
  • 6. Madden RA, Julian PR. 1971. Detection of a 40–50 day oscillation in the zonal wind in the tropical Pacific. J. Atmos. Sci. 28, 702–708.
  • 7. Vitart F et al. 2017. The subseasonal to seasonal (S2S) prediction project database. Bull. Am. Meteorol. Soc. 98, 163–173. (doi:10.1175/BAMS-D-16-0017.1)
  • 8. Pegion K et al. 2019. The subseasonal experiment (SubX). Bull. Am. Meteorol. Soc. 100, 2043–2060. (doi:10.1175/BAMS-D-18-0270.1)
  • 9. Neelin JD, Battisti DS, Hirst AC, Jin F-F, Wakata Y, Yamagata T, Zebiak SE. 1998. ENSO theory. J. Geophys. Res.: Oceans 103, 14261–14290. (doi:10.1029/97JC03424)
  • 10. Timmermann A et al. 2018. El Niño–Southern Oscillation complexity. Nature 559, 535–545. (doi:10.1038/s41586-018-0252-6)
  • 11. Johnson SJ et al. 2019. SEAS5: the new ECMWF seasonal forecast system. Geoscientific Model Dev. 12, 1087–1117. (doi:10.5194/gmd-12-1087-2019)
  • 12. Palmer T, Stevens B. 2019. The scientific challenge of understanding and estimating climate change. Proc. Natl Acad. Sci. USA 116, 24390–24395. (doi:10.1073/pnas.1906691116)
  • 13. Sønderby CK, Espeholt L, Heek J, Dehghani M, Oliver A, Salimans T, Agrawal S, Hickey J, Kalchbrenner N. 2020. MetNet: a neural weather model for precipitation forecasting. (http://arxiv.org/abs/2003.12140)
  • 14. Dueben PD, Bauer P. 2018. Challenges and design choices for global weather and climate models based on machine learning. Geoscientific Model Dev. 11, 3999–4009. (doi:10.5194/gmd-11-3999-2018)
  • 15. Scher S, Messori G. 2019. Weather and climate forecasting with neural networks: using general circulation models (GCMs) with different complexity as a study ground. Geoscientific Model Dev. 12, 2797–2809. (doi:10.5194/gmd-12-2797-2019)
  • 16. Weyn JA, Durran DR, Caruana R. 2019. Can machines learn to predict weather? Using deep learning to predict gridded 500-hPa geopotential height from historical weather data. J. Adv. Model. Earth Syst. 11, 2680–2693. (doi:10.1029/2019MS001705)
  • 17. Chattopadhyay A, Nabizadeh E, Hassanzadeh P. 2020. Analog forecasting of extreme-causing weather patterns using deep learning. J. Adv. Model. Earth Syst. 12, e2019MS001958. (doi:10.1029/2019MS001958)
  • 18. Kuznetsov VD, Sinelnikov VM, Alpert SN. 2015. Yakov Alpert: Sputnik-1 and the first satellite ionospheric experiment. Adv. Space Res. 55, 2833–2839. (doi:10.1016/j.asr.2015.02.033)
  • 19. Nooteboom PD, Feng QY, López C, Hernández-García E, Dijkstra HA. 2018. Using network theory and machine learning to predict El Niño. Earth System Dyn. 9, 969–983. (doi:10.5194/esd-9-969-2018)
  • 20. Ham Y-G, Kim J-H, Luo J-J. 2019. Deep learning for multi-year ENSO forecasts. Nature 573, 568–572. (doi:10.1038/s41586-019-1559-7)
  • 21. Norris JR, Allen RJ, Evan AT, Zelinka MD, O'Dell CW, Klein SA. 2016. Evidence for climate change in the satellite cloud record. Nature 536, 72–75. (doi:10.1038/nature18273)
  • 22. Vannitsem S et al. 2020. Statistical postprocessing for weather forecasts: review, challenges and avenues in a big data world. (http://arxiv.org/abs/2004.06582)
  • 23. Hewson TD, Pillosu FM. 2020. A new low-cost technique improves weather forecasts across the world. (http://arxiv.org/abs/2003.14397)
  • 24. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  • 25. Leinonen J, Nerini D, Berne A. 2020. Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network. (http://arxiv.org/abs/2005.10374)
  • 26. Palmer TN. 2019. Stochastic weather and climate models. Nat. Rev. Phys. 1, 463–471. (doi:10.1038/s42254-019-0062-2)
  • 27. Monin AS, Obukhov AM. 1954. Basic laws of turbulent mixing in the surface layer of the atmosphere. Contrib. Geophys. Inst. Acad. Sci. USSR 151, e187.
  • 28. Krasnopolsky V. 1997. A neural network forward model for direct assimilation of SSM/I brightness temperatures into atmospheric models. Research Activities in Atmospheric and Oceanic Modelling.
  • 29. Chevallier F, Chéruy F, Scott NA, Chédin A. 1998. A neural network approach for a fast and accurate computation of a longwave radiative budget. J. Appl. Meteorol. 37, 1385–1397.
  • 30. Brenowitz ND, Bretherton CS. 2018. Prognostic validation of a neural network unified physics parameterization. Geophys. Res. Lett. 45, 6289–6298. (doi:10.1029/2018GL078510)
  • 31. Gagne DJ, Christensen HM, Subramanian AC, Monahan AH. 2020. Machine learning for stochastic parameterization: generative adversarial networks in the Lorenz '96 model. J. Adv. Model. Earth Syst. 12, e2019MS001896. (doi:10.1029/2019MS001896)
  • 32. Rasp S, Lerch S. 2018. Neural networks for postprocessing ensemble weather forecasts. Mon. Weather Rev. 146, 3885–3900. (doi:10.1175/MWR-D-18-0187.1)
  • 33. Grönquist P, Yao C, Ben-Nun T, Dryden N, Dueben P, Li S, Hoefler T. 2020. Deep learning for post-processing ensemble weather forecasts.
  • 34. Abadi M et al. 2015. TensorFlow: large-scale machine learning on heterogeneous systems. Software available from http://tensorflow.org/.
  • 35. Chollet F et al. 2015. Keras. https://keras.io.
  • 36. Ott J, Pritchard M, Best N, Linstead E, Curcic M, Baldi P. 2020. A Fortran-Keras deep learning bridge for scientific computing. (http://arxiv.org/abs/2004.10652)
  • 37. Rasp S, Dueben PD, Scher S, Weyn JA, Mouatadid S, Thuerey N. 2020. WeatherBench: a benchmark dataset for data-driven weather forecasting.
  • 38. Hatfield S, Chantry M, Dueben P, Palmer T. 2019. Accelerating high-resolution weather models with deep-learning hardware. In Proc. of the Platform for Advanced Scientific Computing Conference, pp. 1–11.
  • 39. Haupt SE, Chapman W, Adams SV, Kirkwood C, Hosking JS, Robinson NH, Lerch S, Subramanian AC. 2021. Towards implementing artificial intelligence post-processing in weather and climate: proposed actions from the Oxford 2019 workshop. Phil. Trans. R. Soc. A 379, 20200091. (doi:10.1098/rsta.2020.0091)


