
Periodic Reporting for period 1 - COCLIMAT (Fusion of Alternative Climate Models By Dynamical Synchronization)

Summary

Climate models of the sort used by the Intergovernmental Panel on Climate Change (IPCC) all predict global warming over the next century, but differ widely in their detailed predictions for any specific region of the globe. The state of the art is simply to run the models separately and form a weighted average of their outputs. A new approach, first put forward by the PI, is that of “supermodeling”: instead of merely averaging the outputs of the models, the models are allowed to influence each other at run time. One must specify how much weight a given model gives to corresponding data in each other model. In a supermodel, these weights, or connection coefficients, are determined by a machine learning algorithm. That is, one would use a collection of historical data to train the connections in the supermodel, so that the most reliable dynamical features of each model would be combined.

Supermodeling is an instance of chaos synchronization, the phenomenon wherein chaotic systems can be made to follow corresponding trajectories by exchanging surprisingly little information. In a supermodel, the constituent models synchronize, at least partially, with one another as well as with reality during the training phase. In the free-running phase, the models remain partially synchronized with one another, with a common attractor that is expected to resemble the true attractor, even as parameters such as greenhouse gas levels are changed both in the true system and in the supermodel.
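
The run-time coupling described above can be illustrated with a toy example. The sketch below is illustrative only, not the SPEEDO or ECHAM configuration: two imperfect Lorenz-63 systems stand in for the climate models, the parameter errors are invented, and the single uniform connection coefficient `c` is an assumption in place of trained, per-variable weights. With the coupling switched on, the two models partially synchronize onto a common trajectory; with it off, each wanders over its own attractor independently.

```python
# Toy supermodel: two imperfect Lorenz-63 models nudged toward each other.
# Illustrative parameters only; a real supermodel trains per-variable weights.

def lorenz(state, sigma, rho, beta):
    """Right-hand side of the Lorenz-63 equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def euler_step(state, deriv, dt):
    return tuple(s + dt * d for s, d in zip(state, deriv))

def mean_model_distance(c, dt=0.01, steps=5000, tail=1000):
    """Integrate two imperfect models coupled with connection coefficient c;
    return the mean inter-model distance over the last `tail` steps."""
    m1 = (1.1, 0.9, 1.0)
    m2 = (-0.9, 1.1, 1.2)
    dists = []
    for i in range(steps):
        d1 = lorenz(m1, 13.0, 28.0, 8.0 / 3.0)   # model 1: sigma too large
        d2 = lorenz(m2, 10.0, 26.0, 8.0 / 3.0)   # model 2: rho too small
        # Supermodel coupling: each model is nudged toward the other,
        # with strength set by the connection coefficient c.
        d1 = tuple(d + c * (o - s) for d, s, o in zip(d1, m1, m2))
        d2 = tuple(d + c * (o - s) for d, s, o in zip(d2, m2, m1))
        m1 = euler_step(m1, d1, dt)
        m2 = euler_step(m2, d2, dt)
        if i >= steps - tail:
            dists.append(sum((a - b) ** 2 for a, b in zip(m1, m2)) ** 0.5)
    return sum(dists) / len(dists)

# With coupling the models nearly agree; without it they diverge chaotically.
coupled = mean_model_distance(c=10.0)
uncoupled = mean_model_distance(c=0.0)
```

In an actual supermodel, the scalar `c` would be replaced by a set of trained connection coefficients, one per exchanged variable, and the training signal would come from historical observations rather than a designated "truth" run.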

To further develop the supermodel method, it was proposed to A) focus on the representation of coherent structures, such as the Atlantic Meridional Overturning Circulation (AMOC), in the supermodel, rather than the raw field variables; B) develop algorithms to train the supermodel that resemble human learning and that do not depend on cost functions that are expensive to compute; and C) apply the supermodel to predict modes of climate variability on inter-decadal time scales.

Work performed

Research under the COCLIMAT program was conducted using a supermodel constructed from the SPEEDO coupled climate model and the previously developed ECHAM-based (COSMOS) supermodel.

The focus was on atmospheric structures and atmospheric extreme events. First, the blocked-zonal index cycle was examined in two versions of the SPEEDO model and in a supermodel formed from the two versions. Blocking patterns are of interest because they are associated with extreme events (droughts, floods, high temperatures, etc.) downstream of the block. However, the supermodel did not appear to represent the cycle in the “true” model better than did the separate models. Then rainfall extremes were examined directly by looking at the 95th percentile of local rainfall values in the supermodel and in the separate models. It was found that the supermodel indeed gave a superior representation of the 95th-percentile values, and presumably also of the coherent convective activity associated with extreme rainfall. (This work was in collaboration with KNMI, Netherlands.)

Since computationally inexpensive, synchronization-based methods worked well to define connection parameters in the SPEEDO supermodel, research was conducted to show that standard methods for data assimilation and synchronization-based parameter estimation are suitable for neural networks with general connection schemes and are thus plausibly human-like. It was shown that the recently developed FORCE scheme for training networks with fully general inter-neuron connection patterns is equivalent to a variant of Kalman filtering, the method of choice in operational weather prediction for assimilating new observational data into predictive models. It is therefore expected that a variety of data assimilation methods already in use, and which can be applied in supermodeling, can also be applied to neural network training and that some are biologically plausible.
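
The synchronization-based estimation referred to above can be sketched in a toy setting. The example below is an assumption-laden illustration, not the SPEEDO training procedure: a Lorenz-63 model is nudged toward "observations" from a true system, and its mismatched rho parameter is adapted in proportion to the instantaneous synchronization error, so no cost function is ever evaluated. The gain values `k` and `delta` are invented for the sketch.

```python
# Synchronization-based parameter estimation on Lorenz-63 (toy sketch).
# A model nudged toward "observations" adapts its rho parameter using the
# instantaneous sync error; no cost function is evaluated.

def lorenz(state, sigma, rho, beta):
    """Right-hand side of the Lorenz-63 equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def euler_step(state, deriv, dt):
    return tuple(s + dt * d for s, d in zip(state, deriv))

def estimate_rho(rho0=20.0, k=10.0, delta=0.5, dt=0.005, steps=40000):
    """Recover the true rho (28) starting from a wrong guess rho0."""
    truth = (1.0, 1.0, 1.0)          # plays the role of observed reality
    model = (2.0, -1.0, 5.0)
    rho = rho0
    for _ in range(steps):
        dtr = lorenz(truth, 10.0, 28.0, 8.0 / 3.0)
        dm = lorenz(model, 10.0, rho, 8.0 / 3.0)
        # Nudge every model variable toward the corresponding observation.
        dm = tuple(d + k * (t - m) for d, t, m in zip(dm, truth, model))
        # Adaptation law: move rho along the sync error in y times
        # d(dy/dt)/d(rho) = x, i.e. a gradient-like rule with no cost function.
        rho += dt * delta * (truth[1] - model[1]) * model[0]
        truth = euler_step(truth, dtr, dt)
        model = euler_step(model, dm, dt)
    return rho

rho_hat = estimate_rho()
```

The same error-times-sensitivity structure carries over when the adapted quantities are the connection coefficients of a supermodel, or the weights of a recurrent neural network, which is the sense in which data assimilation and network training are related here.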

In regard to inter-decadal climate projection, the SPEEDO supermodel was shown to give correct projections for increased greenhouse gas (GHG) levels, even when trained with GHGs at their present levels. But SPEEDO is unrealistic in not capturing known modes of variability in the climate system, such as the El Niño-Southern Oscillation (ENSO) cycle. Therefore the focus was on explaining how the more realistic but less fully developed ECHAM-based supermodel can give a qualitatively correct mean state, even where the two constituent models give the same sort of qualitative error in the mean state, a double Inter-Tropical Convergence Zone (ITCZ). Based on investigations of the wind and ocean circulation patterns in the primitive supermodel as weights were varied, a mechanism was proposed to show how the double ITCZ could arise in the two constituent models for opposite reasons. It is expected that the same behavior would occur in a fully connected supermodel.

Final results

Significant socio-economic gains may be made if uncertainties in the projection of future climate for increased GHG levels are reduced. The supermodeling strategy has the potential to significantly reduce systematic model error, long before that could occur through the slow but essential process of model improvement. In particular, the project has demonstrated the potential of supermodeling to predict the statistics of extreme events more accurately than any of the individual models or any ex post facto combination of model outputs. And of course, projection of extreme climate behavior is central to the socio-economic benefits of climate projection.

Outside of climate science, supermodeling will provide a generic approach for modeling in complex application domains where different expert models are available. One can envision applications to complex biological, social, economic, and environmental processes, in situations where there are a small number of competing models. Toward the promotion of supermodeling, the PI was the lead guest editor for a recent Focus Issue of Chaos (1).

The question of whether supermodeling is useful when the constituent models make similar errors has been squarely addressed, and answered in the affirmative, both empirically as in previous work, and with theoretical understanding of how the errors common to the different models might be corrected in the supermodel. The situation of common error among the world-class climate models currently in use is typical, so the result that a supermodel can surpass combinations of these models in regard to such errors is important.

The demonstrated relationship between data assimilation and learning in neural networks (possibly including biological networks) has larger implications for machine learning, since a very large assortment of methods for data assimilation have been explored, and heretofore not linked to the problem of training neural networks. A picture of brain function emerges in which semi-autonomous components assimilate data from each other, realizing a form of self-perception. Mutual benefits for cognitive science and computational science will ensue.

(1) Duane, G.S., Grabow, C., Selten, F., and Ghil, M., Introduction to focus issue: Synchronization in large networks and continuous media - data, models, and supermodels, Chaos 27, 126601 (2017)

Website & more info

More info: http://www.uib.no/en/persons/Gregory.Duane.