Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences. 2022 Jun 20; 380(2229): 20210213. doi: 10.1098/rsta.2021.0213

Data-driven prediction in dynamical systems: recent developments

Amin Ghadami and Bogdan I. Epureanu
PMCID: PMC9207538  PMID: 35719077

Abstract

In recent years, we have witnessed a significant shift toward ever-more complex and ever-larger-scale systems in the majority of the grand societal challenges tackled in applied sciences. The need to comprehend and predict the dynamics of complex systems has spurred developments in large-scale simulations and a multitude of methods across several disciplines. The goals of understanding and prediction in complex dynamical systems, however, have been hindered by high dimensionality, complexity and chaotic behaviours. Recent advances in data-driven techniques and machine-learning approaches have revolutionized how we model and analyse complex systems. The integration of these techniques with dynamical systems theory opens up opportunities to tackle previously unattainable challenges in modelling and prediction of dynamical systems. While data-driven prediction methods have made great strides in recent years, it is still necessary to develop new techniques to improve their applicability to a wider range of complex systems in science and engineering. This focus issue shares recent developments in the field of complex dynamical systems with emphasis on data-driven, data-assisted and artificial intelligence-based discovery of dynamical systems.

This article is part of the theme issue 'Data-driven prediction in dynamical systems'.

Keywords: data-driven prediction, model discovery, dynamical systems

1. Introduction

Dynamical systems play a key role in deepening our understanding of the physical world. Forecasting the future state of a dynamical system is a critical need that spans many disciplines, ranging from climate, ecology and biology to traffic and finance [1–5]. Predicting complex dynamics is the foundation of the subsequent design, control and improvement of physical systems.

Conventionally, prediction in dynamical systems is done by first creating a model of the system, derived from first principles or experimental data. In this approach, the mathematical structure of a model is established, and observational data are used to inform it [6–8]. However, the increased complexity of systems poses challenges to formalizing models, even roughly accurate ones, for modern dynamical systems. Physical phenomena are very complex in many real-world systems, and they are not amenable to classical model-based approaches due to the computational size of these problems and the lack of accurate models. In those cases, it is tempting to resort to data-driven approaches to model, analyse and predict dynamical systems entirely from data.

The problem of characterizing and forecasting in dynamical systems dates back to the 1980s and 1990s, when dynamicists developed methods to analyse time series recorded from nonlinear and chaotic systems [9–12]. These frameworks rely on the pioneering work of Takens [13], which introduced the concept of reconstructing the state space of the system from which the data are sampled, and then studying the system in this new representation. This development was the basis for the subsequent development of non-parametric methods for time series forecasting [14–21].
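As a concrete illustration of this reconstruction idea, the short sketch below (Python with NumPy; the function name and parameter choices are ours, for illustration only) builds delay-coordinate vectors from a scalar time series in the spirit of Takens' embedding:

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Stack delayed copies of a scalar series into state-space vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# A harmonic signal reconstructs to a closed loop in two delay coordinates.
t = np.linspace(0, 20 * np.pi, 2000)
X = delay_embed(np.sin(t), dim=2, tau=25)
print(X.shape)  # (1975, 2)
```

For chaotic data, the embedding dimension and delay are typically chosen with heuristics such as false nearest neighbours and mutual information.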

Thanks to recent improvements in storage and computing power, more experimental, observational and simulated data are now available. Consequently, data-driven approaches to study dynamical systems have attracted increasing attention in recent years and can now tackle previously unattainable challenges in modelling and prediction of complex systems in a variety of fields [22–24]. Extracting dynamical behaviours from data has been one of the main research goals in recent years [25–30]. This line of research has focused on analysis of data recorded from system dynamics to reveal many of the desired properties of the system at hand. Examples include data-driven identification of transfer operators and their eigenvalues, eigenfunctions and eigenmodes [31,32], data-driven identification of coherent structures in dynamical systems [33–38], and bifurcation forecasting methods for data-driven nonlinear stability analysis of dynamical systems [39–44].

The generation of physical and mathematical models from experimental data is another field of study that has risen to prominence as computers have become faster, more powerful and more affordable [45]. Today's systems are complex and large, often with a massive number of unknown parameters, which makes prediction and the design of control policies to achieve a desired behaviour a challenge. As a result, developing simple and tractable data-driven system identification methods from a limited number of recorded observations has been the motivation of recent research. These methods focus on the discovery of dynamical systems from high-dimensional data [46–55] and making predictions of system dynamics based on the identified model. Examples include data-driven identification methods based on nonlinear regression [56], empirical dynamic modelling [57], normal form methods [58], nonlinear Laplacian spectral analysis [59], eigensystem realization algorithms [60], dynamic mode decomposition (DMD) [52,54] and artificial neural networks [61,62]. For cases where a latent differential equation is believed to exist, data-driven discovery techniques have been developed to identify the best fit to data based on pre-defined equations [63–65]. More recently, compressed sensing and sparse regression in a library of candidate models have also been proposed to identify dynamical systems [66–70] and partial differential equations [55], providing the best trade-off between model complexity and predictive accuracy.
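As a toy illustration of sparse regression in a candidate library, the following sketch applies sequentially thresholded least squares to noise-free data (our own minimal example; the function name, threshold and library are illustrative choices, not taken from the cited works):

```python
import numpy as np

def stls(Theta, dxdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: sparse Xi with Theta @ Xi ~ dxdt."""
    Xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0                      # prune negligible library terms
        big = ~small
        if big.any():                        # refit on the surviving terms
            Xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]
    return Xi

# Noise-free data from dx/dt = -2x, with candidate library [1, x, x^2].
x = np.linspace(0.1, 2.0, 200)
dxdt = -2.0 * x
Theta = np.column_stack([np.ones_like(x), x, x**2])
Xi = stls(Theta, dxdt)
print(Xi)  # ~ [0, -2, 0]: the sparse fit recovers dx/dt = -2x
```

The sparsity-promoting loop is what trades model complexity against predictive accuracy: only library terms that materially reduce the misfit survive.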

Despite the widespread availability of measurement and simulation data, the sheer size, complexity and dimensionality of modern datasets pose their own challenges. Accurate and computationally tractable prediction in dynamical systems requires the development of low-dimensional models that are fast to solve but approximate well the underlying high-resolution dynamics. This has led to a growing need for dimensionality reduction and reduced-order modelling techniques, and made these an inseparable part of data-driven prediction algorithms [71]. Based on data collected either from simulation or from experiments, reduced-order techniques identify an alternate space with comparatively few active variables and/or coordinates in which the dynamical evolution is tractable. The literature on reduced-order modelling is quite mature. The most celebrated model reduction technique is the proper orthogonal decomposition (POD) [72–76] and its variants, such as local and global POD methods [77,78], adaptive POD methods [79,80], double POD methods [81,82] and the gappy POD method [83], to name a few. In addition to POD, a variety of other methods have been developed for improved accuracy and computational efficiency, motivated by different applications. Examples include balanced truncation [84–86], balanced POD [87], Krylov subspace methods [88], missing point estimation [89] and trajectory-based optimization for oblique projections [90]. Dynamic reduced-order models are another development in this field; they exploit the opportunity presented by dynamic sensor data and adaptively incorporate sensor data during the online phase [91]. One common feature among these approaches is that, using numerical simulations or experimental measurements, they identify a few coordinates with high variance or energy that significantly influence the dynamics. If governing equations are available, Galerkin or Petrov–Galerkin projection [72,74,92,93] can then be used to create a reduced model using the identified few modes that contain the majority of the system's energy. Recently, fully data-driven techniques, particularly DMD methods [52,54,94], supported by Koopman operator theory [95,96], have been introduced as both system identification and model reduction techniques, and have received growing attention in a variety of fields. These algorithms were proposed to extract dynamically relevant features from time-resolved experimental or numerical data without explicit knowledge of the dynamical operator. They construct a best-fit linear operator that advances spatio-temporal snapshots forward in time. In addition, advances in machine learning techniques have resulted in the use of autoencoders that aim to learn the identity mapping through a dimensionality reduction procedure comprising data compression and subsequent recovery [97–100]. Such approaches have made the problem of nonlinear reduced-order modelling tractable, which was otherwise a challenging task.
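The best-fit linear operator at the heart of DMD can be sketched in a few lines (the standard exact-DMD recipe on a toy dataset of our own; the rank and variable names are illustrative):

```python
import numpy as np

def dmd(X, r):
    """Exact DMD: rank-r best-fit linear operator advancing snapshots in time."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)  # projected operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W            # DMD modes
    return eigvals, modes

# Snapshots of a decaying oscillation, x_{k+1} = 0.99 R(0.3) x_k.
t = np.arange(100)
X = np.array([np.cos(0.3 * t) * 0.99**t, np.sin(0.3 * t) * 0.99**t])
eigvals, modes = dmd(X, r=2)
print(np.abs(eigvals))  # both ~ 0.99, the per-step decay rate
```

The eigenvalues encode growth/decay rates and frequencies, and the modes give the associated spatial structures, all without explicit knowledge of the dynamical operator.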

With the aid of more powerful hardware, computational models and algorithms, recent research on data-driven prediction has been heavily influenced by machine learning algorithms, particularly neural networks [101,102]. These methods are used to make predictions in all kinds of time series [103–106], and have achieved significant success thanks to the abundance of data. In addition, the combination of traditional methods and deep learning techniques has received significant attention and resulted in breakthroughs [107–111]. The use of neural networks for identification and prediction of dynamical systems dates back more than three decades [112,113]. One of the most important early results in the field of predicting dynamics using neural networks came from Herbert Jaeger and Harald Haas [108], where a particular kind of neural network, called a reservoir computer, was used to forecast the Mackey–Glass dynamical system. In recent years, a variety of innovative machine learning techniques for system identification and prediction have been proposed. Among those, recurrent neural networks (RNNs) [114,115] and long short-term memory (LSTM) networks [116–118] are most commonly used for time series forecasting. Other approaches include data-driven discovery using deep neural networks (DNNs) [61], ordinary differential equation networks (ODENet) [107], deep residual learning [110], reservoir computing [119], convolutional neural networks [120], deep Koopman methods [121,122] and the recently introduced deep operator networks (DeepONet) for learning continuous nonlinear operators from data [111]. These approaches provide a neural network model that mimics system dynamics when little or no physical knowledge is available but data are abundant. However, the learned models are likely not to generalize well in regions where training data are scarce or do not exist.
This challenge motivated the development of another group of methods, called physics-informed data-driven approaches, which model governing equations by encoding the differential equations describing physical processes into the neural network's loss function to penalize the network if it does not obey the physics [109,123]. By exploiting fundamental physical laws, the amount of data needed to obtain good performance is much smaller than for purely data-driven methods, and the trained networks approximately follow the physical laws. Subsequent work on these approaches produced network architectures that incorporate assumptions about the nature of the underlying physical laws in the neural network topology for improved training on particular physical systems. Examples include Hamiltonian neural networks [124,125] and symplectic networks (SympNets) [126] for learning Hamiltonian systems, and deep Lagrangian networks [127] for learning Lagrangian systems while ensuring physical plausibility.
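The essence of the physics-informed loss, a data misfit plus a penalty on the equation residual at collocation points, can be sketched without a neural network at all. In the toy below (entirely our own construction, not any cited architecture), a cubic polynomial stands in for the network, the "physics" is the ODE u' = -u, and the combined objective is solved in one least-squares step:

```python
import numpy as np

# Sparse "measurements" of u(t) = exp(-t), plus collocation points for the physics.
t_data = np.array([0.0, 1.0])
u_data = np.exp(-t_data)
t_col = np.linspace(0.0, 1.0, 50)

powers = np.arange(4)                                    # cubic ansatz sum c_k t^k
A_data = t_data[:, None] ** powers                       # rows enforce u(t_i) = data
A_res = (powers * t_col[:, None] ** np.clip(powers - 1, 0, None)
         + t_col[:, None] ** powers)                     # rows enforce u' + u = 0
A = np.vstack([10.0 * A_data, A_res])                    # weight the scarce data rows
b = np.concatenate([10.0 * u_data, np.zeros(len(t_col))])
c = np.linalg.lstsq(A, b, rcond=None)[0]

u_half = np.polyval(c[::-1], 0.5)
print(u_half)  # close to exp(-0.5) ~ 0.61: the physics fills the gap between data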

Energized by the success of these generic methods, interest has grown in incorporating data-driven methods to tackle previously unattainable challenges in modelling and prediction of dynamical systems in a variety of applications. Examples include, but are not limited to, health sciences [128–132], biology [133–135], epidemiology and infectious disease [136,137], ecology and climate [138–142], financial markets [143], fluid dynamics [96,144–146], aeroelasticity [147–149], solid mechanics and materials [150–153], energy [154–156], transportation [157–161] and heat transfer [162–164]. Data-driven prediction methods have also offered a solution to the formidable challenge of predicting catastrophic events in a variety of complex systems. Recent studies have shown that features extracted from data can be used to predict critical transitions [165] and extreme events [166] in the dynamics of a variety of complex systems, including aeroelastic systems [39,40,44,167,168], ecological systems [51,169–175], epidemiological systems [176–179], traffic flow systems [158,180] and fluid flows [166,181–183].
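One classic, generic signature behind many such early-warning predictions is critical slowing down: as a transition nears, perturbations decay more slowly, which shows up as rising lag-1 autocorrelation in sliding windows of a measured time series. A self-contained toy (ours, not reproducing any specific cited method):

```python
import numpy as np

# AR(1) process whose stability margin erodes linearly toward a transition.
rng = np.random.default_rng(0)
n = 4000
x = np.zeros(n)
for k in range(1, n):
    lam = 0.99 * k / n                     # drifts from 0 toward the critical value 1
    x[k] = lam * x[k - 1] + rng.standard_normal()

def lag1_autocorr(w):
    return np.corrcoef(w[:-1], w[1:])[0, 1]

early = lag1_autocorr(x[:1000])
late = lag1_autocorr(x[-1000:])
print(early, late)  # autocorrelation rises markedly as the transition nears
```

Real applications combine such indicators with richer features and models, but the underlying intuition is this loss of resilience near a bifurcation.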

In light of the current abundance of sensor data and advances in data-driven and machine learning techniques, it is inevitable that data-driven techniques will dominate the future of dynamical systems research in tackling the modelling, prediction and control challenges facing science and engineering. While data-driven prediction methods have made great strides, it is still necessary to develop new and improved techniques to enhance their effectiveness and applicability to a wider range of complex systems in nature and engineering. This special issue provides a platform to share recent developments in the field of data-driven, data-assisted and artificial intelligence-based discovery of dynamical systems. The focus issue includes 15 papers addressing timely topics in the field. In the following, a brief overview of each paper is presented.

2. The general content of the issue

Data-driven dimensionality reduction techniques are a crucial part of analysis in high-dimensional complex dynamical systems. Traditional linear dimensionality reduction techniques are not suitable for problems whose underlying governing equations are characterized by local solution features that evolve with time. To address this challenge, localization-based reduced-order modelling techniques have been developed in which multiple local approximation spaces are tailored to different regions of the state space. Existing approaches, however, require access to high-fidelity models or codes to compute the projected reduced-order operators, which limits their applicability. Geelen & Willcox [184] present data-driven learning of localized reduced models, combining state-of-the-art localization techniques with a non-intrusive model reduction formulation and leading to a flexible framework for reduction of large-scale nonlinear problems. The proposed approach is particularly important for reduced models of nonlinear systems of partial differential equations, where the solution may be characterized by different physical regimes or exhibit high sensitivity to parameter variations.

POD-based reduced-order modelling techniques are effective for systems in which the most reachable and the most observable directions are aligned, which is not the case in many systems, including most transport problems. Balanced truncation methods can offer a solution to this shortcoming of standard projection techniques. Applying balanced truncation to stiff systems with lightly damped (slowly decaying) impulse responses is challenging, however, because it can trigger instabilities and intensify the method's sensitivity to sampling properties. To address the realizability and scalability of balanced truncation applied to highly stiff and lightly damped systems, Rezaian et al. [185] introduce a non-intrusive data-driven method for balancing discrete-time systems via the eigensystem realization algorithm (ERA). Using ERA for the balancing transformation makes full-state outputs tractable and enables balancing despite stiffness by eliminating the computation of balancing modes and adjoint simulations.

Developing reduced-order models for mechanical systems is also an important topic in engineering. For instance, experiments on nonlinear mechanical vibrations highlight the growing importance of capturing amplitude-dependent properties and competing steady-state solutions. While data-driven model reduction techniques are well established for linearizable mechanical systems, general approaches to reducing nonlinearizable systems with multiple coexisting steady states have been unavailable. Cenedese et al. [186] discuss a new data-driven reduced-order modelling approach in the context of mechanical vibrations, which is dynamics-based rather than physics-informed. Built on the recent theory of spectral submanifolds (SSMs), this approach identifies very low-dimensional, sparse models over different time scales by restricting the full system dynamics to a nested family of attractors. This approach constructs normal forms on attracting SSMs, which are the smoothest nonlinear continuations of spectral subspaces of the linearized dynamics.

DMD techniques are data-driven reduced-order modelling and system identification methods that have been broadly used in the scientific community due to their ease of use, interpretability and adaptability. These methods provide a data-driven regression architecture for adaptively learning linear dynamics models over snapshots of temporal data. The majority of classical DMD algorithms, however, are prone to bias errors from noisy measurements of the dynamics, leading to poor model fits and unstable forecasting capabilities. Sashidhar & Kutz [187] introduce an optimized DMD method, called bagging optimized dynamic mode decomposition (BOP-DMD), which uses statistical bagging to improve the performance of DMD. Unlike currently available DMD algorithms, BOP-DMD provides a stable and robust model for probabilistic or Bayesian forecasting with comprehensive uncertainty quantification metrics.
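The bagging ingredient in isolation can be sketched as follows (a schematic of ours, not the authors' BOP-DMD algorithm, which combines bagging with an optimized DMD fit rather than the plain pseudo-inverse regression used here): fit a best-fit linear operator on bootstrapped snapshot pairs and report ensemble statistics of its eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)
# Noisy snapshots of a pure rotation, x_{k+1} = R(0.3) x_k.
X = np.array([np.cos(0.3 * t), np.sin(0.3 * t)]) + 0.01 * rng.standard_normal((2, 200))

pairs = np.arange(X.shape[1] - 1)
eig_samples = []
for _ in range(50):
    idx = rng.choice(pairs, size=100, replace=True)   # bootstrap snapshot pairs
    A = X[:, idx + 1] @ np.linalg.pinv(X[:, idx])     # best-fit linear operator
    eig_samples.append(np.sort(np.abs(np.linalg.eigvals(A))))
eig_samples = np.array(eig_samples)

print(eig_samples.mean(axis=0))  # ensemble mean ~ [1, 1] for a pure rotation
print(eig_samples.std(axis=0))   # spread quantifies uncertainty from the noise
```

The ensemble mean stabilizes the point estimate, and the spread supplies the kind of uncertainty metric that probabilistic forecasting builds on.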

The effectiveness of reduced-order models depends on their design, so that they can capture the complexity of the underlying dynamics. However, identifying effective reduced-order models also depends on the information they rely on. This fundamental information in complex systems is provided by data, which are often expensive to obtain. Sapsis & Blanchard [188] introduce a criterion based on Gaussian process regression (GPR) for the most effective selection of data, or of the associated experiments that generate these data, for data-driven reduced-order modelling. In particular, an optimality condition for the selection of a new input is defined as the minimizer of the distance between the approximated output probability density function of the reduced-order model and the exact one, where the distance is defined as the supremum over the unit sphere of the native Hilbert space for the GPR.

To identify and predict system dynamics, physics-informed identification of dynamical systems has received growing attention in recent years. These methods aim to model governing equations by embedding physics into neural networks together with data, thereby mitigating the problem of learned models not extending well to regions where training data are lacking. One approach to imposing physics constraints on neural networks is to design network architectures that obey the underlying principles without additional optimization, yet maintain sufficient expressivity so that governing equations can be learned from data. Zhang et al. [189] provide a novel neural network surrogate model, called a GENERIC formalism informed neural network (GFINN), allowing flexible ways of leveraging available physical information in neural networks. The imposed physics is based on the GENERIC (General Equation for Non-Equilibrium Reversible-Irreversible Coupling) formalism, providing conditions interpreted as the first and second principles of thermodynamics. This approach can handle inference and prediction of deterministic and stochastic irreversible thermodynamic processes using structured neural networks, while strictly preserving the thermodynamic constraints described by the GENERIC formalism.

When no knowledge about a physical system is available, deep learning methods require large amounts of data to obtain generalized solutions, which may pose a challenge for identification and prediction of many real-world dynamical systems. Learning spatio-temporal processes purely from data is an example of such challenges. Saha & Mukhopadhyay [190] address the problem of predicting complex, nonlinear spatio-temporal dynamics when data are recorded at irregularly spaced, sparse spatial locations. The proposed method does not assume any specific physical representation of the underlying dynamical system, and is applicable to spatio-temporal dynamical systems involving continuous state variables. The proposed approach is based on the radial basis function (RBF) collocation method, which is often used for the meshfree solution of partial differential equations. This framework enables unravelling the observed spatio-temporal function and learning the spatial interactions among data sites in the RBF space. The learned spatial features are then used to perform predictions at future time steps.
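To give a flavour of the RBF machinery (a generic interpolation toy of our own, not the authors' predictive framework): scattered measurements are expanded in radial kernels centred at the data sites, and the expansion weights encode the spatial structure that can then be evaluated meshfree anywhere in the domain. The site locations and shape parameter below are arbitrary choices.

```python
import numpy as np

# Scattered, irregularly spaced measurement sites on [0, 1].
sites = np.array([0.0, 0.09, 0.17, 0.26, 0.35, 0.43,
                  0.52, 0.61, 0.69, 0.78, 0.87, 1.0])
values = np.sin(2 * np.pi * sites)          # field sampled at the sites
eps = 8.0                                   # Gaussian shape parameter (our choice)

def phi(r):
    return np.exp(-(eps * r) ** 2)

K = phi(np.abs(sites[:, None] - sites[None, :]))   # kernel matrix between sites
w = np.linalg.solve(K, values)                     # weights encode spatial structure

def field(x):
    return phi(np.abs(x - sites)) @ w              # meshfree evaluation anywhere

print(field(sites[5]), values[5])  # interpolant reproduces the data at the sites
```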

Most existing data-driven system identification techniques depend heavily on the quality of the available observations, and learning dynamics from noisy and irregular observations is still a challenge. Bhouri & Perdikaris [191] present a machine learning framework (GPNODE) for Bayesian model discovery from partial, noisy and irregular observations of nonlinear dynamical systems. The proposed method takes advantage of differentiable programming to propagate gradient information through ordinary differential equation solvers, and performs Bayesian inference with respect to unknown model parameters using Hamiltonian Monte Carlo sampling and Gaussian process priors over the observed system states.

Though there is an increasing amount of data for complex systems, and a growing number of methods for discovering the laws governing dynamical systems, existing techniques focus mostly on deterministic systems or on stochastic systems with Gaussian noise. Lu et al. [192] present a new data-driven approach to extract stochastic governing laws with both (Gaussian) Brownian motion and (non-Gaussian) Lévy motion from short bursts of simulation data. The normalizing flow technique is used to estimate the transition probability density function from data, and to approximate the Lévy jump measure, drift coefficient and diffusion coefficient of a stochastic differential equation using non-local Kramers–Moyal formulae.

The closure problem in nonlinear dynamical systems is another key area in computational statistics. One of the difficulties in this statistical closure problem is the lack of training data. Qi & Harlim [193] propose a machine learning non-Markovian closure modelling framework for accurate predictions of statistical responses of turbulent dynamical systems subjected to external forcing. A unified model framework is proposed aiming to directly predict the leading-order statistical moments subjected to general external perturbations, with limited training data. In this work, motivated by practical issues in obtaining longer time series, short-time transient statistical sequences are considered for training.

Stability analysis of dynamical systems is a critical requirement in predicting the dynamics of complex systems. For systems exhibiting bifurcation instabilities, centre manifold theory in conjunction with the theory of normal forms offers generic and reduced forms for a variety of bifurcations. However, traditional methods to identify the dynamics on the centre manifold require an accurate model of the system as well as considerable algebra using the nonlinear system equations. Ghadami & Epureanu [194] present a deep learning structure that can uncover the system dynamics on the centre manifold for a class of systems prone to co-dimension one instabilities. Using random snapshots recorded from system dynamics before the onset of an instability, the method identifies whether the system is at risk of instabilities in its dynamics, and returns a closed-form model of the system dynamics on the centre manifold, facilitating stability analysis of large-dimensional dynamical systems from data.

Extending the state of the art in topology-based methods for nonlinear dynamical systems, Ghalyan et al. [195] focus on developing a robust machine learning method that makes use of topological invariants in data manifolds for robust pattern recognition and anomaly detection in dynamical systems. In the proposed approach, pattern recognition and anomaly detection are viewed from a topological perspective, where changes within a phase are described by topological transformations that preserve topological invariants, while changes between different phases imply changes in these invariants. The proposed approach is validated on models of selected chaotic dynamical systems for prompt detection of phase transitions.

Data-driven and data-assisted algorithms reduce the computation costs associated with traditional approaches in a variety of applications. The dynamics of dispersions of small particles in a fluid are an important example in many engineering and medical applications. While the dynamics of these particles can be simulated directly for a specific (sampled) dispersion by tracking each particle, applications typically seek distribution statistics, which demands expensive simulations. Quadrature-based moment methods (QBMMs) offer a low-cost approach to address this issue. However, these methods can exhibit numerical instabilities when high-order moments are evolved. Charalampopoulos et al. [196] propose a data-informed conditional hyperbolic quadrature method for statistical moments. This approach addresses previous challenges by training an RNN that adjusts the QBMM quadrature to evaluate unclosed moments with higher accuracy. The method is applied to the statistics of a population of spherical bubbles oscillating in response to time-varying randomized forcing, and may also be effectively applied to dynamical systems with non-Gaussian statistics where high-order moments are of interest.

Data-driven techniques offer a solution to many challenging engineering problems. McClellan et al. [197] propose a two-level, data-driven, digital twin concept for the autonomous landing of aircraft. The main purpose of the proposed approach is to predict the state of an aircraft, and the aerodynamic forces and moments acting on it, in real time. Unlike static lookup tables or regression-based surrogate models built on steady-state wind tunnel data, this real-time digital twin prototype allows the digital twin instance for model predictive control to be informed by a truly dynamic flight model, rather than a less accurate set of steady-state aerodynamic force and moment data points.

Numerical time-stepping algorithms to approximate solutions of nonlinear differential equations are key to analysing dynamical systems. Time-stepping schemes are typically based on Taylor series expansions that are local in time and have a numerical accuracy determined by the step size. Many systems characterized by multiscale physics exhibit dynamics over a vast range of timescales, making numerical integration expensive. Liu et al. [198] introduce a data-driven time-stepper framework that synthesizes multiple DNNs into hierarchical time-steppers (HiTSs) trained at multiple temporal scales. The proposed method explicitly takes advantage of dynamics on different scales by learning flow maps for those scales. The approach is shown to outperform neural networks trained at a single scale, providing an accurate and flexible approach for integrating nonlinear dynamical systems.

Acknowledgement

We would like to thank all the contributing authors who accepted our invitation and submitted their work to this focus issue. In addition, we are grateful to the expert reviewers who participated voluntarily in the review process.

Data accessibility

This article has no additional data.

Authors' contributions

A.G. and B.I.E.: conceptualization, writing—original draft.

All authors gave final approval for publication and agreed to be held accountable for the work performed therein.

Conflict of interest declaration

This theme issue was put together by the Guest Editor team under supervision from the journal's Editorial staff, following the Royal Society's ethical codes and best-practice guidelines. The Guest Editor team invited contributions and handled the review process. Individual Guest Editors were not involved in assessing papers where they had a personal, professional or financial conflict of interest with the authors or the research described. Independent reviewers assessed all papers. Invitation to contribute did not guarantee inclusion.

Funding

We received no funding for this study.

References

  • 1.Luo Y, Ogle K, Tucker C, Fei S, Gao C, LaDeau S, Clark JS, Schimel DS. 2011. Ecological forecasting and data assimilation in a data-rich era. Ecol. Appl. 21, 1429-1442. ( 10.1890/09-1275.1) [DOI] [PubMed] [Google Scholar]
  • 2.Mudelsee M. 2019. Trend analysis of climate time series: a review of methods. Earth Sci. Rev. 190, 310-322. ( 10.1016/j.earscirev.2018.12.005) [DOI] [Google Scholar]
  • 3.Andersen TG, Bollerslev T, Christoffersen P, Diebold FX. 2005. Volatility forecasting. Cambridge, MA: National Bureau of Economic Research. [Google Scholar]
  • 4.Stoffer DS, Ombao H. 2012. Special issue on time series analysis in the biological sciences. J. Time Series Anal. 33, 701-703. ( 10.1111/j.1467-9892.2012.00805.x) [DOI] [Google Scholar]
  • 5.Vlahogianni EI, Karlaftis MG, Golias JC. 2014. Short-term traffic forecasting: where we are and where we're going. Transp. Res. Part C Emerg. Technol. 43, 3-19. ( 10.1016/j.trc.2014.01.005) [DOI] [Google Scholar]
  • 6.Wang B, Zou X, Zhu J. 2000. Data assimilation and its applications. Proc. Natl Acad. Sci. USA 97, 11 143-11 144. ( 10.1073/pnas.97.21.11143) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Medeiros RR, Cesnik CES, Coetzee EB. 2020. Computational aeroelasticity using modal-based structural nonlinear analysis. AIAA J. 58, 362-371. ( 10.2514/1.J058593) [DOI] [Google Scholar]
  • 8.Debnath JK, Fung W-K, Gole AM, Filizadehc S. 2011. Simulation of large-scale electrical power networks on graphics processing units. In 2011 IEEE Electrical Power and Energy Conf., Winnipeg, Canada3 October, pp. 199-204. Piscataway, NJ: IEEE. [Google Scholar]
  • 9.Grassberger P, Schreiber T, Schaffrath C. 1991. Nonlinear time sequence analysis. Int. J. Bifurcation Chaos 1, 521-547. ( 10.1142/S0218127491000403) [DOI] [Google Scholar]
  • 10.Mindlin GM, Gilmore R. 1992. Topological analysis and synthesis of chaotic time series. Physica D 58, 229-242. ( 10.1016/0167-2789(92)90111-Y) [DOI] [Google Scholar]
  • 11.Abarbanel HDI, Brown R, Kadtke JB. 1990. Prediction in chaotic nonlinear systems: methods for time series with broadband Fourier spectra. Phys. Rev. A 41, 1782-1807. ( 10.1103/PhysRevA.41.1782) [DOI] [PubMed] [Google Scholar]
  • 12.Casdagli M. 1989. Nonlinear prediction of chaotic time series. Physica D 35, 335-356. ( 10.1016/0167-2789(89)90074-2) [DOI] [Google Scholar]
  • 13.Takens F. 1981. Detecting strange attractors in turbulence. In Dynamical systems and turbulence, Warwick 1980 (eds Rand D, Young L-S), pp. 366-381. Berlin, Germany: Springer. [Google Scholar]
  • 14.Sugihara G, May RM. 1990. Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series. Nature 344, 734-741. ( 10.1038/344734a0) [DOI] [PubMed] [Google Scholar]
  • 15.Kantz H, Schreiber T. 2004. Nonlinear time series analysis, vol. 7. Cambridge, UK: Cambridge University Press. [Google Scholar]
  • 16.Bradley E, Kantz H. 2015. Nonlinear time-series analysis revisited. Chaos: Interdisc. J. Nonlinear Sci. 25, 97610. ( 10.1063/1.4917289) [DOI] [PubMed] [Google Scholar]
  • 17.Deyle ER, Sugihara G. 2011. Generalized theorems for nonlinear state space reconstruction. PLoS ONE 6, e18295. ( 10.1371/journal.pone.0018295) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Sugihara G. 1994. Nonlinear forecasting for the classification of natural time series. Phil. Trans. R. Soc. Lond. A 348, 477-495. ( 10.1098/rsta.1994.0106) [DOI] [Google Scholar]
  • 19.Abarbanel H. 2012. Analysis of observed chaotic data. Berlin, Germany: Springer Science & Business Media. [Google Scholar]
  • 20.Ye H, Sugihara G. 2016. Information leverage in interconnected ecosystems: overcoming the curse of dimensionality. Science 353, 922-925. ( 10.1126/science.aag0863) [DOI] [PubMed] [Google Scholar]
  • 21.Box GEP, Jenkins GM, Reinsel GC, Ljung GM. 2015. Time series analysis: forecasting and control. New York, NY: John Wiley & Sons. [Google Scholar]
  • 22.Brunton SL, Kutz JN. 2022. Data-driven science and engineering: machine learning, dynamical systems, and control. Cambridge, UK: Cambridge University Press. [Google Scholar]
  • 23.Taira K, Hemati MS, Brunton SL, Sun Y, Duraisamy K, Bagheri S, Dawson ST, Yeh CA. 2020. Modal analysis of fluid flows: applications and outlook. AIAA J. 58, 998-1022. ( 10.2514/1.J058462) [DOI] [Google Scholar]
  • 24.Cherkassky V, Mulier FM. 2007. Learning from data: concepts, theory, and methods. New York, NY: John Wiley & Sons. [Google Scholar]
  • 25.Froyland G, Padberg K. 2009. Almost-invariant sets and invariant manifolds—connecting probabilistic and geometric descriptions of coherent structures in flows. Physica D. 238, 1507-1523. ( 10.1016/j.physd.2009.03.002) [DOI] [Google Scholar]
  • 26.Banisch R, Koltai P. 2017. Understanding the geometry of transport: diffusion maps for Lagrangian trajectory data unravel coherent sets. Chaos Interdisc. J. Nonlinear Sci. 27, 35804. ( 10.1063/1.4971788) [DOI] [PubMed] [Google Scholar]
  • 27.Thiede EH, Giannakis D, Dinner AR, Weare J. 2019. Galerkin approximation of dynamical quantities using trajectory data. J. Chem. Phys. 150, 244111. ( 10.1063/1.5063730) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Kopelevich DI, Panagiotopoulos AZ, Kevrekidis IG. 2005. Coarse-grained kinetic computations for rare events: application to micelle formation. J. Chem. Phys. 122, 44908. ( 10.1063/1.1839174) [DOI] [PubMed] [Google Scholar]
  • 29.Froyland G, Junge O, Koltai P. 2013. Estimating long-term behavior of flows without trajectory integration: the infinitesimal generator approach. SIAM J. Numer. Anal. 51, 223-247. ( 10.1137/110819986) [DOI] [Google Scholar]
  • 30.Metzner P, Horenko I, Schütte C. 2007. Generator estimation of Markov jump processes based on incomplete observations nonequidistant in time. Phys. Rev. E 76, 66702. ( 10.1103/PhysRevE.76.066702) [DOI] [PubMed] [Google Scholar]
  • 31.Klus S, Nüske F, Koltai P, Wu H, Kevrekidis I, Schütte C, Noé F. 2018. Data-driven model reduction and transfer operator approximation. J. Nonlinear Sci. 28, 985-1010. ( 10.1007/s00332-017-9437-7) [DOI] [Google Scholar]
  • 32.Huang B, Vaidya U. 2018. Data-driven approximation of transfer operators: naturally structured dynamic mode decomposition. In 2018 Annual American Control Conf. (ACC), pp. 5659-5664. Piscataway, NJ: IEEE. [Google Scholar]
  • 33.Deshpande R, de Silva CM, Lee M, Monty JP, Marusic I. 2021. Data-driven enhancement of coherent structure-based models for predicting instantaneous wall turbulence. Int. J. Heat Fluid Flow 92, 108879. ( 10.1016/j.ijheatfluidflow.2021.108879) [DOI] [Google Scholar]
  • 34.Haller G. 2002. Lagrangian coherent structures from approximate velocity data. Phys. Fluids 14, 1851-1861. ( 10.1063/1.1477449) [DOI] [Google Scholar]
  • 35.Froyland G, Padberg-Gehle K. 2015. A rough-and-ready cluster-based approach for extracting finite-time coherent sets from sparse and incomplete trajectory data. Chaos Interdisc. J. Nonlinear Sci. 25, 87406. ( 10.1063/1.4926372) [DOI] [PubMed] [Google Scholar]
  • 36.Schlueter-Kuck KL, Dabiri JO. 2017. Coherent structure colouring: identification of coherent structures from sparse data using graph theory. J. Fluid Mech. 811, 468-486. ( 10.1017/jfm.2016.755) [DOI] [Google Scholar]
  • 37.Haller G. 2015. Lagrangian coherent structures. Annu. Rev. Fluid Mech. 47, 137-162. ( 10.1146/annurev-fluid-010313-141322) [DOI] [Google Scholar]
  • 38.Rowley CW, Mezić I, Bagheri S, Schlatter P, Henningson DS. 2009. Spectral analysis of nonlinear flows. J. Fluid Mech. 641, 115-127. ( 10.1017/S0022112009992059) [DOI] [Google Scholar]
  • 39.Ghadami A, Epureanu BI. 2018. Forecasting critical points and post-critical limit cycles in nonlinear oscillatory systems using pre-critical transient responses. Int. J. Non-Linear Mech. 101, 146-156. ( 10.1016/j.ijnonlinmec.2018.02.008) [DOI] [Google Scholar]
  • 40.Ghadami A, Epureanu BI. 2016. Bifurcation forecasting for large dimensional oscillatory systems: forecasting flutter using gust responses. J. Comput. Nonlinear Dyn. 11, 061009. ( 10.1115/1.4033920) [DOI] [Google Scholar]
  • 41.Lim J, Epureanu BI. 2011. Forecasting a class of bifurcations: theory and experiment. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 83, 016203. ( 10.1103/PhysRevE.83.016203) [DOI] [PubMed] [Google Scholar]
  • 42.Lim J, Epureanu BI. 2012. Forecasting bifurcation morphing: application to cantilever-based sensing. Nonlinear Dyn. 67, 2291-2298. ( 10.1007/s11071-011-0146-8) [DOI] [Google Scholar]
  • 43.Ghadami A, Epureanu BI. 2017. Forecasting the post-bifurcation dynamics of large-dimensional slow-oscillatory systems using critical slowing down and center space reduction. Nonlinear Dyn. 88, 415-431. ( 10.1007/s11071-016-3250-y) [DOI] [Google Scholar]
  • 44.Ghadami A, Cesnik CES, Epureanu BI. 2018. Model-less forecasting of Hopf bifurcations in fluid-structural systems. J. Fluids Struct. 76, 1-3. ( 10.1016/j.jfluidstructs.2017.09.005) [DOI] [Google Scholar]
  • 45.Ljung L. 1998. System identification. In Signal analysis and prediction (eds Procházka JU, Rayner PWJ, Kingsbury NG), pp. 163-173. Berlin, Germany: Springer. [Google Scholar]
  • 46.Bai E-W. 2010. Non-parametric nonlinear system identification: an asymptotic minimum mean squared error estimator. IEEE Trans. Autom. Control 55, 1615-1626. ( 10.1109/TAC.2010.2042343) [DOI] [Google Scholar]
  • 47.Fattahi S, Sojoudi S. 2018. Data-driven sparse system identification. In 2018 56th Annual Allerton Conf. on Communication, Control, and Computing (Allerton), Monticello, IL, 2 October, pp. 462-469. Piscataway, NJ: IEEE. [Google Scholar]
  • 48.Klus S, Nüske F, Peitz S, Niemann J-H, Clementi C, Schütte C. 2020. Data-driven approximation of the Koopman generator: model reduction, system identification, and control. Physica D 406, 132416. ( 10.1016/j.physd.2020.132416) [DOI] [Google Scholar]
  • 49.Williams MO, Kevrekidis IG, Rowley CW. 2015. A data-driven approximation of the Koopman operator: extending dynamic mode decomposition. J. Nonlinear Sci. 25, 1307-1346. ( 10.1007/s00332-015-9258-5) [DOI] [Google Scholar]
  • 50.Li Y, Duan J. 2021. A data-driven approach for discovering stochastic dynamical systems with non-Gaussian Lévy noise. Physica D 417, 132830. ( 10.1016/j.physd.2020.132830) [DOI] [Google Scholar]
  • 51.Ghadami A, Chen S, Epureanu BI. 2020. Data-driven identification of reliable sensor species to predict regime shifts in ecological networks. R. Soc. Open Sci. 7, 200896. ( 10.1098/rsos.200896) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Schmid PJ. 2010. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 656, 5-28. ( 10.1017/S0022112010001217) [DOI] [Google Scholar]
  • 53.Champion K, Lusch B, Kutz JN, Brunton SL. 2019. Data-driven discovery of coordinates and governing equations. Proc. Natl Acad. Sci. USA 116, 22 445-22 451. ( 10.1073/pnas.1906995116) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Kutz JN, Brunton SL, Brunton BW, Proctor JL. 2016. Dynamic mode decomposition: data-driven modeling of complex systems. Philadelphia, PA: SIAM. [Google Scholar]
  • 55.Rudy SH, Brunton SL, Proctor JL, Kutz JN. 2017. Data-driven discovery of partial differential equations. Sci. Adv. 3, e1602614. ( 10.1126/sciadv.1602614) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Voss HU, Kolodner P, Abel M, Kurths J. 1999. Amplitude equations from spatiotemporal binary-fluid convection data. Phys. Rev. Lett. 83, 3422. ( 10.1103/PhysRevLett.83.3422) [DOI] [Google Scholar]
  • 57.Ye H, Beamish RJ, Glaser SM, Grant SCH, Hsieh C, Richards LJ, Schnute JT, Sugihara G. 2015. Equation-free mechanistic ecosystem forecasting using empirical dynamic modeling. Proc. Natl Acad. Sci. USA 112, E1569-E1576. ( 10.1073/pnas.1417063112) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Majda AJ, Franzke C, Crommelin D. 2009. Normal forms for reduced stochastic climate models. Proc. Natl Acad. Sci. USA 106, 3649-3653. ( 10.1073/pnas.0900173106) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Giannakis D, Majda AJ. 2012. Nonlinear Laplacian spectral analysis for time series with intermittency and low-frequency variability. Proc. Natl Acad. Sci. USA 109, 2222-2227. ( 10.1073/pnas.1118984109) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Juang J-N, Pappa RS. 1985. An eigensystem realization algorithm for modal parameter identification and model reduction. J. Guid. 8, 620-627. ( 10.2514/3.20031) [DOI] [Google Scholar]
  • 61.Qin T, Wu K, Xiu D. 2019. Data driven governing equations approximation using deep neural networks. J. Comput. Phys. 395, 620-635. ( 10.1016/j.jcp.2019.06.042) [DOI] [Google Scholar]
  • 62.González-García R, Rico-Martìnez R, Kevrekidis IG. 1998. Identification of distributed parameter systems: a neural net based approach. Comput. Chem. Eng. 22, S965-S968. ( 10.1016/S0098-1354(98)00191-4) [DOI] [Google Scholar]
  • 63.Schmidt M, Lipson H. 2009. Distilling free-form natural laws from experimental data. Science 324, 81-85. ( 10.1126/science.1165893) [DOI] [PubMed] [Google Scholar]
  • 64.Baake E, Baake M, Bock HG, Briggs KM. 1992. Fitting ordinary differential equations to chaotic data. Phys. Rev. A 45, 5524. ( 10.1103/PhysRevA.45.5524) [DOI] [PubMed] [Google Scholar]
  • 65.Bongard J, Lipson H. 2007. Automated reverse engineering of nonlinear dynamical systems. Proc. Natl Acad. Sci. USA 104, 9943-9948. ( 10.1073/pnas.0609476104) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Reinbold PAK, Kageorge LM, Schatz MF, Grigoriev RO. 2021. Robust learning from noisy, incomplete, high-dimensional experimental data via physically constrained symbolic regression. Nat. Commun. 12, 1-8. ( 10.1038/s41467-020-20314-w) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Wang W-X, Yang R, Lai Y-C, Kovanis V, Grebogi C. 2011. Predicting catastrophes in nonlinear dynamical systems by compressive sensing. Phys. Rev. Lett. 106, 154101. ( 10.1103/PhysRevLett.106.154101) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Naik M, Cochran D. 2012. Nonlinear system identification using compressed sensing. In 2012 Conf. Record of the Forty Sixth Asilomar Conf. on Signals, Systems and Computers (ASILOMAR), pp. 426-430. Piscataway, NJ: IEEE. [Google Scholar]
  • 69.Tran G, Ward R. 2017. Exact recovery of chaotic systems from highly corrupted data. Multiscale Model. Simul. 15, 1108-1129. ( 10.1137/16M1086637) [DOI] [Google Scholar]
  • 70.Brunton SL, Proctor JL, Kutz JN. 2016. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl Acad. Sci. USA 113, 3932-3937. ( 10.1073/pnas.1517384113) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Benner P, Gugercin S, Willcox K. 2015. A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev. 57, 483-531. ( 10.1137/130932715) [DOI] [Google Scholar]
  • 72.Holmes P, Lumley JL, Berkooz G, Rowley CW. 2012. Turbulence, coherent structures, dynamical systems and symmetry. Cambridge, UK: Cambridge University Press. [Google Scholar]
  • 73.Kirby M. 2001. Geometric data analysis: an empirical approach to dimensionality reduction and the study of patterns, vol. 31. New York, NY: Wiley. [Google Scholar]
  • 74.Sirovich L. 1987. Turbulence and the dynamics of coherent structures. I. Coherent structures. Quart. Appl. Math. 45, 561-571. ( 10.1090/qam/910462) [DOI] [Google Scholar]
  • 75.Lumley JL. 2007. Stochastic tools in turbulence. New York, NY: Courier Corporation. [Google Scholar]
  • 76.Lumley JL. 1967. The structure of inhomogeneous turbulent flows. In Proc. Int. Colloquium – Atmospheric turbulence and radio wave propagation. Moscow, Russia, 15–22 June, 1965. Moscow, Russia: Nauka. [Google Scholar]
  • 77.Schmit R, Glauser M. 2004. Improvements in low dimensional tools for flow-structure interaction problems: using global POD. In 42nd AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, 4 January, pp. 889. Reston, VA: AIAA. [Google Scholar]
  • 78.Sahyoun S, Djouadi S. 2013. Local proper orthogonal decomposition based on space vectors clustering. In 3rd Int. Conf. on Systems and Control, Algiers, Algeria, 29 October, pp. 665-670. Piscataway, NJ: IEEE. [Google Scholar]
  • 79.Singer MA, Green WH. 2009. Using adaptive proper orthogonal decomposition to solve the reaction–diffusion equation. Appl. Numer. Math. 59, 272-279. ( 10.1016/j.apnum.2008.02.004) [DOI] [Google Scholar]
  • 80.Peherstorfer B, Willcox K. 2015. Online adaptive model reduction for nonlinear systems via low-rank updates. SIAM J. Sci. Comput. 37, A2123-A2150. ( 10.1137/140989169) [DOI] [Google Scholar]
  • 81.Tubino F, Solari G. 2005. Double proper orthogonal decomposition for representing and simulating turbulence fields. J. Eng. Mech. 131, 1302-1312. ( 10.1061/(ASCE)0733-9399(2005)131:12(1302)) [DOI] [Google Scholar]
  • 82.Siegel SG, Seidel J, Fagley C, Luchtenburg DM, Cohen K, McLaughlin T. 2008. Low-dimensional modelling of a transient cylinder wake using double proper orthogonal decomposition. J. Fluid Mech. 610, 1-42. ( 10.1017/S0022112008002115) [DOI] [Google Scholar]
  • 83.Everson R, Sirovich L. 1995. Karhunen–Loeve procedure for gappy data. JOSA A. 12, 1657-1664. ( 10.1364/JOSAA.12.001657) [DOI] [Google Scholar]
  • 84.Pernebo L, Silverman L. 1982. Model reduction via balanced state space representations. IEEE Trans. Autom. Control. 27, 382-387. ( 10.1109/TAC.1982.1102945) [DOI] [Google Scholar]
  • 85.Lall S, Marsden JE, Glavaški S. 2002. A subspace approach to balanced truncation for model reduction of nonlinear control systems. Int. J. Robust Nonlinear Control: IFAC-Affil. J. 12, 519-535. [Google Scholar]
  • 86.Moore B. 1981. Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans. Autom. Control. 26, 17-32. ( 10.1109/TAC.1981.1102568) [DOI] [Google Scholar]
  • 87.Willcox K, Peraire J. 2002. Balanced model reduction via the proper orthogonal decomposition. AIAA J. 40, 2323-2330. ( 10.2514/2.1570) [DOI] [Google Scholar]
  • 88.Feldmann P, Freund RW. 1995. Efficient linear circuit analysis by Padé approximation via the Lanczos process. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 14, 639-649. ( 10.1109/43.384428) [DOI] [Google Scholar]
  • 89.Astrid P, Weiland S, Willcox K, Backx T. 2008. Missing point estimation in models described by proper orthogonal decomposition. IEEE Trans. Autom. Control 53, 2237-2251. ( 10.1109/TAC.2008.2006102) [DOI] [Google Scholar]
  • 90.Otto SE, Padovan A, Rowley CW. 2021. Optimizing oblique projections for nonlinear systems using trajectories. (http://arxiv.org/abs/2106.01211).
  • 91.Peherstorfer B, Willcox K. 2015. Dynamic data-driven reduced-order models. Comput. Methods Appl. Mech. Eng. 291, 21-41. ( 10.1016/j.cma.2015.03.018) [DOI] [Google Scholar]
  • 92.Carlberg K, Barone M, Antil H. 2017. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction. J. Comput. Phys. 330, 693-734. ( 10.1016/j.jcp.2016.10.033) [DOI] [Google Scholar]
  • 93.Carlberg K, Bou-Mosleh C, Farhat C. 2011. Efficient non-linear model reduction via a least-squares Petrov–Galerkin projection and compressive tensor approximations. Int. J. Numer. Methods Eng. 86, 155-181. ( 10.1002/nme.3050) [DOI] [Google Scholar]
  • 94.Tu JH. 2013. Dynamic mode decomposition: theory and applications. Princeton, NJ: Princeton University Press. [Google Scholar]
  • 95.Budišić M, Mohr R, Mezić I. 2012. Applied Koopmanism. Chaos Interdisc. J. Nonlinear Sci. 22, 47510. ( 10.1063/1.4772195) [DOI] [PubMed] [Google Scholar]
  • 96.Mezić I. 2013. Analysis of fluid flows via spectral properties of the Koopman operator. Annu. Rev. Fluid Mech. 45, 357-378. ( 10.1146/annurev-fluid-011212-140652) [DOI] [Google Scholar]
  • 97.Gonzalez FJ, Balajewicz M. 2018. Deep convolutional recurrent autoencoders for learning low-dimensional feature dynamics of fluid systems. (http://arxiv.org/abs/180801346).
  • 98.Eivazi H, Veisi H, Naderi MH, Esfahanian V. 2020. Deep neural networks for nonlinear model order reduction of unsteady flows. Phys. Fluids 32, 105104. ( 10.1063/5.0020526) [DOI] [Google Scholar]
  • 99.Lee K, Carlberg KT. 2020. Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. J. Comput. Phys. 404, 108973. ( 10.1016/j.jcp.2019.108973) [DOI] [Google Scholar]
  • 100.Kim Y, Choi Y, Widemann D, Zohdi T. 2020. Efficient nonlinear manifold reduced order model. (http://arxiv.org/abs/2011.07727).
  • 101.LeCun Y, Bengio Y, Hinton G. 2015. Deep learning. Nature 521, 436-444. ( 10.1038/nature14539) [DOI] [PubMed] [Google Scholar]
  • 102.Goodfellow I, Bengio Y, Courville A. 2016. Deep learning. Cambridge, MA: MIT Press. [Google Scholar]
  • 103.Wang H, Lei Z, Zhang X, Zhou B, Peng J. 2019. A review of deep learning for renewable energy forecasting. Energy Convers. Manage. 198, 111799. ( 10.1016/j.enconman.2019.111799) [DOI] [Google Scholar]
  • 104.Sezer OB, Gudelek MU, Ozbayoglu AM. 2020. Financial time series forecasting with deep learning: a systematic literature review: 2005–2019. Appl. Soft Comput. 90, 106181. ( 10.1016/j.asoc.2020.106181) [DOI] [Google Scholar]
  • 105.Gamboa JCB. 2017. Deep learning for time-series analysis. (http://arxiv.org/abs/1701.01887).
  • 106.Lim B, Zohren S. 2021. Time-series forecasting with deep learning: a survey. Phil. Trans. R. Soc. A 379, 20200209. ( 10.1098/rsta.2020.0209) [DOI] [PubMed] [Google Scholar]
  • 107.Chen RTQ, Rubanova Y, Bettencourt J, Duvenaud DK. 2018. Neural ordinary differential equations. Adv. Neural Inform. Process. Syst. 31, 6572-6583. [Google Scholar]
  • 108.Jaeger H, Haas H. 2004. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304, 78-80. ( 10.1126/science.1091277) [DOI] [PubMed] [Google Scholar]
  • 109.Raissi M, Perdikaris P, Karniadakis GE. 2019. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686-707. ( 10.1016/j.jcp.2018.10.045) [DOI] [Google Scholar]
  • 110.He K, Zhang X, Ren S, Sun J. 2016. Deep residual learning for image recognition. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, NV, 27 June, pp. 770-778. Piscataway, NJ: IEEE. [Google Scholar]
  • 111.Lu L, Jin P, Pang G, Zhang Z, Karniadakis GE. 2021. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat. Mach. Intell. 3, 218-229. ( 10.1038/s42256-021-00302-5) [DOI] [Google Scholar]
  • 112.Rico-Martinez R, Anderson JS, Kevrekidis IG. 1994. Continuous-time nonlinear signal processing: a neural network based approach for gray box identification. In Proc. IEEE Workshop on Neural Networks for Signal Processing, Ermioni, Greece, 6 August, pp. 596-605. Piscataway, NJ: IEEE. [Google Scholar]
  • 113.Kumpati SN, Kannan P. 1990. Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Netw. 1, 4-27. ( 10.1109/72.80202) [DOI] [PubMed] [Google Scholar]
  • 114.Bailer-Jones CAL, MacKay DJC, Withers PJ. 1998. A recurrent neural network for modelling dynamical systems. Network: Comput. Neural Syst. 9, 531. ( 10.1088/0954-898X_9_4_008) [DOI] [PubMed] [Google Scholar]
  • 115.Uribarri G, Mindlin GB. 2022. Dynamical time series embeddings in recurrent neural networks. Chaos Solitons Fract. 154, 111612. ( 10.1016/j.chaos.2021.111612) [DOI] [Google Scholar]
  • 116.Hochreiter S, Schmidhuber J. 1997. Long short-term memory. Neural Comput. 9, 1735-1780. ( 10.1162/neco.1997.9.8.1735) [DOI] [PubMed] [Google Scholar]
  • 117.Wang Y. 2017. A new concept using LSTM neural networks for dynamic system identification. In 2017 American Control Conf. (ACC), Seattle, WA, 24 May, pp. 5324-5329. Piscataway, NJ: IEEE. [Google Scholar]
  • 118.Vlachas PR, Byeon W, Wan ZY, Sapsis TP, Koumoutsakos P. 2018. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks. Proc. R. Soc. A 474, 20170844. ( 10.1098/rspa.2017.0844) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 119.Pathak J, Hunt B, Girvan M, Lu Z, Ott E. 2018. Model-free prediction of large spatiotemporally chaotic systems from data: a reservoir computing approach. Phys. Rev. Lett. 120, 24102. ( 10.1103/PhysRevLett.120.024102) [DOI] [PubMed] [Google Scholar]
  • 120.Mukhopadhyay S, Banerjee S. 2020. Learning dynamical systems in noise using convolutional neural networks. Chaos Interdisc. J. Nonlinear Sci. 30, 103125. ( 10.1063/5.0009326) [DOI] [PubMed] [Google Scholar]
  • 121.Lusch B, Kutz JN, Brunton SL. 2018. Deep learning for universal linear embeddings of nonlinear dynamics. Nat. Commun. 9, 1-10. ( 10.1038/s41467-018-07210-0) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 122.Yeung E, Kundu S, Hodas N. 2019. Learning deep neural network representations for Koopman operators of nonlinear dynamical systems. In 2019 American Control Conf. (ACC), Philadelphia, PA, 10 July, pp. 4832-4839. Piscataway, NJ: IEEE. [Google Scholar]
  • 123.Karniadakis GE, Kevrekidis IG, Lu L, Perdikaris P, Wang S, Yang L. 2021. Physics-informed machine learning. Nat. Rev. Phys. 3, 422-440. ( 10.1038/s42254-021-00314-5) [DOI] [Google Scholar]
  • 124.Greydanus S, Dzamba M, Yosinski J. 2019. Hamiltonian neural networks. Adv. Neural Inform. Process. Syst. 32, 15 379-15 389. [Google Scholar]
  • 125.Bertalan T, Dietrich F, Mezić I, Kevrekidis IG. 2019. On learning Hamiltonian systems from data. Chaos Interdisc. J. Nonlinear Sci. 29, 121107. ( 10.1063/1.5128231) [DOI] [PubMed] [Google Scholar]
  • 126.Jin P, Zhang Z, Zhu A, Tang Y, Karniadakis GE. 2020. SympNets: Intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems. Neural Netw. 132, 166-179. ( 10.1016/j.neunet.2020.08.017) [DOI] [PubMed] [Google Scholar]
  • 127.Lutter M, Ritter C, Peters J. 2019. Deep Lagrangian networks: using physics as model prior for deep learning. (http://arxiv.org/abs/1907.04490).
  • 128.Woldaregay AZ, Årsand E, Walderhaug S, Albers D, Mamykina L, Botsis T, Hartvigsen G. 2019. Data-driven modeling and prediction of blood glucose dynamics: machine learning applications in type 1 diabetes. Artif. Intell. Med. 98, 109-134. ( 10.1016/j.artmed.2019.07.007) [DOI] [PubMed] [Google Scholar]
  • 129.Fu JF, Klyuzhin IS, McKeown MJ, Stoessl AJ, Sossi V. 2020. Novel data-driven, equation-free method captures spatio-temporal patterns of neurodegeneration in Parkinson's disease: application of dynamic mode decomposition to PET. NeuroImage: Clin. 25, 102150. ( 10.1016/j.nicl.2019.102150) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 130.Hussein R, Palangi H, Ward R, Wang ZJ. 2018. Epileptic seizure detection: a deep learning approach. (http://arxiv.org/abs/1803.09848). [DOI] [PubMed]
  • 131.Solaija MSJ, Saleem S, Khurshid K, Hassan SA, Kamboh AM. 2018. Dynamic mode decomposition based epileptic seizure detection from scalp EEG. IEEE Access. 6, 38 683-38 692. ( 10.1109/ACCESS.2018.2853125) [DOI] [Google Scholar]
  • 132.Kissas G, Yang Y, Hwuang E, Witschey WR, Detre JA, Perdikaris P. 2020. Machine learning in cardiovascular flows modeling: predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks. Comput. Methods Appl. Mech. Eng. 358, 112623. ( 10.1016/j.cma.2019.112623) [DOI] [Google Scholar]
  • 133.Feng Y, Mitran S. 2018. Data-driven reduced-order model of microtubule mechanics. Cytoskeleton 75, 45-60. ( 10.1002/cm.21419) [DOI] [PubMed] [Google Scholar]
  • 134.Yeung E, Kim J, Yuan Y, Gonçalves J, Murray RM. 2021. Data-driven network models for genetic circuits from time-series data with incomplete measurements. J. R. Soc. Interface 18, 20210413. ( 10.1098/rsif.2021.0413) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 135.Eslami M, et al. 2022. Prediction of whole-cell transcriptional response with machine learning. Bioinformatics 38, 404-409. ( 10.1093/bioinformatics/btab676) [DOI] [PubMed] [Google Scholar]
  • 136.Proctor JL, Eckhoff PA. 2015. Discovering dynamic patterns from infectious disease data using dynamic mode decomposition. Int. Health 7, 139-145. ( 10.1093/inthealth/ihv009) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 137.Bhouri MA, Costabal FS, Wang H, Linka K, Peirlinck M, Kuhl E, Perdikaris P. 2021. COVID-19 dynamics across the US: a deep learning study of human mobility and social behavior. Comput. Methods Appl. Mech. Eng. 382, 113891. ( 10.1016/j.cma.2021.113891) [DOI] [Google Scholar]
  • 138.Rasp S, Pritchard MS, Gentine P. 2018. Deep learning to represent subgrid processes in climate models. Proc. Natl Acad. Sci. USA 115, 9684-9689. ( 10.1073/pnas.1810286115) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 139.Karevan Z, Suykens JAK. 2020. Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Netw. 125, 1-9. ( 10.1016/j.neunet.2019.12.030) [DOI] [PubMed] [Google Scholar]
  • 140.Scher S. 2018. Toward data-driven weather and climate forecasting: approximating a simple general circulation model with deep learning. Geophys. Res. Lett. 45, 12-616. ( 10.1029/2018GL080704) [DOI] [Google Scholar]
  • 141.Christin S, Hervet É, Lecomte N. 2019. Applications for deep learning in ecology. Methods Ecol. Evol. 10, 1632-1644. ( 10.1111/2041-210X.13256) [DOI] [Google Scholar]
  • 142.Rammer W, Seidl R. 2019. Harnessing deep learning in ecology: an example predicting bark beetle outbreaks. Front. Plant Sci. 10, 1327. ( 10.3389/fpls.2019.01327) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 143.Mann J, Kutz JN. 2016. Dynamic mode decomposition for financial trading strategies. Quant. Fin. 16, 1643-1655. ( 10.1080/14697688.2016.1170194) [DOI] [Google Scholar]
  • 144.Wang R, Kashinath K, Mustafa M, Albert A, Yu R. 2020. Towards physics-informed deep learning for turbulent flow prediction. In Proc. 26th ACM SIGKDD Int. Conf. on Knowledge Discovery & Data Mining, California, 6 July, pp. 1457-1466. New York, NY: Association for Computing Machinery. [Google Scholar]
  • 145.Raissi M, Yazdani A, Karniadakis GE. 2020. Hidden fluid mechanics: learning velocity and pressure fields from flow visualizations. Science 367, 1026-1030. ( 10.1126/science.aaw4741) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 146.Brunton SL, Noack BR, Koumoutsakos P. 2020. Machine learning for fluid mechanics. Annu. Rev. Fluid Mech. 52, 477-508. ( 10.1146/annurev-fluid-010719-060214) [DOI] [Google Scholar]
  • 147.Bhatnagar S, Afshar Y, Pan S, Duraisamy K, Kaushik S. 2019. Prediction of aerodynamic flow fields using convolutional neural networks. Comput. Mech. 64, 525-545. ( 10.1007/s00466-019-01740-0) [DOI] [Google Scholar]
  • 148.Li K, Kou J, Zhang W. 2019. Deep neural network for unsteady aerodynamic and aeroelastic modeling across multiple Mach numbers. Nonlinear Dyn. 96, 2157-2177. ( 10.1007/s11071-019-04915-9) [DOI] [Google Scholar]
  • 149.Fonzi N, Brunton SL, Fasel U. 2020. Data-driven nonlinear aeroelastic models of morphing wings for control. Proc. R. Soc. A 476, 20200079. ( 10.1098/rspa.2020.0079) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 150.Goswami S, Yin M, Yu Y, Karniadakis GE. 2022. A physics-informed variational DeepONet for predicting crack path in quasi-brittle materials. Comput. Methods Appl. Mech. Eng. 391, 114587. ( 10.1016/j.cma.2022.114587) [DOI] [Google Scholar]
  • 151.Haghighat E, Raissi M, Moure A, Gomez H, Juanes R. 2021. A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics. Comput. Methods Appl. Mech. Eng. 379, 113741. ( 10.1016/j.cma.2021.113741) [DOI] [Google Scholar]
  • 152.Paulson NH, Priddy MW, McDowell DL, Kalidindi SR. 2018. Data-driven reduced-order models for rank-ordering the high cycle fatigue performance of polycrystalline microstructures. Mater. Des. 154, 170-183. ( 10.1016/j.matdes.2018.05.009) [DOI] [Google Scholar]
  • 153.Lu X, Giovanis DG, Yvonnet J, Papadopoulos V, Detrez F, Bai J. 2019. A data-driven computational homogenization method based on neural networks for the nonlinear anisotropic electrical response of graphene/polymer nanocomposites. Comput. Mech. 64, 307-321. ( 10.1007/s00466-018-1643-0) [DOI] [Google Scholar]
  • 154.Singh P, Dwivedi P. 2018. Integration of new evolutionary approach with artificial neural network for solving short term load forecast problem. Appl. Energy. 217, 537-549. ( 10.1016/j.apenergy.2018.02.131) [DOI] [Google Scholar]
  • 155.Mohan N, Soman KP, Kumar SS. 2018. A data-driven strategy for short-term electric load forecasting using dynamic mode decomposition model. Appl. Energy. 232, 229-244. ( 10.1016/j.apenergy.2018.09.190) [DOI] [Google Scholar]
  • 156.Ding N, Benoit C, Foggia G, Bésanger Y, Wurtz F. 2015. Neural network-based model design for short-term load forecast in distribution systems. IEEE Trans. Power Syst. 31, 72-81. ( 10.1109/TPWRS.2015.2390132) [DOI] [Google Scholar]
  • 157.Li Y, Yu R, Shahabi C, Liu Y. 2017. Diffusion convolutional recurrent neural network: data-driven traffic forecasting. (http://arxiv.org/abs/1707.01926).
  • 158.Ghadami A, Epureanu BI. 2020. Forecasting the onset of traffic congestions on circular roads. IEEE Trans. Intell. Transp. Syst. 22, 1196-1205. ( 10.1109/TITS.2020.2964021) [DOI] [Google Scholar]
  • 159.Avila AM, Mezić I. 2020. Data-driven analysis and forecasting of highway traffic dynamics. Nat. Commun. 11, 1-16. ( 10.1038/s41467-020-15582-5) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 160.Polson NG, Sokolov VO. 2017. Deep learning for short-term traffic flow prediction. Transp. Res. C: Emerg. Technol. 79, 1-17. ( 10.1016/j.trc.2017.02.024) [DOI] [Google Scholar]
  • 161.Huang J, Agarwal S. 2020. Physics informed deep learning for traffic state estimation. In 2020 IEEE 23rd Int. Conf. on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20 September, pp. 1-6. Piscataway, NJ: IEEE. [Google Scholar]
  • 162.Tamaddon-Jahromi HR, Chakshu NK, Sazonov I, Evans LM, Thomas H, Nithiarasu P. 2020. Data-driven inverse modelling through neural network (deep learning) and computational heat transfer. Comput. Methods Appl. Mech. Eng. 369, 113217. ( 10.1016/j.cma.2020.113217) [DOI] [Google Scholar]
  • 163.Cai S, Wang Z, Wang S, Perdikaris P, Karniadakis GE. 2021. Physics-informed neural networks for heat transfer problems. J. Heat Transfer 143, 060801. ( 10.1115/1.4050542) [DOI] [Google Scholar]
  • 164.Kim J, Lee C. 2020. Prediction of turbulent heat transfer using convolutional neural networks. J. Fluid Mech. 882, A18. [Google Scholar]
  • 165.Scheffer M. 2009. Critical transitions in nature and society. Princeton Studies in Complexity. Princeton, NJ: Princeton University Press. [Google Scholar]
  • 166.Farazmand M, Sapsis TP. 2019. Extreme events: mechanisms and prediction. Appl. Mech. Rev. 71, 050801. ( 10.1115/1.4042065) [DOI] [Google Scholar]
  • 167.Riso C, Ghadami A, Cesnik CES, Epureanu BI. 2020. Data-driven forecasting of postflutter responses of geometrically nonlinear wings. AIAA J. 58, 2726-2736. ( 10.2514/1.J059024) [DOI] [Google Scholar]
  • 168.Yamasaki H, Epureanu BI. 2017. Forecasting supercritical and subcritical Hopf bifurcations in aeroelastic systems. Int. J. Non-Linear Mech. 94, 400-405. ( 10.1016/j.ijnonlinmec.2016.12.009) [DOI] [Google Scholar]
  • 169.Ghadami A, Gourgou E, Epureanu BI. 2018. Rate of recovery from perturbations as a means to forecast future stability of living systems. Sci. Rep. 8, 9271. ( 10.1038/s41598-018-27573-0) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 170.Krkošek M, Drake JM. 2014. On signals of phase transitions in salmon population dynamics. Proc. R. Soc. B 281, 20133221. ( 10.1098/rspb.2013.3221) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 171.Drake JM, Griffen BD. 2010. Early warning signals of extinction in deteriorating environments. Nature 467, 456-459. ( 10.1038/nature09389) [DOI] [PubMed] [Google Scholar]
  • 172.Chen S, O'Dea EB, Drake JM, Epureanu BI. 2019. Eigenvalues of the covariance matrix as early warning signals for critical transitions in ecological systems. Sci. Rep. 9, 2572. ( 10.1038/s41598-019-38961-5) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 173.Scheffer M, et al. 2012. Anticipating critical transitions. Science 338, 344-348. ( 10.1126/science.1225244) [DOI] [PubMed] [Google Scholar]
  • 174.D'Souza K, Epureanu BI, Pascual M. 2015. Forecasting bifurcations from large perturbation recoveries in feedback ecosystems. PLoS ONE 10, e0137779. ( 10.1371/journal.pone.0137779) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 175.Scheffer M, Carpenter SR, Dakos V, van Nes EH. 2015. Generic indicators of ecological resilience: inferring the chance of a critical transition. Annu. Rev. Ecol. Evol. Syst. 46, 145-167. ( 10.1146/annurev-ecolsys-112414-054242) [DOI] [Google Scholar]
  • 176.Brett TS, Rohani P. 2020. Dynamical footprints enable detection of disease emergence. PLoS Biol. 18, e3000697. ( 10.1371/journal.pbio.3000697) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 177.Brett TS, Drake JM, Rohani P. 2017. Anticipating the emergence of infectious diseases. J. R. Soc. Interface. 14, 20170115. ( 10.1098/rsif.2017.0115) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 178.O'Regan SM, O'Dea EB, Rohani P, Drake JM. 2020. Transient indicators of tipping points in infectious diseases. J. R. Soc. Interface. 17, 20200094. ( 10.1098/rsif.2020.0094) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 179.Drake JM, et al. 2019. The statistics of epidemic transitions. PLoS Comput. Biol. 15, e1006917. ( 10.1371/journal.pcbi.1006917) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 180.Ghadami A, Doering CR, Drake JM, Rohani P, Epureanu BI. In press. Stability and resilience of transportation systems: is a traffic jam about to occur? IEEE Trans. Intell. Transp. Syst. ( 10.1109/TITS.2021.309587) [DOI] [Google Scholar]
  • 181.Guth S, Sapsis TP. 2019. Machine learning predictors of extreme events occurring in complex dynamical systems. Entropy 21, 925. ( 10.3390/e21100925) [DOI] [Google Scholar]
  • 182.Wan ZY, Vlachas P, Koumoutsakos P, Sapsis T. 2018. Data-assisted reduced-order modeling of extreme events in complex dynamical systems. PLoS ONE 13, e0197704. ( 10.1371/journal.pone.0197704) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 183.Qi D, Majda AJ. 2020. Using machine learning to predict extreme events in complex systems. Proc. Natl Acad. Sci. USA 117, 52-59. ( 10.1073/pnas.1917285117) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 184.Geelen R, Willcox K. 2022. Localized non-intrusive reduced-order modelling in the operator inference framework. Phil. Trans. R. Soc. A 380, 20210206. ( 10.1098/rsta.2021.0206) [DOI] [PubMed] [Google Scholar]
  • 185.Rezaian E, Huang C, Duraisamy K. 2022. Non-intrusive balancing transformation of highly stiff systems with lightly damped impulse response. Phil. Trans. R. Soc. A 380, 20210202. ( 10.1098/rsta.2021.0202) [DOI] [PubMed] [Google Scholar]
  • 186.Cenedese M, Axås J, Yang H, Eriten M, Haller G. 2022. Data-driven nonlinear model reduction to spectral submanifolds in mechanical systems. Phil. Trans. R. Soc. A 380, 20210194. ( 10.1098/rsta.2021.0194) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 187.Sashidhar D, Kutz JN. 2022. Bagging, optimized dynamic mode decomposition for robust, stable forecasting with spatial and temporal uncertainty quantification. Phil. Trans. R. Soc. A 380, 20210199. ( 10.1098/rsta.2021.0199) [DOI] [PubMed] [Google Scholar]
  • 188.Sapsis TP, Blanchard A. 2022. Optimal criteria and their asymptotic form for data selection in data-driven reduced-order modelling with Gaussian process regression. Phil. Trans. R. Soc. A 380, 20210197. ( 10.1098/rsta.2021.0197) [DOI] [PubMed] [Google Scholar]
  • 189.Zhang Z, Shin Y, Em Karniadakis G. 2022. GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems. Phil. Trans. R. Soc. A 380, 20210207. ( 10.1098/rsta.2021.0207) [DOI] [PubMed] [Google Scholar]
  • 190.Saha P, Mukhopadhyay S. 2022. Unravelled multilevel transformation networks for predicting sparsely observed spatio-temporal dynamics. Phil. Trans. R. Soc. A 380, 20210198. ( 10.1098/rsta.2021.0198) [DOI] [PubMed] [Google Scholar]
  • 191.Bhouri MA, Perdikaris P. 2022. Gaussian processes meet NeuralODEs: a Bayesian framework for learning the dynamics of partially observed systems from scarce and noisy data. Phil. Trans. R. Soc. A 380, 20210201. ( 10.1098/rsta.2021.0201) [DOI] [PubMed] [Google Scholar]
  • 192.Lu Y, Li Y, Duan J. 2022. Extracting stochastic governing laws by non-local Kramers–Moyal formulae. Phil. Trans. R. Soc. A 380, 20210195. ( 10.1098/rsta.2021.0195) [DOI] [PubMed] [Google Scholar]
  • 193.Qi D, Harlim J. 2022. Machine learning-based statistical closure models for turbulent dynamical systems. Phil. Trans. R. Soc. A 380, 20210205. ( 10.1098/rsta.2021.0205) [DOI] [PubMed] [Google Scholar]
  • 194.Ghadami A, Epureanu BI. 2022. Deep learning for centre manifold reduction and stability analysis in nonlinear systems. Phil. Trans. R. Soc. A 380, 20210212. ( 10.1098/rsta.2021.0212) [DOI] [PubMed] [Google Scholar]
  • 195.Ghalyan NF, Bhattacharya C, Ghalyan IF, Ray A. 2022. Spectral invariants of ergodic symbolic systems for pattern recognition and anomaly detection. Phil. Trans. R. Soc. A 380, 20210196. ( 10.1098/rsta.2021.0196) [DOI] [PubMed] [Google Scholar]
  • 196.Charalampopoulos A, Bryngelson SH, Colonius T, Sapsis TP. 2022. Hybrid quadrature moment method for accurate and stable representation of non-Gaussian processes applied to bubble dynamics. Phil. Trans. R. Soc. A 380, 20210209. ( 10.1098/rsta.2021.0209) [DOI] [PubMed] [Google Scholar]
  • 197.McClellan A, Lorenzetti J, Pavone M, Farhat C. 2022. A physics-based digital twin for model predictive control of autonomous unmanned aerial vehicle landing. Phil. Trans. R. Soc. A 380, 20210204. ( 10.1098/rsta.2021.0204) [DOI] [PubMed] [Google Scholar]
  • 198.Liu Y, Kutz JN, Brunton SL. 2022. Hierarchical deep learning of multiscale differential equation time-steppers. Phil. Trans. R. Soc. A 380, 20210200. ( 10.1098/rsta.2021.0200) [DOI] [PMC free article] [PubMed] [Google Scholar]

Associated Data


Data Availability Statement

This article has no additional data.


Articles from Philosophical transactions. Series A, Mathematical, physical, and engineering sciences are provided here courtesy of The Royal Society
