Significance
Combining large uncertain computational models with big noisy datasets is a formidable problem throughout science and engineering. These issues are especially difficult when real-time state estimation and prediction are needed, as in weather forecasting, for example. Thus, a major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. New blended particle filters are developed in this paper. These algorithms exploit the physical structure of turbulent dynamical systems and capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of the phase space.
Keywords: curse of dimensionality, hybrid methods
Abstract
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure, developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering, in various turbulent regimes with at least nine positive Lyapunov exponents, are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features and crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems, and a simple application is sketched below.
Many contemporary problems in science ranging from protein folding in molecular dynamics to scaling up of small-scale effects in nanotechnology to making accurate predictions of the coupled atmosphere–ocean system involve partial observations of extremely complicated large-dimensional chaotic dynamical systems. Filtering is the process of obtaining the best statistical estimate of a natural system from partial observations of the true signal from nature. In many contemporary applications in science and engineering, real-time filtering or data assimilation of a turbulent signal from nature involving many degrees of freedom is needed to make accurate predictions of the future state.
Particle filtering of low-dimensional dynamical systems is an established discipline (1). When the system is low dimensional, Monte Carlo approaches such as the particle filter with its various up-to-date resampling strategies (2) provide better estimates than the Kalman filter in the presence of strong nonlinearity and highly non-Gaussian distributions. However, these accurate nonlinear particle filtering strategies are not feasible for large-dimensional chaotic dynamical systems because sampling a high-dimensional variable is computationally impossible for the foreseeable future. Recent mathematical theory strongly supports this curse of dimensionality for particle filters (3, 4). In the second direction, Bayesian hierarchical modeling (5) and reduced-order filtering strategies (6–8) based on the Kalman filter (9) have been developed with some success in these extremely complex high-dimensional nonlinear systems. There is an inherently difficult practical issue of small ensemble size in filtering statistical solutions of these complex problems due to the large computational overload in generating individual ensemble members through the forward dynamical operator. Numerous ensemble-based Kalman filters (10–14) show promising results in addressing this issue for synoptic-scale midlatitude weather dynamics by imposing suitable spatial localization on the covariance updates; however, all these methods are very sensitive to model resolution, observation frequency, and the nature of the turbulent signals when a practical limited ensemble size (typically less than 100) is used.
Recent attempts to use particle filters with small ensemble size directly on large-dimensional chaotic dynamical systems, so far without a major breakthrough, are surveyed in chap. 15 of ref. 15. The goal here is to develop blended particle filters for large-dimensional chaotic dynamical systems. For the blended particle filters developed below for a state vector u ∈ R^N, there are two subspaces that typically evolve adaptively in time, where u = (u1, u2), u1 ∈ R^s, and u2 ∈ R^(N−s), with the property that u1 is low dimensional enough so that the non-Gaussian statistics of u1 can be calculated from a particle filter whereas the evolving statistics of u2 are conditionally Gaussian given u1. Statistically nonlinear forecast models with this structure with high skill for uncertainty quantification have been developed recently by Sapsis and Majda (16–19) and are used below in the blended filters.
The mathematical foundation for implementing the analysis step, where observations are used in the conditional Gaussian mixture framework, is developed in the next section, followed by a summary of the nonlinear forecast models as well as crucial issues for practical implementation. The skill of the blended filters in capturing significant non-Gaussian features as well as crucial nonlinear dynamics is tested below in a 40-dimensional chaotic dynamical system with at least nine positive Lyapunov exponents in various turbulent regimes, where the adaptive non-Gaussian subspace with a particle filter is only 5 dimensional, with excellent performance for the blended filters. The mathematical formalism for filtering with conditionally Gaussian mixtures should be useful as a framework for multiscale data assimilation of turbulent signals, and a simple application is sketched below. An earlier strategy related to the approach developed here is simply to filter the solution on an evolving low-dimensional subspace that captures the leading variance adaptively (20) while ignoring the other degrees of freedom; simple examples for nonnormal linear systems (18) demonstrate the poor skill of such an approach for reduced filtering in general. For filtering linear nonnormal systems, the optimal reduced basis instead is defined through balanced truncation (8).
Mathematical Foundations for Blended Particle Filters
Here we consider real-time filtering or data assimilation algorithms for a state vector u ∈ R^N from a nonlinear turbulent dynamical system, where the state is forecast by an approximate dynamical model between successive observation times and the state of the system is updated through the use of the observations at the discrete analysis times. For the blended particle filters developed below, there are two subspaces that typically evolve adaptively in time (although that dependence is suppressed here), where u = (u1, u2), u1 ∈ R^s, and u2 ∈ R^(N−s), with the property that u1 is low dimensional enough so that the statistics of u1 can be calculated from a particle filter whereas the statistics of u2 are conditionally Gaussian given u1. Thus, at any analysis time step, we have the prior forecast density given by
p(u1, u2) = p(u1) p(u2 | u1),   [1]
where p(u2 | u1) is a Gaussian distribution determined by the conditional mean and covariance, p(u2 | u1) = N(ū2(u1), R2(u1)). We assume that the marginal distribution, p(u1), is approximated by Q particles,
p(u1) ≈ Σ_{j=1}^{Q} p_j δ(u1 − u1,j),   [2]
with nonnegative particle weights p_j satisfying Σ_j p_j = 1. Below we sometimes abuse notation as in [2] and refer to the continuous distribution in [1] and the particle distribution in [2] interchangeably. Here it is assumed that the nonlinear observation operator maps R^N to R^M, with M observations v at the analysis time, and has the form
v = G1 u1 + G2 u2 + σ^o,   [3]
where G = [G1, G2] has rank M and the observational noise, σ^o, is Gaussian, σ^o ~ N(0, R^o). Note that the form in [3] is the leading Taylor expansion around the mean state of the general nonlinear observation operator. Now, start with the conditional Gaussian particle distribution given from the forecast distribution
p(u1, u2) = Σ_{j=1}^{Q} p_j δ(u1 − u1,j) N(ū2,j, R2,j)(u2),   [4]
and compute the posterior distribution in the analysis step through Bayes’ theorem, p(u | v) ∝ p(v | u) p(u). The key fact is the following.
Proposition 1.
Assume the prior distribution from the forecast is the blended particle filter conditional Gaussian distribution in [4] and assume the observations have the structure in [3]; then the posterior distribution in the analysis step taking into account the observations in [3] is also a blended particle filter conditional Gaussian distribution; i.e., there are explicit formulas for the updated weights p̂_j, 1 ≤ j ≤ Q, and the updated conditional means, û2,j, and covariances, R̂2,j, so that
p(u1, u2 | v) = Σ_{j=1}^{Q} p̂_j δ(u1 − u1,j) N(û2,j, R̂2,j)(u2).   [5]
In fact, the distributions are updated by suitable Kalman filter formulas, with the mean update for u2 depending nonlinearly on u1 in general.
The proof of Proposition 1 is a direct calculation similar to those in refs. 21 and 22 for other formulations of Gaussian mixtures over the entire state space and the explicit formulas can be found in SI Text. However, there is a crucial difference that here conditional Gaussian mixtures are applied in the reduced subspace blended with particle filter approximations only in the lower-dimensional subspace, , unlike the previous work.
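As a concrete illustration of Proposition 1, the following minimal Python sketch updates a conditional Gaussian mixture of the form in [4] with a linear observation as in [3]. All dimensions, the operators G1 and G2, the noise covariance, and the shared conditional covariance are illustrative assumptions, not quantities from the paper; the sketch only shows that the posterior remains a weighted mixture with Kalman-updated conditional statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: s-dim particle subspace, (N - s)-dim Gaussian part,
# M observations of the form v = G1 u1 + G2 u2 + noise (cf. [3]).
s, n2, M, Q = 2, 5, 3, 200
G1 = rng.standard_normal((M, s))
G2 = rng.standard_normal((M, n2))
R0 = 0.1 * np.eye(M)                      # observational noise covariance

u1 = rng.standard_normal((Q, s))          # particle locations in the u1 subspace
w = np.full(Q, 1.0 / Q)                   # prior particle weights
m2 = rng.standard_normal((Q, n2))         # conditional means of u2 given each particle
R2 = np.eye(n2)                           # shared conditional covariance (assumption)

v = rng.standard_normal(M)                # one observation

# Kalman quantities shared across particles, since R2 is identical for all j here:
S = G2 @ R2 @ G2.T + R0                   # innovation covariance
K = R2 @ G2.T @ np.linalg.inv(S)          # Kalman gain
R2_post = (np.eye(n2) - K @ G2) @ R2      # posterior conditional covariance

# Per-particle innovation includes the u1-dependent part of the predicted observation:
innov = v - u1 @ G1.T - m2 @ G2.T         # shape (Q, M)
m2_post = m2 + innov @ K.T                # Kalman-updated conditional means

# Weight update: Gaussian likelihood of each particle's innovation, then normalize.
Sinv = np.linalg.inv(S)
logl = -0.5 * np.einsum('qi,ij,qj->q', innov, Sinv, innov)
w_post = w * np.exp(logl - logl.max())    # subtract max for numerical stability
w_post /= w_post.sum()
```

Because the conditional covariance is shared across particles in this sketch, the gain and posterior covariance are computed only once; only the means and weights vary per particle, which is what makes the mixture update inexpensive.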
Here we consider algorithms for filtering turbulent dynamical systems with the form
du/dt = L u + B(u, u) + F(t) + σ(t) Ẇ(t),   [6]
where B(u, u) involves energy-conserving nonlinear interactions with u · B(u, u) = 0 whereas L is a linear operator including damping as well as anisotropic physical effects; many turbulent dynamical systems in geosciences and engineering have the structure shown in [6] (23, 24).
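The energy-conserving property of the quadratic term in [6] can be checked numerically. The sketch below builds a random quadratic interaction tensor, removes its fully symmetric part, and verifies that the resulting B satisfies u · B(u, u) = 0; the tensor construction is a generic illustration, not a model from the paper.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(2)
N = 8

# Build a random quadratic interaction tensor and remove its fully symmetric
# part; the remainder defines an energy-conserving B with u . B(u, u) = 0,
# because u . B(u, u) contracts the tensor against the symmetric u ⊗ u ⊗ u.
A = rng.standard_normal((N, N, N))
A_sym = sum(np.transpose(A, p) for p in permutations(range(3))) / 6.0
A_ec = A - A_sym

def B(u):
    # Quadratic, energy-conserving nonlinear interaction (cf. the form in [6])
    return np.einsum('ijk,j,k->i', A_ec, u, u)

u = rng.standard_normal(N)
energy_flux = u @ B(u)   # vanishes up to round-off
```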
Application to Multiscale Filtering of a Slow–Fast System.
A typical direct use of the above formalism is briefly sketched. In many applications in the geosciences, there is a fixed subspace u1 representing the slow (vortical) waves and a fixed subspace u2 representing the fast (gravity) waves, with observations of pressure and velocity, for example, which naturally mix the slow and fast waves at each analysis step (15, 25–27). A small parameter ε characterizes the ratio of the fast timescale to the slow timescale. A well-known formalism for stochastic mode reduction has been developed for such multiscale systems (28, 29) with a simplified forecast model valid in the limit ε → 0 with the form
p(u1, u2) = p(u1) N(ū2, R2)(u2),   [7]
where p(u1) satisfies a reduced Fokker–Planck equation for the slow variables alone and R2 is a suitable background covariance matrix for the fast variables. A trivial application of Proposition 1 guarantees that there is a simplified algorithm consisting of a particle filter for the slow variables alone, updated at each analysis step through Proposition 1, which mixes the slow and fast components through the observations. More sophisticated multiscale filtering algorithms with this flavor designed to capture unresolved features of turbulence have been developed recently in ref. 30.
Blended Statistical Nonlinear Forecast Models.
Whereas the above formulation can be applied to hybrid particle filters with conditional Kalman filters on fixed subspaces, defined by u1 and u2 as sketched above, a more attractive idea is to use statistical forecast models that adaptively change these subspaces as time evolves in response to the uncertainty, without a separation of timescales. Recently Sapsis and Majda (16–19) developed nonlinear statistical forecast models of this type, the quasilinear Gaussian dynamical orthogonality (QG-DO) method and the more sophisticated modified quasilinear Gaussian dynamical orthogonality (MQG-DO) method, for turbulent dynamical systems with the structure shown in [6]. It is shown in refs. 16 and 17, respectively, that both QG-DO and MQG-DO have significant skill for uncertainty quantification for turbulent dynamical systems, with MQG-DO superior to QG-DO although more calibration is needed for MQG-DO in the statistical steady state.
The starting point for these nonlinear forecast models is a quasilinear Gaussian (QG) statistical closure (16, 18) for [6] or a more statistically accurate modified quasilinear Gaussian (MQG) closure (17, 18); the QG and MQG forecast models incorporate only Gaussian features of the dynamics given by mean and covariance. The more sophisticated QG-DO and MQG-DO methods have an adaptively evolving lower-dimensional subspace where non-Gaussian features are tracked accurately and allow for the exchange of statistical information between the evolving subspace with non-Gaussian statistics and the evolving Gaussian statistical background. We illustrate the simpler QG-DO scheme below and refer to refs. 17 and 18 for the details of the more sophisticated MQG-DO scheme. The QG-DO statistical forecast model is the following algorithm.
The solution with its stochastic subspace is represented as
u(t) = ū(t) + Σ_{i=1}^{s} Y_i(t; ω) e_i(t),   [8]
where e_i(t), 1 ≤ i ≤ s, are time-dependent orthonormal modes and s is the reduction order. The modes e_i and the stochastic coefficients Y_i evolve according to the dynamical orthogonality (DO) condition (31). In particular, the equations for the QG-DO scheme are as follows.
Equation for the mean.
The equation for the mean is obtained by averaging the original system equation in [6],
| [9] |
Equation for the stochastic coefficients and the modes.
Both the stochastic coefficients Y_i and the modes e_i evolve according to the DO equations (31). The coefficient equations are obtained by a direct Galerkin projection and the DO condition
| [10] |
Moreover, the modes evolve according to the equation obtained by stochastic projection of the original equation onto the DO coefficients
| [11] |
Equation for the covariance.
The equation for the covariance starts with the exact equation involving third-order moments with approximated nonlinear fluxes
| [12] |
where the nonlinear fluxes are computed using reduced-order information from the DO subspace
| [13] |
The last expression is obtained by computing the nonlinear fluxes inside the subspace and projecting those back to the full N-dimensional space.
The QG statistical forecast model is the special case with s = 0, so that [9] and [12], with the DO subspace terms omitted, are approximate statistical dynamical equations for the mean and covariance alone. The MQG-DO algorithm is a more sophisticated variant based on MQG with significantly improved statistical accuracy (16–18).
Blended Particle Filter Algorithms
The QG-DO and MQG-DO statistical forecast models are solved by a particle filter or Monte Carlo simulation of the stochastic coefficients Y_i, 1 ≤ i ≤ s, from [8] through the equations in [10], coupled to the deterministic equations in [9] and [12] for the statistical mean and covariance and the DO basis equations in [11]. Let E denote the s-dimensional stochastic subspace in the forecast; at any analysis time, u1 denotes the projection of u to E. Complete the dynamical basis E with an orthonormal basis E⊥ and define u2 at any analysis time as the projection on E⊥ (the flexibility in choosing E⊥ can be exploited eventually). Thus, forecasts by QG-DO or MQG-DO lead to the following data from the forecast statistics at each analysis time: (i) a particle approximation for the marginal distribution
p(u1) ≈ Σ_{j=1}^{Q} p_j δ(u1 − u1,j),   [14]
and (ii) the mean and the covariance matrix
| [15] |
where R is the covariance matrix in the basis E⊥.
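The assembly of full-space forecast statistics from the two pieces, particle statistics in the subspace and Gaussian statistics in its orthonormal complement, can be sketched as follows. The dimensions, the randomly generated bases and moments, and the assumption of no cross-covariance between the two pieces are illustrative, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N, s, Q = 10, 3, 500

# Hypothetical DO-type forecast data: an orthonormal basis E for the
# s-dimensional stochastic subspace, particle samples of the coefficients,
# and Gaussian statistics (mean, covariance) in the orthogonal complement.
E, _ = np.linalg.qr(rng.standard_normal((N, s)))      # columns span the u1 subspace
coeffs = rng.standard_normal((Q, s))                   # particle coefficients u1^j
w = np.full(Q, 1.0 / Q)                                # particle weights

# Complete E to a full orthonormal basis; the trailing columns give E_perp
# (the flexibility in choosing E_perp noted in the text is not exploited here).
full, _ = np.linalg.qr(np.hstack([E, rng.standard_normal((N, N - s))]))
E_perp = full[:, s:]

mean_perp = rng.standard_normal(N - s)                 # Gaussian mean in E_perp
R_perp = np.eye(N - s)                                 # covariance in the basis E_perp

# Assemble the full-space forecast mean and covariance from the two pieces,
# assuming zero cross-covariance between subspace and complement:
mean_u1 = w @ coeffs
mean_full = E @ mean_u1 + E_perp @ mean_perp
c = coeffs - mean_u1
cov_u1 = (w[:, None] * c).T @ c                        # weighted sample covariance
cov_full = E @ cov_u1 @ E.T + E_perp @ R_perp @ E_perp.T
```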
To apply Proposition 1 in the analysis step, we need to find a probability density with the form in [1] recovering the statistics in [14] and [15]. Below, for simplicity in exposition, we assume linear observations in [3]. We seek this probability density in the form
p(u1, u2) = Σ_{j=1}^{Q} p_j δ(u1 − u1,j) N(ū2,j, R2)(u2),   [16]
so that [14] is automatically satisfied by [16] whereas the conditional means ū2,j and the covariance R2 need to be chosen to satisfy [15]; note that R2 is a constant matrix independent of j. Let ⟨g⟩ denote the expected value of g with respect to the particle distribution so that, for example, ⟨u1⟩ = Σ_j p_j u1,j and ⟨ū2⟩ = Σ_j p_j ū2,j, and let u1,j′, ū2,j′ denote fluctuations about this mean. Part I of the blended filter algorithm consists of two steps:
- Solve the following linear system to find the conditional means ū2,j:
| [17] |
Note that this is an underdetermined system for a sufficiently large number of particles, Q; the bar notation is suppressed here.
- Compute the candidate covariance matrix R2 from the constraint in [18].
Any solution of [17] and [18] with R2 positive definite automatically guarantees that [14] and [15] are satisfied by the probability density in [16]. Note that R2 is a constant matrix that does not depend on the individual particle weights. This crucial fact makes part II inexpensive.
Part II of the analysis step for the blended particle filter algorithm is an application of Proposition 1 to [16] (details in SI Text).
- Use Kalman filter updates in the u2 subspace to get the posterior means and covariances, û2,j and R̂2,j.
- Update the particle weights in the u1 subspace by the innovation likelihood, with
| [19] |
Normalize the weights and resample.
- Get the posterior mean and covariance matrix from the posterior particle statistics.
This completes the description of the blended particle filter algorithms.
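The weight update and resampling steps of part II can be sketched generically. The scalar innovations and their variance below are placeholders rather than the formula in [19], and systematic resampling is one standard choice among several.

```python
import numpy as np

rng = np.random.default_rng(4)

def systematic_resample(weights, rng):
    # Systematic resampling: one uniform offset, Q evenly spaced positions
    # compared against the cumulative weight distribution.
    Q = len(weights)
    positions = (rng.random() + np.arange(Q)) / Q
    idx = np.searchsorted(np.cumsum(weights), positions)
    return np.minimum(idx, Q - 1)          # guard against round-off at the top end

# Hypothetical weight update: multiply prior weights by a Gaussian likelihood
# of each particle's scalar innovation, then normalize and resample.
Q = 1000
w_prior = np.full(Q, 1.0 / Q)
innovations = rng.standard_normal(Q)       # placeholder innovations, one per particle
var = 0.5                                  # placeholder innovation variance
loglik = -0.5 * innovations ** 2 / var
w = w_prior * np.exp(loglik - loglik.max())   # subtract max for numerical stability
w /= w.sum()

idx = systematic_resample(w, rng)          # indices of the resampled particles
```

Working with log likelihoods and subtracting the maximum before exponentiating avoids underflow when innovations are large, which matters in the sparse high-quality observation regimes discussed later.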
Realizability in the Blended Filter Algorithms.
A subtle issue in implementing the blended filter algorithms occurs in [17] and [18] from part I of the analysis step; a particular solution of the linear system in [17] may yield a candidate covariance matrix R2, defined in [18], that is not positive definite; i.e., realizability is violated. Here we exploit the fact that the subspace for particle filtering is low dimensional so that in general the number of particles, Q, is much larger than the number of constraints and the linear system in [17] is strongly underdetermined. The empirical approach that we use here is to seek the least-squares solution of [17] that minimizes the weighted norm,
| [20] |
This solution is given as the standard least-squares solution through the pseudoinverse for an auxiliary variable. Such a least-squares solution guarantees that the trace of R2, defined in [18], is maximized; however, this criterion still does not guarantee that R2 is realizable. To help guarantee realizability, we add extra diagonal inflation terms to R2, with inflation coefficients chosen just large enough to lift any negative eigenvalues of the candidate covariance, together with a small positive number chosen to avoid numerical errors.
We find that this empirical approach works very well in practice, as shown in subsequent sections. An even simpler but cruder variance inflation algorithm can also be used. Further motivation for the constrained least-squares solution minimizing [20] comes from the maximum entropy principle (24); the least biased probability density satisfying the constraints in [17] formally maximizes the entropy; i.e.,
| [21] |
The high-dimensional nonlinear optimization problem in [21] is too expensive to solve directly, but a small-amplitude expansion of [21] becomes a weighted least-squares optimization problem for a new variable constrained by [17]. For a suitable choice of the weights, the least-squares solution from [20] is recovered. Whereas the maximum entropy algorithm has a nice theoretical basis, it requires the singular value decomposition of the large covariance matrix, and the solution in [20] avoids this expensive procedure. Incidentally, the maximum entropy principle alone does not guarantee realizability.
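A minimal sketch of the two numerical ingredients discussed here, the minimum-norm pseudoinverse solution of an underdetermined system and diagonal inflation to restore positive definiteness, using generic random data rather than the actual constraint matrices from [17] and [18]:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical underdetermined system (cf. [17]): far fewer equations than
# unknowns, so the pseudoinverse gives the minimum-norm exact solution.
m, n = 4, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = np.linalg.pinv(A) @ b
residual = A @ x - b                  # essentially zero for full row rank A

# Realizability repair on a candidate covariance: symmetrize, then add just
# enough diagonal inflation to lift negative eigenvalues (cf. the text above).
C = rng.standard_normal((n, n))
C = 0.5 * (C + C.T)                   # symmetric but generally indefinite
lam_min = np.linalg.eigvalsh(C).min()
eps = 1e-8                            # small number to avoid numerical errors
inflation = max(0.0, -lam_min) + eps
C_real = C + inflation * np.eye(n)    # positive definite by construction
```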
Numerical Tests of the Blended Particle Filters
Major challenges for particle filters for large-dimensional turbulent dynamical systems involve capturing substantial non-Gaussian features of the partially observed turbulent dynamical system as well as skillful filtering for spatially sparse infrequent high-quality observations of the turbulent signal (chap. 15 of ref. 15). In this last setting, the best ensemble filters require extensive tuning to avoid catastrophic filter divergence and quite often cheap filters based on linear stochastic forecast models are more skillful (15). Here the performance of the blended particle filter is assessed for two stringent test regimes, elucidating the above challenges, for the Lorenz 96 (L-96) model (32, 33); the L-96 model is a 40-dimensional turbulent dynamical system that is a popular test model for filter performance for turbulent dynamical systems (15). The L-96 model is a discrete periodic model given by
du_j/dt = (u_{j+1} − u_{j−2}) u_{j−1} − u_j + F,   j = 0, 1, …, J − 1,   [22]
with J = 40, periodic boundary conditions u_{j+J} = u_j, and F the deterministic forcing parameter. The model is designed to mimic baroclinic turbulence in the midlatitude atmosphere, with the effects of energy-conserving nonlinear advection and dissipation represented by the first two terms in [22]. For sufficiently strong constant forcing values such as 5, 8, or 16, the L-96 model is a prototype turbulent dynamical system that exhibits features of weakly chaotic turbulence (F = 5), strongly chaotic turbulence (F = 8), and strong turbulence (F = 16) (15, 24, 34). Because the L-96 model is translation invariant, 20 discrete Fourier modes can be used to study its statistical properties. In all filtering experiments described below with the blended particle filters, we use s = 5 with 10,000 particles. Thus, non-Gaussian effects are captured in a 5-dimensional subspace through a particle filter interacting with a low-order Gaussian statistical forecast model in the remaining 35 dimensions. The numbers of positive Lyapunov exponents on the attractor for the forcing values considered here are 9, 13, and 16, respectively (34), so the 5-dimensional adaptive subspace with particle filtering can contain at most half of the unstable directions on the attractor; also, non-Gaussian statistics are most prominent in the weakly turbulent regime, F = 5, with nearly Gaussian statistics for F = 16, the strongly turbulent regime, and intermediate statistical behavior for F = 8.
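The L-96 model in [22] is simple to integrate directly. The sketch below uses a standard fourth-order Runge–Kutta step (a common choice, though the paper does not specify its integrator) and also checks that the advection term alone conserves energy.

```python
import numpy as np

# Lorenz 96 model (cf. [22]):  du_j/dt = (u_{j+1} - u_{j-2}) u_{j-1} - u_j + F
def l96_rhs(u, F):
    # np.roll implements the periodic boundary conditions.
    return (np.roll(u, -1) - np.roll(u, 2)) * np.roll(u, 1) - u + F

def rk4_step(u, F, dt):
    # Classical fourth-order Runge-Kutta step.
    k1 = l96_rhs(u, F)
    k2 = l96_rhs(u + 0.5 * dt * k1, F)
    k3 = l96_rhs(u + 0.5 * dt * k2, F)
    k4 = l96_rhs(u + dt * k3, F)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

N, F, dt = 40, 8.0, 0.005
rng = np.random.default_rng(6)
u = F + 0.01 * rng.standard_normal(N)   # small perturbation of the uniform state

# The quadratic advection term alone conserves energy:  u . B(u) = 0
advection = (np.roll(u, -1) - np.roll(u, 2)) * np.roll(u, 1)
energy_flux = u @ advection

for _ in range(2000):                   # integrate 10 time units into turbulence
    u = rk4_step(u, F, dt)
```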
Capturing Non-Gaussian Statistics Through Blended Particle Filters.
As mentioned above, the L-96 model in the weakly turbulent regime with F = 5 has nine positive Lyapunov exponents on the attractor, and the two leading Fourier modes are the two leading empirical orthogonal functions (EOFs) that contain most of the energy. As shown in Fig. 1, where the probability density functions (pdfs) of the absolute values of these modes are plotted, there is significant non-Gaussian behavior in these modes because their pdfs are far from a Rayleigh distribution; see SI Text for the scatter plot of their joint distribution exhibiting strongly non-Gaussian behavior. For the filtering experiments below, sparse spatial observations are used, with every fourth grid point observed with moderate observational noise variance and moderate observation frequency compared with the decorrelation time, 4.4. We tested the QG-DO and MQG-DO blended filters as well as the Gaussian MQG filter and the ensemble adjustment Kalman filter (EAKF) with optimally tuned inflation and localization. All four filters were run for many assimilation steps, and forecast pdfs for the two leading modes as well as forecast error pdfs for their real parts are plotted in Fig. 1. The blended MQG-DO filter accurately captures the non-Gaussian features and has the tightest forecast error pdf, the Gaussian MQG filter outperforms the blended QG-DO filter with a tighter forecast error distribution, and EAKF yields incorrect Gaussian distributions with the largest forecast error spread. The rms error and pattern correlation plots reported in SI Text confirm the above behavior, as do the scatter plots of the joint pdf reported there. This example illustrates that a Gaussian filter like MQG with an accurate statistical forecast operator can have a sufficiently tight forecast error pdf and be a very good filter yet can fail to capture significant non-Gaussian features accurately.
On the other hand, for the QG-DO blended algorithm in this example, the larger forecast errors of the QG dynamics compared with MQG swamp the effect of the blended particle filter. However, all three methods significantly improve upon the performance of EAKF with many ensemble members.
Fig. 1.
Comparison of pdfs of the absolute values of the two leading Fourier modes (Upper) and pdfs of the forecast error (Lower; only real parts are shown) captured by different filtering methods.
Filter Performance with Sparse Infrequent High-Quality Observations.
Demanding tests for filter performance are the regimes of spatially sparse, infrequent-in-time, high-quality (low observational noise) observations for a strongly turbulent dynamical system. Here the performances of the blended MQG-DO and QG-DO filters as well as the MQG filter are assessed in this regime. For the strongly chaotic regime, F = 8, for the L-96 model, observations are taken at every fourth grid point with small observational noise variance and an observation time nearly equal to the decorrelation time, 0.33; the performance of EAKF as well as that of the rank histogram and maximum entropy particle filters has already been assessed for this difficult test problem in figure 15.14 of ref. 15, with large intervals in time with filter divergence (rms errors much larger than 1) for all three methods. A similar test problem for the strongly turbulent regime, F = 16, with spatial observations at every fourth grid point and an observation time long compared with the decorrelation time, 0.12, is used here. In all examples with L-96 tested here, we find that the MQG-DO algorithm with the approximation described in the paragraph below Eq. 20 is always realizable and is the most robust accurate filter; on the other hand, for the QG-DO filtering algorithm, the performance of the blended algorithm with crude variance inflation significantly outperforms the basic QG-DO blended algorithm due to incorrect energy transfers in the QG forecast models for the long forecast times used here (SI Text). Fig. 2 reports the filtering performance of the MQG-DO and QG-DO blended filters and the MQG Gaussian filter in these tough regimes through the rms error and pattern correlation. There are no strong filter divergences with the MQG-DO and QG-DO blended filters for both F = 8 and F = 16, in contrast to other methods as shown in figure 15.14 of ref. 15. The much cheaper MQG filter exhibits a long initial regime of filter divergence but eventually settles down to comparable filter performance to that of the blended filters.
The blended MQG-DO filter is the most skillful robust filter over all these strongly turbulent regimes as the observational noise and observation time are varied; see the examples in SI Text.
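The two skill measures used in Fig. 2, rms error and pattern correlation, have simple standard definitions that can be sketched directly; the synthetic truth and estimate below are illustrative placeholders, not filter output.

```python
import numpy as np

def rms_error(truth, estimate):
    # Root-mean-square error over the spatial grid at one analysis time;
    # values much larger than 1 signal filter divergence, as noted in the text.
    return np.sqrt(np.mean((estimate - truth) ** 2))

def pattern_correlation(truth, estimate):
    # Centered spatial correlation between the truth and the estimate.
    a = truth - truth.mean()
    b = estimate - estimate.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

rng = np.random.default_rng(7)
truth = rng.standard_normal(40)                     # placeholder 40-point truth
estimate = truth + 0.1 * rng.standard_normal(40)    # a skillful synthetic estimate
rmse = rms_error(truth, estimate)
pc = pattern_correlation(truth, estimate)
```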
Fig. 2.
Comparison of rms errors (Left) and pattern correlations (Right) between different filtering methods in the F = 8 regime (Top) and the F = 16 regime (Middle and Bottom) with sparse infrequent high-quality observations.
Concluding Discussion
Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining phase space are introduced here. These blended particle filters have been developed here through a mathematical formalism involving conditional Gaussian mixtures (35) combined with statistically nonlinear forecast models developed recently (16–18) with high skill for uncertainty quantification, which are compatible with this structure. Stringent test cases for filtering involving the 40-dimensional L-96 model with a 5-dimensional adaptive subspace for nonlinear filtering in various regimes of chaotic dynamics with at least nine positive Lyapunov exponents are used here. These test cases demonstrate the high skill of these blended filters in capturing both non-Gaussian dynamical features and crucial nonlinear statistics for accurate filtering in extreme regimes with sparse infrequent high-quality observations. The formalism developed is also useful for multiscale filtering of turbulent systems, and a simple application has been sketched here.
Acknowledgments
The research of A.J.M. is partially supported by Office of Naval Research (ONR) grants, ONR-Multidisciplinary University Research Initiative 25-74200-F7112, and ONR-Departmental Research Initiative N0014-10-1-0554. D.Q. is supported as a graduate research assistant in the first grant and the research of T.P.S. was partially supported through the second grant, when T.P.S. was a postdoctoral fellow at the Courant Institute.
Footnotes
The authors declare no conflict of interest.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1405675111/-/DCSupplemental.
References
- 1. Bain A, Crisan D. Fundamentals of Stochastic Filtering, Stochastic Modeling and Applied Probability. Vol 60. New York: Springer; 2009.
- 2. Moral PD, Jacod J. Interacting particle filtering with discrete observations. In: Doucet A, de Freitas N, Gordon N, editors. Sequential Monte Carlo Methods in Practice. New York: Springer; 2001. pp. 43–75.
- 3. Bengtsson T, Bickel P, Li B. Curse-of-dimensionality revisited: Collapse of the particle filter in very large scale systems. In: Nolan D, Speed T, editors. IMS Collections in Probability and Statistics: Essays in Honor of David A. Freedman. Vol 2. Beachwood, OH: Inst Math Stat; 2008. pp. 316–334.
- 4. Bickel P, Li B, Bengtsson T. Sharp failure rates for the bootstrap particle filter in high dimensions. In: Clarke B, Ghosal S, editors. IMS Collections: Pushing the Limits of Contemporary Statistics: Contributions in Honor of J. K. Ghosh. Vol 3. Beachwood, OH: Inst Math Stat; 2008. pp. 318–329.
- 5. Berliner LM, Milliff RF, Wikle CK. Bayesian hierarchical modeling of air-sea interaction. J Geophys Res Oceans. 2003;108:3104–3120.
- 6. Ghil M, Malanotte-Rizzoli P. Data assimilation in meteorology and oceanography. Adv Geophys. 1991;33:141–266.
- 7. Chorin AJ, Krause P. Dimensional reduction for a Bayesian filter. Proc Natl Acad Sci USA. 2004;101(42):15013–15017. doi: 10.1073/pnas.0406222101.
- 8. Farrell BF, Ioannou PJ. State estimation using a reduced-order Kalman filter. J Atmos Sci. 2001;58:3666–3680.
- 9. Anderson B, Moore J. Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall; 1979.
- 10. Evensen G. The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dyn. 2003;53:343–367.
- 11. Bishop CH, Etherton BJ, Majumdar SJ. Adaptive sampling with the ensemble transform Kalman filter, part I: Theoretical aspects. Mon Weather Rev. 2001;129:420–436.
- 12. Anderson JL. An ensemble adjustment Kalman filter for data assimilation. Mon Weather Rev. 2001;129:2884–2903.
- 13. Anderson JL. A local least squares framework for ensemble filtering. Mon Weather Rev. 2003;131:634–642.
- 14. Szunyogh I, et al. Assessing a local ensemble Kalman filter: Perfect model experiments with the National Centers for Environmental Prediction global model. Tellus Ser A Dyn Meteorol Oceanogr. 2005;57:528–545.
- 15. Majda AJ, Harlim J. Filtering Complex Turbulent Systems. Cambridge, UK: Cambridge Univ Press; 2012.
- 16. Sapsis TP, Majda AJ. Blended reduced subspace algorithms for uncertainty quantification of quadratic systems with a stable mean state. Physica D. 2013;258:61–76.
- 17. Sapsis TP, Majda AJ. Blending modified Gaussian closure and non-Gaussian reduced subspace methods for turbulent dynamical systems. J Nonlin Sci. 2013;23(6):1039–1071.
- 18. Sapsis TP, Majda AJ. A statistically accurate modified quasilinear Gaussian closure for uncertainty quantification in turbulent dynamical systems. Physica D. 2013;252:34–45.
- 19. Sapsis TP, Majda AJ. Statistically accurate low-order models for uncertainty quantification in turbulent dynamical systems. Proc Natl Acad Sci USA. 2013;110(34):13705–13710. doi: 10.1073/pnas.1313065110.
- 20. Lermusiaux PF. Uncertainty estimation and prediction for interdisciplinary ocean dynamics. J Comput Phys. 2006;217(1):176–199.
- 21. Sorenson HW, Alspach DL. Recursive Bayesian estimation using Gaussian sums. Automatica. 1971;7:465–479.
- 22. Hoteit I, Luo X, Pham DT. Particle Kalman filtering: A nonlinear Bayesian framework for ensemble Kalman filters. Mon Weather Rev. 2012;140:528–542.
- 23. Salmon R. Lectures on Geophysical Fluid Dynamics. Oxford, UK: Oxford Univ Press; 1998.
- 24. Majda AJ, Wang X. Nonlinear Dynamics and Statistical Theories for Basic Geophysical Flows. Cambridge, UK: Cambridge Univ Press; 2006.
- 25. Majda AJ. Introduction to PDEs and Waves for the Atmosphere and Ocean, Courant Lecture Notes in Mathematics. Vol 9. Providence, RI: Am Math Soc; 2003.
- 26. Gershgorin B, Majda AJ. A nonlinear test model for filtering slow-fast systems. Commun Math Sci. 2008;6:611–649.
- 27. Subramanian AC, Hoteit I, Cornuelle B, Miller AJ, Song H. Linear versus nonlinear filtering with scale-selective corrections for balanced dynamics in a simple atmospheric model. J Atmos Sci. 2012;69(11):3405–3419.
- 28. Majda AJ, Timofeyev I, Vanden-Eijnden E. A mathematical framework for stochastic climate models. Commun Pure Appl Math. 2001;54:891–974.
- 29. Majda AJ, Franzke C, Khouider B. An applied mathematics perspective on stochastic modelling for climate. Philos Trans R Soc A Math Phys Eng Sci. 2008;366:2427–2453. doi: 10.1098/rsta.2008.0012.
- 30. Grooms I, Lee Y, Majda AJ. Ensemble Kalman filters for dynamical systems with unresolved turbulence. J Comput Phys. 2014, in press.
- 31. Sapsis TP, Lermusiaux PF. Dynamically orthogonal field equations for continuous stochastic dynamical systems. Physica D. 2009;238:2347–2360.
- 32. Lorenz EN. Predictability: A problem partly solved. Proc Semin Predict. 1996;1(1):1–18.
- 33. Lorenz EN, Emanuel KA. Optimal sites for supplementary weather observations: Simulation with a small model. J Atmos Sci. 1998;55:399–414.
- 34. Abramov RV, Majda AJ. Blended response algorithms for linear fluctuation-dissipation for complex nonlinear dynamical systems. Nonlinearity. 2007;20(12):2793–2821.
- 35. Doucet A, Godsill S, Andrieu C. On sequential Monte Carlo sampling methods for Bayesian filtering. Stat Comput. 2000;10(3):197–208.