Abstract
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high-dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach on three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
Keywords: Approximate inference, Model comparison, Variational Bayes, EM, Laplace approximation, Free-energy, SDE, Nonlinear stochastic dynamical systems, Nonlinear state-space models, DCM, Kalman filter, Rauch smoother
1. Introduction
In nature, the most interesting dynamical systems are only observable through a complex (and generally non-invertible) mapping from the system’s states to some measurements. For example, we cannot observe the time-varying electrophysiological states of the brain but we can measure the electrical field it generates on the scalp using electroencephalography (EEG). Given a model of neural dynamics, it is possible to estimate parameters of interest (such as initial conditions or synaptic connection strengths) using probabilistic methods (see e.g. [1], or [2]). However, incomplete or imperfect model specification can result in misleading parameter estimates, particularly if random or stochastic forces on the system’s states are ignored [3]. Many dynamical systems are nonlinear and stochastic; for example neuronal activity is driven, at least partly, by physiological noise (see e.g. [4,5]). This makes recovery of both neuronal dynamics and the parameters of their associated models a challenging focus of ongoing research (see e.g. [6,7]). Another example of stochastic nonlinear system identification is weather forecasting, where model inversion allows predictions of hidden-states from meteorological models (e.g. [8]). This class of problems is found in many applied research fields such as control engineering, speech recognition, meteorology, oceanography, ecology and quantitative finance. In brief, the identification and prediction of stochastic nonlinear dynamical systems have to cope with subtle forms of uncertainty arising from: (i) the complexity of the dynamical behaviour of the system, (ii) our lack of knowledge about its structure and (iii) our inability to directly measure its states (hence the name “hidden-states”). This speaks to the importance of probabilistic methods for identifying nonlinear stochastic dynamic models (see [9] for a “data assimilation” perspective).
Most statistical inference methods for stochastic dynamical systems rely on a state-space formulation, i.e. the specification of two densities: the likelihood, derived from an observation model, and a first-order Markovian transition density, which embodies prior beliefs about the evolution of the system [10]. The nonlinear filtering and smoothing problems have already been solved using a Bayesian formulation by Kushner [11] and Pardoux [12] respectively. These authors show that the posterior densities on hidden-states given the data so far (filtering) or all the data (smoothing) obey stochastic partial differential (Kushner–Pardoux) equations. However:
• They suffer from the curse of dimensionality; i.e. an exponential growth of computational complexity with the number of hidden-states [13]. This is why most approximate inversion techniques are variants of the simpler Kalman filter [14,15] (see also [10,16]). Sampling-based approximations to the posterior density (particle filters, see e.g. [58] or [17]) have also been developed, but these also suffer from the curse of dimensionality.
• The likelihood and the transition densities depend on the potentially unknown parameters and hyperparameters of the underlying state-space model. These quantities also have to be estimated, which induces a hierarchical inversion problem for which there is no generally accepted solution (see [18] for an approximate maximum-likelihood approach to this problem). This is due to the complexity (e.g. multimodality and high-order dependencies) of the joint posterior density over hidden-states, parameters and hyperparameters. The hierarchical structure of the generative model prevents us from using the Kushner–Pardoux equations or Kalman-filter based approximations. A review of modified Kalman filters for joint estimation of model parameters and hidden-states can be found in Wan [19].
These issues make variational Bayesian (VB) schemes [20–23] appealing candidates for joint estimation of states, parameters and hyperparameters. However, somewhat surprisingly, only a few VB methods have been proposed to finesse this triple estimation problem for nonlinear systems. These include:
• Roweis and Ghahramani [24] propose an Expectation-Maximization algorithm that yields an approximate posterior density over hidden-states and maximum-likelihood estimates of the parameters.
• Valpola and Karhunen [25] propose a VB method for unsupervised extraction of dynamic processes from noisy data. The nonlinear mappings in the model are represented using multilayer perceptron networks. This dynamical blind deconvolution approach generalizes [24] by deriving an approximate posterior density over the mapping parameters. However, as in [24], the method cannot embed prior knowledge about the functional form of either the observation or the evolution process.
• Friston et al. [7] present a VB inversion scheme for nonlinear stochastic dynamical models in generalized coordinates of motion. The approach rests on formulating the free-energy optimization dynamically (in generalized coordinates) and furnishes a continuous analogue to extended Kalman smoothing algorithms. Unlike previous schemes, the algorithm can deal with serially correlated state-noise and can optimize a joint posterior density on all unknown quantities.
Despite the advances in model inversion described in these papers, there remain some key outstanding issues. First, the difficult problem of time-series prediction, given the (inferred) structure of the system, has not been addressed (see [26] for an elegant Gaussian process solution). Second, no attempt has been made to assess the statistical efficiency of the proposed VB estimators for nonlinear systems (see [27] for a study of the asymptotic behaviour of VB estimators for conjugate-exponential models). Third, there has been no attempt to optimize the form or structure of the state-space model using approximate Bayesian model comparison.
In this paper, we present a VB approach for approximating the posterior density over hidden-states and model parameters of stochastic nonlinear dynamic models. This is important because it allows one to infer the hidden-states causing data, the parameters causing the dynamics of hidden-states and any non-controlled exogenous input to the system, given observations. Critically, we can make inferences even when both the observation and evolution functions are nonlinear. Alternatively, this approach can be viewed as an extension of VB inversion of static models (e.g. [28]) to invert nonlinear state-space models. We also extend the VB scheme to approximate both the predictive density (on hidden-states and measurement space) and the sojourn density (i.e. the stationary distribution of the Markov chain) that summarizes long-term behaviour [29].
In brief, model inversion entails optimizing an approximate posterior density that is parameterized by its sufficient statistics. This density is derived by updating the sufficient statistics using an iterative coordinate ascent on a free-energy bound on the marginal likelihood. We demonstrate the performance of this VB inference scheme when inverting (and predicting) stochastic variants of chaotic dynamic systems.
This paper comprises three sections. In the first, we review the general problem of model inversion and comparison in a variational Bayesian framework. More precisely, this section describes the extension of the VB approach to non-Gaussian posterior densities, under the Laplace approximation. The second section demonstrates the VB-Laplace update rules for a specific yet broad class of generative models, namely: stochastic dynamic causal models (see [1] for a Bayesian treatment of deterministic DCMs). It also provides a computationally efficient alternative to the standard tool for long-term prediction (the stationary or sojourn density), based upon an approximation to the predictive density. The third section provides an evaluation of the method’s capabilities in terms of accuracy, model comparison, self-consistency and prediction, using Monte Carlo simulations from three stochastic nonlinear dynamical systems. In particular, we compare the VB approach to standard extended Kalman filtering, which is used routinely in nonlinear filtering applications. We also include results providing evidence for the asymptotic efficiency of the VB estimator in this context. Finally, we discuss the properties of the VB approach.
2. Approximate variational Bayesian inference
2.1. Variational learning
To interpret any observed data, with a view to making predictions based upon them, we need to select the best model, i.e. the one that provides formal constraints on the way those data were generated and will be generated in the future. This selection can be based on Bayesian probability theory to choose among several models in the light of data. This necessarily involves evaluating the marginal likelihood; i.e. the plausibility $p(y\mid m)$ of observed data $y$ given model $m$:

$$p(y\mid m)=\int p(y,\vartheta\mid m)\,d\vartheta \tag{1}$$
where the generative model $m$ is defined in terms of a likelihood $p(y\mid\vartheta,m)$ and a prior on the model parameters, $p(\vartheta\mid m)$, whose product yields the joint density by Bayes rule:

$$p(y,\vartheta\mid m)=p(y\mid\vartheta,m)\,p(\vartheta\mid m) \tag{2}$$
The marginal likelihood or evidence is required to compare different models. Usually, the evidence is estimated by converting the difficult integration problem in Eq. (1) into an easier optimization problem, namely optimizing a free-energy bound on the log-evidence. This bound is constructed using Jensen’s inequality and is induced by an arbitrary density $q(\vartheta)$ [21]:

$$\ln p(y\mid m)\;\geq\;F(q)=\int q(\vartheta)\,\ln\frac{p(y,\vartheta\mid m)}{q(\vartheta)}\,d\vartheta \tag{3}$$
The free-energy comprises an energy term $\langle\ln p(y,\vartheta\mid m)\rangle_q$ and an entropy term $-\langle\ln q(\vartheta)\rangle_q$. The free-energy is a lower bound on the log-evidence because the Kullback–Leibler cross-entropy or divergence between the arbitrary and posterior densities is non-negative. Maximizing the free-energy with respect to $q$ minimizes the divergence, rendering the arbitrary density an approximate posterior density.

To make this maximization easier, one usually assumes that $q$ factorizes into approximate marginal posterior densities over subsets $\vartheta_i$ of the parameters:

$$q(\vartheta)=\prod_i q(\vartheta_i) \tag{4}$$
In statistical physics this is called a mean-field approximation [30]. This approximation replaces stochastic dependencies between the partitioned model variables by deterministic relationships between the sufficient statistics of their approximate marginal posterior densities (see [31] and below).
Under the mean-field approximation it is straightforward to show that the approximate marginal posterior densities satisfy the following set of equations [32]:
$$q(\vartheta_i)=\frac{1}{Z_i}\exp\big(I(\vartheta_i)\big),\qquad I(\vartheta_i)=\big\langle\ln p(y,\vartheta\mid m)\big\rangle_{\prod_{j\neq i}q(\vartheta_j)} \tag{5}$$

where the expectation is taken under the approximate marginal posterior densities of the other partitions, and $Z_i$ is a normalisation constant (i.e., a partition function). We will call $I(\vartheta_i)$ the variational energy. If the integral in Eq. (5) is analytically tractable (e.g., through the use of conjugate priors) the above Boltzmann equation can be used as an update rule for the sufficient statistics. Iterating these updates then provides a simple deterministic optimization of the free-energy with respect to the approximate posterior density.
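As a concrete illustration of these coupled updates, consider the classic conjugate example of a univariate Gaussian with unknown mean and precision; the toy model and all names below are our own illustrative choices (not the dynamic model treated later), and the closed-form updates are the standard conjugate results:

```python
import numpy as np

# Mean-field VB for y ~ N(mu, 1/tau), with q(mu, tau) = q(mu) q(tau).
# Priors: mu | tau ~ N(m0, 1/(b0*tau)), tau ~ Gamma(a0, rate=c0).
rng = np.random.default_rng(0)
y = rng.normal(1.5, 0.5, size=100)
n, ybar = y.size, y.mean()
m0, b0, a0, c0 = 0.0, 1.0, 1e-2, 1e-2

E_tau = 1.0                              # initial guess for <tau>
for _ in range(50):                      # iterate the coupled updates (Eq. (5))
    # q(mu) = N(m, s2), given the current <tau>
    m = (b0 * m0 + n * ybar) / (b0 + n)
    s2 = 1.0 / ((b0 + n) * E_tau)
    # q(tau) = Gamma(a, rate=c), given the current <mu> and <mu^2>
    a = a0 + 0.5 * (n + 1)
    E_mu2 = m**2 + s2
    c = c0 + 0.5 * (np.sum(y**2) - 2 * m * np.sum(y) + n * E_mu2
                    + b0 * (E_mu2 - 2 * m0 * m + m0**2))
    E_tau = a / c                        # sufficient statistic passed back to q(mu)

print(f"E[mu] = {m:.3f}, E[tau] = {E_tau:.3f}")   # ~1.5 and ~4
```

Each pass updates the sufficient statistics of one marginal given the current statistics of the other, exactly the deterministic coordinate ascent described above.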
2.2. The Laplace approximation
When inverting realistic generative models, nonlinearities in the likelihood function generally induce posterior densities that are not in the conjugate-exponential family. This means that there are an infinite number of sufficient statistics of the approximate posterior density; rendering the integral in Eq. (5) analytically intractable. The Laplace approximation is a useful and generic device, which can finesse this problem by reducing the set of sufficient statistics of the approximate posterior density to its first two moments. This means that each approximate marginal posterior density is further approximated by a Gaussian density:
$$q(\vartheta_i)=N(\mu_i,\Sigma_i) \tag{6}$$

where the sufficient statistics $\mu_i$ and $\Sigma_i$ encode the posterior mean and covariance of the $i$-th approximate marginal posterior density. This (fixed-form) Gaussian approximation is derived from a second-order truncation of the Taylor series of the variational energy [28]:

$$\mu_i=\arg\max_{\vartheta_i}\,I(\vartheta_i),\qquad \Sigma_i=-\left(\left.\frac{\partial^2 I(\vartheta_i)}{\partial\vartheta_i^2}\right|_{\mu_i}\right)^{-1} \tag{7}$$

Eq. (7) defines each variational energy and approximate marginal posterior density as an explicit function of the sufficient statistics of the other approximate marginal posterior densities. Under the VB-Laplace approximation, the iterative update of the sufficient statistics just requires the gradients and curvatures of the log-joint density $\ln p(y,\vartheta\mid m)$ with respect to the unknown variables of the generative model. We will refer to this approximate Bayesian inference scheme as the VB-Laplace approach.
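The Laplace device itself is easy to isolate: locate the mode of a log-density and invert the local curvature there. A generic numerical sketch (the test density is a hypothetical example of ours):

```python
import numpy as np
from scipy.optimize import minimize

def laplace_fit(neg_log_density, x0, eps=1e-5):
    """Fit N(mu, Sigma) to exp(-neg_log_density) by mode + curvature."""
    mu = minimize(neg_log_density, x0, method="BFGS").x
    d = mu.size
    H = np.zeros((d, d))                 # central finite-difference Hessian
    for i in range(d):
        for j in range(d):
            ei = np.eye(d)[i] * eps
            ej = np.eye(d)[j] * eps
            H[i, j] = (neg_log_density(mu + ei + ej) - neg_log_density(mu + ei - ej)
                       - neg_log_density(mu - ei + ej) + neg_log_density(mu - ei - ej)) / (4 * eps**2)
    return mu, np.linalg.inv(H)          # covariance = inverse curvature at the mode

# Example: log p(x) = -x^4/4 + x has its mode at x = 1 and curvature -3 there
mu, Sigma = laplace_fit(lambda x: 0.25 * x[0]**4 - x[0], np.array([0.5]))
print(mu, Sigma)                         # ~[1.0], ~[[1/3]]
```

In the scheme above the same operation is applied to each variational energy in turn, with analytic gradients and curvatures replacing the finite differences used here.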
2.3. Statistical Bayesian inference
The VB-Laplace approach above provides an approximation to the posterior density over any unknown model parameter $\vartheta_i$, given a set of observations $y$ and a generative model $m$. Since this density summarizes our knowledge (from both the data and priors), we could use it as the basis for posterior inference; however, these densities generally tell us more than we need to know. In this section, we briefly discuss standard approaches for summarizing such distributions; i.e. Bayesian analogues of common frequentist techniques of point estimation and confidence interval estimation. We refer the reader to [33] for further discussion.

To obtain a point estimate of any unknown we need to select a summary of its approximate posterior density, such as its mean or mode. These estimators can be motivated by different estimation losses, which, under the Laplace approximation, are all equivalent and reduce to the first-order posterior moment or posterior mean. The Bayesian analogue of a frequentist confidence interval is defined formally as follows: a $100\gamma\%$ posterior confidence interval for $\vartheta$ is a subset $C$ of the parameter space whose posterior probability is equal to $\gamma$; i.e., $P(\vartheta\in C\mid y,m)=\gamma$. Under the Laplace approximation, the optimal $100\gamma\%$ posterior confidence interval is the interval whose bounds are the $\frac{1-\gamma}{2}$ and $\frac{1+\gamma}{2}$ quantiles of $q(\vartheta)$ [34]. This means Bayesian confidence intervals are simple functions of the second-order posterior moment or posterior variance. We will demonstrate this later.
In what follows, we introduce the class of generative models we are interested in; i.e. hierarchical stochastic nonlinear dynamic models. We then present update equations for each approximate marginal posterior density, starting with the straightforward updates (the parameters of the generative model) and finishing with the computationally more demanding updates of the time-varying hidden-states. These are derived from a variational extended Kalman–Rauch marginalization procedure [10], which exploits the Laplace approximation above.
3. Variational Bayesian treatment of stochastic DCMs
In this section, we illustrate VB inference in the context of an important and broad class of generative models. These are stochastic dynamic causal models that combine nonlinear stochastic differential equations governing the evolution of hidden-states and a nonlinear observer function, to provide a nonlinear state-space model of data. Critically, neither the states nor the parameters of the state-space model functions are known. This means that the generative model is hierarchical, which induces a natural mean-field partition into states and parameters. This section describes stochastic DCMs and the update rules entailed by our VB-Laplace approach. In the next section, we illustrate the performance of the method in terms of model inversion, selection and time-series prediction using Monte Carlo simulations of chaotic systems.
3.1. Stochastic DCMs and state-space models
The generative model of a stochastic DCM rests on two equations: the observation equation, which links the observed data $y$ (comprising vector-samples $y_t$) to the hidden-states $x_t$, and a stochastic differential equation (SDE) governing the evolution of these hidden-states:

$$\begin{aligned} y_t &= g(x_t,\varphi)+\epsilon_t\\ dx &= f(x,\theta,u)\,dt+d\eta \end{aligned} \tag{8}$$

where $\varphi$ and $\theta$ are unknown parameters of the observation function $g$ and equation of motion (drift) $f$ respectively; $u$ are known exogenous inputs that drive the hidden-states or response; $\epsilon_t$ is a vector of random Gaussian measurement-noise; $f$ may, in general, be a function of the states and time; and $\eta$ denotes a Wiener process or state-noise that acts as a stochastic forcing term.

A Wiener process is a continuous zero-mean random process whose variance grows as time increases; i.e.

$$E[\eta_t]=0,\qquad V[\eta_t]\propto t \tag{9}$$
The continuous-time formulation of the SDE in Eq. (8) can also be written using the following (stochastic) integral formulation:
$$x_t=x_0+\int_0^t f(x_\tau,\theta,u_\tau)\,d\tau+\int_0^t d\eta_\tau \tag{10}$$
where the second integral is a stochastic integral, whose peculiar properties led to the derivation of Ito stochastic calculus [35]. Eq. (10) can be converted into a discrete-time analogue using local linearization or Euler–Maruyama methods, yielding the standard first-order autoregressive (AR(1)) form of nonlinear state-space models:

$$x_{t+1}=f_\Delta(x_t,\theta,u_t)+\eta_t \tag{11}$$

where $\eta_t$ is a Gaussian state-noise vector and $f_\Delta$ is the evolution function, given by:

$$f_\Delta(x)=\begin{cases} x+J_f^{-1}\big(e^{J_f\,\Delta t}-I\big)f(x)\\[2pt] x+\Delta t\,f(x) \end{cases} \tag{12}$$

Here $J_f$ is the Jacobian of the drift $f$ and $\Delta t$ is the time interval between samples. The first line corresponds to the local linearization method [36], and the second line instantiates the so-called Euler–Maruyama discretisation scheme [35]. The discrete-time variant of the state-space model yields the Gaussian likelihood and transition densities (where dependence on exogenous inputs and time is left implicit):

$$\begin{aligned} p(y_t\mid x_t,\varphi,\sigma,m) &= N\big(g(x_t,\varphi),\,\sigma^{-1}I\big)\\ p(x_{t+1}\mid x_t,\theta,\alpha,m) &= N\big(f_\Delta(x_t,\theta),\,\alpha^{-1}\Delta t\,I\big) \end{aligned} \tag{13}$$

where $\sigma$ (resp. $\alpha$) is the precision of the measurement-noise $\epsilon$ (resp. state-noise $\eta$). From Eqs. (10) and (13), we note that the effective state-noise precision is $\alpha\,\Delta t^{-1}$, where the transition density can be regarded as a prior that prescribes the likely evolution of hidden-states. From now on, we will assume the state-noise precision is independent of the hidden-states, which narrows the class of generative models we deal with (excluding, e.g., GARCH models [37], volatility models [38] and bilinear stochastic models [39]).
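Both discretisation schemes in Eq. (12) are straightforward to implement; the sketch below simulates a stochastic Lorenz path with the Euler–Maruyama step. The parameter values and the $\sqrt{\Delta t/\alpha}$ noise scaling are our assumptions (consistent with Eqs. (9) and (13), taking $\alpha$ as the state-noise precision per unit time):

```python
import numpy as np
from scipy.linalg import expm

def euler_maruyama_step(f, x, dt):
    return x + dt * f(x)

def local_linearization_step(f, jac, x, dt):
    J = jac(x)   # x + J^{-1} (e^{J dt} - I) f(x)
    return x + np.linalg.solve(J, (expm(J * dt) - np.eye(len(x))) @ f(x))

def lorenz(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([s * (x[1] - x[0]),
                     r * x[0] - x[1] - x[0] * x[2],
                     x[0] * x[1] - b * x[2]])

def lorenz_jac(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([[-s, s, 0.0],
                     [r - x[2], -1.0, -x[0]],
                     [x[1], x[0], -b]])

rng = np.random.default_rng(1)
dt, alpha = 0.01, 1e2                    # sampling interval, state-noise precision
x = np.array([1.0, 1.0, 28.0])
for _ in range(500):                     # AR(1) form of Eq. (11)
    eta = rng.normal(0.0, np.sqrt(dt / alpha), size=3)
    x = euler_maruyama_step(lorenz, x, dt) + eta

# One local-linearization step for comparison (usually more accurate for stiff f)
x_ll = local_linearization_step(lorenz, lorenz_jac, x, dt)
```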
3.1.1. The predictive and sojourn densities
The predictive density over the hidden-states is derived from the transition density given in Eq. (13) through the iterated Chapman–Kolmogorov equation:
$$p(x_{t+\tau}\mid x_t)=\int p(x_{t+\tau}\mid x_{t+\tau-1})\,p(x_{t+\tau-1}\mid x_t)\,dx_{t+\tau-1} \tag{14}$$
This exploits the Markov property of the hidden-states. Despite the Gaussian form of the transition density, nonlinearities in the evolution function render the predictive density non-Gaussian. In particular, nonlinear evolution functions can lead to multimodal predictive densities.
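A quick way to visualize this is to run the Chapman–Kolmogorov recursion by Monte Carlo; the bistable one-dimensional drift below is our own illustrative choice, not one of the systems studied later:

```python
import numpy as np

# Propagate an ensemble of states through a bistable drift: the initially
# unimodal predictive density splits into two modes near x = -1 and x = +1.
rng = np.random.default_rng(2)
drift = lambda x: x - x**3               # two stable fixed points at +/-1
dt, sd = 0.01, 0.25

particles = rng.normal(0.0, 0.05, size=5000)   # sharply peaked initial density
for _ in range(2000):
    particles += dt * drift(particles) + np.sqrt(dt) * sd * rng.normal(size=particles.size)

counts, edges = np.histogram(particles, bins=50)   # clearly bimodal histogram
```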
Under mild conditions, it is known that nonlinear stochastic systems as in Eq. (8) are ergodic, i.e. their distribution becomes stationary [40]. The fact that a dynamical system is ergodic means that random state-noise completely changes its stability properties. Its deterministic variant can have several stable fixed points or attractors, whereas, when there are stochastic forces, there is a unique steady state, which is approached in time by all other states. Any local instabilities of the deterministic system disappear, manifesting themselves only in the detailed form of the stationary density. This (equilibrium) stationary density, which we will call the sojourn density, is given by the predictive density as $t\to\infty$. The sojourn density summarizes the long-term behaviour of the hidden-states: it quantifies the proportion of time spent by the system at each point in state-space (the so-called “sojourn time”). We will provide approximate solutions to the sojourn density below and use it in the next section for long-term prediction.
3.1.2. The hierarchical generative model
In a Bayesian setting, we also have to specify prior densities on the unknown parameters of the generative model. Without loss of generality, we assume Gaussian priors on the parameters and the initial conditions of the hidden-states, and Gamma priors on the precision hyperparameters:

$$\begin{aligned} p(\varphi\mid m)&=N(\mu_{\varphi_0},\Sigma_{\varphi_0}), & p(\theta\mid m)&=N(\mu_{\theta_0},\Sigma_{\theta_0}), & p(x_0\mid m)&=N(\mu_0,\Sigma_0)\\ p(\sigma\mid m)&=Ga(a_{\sigma_0},b_{\sigma_0}), & p(\alpha\mid m)&=Ga(a_{\alpha_0},b_{\alpha_0}) \end{aligned} \tag{15}$$

where $\mu_{\varphi_0},\Sigma_{\varphi_0}$ (resp. $\mu_{\theta_0},\Sigma_{\theta_0}$ and $\mu_0,\Sigma_0$) are the prior mean and covariance of the observation parameters $\varphi$ (resp. the evolution parameters $\theta$ and initial conditions $x_0$); and $a_{\sigma_0},b_{\sigma_0}$ (resp. $a_{\alpha_0},b_{\alpha_0}$) are the prior shape and inverse-scale parameters of the Gamma-variate precision of the measurement-noise (resp. state-noise).

Fig. 1 shows the Bayesian dependency graph representing the ensuing generative model defined by Eqs. (13) and (15). The structure of the generative model is identical to that in [22]; the only difference is the nonlinearity in the observation and evolution functions (i.e. in the likelihood and transition densities). This class of generative model defines a stochastic DCM and generalizes both static convolution models (i.e. models without hidden dynamics) and non-stochastic DCMs (i.e. vanishing state-noise, $\alpha\to\infty$).
3.2. The VB-Laplace update rules
The mean-field approximation to the posterior density for the state-space model described above is

$$q(x_{0:T},\theta,\varphi,\alpha,\sigma)=q(x_{0:T})\,q(\theta)\,q(\varphi)\,q(\alpha)\,q(\sigma) \tag{16}$$

Eq. (5) provides the variational energy of each mean-field partition variable using the expectations of the log-joint density under the Markov blanket of each of these variables. Using the mean-field partition in Eq. (16), these respective variational energies are (omitting constants for clarity):

$$I(\vartheta_i)=\big\langle\ln p(y,x_{0:T},\theta,\varphi,\alpha,\sigma\mid m)\big\rangle_{\prod_{j\neq i}q(\vartheta_j)},\qquad \vartheta_i\in\{x_{0:T},\theta,\varphi,\alpha,\sigma\} \tag{17}$$
We will use the VB-Laplace approximation (Eq. (7)) to handle nonlinearities in the generative model when deriving approximate posterior densities, with the exception of the precision hyperparameters, for which we used free-form VB update rules.
3.2.1. Updating the sufficient statistics of the hyperparameters
Under the VB-Laplace approximation on the parameters and hidden-states, the approximate posterior density of the precision parameters does not require any further approximation. This is because their prior is conjugate to a Gaussian likelihood. Therefore, their associated VB update rule is derived from the standard free-form approximate posterior density in Eq. (5).
First, consider the free-form approximate posterior density of the measurement-noise precision. It can be shown that the variational energy $I(\sigma)$ has the form of a Gamma log-density, which means $q(\sigma)$ is a Gamma density

$$q(\sigma)=Ga(\sigma;a_\sigma,b_\sigma) \tag{18}$$

with shape and scale parameters given by

$$a_\sigma=a_{\sigma_0}+\frac{pT}{2},\qquad b_\sigma=b_{\sigma_0}+\frac{1}{2}\sum_{t=1}^{T}\Big(e_t^{T}e_t+\mathrm{tr}\big(G_t\,\Sigma_{x_t}G_t^{T}\big)\Big) \tag{19}$$

Here, $E=[e_1,\dots,e_T]$, with $e_t=y_t-g(\mu_{x_t},\mu_\varphi)$ and $p$ the dimension of the data, is a matrix of prediction errors in measurement space; $G_t$ is the observation Jacobian evaluated at the posterior mean, and $\Sigma_{x_t}$ denotes the instantaneous posterior covariance of the hidden-states (see below). A similar treatment shows that the state-noise precision $\alpha$ is also a posteriori Gamma-distributed:

$$q(\alpha)=Ga(\alpha;a_\alpha,b_\alpha) \tag{20}$$

with shape and scale parameters

$$a_\alpha=a_{\alpha_0}+\frac{nT}{2},\qquad b_\alpha=b_{\alpha_0}+\frac{1}{2}\sum_{t=0}^{T-1}\Big(\hat\eta_t^{T}\hat\eta_t+\mathrm{tr}\big(\Sigma_{x_{t+1}}+F_t\Sigma_{x_t}F_t^{T}-2\,\Sigma_{x_{t+1},x_t}F_t^{T}\big)\Big) \tag{21}$$

where $\hat\eta_t=\mu_{x_{t+1}}-f_\Delta(\mu_{x_t},\mu_\theta)$ is the vector of estimated state-noise, $F_t$ is the evolution Jacobian, $n$ is the number of hidden-states and $\Sigma_{x_{t+1},x_t}$ is the lagged posterior covariance of the hidden-states (see below).
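For intuition, the generic shape of these conjugate precision updates can be sketched as follows; the exact bookkeeping of the trace terms in Eqs. (19) and (21) is richer than the single correction shown here, so this is only an illustration under our simplifying assumptions:

```python
import numpy as np

def update_precision(a0, b0, residuals, trace_term=0.0):
    """Conjugate Gamma update for a noise precision under Gaussian errors.

    residuals: stacked prediction errors (posterior means plugged in);
    trace_term: mean-field correction propagating posterior state uncertainty.
    """
    a = a0 + residuals.size / 2.0            # shape grows with the data count
    b = b0 + 0.5 * (np.sum(residuals**2) + trace_term)
    return a, b                              # posterior mean precision = a / b

e = np.random.default_rng(3).normal(0.0, 0.5, size=200)
a, b = update_precision(1e-2, 1e-2, e)
print("E[precision] =", a / b)               # close to 1 / 0.5^2 = 4
```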
3.2.2. Updating the sufficient statistics of the parameters
These updates follow the same procedure above, except that the VB-Laplace update rules for deriving the approximate posterior densities of the parameters are based on an iterative Gauss–Newton optimization of their respective variational energy (see Eqs. (6) and (7)). Consider the variational energy of the observation parameters:
$$I(\varphi)=\big\langle\ln p(y\mid x,\varphi,\sigma,m)+\ln p(\varphi\mid m)\big\rangle_{q(x)q(\sigma)} \tag{22}$$

This quadratic form in $\varphi$ yields the Gauss–Newton update rule for the mean of the approximate posterior density over the observation parameters:

$$\mu_\varphi\leftarrow\mu_\varphi-\left(\left.\frac{\partial^2 I(\varphi)}{\partial\varphi^2}\right|_{\mu_\varphi}\right)^{-1}\left.\frac{\partial I(\varphi)}{\partial\varphi}\right|_{\mu_\varphi},\qquad \Sigma_\varphi=-\left(\left.\frac{\partial^2 I(\varphi)}{\partial\varphi^2}\right|_{\mu_\varphi}\right)^{-1} \tag{23}$$

where the gradient and curvature are evaluated at the previous estimate of the approximate posterior mean $\mu_\varphi$. Note that, in the following, we use condensed notation for mixed derivatives; i.e.

$$\partial^2_{x\varphi}\,g\equiv\frac{\partial^2 g}{\partial x\,\partial\varphi} \tag{24}$$
Using a bilinear Taylor expansion of the observation function, Eq. (23) can be implemented as:
(25)
Similar considerations give the VB-Laplace update rules for the evolution parameters:
$$I(\theta)=\big\langle\ln p(x_{0:T}\mid\theta,\alpha,m)+\ln p(\theta\mid m)\big\rangle_{q(x)q(\alpha)} \tag{26}$$
which yields:
(27)
Iterating Eqs. (25) and (27) implements a standard Gauss–Newton scheme for optimizing the variational energy of the observation and evolution parameters. To ensure convergence, we halve the size of the Gauss–Newton update until the variational energy increases. Under certain mild assumptions, this regularized Gauss–Newton scheme is guaranteed to converge [41].
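The step-halving safeguard is easy to sketch generically; the toy concave energy below is a hypothetical example of ours:

```python
import numpy as np

def gauss_newton_step(I, grad, hess, mu, max_halvings=16):
    """One regularized Gauss-Newton step on a variational energy I."""
    delta = -np.linalg.solve(hess(mu), grad(mu))     # full Newton direction
    for _ in range(max_halvings):
        if I(mu + delta) > I(mu):                    # accept only if I increases
            return mu + delta
        delta *= 0.5                                 # otherwise halve the step
    return mu

# Toy energy I(phi) = -|phi - 2|^2, maximized at phi = (2, 2)
I = lambda p: -np.sum((p - 2.0) ** 2)
g = lambda p: -2.0 * (p - 2.0)
H = lambda p: -2.0 * np.eye(p.size)
mu = np.zeros(2)
for _ in range(10):
    mu = gauss_newton_step(I, g, H, mu)
print(mu)                                            # -> [2. 2.]
```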
3.2.3. Updating the sufficient statistics of the hidden-states
The last approximate posterior density is $q(x_{0:T})$. This approximate posterior could be obtained by treating the time-series of hidden-states as a single finite-dimensional vector and using the VB-Laplace approximation, with an expansion of the evolution and observation functions around the last mean. However, it is computationally more expedient to exploit the Markov properties of the dynamics and assemble the sufficient statistics $\mu_{x_t}$ and $\Sigma_{x_t}$ sequentially, using a VB-Laplace variant of the extended Kalman–Rauch smoother [10]. These probabilistic filters evaluate the (instantaneous) marginals, time point by time point, as opposed to the full joint posterior density over the whole time sequence. They are approximate solutions to the Kushner–Pardoux partial differential equations that describe the instantaneous evolution of the marginal posterior density on the hidden-states.
Algorithmically, the VB-Laplace Kalman–Rauch marginalization procedure is divided into two passes that propagate (in time) the first and second-order moments of the approximate posterior density. These propagation equations require only the gradients and mixed derivatives of the evolution and observation functions. The two passes comprise a forward pass (which furnishes the approximate filtering density, which can be used to derive an on-line version of the algorithm) and a backward pass (which derives the approximated posterior density from the approximate filtering density).
3.2.3.1. Forward pass
The forward pass entails two steps (prediction and update) that are alternated from $t=1$ to $t=T$. The prediction step is derived from the Chapman–Kolmogorov belief propagation Eq. (14):

$$p(x_{t+1}\mid y_{1:t})=\int p(x_{t+1}\mid x_t)\,q(x_t)\,dx_t \tag{28}$$

where $p(x_{t+1}\mid y_{1:t})$ is the current approximate predictive density and $q(x_t)$ is the last VB-Laplace approximate filtering density (see the update step below). Under the VB-Laplace approximation, the prediction step is given by the following Gauss–Newton update for the predicted mean and covariance:

(29)
This VB-Laplace approximation to the predictive density differs from the traditional extended Kalman filter because it accounts for the uncertainty in the evolution parameters (mean-field terms in Eq. (29)). This is critical when making predictions of highly nonlinear systems (as we will see in the next section) with unknown parameters. The update step can be written as follows:
$$q(x_{t+1})\propto p(y_{t+1}\mid x_{t+1})\,p(x_{t+1}\mid y_{1:t}) \tag{30}$$
Again, under the VB-Laplace approximation, the update rule for the sufficient statistics of the approximate filtering density is given by:
(31)
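For orientation, one forward-pass iteration can be sketched as follows. This sketch deliberately drops the mean-field correction terms of Eqs. (29) and (31), so it amounts to an extended-Kalman step with the current posterior means of the parameters and precisions plugged in; all names are ours:

```python
import numpy as np

def forward_step(mu, Sigma, y, f, F, g, G, alpha, sigma):
    """Prediction (Eq. (28)) then update (Eq. (30)), EKF-style.

    f, g: evolution/observation functions; F, G: their Jacobians;
    alpha, sigma: posterior mean state-/measurement-noise precisions.
    """
    n = mu.size
    # prediction step
    Fx = F(mu)
    mu_p = f(mu)
    Sigma_p = Fx @ Sigma @ Fx.T + np.eye(n) / alpha
    # update step via the Kalman gain
    Gx = G(mu_p)
    S = Gx @ Sigma_p @ Gx.T + np.eye(y.size) / sigma
    K = Sigma_p @ Gx.T @ np.linalg.inv(S)
    mu_u = mu_p + K @ (y - g(mu_p))
    Sigma_u = (np.eye(n) - K @ Gx) @ Sigma_p
    return mu_u, Sigma_u, mu_p, Sigma_p      # filtered and predicted moments
```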
3.2.3.2. Backward pass
In its parallel implementation (two-filter Kalman–Rauch–Striebel smoother), the backward pass also requires two steps, which are alternated from $t=T$ to $t=1$. The first is a $\beta$-message passing scheme:

$$\beta_t(x_t)=\int p(y_{t+1}\mid x_{t+1})\,p(x_{t+1}\mid x_t)\,\beta_{t+1}(x_{t+1})\,dx_{t+1} \tag{32}$$

where a local VB-Laplace approximation ensures (omitting constants):

(33)

leading to the following mean and covariance backward propagation equations:

(34)

Note that the $\beta$-message is not a density over the hidden-states; it has the form of a likelihood function. More precisely, it is the approximate likelihood of the current hidden-states with respect to all future observations. It contains the information discarded by the forward pass, relative to the approximate posterior density. The latter is given by combining the output of the forward pass (the updated density) with the $\beta$-message (see below), giving the $\gamma$-message passing scheme:

$$q(x_t)\propto p(x_t\mid y_{1:t})\,\beta_t(x_t) \tag{35}$$

with, by convention, $\beta_T(x_T)\equiv 1$, and:
$$\Sigma_{x_t}=\Big(\Sigma_{t\mid t}^{-1}+\bar\Sigma_t^{-1}\Big)^{-1},\qquad \mu_{x_t}=\Sigma_{x_t}\Big(\Sigma_{t\mid t}^{-1}\mu_{t\mid t}+\bar\Sigma_t^{-1}\bar\mu_t\Big) \tag{36}$$
where the necessary sufficient statistics are given in Eqs. (29), (31) and (34). These specify the instantaneous posterior density on the hidden-states.
Eqs. (29), (31), (34) and (36) specify the VB-Laplace update rules for the sufficient statistics of the approximate posterior of the hidden-states. These correspond to a Gauss–Newton scheme for optimizing their variational energy, where the Gauss–Newton increment is simply the difference between the result of Eq. (36) and the previous approximate mean.
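The backward pass can likewise be sketched in the familiar Rauch–Tung–Striebel form, which reuses the stored filtered and predicted moments instead of explicit $\beta$-messages; this is an EKF-style stand-in for Eqs. (32)–(36), under the same simplifying assumptions as the forward-pass sketch:

```python
import numpy as np

def rts_smoother(mu_f, Sigma_f, mu_p, Sigma_p, F):
    """Backward pass over filtered (mu_f, Sigma_f) and predicted
    (mu_p, Sigma_p) moments; F[t] is the evolution Jacobian at mu_f[t]."""
    T = len(mu_f)
    mu_s, Sigma_s = [None] * T, [None] * T
    mu_s[-1], Sigma_s[-1] = mu_f[-1], Sigma_f[-1]
    for t in range(T - 2, -1, -1):
        C = Sigma_f[t] @ F[t].T @ np.linalg.inv(Sigma_p[t + 1])  # smoother gain
        mu_s[t] = mu_f[t] + C @ (mu_s[t + 1] - mu_p[t + 1])
        Sigma_s[t] = Sigma_f[t] + C @ (Sigma_s[t + 1] - Sigma_p[t + 1]) @ C.T
        # the lagged covariance needed below is Cov(x_{t+1}, x_t) = Sigma_s[t+1] @ C.T
    return mu_s, Sigma_s
```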
Finally, we need the expression for the lagged posterior covariance $\Sigma_{x_{t+1},x_t}$ to update the evolution, observation and precision parameters (see Eqs. (22) and (25)). This is derived from the following joint density [22]:

(37)
where the last line follows from the VB-Laplace approximation. As in the forward step of the VB-Laplace Kalman filter, the sufficient statistics of this approximate joint posterior density can be derived explicitly from the gradients of the evolution function:
(38)
where the necessary sufficient statistics are given in Eqs. (26) and (31), and the gradients are evaluated at the mode $\mu_{x_t}$.
3.2.3.3. Initial conditions
The approximate posterior density over the initial conditions is obtained from the usual VB-Laplace approach. The update rule for the Gauss–Newton optimization of the variational energy of the initial conditions is:

$$\mu_{x_0}\leftarrow\mu_{x_0}-\left(\left.\frac{\partial^2 I(x_0)}{\partial x_0^2}\right|_{\mu_{x_0}}\right)^{-1}\left.\frac{\partial I(x_0)}{\partial x_0}\right|_{\mu_{x_0}},\qquad \Sigma_{x_0}=-\left(\left.\frac{\partial^2 I(x_0)}{\partial x_0^2}\right|_{\mu_{x_0}}\right)^{-1} \tag{39}$$
3.2.4. Evaluation of the free-energy
Under the mean-field approximation, the free-energy evaluation requires the sum of the entropies of each approximate marginal posterior density. Except for the hidden-states, evaluating these is relatively straightforward under the Laplace assumption. However, due to the use of the Kalman–Rauch marginalization scheme in the derivation of the posterior $q(x_{0:T})$, the calculation of the joint entropy over the hidden-states requires special consideration. First, let us note that the joint posterior factorizes over the instantaneous transition densities (Chapman–Kolmogorov equation):

$$q(x_{0:T})=q(x_0)\prod_{t=0}^{T-1}q(x_{t+1}\mid x_t) \tag{40}$$
Therefore, its entropy decomposes into:
$$S\big(q(x_{0:T})\big)=S\big(q(x_0)\big)+\sum_{t=0}^{T-1}\Big\langle S\big(q(x_{t+1}\mid x_t)\big)\Big\rangle_{q(x_t)} \tag{41}$$

where the matrix determinants (from the Gaussian entropies $\frac{1}{2}\ln|2\pi e\,\Sigma|$) are evaluated during the backward pass (when forming the $\beta$-messages) and the posterior lagged covariance is given by Eq. (38).
3.2.5. Predictive and sojourn densities
Having identified the model, one may want to derive predictions about the evolution of the system. This requires the computation of a predictive density; i.e. the propagation of the posterior density over the hidden-states from the last observation. The predictive density can be accessed through the Chapman–Kolmogorov equation (Eq. (14)). However, the requisite integrals do not have an analytical solution. To finesse this problem we can extend our VB-Laplace approach to derive an approximation to the predictive density:

$$p(x_{T+\tau}\mid y,m)=\int p(x_{T+\tau}\mid x_{T+\tau-1})\,p(x_{T+\tau-1}\mid y,m)\,dx_{T+\tau-1} \tag{42}$$

for any horizon $\tau\geq 1$. Here, the last line motivates a recursive Laplace approximation to the predictive density. As above, this is used to form a propagation equation for the mean and covariance of the approximate predictive density:

(43)
Eq. (43) is used recursively in time to yield a Laplace approximation to the predictive density over hidden-states in the future. Similarly, we can derive an approximate predictive density for the data:
$$p(y_{T+\tau}\mid y,m)=\int p(y_{T+\tau}\mid x_{T+\tau})\,p(x_{T+\tau}\mid y,m)\,dx_{T+\tau} \tag{44}$$
which leads to the following moment propagation equations:
(45)
These equations are very similar to the prediction step of the forward pass of the VB-Laplace Kalman filter (Eq. (29)). They can be used for time-series prediction of hidden-states and measurements by iterating from $\tau=1$ to the desired horizon $\tau_{\max}$; a minimal sketch is given below.
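A minimal sketch of this moment propagation (again ignoring the parameter-uncertainty terms that Eqs. (43) and (45) carry):

```python
import numpy as np

def predict_ahead(mu, Sigma, f, F, g, G, alpha, sigma, k):
    """Iterate the prediction step k times and map each state prediction
    through the linearized observation function."""
    y_preds = []
    for _ in range(k):
        Fx = F(mu)
        mu = f(mu)                                   # state mean propagation
        Sigma = Fx @ Sigma @ Fx.T + np.eye(mu.size) / alpha
        Gx = G(mu)
        y_preds.append((g(mu), Gx @ Sigma @ Gx.T + np.eye(Gx.shape[0]) / sigma))
    return mu, Sigma, y_preds
```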
From the approximate predictive densities we can derive the approximate sojourn distribution over both state and measurement spaces. By definition, the sojourn distribution is the stationary density of the Markov chain, i.e. it is invariant under the transition density:
$$p_s(x)=\int p(x_{t+1}=x\mid x_t=x')\,p_s(x')\,dx' \tag{46}$$
Estimating the sojourn density from partial observations of the system is a difficult inferential problem (see e.g. [42]). Here, we relate the sojourn distribution to the predictive density via the ergodic decomposition theorem [29]:
$$p_s(x)\approx\frac{1}{\tau_{\max}}\sum_{\tau=1}^{\tau_{\max}}p(x_{T+\tau}\mid y,m) \tag{47}$$

where $\tau_{\max}$ is the number of predicted time steps and $p(x_{T+\tau}\mid y,m)$ is the Laplace approximation of the predictive density at time $T+\tau$ (Eqs. (42) and (43)). Eq. (47) subsumes three approximations: (i) the system is ergodic, (ii) a truncation of the infinite series of the ergodic decomposition theorem and (iii) a Laplace approximation to the predictive density. Effectively, Eq. (47) represents a mixture-of-Gaussians approximation to the sojourn distribution. It is straightforward to show that the analogous sojourn distribution in measurement space is given by:

$$p_s(y)\approx\frac{1}{\tau_{\max}}\sum_{\tau=1}^{\tau_{\max}}p(y_{T+\tau}\mid y,m) \tag{48}$$

where $p(y_{T+\tau}\mid y,m)$ is the Laplace approximation to the measurement predictive density at time $T+\tau$ (Eqs. (44) and (45)).
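A sketch of the resulting mixture approximation, assuming the per-step predictive moments have already been computed with the recursion above:

```python
import numpy as np
from scipy.stats import multivariate_normal

def sojourn_density(points, means, covs):
    """Evaluate the mixture-of-Gaussians sojourn approximation (Eq. (47))
    at the given points; means/covs are the predictive moments per step."""
    p = np.zeros(len(points))
    for m, C in zip(means, covs):
        p += multivariate_normal(mean=m, cov=C).pdf(points)
    return p / len(means)                  # uniform weights: the time average
```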
4. Evaluations of the VB-Laplace scheme
In this section, we try to establish the validity and accuracy of the VB-Laplace scheme using four complementary approaches:
• Comparative evaluations against the extended Kalman filter (EKF): We compared the estimation efficiency of the VB-Laplace and EKF estimators when applied to systems with nonlinear evolution and observation functions.
• Bayesian model comparison: Applications of the proposed scheme may include identifying the form or structure of the state-space model subtending observed data. We therefore asked whether models whose structure could have generated the data are a posteriori more plausible than models that could not. To address this question we used the free-energy, as an approximation to the log-model-evidence, to compute an approximate posterior density on model space.
• Quantitative evaluation of asymptotic efficiency: Since our VB-Laplace approach provides us with an approximate posterior density, we assessed whether the VB estimator becomes optimal with large sample size.
• Assessment of time-series prediction: We explored the potential advantages and caveats in using the VB-Laplace approach for time-series prediction.
These analyses were applied to three well-known low-dimensional nonlinear stochastic systems; a double-well potential, Lorenz attractor and van der Pol oscillator. The dynamical behaviours of these systems cover diverse but important phenomena, ranging from limit cycles to strange attractors. These systems are described qualitatively below and their equations of motion are given in Table 1.
Table 1.

| System | Equations of motion |
|---|---|
| Double-well | $\dot x_1=x_2,\quad \dot x_2=-\theta_1 x_2+\theta_2 x_1-\theta_3 x_1^3$ |
| Lorenz | $\dot x_1=\theta_1(x_2-x_1),\quad \dot x_2=\theta_2 x_1-x_2-x_1x_3,\quad \dot x_3=x_1x_2-\theta_3 x_3$ |
| van der Pol | $\dot x_1=x_2,\quad \dot x_2=\theta(1-x_1^2)\,x_2-x_1$ |
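For reference, the drifts in Table 1 can be coded directly. The parameterizations below are the standard textbook forms we assume there, with illustrative parameter values:

```python
import numpy as np

def double_well(x, theta=(1.0, 1.0, 1.0)):      # x = (position, velocity)
    k, a, b = theta                             # damping + quartic potential
    return np.array([x[1], -k * x[1] + a * x[0] - b * x[0] ** 3])

def lorenz(x, theta=(10.0, 28.0, 8.0 / 3.0)):   # Prandtl, Rayleigh, dissipation
    s, r, b = theta
    return np.array([s * (x[1] - x[0]),
                     r * x[0] - x[1] - x[0] * x[2],
                     x[0] * x[1] - b * x[2]])

def van_der_pol(x, mu=2.0):                     # single damping parameter
    return np.array([x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]])
```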
After having reviewed the dynamical properties of these systems, we will summarize the Bayesian decision theory used to quantify the performance of the method. Finally, we describe the Monte Carlo simulations used to compare VB-Laplace to the standard EKF, perform model comparison, assess asymptotic efficiency and characterise the prediction capabilities of the VB-Laplace approach.
4.1. Simulated systems
4.1.1. Double-well
The double-well potential system models a dissipative system whose potential energy is a quartic (double-well) function of position. As a consequence, the system is bistable, with two basins of attraction around two stable fixed points. In its deterministic variant, the system ends up spiralling around one or the other attractor, depending on its initial conditions and the magnitude of a damping force or dissipative term. Because we consider state-noise, the stochastic DCM can switch (tunnel) from one basin to the other, which leads to itinerant behaviour; this is why the double-well system can be used to model bistable perception [43].

Fig. 2 shows the double-well potential and a sample path of the system as a function of time in state-space. In this example, the evolution parameters and the state-noise precision were fixed and the initial conditions were picked at random. The path shows two jumps over the potential barrier, the first being due primarily to kinetic energy, and the second to state-noise. Between these two, the path spirals around the stable attractors.
4.1.2. Lorenz attractor
The Lorenz attractor was originally proposed as a simplified version of the Navier–Stokes equations, in the context of meteorological fluid dynamics [44]. The Lorenz attractor models the autonomous formation of convection cells, whose dynamics are parameterized using three parameters: the Rayleigh number, which characterizes the fluid viscosity; the Prandtl number, which measures the efficacy of heat transport through the boundary layer; and a dissipative coefficient. When the Rayleigh number is bigger than one, the system has two symmetrical fixed points, which act as a pair of local attractors. For certain parameter values, the Lorenz attractor exhibits chaotic behaviour on a butterfly-shaped strange attractor. For almost any initial conditions (other than the fixed points), the trajectory unfolds on the attractor. The path begins spiralling onto one wing and then jumps to the other and back in a chaotic way. The stochastic variant of the Lorenz system possesses more than one random attractor. However, with the parameters above, the sojourn distribution settles around the deterministic strange attractor [45].

Fig. 3 shows a sample path of the Lorenz system. In this example, the evolution parameters were set as above, the state-noise precision was fixed and the initial conditions were picked at random. The path shows four jumps from one wing to the other.
4.1.3. van der Pol oscillator
The van der Pol oscillator has been used as the basis for neuronal action potential models [46,47]. It is a non-conservative oscillator with nonlinear damping, parameterized by a single damping parameter. It is a stable system for all initial conditions and damping parameters. When the damping parameter is positive, the system enters a limit cycle. Fig. 4 shows a sample path of the van der Pol oscillator. In this example, the evolution parameter and the state-noise precision were fixed and the initial conditions were picked at random. The path exhibits four periods of a quasi-limit cycle after a short transient.
4.2. Estimation loss and statistical efficiency
The statistical efficiency of an estimator is a decision theoretic measure of accuracy [34]. Given the true parameters $\vartheta$ of the generative model and their estimator $\hat\vartheta$, we can evaluate the squared error loss (SEL) with:

$$\mathrm{SEL}=\sum_i\big(\vartheta_i-\hat\vartheta_i\big)^2 \tag{49}$$

where $\hat\vartheta_i$ is the $i$th element of the estimator of $\vartheta$. The SEL is a standard estimation error measure, whose a posteriori expectation is minimized by the posterior mean. In Bayesian decision theoretic terms, this means that an estimator based on the posterior mean is optimal with respect to squared error loss.
It can be shown that the expected SEL under the joint density is bounded by the Bayesian Fisher information:
$$E\big[\mathrm{SEL}\big]\;\geq\;\mathrm{tr}\Big(\mathcal{I}(\vartheta)^{-1}\Big),\qquad \mathcal{I}(\vartheta)=-E\!\left[\frac{\partial^2\ln p(y,\vartheta\mid m)}{\partial\vartheta\,\partial\vartheta^{T}}\right] \tag{50}$$

Eq. (50) gives the so-called Bayesian Cramer–Rao bound, which quantifies the minimum average SEL under the generative model [48]. By definition, proximity to the Cramer–Rao bound measures the efficiency of an approximate Bayesian estimator. The efficiency of the method is related to the amount of available information, which, when the observation function is the identity mapping ($g(x)=x$), is proportional to the sample size $n$. In this case, asymptotic efficiency is achieved whenever the estimators attain the Cramer–Rao bound as $n\to\infty$.
In addition to efficiency, we also evaluated the approximate posterior confidence intervals. As noted above, under the Laplace assumption, this reduces to assessing the accuracy of the posterior covariance. In decision theoretic terms, confidence interval evaluation, under the Laplace approximation, is equivalent to squared error loss estimation, since:
$$\mathrm{EL}=\big\langle\mathrm{SEL}\big\rangle_{q}=\mathrm{tr}\big(\Sigma\big) \tag{51}$$

where the a posteriori expected loss (EL) is the Bayesian estimator of the SEL. The EL thus provides a self-consistency measure that is related to confidence intervals (see [34]).
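Both quantities are trivial to compute from a simulation and an approximate posterior; a sketch, with names of our choosing:

```python
import numpy as np

def sel(theta_true, theta_hat):
    """Squared error loss (Eq. (49)) of a point estimate."""
    return np.sum((np.asarray(theta_true) - np.asarray(theta_hat)) ** 2)

def expected_loss(Sigma_post):
    """Posterior expected SEL of the posterior-mean estimator (Eq. (51))."""
    return np.trace(Sigma_post)

# Self-consistency check: measured SEL should scatter around the expected loss.
```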
4.3. Comparing VB-Laplace and EKF
The EKF provides an approximation to the posterior density on the hidden-states of the state-space model given in Eq. (11). The standard variant of the EKF uses a forward pass, comprising a prediction and an update step (see e.g. [16]):
$$\begin{aligned} \text{prediction:}\quad & \mu_{t+1\mid t}=f_\Delta(\mu_{t\mid t}),\qquad \Sigma_{t+1\mid t}=F_t\Sigma_{t\mid t}F_t^{T}+\alpha^{-1}I\\ \text{update:}\quad & K_t=\Sigma_{t+1\mid t}G_t^{T}\big(G_t\Sigma_{t+1\mid t}G_t^{T}+\sigma^{-1}I\big)^{-1}\\ & \mu_{t+1\mid t+1}=\mu_{t+1\mid t}+K_t\big(y_{t+1}-g(\mu_{t+1\mid t})\big),\qquad \Sigma_{t+1\mid t+1}=(I-K_tG_t)\,\Sigma_{t+1\mid t} \end{aligned} \tag{52}$$

These two steps are iterated from $t=1$ to $t=T$. It is well known that both model misspecification (e.g. using incorrect parameters and hyperparameters) and local linearization can introduce biases and errors in the covariance calculations that degrade EKF performance [49].
We conducted a series of fifty Monte Carlo simulations for each dynamical system. The observation function for all three systems was taken to be the following sigmoid mapping:
$$g(x_t)=\frac{1}{1+e^{-\lambda x_t}} \tag{53}$$

where the constants $\lambda$ (given per system in Table 2) were chosen to ensure changes in hidden-states were of sufficient amplitude to cause nonlinear effects (i.e. saturation) in measurement space. Table 2 shows the different simulation and prior parameters for the dynamical systems we examined.
Table 2.

| | | Double-well | Lorenz | van der Pol |
|---|---|---|---|---|
| Measurement-noise precision | Simulated | | | |
| | Prior pdf | | | |
| System-noise precision | Simulated | | | |
| | Prior pdf | | | |
| Evolution parameters | Simulated | | | |
| | Prior pdf | | | |
| Initial conditions | Simulated | | | |
| | Prior pdf | | | |
| Observation function ($\lambda$) | | 0.5 | 0.2 | 5 |
| | | 50 | 50 | 50 |
Note that the standard EKF cannot estimate parameters or hyperparameters. Therefore, we used two EKF versions: EKF1 uses the prior means of the parameters, and EKF2 uses their posterior means from the VB-Laplace algorithm.
Figs. 5–7 show the results of the comparative evaluations of VB-Laplace, EKF1 and EKF2, where these and subsequent figures use the same format:
• Top-left: first- and second-order moments of the approximate predictive density on the observations (and simulated data) as given by VB-Laplace.
• Bottom-left: first- and second-order moments of the approximate posterior density on the hidden-states (and simulated hidden-states) as given by VB-Laplace.
• Top-right: first- and second-order moments of the approximate posterior density on the hidden-states (and simulated hidden-states) as given by EKF1.
• Bottom-right: first- and second-order moments of the approximate posterior density on the hidden-states (and simulated hidden-states) as given by EKF2.
It can be seen that despite the nonlinear observation and evolution functions, both VB-Laplace and EKF2 estimate the hidden-states accurately. Furthermore, they both provide reliable posterior confidence intervals. This is not the case for the EKF1, which, in these examples, exhibits significant estimation errors.
We computed the SEL score on the hidden-states for the three approaches. The Monte Carlo distributions of this score are given in Fig. 8. There was always a significant difference (one-sample paired $t$-test, 5% confidence level, df = 49) between the VB-Laplace and EKF1 approaches, with the VB-Laplace method exhibiting greater efficiency. This difference is greatest for the van der Pol system, in which the nonlinearity in the observation function was strongest. There was a (less) significant difference between the VB-Laplace and EKF2 approaches for the Lorenz and van der Pol systems: VB-Laplace is more (respectively less) efficient than EKF2 when applied to the van der Pol (respectively Lorenz) system. Table 3 summarizes these results. It is also worth reporting that 11% of the Monte Carlo simulations led to numerical divergence of the EKF2 algorithm for the van der Pol system (these were not used when computing the paired $t$-test).
Table 3.

| | Double-well | Lorenz | van der Pol |
|---|---|---|---|
| VB-Laplace | 3.32 | 4.24 | 4.02 |
| EKF1 | 8.80 a | 8.58 a | 13.9 a |
| EKF2 | 3.35 | 4.19 a | 4.39 a |

a Indicates a significant difference relative to the corresponding VB-Laplace SEL score (one-sample paired $t$-test, 5% confidence level, df = 49). The grey cells of the table indicate which of the three approaches (VB-Laplace, EKF1 or EKF2) was best in terms of efficiency.
To summarize, the EKF seems sensitive to model misspecification. This is why EKF1 (relying on prior means) performs badly when compared to EKF2 (relying on the VB-Laplace posterior means). This is not the case for the VB-Laplace approach, which seems more robust to model misspecification. In addition, the EKF seems very sensitive to noise in the presence of strong nonlinearity (cf. the numerical divergence of EKF2 for the van der Pol system). It could be argued that the good estimation performance achieved by EKF2 is inherited from VB-Laplace through the posterior parameter estimates and the implicit learning of the structure of the hidden stochastic systems.
4.4. Assessing VB-Laplace model comparison
Here, we asked whether one can identify the structure of the hidden stochastic system using Bayesian model comparison based on the free-energy. We assessed whether models whose structure could have generated the data are a posteriori more plausible than models that could not. To do this, we conducted another 50 Monte Carlo simulations for each of the three systems. For each of these simulations, we compared two classes of models: the model used to generate the simulated data (referred to as the “true” model) and a so-called “generic” model, which was the same as the true model except for the form of the evolution function:
$$f(x)=Ax+Bx^{(2)},\qquad x^{(2)}=\big(x_ix_j\big)_{i\leq j} \tag{54}$$

where the elements of the matrices $A$ and $B$ were unknown and estimated using VB-Laplace. The number of evolution parameters depends on the number of hidden-states $n$. This evolution function can be regarded as a second-order Taylor expansion of the equations of motion. This means that the generic model can recover the dynamical structure of the Lorenz system, which is a generic model with the following parameters (using the monomial ordering $x^{(2)}=(x_1^2,x_1x_2,x_1x_3,x_2^2,x_2x_3,x_3^2)$):

$$A=\begin{pmatrix}-\theta_1&\theta_1&0\\ \theta_2&-1&0\\ 0&0&-\theta_3\end{pmatrix},\qquad B=\begin{pmatrix}0&0&0&0&0&0\\ 0&0&-1&0&0&0\\ 0&1&0&0&0&0\end{pmatrix} \tag{55}$$
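A sketch of this generic evolution function, with the Lorenz coefficients of Eq. (55) as a consistency check (the monomial ordering is our convention):

```python
import numpy as np

def generic_drift(x, A, B):
    """Linear-plus-quadratic drift f(x) = A x + B x2, where x2 stacks the
    n(n+1)/2 monomials x_i x_j (i <= j) in upper-triangular order."""
    quad = np.outer(x, x)[np.triu_indices(len(x))]
    return A @ x + B @ quad

# Lorenz as a special case: ordering (x1^2, x1x2, x1x3, x2^2, x2x3, x3^2)
A = np.array([[-10.0, 10.0, 0.0],
              [ 28.0, -1.0, 0.0],
              [  0.0,  0.0, -8.0 / 3.0]])
B = np.zeros((3, 6))
B[1, 2] = -1.0                      # the -x1*x3 term in dx2/dt
B[2, 1] = 1.0                       # the +x1*x2 term in dx3/dt

x = np.array([1.0, 2.0, 3.0])
print(generic_drift(x, A, B))       # matches the Lorenz drift at x
```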
However, the generic model cannot capture the dynamical structure of the van der Pol and double-well systems (cf. Table 1). The specifications of the generative models are identical to those given in Table 2, except for the “generic” generative model, for which the priors on the evolution parameters are given in Table 4.
Table 4.

| | Double-well | Lorenz | van der Pol |
|---|---|---|---|
| Evolution parameters prior pdf | | | |
Figs. 9–11 compare the respective VB-Laplace inversions of the true and the generic generative models; specifically:
• Top-left: first- and second-order moments of the approximate predictive density on the observations (and simulated data) under the true model.
• Bottom-left: first- and second-order moments of the approximate posterior density on the hidden-states (and simulated hidden-states) under the true model.
• Top-right: first- and second-order moments of the approximate predictive density on the observations (and the simulated data) under the generic model.
• Bottom-right: first- and second-order moments of the approximate posterior density on the hidden-states (and simulated hidden-states) under the generic model.
It can be seen from these figures that the Lorenz system’s hidden-states are estimated well under both the true and generic models. This is not the case for the van der Pol and double-well systems, for which the estimation of the hidden-states under the generic model deviates significantly from the simulated time-series. Note also that the posterior confidence intervals reflect the mismatch between the simulated and estimated hidden-states. This is particularly prominent for the van der Pol system (Fig. 11), where the posterior variances increase enormously whenever the observations fall in the nonlinear (saturation) domain of the sigmoid observation function. Nevertheless, for both true and generic models, the data were predicted almost perfectly for all three systems: the measured data always lie within the confidence intervals of the approximate predictive densities.
The VB-Laplace approach provides us with the free-energy of the true and generic models for each Monte Carlo simulation. Its empirical Monte Carlo distribution for each class of systems is shown in Fig. 12. In addition, for each simulation, we computed the standard “goodness-of-fit” sum of squared errors (SSE), which is the basis for any non-Bayesian statistical model comparison. Finally, we computed the estimation loss (SEL) on the hidden-states, which cannot be obtained in real applications. These performance measures allowed us to test for significant differences between the true and generic models in terms of their free-energy, SSE and SEL. The results are summarized in Table 5.
Table 5.

| | | Double-well | Lorenz | van der Pol |
|---|---|---|---|---|
| Free-energy | Native model | −1.98×10^3 a | 1.04×10^6 | −5.55×10^2 a |
| | Generic model | −3.04×10^3 | 1.05×10^6 a | −8.83×10^2 |
| log SSE | Native model | 0.53 a | 0.37 a | 3.58 |
| | Generic model | 0.60 | 0.72 | 2.93 a |
| log-SEL | Native model | 3.32 a | 4.24 a | 4.00 a |
| | Generic model | 6.29 | 6.98 | 5.01 |

a Indicates a significant difference between the true and generic models (one-sample paired $t$-test, 5% confidence level, df = 49). Grey cells indicate which of the two models (true or generic) is best with respect to the three indices.
Unsurprisingly, the estimation loss (SEL) was always significantly smaller for the true model. This means that the hidden-states were always estimated more accurately under the true model than under the generic model. More surprisingly (because the fits looked equally accurate), there was always a significant difference between the true and generic models in terms of their goodness-of-fit (SSE). However, had we based our model comparison on this index, we would have favoured the generic model over the true van der Pol system.
There was always a significant difference between the true and generic models in terms of free-energy. Model comparison based on the free-energy would have led us to select the true against the generic model for the double-well and van der Pol systems, but not for the Lorenz system. This is what we predicted, because the generic model covers the dynamical structure of the Lorenz system. Fig. 13 shows the Monte Carlo average of the posterior means of both matrices $A$ and $B$, given data generated by the Lorenz system. The inferred structure is very similar to the true system. Note however: (i) the global rescaling of the Monte Carlo average estimates relative to their Lorenz analogues and (ii) the slight ambiguity regarding the respective contributions of the quadratic and interaction effects. The global rescaling is due to the “minimum norm” priors imposed on the evolution parameters of the generic model. The fact that the nonlinear effects are shared between the quadratic and interaction terms is due to the strong correlation between the time-series of the first two hidden-states (see e.g. Figs. 3, 6 and 10). We discuss the results of this model comparison below.
4.5. Assessing the asymptotic efficiency of the VB-Laplace approach
In this third set of simulations, we asked whether the VB-Laplace estimation accuracy is close to optimal, and assessed the quality of the posterior confidence intervals, when the sample size becomes large. In other words, we wanted to understand the influence of sample size on the estimation capabilities of the method. To do this, we used the simplest observation function, the identity mapping $g(x)=x$, and varied the sample size. This means we could evaluate the behaviour of the measured squared error loss as a function of sample size $n$, for each of the three nonlinear stochastic systems above.
We conducted a series of fifty Monte Carlo simulations for seven sample sizes and for each dynamical system. Table 2 shows the simulated and prior parameters used.
We applied the VB-Laplace scheme to each of these 1050 simulations. We then calculated the squared error loss (SEL) and the expected loss (EL) from the ensuing approximate posterior densities.
Sampling the empirical Monte Carlo distributions of both these evaluation measures allowed us to approximate their expectation under the marginal likelihood. Therefore, characterising the behaviour of Monte Carlo average SEL as a function of the sample size provides a numerical assessment of asymptotic efficiency. Furthermore, comparing the Monte Carlo average SEL and Monte Carlo average EL furnishes a quantitative validation of the posterior confidence intervals.
Fig. 14 (resp. Fig. 15) shows the Monte Carlo distributions (10%, 50% and 90% percentiles) of the relative squared error for the initial conditions, evolution parameters and hidden-states (resp. the estimated state-noise and the precision hyperparameters). Except for the initial conditions, all the VB-Laplace estimators show a jump at a critical sample size, above which the squared error loss seems to asymptote. Moreover, the VB-Laplace estimators of both evolution parameters and hidden-states exhibit a significant (quasi-monotonic) variation with $n$ (see Fig. 14). On average, and within the range of $n$ we considered, the square root of the loss seems to be inversely related to the sample size $n$:
$$\sqrt{\mathrm{SEL}}\;\propto\;n^{-1} \tag{56}$$
This would be expected when estimating the parameters of a linear model, since (under a linear model) the Cramer–Rao bound is:
(57)
where $\nu$ enumerates the degrees of freedom. However, we are dealing with nonlinear models, whose number of unknowns (the hidden-states) increases with sample size and for which no theoretical bound is available. Nevertheless, our Monte Carlo simulations suggest that Eq. (57) is satisfied over the range of $n$ considered. This result seems to indicate that the VB-Laplace estimators of both hidden-states and evolution parameters attain asymptotic efficiency.
Surprisingly, the estimation efficiency for the initial conditions does not seem to be affected by the sample size: it shows no significant variation within the range of $n$ considered. This might be partially explained by the fact that the systems we are dealing with are close to ergodic. If the system is ergodic, then there is little information about the initial conditions at the end of the time-series. In this case, the approximate marginal posterior density of the initial conditions depends weakly on the sample size. This effect also interacts with the mean-field approximation: the derivation of the approximate posterior density of the initial conditions depends primarily on that of the first hidden-state through the message passing algorithm. Therefore, it should not matter whether we increase the sample size: the effective amount of information available about the initial conditions is approximately invariant. Lastly, we note a significant variation of the estimation efficiency for both the state-noise and the precision hyperparameters (except in the van der Pol case: see Fig. 15). This efficiency gain is qualitatively similar to that of the evolution parameters and hidden-states, though to a lesser extent.
Fig. 16 shows the VB-Laplace self-consistency measure, in terms of the quantitative relationship between the measured loss (SEL) and its posterior expectation (EL). To demonstrate the ability of the method to predict its own estimation error, we constructed log–log scatter plots of the posterior expected loss versus the measured loss (having pooled over simulations) for hidden-states, parameters and state-noise. The hidden-states show a nearly one-to-one mapping between measured and expected loss, which is due to the fact that the hidden-states populate the lowest level of the hierarchical model. As a consequence, the VB-Laplace approximation to their posterior density does not suffer from having to integrate over intermediate levels. Both the evolution parameters and initial conditions show a close relationship between measured and expected loss. Nevertheless, it can be seen from Fig. 16 that the VB-Laplace estimates of the evolution parameters for the double-well and van der Pol systems are slightly underconfident. This underconfidence is also observed for the state-noise precision. It might partially be due to a slight but systematic underestimation of the state-noise precision hyperparameter. This pessimistic VB-Laplace estimation of the squared error loss (SEL) would lead to conservative posterior confidence intervals.
However, note that this underconfidence is not observed for the Lorenz parameters, whose VB-Laplace estimates appear to be slightly overconfident (overly narrow posterior confidence intervals). This is important, since it means that the bias of the VB-Laplace posterior confidence intervals depends upon the system being inverted. These underconfidence/overconfidence effects are discussed in detail in the discussion section (“On asymptotic efficiency”).
4.6. Assessing prediction ability
Finally, we assessed the quality of the predictive and sojourn densities. Figs. 17–19 show the approximate predictive densities over the hidden-states, as given by VB-Laplace and a standard Markov chain Monte Carlo (MCMC) sampling technique [35], for each of the three dynamical systems. Specifically:
• Top-left: MCMC predictive density using the true parameters.
-
•
Top-right: MCMC predictive density using the parameters and hyperparameters estimated by the VB-Laplace approach.
-
•
Bottom-left: VB-Laplace approximate predictive density using the parameters and hyperparameters estimated by VB-Laplace.
Note that we used the Monte Carlo averages of the VB-Laplace posterior parameter and hyperparameter estimates from the first series of Monte Carlo simulations. After a "burn-in" period, the predictive density settles into stationary (double-well and van der Pol) or cyclostationary11 (Lorenz) states that are multimodal.12
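For illustration, the sketch below shows how such a sampled predictive density can be formed by propagating an ensemble of trajectories with an Euler–Maruyama discretisation (cf. [35]), starting from a common initial state. The double-well drift f(x) = x - x^3, the noise level and all numerical settings are assumptions chosen for illustration, not the simulation settings used above.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x - x ** 3            # assumed double-well drift
sigma, dt, n_steps, n_traj = 0.5, 1e-3, 5000, 2000

x = np.full(n_traj, 1.0)            # start all trajectories in the right-hand well
for _ in range(n_steps):            # Euler-Maruyama discretisation of the SDE
    x += f(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_traj)

# The ensemble histogram approximates the predictive density at t = n_steps*dt;
# after the burn-in it becomes bimodal, which a single Gaussian cannot capture.
density, edges = np.histogram(x, bins=60, density=True)
```

Running such an ensemble far beyond the burn-in period, and pooling over time, would similarly approximate the sojourn densities discussed below.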
The double-well system (Fig. 17) exhibits a stationary bimodal density whose modes are centred on the two wells. Its burn-in period is similar for both MCMC estimates (ca. one second). The bimodality occurs because of diffusion over the barrier caused by state-noise. The Lorenz system (Fig. 18) shows a quasi-cyclostationary predictive density, after a burn-in period of about 1.5 s under the true parameters, and 0.8 s under their VB estimates. Note that due to the diffusive effect of state-noise, this quasi-cyclostationary density slowly converges to a stationary density (not shown). Within a cycle, each mode reproduces the trajectory of one oscillation around each wing of the Lorenz attractor. The bimodality of the Lorenz predictive density is very different in nature to that of the double-well system. First, there are periodic times at which the two modes co-occur, i.e. for which the predictive density can be considered as unimodal. This occurs approximately every 700 ms. At these times the states are close to the transition point between the two attractor wings, where state-noise allows the system to switch to one or the other wing. However, the trajectory between transition points is quasideterministic, i.e. it evolves in the neighbourhood of the deterministic orbit around the chosen wing, because the flow is dominated by the deterministic part of the evolution function. The van der Pol system (Fig. 19) shows a stationary bimodal density, after a burn-in period of about 1 s. The modes of the stationary density are centred on the extremal values of its deterministic variant. Here again, the bimodality of the van der Pol predictive density is very different from that of the two other systems. The main effect of state-noise is to cause random jitter in the phase of the van der Pol oscillator. In addition, the system slows down when approaching extremal values. As a consequence, an ensemble of stochastic van der Pol oscillators will mostly populate the neighbourhoods of the extremal values of the deterministic oscillator.
The stationarity in each of the three systems seems to be associated with ergodicity (at least for the first moment of the predictive density). Note that both the form of the stationary density and the burn-in period depend upon the structure of the dynamical system, and particularly on the state-noise precision hyperparameter. This latter dependence is expressed most acutely in the Lorenz attractor (Fig. 18): the modes of the cyclostationary distribution under the true parameters and hyperparameters are wider than those under the VB estimates. Also, the burn-in period is much shorter under the VB estimates. This is because the state-noise precision hyperparameter has been underestimated.
The VB-Laplace approximation to the predictive density cannot reproduce the multimodal structure of the true predictive density (Figs. 17–19). However, it is a good approximation during the burn-in period: the MCMC predictive density is then still unimodal and very similar to its VB-Laplace approximation, except for the slight overconfidence problem. Note also the drop in the precision of the VB-Laplace approximate predictive density after the burn-in period, for both the double-well and the Lorenz systems; in other words, the VB-Laplace approach predicts its own inaccuracy after the burn-in period. In summary, these results mean that short-term predictions (i.e. predictions over the burn-in period), in contrast to middle-term predictions, are not compromised by the Gaussian approximation to the predictive density. The accuracy of the VB-Laplace predictions shows a clear transition when the system effectively becomes ergodic; once this is the case (middle-term), the VB-Laplace predictions become useless.
Figs. 20–22 depict the sojourn distributions given by VB-Laplace and by MCMC sampling, for each of the three dynamical systems. The MCMC sojourn density of the double-well system (Fig. 20) is composed of two (nearly Gaussian) modes, connected to each other by a "bridge". The difference between the amplitudes of this bridge under the true parameters and under the VB estimates is again due to a slight underestimation of the state-noise precision hyperparameter. As can be seen from Fig. 20, the approximate sojourn distribution of the double-well system is far from perfect: one of the two modes (associated with the left potential well) is missing. This is because the Gaussian approximation to the predictive density cannot account for stochastic phase transitions. As a result, the prediction for this system is biased by the initial conditions (the last a posteriori inferred state), and gets worse with time. In contrast, Figs. 21 and 22 suggest a good agreement between the VB-Laplace approximate and MCMC sampled sojourn distributions for the Lorenz and van der Pol systems. Qualitatively, their state-space maps seem to be recovered correctly, ensuring a robust long-term (average) prediction. Note that the lack of precision of the Lorenz VB-Laplace approximate sojourn density (Fig. 21) is mainly due to the underestimation of the state-noise precision hyperparameter, since the same "smoothing" effect is noticeable in the MCMC sojourn distribution under the VB hyperparameters. The structure of the van der Pol sojourn distribution is almost perfectly captured, except for a slight residual from the initial conditions (centred on the fixed point).
Taken together, these preliminary results indicate that the long-term predictive power of the VB-Laplace scheme depends on the structure of the stochastic system being predicted. This means that accurate VB-Laplace long-term predictions might only be obtained for a certain class of stochastic nonlinear systems (see Section 5).
5. Discussion
We have proposed a variational Bayesian approach to the inversion and prediction of nonlinear stochastic dynamic models. This probabilistic technique yields (i) approximate posterior densities over hidden-states, parameters and hyperparameters and (ii) approximate predictive and sojourn densities on state and measurement space. Using simulations of three nonlinear stochastic dynamical systems, the scheme's estimation and model identification capabilities have been demonstrated and examined in terms of self-consistency. The results suggest that:
• VB-Laplace outperforms standard extended Kalman filtering in terms of hidden-state estimation. In particular, VB-Laplace seems to be more robust to model misspecification.
• Approximate Bayesian model comparison allows one to identify models whose structure could have generated the data. This means that the free-energy bound on the log-model-evidence is not confounded by the variational approximations and remains an operationally useful proxy for model comparison.
• VB-Laplace estimators of hidden-states and model parameters seem to attain asymptotic efficiency. However, we have observed a slight but systematic underestimation of the state-noise precision hyperparameter.
• Short- and long-term prediction can be efficient, depending on the nature of the stochastic nonlinear dynamical system.
Overall, our results suggest that the VB-Laplace scheme is a fairly efficient solution to estimation, time-series prediction and model comparison problems. Nevertheless, some characteristics of the proposed scheme were shown to be system-specific. We discuss these properties below, along with related issues and insights.
5.1. On asymptotic efficiency
Asymptotic efficiency for the state-noise per se might be important for estimating unknown exogenous input to the system. For example, when inverting neural-mass models using neuroimaging data, retrieving the correct structure of the network might depend on explaining away external inputs. Furthermore, discovering consistent trends in estimated innovations might lead to further improvements in modelling the dynamical system. Alternative models can then be compared using the VB-Laplace approximation to the marginal likelihood as above.
We now consider an analytic interpretation of asymptotic efficiency for VB-Laplace estimators. Recall that under the Laplace approximation, the posterior covariance matrix is given by:
(58) $\Sigma = \left( -\left. \dfrac{\partial^{2} \ln p(\vartheta \mid y, m)}{\partial \vartheta \, \partial \vartheta^{T}} \right|_{\vartheta = \mu} \right)^{-1}$

where $\vartheta$ denotes the unknown model variables and $\mu$ the posterior mode.
Therefore, its expectation under the marginal likelihood should, asymptotically, tend to the Bayesian Cramér–Rao bound:

(59) $E_{p(y \mid m)}\!\left[ \Sigma \right] \;\longrightarrow\; \left( E_{p(y, \vartheta \mid m)}\!\left[ -\dfrac{\partial^{2} \ln p(y, \vartheta \mid m)}{\partial \vartheta \, \partial \vartheta^{T}} \right] \right)^{-1}$
provided the approximate posterior density converges to the true posterior density for large sample sizes. In the non-asymptotic regime, the normal approximation is typically more accurate for the marginal distributions of individual components of the unknowns than for their full joint distribution. Determining the marginal distribution of one component is equivalent to averaging over all the other components, which renders it closer to normality, by the same logic that underlies the central limit theorem [51]. Therefore, the numerical evidence for asymptotic efficiency of the VB-Laplace scheme13 can be taken as a post hoc justification of the underlying variational approximations. This provides a numerical argument for extending the theoretical result of [27] on VB asymptotic convergence for conjugate-exponential (CE) models to nonlinear (non-CE) hierarchical models. Nevertheless, this does not provide any prediction about the rate of convergence to the likely VB-Laplace asymptotic efficiency. The Monte Carlo simulation series seems to indicate that this convergence rate is system-dependent (in our examples, the Lorenz system might converge more quickly than the double-well and van der Pol systems; see Figs. 14 and 15). In other words, the minimum sample size required to confidently identify a system might strongly depend on the system itself.
In addition, VB-Laplace seems to suffer from an underconfidence problem: the posterior expectation of the estimation error is often over-pessimistic when compared to the empirically measured estimation error. Generally speaking, free-form variational Bayesian inference on conjugate-exponential models is known to be overconfident [21]. This is thought to be due to the mean-field approximation, which neglects dependencies within the exact joint posterior density. However, this heuristic does not hold for non-exponential models, e.g. nonlinear hierarchical models of the sort we are dealing with.
This underconfidence property might be due to a slight underestimation of the precision hyperparameters, which would inflate posterior uncertainty about the other variables in the model. This underestimation bias might itself be due to the priors we chose (weakly informative Gamma densities whose first-order moment is two orders of magnitude lower than the actual precision hyperparameters; see Tables 2 and 6). This is important, because the overall underconfidence bias (on the evolution parameters) observed in the simulation series might be sensitive to the choice of priors on the precision hyperparameters.
Table 6.

| | | Double-well | Lorenz | van der Pol |
|---|---|---|---|---|
| Measurement-noise precision | Simulated | | | |
| | Prior pdf | | | |
| System-noise precision | Simulated | | | |
| | Prior pdf | | | |
| Evolution parameters | Simulated | | | |
| | Prior pdf | | | |
| Initial conditions | Simulated | | | |
| | Prior pdf | | | |
However, this is certainly not the only effect, since it cannot explain why the evolution parameter estimates of the Lorenz system are (as in the CE case) overconfident (see Fig. 16). Note that in this latter case, the evolution function is linear in the evolution parameters. This means that, in the context of hierarchical nonlinear models, VB-Laplace might over-compensate for the tendency of variational approaches to underestimate posterior uncertainty. The ensuing underconfidence might then be due to the Taylor approximation of the curvature of the log-transition density:
(60) $\Sigma_{\theta} \approx \left( \Pi_{\theta} + \alpha \sum_{t} \left. \dfrac{\partial f}{\partial \theta} \right|_{t}^{T} \left. \dfrac{\partial f}{\partial \theta} \right|_{t} \right)^{-1}$, neglecting the curvature term $\alpha \sum_{t} \left( x_{t+1} - f(x_{t}, \theta) \right)^{T} \left. \dfrac{\partial^{2} f}{\partial \theta \, \partial \theta^{T}} \right|_{t}$

where $\theta$ denotes the evolution parameters, $\Pi_{\theta}$ their prior precision, $\alpha$ the state-noise precision and $f$ the evolution function.
Eq. (60) gives the (approximate) posterior covariance matrix of the evolution parameters. When the evolution function is linear in the parameters (the CE case), the neglected term is zero: the curvature of the log-transition density is then estimated exactly, which would allow VB overconfidence to be expressed in the usual way. In the nonlinear case, however, neglecting this term results in an overestimate of the posterior covariance. Note that underestimating the state-noise precision $\alpha$ leads to an even greater posterior covariance for the evolution parameters; this effect can be seen in the VB-Laplace approximation to the Lorenz sojourn distribution. This potential lack of consistency of variational Bayesian inversion of linear state-space models has already been pointed out by Wang and Titterington [27]. It is possible that both effects highlighted by Eq. (60) contribute to underconfidence in nonlinear models.
5.2. On time-series prediction
Our assessment of the approximate predictive and sojourn densities provided only partly satisfactory results. Overall, the VB-Laplace scheme furnishes a veridical approximation to the short-term predictive density. In addition, the long-term predictions seem to be accurate for systems whose deterministic and stochastic dynamical behaviours are qualitatively similar, which is the case for the Lorenz and van der Pol systems but not for the double-well system. The VB-Laplace approximation to the sojourn density relies on the ergodicity of the hidden stochastic system, which is a weak assumption for the class of systems we have considered. However, stochastic ergodic systems fall into two classes, according to whether or not their deterministic variant is also ergodic. The former class of stochastic systems is called quasideterministic, and has a number of desirable properties [52]: the dynamical behaviour of quasideterministic systems can be approximated by small fluctuations around their deterministic trajectory (hence their name). This means that a local Gaussian approximation around the deterministic trajectory of the system leads to a veridical approximation of the sojourn distribution. Systems are quasideterministic if and only if they are stable with respect to small changes in the initial conditions [40]. This is certainly the case for the van der Pol oscillator, which exhibits a stable limit cycle. The stochastic Lorenz system is also quasideterministic [56]. As a consequence, the VB-Laplace approximations to their stationary (sojourn) distributions are qualitatively valid. However, this is not the case for the double-well system, for which weak stochastic forces can lead to a drastic departure from deterministic dynamics [57] (e.g. phase transitions). In brief, long-term predictions based on the VB-Laplace approximations are only valid if the system is quasideterministic, i.e. if the complexity of its dynamical behaviour is not substantially increased by stochastic effects.
5.3. On model comparison
In terms of model comparison, our results show that the VB-Laplace scheme can identify the structure of the hidden stochastic nonlinear dynamical system, in the sense that models that cover the dynamical structure of the hidden system are a posteriori the most plausible. However, the free-energy showed a slight bias in favour of more complex models: when comparing two models that could both have generated the data, the free-energy selected the model with the higher dimensionality (e.g. the comparison between the generic and the true Lorenz systems). This might be due to the minimum norm priors used for the evolution parameters: the structure of the true hidden system was then explained by a large number of small parameters (as opposed to a small number of large parameters). Since the free-energy decreases with the Kullback–Leibler divergence between the prior and the posterior density, this "minimum norm spreading" is less costly. Importantly, this effect does not seem to confound correct model identification when models that do not cover the true structure are compared.
5.4. On algorithmic convergence
The variational Bayesian approach replaces the multidimensional integrals required for standard Bayesian inference with an optimization scheme. However, this optimization can itself be a difficult problem, because the free-energy is a nonlinear function of the sufficient statistics of the posterior density. The VB-Laplace update rule optimizes a third-order approximation to the free-energy with respect to the sufficient statistics [28]. Note that this approximation to the free-energy comes from neglecting the contributions of fourth and higher (even) order central moments of the Gaussian approximate posterior densities. Since these are polynomial functions of the posterior covariance matrices (and are independent of the posterior modes), a moment closure procedure could be used to finesse the calculation of the variational energies, guaranteeing strict convergence. However, when dealing with analytic observation and evolution functions, the series generally converges rapidly. This means that the contributions of high-order moments to the free-energy, under the Laplace approximation, become negligible. Under these conditions, marginal optimization of the variational energies almost guarantees local optimization of the free-energy.
Obviously, this does not circumvent the problem of global optimization of the free-energy. However, local convergence of the free-energy w.r.t. the sufficient statistics now reduces to local convergence of the variational energy optimization w.r.t. the modes. This is because the only sufficient statistics that need to be optimized are the first-order moments of the approximate marginal posterior densities (the second-order moments are functions of the modes; see Eq. (7)). We used a regularized Gauss–Newton scheme for the variational energy optimization, which is expected to converge under mild conditions; this convergence was observed empirically over all our Monte Carlo simulations. However, we foresee two reasons why VB-Laplace might not converge: either the evolution or observation functions are non-analytic, or the algorithm reaches its stopping criterion too early. The first situation includes models with discrete nonlinearities (e.g., "on/off" switches); here, convergence issues could be handled by extending the scheme to switching state-space hierarchical models (see [55] for the CE case). The second situation might arise from slow convergence rates, if the stopping criterion is based on the free-energy increment between two iterations.
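As a sketch of this kind of update, the following regularized Gauss–Newton routine maximizes a generic Gaussian variational energy, halving the step until the energy increases (the regularisation described above). The interface (residual, jacobian, prior_prec) is hypothetical and stands in for the variational energy gradients of the actual scheme.

```python
import numpy as np

def gauss_newton_mode(residual, jacobian, mu0, prior_prec, max_iter=50, tol=1e-8):
    """Regularized Gauss-Newton ascent on a generic Gaussian variational energy.

    Hypothetical interface: residual(mu) returns (data - prediction) and
    jacobian(mu) the Jacobian of the prediction; prior_prec is the prior
    precision matrix of the unknowns.
    """
    mu = np.asarray(mu0, dtype=float)
    energy = lambda m: -0.5 * residual(m) @ residual(m) - 0.5 * m @ prior_prec @ m
    for _ in range(max_iter):
        r, J = residual(mu), jacobian(mu)
        H = J.T @ J + prior_prec                              # Gauss-Newton curvature
        step = np.linalg.solve(H, J.T @ r - prior_prec @ mu)  # ascent direction
        e0, scale = energy(mu), 1.0
        while energy(mu + scale * step) < e0 and scale > 1e-6:
            scale *= 0.5                  # halve the step until the energy increases
        mu = mu + scale * step
        if np.linalg.norm(scale * step) < tol:
            break
    Sigma = np.linalg.inv(jacobian(mu).T @ jacobian(mu) + prior_prec)
    return mu, Sigma                      # mode and Laplace posterior covariance

# Toy usage: infer m from one noisy observation y = m**2 (values hypothetical).
y = np.array([4.0])
mu, Sigma = gauss_newton_mode(lambda m: y - m ** 2,
                              lambda m: np.array([[2.0 * m[0]]]),
                              np.array([1.0]), 1e-2 * np.eye(1))
```

The step-halving makes each update an ascent on the variational energy, which is what licenses the local convergence argument above.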
5.5. On scalability
A key issue with Bayesian filters is scalability. It is well known that scalability is one of the main advantages of Kalman-like filters over sampling schemes (e.g. particle filters) or high-order approximations to the Kushner–Pardoux PDEs. The VB-Laplace update of the hidden-states posterior density is a regularized Gauss–Newton variant of the Kalman filter. Therefore, the VB-Laplace and Kalman schemes share the same scalability properties.
To substantiate this claim, we analyzed the VB-Laplace scheme using basic computational complexity results from matrix algebra. Assuming that arithmetic with individual elements has complexity $\mathcal{O}(1)$ (as with fixed-precision floating-point arithmetic), it is easy to show that the per-iteration cost (i.e. the number of computations) of the VB updates is:
(61) $\text{cost per iteration} = \mathcal{O}\!\left( T\, n^{3} \right)$

where $T$ is the number of time samples and $n$ the number of hidden-states.
This derives from the sparsity of the mean-field terms, which rely on Kronecker products with identity matrices (see Eqs. (29), (31) and (34)). It can be seen that the per-iteration cost is the same as that of a Kalman filter; i.e., it grows as $\mathcal{O}(n^{3})$ in the number of hidden-states $n$.
In terms of memory, the implementation of our VB scheme must store the matrices required for the calculation of the posterior covariance matrices (see Eqs. (29), (31) and (34)). This memory load is similar to that of a Kalman filter; i.e., it grows as $\mathcal{O}(n^{2})$. Overall, this means that the VB-Laplace scheme inherits the scalability properties of the Kalman filter.
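The following toy benchmark illustrates this scaling. The matrices are stand-ins, not the scheme's actual quantities: the point is only that the dominant operation is an n x n matrix product, so doubling n multiplies the run time by roughly eight.

```python
import numpy as np
from timeit import timeit

def covariance_predict(P, F, Q):
    """One Kalman-style covariance prediction: two n x n products, O(n^3)."""
    return F @ P @ F.T + Q

for n in (64, 128, 256):
    F, P, Q = np.eye(n), np.eye(n), np.eye(n)
    t = timeit(lambda: covariance_predict(P, F, Q), number=20)
    print(n, t)   # doubling n should scale the time by roughly 2**3 = 8
```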
5.6. On influence of noise
In the Monte Carlo simulation series presented above, we did not assess the response of the VB-Laplace scheme to systematic variation of the noise precision. This is justified by our main target application, i.e. the analysis of neuroimaging data (EEG/MEG and fMRI), for which the SNR is known (see e.g. [53]).
In addition, we also fixed the state-noise precision hyperparameter. This is because a subtle balance between drift and state-noise is required for stochastic dynamical systems to exhibit "interesting" properties, which disappear in both low- and high-noise situations. For example, the expected time interval between two transitions of the double-well system is proportional to the state-noise precision (see e.g. [54]). As a consequence, the low-noise double-well system hardly shows any transition. In contradistinction, the high-noise double-well system looks like white noise, because the drift term no longer has any significant influence on the dynamics. Therefore, local and global oscillations co-occur only within a given range of state-noise precision (stochastic resonance).
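As an illustration of this balance, the sketch below estimates the mean waiting time between well-to-well transitions of a stochastic double-well system as a function of the noise level; the drift f(x) = x - x^3, the debouncing threshold and all settings are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_transition_time(sigma, dt=1e-3, n_steps=100_000):
    """Mean waiting time between well switches of dx = (x - x**3) dt + sigma dW."""
    x, sign, last, waits = 1.0, 1.0, 0.0, []
    for k in range(n_steps):
        x += (x - x ** 3) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if np.sign(x) != sign and abs(x) > 0.5:   # debounced well switch
            waits.append(k * dt - last)
            last, sign = k * dt, -sign
    return np.mean(waits) if waits else np.inf    # no switch observed

for sigma in (0.3, 0.5, 0.8):
    # Low noise: rare transitions; high noise: the drift becomes irrelevant.
    print(sigma, mean_transition_time(sigma))
```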
Nevertheless, a comprehensive assessment of the behaviour of the VB-Laplace scheme would require varying both the measurement-noise and the state-noise precisions. Preliminary results (not shown) seem to indicate that the VB-Laplace scheme does not systematically suffer from over- or under-fitting, even with weakly informative precision priors. However, no formal conclusions can yet be drawn about the influence of high noise on the VB-Laplace scheme, which could be a limiting factor for particular applications.
6. Conclusion
In this paper, we have presented an approximate variational Bayesian inference scheme for estimating the hidden-states, parameters and hyperparameters of dynamic nonlinear causal models. We have also assessed its asymptotic efficiency, prediction ability and model selection performance using decision theoretic measures and extensive Monte Carlo simulations. Our results suggest that variational Bayesian techniques are a promising avenue for solving complex inference problems that arise from structured uncertainty in dynamical systems.
Acknowledgement
This work was funded by the Wellcome Trust.
Communicated by S. Coombes
Footnotes
Note that filtering techniques provide the instantaneous posterior density (i.e. the posterior density given the time-series data observed so far), as opposed to smoothing schemes, which cannot operate on-line but furnish the full posterior density (given the complete time-series).
In this article, we refer to parameters governing the second-order moments of the probability density functions as (variance or, reciprocally, precision) hyperparameters.
Note that all these quantities are the negative of their thermodynamic homologues.
The class of decision theoretic problems (i.e. hypothesis testing) is treated as a model comparison problem in a Bayesian framework.
One can apply any arbitrary nonlinear transform to the parameters to implement an implicit probability integral transform.
The Markov blanket of a node in a directed acyclic graph (of the sort given in Fig. 1) comprises the node’s parents, children and parents of those children.
For both hidden-states and initial conditions, we halve the size of the Gauss–Newton update until their respective variational energy increases.
Note that the relationship between RSEL and sample size depicted in Fig. 14 might not, strictly speaking, appear monotonic (cf., e.g., the Lorenz evolution parameters). This is likely to be due to finite size effects in the Monte Carlo simulation series (50 simulations per sample size). However, the rate at which VB-Laplace reaches the asymptotic regime might differ between the systems considered (see Section 5, "On asymptotic efficiency").
Strictly speaking, also depends on .
A cyclostationary system is such that the sufficient statistics of its predictive density are periodic. It can be thought of as an ergodic process that constitutes multiple interleaved stationary processes [50].
Note that the bimodality of the predictive density does not imply bimodality of the posterior density.
The Monte Carlo simulations provide us with a sampling approximation to the left-hand term of Eq. (55) (sampling averages of the squared error loss, see Figs. 8 and 9) given the model.
References
1. Friston K.J., Harrison L., Penny W. Dynamic causal modelling. Neuroimage. 2003;19:1273–1302. doi:10.1016/s1053-8119(03)00202-7.
2. Kiebel S.J., Garrido M.I., Friston K.J. Dynamic causal modelling of evoked responses: The role of intrinsic connections. Neuroimage. 2007;36:332–345. doi:10.1016/j.neuroimage.2007.02.046.
3. Judd K., Smith L.A. Indistinguishable states II: The imperfect model scenario. Physica D. 2004;196:224–242.
4. Saarinen A., Linne M.L., Yli-Harja O. Stochastic differential equation model for cerebellar granule cell excitability. PLoS Comput. Biol. 2008;4. doi:10.1371/journal.pcbi.1000004.
5. Herrmann C.S. Human EEG responses to 1–100 Hz flicker: Resonance phenomena in visual cortex and their potential correlation to cognitive phenomena. Exp. Brain Res. 2001;137:149–160. doi:10.1007/s002210100682.
6. Jimenez J.C., Ozaki T. An approximate innovation method for the estimation of diffusion processes from discrete data. J. Time Ser. Anal. 2006;76:77–97.
7. Friston K.J., Trujillo N.J., Daunizeau J. DEM: A variational treatment of dynamical systems. Neuroimage. 2008;41:849–885. doi:10.1016/j.neuroimage.2008.02.054.
8. Joly A. The fronts and Atlantic storm-track experiment (FASTEX): Scientific objectives and experimental design. Bull. Am. Meteorol. Soc., Météo-France, Toulouse, France, 1997. http://citeseer.ist.psu.edu/496255.html
9. Wikle C.K., Berliner L.M. A Bayesian tutorial for data assimilation. Physica D. 2007;230:1–16.
10. Briers M., Doucet A., Maskell S. Smoothing algorithm for state-space models. IEEE Trans. Signal Process. 2004.
11. Kushner H.J. Probability Methods for Approximations in Stochastic Control and for Elliptic Equations. Mathematics in Science and Engineering, vol. 129. Academic Press; New York: 1977.
12. Pardoux E. Filtrage non-linéaire et équations aux dérivées partielles stochastiques associées. École d'été de probabilités de Saint-Flour XIX, 1989. Lecture Notes in Mathematics, vol. 1464. Springer-Verlag; 1991.
13. Daum F.E., Huang J. The curse of dimensionality for particle filters. In: Proc. of IEEE Conf. on Aerospace, Big Sky, MT, 2003.
14. Julier S., Uhlmann J., Durrant-Whyte H.F. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Automat. Control. 2000.
15. Eyink G.L. A variational formulation of optimal nonlinear estimation. arXiv:physics/0011049, 2001.
16. Budhiraja A., Chen L., Lee C. A survey of numerical methods for nonlinear filtering problems. Physica D. 2007;230:27–36.
17. Arulampalam M.S., Maskell S., Gordon N., Clapp T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002;50(2) (special issue).
18. Doucet A., Tadic V. Parameter estimation in general state-space models using particle methods. Ann. Inst. Stat. Math. 2003;55:409–422.
19. Wan E., Nelson A. Dual extended Kalman filter methods. In: Haykin S., editor. Kalman Filtering and Neural Networks. Wiley; New York: 2001. pp. 123–173 (Chapter 5).
20. Yedidia J.S. An Idiosyncratic Journey Beyond Mean Field Theory. MIT Press; 2000.
21. Beal M. Variational Algorithms for Approximate Bayesian Inference. Ph.D. Thesis, University of London, 2003.
22. Beal M., Ghahramani Z. The Variational Kalman Smoother. Technical Report, University College London, 2001. http://citeseer.ist.psu.edu/ghahramani01variational.html
23. Wang B., Titterington D.M. Convergence and asymptotic normality of variational Bayesian approximations for exponential family models with missing values. ACM Internat. Conf. Proc. Series. 2004;70:577–584.
24. Roweis S.T., Ghahramani Z. An EM algorithm for identification of nonlinear dynamical systems. In: Haykin S., editor. Kalman Filtering and Neural Networks. 2001. http://citeseer.ist.psu.edu/306925.html
25. Valpola H., Karhunen J. An unsupervised learning method for nonlinear dynamic state-space models. Neural Comput. 2002;14(11):2647–2692.
26. Archambeau C., Cornford D., Opper M., Shawe-Taylor J. Gaussian process approximations of stochastic differential equations. In: JMLR Workshop and Conference Proceedings, vol. 1, 2007, pp. 1–16.
27. Wang B., Titterington D.M. Lack of consistency of mean-field and variational Bayes approximations for state-space models. Neural Process. Lett. 2004;20:151–170.
28. Friston K.J., Mattout J., Trujillo-Barreto N., Ashburner J., Penny W. Variational free-energy and the Laplace approximation. Neuroimage. 2007;34:220–234. doi:10.1016/j.neuroimage.2006.08.035.
29. Gray R.M. Entropy and Information Theory. Springer-Verlag; 1990.
30. Tanaka T. A theory of mean field approximation. In: Kearns M.S., Solla S.A., Cohn D.A., editors. Advances in Neural Information Processing Systems. 2001. http://citeseer.ist.psu.edu/303901.html
31. Tanaka T. Information geometry of mean field approximation. Neural Comput. 2000;12:1951–1968. doi:10.1162/089976600300015213.
32. Hinton G.E., Van Camp D. Keeping neural networks simple by minimizing the description length of the weights. In: Proc. of COLT-93, 1993, pp. 5–13.
33. Carlin B.P., Louis T.A. Bayes and Empirical Bayes Methods for Data Analysis. Texts in Statistical Science, 2nd ed. Chapman and Hall/CRC; 2000.
34. Robert C. L'analyse statistique bayésienne. Economica; 1992.
35. Kloeden P.E., Platen E. Numerical Solution of Stochastic Differential Equations. Stochastic Modeling and Applied Probability, 3rd ed. Springer; 1999.
36. Ozaki T. A bridge between nonlinear time series models and nonlinear stochastic dynamical systems: A local linearization approach. Statistica Sinica. 1992;2:113–135.
37. Kleibergen F., Van Dijk H.K. Non-stationarity in GARCH models: A Bayesian analysis. J. Appl. Econom. 1993;8:S41–S61.
38. Meyer R., Fournier D.A., Berg A. Stochastic volatility: Bayesian computation using automatic differentiation and the extended Kalman filter. Econom. J. 2003;6:408–420.
39. Sornette D., Pisarenko V.F. Properties of a simple bilinear stochastic model: Estimation and predictability. Physica D. 2008;237:429–445.
40. Tropper M.M. Ergodic and quasideterministic properties of finite-dimensional stochastic systems. J. Stat. Phys. 1977;17:491–509.
41. Björck A. Numerical Methods for Least Squares Problems. SIAM; Philadelphia: 1996.
42. Lacour C. Nonparametric estimation of the stationary density and the transition density of a Markov chain. Stoch. Process. Appl. 2008;118:232–260.
43. Angeli D., Ferrell J.E., Sontag E.D. Detection of multistability, bifurcations, and hysteresis in a large class of biological positive-feedback systems. Proc. Natl. Acad. Sci. 2004;101:1822–1827. doi:10.1073/pnas.0308265100.
44. Lorenz E.N. Deterministic nonperiodic flow. J. Atmospheric Sci. 1963;20:130–141.
45. Keller H. Attractors and bifurcations of the stochastic Lorenz system. Technical Report No. 389, Universität Bremen, 1996. http://citeseer.ist.psu.edu/keller96attractors.html
46. FitzHugh R. Impulses and physiological states in theoretical models of nerve membranes. Biophys. J. 1961;1:445–466. doi:10.1016/s0006-3495(61)86902-6.
47. Nagumo J.S., Arimoto S., Yoshizawa S. An active pulse transmission line simulating nerve axon. Proc. IRE. 1962;50:2061–2070.
48. Gill R.D., Levit B.Y. Applications of the van Trees inequality: A Bayesian Cramér–Rao bound. Bernoulli. 1995;1:59–79.
49. Slotine J., Li W. Applied Nonlinear Control. Prentice-Hall; New Jersey: 1991.
50. Gardner W.A., Napolitano A., Paura L. Cyclostationarity: Half a century of research. Sig. Process. 2006;86:639–697.
51. Gelman A., Carlin J.B., Stern H.S., Rubin D.B. Bayesian Data Analysis. 2nd ed. Chapman & Hall/CRC; 2004.
52. Hanson F.B., Ryan D. Mean and quasideterministic equivalence for linear stochastic dynamics. Math. Biosci. 1988;93:1–14. doi:10.1016/0025-5564(89)90010-2.
53. Friston K.J., Ashburner J., Kiebel S.J., Nichols T., Penny W.D. Statistical Parametric Mapping: The Analysis of Functional Brain Images. Academic Press, Elsevier Ltd.; 2006. ISBN-10: 0-12-372560-7.
54. Petrelis F., Aumaitre S., Mallick K. Escape from a potential well, stochastic resonance and zero-frequency component of the noise. Europhys. Lett. 2007;79:40004.
55. Ghahramani Z., Hinton G.E. Variational learning for switching state-space models. Neural Comput. 2000;12:831–864. doi:10.1162/089976600300015619.
56. Ito H.M. Ergodicity of randomly perturbed Lorenz model. J. Stat. Phys. 1984;35:151–158.
57. Turbiner A. Anharmonic oscillator and double-well potential: Approximating eigenfunctions. Lett. Math. Phys. 2005;74:169–180.
58. Crisan D., Lyons T. A particle approximation of the solution of the Kushner–Stratonovitch equation. Probab. Theory Related Fields. 1999;115:549–578.