Statistics and Computing. 2016 Jun 2;27(4):1065–1082. doi: 10.1007/s11222-016-9671-0

Statistical analysis of differential equations: introducing probability measures on numerical solutions

Patrick R Conrad 1, Mark Girolami 1,2, Simo Särkkä 3, Andrew Stuart 4, Konstantinos Zygalakis 5

Abstract

In this paper, we present a formal quantification of uncertainty induced by numerical solutions of ordinary and partial differential equation models. Numerical solutions of differential equations contain inherent uncertainties due to the finite-dimensional approximation of an unknown and implicitly defined function. When statistically analysing models based on differential equations describing physical, or other naturally occurring, phenomena, it can be important to explicitly account for the uncertainty introduced by the numerical method. Doing so enables objective determination of this source of uncertainty, relative to other uncertainties, such as those caused by data contaminated with noise or model error induced by missing physical or inadequate descriptors. As ever larger scale mathematical models are being used in the sciences, often sacrificing complete resolution of the differential equation on the grids used, formally accounting for the uncertainty in the numerical method is becoming increasingly important. This paper provides the formal means to incorporate this uncertainty in a statistical model and its subsequent analysis. We show that a wide variety of existing solvers can be randomised, inducing a probability measure over the solutions of such differential equations. These measures exhibit contraction to a Dirac measure around the true unknown solution, where the rates of convergence are consistent with the underlying deterministic numerical method. Furthermore, we employ the method of modified equations to demonstrate enhanced rates of convergence to stochastic perturbations of the original deterministic problem. Ordinary differential equations and elliptic partial differential equations are used to illustrate the approach to quantify uncertainty in both the statistical analysis of the forward and inverse problems.

Electronic supplementary material

The online version of this article (doi:10.1007/s11222-016-9671-0) contains supplementary material, which is available to authorized users.

Keywords: Numerical analysis, Probabilistic numerics, Inverse problems, Uncertainty quantification

Introduction

Motivation

The numerical analysis literature has developed a large range of efficient algorithms for solving ordinary and partial differential equations, which are typically designed to solve a single problem as efficiently as possible (Hairer et al. 1993; Eriksson 1996). When classical numerical methods are placed within statistical analysis, however, we argue that significant difficulties can arise as a result of errors in the computed approximate solutions. While the distributions of interest commonly do converge asymptotically as the solver mesh becomes dense [e.g. in statistical inverse problems (Dashti and Stuart 2016)], we argue that at a finite resolution, the statistical analyses may be vastly overconfident as a result of these unmodelled errors.

The purpose of this paper is to address these issues by the construction and rigorous analysis of novel probabilistic integration methods for both ordinary and partial differential equations. The approach in both cases is similar: we identify the key discretisation assumptions and introduce a local random field, in particular a Gaussian field, to reflect our uncertainty in those assumptions. The probabilistic solver may then be sampled repeatedly to interrogate the uncertainty in the solution. For a wide variety of commonly used numerical methods, our construction is straightforward to apply and provably preserves the order of convergence of the original method.

Furthermore, we demonstrate the value of these probabilistic solvers in statistical inference settings. Analytic and numerical examples show that using a classical non-probabilistic solver with inadequate discretisation when performing inference can lead to inappropriate and misleading posterior concentration in a Bayesian setting. In contrast, the probabilistic solver reveals the structure of uncertainty in the solution, naturally limiting posterior concentration as appropriate.

As a motivating example, consider the solution of the Lorenz’63 system. Since the problem is chaotic, any typical fixed-step numerical method will become increasingly inaccurate for long integration times. Figure 1 depicts a deterministic solution for this problem, computed with a fixed-step, fourth-order Runge–Kutta integrator. Although the solver becomes completely inaccurate by the end of the depicted interval given the step-size selected, it provides no obvious characterisation of its error at late times. Compare this with a sample of randomised solutions based on the same integrator and the same step-size: it is obvious that early-time solutions are accurate and that they diverge at late times, reflecting instability of the solver. Every curve drawn has the same theoretical accuracy as the original classical method, but the randomised integrator provides a detailed and practical approach for revealing the sensitivity of the solution to numerical errors. The method requires only a straightforward modification of the standard Runge–Kutta integrator and is explained in Sect. 2.3.

Fig. 1.

Fig. 1

A comparison of solutions to the Lorenz’63 system using deterministic (red) and randomised (blue) integrators based on a fourth-order Runge–Kutta integrator
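The randomisation used here is simple enough to sketch immediately. The following Python fragment is a minimal sketch assuming NumPy; the initial condition, step-size, and noise scale are illustrative choices of ours, and we parametrise the perturbation by its standard deviation \(\sigma h^{p+1/2}\) with p = 4 to match the order of RK4 (see Sect. 2):

```python
import numpy as np

def lorenz63(u, s=10.0, r=28.0, b=8.0 / 3.0):
    """Right-hand side of the Lorenz'63 system."""
    x, y, z = u
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

def rk4_step(f, u, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(u)
    k2 = f(u + 0.5 * h * k1)
    k3 = f(u + 0.5 * h * k2)
    k4 = f(u + h * k3)
    return u + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def randomised_rk4(f, u0, h, n_steps, sigma, rng):
    """After each RK4 step, add an i.i.d. Gaussian perturbation with
    standard deviation sigma * h**(p + 1/2), p = 4, so each draw keeps
    the strong order of the underlying deterministic method."""
    us = [np.asarray(u0, dtype=float)]
    scale = sigma * h ** 4.5
    for _ in range(n_steps):
        us.append(rk4_step(f, us[-1], h) + scale * rng.standard_normal(3))
    return np.array(us)

rng = np.random.default_rng(0)
h, n = 0.02, 1000
det = randomised_rk4(lorenz63, [-10.0, -5.0, 30.0], h, n, 0.0, rng)   # deterministic
ens = [randomised_rk4(lorenz63, [-10.0, -5.0, 30.0], h, n, 5.0, rng)
       for _ in range(20)]                                            # randomised draws
```

Plotting the ensemble against the deterministic trajectory reproduces the qualitative behaviour of Fig. 1: near-perfect agreement at early times, and divergence once the solver's error is amplified by the chaotic dynamics.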

We summarise the contributions of this work as follows:

  • Construct randomised solvers of ODEs and PDEs using natural modifications of popular existing solvers.

  • Prove the convergence of the randomised methods and study their behaviour by showing a close link between randomised ODE solvers and stochastic differential equations (SDEs).

  • Demonstrate that these randomised solvers can be used to perform statistical analyses that appropriately consider solver uncertainty.

Review of existing work

The statistical analysis of models based on ordinary and partial differential equations is growing in importance, and a number of recent papers in the statistics literature have sought to address certain aspects specific to such models, e.g. parameter estimation (Liang and Wu 2008; Xue et al. 2010; Xun et al. 2013; Brunel et al. 2014) and surrogate construction (Chakraborty et al. 2013). However, the statistical implications of relying on a numerical approximation to the actual solution of the differential equation have not been addressed in the statistics literature to date, and this is the open problem comprehensively addressed in this paper. Earlier work incorporating randomisation into the approximate integration of ordinary differential equations (ODEs) includes Coulibaly and Lécot (1999) and Stengle (1995). Our strategy fits within the emerging field known as Probabilistic Numerics (Hennig et al. 2015), a perspective on computational methods pioneered by Diaconis (1988) and subsequently Skilling (1992). This framework recasts solving differential equations as a statistical inference problem, yielding a probability measure over functions that satisfy the constraints imposed by the specific differential equation. This measure formally quantifies the uncertainty in candidate solution(s) of the differential equation, allowing its use in uncertainty quantification (Sullivan 2016) or Bayesian inverse problems (Dashti and Stuart 2016).

A recent Probabilistic Numerics methodology for ODEs (Chkrebtii et al. 2013) [explored in parallel in Hennig and Hauberg (2014)] has two important shortcomings. First, it is impractical: it supports only first-order accurate schemes, with a computational cost that grows rapidly as the difference stencil grows [although Schober et al. (2014) extends it to Runge–Kutta methods]. Second, it does not clearly articulate the relationship between its probabilistic structure and the problem being solved. These methods construct a Gaussian process whose mean coincides with an existing deterministic integrator. While they claim that the posterior variance is useful, by the conjugacy inherent in linear Gaussian models it is actually just an a priori estimate of the rate of convergence of the integrator, independent of the actual forcing or initial condition of the problem being solved. These works also describe a procedure for randomising the construction of the mean process, which bears similarity to our approach, but it is not formally studied. In contrast, we formally link each draw from our measure to the analytic solution.

Our motivation for enhancing inference problems with models of discretisation error is similar to the more general concept of model error, as developed by Kennedy and O’Hagan (2001). Although more general types of model error, including uncertainty in the underlying physics, are important in many applications, our focus on errors arising from the discretisation of differential equations leads to more specialised methods. Future work may be able to translate insights from our study of the restricted problem to the more general case. Existing strategies for discretisation error include empirically fitted Gaussian models for PDE errors (Kaipio and Somersalo 2007) and randomly perturbed ODEs (Arnold et al. 2013); the latter partially coincides with our construction, but our motivation and analysis are distinct. Recent work (Capistrán et al. 2013) uses Bayes factors to analyse the impact of discretisation error on posterior approximation quality. Probabilistic models have also been used to study error propagation due to rounding error; see Hairer et al. (2008).

Organisation

The remainder of the paper has the following structure: Sect. 2 introduces and formally analyses the proposed probabilistic solvers for ODEs. Section 3 explores the characteristics of random solvers employed in the statistical analysis of both forward and inverse problems. Then, we turn to elliptic PDEs in Sect. 4, where several key steps of the construction of probabilistic solvers and their analysis have intuitive analogues in the ODE context. Finally, an illustrative example of an elliptic PDE inference problem is presented in Sect. 5.¹

Probability measures via probabilistic time integrators

Consider the following ordinary differential equation (ODE):

\[ \frac{du}{dt} = f(u), \qquad u(0) = u_0, \tag{1} \]

where u(·) is a continuous function taking values in \(\mathbb{R}^n\).² We let \(\Phi_t\) denote the flow map for Eq. (1), so that \(u(t) = \Phi_t(u(0))\). The conditions ensuring that this solution exists will be formalised in Assumption 2, below.

Deterministic numerical methods for the integration of this equation on the time interval [0, T] will produce an approximation to the equation on a mesh of points \(\{t_k = kh\}_{k=0}^K\), with \(Kh = T\) (for simplicity we assume a fixed mesh). Let \(u_k = u(t_k)\) denote the exact solution of (1) on the mesh and \(U_k \approx u_k\) denote the approximation computed using finite evaluations of f. Typically, these methods output a single discrete solution \(\{U_k\}_{k=0}^K\), often augmented with some type of error indicator, but do not statistically quantify the uncertainty remaining in the path.

Let \(X_{a,b}\) denote the Banach space \(C([a,b];\mathbb{R}^n)\). The exact solution of (1) on the time interval [0, T] may be viewed as a Dirac measure \(\delta_u\) on \(X_{0,T}\) at the element u that solves the ODE. We will construct a probability measure \(\mu^h\) on \(X_{0,T}\) that is straightforward to sample from, both on and off the mesh, for which h quantifies the size of the discretisation step employed, and whose distribution reflects the uncertainty resulting from the solution of the ODE. Convergence of the numerical method is then related to the contraction of \(\mu^h\) to \(\delta_u\).

We briefly summarise the construction of the numerical method. Let \(\Psi_h : \mathbb{R}^n \to \mathbb{R}^n\) denote a classical deterministic one-step numerical integrator over time-step h, a class including all Runge–Kutta methods and Taylor methods for ODE numerical integration (Hairer et al. 1993). Our numerical methods will have the property that, on the mesh, they take the form

\[ U_{k+1} = \Psi_h(U_k) + \xi_k(h), \tag{2} \]

where the \(\xi_k(h)\) are suitably scaled, i.i.d. Gaussian random variables. That is, the random solution iteratively takes the standard step, \(\Psi_h\), followed by perturbation with a random draw, \(\xi_k(h)\), modelling uncertainty that accumulates between mesh points. The discrete path \(\{U_k\}_{k=0}^K\) is straightforward to sample and in general is not a Gaussian process. Furthermore, the discrete trajectory can be extended into a continuous time approximation of the ODE, which we define as a draw from the measure \(\mu^h\).
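In code, the on-mesh construction is a one-line change to any deterministic one-step loop. A minimal sketch follows (Python/NumPy; here we parametrise the noise by its standard deviation, so that \(\xi_k(h) \sim N(0, \sigma^2 h^{2p+1} I)\), i.e. \(Q = \sigma^2 I\), a cosmetic difference from the \(Q = \sigma I\) convention used later in the text):

```python
import numpy as np

def probabilistic_solve(psi, u0, h, n_steps, sigma, p, rng):
    """Iterate Eq. (2): U_{k+1} = Psi_h(U_k) + xi_k(h), where psi(u, h)
    is any deterministic one-step map and xi_k(h) ~ N(0, sigma^2 h^(2p+1) I)
    is drawn independently at each step."""
    U = np.empty((n_steps + 1, len(u0)))
    U[0] = u0
    std = sigma * h ** (p + 0.5)
    for k in range(n_steps):
        U[k + 1] = psi(U[k], h) + std * rng.standard_normal(len(u0))
    return U
```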

The remainder of this section develops these solvers in detail and proves strong convergence of the random solutions to the exact solution, implying that \(\mu^h \to \delta_u\) in an appropriate sense. Finally, we establish a close relationship between our random solver and a stochastic differential equation (SDE) with small mesh-dependent noise. Intuitively, adding Gaussian noise to an ODE suggests a link to SDEs. Additionally, note that the mesh-restricted version of our algorithm, given by (2), has the same structure as a first-order Itô–Taylor expansion of the SDE

\[ du = f(u)\,dt + \sigma\,dW, \tag{3} \]

for some choice of σ. We make this link precise by performing a backwards error analysis, which connects the behaviour of our solver to an associated SDE.

Probabilistic time integrators: general formulation

The integral form of Eq. (1) is

\[ u(t) = u_0 + \int_0^t f(u(s))\,ds. \tag{4} \]

The solutions on the mesh satisfy

\[ u_{k+1} = u_k + \int_{t_k}^{t_{k+1}} f(u(s))\,ds, \tag{5} \]

and may be interpolated between mesh points by means of the expression

\[ u(t) = u_k + \int_{t_k}^{t} f(u(s))\,ds, \qquad t \in [t_k, t_{k+1}). \tag{6} \]

We may then write

\[ u(t) = u_k + \int_{t_k}^{t} g(s)\,ds, \qquad t \in [t_k, t_{k+1}), \tag{7} \]

where \(g(s) = f(u(s))\) is an unknown function of time. In the algorithmic setting, we have approximate knowledge about g(s) through an underlying numerical method. A variety of traditional numerical algorithms may be derived based on approximation of g(s) by various simple deterministic functions \(g_h(s)\). The simplest such numerical method arises from invoking the Euler approximation that

\[ g_h(s) = f(U_k), \qquad s \in [t_k, t_{k+1}). \tag{8} \]

In particular, if we take \(t = t_{k+1}\) and apply this method inductively, the corresponding numerical scheme arising from making such an approximation to g(s) in (7) is \(U_{k+1} = U_k + h f(U_k)\). Now consider the more general one-step numerical method \(U_{k+1} = \Psi_h(U_k)\). This may be derived by approximating g(s) in (7) by

\[ g_h(s) = \frac{d}{d\tau}\big(\Psi_\tau(U_k)\big)\Big|_{\tau = s - t_k}, \qquad s \in [t_k, t_{k+1}). \tag{9} \]

We note that all consistent (in the sense of numerical analysis) one-step methods will satisfy

\[ \frac{d}{d\tau}\big(\Psi_\tau(u)\big)\Big|_{\tau = 0} = f(u). \]

The approach based on the approximation (9) leads to a deterministic numerical method which is defined as a continuous function of time. Specifically, we have \(U(s) = \Psi_{s-t_k}(U_k)\), \(s \in [t_k, t_{k+1})\). Consider again the Euler approximation, for which \(\Psi_\tau(U) = U + \tau f(U)\), and whose continuous time interpolant is then given by \(U(s) = U_k + (s - t_k) f(U_k)\), \(s \in [t_k, t_{k+1})\). Note that this produces a continuous function, namely an element of \(X_{0,T}\), when extended to \(s \in [0, T]\). The preceding development of a numerical integrator does not acknowledge the uncertainty that arises from lack of knowledge about g(s) in the interval \(s \in [t_k, t_{k+1})\). We propose to approximate g stochastically in order to represent this uncertainty, taking

\[ g_h(s) = \frac{d}{d\tau}\big(\Psi_\tau(U_k)\big)\Big|_{\tau = s - t_k} + \chi_k(s - t_k), \qquad s \in [t_k, t_{k+1}), \]

where the \(\{\chi_k\}\) form an i.i.d. sequence of Gaussian random functions defined on [0, h] with \(\chi_k \sim N(0, C^h)\).³

We will choose \(C^h\) to shrink to zero with h at a prescribed rate (see Assumption 1), and also to ensure that \(\chi_k \in X_{0,h}\) almost surely. The functions \(\{\chi_k\}\) represent our uncertainty about the function g. The corresponding numerical scheme arising from such an approximation is given by

\[ U_{k+1} = \Psi_h(U_k) + \xi_k(h), \tag{10} \]

where the i.i.d. sequence of functions \(\{\xi_k\}\) lies in \(X_{0,h}\) and is given by

\[ \xi_k(t) = \int_0^t \chi_k(\tau)\,d\tau. \tag{11} \]

Note that the numerical solution is now naturally defined between grid points, via the expression

\[ U(s) = \Psi_{s-t_k}(U_k) + \xi_k(s - t_k), \qquad s \in [t_k, t_{k+1}). \tag{12} \]

When it is necessary to evaluate a solution at multiple points in an interval, \(s \in (t_k, t_{k+1}]\), the perturbations \(\xi_k(s - t_k)\) must be drawn jointly, which is facilitated by their Gaussian structure. Although most users will only need the formulation on mesh points, we must consider off-mesh behaviour to rigorously analyse higher order methods, as is also required for the deterministic variants of these methods.
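As one concrete construction of such joint off-mesh draws, offered as an illustration rather than as the paper's prescription: for the first-order case p = 1 (the Euler method below), one may take \(\chi_k\) to be scaled white noise, so that \(\xi_k\) is a scaled integrated Brownian motion with \(\mathrm{cov}(\xi_k(s), \xi_k(t)) = \sigma^2 (s^2 t/2 - s^3/6)\) for \(s \le t\). Then \(\mathbb{E}^h|\xi_k(t)|^2 = \sigma^2 t^3/3\), which satisfies Assumption 1 below with p = 1 and \(Q = \sigma^2 I/3\). The joint draw is a standard multivariate Gaussian sample:

```python
import numpy as np

def joint_xi_draw(ts, sigma, rng):
    """Jointly sample one scalar component of xi_k at times ts (all in
    (0, h]), assuming xi_k is sigma times an integrated Brownian motion:
    cov(xi(s), xi(t)) = sigma^2 (s^2 t / 2 - s^3 / 6) for s <= t.
    Vector states take i.i.d. copies per coordinate."""
    ts = np.asarray(ts, dtype=float)
    s = np.minimum.outer(ts, ts)
    t = np.maximum.outer(ts, ts)
    cov = sigma ** 2 * (s ** 2 * t / 2.0 - s ** 3 / 6.0)
    L = np.linalg.cholesky(cov + 1e-14 * np.eye(len(ts)))  # jitter for stability
    return L @ rng.standard_normal(len(ts))
```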

In the case of the Euler method, for example, we have

\[ U_{k+1} = U_k + h f(U_k) + \xi_k(h) \tag{13} \]

and, between grid points,

\[ U(s) = U_k + (s - t_k) f(U_k) + \xi_k(s - t_k), \qquad s \in [t_k, t_{k+1}). \tag{14} \]

This method is illustrated in Fig. 2. Observe that Eq. (13) has the same form as an Euler–Maruyama method for an associated SDE (3) in which σ depends on the step-size h: the diffusion coefficient is determined by matching the variance of \(\xi_k(h)\) against that of the Euler–Maruyama increment \(\sigma\sqrt{h}\). Section 2.4 develops a more sophisticated connection that extends to higher order methods and off the mesh.
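To spell out the scaling (a short calculation using Assumption 1 below, not a statement taken from the original text): matching the variance of the perturbation \(\xi_k(h)\) with that of an Euler–Maruyama increment \(\sigma\sqrt{h}\,\zeta_k\), \(\zeta_k \sim N(0, I)\), gives

\[ \sigma^2 h = \mathrm{Var}\big(\xi_k(h)\big) = Q h^{2p+1} \quad\Longrightarrow\quad \sigma = \sqrt{Q}\,h^{p}, \]

so for the Euler choice p = 1 the implied diffusion coefficient is \(\sqrt{Q}\,h\), which vanishes as \(h \to 0\), consistent with contraction to the deterministic problem.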

Fig. 2.

Fig. 2

An illustration of deterministic Euler steps and randomised variations. The random integrator in (b) outputs the path in red; we overlay the standard Euler step constructed at each step, before it is perturbed (blue)

While we argue that the choice of modelling local uncertainty in the flow map as a Gaussian process is natural and analytically favourable, it is not unique. It is possible to construct examples where the Gaussian assumption is invalid; for example, when a highly inadequate time-step is used, a systematic bias may be introduced. However, in regimes where the underlying deterministic method performs well, the centred Gaussian assumption is a reasonable prior.

Strong convergence result

To prove the strong convergence of our probabilistic numerical solver, we first need two assumptions quantifying properties of the random noise and of the underlying deterministic integrator, respectively. In what follows we use \(\langle\cdot,\cdot\rangle\) and \(|\cdot|\) to denote the Euclidean inner product and norm on \(\mathbb{R}^n\). We denote the Frobenius norm on \(\mathbb{R}^{n\times n}\) by \(|\cdot|_F\), and \(\mathbb{E}^h\) denotes expectation with respect to the i.i.d. sequence \(\{\chi_k\}\).

Assumption 1

Let \(\xi_k(t) := \int_0^t \chi_k(s)\,ds\) with \(\chi_k \sim N(0, C^h)\). Then there exist \(K > 0\), \(p \ge 1\) such that, for all \(t \in [0, h]\), \(\mathbb{E}^h\big|\xi_k(t)\xi_k(t)^T\big|_F \le K t^{2p+1}\); in particular \(\mathbb{E}^h|\xi_k(t)|^2 \le K t^{2p+1}\). Furthermore, we assume the existence of a matrix Q, independent of h, such that \(\mathbb{E}^h\big[\xi_k(h)\xi_k(h)^T\big] = Q h^{2p+1}\).

Here, and in the sequel, K is a constant independent of h, but possibly changing from line to line. Note that the covariance kernel \(C^h\) is constrained, but not uniquely defined. We will assume the constant matrix has the form \(Q = \sigma I\), and we discuss one possible strategy for choosing σ in Sect. 3.1. Section 2.4 uses a weak convergence analysis to argue that once Q is selected, the exact choice of \(C^h\) has little practical impact.

Assumption 2

The function f and a sufficient number of its derivatives are bounded uniformly in \(\mathbb{R}^n\) in order to ensure that f is globally Lipschitz and that the numerical flow map \(\Psi_h\) has uniform local truncation error of order q+1:

\[ \sup_{u \in \mathbb{R}^n} \big|\Psi_t(u) - \Phi_t(u)\big| \le K t^{q+1}. \]

Remark 2.1

We assume globally Lipschitz f, and bounded derivatives, in order to highlight the key probabilistic ideas whilst simplifying the numerical analysis. Future work will address the non-trivial issue of extending these analyses under weaker assumptions. In this paper, we provide numerical results indicating that a weakening of the assumptions is indeed possible.

Theorem 2.2

Under Assumptions 1 and 2, it follows that there is \(K > 0\) such that

\[ \sup_{0 \le kh \le T} \mathbb{E}^h|u_k - U_k|^2 \le K h^{2\min\{p,q\}}. \]

Furthermore,

\[ \sup_{0 \le t \le T} \mathbb{E}^h|u(t) - U(t)| \le K h^{\min\{p,q\}}. \]

This theorem implies that every probabilistic solution is a good approximation of the exact solution in both a discrete and continuous sense. Choosing \(p \ge q\) is natural if we want to preserve the strong order of accuracy of the underlying deterministic integrator; we proceed with the choice p = q, introducing the maximum amount of noise consistent with this constraint.
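The rate in Theorem 2.2 is easy to check empirically. The sketch below (Python/NumPy; the linear test problem, noise scale, and sample size are illustrative choices of ours) estimates the mean-square error of the randomised Euler method (p = q = 1) against the exact solution of \(\dot{u} = \lambda u\); halving h should roughly halve the root-mean-square error:

```python
import numpy as np

def randomised_euler(lam, u0, h, n_steps, sigma, rng):
    """Randomised Euler (p = q = 1) for the scalar ODE u' = lam * u."""
    u = u0
    std = sigma * h ** 1.5      # std of xi_k(h) = sigma * h**(p + 1/2)
    for _ in range(n_steps):
        u = u + h * lam * u + std * rng.standard_normal()
    return u

rng = np.random.default_rng(1)
lam, u0, T, sigma, M = -1.0, 1.0, 1.0, 0.5, 2000
for h in [0.1, 0.05, 0.025, 0.0125]:
    n = int(round(T / h))
    errs = [randomised_euler(lam, u0, h, n, sigma, rng) - u0 * np.exp(lam * T)
            for _ in range(M)]
    print(f"h = {h:7.4f}   RMS error = {np.sqrt(np.mean(np.square(errs))):.3e}")
```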

Examples of probabilistic time integrators

The canonical illustration of a probabilistic time integrator is the probabilistic Euler method already described.⁴ Another useful example is the classical Runge–Kutta method, which defines a one-step numerical integrator as follows:

\[ \Psi_h(u) = u + \frac{h}{6}\big(k_1(u) + 2k_2(u,h) + 2k_3(u,h) + k_4(u,h)\big), \]

where

\[ k_1(u) = f(u), \qquad k_2(u,h) = f\Big(u + \tfrac{1}{2} h k_1(u)\Big), \qquad k_3(u,h) = f\Big(u + \tfrac{1}{2} h k_2(u,h)\Big), \qquad k_4(u,h) = f\big(u + h k_3(u,h)\big). \]

The method has local truncation error in the form of Assumption 2 with q = 4. It may be used as the basis of a probabilistic numerical method (12), and hence (10) at the grid points. Thus, provided that we choose to perturb this integrator with a random process \(\chi_k\) satisfying Assumption 1 with \(p \ge 4\),⁵ Theorem 2.2 shows that the error of the probabilistic integrator based on the classical Runge–Kutta method is, in the mean-square sense, of the same order of accuracy as that of the deterministic classical Runge–Kutta integrator.

Backward error analysis

Backwards error analyses are a useful tool in numerical analysis; the idea is to characterise the method by identifying a modified equation (dependent upon h) which is solved by the numerical method either exactly, or at least to a higher degree of accuracy than the numerical method solves the original equation. For our random ODE solvers, we will show that the modified equation is a stochastic differential equation (SDE) in which only the matrix Q from Assumption 1 enters; the details of the random processes used in our construction do not enter the modified equation. This universality property underpins the methodology we introduce, as it shows that many different choices of random processes all lead to the same effective behaviour of the numerical method.

We introduce the operators \(\mathcal{L}\) and \(\mathcal{L}_h\) defined so that, for all \(\phi \in C^\infty(\mathbb{R}^n, \mathbb{R})\),

\[ \phi(\Phi_h(u)) = \big(e^{h\mathcal{L}}\phi\big)(u), \qquad \mathbb{E}\,\phi(U_1 \mid U_0 = u) = \big(e^{h\mathcal{L}_h}\phi\big)(u). \tag{15} \]

Thus \(\mathcal{L} := f \cdot \nabla\), and \(e^{h\mathcal{L}_h}\) is the kernel for the Markov chain generated by the probabilistic integrator (2). In fact we never need to work with \(\mathcal{L}_h\) itself in what follows, only with \(e^{h\mathcal{L}_h}\), so that questions involving the operator logarithm do not need to be discussed.

We now introduce a modified ODE and a modified SDE which will be needed in the analysis that follows. The modified ODE is

\[ \frac{d\hat{u}}{dt} = f^h(\hat{u}), \tag{16} \]

whilst the modified SDE has the form

\[ d\tilde{u} = f^h(\tilde{u})\,dt + \sqrt{h^{2p} Q}\,dW. \tag{17} \]

The precise choice of \(f^h\) is detailed below. Letting \(\mathbb{E}\) denote expectation with respect to W, we introduce the operators \(\hat{\mathcal{L}}_h\) and \(\tilde{\mathcal{L}}_h\) so that, for all \(\phi \in C^\infty(\mathbb{R}^n, \mathbb{R})\),

\[ \phi\big(\hat{u}(h) \mid \hat{u}(0) = u\big) = \big(e^{h\hat{\mathcal{L}}_h}\phi\big)(u), \tag{18} \]
\[ \mathbb{E}\,\phi\big(\tilde{u}(h) \mid \tilde{u}(0) = u\big) = \big(e^{h\tilde{\mathcal{L}}_h}\phi\big)(u). \tag{19} \]

Thus,

\[ \hat{\mathcal{L}}_h := f^h \cdot \nabla, \qquad \tilde{\mathcal{L}}_h := f^h \cdot \nabla + \tfrac{1}{2} h^{2p} Q : \nabla\nabla, \tag{20} \]

where : denotes the inner product on \(\mathbb{R}^{n\times n}\) which induces the Frobenius norm, that is, \(A : B = \mathrm{trace}(A^T B)\).

The fact that the deterministic numerical integrator has uniform local truncation error of order q+1 (Assumption 2) implies that, since \(\phi \in C^\infty\),

\[ e^{h\mathcal{L}}\phi(u) - \phi(\Psi_h(u)) = O(h^{q+1}). \tag{21} \]

The theory of modified equations for classical one-step numerical integration schemes for ODEs (Hairer et al. 1993) establishes that it is possible to find \(f^h\) in the form

\[ f^h := f + \sum_{i=q}^{q+l} h^i f_i, \tag{22} \]

such that

\[ e^{h\hat{\mathcal{L}}_h}\phi(u) - \phi(\Psi_h(u)) = O(h^{q+2+l}). \tag{23} \]

We work with this choice of \(f^h\) in what follows.

Now for our stochastic numerical method we have

\[ \phi(U_{k+1}) = \phi(\Psi_h(U_k)) + \xi_k(h) \cdot \nabla\phi(\Psi_h(U_k)) + \tfrac{1}{2}\,\xi_k(h)\xi_k^T(h) : \nabla\nabla\phi(\Psi_h(U_k)) + O\big(|\xi_k(h)|^3\big). \]

Furthermore, the last term has mean of size \(O\big(|\xi_k(h)|^4\big)\). From Assumption 1 we know that \(\mathbb{E}^h\,\xi_k(h)\xi_k^T(h) = Q h^{2p+1}\). Thus

\[ e^{h\mathcal{L}_h}\phi(u) - \phi(\Psi_h(u)) = \tfrac{1}{2} h^{2p+1} Q : \nabla\nabla\phi(\Psi_h(u)) + O(h^{4p+2}). \tag{24} \]

From this it follows that

\[ e^{h\mathcal{L}_h}\phi(u) - \phi(\Psi_h(u)) = \tfrac{1}{2} h^{2p+1} Q : \nabla\nabla\phi(u) + O(h^{2p+2}). \tag{25} \]

Finally we note that (20) implies that

\[ e^{h\tilde{\mathcal{L}}_h}\phi(u) - e^{h\hat{\mathcal{L}}_h}\phi(u) = e^{h\hat{\mathcal{L}}_h}\Big(e^{\frac{1}{2}h^{2p+1} Q : \nabla\nabla} - I\Big)\phi(u) = e^{h\hat{\mathcal{L}}_h}\Big(\tfrac{1}{2} h^{2p+1} Q : \nabla\nabla\phi(u) + O(h^{4p+2})\Big) = \big(I + O(h)\big)\Big(\tfrac{1}{2} h^{2p+1} Q : \nabla\nabla\phi(u) + O(h^{4p+2})\Big). \]

Thus we have

\[ e^{h\tilde{\mathcal{L}}_h}\phi(u) - e^{h\hat{\mathcal{L}}_h}\phi(u) = \tfrac{1}{2} h^{2p+1} Q : \nabla\nabla\phi(u) + O(h^{2p+2}). \tag{26} \]

Now using (23), (25), and (26) we obtain

\[ e^{h\tilde{\mathcal{L}}_h}\phi(u) - e^{h\mathcal{L}_h}\phi(u) = O(h^{2p+2}) + O(h^{q+2+l}). \tag{27} \]

Balancing these terms, in what follows we make the choice \(l = 2p - q\). If \(l < 0\) we adopt the convention that the drift \(f^h\) is simply f. With this choice of l we obtain

\[ e^{h\tilde{\mathcal{L}}_h}\phi(u) - e^{h\mathcal{L}_h}\phi(u) = O(h^{2p+2}). \tag{28} \]

This demonstrates that the error between the Markov kernel of one step of the SDE (17) and the Markov kernel of the numerical method (2) is of order \(O(h^{2p+2})\). Some straightforward stability considerations show that the weak error over an O(1) time interval is \(O(h^{2p+1})\). We make assumptions giving this stability and then state a theorem comparing the weak error with respect to the modified Eq. (17), and the original Eq. (1).

Assumption 3

The function f is in \(C^\infty\) and all its derivatives are uniformly bounded on \(\mathbb{R}^n\). Furthermore, f is such that the operators \(e^{h\mathcal{L}}\) and \(e^{h\mathcal{L}_h}\) satisfy, for all \(\psi \in C^\infty(\mathbb{R}^n, \mathbb{R})\) and some \(L > 0\),

\[ \sup_{u\in\mathbb{R}^n}\big|e^{h\mathcal{L}}\psi(u)\big| \le (1 + Lh)\sup_{u\in\mathbb{R}^n}|\psi(u)|, \qquad \sup_{u\in\mathbb{R}^n}\big|e^{h\mathcal{L}_h}\psi(u)\big| \le (1 + Lh)\sup_{u\in\mathbb{R}^n}|\psi(u)|. \]

Remark 2.3

If p=q in what follows (our recommended choice) then the weak order of the method coincides with the strong order; however, measured relative to the modified equation, the weak order is then one plus twice the strong order. In this case, the second part of Theorem 2.2 gives us the first weak order result in Theorem 2.4. Additionally, Assumption 3 is stronger than we need, but allows us to highlight probabilistic ideas whilst keeping overly technical aspects of the numerical analysis to a minimum. More sophisticated, but structurally similar, analysis would be required for weaker assumptions on f. Similar considerations apply to the assumptions on ϕ.

Theorem 2.4

Consider the numerical method (10) and assume that Assumptions 1 and 3 are satisfied. Then, for \(\phi \in C^\infty\) with all derivatives bounded uniformly on \(\mathbb{R}^n\), we have that

\[ \big|\phi(u(T)) - \mathbb{E}^h(\phi(U_k))\big| \le K h^{\min\{2p,\,q\}}, \qquad kh = T, \]

and

\[ \big|\mathbb{E}\big(\phi(\tilde{u}(T))\big) - \mathbb{E}^h(\phi(U_k))\big| \le K h^{2p+1}, \qquad kh = T, \]

where u and \(\tilde{u}\) solve (1) and (17), respectively.

Example 2.5

Consider the probabilistic integrator derived from the Euler method in dimension n = 1. We thus have q = 1, and we hence set p = 1. The results in Hairer et al. (2006) allow us to calculate \(f^h\) with l = 1. The preceding theory then leads to strong order of convergence 1, measured relative to the true ODE (1), and weak order 3 relative to the SDE

\[ d\tilde{u} = \Big( f(\tilde{u}) - \frac{h}{2}\,f'(\tilde{u}) f(\tilde{u}) + \frac{h^2}{12}\big( f''(\tilde{u}) f^2(\tilde{u}) + 4 (f'(\tilde{u}))^2 f(\tilde{u}) \big) \Big)\,dt + h\sqrt{Q}\,dW. \]

These results allow us to constrain the behaviour of the randomised method using limited information about the covariance structure, \(C^h\). The randomised solution converges weakly, at a high rate, to a solution that only depends on Q. Hence, we conclude that the practical behaviour of the solution is only dependent upon Q, and otherwise \(C^h\) may be any convenient kernel. With these results now available, the following section provides an empirical study of our probabilistic integrators.

Statistical inference and numerics

This section explores applications of the randomised ODE solvers developed in Sect. 2 to forward and inverse problems. Throughout this section, we use the FitzHugh–Nagumo model to illustrate ideas (Ramsay et al. 2007). This is a two-state non-linear oscillator, with states (V, R) and parameters (a, b, c), governed by the equations

\[ \frac{dV}{dt} = c\Big(V - \frac{V^3}{3} + R\Big), \qquad \frac{dR}{dt} = -\frac{1}{c}\big(V - a + bR\big). \tag{29} \]

This particular example does not satisfy the stringent Assumptions 2 and 3, and the numerical results shown demonstrate that, as indicated in Remarks 2.1 and 2.3, our theory will extend to weaker assumptions on f, something we will address in future work.

Calibrating forward uncertainty propagation

Consider Eq. (29) with fixed initial conditions \(V(0) = -1\), \(R(0) = 1\), and parameter values (0.2, 0.2, 3). Figure 3 shows draws of the V species trajectories from the measure associated with the probabilistic Euler solver with p = q = 1, for various values of the step-size and fixed σ = 0.1. The random draws exhibit non-Gaussian structure at large step-size and clearly contract towards the true solution.

Fig. 3.

Fig. 3

The true trajectory of the V species of the FitzHugh–Nagumo model (red) and one hundred realisations from a probabilistic Euler ODE solver with various step-sizes and noise scale σ = 0.1 (blue)

Although the rate of contraction is governed by the underlying deterministic method, the scale parameter, σ, completely controls the apparent uncertainty in the solver.⁶ This tuning problem exists in general, since σ is problem dependent and cannot obviously be computed analytically.

Therefore, we propose to calibrate σ to replicate the amount of error suggested by classical error indicators. In the following discussion, we often explicitly denote the dependence on h and σ with superscripts; hence the probabilistic solver is \(U^{h,\sigma}\) and the corresponding deterministic solver is \(U^{h,0}\). Define the deterministic error as \(e(t) = u(t) - U^{h,0}(t)\). Then we assume there is some computable error indicator \(E(t) \approx e(t)\), defining \(E_k = E(t_k)\). The simplest error indicators might compare differing step-sizes, \(E(t) = U^{h,0}(t) - U^{2h,0}(t)\), or differing order methods, as in a Runge–Kutta 4–5 scheme.

We proceed by constructing a probability distribution π(σ) that is maximised when the desired matching occurs. We estimate this scale matching by comparing: (i) a Gaussian approximation of our random solver at each step k, \(\tilde{\mu}_k^{h,\sigma} = N\big(\mathbb{E}(U_k^{h,\sigma}), \mathbb{V}(U_k^{h,\sigma})\big)\); and (ii) the natural Gaussian measure from the deterministic solver, \(U_k^{h,0}\), and the available error indicator, \(E_k\): \(\nu_k^{\sigma} = N\big(U_k^{h,0}, (E_k)^2\big)\). We construct π(σ) by penalising the distance between these two normal distributions at every step: \(\pi(\sigma) \propto \prod_k \exp\big(-d(\tilde{\mu}_k^{h,\sigma}, \nu_k^{\sigma})\big)\). We find that the Bhattacharyya distance (closely related to the Hellinger metric) works well (Kailath 1967), since it diverges quickly if either the mean or variance differs. The density can be easily estimated using Monte Carlo. If the ODE state is a vector, we take the product of the univariate Bhattacharyya distances. Note that this calibration depends on the initial conditions and any parameters of the ODE.

Returning to the FitzHugh–Nagumo model, sampling from π(σ) yields strongly peaked, uni-modal posteriors; hence we proceed using \(\sigma = \arg\max \pi(\sigma)\). We examine the quality of the scale matching by plotting the magnitudes of the random variation against the error indicator in Fig. 4, observing good agreement of the marginal variances. Note that our measure still reveals non-Gaussian structure and correlations in time not revealed by the deterministic analysis. As described, this procedure requires fixed inputs to the ODE, but it is straightforward to marginalise out a prior distribution over input parameters.
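A sketch of this calibration in Python/NumPy follows. The Bhattacharyya distance between two univariate Gaussians has the closed form used below, while `solver`, `det_traj`, and `err_indicator` are illustrative names of ours for the user-supplied randomised solver, deterministic trajectory, and error indicator (not names from the authors' code):

```python
import numpy as np

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Bhattacharyya distance between N(m1, v1) and N(m2, v2)."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

def log_pi_sigma(sigma, solver, det_traj, err_indicator, n_mc, rng):
    """Monte Carlo estimate of log pi(sigma): at each mesh point, penalise
    the distance between (i) a Gaussian fit to n_mc randomised trajectories
    and (ii) N(deterministic value, error indicator^2).
    solver(sigma, rng) must return one randomised (univariate) trajectory
    on the same mesh as det_traj."""
    samples = np.array([solver(sigma, rng) for _ in range(n_mc)])
    m, v = samples.mean(axis=0), samples.var(axis=0) + 1e-12
    d = bhattacharyya_gauss(m, v, det_traj, err_indicator ** 2 + 1e-12)
    return -np.sum(d)   # pi(sigma) is proportional to exp of this
```

The maximiser can then be located by a grid search or any scalar optimiser over σ.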

Fig. 4.

Fig. 4

A comparison of the error indicator for the V species of the FitzHugh–Nagumo model (blue) and the observed variation in the calibrated probabilistic solver. The red curves depict 50 samples of the magnitude of the difference between a standard Euler solver for several step-sizes and the equivalent randomised variant, using the value of σ maximising π(σ)

Bayesian posterior inference problems

Given the calibrated probabilistic ODE solvers described above, let us consider how to incorporate them into inference problems.

Assume we are interested in inferring parameters of the ODE given noisy observations of the state. Specifically, we wish to infer parameters \(\theta \in \mathbb{R}^d\) for the differential equation \(\dot{u} = f(u, \theta)\), with fixed initial conditions \(u(t=0) = u_0\) (a straightforward modification may include inference on initial conditions). Assume we are provided with data \(d \in \mathbb{R}^m\), \(d_j = u(\tau_j) + \eta_j\), at some collection of times \(\tau_j\), corrupted by i.i.d. noise \(\eta_j \sim N(0, \Gamma)\). If we have prior \(\mathbb{Q}(\theta)\), the posterior we wish to explore is \(\mathbb{P}(\theta \mid d) \propto \mathbb{Q}(\theta)\,L(d, u(\theta))\), where the density L compactly summarises this likelihood model.

The standard computational strategy is to simply replace the unavailable trajectory u with a numerical approximation, inducing the approximate posterior \(\mathbb{P}^{h,0}(\theta \mid d) \propto \mathbb{Q}(\theta)\,L(d, U^{h,0}(\theta))\). Informally, this approximation will be accurate when the error in the numerical solver is small compared to Γ, and often converges formally to \(\mathbb{P}(\theta \mid d)\) as \(h \to 0\) (Dashti and Stuart 2016). However, highly correlated errors at finite h can have substantial impact.

In this work, we are concerned about the undue optimism in the predicted variance, that is, when the posterior concentrates around an arbitrary parameter value even though the deterministic solver is inaccurate and is merely able to reproduce the data by coincidence. The conventional concern is that any error in the solver will be transferred into posterior bias. Practitioners commonly alleviate both concerns by tuning the solver to be nearly perfect; however, we note that this may be computationally prohibitive in many contemporary statistical applications.

We can construct a different posterior that includes the uncertainty in the solver by taking an expectation over random solutions to the ODE

\[ \mathbb{P}^{h,\sigma}(\theta \mid d) \propto \mathbb{Q}(\theta) \int L\big(d, U^{h,\sigma}(\theta, \xi)\big)\,d\xi, \tag{30} \]

where \(U^{h,\sigma}(\theta, \xi)\) is a draw from the randomised solver given parameters θ and random draw ξ. Intuitively, this construction favours parameters that exhibit agreement with the entire family of uncertain trajectories. The typical effect of this expectation is to increase the posterior uncertainty on θ, preventing the inappropriate posterior collapse we are concerned about. Indeed, if the integrator cannot resolve the underlying dynamics, \(h^{p+1/2}\sigma\) will be large. Then \(U^{h,\sigma}(\theta, \xi)\) is independent of θ, hence the prior is recovered, \(\mathbb{P}^{h,\sigma}(\theta \mid d) \approx \mathbb{Q}(\theta)\).
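In practice the integral in Eq. (30) is estimated by Monte Carlo: averaging the likelihood over independent randomised solves yields an unbiased (if noisy) estimate suitable for pseudomarginal MCMC. A minimal sketch (Python/NumPy; `solver(theta, rng)` returning one randomised trajectory and `obs_idx` selecting observation times are illustrative interfaces of ours):

```python
import numpy as np

def log_likelihood_randomised(theta, data, obs_idx, solver, gamma2, n_mc, rng):
    """Estimate log of the integral in Eq. (30) by averaging the Gaussian
    likelihood over n_mc randomised solves; constants independent of theta
    are dropped. exp() of the result is an unbiased likelihood estimate."""
    logs = np.empty(n_mc)
    for i in range(n_mc):
        traj = solver(theta, rng)
        resid = data - traj[obs_idx]
        logs[i] = -0.5 * np.sum(resid ** 2) / gamma2
    return np.logaddexp.reduce(logs) - np.log(n_mc)   # stable log-mean-exp
```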

Notice that as \(h \to 0\), both the measures \(\mathbb{P}^{h,0}\) and \(\mathbb{P}^{h,\sigma}\) typically collapse to the analytic posterior, \(\mathbb{P}\), hence both methods are correct. We do not expect the bias of \(\mathbb{P}^{h,\sigma}\) to be improved, since all of the averaged trajectories are of the same quality as the deterministic solver in \(\mathbb{P}^{h,0}\). We now construct an analytic inference problem demonstrating these behaviours.

Example 3.1

Consider inferring the initial condition, \(u_0 \in \mathbb{R}\), of the scalar linear differential equation \(\dot{u} = \lambda u\), with \(\lambda > 0\). We apply a numerical method to produce the approximation \(U_k \approx u(kh)\). We observe the state at some times \(t = kh\), with additive noise \(\eta_k \sim N(0, \gamma^2)\): \(d_k = U_k + \eta_k\). If we use a deterministic Euler solver, the model predicts \(U_k = (1 + h\lambda)^k u_0\). These model predictions coincide with the slightly perturbed problem

\[ \frac{du}{dt} = h^{-1}\log(1 + \lambda h)\,u, \]

hence error increases with time. However, the assumed observational model does not allow for this, as the observation variance is \(\gamma^2\) at all times.

In contrast, our proposed probabilistic Euler solver predicts

\[ U_k = (1 + h\lambda)^k u_0 + \sigma h^{3/2} \sum_{j=0}^{k-1} \xi_j (1 + \lambda h)^{k-j-1}, \]

where we have made the natural choice p = q, σ is the problem-dependent scaling of the noise, and the \(\xi_k\) are i.i.d. N(0, 1). For a single observation, \(\eta_k\) and every \(\xi_k\) are independent, so we may rearrange the equation to consider the perturbation as part of the observation operator. Hence, a single observation at k has effective variance

\[ \gamma_h^2 := \gamma^2 + \sigma^2 h^3 \sum_{j=0}^{k-1} (1 + \lambda h)^{2(k-j-1)} = \gamma^2 + \sigma^2 h^3\,\frac{(1 + \lambda h)^{2k} - 1}{(1 + \lambda h)^2 - 1}. \]

Thus, late-time observations are modelled as being increasingly inaccurate.

Consider inferring \(u_0\), given a single observation \(d_k\) at time k. If a Gaussian prior \(N(m_0, \zeta_0^2)\) is specified for \(u_0\), then the posterior is \(N(m, \zeta^2)\), where

\[ \zeta^{-2} = \frac{(1 + h\lambda)^{2k}}{\gamma_h^2} + \zeta_0^{-2}, \qquad \zeta^{-2} m = \frac{(1 + h\lambda)^k d_k}{\gamma_h^2} + \zeta_0^{-2} m_0. \]

The observation precision is scaled by \((1 + h\lambda)^{2k}\) because late-time data contain increasing information. Assume that the data are \(d_k = e^{\lambda k h} u_0 + \gamma\eta\), for some given true initial condition \(u_0\) and noise realisation η. Consider now the asymptotic regime, where h is fixed and \(k \to \infty\). For the standard Euler method, where \(\gamma_h = \gamma\), we see that \(\zeta^2 \to 0\), whilst \(m \approx \big((1 + h\lambda)^{-1} e^{h\lambda}\big)^k u_0\). Thus the inference scheme becomes increasingly certain of the wrong answer: the variance tends to zero and the mean tends to infinity.

In contrast, with a randomised integrator, the fixed h, large k asymptotics are

\[ \zeta^2 \to \frac{1}{\zeta_0^{-2} + \lambda(2 + \lambda h)\,\sigma^{-2} h^{-2}}, \qquad m \approx \frac{\big((1 + h\lambda)^{-1} e^{h\lambda}\big)^k u_0}{1 + \zeta_0^{-2}\,\sigma^2 h^2\,\lambda^{-1}(2 + \lambda h)^{-1}}. \]

Thus, the mean blows up at a modified rate, but the variance remains positive.
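The closed-form posterior above can be evaluated directly. The following sketch (illustrative parameter values of ours; data taken noise-free for clarity) reproduces the contrast: the deterministic posterior variance collapses to zero as k grows, while the randomised one plateaus at the limit displayed above:

```python
import numpy as np

lam, h, gamma, sigma = 1.0, 0.1, 0.1, 0.1
m0, zeta0_sq, u0_true = 0.0, 1.0, 1.0

for k in [10, 100, 500]:
    dk = np.exp(lam * k * h) * u0_true      # noise-free observation at step k
    r = (1.0 + h * lam) ** k                # forward map of the Euler solver
    gh2_rand = (gamma ** 2 + sigma ** 2 * h ** 3
                * ((1 + lam * h) ** (2 * k) - 1) / ((1 + lam * h) ** 2 - 1))
    for label, gh2 in [("deterministic", gamma ** 2), ("randomised", gh2_rand)]:
        prec = r ** 2 / gh2 + 1.0 / zeta0_sq
        mean = (r * dk / gh2 + m0 / zeta0_sq) / prec
        print(f"k = {k:4d}  {label:13s}  var = {1.0 / prec:.3e}  mean = {mean:.3e}")
```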

We take an empirical Bayes approach to choosing σ, that is, we use a constant, fixed value \(\sigma = \arg\max \pi(\sigma)\), chosen before the data are observed. Joint inference of the parameters and the noise scale suffers from well-known MCMC mixing issues in Bayesian hierarchical models. To handle the unknown parameter θ, we can marginalise it out using the prior distribution, or in simple problems it may be reasonable to choose a fixed representative value.

We now return to the FitzHugh–Nagumo model; given fixed initial conditions, we attempt to recover parameters \(\theta = (a, b, c)\) from observations of both species at times \(\tau = 1, 2, \ldots, 40\). The priors are log-normal, centred on the true value with unit variance, and the observational noise is \(\Gamma = 0.001\). The data are generated from a high-quality solution, and we perform inference using Euler integrators with various step-sizes, \(h \in \{0.005, 0.01, 0.02, 0.05, 0.1\}\), spanning a range of accurate and inaccurate integrators.

We first perform the inferences with naive use of deterministic Euler integrators. We simulate from each posterior using delayed rejection MCMC (Haario et al. 2006), shown in Fig. 5. Observe the undesirable concentration of every posterior, even those with poor solvers; the posteriors are almost mutually singular, hence clearly the posterior widths are meaningless.

Fig. 5.

Fig. 5

The posterior marginals of the FitzHugh–Nagumo inference problem using deterministic integrators with various step-sizes

Secondly, we repeat the experiment using our probabilistic Euler integrators, with results shown in Fig. 6. We use a noisy pseudomarginal MCMC method, whose fast mixing is helpful for these initial experiments (Medina-Aguayo et al. 2015). These posteriors are significantly improved, exhibiting greater mutual agreement and obvious increasing concentration with improving solver quality. The posteriors are not perfectly nested, possible evidence that our choice of scale parameter is imperfect, or that the assumption of locally Gaussian error deteriorates for large step-sizes. Note that the bias of \(\theta_3\) is essentially unchanged with the randomised integrator, but the posterior for \(\theta_2\) broadens and is correlated with \(\theta_3\), hence introducing a bias in the posterior mode; without randomisation, only the inappropriate certainty about \(\theta_3\) allowed the marginal for \(\theta_2\) to exhibit little bias.

Fig. 6.

Fig. 6

The posterior marginals of the FitzHugh–Nagumo inference problem using probabilistic integrators with various step-sizes

Probabilistic solvers for partial differential equations

We now present a framework for probabilistic solutions of partial differential equations, working within the finite element setting. Our discussion closely resembles the ODE case, except that now we randomly perturb the finite element basis functions.

Probabilistic finite element method for variational problems

Let V be a Hilbert space of real-valued functions defined on a bounded polygonal domain \(D \subset \mathbb{R}^d\). Consider a weak formulation of a linear PDE specified via a symmetric bilinear form \(a : V \times V \to \mathbb{R}\) and a linear form \(r : V \to \mathbb{R}\), giving the problem of finding \(u \in V\) such that \(a(u, v) = r(v)\), \(\forall v \in V\). This problem can be approximated by specifying a finite-dimensional subspace \(V^h \subset V\) and seeking a solution in \(V^h\) instead. This leads to a finite-dimensional problem to be solved for the approximation U:

\[ U \in V^h : \quad a(U, v) = r(v), \quad \forall v \in V^h. \tag{31} \]

This is known as the Galerkin method.

We will work in the setting of finite element methods, assuming that \(V^h = \mathrm{span}\{\phi_j\}_{j=1}^J\), where \(\phi_j\) is locally supported on a grid of points \(\{x_j\}_{j=1}^J\). The parameter h is introduced to measure the diameter of the finite elements. We will also assume that

\[ \phi_j(x_k) = \delta_{jk}. \tag{32} \]

Any element \(U \in V^h\) can then be written as

\[ U(x) = \sum_{j=1}^J U_j \phi_j(x), \tag{33} \]

from which it follows that \(U(x_k) = U_k\). The Galerkin method then gives \(A\mathbf{U} = \mathbf{r}\), for \(\mathbf{U} = (U_1, \ldots, U_J)^T\), \(A_{jk} = a(\phi_j, \phi_k)\), and \(r_k = r(\phi_k)\).

In order to account for uncertainty introduced by the numerical method, we will assume that each basis function \(\phi_j\) can be split into the sum of a systematic part \(\phi_j^s\) and a random part \(\phi_j^r\), where both \(\phi_j\) and \(\phi_j^s\) satisfy the nodal property (32), hence \(\phi_j^r(x_k) = 0\). Furthermore, we assume that each \(\phi_j^r\) shares the same compact support as the corresponding \(\phi_j^s\), preserving the sparsity structure of the underlying deterministic method.
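To make the construction concrete, here is a sketch of a randomised piecewise-linear solve of the 1D Poisson problem \(-u'' = f\) on [0, 1] with homogeneous Dirichlet conditions (Python/NumPy, an illustration of ours rather than the authors' implementation, and a 1D stand-in for the 2D setting analysed below). Each hat function is perturbed, on each element of its support, by a truncated sine series vanishing at the nodes, with Brownian-bridge-like 1/m coefficient decay; the overall factor \(\sigma h^{p+1}\) makes each \(\|\phi_j^r\|_a^2\) of order \(h^{2p+1}\), so the sum over the \(O(1/h)\) basis functions is \(O(h^{2p})\), as required by Assumption 4 below:

```python
import numpy as np

def randomised_poisson_1d(J, f, sigma, p=1, M=3, nq=6, rng=None):
    """Randomised FEM sketch for -u'' = f on [0,1], u(0) = u(1) = 0.
    f must be vectorised. Returns the nodes and the nodal solution."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.linspace(0.0, 1.0, J + 1)
    h = x[1] - x[0]
    qp, qw = np.polynomial.legendre.leggauss(nq)
    qp, qw = 0.5 * (qp + 1.0), 0.5 * qw            # quadrature on [0, 1]
    m = np.arange(1, M + 1)
    # Brownian-bridge-like mode coefficients for the element to the left
    # and to the right of each node
    scale = sigma * h ** (p + 1) * np.sqrt(2.0) / (m * np.pi)
    left = scale * rng.standard_normal((J + 1, M))
    right = scale * rng.standard_normal((J + 1, M))
    sin_q = np.sin(np.outer(m * np.pi, qp))        # modes at quadrature points
    cos_q = np.cos(np.outer(m * np.pi, qp))

    A = np.zeros((J + 1, J + 1))
    b = np.zeros(J + 1)
    for e in range(J):                             # element [x_e, x_{e+1}]
        xq = x[e] + h * qp
        dl = -1.0 / h + (right[e] * (m * np.pi / h)) @ cos_q      # phi_e'
        dr = 1.0 / h + (left[e + 1] * (m * np.pi / h)) @ cos_q    # phi_{e+1}'
        pl = (1.0 - qp) + right[e] @ sin_q                        # phi_e
        pr = qp + left[e + 1] @ sin_q                             # phi_{e+1}
        A[e, e] += h * np.sum(qw * dl * dl)
        A[e, e + 1] += h * np.sum(qw * dl * dr)
        A[e + 1, e] += h * np.sum(qw * dr * dl)
        A[e + 1, e + 1] += h * np.sum(qw * dr * dr)
        b[e] += h * np.sum(qw * f(xq) * pl)
        b[e + 1] += h * np.sum(qw * f(xq) * pr)
    keep = slice(1, J)                             # drop Dirichlet nodes
    U = np.zeros(J + 1)
    U[keep] = np.linalg.solve(A[keep, keep], b[keep])
    return x, U

x, U = randomised_poisson_1d(J=20, f=lambda s: 4.0 * s, sigma=1.0,
                             rng=np.random.default_rng(3))
```

Repeated calls with fresh random draws yield an ensemble of solutions whose spread reflects discretisation uncertainty, exactly as in the ODE case.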

Strong convergence result

As in the ODE case, we begin our convergence analysis with assumptions constraining the random perturbations and the underlying deterministic approximation. The bilinear form a(·,·) is assumed to induce an inner product, and then a norm via \(\|\cdot\|_a^2 = a(\cdot,\cdot)\); furthermore, we assume that this norm is equivalent to the norm on V. Throughout, \(\mathbb{E}^h\) denotes expectation with respect to the random basis functions.

Assumption 4

The collection of random basis functions \(\{\phi_j^r\}_{j=1}^J\) are independent, zero-mean, Gaussian random fields, each of which satisfies \(\phi_j^r(x_k) = 0\) and shares the same support as the corresponding systematic basis function \(\phi_j^s\). For all j, the number of basis functions with index k which share the support of the basis function with index j is bounded independently of J, the total number of basis functions. Furthermore, the basis functions are scaled so that \(\sum_{j=1}^J \mathbb{E}^h\|\phi_j^r\|_a^2 \le C h^{2p}\).

Assumption 5

The true solution u of the weak problem above is in \(L^\infty(D)\). Furthermore, the standard deterministic interpolant of the true solution, defined by \(v^s := \sum_{j=1}^J u(x_j)\phi_j^s\), satisfies \(\|u - v^s\|_a \le C h^q\).

Theorem 4.1

Under Assumptions 4 and 5 it follows that the approximation U, given by (31), satisfies

\[ \mathbb{E}^h\|u - U\|_a^2 \le C h^{2\min\{p,q\}}. \]

As for ODEs, the solver accuracy is limited by either the amount of noise injected or the convergence rate of the underlying deterministic method, making p=q the natural choice.

Poisson solver in two dimensions

Consider a Poisson equation with Dirichlet boundary conditions in dimension d = 2, namely

\[ -\triangle u = f, \quad x \in D, \qquad u = 0, \quad x \in \partial D. \]

We set \(V = H_0^1(D)\) and H to be the space \(L^2(D)\) with inner product \(\langle\cdot,\cdot\rangle\) and resulting norm \(|\cdot|^2 = \langle\cdot,\cdot\rangle\). The weak formulation of the problem takes the variational form above, with

\[ a(u, v) = \int_D \nabla u(x) \cdot \nabla v(x)\,dx, \qquad r(v) = \langle f, v\rangle. \]

Now consider piecewise linear finite elements satisfying the assumptions of Sect. 4.2 in Johnson (2012), and take these to comprise the set \(\{\phi_j^s\}_{j=1}^J\). Then h measures the width of the triangulation of the finite element mesh. Assuming that \(f \in H\), it follows that \(u \in H^2(D)\) and that

\[ \|u - v^s\|_a \le C h \|u\|_{H^2}. \tag{34} \]

Thus q = 1. We choose random basis members \(\{\phi_j^r\}_{j=1}^J\) so that Assumption 4 holds with p = 1. Theorem 4.1 then shows that, for \(e = u - U\), \(\mathbb{E}^h\|e\|_a^2 \le C h^2\). We note that in the deterministic case, we expect an improved rate of convergence in the function space H. Such a result can be shown to hold in our setting, following the usual arguments for the Aubin–Nitsche trick (Johnson 2012); the argument is given in the supplementary materials.

PDE inference and numerics

We now perform numerical experiments using probabilistic solvers for elliptic PDEs. Specifically, we perform inference in a 1D elliptic PDE, \(\nabla\cdot(\kappa(x)\nabla u(x)) = 4x\) for \(x \in [0, 1]\), given boundary conditions \(u(0) = 0\), \(u(1) = 2\). We represent \(\log\kappa\) as piecewise constant over ten equal-sized intervals; the first, on \(x \in [0, 0.1)\), is fixed to be one to avoid non-identifiability issues, and the other nine are given the prior \(\theta_i = \log\kappa_i \sim N(0, 1)\). Observations of the field u are provided at \(x = (0.1, 0.2, \ldots, 0.9)\), with i.i.d. Gaussian error, \(N(0, 10^{-5})\); the simulated observations were generated using a fine grid and quadratic finite elements, then perturbed with error from this distribution.

Again we investigate the posterior produced at various grid sizes, using both deterministic and randomised solvers. The randomised basis functions are draws from a Brownian bridge conditioned to be zero at the nodal points, implemented in practice with a truncated Karhunen–Loève expansion. The covariance operator may be viewed as a fractional Laplacian, as discussed in Lindgren et al. (2011). The scaling σ is again determined by maximising the distribution described in Sect. 3.1, where the error indicator compares linear to quadratic basis functions, and we marginalise out the prior over the \(\kappa_i\) values.
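A draw from such a conditioned bridge is a short computation; the following sketch (Python/NumPy) uses the classical Karhunen–Loève expansion of the Brownian bridge on [0, 1], truncated at `n_modes` terms, which vanishes at both endpoints by construction:

```python
import numpy as np

def brownian_bridge_kl(t, n_modes, rng):
    """Truncated Karhunen-Loeve draw of a Brownian bridge on [0, 1]:
    B(t) = sum_m sqrt(2) sin(m pi t) / (m pi) * Z_m,  Z_m ~ N(0, 1)."""
    m = np.arange(1, n_modes + 1)
    Z = rng.standard_normal(n_modes)
    return (np.sqrt(2.0) * np.sin(np.outer(t, m * np.pi)) / (m * np.pi)) @ Z

t = np.linspace(0.0, 1.0, 101)
path = brownian_bridge_kl(t, n_modes=50, rng=np.random.default_rng(4))
```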

The posteriors are depicted in Figs. 7 and 8. As in the ODE examples, the deterministic solvers lead to incompatible posteriors for varying grid sizes. In contrast, the randomised solvers suggest increasing confidence as the grid is refined, as desired. The coarsest grid size uses an obviously inadequate ten elements, but this is only apparent in the randomised posterior.

Fig. 7.

Fig. 7

The marginal posterior distributions for the first four coefficients in 1D elliptic inverse problem using a classic deterministic solver with various grid sizes

Fig. 8.

Fig. 8

The marginal posterior distributions for the first four coefficients in 1D elliptic inverse problem using a randomised solver with various grid sizes

Conclusions

We have presented a computational methodology, backed by rigorous analysis, which enables quantification of the uncertainty arising from the finite-dimensional approximation of solutions of differential equations. These methods play a natural role in statistical inference problems as they allow for the uncertainty from discretisation to be incorporated alongside other sources of uncertainty such as observational noise. We provide theoretical analyses of the probabilistic integrators which form the backbone of our methodology. Furthermore we demonstrate empirically that they induce more coherent inference in a number of illustrative examples. There are a variety of areas in the sciences and engineering which have the potential to draw on the methodology introduced including climatology, computational chemistry, and systems biology.

Our key strategy is to make assumptions about the local behaviour of solver error, which we have assumed to be Gaussian, and to draw samples from the resulting global distribution of uncertainty over solutions. Section 2.4 describes a universality result, simplifying the task of choosing covariance kernels in practice, within the family of Gaussian processes. However, assumptions of Gaussian error, even locally, may not be appropriate in some cases, or may neglect important domain knowledge. Our framework can be extended in future work to consider alternative priors on the error, for example, multiplicative or non-negative errors.

Our study highlights difficult decisions practitioners face regarding how to expend computational resources. While standard techniques perform well when the solver is highly converged, our results show standard techniques can be disastrously wrong when the solver is not converged. As the measure of convergence is not a standard numerical analysis one, but a statistical one, we have argued that it can be surprisingly difficult to determine in advance which regime a particular problem resides in. Therefore, our practical recommendation is that the lower cost of the standard approach makes it preferable when it is certain that the numerical method is strongly converged with respect to the statistical measure of interest. Otherwise, the randomised method we propose provides a robust and consistent approach to address the error introduced into the statistical task by numerical solver error. In difficult problem domains, such as numerical weather prediction, the focus has typically been on reducing the numerical error in each solver run; techniques such as these may allow a different balance between numerical and statistical computing effort in the future.

The prevailing approach to model error described in Kennedy and O’Hagan (2001) is based on a non-intrusive methodology where the effect of model discrepancy is allowed for in observation space. Our intrusive randomisation of deterministic methods for differential equations can be viewed as a highly specialised discrepancy model, designed using our intimate knowledge of the structure and properties of numerical methods. In this vein, we intend to extend this work to other types of model error, where modifying the internal structure of the models can produce computationally and analytically tractable measures of uncertainty which perform better than non-intrusive methods. Our future work will continue to study the computational challenges and opportunities presented by these techniques.


Acknowledgments

The authors gratefully acknowledge support from EPSRC Grant CRiSM EP/D002060/1, EPSRC Established Career Research Fellowship EP/J016934/2, EPSRC Programme Grant EQUIP EP/K034154/1, and Academy of Finland Research Fellowship 266940. Konstantinos Zygalakis was partially supported by a grant from the Simons Foundation. Part of this work was done during the authors’ stay at the Newton Institute for the programme “Stochastic Dynamical Systems in Biology: Numerical Methods and Applications”.

A Numerical analysis details and proofs

Proof

(Theorem 2.2) We first derive the convergence result on the grid, and then in continuous time. From (10) we have

\[ U_{k+1} = \Psi_h(U_k) + \xi_k(h), \tag{35} \]

whilst we know that

\[ u_{k+1} = \Phi_h(u_k). \tag{36} \]

Define the truncation error \(\epsilon_k = \Psi_h(U_k) - \Phi_h(U_k)\) and note that

\[ U_{k+1} = \Phi_h(U_k) + \epsilon_k + \xi_k(h). \tag{37} \]

Subtracting Eq. (37) from (36) and defining \(e_k = u_k - U_k\), we get

\[ e_{k+1} = \Phi_h(u_k) - \Phi_h(u_k - e_k) - \epsilon_k - \xi_k(h). \]

Taking the Euclidean norm and expectations gives, using Assumption 1 and the independence of the \(\xi_k\),

\[ \mathbb{E}^h|e_{k+1}|^2 = \mathbb{E}^h\big|\Phi_h(u_k) - \Phi_h(u_k - e_k) - \epsilon_k\big|^2 + O(h^{2p+1}), \]

where the constant in the \(O(h^{2p+1})\) term is uniform in \(k : 0 \le kh \le T\). Assumption 2 implies that \(\epsilon_k = O(h^{q+1})\), again uniformly in \(k : 0 \le kh \le T\). Noting that \(\Phi_h\) is globally Lipschitz with constant bounded by 1 + Lh under Assumption 2, we then obtain

\[ \mathbb{E}^h|e_{k+1}|^2 \le (1 + Lh)^2\,\mathbb{E}^h|e_k|^2 + \mathbb{E}^h\Big|\Big\langle h^{\frac12}\big(\Phi_h(u_k) - \Phi_h(u_k - e_k)\big),\; h^{-\frac12}\epsilon_k\Big\rangle\Big| + O(h^{2q+2}) + O(h^{2p+1}). \]

Using the Cauchy–Schwarz inequality on the inner product, and the fact that \(\Phi_h\) is Lipschitz with constant bounded independently of h, we get

\[ \mathbb{E}^h|e_{k+1}|^2 \le (1 + O(h))\,\mathbb{E}^h|e_k|^2 + O(h^{2q+1}) + O(h^{2p+1}). \]

Application of the Gronwall inequality gives the desired result.

Now we turn to continuous time. We note that, for \(s \in [t_k, t_{k+1})\),

\[ U(s) = \Psi_{s-t_k}(U_k) + \xi_k(s - t_k), \qquad u(s) = \Phi_{s-t_k}(u_k). \]

Let \(\mathcal{F}_t\) denote the σ-algebra of events generated by the \(\{\xi_k\}\) up to time t. Subtracting, we obtain, using Assumptions 1 and 2 and the fact that \(\Phi_{s-t_k}\) has Lipschitz constant of the form 1 + O(h),

\[
\begin{aligned}
\mathbb{E}^h\big(|U(s) - u(s)| \,\big|\, \mathcal{F}_{t_k}\big)
&\le \big|\Phi_{s-t_k}(U_k) - \Phi_{s-t_k}(u_k)\big| + \big|\Psi_{s-t_k}(U_k) - \Phi_{s-t_k}(U_k)\big| + \mathbb{E}^h\big(|\xi_k(s-t_k)| \,\big|\, \mathcal{F}_{t_k}\big) \\
&\le (1 + Lh)|e_k| + O(h^{q+1}) + \mathbb{E}^h|\xi_k(s-t_k)| \\
&\le (1 + Lh)|e_k| + O(h^{q+1}) + \big(\mathbb{E}^h|\xi_k(s-t_k)|^2\big)^{\frac12} \\
&\le (1 + Lh)|e_k| + O(h^{q+1}) + O\big(h^{p+\frac12}\big).
\end{aligned}
\]

Now taking expectations we obtain

\[ \mathbb{E}^h|U(s) - u(s)| \le (1 + Lh)\big(\mathbb{E}^h|e_k|^2\big)^{\frac12} + O(h^{q+1}) + O\big(h^{p+\frac12}\big). \]

Using the on-grid error bound gives the desired result, after noting that the constants appearing are uniform in \(0 \le kh \le T\).

Proof

(Theorem 2.4) We prove the second bound first. Let \(w_k = \mathbb{E}\big(\phi(\tilde{u}(t_k)) \mid \tilde{u}(0) = u\big)\) and \(W_k = \mathbb{E}^h\big(\phi(U_k) \mid U_0 = u\big)\). Then let \(\delta_k = \sup_{u\in\mathbb{R}^n}|W_k - w_k|\). It follows from the Markov property that

\[ W_{k+1} - w_{k+1} = e^{h\mathcal{L}_h} W_k - e^{h\tilde{\mathcal{L}}_h} w_k = e^{h\mathcal{L}_h} W_k - e^{h\mathcal{L}_h} w_k + \big(e^{h\mathcal{L}_h} w_k - e^{h\tilde{\mathcal{L}}_h} w_k\big). \]

Using (28) and Assumption 3 we obtain

\[ \delta_{k+1} \le (1 + Lh)\,\delta_k + O(h^{2p+2}). \]

Iterating and employing the Gronwall inequality gives the second error bound.

Now we turn to the first error bound, comparing with the solution u of the original Eq. (1). From (25) and then (21) we see that

\[ e^{h\mathcal{L}_h}\phi(u) - \phi(\Psi_h(u)) = O(h^{2p+1}), \qquad e^{h\mathcal{L}}\phi(u) - e^{h\mathcal{L}_h}\phi(u) = O\big(h^{\min\{2p+1,\,q+1\}}\big). \]

This gives the first weak error estimate, after using the stability estimate on \(e^{h\mathcal{L}}\) from Assumption 3.

Proof

(Theorem 4.1) Recall the Galerkin orthogonality property, which follows from subtracting the approximate variational principle from the true variational principle: it states that, for \(e = u - U\),

\[ a(e, v) = 0, \qquad \forall v \in V^h. \tag{38} \]

From this it follows that

\[ \|e\|_a \le \|u - v\|_a, \qquad \forall v \in V^h. \tag{39} \]

To see this, note that, for any \(v \in V^h\), the orthogonality property (38) gives

\[ a(e, e) = a(e, e + U - v) = a(e, u - v). \tag{40} \]

Thus, by Cauchy–Schwarz, \(\|e\|_a^2 \le \|e\|_a\|u - v\|_a\), \(\forall v \in V^h\), implying (39). We now set, for \(v \in V^h\),

\[ v(x) = \sum_{j=1}^J u(x_j)\phi_j(x) = \sum_{j=1}^J u(x_j)\phi_j^s(x) + \sum_{j=1}^J u(x_j)\phi_j^r(x) =: v^s(x) + v^r(x). \]

By the mean-zero and independence properties of the random basis functions we deduce that

\[ \mathbb{E}^h\|u - v\|_a^2 = \mathbb{E}^h a(u - v, u - v) = \mathbb{E}^h a(u - v^s, u - v^s) + \mathbb{E}^h a(v^r, v^r) = \|u - v^s\|_a^2 + \sum_{j=1}^J u(x_j)^2\,\mathbb{E}^h\|\phi_j^r\|_a^2. \]

The result follows from Assumptions 4 and 5.

Ornstein–Uhlenbeck integrator

An additional example of a randomised integrator is an integrated Ornstein–Uhlenbeck process, derived as follows. Define, on the interval \(s \in [t_k, t_{k+1})\), the pair of equations

\[ dU = V\,dt, \qquad U(t_k) = U_k, \tag{41a} \]
\[ dV = -\Lambda V\,dt + \sqrt{2\Sigma}\,dW, \qquad V(t_k) = f(U_k). \tag{41b} \]

Here W is a standard Brownian motion, and Λ and Σ are invertible matrices, possibly depending on h. The approximating function \(g_h(s)\) is thus defined by V(s), an Ornstein–Uhlenbeck process.

Integrating (41b) we obtain

\[ V(s) = \exp(-\Lambda(s - t_k))\,f(U_k) + \chi_k(s - t_k), \tag{42} \]

where \(s \in [t_k, t_{k+1})\) and the \(\{\chi_k\}\) form an i.i.d. sequence of Gaussian random functions defined on [0, h] with

\[ \chi_k(s) = \sqrt{2\Sigma}\int_0^s \exp(\Lambda(\tau - s))\,dW(\tau). \]

Note that the h-dependence of \(C^h\) comes through the time interval on which \(\chi_k\) is defined, and through Λ and Σ.

Integrating (41a), using (42), we obtain

\[ U(s) = U_k + \Lambda^{-1}\big(I - \exp(-\Lambda(s - t_k))\big) f(U_k) + \xi_k(s - t_k), \tag{43} \]

where \(s \in [t_k, t_{k+1}]\), and, for \(t \in [0, h]\),

\[ \xi_k(t) = \int_0^t \chi_k(\tau)\,d\tau. \tag{44} \]

The numerical method (43) may be written in the form (12), and hence (10) at the grid points, with the definition

\[ \Psi_h(u) = u + \Lambda^{-1}\big(I - \exp(-\Lambda h)\big) f(u). \]

This integrator is first-order accurate and satisfies Assumption 2 with q = 1. Choosing to scale Σ with h so that \(p \ge 1\) in Assumption 1 leads to convergence of the numerical method with order 1.

Had we carried out the above analysis in the case \(\Lambda = 0\), we would have obtained the probabilistic Euler method (14), and hence (13) at grid points, used as our canonical example in the earlier developments.

Convergence rate in \(L^2\)

When considering the Poisson problem in two dimensions, as discussed in Sect. 4, we expect an improved rate of convergence in the function space H. We now show that such a result also holds in our random setting.

Note that, under Assumption 4, if we introduce \(\gamma_{jk}\) that is 1 when two basis functions have overlapping support, and 0 otherwise, then \(\gamma_{jk}\) is symmetric and there is a constant C, independent of j and J, such that \(\sum_{k=1}^J \gamma_{jk} \le C\). Now let φ solve the equation \(a(\varphi, v) = \langle e, v\rangle\), \(\forall v \in V\). Then \(\|\varphi\|_{H^2} \le C|e|\). We define \(\varphi^s\) and \(\varphi^r\) in analogy with the definitions of \(v^s\) and \(v^r\). Following the usual arguments for application of the Aubin–Nitsche trick (Johnson 2012), we have \(|e|^2 = a(e, \varphi) = a(e, \varphi - \varphi^s - \varphi^r)\). Thus

\[ |e|^2 \le \|e\|_a\,\|\varphi - \varphi^s - \varphi^r\|_a \le \sqrt{2}\,\|e\|_a\big(\|\varphi - \varphi^s\|_a^2 + \|\varphi^r\|_a^2\big)^{\frac12}. \tag{45} \]

We note that \(\varphi^r(x) = \sum_{j=1}^J \varphi(x_j)\phi_j^r(x) = \|\varphi\|_{H^2}\sum_{j=1}^J a_j\phi_j^r(x)\) where, by Sobolev embedding (d = 2 here), \(a_j := \varphi(x_j)/\|\varphi\|_{H^2}\) satisfies \(\max_{1\le j\le J}|a_j| \le C\). Note, however, that the \(a_j\) are random and correlated with all of the random basis functions. Using this, together with (34), in (45), we obtain

\[ |e|^2 \le C\,\|e\|_a\Big(h^2 + \Big\|\sum_{j=1}^J a_j\phi_j^r\Big\|_a^2\Big)^{\frac12}\|\varphi\|_{H^2}. \]

We see that

\[ |e| \le C\,\|e\|_a\Big(h^2 + \sum_{j=1}^J\sum_{k=1}^J a_j a_k\,a(\phi_j^r, \phi_k^r)\Big)^{\frac12}. \]

From this and the symmetry of \(\gamma_{jk}\), we obtain

\[ |e| \le C\,\|e\|_a\Big(h^2 + \sum_{j=1}^J\sum_{k=1}^J \gamma_{jk}\big(\|\phi_j^r\|_a^2 + \|\phi_k^r\|_a^2\big)\Big)^{\frac12} \le C\,\|e\|_a\Big(h^2 + 2C\sum_{j=1}^J \|\phi_j^r\|_a^2\Big)^{\frac12}. \]

Taking expectations and using that p = q = 1, we find, using Assumption 4, that \(\mathbb{E}^h|e| \le Ch\big(\mathbb{E}^h\|e\|_a^2\big)^{\frac12} \le Ch^2\), as desired. Thus we recover the extra order of convergence over the rate 1 in the \(\|\cdot\|_a\) norm (although the improved rate is in \(L^1(\Omega; H)\) whilst the lower rate of convergence is in \(L^2(\Omega; V)\)).

Footnotes

¹ Supplementary materials and code are available online: http://www2.warwick.ac.uk/pints.

² To simplify our discussion we assume that the ODE is autonomous, that is, f(u) is independent of time. Analogous theory can be developed for time-dependent forcing.

³ We use \(\chi_k \sim N(0, C^h)\) to denote a zero-mean Gaussian process defined on [0, h] with covariance kernel \(\mathrm{cov}(\chi_k(t), \chi_k(s)) = C^h(t, s)\).

⁴ An additional example of a probabilistic integrator, based on an Ornstein–Uhlenbeck process, is available in the supplementary materials.

⁵ Implementing Eq. (10) is trivial, since it simply adds an appropriately scaled Gaussian random number after each classical Runge–Kutta step.

⁶ Recall that throughout we assume that, within the context of Assumption 1, \(Q = \sigma I\). More generally it is possible to calibrate an arbitrary positive semi-definite Q.

References

  1. Arnold A, Calvetti D, Somersalo E. Linear multistep methods, particle filtering and sequential Monte Carlo. Inverse Probl. 2013;29(8):085007. doi: 10.1088/0266-5611/29/8/085007. [DOI] [Google Scholar]
  2. Brunel NJB, Clairon Q, d’Alché-Buc F. Parametric estimation of ordinary differential equations with orthogonality conditions. J. Am. Stat. Assoc. 2014;109(505):173–185. doi: 10.1080/01621459.2013.841583. [DOI] [Google Scholar]
  3. Capistrán, M., Christen, J.A., Donnet, S.: Bayesian analysis of ODE’s: solver optimal accuracy and Bayes factors (2013). arXiv:1311.2281
  4. Chakraborty A, Mallick BK, Mcclarren RG, Kuranz CC, Bingham D, Grosskopf MJ, Rutter EM, Stripling HF, Drake RP. Spline-based emulators for radiative shock experiments with measurement error. J. Am. Stat. Assoc. 2013;108(502):411–428. doi: 10.1080/01621459.2013.770688. [DOI] [Google Scholar]
  5. Chkrebtii, O.A., Campbell, D.A., Girolami, M.A., Calderhead, B.: Bayesian uncertainty quantification for differential equations (2013). arXiv:1306.2365
  6. Coulibaly I, Lécot C. A quasi-randomized Runge–Kutta method. Math. Comput. Am. Math. Soc. 1999;68(226):651–659. doi: 10.1090/S0025-5718-99-01056-X. [DOI] [Google Scholar]
  7. Dashti M, Stuart A. The Bayesian approach to inverse problems. In: Ghanem R, Higdon D, Owhadi H, editors. Handbook of Uncertainty Quantification. New York: Springer; 2016. [Google Scholar]
  8. Diaconis P. Bayesian numerical analysis. Stat. Decision Theory Relat. Top. IV. 1988;1:163–175. doi: 10.1007/978-1-4613-8768-8_20. [DOI] [Google Scholar]
  9. Eriksson, K.: Computational Differential Equations, vol. 1. Cambridge University Press, Cambridge (1996). https://books.google.co.uk/books?id=gbK2cUxVhDQC
  10. Haario H, Laine M, Mira A, Saksman E. DRAM: efficient adaptive MCMC. Stat. Comput. 2006;16(4):339–354. doi: 10.1007/s11222-006-9438-0. [DOI] [Google Scholar]
  11. Hairer, E., Nørsett, S., Wanner, G.: Solving Ordinary Differential Equations I: Nonstiff Problems. Solving Ordinary Differential Equations, Springer, New York (1993). https://books.google.co.uk/books?id=F93u7VcSRyYC
  12. Hairer E, Lubich C, Wanner G. Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations. New York: Springer; 2006. [Google Scholar]
  13. Hairer E, McLachlan RI, Razakarivony A. Achieving Brouwer’s law with implicit Runge–Kutta methods. BIT Numer. Math. 2008;48(2):231–243. doi: 10.1007/s10543-008-0170-3. [DOI] [Google Scholar]
  14. Hennig, P., Hauberg, S.: Probabilistic solutions to differential equations and their application to Riemannian statistics. In: Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), vol. 33 (2014)
  15. Hennig, P., Osborne, M.A., Girolami, M.: Probabilistic numerics and uncertainty in computations. Proceedings of the Royal Society A (2015) (in press) [DOI] [PMC free article] [PubMed]
  16. Johnson, C.: Numerical Solution of Partial Differential Equations by the Finite Element Method. Dover Books on Mathematics Series, Dover Publications, New York (2012). Incorporated, https://books.google.co.uk/books?id=PYXjyoqy5qMC
  17. Kailath T. The divergence and Bhattacharyya distance measures in signal selection. IEEE Trans. Commun. Technol. 1967;15(1):52–60. doi: 10.1109/TCOM.1967.1089532. [DOI] [Google Scholar]
  18. Kaipio J, Somersalo E. Statistical inverse problems: discretization, model reduction and inverse crimes. J. Comput. Appl. Math. 2007;198(2):493–504. doi: 10.1016/j.cam.2005.09.027. [DOI] [Google Scholar]
  19. Kennedy MC, O’Hagan A. Bayesian calibration of computer models. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2001;63(3):425–464. doi: 10.1111/1467-9868.00294. [DOI] [Google Scholar]
  20. Liang H, Wu H. Parameter estimation for differential equation models using a framework of measurement error in regression models. J. Am. Stat. Assoc. 2008;103(484):1570–1583. doi: 10.1198/016214508000000797. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Lindgren F, Rue H, Lindström J. An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2011;73(4):423–498. doi: 10.1111/j.1467-9868.2011.00777.x. [DOI] [Google Scholar]
  22. Medina-Aguayo, F.J., Lee, A., Roberts, G.O.: Stability of Noisy Metropolis-Hastings (2015). arxiv:1503.07066 [DOI] [PMC free article] [PubMed]
  23. Ramsay JO, Hooker G, Campbell D, Cao J. Parameter estimation for differential equations: a generalized smoothing approach. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2007;69(5):741–796. doi: 10.1111/j.1467-9868.2007.00610.x. [DOI] [Google Scholar]
  24. Schober, M., Duvenaud, D.K., Hennig, P.: Probabilistic ODE solvers with Runge–Kutta means. In: Advances in Neural Information Processing Systems, pp. 739–747 (2014)
  25. Skilling, J.: Bayesian solution of ordinary differential equations. In: Maximum Entropy and Bayesian Methods, pp 23–37. Springer, New York (1992)
  26. Stengle G. Error analysis of a randomized numerical method. Numer. Math. 1995;70(1):119–128. doi: 10.1007/s002110050113. [DOI] [Google Scholar]
  27. Sullivan T. Uncertainty Quantification. New York: Springer; 2016. [Google Scholar]
  28. Xue, H., Miao, H., Wu, H.: Sieve estimation of constant and time-varying coefficients in nonlinear ordinary differential equation models by considering both numerical error and measurement error. Ann. Stat. 38(4), 2351–2387 (2010) [DOI] [PMC free article] [PubMed]
  29. Xun X, Cao J, Mallick B, Maity A, Carroll RJ. Parameter estimation of partial differential equation models. J. Am. Stat. Assoc. 2013;108(503):1009–1020. doi: 10.1080/01621459.2013.794730. [DOI] [PMC free article] [PubMed] [Google Scholar]
