Abstract
In this paper we introduce and analyse Langevin samplers that consist of perturbations of the standard underdamped Langevin dynamics. The perturbed dynamics is such that its invariant measure is the same as that of the unperturbed dynamics. We show that appropriate choices of the perturbations can lead to samplers that have improved properties, at least in terms of reducing the asymptotic variance. We present a detailed analysis of the new Langevin sampler for Gaussian target distributions. Our theoretical results are supported by numerical experiments with non-Gaussian target measures.
Introduction and Motivation
Sampling from probability measures in high-dimensional spaces is a problem that appears frequently in applications, e.g. in computational statistical mechanics and in Bayesian statistics. In particular, we are faced with the problem of computing expectations with respect to a probability measure π on ℝ^d, i.e. we wish to evaluate integrals of the form:
\pi(f) := \int_{\mathbb{R}^d} f(x)\,\pi(dx) . | 1 |
As is typical in many applications, particularly in molecular dynamics and Bayesian inference, the density (for convenience denoted by the same symbol π) is known only up to a normalization constant; furthermore, the dimension d of the underlying space is quite often large enough to render deterministic quadrature schemes computationally infeasible.
A standard approach to approximating such integrals is Markov Chain Monte Carlo (MCMC) techniques [19, 32, 52], where a Markov process (X_t)_{t≥0} is constructed which is ergodic with respect to the probability measure π. Then, defining the long-time average
\pi_T(f) := \frac{1}{T}\int_0^T f(X_t)\, dt | 2 |
for f ∈ L¹(π), the ergodic theorem guarantees almost sure convergence of the long-time average π_T(f) to π(f).
There are infinitely many Markov processes (for the purposes of this paper, diffusion processes) that can be constructed in such a way that they are ergodic with respect to the target distribution. A natural question is then how to choose the ergodic diffusion process. Naturally, the choice should be dictated by the requirement that the computational cost of (approximately) calculating (1) is minimized. A standard example is given by the overdamped Langevin dynamics, defined to be the unique (strong) solution of the following stochastic differential equation (SDE):
dX_t = -\nabla V(X_t)\, dt + \sqrt{2}\, dW_t | 3 |
where V is the potential associated with the smooth positive density π, i.e. π ∝ e^{-V}. Under appropriate assumptions on V, i.e. on the measure π, the process is ergodic and in fact reversible with respect to the target distribution.
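To fix ideas, the following sketch shows how the estimator (2) is computed in practice for the overdamped dynamics (3), using a simple Euler–Maruyama discretisation; the double-well potential, the step size and all other numerical values are illustrative choices rather than settings from the paper, and the discretisation bias discussed later in Sect. 5 is ignored here.

```python
import numpy as np

def overdamped_langevin_average(grad_V, f, x0, dt=1e-2, n_steps=200_000, rng=None):
    """Euler-Maruyama discretisation of dX_t = -grad V(X_t) dt + sqrt(2) dW_t,
    returning the time average (1/T) int_0^T f(X_t) dt with T = n_steps * dt."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    running_sum = 0.0
    for _ in range(n_steps):
        x = x - grad_V(x) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(x.shape)
        running_sum += f(x)
    return running_sum / n_steps

# Illustrative one-dimensional double-well potential V(x) = (x^2 - 1)^2 / 4
grad_V = lambda x: x * (x ** 2 - 1.0)
print(overdamped_langevin_average(grad_V, f=lambda x: x[0] ** 2, x0=[0.0]))
```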
Another well-known example is the underdamped Langevin dynamics, defined on the extended space (phase space) ℝ^{2d} by the following pair of coupled SDEs:
dq_t = M^{-1} p_t\, dt, \qquad dp_t = -\nabla V(q_t)\, dt - \Gamma M^{-1} p_t\, dt + \sqrt{2\Gamma}\, dW_t | 4 |
where the mass and friction tensors M and Γ are assumed to be symmetric positive definite matrices. It is well known [36, 46] that the process (q_t, p_t) is ergodic with respect to the measure having density with respect to the Lebesgue measure on ℝ^{2d} given by
\frac{1}{Z}\exp\Big(-V(q) - \tfrac{1}{2}\, p^{\top} M^{-1} p\Big) | 5 |
where Z is a normalization constant. Note that integrating out the p-variables recovers π as the marginal in q, and thus for observables f that depend only on q we have that π_T(f) → π(f) almost surely. Notice also that the dynamics restricted to the q-variables is no longer Markovian. The p-variables can thus be interpreted as giving some instantaneous memory to the system, facilitating efficient exploration of the state space. Higher order Markovian models, based on a finite dimensional (Markovian) approximation of the generalized Langevin equation, can also be used [12].
As there is a lot of freedom in choosing the dynamics in (2), see the discussion in Sect. 2, it is desirable to choose the diffusion process in such a way that π_T(f) provides a good estimate of π(f). The performance of the estimator (2) can be quantified in various manners. The ultimate goal, of course, is to choose the dynamics as well as the numerical discretization in such a way that the computational cost of the long-time-average estimator is minimized, for a given tolerance. The minimization of the computational cost consists of three steps: bias correction, variance reduction and choice of an appropriate discretization scheme. For the latter step see Sect. 5 and [14, Sect. 6].
Under appropriate conditions on the potential V it can be shown that both (3) and (4) converge to equilibrium exponentially fast, e.g. in relative entropy. One performance objective would then be to choose the process so that this rate of convergence is maximised. Conditions on the potential V which guarantee exponential convergence to equilibrium, both in L²(π) and in relative entropy, can be found in [7, 39, 54]. In the case when the target measure is Gaussian, both the overdamped (3) and the underdamped (4) dynamics become generalized Ornstein–Uhlenbeck processes. For such processes the entire spectrum of the generator (or, equivalently, of the Fokker–Planck operator) can be computed analytically and, in particular, an explicit formula for the L²(π)-spectral gap can be obtained [38, 43, 44]. A detailed analysis of the convergence to equilibrium in relative entropy for stochastic differential equations with linear drift, i.e. generalized Ornstein–Uhlenbeck processes, has been carried out in [1, 2].
In addition to speeding up convergence to equilibrium, i.e. reducing the bias of the estimator (2), one is naturally also interested in reducing the asymptotic variance. Under appropriate conditions on the target measure and the observable f, the estimator satisfies a central limit theorem (CLT) [31], that is,
\sqrt{T}\,\big(\pi_T(f) - \pi(f)\big) \xrightarrow{d} \mathcal{N}\big(0, \sigma_f^2\big) \quad \text{as } T \to \infty,
where σ_f² is the asymptotic variance of the estimator π_T(f). The asymptotic variance characterises the magnitude of fluctuations of π_T(f) around π(f). Consequently, another natural objective is to choose the process such that σ_f² is as small as possible. It is well known that the asymptotic variance can be expressed in terms of the solution to an appropriate Poisson equation for the generator of the dynamics [31]
-\mathcal{L}\phi = f - \pi(f), \qquad \sigma_f^2 = 2\int \phi\,\big(f - \pi(f)\big)\, d\pi . | 6 |
Techniques from the theory of partial differential equations can then be used in order to study the problem of minimizing the asymptotic variance. This is the approach that was taken in [14], see also [23], and it will also be used in this paper.
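For readers less familiar with this connection, the following formal computation (standard, and made rigorous in [31]) links the Poisson-equation formulation to the Green–Kubo picture of the asymptotic variance as an integrated autocorrelation; it assumes π(f) = 0 and the exponential decay estimate introduced in Sect. 2.3 below:

\sigma_f^2 \;=\; 2\int_0^{\infty} \mathbb{E}_\pi\big[f(X_0)\, f(X_t)\big]\, dt
\;=\; 2\int_0^{\infty} \big\langle P_t f,\, f \big\rangle_{L^2(\pi)}\, dt
\;=\; 2\,\big\langle (-\mathcal{L})^{-1} f,\, f \big\rangle_{L^2(\pi)}
\;=\; 2\,\big\langle \phi,\, f \big\rangle_{L^2(\pi)}, \qquad -\mathcal{L}\phi = f .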
Other measures of performance have also been considered. For example, in [50, 51], performance of the estimator is quantified in terms of the rate functional of the ensemble measure . See also [28] for a study of the nonasymptotic behaviour of MCMC techniques, including the case of overdamped Langevin dynamics.
Similar analyses have been carried out for various modifications of (3). Of particular interest to us are the Riemannian manifold MCMC [18] (see the discussion in Sect. 2) and the nonreversible Langevin samplers [20, 21]. As a particular example of the general framework that was introduced in [18], we mention the preconditioned overdamped Langevin dynamics that was presented in [4]. There, the long-time behaviour of as well as the asymptotic variance of the corresponding estimator are studied and applied to equilibrium sampling in molecular dynamics. A variant of the standard underdamped Langevin dynamics that can be thought of as a form of preconditioning and that has been used by practitioners is the mass-tensor molecular dynamics [6].
The nonreversible overdamped Langevin dynamics
dX_t = \big(-\nabla V(X_t) + \gamma(X_t)\big)\, dt + \sqrt{2}\, dW_t | 7 |
where the vector field γ satisfies ∇·(γ e^{-V}) = 0, is ergodic (but not reversible) with respect to the target measure π for all choices of the divergence-free vector field γ. The asymptotic behaviour of this process was considered for Gaussian diffusions in [20], where the rate of convergence of the covariance to equilibrium was quantified in terms of the choice of γ. This work was extended to the case of non-Gaussian target densities, and consequently to nonlinear SDEs of the form (7), in [21]. The problem of constructing the optimal nonreversible perturbation, in terms of the spectral gap, for Gaussian target densities was studied in [34], see also [55]. Optimal nonreversible perturbations with respect to minimizing the asymptotic variance were studied in [14, 23]. In all these works it was shown that, in theory [i.e. without taking into account the computational cost of the discretization of the dynamics (7)], the nonreversible Langevin sampler (7) is never worse than its reversible counterpart (3), both in terms of converging faster to the target distribution and in terms of having a lower asymptotic variance. It should be emphasized that the two optimality criteria, maximizing the spectral gap and minimizing the asymptotic variance, lead to different choices for the nonreversible drift γ.
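A standard way to construct admissible perturbations is to take γ = J∇V with a constant skew-symmetric matrix J. The following one-line computation (a sketch, writing the target density as e^{-V} up to normalization) verifies that this choice satisfies the divergence-free condition:

\nabla\cdot\big(J\nabla V\, e^{-V}\big) \;=\; e^{-V}\Big(\textstyle\sum_{i,j} J_{ij}\,\partial_i\partial_j V \;-\; \nabla V^{\top} J\, \nabla V\Big) \;=\; 0,

since both terms vanish by the skew-symmetry of J (the Hessian of V is symmetric, and the quadratic form of a skew-symmetric matrix is identically zero).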
The goal of this paper is to extend the analysis presented in [14, 34] by introducing the following modification of the standard underdamped Langevin dynamics:
| 8 |
where are constant strictly positive definite matrices, and are scalar constants and are constant skew-symmetric matrices. As demonstrated in Sect. 2, the process defined by (8) will be ergodic with respect to the Gibbs measure defined in (5).
Our objective is to investigate the use of these dynamics for computing ergodic averages of the form (2). To this end, we study the long time behaviour of (8) and, using hypocoercivity techniques, prove that the process converges exponentially fast to equilibrium. This perturbed underdamped Langevin process introduces a number of parameters in addition to the mass and friction tensors which must be tuned to ensure that the process is an efficient sampler. For Gaussian target densities, we derive estimates for the spectral gap and the asymptotic variance, valid in certain parameter regimes. Moreover, for certain classes of observables, we are able to identify the choices of parameters which lead to the optimal performance in terms of asymptotic variance. While these results are valid for Gaussian target densities, we advocate these particular parameter choices also for more complex target densities. To demonstrate their efficacy, we perform a number of numerical experiments on more complex, multimodal distributions. In particular, we use the Langevin sampler (8) in order to study the problem of diffusion bridge sampling.
The rest of the paper is organized as follows. In Sect. 2 we present some background material on Langevin dynamics, we construct general classes of Langevin samplers and we introduce criteria for assessing the performance of the samplers. In Sect. 3 we study qualitative properties of the perturbed underdamped Langevin dynamics (8) including exponentially fast convergence to equilibrium and the overdamped limit. In Sect. 4 we study in detail the performance of the Langevin sampler (8) for the case of Gaussian target distributions. In Sect. 5 we introduce a numerical scheme for simulating the perturbed dynamics (8) and we present numerical experiments on the implementation of the proposed samplers for the problem of diffusion bridge sampling. Section 6 is reserved for conclusions and suggestions for further work.
Construction of General Langevin Samplers
Background and Preliminaries
In this section we consider estimators of the form (2) where is a diffusion process given by the solution of the following Itô SDE:
| 9 |
with drift coefficient and diffusion coefficient both having smooth components, and where is a standard –valued Brownian motion. Associated with (9) is the infinitesimal generator formally given by
| 10 |
where , denotes the Hessian of the function f and denotes the Frobenius inner product. In general, is nonnegative definite, and could possibly be degenerate. In particular, the infinitesimal generator (10) need not be uniformly elliptic. To ensure that the corresponding semigroup exhibits sufficient smoothing behaviour, we shall require that the process (9) is hypoelliptic in the sense of Hörmander. If this condition holds, then irreducibility of the process will be an immediate consequence of the existence of a strictly positive invariant distribution , see [30].
Suppose that the process is nonexplosive. It follows from the hypoellipticity assumption that the process possesses a smooth transition density p(t, x, y) which is defined for all t > 0 and all x, y, [5, Theorem VII.5.6]. The associated strongly continuous Markov semigroup is defined by P_t f(x) = E[f(X_t) | X_0 = x]. Suppose that the process is invariant with respect to the target measure π, i.e. ∫ P_t f dπ = ∫ f dπ for all t ≥ 0 and all bounded continuous functions f. Then the semigroup can be extended to a positivity preserving contraction semigroup on L²(π) which is strongly continuous. Moreover, the infinitesimal generator corresponding to this semigroup is given by an extension of (10), which we denote by the same symbol.
Due to hypoellipticity and the invariance of π, the probability measure π has a smooth density with respect to the Lebesgue measure. If this density is strictly positive, it follows that π is necessarily the unique invariant distribution. Slightly abusing the notation, we will denote both the measure and its density by π. Furthermore, we will denote by L²(π) the Hilbert space of π-square-integrable functions, equipped with the corresponding inner product and norm.
A General Characterisation of Ergodic Diffusions
A natural question is what conditions on the coefficients a and b of (9) are required to ensure that is invariant with respect to the distribution . The following result provides a necessary and sufficient condition for a diffusion process to be invariant with respect to a given target distribution.
Theorem 1
Consider a diffusion process on defined by the unique, non-explosive solution to the Itô SDE (9) with drift and diffusion coefficient . Then is invariant with respect to if and only if
| 11 |
where and is a continuously differentiable vector field satisfying
| 12 |
If additionally , then there exists a skew-symmetric matrix function C such that . In this case the infinitesimal generator can be written as an L²(π)-extension of
The proof of the first part of this result can be found in [46, Chap. 4]; similar versions of this characterisation can be found in [54] and [21]. For the existence of the skew-symmetric matrix C see, e.g., [16, Sec.4, Prop. 1]. See also [37].
Remark 1
If (11) holds and the generator is hypoelliptic, it follows immediately that the process is ergodic with unique invariant distribution π (see [30]).
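To see where conditions of this type come from, recall that (for sufficiently regular coefficients) invariance of π under (9) is equivalent to the stationary Fokker–Planck equation L*π = 0. For the overdamped dynamics (3), with drift −∇V and constant diffusion matrix, this is the elementary check (a sketch, using π ∝ e^{-V}):

\mathcal{L}^{*}\pi \;=\; \nabla\cdot\big(\nabla V\, \pi\big) + \Delta \pi \;=\; \nabla\cdot\big(\nabla V\, \pi + \nabla \pi\big) \;=\; \nabla\cdot\big(\nabla V\, \pi - \nabla V\, \pi\big) \;=\; 0 ,

since ∇π = −∇V π. A nonreversible sampler such as (7) adds an extra drift γ whose contribution −∇·(γπ) must vanish separately, which is exactly the divergence-free condition discussed in the introduction.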
More generally, we can consider Itô diffusions in an extended phase space:
| 13 |
where is a standard Brownian motion in , . This is a Markov process with generator
| 14 |
where . We will consider dynamics that is ergodic with respect to such that where .
There are various well-known choices of dynamics which are invariant (and indeed ergodic) with respect to the target distribution .
- Choosing and we immediately recover the overdamped Langevin dynamics (3).
- Given a target density on , if we consider the augmented target density on given in (5), then choosing
where M and Γ are positive definite symmetric matrices, the conditions of Theorem 1 are satisfied for the target density. The resulting dynamics is determined by the underdamped Langevin equation (4). It is straightforward to verify that the generator is hypoelliptic, [35, Sec 2.2.3.1], and thus the dynamics is ergodic.
- More generally, consider the augmented target density on as above, and choose
where and are scalar constants and are constant skew-symmetric matrices. With this choice we recover the perturbed Langevin dynamics (8). It is straightforward to check that (17) satisfies the invariance condition (12), and thus Theorem 1 guarantees that (8) is invariant with respect to the target.
- In a similar fashion, one can introduce an augmented target density on , with
where , for . Clearly . We now define by
and by
where and , for . The resulting process (9) is given by
where are independent –valued Brownian motions. This process is ergodic with unique invariant distribution , and under appropriate conditions on V, converges exponentially fast to equilibrium in relative entropy [42]. Equation (18) is a Markovian representation of a generalised Langevin equation of the form
where N(t) is a mean-zero stationary Gaussian process with autocorrelation function F(t).
- Let be a positive density on such that . Then choosing and we obtain the dynamics
which is then immediately ergodic with respect to the target distribution.
Comparison Criteria
For a fixed observable f, a natural measure of accuracy of the estimator is the mean square error (MSE) defined by
| 19 |
where denotes the expectation conditioned on the process starting at x. It is instructive to introduce the decomposition , where
| 20 |
Here measures the bias of the estimator and measures the variance of fluctuations of around the mean.
The speed of convergence to equilibrium of the process will control both the bias term and the variance. To make this claim more precise, suppose that the semigroup associated with the process decays exponentially fast in L²(π), i.e. there exist constants C ≥ 1 and λ > 0 such that
\| P_t f - \pi(f) \|_{L^2(\pi)} \le C\, e^{-\lambda t}\, \| f - \pi(f) \|_{L^2(\pi)}, \qquad t \ge 0 . | 21 |
Remark 2
If (21) holds with C = 1, this estimate is equivalent to the generator having a spectral gap in L²(π). Allowing for a constant C > 1 is, however, essential for our purposes in order to treat nonreversible and degenerate diffusion processes by the theory of hypocoercivity as outlined in [54].
The following lemma characterises the decay of the bias as T → ∞ in terms of λ and C. The proof can be found in [41].
Lemma 1
Let be the unique, non-explosive solution of (9), such that and , where denotes the Radon-Nikodym derivative of with respect to . Suppose that the process is ergodic with respect to such that the Markov semigroup satisfies (21). Then for ,
The study of the long time behaviour of the variance involves deriving a central limit theorem for the additive functional . As discussed in [13], we reduce this problem to proving well-posedness of the Poisson equation
-\mathcal{L}\phi = f - \pi(f) . | 22 |
The only complications in this approach arise from the fact that the generator need not be symmetric in L²(π) nor uniformly elliptic. The following result summarises conditions for the well-posedness of the Poisson equation and also provides us with a formula for the asymptotic variance. Again, the proof can be found in [41].
Lemma 2
Let be the unique, non-explosive solution of (9) with smooth drift and diffusion coefficients, such that the corresponding infinitesimal generator is hypoelliptic. Suppose that the process is ergodic with respect to π and, moreover, decays exponentially fast in L²(π) as in (21). Then for all , there exists a unique mean zero solution to the Poisson equation (22). If , then for all
\sqrt{T}\,\big(\pi_T(f) - \pi(f)\big) \xrightarrow[T \to \infty]{d} \mathcal{N}\big(0, \sigma_f^2\big), | 23 |
where is the asymptotic variance defined by
\sigma_f^2 = 2\int \phi\,\big(f - \pi(f)\big)\, d\pi = 2\,\big\langle \phi,\, f - \pi(f) \big\rangle_{L^2(\pi)} . | 24 |
Moreover, if where and then (23) holds for all .
Clearly, observables that only differ by a constant have the same asymptotic variance. In the sequel, we will hence restrict our attention to observables satisfying π(f) = 0, simplifying expressions (22) and (23). The corresponding subspace of mean-zero observables in L²(π) will be denoted by . If the exponential decay estimate (21) is satisfied, then Lemma 2 shows that the generator is invertible on this subspace, so we can express the asymptotic variance as
\sigma_f^2 = 2\,\big\langle (-\mathcal{L})^{-1} f,\, f \big\rangle_{L^2(\pi)} . | 25 |
We note that the constants C and λ appearing in the exponential decay estimate (21) also control the speed of convergence of the variance term to zero. Indeed, it is straightforward to show that if (21) is satisfied, then the solution of (22) satisfies
\|\phi\|_{L^2(\pi)} = \big\| (-\mathcal{L})^{-1} f \big\|_{L^2(\pi)} \le \frac{C}{\lambda}\, \|f\|_{L^2(\pi)} . | 26 |
Lemmas 1 and 2 would suggest that choosing the coefficients of the SDE (9) to optimize the constants C and λ in (26) would be an effective means of improving the performance of the estimator π_T(f), especially since the improvement in performance would be uniform over an entire class of observables. When this is possible, it is indeed effective. However, as has been observed in [20, 21, 34], maximising the speed of convergence to equilibrium is a delicate task. Since the variance is the leading order term in MSE(f, T), it is typically sufficient to focus specifically on the asymptotic variance and study how the parameters of the SDE (9) can be chosen to minimise σ_f². This study was undertaken in [14] for processes of the form (7).
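In practice σ_f² is rarely available in closed form when comparing samplers, and it is useful to estimate it from simulation output. One standard possibility (ordinary MCMC methodology, not a construction from this paper) is the method of non-overlapping batch means, sketched below for a trajectory sampled at discrete times k·dt; all variable names are illustrative.

```python
import numpy as np

def batch_means_asymptotic_variance(samples, dt, n_batches=50):
    """Estimate sigma_f^2 in the CLT sqrt(T)(pi_T(f) - pi(f)) -> N(0, sigma_f^2)
    from values f(X_{k*dt}) along a single long trajectory, via batch means."""
    samples = np.asarray(samples, dtype=float)
    m = len(samples) // n_batches                      # samples per batch
    trimmed = samples[: m * n_batches]
    batch_means = trimmed.reshape(n_batches, m).mean(axis=1)
    # Var(pi_T(f)) ~ sigma_f^2 / T and each batch covers physical time m * dt,
    # so sigma_f^2 is estimated by (m * dt) times the sample variance of the batch means.
    return m * dt * np.sum((batch_means - trimmed.mean()) ** 2) / (n_batches - 1)
```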
Perturbation of Underdamped Langevin Dynamics
The primary objective of this work is to compare the performances of the perturbed underdamped Langevin dynamics (8) and the unperturbed dynamics (4) according to the criteria outlined in Sect. 2.3 and to find suitable choices for the matrices , , M and that improve the performance of the sampler. We begin our investigations of (8) by establishing ergodicity and exponentially fast return to equilibrium, and by studying the overdamped limit of (8). As the latter turns out to be nonreversible and therefore in principle superior to the usual overdamped limit (3), e.g. [21], this calculation provides us with further motivation to study the proposed dynamics.
For the bulk of this work, we focus on the particular case when the target measure is Gaussian, i.e. when the potential is given by V(q) = (1/2) qᵀSq with a symmetric and positive definite precision matrix S (i.e. the covariance matrix is given by S^{-1}). In this case, we advocate the following conditions for the choice of parameters:
| 27 |
Under the above choices (27), we show that the large perturbation limit exists and is finite and we provide an explicit expression for it (see Corollary 4). From this expression, we derive an algorithm for finding optimal choices for in the case of quadratic observables (see Algorithm 2).
If the friction coefficient is not too small (), and under certain mild nondegeneracy conditions, we prove that adding a small perturbation will always decrease the asymptotic variance for observables of the form :
see Theorem 3. In fact, we conjecture that this statement is true for arbitrary observables, but we have not been able to prove this. The dynamics (8) [used in conjunction with the conditions (27)] proves to be especially effective when the observable is antisymmetric (i.e. when it changes sign under the substitution q ↦ -q) or when it has a significant antisymmetric part. In particular, in Proposition 3 we show that under certain conditions on the spectrum of the perturbation, for any antisymmetric observable the asymptotic variance converges to zero as the perturbation strength tends to infinity.
Numerical experiments and analysis show that departing significantly from (27) can in fact degrade the performance of the sampler. This is in stark contrast to (7), where no choice of perturbation can increase the asymptotic variance. For this reason, at present it seems advisable to use (8) as a sampler only when a reasonable estimate of the global covariance of the target distribution is available. In the case of Bayesian inverse problems and diffusion bridge sampling, the target measure is given with respect to a Gaussian prior. We demonstrate the effectiveness of our approach in these applications, choosing S in (27) according to the Gaussian prior.
Remark 3
In [34, Rem. 3] another modification of (4) was suggested (albeit with the simplifications and ):
| 28 |
J again denoting an antisymmetric matrix. However, under the change of variables the above equations transform into
| 29 |
where and . Since any observable f depends only on q (the p-variables are merely auxiliary), the estimator as well as its associated convergence characteristics (i.e. asymptotic variance and speed of convergence to equilibrium) are invariant under this transformation. Therefore, (28) reduces to the underdamped Langevin dynamics (4) and does not represent an independent approach to sampling. Suitable choices of M and will be discussed in Sect. 4.5.
Properties of Perturbed Underdamped Langevin Dynamics
In this section we study some of the properties of the perturbed underdamped dynamics (8). First, note that its generator is given by
| 30 |
decomposed into the perturbation and the unperturbed operator , which can be further split into the Hamiltonian part and the thermostat (Ornstein–Uhlenbeck) part , see [35, 36, 46].
Lemma 3
The infinitesimal generator (30) is hypoelliptic.
Proof
The proof consists of verifying the conditions of Hörmander’s Theorem for the generator (30) and can be found in [41].
An immediate corollary of this result and of Theorem 1 is that the perturbed underdamped Langevin process (8) is ergodic with unique invariant distribution given by (5).
As explained in Sect. 2.3, the exponential decay estimate (21) is crucial for our approach, as in particular it guarantees the well-posedness of the Poisson equation (22). From now on, we will therefore make the following assumption on the potential V, required to prove exponential decay in :
Assumption 1
Assume that the Hessian of V is bounded and that the target measure satisfies a Poincaré inequality, i.e. there exists a constant such that
| 31 |
Sufficient conditions on the potential so that Poincaré’s inequality holds, e.g. the Bakry-Emery criterion, are presented in [7].
Theorem 2
Under Assumption 1 there exist constants and such that the semigroup generated by satisfies exponential decay in as in (21).
Proof
The proof uses the machinery of hypocoercivity developed in [54] and can be found in [41]. Using the framework of [15], we conjecture that the assumption on the boundedness of the Hessian of V can be substantially weakened and that more quantitative decay estimates (in particular on the dependence on the parameters) can be obtained. This approach has recently been successfully applied to equilibrium and nonequilibrium Langevin dynamics, see [27, 53]. We leave this line of work for future study.
The Overdamped Limit
In this section we develop a connection between the perturbed underdamped Langevin dynamics (8) and the nonreversible overdamped Langevin dynamics (7). The analysis is very similar to the one presented in [35, Sect. 2.2.2] and we will be brief. For convenience in this section we will perform the analysis on the d-dimensional torus , i.e. we will assume . Consider the following scaling of (8):
| 32a |
| 32b |
valid in the small mass/small momentum regime. Equivalently, these modifications can be obtained by substituting and , so that in the limit the dynamics (32) describes the limit of large friction with rescaled time. It turns out that as , the dynamics (32) converges to the limiting SDE
| 33 |
The following proposition makes this statement precise.
Proposition 1
Denote by the solution to (32) with (deterministic) initial conditions and by the solution to (33) with initial condition For any , converges to in as , i.e.
Proof
The proof follows standard arguments (see for instance [46]) and can be found in [41]. By a more refined analysis, it is also possible to get information on the rate of convergence; see, e.g. [48, 49].
Remark 4
The overdamped limit (33) respects the invariant distribution, in the sense that it is ergodic with respect to .
The limiting SDE (33) is nonreversible due to the term and also because the matrix is in general neither symmetric nor antisymmetric. This result, together with the fact that nonreversible perturbations of overdamped Langevin dynamics of the form (7) are by now well-known to have improved performance properties, motivates further investigation of the dynamics (8).
Sampling from a Gaussian Distribution
In this section we study in detail the performance of the Langevin sampler (8) for Gaussian target densities, first considering the case of unit covariance. In particular, we study the optimal choice for the parameters in the sampler, the exponential decay rate and the asymptotic variance. We then extend our results to Gaussian target densities with arbitrary covariance matrices.
Unit Covariance: Small Perturbations
In our study of the dynamics given by (8) we first consider the simple case when , i.e. the task of sampling from a Gaussian measure with unit covariance. We will assume , and (so that the q and p dynamics are perturbed in the same way, albeit possibly with different strengths and ). Our first result concerns the asymptotic variance for linear and quadratic observables for small perturbations of equal strength (). For sufficiently strong damping, such a small perturbation always leads to an improvement in asymptotic variance under the nondegeneracy conditions and :
Theorem 3
Consider the dynamics
| 34 |
with and an observable of the form , where , and . If at least one of the conditions and is satisfied, then the asymptotic variance of the unperturbed sampler is at a local maximum independently of K and J (and , as long as ), i.e.
Proof
The dynamics (34) are of Ornstein–Uhlenbeck type, i.e. we can write
| 35 |
with , and denoting a standard Wiener process on . The generator of (35) is then given by
| 36 |
According to Lemma 2, the asymptotic variance can be expressed as
| 37 |
By calculations similar to those in [14, Sect. 4], is given by , where
| 38 |
using the notations
| 39 |
The asymptotic variance is then given by
| 40 |
Taking derivatives of (38) and solving the ensuing matrix equations, it is possible to obtain explicit expressions for , , and as detailed in [41]. We obtain
Notice that and that [J, K] is symmetric. It follows that with equality if and only if . Together with for and for , the claim follows.
Remark 5
As we will see in Sect. 4.3, Example 1, if and , the asymptotic variance is constant as a function of , i.e. the perturbation has no effect.
Numerical examples show that the conditions and are indeed necessary for the conclusions of Theorem 3 to hold (an explanation in terms of the spectrum of the generator will be provided in Sect. 4.2). In particular, an unfortunate choice of the perturbations will actually increase the asymptotic variance of the dynamics.
Let us illustrate this by plotting the asymptotic variance as a function of the perturbation strength (see Fig. 1), making the choices , ,
| 41 |
The asymptotic variance has been computed according to (38) and (40). Going beyond the results of this section, the graphs give an impression of the behaviour of the asymptotic variance for large values of the perturbation, discussed further in Sect. 4.3.
Fig. 1.
Asymptotic variance for linear and quadratic observables, depending on relative perturbation and friction strengths. a Equal perturbations: . b Approximately equal perturbations: . c Opposing perturbations: . d Equal perturbations: (sufficiently large friction ). e Equal perturbations: (small friction )
Figure 1a, b, c show the asymptotic variance associated with the quadratic observable . In accordance with Theorem 3, the asymptotic variance is at a local maximum at zero perturbation in the case (see Fig. 1a). For increasing perturbation strength, the graph shows monotone decay (this limiting behaviour will be explored analytically in Sect. 4.3). If the condition is only approximately satisfied (Fig. 1b), our numerical examples still exhibit decaying asymptotic variance in the neighbourhood of the critical point. In this case, however, the asymptotic variance diverges for growing values of the perturbation. If the perturbations are opposed (), it is possible for certain observables that the unperturbed dynamics represents a global minimum. Such a case is observed in Fig. 1c. In Fig. 1d, e the observable is considered. If the damping is sufficiently strong (), the unperturbed dynamics is at a local maximum of the asymptotic variance (Fig. 1d). Furthermore, the asymptotic variance approaches zero as the perturbation strength tends to infinity (for a theoretical explanation see again Sect. 4.3). The graph in Fig. 1e shows that the assumption of the friction not being too small cannot be dropped from Theorem 3. Even in this case, though, the example shows decay of the asymptotic variance for large values of the perturbation.
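The values plotted in Fig. 1 can be cross-checked for linear observables without solving the full Poisson equation. For an ergodic Ornstein–Uhlenbeck process dZ_t = BZ_t dt + √(2Q) dW_t the generator maps a linear function v·z to (Bᵀv)·z, so the mean-zero solution of the Poisson equation is again linear and σ² = −2 vᵀB⁻¹Σ_∞ v, where Σ_∞ solves the Lyapunov equation BΣ_∞ + Σ_∞Bᵀ + 2Q = 0. The sketch below applies this to one concrete reading of the perturbed dynamics (34) with V(q) = |q|²/2, namely dq = (p − αJq) dt, dp = (−q − αJp − γp) dt + √(2γ) dW; the sign conventions and the exact placement of the perturbation are assumptions on my part, chosen so that the standard Gaussian on phase space remains invariant, and are not guaranteed to match (34) symbol for symbol.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def drift_matrix(J, alpha, gamma):
    """Drift of the perturbed system in z = (q, p):  dq = (p - alpha*J q) dt,
    dp = (-q - alpha*J p - gamma*p) dt + sqrt(2*gamma) dW  (assumed form)."""
    d = J.shape[0]
    I = np.eye(d)
    return np.block([[-alpha * J, I], [-I, -gamma * I - alpha * J]])

def asymptotic_variance_linear(v, J, alpha, gamma):
    """sigma^2 for the observable f(q, p) = v . q of the process above."""
    d = len(v)
    B = drift_matrix(J, alpha, gamma)
    Qd = np.zeros((2 * d, 2 * d))
    Qd[d:, d:] = gamma * np.eye(d)                    # diffusion matrix (noise acts on p only)
    Sigma = solve_continuous_lyapunov(B, -2.0 * Qd)   # B Sigma + Sigma B^T = -2 Qd
    w = np.concatenate([v, np.zeros(d)])              # coefficients of f in z = (q, p)
    return -2.0 * w @ np.linalg.solve(B, Sigma @ w)   # sigma^2 = -2 w^T B^{-1} Sigma w

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
for alpha in (0.0, 1.0, 5.0, 25.0):
    print(alpha, asymptotic_variance_linear(np.array([1.0, 0.0]), J, alpha, gamma=2.0))
```

For this assumed form, α = 0 and γ = 2 give σ² = 2γ = 4, and the printed values decrease towards zero as α grows, consistent with the behaviour shown in Fig. 1d.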
Exponential Decay Rate
Let us denote by the optimal exponential decay rate in (21), i.e.
| 42 |
Note that is well-defined and positive by Theorem 2. We also define the spectral bound of the generator by
| 43 |
In [38] it is proven that the Ornstein–Uhlenbeck semigroup considered in this section is differentiable (see Proposition 2.1). In this case (see Corollary 3.12 of [17]), it is known that the exponential decay rate and the spectral bound coincide, i.e. , whereas in general only holds. In this section we will therefore analyse the spectral properties of the generator in the Gaussian case. In particular, this leads to some intuition of why choosing equal perturbations () is crucial for the performance of the sampler.
In [38] (see also [43]), it was proven that the spectrum of as in (36) in is given by
| 44 |
Note that only depends on the drift matrix B. In the case where , the spectrum of B can be computed explicitly.
Lemma 4
Assume . Then the spectrum of B is given by
| 45 |
Proof
We will compute and then use the identity . We have
where I is understood to denote the identity matrix of appropriate dimension. The above quantity is zero if and only if
Together with (4.2), the claim follows.
Using formula (45), in Fig. 2a we show a sketch of the spectrum for the case of equal perturbations, with the convenient choices and . Of course, the eigenvalue at 0 is associated to the invariant measure, since constants lie in the kernel of the generator. The arrows indicate the movement of the eigenvalues as the perturbation increases, in accordance with Lemma 4. Clearly, the spectral bound is not affected by the perturbation. Note that the eigenvalues on the real axis stay invariant under the perturbation. The subspace associated to those eigenvalues will turn out to be crucial for the characterisation of the limiting asymptotic variance as the perturbation strength tends to infinity (see Remark 10).
Fig. 2.
Effects of the perturbation on the spectra of and B. a in the case . The arrows indicate the movement of the spectrum as the perturbation strength increases. b in the case , i.e. the dynamics is only perturbed by . The arrows indicate the movement of the eigenvalues as increases
To illustrate the suboptimal properties of the perturbed dynamics when the perturbations are not equal, we plot the spectrum of the drift matrix in the case when the dynamics is only perturbed by the term (i.e. ) for , and
| 46 |
(see Fig. 2b). Note that the full spectrum can be inferred from (44). For zero perturbation we have that the spectrum only consists of the (degenerate) eigenvalue 1. For increasing perturbation strength, the figure shows that the degenerate eigenvalue splits up into four eigenvalues, two of which get closer to the imaginary axis as the perturbation increases, leading to a smaller spectral bound and therefore to a decrease in the speed of convergence to equilibrium. Figure 2a, b thus give an intuitive explanation of why the fine-tuning of the perturbation strengths is crucial.
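The qualitative content of Fig. 2 is easy to reproduce numerically: using the same assumed form of the drift matrix as in the sketch of Sect. 4.1 (with separate strengths for the position and momentum perturbations), one can track the spectral abscissa max Re σ(B) as the strength grows and observe that it is unchanged when the two perturbations are equal but deteriorates when only one of them is switched on. Again, the exact form of the matrix is an assumption, not the paper's equation.

```python
import numpy as np

def drift_matrix(J, a_q, a_p, gamma):
    """Perturbed drift matrix (assumed form) with separate strengths a_q, a_p
    for the position and momentum perturbations."""
    d = J.shape[0]
    I = np.eye(d)
    return np.block([[-a_q * J, I], [-I, -gamma * I - a_p * J]])

J, gamma = np.array([[0.0, 1.0], [-1.0, 0.0]]), 2.0
for a in (0.0, 0.5, 1.0, 2.0):
    equal = np.linalg.eigvals(drift_matrix(J, a, a, gamma)).real.max()
    single = np.linalg.eigvals(drift_matrix(J, a, 0.0, gamma)).real.max()
    print(f"strength {a}: spectral abscissa, equal perturbations = {equal:.3f}, "
          f"single perturbation = {single:.3f}")
```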
We close this subsection by providing autocorrelation plots (see Fig. 3) for the linear observable considered in Fig. 1d (with a friction coefficient of ). It is well-known that the asymptotic variance is given by the integrated autocorrelation function (see e.g. Proposition IV 1.3 in [3]),
\sigma_f^2 = 2 \int_0^{\infty} \mathbb{E}_\pi\big[\big(f(X_0) - \pi(f)\big)\big(f(X_t) - \pi(f)\big)\big]\, dt . | 47 |
Comparing Fig. 3a, b yields additional insight into the mechanics of the variance reduction: the increase of the imaginary part of the eigenvalues of (as indicated in Fig. 2a) leads to oscillations in the autocorrelation function and therefore to cancellations in (47). A similar effect has already been observed in [50] for the nonreversible overdamped Langevin dynamics (15).
Fig. 3.
Autocorrelation plots for the perturbed and unperturbed dynamics. a Unperturbed Langevin dynamics. b Perturbed Langevin dynamics
Unit Covariance: Large Perturbations
In the previous subsection we observed that for the particular perturbation and [see equation (34)] the perturbed Langevin dynamics demonstrated an improvement in performance for in a neighbourhood of 0, when the observable is linear or quadratic. Recall that this dynamics is ergodic with respect to a standard Gaussian measure on with marginal with respect to the q-variable. As before, we shall consider only observables that do not depend on p. Moreover, we assume without loss of generality that . For such observables we will write and consider the canonical embedding . We emphasize that consists of functions that only depend on q, whereas functions in may depend on both q and p.
In this subsection we will analyse the asymptotic variance for large values of the perturbation strength. The infinitesimal generator of (34) can be written as
| 48 |
where we have introduced the notation . In the sequel, the adjoint of an operator in will be denoted by . In the rest of this section we will make repeated use of the Hermite polynomials
| 49 |
invoking the notation . For define the Hilbert spaces
The following result (Theorem 4) holds for operators of the form (36) providing an orthogonal decomposition of into invariant subspaces. The drift and diffusion matrices B and Q are assumed to be such that is the generator of an ergodic stochastic process (see [2, Definition 2.1] for precise conditions).
Theorem 4
[2, Sect. 5]. The following holds:
- The space has a decomposition into mutually orthogonal subspaces:
For all , is invariant under as well as under the semigroup .
- The spectrum of has the following decomposition:
Remark 6
Note that by the ergodicity of the dynamics, consists of constant functions and so . Therefore, has the decomposition
Our first main result of this section is an expression for the asymptotic variance in terms of the unperturbed operator and the perturbation :
Proposition 2
Let (so in particular ). Then the associated asymptotic variance is given by
| 50 |
Remark 7
The proof of the preceding Proposition will show that is invertible on and that for all .
To prove Proposition 2 we will make use of the generator with reversed perturbation
and the momentum flip operator
| 51 |
Clearly, and . Further properties of , and the auxiliary operators and P are gathered in the following lemma:
Lemma 5
For all the following holds:
- The generator is symmetric in with respect to P:
- The perturbation is skew-adjoint in :
- The operators and commute:
- The perturbation satisfies
-
and commute, and the following relation holds:
| 52 |
- The operators , , , and P leave the Hermite spaces invariant.
Remark 8
The claim (c) in the above lemma is crucial for our approach, which itself rests heavily on the fact that the and perturbations match ().
Proof of Lemma 5
The statement (a) is well-known and its proof can be found in [35, Sect. 2.2.3.1] for instance. The claim (b) follows by noting that the flow vector field associated to the perturbation is divergence-free with respect to the invariant measure, i.e. . Therefore, it is the generator of a strongly continuous unitary semigroup and hence skew-adjoint by Stone’s Theorem. The claims (c), (d) and (e) follow by direct computations which can be found in [41]. To prove (f), first notice that , and are of the form (36) and therefore leave the spaces invariant by Theorem 4. It follows immediately that also leaves those spaces invariant. The fact that P leaves the spaces invariant follows directly by inspection of (49) and (51).
Now we proceed with the proof of Proposition 2:
Proof of Proposition 2
Since the potential V is quadratic, Assumption 1 clearly holds and thus Lemma 2 ensures that and are invertible on with
| 53 |
and analogously for . In particular, the asymptotic variance can be written as . Due to the representation (53) and Theorem 4, the inverses of and leave the Hermite spaces invariant. We will prove the claim from Proposition 2 under the assumption that , which includes the case . For the following calculations we will assume for fixed . Combining statement (f) with (a) and (e) of Lemma 5 (and noting that ) we see that
| 54 |
when restricted to . Therefore, the following calculations are justified:
where in the second line we have used the assumption and in the third line the properties , and Eq. (54). Since and commute on according to Lemma 5(e),(f) we can write
for the restrictions on , using . We also have since and commute. We thus arrive at the formula
| 55 |
Now since for all , it follows that the operator is bounded. We can therefore extend formula (55) to the whole of by continuity, using the fact that .
Applying Proposition 2 we can analyse the behaviour of in the limit of large perturbation strength . To this end, we introduce the orthogonal decomposition
| 56 |
where is understood as an unbounded operator acting on , obtained as the smallest closed extension of acting on . In particular, is a closed linear subspace of . Let denote the -orthogonal projection onto . We will write to stress the dependence of the asymptotic variance on the perturbation strength. The following result shows that for large perturbations, the limiting asymptotic variance is never larger than the asymptotic variance in the unperturbed case. Furthermore, the limit is given as the asymptotic variance of the projected observable for the unperturbed dynamics.
Theorem 5
Let (so in particular ). Then
Remark 9
Note that the fact that the limit exists and is finite is nontrivial. In particular, as Fig. 1b, c demonstrate, it is often the case that the asymptotic variance diverges if the condition is not satisfied.
Remark 10
The decomposition (56) can be interpreted in terms of the spectrum as follows: First observe that for functions f that only depend on q, is equivalent to . Let us denote by the part of that is not affected by the perturbation and by
the corresponding subspace. Then it is straightforward to see that . In Fig. 2a, this part of the spectrum has been highlighted by diamonds.
Proof of Theorem 5
Note that and leave the Hermite spaces invariant and their restrictions to those spaces commute (see Lemma 5, (b), (c) and (f)). Furthermore, as the Hermite spaces are finite-dimensional, those operators have discrete spectrum. As is nonnegative self-adjoint, there exists an orthogonal decomposition into eigenspaces of the operator , the decomposition being finer than in the sense that every is a subspace of some . Moreover, where is the eigenvalue of associated to the subspace . Consequently, formula (50) can be written as
| 57 |
where and . Let us assume now without loss of generality that , so in particular . Then clearly
Now notice that , showing the equality in the claim. It remains to show that . To see this, we write
where
Note that since we only consider observables that do not depend on p, and . Since commutes with , it follows that leaves both and invariant. Therefore, as the latter spaces are orthogonal to each other, it follows that , from which the result follows.
From Theorem 5 it follows that in the limit as , the asymptotic variance is not decreased by the perturbation if . In fact, this result also holds true non-asymptotically, i.e. observables in are not affected at all by the perturbation:
Lemma 6
Let . Then for all .
Proof
From it follows immediately that . Then the claim follows from the expression (57).
Example 1
Recall the case of observables of the form with , and from Sect. 4.1. If and , then as
From the preceding lemma it follows that for all showing that the assumption in Theorem 3 does not exclude nontrivial cases.
The following result shows that the dynamics (34) is particularly effective for antisymmetric observables (at least in the limit of large perturbations):
Proposition 3
Let satisfy and assume that . Furthermore, assume that the eigenvalues of J are rationally independent, i.e.
| 58 |
with and for all . Then .
Proof of Proposition 3
The claim would immediately follow from according to Theorem 5, but that does not seem to be so easy to prove directly. Instead, we again make use of the Hermite polynomials.
Recall from the proof of Proposition 2 that is invertible on and its inverse leaves the Hermite spaces invariant. Consequently, the asymptotic variance of an observable can be written as
| 59 |
where denotes the orthogonal projection onto . From (49) it is clear that is symmetric for even and antisymmetric for odd. Therefore, from f being antisymmetric it follows that . In view of (45), (c) and (58), the spectrum of can be written as
| 60 |
with appropriate real constants that depend on and , but not on . For odd, we have that
| 61 |
Indeed, assume to the contrary that the above expression is zero. Then it would follow that for all by rational independence of and would have to be even. From (60) and (61) it is clear that
where B(0, r) denotes the ball of radius r centered at the origin in . Consequently, the spectral radius of and hence itself converges to zero as . The result then follows from (59).
Remark 11
The idea of the preceding proof can be explained using Fig. 2a and Remark 10. The eigenvalues in the fixed spectrum (on the real axis, highlighted by diamonds) correspond to Hermite polynomials of even order. The independence condition on the eigenvalues of J prevents cancellations that would lead to fixed eigenvalues associated to Hermite polynomials of odd order. Therefore, antisymmetric observables are orthogonal to .
The following corollary gives a version of the converse of Proposition 3 and provides further intuition into the mechanics of the variance reduction achieved by the perturbation.
Corollary 1
Let and assume that . Then
for all , where B(0, r) denotes the ball centered at 0 with radius r.
Proof
According to Theorem 5, implies . We can write
| 62 |
and recall from the proof of Proposition 2 that and leave the Hermite spaces invariant. Therefore in , and in particular implies , which in turn shows that . From , it follows that
| 63 |
Hence, there exists a sequence such that in . Taking a subsequence if necessary, we can assume that the convergence is pointwise -almost everywhere and that the sequence is pointwise bounded by a function in . Since J is antisymmetric, we have that . Now Gauss’s theorem yields
where n denotes the outward normal to the sphere . This quantity is zero due to the orthogonality of Jq and n, and so the result follows from Lebesgue’s dominated convergence theorem.
Optimal Choices of J for Quadratic Observables
Assume is given by , with and (note that the constant term is chosen such that ). Our objective is to choose J in such a way that becomes as small as possible. To stress the dependence on the choice of J, we introduce the notation . Also, we denote the orthogonal projection onto by .
Lemma 7
(Zero variance limit for linear observables). Assume and . Then
Proof
According to Theorem 5, we have to show that , where is the -orthogonal projection onto . Let us thus use (63) and prove that . Indeed, since , by Fredholm’s alternative there exists such that . Now define by , leading to , so the result follows.
Lemma 8
(Zero variance limit for purely quadratic observables.) Let and consider the decomposition into the traceless part and the trace-part . For the corresponding decomposition of the observable
the following holds:
There exists an antisymmetric matrix J such that and there is an algorithmic way (see Algorithm 1) to compute an appropriate J in terms of K.
The trace-part is not affected by the perturbation, i.e. for all .
Proof
To prove the first claim, according to Theorem 5 it is sufficient to show that . Let us consider the function , with . It holds that . The task of finding an antisymmetric matrix J such that can therefore be accomplished by constructing an antisymmetric matrix J such that there exists a symmetric matrix A with the property . Given any traceless matrix there exists an orthogonal matrix such that has zero entries on the diagonal, and U can be obtained in an algorithmic manner (see for example [29] or [22, Chap. 2, Sect. 2, Problem 3]). Assume thus that such a matrix has been found and choose real numbers such that if . We now set and
| 64 |
Observe that since is symmetric, is antisymmetric. A short calculation shows that . We can thus define and to obtain . Therefore, the J constructed in this way indeed satisfies (4.4). For the second claim, note that , since due to the antisymmetry of J. The result then follows from Lemma 6.
We would like to stress that the perturbation J constructed in the previous lemma is far from unique due to the freedom of choice of U and in its proof. However, it is asymptotically optimal:
Corollary 2
In the setting of Lemma 8 the following holds:
Proof
The claim follows immediately since for arbitrary antisymmetric J as shown in (4.4), and therefore the contribution of the trace part to the asymptotic variance cannot be reduced by any choice of J according to Lemma 6.
As the proof of Lemma 8 is constructive, we obtain the following algorithm for determining optimal perturbations for quadratic observables:
Algorithm 1
Given , determine an optimal antisymmetric perturbation J as follows:
Set
Find such that has zero entries on the diagonal.
- Choose such that for and set
for and otherwise. Set .
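For completeness, here is one possible implementation of Algorithm 1 in code form. It follows the constructive proof of Lemma 8: reduce to the traceless part, rotate it to have zero diagonal (via the classical recursive argument), divide entrywise by differences of arbitrary distinct numbers, and rotate back. The distinct numbers λ_i, the bisection tolerance and the sign convention for the commutator are illustrative choices of this sketch, not prescriptions from the paper.

```python
import numpy as np

def zero_diagonal_rotation(K, tol=1e-12):
    """Orthogonal U such that U @ K @ U.T has zero diagonal (K symmetric, traceless).
    Classical recursion: find a unit vector v with v^T K v = 0, make it the first
    basis vector, then recurse on the remaining (still traceless) block."""
    d = K.shape[0]
    if d == 1:
        return np.eye(1)
    diag = np.diag(K)
    i, j = int(np.argmax(diag)), int(np.argmin(diag))
    if diag[i] < tol:                              # diagonal already (numerically) zero
        v = np.eye(d)[:, 0]
    else:                                          # q(t) changes sign on [0, pi/2]; bisect for a root
        e_i, e_j = np.eye(d)[:, i], np.eye(d)[:, j]
        q = lambda t: (np.cos(t) * e_i + np.sin(t) * e_j) @ K @ (np.cos(t) * e_i + np.sin(t) * e_j)
        lo, hi = 0.0, np.pi / 2
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if q(lo) * q(mid) > 0 else (lo, mid)
        v = np.cos(hi) * e_i + np.sin(hi) * e_j
    Q, _ = np.linalg.qr(np.column_stack([v, np.eye(d)]))     # orthonormal basis, first column ~ v
    K1 = Q.T @ K @ Q                                         # (0, 0) entry is ~ 0
    U_sub = zero_diagonal_rotation(K1[1:, 1:], tol)
    U_low = np.block([[np.eye(1), np.zeros((1, d - 1))], [np.zeros((d - 1, 1)), U_sub]])
    return U_low @ Q.T

def optimal_perturbation(K):
    """Algorithm 1 (sketch): antisymmetric J adapted to the quadratic observable q^T K q."""
    d = K.shape[0]
    K_tl = K - (np.trace(K) / d) * np.eye(d)       # traceless part (the trace part cannot be helped)
    U = zero_diagonal_rotation(K_tl)
    B = U @ K_tl @ U.T                             # symmetric with zero diagonal
    lam = np.arange(1.0, d + 1.0)                  # any pairwise distinct real numbers
    diff = lam[:, None] - lam[None, :]
    A_hat = np.where(diff != 0, B / np.where(diff == 0, 1.0, diff), 0.0)   # antisymmetric
    J = U.T @ A_hat @ U
    A = U.T @ np.diag(lam) @ U
    assert np.allclose(A @ J - J @ A, K_tl)        # K_tl is the commutator of symmetric A and J
    return J

K = np.array([[2.0, 0.3, 0.0], [0.3, 1.0, -0.5], [0.0, -0.5, 0.5]])
print(optimal_perturbation(K))
```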
Remark 12
In [14], the authors consider the task of finding optimal perturbations J for the nonreversible overdamped Langevin dynamics given in (15). In the Gaussian case this optimization problem turns out to be equivalent to the one considered in this section. Indeed, equation (39) of [14] can be rephrased as . Therefore, Algorithm 1 and its generalization Algorithm 2 (described in Sect. 4.5) can be used without modification to find optimal perturbations of overdamped Langevin dynamics.
Gaussians with Arbitrary Covariance and Preconditioning
In this section we extend the results of the preceding sections to the case when the target measure is given by a Gaussian with arbitrary covariance, i.e. with symmetric and positive definite. The dynamics (8) then takes the form
| 65 |
The key observation is now that the choices and together with the transformation and lead to the dynamics
| 66 |
which is of the form (34) if and obey the condition . Clearly the dynamics (66) is ergodic with respect to a Gaussian measure with unit covariance, in the following denoted by . The connection between the asymptotic variances associated to (65) and (66) is as follows:
For an observable we can write
where . Therefore, the asymptotic variances satisfy where denotes the asymptotic variance of the process . Because of this, the results from the previous sections generalise to (65), subject to the condition that the choices , and are made. We formulate our results in this general setting as corollaries:
Corollary 3
Consider the dynamics
| 67 |
with . Assume that , with and . Let be an observable of the form
| 68 |
with , and . If at least one of the conditions and is satisfied, then the asymptotic variance is at a local maximum for the unperturbed sampler, i.e.
Proof
Note that
is again of the form (68) (where in the last equality, and have been defined). From (66), (4.5) and Theorem 3 the claim follows if at least one of the conditions and is satisfied. The first of those can easily be seen to be equivalent to , which is equivalent to since S is nondegenerate. The second condition is equivalent to , which is equivalent to , again by nondegeneracy of S.
Corollary 4
Assume the setting from the previous corollary and denote by the orthogonal projection onto . For it holds that
Proof
Theorem 5 implies for the transformed system (66). Here is the transformed observable and denotes -orthogonal projection onto . According to (4.5), it is sufficient to show that . This however follows directly from the fact that the linear transformation maps bijectively onto .
Let us also reformulate Algorithm 1 for the case of a Gaussian with arbitrary covariance.
Algorithm 2
Given with and (assuming S is nondegenerate), determine optimal perturbations and as follows:
Set and .
Find such that has zero entries on the diagonal.
- Choose , such that for and set
Set .
Put and .
Finally, we obtain the following optimality result from Lemma 7 and Corollary 2.
Corollary 5
Let and assume that . Then
where , . Optimal choices for and can be obtained using Algorithm 2.
Remark 13
Since in Sect. 4.1 we analysed the case where and are proportional, we are not able to drop the restriction from the above optimality result. Analysis of completely arbitrary perturbations will be the subject of future work.
Remark 14
The choices and have been introduced to make the perturbations considered in this article lead to samplers that perform well in terms of reducing the asymptotic variance. However, adjusting the mass and friction matrices according to the target covariance in this way (i.e. and ) is a popular way of preconditioning the dynamics, see for instance [18] and, in particular mass-tensor molecular dynamics [6]. Here we will present an argument why such a preconditioning is indeed beneficial in terms of the convergence rate of the dynamics. Let us first assume that S is diagonal, i.e. and that and are chosen diagonally as well. Then (65) decouples into one-dimensional SDEs of the following form:
| 69 |
Let us write those Ornstein–Uhlenbeck processes as
| 70 |
As in Sect. 4.2, the rate of the exponential decay of (70) is equal to . A short calculation shows that the eigenvalues of are given by
Therefore, the rate of exponential decay is maximal when
| 71 |
in which case it is given by
Naturally, it is reasonable to choose in such a way that the exponential rate is the same for all i, leading to the restriction with . Choosing c small will result in fast convergence to equilibrium, but also make the dynamics (69) quite stiff, requiring a very small timestep in a discretisation scheme. The choice of c will therefore need to strike a balance between those two competing effects. The constraint (71) then implies . By a coordinate transformation, the preceding argument also applies if S, M and are diagonal in the same basis, and of course M and can always be chosen that way. Numerical experiments show that it is possible to increase the rate of convergence to equilibrium even further by choosing M and nondiagonally with respect to S (although only by a small margin). A clearer understanding of this is a topic of further investigation.
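For concreteness, the scalar computation behind this remark reads as follows (a one-dimensional sketch with mass m, friction γ and target precision s; the symbols are mine, not the paper's). The drift matrix of dq = (p/m) dt, dp = (−s q − (γ/m) p) dt + √(2γ) dW is

B = \begin{pmatrix} 0 & 1/m \\ -s & -\gamma/m \end{pmatrix},
\qquad
\lambda_{\pm} = \tfrac{1}{2}\Big(-\gamma/m \pm \sqrt{(\gamma/m)^2 - 4s/m}\Big),

so the exponential rate is maximised at critical damping, (γ/m)² = 4s/m, i.e. γ² = 4sm, where both eigenvalues equal −√(s/m). Requiring the same rate √(s_i/m_i) in every coordinate then forces m_i ∝ s_i, with the friction fixed by the critical damping relation.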
Numerical Experiments: Diffusion Bridge Sampling
Numerical Scheme
In this section we introduce a splitting scheme for simulating the perturbed underdamped Langevin dynamics given by Eq. (8). In the unperturbed case, i.e. when , the BAOAB scheme (see [33] and references therein) has proven to be efficient for computing long time ergodic averages with respect to q-dependent observables. Motivated by this, we propose the following perturbed scheme, which introduces additional Runge–Kutta integration steps:
| 72a |
| 72b |
| 72c |
| 72d |
| 72e |
| 72f |
| 72g |
where refers to fourth order Runge-Kutta integration of the ODE
| 73 |
up until time . We remark that the p-perturbation is linear and can therefore be included in the O-part without much computational overhead. We emphasize the fact that many different splitting schemes could be investigated: although the BAOAB scheme works well for unperturbed Langevin dynamics, it is not clear whether this remains true for the perturbed dynamics. Moreover, the perturbations introduced by and can be added in various places. Other discretisation schemes for the ODE (73) could be useful as well; for instance, one could use a symplectic integrator, exploiting the Hamiltonian structure of (73). However, since the Hamiltonian of (73), given by V, is not separable in general, such a symplectic integrator would have to be implicit. Note that (72c) and (72e) could be merged since (72e) commutes with (72d). In this paper, we content ourselves with the above scheme for our numerical experiments. Investigation of optimal numerical schemes for perturbed Langevin dynamics is an interesting problem for further research.
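To make the structure of such a splitting concrete, the following sketch implements one BAOAB-type step for a perturbed dynamics of the assumed form dq = (M⁻¹p − μ J₁∇V(q)) dt, dp = (−∇V(q) − ν J₂M⁻¹p − ΓM⁻¹p) dt + √(2Γ) dW with M = I and Γ = γI; the ordering of the sub-steps, the way the RK4 flow of the position perturbation is interleaved, and all parameter names are illustrative and do not reproduce (72) literally.

```python
import numpy as np
from scipy.linalg import expm

def rk4(f, x, h):
    """One classical fourth-order Runge-Kutta step for dx/ds = f(x)."""
    k1 = f(x); k2 = f(x + 0.5 * h * k1); k3 = f(x + 0.5 * h * k2); k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def baoab_perturbed_step(q, p, grad_V, dt, gamma, mu, nu, J1, J2, rng):
    """One BAOAB-type step for the assumed perturbed dynamics (M = I, Gamma = gamma*I).
    The nonlinear q-perturbation is handled by RK4 sub-steps; the linear p-perturbation
    is absorbed into the exactly solvable O-part."""
    d = len(q)
    p = p - 0.5 * dt * grad_V(q)                                           # B: half kick
    q = rk4(lambda y: -mu * (J1 @ grad_V(y)), q + 0.5 * dt * p, 0.5 * dt)  # A + q-perturbation
    # O-part: dp = -(gamma*I + nu*J2) p dt + sqrt(2*gamma) dW, solved exactly over dt.
    # The drift matrix is normal (J2 is skew), so the noise covariance is (1 - e^{-2*gamma*dt}) I.
    # In practice the matrix exponential would be precomputed once outside the time loop.
    p = expm(-(gamma * np.eye(d) + nu * J2) * dt) @ p \
        + np.sqrt(1.0 - np.exp(-2.0 * gamma * dt)) * rng.standard_normal(d)
    q = rk4(lambda y: -mu * (J1 @ grad_V(y)), q + 0.5 * dt * p, 0.5 * dt)  # A + q-perturbation
    p = p - 0.5 * dt * grad_V(q)                                           # B: half kick
    return q, p
```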
Remark 15
The aforementioned schemes lead to an error in the approximation of , since the invariant measure is not preserved exactly by the numerical scheme. In practice, the BAOAB scheme can therefore be accompanied by an accept-reject Metropolis step as in [40], leading to an unbiased estimate of , albeit with an inflated variance. In this case, after every rejection the momentum variable has to be flipped () in order to keep the correct invariant measure. We note here that our perturbed scheme can be 'Metropolized' in a similar way by 'flipping' the matrices and after every rejection ( and ), and by using an appropriate (volume-preserving and time-reversible) integrator for the dynamics given by (73). Implementations of this idea are the subject of ongoing work. See [47] for a similar approach to nonreversible overdamped Langevin dynamics.
Diffusion Bridge Sampling
To numerically test our analytical results, we will apply the dynamics (8) to sample a measure on path space associated to a diffusion bridge. Specifically, consider the SDE
with , and the potential obeying adequate growth and smoothness conditions (see [24], Sect. 5 for precise statements). The law of the solution to this SDE conditioned on the events and is a probability measure on which poses a challenging and important sampling problem, especially if U is multimodal. This setting has been used as a test case for sampling probability measures in high dimensions (see for example [9] and [45]). For a more detailed introduction (including applications) see [11] and for a rigorous theoretical treatment the papers [11, 24–26].
In the case , it can be shown that the law of the conditioned process is given by a Gaussian measure with mean zero and precision operator on the Sobolev space equipped with appropriate boundary conditions. The general case can then be understood as a perturbation thereof: The measure is absolutely continuous with respect to with Radon-Nikodym derivative
| 74 |
where
We will make the choice , which is possible without loss of generality as explained in [10, Remark 3.1], leading to Dirichlet boundary conditions on for the precision operator . Furthermore, we choose and discretise the ensuing s-interval [0, 1] according to
in an equidistant way with stepsize . Functions on this grid are determined by the values , recalling that by the Dirichlet boundary conditions. We discretise the functional as
such that its gradient is given by
We denote by the discretised Dirichlet Laplacian on [0, 1] with stepsize . Following (74), the discretised target measure has the form
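As a small illustration of the discretisation just described, the sketch below assembles the finite-difference Dirichlet Laplacian on an equidistant grid and the gradient of a discretised functional assumed to have the form Phi(u) = ds * sum_i G(u_i). The function G_prime and the overall scaling are placeholders for the quantities defined around (74) and are not taken verbatim from the paper.

```python
import numpy as np

def dirichlet_laplacian(n_interior, ds):
    """Finite-difference Laplacian on [0, 1] with stepsize ds and homogeneous
    Dirichlet boundary conditions, acting on the n_interior interior values."""
    main = -2.0 * np.ones(n_interior)
    off = np.ones(n_interior - 1)
    return (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / ds ** 2

def grad_discretised_functional(u, ds, G_prime):
    """Gradient of the (assumed) discretised functional Phi(u) = ds * sum_i G(u_i);
    G_prime is the derivative of the path-space potential appearing in (74),
    whose exact form is not reproduced here."""
    return ds * G_prime(u)
```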
In the following we will consider the case with potential given by and set . To test our algorithm we adjust the parameters M, , and according to the recommended choice in the Gaussian case, (27), where we take as the precision operator of the Gaussian target. We will consider the linear observable with and the quadratic observable . In a first experiment we adjust the perturbations and to the observable according to Algorithm 2. The dynamics (8) is integrated using the splitting scheme introduced in Sect. 5.1 with a stepsize of over the time interval [0, T] with . Furthermore, we choose initial conditions , and introduce a burn-in time , i.e. we take the estimator to be
We compute the variance of the above estimator from realisations and compare the results for different choices of the friction coefficient and of the perturbation strength .
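The variance comparison itself can be organised along the lines of the following sketch, in which run_trajectory stands for a hypothetical routine that runs the splitting scheme of Sect. 5.1 for one realisation and returns the q-trajectory; the stepsize, final time, burn-in time and number of realisations quoted above would be passed in as parameters.

```python
import numpy as np

def long_time_average(observables, dt, t_burn):
    """Burn-in-corrected long-time average: discard the first t_burn units of
    simulation time and average the remaining observable values."""
    n_burn = int(t_burn / dt)
    return np.mean(observables[n_burn:])

def estimator_std(run_trajectory, f, n_realisations, dt, T, t_burn, seed=0):
    """Empirical standard deviation of the long-time-average estimator over
    independent realisations (run_trajectory is a hypothetical interface)."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_realisations):
        traj = run_trajectory(rng, dt, T)           # q-samples, shape (n_steps, dim)
        values = np.array([f(q) for q in traj])     # observable along the trajectory
        estimates.append(long_time_average(values, dt, t_burn))
    return np.std(estimates)
```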
The numerical experiments show that the perturbed dynamics generally outperform the unperturbed dynamics, independently of the choice of and , both for linear and quadratic observables. One notable exception is the behaviour of the linear observable for small friction (see Fig. 4a), where the asymptotic variance initially increases for small perturbation strengths . This does not contradict our analytical results, since the small-perturbation results, Theorem 3 and Corollary 3, are only valid if . We remark here that the condition , while necessary for the theoretical results of Sect. 4.1, is not an advisable choice in practice (at least in this experiment), since Figs. 4b and 5b clearly indicate that the optimal friction is around . Interestingly, the problem of choosing a suitable value for the friction coefficient is mitigated by the introduction of the perturbation: while the performance of the unperturbed sampler depends quite sensitively on , the asymptotic variance of the perturbed dynamics is considerably more stable with respect to variations of . A somewhat surprising phenomenon is that the standard deviation associated with the linear observable decays in the range for the unperturbed sampler (see Fig. 4b). We confirmed this behaviour by further numerical experiments and remark that, as the target measure is fairly complicated, convexity of the function should not be expected.
Fig. 4.
Standard deviation of for a linear observable as a function of friction and perturbation strength
Fig. 5.
Standard deviation of for a quadratic observable as a function of friction and perturbation strength
In the regime of growing values of , the experiments confirm the results from Sect. 4.3, i.e. the asymptotic variance approaches a limit that is smaller than the asymptotic variance of the unperturbed dynamics.
As a final remark, we report our finding that the performance of the sampler for the linear observable is qualitatively independent of the choice of [as long as is adjusted according to (27)]. This result is in line with Proposition 3, which predicts good properties of the sampler for antisymmetric observables. In contrast, a judicious choice of is critical for quadratic observables. In particular, applying Algorithm 2 significantly improves the performance of the perturbed sampler in comparison to choosing arbitrarily.
Outlook and Future Work
A new family of Langevin samplers was introduced in this paper. These new SDE samplers consist of perturbations of the underdamped Langevin dynamics (which is known to be ergodic with respect to the canonical measure), in which auxiliary drift terms are added to the equations for both the position and the momentum in such a way that the perturbed dynamics remains ergodic with respect to the same (canonical) distribution. These new Langevin samplers were studied in detail for Gaussian target distributions, where it was shown, using tools from the spectral theory of differential operators, that an appropriate choice of the perturbations in the equations for the position and momentum can improve the performance of the Langevin sampler, at least in terms of reducing the asymptotic variance. The performance of the perturbed Langevin sampler for non-Gaussian target densities was tested numerically on the problem of diffusion bridge sampling.
The work presented in this paper can be improved and extended in several directions. First, a rigorous analysis of the new family of Langevin samplers for non-Gaussian target densities is needed; the analytical tools developed in [14] can be used as a starting point. Furthermore, the study of the actual computational cost and of its minimization through an appropriate choice of the numerical scheme and of the perturbations in position and momentum would be of interest to practitioners. In addition, the analysis of our proposed samplers can be facilitated by using tools from symplectic and differential geometry. Finally, combining the new Langevin samplers with existing variance reduction techniques, such as zero-variance MCMC and preconditioning/Riemannian manifold MCMC, can lead to sampling schemes of interest to practitioners, in particular in molecular dynamics simulations. All these topics are currently under investigation.
Acknowledgements
AD was supported by the EPSRC under Grant No. EP/J009636/1. NN is supported by the EPSRC through a Roth Departmental Scholarship. GP is partially supported by the EPSRC under Grants No. EP/J009636/1, EP/L024926/1, EP/L020564/1 and EP/L025159/1. Part of the work reported in this paper was done while NN and GP were visiting the Institut Henri Poincaré during the Trimester Program “Stochastic Dynamics Out of Equilibrium”. The hospitality of the Institute and of the organizers of the program is gratefully acknowledged. We thank the referees for their useful comments and suggestions that have led to various improvements in the presentation of this paper.
Footnotes
In fact, using the results from [8], we could consider observables in . However, we will not extend this point further in this paper.
Notice that is understood to be a complex number for .
Indeed, the fact that is equivalent to is easy to check if f is an eigenvector of (recall that f is then an eigenvector of as well, using Lemma 5(c)). The claim then follows by extending linearly.
Contributor Information
A. B. Duncan, Email: Andrew.Duncan@sussex.ac.uk
N. Nüsken, Email: n.nusken14@imperial.ac.uk
G. A. Pavliotis, Email: g.pavliotis@imperial.ac.uk
References
- 1. Achleitner F, Arnold A, Stürzer D. Large-time behavior in non-symmetric Fokker-Planck equations. Riv. Math. Univ. Parma (N.S.) 2015;6(1):1–68.
- 2. Arnold, A., Erb, J.: Sharp entropy decay for hypocoercive and non-symmetric Fokker-Planck equations with linear drift. arXiv:1409.5425v2 (2014)
- 3. Asmussen S, Glynn PW. Stochastic Simulation: Algorithms and Analysis. New York: Springer; 2007.
- 4. Alrachid, H., Mones, L., Ortner, C.: Some remarks on preconditioning molecular dynamics. arXiv:1612.05435 (2016)
- 5. Bass RF. Diffusions and Elliptic Operators. Berlin: Springer; 1998.
- 6. Bennett CH. Mass tensor molecular dynamics. J. Comput. Phys. 1975;19(3):267–279. doi: 10.1016/0021-9991(75)90077-7.
- 7. Bakry D, Gentil I, Ledoux M. Analysis and Geometry of Markov Diffusion Operators. Berlin: Springer; 2013.
- 8. Bhattacharya RN. On the functional central limit theorem and the law of the iterated logarithm for Markov processes. Z. Wahrsch. Verw. Gebiete. 1982;60(2):185–201. doi: 10.1007/BF00531822.
- 9. Beskos A, Pinski FJ, Sanz-Serna JM, Stuart AM. Hybrid Monte Carlo on Hilbert spaces. Stoch. Process. Appl. 2011;121(10):2201–2230. doi: 10.1016/j.spa.2011.06.003.
- 10. Beskos A, Roberts G, Stuart A, Voss J. MCMC methods for diffusion bridges. Stoch. Dyn. 2008;8(3):319–350. doi: 10.1142/S0219493708002378.
- 11. Beskos, A., Stuart, A.: MCMC methods for sampling function space. In: ICIAM 07—6th International Congress on Industrial and Applied Mathematics, pp. 337–364. European Mathematical Society, Zürich (2009)
- 12. Ceriotti M, Bussi G, Parrinello M. Langevin equation with colored noise for constant-temperature molecular dynamics simulations. Phys. Rev. Lett. 2009;102(2):020601. doi: 10.1103/PhysRevLett.102.020601.
- 13. Cattiaux P, Chafaï D, Guillin A. Central limit theorems for additive functionals of ergodic Markov diffusions processes. ALEA. 2012;9(2):337–382.
- 14. Duncan AB, Lelièvre T, Pavliotis GA. Variance reduction using nonreversible Langevin samplers. J. Stat. Phys. 2016;163(3):457–491. doi: 10.1007/s10955-016-1491-2.
- 15. Dolbeault J, Mouhot C, Schmeiser C. Hypocoercivity for linear kinetic equations conserving mass. Trans. Am. Math. Soc. 2015;367(6):3807–3828. doi: 10.1090/S0002-9947-2015-06012-7.
- 16. Eyink GL, Lebowitz JL, Spohn H. Hydrodynamics and fluctuations outside of local equilibrium: driven diffusive systems. J. Stat. Phys. 1996;83(3–4):385–472. doi: 10.1007/BF02183738.
- 17. Engel, K.-J., Nagel, R.: One-Parameter Semigroups for Linear Evolution Equations. Graduate Texts in Mathematics, vol. 194. Springer, New York (2000). With contributions by S. Brendle, M. Campiti, T. Hahn, G. Metafune, G. Nickel, D. Pallara, C. Perazzoli, A. Rhandi, S. Romanelli and R. Schnaubelt
- 18. Girolami M, Calderhead B. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Stat. Soc. Ser. B Stat. Methodol. 2011;73(2):123–214. doi: 10.1111/j.1467-9868.2010.00765.x.
- 19. Gelman A, Carlin JB, Stern HS, Dunson DB, Vehtari A, Rubin DB. Bayesian Data Analysis. 3rd ed. Boca Raton, FL: CRC Press; 2014.
- 20. Hwang C-R, Hwang-Ma S, Sheu SJ. Accelerating Gaussian diffusions. Ann. Appl. Probab. 1993;3(3):897–913. doi: 10.1214/aoap/1177005371.
- 21. Hwang C-R, Hwang-Ma S-Y, Sheu S-J. Accelerating diffusions. Ann. Appl. Probab. 2005;15(2):1433–1444. doi: 10.1214/105051605000000025.
- 22. Horn RA, Johnson CR. Matrix Analysis. 2nd ed. Cambridge: Cambridge University Press; 2013.
- 23. Hwang C-R, Normand R, Wu S-J. Variance reduction for diffusions. Stoch. Process. Appl. 2015;125(9):3522–3540. doi: 10.1016/j.spa.2015.03.006.
- 24. Hairer M, Stuart AM, Voss J. Analysis of SPDEs arising in path sampling. II. The nonlinear case. Ann. Appl. Probab. 2007;17(5–6):1657–1706. doi: 10.1214/07-AAP441.
- 25. Hairer, M., Stuart, A., Voss, J.: Sampling conditioned diffusions. In: Trends in Stochastic Analysis. London Mathematical Society Lecture Note Series, vol. 353, pp. 159–185. Cambridge University Press, Cambridge (2009)
- 26. Hairer M, Stuart AM, Voss J, Wiberg P. Analysis of SPDEs arising in path sampling. I. The Gaussian case. Commun. Math. Sci. 2005;3(4):587–603. doi: 10.4310/CMS.2005.v3.n4.a8.
- 27. Iacobucci, A., Olla, S., Stoltz, G.: Convergence rates for nonequilibrium Langevin dynamics. arXiv:1702.03685 (2017)
- 28. Joulin A, Ollivier Y. Curvature, concentration and error estimates for Markov chain Monte Carlo. Ann. Probab. 2010;38(6):2418–2442. doi: 10.1214/10-AOP541.
- 29. Kazakia JY. Orthogonal transformation of a trace free symmetric matrix into one with zero diagonal elements. Int. J. Eng. Sci. 1988;26(8):903–906. doi: 10.1016/0020-7225(88)90041-9.
- 30. Kliemann W. Recurrence and invariant measures for degenerate diffusions. Ann. Probab. 1987;15:690–707. doi: 10.1214/aop/1176992166.
- 31. Komorowski, T., Landim, C., Olla, S.: Fluctuations in Markov Processes: Time Symmetry and Martingale Approximation. Grundlehren der Mathematischen Wissenschaften, vol. 345. Springer, Heidelberg (2012)
- 32. Liu JS. Monte Carlo Strategies in Scientific Computing. Berlin: Springer; 2008.
- 33. Leimkuhler, B., Matthews, C.: Molecular Dynamics: With Deterministic and Stochastic Numerical Methods. Interdisciplinary Applied Mathematics, vol. 39. Springer, Berlin (2015)
- 34. Lelièvre T, Nier F, Pavliotis GA. Optimal non-reversible linear drift for the convergence to equilibrium of a diffusion. J. Stat. Phys. 2013;152(2):237–274. doi: 10.1007/s10955-013-0769-x.
- 35. Lelièvre, T., Rousset, M., Stoltz, G.: Free Energy Computations: A Mathematical Perspective. Imperial College Press, London (2010)
- 36. Lelièvre T, Stoltz G. Partial differential equations and stochastic methods in molecular dynamics. Acta Numer. 2016;25:681–880. doi: 10.1017/S0962492916000039.
- 37. Ma, Y.-A., Chen, T., Fox, E.: A complete recipe for stochastic gradient MCMC. In: Advances in Neural Information Processing Systems, pp. 2899–2907 (2015)
- 38. Metafune G, Pallara D, Priola E. Spectrum of Ornstein-Uhlenbeck operators in Lp spaces with respect to invariant measures. J. Funct. Anal. 2002;196(1):40–60. doi: 10.1006/jfan.2002.3978.
- 39. Markowich PA, Villani C. On the trend to equilibrium for the Fokker-Planck equation: an interplay between physics and functional analysis. Mat. Contemp. 2000;19:1–29.
- 40. Matthews, C., Weare, J., Leimkuhler, B.: Ensemble preconditioning for Markov chain Monte Carlo simulation. arXiv:1607.03954 (2016)
- 41. Nüsken, N.: Construction of optimal samplers (in preparation). PhD thesis, Imperial College London (2018)
- 42. Ottobre M, Pavliotis GA. Asymptotic analysis for the generalized Langevin equation. Nonlinearity. 2011;24(5):1629. doi: 10.1088/0951-7715/24/5/013.
- 43. Ottobre M, Pavliotis GA, Pravda-Starov K. Exponential return to equilibrium for hypoelliptic quadratic systems. J. Funct. Anal. 2012;262(9):4000–4039. doi: 10.1016/j.jfa.2012.02.008.
- 44. Ottobre M, Pavliotis GA, Pravda-Starov K. Some remarks on degenerate hypoelliptic Ornstein-Uhlenbeck operators. J. Math. Anal. Appl. 2015;429(2):676–712. doi: 10.1016/j.jmaa.2015.04.019.
- 45. Ottobre M, Pillai NS, Pinski FJ, Stuart AM. A function space HMC algorithm with second order Langevin diffusion limit. Bernoulli. 2016;22(1):60–106. doi: 10.3150/14-BEJ621.
- 46. Pavliotis GA. Stochastic Processes and Applications: Diffusion Processes, the Fokker-Planck and Langevin Equations. Berlin: Springer; 2014.
- 47. Poncet, R.: Generalized and hybrid Metropolis-Hastings overdamped Langevin algorithms. arXiv:1701.05833 (2017)
- 48. Pavliotis GA, Stuart AM. White noise limits for inertial particles in a random field. Multiscale Model. Simul. 2003;1(4):527–533. doi: 10.1137/S1540345903421076.
- 49. Pavliotis GA, Stuart AM. Analysis of white noise limits for stochastic systems with two fast relaxation times. Multiscale Model. Simul. 2005;4(1):1–35. doi: 10.1137/040610507.
- 50. Rey-Bellet L, Spiliopoulos K. Irreversible Langevin samplers and variance reduction: a large deviations approach. Nonlinearity. 2015;28(7):2081–2103. doi: 10.1088/0951-7715/28/7/2081.
- 51. Rey-Bellet, L., Spiliopoulos, K.: Variance reduction for irreversible Langevin samplers and diffusion on graphs. Electron. Commun. Probab. 20, paper no. 15, 16 pp. (2015)
- 52. Robert C, Casella G. Monte Carlo Statistical Methods. Berlin: Springer; 2013.
- 53. Roussel, J., Stoltz, G.: Spectral methods for Langevin dynamics and associated error estimates. arXiv:1702.04718 (2017)
- 54. Villani, C.: Hypocoercivity. Mem. Amer. Math. Soc., no. 949–951. American Mathematical Society, Providence (2009)
- 55. Wu S-J, Hwang C-R, Chu MT. Attaining the optimal Gaussian diffusion acceleration. J. Stat. Phys. 2014;155(3):571–590. doi: 10.1007/s10955-014-0963-5.





