Statistics and Computing. 2022 Sep 19;32(5):78. doi: 10.1007/s11222-022-10147-6

Geometry-informed irreversible perturbations for accelerated convergence of Langevin dynamics

Benjamin J Zhang 1, Youssef M Marzouk 1, Konstantinos Spiliopoulos 2
PMCID: PMC9485103  PMID: 36156938

Abstract

We introduce a novel geometry-informed irreversible perturbation that accelerates convergence of the Langevin algorithm for Bayesian computation. It is well documented that there exist perturbations to the Langevin dynamics that preserve its invariant measure while accelerating its convergence. Irreversible perturbations and reversible perturbations (such as Riemannian manifold Langevin dynamics (RMLD)) have separately been shown to improve the performance of Langevin samplers. We consider these two perturbations simultaneously by presenting a novel form of irreversible perturbation for RMLD that is informed by the underlying geometry. Through numerical examples, we show that this new irreversible perturbation can improve estimation performance over irreversible perturbations that do not take the geometry into account. Moreover, we demonstrate that irreversible perturbations generally can be implemented in conjunction with the stochastic gradient version of the Langevin algorithm. Lastly, while continuous-time irreversible perturbations cannot impair the performance of a Langevin estimator, the situation can sometimes be more complicated when discretization is considered. To this end, we describe a discrete-time example in which irreversibility increases both the bias and variance of the resulting estimator.

Keywords: Monte Carlo sampling, Stochastic gradient Langevin dynamics, Riemannian manifold Langevin dynamics, Geometry-informed irreversibility, Bayesian computation

Introduction

Bayesian inference often requires estimating expectations with respect to non-Gaussian distributions. To solve this problem, particularly in high dimensions, one frequently resorts to sampling methods. A commonly used class of sampling methods is based on the Langevin dynamics (LD), which uses the gradient of the log-target density to specify a stochastic differential equation (SDE) whose invariant distribution is the target (e.g., posterior) distribution of interest. Long term averages over a single trajectory of the SDE can then be used to estimate expectations of interest by appealing to the ergodicity of the stochastic process. Other LD-based approaches that reduce the mean squared error (MSE) of such estimators include the Metropolis-adjusted Langevin algorithm (MALA) (Roberts and Tweedie 1996; Girolami and Calderhead 2011), the stochastic gradient Langevin dynamics (SGLD) (Welling and Teh 2011), and their variants.

It is also known that certain perturbations to the LD can accelerate convergence of the dynamics to the stationary distribution. In Rey-Bellet and Spiliopoulos (2015) the authors show that suitable reversible and irreversible perturbations to diffusion processes can increase the spectral gap of the generator, as well as increase the large deviations rate function and decrease the asymptotic variance of the estimators. One widely celebrated choice of reversible perturbation is the Riemannian manifold Langevin dynamics (Girolami and Calderhead 2011), in which one defines a Riemannian metric to alter the way distances and gradients are computed. The use of irreversible perturbations to accelerate convergence has also been well studied in a variety of contexts and general settings (Hwang et al. 2005; Rey-Bellet and Spiliopoulos 2015, 2016); see also (Franke et al. 2010; Hwang et al. 1993; Bierkens 2016; Diaconis et al. 2010) and, for linear systems, Lelievre et al. (2013). The authors of Ma et al. (2015) find general conditions on the drift and diffusion coefficients of an SDE so that a specified measure is the SDE's invariant measure, without, however, exploring how different choices of these coefficients impact sampling quality. Existing literature shows that augmenting the drift of the LD with a vector field that is orthogonal to the gradient of the log-target density will leave the invariant measure unchanged while increasing the spectral gap. A convenient choice is simply to add the vector field induced by a skew-symmetric matrix applied to the gradient of the log posterior.

At the same time, traditional sampling methods for Bayesian inference are often intractable for extremely large datasets. While Langevin dynamics-based sampling methods only require access to the unnormalized posterior density, they need many evaluations of this unnormalized density and its gradient. When the dataset is extremely large, each evaluation of the density may be computationally intractable, as it requires the evaluation of the likelihood over the entire dataset. In the past decade the stochastic gradient Langevin dynamics (SGLD) has been introduced and analyzed (Welling and Teh 2011; Teh et al. 2016) to address the problem posed by large datasets. Rather than evaluating the likelihood over the entire dataset, SGLD subsamples a portion of the data (either with or without replacement) and uses the likelihood evaluated at the sampled data to estimate the true likelihood. The resulting chain can then be used to estimate ergodic averages.

In this paper we present a state-dependent irreversible perturbation of Riemannian manifold Langevin dynamics that is informed by the geometry of the manifold. This departs from existing literature, as the vector field of the resulting perturbation is not orthogonal to the original drift term. This geometry-informed irreversible perturbation accelerates convergence and, if desired, can be used in combination with the SGLD algorithm to exploit the computational savings of a stochastic gradient.

We demonstrate this approach on a variety of examples: a simple anisotropic Gaussian target, a posterior on the mean and variance parameters of a normal distribution, Bayesian logistic regression, and Bayesian independent component analysis (ICA). Generally, we observe that the geometry-informed irreversible perturbation improves the convergence rate of LD compared to a standard irreversible perturbation. The improvement tends to be more pronounced as the target distribution deviates from Gaussianity. Our numerical studies also show that introducing irreversibility can reduce the MSE of the resulting long-term average estimator, mainly by reducing variance. In many cases this reduction can be significant, e.g., 1–2 orders of magnitude.

One must, however, also take the effects of discretization into account. In the continuous-time setting, it is known theoretically that irreversible perturbations can at worst only leave the spectral gap fixed. In borderline cases, though—i.e., in cases where the continuous-time theoretical improvement is nearly zero—after accounting for discretization, stiffness can actually cause the resulting estimator to perform worse than if no irreversibility were applied at all. Indeed, we will describe in Appendix A an illustrative Gaussian example in which the standard Langevin algorithm performs better than the algorithm with the standard irreversible perturbation—that is, an example in which additional irreversibility leads to increased bias and variance of the long term average estimator (see Remark 3.2 and Remark A.1 for a theoretical explanation). Along similar lines, the idea of applying irreversible perturbations to SGLD has recently been studied in the context of nonconvex stochastic optimization (Hu et al. 2020). The authors also note that while irreversibility increases the rate of convergence, it increases the discretization error and amplifies the variance of the gradient, compared to a non-perturbed system with the same step size; see also (Brosse et al. 2018) for a related discussion on the relation of SGLD to SGD and convergence properties. This reflects the increased stiffness of irreversible SGLD relative to standard SGLD.

The rest of the paper is organized as follows. In Section 2 we review reversible and irreversible perturbations of the overdamped Langevin dynamics that may improve the efficiency of sampling from equilibrium. Then, in Section 2.3, we present our new geometry-informed irreversible perturbation. In Section 3 we present simulation studies that demonstrate the good performance of this geometric perturbation, relative to a variety of other standard reversible and irreversible choices. In several of these examples, we also demonstrate the use of stochastic gradients. Section 4 summarizes our results and outlines directions for future work. Appendix A details the simple Gaussian example showing that in “borderline” cases—i.e., when continuous-time analysis does not predict improvements from irreversible perturbations—the stiffness created by an irreversible perturbation can, after discretization, lead to poorer performance than the unperturbed case.

Improving the performance of Langevin samplers

We begin by recalling some relevant background on Langevin samplers, Riemannian manifold Langevin dynamics, perturbations of Langevin dynamics, and the stochastic gradient Langevin dynamics algorithm. Let $f(\theta)$ be a test function on state space $E \subseteq \mathbb{R}^d$ and let $\pi(\theta)$ be some unnormalized target density on $E$. In our experiments, $\pi(\theta)$ arises as a posterior density of the form $\pi(\theta) \propto L(\theta; X)\,\pi_0(\theta)$, where $L(\theta; X)$ is the likelihood model, $X$ are the data, and $\pi_0(\theta)$ is the prior density. Define $\{\theta(t)\}$ as a Langevin process that has invariant density $\pi(\theta)$:

$$d\theta(t) = \beta \nabla \log \pi(\theta(t))\,dt + \sqrt{2\beta}\,dW(t), \tag{1}$$

where $\beta > 0$ denotes the temperature, $W(t)$ is a standard Brownian motion in $\mathbb{R}^d$, and the initial condition may be arbitrary. By ergodicity, we may compute expectations with respect to the posterior by the long term average of $f(\theta)$ over a single trajectory:

$$\mathbb{E}_\pi[f(\theta)] = \int_E f(\theta)\,\pi(\theta)\,d\theta = \lim_{T\to\infty}\frac{1}{T}\int_0^T f(\theta(t))\,dt. \tag{2}$$

For practical computations, we must approximate (2) by discretizing the Langevin dynamics and choosing a large but finite T. Applying the Euler-Maruyama method to (1) with step size h yields the following recurrence relation,

$$\theta_{k+1} = \theta_k + h\beta \nabla \log \pi(\theta_k) + \sqrt{2\beta h}\,\xi_{k+1}, \tag{3}$$

where the $\xi_k$ are independent standard normal random variables. The total number of steps is $K = T/h$. The resulting estimator for (2) is

$$\mathbb{E}_\pi[f(\theta)] \approx \frac{1}{K}\sum_{k=0}^{K-1} f(\theta_k). \tag{4}$$

This estimator is the unadjusted Langevin algorithm (ULA), which has found renewed interest in the context of high-dimensional machine learning problems (Durmus and Moulines 2019). Discretization and truncation, however, introduce bias into the estimator. Moreover, there are noted examples in which the continuous-time process and the discretized version do not have the same invariant distribution no matter the choice of the fixed, but nonzero, discretization step h; see Ganguly and Sundar (2021) for a related discussion. Certain Markov chain Monte Carlo (MCMC) methods such as MALA circumvent these issues by using the dynamics to propose new points, but accepting or rejecting them according to some rule so that the resulting discrete-time Markov chain has the target distribution as its invariant distribution (Roberts and Tweedie 1996; Girolami and Calderhead 2011).
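For concreteness, the recurrence (3) and the long-term average (4) can be sketched in a few lines. This is a minimal illustration, not the authors' code; the standard normal target and the observable $\theta^2$ below are illustrative choices.

```python
import numpy as np

def ula_estimate(grad_log_pi, phi, theta0, h, K, beta=0.5, rng=None):
    """Euler-Maruyama discretization (3) of the Langevin SDE (1), returning
    the long-term average estimator (4) of E_pi[phi]."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta = np.array(theta0, dtype=float)
    total = 0.0
    for _ in range(K):
        total += phi(theta)
        theta = (theta + h * beta * grad_log_pi(theta)
                 + np.sqrt(2.0 * beta * h) * rng.standard_normal(theta.shape))
    return total / K

# Illustrative run: pi = N(0, 1), so grad log pi(t) = -t and E_pi[theta^2] = 1.
est = ula_estimate(lambda t: -t, lambda t: float(t @ t), np.zeros(1), h=0.01, K=100000)
```

As discussed next, the estimate carries both a discretization bias (from the fixed step size $h$) and Monte Carlo error (from the finite trajectory length).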

Many different SDEs can have the same invariant distribution. Therefore, there has been much study of how the standard Langevin dynamics for a given target distribution can be altered to increase its rate of convergence. Some examples can be found in the work of Hwang et al. (2005), Rey-Bellet and Spiliopoulos (2016), and others. The standard Langevin dynamics is a reversible Markov process, meaning that the process satisfies detailed balance. The work of Rey-Bellet and Spiliopoulos (2016) studies, in general terms, how reversible and irreversible perturbations to reversible processes increase the spectral gap, increase the large deviations rate function, and decrease the asymptotic variance. Yet how to choose such perturbations to most efficiently accelerate convergence has not been thoroughly studied in settings beyond linear diffusion processes (Lelievre et al. 2013). Also, with the exception of a few examples (see for instance Duncan et al. 2017; Jianfeng and Spiliopoulos 2018), these perturbations have mainly been studied in the continuous-time setting.

Reversible perturbations and Riemannian manifold Langevin dynamics

In this section we review only the relevant aspects of reversible perturbations and RMLD. For a detailed review of RMLD and its related Monte Carlo methods, we refer the reader to Girolami and Calderhead (2011), Livingstone and Girolami (2014), Xifara et al. (2014). Let $B(\theta)$ be a $d \times d$ symmetric positive definite matrix. A reversible perturbation of the LD (1) is an SDE with multiplicative noise:

$$d\theta(t) = \big[\beta B(\theta)\nabla\log\pi(\theta(t)) + \beta\,\nabla\cdot B(\theta)\big]\,dt + \sqrt{2\beta B(\theta)}\,dW(t). \tag{5}$$

Here, the $i$-th component of $\nabla\cdot B(\theta)$ is $\sum_{j=1}^d \partial_{\theta_j} B_{ij}(\theta)$. This is equivalent to Langevin dynamics defined on a Riemannian manifold with metric $G(\theta) = B(\theta)^{-1}$ (Xifara et al. 2014). A straightforward calculation shows that (5), with $B(\theta)$ any symmetric positive-definite matrix, admits the same invariant distribution $\pi$. The improved rate of convergence depends on the choice of the underlying metric. The work of Girolami and Calderhead (2011) argues that choosing the expected Fisher information matrix plus the Hessian of the log-prior as the metric improves the performance of the resulting manifold MALA method. Meanwhile, Rey-Bellet and Spiliopoulos (2016) shows that under certain regularity conditions, if $B(\theta)$ is chosen such that $B(\theta) - I$ is positive definite, then the resulting estimator is expected to have improved performance in terms of the asymptotic variance, the spectral gap, and the large deviations rate function.

Irreversible perturbations

Consider the following Langevin dynamics

$$d\theta(t) = \big[\beta\nabla\log\pi(\theta(t)) + \gamma(\theta(t))\big]\,dt + \sqrt{2\beta}\,dW(t). \tag{6}$$

When $\gamma(\theta) \equiv 0$, the process is reversible and has $\pi(\theta)$ as its invariant distribution. If $\gamma \neq 0$, then the resulting process will, in general, be time-irreversible unless $\gamma(\theta)$ can be written as a multiple of $\nabla\log\pi(\theta)$; see for example (Pavliotis 2014). However, an irreversible perturbation can still preserve the invariant distribution of the system. By considering the Fokker-Planck equation, one can show that if $\gamma(\theta)$ is chosen such that $\nabla\cdot(\gamma\pi) = 0$, then $\pi$ will still be the invariant distribution. A frequently used choice in the literature is $\gamma(\theta) = J\nabla\log\pi(\theta)$, where $J$ is a constant skew-symmetric matrix, i.e., $J = -J^T$. The computational advantage of this choice is clear, since only one additional matrix-vector multiplication is needed to implement it. The optimal choice of irreversible perturbation for a linear system (i.e., that which yields the fastest convergence) was completely analyzed in Lelievre et al. (2013).
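The condition $\nabla\cdot(\gamma\pi) = 0$ for this choice is easy to verify numerically. The sketch below uses an illustrative Gaussian target (not one of the paper's examples): for constant skew-symmetric $J$, the divergence of $\gamma$ is $-\mathrm{tr}(J\Sigma^{-1})$, the trace of a skew-symmetric times a symmetric matrix, which vanishes, and $\gamma$ is pointwise orthogonal to $\nabla\log\pi$.

```python
import numpy as np

# Illustrative check that gamma(theta) = J grad log pi(theta), with constant
# skew-symmetric J, preserves a Gaussian invariant density pi = N(0, Sigma).
rng = np.random.default_rng(1)
d = 3
J = np.array([[0., 1., 1.],
              [-1., 0., 1.],
              [-1., -1., 0.]])                        # J = -J^T
A = rng.standard_normal((d, d))
Sigma_inv = np.linalg.inv(A @ A.T + d * np.eye(d))    # precision of pi

def grad_log_pi(theta):
    return -Sigma_inv @ theta

theta = rng.standard_normal(d)
g = grad_log_pi(theta)
gamma = J @ g

# div(gamma) = -tr(J Sigma_inv): trace of (skew @ symmetric) is zero.
div_gamma = -np.trace(J @ Sigma_inv)
# gamma . grad log pi = g^T J g, which vanishes for skew-symmetric J.
orth = gamma @ g
```

Together, $\nabla\cdot\gamma = 0$ and $\gamma\cdot\nabla\pi = 0$ imply $\nabla\cdot(\gamma\pi) = 0$, so the perturbation leaves $\pi$ invariant.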

The advantages of using irreversible perturbations are widely noted. The main result of Hwang et al. (2005) is that under certain conditions, the spectral gap, i.e., the difference between the two leading eigenvalues of the generator of the Markov semigroup, increases when $\gamma \neq 0$. In Rey-Bellet and Spiliopoulos (2015, 2016), the large deviations rate function is introduced as a measure of performance in the context of sampling from the equilibrium, and upon connecting it to the asymptotic variance of the long term average estimator, it is proven that adding an appropriate perturbation $\gamma$ not only increases the large deviations rate function but also decreases the asymptotic variance of the estimator. The use of irreversible proposals in the MALA was studied in Ottobre et al. (2019).

Irreversible perturbations for RMLD

In this section, we introduce our novel geometry-informed irreversible perturbation to Langevin dynamics. Suppose that we are given a diffusion process as in (5), and we want to study how to choose an irreversible perturbation that leaves the invariant distribution fixed. Indeed, our previous choice of irreversible perturbation remains valid for this system: adding $\gamma(\theta) = J\nabla\log\pi(\theta)$, for a constant skew-symmetric matrix $J$, to the drift term of (5) preserves the invariant density. This choice yields the following SDE:

$$d\theta(t) = \big[(\beta B(\theta(t)) + J)\nabla\log\pi(\theta(t)) + \beta\,\nabla\cdot B(\theta(t))\big]\,dt + \sqrt{2\beta B(\theta(t))}\,dW(t). \tag{7}$$

We refer to this system as Riemannian manifold Langevin with an additive irreversible perturbation (RMIrr). This choice, however, does not take into account the relevant features that the reversible perturbation may provide when constructing an irreversible perturbation.

The reversible perturbation leads to a positive definite matrix (a metric, in the terminology of Riemannian geometry) that is state-dependent. In contrast, the skew-symmetric matrix $J$ in the irreversible perturbation above is fixed. The skew-symmetric matrix need not be constant, however, as an irreversible perturbation $\gamma(\theta)$ only needs to satisfy $\nabla\cdot(\gamma(\theta)\pi(\theta)) = 0$. In fact, if $\gamma(\theta) = C(\theta)\nabla\log\pi(\theta) + \nabla\cdot C(\theta)$ for $C(\theta) = -C(\theta)^T$, then this irreversible perturbation will also leave the invariant density intact. Noting that $C_{ii}(\theta) = 0$ and that $C_{ij} = -C_{ji}$, observe that

$$\begin{aligned}
\nabla\cdot(\gamma(\theta)\pi(\theta)) &= \nabla\cdot\big(C(\theta)\nabla\pi(\theta) + (\nabla\cdot C(\theta))\,\pi(\theta)\big)\\
&= \sum_{i,j=1}^d \frac{\partial C_{ij}}{\partial\theta_i}\frac{\partial\pi}{\partial\theta_j} + C_{ij}\frac{\partial^2\pi}{\partial\theta_i\,\partial\theta_j} + \frac{\partial^2 C_{ij}}{\partial\theta_i\,\partial\theta_j}\,\pi + \frac{\partial C_{ij}}{\partial\theta_j}\frac{\partial\pi}{\partial\theta_i}\\
&= \sum_{i>j} \bigg[\frac{\partial C_{ij}}{\partial\theta_i}\frac{\partial\pi}{\partial\theta_j} + C_{ij}\frac{\partial^2\pi}{\partial\theta_i\,\partial\theta_j} + \frac{\partial^2 C_{ij}}{\partial\theta_i\,\partial\theta_j}\,\pi + \frac{\partial C_{ij}}{\partial\theta_j}\frac{\partial\pi}{\partial\theta_i}\\
&\qquad\quad + \frac{\partial C_{ji}}{\partial\theta_j}\frac{\partial\pi}{\partial\theta_i} + C_{ji}\frac{\partial^2\pi}{\partial\theta_j\,\partial\theta_i} + \frac{\partial^2 C_{ji}}{\partial\theta_j\,\partial\theta_i}\,\pi + \frac{\partial C_{ji}}{\partial\theta_i}\frac{\partial\pi}{\partial\theta_j}\bigg]\\
&= \sum_{i>j} \bigg[\frac{\partial C_{ij}}{\partial\theta_i}\frac{\partial\pi}{\partial\theta_j} + C_{ij}\frac{\partial^2\pi}{\partial\theta_i\,\partial\theta_j} + \frac{\partial^2 C_{ij}}{\partial\theta_i\,\partial\theta_j}\,\pi + \frac{\partial C_{ij}}{\partial\theta_j}\frac{\partial\pi}{\partial\theta_i}\\
&\qquad\quad - \frac{\partial C_{ij}}{\partial\theta_j}\frac{\partial\pi}{\partial\theta_i} - C_{ij}\frac{\partial^2\pi}{\partial\theta_j\,\partial\theta_i} - \frac{\partial^2 C_{ij}}{\partial\theta_j\,\partial\theta_i}\,\pi - \frac{\partial C_{ij}}{\partial\theta_i}\frac{\partial\pi}{\partial\theta_j}\bigg] = 0.
\end{aligned}$$

We seek an irreversible perturbation that takes the reversible perturbation into account, allowing $C(\theta)$ to be a non-constant matrix, and investigate whether it leads to any performance improvements of the long term average estimator. Note that in the literature, the condition $\nabla\cdot(\gamma\pi) = 0$ above is typically replaced by the following pair of sufficient conditions: $\nabla\cdot\gamma(\theta) = 0$ and $\gamma(\theta)\cdot\nabla\pi(\theta) = 0$ (Rey-Bellet and Spiliopoulos 2016). One can check, however, that when $C$ is not constant, these conditions need not hold, yet $\gamma(\theta)$ is still a valid irreversible perturbation. A simple choice of $C(\theta)$ that incorporates $B(\theta)$ is

$$C(\theta) = \tfrac{1}{2}JB(\theta) + \tfrac{1}{2}B(\theta)J, \tag{8}$$

where $J$ is a constant skew-symmetric matrix. The factor of $\tfrac{1}{2}$ is introduced so that if $B(\theta) = I$, i.e., if there is no reversible perturbation, then this perturbation reverts to the standard irreversible perturbation (Irr). We arrive at the following system:

$$d\theta(t) = \big[(\beta B(\theta(t)) + C(\theta(t)))\nabla\log\pi(\theta(t)) + \nabla\cdot\big(\beta B(\theta(t)) + C(\theta(t))\big)\big]\,dt + \sqrt{2\beta B(\theta(t))}\,dW(t). \tag{9}$$

We call this choice of perturbation the geometry-informed irreversible perturbation (GiIrr). Indeed, while there are infinitely many valid choices for $C(\theta)$, we will investigate the choice in (8) in the numerical examples. Since we will have already explicitly constructed $B(\theta)$ and $J$ for the other systems, the additional computational cost of computing their product is marginal. Furthermore, as mentioned earlier, this choice reduces to Irr when $B(\theta) = I$.
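The construction (8) is straightforward to sketch numerically. In the snippet below, the state-dependent metric $B(\theta)$ is an arbitrary illustrative choice (not one from the paper); the checks confirm that $C(\theta)$ is skew-symmetric and reduces to $J$ when $B = I$.

```python
import numpy as np

def C_giirr(B_theta, J):
    """Geometry-informed skew-symmetric matrix (8): C = (J B + B J) / 2."""
    return 0.5 * (J @ B_theta + B_theta @ J)

J = np.array([[0., 1.], [-1., 0.]])

def B(theta):
    # illustrative state-dependent SPD metric
    return np.diag(1.0 + theta ** 2)

theta = np.array([0.3, -1.2])
Ct = C_giirr(B(theta), J)
skew_err = np.abs(Ct + Ct.T).max()     # C is skew: B symmetric, J skew
C_id = C_giirr(np.eye(2), J)           # with B = I, GiIrr reduces to Irr
```

Skew-symmetry follows since $(JB + BJ)^T = B^T J^T + J^T B^T = -(BJ + JB)$ whenever $B$ is symmetric and $J$ is skew-symmetric.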

One may wonder when GiIrr yields improved performance over standard irreversible perturbations such as that in (7). Based on our numerical results and intuition, we will argue that GiIrr performs better when the underlying reversible perturbation already improves the sampling. Namely, if one knows that RMLD leads to improved sampling on a given problem, then employing GiIrr is expected to improve sampling even further. As mentioned earlier, the choice of GiIrr made in this paper is not unique, and a further investigation of its theoretical properties is left for future work; see also the discussion in Section 4. The goal of this paper is to present this new class of irreversible perturbations and investigate it numerically in a number of representative computational studies.

Stochastic gradient Langevin dynamics

In certain Bayesian inference problems, the data are conditionally independent of each other given the parameter value. Therefore, the likelihood model can often be factorized and the posterior density can be written as follows:

$$\pi(\theta) \propto \pi_0(\theta)\prod_{i=1}^N \pi(X_i \mid \theta), \tag{10}$$

where $\pi(X_i \mid \theta)$ is the likelihood function for data point $X_i$. When the dataset is extremely large, i.e., when $N \gg 1$, ULA becomes exceedingly expensive, as each step of the trajectory requires evaluating the likelihood over the entire dataset. To mitigate this challenge, the stochastic gradient Langevin dynamics was introduced to reduce the computational cost of evaluating the posterior density by evaluating the likelihood only over subsets of the data at each step. The true likelihood is estimated from the likelihood function evaluated at the subsampled data (Welling and Teh 2011). Specifically, the gradient is estimated using a stochastic gradient

$$\nabla\log\pi(\theta \mid X) \approx \widehat{\nabla\log\pi(\theta \mid X)} = \nabla\log\pi_0(\theta) + \frac{N}{n}\sum_{i=1}^n \nabla\log\pi(X_{\tau_i} \mid \theta), \tag{11}$$

where $\tau$ is a random subset of $\{1,\ldots,N\}$ of size $n$, drawn with or without replacement. Depending on the choice of $n$, this approach cuts computational costs dramatically, at the price of some additional variance incurred by the random subsampling of the data. The original version of this algorithm used a variable step size, approaching zero as the number of steps $K$ becomes large. SGLD with a variable, shrinking step size was proven to be consistent: ergodic averages computed from the chain converge to expectations under the target distribution (Teh et al. 2016). Having a decreasing step size, however, counteracts the cost savings provided by computing the stochastic gradient, and therefore a fixed-step-size version was presented in Vollmer et al. (2016), where theoretical characterizations of the asymptotic and finite-time bias and variance are also developed. In most of our numerical results, we use the stochastic gradient version of the Langevin algorithm with fixed step size, to demonstrate that SGLD can be used together with irreversible perturbations.
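A minimal sketch of the minibatch gradient estimate (11), using an illustrative one-dimensional Gaussian model (the densities and sizes here are stand-ins, not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 1000, 50
X = 2.0 + rng.standard_normal(N)       # data, modeled as N(theta, 1)

def full_grad(theta):
    # grad log posterior with a N(0, 1) prior: -theta + sum_i (x_i - theta)
    return -theta + np.sum(X - theta)

def stochastic_grad(theta):
    # subsample n of N points and reweight by N/n, as in (11)
    tau = rng.choice(N, size=n, replace=False)
    return -theta + (N / n) * np.sum(X[tau] - theta)

# The stochastic gradient is unbiased: averaged over many minibatches,
# it approaches the full-data gradient.
theta = 1.5
avg = np.mean([stochastic_grad(theta) for _ in range(4000)])
```

Each step thus costs $O(n)$ likelihood-gradient evaluations rather than $O(N)$, at the price of extra variance in the gradient.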

Numerical examples

In the following examples, we always apply the stochastic gradient version of each Langevin system unless otherwise stated. We fix $\beta = 1/2$ for all examples. The efficacy of the GiIrr perturbation does not change whether or not the stochastic gradient is used; we illustrate this explicitly in Section 3.4, where we report the results of all perturbations both with and without the stochastic gradient, for comparison.

Evaluating sample quality

Here we discuss the measures we use to evaluate sample quality for each Langevin sampler. For each example, we estimate the bias, variance, mean-squared error, and asymptotic variance of the estimators of the expectations of two observables: $\phi_1(\theta) = \sum_{l=1}^d \theta^{(l)}$ and $\phi_2(\theta) = \sum_{l=1}^d (\theta^{(l)})^2$, where $\theta^{(l)}$ denotes the $l$th component of $\theta$. Let $\bar\phi_K = \frac{1}{K}\sum_{k=0}^{K-1}\phi(\theta_k)$ denote the estimator of $\mathbb{E}_\pi[\phi(\theta)]$ obtained with a chain of length $K$. The bias is

$$\mathrm{Bias}(\bar\phi_K) = \mathbb{E}[\bar\phi_K] - \mathbb{E}_\pi[\phi(\theta)] = \mathbb{E}\bigg[\frac{1}{K}\sum_{k=0}^{K-1}\phi(\theta_k)\bigg] - \mathbb{E}_\pi[\phi(\theta)] \approx \frac{1}{M}\sum_{i=1}^M \frac{1}{K}\sum_{k=0}^{K-1}\phi([\theta_k]_i) - \mathbb{E}_\pi[\phi(\theta)], \tag{12}$$

where $[\theta_k]_i$ is the state of the $i$th chain at iteration $k$. Here, we estimate $\mathbb{E}_\pi[\phi(\theta)]$ by applying the unadjusted Langevin algorithm with a very long simulated trajectory and a small discretization step. The expected value of the estimator is computed by averaging over $M = 1000$ independent chains for the examples in Sections 3.2 and 3.3, and $M = 100$ independent chains for the examples in Sections 3.4 and 3.5. The variance of each estimator is defined and estimated as follows:

$$\mathrm{Var}(\bar\phi_K) = \mathbb{E}\big[(\bar\phi_K)^2\big] - \big(\mathbb{E}[\bar\phi_K]\big)^2 \approx \frac{1}{M}\sum_{i=1}^M\bigg(\frac{1}{K}\sum_{k=0}^{K-1}\phi([\theta_k]_i)\bigg)^2 - \bigg(\frac{1}{M}\sum_{i=1}^M\frac{1}{K}\sum_{k=0}^{K-1}\phi([\theta_k]_i)\bigg)^2. \tag{13}$$

The mean-squared error (MSE) of each estimator $\bar\phi_K$ follows analogously.

We also evaluate the asymptotic variance of the estimator of each observable, defined as

$$\sigma^2(\phi) = \lim_{t\to\infty} t\,\mathrm{Var}\bigg[\frac{1}{t}\int_0^t \phi(\theta(s))\,ds\bigg] \approx \lim_{K\to\infty} Kh\,\mathrm{Var}\big[\bar\phi_K\big]. \tag{14}$$

To compute these asymptotic variances, we use the batch means method in Asmussen and Glynn (2007). After the burn-in period, we evaluate the observable over each chain. Each observable chain is then batched into twenty separate chains, and their means are evaluated. The asymptotic variance is estimated by computing the empirical variance of those means and then multiplying by the length of each of the subsampled trajectories.
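The batch-means procedure just described can be sketched as follows; the AR(1) test signal is an illustrative stand-in for a post-burn-in observable chain, chosen because its asymptotic variance is known in closed form.

```python
import numpy as np

def batch_means_avar(phi_chain, h, n_batches=20):
    """Estimate the asymptotic variance (14) by batch means: split the chain
    into n_batches batches, compute the empirical variance of the batch
    means, and scale by the time-length of each batch."""
    K = len(phi_chain) - (len(phi_chain) % n_batches)   # trim remainder
    means = np.reshape(phi_chain[:K], (n_batches, -1)).mean(axis=1)
    batch_time = (K // n_batches) * h
    return batch_time * means.var(ddof=1)

# Illustrative check on an AR(1) chain x_k = a x_{k-1} + xi_k, whose
# asymptotic variance for the mean is 1 / (1 - a)^2 = 100 when a = 0.9.
rng = np.random.default_rng(3)
a, K, h = 0.9, 200000, 1.0
x = np.empty(K)
x[0] = 0.0
for k in range(1, K):
    x[k] = a * x[k - 1] + rng.standard_normal()
avar = batch_means_avar(x, h)
```

The batch length must be much longer than the chain's correlation time for the batch means to be approximately independent; here each batch has 10,000 steps versus a correlation time of about 19.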

In addition to measuring the performance of estimators of specific observables, for each sampler we also evaluate overall sample quality by computing the recently-proposed kernelized Stein discrepancy (KSD) (Gorham and Mackey 2017). The KSD is a computable expression that can approximate certain integral probability metrics (IPMs) for a certain class of functions defined through the action of the Stein operator on a reproducing kernel Hilbert space. Let $\hat\pi_K = \frac{1}{K}\sum_{k=0}^{K-1}\delta_{\theta_k}$ be the empirical approximation to $\pi$ based on samples $\{\theta_k\}_{k=0}^{K-1}$ produced by some Langevin algorithm. The IPM is defined as

$$d_{\mathcal H}(\hat\pi_K, \pi) := \sup_{h\in\mathcal H}\big|\mathbb{E}_{\hat\pi_K}[h(Z)] - \mathbb{E}_\pi[h(X)]\big|, \tag{15}$$

for some function space $\mathcal H$, where $h \in \mathcal H$ are functions from $\mathbb{R}^d$ to $\mathbb{R}$, and $Z \sim \hat\pi_K$ and $X \sim \pi$ are random variables. When $\mathcal H$ is large enough, $d_{\mathcal H}(\hat\pi_K, \pi) \to 0$ holds only if $\hat\pi_K \to \pi$ in distribution; see Gorham and Mackey (2017) and the references therein. This result implies that a better empirical approximation (i.e., better-quality samples) corresponds to a lower IPM value. Computing the IPM directly is intractable in practice, as it requires knowing expectations with respect to the target distribution $\pi$ exactly (which is, after all, our original goal in sampling). By judiciously choosing the function space $\mathcal H$, however, we can estimate the IPM by computing the KSD, using only the samples that comprise the empirical distribution $\hat\pi_K$.

As the expectation with respect to the target distribution $\pi$ is intractable to compute, one seeks an $\mathcal H$ such that for all $h \in \mathcal H$, $\mathbb{E}_\pi[h(X)] = 0$. Such a function space is found through Stein's identity, which states that $\mathbb{E}_\pi[\mathcal A g(X)] = 0$ for all $g : \mathbb{R}^d \to \mathbb{R}^d$ in some function space $\mathcal G$, where $\mathcal A$ is the Stein operator,

$$\mathcal A g(x) = \frac{1}{\pi(x)}\nabla\cdot\big(\pi(x)\,g(x)\big) = g(x)\cdot\nabla\log\pi(x) + \nabla\cdot g(x),$$

and $\nabla\cdot$ denotes the divergence operator. The space $\mathcal G$ is defined by specifying a reproducing kernel Hilbert space (RKHS) $\mathcal R_r$ of functions from $\mathbb{R}^d$ to $\mathbb{R}$, with a user-defined kernel $r(x, y)$ that is twice-continuously differentiable (Gorham and Mackey 2017; Izzatullah et al. 2020). Each function $g \in \mathcal G$ is made up of components $g_j \in \mathcal R_r$, for $j = 1,\ldots,d$, such that the vector $(\|g_1\|_{\mathcal R_r}, \ldots, \|g_d\|_{\mathcal R_r})$ has unit norm in the dual space $\ell^2$, where $\|\cdot\|_{\mathcal R_r}$ is the norm of the RKHS (Gorham and Mackey 2017). Hence, if we set $h = \mathcal A g$ for $g \in \mathcal G$, then $\mathbb{E}_\pi[h(X)] = 0$. Proposition 1 in Gorham and Mackey (2017) demonstrates that such a $\mathcal G$ is an appropriate domain for $\mathcal A$, so that one indeed has $\mathbb{E}_\pi[\mathcal A g(X)] = 0$ for all $g \in \mathcal G$. Then the space $\mathcal H$ is defined as $\mathcal H = \mathcal A\mathcal G$, i.e., the space of functions resulting from the Stein operator applied to functions in $\mathcal G$. With this $\mathcal H$, we define the corresponding KSD of a measure $\mu$ as $S(\mu) = d_{\mathcal H}(\mu, \pi) = \sup_{h\in\mathcal H}\mathbb{E}_\mu[h(X)]$.

Proposition 2 in Gorham and Mackey (2017) shows that the KSD admits a closed form. Indeed, for any $j = 1,\ldots,d$, letting $b_j(x) = \partial_{x_j}\log\pi(x)$, define the Stein kernel

$$r_0^j(x, y) = b_j(x)\,b_j(y)\,r(x,y) + b_j(x)\,\partial_{y_j} r(x,y) + b_j(y)\,\partial_{x_j} r(x,y) + \partial_{x_j}\partial_{y_j} r(x,y). \tag{16}$$

Then, by Proposition 2 of Gorham and Mackey (2017), if $\sum_{j=1}^d \mathbb{E}_\mu\big[r_0^j(X,X)\big] < \infty$, we have the equality $S(\mu) = \|w\|_2$, where for $j = 1,\ldots,d$ we have

$$w_j = \sqrt{\mathbb{E}_{\mu\times\mu}\big[r_0^j(X, \tilde X)\big]}, \qquad X, \tilde X \overset{\text{i.i.d.}}{\sim} \mu.$$

Gorham and Mackey (2017) also emphasize the importance of choosing the kernel $r(x, y)$ carefully, so that convergence of the KSD to zero (for a sequence of empirical distributions) implies convergence in distribution to the target measure. As suggested by the results of Gorham and Mackey (2017), we use the inverse multiquadric kernel $r(x, y) = (c^2 + \|x - y\|_2^2)^{\beta}$ with $\beta = -1/2$ and $c = 1$. This choice has also been used in Izzatullah et al. (2020).

In our examples, $\mu = \hat\pi_K$ is a discrete distribution, so the KSD can be computed by evaluating the kernels over all pairs of sample points (Gorham and Mackey 2017). Namely, if $\mu = \hat\pi_K = \frac{1}{K}\sum_{k=0}^{K-1}\delta_{\theta_k}$, then

$$w_j = \sqrt{\frac{1}{K^2}\sum_{k,k'=0}^{K-1} r_0^j(\theta_k, \theta_{k'})}. \tag{17}$$

We use the KSD to evaluate the quality of every Langevin sampler—effectively, sampler performance over a large class of observables—to complement our bias/variance/MSE computations for particular choices of observable. For each example below and for each Langevin sampler, we compute the KSD for 25 independent chains. The mean KSDs of the chains for each sampler are then plotted as a function of the number of steps of the chain. We comment on the quality of each sampler given these KSD plots. We refer the reader to Gorham et al. (2019), Gorham and Mackey (2015, 2017), Izzatullah et al. (2020) for further details on these choices and for further literature review.
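The KSD computation with the IMQ kernel can be sketched directly from (16) and (17); the derivatives of $r$ are taken analytically. The target, sample sets, and sizes below are illustrative, not drawn from the paper's experiments.

```python
import numpy as np

def ksd_imq(samples, grad_log_pi, c=1.0, beta=-0.5):
    """KSD with the inverse multiquadric kernel r(x,y) = (c^2 + ||x-y||^2)^beta,
    via the Stein kernel (16) summed over all sample pairs as in (17)."""
    Ksz, d = samples.shape
    b = np.array([grad_log_pi(x) for x in samples])      # scores b_j(theta_k)
    diff = samples[:, None, :] - samples[None, :, :]     # x_k - x_{k'}
    u = c ** 2 + np.sum(diff ** 2, axis=-1)
    r = u ** beta
    w_sq = 0.0
    for j in range(d):
        dxj = diff[:, :, j]
        r_x = 2.0 * beta * dxj * u ** (beta - 1.0)       # d r / d x_j
        r_y = -r_x                                       # d r / d y_j
        r_xy = (-2.0 * beta * u ** (beta - 1.0)
                - 4.0 * beta * (beta - 1.0) * dxj ** 2 * u ** (beta - 2.0))
        r0 = (np.outer(b[:, j], b[:, j]) * r
              + b[:, j][:, None] * r_y
              + b[:, j][None, :] * r_x
              + r_xy)
        w_sq += r0.sum() / Ksz ** 2
    return np.sqrt(w_sq)

# Illustrative comparison for pi = N(0, I): on-target samples should have a
# smaller KSD than shifted, off-target samples.
rng = np.random.default_rng(4)
score = lambda x: -x
good = rng.standard_normal((300, 2))
bad = good + 2.0
```

Note that the cost is quadratic in the number of samples, which is why (as remarked above) the KSD is evaluated on shorter chains than the bias/variance diagnostics.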

Linear Gaussian example

Suppose we have data $\{X_i\}_{i=1}^N \subset \mathbb{R}^d$ generated from a multivariate normal distribution with mean $\theta \in \mathbb{R}^d$ and known precision matrix $\Gamma_X \in \mathbb{R}^{d\times d}$. From the data, we infer the value of $\theta$. Endow $\theta$ with a normal prior with mean zero and precision $\Gamma_\theta \in \mathbb{R}^{d\times d}$. Then the posterior distribution is Gaussian with mean and precision

$$\mu_p = (\Gamma_\theta + N\Gamma_X)^{-1}\,\Gamma_X\sum_{i=1}^N X_i \qquad\text{and}\qquad \Gamma_p = \Gamma_\theta + N\Gamma_X, \tag{18}$$

respectively. The Euler-Maruyama discretization with constant step size h applied to the corresponding Langevin dynamics is

$$\theta_{k+1} = \big(I - \bar A h\big)\theta_k + \bar D_k h + \sqrt{h}\,\xi_k, \tag{19}$$

where

$$\bar A = \frac{1}{2}(\Gamma_\theta + N\Gamma_X), \qquad \bar D_k = \frac{1}{2}\Gamma_X\sum_{i=1}^N X_i, \qquad \xi_k \sim \mathcal N(0, I).$$

Using stochastic gradients yields the same recurrence above except with

$$\bar D_k = \frac{1}{2}\Gamma_X\,\frac{N}{n}\sum_{i=1}^n X_{\tau_i^k}, \tag{20}$$

where $n \ll N$ and $\tau_i^k \in \{1,\ldots,N\}$ is randomly sampled (with or without replacement) (Welling and Teh 2011). Expectations with respect to the posterior are approximated by a long term average of the observable over the course of a trajectory. It has been shown that, despite subsampling the data at each step of the dynamics, this estimator has performance comparable to the estimator produced by the regular Langevin dynamics with the full likelihood, or by MALA (Welling and Teh 2011; Vollmer et al. 2016).

Now, we consider the case where the dynamics are perturbed by an irreversible term that preserves the invariant distribution of the dynamics. We demonstrate that this leads to a lower MSE than standard SGLD or Langevin dynamics. In this case, we replace $\bar A$ and $\bar D_k$ with $A$ and $D_k$, given by

$$A = \frac{1}{2}(I + J)(\Gamma_\theta + N\Gamma_X), \qquad D_k = \frac{1}{2}(I + J)\,\Gamma_X\,\frac{N}{n}\sum_{i=1}^n X_{\tau_i^k}, \tag{21}$$

where $J$ is a skew-symmetric matrix.
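The perturbed recurrence (19) with the irreversible coefficients (21) can be sketched as follows. The prior precision, data, and $\delta$ below are illustrative stand-ins, not the exact experimental configuration described next.

```python
import numpy as np

rng = np.random.default_rng(5)
d, N, n, h, delta = 3, 10, 2, 0.005, 1.0
Gamma_X = 0.25 * np.eye(d)
Gamma_theta = 0.1 * np.eye(d)                 # illustrative prior precision
X = rng.multivariate_normal(np.ones(d), np.linalg.inv(Gamma_X), size=N)
J = delta * np.array([[0., 1., 1.], [-1., 0., 1.], [-1., -1., 0.]])

Gamma_p = Gamma_theta + N * Gamma_X           # posterior precision, cf. (18)
mu_p = np.linalg.solve(Gamma_p, Gamma_X @ X.sum(axis=0))

A = 0.5 * (np.eye(d) + J) @ Gamma_p           # perturbed drift matrix (21)
theta, running, K = np.zeros(d), np.zeros(d), 50000
for k in range(K):
    tau = rng.choice(N, size=n, replace=True)                  # minibatch
    D_k = 0.5 * (np.eye(d) + J) @ Gamma_X @ ((N / n) * X[tau].sum(axis=0))
    theta = ((np.eye(d) - A * h) @ theta + D_k * h
             + np.sqrt(h) * rng.standard_normal(d))
    running += theta
post_mean_est = running / K                   # long-term average, cf. (4)
```

The long-term average should approach the posterior mean $\mu_p$: the skew-symmetric term $J$ changes the dynamics but leaves the invariant distribution unchanged.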

For the numerical experiments, we choose $d = 3$ and $N = 10$, with mini-batches of size $n = 2$. The initial condition is the zero vector. We set $\Gamma_X = 0.25\,I$, and $\Gamma_\theta$ is a precision matrix with eigenvalues $0.2$, $0.01$, $0.05$ and randomly generated eigenvectors. These matrices were chosen so that the resulting reversible perturbation has eigenvalues greater than one. To construct the perturbations, we choose $B = \Gamma_p^{-1}$ and take $J$ to be

$$J = \delta\begin{pmatrix} 0 & 1 & 1 \\ -1 & 0 & 1 \\ -1 & -1 & 0 \end{pmatrix} \tag{22}$$

for $\delta \in \mathbb{R}$. We consider the five different SDE systems presented in Table 1 and investigate how the MSE, bias, and variance differ in each case. For this example, since a constant metric is used, the geometry-informed irreversible perturbation simply produces a different constant skew-symmetric matrix than the other irreversible perturbations. Each system is simulated for $K = 10^5$ steps with step size $h = 5\times 10^{-3}$. In Figures 1 and 2, we plot the MSE of the running average for each case when the observables are the sums of the first and second moments. In Table 2, we report the asymptotic variance of the estimator for each system. We see that irreversible perturbations clearly improve the performance of the estimators, although the improvement provided by the geometry-informed irreversible perturbation over RMIrr seems marginal when estimating the second moments.

Table 1.

Summary of the five SDEs that share the same invariant density $\pi(\theta)$. Stochastic gradients can be used instead of the deterministic gradients. All systems are of the form $d\theta_t = b(\theta_t)\,dt + \sigma(\theta_t)\,dW_t$. The parameter $\beta$ denotes the temperature.

LD: $b(\theta) = \beta\nabla\log\pi(\theta)$; $\sigma(\theta) = \sqrt{2\beta}\,I$
RM: $b(\theta) = \beta B(\theta)\nabla\log\pi(\theta) + \beta\,\nabla\cdot B(\theta)$; $\sigma(\theta) = \sqrt{2\beta B(\theta)}$
Irr: $b(\theta) = (\beta I + J)\nabla\log\pi(\theta)$; $\sigma(\theta) = \sqrt{2\beta}\,I$
RMIrr: $b(\theta) = (\beta B(\theta) + J)\nabla\log\pi(\theta) + \beta\,\nabla\cdot B(\theta)$; $\sigma(\theta) = \sqrt{2\beta B(\theta)}$
GiIrr: $b(\theta) = \big(\beta B(\theta) + \tfrac12 JB(\theta) + \tfrac12 B(\theta)J\big)\nabla\log\pi(\theta) + \nabla\cdot\big(\beta B(\theta) + \tfrac12 JB(\theta) + \tfrac12 B(\theta)J\big)$; $\sigma(\theta) = \sqrt{2\beta B(\theta)}$

Fig. 1. MSE of the running average for the first moment. Stochastic gradients are computed.

Fig. 2. MSE of the running average for the second moment. Stochastic gradients are computed.

Table 2.

Asymptotic variance estimates for the linear Gaussian example

         E[AVar $\phi_1$]   Std[AVar $\phi_1$]   E[AVar $\phi_2$]   Std[AVar $\phi_2$]
LD         37.75              11.94                209.4              84.38
RM         20.09               6.420               132.8              49.21
Irr        15.72               5.008               135.4              47.91
RMIrr      12.36               3.937               115.9              40.10
GiIrr       7.444              2.336               103.7              36.78

When the reversible perturbation is chosen such that the drift matrix is exactly the identity (for example, when the matrix is chosen to be the covariance matrix of the posterior), additional irreversibility cannot widen the spectral gap of the system. This fact can be deduced from the results of Lelievre et al. (2013). The improved performance of the geometry-informed irreversible perturbation here is mostly due to the fact that the norm of the corresponding skew-symmetric matrix is greater than that of the simple irreversible perturbation. Even though one could scale the skew-symmetric matrix in the other two cases to observe performance similar to geometry-informed irreversibility, GiIrr accomplishes this in a more systematic way.

In Figure 3 we plot the kernelized Stein discrepancy (KSD) for the linear–Gaussian example. We see that irreversible perturbations typically have smaller KSD than reversible perturbations and that in all cases the theoretical slope of K^{-1/2} (see Liu et al. (2016)) is achieved.
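For reference, the KSD reported here can be estimated from samples using the Langevin Stein kernel of Liu et al. (2016); the sketch below is a V-statistic estimator with an RBF kernel whose fixed bandwidth is an illustrative choice, not the one used in the experiments:

```python
import numpy as np

def ksd_rbf(samples, score, ell=1.0):
    """Squared kernelized Stein discrepancy (V-statistic) with an RBF kernel.

    samples: (n, d) array; score: function returning the (n, d) array of
    grad log pi at the samples; ell: kernel bandwidth (illustrative default).
    """
    n, d = samples.shape
    S = score(samples)
    diff = samples[:, None, :] - samples[None, :, :]       # x_i - x_j
    sq = np.sum(diff ** 2, axis=-1)
    Kxy = np.exp(-sq / (2.0 * ell ** 2))
    # Stein kernel: s(x)^T s(y) k + s(x)^T grad_y k + s(y)^T grad_x k + tr(grad_x grad_y k)
    t1 = (S @ S.T) * Kxy
    t2 = np.einsum('id,ijd->ij', S, diff) * Kxy / ell ** 2
    t3 = -np.einsum('jd,ijd->ij', S, diff) * Kxy / ell ** 2
    t4 = (d / ell ** 2 - sq / ell ** 4) * Kxy
    return np.sum(t1 + t2 + t3 + t4) / n ** 2

# sanity check against a standard Gaussian target, for which score(x) = -x
rng = np.random.default_rng(0)
good = rng.standard_normal((300, 2))     # samples from the target
bad = good + 2.0                         # samples from a shifted Gaussian
score = lambda x: -x
```

Samples drawn from the target itself should yield a much smaller KSD than the shifted ones.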

Fig. 3. Kernelized Stein discrepancy plot for the Gaussian example. The black line has slope −1/2, which denotes the expected convergence rate

Remark 3.1

Note that in Figure 3 and in all subsequent KSD-related figures, K ranges up to 10^4, rather than up to 10^5 or higher (as in the bias/variance/MSE plots, e.g., Figures 1 and 2 and analogous figures in subsequent examples). The reason is that KSD is expensive to compute, and we find that evaluating it up to K = 10^4 is sufficient to draw conclusions.

Remark 3.2

While it is known that irreversible perturbations can, at worst, maintain the same performance as standard Langevin in the continuous-time setting (Rey-Bellet and Spiliopoulos 2016), when considering discretization and in borderline cases (i.e., when one does not expect much or any improvement in continuous time), irreversibility may actually harm the performance of the estimator as it introduces additional stiffness into the system without resulting in faster convergence to the invariant density. A detailed exploration of this effect is presented in Appendix A, in which we compute the bias and variance of the long term average estimator for a simple linear Gaussian problem where the posterior precision is a scalar multiple of the identity matrix. As further discussed in Remark A.1, in this case, the irreversible perturbation is not expected to lead to improvement in the sampling properties from the equilibrium. Hence, the stiffness induced upon discretization has a more profound impact on the practical performance of the irreversible perturbation.

In the current numerical study, the posterior precision is diagonal, but not a scalar multiple of the identity matrix. The eigenvalues of the resulting drift matrix are therefore distinct, and by the theory in Lelievre et al. (2013), irreversible perturbations are able to reduce the spectral gap and result in improved performance. This is in contrast with the example studied in Appendix A. In the example of the appendix, the drift matrix is taken to be proportional to the identity matrix, and as explained in Remark A.1, the spectral gap therefore cannot be widened. In that case, irreversibility leads to increased stiffness of the system, which then leads to increased bias and variance in the resulting estimator.

Parameters of a normal distribution

This example is identical to that used in Girolami and Calderhead (2011, Section 5) to demonstrate the performance of RMLD. Given a dataset of ℝ-valued data X = {X_i}_{i=1}^N ~ N(μ, σ²), we infer the parameters μ and σ. To be clear, in this example the state is θ = [μ, σ]ᵀ. The prior on (μ, σ) is chosen to be flat (and, therefore, improper). The log-posterior is

log p(μ, σ | X) = −(N/2) log(2π) − N log σ − Σ_{i=1}^N (X_i − μ)²/(2σ²).   (23)

The gradient is

∇ log p(μ, σ | X) = [ m_1(μ)/σ², −N/σ + m_2(μ)/σ³ ]ᵀ,   (24)

where m_1(μ) = Σ_{i=1}^N (X_i − μ) and m_2(μ) = Σ_{i=1}^N (X_i − μ)². In Girolami and Calderhead (2011), the authors propose using the geometry of the manifold defined by the parameter space of the posterior distribution to accelerate the resulting Metropolis-adjusted Langevin algorithm. The authors in Girolami and Calderhead (2011) suggest using the expected Fisher information matrix to define the Riemannian metric, which in the context of reversible diffusions (Rey-Bellet and Spiliopoulos 2016) is equivalent to choosing B(μ, σ) to be the inverse of the sum of the expected Fisher information matrix and the negative Hessian of the log-prior. Straightforward computations yield

B = (σ²/N) [1 0; 0 1/2],   √B = (σ/√N) [1 0; 0 1/√2],   ∇·B = [0, σ/N]ᵀ.   (25)

As for the geometry-informed irreversible perturbation, let J = δ [0 1; −1 0] for δ ∈ ℝ. Then the relevant quantities are

(1/2)(JB + BJ) = (3σ²/(4N)) J,   (1/2) ∇·(JB + BJ) = [3δσ/(2N), 0]ᵀ.   (26)
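The identities in (26), including the row-wise divergence, are easy to verify numerically; σ, N, and δ below are arbitrary test values, not the experiment's settings:

```python
import numpy as np

# sigma, N, delta are arbitrary test values
sigma, N, delta = 1.7, 30, 2.0
J = delta * np.array([[0.0, 1.0], [-1.0, 0.0]])

def B_of(s):
    # the metric B from (25) as a function of sigma
    return (s ** 2 / N) * np.diag([1.0, 0.5])

lhs = 0.5 * (J @ B_of(sigma) + B_of(sigma) @ J)
rhs = (3.0 * sigma ** 2 / (4.0 * N)) * J              # first identity in (26)

# Row-wise divergence of (1/2)(JB + BJ) in (mu, sigma): no entry depends
# on mu, so only the d/dsigma derivatives survive; check by finite differences.
eps = 1e-6
M_plus = 0.5 * (J @ B_of(sigma + eps) + B_of(sigma + eps) @ J)
M_minus = 0.5 * (J @ B_of(sigma - eps) + B_of(sigma - eps) @ J)
fd_div = (M_plus[:, 1] - M_minus[:, 1]) / (2.0 * eps)  # [dM12/dsigma, dM22/dsigma]
analytic_div = np.array([3.0 * delta * sigma / (2.0 * N), 0.0])
```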

In the experiments, we have N = 30, h = 10^{-3}, δ = 2, and we simulate M = 1000 independent trajectories of each system up to T = 1000, for a total of K = 10^6 steps. The initial condition is chosen to be μ = 5 and σ = 20, which is consistent with the choice in Girolami and Calderhead (2011). The data are subsampled at a rate of n = 6 per stochastic gradient computation. Each trajectory is allotted a burn-in time of T_b = 10. The dataset is generated by drawing samples from a normal distribution with μ_true = 0 and σ_true = 10. The observables we study are ϕ1(μ, σ) = μ + σ and ϕ2(μ, σ) = μ² + σ². We plot the MSE, squared bias, and variance of the resulting estimators for each observable in Figures 5 and 6. Moreover, in Table 3 we report the asymptotic variance of the estimators for each of the five systems. We plot the kernelized Stein discrepancy in Figure 7. Notice that the irreversibly perturbed systems reach the K^{-1/2} convergence rate (see Liu et al. (2016)) faster than the reversibly perturbed system. The main takeaway is that an irreversible perturbation that is adapted to the existing reversible perturbation performs much better than an irreversible perturbation applied without regard to the underlying geometry. Notice that the reversible perturbation considered here still improves the performance of the long term average estimator despite the fact that B − I is not positive definite on the state space. Indeed, while B − I being positive definite is a sufficient condition for improved performance, it is not a necessary one (Rey-Bellet and Spiliopoulos 2016). The reason for the reduced asymptotic variance observed here is that the reversible perturbation B has eigenvalues larger than one where the bulk of the posterior distribution lies.
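The subsampled gradient used in this experiment is an unbiased estimate of the full-data gradient (24); a minimal sketch with synthetic data (the dataset below is generated for illustration and is not the paper's exact dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0.0, 10.0, size=30)      # synthetic data; mu_true = 0, sigma_true = 10
N, n = X.size, 6                        # subsample n = 6 of N = 30 per step

def full_grad(mu, sigma):
    # Eq. (24)
    m1, m2 = np.sum(X - mu), np.sum((X - mu) ** 2)
    return np.array([m1 / sigma ** 2, -N / sigma + m2 / sigma ** 3])

def stoch_grad(mu, sigma):
    # SGLD-style estimate: rescale a random minibatch by N/n; the -N/sigma
    # term comes from the deterministic -N log(sigma) part and is not subsampled
    idx = rng.choice(N, size=n, replace=False)
    m1, m2 = np.sum(X[idx] - mu), np.sum((X[idx] - mu) ** 2)
    return np.array([(N / n) * m1 / sigma ** 2,
                     -N / sigma + (N / n) * m2 / sigma ** 3])

mu0, sigma0 = 1.0, 8.0
avg = np.mean([stoch_grad(mu0, sigma0) for _ in range(20000)], axis=0)
```

Averaging many minibatch gradients recovers the full-data gradient, which is what makes the combination of irreversibility and stochastic gradients viable.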

Fig. 5. Observable: ϕ1(μ, σ) = μ + σ, δ = 2. Stochastic gradients are computed

Fig. 6. Observable: ϕ2(μ, σ) = μ² + σ², δ = 2. Stochastic gradients are computed

Table 3.

Asymptotic variance estimates for the parameters of a normal distribution example. Stochastic gradients are employed

Method   E[AVar ϕ1]   Std[AVar ϕ1]   E[AVar ϕ2]   Std[AVar ϕ2]
LD 55.29 21.52 8332 4359
RM 20.63 6.019 4034 1378
Irr 5.791 2.638 2169 1072
RMIrr 6.512 2.226 1729 631.2
GiIrr 1.400 0.4697 479.4 170.8

Fig. 7. Kernelized Stein discrepancy plot for the parameters-of-a-normal-distribution example. The black line has slope −1/2, which denotes the expected convergence rate

Figure 4 shows single and mean trajectories over the burn-in period for each of the five systems. The plot shows that the geometry-informed irreversible perturbation is able to find the bulk of the distribution sooner than the other systems, without incurring additional errors due to stiffness.

Fig. 4. Trajectory burn-in: each trajectory is run for T = 2.5. Left: single trajectories; right: mean paths. The gradients are computed exactly here

To show that the GiIrr perturbation is not intimately tied to the stochastic gradient, we also report the results for each system when the gradients are computed exactly in Table 4. We see that there is little meaningful difference in the results compared to when stochastic gradients are used.

Table 4.

Asymptotic variance estimates for the parameters of a normal distribution example. The gradients are computed exactly

Method   E[AVar ϕ1]   Std[AVar ϕ1]   E[AVar ϕ2]   Std[AVar ϕ2]
LD (no SG) 48.51 17.53 7339 3707
RM (no SG) 20.91 6.445 3855 1406
Irr (no SG) 5.658 2.108 2265 1191
RMIrr (no SG) 6.276 2.075 1648 565.1
GiIrr (no SG) 1.363 0.4223 492.9 183.8

In Figure 7 we plot the KSD for the parameters-of-a-normal-distribution example. We see that GiIrr yields lower KSD than all other perturbations.

Bayesian logistic regression

Next we consider Bayesian logistic regression. Given data {(x_i, t_i)}_{i=1}^N, where x_i ∈ ℝ^d and t_i ∈ {0, 1}, we seek a logistic function, parameterized by weights w ∈ ℝ^d, that best fits the data. The weights are obtained in a Bayesian fashion: we endow the weights with a prior and seek to characterize their posterior distribution via sampling. Define φ(y) to be the logistic function φ(y) = (1 + exp(−y))^{-1}. The log-likelihood function is

l(w) = Σ_{i=1}^N t_i x_iᵀ w − Σ_{i=1}^N log(1 + exp(x_iᵀ w)).   (27)

The prior for the weights is normally distributed with mean zero and covariance α^{-1} I. The gradient of the log-posterior is

∇_w log π(w | X) = −αw + Σ_{i=1}^N t_i x_i − Σ_{i=1}^N φ(x_iᵀ w) x_i.   (28)

This term is used in the drift of the Langevin dynamics when the gradient of the log-likelihood is computed in full at every step. If the data are subsampled, as in SGLD, we instead compute

∇_w log π̃(w | X) = −αw + (N/n) Σ_{i=1}^n t_{τ_i} x_{τ_i} − (N/n) Σ_{i=1}^n φ(x_{τ_i}ᵀ w) x_{τ_i}.   (29)

We use the german dataset described in Gershman et al. (2012) for the numerical experiments. In this problem, there are 20 weight parameters to be learned. The training dataset is of size N = 400, and we subsample at a rate of n = 10 per likelihood computation. The time step is h = 10^{-4}, and we take K = 4×10^5 steps. The initial condition is chosen to be the zero vector. We generate the skew-symmetric matrix by constructing a lower triangular matrix with entries randomly drawn from {+1, −1} and then subtracting its transpose. The diagonal is then set to zero, and the matrix is scaled to have norm one.
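The construction of this random skew-symmetric matrix can be sketched as follows; the use of the spectral norm in the normalization is our assumption, since the text does not specify which matrix norm is meant:

```python
import numpy as np

def random_skew(d, rng):
    """Random skew-symmetric matrix as described in the text: +-1 strictly
    lower triangle, antisymmetrized, zero diagonal, scaled to unit norm
    (spectral norm here -- an assumption; the text does not specify)."""
    L = np.tril(rng.choice([-1.0, 1.0], size=(d, d)), k=-1)
    J = L - L.T
    return J / np.linalg.norm(J, 2)

rng = np.random.default_rng(0)
J = random_skew(20, rng)       # d = 20 weight parameters in this example
```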

As for the Riemannian manifold Langevin dynamics, in Girolami and Calderhead (2011) the authors use the expected Fisher information matrix plus the negative Hessian of the log-prior as the underlying metric, which in this case is equal to

G(w) = αI + XΛ(w)Xᵀ,   (30)

where Λ is a diagonal matrix with entries Λ_{ii}(w) = (1 − φ(x_iᵀw)) φ(x_iᵀw) and x_i is the i-th column of X. The resulting reversible perturbation uses the inverse of G(w). This perturbation, however, does not lead to accelerated convergence to the invariant measure, since the eigenvalues of G are large. This implies that the eigenvalues of G^{-1} are less than one, so G^{-1}(w) − I is not positive definite, a condition that needs to be satisfied to guarantee accelerated convergence (Rey-Bellet and Spiliopoulos 2015). To alleviate this issue, we consider the reversible perturbation B(w) = I + G^{-1}(w). This guarantees that B(w) is positive definite for all w; the drawback is that computing the square root of B(w) requires explicitly computing, or at least approximating, the inverse of G(w) repeatedly in the simulation (and not just computing the action of the inverse). This additional computational cost is incurred for all examples that consider a geometry-informed perturbation, both reversible and irreversible. We show the results of this state-dependent perturbation in Figures 8 and 9 and report the asymptotic variance in Table 5. The geometry-informed irreversible perturbation does provide improvement over all other perturbations. We observe that the asymptotic variance is reduced by half relative to RM, with little additional computational effort. Most of the computational cost of applying GiIrr is due to the evaluation of the reversible perturbation. We therefore emphasize that if one is already applying a reversible perturbation to the Langevin dynamics, the marginal cost of also applying the GiIrr perturbation is negligible.
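A sketch of how B(w) = I + G(w)^{-1} and its symmetric square root might be formed; the data layout (points as columns of X), the name `prior_prec` for the prior-precision contribution to the metric, and the eigendecomposition route are our assumptions:

```python
import numpy as np

def metric_and_sqrt(w, X, prior_prec):
    """B(w) = I + G(w)^{-1}, with G(w) built from the logistic weights as in
    (30); X is (d, N) with data points as columns (assumed layout)."""
    d = X.shape[0]
    p = 1.0 / (1.0 + np.exp(-X.T @ w))            # phi(x_i^T w), shape (N,)
    G = prior_prec * np.eye(d) + (X * (p * (1.0 - p))) @ X.T
    B = np.eye(d) + np.linalg.inv(G)              # requires an explicit inverse
    vals, vecs = np.linalg.eigh(B)                # B is SPD by construction
    B_sqrt = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    return B, B_sqrt

rng = np.random.default_rng(0)
d, N = 5, 40
X = rng.standard_normal((d, N))
B, B_sqrt = metric_and_sqrt(rng.standard_normal(d), X, prior_prec=1.0)
```

Note that every eigenvalue of B exceeds one, which is the point of adding the identity to G^{-1}.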

Fig. 8. Observable: ϕ1(w) = Σ_i w_i. Bayesian logistic regression. Here, d = 20

Fig. 9. Observable: ϕ2(w) = Σ_i w_i². Bayesian logistic regression. Here, d = 20

Table 5.

Asymptotic variance estimates for the Bayesian logistic regression example with a state-dependent metric

Method   E[AVar ϕ1]   Std[AVar ϕ1]   E[AVar ϕ2]   Std[AVar ϕ2]
LD 1.967 0.9995 23.77 12.52
RM 1.328 0.6538 15.35 7.348
Irr 1.163 0.5698 14.84 7.738
RMIrr 0.8775 0.4228 10.68 5.306
GiIrr 0.7148 0.3450 8.798 4.490

In Figure 10 we plot the KSD for the Bayesian logistic regression example. We see that GiIrr has slightly lower KSD than the other perturbations, but the differences are small. The theoretical slope of K^{-1/2} is still realized.

Fig. 10. Kernelized Stein discrepancy plot for the Bayesian logistic regression example. The black line has slope −1/2, which denotes the expected convergence rate

Independent component analysis

Our last example considers the problem of blind signal separation addressed in Welling and Teh (2011) and Amari et al. (1996). This problem yields a posterior that is strongly non-Gaussian and multi-modal, and we show that GiIrr has substantially better sampling performance than standard reversible and irreversible perturbations. Suppose there are m separate unknown independent signals s_i(t), i = 1, …, m, that are mixed by a mixing matrix M ∈ ℝ^{d×d}. Suppose we can observe the mixed signals X(t) = Ms(t) for N instances in time. The goal of independent component analysis is to infer a de-mixing matrix W such that the m signals are recovered up to a nonzero constant and a permutation. As such, this problem is generally ill-posed, but it is well suited to a Bayesian treatment. The ICA literature suggests that, based on real-world data, it is best to assume a likelihood model with large kurtosis. Following Welling and Teh (2011) and Amari et al. (1996), let p(y_i) = (1/4) sech²(y_i/2). The prior on the weights W_{ij} is Gaussian with zero mean and precision λ. The posterior density satisfies

p(W | X) ∝ |det W| ∏_{i=1}^m p(w_iᵀ x) ∏_{i,j} N(W_{ij}; 0, λ^{-1}).   (31)

The gradient of the log posterior with respect to the matrix W is then

f(W) = ∇_W log p(W | X) = N(Wᵀ)^{-1} − Σ_{n=1}^N tanh(y_n/2) x_nᵀ − λW.   (32)

It is suggested in Amari et al. (1996) that the natural gradient should be used instead of the gradient above, to account for the information geometry of the problem. Specifically, Teh et al. (2016) and Amari et al. (1996) post-multiply the gradient by WᵀW and arrive at the so-called natural gradient of the system

D_W := ( N I_d − Σ_{n=1}^N tanh(y_n/2) y_nᵀ ) W − λ W WᵀW.   (33)

In the context of RMLD, this is equivalent to perturbing the system with the reversible perturbation B̃(W) = WᵀW ⊗ I_d pre-multiplying the vectorized gradient. That is, we have

vec(f(W) WᵀW) = (WᵀW ⊗ I_d) vec f(W).

This choice of reversible perturbation, however, may not be sufficient for accelerating the convergence of the Langevin dynamics, as B̃(W) − I_{d²} is not positive definite throughout the state space (Rey-Bellet and Spiliopoulos 2016). Instead, we choose the reversible perturbation B(W) = I_{d²} + (WᵀW ⊗ I_d) = (I_d + WᵀW) ⊗ I_d.

We construct the GiIrr term as follows. To take advantage of the matrix structure of the reversible perturbation, we choose the skew-symmetric matrix such that it acts within the computation of the natural gradient. We choose J = (I_d ⊗ C_0) + (C_0 ⊗ I_d), where C_0 has the same sign pattern as (22) but is scaled so that J has matrix norm equal to 1. Then the geometry-informed irreversible perturbation is

(1/2) B(W)J + (1/2) JB(W) = ((I_d + WᵀW) ⊗ C_0) + (C_0 ⊗ I_d) + (1/2)((WᵀW C_0) ⊗ I_d) + (1/2)((C_0 WᵀW) ⊗ I_d).
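The Kronecker-product identity above can be checked directly with NumPy; the matrix C_0 below is an arbitrary skew-symmetric test matrix standing in for the paper's choice, and d = 3 matches the experiment's signal count:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
W = rng.standard_normal((d, d))
# arbitrary skew-symmetric stand-in for C0
C0 = np.triu(rng.standard_normal((d, d)), k=1)
C0 = C0 - C0.T

I = np.eye(d)
B = np.kron(I + W.T @ W, I)                 # B(W) = (I + W^T W) ⊗ I
J = np.kron(I, C0) + np.kron(C0, I)         # J = I ⊗ C0 + C0 ⊗ I

lhs = 0.5 * (B @ J + J @ B)
rhs = (np.kron(I + W.T @ W, C0) + np.kron(C0, I)
       + 0.5 * np.kron(W.T @ W @ C0, I) + 0.5 * np.kron(C0 @ W.T @ W, I))
```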

To simulate the RM and GiIrr systems, correction terms (such as ∇·B(θ)) need to be computed. The correction terms are derived using the symbolic algebra toolbox in MATLAB. Since the perturbations are vectors of polynomials, the symbolic algebra toolbox can easily derive and efficiently evaluate the correction terms.
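The same correction-term computation can be reproduced with SymPy (a Python analogue of the MATLAB symbolic toolbox used here), illustrated on the 2×2 metric from the parameters-of-a-normal-distribution example with N = 30:

```python
import sympy as sp

# Row-wise divergence (div B)_i = sum_j d B_ij / d theta_j, for the 2x2
# metric B = (sigma^2/N) diag(1, 1/2) with N = 30 (illustrative value).
mu, sigma = sp.symbols('mu sigma')
N = 30
theta = [mu, sigma]
B = sp.Matrix([[sigma**2 / N, 0],
               [0, sigma**2 / (2 * N)]])
div_B = sp.Matrix([sum(sp.diff(B[i, j], theta[j]) for j in range(2))
                   for i in range(2)])
```

For this metric the divergence comes out to [0, σ/N]ᵀ, matching (25).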

For the numerical experiments, we synthetically generate m = 3 signals, one of which is Laplace distributed and two of which are distributed according to the squared hyperbolic secant distribution. The posterior distribution is d = 9 dimensional, there are a total of N = 400 data points, and the gradient is approximated by subsampling n = 40 data points per estimate. The initial condition is chosen to be a diagonal matrix with +1 or −1 entries, chosen randomly. Since the posterior is nine-dimensional and highly multimodal, it is difficult to evaluate its marginal densities directly, i.e., without sampling. Instead, we establish a baseline reference density by simulating the standard Langevin dynamics with exact computation of the likelihood over all the data up to T = 10000 with h = 10^{-4}. One- and two-dimensional marginals of this baseline posterior distribution are plotted in Figure 11. The two-dimensional marginals highlight the challenges of sampling from this posterior. In Figure 12, we show trace plots of the W_{11} variable for each system. By visual inspection, we see that mixing is best for the geometry-informed irreversibly perturbed system. One can intuitively expect that, with better mixing, geometry-informed irreversibility should yield better estimation performance than the other systems. We assess this quantitatively below.

Fig. 11. Posterior distribution sampled with standard Langevin with a deterministic gradient, with T = 10000 and h = 10^{-4}. Notice that the posterior is very multimodal and non-Gaussian

Fig. 12. Trace plots of the W_{11} marginal

As in the previous example, we simulate the five systems and compute the asymptotic variances of the chosen observables for each system. Each system is simulated independently 100 times up to time T = 2000 with h = 2×10^{-5}. The smaller step size is needed to account for the additional stiffness that irreversible perturbations introduce. Since the true mean of the posterior distribution is unknown, and because standard sampling methods fail to sample the posterior adequately enough to obtain a reasonable estimate of the mean, we only plot the variances of the selected observables with respect to K in Figure 13. To compute the asymptotic variance, we allot a burn-in time of T_b = 20. The observables we estimate are ϕ1(W) = Σ_{i,j} W_{ij}, ϕ2(W) = Σ_{i,j} W_{ij}², and ϕ3(W) = (Σ_{i,j} W_{ij})². The asymptotic variance numbers confirm that the faster mixing observed for the geometry-adapted irreversible perturbation does lead to a better sampling method. The values of the asymptotic variance are reported in Table 6. The results for the asymptotic variance and variance of ϕ2 are somewhat noisy, which is why GiIrr may appear to perform similarly to the other sampling methods there. When estimating the posterior mean and an observable (ϕ3) that includes cross-moments, the geometry-informed irreversible perturbation outperforms standard irreversibility applied to the reversible perturbation.

Fig. 13. Variance of running average estimators. For the second moment (middle plot), there is less difference among the samplers, as the distribution is quite symmetric and one can properly estimate the second moment even if a sampler is stuck in a single mode. GiIrr is able to estimate the observable with cross-moments (right plot) better than the other samplers

Table 6.

Asymptotic variance estimates for the ICA example

Method   E[AVar ϕ1]   Std[AVar ϕ1]   E[AVar ϕ2]      Std[AVar ϕ2]    E[AVar ϕ3]   Std[AVar ϕ3]
LD       80.40        23.61          9.445×10^{-4}   3.616×10^{-4}   50.17        17.52
RM       53.01        12.22          5.489×10^{-4}   1.607×10^{-4}   26.75        8.442
Irr      52.38        17.27          6.854×10^{-4}   2.035×10^{-4}   27.02        9.134
RMIrr    39.20        10.36          5.794×10^{-4}   1.873×10^{-4}   19.47        6.086
GiIrr    15.50        4.441          1.253×10^{-3}   3.800×10^{-4}   6.381        1.777

In Figure 14 we plot the convergence of KSD for the ICA example. GiIrr yields lower KSD than the other perturbations in this example, and the theoretical slope of K^{-1/2} is also realized.

Fig. 14. Kernelized Stein discrepancy plot for the ICA example. The black line has slope −1/2, which denotes the expected convergence rate

Conclusion

We presented a novel irreversible perturbation, GiIrr, that accelerates the convergence of Langevin dynamics. By introducing an irreversible perturbation that incorporates any given underlying reversible perturbation, which can also be interpreted as defining a Riemannian metric, we have shown through numerical examples that geometry-informed irreversible perturbations outperform those that are not informed as such. In the examples, we found that GiIrr seems to perform best when the target distribution is highly non-Gaussian.

Most of our numerical examples used stochastic gradients to cut down on computational effort in sampling each trajectory. This demonstrates that SGLD can be used in conjunction with irreversibility for practical computations.

We also provided some analysis on how irreversibility interacts with discretization of the SDE systems. Irreversibility introduces additional stiffness into the system, which may lead to additional bias or variance in the estimator. For practical purposes, one can simply choose a small enough step size so that the asymptotic bias and variance are sufficiently small. At the same time, we note an example (see Appendix A) where the introduction of the irreversible term, once discretized, leads to no improvement in the long term average estimator.

Future work could study the use of novel integrators which circumvent stiffness. For example, Jianfeng and Spiliopoulos (2018) uses a multiscale integrator, but it is not readily adapted to the data-driven setting of Bayesian inference. Another direction for future work is to theoretically characterize the performance of the geometry-informed irreversible perturbation and to compare it with that of other perturbations. A starting point for such an analysis could be the general results of Rey-Bellet and Spiliopoulos (2016), in particular the large deviations Theorem 1 together with Propositions 2–4 therein. Preliminary investigation of this direction showed that it is a promising avenue for a theoretical investigation, but non-trivial work and a finer analysis are needed to demonstrate the effects of this class of irreversible perturbations. We leave this for future work, as such an analysis is outside the scope of this paper. Our goal in this paper has been to introduce the perturbation and showcase its potential through simulation examples.

Appendix A The effects of discretization

In this section, we study the effects of discretization in the setting of an irreversibly perturbed Langevin system. Results in full generality are, as yet, elusive; therefore we only consider a Gaussian example, as it still provides insight into how irreversibility interacts with discretization in impacting the asymptotic and finite sample bias and variance of the long term average estimator. While we do not present the results when a stochastic gradient is used, we note that the results are similar and can be easily extended based on what we present here. Recall that,

A = (1/2)(I + J)(Γ_θ + NΓ_X),   D = (1/2)(I + J) Γ_X Σ_{i=1}^N X_i,   where J = δ [0 1; −1 0].

For this analysis, all precision matrices are 2×2 scalar matrices; that is, we assume Γ_θ = σ_θ^{-2} I and Γ_X = σ_X^{-2} I. This is distinct from the example in Section 3.2, since the precision matrices there are diagonal but not scalar. Let b = 1/(2σ_X²) and S_X = Σ_{i=1}^N X_i, so that D = b(I + J)S_X.

We summarize our findings here. For fixed discretization size h and scalar precision matrices as defined above, and introducing the irreversible perturbation scaled by δ, we find the following:

  • The asymptotic bias for linear observables is zero; that is, E[θ_∞] = μ_p;

  • The asymptotic variance for linear observables increases. We found that

    Tr Var[θ_∞] = 2 / (2a − ha²(1 + δ²)),   (34)

    where a = (1/2)(1/σ_θ² + N/σ_X²);

  • The finite time estimator for the observable ϕ(θ) = θ_1 + θ_2 has lower bias and variance;

  • The finite time estimator for the observable ϕ(θ) = ‖θ‖² has higher bias and variance.

We focus on the finite time results and omit the asymptotic results, since the former are of more practical interest. The computations for both are similar.

Finite time analysis: bias for linear observables. We study how the magnitude of the irreversibility, characterized by δ, impacts the mean-squared error MSE = E‖θ̄_K − μ_p‖², where θ̄_K = (1/K) Σ_{k=0}^{K−1} θ_k. We approach this quantity via its bias-variance decomposition:

MSE = ‖E[θ̄_K] − μ_p‖² + Tr Var(θ̄_K).   (35)

First, we compute the expected value of the sample average, E[θ̄_K] = (1/K) Σ_{k=0}^{K−1} E[θ_k]. For simplicity, we assume that the initial condition is always θ_0 = 0. For any k, we have

E[θ_k] = (I − hA) E[θ_{k−1}] + hD = (I − hA)^k θ_0 + h Σ_{n=0}^{k−1} (I − hA)^n D = h(Ah)^{-1}(I − (I − hA)^k) D = A^{-1}D − A^{-1}(I − hA)^k D.

This yields

E[θ̄_K] = (1/K) Σ_{k=0}^{K−1} [ A^{-1}D − A^{-1}(I − hA)^k D ] = A^{-1}D − (1/K) A^{-1}(Ah)^{-1}(I − (I − hA)^K) D.

Since μ_p = A^{-1}D, the bias is

bias = −(1/(Kh)) A^{-2} (I − (I − hA)^K) D.

The norm of the bias can in fact be computed. Note that AᵀA = a²(1 + δ²)I and that A is normal, so (A^{-2})ᵀ A^{-2} = a^{-4}(1 + δ²)^{-2} I. We then have

‖bias‖² = (1/(K²h²)) Dᵀ(I − (I − hAᵀ)^K)(A^{-2})ᵀA^{-2}(I − (I − hA)^K)D = (1/(K²h²a⁴(1 + δ²)²)) Dᵀ(I − (I − hAᵀ)^K)(I − (I − hA)^K)D = (b²/(K²h²a⁴(1 + δ²)²)) S_Xᵀ(I + J)ᵀ(I − (I − hAᵀ)^K)(I − (I − hA)^K)(I + J)S_X.

The inner matrix can be computed. Since the matrices above are simultaneously diagonalizable, we only need to consider their eigenvalues. Note that I + J is a normal matrix, so we may write the eigenvalue decomposition I + J = PQP*, where * denotes the conjugate transpose, Q = diag(1 + iδ, 1 − iδ), and

P = (1/√2) [1 1; i −i]

is unitary. Now note that

I − (I − hA)^K = P diag( 1 − (1 − ah(1 + iδ))^K, 1 − (1 − ah(1 − iδ))^K ) P*,

which implies

(I − (I − hAᵀ)^K)(I − (I − hA)^K) = |1 − (1 − ah(1 + iδ))^K|² I.

Using the fact that (I + J)ᵀ(I + J) = (1 + δ²)I, we have the following:

‖bias‖² = (b²/(K²h²a⁴(1 + δ²))) |1 − (1 − a(1 + iδ)h)^K|² ‖S_X‖².   (36)

To simplify further, we write 1 − a(1 + iδ)h = re^{iθ}, where r² = (1 − ah)² + δ²a²h² and tan θ = −δah/(1 − ah). Then we obtain

‖bias‖² = (b²/(K²h²a⁴(1 + δ²))) |1 − r^K e^{iKθ}|² ‖S_X‖² = (b²/(K²h²a⁴(1 + δ²))) (1 + r^{2K} − 2r^K cos(Kθ)) ‖S_X‖².

We know that r < 1, since otherwise the numerical scheme would be unstable. It is easy to see that for large, but not infinite, K, the bias decays as O(1/(Kh√(1 + δ²))), so the introduction of irreversibility decreases the constant in front of the expression and therefore slightly improves the convergence of the bias.
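The closed-form bias above can be cross-checked by iterating the mean recursion directly; the values of a, δ, h, K, b, and S_X below are arbitrary test choices:

```python
import numpy as np

# a, delta, h, K, b, SX are arbitrary test values
a, delta, h, K = 1.0, 2.0, 0.01, 500
b, SX = 0.5, np.array([1.0, -2.0])
I = np.eye(2)
J = delta * np.array([[0.0, 1.0], [-1.0, 0.0]])
A = a * (I + J)
D = b * (I + J) @ SX

# direct iteration of E[theta_k] = (I - hA) E[theta_{k-1}] + hD, theta_0 = 0
m, acc = np.zeros(2), np.zeros(2)
for _ in range(K):
    acc += m                         # accumulate theta_0, ..., theta_{K-1}
    m = (I - h * A) @ m + h * D
mean_bar = acc / K

mu_p = np.linalg.solve(A, D)
MK = np.linalg.matrix_power(I - h * A, K)
bias = -(1.0 / (K * h)) * np.linalg.solve(A @ A, (I - MK) @ D)

# closed-form squared norm of the bias, Eq. (36)
z = 1.0 - a * (1.0 + 1j * delta) * h
norm2 = b ** 2 / (K ** 2 * h ** 2 * a ** 4 * (1 + delta ** 2)) \
        * abs(1.0 - z ** K) ** 2 * np.dot(SX, SX)
```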

Finite time analysis: variance for linear observables. For simplicity, we assume θ0=0. We compute TrVar(θ¯K). We begin with

Tr Var(θ̄_K) = Tr E[θ̄_K θ̄_Kᵀ] − Tr E[θ̄_K] E[θ̄_K]ᵀ,

and compute these terms separately. It is difficult to discern a relationship between δ and Tr Var(θ̄_K) even with exact formulas, so we appeal to plots of the expressions to see that the variance decreases with irreversibility. We computed E[θ̄_K] in the previous section.

With the observation that

A^{-2}(I − (I − hA)^K) E[D] = (b/a²) P Q_K P* E[S_X],

where

Q_K = diag( (1 − (1 − ah(1 + iδ))^K)/(1 + iδ),  (1 − (1 − ah(1 − iδ))^K)/(1 − iδ) )

and P is defined in the previous section, we compute that

Tr E[θ̄_K] E[θ̄_K]ᵀ = ‖μ_p‖² + ‖bias‖² − (2b²/(Kha³(1 + δ²))) Re{ (1 − iδ)(1 − (1 − ah(1 + iδ))^K) } ‖E[S_X]‖².   (37)

The other term is more complicated and needs to be approached more gingerly. Observe that

Tr E[θ̄_K θ̄_Kᵀ] = (1/K²) Σ_{i,j=0}^{K−1} Tr E[θ_i θ_jᵀ] = (1/K²) [ Σ_{i=0}^{K−1} Tr E[θ_i θ_iᵀ] + 2 Σ_{0≤i<j≤K−1} Tr E[θ_i θ_jᵀ] ].

We take each term individually. To compute E[θ_k θ_kᵀ], it is actually better to consider the covariance matrix of θ_k, Σ_k = E[θ_k θ_kᵀ] − E[θ_k] E[θ_k]ᵀ.

We first compute

E[θ_k θ_kᵀ] = (I − hA) E[θ_{k−1} θ_{k−1}ᵀ] (I − hA)ᵀ + h²DDᵀ + hI + h(I − hA) E[θ_{k−1}] Dᵀ + hD E[θ_{k−1}]ᵀ (I − hA)ᵀ,
E[θ_k] E[θ_k]ᵀ = (I − hA) E[θ_{k−1}] E[θ_{k−1}]ᵀ (I − hA)ᵀ + hD E[θ_{k−1}]ᵀ (I − hA)ᵀ + h(I − hA) E[θ_{k−1}] Dᵀ + h²DDᵀ,

which imply the following recurrence relation. Assuming Σ0=0, we have

Σ_k = (I − hA) Σ_{k−1} (I − hA)ᵀ + hI = h Σ_{n=0}^{k−1} [ (I − hA)(I − hA)ᵀ ]^n = h Σ_{n=0}^{k−1} ( I − (A + Aᵀ)h + h²AAᵀ )^n = ( (A + Aᵀ) − hAAᵀ )^{-1} ( I − (I − (A + Aᵀ)h + AAᵀh²)^k ).

Let s = 1 − 2ah + h²a²(1 + δ²). Then, recalling that A + Aᵀ = 2aI and AAᵀ = a²(1 + δ²)I, the above sum equals ((1 − s^k)/(1 − s)) hI. Therefore,

Tr Σ_k = 2h(1 − s^k)/(1 − s).   (38)
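Equation (38) can be verified against the covariance recursion itself; a minimal sketch, with illustrative parameter values:

```python
import numpy as np

# a, delta, h, kmax are illustrative test values
a, delta, h, kmax = 1.0, 3.0, 0.005, 200
I = np.eye(2)
A = a * (I + delta * np.array([[0.0, 1.0], [-1.0, 0.0]]))
s = 1.0 - 2.0 * a * h + h ** 2 * a ** 2 * (1.0 + delta ** 2)

Sigma = np.zeros((2, 2))                 # Sigma_0 = 0
M = I - h * A
for _ in range(kmax):
    Sigma = M @ Sigma @ M.T + h * I      # covariance recursion
trace_formula = 2.0 * h * (1.0 - s ** kmax) / (1.0 - s)   # Eq. (38)
```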

Meanwhile note that

E[θ_k] = μ_p − A^{-1}(I − hA)^k D.

Therefore,

Tr E[θ_k]E[θ_k]ᵀ = E[θ_k]ᵀE[θ_k] = ‖μ_p‖² + Dᵀ(I − hAᵀ)^k A^{-T}A^{-1} (I − hA)^k D − 2μ_pᵀ A^{-1}(I − hA)^k D = ‖μ_p‖² + s^k (b²/(a²(1 + δ²))) ‖(I + J)S_X‖² − 2μ_pᵀ A^{-1}(I − hA)^k D.

We now take the sum for each expression from k=0 to K-1. We have

Σ_{i=0}^{K−1} Tr Σ_i = (2h/(1 − s)) ( K − Σ_{i=0}^{K−1} s^i ) = 2h [ K/(1 − s) − (1 − s^K)/(1 − s)² ]

and

Σ_{i=0}^{K−1} E[θ_i]ᵀE[θ_i] = K‖μ_p‖² + ((1 − s^K) b²/(a²(1 + δ²)(1 − s))) ‖(I + J)S_X‖² − 2μ_pᵀ A^{-1}(hA)^{-1}(I − (I − hA)^K)D = K‖μ_p‖² + ((1 − s^K) b²/(a²(1 + δ²)(1 − s))) ‖(I + J)S_X‖² − 2μ_pᵀ h^{-1}A^{-2}(I − (I − hA)^K)D.

For the cross-terms, observe that we may write

Σ_{0≤i<j≤K−1} θ_i θ_jᵀ = Σ_{i=0}^{K−1} θ_i ( Σ_{j=i+1}^{K−1} θ_j )ᵀ,   (39)

which can be simplified further. First note that

θ_j = (I − hA) θ_{j−1} + Dh + √h ξ_{j−1} = (I − hA)^{j−i} θ_i + h Σ_{n=0}^{j−i−1} (I − hA)^n D + √h Σ_{n=0}^{j−i−1} (I − hA)^n ξ_{j−1−n}.

Plugging this expression into the double sum above, we have

Σ_{i=0}^{K−1} θ_i Σ_{j=i+1}^{K−1} [ θ_iᵀ (I − Aᵀh)^{j−i} + h Σ_{n=0}^{j−i−1} Dᵀ (I − Aᵀh)^n + √h Σ_{n=0}^{j−i−1} ξ_{j−1−n}ᵀ (I − Aᵀh)^n ].   (40)

Taking expectations, we have

Σ_{i=0}^{K−1} Σ_{j=i+1}^{K−1} [ E[θ_i θ_iᵀ](I − hAᵀ)^{j−i} + h E[θ_i] Dᵀ Σ_{n=0}^{j−i−1} (I − hAᵀ)^n ] = Σ_{i=0}^{K−1} Σ_{j=i+1}^{K−1} [ E[θ_i θ_iᵀ](I − hAᵀ)^{j−i} + h E[θ_i] Dᵀ (Aᵀh)^{-1} ( I − (I − Aᵀh)^{j−i} ) ].

Carrying out the computation for the first term, we have

F = Σ_{i=0}^{K−1} E[θ_i θ_iᵀ] Σ_{j=i+1}^{K−1} (I − Aᵀh)^{j−i} = Σ_{i=0}^{K−1} E[θ_i θ_iᵀ] (I − Aᵀh)(Aᵀh)^{-1} ( I − (I − Aᵀh)^{K−1−i} ).

For the second term we have,

Σ_{i=0}^{K−1} E[θ_i] μ_pᵀ Σ_{j=i+1}^{K−1} ( I − (I − Aᵀh)^{j−i} ) = Σ_{i=0}^{K−1} E[θ_i] μ_pᵀ [ (K − 1 − i) I − (I − Aᵀh)(Aᵀh)^{-1} ( I − (I − Aᵀh)^{K−1−i} ) ].

The summations are difficult to compute precisely, so we evaluate them numerically instead. For simplicity, we assume that μ_p = E[S_X] = [0, 0]ᵀ and that σ_X and σ_θ are chosen such that a = 1. In this scenario, the bias is zero and only the variance contributes to the MSE. The variance is

Tr[Var θ̄_K] = (1/K²) ( 2h [ K/(1 − s) − (1 − s^K)/(1 − s)² ] + 2 Tr F ),

where E[θ_i θ_iᵀ] = Σ_i = ((1 − s^i)/(1 − s)) hI.

In Figure 15 we plot the variance for varying choices of δ. In these plots, h = 0.001, K = 2×10^5, and δ varies between zero and ten. We can clearly see that strengthening the irreversible perturbation reduces the variance of the long term average estimator.

Fig. 15. Variance for different δ, fixed h

Finite sample analysis for the quadratic observable ϕ(θ) = ‖θ‖². The previous finite sample results for the observable ϕ(θ) = θ_1 + θ_2 suggest that both the bias and variance of the long term average estimator go down as the irreversible term grows. In this section, we show that this is actually a special case: when the observable is not linear, the bias and variance may increase. We analyze the bias and variance of the long term average estimator of the observable ϕ(θ) = ‖θ‖². Define

ϕ̄ = ∫ ϕ(θ) π(θ) dθ,   ϕ̄_K = (1/K) Σ_{k=0}^{K−1} ϕ(θ_k).   (41)

As before, we assume that μ_p = [0, 0]ᵀ, E[S_X] = 0, and σ_X and σ_θ are chosen such that a = 1. We compute |E[ϕ̄_K] − ϕ̄|² and Var(ϕ̄_K) and see how they vary with δ. From previous computations, we can show that

E[ϕ̄_K] = 2h [ 1/(1 − s) − (1 − s^K)/(K(1 − s)²) ],   (42)

where s = 1 − 2ah + a²h²(1 + δ²). Given this, the only remaining term to compute is the second moment of the estimator:

E[ϕ̄_K²] = (1/K²) E[ ( Σ_{k=0}^{K−1} θ_kᵀθ_k )² ] = (1/K²) Σ_{k=0}^{K−1} E[(θ_kᵀθ_k)²] + (2/K²) Σ_{k=0}^{K−1} Σ_{l=k+1}^{K−1} E[(θ_kᵀθ_k)(θ_lᵀθ_l)].   (43)

To compute the first sum, consider the following:

θ_kᵀθ_k = θ_{k−1}ᵀ(I − hA)ᵀ(I − hA)θ_{k−1} + h ξ_{k−1}ᵀξ_{k−1} + 2√h θ_{k−1}ᵀ(I − hA)ᵀξ_{k−1},

and so we have

(θ_kᵀθ_k)² = s²(θ_{k−1}ᵀθ_{k−1})² + h²(ξ_{k−1}ᵀξ_{k−1})² + 4h(ξ_{k−1}ᵀ(I − hA)θ_{k−1})² + 2sh (θ_{k−1}ᵀθ_{k−1})(ξ_{k−1}ᵀξ_{k−1}) + 4s√h (θ_{k−1}ᵀθ_{k−1}) θ_{k−1}ᵀ(I − hA)ᵀξ_{k−1} + 4h^{3/2} (ξ_{k−1}ᵀξ_{k−1}) θ_{k−1}ᵀ(I − hA)ᵀξ_{k−1}.

Taking the expectation, we have

E[(θ_kᵀθ_k)²] = s²E[(θ_{k−1}ᵀθ_{k−1})²] + h²E[(ξ_{k−1}ᵀξ_{k−1})²] + 4hE[(ξ_{k−1}ᵀ(I − hA)θ_{k−1})²] + 2shE[(θ_{k−1}ᵀθ_{k−1})(ξ_{k−1}ᵀξ_{k−1})].

After simplifying, we arrive at the following recurrence relation:

E[(θ_kᵀθ_k)²] = s²E[(θ_{k−1}ᵀθ_{k−1})²] + 8h² + 8shE[θ_{k−1}ᵀθ_{k−1}].   (44)

Let β_k = E[(θ_kᵀθ_k)²], ζ_k = 8shE[θ_kᵀθ_k], and κ = 8h². We have the following recurrence, which we solve:

β_k = s²β_{k−1} + ζ_{k−1} + κ = s^{2k}β_0 + Σ_{n=0}^{k−1} s^{2n} ζ_{k−n−1} + Σ_{n=0}^{k−1} s^{2n} κ.

Using the earlier expression for E[θ_kᵀθ_k] in the term ζ_k, we have

β_k = Σ_{n=0}^{k−1} s^{2n} · 8sh · (2h(1 − s^{k−n−1})/(1 − s)) + κ (1 − s^{2k})/(1 − s²) = (16sh²/(1 − s)) Σ_{n=0}^{k−1} (s^{2n} − s^{k+n−1}) + 8h²(1 − s^{2k})/(1 − s²) = (16sh²/(1 − s)) [ (1 − s^{2k})/(1 − s²) − s^{k−1}(1 − s^k)/(1 − s) ] + 8h²(1 − s^{2k})/(1 − s²).

Next we compute the summation of the cross terms. Define Rk such that

Σ_{k=0}^{K−1} R_k = Σ_{k=0}^{K−1} Σ_{l=k+1}^{K−1} E[(θ_kᵀθ_k)(θ_lᵀθ_l)].   (45)

We write

θ_lᵀθ_l = sθ_{l−1}ᵀθ_{l−1} + hξ_{l−1}ᵀξ_{l−1} + 2√h ξ_{l−1}ᵀ(I − hA)θ_{l−1} = s^{l−k}θ_kᵀθ_k + Σ_{n=0}^{l−k−1} [ hs^n ξ_{l−n−1}ᵀξ_{l−n−1} + 2√h s^n ξ_{l−n−1}ᵀ(I − hA)θ_{l−n−1} ].

This implies that

R_k = Σ_{l=k+1}^{K−1} [ s^{l−k} E[(θ_kᵀθ_k)²] + Σ_{n=0}^{l−k−1} 2hs^n E[θ_kᵀθ_k] ] = Σ_{l=k+1}^{K−1} [ β_k s^{l−k} + 2hE[θ_kᵀθ_k] (1 − s^{l−k})/(1 − s) ] = β_k Σ_{l=k+1}^{K−1} s^{l−k} + (2hE[θ_kᵀθ_k]/(1 − s)) Σ_{l=k+1}^{K−1} (1 − s^{l−k}) = β_k (s − s^{K−k})/(1 − s) + (2hE[θ_kᵀθ_k]/(1 − s)) [ (K − 1 − k) − (s − s^{K−k})/(1 − s) ].

To summarize, we have

E[ϕ̄_K²] = (1/K²) Σ_{k=0}^{K−1} ( β_k + 2R_k ),   (46)

the squared bias is (E[ϕ̄_K] − ϕ̄)² = (E[ϕ̄_K] − 1)², since ϕ̄ = 1 when a = 1, and the variance is E[ϕ̄_K²] − E[ϕ̄_K]². These expressions do not simplify easily, so we plot them and study their trends. In Figure 16, we plot the squared bias and variance for fixed h and K and varying δ. In these plots, h = 0.001, K = 2×10^5, and δ varies between zero and ten. Notice that for these choices, both the squared bias and the variance increase as δ grows, showing that irreversibility provides no benefit and, in fact, harms the performance of the standard estimator.
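The closed form for β_k can likewise be checked against direct iteration of recurrence (44), using E[θ_kᵀθ_k] = 2h(1 − s^k)/(1 − s); the parameter values below are illustrative:

```python
import numpy as np

# a, h, delta, Ksteps are illustrative test values
a, h, delta, Ksteps = 1.0, 0.001, 4.0, 300
s = 1.0 - 2.0 * a * h + a ** 2 * h ** 2 * (1.0 + delta ** 2)

beta = 0.0                       # beta_0 = 0 since theta_0 = 0
for k in range(1, Ksteps + 1):
    # E[theta_{k-1}^T theta_{k-1}] = 2h(1 - s^{k-1})/(1 - s), cf. (38)
    second = 2.0 * h * (1.0 - s ** (k - 1)) / (1.0 - s)
    beta = s ** 2 * beta + 8.0 * h ** 2 + 8.0 * s * h * second   # recurrence (44)

# closed-form solution of the recurrence
closed = (16.0 * s * h ** 2 / (1.0 - s)) * ((1.0 - s ** (2 * Ksteps)) / (1.0 - s ** 2)
          - s ** (Ksteps - 1) * (1.0 - s ** Ksteps) / (1.0 - s)) \
         + 8.0 * h ** 2 * (1.0 - s ** (2 * Ksteps)) / (1.0 - s ** 2)
```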

Fig. 16. (Squared) bias and variance of ϕ(θ) = ‖θ‖² for varying levels of irreversibility

Remark A.1

When discretization is considered, the sampling properties in this simple example, in which the drift is proportional to the state, cannot be improved upon using irreversibility. Let us now explain this phenomenon from a theoretical point of view. In Rey-Bellet and Spiliopoulos (2016), the authors show that adding an irreversible perturbation to the generator of the diffusion process may decrease the spectral gap, and will never increase it. They further prove that in the continuous case, decreasing the spectral gap then decreases the asymptotic variance. However, this improvement is not strict; that is, irreversibility is only guaranteed not to increase the spectral gap.

Meanwhile, in Lelievre et al. (2013), the authors consider irreversibility only in the context of linear systems, and rigorously study optimal irreversible perturbations that accelerate convergence to the invariant distribution. Their results show that when the drift matrix is proportional to the identity matrix, the spectral gap cannot be widened. Proposition 4 in Lelievre et al. (2013) shows that the leading nonzero eigenvalue of the irreversibly perturbed drift matrix is bounded above by the leading nonzero eigenvalue of the original drift matrix and below by the trace of the original drift matrix over the dimension of the state space. The lower bound is then the optimal spectral gap. For a drift matrix that is a multiple of the identity, the upper and lower bounds are the same, which implies that the spectral gap can never decrease from its original value in the continuous case. After factoring in discretization, the irreversible perturbation increases stiffness of the system, which contributes to increased bias and variance in the resulting estimator.

Funding

BJZ and YMM acknowledge support from the Air Force Office of Scientific Research, Analysis and Synthesis of Rare Events (ANSRE) MURI. KS was partially supported by the National Science Foundation (DMS 1550918, DMS 2107856) and Simons Foundation Award 672441.

Declarations

Conflicts of interest

Not applicable.

Availability of data and material

Not applicable.

Code availability

Code can be made available upon request.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Amari, Shun-ichi, Cichocki, Andrzej, Yang, Howard Hua: A new learning algorithm for blind signal separation. In: Advances in Neural Information Processing Systems, pages 757–763. Morgan Kaufmann Publishers, (1996)
  2. Asmussen Søren, Glynn Peter W. Stochastic simulation: algorithms and analysis. Germany: Springer Science & Business Media; 2007. [Google Scholar]
  3. Bierkens Joris. Non-reversible Metropolis-Hastings. Stat. Comput. 2016;26:1213–1228. doi: 10.1007/s11222-015-9598-x. [DOI] [Google Scholar]
  4. Brosse, Nicolas, Durmus, Alain, Moulines, Éric: The promises and pitfalls of stochastic gradient Langevin dynamics. In: NeurIPS 2018 (Advances in Neural Information Processing Systems 2018), (2018)
  5. Diaconis P, Holmes S, Neal R. Analysis of a nonreversible Markov chain sampler. Ann. Appl. Probab. 2010;10:726–752. [Google Scholar]
  6. Duncan, A.B., Pavliotis, G.A., Zygalakis, K.C.: Nonreversible Langevin samplers: Splitting schemes, analysis and implementation. arXiv preprint arXiv:1701.04247, (2017)
  7. Durmus Alain, Moulines Eric. High-dimensional Bayesian inference via the unadjusted Langevin algorithm. Bernoulli. 2019;25(4A):2854–2882. doi: 10.3150/18-BEJ1073. [DOI] [Google Scholar]
  8. Franke Brice, Hwang C-R, Pai H-M, Sheu S-J. The behavior of the spectral gap under growing drift. Trans. Am. Math. Soc. 2010;362(3):1325–1350. doi: 10.1090/S0002-9947-09-04939-3. [DOI] [Google Scholar]
  9. Ganguly Arnab, Sundar P. Inhomogeneous functionals and approximations of invariant distribution of ergodic diffusions: error analysis through central limit theorem and moderate deviation asymptotics. Stoch. Proc. Appl. 2021;133(C):74–110. doi: 10.1016/j.spa.2020.10.009. [DOI] [Google Scholar]
  10. Gershman, Samuel J., Hoffman, Matthew D., Blei, David M.: Nonparametric variational inference. In: Proceedings of the 29th International Conference on International Conference on Machine Learning, ICML’12, pages 235–242, Madison, WI, USA, (2012). Omnipress
  11. Girolami Mark, Calderhead Ben. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Stat. Soc.: Ser. B (Stat. Methodol.) 2011;73(2):123–214. doi: 10.1111/j.1467-9868.2010.00765.x. [DOI] [Google Scholar]
  12. Gorham Jackson, Duncan Andrew B, Vollmer Sebastian J, Mackey Lester. Measuring sample quality with diffusions. Ann. Appl. Probab. 2019;29(5):2884–2928. doi: 10.1214/19-AAP1467. [DOI] [Google Scholar]
  13. Gorham, Jackson, Mackey, Lester: Measuring sample quality with Stein's method. Advances in Neural Information Processing Systems, 28, (2015)
  14. Gorham, Jackson, Mackey, Lester: Measuring sample quality with kernels. In: International Conference on Machine Learning, pages 1292–1301. PMLR, (2017)
  15. Hu, Yuanhan, Wang, Xiaoyu, Gao, Xuefeng, Gürbüzbalaban, Mert, Zhu, Lingjiong: Non-convex optimization via non-reversible stochastic gradient Langevin dynamics. arXiv preprint arXiv:2004.02823, (2020)
  16. Hwang, Chii-Ruey, Hwang-Ma, Shu-Yin, Sheu, Shuenn-Jyi: Accelerating Gaussian diffusions. The Annals of Applied Probability, pages 897–913, (1993)
  17. Hwang Chii-Ruey, Hwang-Ma Shu-Yin, Sheu Shuenn-Jyi. Accelerating diffusions. Ann. Appl. Probab. 2005;15(2):1433–1444. doi: 10.1214/105051605000000025. [DOI] [Google Scholar]
  18. Izzatullah, Muhammad, Baptista, Ricardo, Mackey, Lester, Marzouk, Youssef, Peter, Daniel: Bayesian seismic inversion: Measuring Langevin MCMC sample quality with kernels. In: SEG International Exposition and Annual Meeting. OnePetro, (2020)
  19. Lelievre Tony, Nier Francis, Pavliotis Grigorios A. Optimal non-reversible linear drift for the convergence to equilibrium of a diffusion. J. Stat. Phys. 2013;152(2):237–274. doi: 10.1007/s10955-013-0769-x. [DOI] [Google Scholar]
  20. Liu Q, Lee J, Jordan M. A kernelized Stein discrepancy for goodness-of-fit tests. Proc. of 33rd ICML. 2016;48:276–284. [Google Scholar]
  21. Livingstone Samuel, Girolami Mark. Information-geometric Markov chain Monte Carlo methods using diffusions. Entropy. 2014;16(6):3074–3102. doi: 10.3390/e16063074. [DOI] [Google Scholar]
  22. Lu Jianfeng, Spiliopoulos Konstantinos. Analysis of multiscale integrators for multiple attractors and irreversible Langevin samplers. Mult. Model. Simul. 2018;16(4):1859–1883. doi: 10.1137/16M1083748. [DOI] [Google Scholar]
  23. Ma, Yi.-An., Chen, Tianqi, Fox, Emily B.: A complete recipe for stochastic gradient MCMC. NIPS’15: Proceedings of the 28th International Conference on Neural Information Processing Systems 2, 2917–2925 (2015)
  24. Ottobre, Michela, Pillai, Natesh S., Spiliopoulos, Konstantinos: Optimal scaling of the MALA algorithm with irreversible proposals for Gaussian targets. Stochastics and Partial Differential Equations: Analysis and Computations, pages 1–51, (2019)
  25. Pavliotis Grigorios A. Stochastic processes and applications: diffusion processes, the Fokker-Planck and Langevin equations. Germany: Springer; 2014. [Google Scholar]
  26. Rey-Bellet, Luc, Spiliopoulos, Konstantinos: Irreversible Langevin samplers and variance reduction: a large deviations approach. Nonlinearity 28(7), 2081 (2015)
  27. Rey-Bellet, Luc, Spiliopoulos, Konstantinos: Variance reduction for irreversible Langevin samplers and diffusion on graphs. Electronic Communications in Probability, 20, (2015)
  28. Rey-Bellet Luc, Spiliopoulos Konstantinos. Improving the convergence of reversible samplers. J. Stat. Phys. 2016;164(3):472–494. doi: 10.1007/s10955-016-1565-1. [DOI] [Google Scholar]
  29. Roberts Gareth O, Tweedie Richard L. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli. 1996;2(4):341–363. doi: 10.2307/3318418. [DOI] [Google Scholar]
  30. Teh, Yee Whye, Thiery, Alexandre H., Vollmer, Sebastian J.: Consistency and fluctuations for stochastic gradient Langevin dynamics. Journal of Machine Learning Research, 17, (2016)
  31. Vollmer Sebastian J, Zygalakis Konstantinos C, Teh Yee Whye. Exploration of the (non-) asymptotic bias and variance of stochastic gradient Langevin dynamics. J. Mach. Learn. Res. 2016;17(1):5504–5548. [Google Scholar]
  32. Welling, Max, Teh, Yee W.: Bayesian learning via stochastic gradient Langevin dynamics. In: Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688. Citeseer, (2011)
  33. Xifara Tatiana, Sherlock Chris, Livingstone Samuel, Byrne Simon, Girolami Mark. Langevin diffusions and the Metropolis-adjusted Langevin algorithm. Stat. Probab. Lett. 2014;91:14–19. doi: 10.1016/j.spl.2014.04.002. [DOI] [Google Scholar]

Articles from Statistics and Computing are provided here courtesy of Springer
