Author manuscript; available in PMC: 2018 Nov 1.
Published in final edited form as: Stat Comput. 2016 Sep 13;27(6):1473–1490. doi: 10.1007/s11222-016-9699-1

Hamiltonian Monte Carlo acceleration using surrogate functions with random bases

Cheng Zhang 1, Babak Shahbaba 2, Hongkai Zhao 1
PMCID: PMC5624739  NIHMSID: NIHMS836406  PMID: 28983154

Abstract

For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.

Keywords: Markov chain Monte Carlo, Hamiltonian dynamics, Surrogate method, Random bases

1 Introduction

Bayesian Statistics has provided a principled and robust framework to create many important and powerful data analysis methods over the past several decades. Given a probabilistic model for the underlying mechanism of observed data, Bayesian methods properly quantify uncertainty and reveal the landscape or global structure of the parameter space. However, these methods tend to be computationally intensive since Bayesian inference usually requires the use of Markov chain Monte Carlo (MCMC) algorithms to simulate samples from intractable distributions. Although the simple Metropolis algorithm (Metropolis et al. 1953) is often effective at exploring low-dimensional distributions, it can be very inefficient for complex, high-dimensional distributions: successive states may exhibit high autocorrelation, due to the random walk nature of the movement. As a result, the effective sample size (ESS) tends to be quite low and the convergence to the true distribution is usually very slow. The celebrated Hamiltonian Monte Carlo (HMC; Duane et al. 1987; Neal 2011) reduces the random walk behavior of Metropolis by means of Hamiltonian dynamics, which uses gradient information to propose states that are distant from the current state, but nevertheless have a high probability of acceptance.

Although HMC explores the parameter space more efficiently than random walk Metropolis does, it does not fully exploit the structure (i.e., geometric properties) of parameter space (Girolami and Calderhead 2011) since dynamics are defined over Euclidean space. To address this issue, Girolami and Calderhead (2011) proposed a new method, called Riemannian manifold HMC (RMHMC), that uses the Riemannian geometry of the parameter space (Amari and Nagaoka 2000) to improve standard HMC’s efficiency by automatically adapting to local structures.

To make such geometrically motivated methods practical for big data analysis, one needs to combine them with efficient and scalable computational techniques. A common bottleneck for using such sampling algorithms for big data analysis is repetitive evaluations of functions, their derivatives, and other geometric and statistical quantities that involve the whole observed dataset and possibly a complicated model. A natural question is how to construct effective approximations of these quantities that provide a good balance between accuracy and computation cost. One common approach is subsampling (see, for example, Welling and Teh 2011; Hoffman et al. 2010; Shahbaba et al. 2014; Chen et al. 2014), which restricts the computation to a subset of the observed data. This is based on the idea that big datasets contain a large amount of redundancy so the overall information can be retrieved from a small subset. However, in general applications, we cannot simply use random subsets for this purpose: the amount of information we lose as a result of random sampling leads to non-ignorable loss of accuracy, which in turn has a substantially negative impact on computational efficiency (Betancourt 2015). Therefore, in practice, it is a challenge to find good criteria and strategies for an appropriate and effective subsampling.

Another approach is to exploit smoothness or regularity in parameter space, which is true for most statistical models. This way, one could find computationally cheaper surrogate functions to substitute the expensive target (or potential energy) functions (Liu 2001; Rasmussen 2003; Welling and Teh 2011; Meeds and Welling 2014; Lan et al. 2014; Strathmann et al. 2015). However, the usefulness of these methods is often limited to moderate dimensional problems because of the computational cost needed to achieve desired approximation accuracy.

In this work, our objective is to develop a faster alternative to the method of Rasmussen (2003). To this end, we propose to use random non-linear bases along with efficient learning algorithms to construct a surrogate function that provides an effective approximation of the probabilistic model based on the collective behavior of the large dataset. The randomized non-linear basis functions combined with the computationally efficient learning process can incorporate correct criteria for an efficient implicit subsampling resulting in both flexible and scalable approximation (Huang et al. 2006a, b; Rahimi and Recht 2007, 2008). Because our method can be presented as a special case of shallow random networks implemented in HMC, we refer to it as random network surrogate function (RNS); however, we will show that our proposed method is related to (and can be extended to) other surrogate functions such as generalized additive models (GAMs) and Gaussian process models by constructing the surrogate functions using different bases and optimization processes.

Our proposed method provides a natural framework to incorporate surrogate functions in the sampling algorithms such as HMC, and it can be easily extended to geometrically motivated methods such as RMHMC. Further, for problems with a limited time budget, we propose an adaptive version of our method that substantially reduces the required number of training points. This way, the random bases surrogate function could be utilized earlier and its approximation accuracy could be improved adaptively as more training points become available. We show that theoretically the learning procedure for our surrogate function is asymptotically equivalent to potential matching, which is itself a novel distribution matching strategy similar to the score matching method discussed in Hyvärinen (2005) and Strathmann et al. (2015).

Finally, we should emphasize that our method is used to generate high quality proposals at low computational cost. However, when calculating the acceptance probability of these proposals, we use the original Hamiltonian (used in standard HMC) to ensure that the stationary distribution of the Markov chain will remain the correct target distribution.

Our paper is organized as follows. An overview of HMC and RMHMC is given in Sect. 2. Our RNS–HMC is explained in detail in Sect. 3. The adaptive model is developed in Sect. 4. Experimental results based on simulated and real data are presented in Sect. 5. Code for these examples is available at github.com/chengzhang-uci/RNSHMC. Finally, Sect. 6 is devoted to discussion and future work. As an interesting supplement, an overall description of the potential matching procedure is presented in Appendix 2.

2 Preliminaries

2.1 Hamiltonian Monte Carlo

In Bayesian Statistics, we are interested in sampling from the posterior distribution of the model parameters q given the observed data, Y = (y1, y2, …, yN)T,

P(q|Y) ∝ exp(−U(q)),  (1)

where the potential energy function U is defined as

U(q) = −∑_{i=1}^N log P(y_i|q) − log P(q).  (2)
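As a concrete instance of (2), the sketch below evaluates the potential and its gradient for a Bayesian logistic regression model with a Gaussian prior (the model, prior scale, and data are chosen here purely for illustration); note that each call sums over all N observations, which is exactly the per-step cost discussed later.

```python
import numpy as np

def potential_and_grad(q, X, y, tau=1.0):
    """U(q) = -sum_i log P(y_i|q) - log P(q) for logistic regression
    with a N(0, tau^2 I) prior. Every evaluation touches all N rows of X."""
    z = X @ q                                    # logits, one per observation
    log_lik = y * z - np.log1p(np.exp(z))        # log P(y_i|q), y_i in {0, 1}
    U = -log_lik.sum() + 0.5 * (q @ q) / tau**2  # negative log-posterior
    grad = -X.T @ (y - 1.0 / (1.0 + np.exp(-z))) + q / tau**2
    return U, grad
```

The gradient expression follows from differentiating the Bernoulli log-likelihood; it can be verified against finite differences.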

The posterior distribution is almost always analytically intractable. Therefore, MCMC algorithms are typically used for sampling from the posterior distribution to perform statistical inference. As the number of parameters increases, however, simple methods such as random walk (Metropolis et al. 1953) and Gibbs sampling (Geman and Geman 1984) may require a long time to converge to the target distribution. Moreover, their explorations of parameter space become slow due to inefficient random walk proposal-generating mechanisms, especially when there exist strong dependencies between parameters in the target distribution. By inclusion of geometric information from the target distribution, HMC (Duane et al. 1987; Neal 2011) introduces a Hamiltonian dynamics system with auxiliary momentum variables p to propose samples of q in a Metropolis framework that explores the parameter space more efficiently compared to standard random walk proposals. More specifically, HMC generates proposals jointly for q and p using the following system of differential equations:

dq_i/dt = ∂H/∂p_i,  (3)
dp_i/dt = −∂H/∂q_i,  (4)

where the Hamiltonian function is defined as H(q, p) = U(q) + ½ pᵀM⁻¹p. The quadratic kinetic energy function K(p) = ½ pᵀM⁻¹p corresponds to the negative log-density of a zero-mean multivariate Gaussian distribution with the covariance M. Here, M is known as the mass matrix, which is often set to the identity matrix, I. Starting from the current state (q, p), the Hamiltonian dynamics system (3), (4) is simulated for L steps using the leapfrog method, with a stepsize of ε. The proposed state, (q*, p*), which is at the end of the trajectory, is accepted with probability min(1, exp[−H(q*, p*) + H(q, p)]). By simulating the Hamiltonian dynamics system together with the correction step, HMC generates samples from a joint distribution

P(q, p) ∝ exp(−U(q) − ½ pᵀM⁻¹p).

Notice that q and p are separable in this joint distribution, so the marginal distribution of q follows the target distribution. These steps are presented in Algorithm 1.

Following the dynamics of the assumed Hamiltonian system, HMC can generate distant proposals with high acceptance probability which allows an efficient exploration of parameter space.
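The leapfrog discretization and MH correction described above can be sketched as follows; this is a minimal illustration with M = I (the step size, trajectory length, and target in the usage note are illustrative, not the paper's settings).

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, L, M_inv):
    """L leapfrog steps for the Hamiltonian H = U(q) + p^T M^{-1} p / 2."""
    q, p = q.copy(), p.copy()
    p -= 0.5 * eps * grad_U(q)          # initial half-step for momentum
    for _ in range(L - 1):
        q += eps * M_inv @ p            # full step for position
        p -= eps * grad_U(q)            # full step for momentum
    q += eps * M_inv @ p
    p -= 0.5 * eps * grad_U(q)          # final half-step for momentum
    return q, p

def hmc_step(q, U, grad_U, eps, L, rng):
    """One HMC transition with M = I and a Metropolis-Hastings correction."""
    p = rng.standard_normal(q.size)     # resample momentum p ~ N(0, I)
    q_star, p_star = leapfrog(q, p, grad_U, eps, L, np.eye(q.size))
    H_cur = U(q) + 0.5 * p @ p
    H_new = U(q_star) + 0.5 * p_star @ p_star
    if rng.random() < min(1.0, np.exp(H_cur - H_new)):
        return q_star, True             # accept the distant proposal
    return q, False
```

For a standard normal target, U(q) = ‖q‖²/2 and grad_U(q) = q; because the leapfrog integrator nearly conserves H, such proposals are distant yet accepted with high probability.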

2.2 Riemannian manifold HMC

Although HMC explores the target distribution more efficiently than random walk Metropolis, it does not fully exploit the geometric structures of the underlying probabilistic model since a flat metric (i.e., M = I) is used. Using more geometrically motivated methods could substantially improve sampling algorithms’ efficiency. Recently, Girolami and Calderhead (2011) proposed a new method, called RMHMC, that exploits the Riemannian geometry of the target distribution to improve standard HMC’s efficiency by automatically adapting to local structures. To this end, instead of the identity mass matrix commonly used in standard HMC, they use a position-specific mass matrix M = G(q). More specifically, they set G(q) to the Fisher information matrix and define the Hamiltonian as follows:

Algorithm 1.

Hamiltonian Monte Carlo

Input: Starting position q(1) and step size ε
for t = 1, 2, … do
  Resample momentum p:
    p(t) ~ 𝒩(0, M), (q0, p0) = (q(t), p(t))
  Simulate discretization of Hamiltonian dynamics:
  for l = 1 to L do
    p_{l−1/2} = p_{l−1} − (ε/2) ∂U/∂q (q_{l−1})
    q_l = q_{l−1} + ε M⁻¹ p_{l−1/2}
    p_l = p_{l−1/2} − (ε/2) ∂U/∂q (q_l)
  (q*, p*) = (qL, pL)
  Metropolis–Hastings correction:
    u ~ Uniform[0, 1]
    ρ = exp[H(q(t), p(t)) − H(q*, p*)]
    if u < min(1, ρ), then q(t+1) = q*
H(q, p) = U(q) + ½ log det G(q) + ½ pᵀG(q)⁻¹p = ϕ(q) + ½ pᵀG(q)⁻¹p,  (5)

where ϕ(q) := U(q) + ½ log det G(q). Note that standard HMC is a special case of RMHMC with G(q) = I. Based on this dynamic, they propose the following HMC on a Riemannian manifold:

q̇ = ∂H/∂p = G(q)⁻¹p,  (6)
ṗ = −∂H/∂q = −∇_q ϕ(q) + ½ ν(q, p),

where the ith element of the vector ν(q, p) is

(ν(q, p))_i = −pᵀ ∂_i(G(q)⁻¹) p = (G(q)⁻¹p)ᵀ ∂_i G(q) G(q)⁻¹ p,

with the shorthand notation ∂_i = ∂/∂q_i for partial derivative.

The above dynamic is non-separable (it contains products of q and p), and the resulting proposal-generating mechanism based on the standard leapfrog method is neither time reversible nor symplectic. Therefore, the standard leapfrog algorithm cannot be used for the above dynamic (Girolami and Calderhead 2011). Instead, we can use the Störmer–Verlet (1967) method, known as generalized leapfrog (Leimkuhler and Reich 2004),

p(t+1/2) = p(t) − (ε/2)[∇_q ϕ(q(t)) − ½ ν(q(t), p(t+1/2))],  (7)
q(t+1) = q(t) + (ε/2)[G⁻¹(q(t)) + G⁻¹(q(t+1))] p(t+1/2),  (8)
p(t+1) = p(t+1/2) − (ε/2)[∇_q ϕ(q(t+1)) − ½ ν(q(t+1), p(t+1/2))].  (9)

The resulting map is (1) deterministic, (2) reversible, and (3) volume preserving. However, it requires solving two computationally intensive implicit equations [Eqs. (7) and (8)] at each leapfrog step.
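Equations (7) and (8) are implicit in p(t+1/2) and q(t+1), respectively. One common way to solve them, sketched below, is simple fixed-point iteration (the iteration count and the toy constant-metric check in the test are illustrative assumptions, not the authors' implementation).

```python
import numpy as np

def fixed_point(f, x0, n_iter=6):
    """Solve x = f(x) by a few steps of fixed-point iteration."""
    x = x0
    for _ in range(n_iter):
        x = f(x)
    return x

def generalized_leapfrog_step(q, p, eps, grad_phi, nu, G_inv):
    """One generalized leapfrog step, Eqs. (7)-(9)."""
    # Eq. (7): implicit half-step for momentum
    p_half = fixed_point(
        lambda ph: p - 0.5 * eps * (grad_phi(q) - 0.5 * nu(q, ph)), p)
    # Eq. (8): implicit full step for position
    q_new = fixed_point(
        lambda qn: q + 0.5 * eps * (G_inv(q) + G_inv(qn)) @ p_half, q)
    # Eq. (9): explicit half-step for momentum
    p_new = p_half - 0.5 * eps * (grad_phi(q_new) - 0.5 * nu(q_new, p_half))
    return q_new, p_new
```

When G(q) = I and ν ≡ 0, both fixed-point maps are constant and the scheme reduces to the standard leapfrog step, which is a convenient sanity check.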

3 Random network surrogate–HMC (RNS–HMC)

For HMC, the Hamiltonian dynamics contains the information from the target distribution through the potential energy U and its gradient. For RMHMC, more geometric structure (i.e., the Fisher information) is included through the mass matrix for kinetic energy. It is the inclusion of such information in the Hamiltonian dynamics that allows HMC and RMHMC to improve upon random walk Metropolis. However, one common computational bottleneck for HMC and other Bayesian models for big data is repetitive evaluations of functions, their derivatives, and geometric and statistical quantities. Typically, each evaluation involves the whole observed dataset. For example, one has to compute the potential U and its gradient from Eq. (2) for HMC, and the mass matrix M, its inverse, and the involved partial derivatives for RMHMC, at every time step or MH correction step. When N is large, this can be extremely expensive to compute. In some problems, each evaluation may involve solving a computationally expensive problem. (See the inverse problem and Remark in Sect. 5.2.)

To alleviate this issue, in recent years several methods have been proposed to construct surrogate Hamiltonians. For relatively low-dimensional spaces, (sparse) grid-based piecewise interpolative approximations using a precomputing strategy were developed in Welling and Teh (2011). Such grid-based methods, however, are difficult to extend to high-dimensional spaces due to the use of structured grids. Alternatively, we can use Gaussian process models, which are commonly used as surrogate models for emulating expensive-to-evaluate functions, to learn the target functions from early evaluations (training data; Rasmussen 2003; Lan et al. 2014; Meeds and Welling 2014). However, naive (but commonly used) implementations of Gaussian process models have high computation cost associated with inverting the covariance matrix, which grows cubically as the size of the training set increases. This is especially crucial in high-dimensional spaces, where we need large training sets in order to achieve a reasonable level of accuracy. Recently, scalable Gaussian processes using inducing point methods (Snelson and Ghahramani 2006; Quinonero-Candela and Rasmussen 2005) have been introduced to scale up GPs to larger datasets. While these methods have been quite successful in reducing computation cost, the tuning of inducing points could still be problematic in high-dimensional spaces. (See a more detailed discussion in Sect. 3.3.)

The key to developing surrogate functions is a method that can effectively capture the collective properties of large datasets with scalability, flexibility, and efficiency. In this work, we propose to construct surrogate functions using proper random non-linear bases and an efficient optimization process on training data. More specifically, we present our method as a special case of shallow neural networks; although, we show that it is related to (and can be extended to) other surrogate functions such as GAMs and Gaussian process models. Random networks of non-linear functions prove capable of approximating a rich class of functions arbitrarily well (Huang et al. 2006b; Rahimi and Recht 2008). Using random non-linear networks and algebraic learning algorithms can also be viewed as an effective implicit subsampling with desired criteria. The choice of hidden units (basis functions) and the fast learning process can be easily adapted to be problem specific and scalable. Unlike typical (naive) Gaussian process models, our random network scales linearly with the number of training points. In fact, a random non-linear network can be considered as a standard regression model with randomly mapped features. For such shallow random networks, the computational cost for inference is cubic in the number of hidden nodes. Those differences in scaling allow us to explicitly trade off computational efficiency and approximation accuracy and construct more efficient surrogates in certain applications. As our empirical results suggest, with appropriate training data good approximation of smooth functions in high-dimensional space can be achieved using a moderate and scalable number of hidden units. Therefore, our proposed method has the potential to scale up to large data sizes and provide effective and scalable surrogate Hamiltonians that balance accuracy and efficiency well.

3.1 Shallow random network approximation

A typical shallow network architecture (i.e., a single-hidden layer feedforward scalar-output neural network) with s hidden units, a non-linear activation function a, and a scalar (for simplicity) output z for a given d-dimensional input q is defined as

z(q) = ∑_{i=1}^s υ_i a(q; γ_i) + b,  (10)

where γi is the ith hidden node parameter, υi is the output weight for the ith hidden node, and b is the output bias. Given a training dataset

𝒯 = {(q(j), t(j)) | q(j) ∈ ℝ^d, t(j) ∈ ℝ, j = 1, …, N},

the neural network can be trained by finding the optimal model parameters W = {γi, υi, i = 1, …, s, b} to minimize the mean square error cost function,

C(W|𝒯) = (1/N) ∑_{j=1}^N ‖z(q(j)) − t(j)‖².  (11)

The most popular algorithm in machine learning to optimize (11) is backpropagation (Rumelhart et al. 1986). However, as a gradient descent-based iterative algorithm, backpropagation is usually quite slow and can be trapped at some local minimum since the cost function is non-linear, and for most cases, non-convex. Motivated by the fact that randomization is computationally cheaper than optimization, alternative methods based on random non-linear bases have been proposed (Huang et al. 2006a; Rahimi and Recht 2007). These methods drastically decrease the computation cost while maintaining a reasonable level of approximation accuracy. The key feature of random networks is that they reduce the full optimization problem to standard linear regression by mapping the input data to a randomized feature space and then applying existing fast algebraic training methods (e.g., by minimizing squared error) to find the output weight. Given the design objective, algebraic training can achieve exact or approximate matching of the data at the training points. Compared to the gradient descent-based techniques, algebraic training methods can reduce computational complexity and provide better generalization properties. A typical algebraic approach for single-hidden layer feedforward random networks is the extreme learning machine (ELM; Huang et al. 2006a), which is summarized in Algorithm 2. Using randomized non-linear features, ELM estimates the output weight by finding the least squares solution to the resulting linear equations system Hυ = T. Note that presented this way, our method can also be regarded as a random version of GAM. In practice, one could add regularization to improve stability and generalizability.
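A minimal numerical sketch of this algebraic training is given below, using softplus additive nodes and ridge-regularized least squares in place of the exact Moore–Penrose pseudoinverse (the node type, regularization constant, and sizes are illustrative assumptions).

```python
import numpy as np

def elm_fit(Q, t, s, rng, reg=1e-8):
    """Fit a single-hidden-layer random network: hidden node parameters
    are drawn at random; only the output weights are solved for by
    (regularized) linear least squares, as in ELM."""
    d = Q.shape[1]
    W = rng.standard_normal((s, d))        # random input weights w_i
    b = rng.standard_normal(s)             # random biases d_i
    H = np.log1p(np.exp(Q @ W.T + b))      # hidden layer output matrix
    # ridge-regularized least squares replaces the pseudoinverse H^+ t
    v = np.linalg.solve(H.T @ H + reg * np.eye(s), H.T @ t)
    return W, b, v

def elm_predict(Q, W, b, v):
    """Evaluate the trained random network at new inputs."""
    return np.log1p(np.exp(Q @ W.T + b)) @ v
```

Because only a linear solve in the s output weights is needed, training scales linearly with the number of training points, in contrast to the cubic scaling of naive GP regression.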

3.2 Choice of non-linearity

There are many choices for non-linear activation functions in random networks. Different types of activation functions can be used for different learning tasks. In this paper, we focus on random networks with two typical types of non-linear nodes:

Algorithm 2.

Extreme Learning Machine

Given a training set 𝒯 = {(I_j, t_j) | I_j ∈ ℝ^d, t_j ∈ ℝ^m, j = 1, …, N}, activation function a(x; γ) and hidden node number s
Step 1: Randomly assign hidden node parameters γ_i, i = 1, …, s
Step 2: Calculate the hidden layer output matrix H:
  H_{ji} = a(I_j; γ_i), i = 1, …, s, j = 1, …, N
Step 3: Calculate the output weight υ:
  υ = H†T, T = [t_1, t_2, …, t_N]ᵀ,
where H† is the Moore–Penrose generalized inverse of the matrix H
  • Additive nodes
    a(q; γ) = a(w·q + d), w ∈ ℝ^d, d ∈ ℝ, γ = {w, d},
    where w and d are the weight vector and the bias of the hidden node.
  • Radial basis function (RBF) nodes
    a(q; γ) = a(‖q − c‖₂²/(2ℓ²)), c ∈ ℝ^d, ℓ ∈ ℝ₊, γ = {c, ℓ},
    where c and ℓ are the center and width of the hidden node.

Both random networks can approximate a rich class of functions arbitrarily well (Huang et al. 2006b; Rahimi and Recht 2008). With randomly assigned input weights and biases composed linearly inside the non-linear activation function, additive nodes form a set of basis functions whose level sets are hyperplanes oriented by w_i and shifted by d_i, respectively. Random networks with additive nodes tend to reflect the global structure of the target function. On the other hand, RBF nodes are almost compactly supported (adjustable via the width ℓ), rendering good local approximation for the corresponding random networks.

3.3 Connection to GPs and sparse GPs

It is worth noting the connection between networks with RBF nodes and Gaussian process models (Rasmussen 1996; Neal 1999). Given a training dataset

𝒯 = {(q(j), t(j)) | q(j) ∈ ℝ^d, t(j) ∈ ℝ, j = 1, …, N},

and using the squared-exponential covariance function

K(q(j), q(j′)) = σ_f² exp(−‖q(j) − q(j′)‖₂²/(2ℓ²)), θ = {σ_f, ℓ},

the standard GP regression with a Gaussian noise model has the following marginal likelihood

p(t|Q, θ) = 𝒩(t | 0, K_N + σ²I),

where Q = {q(j)}_{j=1}^N, t = {t(j)}_{j=1}^N, [K_N]_{jj′} = K(q(j), q(j′)) is the covariance matrix and σ is the noise parameter. Prediction on a new observation q* is made according to the conditional distribution

p(t*|q*, 𝒯, θ) = 𝒩(t* | k*ᵀ(K_N + σ²I)⁻¹ t, K** − k*ᵀ(K_N + σ²I)⁻¹ k* + σ²),

where [k*]_j = K(q(j), q*) and K** = K(q*, q*).

On the other hand, if we use K(q(j), ·) as the jth hidden node, j = 1, 2, …, N, the output matrix becomes H = K_N, and the output weight learned by the algebraic approach to a regularized least squares problem is

υ̂ = argmin_υ ‖Hυ − t‖² + σ² υᵀK_N υ = (K_N + σ²I)⁻¹ t.

Therefore, such a network provides the same prediction point estimate as the above full GP model. This way, a Gaussian process model can be interpreted as a self-organizing RBF network where new hidden nodes are added adaptively with new observations. This is also an alternative point of view to Neal (1996), where GP models were shown to be equivalent to single-hidden layer neural networks with infinitely many hidden nodes.
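This equivalence is easy to check numerically. The sketch below (data, kernel hyperparameters, and sizes are all made up for illustration) builds the GP predictive mean and the prediction of an RBF network with one kernel node per training point and output weights υ = (K_N + σ²I)⁻¹t, and confirms they coincide.

```python
import numpy as np

def sq_exp_kernel(A, B, sigma_f=1.0, ell=1.0):
    """Squared-exponential covariance between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(1)
Q = rng.standard_normal((15, 2))           # training inputs
t = 0.5 * (Q**2).sum(1)                    # toy targets U(q) = |q|^2 / 2
sigma = 0.1
K_N = sq_exp_kernel(Q, Q)

# GP predictive mean at a few test points
Qs = rng.standard_normal((5, 2))
k_star = sq_exp_kernel(Q, Qs)              # N x 5 cross-covariances
gp_mean = k_star.T @ np.linalg.solve(K_N + sigma**2 * np.eye(15), t)

# RBF network: hidden node j is K(q(j), .), so H = K_N at training inputs
# and the regularized least-squares weights are v = (K_N + sigma^2 I)^{-1} t
v = np.linalg.solve(K_N + sigma**2 * np.eye(15), t)
net_mean = k_star.T @ v                    # network output at test points
```

The weights v are also the exact minimizer of the regularized cost ‖K_N υ − t‖² + σ² υᵀK_N υ from the text, since setting its gradient to zero gives (K_N + σ²I)υ = t.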

Notice that the above GP model scales cubically with the number of data points N, which limits the application of GPs to relatively small datasets. To derive a GP model that is computationally tractable for large datasets, sparse GPs based on inducing point methods (Snelson and Ghahramani 2006; Quinonero-Candela and Rasmussen 2005) have been previously proposed. These sparse models introduce a set of inducing points Q̄ = {q̄(i)}_{i=1}^M and approximate the exact kernel K(q, q′) by an approximation K̃(q, q′) for fast computation. For example, the fully independent training conditional (FITC; Snelson and Ghahramani 2006) method uses the approximate kernel

K̃_FITC(q, q′) = k_qᵀK_M⁻¹k_{q′} + δ_{qq′}(K(q, q′) − k_qᵀK_M⁻¹k_{q′}),

where K_M is the exact covariance matrix for the M inducing points and k_q, k_{q′} are the exact covariance vectors between q, q′ and the inducing points. Given the same training dataset 𝒯, the marginal likelihood is

p(t|Q, θ) = 𝒩(t | 0, K_{NM}K_M⁻¹K_{MN} + Λ + σ²I).

Here, Λ is a diagonal matrix with Λ_jj = K_jj − k_jᵀK_M⁻¹k_j that adjusts the diagonal elements to the ones from the exact covariance matrix K_N. Similarly, predictions can be made according to the conditional distribution

p(t*|q*, 𝒯, θ) = 𝒩(t* | k*ᵀΣ_M⁻¹K_{MN}(Λ + σ²I)⁻¹ t, K** − k*ᵀ(K_M⁻¹ − Σ_M⁻¹) k* + σ²),

where Σ_M = K_M + K_{MN}(Λ + σ²I)⁻¹K_{NM}. The computation cost is reduced to 𝒪(M²N) for learning, and after that the predictive mean costs 𝒪(M) per new observation. The hyperparameters θ, σ and the inducing points Q̄ can be tuned to maximize the marginal likelihood. However, in high dimensions the tuning of Q̄ becomes infeasible.

On the other hand, if we use the inducing points as the centers of an RBF network, the output matrix is H = K_{NM}. Given the diagonal matrix D = Λ + σ²I, the output weight estimated by the algebraic approach to a weighted least squares problem plus a regularization term is

υ̂ = argmin_υ ‖D^{−1/2}(Hυ − t)‖² + υᵀK_M υ = Σ_M⁻¹K_{MN}(Λ + σ²I)⁻¹ t.

Therefore, the same predictive mean can be reproduced if we use the inducing points as centers and use the same hyperparameter configuration in our random network with RBF nodes and optimize with respect to the above cost function. Moreover, the hyperparameters in each basis (or hidden node) can also be trained (typically by gradient descent) if we abandon the use of random bases and simple linear regression.

Figure 1 compares random networks with different node types and related GP methods on fitting a simple function y = x²/2, which corresponds to the negative log-density of the standard normal distribution. We used the softplus function σ(x) = log(1 + exp(x)) in additive nodes and squared-exponential kernels in RBF nodes and GP methods. As we can see from the graph, random networks generally perform better than GP methods when the number of observations is small. The randomness in the configuration of hidden nodes forces the networks to learn more globally. In contrast, GP models are more local and need more data to generalize well. By introducing sparsity, FITC tends to generalize better than the full GP, especially on small data sizes. Since our goal is to fit the negative log-posterior density function in (2), and U(q) → ∞ as q moves away from the high density domain, using softplus basis functions makes random networks more capable of capturing this far-field feature by striking a better balance between flexibility and generalization while being less demanding on the data size. Also, the number of hidden neurons (bases) can be used to regularize the approximation to mitigate overfitting.

Fig. 1.


Comparing different surrogate approximations with an increasing number of observations N = 10, 20, 40 on target function y = x2/2. The observation points are nested samples from the standard normal distribution. For FITC and random networks, we choose five inducing points and five hidden neurons, respectively. FITC and random networks are all run 100 times and averaged to reduce the random effects on the results

3.4 Surrogate-induced Hamiltonian flow

As mentioned in the previous sections, repetitive computation of the Hamiltonian, its gradient, and other quantities that involve the whole dataset undermines the overall exploration efficiency of HMC. To alleviate this issue, we exploit the smoothness or regularity in parameter space, which holds for most statistical models. As discussed in Neal (1996), Liu (2001), and Rasmussen (2003), one can improve computational efficiency of HMC by approximating the energy function and using the resulting approximation to devise a surrogate transition mechanism while still converging to the correct target distribution. To this end, Rasmussen (2003) proposed to use pre-convergence samples (which are discarded during the burn-in period) to approximate the energy function using a Gaussian process model. Here, we define an alternative surrogate-induced Hamiltonian as follows:

H̃(q, p) = Ũ(q) + ½ pᵀM⁻¹p,

where Ũ(q) is the neural network surrogate function. The pair (q, p) now defines a surrogate-induced Hamiltonian flow, parametrized by a trajectory length t, which is a map ϕ̃_t: (q, p) → (q*, p*), with (q*, p*) being the end-point of the trajectory governed by the following equations:

dq/dt = ∂H̃/∂p = M⁻¹p,   dp/dt = −∂H̃/∂q = −∂Ũ/∂q.

When the original potential U(q) is computationally costly, simulating the surrogate-induced Hamiltonian system provides a more efficient proposing mechanism for our HMC sampler. The introduced bias along with discretization error from the leapfrog integrator are all naturally corrected in the MH step where we use the original Hamiltonian in the computation of acceptance probability. As a result, the stationary distribution of the Markov chain will remain the correct target distribution (see Appendix 1 for a detailed proof). Note that by controlling the approximation quality of the surrogate function, we can maintain a relatively high acceptance probability. This is illustrated in Fig. 2 for a two-dimensional banana-shaped distribution (Girolami and Calderhead 2011).

Fig. 2.


Comparing HMC and RNS–HMC based on a two-dimensional banana-shaped distribution. The left panel shows the gradient fields (force map) for the original Hamiltonian flow (red) and the surrogate-induced Hamiltonian flow (blue). The middle and right panels show the trajectories for the HMC and RNS–HMC samplers. Both samplers start from the same point (red square) with the same initial momentum. Blue points at the end of the trajectories are the proposals. The overall acceptance probability drops from 0.97 using HMC to 0.88 using RNS–HMC. (Color figure online)
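The mechanism described above — propose with the cheap surrogate gradient, accept or reject with the original Hamiltonian — can be sketched as follows (a minimal illustration with M = I; the deliberately biased surrogate in the test is an assumption used only to demonstrate that the MH correction preserves the target).

```python
import numpy as np

def surrogate_hmc_step(q, U, grad_U_tilde, eps, L, rng):
    """One surrogate-based HMC step: the leapfrog trajectory uses only
    the surrogate gradient; the MH correction uses the original
    potential U, so the chain still targets exp(-U)."""
    p = rng.standard_normal(q.size)
    q_star, p_star = q.copy(), p.copy()
    p_star -= 0.5 * eps * grad_U_tilde(q_star)     # cheap surrogate gradient
    for _ in range(L - 1):
        q_star += eps * p_star
        p_star -= eps * grad_U_tilde(q_star)
    q_star += eps * p_star
    p_star -= 0.5 * eps * grad_U_tilde(q_star)
    # exact Hamiltonian in the accept/reject step corrects surrogate bias
    H_cur = U(q) + 0.5 * p @ p
    H_new = U(q_star) + 0.5 * p_star @ p_star
    if rng.random() < min(1.0, np.exp(H_cur - H_new)):
        return q_star
    return q
```

The original potential U appears only once per transition (in the acceptance ratio), while the surrogate gradient is evaluated L + 1 times; when U is expensive and the surrogate is cheap, this is where the speedup comes from.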

To construct such a surrogate, evaluations of the target function during the early iterations of MCMC will be used as the training set, based on which we can train a shallow random network using fast algebraic approaches, such as ELM (Algorithm 2). The gradient of the scalar output z [see (10)] for a network with additive hidden nodes, for example, can then be computed as

∂z/∂q = ∑_{i=1}^s υ_i a′(w_i·q + d_i) w_i,  (12)

which costs only 𝒪(s) computations. To balance efficiency in computation and flexibility in approximation, and to reduce the possibility of overfitting, the number of hidden nodes s needs to be small as long as a reasonable level of accuracy can be achieved. In practice, this can be done by monitoring the resulting acceptance rate using an initial chain.
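Formula (12) can be checked against finite differences; the sketch below assumes softplus activation, whose derivative is the logistic sigmoid, and uses made-up network sizes.

```python
import numpy as np

rng = np.random.default_rng(2)
s, d = 8, 3
W = rng.standard_normal((s, d))    # rows are the input weights w_i
bias = rng.standard_normal(s)      # hidden biases d_i
v = rng.standard_normal(s)         # output weights upsilon_i

# surrogate output z(q) with softplus nodes a(x) = log(1 + e^x)
z = lambda q: v @ np.log1p(np.exp(W @ q + bias))

def grad_z(q):
    """Eq. (12): sum_i v_i a'(w_i . q + d_i) w_i, with a' = sigmoid."""
    act = 1.0 / (1.0 + np.exp(-(W @ q + bias)))
    return (v * act) @ W           # O(s d) work, independent of N

q = rng.standard_normal(d)
g = grad_z(q)
```

The point of (12) is that once the υ_i are learned, each gradient evaluation costs 𝒪(s) regardless of the data size N, whereas the exact gradient of (2) costs 𝒪(N).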

Following Rasmussen (2003), we propose to run our method, henceforth called RNS–HMC (see Algorithm 3), in two phases: exploration phase and exploitation phase. During the exploration phase, we initialize the training dataset D with an empty set or some samples from the prior distribution of parameters. We then run the standard HMC algorithm for some iterations and collect information from the new states (i.e., accepted proposals). When we have sufficiently explored the high density domain in parameter space and collected enough training data (during the burn-in period), a shallow random network is trained based on the collected training set D to form a surrogate for the potential energy function. The surrogate function will be used to approximate the gradient information needed for HMC simulations later in the exploitation phase.

As an illustrative example, we compare the performance of different surrogate HMC methods on a challenging Gaussian target density in 32 dimensions (a lower dimensional case was used in Rasmussen 2003). The target density has 31 confined directions and a main direction that is 10 times wider, and all variables are correlated. Both the full GP and FITC methods are implemented using the GPML package (Rasmussen and Nickisch 2010). The results are presented in Fig. 3. Compared to the full GPs, FITC and random networks (with additive and RBF nodes) all scale linearly with the number of observations. Both RNSs can start with less training data. We also compare the efficiency of the surrogate-induced Hamiltonian flows in terms of time-normalized mean ESS. The efficiency of FITC and the random networks all increases as the number of observations increases, until no more approximation gain can be obtained (see the acceptance probability in the middle panel). However, the efficiency of the full GP begins to drop before the model reaches its full capacity. That is because its predictive complexity also grows with the number of observations, which in turn diminishes the overall efficiency. Overall, the random network with additive nodes outperforms the other methods in this example.

Fig. 3

Comparing the efficiency of our random network surrogates and Gaussian process surrogates on a challenging 32 dimensional Gaussian target whose covariance matrix has the eigenvector (1, 1, …, 1)ᵀ with corresponding eigenvalue 1.0, while all other eigenvalues are 0.01. We set the step size to keep the acceptance probability around 70% for HMC and use the same step size in all surrogate methods. For FITC and the random networks, the number of inducing points and hidden neurons are both set to 1000 to allow reasonably accurate approximation. We ran each algorithm ten times and plot the medians and 80% error bars

Our proposed method provides a natural framework to incorporate surrogate functions in HMC. Moreover, it can be easily extended to RMHMC. To this end, the Hessian matrix of the surrogate function can be used to construct a metric in parameter space and the third order derivatives can be used to simulate the corresponding modified Hamiltonian flow. We refer to this extended version of our method as RNS–RMHMC.

Note that the approximation quality of the neural network surrogate function depends on several factors including the

Algorithm 3.

Random Network Surrogate HMC (RNS–HMC)

Input: starting position q(1), step size ε and number of hidden units s
Initialize the training dataset: D = ∅ or several random samples from the prior
for t = 1, 2, …, B do
    Resample momentum: p(t) ~ 𝒩(0, M), (q0, p0) = (q(t), p(t))
    Simulate discretization of Hamiltonian dynamics and propose (q*, p*)
    Metropolis–Hastings correction:
        u ~ Uniform[0, 1], ρ = exp[H(q(t), p(t)) − H(q*, p*)]
        if u < min(1, ρ) then q(t+1) = q*, D = D ∪ {(q*, U(q*))}
end for
Train a neural network with s hidden units via ELM on D to form the surrogate function z
for t = B + 1, B + 2, … do
    Resample momentum: p(t) ~ 𝒩(0, M), (q0, p0) = (q(t), p(t))
    Simulate discretization of the surrogate-induced Hamiltonian dynamics using z:
    for l = 1 to L do
        p_{l−1} ← p_{l−1} − (ε/2) ∂z/∂q(q_{l−1})
        q_l ← q_{l−1} + ε M⁻¹ p_{l−1}
        p_l ← p_{l−1} − (ε/2) ∂z/∂q(q_l)
    end for
    (q*, p*) = (q_L, p_L)
    Metropolis–Hastings correction:
        u ~ Uniform[0, 1], ρ = exp[H(q(t), p(t)) − H(q*, p*)]
        if u < min(1, ρ) then q(t+1) = q*
end for

dimension of parameter space, d, the number of hidden neurons, s, and the training size, N. Here, we assume that N is sufficiently large and investigate the efficiency of RNS–HMC in terms of its acceptance probability for different values of d and s, based on a standard logistic regression model with simulated data. Similar to the results presented in Strathmann et al. (2015), Fig. 4 shows the acceptance rate (over 10 MCMC runs) as a function of d and s. For dimensions up to d ≈ 50, RNS–HMC maintains a relatively high acceptance probability with a shallow random network trained in a few seconds on a laptop computer. Appendix 2 provides a theoretical justification for our method.
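The exploitation phase of Algorithm 3 is ordinary leapfrog integration driven by the gradient of the surrogate z, with the Metropolis–Hastings correction still computed from the exact Hamiltonian. The sketch below illustrates this structure; the function names and the placeholder gradient `grad_z` are ours, not taken from the paper's implementation:

```python
import numpy as np

def surrogate_leapfrog(q, p, grad_z, eps, L, M_inv):
    """Leapfrog steps driven by the surrogate gradient grad_z (= dz/dq)."""
    q, p = q.copy(), p.copy()
    for _ in range(L):
        p = p - 0.5 * eps * grad_z(q)   # half-step momentum update
        q = q + eps * (M_inv @ p)       # full-step position update
        p = p - 0.5 * eps * grad_z(q)   # half-step momentum update
    return q, p

def rns_hmc_step(q, U, grad_z, eps, L, M, M_inv, rng):
    """One exploitation-phase iteration: propose with z, correct with true U."""
    p = rng.multivariate_normal(np.zeros(len(q)), M)
    q_star, p_star = surrogate_leapfrog(q, p, grad_z, eps, L, M_inv)
    H = lambda q_, p_: U(q_) + 0.5 * p_ @ M_inv @ p_   # exact Hamiltonian
    if rng.uniform() < min(1.0, np.exp(H(q, p) - H(q_star, p_star))):
        return q_star
    return q
```

Because the correction uses the exact H, the chain targets the true posterior even when z is a crude approximation; a poor surrogate only lowers the acceptance rate.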

Fig. 4

Acceptance probability of the surrogate-induced Hamiltonian flow on simulated logistic regression models for different numbers of parameters, d, and hidden neurons, s. The step size is chosen to keep the acceptance probability around 70% for HMC. Left: acceptance probability as a function of s (x-axis) and d (y-axis). Middle: acceptance probability as a function of s for a fixed dimension d = 32. Right: acceptance probability as a function of d for a fixed s = 2000

4 Adaptive RNS–HMC

So far, we have assumed that the neural network model in our method is trained on a sufficiently large number of training points, collected after waiting an adequate number of iterations for the sampler to explore the parameter space. This, however, could be very time consuming in practice: waiting a long time to collect a large number of training points could undermine the benefits of using the surrogate Hamiltonian function.

Figure 5 shows the average acceptance probabilities (over 10 MCMC chains) as a function of the number of training points, N, and the number of hidden neurons, s, for a simulated logistic regression model with a fixed number of parameters, d = 32. While it takes around 2000 training points to realize the network's full capacity and reach a high acceptance probability comparable to HMC, only 500 training points are enough to provide an acceptable surrogate Hamiltonian flow (around 0.1 acceptance probability). Therefore, we can start training the neural network surrogate earlier and adapt it as more training points become available. Although adapting a Markov chain based on its history may undermine its ergodicity and consequently its convergence to the target distribution (Andrieu and Thoms 2008), we can enforce a vanishing adaptation rate a_t such that a_t → 0 and Σ_{t=1}^∞ a_t = ∞, and update the surrogate function with probability a_t at iteration t. By Theorem 5 of Roberts and Rosenthal (2007), the resulting algorithm is ergodic and converges to the right target distribution.
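Any schedule satisfying a_t → 0 and Σ_t a_t = ∞ works here. A minimal sketch with the illustrative choice a_t = t^(−0.7) (the exponent is ours, not from the paper):

```python
import numpy as np

def adaptation_prob(t, kappa=0.7):
    """Vanishing adaptation rate a_t = t^(-kappa): a_t -> 0 but sum_t a_t = inf
    for kappa in (0, 1], as required for diminishing adaptation."""
    return t ** (-kappa)

def maybe_adapt(t, retrain, rng):
    """Update the surrogate with probability a_t at iteration t."""
    if rng.uniform() < adaptation_prob(t):
        retrain()   # e.g. the rank-one ELM update of Proposition 1
        return True
    return False
```

Because a_t vanishes, the adaptation eventually stops perturbing the chain, while the divergent sum lets the surrogate keep improving for arbitrarily long.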

Fig. 5

Acceptance probability of the surrogate-induced Hamiltonian flow for a simulated logistic regression model with dimension d = 32. Left: acceptance probability as a function of the number of hidden neurons s (x-axis) and the number of training points N (y-axis). Middle: acceptance probability as a function of N for a fixed s = 1000. Right: acceptance probability as a function of s for a fixed N = 1600

It is straightforward to adapt the neural network surrogate using the history of the Markov chain. However, the estimator in Algorithm 2 costs 𝒪(Nds + Ns² + s³) computation and 𝒪(Ns) storage, where N is the number of training data points (e.g., the number of rows in the output matrix H). As N increases, finding the right weights and bias for the output neuron becomes increasingly expensive. Greville (1960) showed that the pseudoinverse of H can be updated incrementally in real time as new data points become available. Based on Greville's method, online and adaptive pseudoinverse solutions for updating ELM weights have been proposed by van Schaik and Tapson (2015), and these can be readily employed here to develop an adaptive version of RNS–HMC. For efficiency, we update only the estimator.

Proposition 1

Suppose the current output matrix is H_k and the target vector is T_k. At time k + 1, a new sample q_{k+1} and the target (potential) t_{k+1} are collected. Denote the output vector from the hidden layer at q_{k+1} as h_{k+1}. The adaptive updating formula for the empirical potential matching estimator is given by

υ_{k+1} = υ_k + (t_{k+1} − h_{k+1}ᵀ υ_k) b_{k+1},

where b_{k+1} and the auxiliary matrices Φ_{k+1}, Θ_{k+1} are updated according to c_{k+1} = Φ_k h_{k+1} as follows.

  1. c_{k+1} = 0:
     b_{k+1} = Θ_k h_{k+1} / (1 + h_{k+1}ᵀ Θ_k h_{k+1}),  Φ_{k+1} = Φ_k,
     Θ_{k+1} = Θ_k − Θ_k h_{k+1} b_{k+1}ᵀ.
  2. c_{k+1} ≠ 0:
     b_{k+1} = c_{k+1} / ‖c_{k+1}‖²,  Φ_{k+1} = Φ_k − Φ_k h_{k+1} b_{k+1}ᵀ,
     Θ_{k+1} = Θ_k − Θ_k h_{k+1} b_{k+1}ᵀ + (1 + h_{k+1}ᵀ Θ_k h_{k+1}) b_{k+1} b_{k+1}ᵀ − b_{k+1} h_{k+1}ᵀ Θ_kᵀ.

At time k, computing the estimator from scratch takes a one-off 𝒪(kds + ks² + s³) computation and 𝒪(s²) storage (only Φ_k and Θ_k need to be stored, not H_k). Starting from a previously computed solution υ_K = H_K† T_K and the two auxiliary matrices Φ_K = I − H_Kᵀ(H_Kᵀ)†, Θ_K = H_K†(H_Kᵀ)†, each adaptive update costs 𝒪(ds + s²) computation and 𝒪(s²) storage, independent of the training data size k. Further details are provided in Appendix 3. We refer to this extended version of our method as adaptive RNS–HMC (ARNS–HMC).
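The two cases of Proposition 1 translate directly into a few rank-one matrix operations. A sketch with our own naming; starting from the empty-data initialization υ = 0, Φ = I, Θ = 0, the recursion reproduces the least-squares estimator υ_k = H_k† T_k at every step:

```python
import numpy as np

def adaptive_update(v, Phi, Theta, h, t, tol=1e-8):
    """One step of the adaptive pseudoinverse update (Proposition 1).

    v: current estimator (s,); Phi, Theta: auxiliary (s, s) matrices;
    h: hidden-layer output at the new sample (s,); t: new target U(q).
    Each update costs O(s^2), independent of the number of samples seen.
    """
    c = Phi @ h
    if np.linalg.norm(c) <= tol:           # case 1: c_{k+1} = 0
        b = Theta @ h / (1.0 + h @ Theta @ h)
        Phi_new = Phi
        Theta_new = Theta - np.outer(Theta @ h, b)
    else:                                   # case 2: c_{k+1} != 0
        b = c / (c @ c)
        Phi_new = Phi - np.outer(Phi @ h, b)
        Theta_new = (Theta - np.outer(Theta @ h, b)
                     + (1.0 + h @ Theta @ h) * np.outer(b, b)
                     - np.outer(b, Theta @ h))
    v_new = v + (t - h @ v) * b
    return v_new, Phi_new, Theta_new
```

The recursion can be validated against a direct pseudoinverse solve on the accumulated data, which is how we checked the sketch.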

5 Experiments

In this section, we use several experiments based on logistic regression models and an elliptic partial differential equation (PDE) inverse problem to compare our proposed methods to standard HMC and RMHMC in terms of sampling efficiency, defined as time-normalized ESS. Given B MCMC

Algorithm 4.

ARNS–HMC

Input: initial estimator υ0, auxiliary matrices Φ0 and Θ0, adaptation schedule a_t, step size ε and number of hidden units s
Initialize the surrogate: z0 = z(q, υ0)
for t = 0, 1, … do
    Resample momentum: p(t) ~ 𝒩(0, M), (q0, p0) = (q(t), p(t))
    Propose (q*, p*) by simulating discretization of Hamiltonian dynamics with ∂U/∂q replaced by ∂z_t/∂q
    Metropolis–Hastings correction:
        u ~ Uniform[0, 1], ρ = exp[H(q(t), p(t)) − H(q*, p*)]
        if u < min(1, ρ) then q(t+1) = q*; else q(t+1) = q(t)
    Update the estimator and auxiliary matrices to υ_{t+1}, Φ_{t+1}, Θ_{t+1} using (q(t+1), U(q(t+1)))
    u ~ Uniform[0, 1]
    if u < a_t then z_{t+1} = z(q, υ_{t+1}); else z_{t+1} = z_t
end for

samples for each parameter, ESS = B[1 + 2Σ_{k=1}^K γ(k)]^{−1}, where Σ_{k=1}^K γ(k) is the sum of K monotone sample autocorrelations (Geyer 1992). We use the minimum ESS over all parameters, normalized by the CPU time s (in seconds), as the overall measure of efficiency: min(ESS)/s. The corresponding step sizes ε and numbers of leapfrog steps L for HMC and RMHMC are chosen to make them stable and efficient (e.g., reasonably high acceptance probability and fast mixing). The same settings are used for our methods. Note that while the acceptance rates are similar in the first two examples, they drop slightly in the last two examples, mainly due to the constraints we imposed on our surrogate functions. To prevent non-ergodicity and ensure high ESS for both HMC and RNS–HMC, we follow the suggestion of Neal (2011) and uniformly sample the number of leapfrog steps from {1, …, L} at each iteration. The number of hidden nodes s in the RNSs was not tuned extensively, and better results could be obtained by more careful tuning.
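The ESS estimate can be computed from the sample autocorrelations. The sketch below truncates the sum at the first non-positive autocorrelation, a common simplification of Geyer's initial sequence estimator rather than the exact monotone-sequence version:

```python
import numpy as np

def ess(x):
    """Effective sample size: ESS = B / (1 + 2 * sum of autocorrelations),
    truncating the sum at the first non-positive autocorrelation."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    acov = np.correlate(xc, xc, mode="full")[n - 1:] / n   # autocovariances
    rho = acov / acov[0]                                   # autocorrelations
    total = 0.0
    for k in range(1, n):
        if rho[k] <= 0:
            break
        total += rho[k]
    return n / (1.0 + 2.0 * total)
```

For the comparisons in this section one would compute ess per parameter, take the minimum, and divide by CPU seconds.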

In what follows, we first compare the different methods in terms of their time-normalized ESS after the burn-in period. To this end, we collect 5000 samples after a reasonably large number of burn-in iterations (5000) to make sure the chains have reached their stationary states. For our methods, the proposals accepted during the burn-in period, after a short warm-up session (the first 1000 iterations), are used as the training set for a shallow random network. Later, we show the advantages of our adaptive algorithm.

5.1 Logistic regression model

As our first example, we compare HMC, RMHMC, RNS–HMC, and RNS–RMHMC in a simulation study based on a logistic regression model with 50 parameters and N = 10⁵ observations. The design matrix is X = (1_{10⁵}, X₁), where the rows of X₁ are drawn from 𝒩₄₉(0, (1/100) I₄₉), and the true parameters β are uniformly sampled from [0, 1]⁵⁰. The binary responses Y = (y₁, y₂, …, y_N)ᵀ are sampled independently from Bernoulli distributions with probabilities p_i = 1/(1 + exp(−x_iᵀ β)). We assume β ~ 𝒩₅₀(0, 100 I₅₀) and sample from the corresponding posterior distribution.
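For this model, the potential energy (the negative log-posterior up to a constant) and its gradient, which HMC differentiates and the surrogate later replaces, have simple closed forms. A sketch under the stated 𝒩(0, 100 I) prior (function names are ours):

```python
import numpy as np

def potential(beta, X, y, prior_var=100.0):
    """U(beta) = -log p(y | X, beta) - log p(beta), up to an additive constant."""
    z = X @ beta
    # log(1 + exp(z)) computed stably via logaddexp
    return -(y * z - np.logaddexp(0.0, z)).sum() + beta @ beta / (2.0 * prior_var)

def grad_potential(beta, X, y, prior_var=100.0):
    """Gradient of U: X^T (sigmoid(X beta) - y) + beta / prior_var."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return X.T @ (p - y) + beta / prior_var
```

A finite-difference check of the gradient is a cheap way to validate such a potential before handing it to HMC.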

Notice that the potential energy function U is now convex, so the Hessian matrix is positive semi-definite everywhere. We therefore use the Hessian matrix of the surrogate as a local metric in RNS–RMHMC. For HMC and RNS–HMC, we set the step size and number of leapfrog steps to ε = 0.045, L = 6; for RMHMC and RNS–RMHMC, ε = 0.54, L = 2.

To illustrate that our method indeed converges to the right target distribution, Fig. 6 provides the one- and two-dimensional posterior marginals of some selected parameters obtained by HMC and RNS–HMC. Table 1 compares the performance of the four algorithms. As we can see, RNS–HMC has substantially improved the sampling efficiency in terms of time-normalized min(ESS).

Fig. 6

HMC versus RNS–HMC: comparing one- and two-dimensional posterior marginals of β1, β11, β21, β31, β41 based on the logistic regression model with simulated data

Table 1.

Comparing the algorithms using logistic regression models and an elliptic PDE inverse problem

Experiments Methods AP ESS s/Iter min(ESS)/s speedup
LR (simulation) HMC 0.76 (4351, 5000, 5000) 0.061 14.17 1
RMHMC 0.80 (1182, 1496, 1655) 3.794 0.06 0.004
s = 2000 RNS–HMC 0.76 (4449, 4999, 5000) 0.007 123.56 8.72
RNS–RMHMC 0.82 (1116, 1471, 1662) 0.103 2.17 0.15
LR (Bank Marketing) HMC 0.70 (2005, 2454, 3368) 0.061 6.52 1
RMHMC 0.92 (1769, 2128, 2428) 0.631 0.56 0.09
s = 1000 RNS–HMC 0.70 (1761, 2358, 3378) 0.007 52.22 8.01
RNS–RMHMC 0.90 (1974, 2254, 2457) 0.027 14.41 2.21
LR (a9a 60 dimension) HMC 0.72 (1996, 2959, 3564) 0.033 11.96 1
RMHMC 0.82 (5000, 5000, 5000) 3.492 0.29 0.02
s = 2500 RNS–HMC 0.68 (1835, 2650, 3203) 0.005 81.80 6.84
RNS–RMHMC 0.79 (4957, 5000, 5000) 0.370 2.68 0.22
Elliptic PDE HMC 0.91 (4533, 5000, 5000) 0.775 1.17 1
RMHMC 0.80 (5000, 5000, 5000) 4.388 0.23 0.20
s = 1000 RNS–HMC 0.75 (2306, 3034, 3516) 0.066 7.10 6.07
RNS–RMHMC 0.66 (2126, 4052, 5000) 0.097 4.38 3.74

For each method, we provide the acceptance probability (AP), the CPU time (s) for each iteration and the time-normalized ESS

The speed up for the best method is identified in bold

Next, we apply our method to two real datasets: Bank Marketing and a9a (Lin et al. 2008). The Bank Marketing dataset (40,197 observations and 24 features) was collected from direct marketing campaigns of a Portuguese banking institution, with the goal of predicting whether a client will subscribe to a term deposit (Moro et al. 2014). We set the step size and number of leapfrog steps to ε = 0.012, L = 45 for HMC and RNS–HMC, and ε = 0.4, L = 6 for RMHMC and RNS–RMHMC. The a9a dataset (32,561 observations and 123 features) is compiled from the UCI Adult dataset (Bache and Lichman 2013), which has been used to determine whether a person makes over 50K a year. We reduce the number of features to 60 by random projection (increasing the dimension to 100 results in a substantial drop in the acceptance probability). We set the step size and number of leapfrog steps to ε = 0.012, L = 10 for HMC and RNS–HMC, and ε = 0.5, L = 4 for RMHMC and RNS–RMHMC. All datasets are normalized to have zero mean and unit standard deviation. The priors are the same as before. The results for the two datasets are summarized in Table 1. As before, both RNS–HMC and RNS–RMHMC significantly outperform their counterpart algorithms.

5.2 Elliptic PDE inverse problem

Another computationally intensive model is the elliptic PDE inverse problem discussed in Dashti and Stuart (2011) and Conard et al. (2014). This classical inverse problem involves inference of the diffusion coefficient in an elliptic PDE commonly used to model isothermal steady flow in porous media. Let c be the unknown diffusion coefficient and u the pressure field; the forward model is governed by the elliptic PDE

∇_x · (c(x, θ) ∇_x u(x, θ)) = 0,   (13)

where x = (x1, x2) ∈ [0, 1]2 is the spatial coordinate. The boundary conditions are

u(x, θ)|_{x₂=0} = x₁,   u(x, θ)|_{x₂=1} = 1 − x₁,
∂u(x, θ)/∂x₁|_{x₁=0} = 0,   ∂u(x, θ)/∂x₁|_{x₁=1} = 0.

In our numerical simulation, (13) is solved using standard continuous GFEM with bilinear basis functions on a uniform 30 × 30 quadrilateral mesh.

A log-Gaussian process prior is used for c(x) with mean zero and an isotropic squared-exponential covariance kernel:

C(x₁, x₂) = σ² exp(−‖x₁ − x₂‖₂² / (2l²)),

for which we set the variance σ² = 1 and the length scale l = 0.2. The diffusivity field can then be conveniently parameterized with a truncated Karhunen–Loève (K–L) expansion:

c(x, θ) ≈ exp( Σ_{i=1}^d θ_i √λ_i υ_i(x) ),

where λ_i and υ_i(x) are the eigenvalues and eigenfunctions of the integral operator defined by the kernel C, and the parameters θ_i are endowed with independent Gaussian priors, θ_i ~ 𝒩(0, 0.5²); these are the targets of inference. In particular, we truncate the K–L expansion at d = 20 modes and condition the corresponding mode weights on data. Data are generated by adding independent Gaussian noise to observations of the solution field on a uniform 11 × 11 grid covering the unit square:

y_j = u(x_j, θ) + ε_j,   ε_j ~ 𝒩(0, 0.1²),   j = 1, 2, …, N.
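Numerically, the truncated K–L expansion can be obtained from the eigendecomposition of the kernel matrix on a grid (a Nyström-type approximation in which the discrete eigenvalues, scaled by the quadrature weight, approximate the λ_i). A sketch with an illustrative 15 × 15 grid; names and grid size are ours:

```python
import numpy as np

def kl_modes(grid_n=15, d=20, sigma2=1.0, ell=0.2):
    """Leading d Karhunen-Loeve eigenpairs of the squared-exponential kernel
    on a uniform grid over [0, 1]^2 (Nystrom approximation on grid points)."""
    s = np.linspace(0.0, 1.0, grid_n)
    xx, yy = np.meshgrid(s, s)
    pts = np.column_stack([xx.ravel(), yy.ravel()])           # grid points
    sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)   # pairwise |x - x'|^2
    C = sigma2 * np.exp(-sq / (2.0 * ell ** 2))
    w = 1.0 / len(pts)                                        # quadrature weight (area / #points)
    lam, vec = np.linalg.eigh(C)
    order = np.argsort(lam)[::-1][:d]                         # top-d modes
    return lam[order] * w, vec[:, order]

def diffusivity(theta, lam, vec):
    """Grid values of c(x, theta) = exp(sum_i theta_i sqrt(lambda_i) v_i(x))."""
    return np.exp(vec @ (theta * np.sqrt(lam)))
```

This is a grid-level illustration only; the paper's simulations use a GFEM solver on a 30 × 30 mesh for the forward model itself.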

The number of leapfrog steps and step size are set to L = 10, ε = 0.16 for both HMC and RNS–HMC. Note that the potential energy function is no longer convex; therefore, we cannot construct a local metric from the Hessian matrix directly. However, the diagonal elements

∂²U/∂θ_i² = 1/σ_θ² + Σ_{j=1}^N (1/σ_y²)(∂u_j/∂θ_i)² − Σ_{j=1}^N (ε_j/σ_y²) ∂²u_j/∂θ_i²,
σ_θ = 0.5,   σ_y = 0.1,   i = 1, 2, …, d,

are highly likely to be positive, since the deterministic part (the first two terms) is always positive and the noise part (the last term) tends to cancel out. The diagonal of the Hessian matrix of the surrogate therefore induces an effective local metric that can be used in RNS–RMHMC. A comparison of all algorithms is presented in Table 1. As before, RNS–HMC provides a substantial improvement in sampling efficiency. For the RMHMC methods, we set L = 3, ε = 0.8. As seen in the table, RMHMC is less efficient than HMC, mainly due to its slow computation speed. However, RNS–RMHMC improves RMHMC substantially and outperforms HMC. Although the metric induced by the diagonal of the surrogate's Hessian may not be as effective as Fisher information, it is much cheaper to compute and provides a good approximation.

Remark

In addition to the usual computational bottleneck seen in the previous examples (i.e., a large amount of data), this example poses an additional challenge due to the complicated forward model. Instead of a simple explicit probabilistic model that prescribes the likelihood of data given the parameter of interest, the probabilistic model involves the PDE (13). Evaluating geometrical and statistical quantities therefore requires solving a PDE similar to (13) in each iteration of HMC and RMHMC, which is prohibitive in practice. Using our neural network based surrogates provides a huge advantage here. Numerical experiments show an efficiency gain of more than 20 times, and more improvement is expected as the amount of data increases.

5.3 Adaptive learning

Next, using the above four examples, we show that ARNS–HMC can start with far fewer training points and quickly reach the same level of performance as RNS–HMC. Figure 7 shows that as the number of training points (from initial MCMC iterations) increases, ARNS–HMC quickly realizes the network's full capacity and reaches an acceptance rate comparable to that of HMC.

Fig. 7

Median acceptance rate of ARNS–HMC along with the corresponding 90% interval (shaded area). The red line shows the average acceptance rate of standard HMC. (Color figure online)

We also compare ARNS–HMC to HMC and RNS–HMC in terms of the relative error of mean (REM), defined as ‖q̄(t) − E(q)‖₂ / ‖E(q)‖₂, where q̄(t) denotes the sample mean up to time t. Figure 8 shows the results for the four examples discussed above. Note that before the neural network models are trained, both RNS–HMC and ARNS–HMC are simply standard HMC, so the three algorithms have similar performance. As we can see, ARNS–HMC has the best overall performance: it tends to provide lower REM at early iterations. This could be useful when there is a limited time budget for fitting a model.
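REM is simply the running-mean error relative to the norm of the true mean; a short sketch (our naming):

```python
import numpy as np

def relative_error_of_mean(samples, true_mean):
    """REM at each iteration t: ||mean(q^(1..t)) - E(q)||_2 / ||E(q)||_2."""
    samples = np.atleast_2d(samples)  # shape (T, d)
    counts = np.arange(1, len(samples) + 1)[:, None]
    running_mean = np.cumsum(samples, axis=0) / counts
    return np.linalg.norm(running_mean - true_mean, axis=1) / np.linalg.norm(true_mean)
```

In practice E(q) is unknown, so a long reference run is typically used as a stand-in for the true posterior mean.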

Fig. 8

Relative error of mean as a function of running time

6 Discussion and future work

In this paper, we have proposed an efficient and scalable computational method for Bayesian inference that explores and exploits the regularity of probability models in parameter space. Our method is based on training a surrogate function of the potential energy after the parameter space has been explored sufficiently well. For situations where it is not practical to wait for a thorough exploration of parameter space, we have proposed an adaptive version of our method that can start with fewer training points and quickly reach its full potential.

As an example, we used random networks and efficient learning algorithms to construct effective surrogate functions. These random bases surrogate functions provide a good approximation of the collective information in the full dataset while striking a good balance between accuracy and computational cost. Random networks combined with the optimized learning process provide flexibility, accuracy, and scalability. Note that in general the overall performance could be sensitive to the architecture of the random network. Our proposed RNS method scales differently than GP emulators because of the specific constraints we imposed on its architecture. As our experimental results show, this approach could improve the performance of HMC in some applications.

In its current form, our method is most effective for problems with costly likelihoods and a moderate number of parameters. In spite of the improvements we have made to standard HMC, dealing with high-dimensional and complex distributions remains quite challenging. For multimodal distributions, for example, our method's effectiveness largely depends on the quality of the training samples. If these samples are collected from one mode only, the surrogate function will miss the remaining modes and the sampler might not be able to explore them (especially if they are isolated). A surrogate function based on Gaussian processes might have a better chance of finding these modes in the tails of the approximate distribution, since it tends to go to zero gradually. To address this issue, we can utilize mode searching and mode exploring ideas such as those proposed by Ahn et al. (2013) and Lan et al. (2014a). For constrained target distributions, we can employ the spherical HMC method of Lan et al. (2014b).

For HMC, the gradient of the potential function is the main driving force of the Hamiltonian dynamics. Although accurate approximation of a well-sampled smooth function automatically leads to accurate approximation of its gradient, this is not the case when the sampling is not well distributed. For example, when dense and well-distributed training sets are difficult to obtain in very high dimensions, one can incorporate gradient information in the training process. In future work, we will study more effective ways to utilize this information. As is common practice in adaptive MCMC methods, one may also learn the mass matrix M adaptively together with the surrogate in ARNS–HMC.

Acknowledgments

This work is supported by NSF grants DMS-1418422 and DMS-1622490.

Appendix 1: Convergence to the correct target distribution

In order to prove that the equilibrium distribution remains the same, it suffices to show that the detailed balance condition still holds. Denote θ = (p, q) and θ′ = ϕ̃_t(θ). In the Metropolis–Hastings step, we use the original Hamiltonian to compute the acceptance probability

α(θ, θ′) = min(1, exp[H(θ) − H(θ′)]),

therefore,

α(θ, θ′) π(dθ) = α(θ, θ′) exp[−H(θ)] dθ
  = min(exp[−H(θ)], exp[−H(θ′)]) |dθ/dθ′| dθ′    (with θ = ϕ̃_t⁻¹(θ′))
  = α(θ′, θ) exp[−H(θ′)] dθ′
  = α(θ′, θ) π(dθ′),

since |dθ/dθ′| = 1 by the volume conservation property of the surrogate-induced Hamiltonian flow ϕ̃_t. Having established detailed balance, and given the reversibility of the surrogate-induced Hamiltonian flow, the modified Markov chain converges to the correct target distribution.

Appendix 2: Potential matching

In the paper, training data collected from the history of the Markov chain are used without a detailed explanation. Here, we analyze the asymptotic behavior of the surrogate-induced distribution and explain why the history of the Markov chain is a reasonable source of training data. Recall that we find our neural network surrogate function by minimizing the mean square error (11). Similar to Hyvärinen (2005), it turns out that minimizing (11) is asymptotically equivalent to minimizing a new distance between the surrogate-induced distribution and the underlying target distribution that is independent of their normalizing constants.

Suppose we know the density of the underlying intractable target distribution up to a constant

P(q|Y) = (1/Z) exp(−U(q)),

where Z is the unknown normalizing constant. Our goal is to approximate this distribution using a parametrized density model, also known up to a constant,

Q(q, θ) = (1/Z(θ)) exp(−V(q, θ)).

Ignoring the multiplicative constants, the corresponding potential energy functions are U(q) and V(q, θ), respectively. The straightforward squared distance between the two potentials is not a well-defined measure between the two distributions because of the unknown normalizing constants. Therefore, we use the following distance instead:

K(θ) = min_d ∫ |V(q, θ) − U(q) − d|² P(q|Y) dq
     = ∫ |V(q, θ) − U(q)|² P(q|Y) dq − [E_q(V(θ) − U)]²
     = 𝕍ar_q(V(θ) − U).    (14)

Because of its similarity to score matching (Hyvärinen 2005), we refer to the approximation method based on this new distance as potential matching; the corresponding potential matching estimator of θ is given by

θ^=arg minθK(θ).

It is easy to verify that K(θ) = 0 ⇒ V(θ) = U + constant ⇒ Q(q, θ) = P(q|Y), so K(θ) is a well-defined squared distance. Exact evaluation of (14) is analytically intractable. In practice, given N samples from the target distribution q1, q2, …, qN, we minimize the empirical version of (14)

K̃(θ) = min_d (1/N) Σ_{n=1}^N |V(q_n, θ) − U(q_n) − d|²
      = (1/N) Σ_{n=1}^N |V(q_n, θ) − U(q_n)|² − ((1/N) Σ_{n=1}^N [V(q_n, θ) − U(q_n)])²,    (15)

which is asymptotically equivalent to K by the law of large numbers. (15) becomes more concise if we allow a shift term in the parametrized model, V(q, θ) = V(q, θ_{−d}) + θ_d, where θ_d denotes the shift and θ_{−d} the remaining parameters. In that case, the empirical potential matching estimator is

θ̃ = arg min_θ K̃(θ)
  = arg min_θ min_d (1/N) Σ_{n=1}^N |V(q_n, θ_{−d}) + (θ_d − d) − U(q_n)|²
  = arg min_θ (1/N) Σ_{n=1}^N |V(q_n, θ_{−d}) + θ_d − U(q_n)|²
  = arg min_θ (1/N) Σ_{n=1}^N |V(q_n, θ) − U(q_n)|².

In particular, if we adopt an additive model for V(q, θ) with a shift term

V(q, θ) = Σ_{i=1}^s υ_i σ(w_i · q + d_i) + b,   θ = (υ, b),

where wi, di, and activation function σ are chosen according to Algorithm 2 and collect early evaluations from the history of Markov chain

𝒯_N = {(q(1), U(q(1))), (q(2), U(q(2))), …, (q(N), U(q(N)))}

as training data; this way, the estimated parameters (i.e., weights for the output neuron) are asymptotically the potential matching estimates

lim_{N→∞} θ̂_{ELM,𝒯_N} = arg min_θ lim_{N→∞} C(θ|𝒯_N) = θ̂,

since the Markov chain eventually converges to the target distribution. When truncated at a finite N, the estimated parameters are almost the empirical potential matching estimates, except that the samples from the history of the Markov chain are not exactly (though quite close to) draws from the target distribution.
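For the additive model above, fitting the output weights (υ, b) with the random hidden-layer weights held fixed is a linear least-squares problem — precisely the ELM step. A toy sketch on a quadratic potential; the sizes, seed, and tanh activation are illustrative choices, not the paper's settings:

```python
import numpy as np

def elm_fit(Q, T, s=200, rng=None):
    """Fit V(q) = sum_i v_i sigma(w_i . q + d_i) + b to potential values T,
    with random (fixed) hidden weights and least-squares output weights."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = Q.shape[1]
    W = rng.standard_normal((d, s))            # random input weights (never trained)
    bias = rng.uniform(-1.0, 1.0, s)           # random hidden biases
    H = np.tanh(Q @ W + bias)                  # hidden-layer output matrix
    H = np.column_stack([H, np.ones(len(Q))])  # column of ones = shift term b
    v, *_ = np.linalg.lstsq(H, T, rcond=None)  # output weights via least squares

    def V(Qnew):
        Hn = np.column_stack([np.tanh(Qnew @ W + bias), np.ones(len(Qnew))])
        return Hn @ v

    return V
```

Only the final linear layer is optimized, which is what makes training fast enough to run inside the burn-in period.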

Appendix 3: Adaptive learning

Theorem 1

(Greville) If a matrix A with k columns is denoted by A_k and partitioned as A_k = [A_{k−1}, a_k], with A_{k−1} a matrix having k − 1 columns, then the Moore–Penrose generalized inverse of A_k is

A_k† = [ A_{k−1}†(I − a_k b_kᵀ) ; b_kᵀ ],

where

c_k = (I − A_{k−1} A_{k−1}†) a_k,
b_k = (A_{k−1}†)ᵀ A_{k−1}† a_k / (1 + ‖A_{k−1}† a_k‖²),  if c_k = 0,
b_k = c_k / ‖c_k‖²,  if c_k ≠ 0.
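Theorem 1 can be checked numerically by growing a matrix one column at a time and comparing the recursion with a directly computed pseudoinverse. A sketch (our naming):

```python
import numpy as np

def greville_append_column(A_prev, A_prev_pinv, a, tol=1e-10):
    """Pseudoinverse of [A_prev, a] from the pseudoinverse of A_prev."""
    d = A_prev_pinv @ a                    # A_{k-1}^+ a_k
    c = a - A_prev @ d                     # (I - A_{k-1} A_{k-1}^+) a_k
    if np.linalg.norm(c) > tol:
        b = c / (c @ c)                    # c_k / ||c_k||^2
    else:                                  # a_k already in the column space
        b = A_prev_pinv.T @ d / (1.0 + d @ d)
    top = A_prev_pinv - np.outer(d, b)     # A_{k-1}^+ (I - a_k b_k^T)
    return np.vstack([top, b])             # append b_k^T as the last row
```

Both branches of the theorem are exercised when the appended column is, respectively, outside or inside the current column space.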

Proof of Proposition 1

To save computational cost, we only update the estimator. Suppose the current output matrix is H_{k+1} = [H_k ; h_{k+1}ᵀ] and the target vector is T_{k+1} = [T_k ; t_{k+1}]. Applying Theorem 1 to H_{k+1}ᵀ = [H_kᵀ, h_{k+1}] and using υ_{k+1} = H_{k+1}† T_{k+1},

υ_{k+1}ᵀ = T_{k+1}ᵀ (H_{k+1}ᵀ)†
  = [T_kᵀ, t_{k+1}] [ (H_kᵀ)†(I − h_{k+1} b_{k+1}ᵀ) ; b_{k+1}ᵀ ]
  = T_kᵀ (H_kᵀ)† (I − h_{k+1} b_{k+1}ᵀ) + t_{k+1} b_{k+1}ᵀ
  = υ_kᵀ + (t_{k+1} − υ_kᵀ h_{k+1}) b_{k+1}ᵀ,

where b_{k+1} is obtained according to Theorem 1. Note that the computation of b_{k+1} and c_{k+1} still involves H_k and H_k†, whose sizes grow with the data size. Following Kohonen (1988), Kovanic (1979), and van Schaik and Tapson (2015), we introduce two auxiliary matrices,

Φ_k = I − H_kᵀ (H_kᵀ)† ∈ ℝ^{s×s},
Θ_k = ((H_kᵀ)†)ᵀ (H_kᵀ)† = H_k† (H_kᵀ)† ∈ ℝ^{s×s},

and rewrite bk+1 and ck+1 as

c_{k+1} = Φ_k h_{k+1},
b_{k+1} = Θ_k h_{k+1} / (1 + h_{k+1}ᵀ Θ_k h_{k+1}),  if c_{k+1} = 0,
b_{k+1} = c_{k+1} / ‖c_{k+1}‖²,  if c_{k+1} ≠ 0.

Updating of the two auxiliary matrices can also be done adaptively

Φ_{k+1} = I − H_{k+1}ᵀ (H_{k+1}ᵀ)†
  = I − [H_kᵀ, h_{k+1}] [ (H_kᵀ)†(I − h_{k+1} b_{k+1}ᵀ) ; b_{k+1}ᵀ ]
  = Φ_k − Φ_k h_{k+1} b_{k+1}ᵀ,
Θ_{k+1} = H_{k+1}† (H_{k+1}ᵀ)†
  = [(I − b_{k+1} h_{k+1}ᵀ) H_k†, b_{k+1}] [ (H_kᵀ)†(I − h_{k+1} b_{k+1}ᵀ) ; b_{k+1}ᵀ ]
  = (I − b_{k+1} h_{k+1}ᵀ) Θ_k (I − h_{k+1} b_{k+1}ᵀ) + b_{k+1} b_{k+1}ᵀ,

and if ck+1 = 0, these formulas can be simplified as below

Φ_{k+1} = Φ_k,   Θ_{k+1} = Θ_k − Θ_k h_{k+1} b_{k+1}ᵀ.

References

  1. Ahn S, Chen Y, Welling M. Distributed and adaptive darting Monte Carlo through regenerations. Proceedings of the 16th International Conference on Artificial Intelligence and Statistics (AISTATS); 2013.
  2. Amari S, Nagaoka H. Methods of Information Geometry. Translations of Mathematical Monographs, vol. 191. Oxford: Oxford University Press; 2000.
  3. Andrieu C, Thoms J. A tutorial on adaptive MCMC. Stat. Comput. 2008;18(4):343–373.
  4. Bache K, Lichman M. UCI machine learning repository. 2013. http://archive.ics.uci.edu/ml. Accessed April 2015.
  5. Betancourt MJ. The fundamental incompatibility of scalable Hamiltonian Monte Carlo and naive data subsampling. Proceedings of the 31st International Conference on Machine Learning (ICML); 2015.
  6. Chen T, Fox EB, Guestrin C. Stochastic gradient Hamiltonian Monte Carlo. 2014. Preprint.
  7. Conard PR, Marzouk YM, Pillai NS, Smith A. Asymptotically exact MCMC algorithms via local approximations of computationally intensive models. 2014. arXiv:1402.1694v3. Unpublished manuscript. Accessed Nov 2014.
  8. Dashti M, Stuart A. Uncertainty quantification and weak approximation of an elliptic inverse problem. SIAM J. Sci. Comput. 2011;49:2524–2542.
  9. Duane S, Kennedy AD, Pendleton BJ, Roweth D. Hybrid Monte Carlo. Phys. Lett. B. 1987;195:216–222.
  10. Geman S, Geman D. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 1984;6:721–741. doi:10.1109/tpami.1984.4767596.
  11. Geyer CJ. Practical Markov chain Monte Carlo. Stat. Sci. 1992;7:473–483.
  12. Girolami M, Calderhead B. Riemann manifold Langevin and Hamiltonian Monte Carlo methods (with discussion). J. R. Stat. Soc. B. 2011;73:123–214.
  13. Greville T. Some applications of the pseudoinverse of a matrix. SIAM Rev. 1960;2(1):15–22.
  14. Hoffman MD, Blei DM, Bach FR. Online learning for latent Dirichlet allocation. Neural Information Processing Systems (NIPS); 2010. pp. 856–864.
  15. Huang GB, Zhu QY, Siew CK. Extreme learning machine: theory and applications. Neurocomputing. 2006a;70(1–3):489–501.
  16. Huang GB, Chen L, Siew CK. Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans. Neural Netw. 2006b;17(4):879–892. doi:10.1109/TNN.2006.875977.
  17. Hyvärinen A. Estimation of non-normalized statistical models by score matching. J. Mach. Learn. Res. 2005;6:695–709.
  18. Kohonen T. Self-Organization and Associative Memory. Berlin: Springer; 1988.
  19. Kovanic P. On the pseudoinverse of a sum of symmetric matrices with applications to estimation. Kybernetika. 1979;15(5):341–348.
  20. Lan S, Streets J, Shahbaba B. Wormhole Hamiltonian Monte Carlo. Twenty-Eighth AAAI Conference on Artificial Intelligence; 2014a.
  21. Lan S, Zhou B, Shahbaba B. Spherical Hamiltonian Monte Carlo for constrained target distributions. Proceedings of the 31st International Conference on Machine Learning (ICML); 2014b.
  22. Lin CJ, Weng RC, Keerthi SS. Trust region Newton method for large-scale logistic regression. J. Mach. Learn. Res. 2008;9:627–650.
  23. Liu JS. Monte Carlo Strategies in Scientific Computing. New York: Springer; 2001.
  24. Meeds E, Welling M. GPS-ABC: Gaussian process surrogate approximate Bayesian computation. Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence (UAI); 2014.
  25. Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953;21:1087–1092.
  26. Moro S, Cortez P, Rita P. A data-driven approach to predict the success of bank telemarketing. Decis. Support Syst. 2014;62:22–31.
  27. Neal RM. MCMC using Hamiltonian dynamics. In: Brooks S, Gelman A, Jones G, Meng XL, editors. Handbook of Markov Chain Monte Carlo. Boca Raton: Chapman and Hall/CRC; 2011. pp. 113–162.
  28. Neal R. Regression and classification using Gaussian process priors. Bayesian Stat. 1999;6:475–501.
  29. Neal R. Bayesian Learning for Neural Networks. New York: Springer; 1996.
  30. Quinonero-Candela J, Rasmussen CE. A unifying view of sparse approximate Gaussian process regression. J. Mach. Learn. Res. 2005;6:1939–1959.
  31. Rahimi A, Recht B. Random features for large-scale kernel machines. Proceedings of the 21st Conference on Advances in Neural Information Processing Systems (NIPS); 2007.
  32. Rahimi A, Recht B. Uniform approximation of functions with random bases. Proceedings of the 46th Annual Allerton Conference on Communication, Control, and Computing; 2008.
  33. Rasmussen CE. Evaluation of Gaussian processes and other methods for non-linear regression. PhD thesis, University of Toronto; 1996.
  34. Rasmussen CE. Gaussian processes to speed up hybrid Monte Carlo for expensive Bayesian integrals. Bayesian Stat. 2003;7:651–659.
  35. Rasmussen CE, Nickisch H. Gaussian processes for machine learning (GPML) toolbox. J. Mach. Learn. Res. 2010;11:3011–3015.
  36. Roberts GO, Rosenthal JS. Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. J. Appl. Probab. 2007;44(2):458–475.
  37. Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1. Cambridge: Bradford Books; 1986. pp. 318–362.
  38. Shahbaba B, Lan S, Johnson WO, Neal RM. Split Hamiltonian Monte Carlo. Stat. Comput. 2014;24:339–349.
  39. Snelson E, Ghahramani Z. Sparse Gaussian processes using pseudo-inputs. In: Advances in Neural Information Processing Systems (NIPS), vol. 18. Cambridge: MIT Press; 2006.
  40. Strathmann H, Sejdinovic D, Livingstone S, Szabo Z, Gretton A. Gradient-free Hamiltonian Monte Carlo with efficient kernel exponential families. Advances in Neural Information Processing Systems; 2015.
  41. van Schaik A, Tapson J. Online and adaptive pseudoinverse solutions for ELM weights. Neurocomputing. 2015;149:233–238.
  42. Verlet L. Computer 'experiments' on classical fluids, I: thermodynamical properties of Lennard–Jones molecules. Phys. Rev. 1967;159:98–103.
  43. Welling M, Teh YW. Bayesian learning via stochastic gradient Langevin dynamics. Proceedings of the 28th International Conference on Machine Learning (ICML); 2011. pp. 681–688.
  44. Zhang C, Shahbaba B, Zhao H. Precomputing strategy for Hamiltonian Monte Carlo method based on regularity in parameter space. 2015. arXiv:1504.01418v1. Unpublished manuscript. Accessed April 2015.