Abstract
Bayesian modelling enables us to accommodate complex forms of data and make a comprehensive inference, but the effect of partial misspecification of the model is a concern. One approach in this setting is to modularize the model and prevent feedback from suspect modules, using a cut model. After observing data, this leads to the cut distribution which normally does not have a closed form. Previous studies have proposed algorithms to sample from this distribution, but these algorithms have unclear theoretical convergence properties. To address this, we propose a new algorithm called the stochastic approximation cut (SACut) algorithm as an alternative. The algorithm is divided into two parallel chains. The main chain targets an approximation to the cut distribution; the auxiliary chain is used to form an adaptive proposal distribution for the main chain. We prove convergence of the samples drawn by the proposed algorithm and present the exact limit. Although SACut is biased, since the main chain does not target the exact cut distribution, we prove this bias can be reduced geometrically by increasing a user-chosen tuning parameter. In addition, parallel computing can be easily adopted for SACut, which greatly reduces computation time.
Keywords: Cutting feedback, Stochastic approximation Monte Carlo, Intractable normalizing functions, Discretization
1. Introduction
Bayesian models mathematically formulate our beliefs about the data and parameters. Such models are often highly structured and represent strong assumptions. Many of the desirable properties of Bayesian inference require the model to be correctly specified. We say a set of models f(x|θ), where θ ∈ Θ, is misspecified if there is no θ0 ∈ Θ such that the data X are independently and identically generated from f(x|θ0) (Walker 2013). In practice, models will inevitably fall short of covering every nuance of the truth. One popular approach when a model is misspecified is the fractional (or power) likelihood. This can be used in both classical (e.g. Nakaya et al. 2005; Huang et al. 2010; Liu et al. 2018) and Bayesian (e.g. Miller and Dunson 2019; Bhattacharya et al. 2019) frameworks. However, this method treats all of the models as equally misspecified.
We consider the situation when the assumptions of the model are thought to partially hold: specifically, we assume that one distinct component (or module in the terminology of Liu et al. 2009) is thought to be incorrectly specified, whereas the other component is correctly specified. In standard Bayesian inference, these distinct modules are linked by Bayes’ theorem. Unfortunately, this means the reliability of the whole model may be affected even if only one component is incorrectly specified. To address this, in this paper we adopt the idea of “cutting feedback” (Lunn et al. 2009b; Liu et al. 2009; Plummer 2015; Jacob et al. 2017, 2020), which modifies the links between modules so that estimation of non-suspect modules is unaffected by information from suspect modules. This idea has been used in a broad range of applications including the study of population pharmacokinetic/pharmacodynamic (PK/PD) models (Lunn et al. 2009a), analysis of computer models (Liu et al. 2009), Bayesian estimation of causal effects with propensity scores (McCandless et al. 2010; Zigler 2016) and Bayesian analysis of health effects of air pollution (Blangiardo et al. 2011).
Consider the generic two-module model with observable quantities (data) Y and Z and parameters θ and φ, shown in the directed acyclic graph (DAG) in Fig. 1. The joint distribution is

p(θ, φ, Y, Z) = p(φ) p(Z|φ) p(θ) p(Y|θ, φ),
and the standard Bayesian posterior, given observations of Y and Z, is

p(θ, φ|Y, Z) = p(θ|Y, φ) p(φ|Y, Z).
Fig. 1.
DAG representation of a generic two-module model. The two modules are separated by a dashed line
Suppose we are confident that the relationship between φ and Z is correctly specified but not confident about the relationship between φ and Y. To prevent this possible misspecification affecting estimation of φ, we can “cut” feedback by replacing p(φ|Y,Z) in the standard posterior with p(φ|Z), making the assumption that φ should be solely estimated by Z,
p cut(θ, φ) = p(θ|Y, φ) p(φ|Z).     (1)
We call (1) the “cut distribution”. The basic idea of cutting feedback is to allow information to “flow” in the direction of the directed edge, but not in the reverse direction (i.e. a “valve” is added to the directed edge).
Sampling directly from p cut(θ, φ) is difficult because the marginal likelihood p(Y |φ) = ∫ p(Y |θ,φ)p(θ)dθ depends on a parameter of interest φ and is not usually analytically tractable, except in the simple case when p(θ) is conditionally conjugate to p(Y |θ, φ), which we do not wish to assume. This intractable marginal likelihood is a conditional posterior normalizing constant: it is the normalizing function for the posterior distribution p(θ|Y,φ), conditional on φ, of a parameter θ of interest:
p(θ|Y, φ) = p(Y|θ, φ) p(θ) / p(Y|φ).     (2)
This differs importantly from intractable likelihood normalizing constants, as discussed in the doubly intractable literature (e.g. Park and Haran 2018), in which the normalizing function H(φ) = ∫ h(Y|φ) dY for the likelihood is intractable.
The normalizing function H(φ) is obtained by marginalizing the likelihood with respect to the observable quantity Y, in contrast to the normalizing function p(Y|φ), which is obtained by marginalizing the likelihood p(Y, θ|φ) with respect to a parameter θ of interest. This difference means that standard methods for doubly intractable problems (e.g. Møller et al. 2006; Murray et al. 2006; Liang 2010; Liang et al. 2016), which introduce an auxiliary variable with the same distribution (or proposal distribution) as the a posteriori observed and fixed Y, so that the intractable normalizing function they share cancels, do not directly apply to (2).
A simple algorithm that aims to draw samples from p cut(θ, φ) is implemented in WinBUGS (Lunn et al. 2009b). It is a Gibbs-style sampler that updates θ and φ with a pair of transition kernels that satisfy detailed balance with p(θ|Y, φ) and p(φ|Z), respectively. However, the chain constructed by the WinBUGS algorithm may not have the cut distribution as its stationary distribution (Plummer 2015) since
where the weight function w is
The WinBUGS algorithm is therefore inexact, except in the simple (conditionally conjugate) case when it is possible to draw exact Gibbs updates from p(θ|Y, φ). Plummer (2015) proposed two algorithms that address this problem by satisfying detailed balance approximately. One is a nested MCMC algorithm, which updates θ by running a separate internal Markov chain whose transition kernel satisfies detailed balance with the target distribution. The other is a linear path algorithm, which decomposes the complete MCMC move from (θ, φ) to the proposed state into a series of substeps along a linear path between the current and proposed φ, drawing a new θ at each substep. However, these methods require either the length of the internal chain or the number of substeps to go to infinity, meaning that in practice these algorithms will not necessarily converge to p cut.
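To make the nested scheme concrete, the following is a minimal, illustrative Python sketch of the nested MCMC idea (not the paper's implementation): an outer Metropolis chain updates φ using only p(φ|Z), and a finite inner chain refreshes θ against p(θ|Y, φ). The log-density arguments and all tuning constants are hypothetical placeholders.

```python
import math
import random

def nested_mcmc_cut(logp_phi_given_Z, logp_theta_given_Y_phi,
                    phi0, theta0, n_iter=2000, n_inner=20,
                    step_phi=0.5, step_theta=0.5, seed=0):
    """Sketch of nested MCMC for the cut distribution: the outer
    Metropolis update of phi uses only p(phi | Z), so feedback from
    Y to phi is cut; an inner chain of n_inner Metropolis steps
    refreshes theta against p(theta | Y, phi)."""
    rng = random.Random(seed)
    phi, theta = phi0, theta0
    samples = []
    for _ in range(n_iter):
        # Outer update: symmetric random-walk Metropolis on phi.
        phi_prop = phi + step_phi * rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < (logp_phi_given_Z(phi_prop)
                                     - logp_phi_given_Z(phi)):
            phi = phi_prop
        # Inner chain: finite-length refresh of theta at the current phi.
        for _ in range(n_inner):
            theta_prop = theta + step_theta * rng.gauss(0.0, 1.0)
            if math.log(rng.random()) < (logp_theta_given_Y_phi(theta_prop, phi)
                                         - logp_theta_given_Y_phi(theta, phi)):
                theta = theta_prop
        samples.append((theta, phi))
    return samples
```

Because the inner chain has finite length, the composition of the two updates only approximately targets the cut distribution, which is exactly the convergence gap noted above.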
In this article, we propose a new sampling algorithm for p cut(θ, φ), called the stochastic approximation cut (SACut) algorithm. Since φ can easily be sampled from the tractable part p(φ|Z), our algorithm aims to sample θ from the intractable part p(θ|Y, φ). Our algorithm is divided into two chains that are run in parallel: a main chain that approximately targets p cut(θ, φ) and an auxiliary chain that is used to form a proposal distribution for θ|φ in the main chain (Fig. 2b). The auxiliary chain uses stochastic approximation Monte Carlo (SAMC) (Liang et al. 2007) to approximate the intractable marginal likelihood p(Y|φ) and subsequently form a discrete approximation to the distribution p(θ|Y, φ) at each element of Φ0, a set of pre-selected auxiliary parameter values (Fig. 2a).
Fig. 2.
Relationship between the proposal distribution and its limits. This is a toy example in which the conditional distribution of θ, given Y = 1 and φ, is N(φ, Y²). Samples of the auxiliary variable are drawn from a mixture of discretized densities, i = 1, …, m, shown in the violin plot in a, with the green dots showing the median of each component (see Sect. 2.1). Then the proposal distribution, shown in b, is formed by using these auxiliary variables (see Sect. 2.2). Lemma 1 (Sect. 2.3) shows that it converges to p (κ)(θ|Y, φ), which is shown in d, while (8) shows that p (κ)(θ|Y, φ) converges to the original density p(θ|Y, φ), shown in c
The basic “naive” form of our algorithm has convergence in distribution, but stronger convergence properties can be obtained by building a proposal distribution to target an approximation p (κ)(θ|Y,φ) instead of the true distribution p(θ|Y,φ) (Fig. 2c, d). We prove a weak law of large numbers for the samples drawn from the main chain. We also prove that the bias due to targeting p (κ)(θ|Y,φ) can be controlled by the precision parameter κ, and that the bias decreases geometrically as κ increases. Our algorithm is inspired by the adaptive exchange algorithm (Liang et al. 2016), but replaces the exchange step with a direct proposal distribution for θ given φ in the main chain.
2. Main result
Let the product space Θ × Φ be the support of (θ, φ) under p cut. We assume the following throughout for simplicity.
Assumption 1 (a) Θ and Φ are compact and (b) p cut is continuous with respect to θ and φ over Θ × Φ.
Assumption 1(a) is restrictive, but is commonly assumed in the study of adaptive Markov chains (Haario et al. 2001). In most applied settings, it is reasonable to choose a prior for θ and φ with support on only a compact domain, making the domain of the cut distribution compact without any alteration to the likelihood. Note that Assumption 1 implies that p cut is bounded over Θ × Φ. From now on, fix a probability space (Ω, 𝓕, ℙ). Denote by μ the Lebesgue measure on Θ and Φ, and let P cut be the measure on Θ × Φ defined by the density p cut.
In the following sections, we describe the construction of the algorithm. The naive version of our algorithm builds a discrete proposal distribution for θ, based on Liang et al. (2016). Note that, in Liang et al. (2016), this proposal distribution only draws auxiliary variables that are discarded once the parameter of interest is drawn, so strong convergence results for samples drawn from this distribution are not needed there. This does not apply to our problem, since θ is the parameter of interest and its parameter space can be continuous. The naive version does not allow us to prove stronger convergence and theoretical properties, so we apply a simple function approximation technique with a specially designed partition of the parameter space that enables a straightforward implementation. Although this approximation leads to bias, we show that it can be controlled.
2.1. Naive stochastic approximation cut algorithm
To introduce ideas that we will use in Sect. 2.3, we first describe a naive version of the stochastic approximation cut algorithm. The overall naive algorithm (Algorithm 1) is divided into two chains that are run in parallel.
The auxiliary chain uses stochastic approximation Monte Carlo (Liang et al. 2007) to estimate p(Y|φ) at a set of m pre-selected auxiliary parameter values Φ0. As we detail in the supplementary materials, the set Φ0 is chosen from a set of MCMC samples drawn from p(φ|Z), using the Max–Min procedure (Liang et al. 2016), which repeatedly adds the sample that has the largest Euclidean distance to the hitherto selected Φ0. This ensures that Φ0 covers the major part of the support of p(φ|Z). A reasonably large m ensures that the distributions p(θ|Y, φ̃) overlap each other for neighbouring elements of Φ0. The target density for the auxiliary chain, which is proportional to p(θ|Y, φ) in (1) at the values Φ0, is
p(θ̃, φ̃) = (1/m) Σ_{i=1}^{m} p(θ̃|Y, φ̃ i) 1(φ̃ = φ̃ i).     (3)
Given proposal distributions for θ̃ (e.g. a symmetric random walk proposal) and for φ̃ (e.g. drawing uniformly from a neighbouring set within Φ0) individually, at each iteration n, proposals θ̃′ and φ̃′ are drawn from a mixture proposal distribution, with a fixed mixing probability p mix,
and accepted according to the Metropolis–Hastings acceptance probability with an iteration-specific target
Here, w̃ n,i is the estimate of p(Y|φ̃ i), i = 1, …, m, up to a constant, and w̃ n is the vector of these estimates at each of the pre-selected auxiliary parameter values Φ0. We set the estimates equal for all i = 1, …, m at the start. As described in Liang et al. (2007) and Liang et al. (2016), the estimates are updated by
log w̃ n,i = log w̃ n−1,i + ζ n (e n,i − m⁻¹),     (4)
where e n,i = 1 if φ̃ n = φ̃ i and e n,i = 0 otherwise, and ζ n = n0/max(n0, n) decreases to 0 as n goes to infinity (the shrink magnitude n0 is a user-chosen fixed constant). Note that, in this auxiliary chain, when the number of iterations is sufficiently large, we are drawing (θ̃, φ̃) from (3). Hence, by checking whether the empirical sampling frequency of each φ̃ i strongly deviates from m⁻¹, we can detect potential non-convergence of the auxiliary chain.
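A single update of the form (4) can be sketched as follows, assuming the standard SAMC form in which log-estimates are nudged towards a uniform 1/m visiting frequency; the function and variable names are illustrative only.

```python
def samc_update(log_w, i_visited, n, n0):
    """One stochastic approximation update of the log-estimates:
    the component just visited is up-weighted relative to the
    desired uniform visiting frequency 1/m, with gain
    zeta_n = n0 / max(n0, n) shrinking to zero."""
    m = len(log_w)
    zeta = n0 / max(n0, n)
    return [lw + zeta * ((1.0 if i == i_visited else 0.0) - 1.0 / m)
            for i, lw in enumerate(log_w)]
```

Because the gain ζ n shrinks, late visits perturb the estimates less and less, which is what stabilizes the running estimates of p(Y|φ̃ i).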
In the main Markov chain (θ n, φ n), n = 1, 2, …, we draw a proposal for φ from a proposal distribution and then draw a proposal for θ according to a random measure
(5) |
where 𝓑 ⊂ Θ is any Borel set. Given φ, the random measure (5) is formed via a dynamic importance sampling procedure proposed in Liang (2002), with the intention of approximating the unknown distribution p(θ|Y, φ) (see supplementary material for a detailed explanation). For any Borel set 𝓑 ⊂ Θ, we have
and similarly, the denominator of (5) converges to m p(Y|φ). Hence, by Lemma 3.1 of Liang et al. (2016), since Θ × Φ is compact, for any Borel set 𝓑 ⊂ Θ and any outcome ω of the probability space Ω, we have:
(6) |
This implies that the samples {θ n} drawn from (5) converge in distribution to p(θ|Y, φ), and this convergence occurs uniformly over Φ. Note that the probability measure is adapted to a filtration on (Ω, 𝓕, ℙ) and has a Radon–Nikodym derivative with respect to a mixture of Dirac measures (Gottardo and Raftery 2008), because it is the law of a discrete random variable supported on the stored auxiliary values. To achieve stronger convergence results, we will build a continuous probability distribution based on (5).
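The essential mechanics of drawing θ from a random measure supported on the stored auxiliary values is a self-normalized resampling step. The sketch below abstracts the weights appearing in (5) into generic log-weights; the names are illustrative and the exact weight form of (5) involves the SAMC estimates.

```python
import bisect
import math
import random

def resample_stored(aux_thetas, log_weights, rng):
    """Draw one value from the discrete distribution that places mass
    proportional to exp(log_weights[t]) on aux_thetas[t] -- the generic
    form of drawing theta from a random measure supported on stored
    auxiliary values."""
    mx = max(log_weights)                 # stabilize before exponentiating
    cum, total = [], 0.0
    for lw in log_weights:
        total += math.exp(lw - mx)
        cum.append(total)
    u = rng.random() * total              # inverse-CDF draw on the weights
    return aux_thetas[bisect.bisect_left(cum, u)]
```

As the stored set grows and the weights stabilize, draws from this discrete measure approximate draws from the continuous conditional distribution, which is the content of (6).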
Algorithm 1. Naive stochastic approximation cut algorithm.
Initialize at starting points;
For n = 1, …, N:
  (a) Auxiliary chain:
    (1) Draw a proposal according to the mixture proposal distribution.
    (2) Accept the proposal, and set the new state of the auxiliary chain, according to the iteration-specific acceptance probability.
    (3) Calculate the updated estimates according to (4), i = 1, …, m.
  (b) Main chain:
    (1) Draw a proposal according to the proposal distributions.
    (2) Set (θ n, φ n) to the proposal with the acceptance probability.
    (3) Otherwise, if the proposal is rejected, set (θ n, φ n) = (θ n−1, φ n−1).
End For;
2.2. Simple function approximation cut distribution
The convergence in distribution (6) presented in the naive stochastic approximation cut algorithm is not sufficiently strong to infer a law of large numbers or ergodicity of the drawn samples. We will show that these properties can be satisfied by targeting an approximation of the density function p(θ|Y,φ).
We adopt a density function approximation technique which uses a simple function as the basis. The use of a simple function to approximate a density function has been discussed previously (Fu and Wang 2002; Malefaki and Iliopoulos 2009), but here we use a different partition of the support of the function, determined by rounding to a user-specified number of decimal places. The general theory is presented in supplementary material.
Given the d-dimensional compact set Θ and user-specified number of decimal places κ, we partition Θ in terms of (partial) hypercubes Θ r whose centres θ r are the rounded elements of Θ to κ decimal places,
Θ = (⋃_{r=1}^{Rκ} Θ r) ∪ 𝜕,  Θ r = {θ ∈ Θ : |θ j − θ r,j| < 5 × 10^(−κ−1), j = 1, …, d},     (7)
where R κ is the total number of rounded elements. The boundary set 𝜕, which has Lebesgue measure 0, is:

𝜕 = Θ \ ⋃_{r=1}^{Rκ} Θ r.
Using this partition, we are able to build a simple function density that approximates p(θ|Y, φ):

p (κ)(θ|Y, φ) = Σ_{r=1}^{Rκ} (P(Θ r|Y, φ)/μ(Θ r)) 1(θ ∈ Θ r),

where P(·|Y, φ) denotes the probability measure with density p(θ|Y, φ),
and let P (κ) be the corresponding probability measure on Θ. The simple function approximation cut distribution is then formed by replacing the exact conditional distribution with this approximation:

p (κ) cut(θ, φ) = p (κ)(θ|Y, φ) p(φ|Z).
Let P (κ) cut be the corresponding probability measure on Θ × Φ.
Given the general theory presented in supplementary material, we have
p (κ)(θ|Y, φ) → p(θ|Y, φ) as κ → ∞, for μ-almost all θ ∈ Θ.     (8)
The rate of convergence is tractable if we further assume density p(θ |Y, φ) is continuously differentiable.
Corollary 1 If density function p (θ | Y, φ) is continuously differentiable, there exists a set ℰ ⊂ Θ with μ(ℰ) = μ(Θ) such that the local convergence holds:
where ε(θ, κ) → 0 as κ →∞.
In addition, the global convergence holds:
Proof See the general theory in supplementary material.
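In one dimension, the rounding-based simple-function construction can be illustrated as follows. This sketch estimates the constant value on each rounding cell by averaging the density over a small internal grid, whereas the construction above uses the exact cell probability P(Θ r|Y, φ); the function name and grid size are illustrative.

```python
import math

def simple_function_density(p, theta, kappa):
    """Evaluate a simple-function approximation of a 1-D density p:
    the value is constant on each cell of width 10^-kappa, here
    estimated by a midpoint-grid average over the cell whose centre
    is theta rounded to kappa decimal places."""
    h = 10.0 ** (-kappa)
    centre = round(theta, kappa)                       # rounded element theta_r
    grid = [centre + h * (k / 10.0 - 0.45) for k in range(10)]
    return sum(p(t) for t in grid) / len(grid)

# Example density: standard normal.
p = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)
```

As κ grows the cells shrink, so the piecewise-constant density hugs the original density more tightly, mirroring the convergence in (8) and the geometric bias reduction in Corollary 1.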
2.3. Stochastic approximation cut algorithm
We now refine the naive stochastic approximation cut algorithm by replacing the proposal distribution in the main chain, which concentrates on the discrete set of stored auxiliary values, by a distribution, with support on the compact set Θ, that we will show converges almost surely to P (κ).
Let 𝒲 n (φ) = (Wn (Θ 1|Y, φ),…,Wn (Θ Rκ|Y, φ)) be a random weight process based on the probability of the original proposal distribution taking a value in each partition component Θ r as
(9) |
where r = 1, …, Rκ. Note that Wn(Θ r|Y, φ) is adapted to the auxiliary filtration 𝒢 n. By adding the term (nR κ)⁻¹, each Wn(Θ r|Y, φ), r = 1, …, Rκ, is strictly positive, and yet this modification does not affect the limit since (nR κ)⁻¹ → 0. That is, on any outcome ω of the probability space Ω, we have
(10) |
We now define the random measure process that replaces the measure (5) used in the naive stochastic approximation cut algorithm. For any Borel set 𝓑,
(11) |
Clearly the total mass is 1, so this is a valid probability measure on Θ. Additionally, since 𝒲n(φ) is adapted to the filtration 𝒢 n, the random measure is adapted to the filtration 𝒢 n. The Radon–Nikodym derivative of this measure with respect to the Lebesgue measure μ on Θ is
(12) |
This density is not continuous, but it is bounded on Θ. In addition, since Θ is the support of p(θ|Y, φ) and 𝒲n(φ) is strictly positive, the support of this density is Θ for all φ ∈ Φ as well.
Using this distribution as the proposal distribution has the advantage that it converges almost surely to p (κ), in contrast to the convergence in distribution for the naive algorithm in (6).
Lemma 1 Given Assumption 1, on any outcome ω of probability space Ω, we have:
and this convergence is uniform over Φ.
Note that the convergence is to p (κ)(θ|Y, φ) rather than p(θ|Y, φ), but we will show in Corollary 2 that this bias reduces geometrically as the precision parameter κ increases.
The complete stochastic approximation cut algorithm (SACut) is shown in Algorithm 2. The key idea is that we propose samples for θ from a density that approximates p(θ|Y, φ) and from which we can draw samples, but we accept these proposals according to p (κ)(θ|Y, φ), against which the proposal density then cancels. This results in the acceptance probability being determined only by the proposal distribution for φ; the proposal distribution for θ is not involved. Indeed, the acceptance probability is the same as that of the partial Gibbs sampler that we will discuss in Sect. 3.1.1.
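Because the θ-proposal density cancels against its target, the main-chain accept step depends only on the φ part, as the following illustrative sketch shows; the symmetric-or-generic proposal log-density `log_q` and all names are placeholders, not the paper's implementation.

```python
import math
import random

def sacut_accept(logp_phi_Z, phi, phi_prop, log_q, rng):
    """Main-chain accept step in which the theta proposal density has
    cancelled: the log acceptance ratio involves only p(phi | Z) and
    the phi proposal density log_q(to, frm) = log q(to | frm)."""
    log_alpha = (logp_phi_Z(phi_prop) - logp_phi_Z(phi)
                 + log_q(phi, phi_prop) - log_q(phi_prop, phi))
    return math.log(rng.random()) < min(0.0, log_alpha)
```

With a symmetric proposal for φ the `log_q` terms cancel too, leaving a plain Metropolis ratio of p(φ|Z) values.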
2.4. Parallelization and simplification of computation
The main computational bottleneck of the stochastic approximation cut algorithm is the updating and storage of the cumulative set of auxiliary variable values. Since we draw a new φ′ at each iteration, in order to calculate all possible probabilities defined by (5) and (9), the density must be calculated for every stored auxiliary value. This is equivalent to running the internal iterations at each step of the external iteration for the existing approximate approaches proposed in Plummer (2015). Note that the stored set is solely generated from the auxiliary chain, so it is not affected by the precision parameter κ. If the calculation of this density is computationally expensive, the time to perform each update of the chain will become prohibitive when the stored set is large. However, the calculation of the density for different values of θ is embarrassingly parallel, so it can be evaluated in parallel whenever multiple computer cores are available, enabling a considerable speed up.
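Since the density evaluations across the stored auxiliary values are mutually independent, they can be dispatched to a worker pool. The sketch below uses a thread pool for illustration (a process pool would avoid the interpreter lock for pure-Python densities); the log-density is a hypothetical stand-in.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_density_evals(logp, stored_thetas, phi, max_workers=4):
    """Evaluate log p(Y | theta_t, phi) for every stored auxiliary
    value theta_t. The evaluations are independent ("embarrassingly
    parallel"), so they are mapped onto a worker pool; map preserves
    the input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda t: logp(t, phi), stored_thetas))
```

The per-iteration wall-clock cost then scales roughly with the stored-set size divided by the number of workers, which is the speed-up the text refers to.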
The speed of the computation can be further improved by reducing the size of the stored auxiliary set, as follows.
Algorithm 2. Stochastic approximation cut (SACut) algorithm.
Initialize at starting points;
For n = 1, …, N:
  (a) Auxiliary chain:
    (1) Draw a proposal according to the mixture proposal distribution.
    (2) Accept the proposal, and set the new state of the auxiliary chain, according to the iteration-specific acceptance probability.
    (3) Calculate the updated estimates according to (4), i = 1, …, m.
  (b) Main chain, with the proposal distribution for θ given by (11).
End For;
Given the precision parameter κ, we round all elements of the stored auxiliary set to κ decimal places and collect these rounded elements into a reduced set. At each iteration, the number of calculations of the density is then equal to the number of d-orthotopes that the auxiliary chain has visited up to iteration n, and by (6) we know that the distribution of the auxiliary samples of θ̃ converges to the true distribution p(θ|Y, φ). Hence, the computational speed is mainly determined by the precision parameter κ and the target distribution p(θ|Y, φ). In particular, for any fixed κ and a sufficiently long auxiliary chain, the computational cost is upper bounded by the case of the uniform distribution, since it spreads mass equally over the space Θ.
Theorem 1 Let Θ be an arbitrary d-dimensional compact parameter space, let κ be a precision parameter, and suppose that the auxiliary chain has converged before we start collecting the auxiliary variables θ̃. Then, for any fixed number of iterations n, the expected number of d-orthotopes visited is maximized when the target distribution is the uniform distribution.
Proof See supplementary material (Online Resource 1).
For example, given a d-dimensional parameter space Θ = [0 − 5 × 10^(−κ−1), 1 + 5 × 10^(−κ−1)]^d and its partition Θ r, r = 1, …, Rκ, we consider the uniform distribution as the target distribution. Assuming the auxiliary chain has converged, the expectation of the number of d-orthotopes visited is
In the case d = 1, Fig. 3 compares the number of orthotopes visited under the uniform distribution and under truncated normal distributions with standard deviation 0.1 and 0.05. It shows that a larger precision parameter κ means that more evaluations of the density are required. Hence, a wise choice of a small κ can significantly reduce computation time.
Fig. 3.
Relationship between the number of orthotopes visited and the number of iterations when precision parameter κ = 1, 2, 3, 4, 5. Separate Monte Carlo simulations were conducted for uniform distribution and truncated normal distribution with standard deviation 0.1 and 0.05
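The comparison in Fig. 3 can be reproduced in miniature by counting the distinct κ-rounded values visited by draws from a uniform versus a concentrated target on [0, 1]; the sketch below is illustrative and the samplers are hypothetical stand-ins for the target distributions.

```python
import random

def orthotopes_visited(sampler, n, kappa, seed=0):
    """Count distinct kappa-rounded values (one-dimensional orthotopes)
    visited by n draws from `sampler`, mirroring the comparison in
    Fig. 3 for targets on [0, 1]."""
    rng = random.Random(seed)
    return len({round(sampler(rng), kappa) for _ in range(n)})

uniform = lambda rng: rng.random()

def truncnorm(rng, sd=0.1):
    """Normal(0.5, sd) truncated to [0, 1] by rejection."""
    while True:
        x = rng.gauss(0.5, sd)
        if 0.0 <= x <= 1.0:
            return x
```

A concentrated target revisits the same cells, so far fewer density evaluations are needed than in the uniform case, which is why the uniform target is the worst case in Theorem 1.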
While a small κ means a loss of precision, since local variations of the original target distribution are smoothed by rounding the value of its samples, in most applied settings only a small number of significant figures are meaningful, and so the ability to trade off precision against computational speed is appealing. Comparing short preliminary runs of the chains for different candidate values of κ may be useful when a suitable choice of κ is unclear. We will discuss this in Sect. 4.1.
3. Convergence properties
In this section, we study the convergence properties of samples drawn by the stochastic approximation cut algorithm. We establish a weak law of large numbers with respect to the simple function approximation cut distribution, under some regularity conditions, by proving that the conditions required by Theorem 3.2 in Liang et al. (2016) are satisfied. We then prove that the bias with respect to p cut can be reduced geometrically by increasing the precision parameter κ. To aid exposition of the convergence properties, it is necessary to first introduce two simpler but infeasible alternative algorithms. Then, we prove the convergence of the algorithm.
The framework of our proofs follows Liang et al. (2016). However, adjustments are made for two key differences. Firstly, the parameter of interest here has two components, instead of just one, and we require completely different proposal distributions to those in Liang et al. (2016): the proposal distribution of θ involves an auxiliary chain and simple function approximation, and the proposal distribution of φ is a standard MCMC algorithm. Secondly, the parameter drawn by (12) here is retained, rather than being discarded as in Liang et al. (2016). This means the distributions involved here are different and more complicated.
3.1. Infeasible alternative algorithms
Definition 1 Given a signed measure 𝑀 defined on a set E, and a Borel set 𝓑 ⊂ E, define the total variation norm of 𝑀 as
3.1.1. A partial Gibbs sampler
The most straightforward algorithm that draws samples from the simple function approximation cut distribution is a standard partial Gibbs sampler, which draws proposals for θ from p (κ)(θ|Y, φ′), given a φ′ drawn from a proposal distribution q(φ′|φn−1). The transition kernel is
where δ is the multivariate Dirac delta function and
This transition kernel is Markovian and admits the simple function approximation cut distribution as its stationary distribution, provided a proper proposal distribution q(φn|φn−1) is used. We write U (1) for the corresponding probability measure.
Let u (s) denote the s-step transition kernel and write U (s) for the corresponding probability measure. By Meyn et al. (2009), we have ergodicity on Θ × Φ,
and for any bounded function f defined on Θ × Φ, we have a strong law of large numbers
Note, however, that this algorithm is infeasible because p (κ)(θ|Y, φ) is intractable, since p(θ|Y, φ) is intractable, and so we cannot directly draw proposals for θ.
3.1.2. An adaptive Metropolis–Hastings sampler
An adaptive Metropolis–Hastings sampler can be built by replacing p (κ) in the calculation of the acceptance probability of the stochastic approximation cut algorithm by its approximation, which is the exact proposal distribution for θ at the nth step. The acceptance probability is determined by both θ and φ,
and we can write the transition kernel,
where δ is the multivariate Dirac delta function. Conditional on the filtration 𝒢 n, this transition kernel is Markovian, and we write the corresponding probability measure accordingly. Note that this sampler is not a standard Metropolis–Hastings algorithm, since the transition kernel is not constant. Instead, it is an external adaptive MCMC algorithm (Atchadé et al. 2011).
Given information up to 𝒢 n, if we stop updating the auxiliary process, then the proposal distribution is fixed and not random, and this sampler reduces to a standard Metropolis–Hastings sampler. The transition kernel then admits a stationary distribution, provided a proper proposal distribution is used. That is, define
and as the corresponding probability measure. Then, on Θ × Φ we have
Note, however, that this algorithm is also infeasible because, while we can draw proposals for θ, since the proposal distribution is known given 𝒢 n, p (κ)(θ|Y, φ) remains intractable, so we cannot calculate the acceptance probability.
3.2. Convergence of the stochastic approximation cut algorithm
The infeasibility of the partial Gibbs sampler and the adaptive Metropolis–Hastings sampler motivates the development of the stochastic approximation cut algorithm, which replaces the proposal distribution by its target p (κ) in the accept–reject step of the adaptive Metropolis–Hastings sampler. This leads to the same acceptance probability as is used by the partial Gibbs sampler, so the proposed algorithm can be viewed as combining the advantages of both the partial Gibbs sampler and the adaptive Metropolis–Hastings sampler. The transition kernel of the stochastic approximation cut algorithm is
where δ is the multivariate Dirac delta function. Conditional on 𝒢 n, the transition kernel is Markovian, and we write the corresponding probability measure accordingly. Given information up to 𝒢 n, and stopping the updating of the auxiliary process, the kernel is fixed and not random, and we define the s-step transition kernel as
and write the corresponding probability measure accordingly.
We now present several lemmas required to prove a weak law of large numbers for this algorithm (proofs in supplementary material (Online Resource 1)), appropriately modifying the reasoning of Meyn and Tweedie (1994), Roberts and Tweedie (1996) and Liang et al. (2016) for this setting.
Assumption 2 The posterior density p(φ|Z) is continuous on Φ, and the proposal distribution q(φ′|φ) is continuous on Φ × Φ.
Lemma 2 (Diminishing adaptation) Given Assumptions 1 and 2, then
Before presenting the next lemma, we introduce the concept of local positivity.
Definition 2 A proposal distribution q satisfies local positivity if there exist δ > 0 and ε > 0 such that, for every θ, θ′ ∈ Θ, ‖θ − θ′‖ ≤ δ implies that q(θ′|θ) ≥ ε.
Lemma 3 Given Assumption 1, the proposal densities (12) and p (κ) : Θ → ℝ are both uniformly bounded away from 0 and satisfy local positivity uniformly for all values φ ∈ Φ.
Lemma 4 (Stationarity) Given Assumptions 1 and 2, and given the filtration 𝒢 n (i.e. the proposal distribution is not random), if the transition kernel measures U (1) of the partial Gibbs sampler and that of the adaptive Metropolis–Hastings sampler both admit an irreducible and aperiodic Markov chain, then the transition kernel measure of the stochastic approximation cut algorithm admits an irreducible and aperiodic chain. Moreover, if the proposal distribution satisfies local positivity, then there exists a probability measure Π n on Θ × Φ such that for any (θ 0, φ 0) ∈ Θ × Φ,
and this convergence is uniform over Θ × Φ.
Lemma 5 (Asymptotic Simultaneous Uniform Ergodicity) Given Assumptions 1 and 2 and the assumptions in Lemma 4, for any initial value (θ 0, φ 0) ∈ Θ × Φ, and any ε > 0 and e > 0, there exist constants S(ε) > 0 and N(ε) > 0 such that
for all s > S(ε) and n > N(ε).
Lemma 2 leads to condition (c) (diminishing adaptation), Lemma 4 leads to condition (a) (stationarity), and Lemma 5 leads to condition (b) (asymptotic simultaneous uniform ergodicity) in Theorem 3.2 of Liang et al. (2016). Hence, we have the following weak law of large numbers.
Theorem 2 (WLLN) Suppose that the conditions of Lemma 5 hold. Let f be any measurable bounded function on Θ×Φ. Then, for samples (θn , φn ), n = 1, 2, … drawn using the stochastic approximation cut algorithm, we have that
in probability.
Proof This follows by Theorem 3.2 in Liang et al. (2016).
Given further conditions and combining Corollary 1 with Theorem 2, we have the following corollary.
Corollary 2 Suppose that the conditions in Corollary 1 hold for the cut distribution p cut and that the conditions in Theorem 2 hold. Then, given a measurable and bounded function f : Θ × Φ → ℝ, there exist, for any ε > 0 and e > 0, a precision parameter κ and an iteration number N, such that for samples (θ n, φ n), n = 1, 2, … drawn using the stochastic approximation cut algorithm, we have that
More specifically, the bias
can be controlled by
Corollary 2 shows that, although the convergence established by Theorem 2 is biased with respect to the true cut distribution p cut, the bias can be geometrically reduced by selecting a large precision parameter κ.
4. Illustrative examples
We demonstrate the proposed algorithm in this section. First, we use a simulation example to introduce a simple method for choosing the precision parameter κ and demonstrate that the proposed algorithm can eliminate the feedback from a suspect module. We then examine a simulated case designed to highlight when existing algorithms will perform poorly. We finally apply our algorithm to an epidemiological example and compare results with existing studies. The R package SACut and code to replicate these examples can be downloaded from GitHub.
4.1. Simulated random effects example
In this example, we discuss a simple method for selecting the precision parameter κ and show that the proposed algorithm can effectively cut the feedback from a suspect module.
We consider a simple normal–normal random effects example previously discussed by Liu et al. (2009), with groups i = 1, …, N = 100, observations in each group, and random effects distribution β i ∼ N(0, θ²). Our aim is to estimate the random effects standard deviation θ and the residual standard deviations φ = (φ 1, …, φ N). By sufficiency, the likelihood can be equivalently represented in terms of the group-specific means and the sums of squared deviations as
Given the sufficient statistics and , the model consists of two modules: module 1 involving (s 2, φ) and module 2 involving , where β = (β 1,…,βN ).
We consider the situation when an outlier group is observed, meaning that module 2 is misspecified, and compare the standard Bayesian posterior distribution with the cut distribution. Specifically, we simulate data from the model with θ² = 2, and with the residual parameters drawn from a Unif(0.5, 1.5) distribution, but we artificially set β1 = 10, making the first group an outlier and thus our normal assumption for the random effects misspecified. Given priors for θ and φ, Liu et al. (2009) showed that the standard Bayesian marginal posterior distribution for the parameters of interest is:
Since we are confident about our assumption of normality of Y_ij but not confident about our distributional assumption for the random effects β_i, following Liu et al. (2009), we consider the cut distribution in which we remove the influence of ȳ on φ, so that possible misspecification of the second module does not affect φ:
where
To apply the proposed algorithm, we first construct the auxiliary parameter set for the parameter φ by selecting 70 samples from posterior samples of p(φ|s²) via the Max-Min procedure (Liang et al. 2016). We set the shrink magnitude n_0 = 1000 and run only the auxiliary chain for 10^4 iterations before starting to store the auxiliary variable h_n, as suggested by Liang et al. (2016).
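The Max-Min procedure of Liang et al. (2016) is not reproduced in this paper; a common reading of it is a greedy farthest-point selection, sketched below under that assumption (the function name and details are ours, not the authors').

```python
import numpy as np

def max_min_select(samples, m, start=0):
    """Greedily pick m rows of `samples` so that each new point maximises
    the minimum Euclidean distance to the points already chosen."""
    samples = np.asarray(samples, dtype=float)
    if samples.ndim == 1:
        samples = samples[:, None]
    chosen = [start]
    min_dist = np.linalg.norm(samples - samples[start], axis=1)
    for _ in range(m - 1):
        nxt = int(np.argmax(min_dist))   # farthest point from the current set
        chosen.append(nxt)
        min_dist = np.minimum(min_dist,
                              np.linalg.norm(samples - samples[nxt], axis=1))
    return samples[chosen]
```

Applied to posterior draws of φ, this yields a well-spread auxiliary set such as the 70 points used above.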
The precision parameter κ should be chosen large enough to obtain accurate results, while being small enough that computation is not prohibitively slow. To illustrate this, we compare results with κ = 10, which we regard as the gold standard, to results with κ = 1, 2, 3, 4. Different values of κ affect the sampling of θ only via (11), so we compare samples drawn from p_κ(θ|Y, φ), averaged over the marginal cut distribution of φ:
(13)
where the marginal cut distribution p_cut(φ) is
We draw 10^5 samples from (13) for each value of κ, after running the proposed algorithm for a small number of iterations (10^4) as a preliminary trial. Figure 4 shows the quantile–quantile plot for the 5 choices of κ. The fit appears good for all choices of κ, except in the tails, where κ = 3 and κ = 4 provide a closer match to the gold standard. Thus, we choose κ = 3 as it gives a sufficiently accurate approximation.
Fig. 4.
Quantile–quantile plot for θ drawn from (13) with precision parameter κ = 1, 2, 3, 4, 10. The x-axis of the quantile–quantile plot is the quantile of samples under different κ, and the y-axis is the quantile of samples under the gold standard κ = 10
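The visual comparison behind Fig. 4 can be automated: compute the discrepancy between matched quantiles of the candidate draws and the gold-standard draws, and take the smallest κ within a tolerance. This is our own sketch of such a rule; the tolerance and quantile grid are assumptions, not choices made in the paper.

```python
import numpy as np

def quantile_discrepancy(candidate, gold, probs=np.linspace(0.01, 0.99, 99)):
    """Largest absolute gap between matched quantiles of two sample sets."""
    return float(np.max(np.abs(np.quantile(candidate, probs)
                               - np.quantile(gold, probs))))

def choose_kappa(samples_by_kappa, gold, tol):
    """Smallest kappa whose draws match the gold-standard draws within tol."""
    for kappa in sorted(samples_by_kappa):
        if quantile_discrepancy(samples_by_kappa[kappa], gold) <= tol:
            return kappa
    return max(samples_by_kappa)   # fall back to the largest candidate
```

Here `samples_by_kappa` maps each candidate κ to its draws of θ from (13), and `gold` holds the κ = 10 draws.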
We apply both the standard Bayesian approach and the stochastic approximation cut algorithm (κ = 3), each with ten independent chains. All chains were run for 10^5 iterations; we retain only every 100th value after discarding the first 10% of the samples, and we summarize the results by the mean and credible interval (CrI). Pooling the ten chains for the cut distribution gave estimates of θ² = 2.54 (95% CrI 1.93–3.44), whereas the standard Bayesian approach gave θ² = 2.53 (95% CrI 1.93–3.44). Figure 5 presents the medians for the parameter of interest under each of the ten independent runs for the cut distribution and the standard Bayesian posterior. Recalling its true value, it is clear that when using the stochastic approximation cut algorithm the medians locate around the true value rather than deviating systematically towards one side. This indicates the proposed algorithm has successfully prevented the outlying observation from influencing the estimation.
Fig. 5.
Box plot of median estimates of the parameter of interest from each of ten independent runs, under the cut distribution and the standard Bayesian posterior. The dashed line indicates the true value
4.2. Simulated strong dependence between θ and φ
In this section, we apply our algorithm in a simulated setting that illustrates when nested MCMC (Plummer 2015) can perform poorly. Consider the case when the distribution of θ is highly dependent on φ. In this case, if the distance between successive values φ′ and φ in the external MCMC chain is large, the weight function may not be close to 1, and so the internal chain will typically require more iterations to reach convergence. This will be particularly problematic if the mixing time for the proposal distribution is large.
To simulate this scenario, we consider a linear regression for outcomes Y_i, i = 1,…,50, in which the coefficient vector θ = (θ_1,…,θ_d) is closely related to the coefficient φ for covariates X_i = (X_θ,i, X_φ,i). To assess performance under small and moderate dimension of θ, we consider d = 1 and d = 20 in this illustration. In addition to observations of the outcome Y_i and the covariates X_i, we assume we have separate observations Z_j, j = 1,…,100, related to the coefficient φ.
(14)
Suppose that we wish to estimate φ solely on the basis of Z = (Z_1,…,Z_100), and so we cut the feedback from Y = (Y_1,…,Y_50) to φ.
We generate Y and Z according to (14), with φ = 1 and θ_p = sin(p), p = 1,…,d, and compare the results of the stochastic approximation cut (SACut) algorithm, the naive SACut algorithm and nested MCMC with internal chain length n_int = 1, 10, 200, 500, 1000, 1500 and 2000. Notably, nested MCMC with n_int = 1 is the WinBUGS algorithm. The proposal distribution for each element of φ is a normal distribution, centred at the previous value and with standard deviation 0.25, and the proposal distribution for θ used in the nested MCMC is a normal distribution, centred at the previous value and with standard deviation 10^-5. The priors for both parameters are uniform on a compact domain. We set the shrink magnitude n_0 = 2000 and precision parameters κ_p = 4, p = 1,…,20. The SACut and naive SACut algorithms are processed in parallel on ten cores of an Intel Xeon E7-8860 v3 CPU (2.2 GHz), and the (inherently serial) nested MCMC algorithm is processed on a single core. All algorithms were independently run 20 times, and the results are averages across runs. Each run consists of 5 × 10^4 iterations. We retain only every tenth value after discarding the first 40% of samples as burn-in.
To assess the performance of these algorithms, we compare their estimates of 𝔼(θ), the lag-1 auto-correlation of samples, the Gelman–Rubin diagnostic statistic (Gelman and Rubin 1992) and the average time needed for a whole run. The precision of the estimate of θ is measured by the mean squared error (MSE) across its d (either 1 or 20) components. Convergence is evaluated by averaging the Gelman–Rubin diagnostic statistic across the d components.
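For reference, the two convergence diagnostics used here can be computed as follows; this is a self-contained sketch of the standard definitions for a single scalar parameter, not the paper's own implementation.

```python
import numpy as np

def lag1_autocorr(chain):
    """Lag-1 auto-correlation of a single chain."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    return float(np.sum(x[1:] * x[:-1]) / np.sum(x * x))

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor for an (m, n) array
    holding m chains of length n for one scalar parameter."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return float(np.sqrt(var_hat / W))
```

Values of the Gelman–Rubin statistic close to 1 indicate convergence; Brooks and Gelman (1998) suggest 1.2 as a practical threshold.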
Results are shown in Table 1. The time required to run the nested MCMC algorithm increases as the length of the internal chain or the dimension of θ increases, although the influence of the dimension of θ is relatively small. In the low-dimensional case (d = 1), the time needed to run SACut and naive SACut is greater than the time needed to run the WinBUGS algorithm and the nested MCMC algorithm when the length of the internal chain is less than 500, but both the MSE and the Gelman–Rubin statistic are lower when using the SACut algorithm. In particular, the bias of the WinBUGS algorithm is large. There is only a trivial difference in bias between SACut and nested MCMC when n_int ≥ 1000, but SACut is significantly faster than nested MCMC. In the higher-dimensional case (d = 20), both SACut and naive SACut significantly outperform the WinBUGS and nested MCMC algorithms in terms of MSE. Although the difference between SACut and nested MCMC with n_int = 1000 is small, the Gelman–Rubin statistic of the nested MCMC is still larger than the threshold 1.2 suggested by Brooks and Gelman (1998). The MCMC chains produced by the nested MCMC converge better and the bias is smaller when n_int ≥ 1500, but the SACut algorithm still outperforms it according to the MSE and Gelman–Rubin statistic, and takes less time. It is also clear that nested MCMC samples show very strong auto-correlation in both cases, and thinning may not efficiently solve this issue (Link and Eaton 2012); both SACut and naive SACut do not show any auto-correlation. We also note that the estimates provided by SACut and naive SACut are almost identical in practice. However, since the time needed for both algorithms is almost the same, providing the full approach with a more solid theoretical foundation is a valuable contribution to the computational statistics literature for the cut distribution.
Table 1.
Mean squared error (MSE), lag-1 auto-correlation (in absolute value) |AC|, Gelman–Rubin statistic R̂, and clock time for the stochastic approximation cut (SACut) algorithm, the naive SACut algorithm, the WinBUGS algorithm, the nested MCMC algorithm (with varying internal chain length n_int) and the unbiased coupling algorithm
d | Algorithm | n_int | MSE ×10^3 | |AC| | R̂ | Time (min)
---|---|---|---|---|---|---
1 | SACut | – | 0.112 | 0.019 | 1.00 | 311
1 | Naive SACut | – | 0.114 | 0.016 | 1.00 | 308
1 | WinBUGS | 1 | 355.280 | 0.999 | 308.78 | 1
1 | Nested MCMC | 10 | 217.907 | 0.999 | 29.87 | 10
1 | Nested MCMC | 200 | 0.158 | 0.997 | 1.74 | 182
1 | Nested MCMC | 500 | 0.138 | 0.993 | 1.25 | 454
1 | Nested MCMC | 1000 | 0.109 | 0.990 | 1.07 | 910
1 | Nested MCMC | 1500 | 0.113 | 0.986 | 1.08 | 1349
1 | Nested MCMC | 2000 | 0.118 | 0.981 | 1.05 | 1771
1 | Unbiased Coupling | – | 0.114 | 0.012 | 1.01 | 22
20 | SACut | – | 1.42 | 0.009 | 1.00 | 1239
20 | Naive SACut | – | 1.47 | 0.002 | 1.01 | 1219
20 | WinBUGS | 1 | 16387.69 | 0.999 | 209.55 | 2
20 | Nested MCMC | 10 | 12490.25 | 0.999 | 22.73 | 11
20 | Nested MCMC | 200 | 249.18 | 0.999 | 2.38 | 259
20 | Nested MCMC | 500 | 10.76 | 0.997 | 1.33 | 517
20 | Nested MCMC | 1000 | 1.86 | 0.994 | 1.22 | 1010
20 | Nested MCMC | 1500 | 1.69 | 0.991 | 1.19 | 1515
20 | Nested MCMC | 2000 | 1.60 | 0.988 | 1.11 | 2058
20 | Unbiased Coupling | – | 1.36 | 0.013 | 1.00 | 2030
All values are means across 20 independent runs
Jacob et al. (2020) recently proposed an unbiased coupling algorithm which can sample from the cut distribution. It requires running coupled Markov chains in which samples from each chain marginally target the same distribution; the estimator is completely unbiased once the two chains meet. Drawing samples from the cut distribution using the unbiased coupling algorithm typically involves two stages. In general, the first stage involves running coupled chains for φ until they meet. For each sampled φ, the second stage involves running another set of coupled chains for θ until they meet. Although the algorithm is unbiased, as illustrated in Sects. 4.2 and 4.3 and the discussion of Jacob et al. (2020), the number of iterations for the coupled chains is determined by the meeting times, which can be very large, especially when the dimension of the parameter is high. As a comparison, we apply the unbiased coupling algorithm to this example using the R package “unbiasedmcmc” provided by Jacob et al. (2020). To simplify the implementation and computation of the unbiased coupling algorithm, we consider a simplified scenario with an informative conjugate prior for φ, meaning we can omit the first stage and instead directly draw 5 × 10^4 samples from p(φ|Z). This prior is normal with mean equal to the true value of φ. We then ran preliminary coupled chains for θ targeting p(θ|Y, φ) given these samples of φ so as to sample the meeting times. Over the 5 × 10^4 independent runs, the 95% and 99% quantiles of meeting times were 44 and 147, respectively, when d = 1. Although the majority of meeting times are relatively small, their 95% and 99% quantiles were 3525 and 5442, respectively, when d = 20. To ensure that the total number of iterations covers the majority of meeting times, following Jacob et al. (2020), we set the minimum number of iterations for each coupled chain to ten times the 95% quantile of meeting times.
The algorithm was processed in parallel on the same ten cores as SACut, and the final results are shown in Table 1. Notably, unlike the nested MCMC algorithm, the computational time of the unbiased coupling algorithm increases significantly as the dimension of θ increases, because it takes more time for the coupled chains to meet in high-dimensional cases. In the low-dimensional case (d = 1), the unbiased coupling algorithm performs better according to all metrics. In the higher-dimensional case (d = 20), the unbiased coupling algorithm achieves similar MSE to the SACut algorithm, but it takes considerably more computation time than SACut, even though the unbiased coupling algorithm was conducted under a simplified setting (i.e. no coupled chains for φ).
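The coupling construction at the heart of the Jacob et al. (2020) approach can be sketched in a few lines. Below is the standard maximal coupling of two distributions p and q (a generic Python sketch of ours, not the authors' R implementation): it returns marginally correct draws from each distribution while making the two outputs equal, and hence letting the chains meet, with the highest possible probability.

```python
import math
import random

def maximal_coupling(p_sample, p_logpdf, q_sample, q_logpdf):
    """Return (X, Y) with X ~ p and Y ~ q, maximising P(X = Y).
    The log-densities need only be correct up to the same additive constant."""
    X = p_sample()
    # Accept X as a common value with probability min(1, q(X)/p(X))
    if math.log(random.random()) <= q_logpdf(X) - p_logpdf(X):
        return X, X
    # Otherwise draw Y from the residual part of q
    while True:
        Y = q_sample()
        if math.log(random.random()) > p_logpdf(Y) - q_logpdf(Y):
            return X, Y
```

The probability that the two outputs coincide equals one minus the total variation distance between p and q, which is why meeting times grow quickly as the chains' marginals separate in high dimensions.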
4.3. Epidemiological example
We now consider an epidemiological study of the relation between high-risk human papillomavirus (HPV) prevalence and cervical cancer incidence (Maucort-Boulch et al. 2008), which was previously discussed by Plummer (2015). In this study, age-stratified HPV prevalence data and cancer incidence data were collected from 13 cities. The model is divided into two modules. The first module concerns the number of people with HPV infection in city i, denoted Z_i, out of a sample of N_i women:
The second module describes the relation between the number of cancer cases Y_i from T_i person-years and the incidence, which is assumed to be linked with φ_i by a log-linear relationship:
The log-linear dose–response relationship is speculative, so we apply the cut algorithm to prevent the feedback from the second module to the estimation of φi (Plummer 2015).
We apply the stochastic approximation cut algorithm and compare results with the standard Bayesian approach (i.e. without a cut). Both algorithms were run ten times independently, each with 1.4 × 10^5 iterations. We set the shrink magnitude n_0 = 20,000 and precision parameters κ_1 = 3 for θ_1 and κ_2 = 2 for θ_2. We retain only every 100th value after discarding the first 4 × 10^4 samples as burn-in. The pooled results for θ are shown in Fig. 6, highlighting the considerable effect of cutting feedback in this example. Our results are consistent with existing studies: specifically, the scatter plot and density plot agree with Jacob et al. (2017) and Carmona and Nicholls (2020). Our results are also consistent with the results of the nested MCMC algorithm when its internal chain length is largest (see Plummer (2015)). This again shows that the SACut algorithm provides similar estimates to the nested MCMC algorithm with a large internal chain length.
Fig. 6. Comparison of the distribution of θ_1 and θ_2 drawn from the cut distribution (red) and the standard Bayesian posterior (blue).
5. Conclusion
We have proposed a new algorithm for approximating the cut distribution that improves on the WinBUGS algorithm and the approximate approaches in Plummer (2015). Our approach approximates the intractable marginal likelihood p(Y|φ) using stochastic approximation Monte Carlo (Liang et al. 2007). The algorithm avoids the weakness of approximate approaches that insert an “internal limit” into each iteration of the main Markov chain. Admittedly, one can argue that approximate approaches can be revised by setting the length of the internal chain to the number of iterations, i.e. n_int = n, so that the internal length diverges with n. However, since the sampling at each iteration is still not perfect and bias is inevitably introduced, the convergence of the main Markov chain remains unclear and the potential limit is not known. We proved convergence of the samples drawn by our algorithm and presented the exact limit, though its convergence rate is not fully studied and needs further investigation. Although the bias is not completely removed by our algorithm, the degree of the bias is explicit in the sense that the shape of p_κ(θ|Y,φ) is known, since the shape of p(θ|Y,φ) is normally obtainable given a fixed φ. Corollary 2 shows that the bias in our approach can be reduced by increasing the precision parameter κ. We proposed that κ be selected by comparing results across a range of choices; quantitative selection of this precision parameter still needs further study.
Existing approximate approaches (Plummer 2015), which need an infinitely long internal chain, may be computationally slow, because the internal chain requires sequential calculation so parallelization is not possible. In contrast, thanks to the embarrassingly parallel calculation of (5), our algorithm can be more computationally efficient when multiple computer cores are available, although the per-iteration time of our algorithm grows as the Markov chain runs due to the increasing size of the collection of auxiliary variables.
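As a schematic of this parallelism (equation (5) is not reproduced here, so we stand in a generic weighted sum over the auxiliary collection; the function names and thread-based pool are our assumptions):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_weighted_sum(weights, terms, n_workers=4):
    """Compute sum_k weights[k] * terms[k] by splitting the index range
    across workers; each summand is independent, so no coordination is needed."""
    idx_chunks = np.array_split(np.arange(len(terms)), n_workers)

    def partial_sum(idx):
        return float(np.dot(weights[idx], terms[idx]))

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(partial_sum, idx_chunks))
```

Because the summands are independent, the work scales across however many cores are available, which is the property exploited above.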
Lastly, while the adaptive exchange algorithm (Liang et al. 2016) is used for intractable normalizing problems when the normalizing function is an integral with respect to the observed data, it would be interesting to investigate the use of our algorithm for other problems involving a normalizing function that is an integral with respect to the unknown parameter. For example, our algorithm can be directly extended to sample from the recently developed semi-modular inference distribution (Carmona and Nicholls 2020) which generalizes the cut distribution.
Supplementary Material
Acknowledgements
The authors thank Daniela De Angelis and Simon R. White for helpful discussions and suggestions.
Yang Liu was supported by a Cambridge International Scholarship from the Cambridge Commonwealth, European and International Trust. Robert J.B. Goudie was funded by the UK Medical Research Council [programme code MC_UU_00002/2]. The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care.
Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Atchadé Y, Fort G, Moulines E, Priouret P. Adaptive Markov chain Monte Carlo: theory and methods. In: Barber D, Cemgil AT, Chiappa S, editors. Bayesian Time Series Models. Cambridge University Press; 2011. pp. 32–51.
- Bhattacharya A, Pati D, Yang Y. Bayesian fractional posteriors. Ann Stat. 2019;47(1):39–66.
- Blangiardo M, Hansell A, Richardson S. A Bayesian model of time activity data to investigate health effect of air pollution in time series studies. Atmos Environ. 2011;45(2):379–386.
- Brooks SP, Gelman A. General methods for monitoring convergence of iterative simulations. J Comput Graph Stat. 1998;7(4):434–455.
- Carmona CU, Nicholls GK. Semi-modular inference: enhanced learning in multi-modular models by tempering the influence of components. In: Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics; 2020. pp. 4226–4235.
- Fu JC, Wang L. A random-discretization based Monte Carlo sampling method and its applications. Methodol Comput Appl Probab. 2002;4(1):5–25.
- Gelman A, Rubin DB. Inference from iterative simulation using multiple sequences. Stat Sci. 1992;7(4):457–472.
- Gottardo R, Raftery AE. Markov chain Monte Carlo with mixtures of mutually singular distributions. J Comput Graph Stat. 2008;17(4):949–975.
- Haario H, Saksman E, Tamminen J. An adaptive Metropolis algorithm. Bernoulli. 2001;7(2):223–242.
- Huang B, Wu B, Barry M. Geographically and temporally weighted regression for modeling spatio-temporal variation in house prices. Int J Geogr Inf Sci. 2010;24(3):383–401.
- Jacob PE, Murray LM, Holmes CC, Robert CP. Better together? Statistical learning in models made of modules. Preprint. 2017. arXiv:1708.08719.
- Jacob PE, O’Leary J, Atchadé YF. Unbiased Markov chain Monte Carlo methods with couplings. J R Stat Soc B. 2020;82(3):543–600.
- Liang F. Dynamically weighted importance sampling in Monte Carlo computation. J Am Stat Assoc. 2002;97(459):807–821.
- Liang F. A double Metropolis–Hastings sampler for spatial models with intractable normalizing constants. J Stat Comput Simul. 2010;80(9):1007–1022.
- Liang F, Liu C, Carroll RJ. Stochastic approximation in Monte Carlo computation. J Am Stat Assoc. 2007;102(477):305–320.
- Liang F, Jin IH, Song Q, Liu JS. An adaptive exchange algorithm for sampling from distributions with intractable normalizing constants. J Am Stat Assoc. 2016;111(513):377–393.
- Link WA, Eaton MJ. On thinning of chains in MCMC. Methods Ecol Evol. 2012;3(1):112–115.
- Liu F, Bayarri M, Berger J. Modularization in Bayesian analysis, with emphasis on analysis of computer models. Bayesian Anal. 2009;4(1):119–150.
- Liu Y, Lam K-F, Wu JT, Lam TT-Y. Geographically weighted temporally correlated logistic regression model. Sci Rep. 2018;8(1):1417. doi:10.1038/s41598-018-19772-6.
- Lunn D, Best N, Spiegelhalter D, Graham G, Neuenschwander B. Combining MCMC with ‘sequential’ PKPD modelling. J Pharmacokinet Phar. 2009a;36(1):19. doi:10.1007/s10928-008-9109-1.
- Lunn D, Spiegelhalter D, Thomas A, Best N. The BUGS project: evolution, critique and future directions. Stat Med. 2009b;28(25):3049–3067. doi:10.1002/sim.3680.
- Malefaki S, Iliopoulos G. Simulation from a target distribution based on discretization and weighting. Commun Stat Simul Comput. 2009;38(4):829–845.
- Maucort-Boulch D, Franceschi S, Plummer M. International correlation between human papillomavirus prevalence and cervical cancer incidence. Cancer Epidem Biomar. 2008;17(3):717–720. doi:10.1158/1055-9965.EPI-07-2691.
- McCandless LC, Douglas IJ, Evans SJ, Smeeth L. Cutting feedback in Bayesian regression adjustment for the propensity score. Int J Biostat. 2010;6(2):16. doi:10.2202/1557-4679.1205.
- Meyn SP, Tweedie RL. Computable bounds for geometric convergence rates of Markov chains. Ann Appl Probab. 1994;4(4):981–1011.
- Meyn S, Tweedie RL, Glynn PW. Markov Chains and Stochastic Stability. 2nd edn. Cambridge Mathematical Library. Cambridge University Press; 2009.
- Miller JW, Dunson DB. Robust Bayesian inference via coarsening. J Am Stat Assoc. 2019;114(527):1113–1125. doi:10.1080/01621459.2018.1469995.
- Møller J, Pettitt AN, Reeves R, Berthelsen KK. An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants. Biometrika. 2006;93(2):451–458.
- Murray I, Ghahramani Z, MacKay DJC. MCMC for doubly-intractable distributions. In: Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, UAI’06. Arlington, VA, USA: AUAI Press; 2006. pp. 359–366.
- Nakaya T, Fotheringham AS, Brunsdon C, Charlton M. Geographically weighted Poisson regression for disease association mapping. Stat Med. 2005;24(17):2695–2717. doi:10.1002/sim.2129.
- Park J, Haran M. Bayesian inference in the presence of intractable normalizing functions. J Am Stat Assoc. 2018;113(523):1372–1390.
- Plummer M. Cuts in Bayesian graphical models. Stat Comput. 2015;25(1):37–43.
- Roberts GO, Tweedie RL. Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika. 1996;83(1):95–110.
- Walker SG. Bayesian inference with misspecified models. J Stat Plan Inference. 2013;143(10):1621–1633.
- Zigler CM. The central role of Bayes’ theorem for joint estimation of causal effects and propensity scores. Am Stat. 2016;70(1):47–54. doi:10.1080/00031305.2015.1111260.