
Algorithm 1.

The Multiresolution Sampler

  1. Let i = 0. Start from the k-resolution chain. Collect samples from f_k(θ, Y^{(k)} | Y) using any combination of local updating algorithms.

  2. Discard some initial samples as burn-in, and retain the remaining samples as the empirical distribution of (Y^{(k)}, θ) from f_k(θ, Y^{(k)} | Y).

  3. Let i ← i + 1. Start the (k+i)-resolution chain. Initialize the chain to a state (Y_old^{(k+i)}, θ_old).

  4. With probability 1 − p, perform a local update step to generate a new sample from f_{k+i}(θ, Y^{(k+i)} | Y), using any combination of local updates.

  5. With probability p, perform a cross-resolution move:
    1. Randomly select a state (Y_trial^{(k+i-1)}, θ_trial) from the empirical distribution of the (k+i-1)-resolution chain.
    2. Augment (Y_trial^{(k+i-1)}, θ_trial) to (Y_trial^{(k+i)}, θ_trial) by generating the additional missing-data values.
    3. With a Metropolis-Hastings-type probability r, accept (Y_trial^{(k+i)}, θ_trial) as the next sample in the chain; with probability 1 − r, keep the previous values (Y_old^{(k+i)}, θ_old) as the next sample in the chain.
  6. Rename the most recent draw as (Y_old^{(k+i)}, θ_old), and repeat from Step 4 until the desired number of samples is obtained (typically determined in part by monitoring the chain for sufficient evidence of convergence).

  7. Discard some initial samples of the chain as burn-in, and retain the remaining samples to form an empirical distribution of (Y^{(k+i)}, θ) from f_{k+i}(θ, Y^{(k+i)} | Y). If a finer approximation to the SDE is desired, repeat from Step 3.
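
To make the control flow of Algorithm 1 concrete, the following is a minimal Python sketch of Steps 3-7 for a single (k+i)-resolution chain. It is only an illustration of the loop structure, not the authors' implementation: the function and parameter names (run_fine_chain, local_update, augment, log_accept_ratio, p_cross) are assumptions, and the problem-specific pieces (the local MCMC updates, the augmentation of the coarse state with the extra missing data, and the acceptance ratio r) must be supplied by the user.

```python
import math
import random


def run_fine_chain(coarse_samples, init_state, local_update, augment,
                   log_accept_ratio, n_samples, p_cross=0.3, burn_in=0):
    """Control-flow sketch of Steps 3-7 of Algorithm 1 for one (k+i)-resolution chain.

    coarse_samples   : list of (Y, theta) pairs retained from the (k+i-1)-resolution chain
    init_state       : initial (Y_old, theta_old) pair for this chain (Step 3)
    local_update     : callable mapping a (Y, theta) pair to a new pair (Step 4)
    augment          : callable (Y_coarse, theta) -> Y_fine filling in extra missing data (Step 5b)
    log_accept_ratio : callable (proposal, current) -> log r for the cross-resolution move (Step 5c)
    """
    state = init_state
    chain = []
    for _ in range(n_samples):
        if random.random() < p_cross:
            # Step 5: cross-resolution move seeded by the coarser chain.
            y_coarse, theta_trial = random.choice(coarse_samples)   # Step 5a
            y_trial = augment(y_coarse, theta_trial)                # Step 5b
            proposal = (y_trial, theta_trial)
            log_r = log_accept_ratio(proposal, state)               # Step 5c
            if log_r >= 0 or random.random() < math.exp(log_r):
                state = proposal                                    # accept with probability r
        else:
            # Step 4: local update of (Y^{(k+i)}, theta).
            state = local_update(state)
        chain.append(state)                                         # Step 6
    return chain[burn_in:]                                          # Step 7
```

In this sketch, log_accept_ratio would return the logarithm of the Metropolis-Hastings-type probability r from Step 5c; its exact form, which involves f_{k+i} and the cross-resolution proposal built from the coarse-chain empirical distribution and the augmentation density, is given in the paper.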