Author manuscript; available in PMC: 2013 Oct 28.
Published in final edited form as: Bayesian Anal. 2012 Sep;7(3):10.1214/12-BA720. doi: 10.1214/12-BA720
Repeat L iterations of Steps 1–6:
 Step 1: Generate Y1 ~ f(Y1 | x1, θ̄12)
  Repeat for i = 2, …, M:
   Step 2: Evaluate g(xi | Y1, …, Yi−1, x1, …, xi−1, θ̄12). If g(·) is a deterministic function (as in dose allocation), record the value as xi. If g(·) is a non-degenerate distribution, generate xi accordingly.
   Step 3: Generate Yi ~ f(Yi | Y1, …, Yi−1, xi, θ̄12), using xi from Step 2.
 Step 4: The vector YM = (Y1, …, YM) is a simulated outcome vector.
 Step 5: Evaluate the posterior variance-covariance matrix Σ(θ | YM). This evaluation might require Markov chain Monte Carlo integration when the model is not conjugate.
 Step 6: Evaluate det{Σ−1(YM)}, the determinant of the inverse of the posterior covariance matrix from Step 5.
Step 7: The average over all L iterations of Steps 1 through 6,

 (1/L) ∑l=1,…,L det{Σ−1(YM(l))} ≈ ∫ det{Σ−1(YM)} fM(YM | θ̄12) dYM,  (16)

approximates the desired integral. The sum is over the L repeat simulations of Steps 1 through 6, plugging in the vector YM(l) generated in the l-th repetition.
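The simulation scheme above can be sketched in Python. Everything model-specific below is a hypothetical stand-in, not the paper's model: a conjugate normal linear model is assumed so that the posterior covariance in Step 5 has a closed form and no MCMC is needed, and g is a toy deterministic allocation rule. The loop structure of Steps 1 through 7 is the point of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (all hypothetical): outcomes follow a linear
# model Y_i = theta0 + theta1 * x_i + eps, eps ~ N(0, sigma2), with a
# conjugate normal prior on theta = (theta0, theta1), so Sigma(theta | Y_M)
# in Step 5 is available in closed form.
sigma2 = 1.0                       # known outcome variance (assumption)
prior_prec = np.eye(2)             # prior precision of theta (assumption)
theta_bar = np.array([0.0, 1.0])   # plug-in value for theta-bar_12 (assumption)
M, L = 10, 1000                    # cohort size M, number of simulations L
x1 = 0.5                           # fixed first design point (assumption)

def f_sample(x):
    """Steps 1 and 3: generate Y_i ~ f(Y_i | x_i, theta_bar)."""
    return rng.normal(theta_bar[0] + theta_bar[1] * x, np.sqrt(sigma2))

def g(ys, xs):
    """Step 2: a deterministic toy allocation rule that steps the dose up
    after a lower-than-expected outcome and down otherwise."""
    expected = theta_bar[0] + theta_bar[1] * xs[-1]
    return xs[-1] + (0.1 if ys[-1] < expected else -0.1)

def det_inv_post_cov(xs):
    """Steps 5-6: det{Sigma^{-1}(Y_M)}. In this conjugate model the
    posterior precision depends on Y_M only through the adaptively chosen
    design points, which themselves depend on the simulated outcomes."""
    X = np.column_stack([np.ones(len(xs)), np.asarray(xs)])
    post_prec = prior_prec + X.T @ X / sigma2
    return np.linalg.det(post_prec)

# Repeat Steps 1-6 for L iterations; Step 7 averages the determinants,
# approximating the integral in equation (16).
dets = []
for _ in range(L):
    xs, ys = [x1], [f_sample(x1)]          # Step 1
    for i in range(1, M):
        xs.append(g(ys, xs))               # Step 2
        ys.append(f_sample(xs[-1]))        # Step 3
    dets.append(det_inv_post_cov(xs))      # Steps 4-6
criterion = np.mean(dets)                  # Step 7
print(criterion)
```

In a non-conjugate model, `det_inv_post_cov` would instead run an MCMC sampler on the simulated data and invert the empirical covariance of the posterior draws, which is where the bulk of the computing time would go.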