
Algorithm 2.2.

  1. Initialize $\hat{\eta}_{t:t+k-1}$ and calculate $\hat{h}_{t+1:t+k}$ using (3b).

  2. Evaluate $\hat{d}_s$, $\hat{M}_s$, and $\hat{N}_s$ using equations (B.1), (B.3), and (B.4), respectively.

  3. Compute $G_s$, $J_s$, and $b_s$, for $s = t+2, \dots, t+k$, recursively, as follows (these recursions, together with Step 4, are translated into code in the first sketch after this list):
    $$G_s = \hat{M}_s - \hat{N}_s^2 G_{s-1}^{-1}, \qquad G_{t+1} = \hat{M}_{t+1},$$
    $$J_s = K_{s-1}^{-1} \hat{N}_s, \qquad J_{t+1} = 0, \qquad J_{t+k+1} = 0,$$
    $$b_s = \hat{d}_s - J_s K_{s-1}^{-1} b_{s-1}, \qquad b_{t+1} = \hat{d}_{t+1},$$

    where $K_s = G_s$.

  4. Define the auxiliary variables $\hat{y}_s = \hat{\gamma}_s + G_s^{-1} b_s$, where
    $$\hat{\gamma}_s = \hat{h}_s + K_s^{-1} J_{s+1} \hat{h}_{s+1}, \qquad s = t+1, \dots, t+k.$$
  5. Consider the linear Gaussian state-space model
    $$\hat{y}_s = c_s + Z_s h_s + H_s \xi_s, \qquad s = t+1, \dots, t+k, \qquad (10)$$
    $$h_{s+1} = \alpha + \phi h_s + L_s \xi_s, \qquad s = t, t+1, \dots, t+k, \qquad (11)$$

    where $\xi_s \sim \mathcal{N}(0, I_2)$, $c_s = K_s^{-1} J_{s+1} \alpha$, $Z_s = 1 + K_s^{-1} J_{s+1} \phi$, $H_s = K_s^{-1} [1, J_{s+1} \sigma_\eta]$, and $L_s = [0, \sigma_\eta]$. Apply the Kalman filter and a disturbance smoother [25] to the linear Gaussian state-space model in equations (10) and (11) to obtain the posterior mean of $\eta_{t:t+k-1}$ (of $h_{t+1:t+k}$), and set $\hat{\eta}_{t:t+k-1}$ ($\hat{h}_{t+1:t+k}$) to this value (a filter/smoother sketch is given after this list).

  6. Return to Step 2 and repeat the procedure until convergence.
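
A small code sketch of Steps 3 and 4, together with the system quantities of Step 5, follows. It assumes $\hat{d}_s$, $\hat{M}_s$, $\hat{N}_s$ (from (B.1), (B.3), (B.4)) and the current $\hat{h}_{t+1:t+k}$ are held in NumPy arrays whose index $i$ corresponds to $s = t+1+i$; the function names, the array layout, and the reading of the $b_s$ recursion stated in Step 3 are illustrative assumptions, not taken from the paper.

import numpy as np

def forward_recursions(d_hat, M_hat, N_hat):
    """Step 3: G_s, J_s, b_s for s = t+1, ..., t+k (index i <-> s = t+1+i); K_s = G_s."""
    k = len(d_hat)
    G = np.empty(k)
    J = np.zeros(k + 1)                      # extra slot so that J_{t+k+1} = 0
    b = np.empty(k)
    G[0], b[0] = M_hat[0], d_hat[0]          # G_{t+1} = M_hat_{t+1}, b_{t+1} = d_hat_{t+1}; J_{t+1} = 0
    for i in range(1, k):                    # s = t+2, ..., t+k
        G[i] = M_hat[i] - N_hat[i] ** 2 / G[i - 1]
        J[i] = N_hat[i] / G[i - 1]                      # J_s = K_{s-1}^{-1} N_hat_s
        b[i] = d_hat[i] - J[i] * b[i - 1] / G[i - 1]    # b_s = d_hat_s - J_s K_{s-1}^{-1} b_{s-1}
    return G, J, b

def approximating_model(h_hat, G, J, b, alpha, phi, sigma_eta):
    """Steps 4-5: auxiliary observations y_hat_s and system quantities c_s, Z_s, H_s, L_s."""
    k = len(G)
    h_next = np.append(h_hat[1:], 0.0)       # h_hat_{s+1}; the dummy last value is killed by J_{t+k+1} = 0
    KinvJ = J[1:] / G                        # K_s^{-1} J_{s+1}
    gamma_hat = h_hat + KinvJ * h_next       # gamma_hat_s = h_hat_s + K_s^{-1} J_{s+1} h_hat_{s+1}
    y_hat = gamma_hat + b / G                # y_hat_s = gamma_hat_s + G_s^{-1} b_s
    c = KinvJ * alpha                        # c_s = K_s^{-1} J_{s+1} alpha
    Z = 1.0 + KinvJ * phi                    # Z_s = 1 + K_s^{-1} J_{s+1} phi
    H = np.column_stack([1.0 / G, KinvJ * sigma_eta])   # H_s = K_s^{-1} [1, J_{s+1} sigma_eta]
    L = np.array([0.0, sigma_eta])           # L_s = [0, sigma_eta], the same for every s
    return y_hat, c, Z, H, L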
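The filtering and smoothing in Step 5 can be sketched as below for the scalar state $h_s$. Because the observation and state equations (10)-(11) share the same $\xi_s$, their noise terms are correlated, and the gain below carries the cross term $L_s H_s'$. This is only a minimal sketch under stated assumptions: it uses a standard state smoother in place of the disturbance smoother of [25], backs $\hat{\eta}$ out of the smoothed state equation (assuming $\eta_s$ is the second component of $\xi_s$, so that $h_{s+1} = \alpha + \phi h_s + \sigma_\eta \eta_s$), and takes the value of $h_t$ entering (11) at $s = t$ as given; all names are illustrative.

import numpy as np

def kalman_smooth(y_hat, c, Z, H, L, alpha, phi, h_t):
    """Kalman filter and state smoother for the scalar model (10)-(11).

    Observation: y_hat_s = c_s + Z_s h_s + H_s xi_s
    State:       h_{s+1}  = alpha + phi h_s + L xi_s,   xi_s ~ N(0, I_2)
    Index i corresponds to s = t+1+i; h_t is the given value at time t.
    """
    k = len(y_hat)
    a = np.empty(k)      # predicted state means
    P = np.empty(k)      # predicted state variances
    v = np.empty(k)      # innovations
    F = np.empty(k)      # innovation variances
    K = np.empty(k)      # filter gains

    a[0] = alpha + phi * h_t                 # prediction of h_{t+1} from the state equation at s = t
    P[0] = L @ L
    for i in range(k):
        v[i] = y_hat[i] - c[i] - Z[i] * a[i]
        F[i] = Z[i] ** 2 * P[i] + H[i] @ H[i]
        K[i] = (phi * P[i] * Z[i] + L @ H[i]) / F[i]   # gain including the noise cross-covariance
        if i + 1 < k:
            a[i + 1] = alpha + phi * a[i] + K[i] * v[i]
            P[i + 1] = phi ** 2 * P[i] + L @ L - K[i] ** 2 * F[i]

    # Backward state smoothing: h_smooth_s = a_s + P_s r_{s-1}
    h_smooth = np.empty(k)
    r = 0.0
    for i in range(k - 1, -1, -1):
        r = Z[i] * v[i] / F[i] + (phi - K[i] * Z[i]) * r
        h_smooth[i] = a[i] + P[i] * r

    # Back out eta_hat_{t:t+k-1} from the smoothed state equation
    # (assumes eta_s is the second component of xi_s).
    sigma_eta = L[1]
    h_prev = np.concatenate(([h_t], h_smooth[:-1]))
    eta_smooth = (h_smooth - alpha - phi * h_prev) / sigma_eta
    return h_smooth, eta_smooth

Step 6 then amounts to looping forward_recursions, approximating_model, and kalman_smooth, feeding the returned $\hat{h}$ and $\hat{\eta}$ back into Step 2, until they stop changing.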