Author manuscript; available in PMC: 2013 Jan 16.
Published in final edited form as: Electron J Stat. 2012;6:1059–1099. doi: 10.1214/12-EJS703

Fig 1.

Illustration of the TMLE procedure (with its general one-step updating procedure). We intentionally represent the initial estimator $P_n^0$ closer to $P_0$ than its $k$th and $(k+1)$th updates $P_n^k$ and $P_n^{k+1}$, heuristically because $P_n^0$ is as close to $P_0$ as one can possibly get (given $P_n$ and the specifics of the super-learning procedure) when targeting $P_0$ itself. However, this does not necessarily imply that $\Psi(P_n^0)$ performs well when targeting $\Psi(P_0)$ (instead of $P_0$), which is why we also intentionally represent $\Psi(P_n^{k+1})$ closer to $\Psi(P_0)$ than $\Psi(P_n^0)$. Indeed, $P_n^{k+1}$ is obtained by fluctuating its predecessor $P_n^k$ in the direction of the target $\Psi(P_0)$, i.e., taking into account the fact that we are ultimately interested in estimating $\Psi(P_0)$. More specifically, the fluctuation $\{P_n^k(\varepsilon) : |\varepsilon| < \eta_n^k\}$ of $P_n^k$ is a one-dimensional parametric model (hence its curvy shape in the large model $\mathcal{M}$) such that (i) $P_n^k(0) = P_n^k$, and (ii) its score at $\varepsilon = 0$ equals the efficient influence curve $D(P_n^k)$ at $P_n^k$ (hence the dotted arrow). An optimal stretch $\varepsilon_n^k$ is determined (e.g., by maximizing the likelihood along the fluctuation), yielding the update $P_n^{k+1} = P_n^k(\varepsilon_n^k)$.
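The one-step update described in the caption can be sketched numerically. The following toy example is an illustration under simplifying assumptions, not the paper's code: it targets $\psi(P_0) = E[\bar{Q}_0(1, W)]$ (the mean outcome had everyone been treated), takes the treatment mechanism $g$ as known, starts from a deliberately crude initial estimator of $\bar{Q}_0$, fluctuates it on the logistic scale with a "clever covariate" $H = A/g(W)$ (so that the score at $\varepsilon = 0$ is the relevant component of the efficient influence curve), and picks $\varepsilon_n^0$ by maximizing the likelihood along the fluctuation, yielding the update $P_n^0 \mapsto P_n^1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
W = rng.normal(size=n)
g = 1.0 / (1.0 + np.exp(-0.4 * W))          # P(A = 1 | W), assumed known here
A = rng.binomial(1, g)
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * A + W))))  # true Qbar(a, W)

logit = lambda p: np.log(p / (1.0 - p))
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

# Deliberately crude initial estimator of Qbar(1, W): a constant, the
# empirical mean of Y among the treated (it ignores W entirely).
Q_init = np.full(n, Y[A == 1].mean())

# Logistic fluctuation Q(eps) = expit(logit(Q_init) + eps * H), whose score
# at eps = 0 is H * (Y - Q_init), the Qbar-component of the EIC here.
H = A / g

def loglik(eps):
    Q = expit(logit(Q_init) + eps * H)
    treated = A == 1                         # H = 0 off treatment: constant in eps
    return np.sum(Y[treated] * np.log(Q[treated])
                  + (1 - Y[treated]) * np.log(1 - Q[treated]))

# Optimal stretch eps_n: maximize the likelihood along the fluctuation
# (a grid search stands in for a one-dimensional MLE solver).
eps_grid = np.linspace(-1.0, 1.0, 2001)
eps_n = eps_grid[np.argmax([loglik(e) for e in eps_grid])]

# Updated estimator, evaluated at A = 1 (i.e., H = 1/g) for everyone,
# then plugged into the target parameter mapping.
Q_star = expit(logit(Q_init) + eps_n * (1.0 / g))
psi_tmle = Q_star.mean()
```

Because $g$ is known and the likelihood maximum sets the derivative $\sum_i H_i (Y_i - Q_i(\varepsilon))$ to zero, the plug-in `psi_tmle` solves the efficient-influence-curve estimating equation and recovers the target despite the misspecified initial estimator.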