Author manuscript; available in PMC: 2018 Jul 22.
Published in final edited form as: Int J Biostat. 2017 Oct 12;13(2). doi: 10.1515/ijb-2015-0097

A Generally Efficient Targeted Minimum Loss Based Estimator based on the Highly Adaptive Lasso

Mark van der Laan 1,*
PMCID: PMC6054860  NIHMSID: NIHMS979184  PMID: 29023235

Abstract

Suppose we observe n independent and identically distributed observations of a finite dimensional bounded random variable. This article is concerned with the construction of an efficient targeted minimum loss-based estimator (TMLE) of a pathwise differentiable target parameter of the data distribution based on a realistic statistical model. The only smoothness condition we enforce on the statistical model is that the nuisance parameters of the data distribution that are needed to evaluate the canonical gradient of the pathwise derivative of the target parameter are multivariate real-valued cadlag functions (right-continuous with left-hand limits; G. Neuhaus. On weak convergence of stochastic processes with multidimensional time parameter. Ann Stat 1971;42:1285–1295) with finite supremum and (sectional) variation norm. Each nuisance parameter is defined as a minimizer of the expectation of a loss function over all functions in its parameter space. For each nuisance parameter, we propose a new minimum loss based estimator that minimizes the loss-specific empirical risk over the functions in its parameter space under the additional constraint that the variation norm of the function is bounded by a set constant. The constant is selected with cross-validation. We show that such an MLE can be represented as the minimizer of the empirical risk over linear combinations of indicator basis functions under the constraint that the sum of the absolute values of the coefficients is bounded by the constant: i.e., the variation norm corresponds with this L1-norm of the vector of coefficients. We will refer to this estimator as the highly adaptive Lasso (HAL)-estimator. We prove that for all models the HAL-estimator converges to the true nuisance parameter value at a rate faster than n−1/4 w.r.t. the square root of the loss-based dissimilarity.
We also show that if this HAL-estimator is included in the library of an ensemble super-learner, then the super-learner will at a minimum achieve the rate of convergence of the HAL-estimator, but, by previous results, it will actually be asymptotically equivalent with the oracle (i.e., in some sense best) estimator in the library. Subsequently, we establish that a one-step TMLE using such a super-learner as initial estimator for each of the nuisance parameters is asymptotically efficient at any data generating distribution in the model, under weak structural conditions on the target parameter mapping and model, and a strong positivity assumption (e.g., the canonical gradient is uniformly bounded). We demonstrate our general theorem by constructing such a one-step TMLE of the average causal effect in a nonparametric model and establishing that it is asymptotically efficient.

Keywords: asymptotic linear estimator, canonical gradient, cross-validated targeted minimum loss estimation (CV-TMLE), Donsker class, efficient influence curve, efficient estimator, empirical process, entropy, highly adaptive Lasso, influence curve, one-step TMLE, super-learning, targeted minimum loss estimation (TMLE)

1 Introduction

We consider the general statistical estimation problem defined by a statistical model for the data distribution, a Euclidean valued target parameter mapping defined on the statistical model, and observing n independent and identically distributed draws from the data distribution. Our goal is to construct a generally asymptotically efficient substitution estimator of the target parameter. An estimator is asymptotically efficient if and only if it is asymptotically linear with influence curve equal to the canonical gradient (also called the efficient influence curve) of the pathwise derivative of the target parameter [1]. For realistic statistical models, the construction of efficient estimators requires using highly data adaptive estimators of the relevant parts of the data distribution that the efficient influence curve depends upon. We will refer to these relevant parts of the data distribution as nuisance parameters.

One can construct an asymptotically efficient estimator with the following two general methods. Firstly, the one-step estimator is defined by adding to an initial plug-in estimator of the target parameter an empirical mean of an estimator of the efficient influence curve at this same initial estimator [1]. In the special case that the efficient influence curve can be represented as an estimating function, one can represent this methodology as the first step of the Newton-Raphson algorithm for solving the estimating equation defined by setting the empirical mean of the efficient influence curve equal to zero. Such general estimating equation methodology for the construction of efficient estimators has been developed for censored and causal inference models in the literature (e.g., [2, 3]). Secondly, the TMLE defines a least favorable parametric submodel through an initial estimator of the relevant parts (nuisance parameters) of the data distribution, and updates the initial estimator with the MLE over this least favorable parametric submodel. The one-step TMLE of the target parameter is now the resulting plug-in estimator [4–6]. In this article we focus on the one-step TMLE since it is a more robust estimator by respecting the global constraints of the statistical model, which becomes evident when comparing the one-step estimator and TMLE in simulations for which the information is low for the target parameter (e.g., even resulting in one-step estimators of probabilities that are outside the (0, 1) range) (e.g., [7–9]). Nonetheless, the results in this article have immediate analogues for the one-step estimator and estimating equation method.

The asymptotic linearity and efficiency of the TMLE and one-step estimator rely on a second order remainder being oP(n−1/2), which typically requires that the nuisance parameters are estimated at a rate faster than n−1/4 w.r.t. an L2(P0)-norm (e.g., see our example in Section 7). To make the TMLE highly data adaptive and thereby efficient for large statistical models we have recommended estimating the nuisance parameters with a super-learner based on a large library of candidate estimators [10–13]. Due to the oracle inequality for the cross-validation selector, the super-learner will be asymptotically equivalent with the oracle selected estimator w.r.t. loss-based dissimilarity, even when the number of candidate estimators in the library grows polynomially in sample size. The loss-based dissimilarity (e.g., Kullback-Leibler divergence or the loss-based dissimilarity for the squared error loss) behaves as a square of an L2(P0)-norm (see, for example, Lemma 4 in our example). Therefore, in order to control the second order remainder, our goal should be to construct a candidate estimator in the library of the super-learner which converges at a faster rate than n−1/4 w.r.t. the square root of the loss-based dissimilarity.

In this article, for each nuisance parameter, we propose a new minimum loss based estimator that minimizes the loss-specific empirical risk over its parameter space under the additional constraint that the variation norm is bounded by a set constant. The constant is selected with cross-validation. We show that these MLEs can be represented as minimizers of the empirical risk over linear combinations of indicator basis functions under the constraint that the sum of the absolute values of the coefficients is bounded by the constant: i.e., the variation norm corresponds with this L1-norm of the vector of coefficients. We will refer to this estimator as the highly adaptive Lasso (HAL)-estimator. We prove that the HAL-estimator converges, for all models, at a rate faster than n−1/4 w.r.t. the square root of the loss-based dissimilarity. This even holds if the model only assumes that the true nuisance parameters have a finite variation norm. As a corollary of the general oracle inequality for cross-validation, we will then show that the super-learner including this HAL-estimator in its library is guaranteed to converge to the true nuisance parameters at the same rate as this HAL-estimator (and thus faster than n−1/4). By also including a large variety of other estimators in the library of the super-learner, the super-learner will also have excellent practical performance for finite samples relative to competing estimators [14]. Based on this fundamental result for the HAL-estimator and the super-learner, we proceed in this article with proving a general theorem for asymptotic efficiency of the one-step TMLE for arbitrary statistical models. In this article we will use a one-step cross-validated-TMLE (CV-TMLE), which avoids the Donsker-class entropy condition on the nuisance parameter space, in order to further minimize the conditions for asymptotic efficiency [5, 15]. In our accompanying technical report [16] we present the analogous results for the one-step TMLE.
Beyond establishing these fundamental theoretical general results, we will also discuss the practical implementation of the HAL-estimator and corresponding TMLE.
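To fix ideas before the formal development, the following is a minimal numerical sketch of the HAL idea (illustrative only, not the authors' implementation; the knot placement, the squared-error loss, and the ISTA optimizer are simplifying assumptions): a function is expanded in indicator basis functions with knots at the observed covariate values, and the variation-norm constraint is imposed in its Lagrangian form as an L1 penalty on the coefficients.

```python
# Illustrative HAL-style fit with squared-error loss (a sketch, not the
# authors' implementation): indicator basis + L1-penalized least squares.
import numpy as np
from itertools import combinations

def hal_basis(X, knots):
    # For every nonempty subset s of coordinates and every knot u (here the
    # observed covariate vectors), the indicator prod_{j in s} 1(x_j >= u_j).
    cols = []
    d = X.shape[1]
    for k in range(1, d + 1):
        for s in combinations(range(d), k):
            for u in knots:
                cols.append(np.all(X[:, list(s)] >= u[list(s)], axis=1).astype(float))
    return np.column_stack(cols)

def lasso_ista(Phi, y, lam, n_iter=500):
    # Iterative soft-thresholding for (1/2n)||y - Phi b||^2 + lam ||b||_1;
    # the L1 penalty is the Lagrangian form of the variation-norm constraint.
    n, p = Phi.shape
    step = 1.0 / (np.linalg.norm(Phi, 2) ** 2 / n)
    b = np.zeros(p)
    for _ in range(n_iter):
        z = b - step * Phi.T @ (Phi @ b - y) / n
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return b

rng = np.random.default_rng(0)
n, d = 100, 2
X = rng.uniform(size=(n, d))
y = X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(n)

Phi = hal_basis(X, X)                      # (2^d - 1) * n = 300 basis columns
beta = lasso_ista(Phi, y - y.mean(), lam=0.01)
variation_norm = np.abs(beta).sum()        # L1 norm <-> variation norm of fit
fit_mse = np.mean((y - y.mean() - Phi @ beta) ** 2)
```

In practice the penalty level (equivalently, the variation-norm bound M) is selected by cross-validation, as formalized in Section 5.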

2 Example: Treatment specific mean in nonparametric model

Before starting the main part of this article, in this section we first introduce an example, and use this example to provide the reader with a guide through the different sections.

2.1 Defining the statistical estimation problem

Let O = (W, A, Y) ∼ P0 be a d-dimensional random variable consisting of a (d − 2)-dimensional vector of baseline covariates W, a binary treatment A ∈ {0, 1} and a binary outcome Y ∈ {0, 1}. We observe n i.i.d. copies O1, …, On of O ∼ P0. Let Q¯(P)(W) = EP(Y | A = 1, W) and G¯(P)(W) = EP(A | W). Let Q2(P) be the marginal cumulative probability distribution of W, and Q = (Q1 = Q¯, Q2). Let the statistical model be of the form M = {P : G(P) ∈ 𝒢, Q(P) ∈ 𝒬}, where 𝒢 is a possibly restricted set, and 𝒬 is nonparametric. The only key assumption we enforce on 𝒬 and 𝒢 is that, for each P ∈ M, W ↦ Q¯(P)(W) and W ↦ G¯(P)(W) are cadlag functions in W on a set [0, τP] ⊂ ℝd−2 [17], and that the variation norms of these functions Q¯(P) and G¯(P) are bounded. The definition of the variation norm will be presented in the next section. Suppose that 𝒢 assumes that G¯ only depends on W through a subset of covariates of dimension d2 ≤ d − 2: if d2 = d − 2, then this does not represent an assumption.

Our target parameter Ψ : M → ℝ is defined by Ψ(P) = ∫ Q¯(w) dQ2(w) ≡ Ψ1(Q1 = Q¯, Q2). For notational convenience, we will use Ψ for both mappings Ψ and Ψ1. It is well known that Ψ is pathwise differentiable, so that for each 1-dimensional parametric submodel {Pε : ε} ⊂ M through P with score S at ε = 0, we have

\[
\frac{d}{d\varepsilon}\Psi(P_\varepsilon)\Big|_{\varepsilon=0} = P D(P) S \equiv \int_o D(P)(o)\, S(o)\, dP(o),
\]

for some D(P) ∈ L2(P), where L2(P) is the Hilbert space of functions of O with mean zero endowed with inner product ⟨f, g⟩P = Pfg. Here we use the notation Pf ≡ ∫ f(o) dP(o). Such an object D(P) is called a gradient at P of the pathwise derivative. The unique gradient that is also an element of the tangent space T(P) is defined as the canonical gradient. The tangent space T(P) at P is defined as the closure of the linear span of the set of scores of the class of 1-dimensional parametric submodels we consider. In this example the canonical gradient D*(P) = D*(Q(P), G(P)) at P is given by:

\[
D^*(Q, G)(O) = \frac{A}{\bar{G}(W)}\big(Y - \bar{Q}(W)\big) + \bar{Q}(W) - \Psi(Q).
\]

Let D1*(Q, G) = A/G¯(W)(Y − Q¯(W)) and D2*(Q) = Q¯(W) − Ψ(Q), and note that D*(Q, G) = D1*(Q, G) + D2*(Q).
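For concreteness, the decomposition D* = D1* + D2* can be transcribed directly into code, together with an exact finite check (using hypothetical distribution values for a binary W) that the canonical gradient has mean zero under the distribution at which it is evaluated:

```python
# Direct transcription of the efficient influence curve of the example:
# D*(Q,G)(O) = A/Gbar(W) (Y - Qbar(W)) + Qbar(W) - Psi(Q).
import numpy as np

def eic(a, y, qbar_w, gbar_w, psi):
    D1 = a / gbar_w * (y - qbar_w)   # D1*(Q,G)
    D2 = qbar_w - psi                # D2*(Q)
    return D1 + D2

# Hypothetical true distribution with binary W (illustration values only).
pW    = np.array([0.4, 0.6])   # P(W=0), P(W=1)
Qbar0 = np.array([0.3, 0.7])   # E(Y | A=1, W)
Gbar0 = np.array([0.5, 0.8])   # P(A=1 | W)
psi0  = float(pW @ Qbar0)      # Psi(Q0)

# Exact check that P0 D*(Q0, G0) = 0 by enumerating O = (W, A, Y).
total = 0.0
for w in (0, 1):
    for a in (0, 1):
        pY1 = Qbar0[w] if a == 1 else 0.5   # P(Y=1 | A=0, W) is irrelevant here
        for y in (0, 1):
            p = pW[w] * (Gbar0[w] if a == 1 else 1 - Gbar0[w]) \
                * (pY1 if y == 1 else 1 - pY1)
            total += p * eic(a, y, Qbar0[w], Gbar0[w], psi0)
# total is 0 up to floating point error
```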

An estimator ψn of ψ0 = Ψ(P0) is asymptotically efficient (among the class of all regular estimators) if and only if it is asymptotically linear with influence curve equal to the canonical gradient D*(P0) [1]:

\[
\psi_n - \psi_0 = P_n D^*(P_0) + o_P(n^{-1/2}),
\]

where Pn is the empirical probability distribution of O1, …, On. Therefore, the canonical gradient is also called the efficient influence curve.

We have that

\[
\Psi(P) - \Psi(P_0) = (P - P_0) D^*(Q, G) + R_{20}\big((\bar{Q}, \bar{G}), (\bar{Q}_0, \bar{G}_0)\big), \qquad (1)
\]

where Q = Q(P), G = G(P), and the second order remainder R20(·) is defined as follows:

\[
R_{20}\big((\bar{Q}, \bar{G}), (\bar{Q}_0, \bar{G}_0)\big) \equiv \int \frac{\bar{G}(w) - \bar{G}_0(w)}{\bar{G}(w)} \big(\bar{Q}(w) - \bar{Q}_0(w)\big)\, dP_0(w).
\]

Of course, PD*(Q, G) = 0.

We define the following two log-likelihood loss functions for Q¯, Q2 and G¯, respectively:

\[
\begin{aligned}
L_{11}(\bar{Q})(O) &= -A\big\{ Y \log \bar{Q}(W) + (1 - Y) \log (1 - \bar{Q}(W)) \big\};\\
L_{12}(Q_2)(O) &= -\log dQ_2(W);\\
L_{2}(\bar{G})(O) &= -\big\{ A \log \bar{G}(W) + (1 - A) \log (1 - \bar{G}(W)) \big\}.
\end{aligned}
\]

We also define the corresponding Kullback-Leibler dissimilarities d10,1(Q¯, Q¯0) = P0{L11(Q¯) − L11(Q¯0)}, d10,2(Q2, Q20) = P0{L12(Q2) − L12(Q20)}, and d20(G¯, G¯0) = P0{L2(G¯) − L2(G¯0)}. Here Q2 represents an easy-to-estimate parameter, which we estimate with the empirical probability distribution Q2n = Q^2(Pn) of W1, …, Wn.

Let the submodel M(δ) ⊂ M be defined by the extra restriction that δ < Q¯(W) < 1 − δ and G¯(W) > δ, P0-a.e. If we would replace the log-likelihood loss L11(Q¯) (which becomes unbounded if Q¯ approximates 0 or 1) by the squared error loss A(Y − Q¯(W))², then one can remove the restriction δ < Q¯(W) < 1 − δ in the definition of M(δ). Given a sequence δn → 0 as n → ∞, we can define a sequence of models Mn = M(δn) which grows from below to M as n → ∞. By assumption, there exists an N0 = N(P0) < ∞ so that for n > N0 we have P0 ∈ Mn.

Let Qn = Q1n × Q2n and Gn be the corresponding parameter spaces for Q = (Q¯, Q2) and G¯, respectively, and specifically, Q1n = {Q¯ : δn < Q¯ < 1 − δn}, while Q2n = Q2.

2.2 One step CV-TMLE

Let Q¯^ : Mnonp → Q1n and G¯^ : Mnonp → Gn be initial estimators of Q¯0, G¯0, respectively, where Mnonp denotes a nonparametric model so that the estimator is defined for all realizations of the empirical probability distribution. Let Q^ : Mnonp → Qn be the estimator Q^(Pn) = (Q¯^(Pn), Q^2(Pn)) of Q0 = (Q¯0, Q20). For a given cross-validation scheme Bn ∈ {0, 1}n, let Pn,Bn1, Pn,Bn0 be the empirical probability distributions of the validation sample {Oi : Bn(i) = 1} and training sample {Oi : Bn(i) = 0}, respectively. It is assumed that the proportion of observations in the validation sample (i.e., Σi Bn(i)/n) is between δ and 1 − δ for some 0 < δ < 1. Let Qn,Bn = (Q¯n,Bn, Q2n,Bn) = Q^(Pn,Bn0) and G¯n,Bn = G¯^(Pn,Bn0) be the estimators applied to the training sample Pn,Bn0. Given a (Q¯, G¯), consider the universal least favorable submodel (van der Laan and Gruber, 2015)

\[
\operatorname{Logit} \bar{Q}_{\varepsilon_1} = \operatorname{Logit} \bar{Q} + \varepsilon_1 H_{\bar{G}}
\]

through Q¯ at ε1 = 0, where HG¯(W) = 1/G¯(W). We indeed have (d/dε1) L11(Q¯ε1) = −D1*(Q¯ε1, G¯) for all ε1. Given a Q = (Q¯, Q2), consider also the local least favorable submodel

\[
dQ_{2,\varepsilon_2}^{lfm}(W) = dQ_2(W)\big(1 + \varepsilon_2 D_2^*(Q)(W)\big)
\]

through Q2 at ε2 = 0. Indeed, (d/dε2) L12(Q2,ε2lfm)|ε2=0 = −D2*(Q¯, Q2). This local least favorable submodel implies the following universal least favorable submodel (van der Laan and Gruber, 2015): for ε2 ≥ 0

\[
dQ_{2,\varepsilon_2} = dQ_2 \exp\Big( \int_0^{\varepsilon_2} D_2^*(\bar{Q}, Q_{2,x})\, dx \Big).
\]

This universal least favorable submodel implies a recursive construction of Q2,ε for all ε-values, by starting at ε = 0 and moving upwards. For negative values of ε2, we define ∫0ε2 = −∫ε20. For all ε2, (d/dε2) L12(Q2,ε2) = −D2*(Q¯, Q2,ε2), which shows that this is indeed a universal least favorable submodel for Q2.

Let ε1n = argminε1 EBn Pn,Bn1 L11(Q¯n,Bn,ε1), and Q¯*n,Bn = Q¯n,Bn,ε1n. The score equation for ε1n shows that EBn Pn,Bn1 D1*(Q¯*n,Bn, G¯n,Bn) = 0. Let ε2n = argminε2 EBn Pn,Bn1 L12(Q2n,Bn,ε2) and Q*2n,Bn = Q2n,Bn,ε2n. The score equation for ε2n shows that EBn Pn,Bn1 D2*(Q¯*n,Bn, Q*2n,Bn) = 0, which implies

\[
E_{B_n} P_{n,B_n}^1 \bar{Q}^*_{n,B_n} = E_{B_n} Q^*_{2n,B_n} \bar{Q}^*_{n,B_n}. \qquad (2)
\]

The CV-TMLE of Ψ(Q0) is defined as ψn* ≡ EBn Ψ(Q*n,Bn), where Q*n,Bn = (Q¯*n,Bn, Q*2n,Bn). By eq. (2) this implies that the CV-TMLE can also be represented as:

\[
\psi_n^* = E_{B_n} P_{n,B_n}^1 \bar{Q}^*_{n,B_n}. \qquad (3)
\]

Note that this latter representation proves that we never have to carry out the TMLE-update step for Q2n,Bn, but that the CV-TMLE is a simple empirical mean of Q¯*n,Bn over the validation sample, averaged across the different splits Bn. We also conclude that this one-step CV-TMLE solves the crucial cross-validated efficient influence curve equation

\[
E_{B_n} P_{n,B_n}^1 D^*(Q^*_{n,B_n}, \bar{G}_{n,B_n}) = 0. \qquad (4)
\]
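The construction above can be summarized in a short simulation sketch (hypothetical data-generating values; stratified means over a binary covariate stand in for the HAL-super-learner initial estimators): fit Q¯ and G¯ on each training split, fit a single ε by maximum likelihood along the logistic least favorable submodel pooled over the validation splits, and report the validation-sample average of the updated predictions, i.e., representation (3).

```python
# Minimal sketch of the one-step CV-TMLE (illustrative simulation, not the
# general procedure of the article): binary W, stratified-mean initial fits.
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p / (1.0 - p))

rng = np.random.default_rng(1)
n = 2000
W = rng.binomial(1, 0.5, n)
G0 = np.where(W == 1, 0.7, 0.4)               # true P(A=1|W)
A = rng.binomial(1, G0)
Q0 = np.where(W == 1, 0.8, 0.3)               # true E(Y|A=1,W)
Y = rng.binomial(1, np.where(A == 1, Q0, 0.5))

K = 5
folds = rng.integers(0, K, n)
Qv = np.empty(n)   # initial Qbar_{n,Bn}(W_i) evaluated on the validation fold
Hv = np.empty(n)   # clever covariate H_Gbar(W_i) = 1/Gbar_{n,Bn}(W_i)
for k in range(K):
    tr, va = folds != k, folds == k
    for w in (0, 1):
        trw = tr & (W == w)
        Qv[va & (W == w)] = Y[trw & (A == 1)].mean()   # training-split fit
        Hv[va & (W == w)] = 1.0 / A[trw].mean()

# MLE of eps along Logit Qbar_eps = Logit Qbar + eps * H, pooled over the
# validation samples (Newton-Raphson on the log-likelihood score in eps).
eps = 0.0
for _ in range(25):
    Qe = expit(logit(Qv) + eps * Hv)
    score = np.sum(A * Hv * (Y - Qe))
    eps -= score / (-np.sum(A * Hv ** 2 * Qe * (1 - Qe)))

psi_cv_tmle = expit(logit(Qv) + eps * Hv).mean()   # representation (3)
```

After convergence, the pooled score Σi Ai Hv,i (Yi − Q¯*(Wi)) is zero, which is the D1*-component of the cross-validated efficient influence curve equation (4); the D2*-component holds automatically by representation (3).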

2.3 Guide for article based on this example

Section 3: Formulation of general estimation problem

The goal of this article is far beyond establishing asymptotic efficiency of the CV-TMLE eq. (3) in this example. Therefore, we start in Section 3 by defining a general model and general target parameter, essentially generalizing the above notation for this example. Having read the above example, the presentation in Section 3 of a very general estimation problem will thus be easier to follow. Our subsequent definitions and results for the HAL-estimator, the HAL-super-learner, and the CV-TMLE in the subsequent Sections 4–6 apply to our general model and target parameter, thereby establishing asymptotic efficiency of the CV-TMLE for an enormously large class of semiparametric statistical estimation problems, including our example as a special case.

Let’s now return to our example to point out the specific tasks that are solved in each section of this article. By eqs (1) and (4), we have the following starting identity for the CV-TMLE:

\[
E_{B_n}\Psi(Q^*_{n,B_n}) - \Psi(Q_0) = E_{B_n}(P_{n,B_n}^1 - P_0) D^*(Q^*_{n,B_n}, \bar{G}_{n,B_n}) + E_{B_n} R_{20}\big((\bar{Q}^*_{n,B_n}, \bar{G}_{n,B_n}), (\bar{Q}_0, \bar{G}_0)\big). \qquad (5)
\]

By the Cauchy-Schwarz inequality and bounding 1/G¯n,Bn by 1/δn, we can bound the second order remainder as follows:

\[
\Big| E_{B_n} R_{20}\big((\bar{Q}^*_{n,B_n}, \bar{G}_{n,B_n}), (\bar{Q}_0, \bar{G}_0)\big) \Big| \le \frac{1}{\delta_n}\, E_{B_n} \big\| \bar{Q}^*_{n,B_n} - \bar{Q}_0 \big\|_{P_0} \big\| \bar{G}_{n,B_n} - \bar{G}_0 \big\|_{P_0}, \qquad (6)
\]

where ‖f‖P0 ≡ (P0 f²)1/2. Suppose we can construct estimators Q¯^ and G¯^ of Q¯0 and G¯0 so that ‖Q¯n − Q¯0‖P0 = OP(n−1/4−α1) and ‖G¯n − G¯0‖P0 = OP(n−1/4−α2) for some α1 > 0, α2 > 0. Since the training sample size is proportional to the sample size n, this immediately implies ‖G¯n,Bn − G¯0‖P0 = OP(n−1/4−α2) and ‖Q¯n,Bn − Q¯0‖P0 = OP(n−1/4−α1). In addition, it is easy to show (as we will formally establish in general) that the rate of convergence of the initial estimator Q¯n,Bn carries over to its targeted version, so that ‖Q¯*n,Bn − Q¯0‖P0 = OP(n−1/4−α1). Thus, with such initial estimators, we obtain

\[
E_{B_n} R_{20}\big((\bar{Q}^*_{n,B_n}, \bar{G}_{n,B_n}), (\bar{Q}_0, \bar{G}_0)\big) = o_P\big(\delta_n^{-1} n^{-1/2-\alpha_1-\alpha_2}\big). \qquad (7)
\]

Thus, by selecting δn so that δn−1 n−α1−α2 → 0, we obtain EBn R20((Q¯*n,Bn, G¯n,Bn), (Q¯0, G¯0)) = oP(n−1/2).

Section 4: Construction and analysis of an M-specific HAL-estimator that converges at a rate faster than n−1/4

This challenge of constructing such estimators Q¯^ and G¯^ is addressed in Section 4. In the context of our example, in Section 4 we define a minimum loss estimator (MLE) Q¯n,M = argmin‖Q¯‖ν<M Pn L11(Q¯) that minimizes the empirical risk over all cadlag functions with variation norm smaller than M. In Section 4 we then show that, if M is chosen larger than the variation norm of Q¯0, then d10,11/2(Q¯n,M, Q¯0) converges to zero at a rate n−1/4−α1, i.e., faster than n−1/4, for some α1 = α1(d) > 0 (for each dimension d). We provide an explicit representation eq. (17) of a cadlag function with finite variation norm M as an infinite linear combination of indicator functions for which the sum of the absolute values of the coefficients is bounded by M. As a consequence, it is shown in Appendix D that this M-specific minimum loss-based estimator can be approximated by (or can be exactly defined as) a Lasso-generalized linear regression problem in which the sum of the absolute values of the coefficients is bounded by M. Therefore, we will refer to Q¯n,M as the M-specific HAL-estimator. Our proof of Lemma 1 in Section 4, which establishes the rate of convergence of the M-specific HAL-estimator, relies on an empirical process result by [18] that expresses the upper bound for this rate of convergence in terms of the entropy of the model space Q1 of Q¯. The representation eq. (17) demonstrates that the set of cadlag functions with variation norm smaller than a constant M is a difference of a "convex" hull of indicator functions, and, as a consequence of a general convex hull result in [19], this proves that it is a Donsker class with a specified upper bound on its entropy. In this way, we obtain an explicit entropy bound for our model space Q1. Given this explicit upper bound on the entropy, the result in [18] establishes a rate of convergence of the M-specific HAL-estimator of n−1/4−α1 for a specified α1 > 0.
By selecting M larger than the unknown variation norm of the true nuisance parameter value, we obtain an HAL-estimator that converges at a faster rate than n−1/4.

Section 5: Construction and analysis of an HAL-super-learner

Instead of assuming that the variation norm of Q¯0 is bounded by a known M and using the corresponding M-specific HAL-estimator, in Section 5 we define a collection of such M-specific estimators for a set of M-values whose maximum converges to infinity with sample size. We then use cross-validation to data-adaptively select M. We show that the resulting cross-validation selected estimator of Q¯0 is asymptotically equivalent with the oracle (i.e., best w.r.t. loss-based dissimilarity) choice. This follows from a previously established oracle inequality for the cross-validation selector, as long as the supremum norm bound on the loss function at the candidate estimators does not grow too fast to infinity as a function of sample size (e.g., [11, 13]). By using such a data-adaptively selected M, one obtains an estimator with better practical performance, and one avoids having to know an upper bound M. As a consequence, our statistical model does not need to assume a universal bound M on the variation norm of the nuisance parameters; it only needs to assume that each nuisance parameter value has a finite variation norm. For the sake of finite sample performance, we want to use a super-learner that uses cross-validation to select an estimator from a library of candidate estimators that includes these M-specific estimators as candidates, beyond other candidate estimators. In this way, the choice of estimator will be adapted to what works well for the actual data set. Therefore, in Section 5 we actually define such a general super-learner Q¯^, and Theorem 2 states that it will converge at least as fast as the best choice in the library, and thus certainly as fast as the M-specific HAL-estimator using M equal to the true variation norm of Q¯0. We refer to a super-learner whose library includes this collection of M-specific HAL-estimators as an HAL-super-learner. We will use an analogous HAL-super-learner of G¯0 (Theorem 6).
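The discrete cross-validation selector described here can be sketched generically; in the snippet below, histogram regressors indexed by bin count play the role of the M-specific candidates (an illustrative stand-in: the actual library would contain M-specific HAL fits). Each candidate is fit on the training split and scored by its empirical risk on the validation split, and the selector returns the candidate minimizing the cross-validated risk.

```python
# Sketch of the discrete cross-validation selector over a candidate library;
# histogram regressors indexed by bin count stand in for the M-specific
# HAL-estimators (illustrative choice, not the article's library).
import numpy as np

def fit_histogram(x, y, bins):
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    means = np.array([y[idx == b].mean() if np.any(idx == b) else y.mean()
                      for b in range(bins)])
    return lambda t: means[np.clip(np.digitize(t, edges) - 1, 0, bins - 1)]

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(size=n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

K, grid = 5, [1, 2, 4, 8, 16, 64]       # grid of candidate complexities
folds = rng.integers(0, K, n)
cv_risk = np.zeros(len(grid))
for k in range(K):
    tr, va = folds != k, folds == k
    for j, bins in enumerate(grid):
        f = fit_histogram(x[tr], y[tr], bins)       # fit on training split
        cv_risk[j] += np.mean((y[va] - f(x[va])) ** 2) / K

best = grid[int(np.argmin(cv_risk))]    # cross-validation selector
```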

The convergence results for this super-learner in terms of the Kullback-Leibler loss-based dissimilarities also imply corresponding results for L2(P0)-convergence as needed to control the second order remainder eq. (6): see Lemma 4.

Section 6: Construction and analysis of HAL-CV-TMLE

To control the remainder we need to understand the behavior of the updated estimator Q¯*n,Bn instead of the initial estimator Q¯n,Bn itself. In our example, since the updated estimator only involves a single updating step of the initial estimator, using a cross-validated MLE selector of ε, we can easily show that Q¯*n,Bn converges to Q¯0 at the same rate as the initial estimator Q¯n,Bn. In general, in Section 6 we define a one-step CV-TMLE for our general model and target parameter so that the targeted version of the initial estimator of Q¯0 converges at the same rate as the initial HAL-super-learner estimator Q¯n. (Since the initial estimator is an HAL-super-learner, we refer to this type of CV-TMLE as an HAL-CV-TMLE.) This concerns a choice of least favorable submodel for which the CV-TMLE-step separately updates each of the components of the initial estimator Q^. We then show that with this choice of least favorable submodel the CV-TMLE-step preserves the convergence rate of the initial estimator (Lemma 3). We also establish in Appendix D that the one-step CV-TMLE already solves the desired cross-validated efficient influence curve equation (4) up to an oP(n−1/2)-term, so that an iterative CV-TMLE can be avoided (Lemma 13 and Lemma 14). At that point, we have shown that the generalized analogue of eq. (7) indeed holds with specified α1 > 0, α2 > 0. In the final subsection of Section 6, Theorem 1 then establishes the asymptotic efficiency of the HAL-CV-TMLE, which also involves analyzing the cross-validated empirical process term, specifically, showing that

\[
E_{B_n}(P_{n,B_n}^1 - P_0) D^*(Q^*_{n,B_n}, \bar{G}_{n,B_n}) = (P_n - P_0) D^*(Q_0, \bar{G}_0) + o_P(n^{-1/2}). \qquad (8)
\]

This will hold under weak conditions, given that we have estimators Q*n,Bn, G¯n,Bn that converge at specified rates to their true counterparts and that, for each split Bn, conditional on the training sample, the empirical process is indexed by a finite dimensional (i.e., of the dimension of ε) class of functions.

Section 7: Returning to our example

In Section 7 we return to our example to present a formal Theorem 2 with specified conditions, involving an application of our general efficiency Theorem 1 in Section 6.

Appendix: Various technical results are presented in the Appendix.

3 Statistical formulation of the estimation problem

Let O1, …, On be n independent and identically distributed copies of a d-dimensional random variable O with probability distribution P0 that is known to be an element of a statistical model M. Let Ψ : M → ℝ be a one-dimensional target parameter, so that ψ0 = Ψ(P0) is the estimand of interest we aim to learn from the n observations o1, …, on. We assume that Ψ is pathwise differentiable at any P ∈ M with canonical gradient D*(P): for a specified rich class of one-dimensional submodels {Pε : ε ∈ (−δ, δ)} ⊂ M through P at ε = 0 with score S = (d/dε) log dPε/dP |ε=0, we have

\[
\frac{d}{d\varepsilon}\Psi(P_\varepsilon)\Big|_{\varepsilon=0} = P D^*(P) S \equiv \int_o D^*(P)(o)\, S(o)\, dP(o).
\]

Our goal in this article is to construct a substitution estimator (i.e., a TMLE Ψ(P*n) for a targeted estimator P*n of P0) that is asymptotically efficient under minimal conditions.

Relevant nuisance parameters Q, G and their loss functions

Let Q(P) be a nuisance parameter of P such that Ψ(P) = Ψ1(Q(P)) for some Ψ1, so that Ψ(P) only depends on P through Q(P). Let 𝒬 = Q(M) = {Q(P) : P ∈ M} be the parameter space of this parameter Q : M → 𝒬. Suppose that Q(P) = (Qj(P) : j = 1, …, k1 + 1) has k1 + 1 components, and that the Qj : M → 𝒬j are variation independent parameters, j = 1, …, k1 + 1. Let 𝒬j = Qj(M) be the parameter space of Qj. Thus, the parameter space of Q is a cartesian product 𝒬 = ∏j=1k1+1 𝒬j. In addition, suppose that for j = 1, …, k1 + 1, Qj(P0) = argminQj∈𝒬j P0 L1j(Qj) for specified loss functions (O, Qj) ↦ L1j(Qj)(O). Let Q¯ = (Q1, …, Qk1) represent parameters that require data adaptive estimation trading off variance and bias (e.g., densities), while Qk1+1 represents an easy-to-estimate parameter for which we have an empirical estimator Q^k1+1 available with negligible bias. In our treatment specific mean example above, Q = (Q1 = Q¯, Q2), where the easy-to-estimate parameter Q2 was the probability distribution of W, which is naturally estimated with the empirical probability distribution. The parameter Q¯(P0) will be estimated with our proposed loss-based HAL-super-learner. In the special case that each of the components of Q requires a super-learner type estimator, we define Qk1+1 as empty (or, equivalently, a known value), and in that case Q = Q¯. We define corresponding loss-based dissimilarities d10j(Qj, Qj0) = P0 L1j(Qj) − P0 L1j(Qj0), j = 1, …, k1 + 1. We assume that d10(k1+1)(Q^k1+1(Pn), Q(k1+1)0) = OP(rQ,k1+1(n)) for a known rate of convergence rQ,k1+1(n). Let

d10(Q,Q0)=(d10j(Qj,Qj0):j=1,,k1+1) (9)

be the collection of these k1 + 1 loss-based dissimilarities. We use the notation d10(Q¯, Q¯0) = (d10j(Qj, Qj0) : j = 1, …, k1) for the vector of k1 loss-based dissimilarities for Q¯.

Suppose that D*(P) only depends on P through Q(P) and an additional nuisance parameter G(P). In the special case that D*(P) only depends on P through Q(P), we define G as empty (or, equivalently, as a known value). Let G = (G1, …, Gk2+1) be a collection of k2 + 1 variation independent parameters of G for some integer k2 + 1 ≥ 1. Thus the parameter space of G is a cartesian product 𝒢 = ∏j=1k2+1 𝒢j, where 𝒢j is the parameter space of Gj : M → 𝒢j. Let Gj0 = argminGj∈𝒢j P0 L2j(Gj) for a loss function (O, Gj) ↦ L2j(Gj)(O), and let d2j0(Gj, Gj0) = P0 L2j(Gj) − P0 L2j(Gj0) be the corresponding loss-based dissimilarity, j = 1, …, k2 + 1. Let Gk2+1 represent an easy-to-estimate parameter for which we have a well behaved and understood estimator G^k2+1 available. The parameter G¯(P0) will be estimated with our proposed HAL-super-learner. We assume that d20(k2+1)(G^k2+1(Pn), G(k2+1)0) = OP(rG,k2+1(n)) for a known rate of convergence rG,k2+1(n). As above, let d20(G, G0) = (d20j(Gj, Gj0) : j = 1, …, k2 + 1) be the collection of these loss-based dissimilarities, and let d20(G¯, G¯0) = (d20j(Gj, Gj0) : j = 1, …, k2), where G¯ = (G1, …, Gk2). In the special case that each Gj requires a super-learner based estimator, we define Gk2+1 as empty, and G = G¯.

We also define

d0((Q,G),(Q0,G0))=(d10j1(Qj1,Qj10),d20j2(Gj2,Gj20):j1,j2) (10)

as the vector of k1 + k2 + 2 loss-based dissimilarities. We will also use the short-hand notation d0(P, P0) for d0((Q, G), (Q0, G0)).

We define

L1(Q)=(L1j(Qj):j=1,,k1+1) (11)

as the vector of k1 + 1-loss functions for Q=(Q1,,Qk1+1), and similarly we define

L2(G)=(L2j(Gj):j=1,,k2+1). (12)

We will also use the notation L1(Q¯) = (L1j(Qj) : j = 1, …, k1) and L2(G¯) = (L2j(Gj) : j = 1, …, k2). We will assume that Q¯ ↦ L1(Q¯) is a convex function in the sense that, for any Q¯1 = (Qj1 : j = 1, …, k1), …, Q¯m = (Qjm : j = 1, …, k1), and for each j = 1, …, k1,

\[
P_0 L_{1j}\Big( \sum_{k=1}^m \alpha_k Q_j^k \Big) \le \sum_{k=1}^m \alpha_k P_0 L_{1j}(Q_j^k) \qquad (13)
\]

when Σk αk = 1 and mink αk ≥ 0. Similarly, we assume G¯L2(G¯) is a convex function. Our results for the TMLE generalize to non-convex loss functions, but the convexity of the loss functions allows a nicer representation for the super-learner oracle inequality, and in most applications a natural convex loss function is available.

We will abuse notation by also denoting Ψ(P) and D*(P) with Ψ(Q) and D*(Q, G), respectively. A special case is that D*(P) = D*(Q(P)) does not depend on an additional nuisance parameter G: for example, if O ∈ ℝ, M is nonparametric, and Ψ(P) = ∫ p(o)² do is the integral of the square of the Lebesgue density p of P, then the canonical gradient is given by D*(P) = 2p − 2Ψ(P), so that one would define Q(P) = p, and there is no G.

Second order remainder for target parameter

We define the second order remainder R2(P, P0) as follows:

\[
R_2(P, P_0) \equiv \Psi(P) - \Psi(P_0) + P_0 D^*(P). \qquad (14)
\]

We will also denote R2(P, P0) with R20((Q, G), (Q0, G0)) to indicate that it involves differences between Q and Q0 and between G and G0, beyond possibly some additional dependence on P0. In our experience, this remainder R2(P, P0) can be represented as a sum of terms of the type ∫ (H1(P) − H1(P0))(H2(P) − H2(P0)) f(P, P0)(o) dP0(o) for some functionals H1, H2 and f, where, typically, H1(P) and H2(P) represent functions of Q(P) or G(P). In certain classes of problems we have that R2(P, P0) only involves cross-terms of the type ∫ (H1(Q) − H1(Q0))(H2(G) − H2(G0)) f(P, P0) dP0, so that R20((Q, G), (Q0, G0)) = 0 if either Q = Q0 or G = G0. In these cases, we say that the efficient influence curve is double robust w.r.t. misspecification of Q0 and G0:

\[
P_0 D^*(P) = \Psi(P_0) - \Psi(P) \quad \text{if } G(P) = G(P_0) \text{ or } Q(P) = Q(P_0).
\]

Given the above double robustness property of the canonical gradient (i.e., of the target parameter), if P solves P0 D*(P) = 0, and either G(P) = G0 or Q(P) = Q0, then Ψ(P) = Ψ(P0). This allows for the construction of so-called double robust estimators of ψ0 that are consistent if either the estimator of Q0 or the estimator of G0 is consistent.
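In the treatment-specific mean example of Section 2, this identity can be verified exactly for a binary W (hypothetical illustration values): with Q2 equal to the true marginal of W, P0 D*(Q, G) reduces to the cross-term ∫ (G¯0/G¯)(Q¯0 − Q¯) dQ20, which equals Ψ(P0) − Ψ(P) whenever G¯ = G¯0, and equals 0 whenever Q¯ = Q¯0.

```python
# Exact finite check of the double robustness identity in the
# treatment-specific-mean example (binary W, hypothetical values).
import numpy as np

pW    = np.array([0.4, 0.6])   # true marginal of W (Q2 = Q20)
Qbar0 = np.array([0.3, 0.7])   # true E(Y|A=1,W)
Gbar0 = np.array([0.5, 0.8])   # true P(A=1|W)

def P0_Dstar(Qbar, Gbar):
    # With Q2 = Q20, P0 D*(Q,G) = E0[(Gbar0/Gbar)(Qbar0 - Qbar)]
    # (the D2* part E0 Qbar - Psi(Q) cancels exactly).
    return float(pW @ (Gbar0 / Gbar * (Qbar0 - Qbar)))

def psi_diff(Qbar):
    return float(pW @ (Qbar0 - Qbar))   # Psi(P0) - Psi(P)

Qmis = np.array([0.5, 0.5])    # misspecified outcome regression
Gmis = np.array([0.6, 0.6])    # misspecified treatment mechanism

lhs_G_correct = P0_Dstar(Qmis, Gbar0)   # equals Psi(P0) - Psi(P) exactly
rhs_G_correct = psi_diff(Qmis)
lhs_Q_correct = P0_Dstar(Qbar0, Gmis)   # equals 0 exactly
```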

Support of data distribution

The support of P ∈ M is defined as a set 𝒪P ⊂ ℝd such that P(𝒪P) = 1. It is assumed that for each P ∈ M, 𝒪P ⊂ [0, τP] for some finite τP ∈ ℝ>0d. We define

\[
\tau = \sup_{P \in M} \tau_P, \qquad (15)
\]

so that [0, τP] ⊂ [0, τ] for all P ∈ M, where τ = ∞ is allowed, in which case [0, τ] ≡ ℝ≥0d. That is, [0, τ] is an upper bound of all the supports, and the model M states that the support of the data structure O is known to be contained in [0, τ].

Cadlag functions on [0, τ], supremum norm and variation norm

Suppose τ is finite; in fact, if τ is not finite, then we will apply the definitions below to a τ = τn that is finite and converges to τ. Let D[0, τ] be the Banach space of d-variate real valued cadlag functions (right-continuous with left-hand limits) [17]. For f ∈ D[0, τ], let ‖f‖∞ = supx∈[0,τ] |f(x)| be the supremum norm. For f ∈ D[0, τ], we define the variation norm of f [20] as

$$\| f \|_\nu = | f(0) | + \sum_{s \subset \{1, \ldots, d\}} \int_{(0_s, \tau_s]} | f(dx_s, 0_{-s}) |. \tag{16}$$

For a subset s ⊂ {1, …, d}, $x_s = (x_j : j \in s)$, $x_{-s} = (x_j : j \notin s)$, and the sum in the above definition of the variation norm is over all subsets of {1, …, d}. In addition, $x_s \mapsto f(x_s, 0_{-s})$ is the s-specific section of $x \mapsto f(x)$ that sets the coordinates in the complement of s equal to 0. Note that $\| f \|_\nu$ is the sum of variation norms of the s-specific sections of f (including f itself). Therefore, one might refer to this norm as the sectional variation norm, but, for convenience, for the purpose of this article, we will just refer to it as the variation norm. If $\| f \|_\nu < \infty$, then we can, in fact, represent f as follows [20]:

$$f(x) = f(0) + \sum_{s \subset \{1, \ldots, d\}} \int_{(0_s, x_s]} f(du_s, 0_{-s}), \tag{17}$$

where $f(du_s, 0_{-s})$ is the measure generated by the cadlag function $u_s \mapsto f(u_s, 0_{-s})$. For an M ∈ ℝ≥0, let

$$\mathcal{F}_{\nu, M} = \{ f \in D[0, \tau] : \| f \|_\nu < M \}$$

denote the set of cadlag functions f : [0, τ] → ℝ with variation norm bounded by M.
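For intuition, in the one-dimensional case the sectional variation norm reduces to |f(0)| plus the total variation of f on (0, τ], which for a step function is simply the sum of its absolute jump sizes. A minimal numerical sketch (the function `variation_norm_1d` is our own illustration, not from the article):

```python
import numpy as np

def variation_norm_1d(knots, values):
    """Sectional variation norm of a 1-d piecewise-constant cadlag function.

    For d = 1 the definition reduces to |f(0)| plus the total variation of f
    on (0, tau], which for a step function is the sum of the absolute jump
    sizes.  `knots` are the jump locations (first one at 0) and `values`
    the function values from each knot onward.
    """
    jumps = np.abs(np.diff(np.asarray(values, dtype=float)))
    return abs(values[0]) + jumps.sum()

# f(x) = 1 on [0, 0.3), 3 on [0.3, 0.7), 2 on [0.7, 1]
print(variation_norm_1d([0.0, 0.3, 0.7], [1.0, 3.0, 2.0]))  # 1 + |3-1| + |2-3| = 4.0
```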

Cartesian product of cadlag function spaces, and its component-wise operations

Let $D^k[0, \tau]$ be the product Banach space of k-dimensional functions $f = (f_1, \ldots, f_k)$ where each $f_j \in D[0, \tau]$, j = 1, …, k. If $f \in D^k[0, \tau]$, then we define $\| f \|_\infty = (\| f_j \|_\infty : j = 1, \ldots, k)$ as a vector whose j-th component equals the supremum norm of the j-th component fj of f. Similarly, we define the variation norm of $f \in D^k[0, \tau]$ as a vector

$$\| f \|_\nu = (\| f_j \|_\nu : j = 1, \ldots, k)$$

of variation norms. If $f \in D^k[0, \tau]$, then $\| f \|_{P_0} = (\| f_j \|_{P_0} : j = 1, \ldots, k)$ is a vector whose components are the $L^2(P_0)$-norms of the components of f. Generally speaking, in this paper any operation on a function $f \in D^k[0, \tau]$, such as taking a norm $\| f \|_{P_0}$ or an expectation $P_0 f$, and any operation on a pair of functions $f, g \in D^k[0, \tau]$, such as f/g, f × g, max(f, g), or an inequality f < g, is carried out component-wise: for example, max(f, g) = (max(fj, gj) : j = 1, …, k) and $\inf_{Q \in \mathcal{Q}} P_0 L_1(Q) = (\inf_{Q_j \in \mathcal{Q}_j} P_0 L_{1j}(Q_j) : j = 1, \ldots, k_1 + 1)$. In a similar manner, for an $M \in \mathbb{R}^k_{>0}$, let $\mathcal{F}_{\nu, M} = \prod_{j=1}^k \mathcal{F}_{\nu, M_j}$ denote the cartesian product. This general notation allows us to present results with minimal notation, avoiding the need to continuously enumerate all the components.

Our results will hold for general models and pathwise differentiable target parameters, as long as the statistical model satisfies the following key smoothness assumption:

Assumption 1. (Smoothness Assumption)

For each $P \in \mathcal{M}$, $\bar{Q} = \bar{Q}(P) \in D^{k_1}[0, \tau]$, $\bar{G} = \bar{G}(P) \in D^{k_2}[0, \tau]$, $D^*(P) = D^*(Q, G) \in D[0, \tau]$, $L_1(\bar{Q}) \in D^{k_1}[0, \tau]$, $L_2(\bar{G}) \in D^{k_2}[0, \tau]$, and $\bar{Q}$, $\bar{G}$, $D^*(P)$, $L_1(\bar{Q})$, $L_2(\bar{G})$ have a finite supremum and variation norm.

Definition of bounds on the statistical model

The properties of the super-learner and TMLE rely on bounds on the model $\mathcal{M}$. Our estimators will also allow for unbounded models by using a sieve of models whose finite bounds slowly approximate the actual model bounds as sample size converges to infinity. These bounds are defined now:

$$\begin{array}{l}
\tau = \tau(\mathcal{M}) = \sup_{P \in \mathcal{M}} \tau(P), \\
M_1^Q = M_1^Q(\mathcal{M}) = \sup_{Q, Q_0 \in \mathcal{Q}} \| L_1(\bar{Q}) - L_1(\bar{Q}_0) \|_\infty, \\
M_2^Q = M_2^Q(\mathcal{M}) = \sup_{P, P_0 \in \mathcal{M}} \frac{\| L_1(\bar{Q}) - L_1(\bar{Q}_0) \|_{P_0}}{\{ d_{10}(\bar{Q}, \bar{Q}_0) \}^{1/2}}, \\
M_1^G = M_1^G(\mathcal{M}) = \sup_{G, G_0 \in \mathcal{G}} \| L_2(\bar{G}) - L_2(\bar{G}_0) \|_\infty, \\
M_2^G = M_2^G(\mathcal{M}) = \sup_{P, P_0 \in \mathcal{M}} \frac{\| L_2(\bar{G}) - L_2(\bar{G}_0) \|_{P_0}}{\{ d_{20}(\bar{G}, \bar{G}_0) \}^{1/2}}, \\
M_D = M_D(\mathcal{M}) = \sup_{P \in \mathcal{M}} \| D^*(P) \|_\infty. \tag{18}
\end{array}$$

Note that $M_1^Q, M_2^Q \in \mathbb{R}^{k_1}_{\geq 0}$ and $M_1^G, M_2^G \in \mathbb{R}^{k_2}_{\geq 0}$ are defined as vectors of constants, one constant for each component of $\bar{Q}$ and $\bar{G}$, respectively. The bounds $M_1^Q, M_2^Q$ guarantee excellent properties of the cross-validation selector based on the loss function $L_1(\bar{Q})$ (e.g., [11, 13]). A finite $M_2^Q$ guarantees that the loss-based dissimilarity $d_{01}(\bar{Q}, \bar{Q}_0)$ behaves as a square of a difference between $\bar{Q}$ and $\bar{Q}_0$. Similarly, the bounds $M_1^G, M_2^G$ control the behavior of the cross-validation selector based on the loss function $L_2(\bar{G})$.

Bounded and Unbounded Models

We will call the model $\mathcal{M}$ bounded if τ < ∞ (i.e., universally bounded support) and $M_1^Q$, $M_2^Q$, $M_1^G$, $M_2^G$, $M_D$ are finite. In essence, a bounded model is a model for which the support and the supremum norms of $\bar{Q}(P)$, $\bar{G}(P)$, $L_1(\bar{Q})$, $L_2(\bar{G})$ and D*(Q, G) are uniformly (over the model) bounded. Any model that is not bounded will be called an unbounded model.

Sequence of bounded submodels approximating the unbounded model

For an unbounded model $\mathcal{M}$, our initial estimators $(\bar{Q}_n, \bar{G}_n)$ of $(\bar{Q}_0, \bar{G}_0)$ are defined in terms of a sequence of bounded submodels $\mathcal{M}_n \subset \mathcal{M}$ that are increasing in n and approximate the actual model $\mathcal{M}$ as n converges to infinity. The counterparts of the above defined universal bounds on $\mathcal{M}$ applied to $\mathcal{M}_n$ are denoted by $\tau_n$, $M_{1,n}^Q$, $M_{2,n}^Q$, $M_{1,n}^G$, $M_{2,n}^G$, $M_{D,n}$. The conditions of our general asymptotic efficiency Theorem 1 will enforce that these bounds converge slowly enough to infinity (in case the corresponding true model bound is infinity). The model $\mathcal{M}_n$ could be defined as the largest subset of $\mathcal{M}$ for which these latter bounds apply. By Assumption 1, with this choice of $\mathcal{M}_n$, for any $P_0 \in \mathcal{M}$ there exists an $N_0 = N(P_0)$ so that $P_0 \in \mathcal{M}_n$ for $n > N_0$. Either way, we assume that $\mathcal{M}_n$ is defined such that the latter is true.

Let $\mathcal{Q}_n = Q(\mathcal{M}_n)$ and $\mathcal{G}_n = G(\mathcal{M}_n)$ be the parameter spaces of Q and G under model $\mathcal{M}_n$, and let $\bar{\mathcal{Q}}_n = \bar{Q}(\mathcal{M}_n)$ and $\bar{\mathcal{G}}_n = \bar{G}(\mathcal{M}_n)$ be the parameter spaces of $\bar{Q}$ and $\bar{G}$. We define the following true parameters corresponding with this model $\mathcal{M}_n$:

$$\bar{Q}_{0n} = \arg\min_{\bar{Q} \in \bar{\mathcal{Q}}_n} P_0 L_1(\bar{Q}), \qquad \bar{G}_{0n} = \arg\min_{\bar{G} \in \bar{\mathcal{G}}_n} P_0 L_2(\bar{G}).$$

We will assume that $\mathcal{M}_n$ is chosen so that $Q_{k_1+1}(P_{0n}) = Q_{k_1+1}(P_0)$ and $G_{k_2+1}(P_{0n}) = G_{k_2+1}(P_0)$, where $P_{0n} = \arg\max_{P \in \mathcal{M}_n} P_0 \log \frac{dP}{dP_0}$. That is, our sieve is not affecting the estimation of the "easy" nuisance parameters $Q_{(k_1+1)0}$ and $G_{(k_2+1)0}$. Note that for $n > N_0$, we have $Q_{0n} = Q_0$ and $G_{0n} = G_0$.

In this paper our initial estimators of Q¯0 and G¯0 are always enforced to be in the parameter spaces of this sequence of models Mn, but if the model M is already bounded, then one can set Mn=M for all n. However, even for bounded models M, the utilization of a sequence of submodels Mn with stronger universal bounds than M could result in finite sample improvements (e.g., if the universal bounds on M are very large relative to sample size and the dimension of the data).

4 Highly adaptive Lasso estimator of Nuisance parameters

Let M1 < ∞ be given. Our M1-specific HAL-estimator of Q¯0 is defined as the minimizer of the empirical risk PnL1(Q¯) over Q¯Q¯n for which L1(Q¯) has a variation norm bounded by M1 (see eq. (21)). The rate of convergence of a minimum empirical risk estimator is driven by the rate of convergence of the covering number of the parameter space over which one minimizes (e.g., [19]). This explains why the rate of convergence of the covering number of this set of functions L1(Q¯) defines a minimal rate of convergence for this HAL-estimator (while M1 will be selected with the cross-validation selector). Similarly, this applies to our HAL-estimator of G¯0. In the next subsection we define the relevant covering numbers and their rates α1, α2, and establish an upper bound on them. Subsequently, we establish in Lemma 1 the minimal rate of convergence of the HAL-estimator in terms of these rates α1, α2.
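To make the representation behind this estimator concrete: in the zero-order case, the HAL design matrix contains one indicator basis function per observation and per nonempty coordinate subset, and an L1-penalized (lasso) regression on these columns enforces the variation-norm bound, since the L1 norm of the coefficient vector corresponds with the variation norm of the fit. A minimal sketch of the basis construction only (our own illustration; `hal_design` is a hypothetical name):

```python
import numpy as np
from itertools import combinations

def hal_design(X):
    """Zero-order HAL design matrix.

    Column (i, s) is the indicator basis function x -> I(x_s >= X_{i,s})
    (one knot per observation i, one per nonempty coordinate subset s),
    evaluated at every observation.  A lasso fit on these columns then
    realizes the variation-norm-constrained empirical risk minimizer.
    Returns an n x (n * (2^d - 1)) matrix.
    """
    n, d = X.shape
    cols = []
    for r in range(1, d + 1):
        for s in combinations(range(d), r):
            s = list(s)
            # entry [j, i] = I(X[j, s] >= X[i, s]) coordinate-wise
            cols.append((X[:, s][:, None, :] >= X[:, s][None, :, :]).all(axis=2))
    return np.hstack(cols).astype(float)

# d = 1, knots at 0, 1, 2: the design is lower-triangular ones
print(hal_design(np.array([[0.0], [1.0], [2.0]])))
```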

4.1 Upper bounding the entropy of the parameter space for the HAL-estimator

We remind the reader that a covering number $N(\epsilon, \mathcal{F}, L^2(\Lambda))$ is defined as the minimal number of balls of size ε w.r.t. the $L^2(\Lambda)$-norm that are needed to cover the set $\mathcal{F}$ of functions embedded in $L^2(\Lambda)$. Let $\alpha_1 \in \mathbb{R}^{k_1}_{\geq 0}$ and $\alpha_2 \in \mathbb{R}^{k_2}_{\geq 0}$ be such that for fixed M1, M2,

$$\begin{array}{l}
\sup_\Lambda \log^{1/2} N(\epsilon, L_1(\bar{\mathcal{Q}}_{n, M_1}), L^2(\Lambda)) = O(\epsilon^{-(1 - \alpha_1)}), \\
\sup_\Lambda \log^{1/2} N(\epsilon, L_2(\bar{\mathcal{G}}_{n, M_2}), L^2(\Lambda)) = O(\epsilon^{-(1 - \alpha_2)}), \tag{19}
\end{array}$$

where $L_1(\bar{\mathcal{Q}}_{n, M_1}) = \{ L_1(\bar{Q}) : \bar{Q} \in \bar{\mathcal{Q}}_{n, M_1} \}$, $L_2(\bar{\mathcal{G}}_{n, M_2}) = \{ L_2(\bar{G}) : \bar{G} \in \bar{\mathcal{G}}_{n, M_2} \}$, and

$$\bar{\mathcal{Q}}_{n, M_1} \equiv \{ \bar{Q} \in \bar{\mathcal{Q}}_n : \| L_1(\bar{Q}) \|_\nu < M_1 \}, \qquad \bar{\mathcal{G}}_{n, M_2} \equiv \{ \bar{G} \in \bar{\mathcal{G}}_n : \| L_2(\bar{G}) \|_\nu < M_2 \}. \tag{20}$$

The minimal rates of convergence of our HAL-estimator of Q¯0 and G¯0 are defined in terms of α1 and α2, respectively.

By eq. (17) it follows that any cadlag function with finite variation norm can be represented as a difference of two bounded monotone increasing functions (i.e., cumulative distribution functions). The class of d-variate monotone increasing/cumulative distribution functions is a convex hull of d-variate indicator functions, which is again concretely implied by the representation eq. (17) by noting that $\int_{(0, x]} df(u) = \int I(u \leq x)\, df(u)$. Thus, $\mathcal{F}_{\nu, M}$ consists of differences of two convex hulls of d-variate indicator functions. By Theorem 2.6.9 in [19], which maps the covering number of a set of functions into a covering number of the convex hull of these functions, for a fixed M < ∞, we have that the universal covering number of $\mathcal{F}_{\nu, M}$ is bounded as follows:

$$\sup_\Lambda \log^{1/2} N(\epsilon, \mathcal{F}_{\nu, M}, L^2(\Lambda)) = O(\epsilon^{-(1 - \alpha(d))}),$$

where α(d) = 2/(d + 2). Let $d_1 \in \mathbb{Z}^{k_1}_{>0}$ be the vector of integers indicating the dimensions of the domains of $\bar{Q} = (Q_1, \ldots, Q_{k_1})$, and, similarly, let $d_2 \in \mathbb{Z}^{k_2}_{>0}$ be the vector of integers indicating the dimensions of the domains of $\bar{G} = (G_1, \ldots, G_{k_2})$. Since $L_1(\bar{\mathcal{Q}}_{n, M_1}) \subset \mathcal{F}_{\nu, M_1}$ with d = d1 and $L_2(\bar{\mathcal{G}}_{n, M_2}) \subset \mathcal{F}_{\nu, M_2}$ with d = d2, we have that α1 ≥ α(d1) and α2 ≥ α(d2).
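As a quick numerical illustration of what the bound α(d) = 2/(d + 2) implies (a sketch of the arithmetic only): a dissimilarity rate $n^{-\lambda(d)}$ with λ(d) = 1/2 + α(d)/4 corresponds to an $L^2(P_0)$-rate of $n^{-\lambda(d)/2}$, which is strictly faster than $n^{-1/4}$ for every finite dimension d.

```python
def alpha(d):
    # entropy exponent for cadlag functions of bounded variation norm in dimension d
    return 2.0 / (d + 2)

def lam(d):
    # exponent of the loss-based-dissimilarity rate n^(-lam(d))
    return 0.5 + alpha(d) / 4.0

for d in (1, 2, 3, 10, 100):
    # the implied L2(P0)-rate exponent lam(d)/2 always exceeds 1/4
    assert lam(d) / 2 > 0.25
    print(f"d={d}: alpha={alpha(d):.4f}, dissimilarity rate n^-{lam(d):.4f}")
```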

4.2 Minimal rate of convergence of the HAL-estimator

Lemma 1 below proves that the minimal rates $r_{Q, 1:k_1}(n) \in \mathbb{R}^{k_1}$ and $r_{G, 1:k_2}(n) \in \mathbb{R}^{k_2}$ of our HAL-estimator of $\bar{Q}_0$ and $\bar{G}_0$ w.r.t. the loss-based dissimilarities $d_{01}(Q, Q_0)$ and $d_{02}(G, G_0)$ are given by:

$$r_{\bar{Q}}(n) = r_{Q, 1:k_1}(n) = n^{-(1/2 + \alpha_1/4)}, \qquad r_{\bar{G}}(n) = r_{G, 1:k_2}(n) = n^{-(1/2 + \alpha_2/4)}.$$

Let $r_{Q, k_1+1}$ and $r_{G, k_2+1}$ be the rates of the simple estimators $\hat{Q}_{k_1+1}$ and $\hat{G}_{k_2+1}$ of $Q_{(k_1+1)0}$ and $G_{(k_2+1)0}$, respectively. This defines $r_Q(n) \in \mathbb{R}^{k_1+1}$ and $r_G(n) \in \mathbb{R}^{k_2+1}$.

Lemma 1

For a given vector $M \in \mathbb{R}^{k_1}_{\geq 0}$ of constants, let $\bar{\mathcal{Q}}_{n, M} \equiv \{ \bar{Q} \in \bar{\mathcal{Q}}_n : \| L_1(\bar{Q}) \|_\nu \leq M \} \subset \mathcal{F}_{\nu, M}$ be the set of all functions in the parameter space $\bar{\mathcal{Q}}_n$ for $\bar{Q}_{0n}$ for which the variation norm of the loss is smaller than M < ∞. (In this definition one can also incorporate some extra M-constraints, as long as $\bar{\mathcal{Q}}_{n, M = \infty} = \bar{\mathcal{Q}}_n$.) Let $\bar{Q}_{0n}^M \in \bar{\mathcal{Q}}_{n, M}$ be so that $P_0 L_1(\bar{Q}_{0n}^M) = \inf_{\bar{Q} \in \bar{\mathcal{Q}}_{n, M}} P_0 L_1(\bar{Q})$. Assume that for a fixed M < ∞,

$$M_2^{Q, M} \equiv \limsup_{n \to \infty} \sup_{\bar{Q} \in \bar{\mathcal{Q}}_{n, M}} \frac{\| L_1(\bar{Q}) - L_1(\bar{Q}_{0n}^M) \|_{P_0}}{\{ d_{10}(\bar{Q}, \bar{Q}_{0n}^M) \}^{1/2}} < \infty.$$

Consider an estimator Q¯nM for which

$$P_n L_1(\bar{Q}_n^M) = \inf_{\bar{Q} \in \bar{\mathcal{Q}}_{n, M}} P_n L_1(\bar{Q}) + r_n, \tag{21}$$

where rn = oP(n−1/2). Then

$$0 \leq d_{01}(\bar{Q}_n^M, \bar{Q}_{0n}^M) \leq -(P_n - P_0) \{ L_1(\bar{Q}_n^M) - L_1(\bar{Q}_{0n}^M) \} + r_n, \tag{22}$$

and

$$d_{01}(\bar{Q}_n^M, \bar{Q}_{0n}^M) = O_P(r_{\bar{Q}}(n)) + r_n.$$

Proof

We have

$$\begin{array}{rcl}
0 \leq d_{01}(\bar{Q}_n^M, \bar{Q}_{0n}^M) & = & P_0 \{ L_1(\bar{Q}_n^M) - L_1(\bar{Q}_{0n}^M) \} \\
& = & -(P_n - P_0) \{ L_1(\bar{Q}_n^M) - L_1(\bar{Q}_{0n}^M) \} + P_n \{ L_1(\bar{Q}_n^M) - L_1(\bar{Q}_{0n}^M) \} \\
& \leq & -(P_n - P_0) \{ L_1(\bar{Q}_n^M) - L_1(\bar{Q}_{0n}^M) \} + r_n,
\end{array}$$

which proves eq. (22). Since $L_1(\bar{Q}_n^M) - L_1(\bar{Q}_{0n}^M)$ falls in the P0-Donsker class $\mathcal{F}_{\nu, M}$, it follows that the right-hand side is $O_P(n^{-1/2})$, and thus $d_{01}(\bar{Q}_n^M, \bar{Q}_{0n}^M) = O_P(n^{-1/2})$. Since $M_2^{Q, M} < \infty$, this also implies that $\| L_1(\bar{Q}_n^M) - L_1(\bar{Q}_{0n}^M) \|_{P_0}^2 = O_P(n^{-1/2})$. By empirical process theory we have that $n^{1/2}(P_n - P_0) f_n \to_p 0$ if fn falls in a P0-Donsker class with probability tending to 1 and $P_0 f_n^2 \to_p 0$ as n → ∞. Applying this to $f_n = L_1(\bar{Q}_n^M) - L_1(\bar{Q}_{0n}^M)$ shows that $(P_n - P_0) \{ L_1(\bar{Q}_n^M) - L_1(\bar{Q}_{0n}^M) \} = o_P(n^{-1/2})$, which proves $d_{01}(\bar{Q}_n^M, \bar{Q}_{0n}^M) = o_P(n^{-1/2})$.

We now apply Lemma 7 with $\mathcal{F}_n = \{ L_1(\bar{Q}) - L_1(\bar{Q}_{0n}^M) : \bar{Q} \in \bar{\mathcal{Q}}_{n, M} \}$, α = α1 (see eq. (19)), envelope bound Mn = M and r0(n) = n−1/4, which proves that

$$| n^{1/2} (P_n - P_0) f_n | = O_P(n^{-\alpha_1/4}).$$

This proves $d_{01}(\bar{Q}_n^M, \bar{Q}_{0n}^M) = O_P(n^{-(1/2 + \alpha_1/4)}) + r_n$.

5 Super-learning: HAL-estimator tuning the variation norm of the fit with cross-validation

Defining the library of candidate estimators

For an $M \in \mathbb{R}^{k_1}_{>0}$, let $\hat{\bar{Q}}_M : \mathcal{M}_{nonp} \to \bar{\mathcal{Q}}_{n, M} \subset \mathcal{F}_{\nu, M}$ be the HAL-estimator eq. (21) and let $\bar{Q}_{n, M} = \hat{\bar{Q}}_M(P_n)$. By Lemma 1 we have $d_{01}(\bar{Q}_{n, M} = \hat{\bar{Q}}_M(P_n), \bar{Q}_{0n}^M) = O_P(r_{\bar{Q}}^2(n))$, assuming that the numerical approximation error rn is of smaller order. Let $\mathcal{K}_{1,n,\nu}$ be an ordered collection $M_{1n} < M_{2n} < \cdots < M_{K_{1,n,\nu}, n}$ of k1-dimensional constants, and consider the corresponding collection of $K_{1,n,\nu}$ candidate estimators $\hat{\bar{Q}}_M$ with $M \in \mathcal{K}_{1,n,\nu}$. We impose that this index set $\mathcal{K}_{1,n,\nu}$ is increasing in n such that $\limsup_n M_{K_{1,n,\nu}, n}$ equals $\sup_{P \in \mathcal{M}} \| L_1(\bar{Q}(P)) \|_\nu$, so that for any $P \in \mathcal{M}$ there exists an N(P) so that for n > N(P) we have $M_{K_{1,n,\nu}, n} > \| L_1(\bar{Q}(P)) \|_\nu$. Note that for all $M \in \mathcal{K}_{1,n,\nu}$ with $M > \| L_1(\bar{Q}_0) \|_\nu$, we have $d_{01}(\hat{\bar{Q}}_M(P_n), \bar{Q}_0) = O_P(r_{\bar{Q}}^2(n))$. In addition, let $\hat{\bar{Q}}_j : \mathcal{M}_{nonp} \to \bar{\mathcal{Q}}_n$, $j \in \mathcal{K}_{1,n,a}$, be an additional collection of $K_{1,n,a}$ estimators of $\bar{Q}_0$. For example, these candidate estimators could include a variety of parametric-model-based as well as machine-learning-based estimators. This defines an index set $\mathcal{K}_{1,n} = \mathcal{K}_{1,n,\nu} \cup \mathcal{K}_{1,n,a}$ representing a collection of $K_{1n} = K_{1,n,\nu} + K_{1,n,a}$ candidate estimators $\{ \hat{\bar{Q}}_k : k \in \mathcal{K}_{1n} \}$.

Super Learner

Let $B_n \in \{0, 1\}^n$ denote a random cross-validation scheme that randomly splits the sample {O1, …, On} into a training sample {Oi : Bn(i) = 0} and a validation sample {Oi : Bn(i) = 1}. Let $q_n = \sum_{i=1}^n B_n(i)/n$ denote the proportion of observations in the validation sample. We impose throughout the article that q < qn ≤ 1/2 for some q > 0 and that this random vector Bn has a finite number V of possible realizations for a fixed V < ∞. In addition, $P_{n, B_n}^1$ and $P_{n, B_n}^0$ will denote the empirical probability distributions of the validation and training sample, respectively. Thus, the cross-validated risk of an estimator $\hat{\bar{Q}} : \mathcal{M}_{nonp} \to \bar{\mathcal{Q}}_n$ of $\bar{Q}_0$ is defined as $E_{B_n} P_{n, B_n}^1 L_1(\hat{\bar{Q}}(P_{n, B_n}^0))$.

We define the cross-validation selector as the index

$$k_{1n} = \hat{K}_1(P_n) = \arg\min_{k \in \mathcal{K}_{1n}} E_{B_n} P_{n, B_n}^1 L_1(\hat{\bar{Q}}_k(P_{n, B_n}^0))$$

that minimizes the cross-validated risk $E_{B_n} P_{n, B_n}^1 L_1(\hat{\bar{Q}}_k(P_{n, B_n}^0))$ over all choices $k \in \mathcal{K}_{1n}$ of candidate estimators. Our proposed super-learner is defined by

$$\bar{Q}_n = \hat{\bar{Q}}(P_n) \equiv E_{B_n} \hat{\bar{Q}}_{k_{1n}}(P_{n, B_n}^0). \tag{23}$$

The following lemma proves that the super-learner $\hat{\bar{Q}}(P_n)$ converges to $\bar{Q}_0$ at least at the rate $r_{\bar{Q}}(n)$ at which the HAL-estimator converges to $\bar{Q}_0$: $d_{01}(\hat{\bar{Q}}(P_n), \bar{Q}_0) = O_P(r_{\bar{Q}}(n))$. This lemma also shows that the super-learner is either asymptotically equivalent with the oracle-selected candidate estimator, or achieves the parametric rate 1/n of a correctly specified parametric model.
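The cross-validation selector above can be sketched as a discrete super learner: compute the V-fold cross-validated risk of each candidate estimator and pick the minimizer. The following is our own illustration with squared-error loss standing in for $L_1(\bar{Q})$; all names are hypothetical:

```python
import numpy as np

def cv_selector(X, y, fit_fns, n_folds=5, rng=None):
    """Discrete super learner: return the index of the candidate with the
    smallest V-fold cross-validated empirical risk, plus all CV risks.
    `fit_fns` is a list of functions (X_train, y_train) -> predict_fn."""
    rng = np.random.default_rng(rng)
    n = len(y)
    folds = rng.permutation(n) % n_folds          # balanced random fold labels
    risks = np.zeros(len(fit_fns))
    for v in range(n_folds):
        tr, va = folds != v, folds == v
        for k, fit in enumerate(fit_fns):
            pred = fit(X[tr], y[tr])              # train on the training split
            risks[k] += np.mean((y[va] - pred(X[va])) ** 2)  # validate
    return int(np.argmin(risks)), risks / n_folds

# two candidates: a constant-mean fit and an OLS line
def const(Xt, yt):
    m = yt.mean()
    return lambda Xv: np.full(len(Xv), m)

def linear(Xt, yt):
    A = np.hstack([np.ones((len(Xt), 1)), Xt])
    beta = np.linalg.lstsq(A, yt, rcond=None)[0]
    return lambda Xv: np.hstack([np.ones((len(Xv), 1)), Xv]) @ beta

X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = 2.0 * X[:, 0]                                 # noise-free linear truth
k, risks = cv_selector(X, y, [const, linear], rng=0)
print(k)  # selects the linear candidate: 1
```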

Lemma 2

Recall the definition eq. (18) of the model bounds $M_{1,n}^Q$, $M_{2,n}^Q$, and let $C(M_1, M_2, \delta) \equiv 2(1 + \delta)^2 (2M_1/3 + M_2^2/\delta)$.

For any fixed δ > 0,

$$d_{01}(\bar{Q}_n, \bar{Q}_{0n}) \leq (1 + 2\delta) E_{B_n} \min_{k \in \mathcal{K}_{1n}} d_{01}(\hat{\bar{Q}}_k(P_{n, B_n}^0), \bar{Q}_{0n}) + O_P\!\left( C(M_{1,n}^Q, M_{2,n}^Q, \delta) \frac{\log K_{1n}}{n} \right).$$

If for each fixed δ > 0, C(M1Q,n, M2Q,n, δ) log K1n/n divided by EBnminkd01(Q¯^k(Pn,Bn0),Q¯0n) is oP(1), then

$$\frac{d_{01}(\hat{\bar{Q}}(P_n), \bar{Q}_{0n})}{E_{B_n} \min_k d_{01}(\hat{\bar{Q}}_k(P_{n, B_n}^0), \bar{Q}_{0n})} - 1 = o_P(1).$$

If for each fixed δ > 0, EBnminkd01(Q¯^k(Pn,Bn0),Q¯0n)=OP(C(M1Q,n,M2Q,n,δ)logK1n/n), then

$$d_{01}(\hat{\bar{Q}}(P_n), \bar{Q}_{0n}) = O_P\!\left( C(M_{1,n}^Q, M_{2,n}^Q, \delta) \frac{\log K_{1n}}{n} \right).$$

Suppose that for each finite M, the conditions of Lemma 1 hold with negligible numerical approximation error rn, so that $d_{01}(\bar{Q}_{n, M} = \hat{\bar{Q}}_M(P_n), \bar{Q}_{0n}^M) = O_P(r_{\bar{Q}}^2(n))$. Let $\lambda_1 \in \mathbb{R}^{k_1}_{>0}$ be chosen so that $r_{\bar{Q}}^2(n) = O(n^{-\lambda_1})$. For each fixed δ > 0, we have

$$d_{01}(\bar{Q}_n, \bar{Q}_{0n}) = O_P(n^{-\lambda_1}) + O_P\!\left( C(M_{1,n}^Q, M_{2,n}^Q, \delta) \frac{\log K_{1n}}{n} \right). \tag{24}$$

The proof of this lemma is a simple corollary of the finite sample oracle inequality for cross-validation [11, 13, 21, 33, 34], also presented in Lemma 5 in Section A of the Appendix. It uses the convexity of the loss function to bring the EBn inside the loss-based dissimilarity.

In the Appendix we present the analogous super-learner eq. (37) for G0 and its corresponding Lemma 6.

6 One-step CV-HAL-TMLE

Cross-validated TMLE (CV-TMLE) robustifies the bias reduction of the TMLE-step by selecting ε based on the cross-validated risk [5, 15]. In the next subsection we define the CV-TMLE, and we propose a particular type of local least favorable submodel that separately updates the initial estimator of Qj0 for each j = 1, …, k1. Due to this choice, in Subsection 6.2 we easily establish that the CV-TMLE of $\bar{Q}_0$ converges to $\bar{Q}_0$ at the same rate as the initial estimator, which is important for control of the second-order remainder in the asymptotic efficiency proof of the CV-TMLE. In Subsection 6.3 we establish the asymptotic efficiency of the CV-TMLE.

6.1 The CV-HAL-TMLE

Definition of one-step CV-HAL-TMLE for general local least favorable submodel

Let $\bar{L}_1(Q) \equiv \sum_{j=1}^{k_1+1} L_{1j}(Q_j)$ be the sum loss function. For a given (Q, G), let $\{ Q_\epsilon : \epsilon \} \subset \mathcal{Q}_n$ be a parametric submodel through Q at ε = 0 such that the linear span of $\frac{d}{d\epsilon} \bar{L}_1(Q_\epsilon)$ at ε = 0 includes the canonical gradient D*(Q, G). Let $\hat{Q} : \mathcal{M}_{nonp} \to \mathcal{Q}_n$ and $\hat{G} : \mathcal{M}_{nonp} \to \mathcal{G}_n$ be our initial estimators of $Q_0 = (\bar{Q}_0, Q_{0, k_1+1})$ and $G_0 = (\bar{G}_0, G_{0, k_2+1})$. We recommend defining the initial estimators $\hat{\bar{Q}}$ and $\hat{\bar{G}}$ of $\bar{Q}_0$ and $\bar{G}_0$ to be HAL-super-learners as defined by eqs (23) and (37), so that $d_{10}(\hat{Q}(P_n), Q_{0n}) = O_P(r_Q^2(n))$ and $d_{20}(\hat{G}(P_n), G_{0n}) = O_P(r_G^2(n))$. Given a cross-validation scheme $B_n \in \{0, 1\}^n$, let $Q_{n, B_n} = \hat{Q}(P_{n, B_n}^0) \in \mathcal{Q}_n$ be the estimator $\hat{Q}$ applied to the training sample $P_{n, B_n}^0$. Similarly, let $G_{n, B_n} = \hat{G}(P_{n, B_n}^0)$. Let $\{ Q_{n, B_n, \epsilon} : \epsilon \}$ be the above submodel with $(Q, G) = (Q_{n, B_n}, G_{n, B_n})$ through $Q_{n, B_n}$ at ε = 0. Let

$$\epsilon_n = \arg\min_\epsilon E_{B_n} P_{n, B_n}^1 \bar{L}_1(Q_{n, B_n, \epsilon})$$

be the MLE of ε, minimizing the cross-validated empirical risk. This defines $Q_{n, B_n}^* = Q_{n, B_n, \epsilon_n}$ as the Bn-specific targeted fit of Q0. The one-step CV-TMLE of ψ0 is defined as

$$\psi_n^* = E_{B_n} \Psi(Q_{n, B_n}^*).$$

One-step CV-HAL-TMLE solves cross-validated efficient score equation

Our efficiency Theorem 1 assumes that

$$E_{B_n} P_{n, B_n}^1 D^*(Q_{n, B_n}^*, G_{n, B_n}) = o_P(n^{-1/2}). \tag{25}$$

That is, it is assumed that the one-step CV-TMLE already solves the cross-validated efficient influence curve equation up to an asymptotically negligible approximation error. By definition of εn, it solves its score equation $E_{B_n} P_{n, B_n}^1 \frac{d}{d\epsilon_n} \bar{L}_1(Q_{n, B_n, \epsilon_n}) = 0$, which provides a basis for verifying eq. (25). As formalized by Lemma 13 in Appendix D, for our choice of $n^{-(1/4+)}$-consistent initial estimators Qn, Gn of Q0, G0, a one-step CV-TMLE will satisfy eq. (25) for one-dimensional local least favorable submodels under weak regularity conditions. We believe that such a result can be proved in great generality for arbitrary (also multivariate) local least favorable submodels. Instead, below we propose a particular class of multivariate local least favorable submodels eq. (26) for which we establish eq. (25) under regularity conditions. In (van der Laan and Gruber, 2015) it is shown that one can always construct a so-called universal least favorable submodel through Q with a one-dimensional ε so that $\frac{d}{d\epsilon} \bar{L}_1(Q_\epsilon) = D^*(Q_\epsilon, G)$ at each ε, so that $E_{B_n} P_{n, B_n}^1 D^*(Q_{n, B_n, \epsilon_n}, G_{n, B_n}) = 0$ (exactly), independent of the properties of the initial estimator (Qn, Gn).

One-step CV-HAL-TMLE preserves fast rate of convergence of initial estimator

Our efficiency Theorem 1 also assumes that the updated estimator $Q_{n, B_n}^*$ satisfies, for each split Bn, $d_{01}(Q_{n, B_n}^*, Q_0) = o_P(n^{-1/2})$. This is generally a very reasonable condition, given that $d_{01}(Q_{n, B_n}, Q_0) = O_P(n^{-\lambda_1})$ for a specified λ1 > 1/2. Our proposed class of local least favorable submodels eq. (26) below guarantees that the rate of convergence of the initial estimator $Q_{n, B_n}$ is completely preserved by $Q_{n, B_n}^*$, so that this condition is automatically guaranteed to hold.

A class of multivariate local least favorable submodels that separately updates each nuisance parameter component

One way to guarantee that d01(Qn,Bn,Q0)=oP(n1/2) is to make sure that the updated estimator Qn,Bn converges as fast to Q0 as the initial estimator Qn,Bn. For that purpose we propose a k1 + 1-dimensional local least favorable submodel of the type

$$Q_\epsilon = (Q_{1, \epsilon_1}, \ldots, Q_{k_1+1, \epsilon_{k_1+1}}) \quad \text{such that} \quad \frac{d}{d\epsilon_j} L_{1j}(Q_{j, \epsilon_j}) \Big|_{\epsilon_j = 0} = D_j^*(Q, G), \tag{26}$$

for j = 1, …, k1 + 1, and where $D^*(Q, G) = \sum_{j=1}^{k_1+1} D_j^*(Q, G)$. By using such a submodel we have $Q_{j, n, B_n}^* = Q_{j, n, B_n, \epsilon_n(j)}$ and $\epsilon_n(j) = \arg\min_\epsilon E_{B_n} P_{n, B_n}^1 L_{1j}(Q_{j, n, B_n, \epsilon})$. Thus, in this case $Q_{j, n, B_n}$ is updated with its own εn(j), j = 1, …, k1 + 1. The advantage of such a least favorable submodel is that the one-step update of $\bar{Q}_{j, n, B_n}$ is not affected by the statistical behavior of the other estimators $\bar{Q}_{l, n, B_n}$, l ≠ j. On the other hand, if one uses a local least favorable submodel with a single ε, the MLE εn is very much driven by the worst-performing estimator $\bar{Q}_{j, n, B_n}$. Lemma 3 below shows that, by using such a (k1 + 1)-variate local least favorable submodel satisfying eq. (26), the rate of convergence of the initial estimator $\bar{Q}_{j, n}$ is fully preserved by the TMLE-update $\bar{Q}_{j, n, B_n}^*$.

How to construct a local least favorable submodel of type eq. (26)

A general approach for constructing such a (k1 + 1)-variate least favorable submodel is the following. Let $D_j^*(P)$ be the efficient influence curve at P of the parameter $\Psi_{j, P} : \mathcal{M} \to \mathbb{R}$ defined by $\Psi_{j, P}(P_1) = \Psi(Q_{-j}(P), Q_j(P_1))$, which sets all the other components Ql with l ≠ j equal to their true values under P, j = 1, …, k1 + 1. Then, it follows immediately from the definition of the pathwise derivative that

$$D^*(P) = \sum_{j=1}^{k_1+1} D_j^*(P),$$

so that D*(P) is an element of the linear span of $\{ D_j^*(P) : j = 1, \ldots, k_1 + 1 \}$. Let $\{ Q_{j, \epsilon(j)} : \epsilon(j) \} \subset \mathcal{Q}_{jn}$ be a one-dimensional submodel through Qj so that

$$\frac{d}{d\epsilon(j)} L_{1j}(Q_{j, \epsilon(j)}) \Big|_{\epsilon(j) = 0} = D_j^*(Q, G), \quad j = 1, \ldots, k_1 + 1.$$

That is, {Qj,ε(j) : ε(j)} is a local least favorable submodel at (Q, G) for the parameter $\Psi_{j, Q} : \mathcal{M} \to \mathbb{R}$, j = 1, …, k1 + 1. Now, define $\{ Q_\epsilon : \epsilon \} \subset \mathcal{Q}_n$ by Qε = (Qj,ε(j) : j = 1, …, k1 + 1). Then, we have

$$\frac{d}{d\epsilon} \bar{L}_1(Q_\epsilon) \Big|_{\epsilon = 0} = (D_j^*(Q, G) : j = 1, \ldots, k_1 + 1),$$

so that the submodel is indeed a local least favorable submodel.

Lemma 14 provides a sufficient set of minor conditions under which the one-step HAL-CV-TMLE using a local least favorable submodel of the type eq. (26) will satisfy eq. (25). Therefore, the class of local least favorable submodels eq. (26) yields both crucial conditions for the HAL-CV-TMLE: it solves eq. (25) and it preserves the rate of convergence of the initial estimator.

6.2 Preservation of the rate of initial estimator for the one-step CV-HAL-TMLE using eq. (26)

Consider the submodel {Qε : ε} of the type eq. (26) presented above. Given an initial estimator $\hat{Q} : \mathcal{M}_{nonp} \to \mathcal{Q}_n$, recall the definition $Q_{n, B_n, \epsilon} = \hat{Q}_\epsilon(P_{n, B_n}^0)$ of the fluctuated version of the initial estimator applied to the training sample, and $\epsilon_n = \arg\min_\epsilon E_{B_n} P_{n, B_n}^1 \bar{L}_1(Q_{n, B_n, \epsilon})$. We want to show that $Q_{n, B_n, \epsilon_n}$ converges to Q0 at the same rate as the initial estimator $Q_{n, B_n}$ (and thus also $\hat{Q}(P_n)$). The following lemma establishes this result; it is an immediate consequence of the oracle inequality for the cross-validation selector for the loss function L1j, applied to the set of candidate estimators $P_n \mapsto Q_{jn, \epsilon(j)} = \hat{Q}_{j, \epsilon(j)}(P_n)$ indexed by ε(j), for each j = 1, …, k1 + 1.

Lemma 3

Let εn=argminεEBnPn,Bn1L1(Qn,Bn,ε). We have

$$E_{B_n} d_{01}(\hat{Q}_{\epsilon_n}(P_{n, B_n}^0), Q_{0n}) \leq (1 + 2\delta) \min_\epsilon E_{B_n} d_{01}(\hat{Q}_\epsilon(P_{n, B_n}^0), Q_{0n}) + O_P\!\left( C(M_{1,n}^Q, M_{2,n}^Q, \delta) \frac{\log K_{1n}}{nq} \right).$$

By convexity of the loss function L1(Q), this implies

$$d_{01}(E_{B_n} \hat{Q}_{\epsilon_n}(P_{n, B_n}^0), Q_{0n}) \leq (1 + 2\delta) \min_\epsilon E_{B_n} d_{01}(\hat{Q}_\epsilon(P_{n, B_n}^0), Q_{0n}) + O_P\!\left( C(M_{1,n}^Q, M_{2,n}^Q, \delta) \frac{\log K_{1n}}{nq} \right).$$

We have

$$\min_\epsilon E_{B_n} d_{01}(\hat{Q}_\epsilon(P_{n, B_n}^0), Q_{0n}) \leq E_{B_n} d_{01}(\hat{Q}(P_{n, B_n}^0), Q_{0n}).$$

Thus, if for some λ1 > 0 we have $C(M_{1,n}^Q, M_{2,n}^Q, \delta) \log K_{1n}/(nq) = O(n^{-\lambda_1})$ and, for each Bn, $d_{01}(\hat{Q}(P_{n, B_n}^0), Q_{0n}) = O_P(n^{-\lambda_1})$, then

$$d_{01}(E_{B_n} Q_{n, B_n, \epsilon_n}, Q_{0n}) = O_P(n^{-\lambda_1}).$$

It then also follows that for each Bn, $d_{01}(\hat{Q}_{\epsilon_n}(P_{n, B_n}^0), Q_{0n}) = O_P(n^{-\lambda_1})$.

6.3 Efficiency of the one-step CV-HAL-TMLE

We have the following theorem.

Theorem 1

Consider the one-step CV-TMLE $\psi_n^* = E_{B_n} \Psi(Q_{n, B_n, \epsilon_n})$ of Ψ(Q0) defined above.

Initial estimator conditions

Consider the HAL-super-learners $\hat{\bar{Q}}(P_n)$ and $\hat{\bar{G}}(P_n)$ defined by eqs (23) and (37), respectively, and recall that we are given simple estimators $\hat{Q}_{k_1+1}$ and $\hat{G}_{k_2+1}$ of $Q_{0, k_1+1}$ and $G_{0, k_2+1}$. Let λ1 and λ2 be chosen so that $r_{\bar{Q}}(n) = O(n^{-\lambda_1})$ and $r_{\bar{G}}(n) = O(n^{-\lambda_2})$. Assume the conditions of Lemma 2 and Lemma 6, so that we have

$$\begin{array}{l}
d_{01}(\hat{\bar{Q}}(P_n), \bar{Q}_0) = O_P(n^{-\lambda_1(1:k_1)}) + O_P(C(M_{1,n}^Q, M_{2,n}^Q, \delta) \log K_{1n}/n), \\
d_{02}(\hat{\bar{G}}(P_n), \bar{G}_0) = O_P(n^{-\lambda_2(1:k_2)}) + O_P(C(M_{1,n}^G, M_{2,n}^G, \delta) \log K_{2n}/n),
\end{array}$$

where λ1(1 : k1) > 1/2 and λ2(1 : k2) > 1/2. Let Q^=(Q¯^,Q^k1+1) and G^=(G¯^,G^k2+1) be the corresponding estimators of Q0 and G0, respectively.

“Preserve rate of convergence of initial estimator”-condition

In addition, assume that either (Case A) the CV-TMLE uses a local least favorable submodel of the type eq. (26), so that Lemma 3 applies, or (Case B) for each split Bn, $d_{01}(Q_{n, B_n}^*, Q_0) = O_P(n^{-\lambda_1'})$ for some λ1′ > 1/2.

Efficient influence curve score equation condition and second order remainder condition

Define $f_{n, \epsilon} = D^*(\hat{Q}_\epsilon(P_{n, B_n}^0), G_{n, B_n}) - D^*(Q_0, G_0)$ and the class of functions $\mathcal{F}_n = \{ f_{n, \epsilon} : \epsilon \}$. Assume

$$E_{B_n} P_{n, B_n}^1 D^*(Q_{n, B_n, \epsilon_n}, G_{n, B_n}) = o_P(n^{-1/2}), \tag{27}$$
$$\| D^*(Q_{n, B_n}^*, G_{n, B_n}) - D^*(Q_0, G_0) \|_{P_0} = o_P(r_{D,n}) \ \text{for a } r_{D,n} = o(1), \tag{28}$$
$$E_{B_n} R_{20}((Q_{n, B_n}^*, G_{n, B_n}), (Q_0, G_0)) = o_P(n^{-1/2}), \tag{29}$$
$$\max(M_{1,n}^Q, (M_{2,n}^Q)^2) \frac{\log K_{1n}}{n} = O(n^{-\lambda_1}), \tag{30}$$
$$\max(M_{1,n}^G, (M_{2,n}^G)^2) \frac{\log K_{2n}}{n} = O(n^{-\lambda_2}), \tag{31}$$
$$\sup_\Lambda N(\epsilon M_{D,n}, \mathcal{F}_n, L^2(\Lambda)) < K \epsilon^{-p} \ \text{for a } K < \infty, \ p < \infty. \tag{32}$$

In Case A, for verification of assumption eq. (27) one could apply Lemma 14.

In Case A, for verification of the two assumptions eqs (28) and (29) one can use that, for each of the V realizations of Bn, $d_{01}(Q_{n, B_n}^*, Q_0) = O_P(n^{-\lambda_1})$ and $d_{02}(G_{n, B_n}, G_0) = O_P(n^{-\lambda_2})$.

In Case B, for verification of these two assumptions eqs (28) and (29) one can use that, for each of the V realizations of Bn, $d_{01}(Q_{n, B_n}^*, Q_0) = O_P(n^{-\lambda_1'})$ and $d_{02}(G_{n, B_n}, G_0) = O_P(n^{-\lambda_2})$.

Then, $\psi_n^* = E_{B_n} \Psi(Q_{n, B_n, \epsilon_n})$ is asymptotically efficient:

$$\psi_n^* - \psi_0 = (P_n - P_0) D^*(Q_0, G_0) + o_P(n^{-1/2}). \tag{33}$$

Condition eq. (32) will practically always trivially hold with p = k1 + 1 equal to the dimension of ε: note that this is even true for unbounded models, due to the normalizing constant MD,n. We already discussed the crucial condition eq. (27) in our subsection defining the CV-TMLE. Conditions eqs (30) and (31) are easily satisfied by controlling the speed at which the model bounds $M_{1,n}^Q$, $M_{2,n}^Q$, $M_{1,n}^G$, $M_{2,n}^G$ converge to infinity, and are always true for bounded models (as long as the size of the library of the super-learner behaves as a polynomial power of sample size). For bounded models $\mathcal{M}$, condition eq. (28) will typically hold with $r_{D,n} = n^{-\lambda}$ and λ equal to the minimum of the components of λ1/2 and λ2/2: i.e., the efficient influence curve estimator will converge to its true counterpart as fast as the slowest converging nuisance parameter estimator. If the model $\mathcal{M}$ is unbounded, so that the model bounds of the sieve $\mathcal{M}_n$ converge to infinity, then eq. (28) will hold with $r_{D,n} = n^{-\lambda} M_n$ for some Mn converging to infinity (e.g., Mn = MD,n). So, in the latter case one has to control the rate at which the model bounds of the sieve $\mathcal{M}_n$, such as the supremum norm bound MD,n for the efficient influence curve, converge to infinity. Finally, the crucial condition eq. (29) will easily hold for bounded models $\mathcal{M}$ if this slowest rate λ is larger than 1/4, which we know to be true for the HAL-estimator and its super-learner. For unbounded models, condition eq. (29) will put a serious brake on the speed at which the model bounds of $\mathcal{M}_n$ can converge to infinity.

Proof

By assumptions eqs (30) and (31), we have

$$d_0((\hat{Q}(P_{n, B_n}^0), \hat{G}(P_{n, B_n}^0)), (Q_0, G_0)) = O_P(n^{-\lambda_1}, n^{-\lambda_2}).$$

Consider Case A. Lemma 3 proves that under these same assumptions eqs (30), (31), we also have, for each Bn, $d_{01}(Q_{n, B_n, \epsilon_n}, Q_{0n}) = O_P(n^{-\lambda_1})$. This proves that for each Bn, $d_0((Q_{n, B_n}^* = Q_{n, B_n, \epsilon_n}, G_{n, B_n}), (Q_0, G_0)) = O_P(n^{-\lambda_1}, n^{-\lambda_2})$. For Case B, we replace λ1 by λ1′ in the latter expression. Suppose n > N0, so that Q0n = Q0 and G0n = G0. By the identity $\Psi(Q_{n, B_n}^*) - \Psi(Q_0) = -P_0 D^*(Q_{n, B_n}^*, G_{n, B_n}) + R_{20}((Q_{n, B_n}^*, G_{n, B_n}), (Q_0, G_0))$, we have

$$E_{B_n} \Psi(Q_{n, B_n}^*) - \Psi(Q_0) = -E_{B_n} P_0 D^*(Q_{n, B_n}^*, G_{n, B_n}) + E_{B_n} R_{20}((Q_{n, B_n}^*, G_{n, B_n}), (Q_0, G_0)).$$

Combining this with eq. (27) yields the following identity:

$$\psi_n^* - \Psi(Q_0) = E_{B_n} \Psi(Q_{n, B_n}^*) - \Psi(Q_0) = E_{B_n} (P_{n, B_n}^1 - P_0) D^*(Q_{n, B_n}^*, G_{n, B_n}) + E_{B_n} R_{20}((Q_{n, B_n}^*, G_{n, B_n}), (Q_0, G_0)) + o_P(n^{-1/2}).$$

By assumption eq. (29) we have that $E_{B_n} R_{20}((Q_{n, B_n}^*, G_{n, B_n}), (Q_0, G_0)) = o_P(n^{-1/2})$. Thus, we have shown

$$\psi_n^* - \Psi(Q_0) = E_{B_n} (P_{n, B_n}^1 - P_0) D^*(Q_{n, B_n}^*, G_{n, B_n}) + o_P(n^{-1/2}).$$

We now note

$$\begin{array}{rcl}
E_{B_n} (P_{n, B_n}^1 - P_0) D^*(Q_{n, B_n}^*, G_{n, B_n}) & = & E_{B_n} (P_{n, B_n}^1 - P_0) D^*(Q_0, G_0) \\
&& {} + E_{B_n} (P_{n, B_n}^1 - P_0) \{ D^*(Q_{n, B_n}^*, G_{n, B_n}) - D^*(Q_0, G_0) \} \\
& = & (P_n - P_0) D^*(Q_0, G_0) \\
&& {} + E_{B_n} (P_{n, B_n}^1 - P_0) \{ D^*(Q_{n, B_n}^*, G_{n, B_n}) - D^*(Q_0, G_0) \}.
\end{array}$$

Thus, it remains to prove that $E_{B_n} (P_{n, B_n}^1 - P_0) \{ D^*(Q_{n, B_n}^*, G_{n, B_n}) - D^*(Q_0, G_0) \} = o_P(n^{-1/2})$. For this we apply Lemma 10 with $f_{n, \epsilon} = D^*(\hat{Q}_\epsilon(P_{n, B_n}^0), G_{n, B_n}) - D^*(Q_0, G_0)$, conditional on $P_{n, B_n}^0$, and $\mathcal{F}_n = \{ f_{n, \epsilon} : \epsilon \}$. By assumption eq. (28), there exists a rate $r_{D,n} = o(1)$ so that $\| f_{n, \epsilon_n} \|_{P_0} = O_P(r_{D,n})$, where (e.g., for Case A) this rate will be determined based upon $d_0((Q_{n, B_n}^*, G_{n, B_n}), (Q_0, G_0)) = O_P(n^{-\lambda_1}, n^{-\lambda_2})$. Note also that the envelope of $\mathcal{F}_n$ satisfies $\| F_n \|_\Lambda \leq M_{D,n}$ for any measure Λ (see eq. (18)). Since ε is p-dimensional for some integer p, the entropy of $\mathcal{F}_n$ easily satisfies $\sup_\Lambda N(\epsilon \| F_n \|_\Lambda, \mathcal{F}_n, L^2(\Lambda)) = O(\epsilon^{-p})$, which is assumed to hold by condition eq. (32). Application of Lemma 10 now proves that, if $r_{D,n} = o(1)$, then, given $P_{n, B_n}^0$,

$$(P_{n, B_n}^1 - P_0) f_{n, \epsilon_n} = o_P(n^{-1/2}).$$

This proves also that EBn(Pn,Bn1P0)fn,εn=oP(n1/2). This completes the proof. □

7 Example: Treatment specific mean

We will now apply Theorem 1 to the example introduced in Section 2. We have the following sieve model bounds (van der Laan et al., 2004): $M_{1,n}^Q = O(\log \delta_n^{-1})$; $M_{2,n}^Q = O(1/\delta_n)$; $M_{1,n}^G = O(\log \delta_n^{-1})$; $M_{2,n}^G = O(1/\delta_n)$; $M_{D,n} = O(1/\delta_n)$.

Since the parameter space $\mathcal{Q}_{1n}$ consists of the cadlag functions with bounded variation norm, without any further restrictions beyond the global bound δn, we can select the entropy quantity for Q1 as α1 = α(d1) = 2/(d1 + 2), where d1 = d − 2 is the dimension of W. Similarly, if $\mathcal{G}_n$ consists of all cadlag functions of dimension d2, without further meaningful restrictions beyond δn, then we can select the entropy quantity for $\mathcal{G}_n$ as α2 = α(d2) = 2/(d2 + 2). If the model $\mathcal{G}$ enforces more meaningful restrictions, such as that A only depends on W through a subset of W of dimension d2 < d − 2, then α2 can be replaced by a sharper upper bound than α(d2). We already established that condition eq. (27) in Theorem 1 holds exactly. Condition eq. (32) trivially holds.
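For this example the TMLE-step itself has a simple implementation: fluctuate the initial fit $\bar{Q}_n(1, W)$ along a logistic least favorable submodel with weight $A/\bar{G}_n(W)$ and solve for ε by maximum likelihood, so that the first component of the efficient influence curve equation is solved exactly. A minimal sketch for a single sample split with Y ∈ [0, 1] (our own illustration; function names are hypothetical, and the initial nuisance fits are taken as given inputs):

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def tmle_treatment_specific_mean(y, a, qbar1, g, max_iter=50, tol=1e-12):
    """One-step TMLE of Psi = E[E(Y | A=1, W)].

    qbar1[i] = initial estimate of Qbar_0(1, W_i); g[i] = estimate of
    P(A=1 | W_i); both assumed bounded away from 0 and 1 (the delta_n
    truncation in the text), and some units treated (a not all zero).
    Fluctuation: intercept-only weighted logistic regression with
    weights A/g and offset logit(qbar1); Newton-Raphson for eps."""
    off = np.log(qbar1 / (1 - qbar1))
    w = a / g
    eps = 0.0
    for _ in range(max_iter):
        q = expit(off + eps)
        score = np.mean(w * (y - q))       # d/d(eps) of weighted log-likelihood
        if abs(score) < tol:
            break
        eps += score / np.mean(w * q * (1 - q))  # Newton step
    qstar = expit(off + eps)               # targeted fit Qbar*(1, W)
    return qstar.mean()                    # plug-in estimate E_n Qbar*(1, W)

# toy data with a deliberately crude constant initial fit
rng = np.random.default_rng(1)
n = 500
w_cov = rng.uniform(size=n)
a = rng.binomial(1, 0.5, size=n)
y = rng.binomial(1, expit(w_cov - 0.5)).astype(float)
psi = tmle_treatment_specific_mean(y, a, np.full(n, 0.5), np.full(n, 0.5))
print(psi)
```

Because the fluctuation score equation is solved to tolerance, the empirical mean of the A/g-component of the efficient influence curve at the targeted fit is (numerically) zero, which is the key property used in eq. (25).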

Verification of eqs (30) and (31)

Let $\bar{Q}_n \in \mathcal{Q}_{1n}$ be a super-learner of $\bar{Q}_0$ of the type presented in eq. (23). Similarly, let $\bar{G}_n \in \mathcal{G}_n$ be such a super-learner of $\bar{G}_0$ as presented in eq. (37). Suppose that $\max(M_{1,n}^Q, (M_{2,n}^Q)^2) \log K_{1n}/n = O(n^{-\lambda(d_1)})$ and $\max(M_{1,n}^G, (M_{2,n}^G)^2) \log K_{2n}/n = O(n^{-\lambda(d_2)})$, where λ(d) = 1/2 + α(d)/4. Then, by Lemma 2 and Lemma 6, we have $d_{01}(\bar{Q}_n, \bar{Q}_0) = O_P(n^{-\lambda(d_1)})$ and $d_{02}(\bar{G}_n, \bar{G}_0) = O_P(n^{-\lambda(d_2)})$. Plugging in the above bounds for $M_{1,n}^Q$, $M_{2,n}^Q$, $M_{1,n}^G$, $M_{2,n}^G$, it follows that it suffices to select δn so that $\delta_n^{-1} = O(n^{1/2 - \lambda(d_1)/2} (\max(\log K_{1n}, \log K_{2n}))^{-1/2})$. (Improvements can be obtained by selecting a separate δ1n for truncating $\bar{Q}$ and δ2n for truncating $\bar{G}$.) Let Kn = max(K1n, K2n) and impose that $\log K_n = O(n^{1/2 - \alpha(d_1)/2})$. Then, it follows that this bound for $\delta_n^{-1}$ is larger than $n^{\alpha(d_1)/6}$, so that this constraint on δn is dominated by our later constraint $\delta_n^{-1} = o(n^{\alpha(d_1)/6})$ given below.

Above we showed that if $\delta_n^{-1} = O(n^{1/2 - \lambda(d_1)/2} (\max(\log K_{1n}, \log K_{2n}))^{-1/2})$, then the two super-learners $\bar{Q}_{n, B_n}$ and $\bar{G}_{n, B_n}$ of $\bar{Q}_0$ and $\bar{G}_0$ based on the training sample $P_{n, B_n}^0$ converge at the rates $n^{-\lambda(d_1)}$ and $n^{-\lambda(d_2)}$ w.r.t. the loss-based dissimilarities $d_{01}$ and $d_{02}$, respectively. By Lemma 3, under the same conditions stated above for $d_{01}(\bar{Q}_n, \bar{Q}_0) = O_P(n^{-\lambda(d_1)})$, the TMLE update $\bar{Q}_{n, B_n}^*$ converges at this same rate: for each split Bn, we have $d_{01}(\bar{Q}_{n, B_n}^*, \bar{Q}_0) = O_P(n^{-\lambda(d_1)})$.

Verification of eq. (28)

Using straightforward algebra and the triangle inequality for a norm, we obtain

$$\begin{array}{rcl}
\| D^*(Q_{n, B_n}^*, \bar{G}_{n, B_n}) - D^*(Q_0, \bar{G}_0) \|_{P_0} & \leq & \left\| \frac{A}{\bar{G}_{n, B_n}} \frac{\bar{G}_{n, B_n} - \bar{G}_0}{\bar{G}_0} (Y - \bar{Q}_0) \right\|_{P_0} + \left\| \frac{A}{\bar{G}_{n, B_n}} (\bar{Q}_{n, B_n}^* - \bar{Q}_0) \right\|_{P_0} \\
&& {} + \| \bar{Q}_{n, B_n}^* - \bar{Q}_0 \|_{P_0} + | E_{B_n} \Psi(Q_{n, B_n}^*) - \Psi(Q_0) |.
\end{array}$$

Using that $\min(\bar{G}_{n, B_n}, \bar{G}_0) > \delta_n$ and $| Y - \bar{Q}_0 | < 1$, it follows that the first term is bounded by $\delta_n^{-3/2} \| \bar{G}_{n, B_n} - \bar{G}_0 \|_{P_0}$. Using that $\bar{G}_{n, B_n} > \delta_n$, it follows that the second term is bounded by $\delta_n^{-1} \| \bar{Q}_{n, B_n}^* - \bar{Q}_0 \|_{P_0}$. So, we have

$$\| D^*(Q_{n, B_n}^*, G_{n, B_n}) - D^*(Q_0, G_0) \|_{P_0} \leq \delta_n^{-3/2} \| \bar{G}_{n, B_n} - \bar{G}_0 \|_{P_0} + 2 \delta_n^{-1} \| \bar{Q}_{n, B_n}^* - \bar{Q}_0 \|_{P_0} + | E_{B_n} \Psi(Q_{n, B_n}^*) - \Psi(Q_0) |.$$

We bound the last term as follows:

$$\begin{array}{rcl}
E_{B_n} \Psi(Q_{n, B_n}^*) - \Psi(Q_0) & = & E_{B_n} \{ Q_{2n, B_n}^1 \bar{Q}_{n, B_n}^* - Q_{20} \bar{Q}_0 \} \\
& = & E_{B_n} (Q_{2n, B_n}^1 - Q_{20}) \bar{Q}_0 + E_{B_n} Q_{2n, B_n}^1 (\bar{Q}_{n, B_n}^* - \bar{Q}_0) \\
& = & O_P(n^{-1/2}) + E_{B_n} (Q_{2n, B_n}^1 - Q_{20})(\bar{Q}_{n, B_n}^* - \bar{Q}_0) + E_{B_n} Q_{20} (\bar{Q}_{n, B_n}^* - \bar{Q}_0) \\
& = & O_P(n^{-1/2}) + E_{B_n} (Q_{2n, B_n}^1 - Q_{20})(\bar{Q}_{n, B_n}^* - \bar{Q}_0) + O_P(E_{B_n} d_{01}^{1/2}(\bar{Q}_{n, B_n}^*, \bar{Q}_0)),
\end{array}$$

where at the third equality we used that for each split Bn, $(Q_{2n, B_n}^1 - Q_{20}) \bar{Q}_0 = O_P(n^{-1/2})$ by the standard central limit theorem.

In order to bound the second empirical process term we apply Lemma 10 to $n^{1/2} (Q_{2n, B_n}^1 - Q_{20})(\bar{Q}_{n, B_n}^* - \bar{Q}_0)$. Lemma 4 below shows that $\| \bar{Q}_{n, B_n}^* - \bar{Q}_0 \|_{P_0} = O_P(n^{-\lambda(d_1)/2} \delta_n^{-1/2})$. Therefore, we can apply Lemma 10 with rD,n equal to this latter rate. This yields the following bound:

$$E_{B_n} (Q_{2n, B_n}^1 - Q_{20})(\bar{Q}_{n, B_n}^* - \bar{Q}_0) = O_P(n^{-\lambda(d_1)/2} \delta_n^{-1/2} (1 + \log n + \log \delta_n^{-1})).$$

Thus, we have shown

$$\| D^*(Q_{n, B_n}^*, \bar{G}_{n, B_n}) - D^*(Q_0, \bar{G}_0) \|_{P_0} = O_P(n^{-\lambda(d_1)/2} \delta_n^{-1/2} (1 + \log n + \log \delta_n^{-1})) + O_P(\delta_n^{-1} \| \bar{Q}_{n, B_n}^* - \bar{Q}_0 \|_{P_0}) + O_P(\delta_n^{-3/2} \| \bar{G}_{n, B_n} - \bar{G}_0 \|_{P_0}).$$

We have $d_{01}(\bar{Q}_{n, B_n}^*, \bar{Q}_0) = O_P(n^{-\lambda(d_1)})$ and $d_{02}(\bar{G}_{n, B_n}, \bar{G}_0) = O_P(n^{-\lambda(d_2)})$. These rates first need to be translated into $L^2(P_0)$-norms in order to utilize the above bound. Lemma 4 below shows that $\| \bar{Q}_{n, B_n}^* - \bar{Q}_0 \|_{P_0} = O_P(n^{-\lambda(d_1)/2} \delta_n^{-1/2})$ and $\| \bar{G}_{n, B_n} - \bar{G}_0 \|_{P_0} = O_P(n^{-\lambda(d_2)/2})$. So we obtain the following bound:

$$\| D^*(Q_{n, B_n}^*, \bar{G}_{n, B_n}) - D^*(Q_0, \bar{G}_0) \|_{P_0} = O_P(n^{-\lambda(d_1)/2} \delta_n^{-1/2} (1 + \log n + \log \delta_n^{-1})) + O_P(\delta_n^{-3/2} n^{-\lambda(d_1)/2}) + O_P(\delta_n^{-3/2} n^{-\lambda(d_2)/2}).$$

We can conservatively bound this as follows:

$$\| D^*(Q_{n, B_n}^*, \bar{G}_{n, B_n}) - D^*(Q_0, \bar{G}_0) \|_{P_0} = O_P(\delta_n^{-3/2} n^{-\lambda(d_1)/2} \log n),$$

where we bounded conservatively by not utilizing that d2 could be significantly smaller than d1. We conclude that we can set $r_{D,n} = \delta_n^{-3/2} n^{-\lambda(d_1)/2} \log n$. We need that rD,n = o(1), and thus that $\delta_n^{-3/2} = o(n^{\lambda(d_1)/2}/\log n)$, i.e., $\delta_n^{-1} = o(n^{1/6 + \alpha(d_1)/6}/\log n)$. The latter condition is dominated by the condition $\delta_n^{-1} = o(n^{\alpha(d_1)/6})$ we need in the analysis below of the second-order remainder.

Verification of eq. (29)

By eq. (6), we can bound the second order remainder as follows:

$$R_{20}(P_{n, B_n}^*, P_0) \leq \delta_n^{-1} \| \bar{G}_{n, B_n} - \bar{G}_0 \|_{P_0} \| \bar{Q}_{n, B_n}^* - \bar{Q}_0 \|_{P_0} = O_P(\delta_n^{-3/2} n^{-\lambda(d_1)/2 - \lambda(d_2)/2}).$$

Thus, it suffices to assume that $\delta_n^{-3/2} n^{-\lambda(d_1)} = o(n^{-1/2})$, and thus $\delta_n^{-1} = o(n^{\alpha(d_1)/6})$.

We verified the conditions of Theorem 1. Application of Theorem 1 yields the following result.

Theorem 2

Consider the nonparametric statistical model $\mathcal{M}$ for $P_0$ of the $d$-dimensional $O=(W,A,Y)\sim P_0\in\mathcal{M}$ and the target parameter $\Psi:\mathcal{M}\to\mathbb{R}$ defined by $\Psi(P)=E_PE_P(Y\mid A=1,W)$. In this nonparametric model we only assume that, for each $P\in\mathcal{M}$, $\bar{Q}(P)=E_P(Y\mid A=1,W)$ and $\bar{G}(P)=E_P(A\mid W)$ are cadlag functions on $[0,\tau]\subset\mathbb{R}^{d-2}_{\ge 0}$ for some finite $\tau$, with finite variation norm.

Consider the above defined one-step CV-TMLE $\psi^*_n=E_{B_n}\Psi(Q^*_{n,B_n})$ of $\Psi(Q_0)$, based on the HAL-super-learners $\bar{Q}_n$ and $\bar{G}_n$ of the types of eqs (23) and (37), where $\bar{Q}_n$ and $\bar{G}_n$ are enforced to be contained in the interval $(\delta_n,1-\delta_n)$. Let $d_1=d-2$, $\alpha(d_1)=2/(d_1+2)$, $\lambda(d_1)=1/2+\alpha(d_1)/4$, and $K_n=\max(K_{1n},K_{2n})$.

Assume that $\log K_n=O(n^{1/2-\alpha(d_1)/2})$, and that $\delta_n^{-1}$ converges slowly enough to $\infty$ so that $\delta_n^{-1}=o(n^{\alpha(d_1)/6})$. Then $\psi^*_n$ is a regular asymptotically linear estimator with influence curve equal to the efficient influence curve $D^*(P_0)$, and is thus asymptotically efficient.

Thus, for large dimension $d$, $\delta_n^{-1}$ is only allowed to converge to infinity at a very slow rate. Note that a bound on $\delta_n^{-1}$ immediately implies a bound on the efficient influence curve, and such bounds are naturally very crucial.

Above we used the following lemma.

Lemma 4

We have

$$\|\bar{Q}-\bar{Q}_0\|^2_{P_0}\le 4\,\delta_n^{-1}\,d_{01}(\bar{Q},\bar{Q}_0). \quad (34)$$

We also have

$$\|\bar{G}-\bar{G}_0\|^2_{P_0}\le 4\,d_{02}(\bar{G},\bar{G}_0). \quad (35)$$

Proof

We first prove eq. (34). Let

$$KL(\bar{Q}(W),\bar{Q}_0(W))=\bar{Q}_0(W)\log\frac{\bar{Q}_0(W)}{\bar{Q}(W)}+(1-\bar{Q}_0(W))\log\frac{1-\bar{Q}_0(W)}{1-\bar{Q}(W)}$$

be the Kullback-Leibler divergence between the Bernoulli laws with probabilities Q¯(W) and Q¯0(W). Then,

$$d_{01}(\bar{Q},\bar{Q}_0)=E_{P_0}\bar{G}_0(W)\,KL(\bar{Q}(W),\bar{Q}_0(W)).$$

In van der Vaart (1998, page 62) it is shown that for two densities $p,p_0$ we have $\int(p^{1/2}-p_0^{1/2})^2\,d\mu\le\int\log(p_0/p)\,dP_0$. Applying this inequality to the Bernoulli laws with probabilities $\bar{Q}(W)$ and $\bar{Q}_0(W)$ yields:

$$KL(\bar{Q}(W),\bar{Q}_0(W))\ge\bar{Q}_0(W)\bigl(\bar{Q}^{1/2}(W)-\bar{Q}_0^{1/2}(W)\bigr)^2+(1-\bar{Q}_0(W))\bigl((1-\bar{Q}(W))^{1/2}-(1-\bar{Q}_0(W))^{1/2}\bigr)^2.$$

Applying the inequality $(a-b)^2\le 4(a^{1/2}-b^{1/2})^2$ (for $a,b\in[0,1]$) to the square terms on the right-hand side now yields:

$$KL(\bar{Q}(W),\bar{Q}_0(W))\ge 4^{-1}\bigl(\bar{Q}(W)-\bar{Q}_0(W)\bigr)^2. \quad (36)$$

Now, note that $d_{01}(\bar{Q},\bar{Q}_0)=E_{P_0}\bar{G}_0(W)KL(\bar{Q}(W),\bar{Q}_0(W))$. We can use that $\bar{G}_0>\delta_n$, which provides us with the following bound:

$$d_{01}(\bar{Q},\bar{Q}_0)\ge\delta_n E_{P_0}KL(\bar{Q}(W),\bar{Q}_0(W))\ge\delta_n 4^{-1}E_{P_0}(\bar{Q}-\bar{Q}_0)^2(W)=\delta_n 4^{-1}\|\bar{Q}-\bar{Q}_0\|^2_{P_0}.$$

This completes the proof of eq. (34). We have

$$d_{02}(\bar{G},\bar{G}_0)=E_{P_0}KL(\bar{G}(W),\bar{G}_0(W)).$$

Completely analogous to the derivation of eq. (36) above, we obtain

$$KL(\bar{G}(W),\bar{G}_0(W))\ge 4^{-1}\bigl(\bar{G}(W)-\bar{G}_0(W)\bigr)^2,$$

and thus

$$d_{02}(\bar{G},\bar{G}_0)\ge 4^{-1}\|\bar{G}-\bar{G}_0\|^2_{P_0}.$$

This proves eq. (35). □
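The Bernoulli Kullback-Leibler lower bound of eq. (36) is easy to sanity-check numerically. The sketch below (a quick check, not part of the proof) verifies $KL(p_0,p)\ge(p-p_0)^2/4$ on a grid; the inequality also follows from Pinsker's inequality, which gives the stronger bound $KL\ge 2(p-p_0)^2$.

```python
import math

def kl_bernoulli(p0, p):
    """Kullback-Leibler divergence between Bernoulli(p0) and Bernoulli(p)."""
    return p0 * math.log(p0 / p) + (1 - p0) * math.log((1 - p0) / (1 - p))

# Check eq. (36): KL(p0, p) >= (p - p0)^2 / 4 on a grid of probabilities.
grid = [i / 100 for i in range(1, 100)]
assert all(kl_bernoulli(p0, p) >= (p - p0) ** 2 / 4
           for p0 in grid for p in grid)
```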

8 Discussion

In this article we established that a one-step CV-TMLE, using a super-learner whose library includes L1-penalized MLEs that minimize the empirical risk over high dimensional linear combinations of indicator basis functions under a series of L1-constraints, is asymptotically efficient. This was shown to hold under remarkably weak conditions and for an arbitrary dimension of the data structure $O$.

This remarkable fact is heavily driven by the fact that this super-learner always converges at a rate faster than $n^{-1/4}$ w.r.t. the loss-based dissimilarity, which is typically equivalent to the $L^2(P_0)$-norm. This holds for every dimension of the data and any underlying smoothness of the true nuisance parameter values, as long as these true nuisance parameter values have a finite variation norm. Since the second order remainder $R_2(P^*_n,P_0)$ of the first order expansion for the TMLE can be bounded in terms of these loss-based dissimilarities between the super-learner and its true counterpart, this rate of convergence is fast enough to make the second order remainder asymptotically negligible. As a consequence, the first order empirical mean of the canonical gradient/efficient influence curve drives the asymptotics of the TMLE.

In order to prove our theorems it was also important to establish that a one-step TMLE already approximately solves the efficient influence curve equation, under very general reasonable conditions. In this article we focused on a one-step TMLE that updates each nuisance parameter with its own one-dimensional MLE update step. This choice of local least favorable submodel guarantees that the one-step TMLE update of the super-learner of the nuisance parameters is not driven by the nuisance parameter component that is hardest to estimate, which might have finite sample advantages. Nonetheless, our asymptotic efficiency theorem applies to any local least favorable submodel.

The fact that a one-step TMLE already solves the efficient influence curve equation is particularly important in problems in which the TMLE update step is very demanding due to a high complexity of the efficient influence curve. In addition, a one-step TMLE has a more predictable robust behavior than a limit of an iterative algorithm. We could have focused on the universal least favorable submodels so that the TMLE is always a one-step TMLE, but in various problems local least favorable submodels are easier to fit and can thus have practical advantages.

By now, we have also implemented the HAL-estimator for nonparametric regression and dimensions $d\le 10$, and established that its practical performance appears to be very good [22]. In addition, we implemented the HAL-TMLE for the ATE (i.e., our example) for such low dimensions, and the coverage of the confidence intervals has been remarkably good for normal sample sizes, suggesting that the asymptotics of the HAL-TMLE kicks in at earlier sample sizes than theory would predict. We suspect that part of the reason for the excellent practical performance is the double robust nature of the second order remainder, which suggests more finite sample bias cancellation than an actual square of a difference. The practical implementation and evaluation of the HAL-estimator and HAL-TMLE across a diversity of problems remains an area of future research.

In this article we assumed independent and identically distributed observations. Nonetheless, this type of super learner and the resulting asymptotic efficiency of the one-step TMLE will be generalizable to a variety of dependent data structures such as data generated by a statistical graph that assumes sufficient conditional independencies so that the desired central limit theorems can still be established [4, 2326].

This article focused on a CV-TMLE that represents the statistical target parameter $\Psi(P)$ as a function $\Psi(Q_1(P),\ldots,Q_{k_1+1}(P))$ of variation independent nuisance parameters $(Q_1,\ldots,Q_{k_1+1})$. However, there are key examples in which representing $\Psi(P)$ in terms of recursively defined nuisance parameters has important advantages. For example, the longitudinal one-step TMLE of causal effects of multiple time point interventions in [27, 28] relies on a sequential regression representation of the target parameter [29]. In this case, the next regression is defined as the regression of the previous regression on a shrinking history, across a number of regressions, one for each time point at which an intervention takes place. In this case, a super-learner of nuisance parameter $Q_k$ is based on a loss function $L_{1,k,Q_{k+1}}(Q_k)$ that depends on the next nuisance parameter $Q_{k+1}$ (representing the outcome for the regression defining $Q_k$), $k=1,\ldots,k_1+1$. One would now start by obtaining the desired result for the super-learner of $Q_{k_1+1}$, whose loss function does not depend on other nuisance parameters. For the second super-learner, of $Q_{k_1}$, based on candidate estimators $\hat{Q}_{k_1,j}$, $j=1,\ldots,J$, we would use as cross-validated risk $E_{B_n}P^1_{n,B_n}L_{1,k_1,\hat{Q}_{k_1+1}(P^0_{n,B_n})}(\hat{Q}_{k_1,j})$. In other words, one estimates the nuisance parameter of the loss function based on the training sample. In [11, 30, 31] we establish oracle inequalities for the cross-validation selector based on loss functions indexed by an unknown nuisance parameter, which now also rely on a remainder concerning the rate at which $\hat{Q}_{k_1+1}(P_n)$ converges to $Q_{k_1+1,0}$. In this manner, one can establish that the super-learner of $Q_{k_1,0}$ converges at the same or a better rate than the super-learner of $Q_{k_1+1,0}$. This process can be iterated to establish convergence of all the super-learners at the same or a better rate than the initial super-learner of $Q_{k_1+1,0}$.
Our asymptotic efficiency results for the one-step TMLE and one-step CV-TMLE can now be generalized to one-step TMLE and CV-TMLE that rely on sequential targeted learning. The disadvantage of sequential learning is that the behavior of previous super-learners affects the behavior of the next super-learners in the sequence, but the practical implementation of a sequential super-learner can be significantly easier.

Our general theorems and specifically the theorems for our example demonstrate that the model bound on the variance of the efficient influence curve heavily affects the stability of the TMLE, and that we can only let this bound converge to infinity at a slow rate when the dimension of the data is large. Therefore, knowing this bound instead of enforcing it in a data adaptive manner is crucial for good behavior of these efficient estimators. This is also evident from the well known finite sample behavior of various efficient estimators in causal inference and censored data models that almost always rely on using truncation of the treatment and/or censoring mechanism. If one uses highly data adaptive estimators, even when the censoring or treatment mechanism is bounded away from zero, the estimators of these nuisance parameters could easily get very close to zero, so that truncation is crucial. Careful data adaptive selection of this truncation level is therefore an important component in the definition of these efficient estimators.
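As a concrete illustration of the truncation just discussed, the following sketch (with purely hypothetical propensity estimates, not the paper's implementation) shows how bounding the estimated treatment mechanism away from zero and one caps the inverse weights that enter efficient estimators:

```python
import numpy as np

rng = np.random.default_rng(1)
g_n = rng.uniform(0.001, 0.999, size=1000)  # hypothetical propensity estimates Gbar_n(W_i)

delta = 0.025                               # user-chosen truncation level
g_trunc = np.clip(g_n, delta, 1 - delta)    # enforce Gbar_n in (delta, 1 - delta)

# Untruncated inverse weights 1/Gbar_n can explode near zero;
# truncation caps them at 1/delta.
max_weight = (1 / g_trunc).max()
assert max_weight <= 1 / delta
```

A data adaptive choice of `delta` (e.g., by cross-validation over a grid of truncation levels) is the careful selection the text refers to.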

Alternatively, one can define the target parameters in such a way that the variance of their efficient influence curve is uniformly bounded over the model (e.g., [32]). For example, in our example we could have defined the target parameter $EY_{d_1}-EY_{d_0}$, where $d_1(W)=I(\bar{G}_n(W)>\delta)$, $d_0(W)=1-I(1-\bar{G}_n(W)>\delta)$, $\bar{G}_n$ is the super-learner of $\bar{G}_0=E_0(A\mid W)$, and $\delta>0$ is a user-supplied constant. In this case, the static interventions have been replaced by data dependent realistic dynamic interventions that approximate the static interventions but are guaranteed to only carry out the intervention when there is enough support in the data. Because such parameters have a guaranteed amount of support in the data, the variance of the efficient influence curve is uniformly bounded over the model: i.e., $M_D<\infty$.

Acknowledgments

This research is funded by NIH-grant 5R01AI074345-07. The author thanks Marco Carone, Antoine Chambaz, and Alex Luedtke for stimulating discussions, and the reviewers for their very helpful comments.

Appendix

A Oracle inequality for the cross-validation selector

Lemma 2 is a simple corollary of the following finite sample oracle inequality for cross-validation [11, 13], combined with the convexity of the loss function, which allows us to bring $E_{B_n}$ inside the loss-based dissimilarity.

Lemma 5

For any $\delta>0$, there exists a constant $C(M_{1Q,n},M_{2Q,n},\delta)=2(1+\delta)^2(2M_{1Q,n}/3+M_{2Q,n}^2/\delta)$ such that

$$E_0\bigl\{E_{B_n}d_{01}(\hat{\bar{Q}}_{k_{1n}}(P^0_{n,B_n}),\bar{Q}_0)\bigr\}\le(1+2\delta)\,E_0\bigl\{E_{B_n}\min_k d_{01}(\hat{\bar{Q}}_k(P^0_{n,B_n}),\bar{Q}_0)\bigr\}+2C(M_{1Q,n},M_{2Q,n},\delta)\frac{\log K_{1n}}{n\bar{B}_n}.$$

Similarly, for any δ > 0,

$$E_{B_n}d_{01}(\hat{\bar{Q}}_{k_{1n}}(P^0_{n,B_n}),\bar{Q}_0)\le(1+2\delta)\,E_{B_n}\min_k d_{01}(\hat{\bar{Q}}_k(P^0_{n,B_n}),\bar{Q}_0)+R_n,$$

where $ER_n\le 2C(M_{1Q,n},M_{2Q,n},\delta)\frac{\log K_{1n}}{n\bar{B}_n}$.

If $\log K_{1n}/n$ divided by $E_{B_n}\min_k d_{01}(\hat{\bar{Q}}_k(P^0_{n,B_n}),\bar{Q}_0)$ converges to zero in probability, then we also have

$$\frac{E_{B_n}d_{01}(\hat{\bar{Q}}_{k_{1n}}(P^0_{n,B_n}),\bar{Q}_0)}{E_{B_n}\min_k d_{01}(\hat{\bar{Q}}_k(P^0_{n,B_n}),\bar{Q}_0)}\to_p 1.$$

Similarly, if $\log K_{1n}/n$ divided by $E_0E_{B_n}\min_k d_{01}(\hat{\bar{Q}}_k(P^0_{n,B_n}),\bar{Q}_0)$ converges to zero, then we also have

$$\frac{E_0E_{B_n}d_{01}(\hat{\bar{Q}}_{k_{1n}}(P^0_{n,B_n}),\bar{Q}_0)}{E_0E_{B_n}\min_k d_{01}(\hat{\bar{Q}}_k(P^0_{n,B_n}),\bar{Q}_0)}\to 1.$$

B Super learner of G0

Completely analogous to the super-learner of eq. (23), we can define such a super-learner of $\bar{G}_0$, which we will do here. For an $M\in\mathbb{R}^{k_2}_{>0}$, let $\hat{\bar{G}}_M:\mathcal{M}^{nonp}\to\bar{G}_{n,M}\in F_{v,M}$ be the MLE for which $d_{02}(\bar{G}_{n,M}=\hat{\bar{G}}_M(P_n),\bar{G}^M_{0n})=O_P(r^2_{\bar{G}}(n))$. Let $K_{2,n,v}$ be an ordered collection of $k_2$-dimensional constants, and consider the corresponding collection of candidate estimators $\hat{\bar{G}}_M$ with $M\in K_{2,n,v}$. We assume the index set $K_{2,n,v}$ is increasing in $n$ and that $\limsup_n\max_{M\in K_{2,n,v}}M=\max(M_{G,v},M_{L_2(G),v})$. Note that for all $M\in K_{2,n,v}$ with $M>\|L_2(\bar{G}_0)\|_v$, we have that $d_{02}(\hat{\bar{G}}_M(P_n),\bar{G}_0)=O_P(n^{-\lambda_2})$. In addition, let $\hat{\bar{G}}_j:\mathcal{M}^{nonp}\to\bar{G}_n$, $j\in K_{2,n,a}$, be an additional collection of $K_{2,n,a}$ estimators of $\bar{G}_0$. This defines a collection of $K_{2n}=K_{2,n,v}+K_{2,n,a}$ candidate estimators $\{\hat{\bar{G}}_k:k\in K_{2n}\}$ of $\bar{G}_0$.

We define the cross-validation selector as the index

$$k_{2n}=\hat{K}_2(P_n)=\arg\min_{k\in K_{2n}}E_{B_n}P^1_{n,B_n}L_2(\hat{\bar{G}}_k(P^0_{n,B_n}))$$

that minimizes the cross-validated risk $E_{B_n}P^1_{n,B_n}L_2(\hat{\bar{G}}_k(P^0_{n,B_n}))$ over all choices $k$ of candidate estimators. Our proposed super-learner of $\bar{G}_0$ is defined by

$$\bar{G}_n=\hat{\bar{G}}(P_n)=E_{B_n}\hat{\bar{G}}_{k_{2n}}(P^0_{n,B_n}). \quad (37)$$

The analogue of Lemma 2 applies to this estimator $\hat{\bar{G}}(P_n)$ of $\bar{G}_0$.
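The cross-validation selector and the averaged fit of eq. (37) can be sketched as follows. This is a minimal illustration with two toy candidate estimators (the constant and linear fits are hypothetical stand-ins for a real library), $V$-fold cross-validation, and the negative Bernoulli log-likelihood as the loss $L_2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
W = rng.uniform(size=n)
A = rng.binomial(1, 0.2 + 0.6 * W)   # treatment drawn from Gbar_0(W)

def loss(g, a):                      # negative Bernoulli log-likelihood L2(Gbar)
    g = np.clip(g, 1e-6, 1 - 1e-6)
    return -(a * np.log(g) + (1 - a) * np.log(1 - g))

# Two hypothetical candidate estimators Gbar_hat_k:
def fit_const(Wtr, Atr):
    m = Atr.mean()
    return lambda w: np.full_like(w, m)

def fit_linear(Wtr, Atr):
    b, a0 = np.polyfit(Wtr, Atr, 1)
    return lambda w: np.clip(a0 + b * w, 1e-6, 1 - 1e-6)

candidates = [fit_const, fit_linear]

V = 5
folds = np.arange(n) % V
cv_risk = np.zeros(len(candidates))
for v in range(V):                   # cross-validated risk of each candidate
    tr, va = folds != v, folds == v
    for k, fit in enumerate(candidates):
        gk = fit(W[tr], A[tr])
        cv_risk[k] += loss(gk(W[va]), A[va]).mean() / V

k2n = int(np.argmin(cv_risk))        # the cross-validation selector
# Eq. (37): average the selected candidate's training-sample fits over splits.
Gn = np.mean([candidates[k2n](W[folds != v], A[folds != v])(W)
              for v in range(V)], axis=0)
```

With the strong linear signal used here, the selector picks the linear candidate, and `Gn` is the resulting super-learner fit evaluated at the observed $W$.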

Lemma 6

Recall the definition of the model bounds $M_{1G,n}$, $M_{2G,n}$ in eq. (18), and let $C(M_1,M_2,\delta)\equiv 2(1+\delta)^2(2M_1/3+M_2^2/\delta)$. For any fixed $\delta>0$,

$$d_{02}(\bar{G}_n,\bar{G}_{0n})\le(1+2\delta)\,E_{B_n}\min_{k\in K_{2n}}d_{02}(\hat{\bar{G}}_k(P^0_{n,B_n}),\bar{G}_{0n})+O_P\Bigl(C(M_{1G,n},M_{2G,n},\delta)\frac{\log K_{2n}}{n}\Bigr).$$

If for each fixed $\delta>0$, $C(M_{1G,n},M_{2G,n},\delta)\log K_{2n}/n$ divided by $E_{B_n}\min_k d_{02}(\hat{\bar{G}}_k(P^0_{n,B_n}),\bar{G}_{0n})$ is $o_P(1)$, then

$$\frac{d_{02}(\hat{\bar{G}}(P_n),\bar{G}_{0n})}{E_{B_n}\min_k d_{02}(\hat{\bar{G}}_k(P^0_{n,B_n}),\bar{G}_{0n})}-1=o_P(1).$$

If for a fixed $\delta>0$, $E_{B_n}\min_k d_{02}(\hat{\bar{G}}_k(P^0_{n,B_n}),\bar{G}_{0n})=O_P(C(M_{1G,n},M_{2G,n},\delta)\log K_{2n}/n)$, then

$$d_{02}(\hat{\bar{G}}(P_n),\bar{G}_{0n})=O_P\Bigl(C(M_{1G,n},M_{2G,n},\delta)\frac{\log K_{2n}}{n}\Bigr).$$

Suppose that for each fixed $M$ the conditions of Lemma 1 hold with negligible numerical approximation error $r_n$, so that $d_{02}(\bar{G}_{n,M},\bar{G}^M_{0n})=O_P(r^2_{\bar{G}}(n))$. Let $\lambda_2$ be chosen so that $r^2_{\bar{G}}(n)=O(n^{-\lambda_2})$. For each fixed $\delta>0$, we have

$$d_{02}(\hat{\bar{G}}(P_n),\bar{G}_{0n})=O_P(n^{-\lambda_2})+O_P\Bigl(C(M_{1G,n},M_{2G,n},\delta)\frac{\log K_{2n}}{n}\Bigr). \quad (38)$$

C Empirical process results

Theorem 2.1 in [18] establishes the following result for a Donsker class $\mathcal{F}_n$ with uniformly bounded envelope $F_n$ and for which, for each $f\in\mathcal{F}_n$, $P_0f^2\le\delta^2P_0F_n^2$:

$$E\|G_n\|_{\mathcal{F}_n}\lesssim J(\delta,\mathcal{F}_n)\Bigl(1+\frac{J(\delta,\mathcal{F}_n)}{\delta^2n^{1/2}\|F_n\|_{P_0}}\Bigr)\|F_n\|_{P_0},$$

where Gn(f) = n1/2(PnP0)f and

$$J(\delta,\mathcal{F}_n)\equiv\sup_\Lambda\int_0^\delta\log^{1/2}\bigl(1+N(\varepsilon\|F_n\|_\Lambda,\mathcal{F}_n,L^2(\Lambda))\bigr)\,d\varepsilon$$

is the entropy integral from 0 to δ. This definition of the entropy integral is slightly different from a common definition in which the supremum over P is taken within the integral.

Suppose we want a bound on $\sup_{f\in\mathcal{F}_n,\|f\|_{P_0}<\delta}|G_n(f)|$. Of course, $\|f\|_{P_0}<\delta$ is equivalent to $\|f\|_{P_0}<\delta_1\|F_n\|_{P_0}$, where $\delta_1=\delta/\|F_n\|_{P_0}$. Application of the above result with the choice $\delta=\delta_1$ yields:

$$E\sup_{f\in\mathcal{F}_n,\|f\|_{P_0}<\delta}|G_n(f)|\lesssim J(\delta/\|F_n\|_{P_0},\mathcal{F}_n)\Bigl(1+J(\delta/\|F_n\|_{P_0},\mathcal{F}_n)\frac{\|F_n\|_{P_0}}{\delta^2n^{1/2}}\Bigr)\|F_n\|_{P_0}. \quad (39)$$

Suppose that $\sup_\Lambda\log^{1/2}\bigl(1+N(\varepsilon\|F_n\|_\Lambda,\mathcal{F}_n,L^2(\Lambda))\bigr)=O(\varepsilon^{-(1-\alpha)})$ for some $\alpha\in(0,1)$. Then,

$$J(\delta/\|F_n\|_{P_0},\mathcal{F}_n)=O\bigl(\delta^\alpha\|F_n\|_{P_0}^{-\alpha}\bigr).$$

Thus, we have

$$E\sup_{f\in\mathcal{F}_n,\|f\|_{P_0}<\delta}|G_n(f)|\lesssim\delta^\alpha\|F_n\|_{P_0}^{1-\alpha}+\delta^{2\alpha-2}n^{-1/2}\|F_n\|_{P_0}^{2-2\alpha}.$$

Note that this is an increasing function in $\|F_n\|_{P_0}$. Given a bound $M_n$ so that $\|F_n\|_{P_0}<M_n$, a conservative bound is obtained by replacing $\|F_n\|_{P_0}$ by $M_n$.

This proves the following lemma.

Lemma 7

Consider $\mathcal{F}_n$ with $\|F_n\|_{P_0}<M_n$ and $\sup_\Lambda\log^{1/2}\bigl(1+N(\varepsilon\|F_n\|_\Lambda,\mathcal{F}_n,L^2(\Lambda))\bigr)=O(\varepsilon^{-(1-\alpha)})$ for some $\alpha\in(0,1)$. Then,

$$E\sup_{f\in\mathcal{F}_n,\|f\|_{P_0}<r_0(n)}|G_n(f)|\lesssim\{r_0(n)/M_n\}^\alpha M_n+\{r_0(n)/M_n\}^{2\alpha-2}n^{-1/2}.$$

If $r_0(n)<n^{-1/4}$, one should select $r_0(n)=n^{-1/4}$ in the above right-hand side, giving the bound:

$$E\sup_{f\in\mathcal{F}_n,\|f\|_{P_0}<r_0(n)}|G_n(f)|\lesssim\{n^{-1/4}/M_n\}^\alpha M_n+M_n^{2-2\alpha}n^{-\alpha/2}.$$

Consider eq. (39) again, but suppose now that $\sup_\Lambda N(\varepsilon\|F_n\|_\Lambda,\mathcal{F}_n,L^2(\Lambda))=O(\varepsilon^{-p})$ for some $p>0$. Then,

$$J(\delta/\|F_n\|_{P_0},\mathcal{F}_n)=p^{1/2}\int_0^{\delta/\|F_n\|_{P_0}}\log^{1/2}\varepsilon^{-1}\,d\varepsilon.$$

We can conservatively bound $\log^{1/2}\varepsilon^{-1}$ by $\log\varepsilon^{-1}$ for $\varepsilon$ small enough, and then note that $\int_0^x\log\varepsilon^{-1}\,d\varepsilon=x(1-\log x)$. Thus, we have the bound

$$J(\delta/\|F_n\|_{P_0},\mathcal{F}_n)=O\Bigl(\delta\|F_n\|_{P_0}^{-1}\bigl(1-\log(\delta/\|F_n\|_{P_0})\bigr)\Bigr).$$

By plugging this latter bound into eq. (39) we obtain

$$E\sup_{f\in\mathcal{F}_n,\|f\|_{P_0}<\delta}|G_n(f)|\lesssim\delta\bigl(1-\log(\delta/\|F_n\|_{P_0})\bigr)+\bigl(1-\log(\delta/\|F_n\|_{P_0})\bigr)^2n^{-1/2}.$$

Note that the right-hand side is increasing in $\|F_n\|_{P_0}$. So if we know that $\|F_n\|_{P_0}\le M_n$ for some $M_n$, we obtain the bound

$$E\sup_{f\in\mathcal{F}_n,\|f\|_{P_0}<\delta}|G_n(f)|\lesssim\delta\bigl(1-\log(\delta/M_n)\bigr)+\bigl(1-\log(\delta/M_n)\bigr)^2n^{-1/2}.$$
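The elementary identity $\int_0^x\log\varepsilon^{-1}\,d\varepsilon=x(1-\log x)$ used in this derivation is easy to verify numerically (a midpoint-rule sketch, purely a sanity check):

```python
import math

def entropy_integral(x, m=200_000):
    """Midpoint-rule approximation of the integral of log(1/eps) from 0 to x."""
    h = x / m
    return sum(math.log(1 / ((i + 0.5) * h)) * h for i in range(m))

x = 0.3
assert abs(entropy_integral(x) - x * (1 - math.log(x))) < 1e-4
```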

Lemma 8

Consider $\mathcal{F}_n$ with $\|F_n\|_{P_0}<M_n$ and $\sup_\Lambda N(\varepsilon\|F_n\|_\Lambda,\mathcal{F}_n,L^2(\Lambda))=O(\varepsilon^{-p})$ for some $p>0$. Then,

$$E\sup_{f\in\mathcal{F}_n,\|f\|_{P_0}<r_0(n)}|G_n(f)|\lesssim r_0(n)\Bigl(1-\log\frac{r_0(n)}{M_n}\Bigr)+\Bigl(1-\log\frac{r_0(n)}{M_n}\Bigr)^2n^{-1/2}. \quad (40)$$

The following lemma is proved by first applying Lemma 7 to $(P_n-P_0)f_n$ with $r_0(n)=1$ to obtain an initial rate $r_0(n)$, and then applying Lemma 7 again with this new initial rate $r_0(n)$.

Lemma 9

Consider the following setting:

$$f_n\in\mathcal{F}_n,\quad\|F_n\|_{P_0}\le M_n,\quad\sup_\Lambda\log^{1/2}\bigl(1+N(\varepsilon\|F_n\|_\Lambda,\mathcal{F}_n,L^2(\Lambda))\bigr)=O(\varepsilon^{-(1-\alpha)}),\ \alpha\in(0,1),$$
$$d_0(Q_n,Q_0)\le|(P_n-P_0)f_n|,\quad\|f_n\|_{P_0}\le M_{2n}\{d_0(Q_n,Q_0)\}^{1/2},\quad 1<M_n<n^{1/(4(1-\alpha))}.$$

Then

$$d_0(Q_n,Q_0)\lesssim n^{-1/2}n^{-\alpha/4}C(M_n,M_{2n},\alpha),$$

where

$$C(M_n,M_{2n},\alpha)=M_{2n}^\alpha M_n^{1-\alpha/2-\alpha^2/2}+n^{-\alpha/4}M_{2n}^{2\alpha-2}M_n^{1-\alpha^2}.$$

Proof

We have $d_0(Q_n,Q_0)\le|(P_n-P_0)f_n|$. We apply Lemma 7 to the right-hand side with $r_0(n)=1$. This yields

$$E|(P_n-P_0)f_n|\lesssim n^{-1/2}M_n^{1-\alpha}+M_n^{2-2\alpha}n^{-1}.$$

This shows $d_0(Q_n,Q_0)\lesssim n^{-1/2}M_n^{1-\alpha}+M_n^{2-2\alpha}n^{-1}$. Using that $\sqrt{x+y}\le\sqrt{x}+\sqrt{y}$, this implies $d_0(Q_n,Q_0)^{1/2}\lesssim n^{-1/4}M_n^{(1-\alpha)/2}+M_n^{1-\alpha}n^{-1/2}$. By assumption, this implies

$$\|f_n\|_{P_0}\lesssim n^{-1/4}M_{2n}M_n^{(1-\alpha)/2}+n^{-1/2}M_{2n}M_n^{1-\alpha}.$$

The right-hand side is of order $n^{-1/4}M_{2n}M_n^{(1-\alpha)/2}$ if $M_n\le n^{1/(4(1-\alpha))}$, which holds by assumption. Let $r_0(n)=n^{-1/4}M_{2n}M_n^{(1-\alpha)/2}$. We now apply Lemma 7 to $(P_n-P_0)f_n$ with this choice of $r_0(n)$. Note that $r_0(n)$ converges to zero at a rate slower than (or equal to) $n^{-1/4}$. Thus, application of Lemma 7 gives the following bound:

$$E|(P_n-P_0)f_n|\lesssim n^{-1/2}r_0(n)^\alpha M_n^{1-\alpha}+r_0(n)^{2\alpha-2}M_n^{2-2\alpha}n^{-1}\lesssim n^{-1/2}n^{-\alpha/4}M_{2n}^\alpha M_n^{1-\alpha/2-\alpha^2/2}+n^{-(1+\alpha)/2}M_{2n}^{2\alpha-2}M_n^{1-\alpha^2}.$$

We can factor out $n^{-1/2}n^{-\alpha/4}$, giving the bound

$$\lesssim n^{-1/2}n^{-\alpha/4}\bigl\{M_{2n}^\alpha M_n^{1-\alpha/2-\alpha^2/2}+n^{-\alpha/4}M_{2n}^{2\alpha-2}M_n^{1-\alpha^2}\bigr\}.$$

This completes the proof of the lemma. □

The following lemma is needed in the analysis of the CV-TMLE, where $f_{n,\varepsilon}=D^*(Q_{n,B_n,\varepsilon},G_{n,B_n})-D^*(Q_0,G_0)$.

Lemma 10

Let $f_{n,\varepsilon_n}\in\mathcal{F}_n=\{f_{n,\varepsilon}:\varepsilon\}$, where $\varepsilon$ varies over a bounded set in $\mathbb{R}^p$ and $f_{n,\varepsilon}$ is a non-random function (i.e., not based on the data $O_1,\ldots,O_n$). Let $F_n$ be the envelope of $\mathcal{F}_n$ and let $M_{D,n}$ be such that $F_n<M_{D,n}$. Assume that $\sup_\Lambda N(\varepsilon\|F_n\|_\Lambda,\mathcal{F}_n,L^2(\Lambda))=O(\varepsilon^{-p})$. Suppose that $\|f_{n,\varepsilon_n}\|_{P_0}=o_P(r_D(n))$ for a rate $r_D(n)\to 0$. Then, $G_n(f_{n,\varepsilon_n})=G_n(\bar{f}_{n,\varepsilon_n})+E_n$, where

$$E_0|G_n(\bar{f}_{n,\varepsilon_n})|=O\bigl(r_D(n)(1-\log(r_D(n)/M_{D,n}))\bigr),$$

and $E_n$ equals 0 with probability tending to 1. Thus, if $r_D(n)\log(M_{D,n}/r_D(n))=o(1)$, then $G_n(f_{n,\varepsilon_n})=o_P(1)$.

Proof

For notational convenience, let's denote $f_{n,\varepsilon_n}$ by $f_n$. We have that, with probability tending to 1, $\|f_n\|_{P_0}<r_D(n)$. We have $f_n=f_nI(\|f_n\|_{P_0}<r_D(n))+f_nI(\|f_n\|_{P_0}\ge r_D(n))$. Denote the first term by $\bar{f}_n$ and note that the second term equals zero with probability tending to 1. This shows that $G_n(f_n)=G_n(\bar{f}_n)+E_n$, where $E_n$ equals zero with probability tending to 1, while $\|\bar{f}_n\|_{P_0}<r_D(n)$ with probability 1. Application of Lemma 8 shows that

$$E|G_n(\bar{f}_n)|\lesssim r_D(n)\log(M_{D,n}/r_D(n)).$$

This completes the proof. □

D Implementing the HAL-estimator

For notational convenience, consider the case that $Q=\bar{Q}$. The $M$-specific HAL-estimator is defined, for a given vector $M<\infty$, by minimizing $P_nL_1(\bar{Q})$ over all $\bar{Q}\in\bar{\mathcal{Q}}$ for which the variation norm of $L_1(\bar{Q})$ is bounded by this $M$. We need to calculate this estimator for a series of $M$-vectors ranging from 0 to infinity, and we will then select $M$ with cross-validation (see next section). Suppose that, for a fixed $n$, there exists an $M_{n,v}\in\mathbb{R}^{k_1}$ so that for all $\bar{Q}\in\bar{\mathcal{Q}}$, $\|L_1(\bar{Q})\|_v\le M_{n,v}\|\bar{Q}\|_v$. This assumption is typically trivially satisfied. Then, calculating this collection of $M$-specific HAL-estimators across a set of $M$-vectors can also be achieved by computing an MLE of $\bar{Q}\to P_nL_1(\bar{Q})$ over all $\bar{Q}\in\bar{\mathcal{Q}}$ with $\|\bar{Q}\|_v<M$, for a series of $M$-vectors. Therefore, we rephrase our goal as computing a $\bar{Q}_{n,M}$ so that

$$P_nL_1(\bar{Q}_{n,M})=\min_{\bar{Q}\in\bar{\mathcal{Q}}_M}P_nL_1(\bar{Q})+r_n, \quad (41)$$

where in this section we redefine $\bar{\mathcal{Q}}_M=\{\bar{Q}\in\bar{\mathcal{Q}}:\|\bar{Q}\|_v<M\}$, and $r_n$ is a controlled small number. We will now address a strategy for implementing this MLE $\bar{Q}_{n,M}$.

D.1 Approximating a function with variation norm M by a linear combination of indicator basis functions with L1-norm of the coefficient vector equal to M

Any cadlag function $f\in D[0,\tau]$ with finite variation norm can be represented as follows:

$$f(x)=f(0)+\sum_{s\subset\{1,\ldots,p\}}\int_{(0_s,x_s]}f(du_s,0_{-s}).$$

For each subset $s$ of size $|s|$, consider a partitioning of $(0_s,\tau_s]$ into $|s|$-dimensional cubes with width $h_m$. Let's denote these cubes by $R_{h_m}(j,s)$, where $j$ indexes the $j$-th cube and runs over $O(1/h_m^{|s|})$ cubes. Let $R_{h_m}(s)$ be the index set, so that we can write $(0_s,\tau_s]=\cup_{j\in R_{h_m}(s)}R_{h_m}(j,s)$. By the definition of the integral, we have $f(x)=\lim_{h_m\to 0}f_m(x)$, where

$$f_m(x)=f_m(f)(x)=f(0)+\sum_{s\subset\{1,\ldots,p\}}\sum_{j\in R_{h_m}(s)}\phi^s_{h_m,j}(x)\,\beta^s_{h_m,j},$$

$\beta^s_{h_m,j}=f(R_{h_m}(j,s))$ is the measure $f$ assigns to the cube $R_{h_m}(j,s)$, and $\phi^s_{h_m,j}(x)=I(m_{h_m}(j,s)\le x_s)$ is the indicator that the midpoint $m_{h_m}(j,s)$ of the cube $R_{h_m}(j,s)$ is smaller than or equal to $x_s$. By the dominated convergence theorem, it also follows that $\|f_m(f)-f\|_\Lambda\to 0$ for any $L^2(\Lambda)$-norm. Moreover, the variation norm of $f$ is approximated by the sum of the absolute values of all the coefficients $\beta^s_{h_m,j}$:

$$\|f\|_v=\lim_{h_m\to 0}\Bigl(|f(0)|+\sum_{s\subset\{1,\ldots,p\}}\sum_{j\in R_{h_m}(s)}|\beta^s_{h_m,j}|\Bigr).$$

Let $\beta_0$ denote the intercept $f(0)$. Thus, we conclude that, given a function $f\in F_{v,M}$, we can approximate it with a finite linear combination $f_m(f)$ of indicator basis functions $\phi^s_{h_m,j}$ plus an intercept $\beta_0$, for which the $L_1$-norm of the coefficient vector $(\beta_0,(\beta^s_{h_m,j}:j,s))$ approximates the variation norm of $f$. The support points $m_{h_m}(j,s)$ could also be selected based on the data support $\{O_1,\ldots,O_n\}$. Such a strategy is presented and implemented for the HAL-estimator of a nonparametric regression in [22]. In the latter paper we select $n$ support points for each $s$-specific measure, possibly resulting in as many as $n\cdot 2^d$ basis functions.
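The indicator basis just described is straightforward to construct in practice. The sketch below (the function name is ours, not the implementation of [22]) builds the design matrix of indicators $\phi(x)=I(x_s\ge u_s)$, one column per subset $s$ of coordinates and per support point $u$, using the observations themselves as support points:

```python
import numpy as np
from itertools import chain, combinations

def hal_basis(X, knots):
    """Design matrix of indicator basis functions phi(x) = I(x_s >= u_s),
    one column per nonempty subset s of coordinates and per knot u,
    plus an intercept column (beta_0)."""
    n, d = X.shape
    subsets = chain.from_iterable(combinations(range(d), r) for r in range(1, d + 1))
    cols = [np.ones(n)]                                   # intercept beta_0
    for s in subsets:
        s = list(s)
        for u in knots:
            cols.append(np.all(X[:, s] >= u[s], axis=1).astype(float))
    return np.column_stack(cols)

# Use the n observations themselves as support points, as suggested in the text:
X = np.random.default_rng(3).uniform(size=(20, 2))
Phi = hal_basis(X, X)
assert Phi.shape == (20, 1 + 20 * (2 ** 2 - 1))           # n*(2^d - 1) indicators + intercept
```

With $n$ knots per subset this yields $n(2^d-1)$ indicator columns plus the intercept, in line with the $n\cdot 2^d$ count above.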

D.2 An approximation of the MLE over functions of bounded variation using L1-penalization

For an $M\in\mathbb{R}_{>0}$, let's define

$$F^m_{v,M}=\Bigl\{\sum_{s\subset\{1,\ldots,p\}}\sum_{j\in R_{h_m}(s)}\phi^s_{h_m,j}(x)\,\beta^s_{h_m,j}:\sum_{s,j}|\beta^s_{h_m,j}|\le M\Bigr\}$$

as the collection of all these finite linear combinations of this collection of basis functions under the constraint that the $L_1$-norm of the coefficient vector is bounded by $M$. Consider the case that the parameter space $\bar{\mathcal{Q}}_j$ for $\bar{Q}_j(P)$, $j\in\{1,\ldots,k_1\}$, is nonparametric, so that the MLE over $\bar{\mathcal{Q}}_{j,M}=F_{v,M}$ of $\bar{Q}_{j0}$ corresponds with minimizing over $F_{v,M}$. Note that this does not imply that the model $\mathcal{M}$ is nonparametric: for example, the data distribution could be parameterized in terms of unspecified functions $\bar{Q}_j$ of dimension $d_1(j)$, $j=1,\ldots,k_1$, and unspecified functions $\bar{G}_j$ of dimension $d_2(j)$, $j=1,\ldots,k_2$.
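In Lagrangian form, the $L_1$-constrained empirical risk minimizer over $F^m_{v,M}$ with squared-error loss is an ordinary lasso over the indicator design. A minimal coordinate-descent sketch (ours, not the implementation of [22]; a one-dimensional indicator basis is used for illustration):

```python
import numpy as np

def lasso_cd(Phi, y, lam, n_sweeps=300):
    """Coordinate descent for (1/2n)||y - Phi b||^2 + lam * ||b||_1,
    the Lagrangian form of the variation-norm (L1) constrained fit."""
    n, p = Phi.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()           # residual y - Phi @ beta
    col_ss = (Phi ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            if col_ss[j] == 0:
                continue
            rho = Phi[:, j] @ resid + col_ss[j] * beta[j]
            b_new = np.sign(rho) * max(abs(rho) - n * lam, 0.0) / col_ss[j]
            resid += Phi[:, j] * (beta[j] - b_new)   # keep residual in sync
            beta[j] = b_new
    return beta

# One-dimensional toy fit: a step function with indicator basis I(x >= knot).
x = np.linspace(0, 1, 50)
y = (x > 0.5).astype(float)
Phi = np.column_stack([np.ones(50)] + [(x >= k).astype(float) for k in x])
beta_small = lasso_cd(Phi, y, lam=0.001)
beta_big = lasso_cd(Phi, y, lam=0.1)
# A larger penalty (smaller variation-norm bound M) yields a smaller L1-norm:
assert np.abs(beta_big).sum() <= np.abs(beta_small).sum() + 1e-8
```

Sweeping `lam` over a grid traces out the $M$-indexed family of HAL fits from which cross-validation selects, since the $L_1$-norm of the coefficient vector plays the role of the variation norm bound $M$.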

The next lemma proves that we can approximate such an MLE over $F_{v,M}$ for a loss function $L_{1j}(\bar{Q}_j)$ by an MLE over $F^m_{v,M}$, by selecting $m$ large enough.

Lemma 11

Let $M\in\mathbb{R}_{>0}$ be given. Consider $f_0\in F_{v,M}\subset D[0,\tau]$ so that for a loss function $(O,f)\to L(f)(O)$ we have $P_0L(f_0)=\min_{f\in F_{v,M}}P_0L(f)$. Assume that if $f_m\in F_{v,M}$ converges pointwise to an $f\in F_{v,M}$ on $[0,\tau]$, then $L(f_m)$ converges pointwise to $L(f)$ on a support of $P_0$, including the support of the empirical distribution $P_n$. Let $f_{0,m}\in F^m_{v,M}$ be such that $P_0L(f_{0,m})=\min_{f\in F^m_{v,M}}P_0L(f)$. We have $P_0(L(f_{0,m})-L(f_0))\to 0$ as $h_m\to 0$.

Consider now an $f_n\in F_{v,M}$ so that $P_nL(f_n)=\min_{f\in F_{v,M}}P_nL(f)$, and let $f_{n,m}\in F^m_{v,M}$ be such that $P_nL(f_{n,m})=\min_{f\in F^m_{v,M}}P_nL(f)$. We have $P_n(L(f_{n,m})-L(f_n))\to 0$ as $h_m\to 0$.

Proof

We want to show that $P_0(L(f_{0,m})-L(f_0))\to 0$ as $h_m\to 0$. By the approximation presented in the previous section, since $f_0\in F_{v,M}$, we can find a sequence $\tilde{f}_{0,m}\in F^m_{v,M}$ so that $\tilde{f}_{0,m}\to f_0$ as $h_m\to 0$, pointwise and in $L^2(P_0)$-norm. By assumption and the dominated convergence theorem, this implies that $P_0L(\tilde{f}_{0,m})-P_0L(f_0)$ also converges to zero as $h_m\to 0$. But, since $f_{0,m}$ minimizes $P_0L(f)$ over all $f\in F^m_{v,M}$, we have

$$0\le P_0L(f_{0,m})-P_0L(f_0)\le P_0L(\tilde{f}_{0,m})-P_0L(f_0)\to 0,$$

which proves that $P_0L(f_{0,m})-P_0L(f_0)\to 0$ as $h_m\to 0$.

We now want to show that $P_n(L(f_{n,m})-L(f_n))\to 0$ as $h_m\to 0$. Since $f_n\in F_{v,M}$, we can find a sequence $\tilde{f}_{n,m}\in F^m_{v,M}$ so that $\tilde{f}_{n,m}\to f_n$ as $h_m\to 0$, pointwise and in $L^2(P_n)$-norm.

Then, by assumption and the dominated convergence theorem, $P_nL(\tilde{f}_{n,m})-P_nL(f_n)$ also converges to zero as $h_m\to 0$. But, since $f_{n,m}$ minimizes $P_nL(f)$ over all $f\in F^m_{v,M}$, we have

$$0\le P_nL(f_{n,m})-P_nL(f_n)\le P_nL(\tilde{f}_{n,m})-P_nL(f_n)\to 0,$$

which proves that $P_nL(f_{n,m})-P_nL(f_n)\to 0$ as $h_m\to 0$. □

D.3 An approximation of the MLE over the subspace Q¯M by an MLE over an L1-constrained linear model

Above we defined a mapping from a function $f\in F_{v,M}$ into a linear combination $f_m(f)\in F^m_{v,M}$ of basis functions for which the $L_1$-norm of the coefficient vector approximates the variation norm of $f$. The following lemma proves in general that we can approximate the MLE over $\bar{\mathcal{Q}}_M=\bar{\mathcal{Q}}\cap F_{v,M}$ with the MLE over $\bar{\mathcal{Q}}^m_M=\{\bar{Q}_m(\bar{Q}):\bar{Q}\in\bar{\mathcal{Q}}_M\}$, which is the collection of these linear combinations of the basis functions for which the $L_1$-norm of the coefficient vector is bounded by $M$. Note that $\bar{\mathcal{Q}}^m_M$ is typically not a submodel of $\bar{\mathcal{Q}}_M$, but is obtained by replacing each element $\bar{Q}$ of $\bar{\mathcal{Q}}_M$ with its approximation $\bar{Q}_m(\bar{Q})$.

Lemma 12

Assume that if $\bar{Q}_m\in F_{v,M}$ converges pointwise to a $\bar{Q}\in F_{v,M}$ on $[0,\tau]$, then $L_1(\bar{Q}_m)$ converges pointwise to $L_1(\bar{Q})$ on a support of $P_0$, including the support of the empirical distribution $P_n$. For an $M\in\mathbb{R}^{k_1}$, let $\bar{\mathcal{Q}}_M=\bar{\mathcal{Q}}\cap F_{v,M}=\{\bar{Q}(P):P\in\mathcal{M},\bar{Q}(P)\in F_{v,M}\}$ be all functions in the parameter space for $\bar{Q}_0$ that have a variation norm smaller than $M<\infty$. Let $\bar{\mathcal{Q}}^m_M=\{\bar{Q}_m(\bar{Q}):\bar{Q}\in\bar{\mathcal{Q}}_M\}$, where $\bar{Q}_m(\bar{Q})$ is defined above as the finite dimensional linear combination of the basis functions $\{\phi^s_{h_m,j}:j,s\}$ with coefficient vector $\{\beta^s_{h_m,j}(\bar{Q}):j,s\}$.

Consider a $\bar{Q}_{0,M}\in\bar{\mathcal{Q}}_M$ so that $P_0L_1(\bar{Q}_{0,M})=\min_{\bar{Q}\in\bar{\mathcal{Q}}_M}P_0L_1(\bar{Q})$, and let $\bar{Q}^m_{0,M}\in\bar{\mathcal{Q}}^m_M$ be such that $P_0L_1(\bar{Q}^m_{0,M})=\min_{\bar{Q}\in\bar{\mathcal{Q}}^m_M}P_0L_1(\bar{Q})$. Then, $P_0(L_1(\bar{Q}^m_{0,M})-L_1(\bar{Q}_{0,M}))\to 0$ as $h_m\to 0$.

Similarly, consider a $\bar{Q}_{n,M}\in\bar{\mathcal{Q}}_M$ so that $P_nL_1(\bar{Q}_{n,M})=\min_{\bar{Q}\in\bar{\mathcal{Q}}_M}P_nL_1(\bar{Q})$, and let $\bar{Q}^m_{n,M}\in\bar{\mathcal{Q}}^m_M$ be such that $P_nL_1(\bar{Q}^m_{n,M})=\min_{\bar{Q}\in\bar{\mathcal{Q}}^m_M}P_nL_1(\bar{Q})$. Then, $P_n(L_1(\bar{Q}^m_{n,M})-L_1(\bar{Q}_{n,M}))\to 0$ as $h_m\to 0$.

Proof

We want to show that $P_0(L_1(\bar{Q}^m_{0,M})-L_1(\bar{Q}_{0,M}))\to 0$ as $h_m\to 0$. By the approximation presented in the previous section, since $\bar{Q}_{0,M}\in F_{v,M}$, we can find a sequence $\tilde{Q}^m_{0,M}\in F^m_{v,M}$ so that $\tilde{Q}^m_{0,M}\to\bar{Q}_{0,M}$ as $h_m\to 0$, pointwise and in $L^2(P_0)$-norm. By assumption and the dominated convergence theorem, this implies that $P_0L_1(\tilde{Q}^m_{0,M})-P_0L_1(\bar{Q}_{0,M})$ also converges to zero as $h_m\to 0$. But, since $\bar{Q}^m_{0,M}$ minimizes $P_0L_1(\bar{Q})$ over all $\bar{Q}\in\bar{\mathcal{Q}}^m_M$, we have

$$0\le P_0L_1(\bar{Q}^m_{0,M})-P_0L_1(\bar{Q}_{0,M})\le P_0L_1(\tilde{Q}^m_{0,M})-P_0L_1(\bar{Q}_{0,M})\to 0,$$

which proves that $P_0L_1(\bar{Q}^m_{0,M})-P_0L_1(\bar{Q}_{0,M})\to 0$ as $h_m\to 0$.

We now want to show that $P_n(L_1(\bar{Q}^m_{n,M})-L_1(\bar{Q}_{n,M}))\to 0$ as $h_m\to 0$. Since $\bar{Q}_{n,M}\in F_{v,M}$, we can find a sequence $\tilde{Q}^m_{n,M}\in F^m_{v,M}$ so that $\tilde{Q}^m_{n,M}\to\bar{Q}_{n,M}$ as $h_m\to 0$, pointwise and in $L^2(P_n)$-norm.

Then, by assumption and the dominated convergence theorem, $P_nL_1(\tilde{Q}^m_{n,M})-P_nL_1(\bar{Q}_{n,M})$ also converges to zero as $h_m\to 0$. But, since $\bar{Q}^m_{n,M}$ minimizes $P_nL_1(\bar{Q})$ over all $\bar{Q}\in\bar{\mathcal{Q}}^m_M$, we have

$$0\le P_nL_1(\bar{Q}^m_{n,M})-P_nL_1(\bar{Q}_{n,M})\le P_nL_1(\tilde{Q}^m_{n,M})-P_nL_1(\bar{Q}_{n,M})\to 0,$$

which proves that $P_nL_1(\bar{Q}^m_{n,M})-P_nL_1(\bar{Q}_{n,M})\to 0$ as $h_m\to 0$. □

E A single updating step in TMLE suffices for approximately solving the efficient influence curve equation

In this section we focus on the one-step TMLE, but the results can be straightforwardly generalized to the one-step CV-TMLE.

The following lemma proves that, for a local least favorable submodel with a 1-dimensional $\varepsilon$ and initial estimators that are consistent at a rate faster than $n^{-1/4}$, the one-step TMLE already solves $P_nD^*(Q_{n,\varepsilon_n},G_n)=o_P(n^{-1/2})$ under some regularity conditions.

Lemma 13

Let $\Psi:\mathcal{M}\to\mathbb{R}$ be a pathwise differentiable parameter at $P$ with canonical gradient $D^*(P)$, and assume $\Psi(P)=\Psi(Q(P))$ and $D^*(P)=D^*(Q(P),G(P))$ for parameters $Q:\mathcal{M}\to\mathcal{Q}=\{Q(P):P\in\mathcal{M}\}$ and $G:\mathcal{M}\to\mathcal{G}=\{G(P):P\in\mathcal{M}\}$. Let $R_2()$ be defined by $\Psi(P)-\Psi(P_0)=(P-P_0)D^*(P)+R_2(P,P_0)$, and let $R_2(P,P_0)=R_{20}((Q,G),(Q_0,G_0))$. Suppose $Q_0=\arg\min_Q P_0L(Q)$ for some loss function $L(Q)$, and that, for any $Q\in\mathcal{Q}$ and $G\in\mathcal{G}$, $\{Q_\varepsilon:\varepsilon\}\subset\mathcal{Q}$ is a one-dimensional parametric submodel through $Q$ with $\frac{d}{d\varepsilon}L(Q_\varepsilon)\big|_{\varepsilon=0}=D^*(Q,G)$. Let $(Q_n,G_n)$ be an initial estimator of $(Q_0,G_0)$, and consider the one-step TMLE $\Psi(Q_{n,\varepsilon_n})$ with $\varepsilon_n=\arg\min_\varepsilon P_nL(Q_{n,\varepsilon})$.

Let $f_n(\varepsilon)=P_nD^*(Q_{n,\varepsilon},G_n)$ and $g_n(\varepsilon)=\frac{d}{d\varepsilon}P_nL(Q_{n,\varepsilon})$. Let $f'_n(\varepsilon)=\frac{d}{d\varepsilon}f_n(\varepsilon)$ and $g'_n(\varepsilon)=\frac{d}{d\varepsilon}g_n(\varepsilon)$. Let $\varepsilon_0=0$. Assume

  • $f_n(\varepsilon_n)=f_n(0)+f'_n(0)\varepsilon_n+O_P(\varepsilon_n^2)$ and $g_n(\varepsilon_n)=g_n(0)+g'_n(0)\varepsilon_n+O_P(\varepsilon_n^2)$;

  • $\varepsilon_n^2=o_P(n^{-1/2})$;

  • $\bigl\{\frac{d}{d\varepsilon_n}D^*(Q_{n,\varepsilon_n},G_n)-\frac{d^2}{d\varepsilon_n^2}L(Q_{n,\varepsilon_n})\bigr\}/n^{1/4}$ falls in a $P_0$-Donsker class with probability tending to 1;

  • $P_0\bigl\{\frac{d}{d\varepsilon_0}D^*(Q_{n,\varepsilon_0},G_n)-\frac{d}{d\varepsilon_0}D^*(Q_{0,\varepsilon_0},G_0)\bigr\}=O_P(n^{-1/4})$ and $P_0\bigl\{\frac{d^2}{d\varepsilon_0^2}L(Q_{n,\varepsilon_0})-\frac{d^2}{d\varepsilon_0^2}L(Q_{0,\varepsilon_0})\bigr\}=O_P(n^{-1/4})$; (42)
  • $P_0\frac{d^2}{d\varepsilon_0^2}L(Q_{0,\varepsilon_0})=P_0D^*(P_0)\{D^*(P_0)\}^\top$. (43)
    If $L(Q(P))=-\log p_{Q(P),\eta(P)}$ for some density parameterization $(Q,\eta)\to p_{Q,\eta}$, then eq. (43) holds;
  • $\frac{d}{d\varepsilon_0}R_{20}((Q_{0,\varepsilon_0},G_0),(Q_0,G_0))=0$.

Then, $P_nD^*(Q_{n,\varepsilon_n},G_n)=o_P(n^{-1/2})$.

The first bullet point only assumes that the chosen least favorable submodel is smooth in $\varepsilon$. The second bullet point will be satisfied if the initial estimators $Q_n,G_n$ converge to the true $Q_0,G_0$ at a rate faster than $n^{-1/4}$. The third bullet point holds without the $n^{-1/4}$-scaling if the estimators $Q_n,G_n$ have uniformly bounded variation norm; due to the scaling by $n^{-1/4}$, it even allows the variation norm to grow with sample size, again showing that this is a very weak condition. Conditions eq. (42) are expected to hold if $Q_n,G_n$ converge to $Q_0,G_0$ at rate $n^{-1/4}$. Condition eq. (43) holds for loss functions that can be represented as a log-likelihood loss, and is therefore again a natural condition for a local least favorable submodel w.r.t. loss function $L$. Finally, consider the last bullet point. If this remainder has a double robust form $R_{20}((Q,G),(Q_0,G_0))=\int(H_1(Q)-H_1(Q_0))(H_2(G)-H_2(G_0))\,dP_0$ for some functionals $H_1,H_2$, then this condition holds. If the remainder is of the form $R_{20}((Q,G),(Q_0,G_0))=\int(H(Q)-H(Q_0))^2\,dP_0$, then again the condition trivially holds. This shows that the latter condition is also a weak regularity condition.
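To make the one-step update concrete for the parameter $\Psi(P)=E_PE_P(Y\mid A=1,W)$ of our example: the local least favorable submodel is the logistic fluctuation $\bar{Q}_\varepsilon=\mathrm{expit}(\mathrm{logit}\,\bar{Q}+\varepsilon H)$ with clever covariate $H(A,W)=A/\bar{G}(W)$, and $\varepsilon_n$ solves the score equation, after which $P_nD^*$ vanishes. A self-contained numeric sketch (the crude constant initial fits are ours, purely for illustration; HAL fits would take their place):

```python
import numpy as np

expit = lambda x: 1 / (1 + np.exp(-x))
logit = lambda p: np.log(p / (1 - p))

rng = np.random.default_rng(0)
n = 5000
W = rng.uniform(size=n)
A = rng.binomial(1, 0.2 + 0.6 * W)                # treatment mechanism Gbar_0(W)
Y = rng.binomial(1, expit(W + A))                 # outcome regression Qbar_0(A, W)

# Deliberately crude initial estimators (stand-ins for HAL fits):
Qbar = np.clip(np.full(n, Y.mean()), 0.01, 0.99)  # Qbar_n(A_i, W_i), constant
gbar = np.clip(np.full(n, A.mean()), 0.05, 0.95)  # Gbar_n(W_i), constant

H = A / gbar                                      # clever covariate at observed A
H1 = 1 / gbar                                     # clever covariate at A = 1

# One-dimensional MLE update: Newton-Raphson for eps maximizing the
# log-likelihood of the fluctuation Qbar_eps = expit(logit Qbar + eps * H).
eps = 0.0
for _ in range(100):
    Qeps = expit(logit(Qbar) + eps * H)
    score = np.mean(H * (Y - Qeps))               # d/d(eps) of mean log-likelihood
    eps -= score / (-np.mean(H ** 2 * Qeps * (1 - Qeps)))

Qstar = expit(logit(Qbar) + eps * H)              # targeted fit at observed A
Qstar1 = expit(logit(Qbar) + eps * H1)            # targeted fit at A = 1
psi = Qstar1.mean()                               # one-step TMLE of Psi(Q_0)

# The efficient influence curve equation is solved up to machine precision:
PnD = np.mean(H * (Y - Qstar) + Qstar1 - psi)
assert abs(PnD) < 1e-8
```

Because the fluctuation is fit by MLE in a single one-dimensional step, no iteration of the targeting step is needed, which is exactly the point of Lemma 13.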

Proof of Lemma 13

Firstly, by the fact that $Q_{n,\varepsilon}$ has score $D^*(Q_n,G_n)$ at $\varepsilon=0$, it follows that $f_n(0)=g_n(0)$. We also know that $g_n(\varepsilon_n)=0$, and we want to show that $f_n(\varepsilon_n)=o_P(n^{-1/2})$. Let $\varepsilon_0=0$. By the second order Taylor expansion assumption for $f_n,g_n$ at $\varepsilon=0$, we have

$$f_n(\varepsilon_n)=f_n(\varepsilon_n)-g_n(\varepsilon_n)=f_n(0)-g_n(0)+\varepsilon_n(f'_n-g'_n)(0)+O_P(\varepsilon_n^2)=\varepsilon_n\Bigl\{\frac{d}{d\varepsilon_0}P_nD^*(Q_{n,\varepsilon_0},G_n)-\frac{d^2}{d\varepsilon_0^2}P_nL(Q_{n,\varepsilon_0})\Bigr\}+O_P(\varepsilon_n^2).$$

By assumption, $\varepsilon_n^2=o_P(n^{-1/2})$, so that $O_P(\varepsilon_n^2)=o_P(n^{-1/2})$. Thus, it remains to show

$$P_n\frac{d}{d\varepsilon_0}D^*(Q_{n,\varepsilon_0},G_n)-P_n\frac{d^2}{d\varepsilon_0^2}L(Q_{n,\varepsilon_0})=O_P(n^{-1/4}).$$

By our Donsker class assumption, we have

$$(P_n-P_0)\Bigl\{\frac{d}{d\varepsilon_0}D^*(Q_{n,\varepsilon_0},G_n)-\frac{d^2}{d\varepsilon_0^2}L(Q_{n,\varepsilon_0})\Bigr\}\Big/n^{1/4}=O_P(n^{-1/2}).$$

Thus, it remains to show

$$\frac{d}{d\varepsilon_0}P_0D^*(Q_{n,\varepsilon_0},G_n)-P_0\frac{d^2}{d\varepsilon_0^2}L(Q_{n,\varepsilon_0})=O_P(n^{-1/4}).$$

By assumptions eq. (42), the left-hand side of the last expression equals

$$\frac{d}{d\varepsilon_0}P_0D^*(Q_{0,\varepsilon_0},G_0)-P_0\frac{d^2}{d\varepsilon_0^2}L(Q_{0,\varepsilon_0})+O_P(n^{-1/4}),$$

so that it remains to show that the first term equals zero. By $-P_0D^*(P)=\Psi(P)-\Psi(P_0)-R_2(P,P_0)$, it follows that

$$\frac{d}{d\varepsilon_0}P_0D^*(Q_{0,\varepsilon_0},G_0)=\frac{d}{d\varepsilon_0}\Psi(Q_{0,\varepsilon_0})+\frac{d}{d\varepsilon_0}R_2((Q_{0,\varepsilon_0},G_0),(Q_0,G_0)).$$

By assumption we have $\frac{d}{d\varepsilon_0}R_2((Q_{0,\varepsilon_0},G_0),(Q_0,G_0))=0$. By the definition of the pathwise derivative at $P_0$, the derivative of $\Psi(Q_{0,\varepsilon})=\Psi(P_{0,\varepsilon})$ at $\varepsilon=0$ equals $P_0D^*(P_0)\{D^*(P_0)\}^\top$. Thus, we have shown

$$\frac{d}{d\varepsilon_0}P_0D^*(Q_{0,\varepsilon_0},G_0)=P_0D^*(P_0)\{D^*(P_0)\}^\top.$$

Thus, it remains to show eq. (43), which holds by assumption. Suppose that $L(Q(P))=-\log p_{Q(P),\eta(P)}$ for some density parameterization $(Q,\eta)\to p_{Q,\eta}$. Then $L(Q_{0,\varepsilon})=-\log p_{Q_{0,\varepsilon},\eta_0}$. Since $\{p_{Q_{0,\varepsilon},\eta_0}:\varepsilon\}$ is a correctly specified parametric model, the second derivative of $P_0\log p_{Q_{0,\varepsilon},\eta_0}$ at $\varepsilon=0$ equals minus its information matrix (i.e., minus the covariance matrix of its score $\frac{d}{d\varepsilon}\log p_{Q_{0,\varepsilon},\eta_0}$ at $\varepsilon=0$). The latter covariance matrix equals $P_0D^*(P_0)\{D^*(P_0)\}^\top$, so that $P_0\frac{d^2}{d\varepsilon_0^2}L(Q_{0,\varepsilon_0})=P_0D^*(P_0)\{D^*(P_0)\}^\top$, which proves eq. (43). This completes the proof of $f_n(\varepsilon_n)=o_P(n^{-1/2})$. □

In the main article we did not propose a 1-dimensional local least favorable submodel as in Lemma 13, even though our results are straightforwardly generalized to that case. Instead, we proposed a $(k_1+1)$-dimensional least favorable submodel that uses a 1-dimensional $\varepsilon(j)$ for updating $Q_{jn}$ for each $j=1,\ldots,k_1+1$. We will now state the desired lemma for the one-step TMLE for such a submodel, by application of the above lemma across all $j$.

Lemma 14

Let $\Psi:\mathcal{M}\to\mathbb{R}$ be pathwise differentiable with canonical gradient $D^*(P)=D^*(Q,G)$ and let $\Psi(P)=\Psi(Q(P))$ for $Q(P)=(Q_1(P),\ldots,Q_{k_1+1}(P))$. For a given $Q$, we define $\Psi_{Q,j}:\mathcal{M}\to\mathbb{R}$ by $\Psi_{Q,j}(P)=\Psi(Q_{-j},Q_j(P))$, $j=1,\ldots,k_1+1$. Let $D^*_{Q,j}(P)=D^*_{Q,j}(Q_j(P),Q_{-j}(P),G(P))$ be the efficient influence curve of $\Psi_{Q,j}$ at $P$, and define $R_{2,Q,j}(P,P_0)=R_{2,Q,j}((Q(P),G(P)),(Q_0,G_0))$ by $\Psi_{Q,j}(P)-\Psi_{Q,j}(P_0)=(P-P_0)D^*_{Q,j}(P)+R_{2,Q,j}(P,P_0)$, $j=1,\ldots,k_1+1$. Here $Q_{-j}=(Q_l:l\ne j,\ l\in\{1,\ldots,k_1+1\})$. We have $D^*(P)=\sum_{j=1}^{k_1+1}D^*_{Q(P),j}(P)$.

Let $Q_n\in\mathcal{Q}_n$, $G_n\in\mathcal{G}_n$ be a given initial estimator. Let $\{Q_{jn,\varepsilon(j)}:\varepsilon(j)\}\subset\mathcal{Q}_{jn}$ be a submodel through $Q_{jn}$ at $\varepsilon(j)=0$ satisfying $\frac{d}{d\varepsilon(j)}L_{1,j}(Q_{jn,\varepsilon(j)})\big|_{\varepsilon(j)=0}=D^*_{Q_n,j}(Q_n,G_n)$, $j=1,\ldots,k_1+1$. Let $\{Q_{n,\varepsilon}:\varepsilon\}\subset\mathcal{Q}_n$ be defined by $Q_{n,\varepsilon}=(Q_{jn,\varepsilon(j)}:j=1,\ldots,k_1+1)$. Let $\varepsilon_n=\arg\min_\varepsilon P_nL_1(Q_{n,\varepsilon})$, where $P_nL_1(Q_{n,\varepsilon})=(P_nL_{1j}(Q_{jn,\varepsilon(j)}):j=1,\ldots,k_1+1)$. Let $Q^*_n=Q_{n,\varepsilon_n}$.

We wish to establish that $P_nD^*(Q_{n,\varepsilon_n},G_n)=o_P(n^{-1/2})$, where

$$P_nD^*(Q_{n,\varepsilon_n},G_n)=\sum_{j=1}^{k_1+1}P_nD^*_{Q_{n,\varepsilon_n},j}(Q_{jn,\varepsilon_n(j)},Q_{-jn,\varepsilon_n},G_n).$$

For each j = 1, …, k1 + 1, assume the following conditions:

  1. Suppose that, by application of the previous lemma to $\Psi_{Q_n,j}:\mathcal{M}\to\mathbb{R}$, submodel $\{Q_{jn,\varepsilon(j)}:\varepsilon(j)\}$, loss function $L_{1j}(Q_j)$, $\varepsilon_n(j)=\arg\min_{\varepsilon(j)}P_nL_{1j}(Q_{jn,\varepsilon(j)})$, and one-step TMLE $Q_{jn,\varepsilon_n(j)}$, we establish its conclusion $P_nD^*_{Q_n,j}(Q_{jn,\varepsilon_n(j)},Q_{-jn},G_n)=o_P(n^{-1/2})$. For completeness, Lemma 15 below explicitly states these $j$-specific conditions of the previous lemma, which are sufficient for this conclusion.

  2. Let $f_{nj}=D^*_{Q^*_n,j}(Q^*_{jn},Q^*_{-jn},G_n)-D^*_{Q_n,j}(Q_{jn},Q_{-jn},G_n)$, and assume $(P_n-P_0)f_{nj}=o_P(n^{-1/2})$. For this to hold, it suffices to assume that $P_0f_{nj}^2\to_p 0$ and $\limsup_{n\to\infty}\|f_{nj}\|_v<M$ a.e.

  3. Let $f_{nj,1} = D^*_{Q^*_n,j}(Q^*_n, G_n) - D^*_{Q_n,j}(Q^*_n, G_n)$, and assume $(P_n - P_0) f_{nj,1} = o_P(n^{-1/2})$. For this to hold it suffices to assume that $P_0 f_{nj,1}^2 \rightarrow_p 0$ and $\limsup_{n\rightarrow\infty} \|f_{nj,1}\|_v^* < M$ a.e.

  4. $R_{2,Q_n,j}\left(((Q^*_{jn}, Q^*_{-jn}), G_n), (Q_0, G_0)\right) - R_{2,Q_n,j}\left(((Q^*_{jn}, Q_{-jn}), G_n), (Q_0, G_0)\right) = o_P(n^{-1/2})$;

  5. $R_{2,Q^*_n,j}\left((Q^*_n, G_n), (Q_0, G_0)\right) - R_{2,Q_n,j}\left((Q^*_n, G_n), (Q_0, G_0)\right) = o_P(n^{-1/2})$;

  6. $\Psi_{Q^*_n,j}(Q^*_{jn}) - \Psi_{Q^*_n,j}(Q_{j0}) - \left\{\Psi_{Q_n,j}(Q^*_{jn}) - \Psi_{Q_n,j}(Q_{j0})\right\} = o_P(n^{-1/2})$.

Then $P_n D^*(Q_{n,\varepsilon_n}, G_n) = o_P(n^{-1/2})$.

Lemma 15

Let $f_{nj}(\varepsilon(j)) = P_n D^*_{Q_n,j}(Q_{jn,\varepsilon(j)}, Q_{-jn}, G_n)$ and $g_{nj}(\varepsilon(j)) = \frac{d}{d\varepsilon(j)} P_n L_{1j}(Q_{jn,\varepsilon(j)})$. Let $f'_{nj}(\varepsilon(j)) = \frac{d}{d\varepsilon(j)} f_{nj}(\varepsilon(j))$ and $g'_{nj}(\varepsilon(j)) = \frac{d}{d\varepsilon(j)} g_{nj}(\varepsilon(j))$. Let $\varepsilon_0(j) = 0$.

Assume the following conditions:

  1. $f_{nj}(\varepsilon_n(j)) = f_{nj}(0) + f'_{nj}(0)\,\varepsilon_n(j) + O_P(\varepsilon_n(j)^2)$ and $g_{nj}(\varepsilon_n(j)) = g_{nj}(0) + g'_{nj}(0)\,\varepsilon_n(j) + O_P(\varepsilon_n(j)^2)$;

  2. $\varepsilon_n(j)^2 = o_P(n^{-1/2})$;

  3. $\left\{\frac{d}{d\varepsilon_n(j)} D^*_{Q_n,j}(Q_{jn,\varepsilon_n(j)}, Q_{-jn}, G_n),\; \frac{d^2}{d\varepsilon_n(j)^2} L_{1j}(Q_{jn,\varepsilon_n(j)})\right\}$ falls in a $P_0$-Donsker class with probability tending to 1;

  4. $\frac{d}{d\varepsilon_0(j)} P_0\left\{D^*_{Q_n,j}(Q_{jn,\varepsilon_0(j)}, Q_{-jn}, G_n) - D^*_{Q_n,j}(Q_{j0,\varepsilon_0(j)}, Q_{-j0}, G_0)\right\} = O_P(n^{-1/4})$ and $\frac{d^2}{d\varepsilon_0(j)^2} P_0\left\{L_{1j}(Q_{jn,\varepsilon_0(j)}) - L_{1j}(Q_{j0,\varepsilon_0(j)})\right\} = O_P(n^{-1/4})$;
  5. $P_0 \frac{d^2}{d\varepsilon_0(j)^2} L_{1j}(Q_{j0,\varepsilon_0(j)}) = P_0 D^*_{Q_0,j}(P_0)\left\{D^*_{Q_0,j}(P_0)\right\}^\top$. (44)
    If $L_{1j}(Q_j(P)) = -\log p_{Q_j(P),\eta(P)}$ for some density parameterization $(Q_j, \eta) \rightarrow p_{Q_j,\eta}$, then eq. (44) holds;
  6. $\frac{d}{d\varepsilon_0(j)} R_{2,Q_0,j}\left((Q_{j0,\varepsilon_0(j)}, Q_{-j0}, G_0), (Q_0, G_0)\right) = 0$.

    Then $P_n D^*_{Q_n,j}(Q_{jn,\varepsilon_n(j)}, Q_{-jn}, G_n) = o_P(n^{-1/2})$.

Proof

This is an immediate application of Lemma 13. □
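For intuition, here is a heuristic sketch of that application (not a substitute for the proof of Lemma 13), under the sign convention $\frac{d}{d\varepsilon(j)} L_{1j}(Q_{jn,\varepsilon(j)})\big|_{\varepsilon(j)=0} = -D^*_{Q_n,j}(Q_n, G_n)$, i.e., $g_{nj}(0) = -f_{nj}(0)$ (sign conventions for the submodel score vary across the TMLE literature). Since $\varepsilon_n(j)$ minimizes the empirical risk, $g_{nj}(\varepsilon_n(j)) = 0$, so the expansions in condition 1 give

$$0 = g_{nj}(\varepsilon_n(j)) = -f_{nj}(0) + g'_{nj}(0)\,\varepsilon_n(j) + O_P(\varepsilon_n(j)^2) \quad\Longrightarrow\quad f_{nj}(0) = g'_{nj}(0)\,\varepsilon_n(j) + O_P(\varepsilon_n(j)^2),$$

and therefore

$$f_{nj}(\varepsilon_n(j)) = f_{nj}(0) + f'_{nj}(0)\,\varepsilon_n(j) + O_P(\varepsilon_n(j)^2) = \left\{g'_{nj}(0) + f'_{nj}(0)\right\}\varepsilon_n(j) + O_P(\varepsilon_n(j)^2).$$

Conditions 3–5 drive $g'_{nj}(0) + f'_{nj}(0) \rightarrow_p 0$: by eq. (44) the loss curvature $g'_{nj}(0)$ converges to $P_0 D^*_{Q_0,j}(P_0)\{D^*_{Q_0,j}(P_0)\}^\top$, while $f'_{nj}(0)$ converges to its negative, since differentiating the mean-zero identity $P_\varepsilon D^*_{Q_0,j}(P_\varepsilon) = 0$ at $\varepsilon = 0$ yields $P_0 \frac{d}{d\varepsilon} D^*_{Q_0,j}(P_\varepsilon)\big|_0 = -P_0 D^*_{Q_0,j}(P_0)\{D^*_{Q_0,j}(P_0)\}^\top$. Combined with condition 2 ($\varepsilon_n(j)^2 = o_P(n^{-1/2})$) and the $O_P(n^{-1/4})$ rates in condition 4, this makes $f_{nj}(\varepsilon_n(j)) = o_P(n^{-1/2})$.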

Proof of Lemma 14

Consider a 1-dimensional submodel {Pε : ε}⊂ ℳ with score S. We have

$$\frac{d}{d\varepsilon}\Psi(P_\varepsilon) = \frac{d}{d\varepsilon}\Psi(Q_\varepsilon) = \frac{d}{d\varepsilon}\Psi(Q_{1\varepsilon}, \ldots, Q_{k_1+1,\varepsilon}) = \sum_{j=1}^{k_1+1} \frac{d}{d\varepsilon}\Psi(Q_{-j}, Q_{j\varepsilon}).$$

By pathwise differentiability of $\Psi$ at $P$, the left-hand side equals $P D^*(P) S$, while, by pathwise differentiability of $\Psi_{Q,j}$ at $P$, each $j$-specific term on the right-hand side equals $P D^*_{Q,j}(P) S$. This proves that

$$P D^*(P) S = \sum_{j=1}^{k_1+1} P D^*_{Q,j}(P) S = P\left\{\sum_{j=1}^{k_1+1} D^*_{Q,j}(P)\right\} S.$$

Since this holds for each $S \in T(P)$ and $D^*_{Q,j}(P) \in T(P)$ for all $j$, this implies $D^*(P) = \sum_{j=1}^{k_1+1} D^*_{Q,j}(P)$, which proves the first statement of the lemma. It also shows that $P_n D^*(Q^*_n, G_n) = \sum_{j=1}^{k_1+1} P_n D^*_{Q^*_n,j}(Q^*_n, G_n)$, so it suffices to prove that $P_n D^*_{Q^*_n,j}(Q^*_n, G_n) = o_P(n^{-1/2})$ for each $j$. In the lemma we assumed that we have already established $P_n D^*_{Q_n,j}(Q^*_{jn}, Q_{-jn}, G_n) = o_P(n^{-1/2})$, by application of Lemma 15.

First, we prove that $P_n\left\{D^*_{Q_n,j}(Q^*_{jn}, Q^*_{-jn}, G_n) - D^*_{Q_n,j}(Q^*_{jn}, Q_{-jn}, G_n)\right\} = o_P(n^{-1/2})$, which then shows that $P_n D^*_{Q_n,j}(Q^*_n, G_n) = o_P(n^{-1/2})$. This term equals $P_n f_{nj}$ with $f_{nj}$ as defined in assumption 2. We can write $P_n f_{nj} = (P_n - P_0) f_{nj} + P_0 f_{nj}$. By assumption 2, $(P_n - P_0) f_{nj} = o_P(n^{-1/2})$. So it remains to consider $P_0 f_{nj}$. Since $P D^*_{Q,j}(P) = 0$ when the efficient influence curve is evaluated at the same $P$, the expansion defining $R_{2,Q,j}$ yields $P_0 D^*_{Q,j}(P) = -\left\{\Psi_{Q,j}(P) - \Psi_{Q,j}(P_0)\right\} + R_{2,Q,j}(P, P_0)$, so that

$$\begin{aligned} P_0\left\{D^*_{Q_n,j}(Q^*_{jn}, Q^*_{-jn}, G_n) - D^*_{Q_n,j}(Q^*_{jn}, Q_{-jn}, G_n)\right\} &= -\Psi_{Q_n,j}(Q^*_{jn}) + \Psi_{Q_n,j}(Q_{j0}) + R_{2,Q_n,j}\left(((Q^*_{jn}, Q^*_{-jn}), G_n), (Q_0, G_0)\right) \\ &\quad + \Psi_{Q_n,j}(Q^*_{jn}) - \Psi_{Q_n,j}(Q_{j0}) - R_{2,Q_n,j}\left(((Q^*_{jn}, Q_{-jn}), G_n), (Q_0, G_0)\right) \\ &= R_{2,Q_n,j}\left(((Q^*_{jn}, Q^*_{-jn}), G_n), (Q_0, G_0)\right) - R_{2,Q_n,j}\left(((Q^*_{jn}, Q_{-jn}), G_n), (Q_0, G_0)\right). \end{aligned}$$

By assumption 4, the latter is $o_P(n^{-1/2})$. This proves that $P_n D^*_{Q_n,j}(Q^*_n, G_n) = o_P(n^{-1/2})$.

We now prove that $P_n\left\{D^*_{Q^*_n,j}(Q^*_n, G_n) - D^*_{Q_n,j}(Q^*_n, G_n)\right\} = o_P(n^{-1/2})$, so that we can conclude $P_n D^*_{Q^*_n,j}(Q^*_n, G_n) = o_P(n^{-1/2})$. This term equals $P_n f_{nj,1}$ with $f_{nj,1}$ as defined in assumption 3. We have $P_n f_{nj,1} = (P_n - P_0) f_{nj,1} + P_0 f_{nj,1}$. By assumption 3, $(P_n - P_0) f_{nj,1} = o_P(n^{-1/2})$. It remains to consider

$$\begin{aligned} P_0\left\{D^*_{Q^*_n,j}(Q^*_n, G_n) - D^*_{Q_n,j}(Q^*_n, G_n)\right\} &= -\Psi_{Q^*_n,j}(Q^*_{jn}) + \Psi_{Q^*_n,j}(Q_{j0}) + R_{2,Q^*_n,j}\left((Q^*_n, G_n), (Q_0, G_0)\right) \\ &\quad + \Psi_{Q_n,j}(Q^*_{jn}) - \Psi_{Q_n,j}(Q_{j0}) - R_{2,Q_n,j}\left((Q^*_n, G_n), (Q_0, G_0)\right). \end{aligned}$$

By assumption 5, we have $R_{2,Q^*_n,j}\left((Q^*_n, G_n), (Q_0, G_0)\right) - R_{2,Q_n,j}\left((Q^*_n, G_n), (Q_0, G_0)\right) = o_P(n^{-1/2})$. By assumption 6, the second-order $\Psi$-difference is $o_P(n^{-1/2})$ as well. □

References

  • 1. Bickel PJ, Klaassen CAJ, Ritov Y, Wellner JA. Efficient and adaptive estimation for semiparametric models. Berlin/Heidelberg/New York: Springer; 1997.
  • 2. Robins JM, Rotnitzky A. Recovery of information and adjustment for dependent censoring using surrogate markers. In: AIDS epidemiology. Basel: Birkhäuser; 1992. pp. 297–331.
  • 3. van der Laan MJ, Robins JM. Unified methods for censored longitudinal data and causality. New York: Springer; 2003.
  • 4. van der Laan MJ. Estimation based on case-control designs with known prevalence probability. Int J Biostat. 2008;4(1), Article 17. doi: 10.2202/1557-4679.1114.
  • 5. van der Laan MJ, Rose S. Targeted learning: causal inference for observational and experimental data. Berlin/Heidelberg/New York: Springer; 2011.
  • 6. van der Laan MJ, Rubin DB. Targeted maximum likelihood learning. Int J Biostat. 2006;2(1), Article 11.
  • 7. Gruber S, van der Laan MJ. An application of collaborative targeted maximum likelihood estimation in causal inference and genomics. Int J Biostat. 2010;6(1). doi: 10.2202/1557-4679.1182.
  • 8. Porter KE, Gruber S, van der Laan MJ, Sekhon JS. The relative performance of targeted maximum likelihood estimators. Int J Biostat. 2011;7(1), Article 31. doi: 10.2202/1557-4679.1308. Also available as U.C. Berkeley Division of Biostatistics Working Paper 279: http://www.bepress.com/ucbbiostat/paper279.
  • 9. Sekhon JS, Gruber S, Porter KE, van der Laan MJ. Propensity-score-based estimators and C-TMLE. In: van der Laan MJ, Rose S, editors. Targeted learning: causal inference for observational and experimental data. New York/Dordrecht/Heidelberg/London: Springer; 2012.
  • 10. Polley EC, Rose S, van der Laan MJ. Super learner. In: van der Laan MJ, Rose S, editors. Targeted learning: causal inference for observational and experimental data. New York/Dordrecht/Heidelberg/London: Springer; 2011.
  • 11. van der Laan MJ, Gruber S. One-step targeted minimum loss-based estimation based on universal least favorable one-dimensional submodels. Int J Biostat. 2016;12(1):351–378. doi: 10.1515/ijb-2015-0054.
  • 12. van der Laan MJ, Polley EC, Hubbard AE. Super learner. Stat Appl Genet Mol Biol. 2007;6(1), Article 25. doi: 10.2202/1544-6115.1309.
  • 13. van der Vaart AW, Dudoit S, van der Laan MJ. Oracle inequalities for multi-fold cross-validation. Stat Decis. 2006;24(3):351–371.
  • 14. Polley EC, Rose S, van der Laan MJ. Super learning. In: van der Laan MJ, Rose S, editors. Targeted learning: causal inference for observational and experimental data. New York/Dordrecht/Heidelberg/London: Springer; 2012.
  • 15. Zheng W, van der Laan MJ. Cross-validated targeted minimum loss based estimation. In: van der Laan MJ, Rose S, editors. Targeted learning: causal inference for observational and experimental studies. New York: Springer; 2011.
  • 16. van der Laan MJ. A generally efficient targeted minimum loss-based estimator. Technical Report 300, UC Berkeley; 2015. http://biostats.bepress.com/ucbbiostat/paper343.
  • 17. Neuhaus G. On weak convergence of stochastic processes with multidimensional time parameter. Ann Math Stat. 1971;42:1285–1295.
  • 18. van der Vaart AW, Wellner JA. A local maximal inequality under uniform entropy. Electron J Stat. 2011;5:192–203. doi: 10.1214/11-EJS605.
  • 19. van der Vaart AW, Wellner JA. Weak convergence and empirical processes. Berlin/Heidelberg/New York: Springer; 1996.
  • 20. Gill RD, van der Laan MJ, Wellner JA. Inefficient estimators of the bivariate survival function for three models. Annales de l'Institut Henri Poincaré. 1995;31:545–597.
  • 21. van der Laan MJ, Dudoit S, van der Vaart AW. The cross-validated adaptive epsilon-net estimator. Stat Decis. 2006;24(3):373–395.
  • 22. Benkeser D, van der Laan MJ. The highly adaptive lasso estimator. In: Proceedings of the IEEE Conference on Data Science and Advanced Analytics; 2016. doi: 10.1109/DSAA.2016.93.
  • 23. Chambaz A, van der Laan MJ. Targeting the optimal design in randomized clinical trials with binary outcomes and no covariate, theoretical study. Int J Biostat. 2011a;7(1):1–32. (Working Paper 258, www.bepress.com/ucbbiostat.)
  • 24. Chambaz A, van der Laan MJ. Targeting the optimal design in randomized clinical trials with binary outcomes and no covariate, simulation study. Int J Biostat. 2011b;7(1):33. (Working Paper 258, www.bepress.com/ucbbiostat.)
  • 25. van der Laan MJ. Causal inference for networks. Technical Report 300, UC Berkeley; 2012. http://biostats.bepress.com/ucbbiostat/paper300. To appear in Journal of Causal Inference.
  • 26. van der Laan MJ, Balzer LB, Petersen ML. Adaptive matching in randomized trials and observational studies. J Stat Res. 2013;46(2):113–156.
  • 27. Gruber S, van der Laan MJ. tmle: Targeted maximum likelihood estimation. R package version 1.2.0-1; 2012. http://cran.r-project.org/web/packages/tmle/tmle.pdf.
  • 28. Petersen M, Schwab J, Gruber S, Blaser N, Schomaker M, van der Laan MJ. Targeted maximum likelihood estimation of dynamic and static marginal structural working models. J Causal Inference. 2013;2:147–185. doi: 10.1515/jci-2013-0007.
  • 29. Bang H, Robins JM. Doubly robust estimation in missing data and causal inference models. Biometrics. 2005;61:962–972. doi: 10.1111/j.1541-0420.2005.00377.x.
  • 30. Díaz I, van der Laan MJ. Sensitivity analysis for causal inference under unmeasured confounding and measurement error problems. Int J Biostat. In press. doi: 10.1515/ijb-2013-0004.
  • 31. van der Laan MJ, Petersen ML. Targeted learning. In: Zhang C, Ma Y, editors. Ensemble machine learning: methods and applications. New York: Springer; 2012.
  • 32. van der Laan MJ, Petersen ML. Causal effect models for realistic individualized treatment and intention to treat rules. Int J Biostat. 2007;3(1), Article 3. doi: 10.2202/1557-4679.1022.
  • 33. van der Laan MJ, Dudoit S. Unified cross-validation methodology for selection among estimators and a general cross-validated adaptive epsilon-net estimator: finite sample oracle inequalities and examples. Technical Report 130, Division of Biostatistics, University of California, Berkeley; 2003.
  • 34. van der Laan MJ, Dudoit S, Keles S. Asymptotic optimality of likelihood-based cross-validation. Stat Appl Genet Mol Biol. 2004;3(1), Article 4. doi: 10.2202/1544-6115.1036.
