Published in final edited form as: Bernoulli (Andover). 2021 Aug 24;27(4):2300–2336. doi: 10.3150/20-BEJ1309

Universal sieve-based strategies for efficient estimation using machine learning tools

HONGXIANG QIU, ALEX LUEDTKE, MARCO CARONE

Abstract

Suppose that we wish to estimate a finite-dimensional summary of one or more function-valued features of an underlying data-generating mechanism under a nonparametric model. One approach to estimation is by plugging in flexible estimates of these features. Unfortunately, in general, such estimators may not be asymptotically efficient, which often makes these estimators difficult to use as a basis for inference. Though there are several existing methods to construct asymptotically efficient plug-in estimators, each such method either can only be derived using knowledge of efficiency theory or is only valid under stringent smoothness assumptions. Among existing methods, sieve estimators stand out as particularly convenient because efficiency theory is not required in their construction, their tuning parameters can be selected data adaptively, and they are universal in the sense that the same fits lead to efficient plug-in estimators for a rich class of estimands. Inspired by these desirable properties, we propose two novel universal approaches for estimating function-valued features that can be analyzed using sieve estimation theory. Compared to traditional sieve estimators, these approaches are valid under more general conditions on the smoothness of the function-valued features by utilizing flexible estimates that can be obtained, for example, using machine learning.

Keywords: nonparametric inference, asymptotic efficiency, sieve estimation

1. Introduction

1.1. Motivation

A common statistical problem consists of using available data in order to learn about a summary of the underlying data-generating mechanism. In many cases, this summary involves function-valued features of the distribution that cannot be estimated at a parametric rate under a nonparametric model — for example, a regression function or the density function of the distribution. Examples of useful summaries involving such features include average treatment effects [23], average derivatives [11], moments of the conditional mean function [24], variable importance measures [33] and treatment effect heterogeneity measures [12]. For ease of implementation and interpretation, in traditional approaches to estimation, these features have typically been restricted to have simple forms encoded by parametric or restrictive semiparametric models. However, when these models are misspecified, both the interpretation and validity of subsequent inferences can be compromised. To circumvent this difficulty, investigators have increasingly relied on machine learning (ML) methods to flexibly estimate these function-valued features.

Once estimates of the function-valued features are obtained, it is natural to consider plug-in estimators of the summary of interest. However, in general, such estimators are not root-n-consistent and asymptotically normal, and hence not asymptotically efficient (referred to as efficient henceforth). Lacking this property is problematic since it often forms the basis for constructing valid confidence intervals and hypothesis tests [3, 20]. When the function-valued features are estimated by ML methods, in order for the plug-in estimator to be consistent and asymptotically normal (CAN), the ML methods must not only estimate the involved function-valued features well, but must also satisfy a small-bias property with respect to the summary of interest [20, 31]. Unfortunately, because ML methods generally seek to optimize out-of-sample performance, they seldom satisfy the latter property.

1.2. Existing methodological frameworks

The targeted minimum loss-based estimation (TMLE) framework provides a means of constructing efficient plug-in estimators [27, 31]. Given an (almost arbitrary) initial ML fit that provides a good estimate of the function-valued features involved, TMLE produces an adjusted fit such that the resulting plug-in estimator has reduced bias and is efficient. This adjustment process is referred to as targeting since a generic estimate of the function-valued features is modified to better suit the goal of estimating the summary of interest. Though TMLE provides a general template for constructing efficient estimators, its implementation requires specialized expertise, namely knowledge of the analytic expression for an influence function of the summary of interest. Influence functions arise in semiparametric efficiency theory and are key to establishing efficiency, but can be difficult to derive. Furthermore, even when an influence function is known analytically, additional expertise is needed to construct a TMLE for a given problem.

Alternative approaches for constructing efficient plug-in estimators have been proposed in the literature, including the use of undersmoothing [18], twicing kernels [20], and sieves [4, 19, 24]. These methods require neither knowledge of an influence function nor any targeting of the function-valued feature estimates. Hence, the same fits can be used to simultaneously estimate different summaries of the data-generating distribution, even if these summaries were not pre-specified when obtaining the fit. These approaches also circumvent the difficulties in obtaining an influence function. However, these methods all rely on smoothness conditions on derivatives of the functional features that may be overly stringent. In addition, undersmoothing provides limited guidance on the choice of the tuning parameter, and the twicing kernel method requires the use of a higher-order kernel, which may lead to poor performance in small to moderate samples [14].

In contrast, under some conditions, sieve estimation can produce a flexible fit with the optimal out-of-sample performance while also yielding an efficient — and therefore root-n-consistent and asymptotically normal — plug-in estimator [24]. In this paper, we focus on extensions of this approach. In sieve estimation, we first assume that the unknown function falls in a rich function space, and construct a sequence of approximating subspaces indexed by sample size that increase in complexity as sample size grows. We require that, in the limit, the functions in the subspaces can approximate any function in the rich function space arbitrarily well. These approximating subspaces are referred to as sieves. By using an ordinary fitting procedure that optimizes the estimation of the function-valued feature within the sieve, the bias of the plug-in estimator can decrease sufficiently fast as the sieve grows in order for that estimator to be efficient. Thus sieve estimation requires no explicit targeting for the summary of interest.

The series estimator is one of the best known and most widely used sieve techniques. These sieves are taken as the span of the first finitely many terms in a basis that is chosen by the user to approximate the true function well. Common choices of the basis include polynomials, splines, trigonometric series and wavelets, among others. However, series estimators usually require strong smoothness assumptions on derivatives of the unknown function in order for the flexible fit to converge at a sufficient rate to ensure the resulting plug-in estimator is efficient. As the dimension of the problem increases, the smoothness requirement may become prohibitive. Moreover, even if the smoothness assumption is satisfied, a prohibitively large sample size may be needed for some series estimators to produce a good fit. For example, if the unknown function is smooth but is a constant over a region, estimation based on a polynomial series can perform poorly in small to moderate samples.

Series estimators may also require the user to choose the number of terms in the series in such a way that results in a sufficient convergence rate. The rates at which the number of terms should grow with sample size have been thoroughly studied (e.g. [4, 19, 24]). However, these results only provide minimal guidance for applications because there is no indication on how to select the actual number of terms for a given sample size. In practice, the number of terms in the series is often chosen by cross-validation (CV). Upper bounds on the convergence rate of the series estimator as a function of sample size and the number of terms have been derived, and it has been shown that the optimal number of terms that minimizes the bound can also lead to an efficient plug-in estimator [24]. However, CV tends to select the number of terms that optimizes the actual convergence rate [29], which may differ from the number of terms minimizing the derived bound on the convergence rate. Even though the use of CV-tuned sieve estimators has achieved good numerical performance, to the best of our knowledge, there is no theoretical guarantee that they lead to an efficient plug-in estimator.

Two variants of traditional series estimators were proposed in [3]. These methods can use two bases to approximate the unknown function-valued features and the corresponding gradient separately, whereas in traditional series estimators, only one basis is used for both approximations. Consequently, these variants may be applied to more general cases than traditional series estimators. However, like traditional series estimators, they also suffer from the inflexibility of the pre-specified bases.

1.3. Contributions and organization of this article

In this paper we present two approaches that can partially overcome these shortcomings.

  1. Estimating the unknown function with Highly Adaptive Lasso (HAL) [1, 26].

    If we are willing to assume the unknown functions have a finite variation norm, then they may be estimated via HAL. If the tuning parameter is chosen carefully, then we may obtain an efficient plug-in estimator. This method can help overcome the stringent smoothness assumptions on derivatives that are required by existing series estimators, as we discussed earlier.

  2. Using data-adaptive series based on an initial ML fit.

    As long as the initial ML algorithm converges to the unknown function at a sufficient rate, we show that, for certain types of summaries, it is possible to obtain an efficient plug-in estimator with a particular data-adaptive series. The smoothness assumption on the unknown function can be greatly relaxed due to the introduction of the ML algorithm into the procedure. Moreover, for summaries that are highly smooth, we show that the number of terms in the series can be selected by CV.

Although the first approach is not an example of sieve estimation, both approaches are motivated by the sieve literature and can be shown to lead to asymptotically efficient plug-in estimators using the sieve estimation theory derived in [24]. The flexible fits of the functional features from both approaches can be plugged in for a rich class of estimands.

We remark that, although we do not have to restrict ourselves to the plug-in approach in order to construct an asymptotically efficient estimator, other estimators do not overcome the shortcomings described in Section 1.2 and can have other undesirable properties. For example, the popular one-step correction approach (also called debiasing in the recent literature on high-dimensional statistics) [22] constructs efficient estimators by adding a bias reduction term to the plug-in estimator. Thus, it is not a plug-in estimator itself, and as a consequence, one-step estimators may not respect known constraints on the estimand — for example, bounds on a scalar-valued estimand (e.g., the estimand is a probability and must lie in [0, 1]) or shape constraints on a vector-valued estimand (e.g., monotonicity constraints). This drawback is also typical for other non-plug-in estimators, such as those derived via estimating equations [30] and double machine learning [5, 6]. Additionally, as with the other procedures described above, the one-step correction approach requires the analytic expression of an influence function.

Our paper is organized as follows. We introduce the problem setup and notation in Section 2. We consider plug-in estimators based on HAL in Section 3, data-adaptive series in Section 4, and its generalized version that is applicable to more general summaries in Section 5. Section 6 concludes with a discussion. Technical proofs of lemmas and theorems (Appendix D), simulation details (Appendix E) and other additional details are provided in the Appendix.

2. Problem setup and traditional sieve estimation review

Suppose we have independent and identically distributed observations V1, …, Vn drawn from P0. Let Θ be a class of functions, and denote by θ0 ∈ Θ a (possibly vector-valued) functional feature of P0 — for example, θ0 may be a regression function. Throughout this paper we assume that the generic data unit is V = (X, Z) ~ P0, where X is a (possibly vector-valued) random variable corresponding to the argument of θ0, and Z may also be a vector-valued random variable. In some cases V = X and Z is trivial. We use $\mathcal{X}$ to denote the support of X. The estimand of interest is a finite-dimensional summary Ψ(θ0) of θ0. We consider a plug-in estimator $\Psi(\hat{\theta}_n)$, where $\hat{\theta}_n$ is an estimator of θ0, and aim for this plug-in estimator to be asymptotically linear, in the sense that $\Psi(\hat{\theta}_n) = \Psi(\theta_0) + n^{-1}\sum_{i=1}^{n} \mathrm{IF}(V_i) + o_p(n^{-1/2})$ with IF an influence function satisfying $E_{P_0}[\mathrm{IF}(V)] = 0$ and $E_{P_0}[\mathrm{IF}(V)^2] < \infty$. This estimator is efficient under a nonparametric model if the estimator is also regular. By the central limit theorem and Slutsky’s theorem, it follows that $\Psi(\hat{\theta}_n)$ is a CAN estimator of Ψ(θ0), and therefore, $\sqrt{n}\,[\Psi(\hat{\theta}_n) - \Psi(\theta_0)] \rightarrow_d N(0, E_{P_0}[\mathrm{IF}(V)^2])$. This provides a basis for constructing valid confidence intervals for Ψ(θ0).
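
As a concrete illustration of how such an asymptotically linear expansion is used for inference, the following minimal Python sketch forms a Wald confidence interval from a plug-in estimate and estimated influence-function values; the inputs and the use of the empirical second moment in place of $E_{P_0}[\mathrm{IF}(V)^2]$ are our illustrative assumptions, not a prescription from the paper.

```python
import numpy as np
from scipy.stats import norm

def wald_ci(psi_hat, if_values, level=0.95):
    """Wald confidence interval for an asymptotically linear estimator.

    psi_hat   : plug-in estimate Psi(theta_hat_n)
    if_values : estimated influence function evaluated at V_1, ..., V_n
    """
    if_values = np.asarray(if_values, dtype=float)
    n = len(if_values)
    se = np.sqrt(np.mean(if_values ** 2) / n)   # sqrt(Pn IF^2 / n)
    z = norm.ppf(0.5 + level / 2)
    return psi_hat - z * se, psi_hat + z * se
```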

We now list some examples of such problems.

Example 1. Moments of the conditional mean function [24]: Let $\theta_0 : x \mapsto E_{P_0}[Z \mid X = x]$ be the conditional mean function. The κ-th moment of θ0(X), X ~ P0, namely $\Psi_\kappa(\theta_0) = E_{P_0}[\theta_0^\kappa(X)]$, can be a summary of interest. The values of Ψ1(θ0) and Ψ2(θ0) are useful for defining the proportion of $\mathrm{Var}_{P_0}(Z)$ that is explained by X, which may be written as $\mathrm{Var}_{P_0}(\theta_0(X))/\mathrm{Var}_{P_0}(Z)$. This proportion is a measure of variable importance [33]. Generally, we may consider $\Psi(\theta_0) = E_{P_0}[f(\theta_0(X))]$ for a fixed function f.

Example 2. Average derivative [11]: Let X follow a continuous distribution on $\mathbb{R}^d$ and $\theta_0 : x \mapsto E_{P_0}[Z \mid X = x]$ be the conditional mean function. Let $\nabla\theta_0$ denote the vector of partial derivatives of θ0. Then $\Psi(\theta_0) = E_{P_0}[\nabla\theta_0(X)]$ summarizes the overall (adjusted) effect of each component of X on Z. Under certain conditions, we can rewrite $\Psi(\theta_0) = -E_{P_0}[\theta_0(X)\,\nabla p_0(X)/p_0(X)]$, where p0 is the Lebesgue density of X and $\nabla p_0$ is the vector of partial derivatives of p0. This expression clearly shows the important role of the Lebesgue density of X in this summary.

Example 3. Mean counterfactual outcome [23]: Suppose that Z = (A, Y), where A is a binary treatment indicator and Y is the outcome of interest. Let $\theta_0 : x \mapsto E_{P_0}[Y \mid A = 1, X = x]$ be the outcome regression function under treatment value 1. Under causal assumptions, the mean counterfactual outcome corresponding to the intervention that assigns treatment 1 to the entire population can be nonparametrically identified by the G-computation formula $\Psi(\theta_0) = E_{P_0}[\theta_0(X)]$.

Example 4. Treatment effect heterogeneity measures [12]: Similarly to Example 3, suppose that A is a binary treatment indicator and Z is the outcome of interest. Let $\theta_0 = (\mu_{00}, \mu_{01})^\top$, where $\mu_{0a} : x \mapsto E_{P_0}[Z \mid A = a, X = x]$ is the outcome regression function for treatment arm a ∈ {0, 1}. Then, $\Psi(\theta_0) = \mathrm{Var}_{P_0}(\mu_{01}(X) - \mu_{00}(X))$ is an overall summary of treatment effect heterogeneity.

To obtain an asymptotically linear plug-in estimator, $\hat{\theta}_n$ must converge to θ0 at a sufficiently fast rate and approximately solve an estimating equation to achieve the small-bias property with respect to the summary of interest [20, 26, 31]. For simplicity, we assume the estimand to be scalar-valued — when the estimand is vector-valued, we can treat each entry as a separate estimand, and the plug-in estimators of all entries are jointly asymptotically linear if each estimator is asymptotically linear. Therefore, this leads to no loss in generality if the same fits are used for all entries in the summary of interest.

Sieve estimation allows us to obtain an estimator $\Psi(\hat{\theta}_n)$ with the small-bias property with respect to Ψ(θ0) while maintaining the optimal convergence rate of $\hat{\theta}_n$ [4, 24]. The construction of sieve estimators is based on a sequence of approximating spaces Θn to Θ. These approximating spaces are referred to as sieves. Usually Θn is much simpler than Θ to avoid over-fitting but complex enough to avoid under-fitting. For example, Θn can be the space of all polynomials with degree K or splines with K knots, with K = K(n) → ∞ as n → ∞. In this paper, with a loss function l such that $\theta_0 \in \operatorname{argmin}_{\theta \in \Theta} E_{P_0}[l(\theta)(V)]$, we consider estimating θ0 by minimizing an empirical risk based on l, i.e., $\hat{\theta}_n \in \operatorname{argmin}_{\theta \in \Theta_n} n^{-1}\sum_{i=1}^n l(\theta)(V_i)$. Under some conditions, the growth rate of Θn can be carefully chosen so that $\Psi(\hat{\theta}_n)$ is an asymptotically linear estimator of Ψ(θ0) while $\hat{\theta}_n$ converges to θ0 at the optimal rate.
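
To fix ideas, the following minimal sketch carries out this empirical risk minimization for a traditional univariate polynomial sieve under the squared-error loss; the choice of loss, the univariate setting, and the function name are our illustrative assumptions.

```python
import numpy as np

def polynomial_sieve_fit(x, z, K):
    """Empirical risk minimizer over Theta_n = {polynomials of degree <= K}
    under the squared-error loss, i.e., least squares on polynomial features."""
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    design = np.vander(x, N=K + 1, increasing=True)   # columns 1, x, ..., x^K
    coef, *_ = np.linalg.lstsq(design, z, rcond=None)
    return lambda xnew: np.vander(np.asarray(xnew, dtype=float),
                                  N=K + 1, increasing=True) @ coef
```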

Throughout this paper, for a probability distribution P and an integrable function f with respect to P, we define $Pf := \int f(v)\,dP(v) = E_P[f(V)]$. We use Pn to denote the empirical distribution. We take 〈·, ·〉 to be the L2(P0)-inner product, i.e., $\langle\theta_1, \theta_2\rangle := P_0(\theta_1\theta_2)$, where L2(P0) is the set of real-valued P0-square-integrable functions defined on the support of P0. When the functions are vector-valued, we take $\langle\theta_1, \theta_2\rangle := P_0(\theta_1^\top\theta_2)$. We use ‖·‖ to denote the induced norm of 〈·, ·〉. We assume that Θ ⊆ L2(P0). We remark that we have committed to a specific choice of inner product and norm to fix ideas; other inner products can also be adopted, and our results will remain valid upon adaptation of our upcoming conditions. We discuss this explicitly via a case study in Appendix A.

For the methods we propose in this article, we assume that Θ is convex. Throughout this paper, we will further require a set of conditions similar to those in [24]. For any θ ∈ Θ, let $l_0'[\theta - \theta_0](v) := \lim_{\delta \to 0}[l(\theta_0 + \delta(\theta - \theta_0))(v) - l(\theta_0)(v)]/\delta$ be the Gâteaux derivative of l at θ0 in the direction θ − θ0 and $r[\theta - \theta_0](v) := l(\theta)(v) - l(\theta_0)(v) - l_0'[\theta - \theta_0](v)$ be the corresponding remainder.

Condition A1 (Linearity and boundedness of Gâteaux derivative operator of loss function). For all θ ∈ Θ, $l_0'[\theta - \theta_0]$ exists, and $P_0 l_0'[\theta - \theta_0]$ is linear and bounded in θ − θ0.

Condition A2 (Local quadratic behavior of loss function). There exists a constant $\alpha_{0,l} \in (0, \infty)$ such that, for all θ ∈ Θ such that P0{l(θ) − l(θ0)} or ‖θ − θ0‖ is sufficiently small, it holds that $P_0\{l(\theta) - l(\theta_0)\} = \alpha_{0,l}\|\theta - \theta_0\|^2/2 + o(\|\theta - \theta_0\|^2)$.

Remark 1. We now present an equivalent form of A2 that may be easier to verify in practice. For all θ ∈ Θ\{θ0}, define $h_\theta := (\theta - \theta_0)/\|\theta - \theta_0\|$ and $a_\theta := \frac{d^2}{d\delta^2} P_0 l(\theta_0 + \delta h_\theta)\big|_{\delta = 0}$. Requiring Condition A2 is equivalent to requiring that $a_{\theta_1} = a_{\theta_2}$ for all θ1, θ2 ∈ Θ\{θ0} and that

$$\sup_{\theta \in \Theta \setminus \{\theta_0\}}\left| P_0 l(\theta_0 + \delta h_\theta) - P_0 l(\theta_0) - \frac{a_\theta}{2}\delta^2 \right| = o(\delta^2).$$

Moreover, if A2 holds, then, for any θ ∈ Θ\{θ0}, it is true that $\alpha_{0,l} = a_\theta$.

A large class of loss functions satisfy Conditions A1 and A2. For example, in the regression setting where Z is the outcome, the squared-error loss $l(\theta) : v \mapsto [z - \theta(x)]^2$ and the logistic loss $l(\theta) : v \mapsto -z\theta(x) + \log\{1 + \exp(\theta(x))\}$ both satisfy these conditions; a negative working log-likelihood usually also satisfies these conditions. In Examples 1–4, the unknown functions are all conditional mean functions, which can be estimated with the above loss functions. Thus, Conditions A1 and A2 hold. Examples 3 and 4 require a slight modification discussed in more detail in Appendix A. We also note that Condition A2 is sufficient for Condition B in [24].
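
The two losses above can be coded directly as maps from θ to functions of v = (x, z); this is a simple sketch under the assumption that z is real-valued (binary for the logistic loss), with helper names of our own choosing.

```python
import numpy as np

def squared_error_loss(theta):
    """l(theta): v = (x, z) -> [z - theta(x)]^2."""
    return lambda x, z: (z - theta(x)) ** 2

def logistic_loss(theta):
    """l(theta): v = (x, z) -> -z*theta(x) + log(1 + exp(theta(x))), z in {0, 1}."""
    return lambda x, z: -z * theta(x) + np.logaddexp(0.0, theta(x))
```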

Condition A3 (Differentiability of summary of interest). $\dot\Psi_{\theta_0}[\theta - \theta_0] := \lim_{\delta \to 0}[\Psi(\theta_0 + \delta(\theta - \theta_0)) - \Psi(\theta_0)]/\delta$ exists for all θ ∈ Θ and, as a function of θ − θ0, is a bounded linear operator.

If Condition A3 holds, then, by the Riesz representation theorem, $\dot\Psi_{\theta_0}[\theta - \theta_0] = \langle \theta - \theta_0, \dot\Psi \rangle$ for a gradient function $\dot\Psi = \dot\Psi_{\theta_0}$ in the completion of the space spanned by Θ − θ0 := {x ↦ θ(x) − θ0(x) : θ ∈ Θ}.

Condition A4 (Locally quadratic remainder). There exists a constant C > 0 so that, for all θ with sufficiently small ‖θ − θ0‖, it holds that

$$|\Psi(\theta) - \Psi(\theta_0) - \dot\Psi_{\theta_0}[\theta - \theta_0]| \le C\|\theta - \theta_0\|^2.$$

The above condition states that the remainder of the linear approximation to Ψ is locally bounded by a quadratic function.

Conditions A3 and A4 hold for Examples 1–4. For the generalized moment of the conditional mean function in Example 1, it holds that $\dot\Psi = f' \circ \theta_0$. For the average derivative of the conditional mean function in Example 2, it holds that $\dot\Psi = -\nabla p_0/p_0$. For the mean counterfactual outcome and the treatment effect heterogeneity measure in Examples 3 and 4, as we show in Appendix A, $\dot\Psi$ also exists and depends on the propensity score function x ↦ P0(A = 1 | X = x).

3. Estimation with Highly Adaptive Lasso

3.1. Brief review of Highly Adaptive Lasso

Recently, the Highly Adaptive Lasso (HAL) was proposed as a flexible ML algorithm that only requires a mild smoothness condition on the unknown function and has a well-described implementation [1, 26]. In this subsection, we briefly review HAL. We first heuristically introduce its definition and desirable properties, and then introduce the definition and implementation more formally. For ease of presentation, for the moment, we assume that θ0 is real-valued.

In HAL, θ0 is assumed to fall in the class of càdlàg functions (right-continuous with left limits) defined on $\mathcal{X} \subseteq \mathbb{R}^d$ with variation norm bounded by a finite constant M. In this section, we denote this function class by Θv,M. The variation norm of a càdlàg function θ, denoted by ‖θ‖v, characterizes the total variability of θ as its argument ranges over the domain, so ‖·‖v is a global smoothness measure and Θv,M is a large function class that even contains functions with discontinuities. Fig. 1 presents some examples of univariate càdlàg functions with finite variation norms for illustration. Because Θv,M is a rich class, it can be plausible that θ0 ∈ Θv,M for some M < ∞. The HAL estimator of θ0 is then $\hat{\theta}_n = \hat{\theta}_{n,M} \in \operatorname{argmin}_{\theta \in \Theta_{v,M}} n^{-1}\sum_{i=1}^n l(\theta)(V_i)$. Under this assumption, it has been shown that $\|\hat{\theta}_n - \theta_0\| = o_p(n^{-1/4})$ regardless of the dimension of X under additional mild conditions [26]. Thus, estimation with HAL replaces the usual smoothness requirement on derivatives of traditional series estimators by a requirement on global smoothness, namely θ0 ∈ Θv,M for some M.

Figure 1. Examples of univariate càdlàg functions with finite variation norms. The top-left, top-right, bottom-left and bottom-right plots present the standard normal density function, a minimax concave penalty function [35], a step function and the real part of a Morlet wavelet [13], respectively.

We next formally present the definition of the variation norm of a càdlàg function $\theta : [x^{(l)}, x^{(u)}] \to \mathbb{R}$. Here, $x^{(l)}$ and $x^{(u)}$ are vectors in $\mathbb{R}^d$; with ≤ taken entrywise, $[x^{(l)}, x^{(u)}] := \{x \in \mathbb{R}^d : x^{(l)} \le x \le x^{(u)}\}$.

For any nonempty index set s ⊆ {1, 2, …, d} and any $x = (x_1, x_2, \ldots, x_d) \in [x^{(l)}, x^{(u)}]$, we define $x_s := \{x_j : j \in s\}$ and $x_{-s} := \{x_j : j \in \{1, 2, \ldots, d\} \setminus s\}$ to be the entries of x with indices in and not in s, respectively. We define the s-section of θ as $\theta_s : x_s \mapsto \theta(x_1 1(1 \in s), x_2 1(2 \in s), \ldots, x_d 1(d \in s))$, where 1(·) denotes the indicator function. We can subsequently obtain the following representation of θ at any $x \in [x^{(l)}, x^{(u)}]$ in terms of sums and integrals of the variation of s-sections of θ [9]:

$$\theta(x) = \theta(x^{(l)}) + \sum_{s \subseteq \{1, \ldots, d\},\, s \neq \emptyset} \int_{(x_s^{(l)},\, x_s]} \theta_s(d\tilde{x}_s).$$

The variation norm is then defined as

$$\|\theta\|_v := |\theta(x^{(l)})| + \sum_{s \subseteq \{1, \ldots, d\},\, s \neq \emptyset} \int_{(x_s^{(l)},\, x_s^{(u)}]} |\theta_s(d\tilde{x}_s)|.$$

We refer to [1] and [26] for more details on variation norm. Notably, this notion of variation norm coincides with that of Hardy and Krause [21].

We finally briefly introduce the algorithm to compute a HAL estimator. It can be shown that an empirical risk minimizer in Θv,M is a step function that only jumps at sample points, namely

$$x \mapsto \beta_0 + \sum_{s \subseteq \{1, \ldots, d\},\, s \neq \emptyset} \sum_{j=1}^n 1(X_{j,s} \le x_s)\,\beta_{s,j}.$$

Here, β0 and all βs,j are real numbers. To find an empirical risk minimizer in $\Theta_{v,M}$ of the above form, we may solve the following optimization problem:

$$\begin{aligned}
\min_\theta \quad & \sum_{i=1}^n l(\theta)(V_i) \\
\text{subject to} \quad & \theta : x \mapsto \beta_0 + \sum_{s \subseteq \{1, \ldots, d\},\, s \neq \emptyset} \sum_{j=1}^n 1(X_{j,s} \le x_s)\,\beta_{s,j}, \\
& |\beta_0| + \sum_{s \subseteq \{1, \ldots, d\},\, s \neq \emptyset} \sum_{j=1}^n |\beta_{s,j}| \le M.
\end{aligned}$$

The constraint imposes an upper bound on the $\ell_1$ norm of the coefficient vector. Therefore, for common loss functions, we may use software for LASSO regression [25]. For example, if the loss function is the squared-error loss, then we may run a LASSO linear regression to obtain a HAL estimate.
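
The sketch below illustrates this construction for the squared-error loss: the zero-order indicator basis is built from the observed covariates, and the variation-norm constraint is imposed through the $\ell_1$ penalty of a LASSO fit in its Lagrangian rather than constrained form, with the penalty level serving as a stand-in for the bound M. The helper names and tuning values are ours, not from the paper.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso

def hal_basis(X_knots, X_eval):
    """Indicator basis 1(X_{j,s} <= x_s): one column per nonempty subset s of
    coordinates and per knot (observation) j, plus a leading constant column.
    The number of columns is 1 + n * (2^d - 1)."""
    X_knots, X_eval = np.atleast_2d(X_knots), np.atleast_2d(X_eval)
    n, d = X_knots.shape
    cols = [np.ones(X_eval.shape[0])]
    for r in range(1, d + 1):
        for s in combinations(range(d), r):
            s = list(s)
            for j in range(n):
                cols.append(np.all(X_eval[:, s] >= X_knots[j, s], axis=1).astype(float))
    return np.column_stack(cols)

def fit_hal(X, z, penalty=0.01):
    """Squared-error HAL fit: LASSO on the indicator basis. `penalty` is the
    Lagrangian counterpart of the variation-norm bound M (e.g., CV-selected)."""
    H = hal_basis(X, X)
    model = Lasso(alpha=penalty, fit_intercept=False, max_iter=50000).fit(H, np.asarray(z))
    return lambda Xnew: hal_basis(X, Xnew) @ model.coef_
```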

3.2. Estimation with an oracle tuning parameter

In this section, we consider plug-in estimators based on HAL. For ease of illustration, for the rest of this section, we consider scalar-valued Ψ, and will discuss vector-valued Ψ only at the end of this subsection. We further introduce the following conditions needed to establish that the HAL-based plug-in estimator is efficient.

Condition B1 (Càdlàg functions). θ0 and Ψ˙ are càdlàg.

Condition B2 (Bound on variation norm). For some M < ∞, $\|\theta_0\|_v + \|\dot\Psi\|_v \le M$.

Condition B2 ensures that certain perturbations of θ0 still lie in Θv,M, a crucial requirement for proving the asymptotic linearity of our proposed plug-in estimator. In addition, since Ψ˙ may depend on components of P0 other than θ0 as in Examples 2–4, Conditions B1–B2 may also impose conditions on these components.

In this section, we fix an M that satisfies Condition B2. Additional technical conditions can be found in Appendix B.1. Let $\hat{\theta}_n = \hat{\theta}_{n,M} \in \operatorname{argmin}_{\theta \in \Theta_{v,M}} n^{-1}\sum_{i=1}^n l(\theta)(V_i)$ denote the HAL fit obtained using the bound M in Condition B2.

We note that $\hat{\theta}_n$ is not a typical sieve estimator because M is fixed and there is no explicit sequence of growing approximating spaces Θn. Nevertheless, we may view this method as a special case of sieve estimation with degenerate sieves Θn = Θv,M for all n. This allows us to utilize existing results [24] to show the asymptotic linearity and efficiency of the plug-in estimator based on $\hat{\theta}_n$. We next formally present this result.

Theorem 1 (Efficiency of plug-in estimator). Under Conditions A1–A4 and B1–B4, $\Psi(\hat{\theta}_n)$ is an asymptotically linear estimator of Ψ(θ0) with influence function $v \mapsto \alpha_{0,l}^{-1}\{-l_0'[\dot\Psi](v) + E_{P_0}[l_0'[\dot\Psi](V)]\}$, that is,

$$\Psi(\hat{\theta}_n) = \Psi(\theta_0) + \frac{1}{n}\sum_{i=1}^n \alpha_{0,l}^{-1}\left\{-l_0'[\dot\Psi](V_i) + E_{P_0}[l_0'[\dot\Psi](V)]\right\} + o_p(n^{-1/2}).$$

As a consequence, $\sqrt{n}\,[\Psi(\hat{\theta}_n) - \Psi(\theta_0)] \rightarrow_d N(0, \xi^2)$ with $\xi^2 := \mathrm{Var}_{P_0}(l_0'[\dot\Psi](V))/\alpha_{0,l}^2$. In addition, under Conditions E1 and E2 in Appendix B.4, $\Psi(\hat{\theta}_n)$ is efficient under a nonparametric model.

We note that, for HAL to achieve the optimal convergence rate, we only need that $M \ge \|\theta_0\|_v$ [1, 26]. The requirement of a larger M imposed by Condition B2 resembles undersmoothing [18], as using a larger M would result in a fit that is less smooth than that based on the CV-selected bound. The L2(P0)-convergence rate of the flexible fit using the larger bound remains the same, but the leading constant may be larger. This is in contrast to traditional undersmoothing, which leads to a fit with a suboptimal rate of convergence.

Under some conditions, the following lemma provides a loose bound on $\|\dot\Psi\|_v$ in the case that $\dot\Psi$ has a particular structure. Such a bound can be used to select an appropriate bound on the variation norm that satisfies Condition B2.

Lemma 1. Suppose that $\dot\Psi = \dot\psi \circ \theta_0$, where $\dot\psi : \mathbb{R} \to \mathbb{R}$ is differentiable. Let $x^{(l)} = \sup\{x : P_0(X \ge x) = 1\}$, where the supremum and ≥ are entrywise. Assume that θ0 is differentiable. If each of $\|\theta_0\|_v$, $|\dot\Psi(x^{(l)})|$ and $B := \sup_{z : |z| \le \|\theta_0\|_v} |\dot\psi'(z)|$ is finite, then $\|\dot\Psi\|_v \le B\|\theta_0\|_v + |\dot\Psi(x^{(l)})|$. Hence, $\|\theta_0\|_v + \|\dot\Psi\|_v \le (B + 1)\|\theta_0\|_v + |\dot\Psi(x^{(l)})| < \infty$.

As we discussed at the end of Section 2, such structures as $\dot\Psi = \dot\psi \circ \theta_0$ are common, especially if we augment θ0 to include other implicitly relevant components of P0. For example, in Example 2, we may augment θ0 with $p_0$ and $\nabla p_0$; in Examples 3 and 4, we may augment θ0 with the propensity score function.

When θ0 is $\mathbb{R}^q$-valued, θ0 can often be viewed as a collection of q real-valued variation-independent functions η10, …, ηq0. In this case, we can define Θv,M = {(η1, …, ηq) : ηj is càdlàg, ‖ηj‖v ≤ Mj, j = 1, …, q} for a positive vector M = (M1, …, Mq). The subsequent arguments follow analogously, where now each ηj is treated as a separate function.

We remark that an undersmoothing condition such as B2 appears to be necessary for a HAL-based plug-in estimator to be efficient. We illustrate this numerically in Section 3.3. The choice of a sufficiently large bound M required by Theorem 1 is by no means trivial, since this choice requires knowledge that the user may not have. Nevertheless, this result forms the basis of the data-driven method that we propose in Section 3.3 for choosing M. We also remark that, if we wish to plug in the same $\hat{\theta}_n$ based on HAL for a rich class of estimands, the chosen bound M needs to be sufficiently large for all estimands of interest.

Another method to construct efficient plug-in estimators based on HAL has been independently developed [28]. Unlike our approach based on sieve theory, in this work the authors directly analyzed the first-order bias of the plug-in estimator using influence functions. In terms of ease of implementation, their method requires specifying a constant involved in a threshold of the empirical mean of the basis functions, which may be difficult to specify in applications. Our approach in Section 3.3 may also require specifying an unknown constant to obtain a valid upper bound on $\|\dot\Psi\|_v$, but in some cases the constant may be set to zero, and our simulation suggests that the performance is not sensitive to the choice of the constant.

3.3. Data-adaptive selection of the tuning parameter

Since it is hard to prespecify a bound M on the variation norm that is sufficiently large to satisfy Condition B2 but also sufficiently small to avoid overfitting for a given data set, it is desirable to select M in a data-adaptive manner. A seemingly natural approach makes use of k-fold CV. In particular, for each candidate bound M, partition the data into k folds of approximately equal size (k is fixed and does not depend on n), in each fold evaluate the performance of the HAL estimator fitted on all other folds based on this candidate M, and use the candidate bound Mn with the best average performance across all folds to obtain the final fit. It has been shown that $\hat{\theta}_{n,M_n}$ can achieve the optimal convergence rate under mild conditions [29], but Mn appears not to satisfy Condition B2 in general. In particular, the derived bound on $\|\hat{\theta}_n - \theta_0\|$ relies on an empirical process term, namely $\sup_{\theta \in \Theta_{v,M}} |(P_n - P_0)\{l(\theta) - l(\theta_0)\}|$, and a larger M implies a larger space Θv,M. Therefore, the bound on $\|\hat{\theta}_n - \theta_0\|$ grows with M. Because k-fold CV seeks to optimize out-of-sample performance, Mn generally appears to be close to ‖θ0‖v and not sufficiently large to obtain an efficient plug-in estimator.

To avoid this issue with the CV-selected bound, we propose a method that takes inspiration from k-fold CV, but modifies the bound so that it is guaranteed to yield an efficient plug-in estimator for Ψ(θ0). This method may require the analytic expression for Ψ˙. In Sections 4 and 5, we present methods that do not require this knowledge.

  1. Derive an upper bound on $\|\dot\Psi\|_v$. This bound is a non-decreasing function of the variation norms of functions that can be learned from data (e.g., using Lemma 1). In other words, find a non-decreasing function F such that $\|\dot\Psi\|_v \le F(\|\eta_{10}\|_v, \ldots, \|\eta_{q0}\|_v)$ for unknown functions η10, …, ηq0 that can be assumed to be càdlàg with finite variation norm and can be estimated with HAL.

  2. Estimate θ0, η10, …, ηq0 by HAL with k-fold CV, and denote the CV-selected bounds for these functions by Mn, M1n, …, Mqn.

  3. For a small ϵ > 0, use the bound Mn + ϵ + F (M1n + ϵ, …, Mqn + ϵ) to estimate θ0 with HAL and plug in the fit. We refer to this step of slightly increasing the bounds as ϵ-relaxation.

It follows from Lemma 2 in the Appendix that this method would yield a sufficiently large bound with probability tending to one. In practice, it is desirable for the derived bound on $\|\dot\Psi\|_v$ to be relatively tight to avoid choosing an overly large bound that leads to overfitting in small to moderate samples. We remark that multiplying by 1 + ϵ rather than adding ϵ to each argument also leads to a valid choice for the bound; that is, the bound Mn(1 + ϵ) + F(M1n(1 + ϵ), …, Mqn(1 + ϵ)) is also sufficiently large with probability tending to one. In practice, the user may increase each CV-selected bound by, for example, 5% or 10%. Although it is more natural and convenient to directly use Mn + F(M1n, …, Mqn) as the bound, we have only been able to prove the result with a small ϵ-relaxation. However, if the bound is loose and F is continuous, we can show that ϵ-relaxation is unnecessary. The formal argument can be found after Lemma 2 in the Appendix.
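
A schematic sketch of this selection rule (the multiplicative variant mentioned above) is given below. The CV-selected bounds and the bound function F are inputs that the user supplies, for instance from a CV-tuned HAL fit and from Lemma 1; the helper name, the 5% enlargement, and the worked comment are our illustrative assumptions.

```python
def relaxed_hal_bound(M_theta_cv, M_eta_cv, F, eps=0.05):
    """Step 3 of the procedure: enlarge the CV-selected bounds by the factor
    (1 + eps) and combine them through the user-derived bound F on ||Psi_dot||_v.

    M_theta_cv : CV-selected variation-norm bound for theta_0
    M_eta_cv   : list of CV-selected bounds for eta_10, ..., eta_q0
    F          : non-decreasing function with ||Psi_dot||_v <= F(||eta_10||_v, ...)
    """
    return (1 + eps) * M_theta_cv + F(*[(1 + eps) * M for M in M_eta_cv])

# Hypothetical example: for Psi(theta_0) = P_0 theta_0^2, Lemma 1 suggests
# F(M) = 2 * M (up to a boundary term), so the relaxed bound is roughly 3 * M.cv:
# M_final = relaxed_hal_bound(M_cv, [M_cv], F=lambda m: 2 * m)
```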

As for methods based on knowledge of an influence function, deriving $\dot\Psi$ and a bound for its variation norm requires some expertise, but in some cases this task can be straightforward. The derivation of an influence function is typically based on a fluctuation in the space of distributions, but in many cases, the relation between such fluctuations and the summary of interest is implicit and difficult to handle. In contrast, the derivation of $\dot\Psi$ is based on a fluctuation of θ0, and the summary of interest explicitly depends on θ0. As a consequence, it can be simpler to derive $\dot\Psi$ than to derive an influence function. For example, for the summary $\Psi_\kappa(\theta_0) = P_0\theta_0^\kappa$ in Example 1, we find that $\dot\Psi_\kappa = \kappa\theta_0^{\kappa-1}$ by straightforward calculation, whereas the influence function given in Theorem 1 is more difficult to directly derive analytically.

We illustrate the fact that Mn may not be sufficiently large, and show that our proposed method resolves this issue, via a simulation study in which $\theta_0 : x \mapsto E_{P_0}[Z \mid X = x]$ and $\Psi : \theta_0 \mapsto P_0\theta_0^2$. We compare the performance of the plug-in estimators based on the 10-fold CV-selected bound on the variation norm (M.cv), the bound derived from the analytic expression of $\dot\Psi$ with and without ϵ-relaxation (M.gcv+ and M.gcv, respectively), and a sufficiently large oracle choice satisfying Condition B2 (M.oracle). According to Lemma 1, M.oracle is $3\|\theta_0\|_v$ and M.gcv is 3 × M.cv. We also investigate the performance of 95% Wald CIs based on the influence function. For each resulting plug-in estimator, we investigate the following quantities: n · MSE, $\sqrt{n} \cdot |\mathrm{bias}|$ and CI coverage. More details of this simulation are provided in Appendix E. In theory, for an efficient estimator, we should find that n · MSE tends to a constant (the variance of the influence function, $\xi^2 := P_0\mathrm{IF}^2$), $\sqrt{n} \cdot |\mathrm{bias}|$ tends to 0, and 95% Wald CIs have approximately 95% coverage.

We report performance summaries in Fig 2 and Table 1 against these criteria, from which it appears that the plug-in estimators with M.oracle and M.gcv+ achieve efficiency, while the plug-in estimator based on M.cv does not. The desirable performance of M.oracle and M.gcv+ agrees with the available theory, whereas the poor performance of M.cv suggests that cross-validation may not yield a valid choice of the variation norm bound in general. Interestingly, M.gcv performs similarly to M.oracle and M.gcv+. We conjecture that the ϵ-relaxation is unnecessary in this setting. In Fig 3, we can also see that M.cv tends to ‖θ0‖v and has a high probability of being less than M.oracle. Therefore, this simulation suggests that using a sufficiently large bound — in particular, a bound larger than the CV-selected bound — may be necessary and sufficient for the plug-in estimator to achieve efficiency.

Figure 2. The relative MSE, n · MSE/ξ², and the relative absolute bias, $\sqrt{n}\,|\mathrm{bias}/\Psi(\theta_0)|$, of the plug-in estimator of $\Psi(\theta_0) = P_0\theta_0^2$ based on HAL for an oracle choice of the bound on the variation norm (M.oracle), the 10-fold CV-selected bound (M.cv), and a bound based on M.cv and the analytic expression of $\dot\Psi$ without and with ϵ-relaxation (M.gcv and M.gcv+, respectively). $\xi^2 := P_0\mathrm{IF}^2$ is the asymptotic variance that the n · MSE of an asymptotically linear (AL) estimator should converge to. Note that the n · MSE for M.oracle, M.gcv and M.gcv+ tends to ξ², but that for M.cv does not.

Table 1.

Coverage probability of the 95% Wald CI of the plug-in estimator of $\Psi(\theta_0) = P_0\theta_0^2$ based on HAL for an oracle choice of the bound on the variation norm (M.oracle), the 10-fold CV-selected bound (M.cv), and a bound based on M.cv and the analytic expression of $\dot\Psi$ without and with ϵ-relaxation (M.gcv and M.gcv+, respectively). The CI is constructed based on the influence function. The coverage for M.oracle, M.gcv and M.gcv+ is approximately 95%, but that for M.cv is not.

n M.cv M.gcv M.gcv+ M.oracle
500 0.87 0.96 0.96 0.97
1000 0.87 0.97 0.97 0.97
2000 0.90 0.95 0.95 0.96
5000 0.93 0.95 0.95 0.95
10000 0.89 0.95 0.95 0.95

Figure 3. A boxplot of the ratio of the 10-fold CV-selected bound to M.oracle. The horizontal gray thick dashed lines are 1 and 1/3. The y-axis is on a logarithmic scale for readability. There is a high probability that M.cv is much smaller than M.oracle; M.cv tends to the variation norm of the function being estimated, ‖θ0‖v, corresponding to 1/3 of M.oracle. Enlarging M.cv according to the analytic expression of $\dot\Psi$ with ϵ-relaxation results in sufficiently large bounds. The enlargement without ϵ-relaxation appears to have similar performance.

4. Data-adaptive series

4.1. Proposed method

For ease of illustration, we consider the case that Ψ is scalar-valued in this section. As we will describe next, our proposed estimation procedure for function-valued features does not rely on Ψ and hence can be used for a class of summaries.

Suppose that Θ is a vector space of $\mathbb{R}^q$-valued functions equipped with the L2(P0)-inner product. Further, suppose that $\dot\Psi = \dot\psi \circ \theta_0$ for some function $\dot\psi : \mathbb{R}^q \to \mathbb{R}^q$. This holds, for example, when Ψ : θ ↦ P0(f ∘ θ) for a fixed differentiable function f in Example 1. In this case, $\dot\Psi = f' \circ \theta_0$ and hence $\dot\psi = f'$. Particularly useful examples include Examples 1 and 4. For now we assume that the marginal distribution of X is known so that we only need to estimate θ0 for this summary. We will address the more difficult case in which the marginal distribution of X is unknown in Section 4.3.

Let $\theta_n^0$ be a given initial flexible ML fit of θ0 and consider the data-adaptive sieve-like subspaces based on $\theta_n^0$, $\Theta_n \equiv \Theta_{n,\theta_n^0} := \mathrm{Span}\{\phi_1, \phi_2, \ldots, \phi_K\} \circ \theta_n^0$, where $\phi_1, \phi_2, \ldots$ are $\mathbb{R}^q$-valued basis functions in a series defined on $\mathbb{R}^q$ and K = K(n) is a deterministic number of terms in the series — we will consider selecting K via CV in Section 4.4. Let $\theta_n^* = \theta_n^*(\theta_n^0) \in \operatorname{argmin}_{\theta \in \Theta_n} n^{-1}\sum_{i=1}^n l(\theta)(V_i)$ denote the series estimator within this data-adaptive sieve-like subspace that minimizes the empirical risk. We propose to use $\Psi(\theta_n^*)$ to estimate Ψ(θ0).
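
A minimal sketch of this construction under the squared-error loss, with a trigonometric series applied to a rescaled initial fit; the rescaling to [0, 1], the use of ordinary least squares, and the helper names are our illustrative assumptions rather than prescriptions from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def trig_basis(t, K):
    """First K non-constant trigonometric basis terms on [0, 1]:
    sin(2*pi*t), cos(2*pi*t), sin(4*pi*t), cos(4*pi*t), ..."""
    cols, k = [], 1
    while len(cols) < K:
        cols.append(np.sin(2 * np.pi * k * t))
        if len(cols) < K:
            cols.append(np.cos(2 * np.pi * k * t))
        k += 1
    return np.column_stack(cols)

def data_adaptive_series_fit(theta0_hat, x, z, K):
    """Empirical risk minimizer over Span{phi_1, ..., phi_K} composed with the
    initial ML fit theta0_hat, under the squared-error loss (a linear model in
    the basis functions evaluated at theta0_hat(x); the fitted intercept plays
    the role of a constant basis term)."""
    t_train = np.asarray(theta0_hat(x), dtype=float)
    lo, hi = t_train.min(), t_train.max()
    scale = lambda t: (np.asarray(t, dtype=float) - lo) / (hi - lo + 1e-12)
    reg = LinearRegression().fit(trig_basis(scale(t_train), K), np.asarray(z))
    return lambda xnew: reg.predict(trig_basis(scale(theta0_hat(xnew)), K))
```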

4.2. Results for a deterministic number of terms

Following [4, 24], our proofs of the validity of our data-adaptive series approach make heavy use of projection operators. We use $\pi_n \equiv \pi_{n,\theta_n^0}$ to denote the projection operator for functions in Θ onto $\Theta_n = \Theta_{n,\theta_n^0}$ with respect to 〈·, ·〉. For any function θ ∈ Θ, let $\Pi_{n,\theta}$ denote the operator that takes as input a function $g : \mathbb{R}^q \to \mathbb{R}^q$ for which $g \circ \theta \in L^2(P_0)$ and outputs a function $\Pi_{n,\theta}(g) : \mathbb{R}^q \to \mathbb{R}^q$ such that $\Pi_{n,\theta}(g) \circ \theta = \pi_{n,\theta}(g \circ \theta)$. In other words, letting $\beta_j$ be the quantity that depends on g and θ such that $\pi_{n,\theta}(g \circ \theta) = (\sum_{j=1}^K \beta_j\phi_j) \circ \theta$, we define $\Pi_{n,\theta}(g) := \sum_{j=1}^K \beta_j\phi_j$. The operator $\Pi_{n,\theta}$ may also be interpreted as follows: letting $P_\theta$ be the distribution of θ(X) with V = (X, Z) ~ P0, $\Pi_{n,\theta}$ is the projection operator for functions $\mathbb{R}^q \to \mathbb{R}^q$ with respect to the $L^2(P_\theta)$-inner product. We use I to denote the identity function on $\mathbb{R}^q$.

We now present additional conditions we will require to ensure that Ψ(θn*) is an efficient estimator of Ψ(θ0).

Condition C1 (Sufficient convergence rate of initial ML fit). $\|\theta_n^0 - \theta_0\| = o_p(n^{-1/4})$.

Condition C2 (Sufficiently small estimation error). $\|\theta_n^* - \pi_n(\theta_0)\| = o_p(n^{-1/4})$.

Condition C3 (Sufficiently small approximation error to I for $\Theta_{n,\theta_0}$). $\|\theta_0 - \Pi_{n,\theta_0}(I) \circ \theta_0\| = o(n^{-1/4})$.

Condition C4 (Sufficiently small approximation error to $\dot\psi$ for $\Theta_{n,\theta_0}$ and convergence rate of $\theta_n^*$). $\|[\dot\psi - \Pi_{n,\theta_0}(\dot\psi)] \circ \theta_0\| \cdot \|\theta_n^* - \theta_0\| = o_p(n^{-1/2})$.

Appendix B.2 contains further technical conditions and Appendix C discusses their plausibility. As discussed in Appendix C, Conditions C2–C4 typically imply restrictions on the growth rate of K: if K grows too fast with n, then Condition C2 may be violated; if K instead grows too slowly, then Conditions C3 and C4 may be violated. For the generalized moment Ψ : θ ↦ P0(f ∘ θ) with a fixed known function f in Example 1, Condition C4 typically also imposes a smoothness condition on f so that $f'$ can be approximated well by the series. Our conditions are closely related to the conditions in Theorem 1 of [24]. Conditions C1–C3 and C6 serve as sufficient conditions for the condition on the smoothness of Ψ and the convergence rate of $\theta_n^*$ in Theorem 1 of [24]. Together with Conditions C4 and C7, we can derive Lemma 4, which is similar to the first part of Condition C of [24]. The empirical process condition C8 is sufficient for Conditions A, D and the second part of Condition C in Theorem 1 of [24].

We now present a theorem ensuring the asymptotic linearity and efficiency of the plug-in estimator based on $\theta_n^*$.

Theorem 2 (Efficiency of plug-in estimator). Under Conditions A1–A4 and C1–C9, $\Psi(\theta_n^*)$ is an asymptotically linear estimator of Ψ(θ0) with influence function $v \mapsto \alpha_{0,l}^{-1}\{-l_0'[\dot\psi \circ \theta_0](v) + E_{P_0}[l_0'[\dot\psi \circ \theta_0](V)]\}$, that is,

$$\Psi(\theta_n^*) = \Psi(\theta_0) + \frac{1}{n}\sum_{i=1}^n \alpha_{0,l}^{-1}\left\{-l_0'[\dot\psi \circ \theta_0](V_i) + E_{P_0}[l_0'[\dot\psi \circ \theta_0](V)]\right\} + o_p(n^{-1/2}).$$

As a consequence, $\sqrt{n}\,[\Psi(\theta_n^*) - \Psi(\theta_0)] \rightarrow_d N(0, \xi^2)$ with $\xi^2 := \mathrm{Var}_{P_0}(l_0'[\dot\psi \circ \theta_0](V))/\alpha_{0,l}^2$. In addition, under Conditions E1 and E2 in Appendix B.4, $\Psi(\theta_n^*)$ is efficient under a nonparametric model.

Remark 2. Consider the general case in which it may not be true that $\dot\Psi$ can be represented as $\dot\psi \circ \theta_0$ for some $\dot\psi : \mathbb{R}^q \to \mathbb{R}^q$. If the analytic expression of $\dot\Psi$ can be derived and $\dot\Psi$ can be estimated by $\dot\Psi_n$ such that $\|\dot\Psi_n - \dot\Psi\| \cdot \|\theta_n^0 - \theta_0\| = o_p(n^{-1/2})$, then our data-adaptive series can take a special form that is targeted towards Ψ. Specifically, letting $\vartheta_0 := (\theta_0, \dot\Psi)$ and $\bar\Psi(\vartheta_0) := \Psi(\theta_0)$, it is straightforward to show that the gradient of $\bar\Psi$ is $\dot{\bar\Psi} = (\dot\Psi, 0) = [t \mapsto (e_2^\top t, 0^\top t)] \circ \vartheta_0$ with 0 = (0, 0)ᵀ and e2 = (0, 1)ᵀ, which is a function composed with ϑ0. We can set $\vartheta_n^0 = (\theta_n^0, \dot\Psi_n)$ and $\Theta_n = \mathrm{Span}\{\theta_n^0, \dot\Psi_n\}$ in our data-adaptive series. This approach does not have a growing number of terms in Θn and is not similar to sieve estimation, but can be treated as a special case of data-adaptive series. It can be shown that Conditions C1–C4 are still satisfied for ϑ and $\bar\Psi$ with this choice of Θn, and hence our data-adaptive series estimator leads to an efficient plug-in estimator. We remark that the introduction of ϑ and $\bar\Psi$ is a purely theoretical device, and this targeted approach to estimation is quite similar to that used in the context of TMLE [27, 31].

4.3. Summaries involving the marginal distribution of X

We now generalize the setting considered thus far by allowing the parameter to depend both on θ0 and on P0, i.e., estimating Ψ(θ0, P0). The example given at the beginning of Section 4.1, namely that of estimating Ψ(θ0) = P0 (f ○ θ0), is a special case of this more general setting. In what follows, we will make use of the following conditions:

Condition D1 (Conditions with P0 fixed). When we regard Ψ(θ0, P0) as the mapping θ ↦ Ψ(θ, P0) evaluated at θ0, Conditions A1–A4, C1–C4 and C6–C9 are satisfied for estimating Ψ(θ0, P0).

Condition D2 (Hadamard differentiability with θ0 fixed). The mapping P ↦ Ψ(θ0, P) is Hadamard differentiable at P0.

By the functional delta method, it follows that $\Psi(\theta_0, P_n) = \Psi(\theta_0, P_0) + P_n\mathrm{IF}_0 + o_p(n^{-1/2})$ for a function IF0 satisfying $P_0\mathrm{IF}_0 = 0$ and $P_0\mathrm{IF}_0^2 < \infty$.

Condition D3 (Negligible second-order difference).

$$[\Psi(\theta_n^*, P_n) - \Psi(\theta_0, P_n)] - [\Psi(\theta_n^*, P_0) - \Psi(\theta_0, P_0)] = o_p(n^{-1/2}).$$

This condition usually holds, for example, when Ψ(θ0, P0) = P0(f ∘ θ0), as in this case the left-hand side is equal to $(P_n - P_0)(f \circ \theta_n^* - f \circ \theta_0)$, which is op(n−1/2) under empirical process conditions.

Theorem 3 (Asymptotic linearity of plug-in estimator). Under Conditions D1–D3, $\Psi(\theta_n^*, P_n)$ is an asymptotically linear estimator of Ψ(θ0, P0) with influence function

$$v \mapsto \alpha_{0,l}^{-1}\left\{-l_0'[\dot\psi \circ \theta_0](v) + E_{P_0}[l_0'[\dot\psi \circ \theta_0](V)]\right\} + \mathrm{IF}_0(v),$$

that is,

$$\Psi(\theta_n^*, P_n) = \Psi(\theta_0, P_0) + \frac{1}{n}\sum_{i=1}^n\left\{-\alpha_{0,l}^{-1} l_0'[\dot\psi \circ \theta_0](V_i) + \alpha_{0,l}^{-1} E_{P_0}[l_0'[\dot\psi \circ \theta_0](V)] + \mathrm{IF}_0(V_i)\right\} + o_p(n^{-1/2}).$$

As a consequence, $\sqrt{n}\,[\Psi(\theta_n^*, P_n) - \Psi(\theta_0, P_0)] \rightarrow_d N(0, \xi^2)$ with $\xi^2 := \mathrm{Var}_{P_0}\!\left(-\alpha_{0,l}^{-1} l_0'[\dot\psi \circ \theta_0](V) + \mathrm{IF}_0(V)\right)$.

This result is easy to verify by decomposing $\Psi(\theta_n^*, P_n) - \Psi(\theta_0, P_0)$ as

$$[\Psi(\theta_n^*, P_0) - \Psi(\theta_0, P_0)] + [\Psi(\theta_0, P_n) - \Psi(\theta_0, P_0)] + \left\{[\Psi(\theta_n^*, P_n) - \Psi(\theta_0, P_n)] - [\Psi(\theta_n^*, P_0) - \Psi(\theta_0, P_0)]\right\}.$$

Moreover, under conditions similar to Conditions E1 and E2 given in Appendix B.4, we can show that $\Psi(\theta_n^*, P_n)$ is efficient under a nonparametric model.

Remark 3. Conditions D2 and D3 can be relaxed. Specifically, if $\hat{P}_n$ is an estimator of P0 that satisfies $\Psi(\theta_0, \hat{P}_n) = \Psi(\theta_0, P_0) + P_n\mathrm{IF}_0 + o_p(n^{-1/2})$ for an influence function IF0 and Condition D3 holds with Pn replaced by $\hat{P}_n$, then $\Psi(\theta_n^*, \hat{P}_n)$ is an asymptotically linear estimator of Ψ(θ0, P0).

4.4. CV selection of the number of terms in data-adaptive series

In the preceding subsections, we established the efficiency of the plug-in estimator based on suitable rates of growth for K relative to the sample size n. In this subsection, we show that, under some conditions, such a K can be selected by k-fold CV: after obtaining $\theta_n^0$, for each K in a range of candidates, we can calculate the cross-validated risk from k folds and choose the value of K with the smallest CV risk. We denote the number of terms in the series that CV selects by K*. In this section, we use K in the subscripts for notation related to data-adaptive sieve-like spaces and projections; this represents a slight abuse of notation because, in Sections 4.1 and 4.2, these subscripts were instead used for the sample size n. That is, we use $\Theta_{K,\theta}$ to denote $\mathrm{Span}\{\phi_1, \phi_2, \ldots, \phi_K\} \circ \theta$, $\pi_{K,\theta}$ to denote the projection onto $\Theta_{K,\theta}$, $\Pi_{K,\theta}$ to denote the operator such that $\Pi_{K,\theta}(g) \circ \theta = \pi_{K,\theta}(g \circ \theta)$ for all $g : \mathbb{R}^q \to \mathbb{R}^q$ with g ∘ θ ∈ L2(P0), and $\theta_n^\# := \theta_{K^*}^*(\theta_n^0)$ to be the data-adaptive series estimator based on $\theta_n^0$ and K*.
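
A sketch of this selection step, reusing the `data_adaptive_series_fit` helper from the sketch in Section 4.1 and the squared-error loss as the cross-validated risk; the grid of candidate K values and the number of folds are illustrative choices.

```python
import numpy as np
from sklearn.model_selection import KFold

def select_K_by_cv(theta0_hat, x, z, K_grid, n_folds=10, seed=0):
    """k-fold CV of the empirical risk over candidate numbers of series terms K,
    with the initial fit theta0_hat held fixed across folds."""
    x, z = np.asarray(x), np.asarray(z)
    cv_risk = []
    for K in K_grid:
        fold_risks = []
        for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(x):
            fit = data_adaptive_series_fit(theta0_hat, x[train], z[train], K)
            fold_risks.append(np.mean((z[test] - fit(x[test])) ** 2))
        cv_risk.append(np.mean(fold_risks))
    return list(K_grid)[int(np.argmin(cv_risk))]
```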

Condition C5 (Bounded approximation error of $\dot\psi$ relative to I). There exists a constant C > 0 such that, with probability tending to one, $\|\dot\psi \circ \theta_n^0 - \Pi_{K,\theta_n^0}(\dot\psi) \circ \theta_n^0\| \le C\,\|\theta_n^0 - \Pi_{K,\theta_n^0}(I) \circ \theta_n^0\|$.

This condition is equivalent to

$$\|\dot\psi - \Pi_{K,\theta_n^0}(\dot\psi)\|_{L^2(P_{\theta_n^0})} \le C\,\|I - \Pi_{K,\theta_n^0}(I)\|_{L^2(P_{\theta_n^0})}$$

for all K with probability tending to one, which may be interpreted in terms of two simultaneous requirements. The first requirement is that the identity function I is not exactly contained in the span of φ1, …, φK for any K, since otherwise the right-hand side would be zero for all sufficiently large K. Therefore, common series such as polynomial and spline series are not permitted for general summaries. In contrast, other series such as trigonometric series and wavelets satisfy this requirement. The second requirement is that the approximation error of the chosen series for $\dot\psi$ is not much larger than that for the identity function I. If a trigonometric or wavelet series is used, then this condition imposes a strong smoothness condition on derivatives of $\dot\psi$. Nonetheless, this may not be stringent in some interesting examples. For example, if Ψ(θ) = P0(f ∘ θ) for a fixed function f in Example 1, then $\dot\psi$ equals f′ and hence can be expected to satisfy this strong smoothness condition provided that f is infinitely differentiable with bounded derivatives. The estimands encountered in many applications involve f satisfying this smoothness condition.

The following theorem justifies the use of k-fold CV to select K under appropriate conditions.

Theorem 4 (Efficiency of CV-based plug-in estimator). Assume that Conditions A1–A4, C1–C3, C5, C8 and C9 hold for a deterministic K = K(n), and suppose that part (a) of Condition C7 holds. Then, with $\theta_n^\# := \theta_{K^*}^*(\theta_n^0)$, $\Psi(\theta_n^\#)$ is an asymptotically linear estimator of Ψ(θ0) with influence function $v \mapsto \alpha_{0,l}^{-1}\{-l_0'[\dot\psi \circ \theta_0](v) + E_{P_0}[l_0'[\dot\psi \circ \theta_0](V)]\}$, that is,

$$\Psi(\theta_n^\#) = \Psi(\theta_0) + \frac{1}{n}\sum_{i=1}^n \alpha_{0,l}^{-1}\left\{-l_0'[\dot\psi \circ \theta_0](V_i) + E_{P_0}[l_0'[\dot\psi \circ \theta_0](V)]\right\} + o_p(n^{-1/2}).$$

As a consequence, $\sqrt{n}\,[\Psi(\theta_n^\#) - \Psi(\theta_0)] \rightarrow_d N(0, \xi^2)$ with $\xi^2 := \mathrm{Var}_{P_0}(l_0'[\dot\psi \circ \theta_0](V))/\alpha_{0,l}^2$. In addition, under Conditions E1 and E2 in Appendix B.4, $\Psi(\theta_n^\#)$ is efficient under a nonparametric model.

4.5. Simulation

4.5.1. Demonstration of Theorem 4

We illustrate our method in a simulation in which we take $\theta_0(x) = E_{P_0}[Z \mid X = x]$ and $\Psi(\theta_0) = P_0\theta_0^2$. This is a special case of Example 1. The true function θ0 is chosen to be discontinuous, which violates the smoothness assumptions commonly required in traditional series estimation. In this case, $\dot\psi = 2I$ and so the constant in Condition C5 is 2. We compare the performance of plug-in estimators based on three different nonparametric regressions: (i) polynomial regression with degree selected by 10-fold CV (poly), which results in a traditional sieve estimator, (ii) gradient boosting (xgb) [7, 8, 15, 16], and (iii) data-adaptive trigonometric series estimation with gradient boosting as the initial ML fit and 10-fold CV to select the number of terms in the series (xgb.trig). We also compare these plug-in estimators with the one-step correction estimator [22] based on gradient boosting (xgb.1step). Further details of this simulation can be found in Appendix E.
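
A rough sketch of how an estimator in the spirit of xgb.trig could be assembled from the earlier sketches in Sections 4.1 and 4.4; the xgboost settings, the K grid, and the use of the empirical distribution for the outer mean are our illustrative choices, and the actual simulation settings are in Appendix E.

```python
import numpy as np
from xgboost import XGBRegressor

def xgb_trig_plugin(x, z, K_grid=range(1, 21)):
    """Plug-in estimate of Psi(theta_0) = P_0 theta_0^2 via the data-adaptive
    trigonometric series built on a gradient boosting initial fit."""
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    booster = XGBRegressor(n_estimators=200, max_depth=3).fit(x.reshape(-1, 1), z)
    theta0_hat = lambda xnew: booster.predict(np.asarray(xnew, dtype=float).reshape(-1, 1))
    K_star = select_K_by_cv(theta0_hat, x, z, K_grid)                 # Section 4.4 sketch
    theta_star = data_adaptive_series_fit(theta0_hat, x, z, K_star)   # Section 4.1 sketch
    return np.mean(theta_star(x) ** 2)   # plug in the empirical distribution for P_0
```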

Fig 4 presents n · MSE and $\sqrt{n} \cdot |\mathrm{bias}|$ for each estimator, whereas Table 2 presents the coverage probability of 95% Wald CIs based on these estimators. We find that the xgb.trig and xgb.1step estimators perform well, while the poly and xgb plug-in estimators do not appear to be efficient. Since polynomial series estimators only work well when estimating smooth functions, in this simulation we would not expect the fit from the polynomial series estimator to converge sufficiently fast, and consequently, we would not expect the resulting plug-in estimator to be efficient. In contrast, gradient boosting is a flexible ML method that can learn discontinuous functions, so its fit can be expected to converge to θ0 sufficiently fast. However, gradient boosting is not designed to approximately solve the estimating equation that achieves the small-bias property for this particular summary, so we would not expect its naïve plug-in estimator to be efficient. Based on gradient boosting, our estimator and the one-step corrected estimator both appear to be efficient, but our method has the advantage of being a plug-in estimator. Moreover, the construction of our estimator does not require knowledge of the analytic expression of an influence function.

Figure 4. The relative MSE, n · MSE/ξ², and the relative absolute bias, $\sqrt{n}\,|\mathrm{bias}/\Psi(\theta_0)|$, of estimators of $\Psi(\theta_0) = P_0\theta_0^2$. $\xi^2 := P_0\mathrm{IF}^2$ is the asymptotic variance that the n · MSE of an AL estimator should converge to. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The y-axis for the relative MSE is on a logarithmic scale for readability. Note that the n · MSE for xgb.trig and xgb.1step tend to ξ², but those for poly and xgb do not.

Table 2.

Coverage probability of the 95% Wald CI based on estimators of $\Psi(\theta_0) = P_0\theta_0^2$. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The CI is constructed based on the influence function. The coverage probabilities for xgb.trig and xgb.1step are approximately 95%, but those for poly and xgb are not.

n poly xgb xgb.1step xgb.trig
500 0.90 0.90 0.95 0.95
1000 0.86 0.89 0.95 0.95
2000 0.74 0.88 0.96 0.96
5000 0.47 0.88 0.94 0.94
10000 0.16 0.87 0.95 0.96
20000 0.02 0.86 0.96 0.96

We also investigate the effect of the choice of K on the performance of our method. Fig 5 presents n · MSE for the data-adaptive series estimator with different choices of K. We can see that our method is insensitive to the choice of K in this simulation setting. Although a relatively small K performs better, choosing a much larger K does not appear to substantially harm the behavior of the estimator. This insensitivity to the selected tuning parameter suggests that in some applications, without using CV, an almost arbitrary choice of K that is sufficiently large might perform well.

Figure 5. n · MSE of estimators of $\Psi(\theta_0) = P_0\theta_0^2$ based on data-adaptive series with different choices of K. The horizontal gray thick dashed line is the asymptotic variance that the n · MSE of an AL estimator should converge to, $\xi^2 := P_0\mathrm{IF}^2$. Note that the n · MSE is not sensitive to the choice of K over a wide range of K.

4.5.2. Violation of Condition C5

For the k-fold CV selection of K in our method to yield an efficient plug-in estimator, Ψ must be highly smooth in the sense that $\dot\psi$ can be approximated by the series about as well as can the identity function (see Condition C5). Although we have argued that this condition is reasonable, in this section we explore via simulation the behavior of our method based on CV when $\dot\psi$ is rough. We again take $\theta_0 : x \mapsto E_{P_0}[Z \mid X = x]$ and an artificial summary Ψ(θ0) = P0(f ∘ θ0), where f is an element of C¹[−1, 1] but not of C²[−1, 1]. In this case, $\dot\psi = f'$ is very rough, so we do not expect it to be approximated by a trigonometric series as well as the identity function. However, it is sufficiently smooth to allow for the existence of a deterministic K that achieves efficiency. Further simulation details are provided in Appendix E.

Table 3 presents the performance of our estimator based on 10-fold CV. We note that it performs reasonably well in terms of the n · MSE criterion. However, it is unclear whether its scaled bias converges to zero for large n, so our method may be too biased. The coverage of 95% Wald CIs is close to the nominal level, suggesting that the bias is fairly small relative to the standard error of the estimator at the sample sizes considered. One possible explanation for the good performance observed is that the L2(P0)-convergence rate of $\theta_n^*$ is much faster than $n^{-1/14}$, which allows for a slower convergence rate of the approximation error $\|\dot\psi \circ \theta_0 - \Pi_{n,\theta_0}(\dot\psi) \circ \theta_0\|$ (see Appendix C). This simulation shows that our proposed method may still perform well even if Condition C5 is violated, especially when the initial ML fit is close to the unknown function.

Table 3.

Performance of the plug-in estimator of Ψ(θ0) = P0(f ∘ θ0) based on data-adaptive series. Here f is not infinitely differentiable. The relative MSE is n · MSE/ξ², where $\xi^2 := P_0\mathrm{IF}^2$ is the asymptotic variance that the n · MSE of an AL estimator should converge to; the root-n absolute relative bias is $\sqrt{n}\,|\mathrm{bias}/\Psi(\theta_0)|$. The performance appears to be acceptable in view of the small MSE and reasonable CI coverage.

n relative MSE root-n absolute relative bias 95% Wald CI coverage
500 0.88 3.95 0.97
1000 0.89 3.73 0.96
2000 0.79 3.15 0.97
5000 0.78 2.02 0.97
10000 0.88 2.57 0.97
20000 0.88 1.75 0.96

5. Generalized data-adaptive series

5.1. Proposed method

As in Section 4, we consider the case that Ψ is scalar-valued in this section. The assumption that $\dot\Psi = \dot\psi \circ \theta_0$ may be too restrictive for general summaries as in Examples 2–4, especially if $\dot\Psi$ is not derived analytically (see Remark 2). In this section, we generalize the method in Section 4 to deal with these summaries. Letting $I_x$ be the identity function defined on $\mathcal{X}$, we can readily generalize the above method to the case where $\dot\Psi$ can be represented as $\dot\psi \circ (\theta_0, I_x)$ for a function $\dot\psi : \mathbb{R}^q \times \mathcal{X} \to \mathbb{R}^q$; that is, $\dot\Psi(x) = \dot\psi(\theta_0(x), x)$. This form holds trivially if we set $\dot\psi(t, x) = \dot\Psi(x)$, i.e., $\dot\psi$ is independent of its first argument, but we can utilize flexible ML methods if $\dot\psi$ is nontrivial. Again, we assume Θ is a vector space of $\mathbb{R}^q$-valued functions equipped with the L2(P0)-inner product. We assume $\dot\psi$ can be approximated well by a basis $\phi_1, \phi_2, \ldots : \mathbb{R}^q \times \mathcal{X} \to \mathbb{R}^q$, and consider the data-adaptive sieve-like subspace $\Theta_n \equiv \Theta_{n,\theta_n^0} := \mathrm{Span}\{\phi_1, \ldots, \phi_K\} \circ (\theta_n^0, I_x)$. We propose to use $\Psi(\theta_n^*)$ to estimate Ψ(θ0), where $\theta_n^* = \theta_n^*(\theta_n^0) \in \operatorname{argmin}_{\theta \in \Theta_n} n^{-1}\sum_{i=1}^n l(\theta)(V_i)$ denotes the series estimator within Θn minimizing the empirical risk.
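
A sketch of the generalized construction under the squared-error loss with q = 1 and a scalar covariate: here the basis functions are taken, purely for illustration, to be trigonometric terms in the rescaled initial fit together with trigonometric terms in the rescaled covariate, so that each basis function depends on both arguments of $(\theta_n^0, I_x)$. The choice of basis, the rescaling, and the helper names are our assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def _trig_features(u, n_terms):
    """Columns sin(2*pi*k*u), k = 1, ..., n_terms, for a vector u in [0, 1]."""
    if n_terms == 0:
        return np.empty((len(u), 0))
    return np.column_stack([np.sin(2 * np.pi * k * u) for k in range(1, n_terms + 1)])

def generalized_series_fit(theta0_hat, x, z, K):
    """Empirical risk minimizer over Span{phi_1, ..., phi_K} o (theta_n0, I_x)
    under the squared-error loss, with the K basis functions split between
    terms in the rescaled initial fit and terms in the rescaled covariate."""
    x = np.asarray(x, dtype=float).ravel()
    t = np.asarray(theta0_hat(x), dtype=float).ravel()
    t_rng, x_rng = (t.min(), t.max()), (x.min(), x.max())
    scale = lambda u, rng: (np.asarray(u, dtype=float).ravel() - rng[0]) / (rng[1] - rng[0] + 1e-12)

    def design(x_in):
        x_in = np.asarray(x_in, dtype=float).ravel()
        t_in = scale(theta0_hat(x_in), t_rng)
        return np.hstack([_trig_features(t_in, K - K // 2),
                          _trig_features(scale(x_in, x_rng), K // 2)])

    reg = LinearRegression().fit(design(x), np.asarray(z, dtype=float))
    return lambda xnew: reg.predict(design(xnew))
```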

5.2. Results for proposed method

With a slight abuse of notation, in this section we use I to denote the function (t, x) ↦ t, where $t \in \mathbb{R}^q$ and $x \in \mathcal{X}$. Again, we use $\pi_n \equiv \pi_{n,\theta_n^0}$ to denote the projection operator onto $\Theta_{n,\theta_n^0}$. Let $\Pi_{n,\theta}$ be defined such that, for any function $g : \mathbb{R}^q \times \mathcal{X} \to \mathbb{R}^q$ with $g \circ (\theta, I_x) \in L^2(P_0)$, it holds that $\Pi_{n,\theta}(g) \circ (\theta, I_x) = \pi_{n,\theta}(g \circ (\theta, I_x))$; that is, letting $\beta_j$ be the quantity that depends on g and θ such that $\pi_{n,\theta}(g \circ (\theta, I_x)) = (\sum_{j=1}^K \beta_j\phi_j) \circ (\theta, I_x)$, we define $\Pi_{n,\theta}(g) := \sum_{j=1}^K \beta_j\phi_j$.

We introduce conditions and derive theoretical results that are parallel to those in Section 4.

Condition C3* (Sufficiently small approximation error to I for $\Theta_{n,\theta_0}$). $\|\theta_0 - \Pi_{n,\theta_0}(I) \circ (\theta_0, I_x)\| = o(n^{-1/4})$.

Condition C4* (Sufficiently small approximation error to $\dot\psi$ for $\Theta_{n,\theta_0}$ and convergence rate of $\theta_n^*$). $\|[\dot\psi - \Pi_{n,\theta_0}(\dot\psi)] \circ (\theta_0, I_x)\| \cdot \|\theta_n^* - \theta_0\| = o_p(n^{-1/2})$.

Additional regularity conditions can be found in Appendix B.3. Since $\dot\Psi$ may depend on components of P0 other than θ0, Condition C4* may impose smoothness conditions on these components so that $\dot\psi$ can be well approximated by the chosen series. For example, in Example 2, Condition C4* requires that $\nabla p_0/p_0$ can be approximated well by the series; in Examples 3 and 4, Condition C4* imposes the same requirement on the propensity score. We now present a theorem that establishes the efficiency of the plug-in estimator based on $\theta_n^*$.

Theorem 5 (Efficiency of plug-in estimator). Under Conditions A1–A4, C1, C2, C3*, C4*, C6*, C7*, C8 and C9, Ψ(θn*) is an asymptotically linear estimator of Ψ(θ0) with influence function v ↦ α0,l−1{−l0[ψ˙ ∘ (θ0, Ix)](v) + EP0[l0[ψ˙ ∘ (θ0, Ix)](V)]}, that is,

Ψ(θn*) = Ψ(θ0) + (1/n) Σi=1n α0,l−1{−l0[ψ˙ ∘ (θ0, Ix)](Vi) + EP0[l0[ψ˙ ∘ (θ0, Ix)](V)]} + op(n−1/2).

As a consequence, √n [Ψ(θn*) − Ψ(θ0)] →d N(0, ξ²) with ξ² ≡ VarP0(l0[ψ˙ ∘ (θ0, Ix)](V))/α0,l². In addition, under Conditions E1 and E2 in Appendix B.4, Ψ(θn*) is efficient under a nonparametric model.

Remark 4. When Ψ depends on both θ0 and P0, we can readily adapt this method as in Section 4.3.

We now present a condition for selecting K via k-fold CV, in parallel with Condition C5 from Section 4.4.

Condition C5* (Bounded approximation error of ψ˙ relative to I). There exists a constant C > 0 such that, with probability tending to one, ‖ψ˙ ∘ (θn0, Ix) − ΠK,θn0(ψ˙) ∘ (θn0, Ix)‖ ≤ C ‖θn0 − ΠK,θn0(I) ∘ (θn0, Ix)‖.

Remark 5. Similarly to Condition C5, Condition C5* requires that the identity function I not be contained in the span of finitely many terms of the chosen series and that ψ˙ be sufficiently smooth so that it can be approximated well by the chosen series. However, Condition C5* may be far more stringent than Condition C5, and it may be overly stringent in practice. Since Ψ˙ may depend on components of P0 other than θ0, Condition C5* may require these components to be sufficiently smooth. When a common candidate series such as the trigonometric series is used, a sufficient condition for Condition C5* is that ψ˙ be infinitely differentiable with bounded derivatives, which in turn imposes assumptions on the smoothness of other components of P0. For example, in Example 2, a sufficient condition for Condition C5* is that p0′/p0 be infinitely differentiable with bounded derivatives; in Examples 3 and 4, a sufficient condition is that the propensity score function satisfy the same requirement. Due to the stringency of Condition C5*, we conduct a simulation in Section 5.3.2 to understand the performance of our proposed method when this condition is violated. The simulation appears to indicate that our method may be robust against violation of Condition C5*.

The following theorem shows that k-fold CV can be used to select K under certain conditions.

Theorem 6 (Efficiency of CV-based plug-in estimator). Assume that Conditions A1–A4, C1, C2, C3*, C6*, C7*, C8 and C9 hold for a deterministic K = K(n). Suppose that part (a) of Condition C7* holds. With θn# ≡ θ*K*(θn0), where K* denotes the CV-selected number of series terms, Ψ(θn#) is an asymptotically linear estimator of Ψ(θ0) with influence function v ↦ α0,l−1{−l0[ψ˙ ∘ (θ0, Ix)](v) + EP0[l0[ψ˙ ∘ (θ0, Ix)](V)]}, that is,

Ψ(θn#) = Ψ(θ0) + (1/n) Σi=1n α0,l−1{−l0[ψ˙ ∘ (θ0, Ix)](Vi) + EP0[l0[ψ˙ ∘ (θ0, Ix)](V)]} + op(n−1/2).

Therefore, √n [Ψ(θn#) − Ψ(θ0)] →d N(0, ξ²) with ξ² ≡ VarP0(l0[ψ˙ ∘ (θ0, Ix)](V))/α0,l². In addition, under Conditions E1 and E2 in Appendix B.4, Ψ(θn#) is efficient under a nonparametric model.

5.3. Simulation

In the following simulations, we consider the problem in Example 4. As we show in Appendix A, letting g0 : x ↦ P0(A = 1 | X = x) be the propensity score and setting θ = (μ0, μ1) with l(θ) : v ↦ a[z − μ1(x)]² + (1 − a)[z − μ0(x)]², the generalized data-adaptive series methodology may be used to obtain an efficient estimator. As in Section 4.5, we conduct two simulation studies, the first demonstrating Theorem 6 and the other exploring the robustness of CV against violation of Condition C5*.

5.3.1. Demonstration of Theorem 6

We choose θ0 to be a discontinuous function while g0 is highly smooth. We compare the performance of plug-in estimators based on three different nonparametric regression methods: (i) polynomial regression with the degree selected by 5-fold CV (poly), which results in a traditional sieve estimator; (ii) gradient boosting (xgb) [7, 8, 15, 16]; and (iii) the generalized data-adaptive trigonometric series with gradient boosting as the initial ML fit and 5-fold CV to select the number of terms in the series (xgb.trig). We also include a one-step corrected (debiased) version of the gradient-boosting plug-in estimator (xgb.1step) as a benchmark. Further details of the simulation setting are provided in Appendix E.
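
For orientation, the sketch below shows the structure of the plug-in step for this estimand. It uses scikit-learn's GradientBoostingRegressor as a stand-in for xgboost, omits all tuning, and the optional `refit` hook is a hypothetical placeholder that could wrap a generalized data-adaptive series refit such as the sketch in Section 5.1 (yielding the xgb.trig variant); none of these names come from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def plugin_cate_variance(x, a, y, refit=None):
    """Plug-in estimate of Psi(theta_0) = Var(mu_{0,1}(X) - mu_{0,0}(X)).
    Fits one regression per treatment arm; `refit(x, y, mu)`, if supplied, maps an
    initial fitted function to a data-adaptive series refit (as in xgb.trig)."""
    x2d = x.reshape(-1, 1)
    mu_hat = {}
    for arm in (0, 1):
        idx = (a == arm)
        gb = GradientBoostingRegressor().fit(x2d[idx], y[idx])
        mu = lambda x_new, gb=gb: gb.predict(x_new.reshape(-1, 1))
        mu_hat[arm] = refit(x[idx], y[idx], mu) if refit is not None else mu
    diff = mu_hat[1](x) - mu_hat[0](x)
    return np.var(diff)   # empirical variance of the estimated CATE
```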

Figure 6 presents the relative MSE and the root-n absolute relative bias of each estimator, whereas Table 4 presents the coverage probability of 95% Wald CIs based on these estimators. A few simulation runs exhibited noticeably poor behavior, so we trimmed the most extreme values (1% of all Monte Carlo runs) when computing the MSE and bias in Figure 6. The outliers may be caused by the performance of gradient boosting and the instability of 5-fold CV; in practice, the user may ensemble additional ML methods and use 10-fold CV to mitigate such behavior. We note that the xgb.trig and xgb.1step estimators perform well, while the poly and xgb plug-in estimators do not appear to be efficient. Based on gradient boosting, our estimator and the one-step corrected estimator both appear to be efficient, but the construction of our estimator has the advantage of not requiring the analytic expression of an influence function.

Figure 6.

The relative MSE, n · MSE/ξ², and the root-n absolute relative bias, √n |bias/Ψ(θ0)|, of estimators of Ψ(θ0) = VarP0(μ0,1(X) − μ0,0(X)), where μ0,a : x ↦ EP0[Y | A = a, X = x]. ξ² := P0 IF² is the asymptotic variance to which the n · MSE of an AL estimator should converge. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with the trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The y-axis for relative MSE is on a logarithmic scale for readability. Note that the n · MSE for xgb.trig and xgb.1step tends to ξ², but those for poly and xgb do not.

Table 4.

Coverage probability of 95% Wald CIs based on estimators of Ψ(θ0) = VarP0(μ0,1(X) − μ0,0(X)), where μ0,a : x ↦ EP0[Y | A = a, X = x]. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with the trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The CIs are constructed based on the influence function. The coverage probabilities for xgb.trig and xgb.1step are relatively close to 95%, but those for poly and xgb are not.

n | poly | xgb | xgb.1step | xgb.trig
500 | 0.85 | 0.76 | 0.89 | 0.90
1000 | 0.68 | 0.78 | 0.93 | 0.93
2000 | 0.44 | 0.81 | 0.93 | 0.92
5000 | 0.11 | 0.80 | 0.89 | 0.87
10000 | 0.00 | 0.79 | 0.92 | 0.90
20000 | 0.00 | 0.67 | 0.91 | 0.88

5.3.2. Violation of Condition C5*

We also study via simulation the behavior of our estimator when Condition C5* is violated. We note that whether Condition C5* holds depends on the smoothness of g0. We choose g0 to be rougher than I, with g0 an element of C²[−1, 1] but not of C³[−1, 1]. Consequently, Ψ˙ cannot be approximated by our generalized data-adaptive series as well as I can, but its smoothness is sufficient for the existence of a deterministic K achieving efficiency. Appendix E describes further details of this simulation setting.

Table 5 presents the performance of our estimator based on 5-fold CV. We observe that its scaled MSE appears to converge to one, but it is unclear whether its scaled bias converges to zero for large n, so our method may be overly biased. The coverage of 95% Wald CIs is close to the nominal level, suggesting that the bias may be fairly small relative to the standard error of the estimator at the sample sizes considered. Therefore, according to this simulation, our generalized data-adaptive series methodology appears to be robust against violation of Condition C5*.

Table 5.

Performance of the plug-in estimator of Ψ(θ0) = VarP0(μ0,1(X) − μ0,0(X)), where μ0,a : x ↦ EP0[Y | A = a, X = x], based on the data-adaptive series. Here the propensity score g0 : x ↦ EP0[A | X = x] is rough. The relative MSE is n · MSE/ξ², where ξ² := P0 IF² is the asymptotic variance to which the n · MSE of an AL estimator should converge; the root-n absolute relative bias is √n |bias/Ψ(θ0)|. The performance appears to be acceptable in view of the small MSE and reasonable CI coverage.

n | relative MSE | root-n absolute relative bias | 95% Wald CI coverage
500 | 1.02 | 0.28 | 0.92
1000 | 1.13 | 0.26 | 0.91
2000 | 1.10 | 0.19 | 0.94
5000 | 1.03 | 0.02 | 0.93
10000 | 0.96 | 0.23 | 0.95
20000 | 0.99 | 0.24 | 0.94

6. Discussion

Numerous methods have been proposed to construct efficient estimators for statistical parameters under a nonparametric model, but each of them has one or more of the following undesirable limitations: (i) their construction may require specialized expertise that is not accessible to most statisticians; (ii) for any given data set, there may be little guidance, if any, on how to select a key tuning parameter; and (iii) they may require stringent smoothness conditions, especially on derivatives. In this paper, we propose two sieve-like methods that can partially overcome these difficulties.

Our first approach, namely that based on HAL, can be further generalized to the case in which the flexible fit is an empirical risk minimizer over a function class assumed to contain the unknown function. The key Condition B2 would need to be modified in that case so that it still ensures that certain perturbations of the unknown function lie in that function class. We note that our methods may also be applied under semiparametric models.

A major direction for future work is to construct valid CIs without knowledge of the influence function of the resulting plug-in estimator. The nonparametric bootstrap is in general invalid when the overall summary is not Hadamard differentiable, and especially when the method relies on CV [2, 10], but a model-based bootstrap is a possible solution (Chapter 28 of [31]). However, in many cases only certain components of the true data-generating distribution must be estimated to obtain a plug-in estimator, while its variance may depend on other components that are not explicitly estimated; generating valid model-based bootstrap samples is therefore generally difficult.

Our proposed sieve-like methods may be used to construct efficient plug-in estimators for new applications in which the relevant theoretical results are difficult to derive. They may also inspire new methods to construct such estimators under weaker conditions.

Acknowledgements

This work was partially supported by the National Institutes of Health under award numbers DP2-LM013340 and R01HL137808. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Appendix A: Modification of chosen norm for evaluating the conditions: case study of mean counterfactual outcome

In this appendix, we consider a parameter that requires a modification of the chosen norm when evaluating the conditions. In particular, we discuss estimating the counterfactual mean outcome in Example 3.

Let g0 : x ↦ P0(A = 1 | X = x) be the propensity score function. A natural choice of loss function is l(θ) : v ↦ a[y − θ(x)]². Indeed, learning a function with this loss is equivalent to fitting a regression function within the stratum of observations that received treatment 1. Unfortunately, this loss function does not satisfy Condition A2 with the L2(P0)-norm, because P0{l(θ) − l(θ0)} = P0{g0 · (θ − θ0)²} cannot be well approximated by α0,l P0{(θ − θ0)²}/2 for any constant α0,l > 0 unless g0 is constant. One way to overcome this challenge is to choose the alternative inner product ⟨θ1, θ2⟩g0 := P0{g0 θ1 θ2} and its induced norm ‖·‖g0. In this case, Condition A2 is satisfied once ‖·‖ is replaced by ‖·‖g0 in the condition statement. Under this choice, Ψ(θ) − Ψ(θ0) = P0(θ − θ0) = ⟨1/g0, θ − θ0⟩g0.

We may redefine the corresponding Ψ˙ as the function that satisfies

Ψ(θ) − Ψ(θ0) = ⟨Ψ˙, θ − θ0⟩g0,

and it immediately follows that Ψ˙ = 1/g0. Moreover, under a strong positivity condition, namely that g0(X) ≥ δg > 0 almost surely for some δg, which is a typical condition in the causal inference literature [31, 34], it is straightforward to show that δg‖h‖² ≤ ‖h‖g0² ≤ ‖h‖² for all h; that is, ‖·‖g0 is equivalent to the L2(P0)-norm. Using this fact, it can be shown that all other conditions with respect to the L2(P0)-inner product are equivalent to the corresponding conditions with respect to ⟨·,·⟩g0.
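
To make the identification of Ψ˙ explicit, the following short derivation uses only the definitions above (in particular that, as in Example 3, Ψ(θ) − Ψ(θ0) = P0(θ − θ0)); it is a sketch we add for clarity rather than a display from the paper.

```latex
% Sketch: identifying \dot\Psi under the weighted inner product,
% using \Psi(\theta)-\Psi(\theta_0)=P_0(\theta-\theta_0) and
% \langle\theta_1,\theta_2\rangle_{g_0}=P_0\{g_0\,\theta_1\theta_2\}.
\begin{align*}
\Psi(\theta)-\Psi(\theta_0)
  = P_0(\theta-\theta_0)
  = P_0\!\left\{g_0\cdot\tfrac{1}{g_0}\,(\theta-\theta_0)\right\}
  = \left\langle \tfrac{1}{g_0},\,\theta-\theta_0\right\rangle_{g_0},
\end{align*}
```

so that Ψ˙ = 1/g0 is the Riesz representer of the map θ − θ0 ↦ Ψ(θ) − Ψ(θ0) with respect to ⟨·,·⟩g0.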

Therefore, the data-adaptive series can be applied to the estimation of the counterfactual mean outcome under our conditions stated for the L2(P0)-inner product. If we use the targeted form in Remark 2, then we need a flexible estimator of g0 and the procedure is almost identical to a TMLE [31]. If we use the generalized data-adaptive series, we would instead require a sufficient amount of smoothness for g0. In the latter case, the change in norm when evaluating the conditions is a purely technical device and the estimation procedure is the same as if we had used the L2(P0)-norm. We also note that the same argument may be used to show that in Example 4, with l(θ) : v ↦ a[z − μ1(x)]² + (1 − a)[z − μ0(x)]² being the usual squared-error loss, we may choose the alternative inner product ⟨θ1, θ2⟩g0 ≡ P0{θ1⊤ diag(1 − g0, g0) θ2} and find that Ψ˙ = (−2[(μ0,1 − μ0,0) − P0(μ0,1 − μ0,0)]/(1 − g0), 2[(μ0,1 − μ0,0) − P0(μ0,1 − μ0,0)]/g0), as we did in Section 5.3.
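
For completeness, a brief sketch of how this form of Ψ˙ arises, writing Δ0 := μ0,1 − μ0,0; this is our own direct calculation from Ψ(θ) = VarP0(μ1(X) − μ0(X)) and the weighted inner product above, added for illustration.

```latex
% Sketch: Gateaux derivative of \Psi(\theta)=\mathrm{Var}_{P_0}\{\mu_1(X)-\mu_0(X)\}
% at \theta_0=(\mu_{0,0},\mu_{0,1}) in direction h=(h_0,h_1), with \Delta_0:=\mu_{0,1}-\mu_{0,0}.
\begin{align*}
\left.\tfrac{d}{d\epsilon}\Psi(\theta_0+\epsilon h)\right|_{\epsilon=0}
  &= 2\,P_0\!\left[\{\Delta_0-P_0\Delta_0\}\,(h_1-h_0)\right] \\
  &= P_0\!\left[(1-g_0)\cdot\tfrac{-2\{\Delta_0-P_0\Delta_0\}}{1-g_0}\,h_0\right]
   + P_0\!\left[g_0\cdot\tfrac{2\{\Delta_0-P_0\Delta_0\}}{g_0}\,h_1\right]
   = \langle \dot\Psi,\,h\rangle_{g_0},
\end{align*}
```

which matches the displayed form of Ψ˙ componentwise.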

Appendix B: Additional conditions

Throughout the rest of this appendix, we use C to denote a general absolute positive constant that can vary line by line.

B.1. HAL

Condition B3 (Empirical process conditions). For any fixed ϑ ∈ Θv,M and some Δ > 0, it holds that l(θ), l0[θ − θ0] and {r[θ − θ0] − r[θ + δ(ϑ − θ) − θ0]}/δ are càdlàg for all θ ∈ Θv,M and all δ ∈ [0, Δ]. Moreover, the following terms are all finite:

supθ∈Θv,M ‖l(θ)‖v,    supθ∈Θv,M ‖l0[θ − θ0]‖v,    supθ∈Θv,M, δ∈[0,Δ] ‖{r[θ − θ0] − r[θ − θ0 + δ(ϑ − θ)]}/δ‖v.

In addition, l0[θ̂n − θ0] and supδ∈[0,Δ] {r[θ̂n − θ0] − r[θ̂n − θ0 + δ(ϑ − θ̂n)]}/δ converge to 0 in probability.

Condition B4 (Finite variance of influence function). ξ² ≡ VarP0(l0[Ψ˙](V))/α0,l² < ∞.

B.2. Data-adaptive series

Condition C6 (Local Lipschitz continuity of Πn,θ0(I)). For sufficiently large n,

‖Πn,θ0(I) ∘ θ − Πn,θ0(I) ∘ θ0‖ ≤ C ‖θ − θ0‖

for all θ ∈ Θ with ‖θ − θ0‖ ≤ n−1/4.

Condition C7 (Local Lipschitz continuity of ψ˙ and Πn,θ0(ψ˙)). For sufficiently large n, for all θ ∈ Θ with ‖θ − θ0‖ ≤ n−1/4,

  1. ‖ψ˙ ∘ θ − ψ˙ ∘ θ0‖ ≤ C ‖θ − θ0‖;

  2. ‖Πn,θ0(ψ˙) ∘ θ − Πn,θ0(ψ˙) ∘ θ0‖ ≤ C ‖θ − θ0‖.

Condition C8 (Empirical process conditions). There exists some constant Δ > 0 such that

supδ∈[0,Δ] |(Pn − P0){ (r[θn* − θ0] − r[πn((1 − δ)θn* + δ(±Ψ˙ + θ0)) − θ0]) / δ }| = op(n−1/2),
(Pn − P0) l0[(±Ψ˙ + θ0) − πn(±Ψ˙ + θ0)] = op(n−1/2),
(Pn − P0) l0[θn* − θ0] = op(n−1/2).

Condition C9 (Finite variance of influence function). ξ² ≡ VarP0(l0[Ψ˙](V))/α0,l² < ∞.

B.3. Generalized data-adaptive series

Condition C6* (Local Lipschitz continuity of projected I for Θn,θ0). For sufficiently large n, ‖Πn,θ0(I) ∘ (θ, Ix) − Πn,θ0(I) ∘ (θ0, Ix)‖ ≤ C ‖θ − θ0‖ for all θ ∈ Θ with ‖θ − θ0‖ ≤ n−1/4.

Condition C7* (Local Lipschitz continuity of ψ˙ and its projection for Θn,θ0). For sufficiently large n, for all θ ∈ Θ with ‖θ − θ0‖ ≤ n−1/4,

  1. ‖ψ˙ ∘ (θ, Ix) − ψ˙ ∘ (θ0, Ix)‖ ≤ C ‖θ − θ0‖;

  2. ‖Πn,θ0(ψ˙) ∘ (θ, Ix) − Πn,θ0(ψ˙) ∘ (θ0, Ix)‖ ≤ C ‖θ − θ0‖.

B.4. Conditions for efficiency of the plug-in estimator

Define a collection of submodels

{{PH,δ : δ ∈ BH} : H ∈ H}

for which: (i) H is a subset of L0²(P0) and the L0²(P0)-closure of its linear span is L0²(P0); and (ii) each {PH,δ : δ ∈ BH} is a regular univariate parametric submodel that passes through P0 and has score H for δ at δ = 0. For each H ∈ H and δ ∈ BH, we define θH,δ ∈ argminθ∈Θ PH,δ l(θ). In this appendix, for all small-o and big-O notations, we let δ → 0 with H fixed.

Condition E1 (Sufficiently close risk minimizer). For any given H ∈ H, ‖θH,δ − θ0‖ = o(δ1/2).

Condition E2 (Quadratic behavior of loss function remainder near 0). For any given H ∈ H and ϑ, there exists a positive δ′ = o(δ) such that (PH,δ − P0){r[(1 − δ′)(θH,δ − θ0) + δ′Ψ˙] − r[θH,δ − θ0]}/δ′ = o(δ).

Appendix C: Discussion of technical conditions for data-adaptive series and its generalization

C.1. Theorem 2

Condition C2 usually imposes an upper bound on the growth rate of K. To see this, we show that Condition C2 is equivalent to a certain term being op(n−1/4), and that an upper bound on this term is controlled by K. Let θ̄n ∈ argminθ∈Θn P0 l(θ) be the true-risk minimizer in Θn. Under Conditions A2, C1, C3 and C6, by Lemma 5, it follows that Condition C2 is equivalent to requiring that ‖θn* − θ̄n‖ = op(n−1/4). Note that θn* minimizes the empirical risk in Θn, and M-estimation theory [32] can be used to show that ‖θn* − θ̄n‖ is upper bounded by an empirical process term whose magnitude is related to the complexity of Θn, namely how fast K grows with sample size. To ensure that this bound is op(n−1/4), K must not grow too quickly.

Condition C3 assumes that the identity function can be well approximated by the series ϕk with the specified number of terms K in the L2(Pθ0) sense. If Span{ϕ1, …, ϕK} does not contain I for any K, then sufficiently many terms must be included to satisfy this condition; that is, this condition imposes a lower bound on the rate at which K should grow with n. Even if Span{ϕ1, …, ϕK} does contain I for some finite K, this condition still requires that K is not too small.

Condition C4 is implied by the following condition in view of Lemma 3:

Condition C4s. ‖[ψ˙ − Πn,θ0(ψ˙)] ∘ θ0‖ = o(n−1/4).

This condition is similar to Condition C3. However, in general, we do not expect ψ˙ to be contained in Span{ϕ1, …, ϕK} for any K, and hence this condition generally imposes a lower bound on the rate of K. Note that Condition C4s is stronger than Condition C4, and there are interesting examples where C4 holds but C4s fails. Indeed, if θn* converges to θ0 at a rate much faster than n−1/4, then C4 can be satisfied even if ‖[ψ˙ − Πn,θ0(ψ˙)] ∘ θ0‖ decays to zero in probability relatively slowly; that is, the convergence rate of θn* can compensate for the approximation error of ψ˙. This is one way in which we can benefit from using flexible ML algorithms to estimate θ0: if θn0 converges to θ0 at a fast rate, then we can expect θn* to also have a fast convergence rate.

Conditions C2, C3 and C4 are not stringent provided that ψ˙ has sufficiently smooth derivatives and a reasonable series is used. For example, as noted in [4], when ψ˙ has a bounded p-th order derivative and a polynomial, trigonometric or spline series of degree at least p + 1 is used, then, provided K²/n → 0 (K³/n → 0 for the polynomial series), the term in Condition C2 is Op((K/n)1/2); the terms in Condition C3 and the sufficient Condition C4s are O(K−p/q). Therefore, we can select K to grow at a rate faster than nq/(4p) and slower than n1/2 (n1/3 for the polynomial series). If p is large, then this allows for a wide range of rates for K. Typically Ψ˙ (and hence ψ˙) is related only to the summary of interest Ψ and not to the true function θ0. For example, for the summary Ψ(θ) = P0(fθ) at the beginning of Section 4.1, ψ˙ = f is variation independent of θ0. It is often the case that Ψ is smooth and so is ψ˙, so p is often sufficiently large for this window to be wide.
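
As a concrete illustration of the resulting window for K (the values p = 4 and q = 1 below are example values of ours, not choices made in the paper):

```latex
% Illustration only: q = 1, p = 4 are example values.
% Approximation error:  O(K^{-p/q}) = o(n^{-1/4})      =>  K \gg n^{q/(4p)} = n^{1/16}.
% Estimation error:     O_p((K/n)^{1/2}) = o_p(n^{-1/4}) =>  K = o(n^{1/2})  (o(n^{1/3}) for polynomials).
n^{1/16} \;\ll\; K \;\ll\; n^{1/2}, \qquad \text{e.g., } K \asymp n^{1/4}.
```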

Condition C6 is usually easy to satisfy. Since Πn,θ0(I) is a linear combination of {ϕk : k ∈ {1, …, K}} and approximates the highly smooth function I, if the series ϕk is smooth, then we can expect Πn,θ0(I) to be Lipschitz uniformly over n, that is, Condition C6 to hold. For example, using polynomial series, cubic splines or the trigonometric series implies that this condition holds.

Condition C7 imposes Lipschitz continuity conditions on ψ˙ and Πn,θ0(ψ˙) uniformly over n. The Lipschitz continuity of ψ˙ has been discussed above. As for Πn,θ0(ψ˙), similarly to Condition C6, as long as the series ϕk being used is smooth, Πn,θ0(ψ˙) would be Lipschitz continuous uniformly over n.

C.2. Theorem 5

The conditions are similar to those in Theorem 2. However, Condition C4* can be more stringent than Condition C4. For the generalized data-adaptive series, the dimension of the argument of the series is larger; hence, as noted in [4], C4* may require more smoothness of ψ˙ in order for ψ˙ to be well approximated by Πn,θ0(ψ˙). Moreover, in general, we cannot expect the smoothness of ψ˙ to depend on Ψ alone and on no components of P0, so the amount of smoothness of ψ˙ may be more limited in practice.

It is also worth noting that, similarly to Theorem 2, a sufficient condition for Condition C4* is the following:

Condition C4*s. ‖[ψ˙ − Πn,θ0(ψ˙)] ∘ (θ0, Ix)‖ = o(n−1/4).

Appendix D: Lemmas and technical proofs

D.1. Highly Adaptive Lasso (HAL)

Proof of Theorem 1. Under Conditions A2 and B1–B3, Lemma 1 and its corollary in [26] show that ‖θ̂n − θ0‖ = op(n−1/4).

We now show that small perturbations of θ̂n in certain directions are contained in Θv,M. Let ϑδ = θ̂n + δ(Ψ˙ + θ0 − θ̂n) be a path indexed by δ (0 ≤ δ < 1) that is a perturbation of θ̂n. Note that, for all δ, ϑδ is càdlàg by Condition B1, and we have that

‖ϑδ‖v = ‖(1 − δ)θ̂n + δ(Ψ˙ + θ0)‖v ≤ (1 − δ)‖θ̂n‖v + δ(‖Ψ˙‖v + ‖θ0‖v) ≤ (1 − δ)M + δM = M

by Condition B2. Hence ϑδ ∈ Θv,M. The same result holds for the path ϑ̃δ ≡ θ̂n + δ(−Ψ˙ + θ0 − θ̂n).

Combining this observation with the P0-Donsker property of Θv,M′ for any fixed M′ > 0 [9] and Conditions A1–A2 and B4, we see that all of the conditions of Theorem 1 in [24] are satisfied with all sieves taken to be Θv,M. The desired asymptotic linearity result follows. The efficiency result is shown in Appendix D.3. □

Proof of Lemma 1. Recall that X ⊆ ℝd. Similarly to x(l), let x(u) = inf{x : P0(X ≤ x) = 1}, where the infimum and ≤ are entrywise. To avoid clumsy notation, in this proof we drop the subscript in θ0 and write θ instead. This should not introduce confusion because other functions (e.g., an estimator of θ0) are not involved in the statement or proof. Using the results reviewed in Section 3.1,

‖Ψ˙‖v = |Ψ˙(x(l))| + Σs⊆{1,2,…,d}, s≠∅ ∫xs(l)xs(u) |Ψ˙s(du)| = |Ψ˙(x(l))| + Σs⊆{1,2,…,d}, s≠∅ ∫xs(l)xs(u) |ψ˙(z)||z=θ(u) |θs(du)|.

Since

|θ(x)| = |θ(x(l)) + Σs⊆{1,2,…,d}, s≠∅ ∫xs(l)xs θs(du)| ≤ |θ(x(l))| + Σs⊆{1,2,…,d}, s≠∅ ∫xs(l)xs |θs(du)| ≤ |θ(x(l))| + Σs⊆{1,2,…,d}, s≠∅ ∫xs(l)xs(u) |θs(du)| = ‖θ‖v,

we have |ψ˙(z)||z=θ(u) ≤ supz : |z| ≤ ‖θ0‖v |ψ˙(z)| = B for all x(l) ≤ u ≤ x(u), so

‖Ψ˙‖v ≤ |Ψ˙(x(l))| + Σs⊆{1,2,…,d}, s≠∅ ∫xs(l)xs(u) B |θs(du)| ≤ |Ψ˙(x(l))| + B ‖θ0‖v.

Lemma 2 (CV-selected bound not much smaller than the true function's variation norm). Suppose that Condition B1 holds, θ0 is càdlàg, ‖θ0‖v < ∞ and, for any M, supθ∈Θv,M l(θ) < ∞. Let Mn be a (possibly random) sequence such that P0{l(θ̂n,Mn) − l(θ0)} = op(1). Then, for any ϵ > 0, with probability tending to one, Mn ≥ ‖θ0‖v − ϵ. Therefore, for any fixed ϵ > 0, with probability tending to one, Mn + ϵ ≥ (‖θ0‖v − ϵ) + ϵ = ‖θ0‖v.

Proof of Lemma 2. We prove the claim by contradiction. Suppose that the claim is not true, i.e., that there exist ϵ, δ > 0 such that P(Mn < ‖θ0‖v − ϵ) ≥ δ for all n ∈ N, where N is an infinite set. Let θ0,M ∈ argminθ∈Θv,M P0 l(θ). Then, for all n ∈ N, with probability at least δ,

P0{l(θ̂n,Mn) − l(θ0)} = P0{l(θ̂n,Mn) − l(θ0,Mn)} + P0{l(θ0,Mn) − l(θ0)} ≥ P0{l(θ0,Mn) − l(θ0)} ≥ P0{l(θ0,‖θ0‖v−ϵ) − l(θ0)},

which is a positive constant since the function class Θv,‖θ0‖v−ϵ does not contain θ0 and this term is a non-negligible bias. This contradicts the assumption that P0{l(θ̂n,Mn) − l(θ0)} = op(1), and hence the desired result follows. □

Therefore, if ‖Ψ˙‖v ≤ F(‖θ0‖v) for a known increasing function F, then, with probability tending to one, Mn + ϵ + F(Mn + ϵ) is a valid bound on ‖θ̂n‖v that can be used to obtain an efficient plug-in estimator. Moreover, if the bound is loose, i.e., ‖Ψ˙‖v < F(‖θ0‖v), and F is continuous, then there exists some ϵ > 0 such that ‖Ψ˙‖v ≤ F(‖θ0‖v − ϵ) − ϵ, and hence ‖θ0‖v + ‖Ψ˙‖v ≤ Mn + F(Mn) with probability tending to one.

Note that this lemma only concerns learning a function-valued feature, not estimating Ψ(θ0). There are examples where Ψ˙ depends on components of P0, say η0, other than θ0. However, if η0 can also be learned via HAL, then Lemma 2 can be applied to it as well. Therefore, if it is known that ‖Ψ˙‖v ≤ F(‖θ0‖v, ‖η0‖v) for a known increasing function F, then we can use a bound on ‖θ̂n‖v, obtained in a similar fashion as above from the sequence Mn, to construct an efficient plug-in estimator Ψ(θ̂n).

Now consider obtaining Mn by k-fold CV from a set of candidate bounds. Then, under Conditions B1–B3, by (i) Lemma 1 and its corollary in [26], and (ii) the oracle inequality for k-fold CV in [29], we have P0{l(θ̂n,Mn) − l(θ0)} = op(n−1/4) provided that (i) one candidate bound is no smaller than ‖θ0‖v, and (ii) the number of candidate bounds is fixed. Therefore, the above results apply to this case.
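
The following sketch shows the k-fold CV selection over a fixed grid of candidate bounds described above. Here `fit_constrained` is a hypothetical placeholder for a learner constrained to have variation norm at most M (for example, a HAL fit); it is not part of any real library API, and the loss, grid and inflation amount are the user's choices.

```python
import numpy as np

def cv_select_bound(x, z, candidate_bounds, fit_constrained, loss, k=5, eps=0.1):
    """Select the variation-norm bound M_n by k-fold CV over a fixed grid of candidates,
    then inflate it by eps as discussed above. `fit_constrained(x, z, M)` must return
    a prediction function; `loss(z, pred)` returns elementwise losses."""
    n = len(x)
    folds = np.array_split(np.random.permutation(n), k)
    cv_risk = []
    for M in candidate_bounds:
        risks = []
        for v in range(k):
            test = folds[v]
            train = np.concatenate([folds[j] for j in range(k) if j != v])
            fit = fit_constrained(x[train], z[train], M)
            risks.append(np.mean(loss(z[test], fit(x[test]))))
        cv_risk.append(np.mean(risks))
    M_n = candidate_bounds[int(np.argmin(cv_risk))]
    return M_n + eps   # inflated bound used in constructing the plug-in estimator
```

For instance, with the squared-error loss one would pass `loss=lambda z, p: (z - p) ** 2`.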

D.2. Data-adaptive series estimation

We first present and prove two lemmas that lead to Theorems 2 and 5.

Lemma 3 (Convergence rate of the sieve estimator). Under Conditions C1, C3 and C6, ‖πn(θ0) − θ0‖ = op(n−1/4). Under the additional Condition C2, ‖θn* − θ0‖ = op(n−1/4).

Proof of Lemma 3. By the triangle inequality, ‖πn(θ0) − θ0‖ ≤ ‖θ0 − θn0‖ + ‖θn0 − πn(θn0)‖ + ‖πn(θn0) − πn(θ0)‖. We bound these three terms separately.

Term 1: By Condition C1, ‖θ0 − θn0‖ = op(n−1/4).

Term 2: By the definition of projection operator,

‖θn0 − πn(θn0)‖ = ‖θn0 − Πn,θn0(I) ∘ θn0‖ ≤ ‖θn0 − Πn,θ0(I) ∘ θn0‖.

We bound the right-hand side by showing that this term is close to ‖θ0 − Πn,θ0(I) ∘ θ0‖ up to an op(n−1/4) term. By the reverse triangle inequality and the triangle inequality,

|‖θn0 − Πn,θ0(I) ∘ θn0‖ − ‖θ0 − Πn,θ0(I) ∘ θ0‖| ≤ ‖[θn0 − Πn,θ0(I) ∘ θn0] − [θ0 − Πn,θ0(I) ∘ θ0]‖ = ‖[θn0 − θ0] − [Πn,θ0(I) ∘ θn0 − Πn,θ0(I) ∘ θ0]‖ ≤ ‖θn0 − θ0‖ + ‖Πn,θ0(I) ∘ θn0 − Πn,θ0(I) ∘ θ0‖ ≤ ‖θn0 − θ0‖ + C‖θn0 − θ0‖,

which is op(n−1/4) by Condition C1, the last inequality in the preceding display following from Condition C6. Therefore, by Condition C3,

‖θn0 − πn(θn0)‖ ≤ ‖θn0 − Πn,θ0(I) ∘ θn0‖ ≤ ‖θ0 − Πn,θ0(I) ∘ θ0‖ + op(n−1/4) = op(n−1/4).

Term 3: By the definition of the projection operator and Condition C1, ‖πn(θn0) − πn(θ0)‖ ≤ ‖θn0 − θ0‖ = op(n−1/4).

Conclusion from the three bounds: ‖πn(θ0) − θ0‖ = op(n−1/4).

If, in addition, Condition C2 also holds, then ‖θn* − θ0‖ ≤ ‖πn(θ0) − θ0‖ + ‖θn* − πn(θ0)‖ = op(n−1/4). □

The same result holds for the generalized data-adaptive series under Conditions C1, C6*, C3* and C2 (if relevant). The proof is almost identical and is therefore omitted.

Lemma 4 (Approximation error to ψ˙). Under Condition C7, ‖ψ˙ ∘ θ0 − πn(ψ˙ ∘ θ0)‖ ≤ C‖θn0 − θ0‖ + ‖ψ˙ ∘ θ0 − Πn,θ0(ψ˙) ∘ θ0‖. Therefore, under Conditions C1–C4, ‖ψ˙ ∘ θ0 − πn(ψ˙ ∘ θ0)‖ · ‖θn* − θ0‖ = op(n−1/2).

Proof of Lemma 4. By the definition of the projection operator and triangle inequality,

‖ψ˙ ∘ θ0 − πn(ψ˙ ∘ θ0)‖ ≤ ‖ψ˙ ∘ θ0 − πn(ψ˙ ∘ θn0)‖ ≤ ‖ψ˙ ∘ θ0 − ψ˙ ∘ θn0‖ + ‖ψ˙ ∘ θn0 − πn(ψ˙ ∘ θn0)‖.

We bound the two terms on the right-hand side separately.

Term 1: By Condition C7, ‖ψ˙ ∘ θ0 − ψ˙ ∘ θn0‖ ≤ C‖θ0 − θn0‖.

Term 2: This term can be bounded similarly as in Lemma 3. By the reverse triangle inequality and the triangle inequality,

|‖ψ˙ ∘ θn0 − Πn,θ0(ψ˙) ∘ θn0‖ − ‖ψ˙ ∘ θ0 − Πn,θ0(ψ˙) ∘ θ0‖| ≤ ‖[ψ˙ ∘ θn0 − Πn,θ0(ψ˙) ∘ θn0] − [ψ˙ ∘ θ0 − Πn,θ0(ψ˙) ∘ θ0]‖ = ‖[ψ˙ ∘ θn0 − ψ˙ ∘ θ0] − [Πn,θ0(ψ˙) ∘ θn0 − Πn,θ0(ψ˙) ∘ θ0]‖ ≤ ‖ψ˙ ∘ θn0 − ψ˙ ∘ θ0‖ + ‖Πn,θ0(ψ˙) ∘ θn0 − Πn,θ0(ψ˙) ∘ θ0‖ ≤ C‖θn0 − θ0‖ + C‖θn0 − θ0‖            (Condition C7) = C‖θn0 − θ0‖.

Therefore, by the definition of the projection operator and Condition C7,

‖ψ˙ ∘ θn0 − πn(ψ˙ ∘ θn0)‖ ≤ ‖ψ˙ ∘ θn0 − Πn,θ0(ψ˙) ∘ θn0‖ ≤ ‖ψ˙ ∘ θ0 − Πn,θ0(ψ˙) ∘ θ0‖ + C‖θn0 − θ0‖.

Conclusion from the two bounds: ‖ψ˙ ∘ θ0 − πn(ψ˙ ∘ θ0)‖ ≤ C‖θn0 − θ0‖ + ‖ψ˙ ∘ θ0 − Πn,θ0(ψ˙) ∘ θ0‖.

Under Conditions C1–C4, using Lemma 3, it follows that ‖ψ˙ ∘ θ0 − πn(ψ˙ ∘ θ0)‖ · ‖θn* − θ0‖ = op(n−1/2). □

Note that πn is a linear operator. Lemmas 3 and 4, along with the other conditions, essentially verify the assumptions of Corollary 2 in [24]. We can prove the asymptotic linearity result of Theorem 2 similarly, as follows.

Proof of Theorem 2. We note that

Pn l(θn*) = Pn l(θ0) + P0[l(θn*) − l(θ0)] + (Pn − P0)[l(θn*) − l(θ0)] = Pn l(θ0) + P0[l(θn*) − l(θ0)] + (Pn − P0) l0[θn* − θ0] + (Pn − P0) r[θn* − θ0].

Let ϵn be an arbitrary sequence of positive real numbers that is o(n−1/2). We may replace θn* with πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) in the above equation. We first consider πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)):

Pn l(πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙))) = Pn l(θ0) + P0[l(πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙))) − l(θ0)] + (Pn − P0) l0[πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0] + (Pn − P0) r[πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0]. (1)

Take the difference between the above two equations. By the linearity of l0 and πn, we have that

Pn l(πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙))) − Pn l(θn*) = P0[l(πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙))) − l(θ0)] − P0[l(θn*) − l(θ0)] + (Pn − P0) l0[πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θn*] + (Pn − P0){r[πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0] − r[θn* − θ0]}.

We next analyze the three lines on the right-hand side of the above equation separately.

Line 1: Under Condition A2,

P0[l(πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙))) − l(θ0)] − P0[l(θn*) − l(θ0)] = (α0,l/2)‖πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0‖² − (α0,l/2)‖θn* − θ0‖² + op(‖πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0‖² + ‖θn* − θ0‖²).

We subtract and add (1 − ϵn)θn* + ϵn(θ0 + Ψ˙) in the first term. By the fact that πn is linear and πn(θn*) = θn*, the display continues as

= (α0,l/2)‖{πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − [(1 − ϵn)θn* + ϵn(θ0 + Ψ˙)]} + {(1 − ϵn)θn* + ϵn(θ0 + Ψ˙) − θ0}‖² − (α0,l/2)‖θn* − θ0‖² + op(‖πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0‖² + ‖θn* − θ0‖²)
= (α0,l/2)‖ϵn{πn(θ0 + Ψ˙) − (θ0 + Ψ˙)} + (θn* − θ0) + ϵn(Ψ˙ + θ0 − θn*)‖² − (α0,l/2)‖θn* − θ0‖² + op(‖πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0‖² + ‖θn* − θ0‖²)
= ϵn α0,l ⟨θn* − θ0, Ψ˙⟩ + ϵn² (α0,l/2)‖πn(θ0 + Ψ˙) − θn*‖² + ϵn α0,l ⟨πn(θ0) − θ0, θn* − θ0⟩ + ϵn α0,l ⟨πn(Ψ˙) − Ψ˙, θn* − θ0⟩ − ϵn α0,l ‖θn* − θ0‖² + op(‖πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0‖² + ‖θn* − θ0‖²).

By the Cauchy–Schwarz inequality, the display continues as

≤ ϵn α0,l ⟨θn* − θ0, Ψ˙⟩ + ϵn² (α0,l/2)‖πn(θ0 + Ψ˙) − θn*‖² + ϵn α0,l ‖πn(θ0) − θ0‖ ‖θn* − θ0‖ + ϵn α0,l ‖πn(Ψ˙) − Ψ˙‖ ‖θn* − θ0‖ − ϵn α0,l ‖θn* − θ0‖² + op(‖πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0‖² + ‖θn* − θ0‖²).

By Lemmas 3–4 and the assumption that ϵn = o(n−1/2), the display continues as

= ϵn α0,l ⟨θn* − θ0, Ψ˙⟩ + ϵn op(n−1/2) + op(‖πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0‖² + ‖θn* − θ0‖²).

Line 2: We subtract and add (1 − ϵn)θn* + ϵn(θ0 + Ψ˙). By the linearity of l0, Condition C8, and the fact that πn(θn*) = θn*, we have that

(Pn − P0) l0[πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θn*] = (Pn − P0) l0[(1 − ϵn)θn* + ϵn(θ0 + Ψ˙) − θn*] + ϵn (Pn − P0) l0[πn(θ0 + Ψ˙) − (θ0 + Ψ˙)] = ϵn (Pn − P0) l0[Ψ˙] − ϵn (Pn − P0) l0[θn* − θ0] + ϵn op(n−1/2) = ϵn (Pn − P0) l0[Ψ˙] + ϵn op(n−1/2).

Line 3: By Condition C8, this term is ϵnop(n−1/2).

Conclusion of the three lines: It holds that

Pn l(πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙))) − Pn l(θn*) ≤ ϵn α0,l ⟨θn* − θ0, Ψ˙⟩ + ϵn (Pn − P0) l0[Ψ˙] + ϵn op(n−1/2) + op(‖πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) − θ0‖² + ‖θn* − θ0‖²).

Since θn* is an empirical risk minimizer, the left-hand side is non-negative. Thus,

0 ≤ ⟨θn* − θ0, Ψ˙⟩ + (Pn − P0) α0,l−1 l0[Ψ˙] + op(n−1/2).

Similarly, by replacing πn((1 − ϵn)θn* + ϵn(θ0 + Ψ˙)) with πn((1 − ϵn)θn* + ϵn(θ0 − Ψ˙)) in (1), we derive that

0 ≤ −⟨θn* − θ0, Ψ˙⟩ − (Pn − P0) α0,l−1 l0[Ψ˙] + op(n−1/2).

Therefore, |⟨θn* − θ0, Ψ˙⟩ + (Pn − P0) α0,l−1 l0[Ψ˙]| = op(n−1/2). By Conditions A3–A4 and Lemma 3,

Ψ(θn*) − Ψ(θ0) = ⟨θn* − θ0, Ψ˙⟩ + op(n−1/2) = −(Pn − P0) α0,l−1 l0[Ψ˙] + op(n−1/2).

The asymptotic linearity of Ψ(θn*) follows. We prove the efficiency in Appendix D.3. □

The proof of Theorem 5 is almost identical.

Next, we present and prove a lemma that allows us to interpret Condition C2 as an upper bound on the rate of K.

Lemma 5. Under Conditions A2, C1, C3 (C3*, resp.) and C6 (C6*, resp.), ‖πn(θ0) − θ̄n‖ = op(n−1/4), where θ̄n is the true-risk minimizer in Θn defined in Appendix C.1.

Proof of Lemma 5. By the definition of θ̄n and Condition A2, we have

‖θ̄n − θ0‖² ≤ C P0{l(θ̄n) − l(θ0)} ≤ C P0{l(πn(θ0)) − l(θ0)} ≤ C‖πn(θ0) − θ0‖²,

the right-hand side of which is op(n−1/2) by Lemma 3 (or its corresponding version under Conditions C6* and C3*). Therefore, ‖θ̄n − θ0‖ = op(n−1/4) and hence ‖πn(θ0) − θ̄n‖ ≤ ‖πn(θ0) − θ0‖ + ‖θ̄n − θ0‖ = op(n−1/4).

We finally prove the efficiency of the data-adaptive series estimator with K selected by CV.

Proof of Theorem 4. By Lemma 3 and Condition A2, for the existing deterministic K, P0{l(θ*K(θn0)) − l(θ0)} ≤ C‖θ*K(θn0) − θ0‖² = op(n−1/2). By the oracle inequality for CV in [29], P0{l(θn#) − l(θ0)} = op(n−1/2). By Condition A2, ‖θn# − θ0‖² ≤ C P0{l(θn#) − l(θ0)} = op(n−1/2), and hence ‖θn# − θ0‖ = op(n−1/4). So, with probability tending to one,

‖ψ˙ ∘ θn0 − πK*,θn0(ψ˙ ∘ θn0)‖ = ‖ψ˙ ∘ θn0 − ΠK*,θn0(ψ˙) ∘ θn0‖ ≤ C‖θn0 − ΠK*,θn0(I) ∘ θn0‖          (Condition C5)
≤ C‖θn0 − θn#‖      (definition of the projection operator)
≤ C(‖θn0 − θ0‖ + ‖θn# − θ0‖),       (triangle inequality)

which is op(n−1/4) by Condition C1 and the bound on ‖θn# − θ0‖ above. Hence,

‖ψ˙ ∘ θ0 − πK*,θn0(ψ˙ ∘ θ0)‖ ≤ ‖ψ˙ ∘ θ0 − πK*,θn0(ψ˙ ∘ θn0)‖ ≤ ‖ψ˙ ∘ θ0 − ψ˙ ∘ θn0‖ + ‖ψ˙ ∘ θn0 − πK*,θn0(ψ˙ ∘ θn0)‖ ≤ C‖θn0 − θ0‖ + op(n−1/4),          (Condition C7)

which is op(n−1/4) by Condition C1.

This bounds the approximation error ‖ψ˙ ∘ θ0 − πK*,θn0(ψ˙ ∘ θ0)‖ for ψ˙, a result similar to Lemma 4 combined with Conditions C1 and C4*s. Similarly to Theorem 2, along with the other conditions, the assumptions of Corollary 2 in [24] are essentially satisfied, and hence an almost identical argument shows that Ψ(θn#) is an asymptotically linear estimator of Ψ(θ0). We prove the efficiency in Appendix D.3. □

D.3. Efficiency

Proof of efficiency of the proposed estimators. It suffices to show that the influence function of our proposed estimators is the canonical gradient under a nonparametric model. Let H ∈ H be fixed. In the rest of this proof, for all small-o and big-O notations, we let δ → 0. The proof is similar to the proof of asymptotic linearity in [24], except that the estimator of θ0 and the empirical distribution Pn are replaced by θH,δ and PH,δ, respectively.

Let δ′ satisfy Condition E2. We note that

PH,δ l(θH,δ) = PH,δ l(θ0) + P0[l(θH,δ) − l(θ0)] + (PH,δ − P0)[l(θH,δ) − l(θ0)] = PH,δ l(θ0) + P0[l(θH,δ) − l(θ0)] + (PH,δ − P0) l0[θH,δ − θ0] + (PH,δ − P0) r[θH,δ − θ0].

We also note that (1 − δ′)θH,δ + δ′(θ0 + Ψ˙) ∈ Θ if |δ| is sufficiently small. Then, similarly, by replacing θH,δ with (1 − δ′)θH,δ + δ′(θ0 + Ψ˙) in the above equation, we have that

PH,δ l((1 − δ′)θH,δ + δ′(θ0 + Ψ˙)) = PH,δ l(θ0) + P0[l((1 − δ′)θH,δ + δ′(θ0 + Ψ˙)) − l(θ0)] + (PH,δ − P0) l0[(1 − δ′)θH,δ + δ′(θ0 + Ψ˙) − θ0] + (PH,δ − P0) r[(1 − δ′)(θH,δ − θ0) + δ′Ψ˙]. (2)

Take the difference between the above two equations. By the linearity of l0, we have that

PH,δ l((1 − δ′)θH,δ + δ′(θ0 + Ψ˙)) − PH,δ l(θH,δ)
= P0[l((1 − δ′)θH,δ + δ′(θ0 + Ψ˙)) − l(θ0)] − P0[l(θH,δ) − l(θ0)] + δ′(PH,δ − P0) l0[Ψ˙ − θH,δ + θ0] + (PH,δ − P0){r[(1 − δ′)(θH,δ − θ0) + δ′Ψ˙] − r[θH,δ − θ0]}
= (α0,l/2)‖(1 − δ′)(θH,δ − θ0) + δ′Ψ˙‖² − (α0,l/2)‖θH,δ − θ0‖² + o(‖θH,δ − θ0‖² + ‖(1 − δ′)(θH,δ − θ0) + δ′Ψ˙‖²)         (Condition A2)
 + δ′(PH,δ − P0) l0[Ψ˙] − δ′(PH,δ − P0) l0[θH,δ − θ0] + δ′ o(δ)      (Condition E2)
= δ′ α0,l ⟨θH,δ − θ0, Ψ˙⟩ − δ′ α0,l ‖θH,δ − θ0‖² + δ′² (α0,l/2)‖Ψ˙ − (θH,δ − θ0)‖² + o(‖θH,δ − θ0‖² + ‖(1 − δ′)(θH,δ − θ0) + δ′Ψ˙‖²) + δ′(PH,δ − P0) l0[Ψ˙] + δ′ o(δ)
≤ δ′ α0,l ⟨θH,δ − θ0, Ψ˙⟩ + δ′² (α0,l/2)‖Ψ˙ − (θH,δ − θ0)‖² + o(‖θH,δ − θ0‖² + ‖(1 − δ′)(θH,δ − θ0) + δ′Ψ˙‖²) + δ′(PH,δ − P0) l0[Ψ˙] + δ′ o(δ).

Since the left-hand side of the above display is nonnegative, by Condition E1, we have that

0 ≤ ⟨θH,δ − θ0, Ψ˙⟩ + α0,l−1 (PH,δ − P0) l0[Ψ˙] + O(δ′) + o(δ) = ⟨θH,δ − θ0, Ψ˙⟩ + α0,l−1 (PH,δ − P0) l0[Ψ˙] + o(δ).

Similarly, by replacing (1 − δ′)θH,δ + δ′(θ0 + Ψ˙) with (1 − δ′)θH,δ + δ′(θ0 − Ψ˙) in (2), we can show that 0 ≤ −⟨θH,δ − θ0, Ψ˙⟩ − α0,l−1 (PH,δ − P0) l0[Ψ˙] + o(δ). Therefore, |⟨θH,δ − θ0, Ψ˙⟩ + α0,l−1 (PH,δ − P0) l0[Ψ˙]| = o(δ) and

Ψ(θH,δ) − Ψ(θ0) = ⟨θH,δ − θ0, Ψ˙⟩ + O(‖θH,δ − θ0‖²) = −α0,l−1 (PH,δ − P0) l0[Ψ˙] + o(δ) + O(‖θH,δ − θ0‖²) = −α0,l−1 (PH,δ − P0) l0[Ψ˙] + o(δ).        (Condition E1)

Consequently, limδ→0 [Ψ(θH,δ) − Ψ(θ0)]/δ = −P0{α0,l−1 l0[Ψ˙] H}, and hence the canonical gradient of Ψ under a nonparametric model is α0,l−1{−l0[Ψ˙] + P0 l0[Ψ˙]}. Since the influence functions of our asymptotically linear estimators are equal to this canonical gradient, our proposed estimators are efficient under a nonparametric model. □

Appendix E: Simulation setting details

In all simulations, since θ0(x) = EP0[Z | X = x] is the conditional mean function, the loss function was chosen to be the squared-error loss l(θ) : v ↦ (z − θ(x))².

E.1. HAL

In the simulation, we generate data from the distribution defined by

X ~ N(0, 1),    θ0(x) = exp{(1 + 2x + 2x²)/2},    Z | X = x ~ Exponential(rate = 1/θ0(x)).

The sample sizes being considered are 500, 1000, 2000, 5000 and 10000. For each scenario we run 1000 replicates. We chose M.gcv+ to be 3.1 times M.cv.
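
A minimal data-generation sketch for this setting; the sample size and seed below are arbitrary choices of ours.

```python
import numpy as np

def generate_hal_sim_data(n, rng=None):
    """Generate (X, Z) from the HAL simulation setting:
    X ~ N(0, 1), theta_0(x) = exp{(1 + 2x + 2x^2)/2}, Z | X = x ~ Exponential with mean theta_0(x)."""
    rng = np.random.default_rng(rng)
    x = rng.normal(size=n)
    theta0 = np.exp((1 + 2 * x + 2 * x ** 2) / 2)
    z = rng.exponential(scale=theta0)   # rate = 1/theta_0(x) <=> scale (mean) = theta_0(x)
    return x, z

x, z = generate_hal_sim_data(1000, rng=1)
```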

E.2. Data-adaptive series

E.2.1. Demonstration of Theorem 4

In the simulation, we generate data from the distribution defined by X ~ Unif(−1, 1), Z | X = x ~ N(θ0(x), 0.25²) where

θ0:xI(1x<3/4)+πI(3/4x<1/2)+10x2I(1/4x<1/4)+2I(1/4x<1/2)+exp(1)I(1/2x<3/4)+33I(3/4x1),

When using the trigonometric series, we first shift and scale the range of the initial fit to [−1/2, 1/2] and then use the basis for the interval [−1, 1] (i.e., sin(jπz), cos(jπz)) in sieve estimation, to avoid the poor behavior of the trigonometric series near the boundary. We consider sample sizes 500, 1000, 2000, 5000, 10000 and 20000. For each sample size, we run 1000 simulations.
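
A short sketch of this setting (reading N(θ0(x), 0.25²) as having standard deviation 0.25); `theta0` stands in for the piecewise-defined true regression function displayed above, and the rescaling helper implements the shift-and-scale step just described.

```python
import numpy as np

def generate_series_sim_data(n, theta0, rng=None):
    """Generate (X, Z) with X ~ Unif(-1, 1) and Z | X = x ~ N(theta0(x), 0.25^2)."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(-1, 1, size=n)
    z = rng.normal(loc=theta0(x), scale=0.25)
    return x, z

def rescale_to_half_interval(t):
    """Shift and scale values of the initial ML fit to [-1/2, 1/2] before applying the
    trigonometric basis for [-1, 1], avoiding poor behavior near the boundary."""
    return (t - t.min()) / (t.max() - t.min()) - 0.5
```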

E.2.2. Violation of Condition C5

In the simulation, we generate data from the distribution defined by X ~ Unif(−1, 1), Z | X = x ~ N(θ0(x), 1), where θ0 : x ↦ cos(10x). The estimand is Ψ(θ0) = P0(f ∘ θ0) where

f:z[310πcos(5πz)38]I(z<12)32z2I(12z<0)+3z2I(0z<12)+[32exp(24z)3z+154]I(z12).

We consider sample sizes 500, 1000, 2000, 5000, 10000 and 20000; for each sample size, we run 1000 simulations. Our goal is to explore the behavior of the plug-in estimator when f, instead of θ0, is rough, so we use kernel regression [17] to estimate θ0 for convenience.

E.3. Generalized data-adaptive series

E.3.1. Demonstration of Theorem 6

In the simulation, we generate data from the distribution defined by X ~ Unif(−1, 1), A | X = x ~ Bern(expit(−x)), Y | A = a, X = x ~ N(μ0,a(x), 0.25²) where

μ00:xI(1x<3/4)+πI(3/4x<1/2)+10x2I(1/4x<1/4)+2I(1/4x<1/2)+exp(1)I(1/2x<3/4)+33I(3/4x1),
μ01:xx2I(x<1/3)+exp(x)I(1/3x1/3)+I(x1/3)

The series is the tensor product [4] of univariate trigonometric series in E.2.1. The sample sizes are the same as in E.2.1.
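
A brief sketch of how a tensor-product basis can be formed from two univariate trigonometric bases; the truncation levels and evaluation points below are arbitrary illustrative choices of ours.

```python
import numpy as np

def trig_basis_1d(u, k):
    """First k trigonometric basis functions (1, sin(pi*u), cos(pi*u), ...) on [-1, 1]."""
    cols, j = [np.ones_like(u)], 1
    while len(cols) < k:
        cols.append(np.sin(j * np.pi * u))
        if len(cols) < k:
            cols.append(np.cos(j * np.pi * u))
        j += 1
    return np.column_stack(cols)

def tensor_product_basis(u1, u2, k1, k2):
    """Tensor product of two univariate bases: one column per pair of univariate
    basis functions, evaluated at the paired points (u1, u2)."""
    B1, B2 = trig_basis_1d(u1, k1), trig_basis_1d(u2, k2)
    return np.einsum('ij,il->ijl', B1, B2).reshape(len(u1), -1)

# Example: a 3 x 3 = 9-column tensor-product design at 100 paired points.
u1, u2 = np.random.uniform(-1, 1, 100), np.random.uniform(-1, 1, 100)
design = tensor_product_basis(u1, u2, 3, 3)
```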

E.3.2. Violation of Condition C5*

In the simulation, we generate data from the distribution defined by X ~ Unif(−1, 1), A|X = x ~ Bern(g0(x)), Y|A = a, X = x ~ N(μ0,a(x), 0.252) where μ0a : x ↦ exp(−x2 + 0.8ax + 0.5a) (a ∈ {0, 1}) and

g0:xexpit{(53x3154x253x2596)I(x12)+(56x4+53x3)I(12<x0)+53x3I(0<x12)+(5x2154x+56)I(x>12)}.

We consider sample sizes 500, 1000, 2000, 5000, 10000 and 20000; for each sample size, we run 1000 simulations. Our goal is to explore the behavior of the plug-in estimator when Ψ˙, instead of θ0, is rough, so we use kernel regression [17] to estimate θ0 for convenience.

References

  • [1] Benkeser D and van der Laan M (2016). The Highly Adaptive Lasso Estimator. In Data Science and Advanced Analytics (DSAA), 2016 IEEE International Conference on, 689–696. IEEE.
  • [2] Bickel P, Götze F and van Zwet W (1997). Resampling fewer than n observations: Gains, losses, and remedies for losses. Statistica Sinica 7.
  • [3] Bickel PJ and Ritov Y (2003). Nonparametric estimators which can be "plugged-in". Annals of Statistics 31 1033–1053.
  • [4] Chen X (2007). Chapter 76: Large Sample Sieve Estimation of Semi-Nonparametric Models. Handbook of Econometrics 6 5549–5632.
  • [5] Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C and Newey W (2017). Double/debiased/Neyman machine learning of treatment effects. American Economic Review 107 261–265.
  • [6] Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C, Newey W and Robins J (2018). Double/debiased machine learning for treatment and structural parameters. Econometrics Journal 21 C1–C68.
  • [7] Friedman JH (2001). Greedy Function Approximation: A Gradient Boosting Machine. Technical Report No. 5.
  • [8] Friedman JH (2002). Stochastic gradient boosting. Computational Statistics and Data Analysis 38 367–378.
  • [9] Gill RD, van der Laan MJ and Wellner JA (1993). Inefficient estimators of the bivariate survival function for three models. Rijksuniversiteit Utrecht, Mathematisch Instituut.
  • [10] Hall P (2013). The Bootstrap and Edgeworth Expansion. Springer Science & Business Media.
  • [11] Härdle W and Stoker TM (1989). Investigating smooth multiple regression by the method of average derivatives. Journal of the American Statistical Association 84 986–995.
  • [12] Levy J, van der Laan M, Hubbard A and Pirracchio R (2018). A Fundamental Measure of Treatment Effect Heterogeneity.
  • [13] Mallat S (2009). A Wavelet Tour of Signal Processing. Elsevier.
  • [14] Marron JS (1994). Visual understanding of higher-order kernels. Journal of Computational and Graphical Statistics 3 447–458.
  • [15] Mason L, Baxter J, Bartlett P and Frean M (1999). Boosting Algorithms as Gradient Descent in Function Space. Technical Report.
  • [16] Mason L, Baxter J, Bartlett PL and Frean M (2000). Boosting Algorithms as Gradient Descent. Technical Report.
  • [17] Nadaraya EA (1964). On estimating regression. Theory of Probability & Its Applications 9 141–142.
  • [18] Newey W, Hsieh F and Robins J (1998). Undersmoothing and Bias Corrected Functional Estimation. Working papers.
  • [19] Newey WK (1997). Convergence rates and asymptotic normality for series estimators. Journal of Econometrics 79 147–168.
  • [20] Newey WK, Hsieh F and Robins JM (2004). Twicing Kernels and a Small Bias Property of Semiparametric Estimators. Econometrica 72 947–962.
  • [21] Owen AB (2005). Multidimensional variation for quasi-Monte Carlo. In Contemporary Multivariate Analysis and Design of Experiments: In Celebration of Professor Kai-Tai Fang's 65th Birthday, 49–74. World Scientific.
  • [22] Pfanzagl J (1982). Contributions to a General Asymptotic Statistical Theory. Lecture Notes in Statistics 13. Springer, New York.
  • [23] Rubin DB (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66 688–701.
  • [24] Shen X (1997). On methods of sieves and penalization. Annals of Statistics 25 2555–2591.
  • [25] Tibshirani R (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological) 267–288.
  • [26] van der Laan M (2017). A Generally Efficient Targeted Minimum Loss Based Estimator based on the Highly Adaptive Lasso. International Journal of Biostatistics 13.
  • [27] van der Laan M and Rubin D (2006). Targeted Maximum Likelihood Learning. U.C. Berkeley Division of Biostatistics Working Paper Series.
  • [28] van der Laan MJ, Benkeser D and Cai W (2019). Efficient Estimation of Pathwise Differentiable Target Parameters with the Undersmoothed Highly Adaptive Lasso. arXiv preprint arXiv:1908.05607v1.
  • [29] van der Laan MJ and Dudoit S (2003). Unified cross-validation methodology for selection among estimators and a general cross-validated adaptive epsilon-net estimator: Finite sample oracle inequalities and examples. U.C. Berkeley Division of Biostatistics Working Paper.
  • [30] van der Laan MJ and Robins JM (2003). Unified Methods for Censored Longitudinal Data and Causality. Springer Series in Statistics. Springer, New York.
  • [31] van der Laan MJ and Rose S (2018). Targeted Learning in Data Science.
  • [32] van der Vaart A and Wellner J (2000). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer.
  • [33] Williamson B, Gilbert P, Simon N and Carone M (2017). Nonparametric variable importance assessment using machine learning techniques. UW Biostatistics Working Paper Series.
  • [34] Yang S and Ding P (2018). Asymptotic inference of causal effects with observational studies trimmed by the estimated propensity scores. Biometrika 105 487–493.
  • [35] Zhang CH (2010). Nearly unbiased variable selection under minimax concave penalty. Annals of Statistics 38 894–942.
