Author manuscript; available in PMC 2020 Feb 1.
Published in final edited form as: J R Stat Soc Series B Stat Methodol. 2018 Nov 2;81(1):75–99. doi: 10.1111/rssb.12299

An omnibus non-parametric test of equality in distribution for unknown functions

Alexander R Luedtke 1, Marco Carone 2, Mark J van der Laan 3
PMCID: PMC6476331  NIHMSID: NIHMS991509  PMID: 31024219

Abstract

We present a novel family of nonparametric omnibus tests of the hypothesis that two unknown but estimable functions are equal in distribution when applied to the observed data structure. We develop these tests, which represent a generalization of the maximum mean discrepancy tests described in Gretton et al. [2006], using recent developments from the higher-order pathwise differentiability literature. Despite their complex derivation, the associated test statistics can be expressed rather simply as U-statistics. We study the asymptotic behavior of the proposed tests under the null hypothesis and under both fixed and local alternatives. We provide examples to which our tests can be applied and show that they perform well in a simulation study. As an important special case, our proposed tests can be used to determine whether an unknown function, such as the conditional average treatment effect, is equal to zero almost surely.

Keywords: higher order pathwise differentiability, maximum mean discrepancy, omnibus test, equality in distribution, infinite dimensional parameter

1. Introduction

In many scientific problems, it is of interest to determine whether two particular functions are equal to each other. In many settings these functions are unknown and may be viewed as features of a data-generating mechanism from which observations can be collected. As such, these functions can be learned from available data, and estimates of these respective functions can then be compared. To reduce the risk of deriving misleading conclusions due to model misspecification, it is appealing to employ flexible statistical learning tools to estimate the unknown functions. Unfortunately, inference is usually extremely difficult when such techniques are used, because the resulting estimators tend to be highly irregular. In such cases, conventional techniques for constructing confidence intervals or computing p-values are generally invalid, and a more careful construction, as exemplified by the work presented in this article, is required.

To formulate the problem statistically, suppose that $n$ independent observations $O_1, O_2, \ldots, O_n$ are drawn from a distribution $P_0$ known only to lie in the nonparametric statistical model, denoted by $\mathcal{M}$. Let $\mathcal{O}$ denote the support of $P_0$, and suppose that $P \mapsto R_P$ and $P \mapsto S_P$ are parameters mapping from $\mathcal{M}$ into the space of univariate bounded real-valued measurable functions defined on $\mathcal{O}$, i.e. $R_P$ and $S_P$ are elements of the space of univariate bounded real-valued measurable functions defined on $\mathcal{O}$. For brevity, we will write $R_0 \equiv R_{P_0}$ and $S_0 \equiv S_{P_0}$. Our objective is to test the null hypothesis

$$H_0 : R_0(O) \overset{d}{=} S_0(O)$$

versus the complementary alternative $H_1$: not $H_0$, where $O$ follows the distribution $P_0$ and the symbol $\overset{d}{=}$ denotes equality in distribution. We note that $R_0(O) \overset{d}{=} S_0(O)$ if $R_0 \equiv S_0$, i.e. $R_0(O) = S_0(O)$ almost surely, but not conversely. The case where $S_0 \equiv 0$ is of particular interest since then the null simplifies to $H_0 : R_0 \equiv 0$. Because $P_0$ is unknown, $R_0$ and $S_0$ are not readily available. Nevertheless, the observed data can be used to estimate $P_0$ and hence each of $R_0$ and $S_0$. The approach we propose will apply to functionals within a specified class described later.

Before presenting our general approach, we describe some motivating examples. Consider the data structure $O = (W, A, Y) \sim P$, where $W$ is a collection of covariates, $A$ is a binary treatment indicator, and $Y$ is a bounded outcome, i.e., there exists a universal constant $c$ such that, for all $P \in \mathcal{M}$, $P(|Y| \le c) = 1$. Note that, in our examples, the condition that $Y$ is bounded cannot easily be relaxed, as the parameter from Gretton et al. [2006] on which we base our testing procedure requires that the quantities under consideration have compact support.

  • Example 1: Random sample size variant of the two-sample test from Gretton et al. [2006].
    • If $R_P(o) \equiv ay$ and $S_P(o) \equiv (1-a)y$, the null hypothesis corresponds to $Y \mid A = 1$ and $Y \mid A = 0$ sharing the same distribution. This differs from the setting considered in Gretton et al. [2006], who studied the case where the numbers of subjects with $A = 0$ and $A = 1$ are both fixed; in our setting, these group sizes are random while the total number of observed subjects is fixed. This is the simplest example that we will give in this work. In particular, it is our only example in which the functions $R_P$ and $S_P$ do not rely on the (unknown) data-generating distribution $P$.
  • Example 2: Testing a null conditional average treatment effect.
    • If $R_P(o) \equiv E_P(Y \mid A = 1, W = w) - E_P(Y \mid A = 0, W = w)$ and $S_P \equiv 0$, the null hypothesis corresponds to the absence of a conditional average treatment effect. This definition of $R_P$ corresponds to the so-called blip function introduced by Robins [2004], which plays a critical role in defining optimal personalized treatment strategies [Chakraborty and Moodie, 2013].
  • Example 3: Testing for equality in distribution of regression functions in two populations.
    • Consider the setting of the previous example, but where $A$ represents membership in population 0 or 1. If $R_P(o) \equiv E_P(Y \mid A = 1, W = w)$ and $S_P(o) \equiv E_P(Y \mid A = 0, W = w)$, the null hypothesis corresponds to the two conditional mean functions, each applied to a random draw of the covariates, having the same distribution in these two populations. We note here that our formulation considers selection of individuals from either population as random rather than fixed, so that population-specific sample sizes (as opposed to the total sample size) are themselves random. The same interpretation could also be used in the previous example, now testing whether the two regression functions are equivalent.
  • Example 4: Testing a null covariate effect on average response.
    • Suppose now that the data unit only consists of $O \equiv (W, Y)$. If $R_P(o) \equiv E_P(Y \mid W = w)$ and $S_P \equiv 0$, the null hypothesis corresponds to the outcome $Y$ having conditional mean zero in all strata of covariates. This may be interesting when zero has a special importance for the outcome, such as when the outcome is the profit over some period.
  • Example 5: Testing a null variable importance.
    • Suppose again that $O \equiv (W, Y)$ and $W \equiv (W^{(1)}, W^{(2)}, \ldots, W^{(K)})$. Denote by $W^{(-k)}$ the vector $(W^{(i)} : 1 \le i \le K, i \ne k)$. Setting $R_P(o) \equiv E_P(Y \mid W = w)$ and $S_P(o) \equiv E_P(Y \mid W^{(-k)} = w^{(-k)})$, the null hypothesis corresponds to $W^{(k)}$ having null variable importance in the presence of $W^{(-k)}$ with respect to the conditional mean of $Y$ given $W$, in the sense that $E_P(Y \mid W) = E_P(Y \mid W^{(-k)})$ almost surely. This is true because, if $R_0(W) \overset{d}{=} S_0(W^{(-k)})$, the latter random variables have equal variance, and so
      $$E_{P_0}\big\{\mathrm{Var}_{P_0}[R_0(W) \mid W^{(-k)}]\big\} = \mathrm{Var}_{P_0}[R_0(W)] - \mathrm{Var}_{P_0}\big\{E_{P_0}[R_0(W) \mid W^{(-k)}]\big\} = \mathrm{Var}_{P_0}[R_0(W)] - \mathrm{Var}_{P_0}[S_0(W^{(-k)})] = 0,$$
      implying that $\mathrm{Var}_{P_0}[R_0(W) \mid W^{(-k)}] = 0$ almost surely. Thus, a test of $R_P(O) \overset{d}{=} S_P(O)$ is equivalent to a test of almost sure equality between $R_P$ and $S_P$ in this example. We will show in Section 5 that our approach cannot be directly applied to this example, but that a simple extension yields a valid test.

Gretton et al. [2006] investigated the related problem of testing equality between two distributions in a two-sample problem. They proposed estimating the maximum mean discrepancy (hereafter referred to as MMD), a non-negative quantitative summary of the relationship between the two distributions. In particular, the MMD between distributions P1 and P2 for observations X is defined as

$$\sup_{f \in \mathcal{F}} \big( E_{P_1}[f(X)] - E_{P_2}[f(X)] \big). \qquad (1)$$

Defining the MMD relies on selecting a function class $\mathcal{F}$. Gretton et al. [2006] propose selecting $\mathcal{F}$ to be the unit ball in a reproducing kernel Hilbert space. They showed that, if the kernel defining this space is a so-called universal kernel and the support of $X$ under $P_1$ and $P_2$ is compact, the MMD is zero if and only if the two distributions are equal. They also observed that the Gaussian kernel is a universal kernel. Gretton and colleagues have also investigated related problems using this technique [see, e.g., Gretton et al., 2009, 2012a, Sejdinovic et al., 2013]. In this work, we also utilize the MMD as a parsimonious summary of equality but consider the more general problem wherein the null hypothesis relies on unknown functions $R_0$ and $S_0$ indexed by the data-generating distribution $P_0$.

Other investigators have proposed omnibus tests of hypotheses of the form $H_0$ versus $H_1$ in the literature. In the setting of Example 2 above, the work presented in Racine et al. [2006] and Lavergne et al. [2015] is particularly relevant. The null hypothesis of interest in these papers consists of the equality $E_{P_0}(Y \mid A, W) = E_{P_0}(Y \mid W)$ holding almost surely. If individuals have a nontrivial probability of receiving treatment in all strata of covariates, this null hypothesis is equivalent to $H_0$. In both these papers, kernel smoothing is used to estimate the required regression functions, so key smoothness assumptions are needed for their methods to yield valid conclusions. The method we present does not hinge on any particular class of estimators and therefore does not rely on this condition.

To develop our approach, we use techniques from the higher-order pathwise differentiability literature [see, e.g., Pfanzagl, 1985, Robins et al., 2008, van der Vaart, 2014, Carone et al., 2014]. Despite the elegance of the theory presented by these various authors, it has been unclear whether these higher-order methods are truly useful in infinite-dimensional models since most functionals of interest fail to be even second-order pathwise differentiable in such models. This is especially troublesome in problems in which under the null the first-order derivative of the parameter of interest (in an appropriately defined sense) vanishes, since then there seems to be no theoretical basis for adjusting parameter estimates to recover parametric rate asymptotic behavior. At first glance, the MMD parameter seems to provide one such disappointing example, since its first-order derivative indeed vanishes under the null. The latter fact is a common feature of problems wherein the null value of the parameter is on the boundary of the parameter space. It is also not an entirely surprising phenomenon, at least heuristically, since the MMD achieves its minimum of zero under the null hypothesis. Nevertheless, we are able to show that this parameter is indeed second-order pathwise differentiable under the null hypothesis – this is a rare finding in infinite-dimensional models. As such, we can employ techniques from the recent higher-order pathwise differentiability literature to tackle the problem at hand.

This paper is organized as follows. In Section 2, we formally present our parameter of interest, the squared MMD between two unknown functions, and establish asymptotic representations for this parameter based on its higher-order differentiability, which, as we formally establish, holds even when the MMD involves estimation of unknown nuisance parameters. In Section 3, we discuss estimation of this parameter, present the corresponding hypothesis test, and study its asymptotic behavior under the null. We study the consistency of our proposed test under fixed and local alternatives in Section 4. We revisit our examples in Section 5 and provide an additional example in which we can still make progress using our techniques even though our regularity conditions fail. In Section 6, we present results from a simulation study to illustrate the finite-sample performance of our test, and we end with concluding remarks in Section 7.

Supplementary Appendix A reviews higher-order pathwise differentiability. Supplementary Appendix B gives a summary of the empirical U-process results from Nolan and Pollard [1988] that we build upon. All proofs can be found in Supplementary Appendix C.

2. Properties of maximum mean discrepancy

2.1. Definition

For a distribution P and mappings T and U, we define

$$\Phi_{TU}(P) \equiv \iint e^{-[T_P(o_1) - U_P(o_2)]^2}\, dP(o_1)\, dP(o_2) \qquad (2)$$

and set $\Psi(P) \equiv \Phi_{RR}(P) - 2\Phi_{RS}(P) + \Phi_{SS}(P)$. The MMD between the distributions of $R_P(O)$ and $S_P(O)$ when $O \sim P$, defined in Eq. 1 using $\mathcal{F}$ the unit ball in the RKHS generated by the Gaussian kernel with unit bandwidth, is given by $\Psi(P)^{1/2}$, which is always well-defined because $\Psi(P)$ is non-negative. Indeed, denoting by $\psi_0$ the true parameter value $\Psi(P_0)$, Theorem 3 of Gretton et al. [2006] establishes that $\psi_0$ equals zero if $H_0$ holds and is otherwise strictly positive. Though the study in Gretton et al. [2006] is restricted to two-sample problems, their proof of this result is based only upon properties of $\Psi$ and therefore holds regardless of the sampling scheme. Their proof relies on the fact that two random variables $X$ and $Y$ with compact support are equal in distribution if and only if $E[f(Y)] = E[f(X)]$ for every continuous function $f$, and uses techniques from the theory of reproducing kernel Hilbert spaces [see, e.g., Berlinet and Thomas-Agnan, 2011, for a general exposition]. We invite interested readers to consult Gretton et al. [2006] – and, in particular, Theorem 3 therein – for additional details. The definition of the MMD we utilize is based on the univariate Gaussian kernel with unit bandwidth, which is appropriate in view of Steinwart [2002]. The results we present in this paper can be generalized to the MMD based on a Gaussian kernel of arbitrary bandwidth $h$ by simply rescaling the mappings $R$ and $S$ to $R/h$ and $S/h$.
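To fix ideas, here is a minimal R sketch (ours, not the authors') of the plug-in quantity $\Psi(P) = \Phi_{RR}(P) - 2\Phi_{RS}(P) + \Phi_{SS}(P)$ when $R_P$ and $S_P$ are known functions, with $P$ replaced by the empirical distribution of the sample; all names are illustrative.

```r
# Plug-in of Psi using the unit-bandwidth Gaussian kernel k(x, y) = exp(-(x - y)^2).
gauss_kernel <- function(x, y) exp(-outer(x, y, "-")^2)

psi_plugin <- function(r_vals, s_vals) {
  # r_vals, s_vals: R(O_i) and S(O_i) evaluated on the same sample O_1, ..., O_n
  mean(gauss_kernel(r_vals, r_vals)) -
    2 * mean(gauss_kernel(r_vals, s_vals)) +
    mean(gauss_kernel(s_vals, s_vals))
}

# Toy check: identical inputs give 0; shifted inputs give a strictly positive value.
set.seed(1)
x <- runif(200)
psi_plugin(x, x)        # 0
psi_plugin(x, x + 0.5)  # > 0
```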

2.2. First-order differentiability

To develop a test of $H_0$, we will first construct an estimator $\psi_n$ of $\psi_0$. In order to avoid restrictive model assumptions, we wish to use flexible estimation techniques in estimating $P_0$ and therefore $\psi_0$. To control the operating characteristics of our test, it will be crucial to understand how to generate a parametric-rate estimator of $\psi_0$. For this purpose, it is informative to first investigate the pathwise differentiability of $\Psi$ as a parameter from $\mathcal{M}$ to $\mathbb{R}$.

So far, we have not specified restrictions on the mappings $P \mapsto R_P$ and $P \mapsto S_P$. However, in our developments, we will require these mappings to satisfy certain regularity conditions. Specifically, we will restrict our attention to elements of the class $\mathcal{S}$ of all mappings $T$ for which there exists some measurable function $X_T$ defined on $\mathcal{O}$, e.g. $X_T(o) = X_T(w, a, y) = w$, such that

  • (S1)

    $T_P$ is a measurable mapping with domain $\{X_T(o) : o \in \mathcal{O}\}$ and range contained in $[-b, b]$ for some $0 \le b < \infty$ independent of $P$;

  • (S2)

    for all submodels $dP_t/dP = 1 + th$ with bounded $h$ satisfying $Ph = 0$, there exist some $\delta > 0$ and a set $\mathcal{O}_1 \subseteq \mathcal{O}$ with $P_0(\mathcal{O}_1) = 1$ such that, for all $(o, t_1) \in \mathcal{O}_1 \times (-\delta, \delta)$, $t \mapsto T_{P_t}(x_T)$ is twice differentiable at $t_1$ with uniformly bounded (in $x_T$) first and second derivatives;

  • (S3)
    for any $P \in \mathcal{M}$ and submodel $dP_t/dP = 1 + th$ for bounded $h$ with $Ph = 0$, there exists a function $D_P^T : \mathcal{O} \to \mathbb{R}$, uniformly bounded (in $P$ and $o$), such that $\int D_P^T(o)\, dP(o \mid x_T) = 0$ for almost all $x_T$ and
    $$\left.\frac{d}{dt} T_{P_t}(x_T)\right|_{t=0} = \int D_P^T(o)\, h(o)\, dP(o \mid x_T).$$

Condition (S1) ensures that T is bounded and only relies on a summary measure of an observation O. Condition (S2) ensures that we will be able to interchange differentiation and integration when needed. Condition (S3) is a conditional (and weaker) version of pathwise differentiability in that the typical inner product representation only needs to hold for the conditional distribution of O given XT under P0. We will verify in Section 5 that these conditions hold in the context of the motivating examples presented earlier.

Remark 1. As a caution to the reader, we warn that simultaneously satisfying (S1) and (S3) may at times be restrictive. For example, if the observed data unit is $O \equiv (W^{(1)}, W^{(2)}, Y)$, the parameter

$$T_P(o) \equiv E_P[Y \mid W^{(1)} = w^{(1)}, W^{(2)} = w^{(2)}] - E_P[Y \mid W^{(1)} = w^{(1)}]$$

cannot generally satisfy both conditions. In Section 5, we discuss this example further and provide a means to tackle this problem using the techniques we have developed. In concluding remarks, we discuss a weakening of our conditions, notably by replacing S by the linear span of elements in S. Consideration of this larger class significantly complicates the form of the estimator we propose in Section 3. □

We are now in a position to discuss the pathwise differentiability of $\Psi$. For any elements $T, U \in \mathcal{S}$, we define

$$\Gamma_P^{TU}(o_1, o_2) \equiv \Big[2\,[T_P(o_1) - U_P(o_2)]\,[D_P^U(o_2) - D_P^T(o_1)] + 1 - \big\{4\,[T_P(o_1) - U_P(o_2)]^2 - 2\big\}\, D_P^T(o_1)\, D_P^U(o_2)\Big]\, e^{-[T_P(o_1) - U_P(o_2)]^2}

and set $\Gamma_P \equiv \Gamma_P^{RR} - \Gamma_P^{RS} - \Gamma_P^{SR} + \Gamma_P^{SS}$. Note that $\Gamma_P$ is symmetric for any $P \in \mathcal{M}$. For brevity, we will write $\Gamma_0^{TU}$ and $\Gamma_0$ to denote $\Gamma_{P_0}^{TU}$ and $\Gamma_{P_0}$, respectively.
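As a concrete illustration, the following R sketch (ours, under the reconstruction of the display above) evaluates $\Gamma_P^{TU}$ and assembles the matrix of values $\Gamma_P(O_i, O_j)$ from the function values and conditional gradients evaluated at the sample; all names are illustrative.

```r
# Gamma_P^{TU}(o1, o2) with t1 = T_P(o1), u2 = U_P(o2), dt1 = D_P^T(o1), du2 = D_P^U(o2).
gamma_TU <- function(t1, u2, dt1, du2) {
  diff <- t1 - u2
  (2 * diff * (du2 - dt1) + 1 - (4 * diff^2 - 2) * dt1 * du2) * exp(-diff^2)
}

# Gamma_P = Gamma^RR - Gamma^RS - Gamma^SR + Gamma^SS over all observation pairs,
# given length-n vectors r = R_P(O_i), s = S_P(O_i), dr = D_P^R(O_i), ds = D_P^S(O_i).
gamma_matrix <- function(r, s, dr, ds) {
  n <- length(r)
  G <- matrix(0, n, n)
  for (i in seq_len(n)) {
    G[i, ] <- gamma_TU(r[i], r, dr[i], dr) -
      gamma_TU(r[i], s, dr[i], ds) -
      gamma_TU(s[i], r, ds[i], dr) +
      gamma_TU(s[i], s, ds[i], ds)
  }
  G
}
```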

The following theorem characterizes the first-order behavior of $\Psi$ at an arbitrary $P \in \mathcal{M}$.

Theorem 1 (First-order pathwise differentiability of $\Psi$ over $\mathcal{M}$). If $R, S \in \mathcal{S}$, the parameter $\Psi : \mathcal{M} \to \mathbb{R}$ is pathwise differentiable at $P \in \mathcal{M}$ with first-order canonical gradient given by $D^1\Psi(P)(o) \equiv 2\left[\int \Gamma_P(o, o_2)\, dP(o_2) - \Psi(P)\right]$.

Under some conditions, it is straightforward to construct an asymptotically linear estimator of $\psi_0$ with influence function $D^1\Psi(P_0)$, that is, an estimator $\psi_n$ of $\psi_0$ such that

$$\psi_n - \psi_0 = \frac{1}{n}\sum_{i=1}^n D^1\Psi(P_0)(O_i) + o_{P_0}(n^{-1/2}).$$

For example, the one-step Newton-Raphson bias correction procedure [see, e.g., Pfanzagl, 1982] or targeted minimum loss-based estimation [see, e.g., van der Laan and Rose, 2011] can be used for this purpose. If the above representation holds and the variance of $D^1\Psi(P_0)(O)$ is positive, then $\sqrt{n}\,(\psi_n - \psi_0) \rightsquigarrow N(0, \sigma_0^2)$, where the symbol $\rightsquigarrow$ denotes convergence in distribution and we write $\sigma_0^2 \equiv P_0[D^1\Psi(P_0)^2]$. If $\sigma_0$ is strictly positive and can be consistently estimated, Wald-type confidence intervals for $\psi_0$ with appropriate asymptotic coverage can be constructed.

The situation is more challenging if $\sigma_0 = 0$. In this case, $\sqrt{n}\,(\psi_n - \psi_0) \to 0$ in probability and typical Wald-type confidence intervals will not be appropriate. Because $D^1\Psi(P_0)(O)$ has mean zero under $P_0$, this happens if and only if $D^1\Psi(P_0) \equiv 0$. The following corollary provides necessary and sufficient conditions under which $\sigma_0 = 0$.

Corollary 1 (First-order degeneracy under $H_0$). If $R, S \in \mathcal{S}$, it will be the case that $\sigma_0 = 0$ if and only if either (i) $H_0$ holds, or (ii) $R_0(O)$ and $S_0(O)$ are degenerate, i.e. almost surely constant but not necessarily equal, with $D_0^R \equiv D_0^S$.

The above results rely in part on knowledge of $D_0^R$ and $D_0^S$. It is useful to note that, in some situations, the computation of $D_P^T(o)$ for a given $T \in \mathcal{S}$ and $P \in \mathcal{M}$ can be streamlined. This is the case, for example, if $P \mapsto T_P$ is invariant to fluctuations of the marginal distribution of $X_T$, as (S3) may suggest. Consider obtaining iid samples of increasing size from the conditional distribution of $O$ given $X_T = x_T$ under $P$, so that all individuals have observed $X_T = x_T$. Consider the fluctuation submodel $dP_t(o \mid x_T) \equiv [1 + th(o)]\, dP(o \mid x_T)$ for the conditional distribution, where $h$ is uniformly bounded and $\int h(o)\, dP(o \mid x_T) = 0$. Suppose that (i) $t \mapsto T_{P_t}(x_T)$ is differentiable at $t = 0$ with respect to the above submodel and (ii) this derivative satisfies the inner product representation

$$\left.\frac{d}{dt} T_{P_t}(x_T)\right|_{t=0} = \int \tilde{D}_P^T(o \mid x_T)\, h(o)\, dP(o \mid x_T)$$

for some uniformly bounded function $o \mapsto \tilde{D}_P^T(o \mid x_T)$ with $\int \tilde{D}_P^T(o \mid x_T)\, dP(o \mid x_T) = 0$. If the above holds for all $x_T$, we may take $D_P^T(o) = \tilde{D}_P^T(o \mid x_T)$ for all $o$ with $X_T(o) = x_T$. If $D_P^T$ is uniformly bounded in $P$, (S3) then holds.

In summary, the above discussion suggests that, if T is invariant to fluctuations of the marginal distribution of XT, (S3) can be expected to hold if there exists a regular, asymptotically linear estimator of each TP(xT) under iid sampling from the conditional distribution of O given XT = xT implied by P.

Remark 2. If $T$ is invariant to fluctuations of the marginal distribution of $X_T$, one can also expect (S3) to hold if $P \mapsto \int T_P(X_T(o))\, dP(o)$ is pathwise differentiable with canonical gradient uniformly bounded in $P$ and $o$ in the model in which the marginal distribution of $X_T$ is known. The canonical gradient in this model is equal to $D_P^T$. □

2.3. Second-order differentiability and asymptotic representation

As indicated above, if σ0 = 0, the behavior of Ψ around P0 cannot be adequately characterized by a first-order analysis. For this reason, we must investigate whether Ψ is second-order differentiable. As we discuss below, under H0, Ψ is indeed second-order pathwise differentiable at P0 and admits a useful second-order asymptotic representation.

Theorem 2 (Second-order pathwise differentiability under $H_0$). If $R, S \in \mathcal{S}$ and $H_0$ holds, the parameter $\Psi : \mathcal{M} \to \mathbb{R}$ is second-order pathwise differentiable at $P_0$ with second-order canonical gradient $D^2\Psi(P_0) \equiv 2\Gamma_0$.

It is easy to confirm that $\Gamma_0$, and thus $D^2\Psi(P_0)$, is one-degenerate under $H_0$ in the sense that $\int \Gamma_0(o, o_2)\, dP_0(o_2) = \int \Gamma_0(o_1, o)\, dP_0(o_1) = 0$ for all $o$. This is shown as follows. For any $T, U \in \mathcal{S}$, the law of total expectation conditional on $X_U$ and the fact that $\int D_0^U(o)\, dP_0(o \mid x_U) = 0$ yield that

$$\int \Gamma_0^{TU}(o, o_2)\, dP_0(o_2) = \int \big\{1 - 2\,[T_0(o) - U_0(o_2)]\, D_0^T(o)\big\}\, e^{-[T_0(o) - U_0(o_2)]^2}\, dP_0(o_2),$$

where we have written $\Gamma_0^{TU}$ to denote $\Gamma_{P_0}^{TU}$. Since $\int f(R_0(o))\, dP_0(o) = \int f(S_0(o))\, dP_0(o)$ for each measurable function $f$ when $R_0(O) \overset{d}{=} S_0(O)$, this then implies that $\int \Gamma_0^{RS}(o, o_2)\, dP_0(o_2) = \int \Gamma_0^{RR}(o, o_2)\, dP_0(o_2)$ and $\int \Gamma_0^{SR}(o, o_2)\, dP_0(o_2) = \int \Gamma_0^{SS}(o, o_2)\, dP_0(o_2)$ under $H_0$. Hence, it follows that $\int \Gamma_0(o, o_2)\, dP_0(o_2) = 0$ under $H_0$ for any $o$.

If second-order pathwise differentiability held in a sufficiently uniform sense over $\mathcal{M}$, we would expect

$$\mathrm{Rem}_P^\Psi \equiv \Psi(P) - \Psi(P_0) - (P - P_0)\, D^1\Psi(P) + \frac{1}{2}\,(P - P_0)^2\, D^2\Psi(P) \qquad (3)$$

to be a third-order remainder term. However, second-order pathwise differentiability has only been established under the null, and in fact, it appears that $\Psi$ may not generally be second-order pathwise differentiable under the alternative. As such, $D^2\Psi$ may not even be defined under the alternative. In writing (3), we either naively set $D^2\Psi(P) \equiv 2\Gamma_P$, which is not appropriately centered to be a candidate second-order gradient, or instead take $D^2\Psi(P)$ to be the centered extension

$$(o_1, o_2) \mapsto 2\left[\Gamma_P(o_1, o_2) - \int \Gamma_P(o_1, o)\, dP(o) - \int \Gamma_P(o, o_2)\, dP(o) + P^2\Gamma_P\right].$$

Both of these choices yield the same expression above because the product measure $(P - P_0)^2$ is self-centering. The need for an extension renders it a priori unclear whether, as $P$ tends to $P_0$, the behavior of $\mathrm{Rem}_P^\Psi$ is similar to what is expected under more global second-order pathwise differentiability. Using the fact that $\Psi(P) = P^2\Gamma_P$, we can simplify the expression in (3) to

$$\mathrm{Rem}_P^\Psi = P_0^2\Gamma_P - \psi_0. \qquad (4)$$

As we discuss below, this remainder term can be bounded in a useful manner, which allows us to determine that it is indeed third-order.

For all $T \in \mathcal{S}$, $P \in \mathcal{M}$ and $o \in \mathcal{O}$, we define

$$\mathrm{Rem}_P^T(o) \equiv T_P(o) - T_0(o) + \int D_P^T(o_1)\,\big[dP_0(o_1 \mid x_T) - dP(o_1 \mid x_T)\big]$$

as the remainder from the linearization of $T$ based on the conditional gradient $D_P^T$. Typically, $\mathrm{Rem}_P^T(o)$ is a second-order term. Further consideration of this term in the context of our motivating examples is given in Section 5. Furthermore, we define

$$L_P^{RS}(o) \equiv \max\left\{|\mathrm{Rem}_P^R(o)|,\, |\mathrm{Rem}_P^S(o)|\right\}, \qquad M_P^{RS}(o) \equiv \max\left\{|R_P(o) - R_0(o)|,\, |S_P(o) - S_0(o)|\right\}.$$

For any given function $f : \mathcal{O} \to \mathbb{R}$, we denote by $\|f\|_{p,P_0} \equiv \left[\int |f(o)|^p\, dP_0(o)\right]^{1/p}$ the $L^p(P_0)$-norm, and we use the symbol $\lesssim$ to denote 'less than or equal to up to a positive multiplicative constant'. The following theorem provides an upper bound for the remainder term of interest.

Theorem 3 (Upper bounds on remainder term). For each $P \in \mathcal{M}$, the remainder term admits the following upper bounds:

$$\text{Under } H_0: \quad |\mathrm{Rem}_P^\Psi| \lesssim K_0^P \equiv \|L_P^{RS}\|_{2,P_0}\, \|M_P^{RS}\|_{2,P_0} + \|L_P^{RS}\|_{1,P_0}^2 + \|M_P^{RS}\|_{4,P_0}^4;$$
$$\text{Under } H_1: \quad |\mathrm{Rem}_P^\Psi| \lesssim K_1^P \equiv \|L_P^{RS}\|_{1,P_0} + \|M_P^{RS}\|_{2,P_0}^2.$$

To develop a test procedure, we will require an estimator of $P_0$, which will play the role of $P$ in the above expressions. It is helpful to think of parametric model theory when interpreting the above result, with the understanding that certain smoothing methods, such as higher-order kernel smoothing, can achieve near-parametric rates in certain settings. In a parametric model where $P_0$ is estimated with $\hat{P}_n$ (e.g., a maximum likelihood estimator), we could often expect $\|L_{\hat{P}_n}^{RS}\|_{p,P_0}$ and $\|M_{\hat{P}_n}^{RS}\|_{p,P_0}$ to be $O_{P_0}(n^{-1})$ and $O_{P_0}(n^{-1/2})$, respectively, for $p \ge 1$. Thus, the above theorem suggests that the approximation error may be $O_{P_0}(n^{-3/2})$ in a parametric model under $H_0$. In some examples, it is reasonable to expect that $L_P^{RS} \equiv 0$ for a large class of distributions $P$. In such cases, the upper bound on $\mathrm{Rem}_{\hat{P}_n}^\Psi$ simplifies to $\|M_{\hat{P}_n}^{RS}\|_{4,P_0}^4$ under $H_0$, which under a parametric model is often $O_{P_0}(n^{-2})$.

To make these results more concrete, we consider the special case where $R_P$, $S_P$, $D_P^R$, and $D_P^S$ are smooth mappings of regression functions under $P$ conditional on the $d$-dimensional covariate $W$ (e.g., as in Example 3 – see Section 5). Suppose that all of these regression functions under $P_0$ are at least $\ell$ times differentiable. In this case, rates of convergence for the remainder terms are well understood for kernel smoothers using kernels of sufficiently high order. In particular, each regression function converges at rate $n^{-\ell/(2\ell + d)}$ in $L^2(P_0)$. Under $H_0$, one could rely on $\|M_{\hat{P}_n}^{RS}\|_{2,P_0}$ being $o_{P_0}(n^{-1/3})$ and $\|L_{\hat{P}_n}^{RS}\|_{2,P_0}$ being $o_{P_0}(n^{-2/3})$. If $L_{\hat{P}_n}^{RS}$ is second-order, this would generally require $\ell > d$, which is more stringent than the usual $\ell > d/2$ requirement for standard first-order estimators. If, on the other hand, $L_{\hat{P}_n}^{RS} \equiv 0$, then we require that $\|M_{\hat{P}_n}^{RS}\|_{4,P_0}^4$ is $o_{P_0}(n^{-1})$ under $H_0$, which corresponds to requiring $\ell$ slightly greater than $d/2$.

3. Proposed test: formulation and inference under the null

3.1. Formulation of test

We begin by constructing an estimator of $\psi_0$ from which a test can then be devised. Using the fact that $\Psi(P) = P^2\Gamma_P$, as implied by (4), we note that if $\Gamma_0$ were known, the U-statistic $U_n\Gamma_0$ would be a natural estimator of $\psi_0$, where $U_n$ denotes the empirical measure that places equal probability mass on each of the $n(n-1)$ points $(O_i, O_j)$ with $i \ne j$. In practice, $\Gamma_0$ is unknown and must be estimated. This leads to the estimator $\psi_n \equiv U_n\Gamma_n$, where we write $\Gamma_n \equiv \Gamma_{\hat{P}_n}$ for some estimator $\hat{P}_n$ of $P_0$ based on the available data. Since a large value of $\psi_n$ is inconsistent with $H_0$, we will reject $H_0$ if and only if $\psi_n > c_n$ for some appropriately chosen cutoff $c_n$.
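Computationally, once the matrix of values $\Gamma_n(O_i, O_j)$ is available, $\psi_n$ is simply the average of its off-diagonal entries; a minimal sketch (the placeholder matrix is purely illustrative):

```r
# psi_n = U_n Gamma_n: average of Gam[i, j] over the n(n - 1) pairs with i != j.
u_statistic <- function(Gam) {
  n <- nrow(Gam)
  (sum(Gam) - sum(diag(Gam))) / (n * (n - 1))
}

set.seed(2)
Gam <- tcrossprod(rnorm(100))  # toy symmetric kernel matrix standing in for Gamma_n
psi_n <- u_statistic(Gam)
```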

In the nonparametric model considered, it may be necessary, or at the very least desirable, to utilize a data-adaptive estimator $\hat{P}_n$ of $P_0$ when constructing $\Gamma_n$. Studying the large-sample properties of $\psi_n$ may then seem particularly daunting since, at first glance, we may be led to believe that the behavior of $\psi_n - \psi_0$ is dominated by $P_0^2(\Gamma_n - \Gamma_0)$. However, this is not the case. As we will see, under some conditions, $\psi_n - \psi_0$ will approximately behave like $(U_n - P_0^2)\Gamma_0$. Thus, there will be no contribution of $\hat{P}_n$ to the asymptotic behavior of $\psi_n - \psi_0$. Though this result may seem counterintuitive, it arises because $\Psi(P)$ can be expressed as $P^2\Gamma_P$ with $\Gamma_P$ a second-order gradient (or rather an extension thereof) up to a proportionality constant. More concretely, this surprising finding is a direct consequence of (4).

As further support that $\psi_n$ is a natural test statistic, even when a data-adaptive estimator $\hat{P}_n$ of $P_0$ has been used, we note that $\psi_n$ could also have been derived using a second-order one-step Newton-Raphson construction, as described in Robins et al. [2008]. The latter is given by

$$\psi_{n,NR} \equiv \Psi(\hat{P}_n) + P_n D^1\Psi(\hat{P}_n) + \frac{1}{2}\, U_n D^2\Psi(\hat{P}_n),$$

where we use the centered extension of $D^2\Psi$ as discussed in Section 2.3. Here and throughout, $P_n$ denotes the empirical distribution. It is straightforward to verify that indeed $\psi_n = \psi_{n,NR}$.

3.2. Inference under the null

3.2.1. Asymptotic behavior

For each $P \in \mathcal{M}$, we let $\tilde{\Gamma}_P$ be the $P_0$-centered modification of $\Gamma_P$ given by

$$\tilde{\Gamma}_P(o_1, o_2) \equiv \Gamma_P(o_1, o_2) - \int \Gamma_P(o_1, o)\, dP_0(o) - \int \Gamma_P(o, o_2)\, dP_0(o) + P_0^2\Gamma_P$$

and denote $\tilde{\Gamma}_{P_0}$ by $\tilde{\Gamma}_0$. While $\tilde{\Gamma}_0 = \Gamma_0$ under $H_0$, this is not true more generally. Below, we use $\mathrm{Rem}_n^\Psi$ and $\tilde{\Gamma}_n$ to respectively denote $\mathrm{Rem}_P^\Psi$ and $\tilde{\Gamma}_P$ evaluated at $P = \hat{P}_n$. Straightforward algebraic manipulations allow us to write

$$\begin{aligned} \psi_n - \psi_0 &= U_n\Gamma_n - \psi_0 = U_n\Gamma_n - P_0^2\Gamma_n + P_0^2\Gamma_n - \psi_0 = (U_n - P_0^2)\Gamma_n + \mathrm{Rem}_n^\Psi \\ &= U_n\Gamma_0 + 2\,(P_n - P_0)P_0\Gamma_n + U_n(\tilde{\Gamma}_n - \Gamma_0) + \mathrm{Rem}_n^\Psi. \end{aligned} \qquad (5)$$

Our objective is to show that $n\,(\psi_n - \psi_0)$ behaves like $n\, U_n\Gamma_0$ as $n$ gets large under $H_0$. In view of (5), this will be true, for example, under conditions ensuring that

  • C1)

    $n\,(P_n - P_0)P_0\Gamma_n = o_{P_0}(1)$ (empirical process and consistency conditions);

  • C2)

    $n\, U_n(\tilde{\Gamma}_n - \Gamma_0) = o_{P_0}(1)$ (U-process and consistency conditions);

  • C3)

    $n\, \mathrm{Rem}_n^\Psi = o_{P_0}(1)$ (consistency and rate conditions).

We have already argued that C3) is reasonable in many examples of interest, including those presented in this paper. Nolan and Pollard [1987, 1988] developed a formal theory that controls terms of the type appearing in C2). In Supplementary Appendix B.1 we restate specific results from these authors which are useful to study C2). Finally, the following lemma gives sufficient conditions under which C1) holds. We first set $K_{1n} \equiv \|L_{\hat{P}_n}^{RS}\|_{1,P_0} + \|M_{\hat{P}_n}^{RS}\|_{2,P_0}^2$.

Lemma 1 (Sufficient conditions for C1)). Suppose that $o_1 \mapsto \int \Gamma_n(o_1, o)\, dP_0(o)/K_{1n}$, defined to be zero if $K_{1n} = 0$, belongs to a $P_0$-Donsker class [van der Vaart and Wellner, 1996] with probability tending to 1. Then, under $H_0$,

$$(P_n - P_0)P_0\Gamma_n = O_{P_0}\!\left(K_{1n}\, n^{-1/2}\right)$$

and thus C1) holds whenever $K_{1n} = o_{P_0}(n^{-1/2})$.

The following theorem describes the asymptotic distribution of $n\psi_n$ under the null hypothesis whenever conditions C1), C2) and C3) are satisfied.

Theorem 4 (Asymptotic distribution under H0). Suppose that C1), C2) and C3) hold. Then, under H0,

$$n\psi_n = n\, U_n\Gamma_0 + o_{P_0}(1) \rightsquigarrow \sum_{k=1}^\infty \lambda_k\,(Z_k^2 - 1),$$

where $\{\lambda_k\}_{k=1}^\infty$ are the eigenvalues of the integral operator $h \mapsto \int \Gamma_0(o_1, \cdot)\, h(o_1)\, dP_0(o_1)$ repeated according to their multiplicity, and $\{Z_k\}_{k=1}^\infty$ is a sequence of independent standard normal random variables. Furthermore, all of these eigenvalues are nonnegative under $H_0$.

We note that by employing a sample-splitting procedure – namely, estimating $\Gamma_0$ on one portion of the sample and constructing the U-statistic based on the remainder of the sample – it is possible to eliminate the U-process conditions required for C2). In such a case, satisfaction of C2) only requires convergence of $\tilde{\Gamma}_n$ to $\Gamma_0$ with respect to the $L^2(P_0^2)$-norm. This sample-splitting procedure would also allow one to avoid the empirical process conditions in C1): in particular, $o \mapsto P_0\Gamma_n(o, \cdot)$ would need to converge to zero, but no further requirements would then be needed on $\Gamma_n$ for C1) to be satisfied. We also note that the $L^2(P_0)$ consistency of $o \mapsto P_0\Gamma_n(o, \cdot)$ and the $L^2(P_0^2)$ consistency of $\tilde{\Gamma}_n$ are implied by the $L^2(P_0^2)$ consistency of $\Gamma_n$ for $\Gamma_0$, and so when sample splitting is used one could replace C1) and C2) by this single consistency condition.

We note also that, if sample splitting is not used, then one could replace C1) and C2) by this single consistency condition and the added assumption that Γn belongs to a class with a finite uniform entropy integral. See Supplementary Appendix B.2 for a proof that this suffices to imply the needed empirical process conditions for C1). It is also straightforward to show that controlling the entropy of the class to which Γn may belong also controls the entropy of the class to which the linear transformation Γ˜n of Γn may belong.

Remark 3. In Example 4, sample splitting may prove particularly important when the estimator of $E_{P_0}(Y \mid W = w)$ is chosen as the minimizer of an empirical risk, since in finite samples the bias induced by using the same residuals $y - E_{\hat{P}_n}(Y \mid W = w)$ as those in the definition of $D_{\hat{P}_n}^R(o)$ may be significant. Thus, without some form of sample splitting, the finite-sample performance of $\psi_n$ may be poor even under the conditions stated in Supplementary Appendix B.1. □
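One way to implement the sample-splitting device described above is sketched below; gamma_hat() is a hypothetical stand-in for whatever procedure estimates $\Gamma_0$ on one fold and evaluates it on pairs from the other.

```r
# Split-sample version of psi_n: the kernel is estimated on one half of the data
# and the U-statistic is formed on the other half, decoupling the two sources of
# randomness. gamma_hat(train, eval_set) is assumed to return the matrix of
# estimated kernel values Gamma_hat(O_i, O_j) for O_i, O_j in eval_set.
split_psi <- function(O, gamma_hat) {
  n <- nrow(O)
  idx <- sample.int(n, floor(n / 2))
  train <- O[idx, , drop = FALSE]      # used only to estimate Gamma_0
  eval_set <- O[-idx, , drop = FALSE]  # used only to form the U-statistic
  Gam <- gamma_hat(train, eval_set)
  m <- nrow(Gam)
  (sum(Gam) - sum(diag(Gam))) / (m * (m - 1))
}
```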

3.2.2. Estimation of the test cutoff

As indicated above, our test consists of rejecting $H_0$ if and only if $\psi_n$ is larger than some cutoff $c_n$. We wish to select $c_n$ to yield a non-conservative test at level $\alpha \in (0, 1)$. In view of Theorem 4, denoting by $q_{1-\alpha}$ the $1 - \alpha$ quantile of the described limit distribution, the cutoff $c_n$ should be chosen to be $q_{1-\alpha}/n$. We thus reject $H_0$ if and only if $n\psi_n > q_{1-\alpha}$. As described in the following corollary, $q_{1-\alpha}$ admits a very simple form when $S_P \equiv 0$ for all $P$.

Corollary 2 (Asymptotic distribution under $H_0$, $S$ degenerate). Suppose that C1), C2) and C3) hold, that $S_P \equiv 0$ for all $P \in \mathcal{M}$, and that $\sigma_R^2 \equiv \mathrm{Var}_{P_0}[D_0^R(O)] > 0$. Then, under $H_0$,

$$n\psi_n \rightsquigarrow 2\sigma_R^2\,(Z^2 - 1),$$

where $Z$ is a standard normal random variable. It follows then that $q_{1-\alpha} = 2\sigma_R^2\,(z_{1-\alpha/2}^2 - 1)$, where $z_{1-\alpha/2}$ is the $(1 - \alpha/2)$ quantile of the standard normal distribution.

The above corollary gives an expression for $q_{1-\alpha}$ that can easily be consistently estimated from the data. In particular, one can use $\hat{q}_{1-\alpha} \equiv 2\,(z_{1-\alpha/2}^2 - 1)\, P_n[D^R(\hat{P}_n)]^2$ as an estimator of $q_{1-\alpha}$, whose consistency can be established under a Glivenko-Cantelli and consistency condition on the estimator of $D_0^R$. However, in general, such a simple expression will not exist. Gretton et al. [2009] proposed estimating the eigenvalues $\nu_k$ of the centered Gram matrix and then computing $\hat{\lambda}_k \equiv \nu_k/n$. In our context, the eigenvalues $\nu_k$ are those of the $n \times n$ matrix $G \equiv \{G_{ij}\}_{1 \le i,j \le n}$ with entries defined as

$$G_{ij} \equiv \Gamma_n(O_i, O_j) - \frac{1}{n}\sum_{k=1}^n \Gamma_n(O_k, O_j) - \frac{1}{n}\sum_{\ell=1}^n \Gamma_n(O_i, O_\ell) + \frac{1}{n^2}\sum_{k=1}^n\sum_{\ell=1}^n \Gamma_n(O_k, O_\ell). \qquad (6)$$

Given these $n$ eigenvalue estimates $\hat{\lambda}_1, \ldots, \hat{\lambda}_n$, one could then simulate from $\sum_{k=1}^n \hat{\lambda}_k(Z_k^2 - 1)$ to approximate $\sum_{k=1}^\infty \lambda_k(Z_k^2 - 1)$. While this seems to be a plausible approach, a formal study establishing regularity conditions under which this procedure is valid is beyond the scope of this paper. We note that it also does not fall within the scope of results in Gretton et al. [2009] since their kernel does not depend on estimated nuisance parameters. We refer the reader to Franz [2006] for possible sufficient conditions under which this approach may be valid. Though we do not have formal regularity conditions under which this procedure is guaranteed to maintain the type I error level, our simulation results do seem to suggest appropriate control in practice (Section 6).
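For concreteness, a minimal R sketch (ours) of this eigenvalue approach, assuming a precomputed matrix Gam of values $\Gamma_n(O_i, O_j)$:

```r
# Estimate q_{1 - alpha} by (i) centering the kernel matrix as in display (6),
# (ii) setting lambda_hat_k = nu_k / n for the eigenvalues nu_k of the centered
# Gram matrix, and (iii) simulating sum_k lambda_hat_k * (Z_k^2 - 1).
null_quantile <- function(Gam, alpha = 0.05, n_sim = 1e4) {
  n <- nrow(Gam)
  H <- diag(n) - matrix(1 / n, n, n)   # centering matrix
  G <- H %*% Gam %*% H                 # entries match display (6)
  lam <- eigen(G, symmetric = TRUE, only.values = TRUE)$values / n
  lam <- lam[lam > 0]                  # keep the positive eigenvalues
  draws <- replicate(n_sim, sum(lam * (rnorm(length(lam))^2 - 1)))
  unname(quantile(draws, 1 - alpha))
}
# The test rejects H_0 when n * psi_n exceeds this estimated quantile.
```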

In practice, it suffices to give a data-dependent asymptotic upper bound on $q_{1-\alpha}$. We will refer to $\hat{q}_{1-\alpha}^{ub}$, which depends on $P_n$, as an asymptotic upper bound of $q_{1-\alpha}$ if

$$\limsup_{n \to \infty} P_0^n\!\left(n\psi_n > \hat{q}_{1-\alpha}^{ub}\right) \le \alpha. \qquad (7)$$

If $q_{1-\alpha}$ is consistently estimated, one possible choice of $\hat{q}_{1-\alpha}^{ub}$ is this estimate of $q_{1-\alpha}$ – the inequality above would also become an equality provided the conclusion of Theorem 4 holds. It is easy to derive a data-dependent upper bound with this property using Chebyshev's inequality. To do so, we first note that

$$\mathrm{Var}_{P_0}\!\left[\sum_{k=1}^\infty \lambda_k\,(Z_k^2 - 1)\right] = \sum_{k=1}^\infty \lambda_k^2\, \mathrm{Var}_{P_0}(Z_k^2) = 2\sum_{k=1}^\infty \lambda_k^2 = 2\, P_0^2\Gamma_0^2,$$

where we have interchanged the variance operation and the limit using the $L^2$ martingale convergence theorem, and the last equality holds because $\lambda_k$, $k = 1, 2, \ldots$, are the eigenvalues of the Hilbert-Schmidt integral operator with kernel $\tilde{\Gamma}_0$. Under mild regularity conditions, $P_0^2\Gamma_0^2$ can be consistently estimated using $U_n\Gamma_n^2$. Provided $P_0^2\Gamma_0^2 > 0$, we find that

$$\left(2\, U_n\Gamma_n^2\right)^{-1/2} n\psi_n \rightsquigarrow \left(2\, P_0^2\Gamma_0^2\right)^{-1/2} \sum_{k=1}^\infty \lambda_k\,(Z_k^2 - 1), \qquad (8)$$

where the limit variate has mean zero and unit variance. The following theorem gives a valid choice of $\hat{q}_{1-\alpha}^{ub}$.

Theorem 5. Fix $\alpha \in (0, 1)$ and suppose that C1), C2) and C3) hold. Then, under $H_0$ and provided $U_n\Gamma_n^2 \to P_0^2\Gamma_0^2 > 0$ in probability, $\hat{q}_{1-\alpha}^{ub} \equiv \left(2\,[1-\alpha]\, U_n\Gamma_n^2/\alpha\right)^{1/2}$ is a valid upper bound of $q_{1-\alpha}$ in the sense of (7).

The proof of the result follows immediately by noting that $P(X > t) \le (1 + t^2)^{-1}$ for any random variable $X$ with mean zero and unit variance, in view of the one-sided Chebyshev inequality. For $\alpha = 0.05$, the above demonstrates that a conservative cutoff is $6.2\,(U_n\Gamma_n^2)^{1/2}$. This theorem illustrates concretely that we can obtain a consistent test that controls type I error. In practice, we recommend either using the result of Corollary 2 whenever possible or estimating the eigenvalues of the matrix in (6).
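The Chebyshev-based cutoff of Theorem 5 is a one-line computation; a sketch, assuming un_gamma_sq holds the U-statistic $U_n\Gamma_n^2$ (the off-diagonal average of the squared kernel values):

```r
cheby_cutoff <- function(un_gamma_sq, alpha = 0.05) {
  sqrt(2 * (1 - alpha) * un_gamma_sq / alpha)
}
# For alpha = 0.05 this is sqrt(38) * sqrt(un_gamma_sq), i.e. approximately
# 6.2 * sqrt(un_gamma_sq), matching the constant quoted above.
```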

We note that the condition $\sigma_R^2 > 0$ holds in many but not all examples of interest. Fortunately, the plausibility of this assumption can be evaluated analytically. In Section 5, we show that this condition does not hold in Example 5 and provide a way forward despite this.

4. Asymptotic behavior under the alternative

4.1. Consistency under a fixed alternative

We present two analyses of the asymptotic behavior of our test under a fixed alternative. The first relies on $\hat{P}_n$ providing a good estimate of $P_0$. Under this condition, we give an interpretable limit distribution that provides insight into the behavior of our estimator under the alternative. As we show, surprisingly, $\hat{P}_n$ need not be close to $P_0$ to obtain an asymptotically consistent test, even if the resulting estimate of $\psi_0$ is nowhere near the truth. In the second analysis, we give more general conditions under which our test will be consistent if $H_1$ holds.

4.1.1. Nuisance functions have been estimated well

As we now establish, our test has power against all alternatives $P_0$ except for the fringe cases discussed in Corollary 1 with $\Gamma_0$ one-degenerate. We first note that

$$\psi_n - \psi_0 = U_n\Gamma_n - \psi_0 = 2\,(P_n - P_0)P_0\Gamma_n + U_n\tilde{\Gamma}_n + \mathrm{Rem}_n^\Psi.$$

When scaled by $n^{1/2}$, the leading term on the right-hand side follows a mean-zero normal distribution under regularity conditions. The second summand is typically $O_{P_0}(n^{-1})$ under certain conditions, for example, on the entropy of the class of plausible realizations of the random function $(o_1, o_2) \mapsto \Gamma_n(o_1, o_2)$ [Nolan and Pollard, 1987, 1988]. In view of the second statement in Theorem 3, the third summand is a second-order term that will often be negligible, even after scaling by $n^{1/2}$. As such, under certain regularity conditions, the leading term in the representation above determines the asymptotic behavior of $\psi_n$, as described in the following theorem.

Theorem 6 (Asymptotic distribution under $H_1$). Suppose that $K_{1n} = o_{P_0}(n^{-1/2})$, that $U_n\tilde{\Gamma}_n = o_{P_0}(n^{-1/2})$, and, furthermore, that $o_1 \mapsto \int \Gamma_n(o_1, o)\, dP_0(o)$ belongs to a fixed $P_0$-Donsker class with probability tending to 1 while $\|P_0(\Gamma_n - \Gamma_0)\|_{2,P_0} = o_{P_0}(1)$. If $H_1$ holds, we have that $\sqrt{n}\,(\psi_n - \psi_0) \rightsquigarrow N(0, \tau^2)$, where $\tau^2 \equiv 4\,\mathrm{Var}_{P_0}\!\left[\int \Gamma_0(O, o)\, dP_0(o)\right]$.

In view of the results of Section 2, $\tau^2$ coincides with $\sigma_0^2$, the efficiency bound for regular, asymptotically linear estimators in a nonparametric model. Hence, $\psi_n$ is an asymptotically efficient estimator of $\psi_0$ under $H_1$. Sufficient conditions for $o_1 \mapsto \int \Gamma_n(o_1, o)\, dP_0(o)$ to belong to a fixed $P_0$-Donsker class with probability approaching one are given in Supplementary Appendix B.2.

The following corollary is trivial in light of Theorem 6. It establishes that the test $n\psi_n > \hat{q}_{1-\alpha}^{ub}$ is consistent against (essentially) all alternatives provided the needed components of the likelihood are estimated sufficiently well.

Corollary 3 (Consistency under a fixed alternative). Suppose that the conditions of Theorem 6 hold. Furthermore, suppose that $\tau^2 > 0$ and $\hat{q}_{1-\alpha}^{ub} = o_{P_0}(n)$. Then, under $H_1$, the test $n\psi_n > \hat{q}_{1-\alpha}^{ub}$ is consistent in the sense that

$$\lim_{n \to \infty} P_0^n\!\left(n\psi_n > \hat{q}_{1-\alpha}^{ub}\right) = 1.$$

The requirement that $\hat{q}_{1-\alpha}^{ub} = o_{P_0}(n)$ is very mild given that $q_{1-\alpha}$ will be finite whenever $R, S \in \mathcal{S}$. As such, we would not expect $\hat{q}_{1-\alpha}^{ub}$ to get arbitrarily large as sample size grows, at least beyond the extent allowed by our corollary. This suggests that most non-trivial upper bounds satisfying (7) will yield a consistent test.

4.1.2. Nuisance functions have not been estimated well

We now consider the case where the nuisance functions are not estimated well, in the sense that the consistency conditions of Theorem 6 do not hold. In particular, we argue that failure of these conditions does not necessarily undermine the consistency of our test. Let $\hat{q}_{1-\alpha}^{ub}$ be the estimated cutoff for our test, and suppose that $\hat{q}_{1-\alpha}^{ub} = o_{P_0}(n)$. Suppose also that $P_0^2\Gamma_n$ is asymptotically bounded away from zero in the sense that, for some $\delta > 0$, $P_0^n(P_0^2\Gamma_n > \delta)$ tends to one. This condition is reasonable given that $P_0^2\Gamma_0 > 0$ if $H_1$ holds and $\hat{P}_n$ is nevertheless a (possibly inconsistent) estimator of $P_0$. Assuming that $(U_n - P_0^2)\Gamma_n = O_{P_0}(n^{-1/2})$, which is true under entropy conditions on $\Gamma_n$ [Nolan and Pollard, 1987, 1988], we have that

$$P_0^n\!\left(n\psi_n > \hat{q}_{1-\alpha}^{ub}\right) = P_0^n\!\left(\sqrt{n}\,\big[U_n - P_0^2\big]\Gamma_n > \frac{\hat{q}_{1-\alpha}^{ub}}{\sqrt{n}} - \sqrt{n}\, P_0^2\Gamma_n\right) \to 1.$$

We have accounted for the random $n^{-1/2}\,\hat{q}_{1-\alpha}^{ub}$ term as in the proof of Corollary 3. Of course, this result is less satisfying than Theorem 6, which provides a concrete limit distribution.

4.2. Consistency under a local alternative

We consider local alternatives of the form

$$dQ_n(o) = \left[1 + n^{-1/2}\, h_n(o)\right] dP_0(o),$$

where $h_n \to h$ in $L^2(P_0)$ for some non-degenerate $h$ and $P_0$ satisfies the null hypothesis $H_0$. Suppose that the conditions of Theorem 4 hold. By Theorem 2.1 of Gregory [1977], we have that

$$n\, U_n\Gamma_0 \rightsquigarrow \sum_{k=1}^\infty \lambda_k\left[(Z_k + \langle f_k, h\rangle)^2 - 1\right] \quad \text{under } Q_n,$$

where $U_n$ is the U-statistic empirical measure from a sample of size $n$ drawn from $Q_n$, $\langle\cdot, \cdot\rangle$ is the inner product in $L^2(P_0)$, $Z_k$ and $\lambda_k$ are as in Theorem 4, and $f_k$ is a normalized eigenfunction corresponding to the eigenvalue $\lambda_k$ described in Theorem 4. By the contiguity of $Q_n$, the conditions of Theorem 4 yield that the result above also holds with $U_n\Gamma_0$ replaced by $U_n\Gamma_n$, our estimator applied to a sample of size $n$ drawn from $Q_n$.

If each $\lambda_k$ is non-negative, the limiting distribution under $Q_n$ stochastically dominates the asymptotic distribution under $P_0$, and, furthermore, if $\langle f_k, h\rangle \ne 0$ for some $k$ with $\lambda_k > 0$, this dominance is strict. It is straightforward to show that, under the conditions of Theorem 4, the above holds if and only if $\liminf_{n \to \infty} n\,\Psi(Q_n) > 0$, that is, if the sequence of alternatives is not too hard. Suppose that $\hat{q}_{1-\alpha}$ is a consistent estimate of $q_{1-\alpha}$. By Le Cam's third lemma, $\hat{q}_{1-\alpha}$ is consistent for $q_{1-\alpha}$ even when the estimator is computed on samples of size $n$ drawn from $Q_n$ rather than $P_0$. This proves the following theorem.

Theorem 7 (Consistency under a local alternative). Suppose that the conditions of Theorem 4 hold. Then, under $H_0$ and provided $\liminf_{n \to \infty} n\,\Psi(Q_n) > 0$, the proposed test is locally consistent in the sense that $\lim_{n \to \infty} Q_n^n(n\psi_n > \hat{q}_{1-\alpha}) > \alpha$, where $\hat{q}_{1-\alpha}$ is a consistent estimator of $q_{1-\alpha}$.

5. Illustrations

We now return to Examples 2, 3, 4, and 5. We do not return to Example 1 because it has already been well studied; e.g., the fixed sample size variant was studied in detail in Gretton et al. [2006]. We first show that Examples 2, 3 and 4 satisfy the regularity conditions described in Section 2. Specifically, we show that all involved parameters $R$ and $S$ belong to $\mathcal{S}$ under reasonable conditions. Furthermore, we determine explicit remainder terms for the asymptotic representation used in each example and describe conditions under which these remainder terms are negligible. For any $T \in \mathcal{S}$, we will use the shorthand notation $\dot{T}_{\tilde{t}}(x_T) \equiv \left.\frac{d}{dt} T_{P_t}(x_T)\right|_{t = \tilde{t}}$ for $\tilde{t}$ in a neighborhood of zero.

Example 2 (Continued).

The parameter $S$ with $S_P \equiv 0$ belongs to $\mathcal{S}$ trivially, with $D_P^S \equiv 0$. Condition (S1) holds with $x_R(o) = w$. Condition (S2) holds using that $R_t(w)$ equals

$$\sum_{a=0}^1 (-1)^{a+1} \int y\, \left\{\frac{1 + t\, h_1(w, a, y) + t^2\, h_2(w, a, y)}{1 + t\, E_{P_0}[h_1(w, A, Y) \mid A = a, W = w] + t^2\, E_{P_0}[h_2(w, A, Y) \mid A = a, W = w]}\right\}\, dP_0(y \mid a, w). \qquad (9)$$

Since we need only consider uniformly bounded $h_1$ and $h_2$, for $t$ sufficiently small, we see that $R_t(w)$ is twice continuously differentiable with uniformly bounded derivatives. Condition (S3) is satisfied by

$$D_P^R(o) \equiv \frac{2a - 1}{P(A = a \mid W = w)}\,\big\{y - E_P[Y \mid A = a, W = w]\big\}$$

and $D_P^S \equiv 0$. If $\min_a P(A = a \mid W)$ is bounded away from zero with probability 1 uniformly in $P$, it follows that $(P, o) \mapsto D_P^R(o)$ is uniformly bounded.

Clearly, we have that $\mathrm{Rem}_P^S \equiv 0$. We can also verify that $\mathrm{Rem}_P^R(o)$ equals

$$\sum_{\tilde{a}=0}^1 (-1)^{\tilde{a}+1}\, E_{P_0}\left\{\left[1 - \frac{P_0(A = \tilde{a} \mid W)}{P(A = \tilde{a} \mid W)}\right]\left[E_P(Y \mid A, W) - E_{P_0}(Y \mid A, W)\right] \,\middle|\, A = \tilde{a}, W = w\right\}.$$

The above remainder is doubly robust in the sense that it is zero if either the treatment mechanism (i.e., the probability of $A$ given $W$) or the outcome regression (i.e., the expected value of $Y$ given $A$ and $W$) is correctly specified under $P$. In a randomized trial where the treatment mechanism is known and specified correctly in $P$, we have that $\mathrm{Rem}_P^R \equiv 0$ and thus $L_P^{RS} \equiv 0$. More generally, an upper bound for $\mathrm{Rem}_P^R$ can be found using the Cauchy-Schwarz inequality to relate the rate of $\|\mathrm{Rem}_P^R\|_{2,P_0}$ to the product of the $L^2(P_0)$-norms of the differences between each of the treatment mechanism and the outcome regression under $P$ and $P_0$.
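To illustrate the ingredients in this example, the following R sketch (ours) computes the blip estimate and the conditional gradient in a randomized trial with known $P(A = 1 \mid W) = 1/2$; mu1 and mu0 stand in for fitted outcome regressions (e.g., from Super Learner) and are hypothetical names.

```r
# Example 2 ingredients: R_hat(o) = mu1(w) - mu0(w) and
# D_hat^R(o) = (2a - 1) / P(A = a | W = w) * (y - mu_a(w)).
blip_and_gradient <- function(W, A, Y, mu1, mu0, g1 = 0.5) {
  r_hat <- mu1(W) - mu0(W)                 # estimated blip function R_P(o)
  g_a   <- ifelse(A == 1, g1, 1 - g1)      # known treatment mechanism P(A = a | W)
  mu_a  <- ifelse(A == 1, mu1(W), mu0(W))  # fitted E(Y | A = a, W)
  d_hat <- (2 * A - 1) / g_a * (Y - mu_a)  # conditional gradient D_P^R(o)
  list(r = r_hat, d = d_hat)
}
# With S = 0 (so D^S = 0), these two vectors suffice to build the kernel matrix,
# and Corollary 2 gives the cutoff estimate
#   q_hat <- 2 * (qnorm(1 - alpha / 2)^2 - 1) * mean(d_hat^2).
```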

Example 3 (Continued).

For (S1) we take $x_R = x_S = w$. Condition (S2) can be verified using an expression similar to that in (9). Condition (S3) is satisfied by

$$D_P^R(o) \equiv \frac{a}{P(A = a \mid W = w)}\,\big[y - E_P(Y \mid A = a, W = w)\big], \qquad D_P^S(o) \equiv \frac{1 - a}{P(A = a \mid W = w)}\,\big[y - E_P(Y \mid A = a, W = w)\big].$$

If $\min_a P(A = a \mid W)$ is bounded away from zero with probability 1 uniformly in $P$, both $(P, o) \mapsto D_P^R(o)$ and $(P, o) \mapsto D_P^S(o)$ are uniformly bounded.

Similarly to Example 2, we have that $\mathrm{Rem}_P^R(o)$ is equal to

$$E_{P_0}\left\{\left[1 - \frac{P_0(A = 1 \mid W)}{P(A = 1 \mid W)}\right]\left[E_P(Y \mid A, W) - E_{P_0}(Y \mid A, W)\right] \,\middle|\, A = 1, W = w\right\}.$$

The remainder $\mathrm{Rem}_P^S(o)$ is equal to the above display but with $A = 1$ replaced by $A = 0$. The discussion of the doubly robust remainder term from Example 2 applies to these remainders as well.

Example 4 (Continued).

The parameter $S$ is the same as in Example 2. The parameter $R$ satisfies (S1) with $x_R(o) = w$ and (S2) by an identity analogous to that used in Example 2. Condition (S3) is satisfied by $D_P^R(o) \equiv y - E_P(Y \mid W = w)$. By the bounds on $Y$, $(P, o) \mapsto D_P^R(o)$ is uniformly bounded. Here, the remainder terms are both exactly zero: $\mathrm{Rem}_P^R \equiv \mathrm{Rem}_P^S \equiv 0$. Thus, we have that $L_P^{RS} \equiv 0$ in this example.

The requirement that $\mathrm{Var}_{P_0}[D_0^R(O)] > 0$ in Corollary 2, and more generally that there exists a nonzero eigenvalue $\lambda_j$ for the limit distribution in Theorem 4 to be non-degenerate, may at times present an obstacle to our goal of obtaining asymptotic control of the type I error. This is the case for Example 5, which we now discuss further. Nevertheless, we show that with a little finesse the type I error can still be controlled at the desired level for the given test. In fact, the test we discuss has type I error converging to zero, suggesting it may be noticeably conservative in small to moderate samples.

Example 5 (Continued).

In this example, one can take $x_R = w$ and $x_S = w^{(-k)}$. Furthermore, it is easy to show that

$$D_P^R(o) = y - E_P[Y \mid W = w], \qquad D_P^S(o) = y - E_P[Y \mid W^{(-k)} = w^{(-k)}].$$

The first-order approximations for $R$ and $S$ are exact in this example, as the remainder terms $\mathrm{Rem}_P^R$ and $\mathrm{Rem}_P^S$ are both zero. However, we note that if $E_P(Y \mid W) = E_P(Y \mid W^{(-k)})$ almost surely, it follows that $D_P^R \equiv D_P^S$. This implies that $\Gamma_0 \equiv 0$ almost surely under $H_0$. As such, under the conditions of Theorem 4, all of the eigenvalues in the limit distribution of $n\psi_n$ in Theorem 4 are zero and $n\psi_n \to 0$ in probability. We are then no longer able to control the type I error at level $\alpha$, rendering our proposed test invalid.

Nevertheless, there is a simple albeit unconventional way to repair this example. Let $A$ be a Bernoulli random variable, independent of all other variables, with fixed probability of success $p \in (0, 1)$. Replace $S_P$ with $o \mapsto E_P(Y \mid A = 1, W^{(-k)} = w^{(-k)})$ as in Example 3, yielding then

$$D_P^S(o) = \frac{a}{p}\,\big[y - E_P(Y \mid A = 1, W^{(-k)} = w^{(-k)})\big].$$

It then follows that $D_0^R \not\equiv D_0^S$ and, in particular, $\Gamma_0$ is no longer constant. In this case, the limit distribution given in Theorem 4 is non-degenerate. Consistent estimation of $q_{1-\alpha}$ thus yields a test that asymptotically controls type I error. Given that the proposed estimator $\psi_n$ converges to zero faster than $n^{-1}$, the probability of rejecting the null approaches zero as sample size grows. In principle, we could have chosen any positive cutoff given that $n\psi_n \to 0$ in probability, but choosing a more principled cutoff seems judicious.

Because $p$ is known, the remainder term $\mathrm{Rem}_P^S$ is equal to zero. Furthermore, in view of the independence between $A$ and all other variables, one can estimate $E_{P_0}(Y \mid A = 1, W^{(-k)})$ by regressing $Y$ on $W^{(-k)}$ using all of the data, without including the covariate $A$.

In future work, it may also be worth checking whether the parameter is third-order differentiable under the null, and if so whether or not this allows us to construct an $\alpha$-level test without resorting to an artificial source of randomness.

6. Simulation studies

In simulation studies, we have explored the performance of our proposed test in the context of Examples 2, 3 and 4, and have also compared our method to the approach of Racine et al. [2006] for which software is readily available – see, e.g., the R package np [Hayfield and Racine, 2008]. We evaluate the performance of computing the eigenvalues of the Gram matrix defined in (6) for Example 3 in two different scenarios. We report the results of our simulation studies in this section.

In all simulation settings, we consider an adaptive bandwidth selection procedure that is a variant of the median heuristic that has been employed in the classical MMD setting where $P \mapsto R_P$ and $P \mapsto S_P$ do not depend on $P$ [Gretton et al., 2012a]. In that case, the median heuristic selects the bandwidth to be equal to the median of the $2n \times 2n$ Euclidean distance matrix of $\{R(O_i) : i = 1, \ldots, n\} \cup \{S(O_i) : i = 1, \ldots, n\}$, where the subscript of $R$ and $S$ on a distribution $P$ has been omitted to emphasize the lack of this dependence in the classical MMD setting. In our case, we choose the bandwidth to be equal to the median of the Euclidean distance matrix between the scalar- or vector-valued observations (see Concluding Remark b in Section 7 for the extension to vector-valued unknown functions) in

$$\left\{R_{\hat{P}_n}(O_i) + D_{\hat{P}_n}^R(O_i) : i = 1, \ldots, n\right\} \cup \left\{S_{\hat{P}_n}(O_i) + D_{\hat{P}_n}^S(O_i) : i = 1, \ldots, n\right\}.$$

This extension is natural in that $R_{\hat{P}_n} + D_{\hat{P}_n}^R$ and $S_{\hat{P}_n} + D_{\hat{P}_n}^S$ are reminiscent of one-step estimators [Pfanzagl, 1982] of the unknown $R_0$ and $S_0$, which should help this procedure account for the uncertainty in $R_{\hat{P}_n}$ and $S_{\hat{P}_n}$. Except where specified, every MMD result presented in this section uses this median heuristic to select the bandwidth. We also compare this procedure to a fixed choice of bandwidth in two of our settings.
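A sketch of this bandwidth selection, with the one-step values supplied as vectors or matrices (d = 1 for scalar-valued functions):

```r
# Median heuristic over the pooled one-step values R_hat(O_i) + D_hat^R(O_i)
# and S_hat(O_i) + D_hat^S(O_i).
median_heuristic <- function(r_onestep, s_onestep) {
  pooled <- rbind(as.matrix(r_onestep), as.matrix(s_onestep))  # 2n x d
  median(as.vector(dist(pooled)))  # median pairwise Euclidean distance
}
```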

6.1. Simulation scenario 1

We use an observed data structure $(W, A, Y)$, where $W \equiv (W_1, W_2, \ldots, W_5)$ is drawn from a standard 5-dimensional normal distribution, $A$ is drawn according to a Bernoulli(0.5) distribution, and $Y = \mu(A, W) + 5\,\xi(A, W)$, where the different forms of the conditional mean function $\mu(a, w)$ are given in Table 1, and $\xi(a, w)$ is a random variate following a Beta distribution with shape parameters $\alpha = 3\,\mathrm{expit}(a w_2)$ and $\beta = 2\,\mathrm{expit}[(1 - a) w_1]$, shifted to have mean zero, where $\mathrm{expit}(x) = 1/(1 + \exp(-x))$. A sketch of this data-generating mechanism appears after Table 1.

Table 1.

Conditional mean function in each of three simulation settings within simulation scenario 1. Here, $m(a, w) \equiv 0.2\,(w_1^2 + w_2^2 - w_3 w_4)$, and the third and fourth columns indicate, respectively, whether $\mu(1, W)$ and $\mu(0, W)$ are equal in distribution or almost surely.

                 μ(a, w)                               =_d   =_a.s.
Simulation 1a    m(a, w)                                ✓     ✓
Simulation 1b    m(a, w) + 0.4[a w_3 + (1 − a) w_4]     ✓     ✗
Simulation 1c    m(a, w) + 0.8 a w_3                    ✗     ✗
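The following R sketch gives our reading of this data-generating mechanism; mu_fun encodes a row of Table 1, and the names are illustrative.

```r
# Simulation 1: W ~ N_5(0, I), A ~ Bernoulli(0.5),
# Y = mu(A, W) + 5 * xi(A, W), with xi a mean-centered Beta variate.
expit <- function(x) 1 / (1 + exp(-x))

sim1 <- function(n, mu_fun) {
  W <- matrix(rnorm(n * 5), n, 5)
  A <- rbinom(n, 1, 0.5)
  shape1 <- 3 * expit(A * W[, 2])        # alpha = 3 expit(a w2)
  shape2 <- 2 * expit((1 - A) * W[, 1])  # beta = 2 expit((1 - a) w1)
  xi <- rbeta(n, shape1, shape2) - shape1 / (shape1 + shape2)  # mean-centered
  Y <- mu_fun(A, W) + 5 * xi
  data.frame(W, A = A, Y = Y)
}

# Example: Simulation 1c, using m(a, w) from Table 1.
mu_1c <- function(a, w) {
  0.2 * (w[, 1]^2 + w[, 2]^2 - w[, 3] * w[, 4]) + 0.8 * a * w[, 3]
}
dat <- sim1(500, mu_1c)
```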

We performed tests of the null in which μ(1, W) is equal to μ(0, W) almost surely and in distribution, as presented in Examples 2 and 3, respectively. Our estimate P^n of P0 was constructed using the knowledge that P0 (A = 1 | W) = 1/2, as would be available, for example, in the context of a randomized trial. The conditional mean function μ(a, w) was estimated using the ensemble learning algorithm Super Learner [van der Laan et al., 2007], as implemented in the SuperLearner package [Polley and van der Laan, 2013]. This algorithm was implemented using 10-fold cross-validation to determine the best convex combination of regression function candidates minimizing mean-squared error using a candidate library consisting of SL.rpart, SL.glm.interaction, SL.glm, SL.earth, and SL.nnet. We used the results of Corollary 2 to evaluate significance for Example 2, and the eigenvalue approach presented in Section 3.2.2 to evaluate significance for Example 3, where we used all of the positive eigenvalues for n = 125 and the largest 200 positive eigenvalues for n > 125 using the rARPACK package [Qiu et al., 2014].

To evaluate the performance of the adaptive bandwidth selection procedure in the context of a test of the equality in distribution of two unknown functions applied to $w$, namely $\mu(1, \cdot)$ and $\mu(0, \cdot)$, we also ran our procedure at fixed bandwidths with values $2^k$, $k = -2, -1, 0, 1, 2$. In the context of a test of the almost sure equality of $\mu(1, W)$ and $\mu(0, W)$, we compare our adaptive bandwidth selection procedure to fixing the bandwidth at one. The performance of the adaptive bandwidth selection procedure is evaluated in more detail for a null hypothesis of the almost sure equality of two unknown functions in our third simulation setting.

We ran 1,000 Monte Carlo simulations with samples of size 125, 250, 500, 1000, and 2000, except for the np package, which we only ran for 500 Monte Carlo simulations. For Example 2 we compared our approach with that of Racine et al. [2006] using the npsigtest function from the np package. This requires first selecting a bandwidth, which we did using the npregbw function, specifying that we wanted a local linear estimator and the bandwidth to be selected using the cv.aic method [Hayfield and Racine, 2008].

Figure 1 displays the empirical null rejection probability of our test of equality in distribution of μ(1, W) and μ(0, W) for simulation scenarios 1a, 1b and 1c. In particular, we observe that our method is able to properly control type I error for Simulation 1a when testing the hypothesis that μ(1, W) is equal in distribution to μ(0, W). Type I error is also properly controlled in Simulation 1b, though the control of the fixed bandwidth procedures appears to be conservative at the larger sample sizes. We also note that the adaptive bandwidth yielded similar performance to the best considered fixed bandwidth of 1. Our selection procedure generally picked values with an average of around 1.5 – at large sample sizes, there was little variability around this average bandwidth, while at smaller sample sizes the selected bandwidths generally fell between 1.25 and 1.75. The adaptive procedure always controlled type I error at or near the nominal level and had power increasing with sample size and comparable to that of a fixed bandwidth of 1. Choosing the largest fixed bandwidth, namely 4, yielded no power at the alternative in Simulation 1c. Choosing the smallest fixed bandwidth, namely 1/4, yielded inflated type I error levels at one of the null distributions, namely Simulation 1b.

Fig. 1. Empirical probability of rejecting the null when testing the null hypothesis that μ(1, W) is equal in distribution to μ(0, W) (Example 3) in Simulation 1. Table 1 indicates that the null is true in Simulations 1a and 1b, and the alternative is true in Simulation 1c.

Figure 2 displays the empirical rejection probabilities of our approach as well as those resulting from use of the np package. At smaller sample sizes, our method does not appear to control type I error near the nominal level. This is likely because we use an asymptotic result to compute the cutoff, even when the sample size is small. Nevertheless, as sample size grows, the type I error of our test approaches the nominal level. We note that choosing the fixed unit bandwidth outperforms the median heuristic bandwidth selection procedure in this setting, especially in terms of power in Simulation 1b. We note that in Racine et al. [2006], unlike in our proposal, the bootstrap was used to evaluate the significance of the proposed test. It will be interesting to see if applying a bootstrap procedure at smaller sample sizes improves our small-sample results. At larger sample sizes, it appears that the method of Racine et al. outperforms our approach in terms of power in simulation scenarios 1b and 1c. At smaller sample sizes (125, 250, 500), our method achieves higher power than that of Racine et al., but at the expense of double the type I error: it therefore appears that the method of Racine et al. outperforms our approach in Simulations 1a, 1b, and 1c when testing the null hypothesis that μ(1, W) − μ(0, W) is almost surely equal to zero. Nonetheless, the generality of our approach allows us to apply our test in more settings than a test using the method of Racine et al. For example, we are not aware of any other test devised to test the equality in distribution of μ(1, W) and μ(0, W) (Figure 1).

Fig. 2. Empirical probability of rejecting the null when testing the null hypothesis that μ(1, W) − μ(0, W) is almost surely equal to zero (Example 2) in Simulation 1. Table 1 indicates that the null is true in Simulation 1a, and the alternative is true in Simulations 1b and 1c.

6.2. Simulation scenario 2: comparison with Racine et al. [2006]

We reproduced a simulation study from Section 4.1 of Racine et al. [2006] at sample size $n = 100$. In particular, we let $Y = 1 + \beta A(1 + W_2^2) + W_1 + W_2 + \epsilon$, where $A$, $W_1$, and $W_2$ are drawn independently from Bernoulli(0.5), Bernoulli(0.5), and $N(0, 1)$ distributions, respectively. The error term $\epsilon$ is unobserved and drawn from a $N(0, 1)$ distribution independently of all observed variables. The parameter $\beta$ was varied over the values $-0.5, -0.4, \ldots, 0.4, 0.5$ to achieve a range of distributions. The goal is to test whether $E_0(Y \mid A, W) = E_0(Y \mid W)$ almost surely, or equivalently, that $\mu(1, W) - \mu(0, W) = 0$ almost surely.

Due to computational constraints, we only ran the ‘Bootstrap I test’ to evaluate significance of the method of Racine et al. [2006]. As the authors report, this method is anticonservative relative to their ‘Bootstrap II test’ and indeed achieves lower power (but proper type I error control) in their simulations.

Except for two minor modifications, our implementation of the method in Example 2 is similar to that for Simulation 1. For a fair comparison with Racine et al. [2006], in this simulation study we estimated $P_0(A = 1 \mid W)$ rather than treating it as known. We did this using the same Super Learner library and the 'family=binomial' setting to account for the fact that $A$ is binary. We also scaled the function $\mu(1, w) - \mu(0, w)$ by a factor of 1/5 to ensure most of the probability mass of $R_0$ falls between $-1$ and $1$ (around 99% when $\beta = 0.5$). We note that, even with scaling, the variable $Y$ is not bounded as our regularity conditions require. Nonetheless, an evaluation of our method under violations of our assumptions can itself be informative.

Figure 3 displays the empirical null rejection probability of our test as well as that of Racine et al. [2006]. In this setup, used by the authors themselves to showcase their test procedure, our method performs comparably to their proposal, with slightly lower type I error (closer to nominal) and slightly lower power.

Fig. 3.

Empirical probability of rejecting the null when testing the null hypothesis that μ(1, W) – μ(0, W) is almost surely equal to zero in Simulation 2.

6.3. Simulation scenario 3: higher dimensions

We also explored the performance of our method as extended to tackle higher-dimensional hypotheses, as discussed in Section 7. To do this, we used the same distribution as in Simulation 1, but with Y now a 20-dimensional random variable. Our objective here was to test whether μ(1, W) – μ(0, W) is almost surely equal to (0, 0, …, 0), where μ(a, w) ≡ (μ1(a, w), μ2(a, w), …, μ20(a, w)) with μj(a, w) ≡ E0(Yj | A = a, W = w). Conditional on A and W, the coordinates of Y are independent. We varied the number of coordinates that represent signal and noise. For each signal coordinate j, given A and W, 20Yj was drawn from the same conditional distribution as Y given A and W in Simulation 1c. For each noise coordinate j, given A and W, 20Yj was drawn from the same conditional distribution as Y given A and W in Simulation 1a.

Relative to Simulation 1, each coordinate of the outcome is thus scaled to one twentieth of the size of the original outcome. In addition to the adaptive bandwidth selection procedure discussed at the beginning of this section, we considered defining the MMD using a Gaussian kernel with fixed bandwidths of 1/4, 1/2, 1, and 2. Equivalently, these correspond to bandwidths of 5, 10, 20, and 40 had the outcome not been scaled by 1/20.

We ran the same Super Learner as in Simulation 1 to estimate μ(1, w), and we again treated the probability of treatment given covariates as known. We evaluated significance by estimating all positive eigenvalues of the centered Gram matrix when n = 125, and only the largest 200 positive eigenvalues when n > 125.
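To illustrate this step, the sketch below centers a Gram matrix and extracts its leading eigenvalues with rARPACK [Qiu et al., 2014]. The n × n matrix K of Gaussian-kernel evaluations between estimated function values is assumed to be available, and the division by n reflects the usual approximation of the eigenvalues of the associated integral operator; the surrounding code is schematic.

    library(rARPACK)  # Qiu et al. [2014]

    # K: n x n Gram matrix of Gaussian-kernel evaluations (assumed available).
    n  <- nrow(K)
    H  <- diag(n) - matrix(1 / n, n, n)  # centering matrix
    Kc <- H %*% K %*% H                  # centered Gram matrix

    # Largest (up to) 200 eigenvalues, as used for n > 125.
    ev <- eigs_sym(Kc, k = min(200, n - 1), which = "LM")$values
    lambda <- ev[ev > 0] / n             # positive eigenvalues, scaled by n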

In Figure 4, the empirical null rejection probability is displayed for our proposed MMD method. We did not include the results for sample size 125 in the figure because type I error control was too poor: for example, with zero signal coordinates, the probability of rejection was 0.24 for bandwidth 1 and 0.33 for bandwidth 1/2. The adaptive bandwidth method performs comparably to the procedure that a priori fixes the bandwidth at 1/2. This observation is consistent with the fact that, across all signal levels, sample sizes, and Monte Carlo repetitions, the selected bandwidth was closely concentrated about 1/2: the minimal selected bandwidth was 0.49 and the maximal was 0.59. Among the fixed bandwidths considered, 1/2 and 1/4 seem best at the largest sample sizes (1000, 2000), with the tradeoff between the two being that a bandwidth of 1/4 increases power (substantially when the signal is 5) at the cost of inflating the type I error. At smaller sample sizes, the bandwidth of 1/4 yields unacceptably inflated type I error (0.4 at n = 250 and 0.15 at n = 500). Our adaptive bandwidth procedure appears to control the type I error well at moderate to large sample sizes (i.e., n ≥ 500). Overall, this simulation shows that our method indeed has increasing power as the sample size grows or as the number of coordinates j for which μj(1, W) – μj(0, W) is not almost surely equal to zero increases. The only sample size and signal level at which our adaptive bandwidth procedure appears to be outperformed by a fixed bandwidth is n = 2000: a fixed bandwidth of 1/4 attains the nominal type I error at this sample size and dramatically outperforms the adaptive bandwidth in power when the signal is 5. This discrepancy disappears when the signal is 10.

Fig. 4.

Probability of rejecting the null when testing the null hypothesis that μ(1, W) – μ(0, W) is almost surely equal to zero in Simulation 3.

7. Concluding remarks

We have presented a novel approach to testing whether two unknown functions are equal in distribution. Our proposal explicitly allows, and indeed encourages, the use of flexible, data-adaptive techniques for estimating these unknown functions as an intermediate step. Our approach is centered upon the notion of maximum mean discrepancy, as introduced in Gretton et al. [2006], since the MMD provides an elegant means of contrasting the distributions of these two unknown quantities. In their original paper, these authors showed that the MMD, which in their context tests whether two probability distributions are equal using n random draws from each distribution, can be estimated using a U- or V-statistic. Under the null hypothesis, this U- or V-statistic is degenerate and converges to the true parameter value at the fast n−1 rate; under the alternative, it converges at the standard n−1/2 rate. Because this parameter is a mean over a product of distributions from which the data were observed, it is not surprising that a U- or V-statistic yields a good estimate of the MMD. What is surprising is that we were able to construct an estimator with these same rates even when the null hypothesis involves unknown functions that can only be estimated at slower rates. To accomplish this, we used recent developments from the higher-order pathwise differentiability literature. Our simulation studies indicate that our asymptotic results are meaningful in finite samples, and that, in specific examples for which other methods exist, our methods generally perform at least as well as these established, tailor-made methods. Of course, the great appeal of our proposal is that it applies to a much wider class of problems.
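To make the connection with U- and V-statistics concrete, the following R sketch computes a naive plug-in V-statistic for the MMD with the Gaussian kernel, given hypothetical vectors Rhat and Shat of estimated function values at the observations. It deliberately omits the higher-order correction terms that our actual procedure requires, and is meant only to convey the form of the statistic.

    # Naive V-statistic for Psi = Phi^{RR} - 2 Phi^{RS} + Phi^{SS}, with
    # kernel k(u, v) = exp(-(u - v)^2). Rhat and Shat hold the estimated
    # values R_hat(O_i) and S_hat(O_i).
    mmd_vstat <- function(Rhat, Shat) {
      phi <- function(a, b) mean(exp(-outer(a, b, "-")^2))  # avg kernel value
      phi(Rhat, Rhat) - 2 * phi(Rhat, Shat) + phi(Shat, Shat)
    }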

In our simulation study, we adapted the median heuristic for selecting the Gaussian kernel bandwidth to our setting, in which R0 and S0 are unknown. In some settings, this adaptive procedure performed well compared to specifying a fixed bandwidth, though we also noted settings in which it underperformed. An advantage of the adaptive procedure over a fixed bandwidth (e.g., the unit bandwidth) is that the resulting test is invariant to a rescaling of the unknown functions R0 and S0. For the classical MMD setting in which R0 and S0 are known functions, Gretton et al. [2012b] showed that other bandwidth selection procedures can outperform the median heuristic. One such procedure selects the bandwidth to maximize an estimate of the power of the test of the null hypothesis of equality in distribution of R0(O) and S0(O), subject to a constraint on the estimated type I error. Extending these procedures to our setting, where R0 and S0 are unknown, is an important area for future research.
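Concretely, our adaptation applies the heuristic to the pooled estimated function values; a minimal sketch, again assuming vectors Rhat and Shat of estimated values of R0 and S0 at the observations:

    # Median-heuristic bandwidth computed from the pooled estimated function
    # values; pairwise absolute differences play the role that inter-point
    # distances play in the classical MMD setting.
    median_bandwidth <- function(Rhat, Shat) {
      pooled <- c(Rhat, Shat)
      median(as.numeric(dist(pooled)))
    }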

We conclude with several possible extensions of our method that may further increase its applicability and appeal.

  • (a)
    Although this condition is satisfied in all but one of our examples, requiring R and S to be in S can be somewhat restrictive. Nevertheless, it appears that this condition may be weakened by instead requiring membership in S*, the class of all parameters T for which there exist some M < ∞ and elements T1, T2, …, TM in S such that $T = \sum_{m=1}^{M} T_m$. While the results in our paper can be established in a similar manner for functions in this generalized class, the expressions for the involved gradients are quite a bit more complicated. Specifically, we find that, for $T, U \in S^*$ with $T = \sum_{m=1}^{M} T_m$ and $U = \sum_{\ell=1}^{L} U_\ell$, the quantity $\Gamma_P^{TU}(o_1, o_2)$ equals
    $$e^{-[T_P(o_1)-U_P(o_2)]^2} + \sum_{\ell=1}^{L} E_P\left\{2\,[T_P(o_1)-U_P(O)]\, e^{-[T_P(o_1)-U_P(O)]^2} \,\middle|\, X^{U_\ell} = x_2^{U_\ell}\right\} D_P^{U_\ell}(o_2) - \sum_{m=1}^{M} E_P\left\{2\,[T_P(O)-U_P(o_2)]\, e^{-[T_P(O)-U_P(o_2)]^2} \,\middle|\, X^{T_m} = x_1^{T_m}\right\} D_P^{T_m}(o_1) - \sum_{\ell=1}^{L}\sum_{m=1}^{M} E_{P^2}\left[\left\{4\,[T_P(O_1)-U_P(O_2)]^2 - 2\right\} e^{-[T_P(O_1)-U_P(O_2)]^2} \,\middle|\, X_1^{T_m} = x_1^{T_m},\, X_2^{U_\ell} = x_2^{U_\ell}\right] D_P^{T_m}(o_1)\, D_P^{U_\ell}(o_2).$$
    In particular, we note the need for conditional expectations with respect to $X^{R_m}$ and $X^{S_\ell}$ in the definition of Γ, which could render the implementation of our method more difficult. While we believe this extension is promising, its practicality remains to be investigated.
  • (b)
    While our paper focuses on univariate hypotheses, our results can be generalized to higher dimensions. Suppose that $P \mapsto R_P$ and $P \mapsto S_P$ are parameters mapping to $\mathbb{R}^d$-valued functions on O. The class $S^d$ of allowed such parameters can be defined similarly to S, with all original conditions applying componentwise. The MMD for the vector-valued parameters R and S using the Gaussian kernel is given by $\Psi_d(P) \equiv \Phi_d^{RR}(P) - 2\Phi_d^{RS}(P) + \Phi_d^{SS}(P)$, where for any $T, U \in S^d$ we set
    $$\Phi_d^{TU}(P) \equiv \iint e^{-\|T_P(o_1)-U_P(o_2)\|^2}\, dP(o_1)\, dP(o_2).$$
    It is then not difficult to show that, for any $T, U \in S^d$, $\Gamma_{d,P}^{TU}(o_1, o_2)$ is given by
    $$\left[2\,[T_P(o_1)-U_P(o_2)]'\,[D_P^U(o_2)-D_P^T(o_1)] + \tfrac{1}{2}\, D_P^T(o_1)'\left\{2\,[T_P(o_1)-U_P(o_2)][T_P(o_1)-U_P(o_2)]' - I_d\right\} D_P^U(o_2)\right] \times e^{-\|T_P(o_1)-U_P(o_2)\|^2},$$
    where $I_d$ denotes the d-dimensional identity matrix and $A'$ denotes the transpose of a given vector A. Using these objects, the method and results presented in this paper can be replicated in higher dimensions rather easily; a schematic implementation of the resulting plug-in statistic appears after this list.
  • (c)

    Our results can be used to develop confidence sets for infinite-dimensional parameters by test inversion. Consider a parameter T satisfying our conditions. For any fixed function f that does not depend on P, one can then test whether $R_0 \equiv T_0 - f$ is equal in distribution to zero. Under the conditions given in this paper, a 1 − α confidence set for T0 is given by the collection of all functions f for which we do not reject H0 at level α. The blip function from Example 2 is a particularly interesting example, since a confidence set for this parameter can be mapped into a confidence set for the sign of the blip function, i.e. the optimal individualized treatment strategy [Robins, 2004]. We would hope that the omnibus nature of the test implies that the confidence set does not contain functions f that are “far away” from T0, in contrast to a test that has no power against certain alternatives. Formalizing this claim is an area for future research.

  • (d)

    To improve upon our proposal for nonparametrically testing variable importance via the conditional mean function, as discussed in Section 5, it may be fruitful to consider the related Hilbert-Schmidt independence criterion [Gretton et al., 2005]. Higher-order pathwise differentiability may prove useful for estimating and drawing inference about this discrepancy measure.
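As noted in extension (b) above, the multivariate plug-in statistic is a direct vectorization of the univariate computation. The following R sketch computes $\Phi_d^{TU}$ and the multivariate MMD from hypothetical n × d matrices (That, Uhat, Rhat, Shat) of estimated function values; it is illustrative only and omits the higher-order corrections required by our full procedure.

    # Plug-in V-statistic for Phi_d^{TU}(P): the average of exp(-||T_i - U_j||^2)
    # over all pairs, where That and Uhat are n x d matrices of estimated values.
    phi_d <- function(That, Uhat) {
      n  <- nrow(That)
      sq <- outer(rowSums(That^2), rep(1, n)) +   # ||T_i||^2, constant in j
            outer(rep(1, n), rowSums(Uhat^2)) -   # ||U_j||^2, constant in i
            2 * That %*% t(Uhat)                  # cross terms T_i . U_j
      mean(exp(-sq))                              # sq[i, j] = ||T_i - U_j||^2
    }

    # Multivariate MMD estimate: Psi_d = Phi^{RR} - 2 Phi^{RS} + Phi^{SS}.
    psi_d <- function(Rhat, Shat) {
      phi_d(Rhat, Rhat) - 2 * phi_d(Rhat, Shat) + phi_d(Shat, Shat)
    }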


Acknowledgement

The authors thank Noah Simon for helpful discussions. Alex Luedtke was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program. Marco Carone was supported by a Genentech Endowed Professorship at the University of Washington. Mark van der Laan was supported by NIH grant R01 AI074345–06.

Footnotes

Supplementary Appendices

Supplementary Appendix A reviews first- and second-order pathwise differentiability. Supplementary Appendix B.1 contains U-process results from Nolan and Pollard [1987, 1988] that are useful in our context, and Supplementary Appendix B.2 contains an empirical process result used to establish the Donsker condition assumed under H1. Supplementary Appendix C contains proofs of the results in the main text.

Contributor Information

Alexander R. Luedtke, Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, WA, USA, aluedtke@fredhutch.org

Marco Carone, Department of Biostatistics, University of Washington, Seattle, WA, USA.

Mark J. van der Laan, Division of Biostatistics, University of California, Berkeley, Berkeley, CA, USA

References

  1. Berlinet A and Thomas-Agnan C. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer Science & Business Media, 2011.
  2. Carone M, Díaz I, and van der Laan MJ. Higher-order targeted minimum loss-based estimation. Technical report, Division of Biostatistics, University of California, Berkeley, 2014.
  3. Chakraborty B and Moodie EE. Statistical Methods for Dynamic Treatment Regimes. Springer, Berlin Heidelberg New York, 2013.
  4. Franz C. Discrete approximation of integral operators. Proceedings of the American Mathematical Society, 134(8):2437–2446, 2006.
  5. Gregory GG. Large sample theory for U-statistics and tests of fit. The Annals of Statistics, pages 110–123, 1977.
  6. Gretton A, Bousquet O, Smola A, and Schölkopf B. Measuring statistical dependence with Hilbert-Schmidt norms. In Algorithmic Learning Theory, pages 63–77. Springer, 2005.
  7. Gretton A, Borgwardt KM, Rasch M, Schölkopf B, and Smola AJ. A kernel method for the two-sample-problem. In Advances in Neural Information Processing Systems, pages 513–520, 2006.
  8. Gretton A, Fukumizu K, Harchaoui Z, and Sriperumbudur BK. A fast, consistent kernel two-sample test. In Advances in Neural Information Processing Systems, pages 673–681, 2009.
  9. Gretton A, Borgwardt KM, Rasch MJ, Schölkopf B, and Smola A. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012a.
  10. Gretton A, Sejdinovic D, Strathmann H, Balakrishnan S, Pontil M, Fukumizu K, and Sriperumbudur BK. Optimal kernel choice for large-scale two-sample tests. In Advances in Neural Information Processing Systems, pages 1205–1213, 2012b.
  11. Hayfield T and Racine JS. Nonparametric econometrics: the np package. Journal of Statistical Software, 27(5), 2008. URL http://www.jstatsoft.org/v27/i05/.
  12. Lavergne P, Maistre S, and Patilea V. A significance test for covariates in nonparametric regression. Electronic Journal of Statistics, 9:643–678, 2015.
  13. Nolan D and Pollard D. U-processes: rates of convergence. The Annals of Statistics, 15(2):780–799, 1987.
  14. Nolan D and Pollard D. Functional limit theorems for U-processes. The Annals of Probability, 16(3):1291–1298, 1988. doi: 10.1214/aop/1176991691.
  15. Pfanzagl J. Contributions to a General Asymptotic Statistical Theory. Springer, Berlin Heidelberg New York, 1982.
  16. Pfanzagl J. Asymptotic Expansions for General Statistical Models, volume 31. Springer-Verlag, 1985.
  17. Polley E and van der Laan MJ. SuperLearner: super learner prediction, 2013. URL http://cran.r-project.org/package=SuperLearner.
  18. Qiu Y, Mei J, and authors of the ARPACK library. rARPACK: R wrapper of ARPACK for large scale eigenvalue/vector problems, on both dense and sparse matrices, 2014. URL http://cran.r-project.org/package=rARPACK.
  19. Racine JS, Hart J, and Li Q. Testing the significance of categorical predictor variables in nonparametric regression models. Econometric Reviews, 25(4):523–544, 2006.
  20. Robins JM. Optimal structural nested models for optimal sequential decisions. In Lin DY and Heagerty P, editors, Proceedings of the Second Seattle Symposium in Biostatistics, volume 179, pages 189–326, 2004.
  21. Robins JM, Li L, Tchetgen E, and van der Vaart AW. Higher order influence functions and minimax estimation of non-linear functionals. In Essays in Honor of David A. Freedman, IMS Collections Probability and Statistics, pages 335–421. Springer, New York, 2008.
  22. Sejdinovic D, Sriperumbudur B, Gretton A, and Fukumizu K. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. The Annals of Statistics, 41(5):2263–2291, 2013.
  23. Steinwart I. On the influence of the kernel on the consistency of support vector machines. The Journal of Machine Learning Research, 2:67–93, 2002.
  24. van der Laan MJ and Rose S. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer, New York, 2011.
  25. van der Laan MJ, Polley E, and Hubbard A. Super learner. Statistical Applications in Genetics and Molecular Biology, 6(1):Article 25, 2007.
  26. van der Vaart AW. Higher order tangent spaces and influence functions. Statistical Science, 29(4):679–686, 2014.
  27. van der Vaart AW and Wellner JA. Weak Convergence and Empirical Processes. Springer, Berlin Heidelberg New York, 1996.
