Published in final edited form as: Ann Stat. 2014 May 20;42(2):413–468. doi: 10.1214/13-AOS1175

A SIGNIFICANCE TEST FOR THE LASSO

Richard Lockhart, Jonathan Taylor, Ryan J. Tibshirani, Robert Tibshirani

Abstract

In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables.

Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ²₁ distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ²₁ under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ₁ penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties—adaptivity and shrinkage—and its null distribution is tractable and asymptotically Exp(1).

Key words and phrases: Lasso, least angle regression, p-value, significance test

1. Introduction

We consider the usual linear regression setup, for an outcome vector y ∈ ℝ^n and matrix of predictor variables X ∈ ℝ^{n×p}:

$y = X\beta^* + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I),$  (1)

where β* ∈ ℝ^p are unknown coefficients to be estimated. [If an intercept term is desired, then we can still assume a model of the form (1) after centering y and the columns of X; see Section 2.2 for more details.] We focus on the lasso estimator [Tibshirani (1996), Chen, Donoho and Saunders (1998)], defined as

$\hat{\beta} = \operatorname*{argmin}_{\beta\in\mathbb{R}^p}\ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1,$  (2)

where λ ≥ 0 is a tuning parameter, controlling the level of sparsity in β̂. Here, we assume that the columns of X are in general position in order to ensure uniqueness of the lasso solution [this is quite a weak condition, to be discussed again shortly; see also Tibshirani (2013)].

There has been a considerable amount of recent work dedicated to the lasso problem, both in terms of computation and theory. A comprehensive summary of the literature in either category would be too long for our purposes here, so we instead give a short summary: for computational work, some relevant contributions are Friedman et al. (2007), Beck and Teboulle (2009), Friedman, Hastie and Tibshirani (2010), Becker, Bobin and Candès (2011), Boyd et al. (2011), Becker, Candès and Grant (2011); and for theoretical work see, for example, Greenshtein and Ritov (2004), Fuchs (2005), Donoho (2006), Candes and Tao (2006), Zhao and Yu (2006), Wainwright (2009), Candès and Plan (2009). Generally speaking, theory for the lasso is focused on bounding the estimation error ‖Xβ̂ − Xβ*‖₂² or ‖β̂ − β*‖₂², or ensuring exact recovery of the underlying model, supp(β̂) = supp(β*) [with supp(·) denoting the support function]; favorable results in both respects can be shown under the right assumptions on the generative model (1) and the predictor matrix X. Strong theoretical backing, as well as fast algorithms, have made the lasso a highly popular tool.

Yet, there are still major gaps in our understanding of the lasso as an estimation procedure. In many real applications of the lasso, a practitioner will undoubtedly seek some sort of inferential guarantees for his or her computed lasso model—but, generically, the usual constructs like p-values, confidence intervals, etc., do not exist for lasso estimates. There is a small but growing literature dedicated to inference for the lasso, and important progress has certainly been made, with many methods being based on resampling or data splitting; we review this work in Section 2.5. The current paper focuses on a significance test for lasso models that does not employ resampling or data splitting, but instead uses the full data set as given, and proposes a test statistic that has a simple and exact asymptotic null distribution.

Section 2 defines the problem that we are trying to solve, and gives the details of our proposal—the covariance test statistic. Section 3 considers an orthogonal predictor matrix X, in which case the statistic greatly simplifies. Here, we derive its Exp(1) asymptotic distribution using relatively simple arguments from extreme value theory. Section 4 treats a general (nonorthogonal) X, and under some regularity conditions, derives an Exp(1) limiting distribution for the covariance test statistic, but through a different method of proof that relies on discrete-time Gaussian processes. Section 5 empirically verifies convergence of the null distribution to Exp(1) over a variety of problem setups. Up until this point, we have assumed that the error variance σ2 is known; in Section 6, we discuss the case of unknown σ2. Section 7 gives some real data examples. Section 8 covers extensions to the elastic net, generalized linear models, and the Cox model for survival data. We conclude with a discussion in Section 9.

2. Significance testing in linear modeling

Classic theory for significance testing in linear regression operates on two fixed nested models. For example, if M and M ∪ {j} are fixed subsets of {1,…, p}, then to test the significance of the jth predictor in the model (with variables in) M ∪ {j}, one naturally uses the chi-squared test, which computes the drop in residual sum of squares (RSS) from regression on M ∪ {j} and M,

$R_j = (\mathrm{RSS}_M - \mathrm{RSS}_{M\cup\{j\}})/\sigma^2$  (3)

and compares this to a χ²₁ distribution. (Here, σ² is assumed to be known; when σ² is unknown, we use the sample variance in its place, which results in the F-test, equivalent to the t-test, for testing the significance of variable j.)

Often, however, one would like to run the same test for M and M ∪ {j} that are not fixed, but the outputs of an adaptive or greedy procedure. Unfortunately, adaptivity invalidates the use of a χ²₁ null distribution for the statistic (3). As a simple example, consider forward stepwise regression: starting with an empty model M = ∅, we enter predictors one at a time, at each step choosing the predictor j that gives the largest drop in residual sum of squares. In other words, forward stepwise regression chooses j at each step in order to maximize R_j in (3), over all j ∉ M. Since R_j follows a χ²₁ distribution under the null hypothesis for each fixed j, the maximum possible R_j will clearly be stochastically larger than χ²₁ under the null. Therefore, using a chi-squared test to evaluate the significance of a predictor entered by forward stepwise regression would be far too liberal (having type I error much larger than the nominal level). Figure 1(a) demonstrates this point by displaying the quantiles of R₁ in forward stepwise regression (the chi-squared statistic for the first predictor to enter) versus those of a χ²₁ variate, in the fully null case (when β* = 0). A test at the 5% level, for example, using the χ²₁ cutoff of 3.84, would have an actual type I error of about 39%.

FIG. 1. A simple example with n = 100 observations and p = 10 orthogonal predictors. All true regression coefficients are zero, β* = 0. On the left is a quantile–quantile plot, constructed over 1000 simulations, of the standard chi-squared statistic R₁ in (3), measuring the drop in residual sum of squares for the first predictor to enter in forward stepwise regression, versus the χ²₁ distribution. The dashed vertical line marks the 95% quantile of the χ²₁ distribution. The right panel shows a quantile–quantile plot of the covariance test statistic T₁ in (5) for the first predictor to enter in the lasso path, versus its asymptotic null distribution Exp(1). The covariance test explicitly accounts for the adaptive nature of lasso modeling, whereas the usual chi-squared test is not appropriate for adaptively selected models, for example, those produced by forward stepwise regression.
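The inflation shown in Figure 1(a) is easy to reproduce numerically. Below is a minimal Python sketch of such a simulation (the settings n = 100, p = 10 and the χ²₁ cutoff 3.84 come from the discussion above; the orthonormalized Gaussian design, seed, and number of repetitions are our own illustrative choices): with orthonormal predictors, the first forward stepwise drop in RSS is the maximum of p independent χ²₁ variates.

```python
import numpy as np

# Rough sketch of the simulation behind Figure 1(a): under the global null
# (beta* = 0), the first forward stepwise drop in RSS, R_1 = max_j R_j, is
# stochastically much larger than a chi^2_1 variate.
rng = np.random.default_rng(0)
n, p, sigma, nsim = 100, 10, 1.0, 1000
X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal predictors
R1 = np.empty(nsim)
for i in range(nsim):
    y = sigma * rng.standard_normal(n)
    # with orthonormal columns, the drop in RSS for predictor j is (X_j^T y)^2
    R1[i] = np.max((X.T @ y) ** 2) / sigma**2
print(np.mean(R1 > 3.84))   # rejection rate at the chi^2_1 5% cutoff
```

With p = 10 the printed rate should be close to 1 − 0.95¹⁰ ≈ 0.40, in line with the roughly 39% type I error quoted above, rather than the nominal 5%.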

The failure of standard testing methodology when applied to forward stepwise regression is not an anomaly—in general, there seems to be no direct way to carry out the significance tests designed for fixed linear models in an adaptive setting. Our aim is hence to provide a (new) significance test for the predictor variables chosen adaptively by the lasso, which we describe next.

2.1. The covariance test statistic

The test statistic that we propose here is constructed from the lasso solution path, that is, the solution β̂(λ) in (2) as a function of the tuning parameter λ ∈ [0, ∞). The lasso path can be computed by the well-known LARS algorithm of Efron et al. (2004) [see also Osborne, Presnell and Turlach (2000a, 2000b)], which traces out the solution as λ decreases from ∞ to 0. Note that when rank(X) < p there are possibly many lasso solutions at each λ and, therefore, possibly many solution paths; we assume that the columns of X are in general position, implying that there is a unique lasso solution at each λ > 0, and hence a unique path. The assumption that X has columns in general position is a very weak one [much weaker, e.g., than assuming that rank(X) = p]. For example, if the entries of X are drawn from a continuous probability distribution on ℝ^{n×p}, then the columns of X are almost surely in general position, and this is true regardless of the sizes of n and p; see Tibshirani (2013).

Before defining our statistic, we briefly review some properties of the lasso path.

  • The path β̂(λ) is a continuous and piecewise linear function of λ, with knots (changes in slope) at values λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_r ≥ 0 (these knots depend on y, X).

  • At λ = ∞, the solution β̂(∞) has no active variables (i.e., all variables have zero coefficients); for decreasing λ, each knot λ_k marks the entry or removal of some variable from the current active set (i.e., its coefficient becomes nonzero or zero, resp.). Therefore, the active set, and also the signs of active coefficients, remain constant in between knots.

  • At any point λ in the path, the corresponding active set A = supp(β̂(λ)) of the lasso solution indexes a linearly independent set of predictor variables, that is, rank(X_A) = |A|, where we use X_A to denote the columns of X in A.

  • For a general X, the number of knots in the lasso path is bounded by 3^p (but in practice this bound is usually very loose). This bound comes from the following realization: if at some knot λ_k, the active set is A = supp(β̂(λ_k)) and the signs of active coefficients are s_A = sign(β̂_A(λ_k)), then the active set and signs cannot again be A and s_A at some other knot λ ≠ λ_k. This in particular means that once a variable enters the active set, it cannot immediately leave the active set at the next step.

  • For a matrix X satisfying the positive cone condition (a restrictive condition that covers, e.g., orthogonal matrices), there are no variables removed from the active set as λ decreases and, therefore, the number of knots is p.

We can now precisely define the problem that we are trying to solve: at a given step in the lasso path (i.e., at a given knot), we consider testing the significance of the variable that enters the active set. To this end, we propose a test statistic defined at the kth step of the path.

First, we define some needed quantities. Let A be the active set just before λ_k, and suppose that predictor j enters at λ_k. Denote by β̂(λ_{k+1}) the solution at the next knot in the path λ_{k+1}, using predictors A ∪ {j}. Finally, let β̃_A(λ_{k+1}) be the solution of the lasso problem using only the active predictors X_A, at λ = λ_{k+1}. To be perfectly explicit,

$\tilde{\beta}_A(\lambda_{k+1}) = \operatorname*{argmin}_{\beta_A\in\mathbb{R}^{|A|}}\ \tfrac{1}{2}\|y - X_A\beta_A\|_2^2 + \lambda_{k+1}\|\beta_A\|_1.$  (4)

We propose the covariance test statistic defined by

$T_k = \bigl(\langle y, X\hat{\beta}(\lambda_{k+1})\rangle - \langle y, X_A\tilde{\beta}_A(\lambda_{k+1})\rangle\bigr)/\sigma^2.$  (5)

Intuitively, the covariance statistic in (5) is a function of the difference between Xβ̂ and X_Aβ̃_A, the fitted values given by incorporating the jth predictor into the current active set, and leaving it out, respectively. These fitted values are parameterized by λ, and so one may ask: at which value of λ should this difference be evaluated? Well, note first that β̃_A(λ_k) = β̂_A(λ_k), that is, the solution of the reduced problem at λ_k is simply that of the full problem, restricted to the active set A (as verified by the KKT conditions). Clearly then, this means that we cannot evaluate the difference at λ = λ_k, as the jth variable has a zero coefficient upon entry at λ_k, and hence

$X\hat{\beta}(\lambda_k) = X_A\hat{\beta}_A(\lambda_k) = X_A\tilde{\beta}_A(\lambda_k).$

Indeed, the natural choice for the tuning parameter in (5) is λ = λ_{k+1}: this allows the jth coefficient to have its fullest effect on the fit Xβ̂ before the entry of the next variable at λ_{k+1} (or possibly, the deletion of a variable from A at λ_{k+1}).

Secondly, one may also ask about the particular choice of function of the difference Xβ̂(λ_{k+1}) − X_Aβ̃_A(λ_{k+1}). The covariance statistic in (5) uses an inner product of this difference with y, which can be roughly thought of as an (uncentered) covariance, hence explaining its name. At a high level, the larger the covariance of y with Xβ̂ compared to that with X_Aβ̃_A, the more important the role of variable j in the proposed model A ∪ {j}. There certainly may be other functions that would seem appropriate here, but the covariance form in (5) has a distinctive advantage: this statistic admits a simple and exact asymptotic null distribution. In Sections 3 and 4, we show that under the null hypothesis that the current lasso model contains all truly active variables, A ⊇ supp(β*),

$T_k \stackrel{d}{\to} \mathrm{Exp}(1),$

that is, T_k is asymptotically distributed as a standard exponential random variable, given reasonable assumptions on X and the magnitudes of the nonzero true coefficients. [In some cases, e.g., when we have a strict inclusion A ⊋ supp(β*), the use of an Exp(1) null distribution is actually conservative, because the limiting distribution of T_k is stochastically smaller than Exp(1).] In the above limit, we are considering both n, p → ∞; in Section 4, we allow for the possibility p > n, the high-dimensional case.

See Figure 1(b) for a quantile–quantile plot of T1 versus an Exp(1) variate for the same fully null example (β* = 0) used in Figure 1(a); this shows that the weak convergence to Exp(1) can be quite fast, as the quantiles are decently matched even for p = 10. Before proving this limiting distribution in Sections 3 (for an orthogonal X) and 4 (for a general X), we give an example of its application to real data, and discuss issues related to practical usage. We also derive useful alternative expressions for the statistic, present a connection to degrees of freedom, review related work, and finally, discuss the null hypothesis in more detail.
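To make the definition concrete, here is a rough computational sketch of T_k in Python; it is not the authors' implementation, and simply evaluates the two lasso fits in (5) with scikit-learn, whose lasso objective is (1/(2n))‖y − Xβ‖₂² + α‖β‖₁, so that α = λ/n. The sketch assumes that σ² is known, that k indexes the knots as in the paper (k = 1, 2, …), and that variables only enter (and never leave) the active set over the first k + 1 steps.

```python
import numpy as np
from sklearn.linear_model import lars_path, Lasso

def covariance_test_stat(X, y, k, sigma2):
    """Covariance statistic T_k in (5) at the k-th knot of the lasso path.
    Sketch only: assumes no variables are deleted through step k + 1."""
    n = X.shape[0]
    alphas, _, coefs = lars_path(X, y, method="lasso")
    lam = alphas * n                        # knots lambda_1 >= lambda_2 >= ...
    A = np.flatnonzero(coefs[:, k - 1])     # active set just before lambda_k
    lam_next = lam[k]                       # lambda_{k+1}
    # full lasso fit at lambda_{k+1}
    full = Lasso(alpha=lam_next / n, fit_intercept=False,
                 tol=1e-10, max_iter=100000).fit(X, y)
    fit_full = X @ full.coef_
    # reduced lasso fit on X_A at lambda_{k+1}
    if A.size > 0:
        red = Lasso(alpha=lam_next / n, fit_intercept=False,
                    tol=1e-10, max_iter=100000).fit(X[:, A], y)
        fit_red = X[:, A] @ red.coef_
    else:
        fit_red = np.zeros(n)
    return (y @ fit_full - y @ fit_red) / sigma2
```

Under the Exp(1) approximation, the corresponding p-value is then exp(−T_k).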

2.2. Prostate cancer data example and practical issues

We consider a training set of 67 observations and 8 predictors, the goal being to predict log of the PSA level of men who had surgery for prostate cancer. For more details, see Hastie, Tibshirani and Friedman (2008) and the references therein. Table 1 shows the results of forward stepwise regression and the lasso. Both methods entered the same predictors in the same order. The forward stepwise p-values are smaller than the lasso p-values, and would enter four predictors at level 0.05. The latter would enter only one or maybe two predictors. However, we know that the forward stepwise p-values are inaccurate, as they are based on a null distribution that does not account for the adaptive choice of predictors. We now make several remarks.

Table 1.

Forward stepwise and lasso applied to the prostate cancer data example. The error variance is estimated by σ̂², the MSE of the full model. Forward stepwise regression p-values are based on comparing the drop in residual sum of squares (divided by σ̂²) to an F(1, n − p) distribution (using χ²₁ instead produced slightly smaller p-values). The lasso p-values use a simple modification of the covariance test (5) for unknown variance, given in Section 6. All p-values are rounded to 3 decimal places

Step Predictor entered Forward stepwise Lasso
1 lcavol 0.000 0.000
2 lweight 0.000 0.052
3 svi 0.041 0.174
4 lbph 0.045 0.929
5 pgg45 0.226 0.353
6 age 0.191 0.650
7 lcp 0.065 0.051
8 gleason 0.883 0.978

Remark 1

The above example implicitly assumed that one might stop entering variables into the model when the computed p-value rose above some threshold. More generally, our proposed test statistic and associated p-values could be used as the basis for multiple testing and false discovery rate control methods for this problem; we leave this to future work.

Remark 2

In the example, the lasso entered a predictor into the active set at each step. For a general X, however, a given predictor variable may enter the active set more than once along the lasso path, since it may leave the active set at some point. In this case, we treat each entry as a separate problem. Our test is specific to a step in the path, and not to a predictor variable at large.

Remark 3

For the prostate cancer data set, it is important to include an intercept in the model. To accommodate this, we ran the lasso on centered y and column-centered X (which is equivalent to including an unpenalized intercept term in the lasso criterion), and then applied the covariance test (with the centered data). In general, centering y and the columns of X allows us to account for the effect of an intercept term, and still use a model of the form (1). From a theoretical perspective, this centering step creates a weak dependence between the components of the error vector ε ∈ ℝ^n. If originally we assumed i.i.d. errors, ε_i ~ N(0, σ²), then after centering y and the columns of X, our new errors are of the form ε̃_i = ε_i − ε̄, where ε̄ = Σ_{j=1}^n ε_j/n. It is easy to see that these new errors are correlated:

$\operatorname{Cov}(\tilde{\varepsilon}_i, \tilde{\varepsilon}_j) = -\sigma^2/n \quad \text{for } i\neq j.$

One might imagine that such correlation would cause problems for our theory in Sections 3 and 4, which assumes i.i.d. normal errors in the model (1). However, a careful look at the arguments in these sections reveals that the only dependence on y is through X^T y, the inner products of y with the columns of X. Furthermore,

$\operatorname{Cov}(X_i^T\tilde{\varepsilon},\, X_j^T\tilde{\varepsilon}) = \sigma^2 X_i^T\Bigl(I - \tfrac{1}{n}\mathbb{1}\mathbb{1}^T\Bigr)X_j = \sigma^2 X_i^T X_j \quad \text{for all } i, j,$

which is the same as it would have been without centering (here 𝟙𝟙^T is the matrix of all 1s, and we used that the columns of X are centered). Therefore, our arguments in Sections 3 and 4 apply equally well to centered data, and centering has no effect on the asymptotic distribution of T_k.

Remark 4

By design, the covariance test is applied in a sequential manner, estimating p-values for each predictor variable as it enters the model along the lasso path. A more difficult problem is to test the significance of any of the active predictors in a model fit by the lasso, at some arbitrary value of the tuning parameter λ. We discuss this problem briefly in Section 9.

2.3. Alternate expressions for the covariance statistic

Here, we derive two alternate forms for the covariance statistic in (5). The first lends some insight into the role of shrinkage, and the second is helpful for the convergence results that we establish in Sections 3 and 4. We rely on some basic properties of lasso solutions; see, for example, Tibshirani and Taylor (2012), Tibshirani (2013). To remind the reader, we are assuming that X has columns in general position.

For any fixed λ, if the lasso solution has active set A = supp(β̂(λ)) and signs s_A = sign(β̂_A(λ)), then it can be written explicitly (over active variables) as

$\hat{\beta}_A(\lambda) = (X_A^T X_A)^{-1}X_A^T y - \lambda\,(X_A^T X_A)^{-1}s_A.$

In the above expression, the first term (X_A^T X_A)^{-1}X_A^T y simply gives the regression coefficients of y on the active variables X_A, and the second term λ(X_A^T X_A)^{-1}s_A can be thought of as a shrinkage term, shrinking the values of these coefficients toward zero. Further, the lasso fitted value at λ is

$X\hat{\beta}(\lambda) = P_A y - \lambda\,(X_A^T)^+ s_A,$  (6)

where P_A = X_A(X_A^T X_A)^{-1}X_A^T denotes the projection onto the column space of X_A, and (X_A^T)^+ = X_A(X_A^T X_A)^{-1} is the (Moore–Penrose) pseudoinverse of X_A^T.
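The two pieces of this closed form are easy to compute directly. The following short sketch (a hypothetical helper, not taken from the paper) returns the active-block coefficients from the formula above, given the active set A (a list of column indices) and the sign vector s_A; it is only valid when A and s_A really are the active set and signs at the given λ.

```python
import numpy as np

def lasso_solution_on_active_set(X, y, lam, A, s_A):
    """beta_A(lambda) = (X_A^T X_A)^{-1} X_A^T y - lam (X_A^T X_A)^{-1} s_A."""
    XA = X[:, A]
    G = XA.T @ XA
    ls_part = np.linalg.solve(G, XA.T @ y)                      # least squares part
    shrink = lam * np.linalg.solve(G, np.asarray(s_A, float))   # shrinkage term
    return ls_part - shrink
```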

Using the representation (6) for the fitted values, we can derive our first alternate expression for the covariance statistic in (5). If A and sA are the active set and signs just before the knot λk, and j is the variable added to the active set at λk, with sign s upon entry, then by (6),

$X\hat{\beta}(\lambda_{k+1}) = P_{A\cup\{j\}}\,y - \lambda_{k+1}(X_{A\cup\{j\}}^T)^+ s_{A\cup\{j\}},$

where s_{A∪{j}} = sign(β̂_{A∪{j}}(λ_{k+1})). We can equivalently write s_{A∪{j}} = (s_A, s), the concatenation of s_A and the sign s of the jth coefficient when it entered (as no sign changes could have occurred inside of the interval [λ_{k+1}, λ_k], by definition of the knots). Let us assume for the moment that the solution of the reduced lasso problem (4) at λ_{k+1} has all variables active and s_A = sign(β̃_A(λ_{k+1}))—remember, this holds for the reduced problem at λ_k, and we will return to this assumption shortly. Then, again by (6),

$X_A\tilde{\beta}_A(\lambda_{k+1}) = P_A y - \lambda_{k+1}(X_A^T)^+ s_A,$

and plugging the above two expressions into (5),

$T_k = y^T\bigl(P_{A\cup\{j\}} - P_A\bigr)y/\sigma^2 - \lambda_{k+1}\cdot y^T\bigl((X_{A\cup\{j\}}^T)^+ s_{A\cup\{j\}} - (X_A^T)^+ s_A\bigr)/\sigma^2.$  (7)

Note that the first term above is y^T(P_{A∪{j}} − P_A)y/σ² = (‖y − P_A y‖₂² − ‖y − P_{A∪{j}} y‖₂²)/σ², which is exactly the chi-squared statistic for testing the significance of variable j, as in (3). Hence, if A, j were fixed, then without the second term, T_k would have a χ²₁ distribution under the null. But of course A, j are not fixed, and so much like we saw previously with forward stepwise regression, the first term in (7) will be generically larger than χ²₁, because j is chosen adaptively based on its inner product with the current lasso residual vector. Interestingly, the second term in (7) adjusts for this adaptivity: with this term, which is composed of the shrinkage factors in the solutions of the two relevant lasso problems (on X and X_A), we prove in the coming sections that T_k has an asymptotic Exp(1) null distribution. Therefore, the presence of the second term restores the (asymptotic) mean of T_k to 1, which is what it would have been if A, j were fixed and the second term were missing. In short, adaptivity and shrinkage balance each other out.

This insight aside, the form (7) of the covariance statistic leads to a second representation that will be useful for the theoretical work in Sections 3 and 4. We call this the knot form of the covariance statistic, described in the next lemma.

Lemma 1

Let A be the active set just before the kth step in the lasso path, that is, A = supp(β̂(λ_k)), with λ_k being the kth knot. Also, let s_A denote the signs of the active coefficients, s_A = sign(β̂_A(λ_k)), j be the predictor that enters the active set at λ_k, and s be its sign upon entry. Then, assuming that

$s_A = \operatorname{sign}\bigl(\tilde{\beta}_A(\lambda_{k+1})\bigr),$  (8)

or in other words, all coefficients are active in the reduced lasso problem (4) at λ_{k+1} and have signs s_A, we have

$T_k = C(A, s_A, j, s)\cdot\lambda_k(\lambda_k - \lambda_{k+1})/\sigma^2,$  (9)

where

$C(A, s_A, j, s) = \bigl\|(X_{A\cup\{j\}}^T)^+ s_{A\cup\{j\}} - (X_A^T)^+ s_A\bigr\|_2^2$

and sA∪{j} is the concatenation of sA and s.

The proof starts with expression (7), and arrives at (9) through simple algebraic manipulations. We defer it until Appendix A.1.

When does the condition (8) hold? This was a key assumption behind both of the forms (7) and (9) for the statistic. We first note that the solution β̃_A of the reduced lasso problem has signs s_A at λ_k, so it will have the same signs s_A at λ_{k+1} provided that no variables are deleted from the active set in the solution path β̃_A(λ) for λ ∈ [λ_{k+1}, λ_k]. Therefore, assumption (8) holds:

  • When X satisfies the positive cone condition (a restrictive condition that covers, e.g., orthogonal matrices), because no variables ever leave the active set in this case. In fact, for X orthogonal, it is straightforward to check that C(A, s_A, j, s) = 1, so T_k = λ_k(λ_k − λ_{k+1})/σ².

  • When k = 1 (we are testing the first variable to enter), as a variable cannot leave the active set right after it has entered. If k = 1 and X has unit normed columns, ‖X_i‖₂ = 1 for i = 1,…, p, then we again have C(A, s_A, j, s) = 1 (note that A = ∅), so T₁ = λ₁(λ₁ − λ₂)/σ².

  • When s_A = sign((X_A)^+y), that is, s_A contains the signs of the least squares coefficients on X_A, because the same active set and signs cannot appear at two different knots in the lasso path (applied here to the reduced lasso problem on X_A).

The first and second scenarios are considered in Sections 3 and 4.1, respectively. The third scenario is actually somewhat general and occurs, for example, when s_A = sign((X_A)^+y) = sign(β*_A); in this case, both the lasso and least squares on X_A recover the signs of the true coefficients. Section 4.2 studies the general X and k ≥ 1 case, wherein this third scenario is important.
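For concreteness, the knot form (9) can also be evaluated numerically from a LARS computation of the path. The sketch below uses scikit-learn's lars_path (whose alphas equal λ/n under the parameterization used here) and assumes, as in Lemma 1, that condition (8) holds and that no variables are deleted through step k + 1; the function name and indexing conventions are our own.

```python
import numpy as np
from sklearn.linear_model import lars_path

def covariance_stat_knot_form(X, y, k, sigma2):
    """T_k = C(A, s_A, j, s) * lambda_k * (lambda_k - lambda_{k+1}) / sigma^2, eq. (9)."""
    n = X.shape[0]
    alphas, _, coefs = lars_path(X, y, method="lasso")
    lam = alphas * n
    beta_k, beta_k1 = coefs[:, k - 1], coefs[:, k]   # solutions at lambda_k, lambda_{k+1}
    A = np.flatnonzero(beta_k)        # active set just before lambda_k
    Aj = np.flatnonzero(beta_k1)      # A together with the variable entering at lambda_k
    pinv_T = lambda M: M @ np.linalg.inv(M.T @ M)    # (M^T)^+ = M (M^T M)^{-1}
    v = pinv_T(X[:, Aj]) @ np.sign(beta_k1[Aj])
    if A.size > 0:
        v = v - pinv_T(X[:, A]) @ np.sign(beta_k[A])
    C = v @ v
    return C * lam[k - 1] * (lam[k - 1] - lam[k]) / sigma2
```

For unit normed X and k = 1 this reduces to λ₁(λ₁ − λ₂)/σ², and up to solver tolerance it should agree with the inner-product form (5).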

2.4. Connection to degrees of freedom

There is an interesting connection between the covariance statistic in (5) and the degrees of freedom of a fitting procedure. In the regression setting (1), for an estimate ŷ [which we think of as a fitting procedure ŷ = ŷ(y)], its degrees of freedom is typically defined [Efron (1986)] as

$\operatorname{df}(\hat{y}) = \frac{1}{\sigma^2}\sum_{i=1}^n \operatorname{Cov}(y_i, \hat{y}_i).$  (10)

In words, df(ŷ) sums the covariances of each observation yi with its fitted value ŷi. Hence, the more adaptive a fitting procedure, the higher this covariance, and the greater its degrees of freedom. The covariance test evaluates the significance of adding the jth predictor via something loosely like a sample version of degrees of freedom, across two models: that fit on A ∪ {j}, and that on A. This was more or less the inspiration for the current work.

Using the definition (10), one can reason [and confirm by simulation, just as in Figure 1(a)] that with k predictors entered into the model, forward stepwise regression has used substantially more than k degrees of freedom. But something quite remarkable happens when we consider the lasso: for a model containing k nonzero coefficients, the degrees of freedom of the lasso fit is equal to k (either exactly or in expectation, depending on the assumptions) [Efron et al. (2004), Zou, Hastie and Tibshirani (2007), Tibshirani and Taylor (2012)]. Why does this happen? Roughly speaking, it is the same adaptivity versus shrinkage phenomenon at play. [Recall our discussion in the last section following the expression (7) for the covariance statistic.] The lasso adaptively chooses the active predictors, which costs extra degrees of freedom; but it also shrinks the nonzero coefficients (relative to the usual least squares estimates), which decreases the degrees of freedom just the right amount, so that the total is simply k.
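This identity can be checked directly by Monte Carlo, estimating the covariances in (10) over repeated draws of y at a fixed λ. The sketch below (all settings, including λ, are arbitrary illustrative choices) compares the estimated degrees of freedom of the lasso fit with the average number of nonzero coefficients; the two should roughly agree, whereas the analogous estimate for k steps of forward stepwise regression would exceed k.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, sigma, lam, nsim = 50, 10, 1.0, 5.0, 2000     # arbitrary settings
X = rng.standard_normal((n, p))
Y = sigma * rng.standard_normal((nsim, n))          # global null: beta* = 0
fits, nonzeros = np.zeros((nsim, n)), np.zeros(nsim)
for i in range(nsim):
    m = Lasso(alpha=lam / n, fit_intercept=False, max_iter=10000).fit(X, Y[i])
    fits[i] = X @ m.coef_
    nonzeros[i] = np.count_nonzero(m.coef_)
# df(yhat) = (1/sigma^2) sum_i Cov(y_i, yhat_i), estimated by sample covariances
df_hat = sum(np.cov(Y[:, i], fits[:, i])[0, 1] for i in range(n)) / sigma**2
print(df_hat, nonzeros.mean())   # these two numbers should be roughly equal
```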

2.5. Related work

There is quite a lot of recent work related to the proposal of this paper. Wasserman and Roeder (2009) propose a procedure for variable selection and p-value estimation in high-dimensional linear models based on sample splitting, and this idea was extended by Meinshausen, Meier and Bühlmann (2009). Meinshausen and Bühlmann (2010) propose a generic method using resampling called “stability selection,” which controls the expected number of false positive variable selections. Minnier, Tian and Cai (2011) use perturbation resampling-based procedures to approximate the distribution of a general class of penalized parameter estimates. One big difference with the work here: we propose a statistic that utilizes the data as given and does not employ any resampling or sample splitting.

Zhang and Zhang (2014) derive confidence intervals for contrasts of high-dimensional regression coefficients, by replacing the usual score vector with the residual from a relaxed projection (i.e., the residual from sparse linear regression). Bühlmann (2013) constructs p-values for coefficients in high-dimensional regression models, starting with ridge estimation and then employing a bias correction term that uses the lasso. Even more recently, van de Geer and Bühlmann (2013), Javanmard and Montanari (2013a, 2013b) all present approaches for debiasing the lasso estimate based on estimates of the inverse covariance matrix of the predictors. (The latter work focuses on the special case of a predictor matrix X with i.i.d. Gaussian rows; the first two consider a general matrix X.) These debiased lasso estimates are asymptotically normal, which allows one to compute p-values both marginally for an individual coefficient, and simultaneously for a group of coefficients. All of the work mentioned in the present paragraph provides a way to make inferential statements about preconceived predictor variables of interest (or preconceived groups of interest); this is in contrast to our work, which instead deals directly with variables that have been adaptively selected by the lasso procedure. We discuss this next.

2.6. What precisely is the null hypothesis?

The referees of a preliminary version of this manuscript expressed some confusion with regard to the null distribution considered by the covariance test. Given a fixed number of steps k ≥ 1 along the lasso path, the covariance test examines the set of variables A selected by the lasso before the kth step (i.e., A is the current active set not including the variable to be added at the kth step). In particular, the null distribution being tested is

$H_0 : A \supseteq \operatorname{supp}(\beta^*),$  (11)

where β* is the true underlying coefficient vector in the model (1). For k = 1, we have A = ∅ (no variables are selected before the first step), so this reduces to a test of the global null hypothesis: β* = 0. For k > 1, the set A is random (it depends on y), and hence the null hypothesis in (11) is itself a random event. This makes the covariance test a conditional hypothesis test beyond the first step in the path, as the null hypothesis that it considers is indeed a function of the observed data. Statements about its null distribution must therefore be made conditional on the event that A ⊇ supp(β*), which is precisely what is done in Sections 3.2 and 4.2.

Compare the null hypothesis in (11) to a null hypothesis of the form

$H_0 : S\cap\operatorname{supp}(\beta^*) = \varnothing,$  (12)

where S ⊆ {1,…, p} is a fixed subset. The latter hypothesis, in (12), describes the setup considered by Zhang and Zhang (2014), Bühlmann (2013), van de Geer and Bühlmann (2013), Javanmard and Montanari (2013a, 2013b). At face value, the hypotheses (11) and (12) may appear similar [the test in (11) looks just like that in (12) with S = {1,…, p} \ A], but they are fundamentally very different. The difference is that the null hypothesis in (11) is random, whereas that in (12) is fixed; this makes the covariance test a conditional hypothesis test, while the tests constructed in all of the aforementioned work are traditional (unconditional) hypothesis tests. It should be made clear that the goal of our work and these works also differ. Our test examines an adaptive subset of variables A deemed interesting by the lasso procedure; for such a goal, it seems necessary to consider a random null hypothesis, as theory designed for tests of fixed hypotheses would not be valid here. The main goal of Zhang and Zhang (2014), Bühlmann (2013), van de Geer and Bühlmann (2013), Javanmard and Montanari (2013a, 2013b), it appears, is to construct a new set of variables, say Ã, based on testing the hypotheses in (12) with S = {j} for j = 1,…, p. Though the construction of this new set Ã may have started from a lasso estimate, it need not be true that Ã matches the lasso active set A, and ultimately it is this new set Ã (and inferential statements concerning Ã) that these authors consider the point of interest.

3. An orthogonal predictor matrix X

We examine the special case of an orthogonal predictor matrix X, that is, one that satisfies XTX = I. Even though the results here can be seen as special cases of those for a general X in Section 4, the arguments in the current orthogonal X case rely on relatively straightforward extreme value theory and are hence much simpler than their general X counterparts (which analyze the knots in the lasso path via Gaussian process theory). Furthermore, the Exp(1) limiting distribution for the covariance statistic translates in the orthogonal case to a few interesting and previously unknown (as far as we can tell) results on the order statistics of independent standard χ1 variates. For these reasons, we discuss the orthogonal X case in detail.

As noted in the discussion following Lemma 1 (see the first point), for an orthogonal X, we know that the covariance statistic for testing the entry of the variable at step k in the lasso path is

$T_k = \lambda_k(\lambda_k - \lambda_{k+1})/\sigma^2.$

Again using orthogonality, we rewrite ‖y − Xβ‖₂² = ‖X^T y − β‖₂² + C for a constant C (not depending on β) in the criterion in (2), and then we can see that the lasso solution at any given value of λ has the closed form

$\hat{\beta}_j(\lambda) = S_\lambda(X_j^T y), \qquad j = 1, \ldots, p,$

where X1,…, Xp are columns of X, and Sλ : ℝ → ℝ is the soft-thresholding function,

$S_\lambda(x) = \begin{cases} x - \lambda, & \text{if } x > \lambda,\\ 0, & \text{if } -\lambda \le x \le \lambda,\\ x + \lambda, & \text{if } x < -\lambda. \end{cases}$

Letting U_j = X_j^T y, j = 1,…, p, the knots in the lasso path are simply the values of λ at which the coefficients become nonzero (i.e., cease to be thresholded),

$\lambda_1 = |U_{(1)}|, \quad \lambda_2 = |U_{(2)}|, \quad \ldots, \quad \lambda_p = |U_{(p)}|,$

where |U(1)| ≥ |U(2)| ≥ … ≥ |U(p)| are the order statistics of |U1|,…,|Up| (somewhat of an abuse of notation). Therefore,

$T_k = |U_{(k)}|\bigl(|U_{(k)}| - |U_{(k+1)}|\bigr)/\sigma^2.$
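In the orthogonal case the whole sequence of covariance statistics therefore requires nothing more than sorting the inner products U_j = X_j^T y. A minimal sketch (assuming X has exactly orthonormal columns and σ² is known):

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding S_lambda(x)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def orthogonal_covariance_stats(X, y, sigma2):
    """For orthonormal X, the knots are the sorted |U_(1)| >= |U_(2)| >= ...,
    so T_k = |U_(k)| (|U_(k)| - |U_(k+1)|) / sigma^2 for k = 1, ..., p - 1."""
    U = np.sort(np.abs(X.T @ y))[::-1]
    return U[:-1] * (U[:-1] - U[1:]) / sigma2
```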

Next, we study the special case k = 1, the test for the first predictor to enter the active set along the lasso path. We then examine the case k ≥ 1, the test at a general step in the lasso path.

3.1. The first step, k = 1

Consider the covariance test statistic for the first predictor to enter the active set, that is, for k = 1,

$T_1 = |U_{(1)}|\bigl(|U_{(1)}| - |U_{(2)}|\bigr)/\sigma^2.$

We are interested in the distribution of T1 under the null hypothesis; since we are testing the first predictor to enter, this is

$H_0 : y \sim N(0, \sigma^2 I).$

Under the null, U_1,…, U_p are i.i.d., U_j ~ N(0, σ²), and so |U_1|/σ,…, |U_p|/σ follow a χ₁ distribution (absolute value of a standard Gaussian). That T₁ has an asymptotic Exp(1) null distribution is now given by the next result.

Lemma 2

Let V₁ ≥ V₂ ≥ ⋯ ≥ V_p be the order statistics of an independent sample of χ₁ variates (i.e., they are the sorted absolute values of an independent sample of standard Gaussian variates). Then

$V_1(V_1 - V_2) \stackrel{d}{\to} \mathrm{Exp}(1) \quad \text{as } p\to\infty.$

This lemma reveals a remarkably simple limiting distribution for the largest of independent χ1 random variables times the gap between the largest two; we skip its proof, as it is a special case of the following generalization.

Lemma 3

If V₁ ≥ V₂ ≥ ⋯ ≥ V_p are the order statistics of an independent sample of χ₁ variates, then for any fixed k ≥ 1,

$\bigl(V_1(V_1 - V_2),\ V_2(V_2 - V_3),\ \ldots,\ V_k(V_k - V_{k+1})\bigr) \stackrel{d}{\to} \bigl(\mathrm{Exp}(1),\ \mathrm{Exp}(1/2),\ \ldots,\ \mathrm{Exp}(1/k)\bigr) \quad \text{as } p\to\infty,$

where the limiting distribution (on the right-hand side above) has independent components. To be perfectly clear, here and throughout we use Exp(α) to denote the exponential distribution with scale parameter α (not rate parameter α), so that if Z ~ Exp(α), then E[Z]=α.

Proof

The χ1 distribution has CDF

$F(x) = (2\Phi(x) - 1)\,1\{x \ge 0\},$

where Φ is the standard normal CDF. We first compute

$\lim_{t\to\infty} \frac{F''(t)(1 - F(t))}{(F'(t))^2} = \lim_{t\to\infty} \frac{-t(1 - \Phi(t))}{\phi(t)} = -1,$

the last equality using Mills' ratio. Theorem 2.2.1 in de Haan and Ferreira (2006) then implies that, for constants a_p = F^{-1}(1 − 1/p) and b_p = pF'(a_p),

$b_p(V_1 - a_p) \stackrel{d}{\to} -\log E_0,$

where E₀ is a standard exponential variate, so −log E₀ has the standard (or type I) extreme value distribution. Hence, according to Theorem 3 in Weissman (1978), for any fixed k ≥ 1, the random variables W₀ = b_p(V_{k+1} − a_p) and W_i = b_p(V_i − V_{i+1}), i = 1,…, k, converge jointly:

$(W_0, W_1, W_2, \ldots, W_k) \stackrel{d}{\to} (-\log G_0,\ E_1/1,\ E_2/2,\ \ldots,\ E_k/k),$

where G0, E1,…, Ek are independent, G0 is Gamma distributed with scale parameter 1 and shape parameter k, and E1,…, Ek are standard exponentials. Now note that

$V_i(V_i - V_{i+1}) = \Bigl(a_p + \frac{W_0}{b_p} + \sum_{j=i}^k \frac{W_j}{b_p}\Bigr)\frac{W_i}{b_p} = \frac{a_p}{b_p}W_i + \frac{1}{b_p^2}\Bigl(W_0 + \sum_{j=i}^k W_j\Bigr)W_i.$

We claim that a_p/b_p → 1; this would give the desired result, as the second term converges to zero, using b_p → ∞. Writing a_p, b_p more explicitly, we see that 1 − 1/p = 2Φ(a_p) − 1, that is, 1 − Φ(a_p) = 1/(2p), and b_p = 2pφ(a_p). Using Mills' inequalities,

$\frac{\phi(a_p)}{a_p}\cdot\frac{1}{1 + 1/a_p^2} \le 1 - \Phi(a_p) \le \frac{\phi(a_p)}{a_p},$

and multiplying by 2p,

$\frac{b_p}{a_p}\cdot\frac{1}{1 + 1/a_p^2} \le 1 \le \frac{b_p}{a_p}.$

Since ap → ∞, this means that bp/ap → 1, completing the proof. □

Practically, Lemma 3 tells us that under the global null hypothesis y ~ N(0, σ2), comparing the covariance statistic Tk at the kth step of the lasso path to an Exp(1) distribution is increasingly conservative [at the first step, T1 is asymptotically Exp(1), at the second step, T2 is asymptotically Exp(1/2), at the third step, T3 is asymptotically Exp(1/3), and so forth]. This progressive conservatism is favorable, if we place importance on parsimony in the fitted model: we are less and less likely to incur a false rejection of the null hypothesis as the size of the model grows. Moreover, we know that the test statistics T1, T2, … at successive steps are independent, and hence so are the corresponding p-values; from the point of view of multiple testing corrections, this is nearly an ideal scenario.
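A quick Monte Carlo check of this Exp(1), Exp(1/2), Exp(1/3), … progression is straightforward; the sketch below (sample sizes arbitrary) estimates the means of V_k(V_k − V_{k+1}), which should come out close to 1/k for large p.

```python
import numpy as np

rng = np.random.default_rng(1)
p, nsim = 2000, 2000
# descending order statistics of |N(0,1)|, i.e., chi_1 variates
V = np.sort(np.abs(rng.standard_normal((nsim, p))), axis=1)[:, ::-1]
for k in (1, 2, 3):
    Tk = V[:, k - 1] * (V[:, k - 1] - V[:, k])
    print(k, Tk.mean())   # approximately 1/k, the mean of Exp(1/k)
```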

Of real interest is the distribution of T_k, k ≥ 1, not under the global null hypothesis, but rather, under the weaker null hypothesis that all variables excluded from the current lasso model are truly inactive (i.e., they have zero coefficients in the true model). We study this in the next section.

3.2. A general step, k ≥ 1

We suppose that exactly k0 components of the true coefficient vector β* are nonzero, and consider testing the entry of the predictor at step k = k0 + 1. Let A* = supp(β*) denote the true active set (so k0 = |A*|), and let B denote the event that all truly active variables are added at steps 1, …, k0,

$B = \Bigl\{\min_{j\in A^*} |U_j| > \max_{j\notin A^*} |U_j|\Bigr\}.$  (13)

We show that under the null hypothesis (i.e., conditional on B), the test statistic T_{k₀+1} is asymptotically Exp(1), and further, the test statistic T_{k₀+d} at a future step k = k₀ + d is asymptotically Exp(1/d).

The basic idea behind our argument is as follows: if we assume that the nonzero components of β* are large enough in magnitude, then it is not hard to show (relying on orthogonality, here) that the truly active predictors are added to the model along the first k0 steps of the lasso path, with probability tending to one. The test statistic at the (k0 + 1)st step and beyond would therefore depend on the order statistics of |Ui| for truly inactive variables i, subject to the constraint that the largest of these values is smaller than the smallest |Uj| for truly active variables j. But with our strong signal assumption, that is, that the nonzero entries of β* are large in absolute value, this constraint has essentially no effect, and we are back to studying the order statistics from a χ1 distribution, as in the last section. This is made precise below.

Theorem 1

Assume that X ∈ ℝ^{n×p} is orthogonal, and y ∈ ℝ^n is drawn from the normal regression model (1), where the true coefficient vector β* has k₀ nonzero components. Let A* = supp(β*) be the true active set, and assume that the smallest nonzero true coefficient is large compared to σ√(2 log p),

$\min_{j\in A^*} |\beta^*_j| - \sigma\sqrt{2\log p} \to \infty \quad \text{as } p\to\infty.$

Let B denote the event in (13), namely, that the first k0 variables entering the model along the lasso path are those in A*. Then ℙ(B) → 1 as p → ∞, and for each fixed d ≥ 0, we have

$(T_{k_0+1}, T_{k_0+2}, \ldots, T_{k_0+d}) \stackrel{d}{\to} \bigl(\mathrm{Exp}(1),\ \mathrm{Exp}(1/2),\ \ldots,\ \mathrm{Exp}(1/d)\bigr) \quad \text{as } p\to\infty.$

The same convergence in distribution holds conditionally on B.

Proof

We first study ℙ(B). Let θ_p = min_{i∈A*}|β*_i|, and choose c_p such that

$c_p - \sigma\sqrt{2\log p} \to \infty \quad \text{and} \quad \theta_p - c_p \to \infty.$

Note that U_j ~ N(β*_j, σ²), independently for j = 1,…, p. For j ∈ A*,

$P(|U_j| \le c_p) = \Phi\Bigl(\frac{c_p - \beta^*_j}{\sigma}\Bigr) - \Phi\Bigl(\frac{-c_p - \beta^*_j}{\sigma}\Bigr) \le \Phi\Bigl(\frac{c_p - \theta_p}{\sigma}\Bigr) \to 0,$

so

$P\Bigl(\min_{j\in A^*}|U_j| > c_p\Bigr) = \prod_{j\in A^*} P(|U_j| > c_p) \to 1.$

At the same time,

$P\Bigl(\max_{j\notin A^*}|U_j| \le c_p\Bigr) = \bigl(\Phi(c_p/\sigma) - \Phi(-c_p/\sigma)\bigr)^{p-k_0} \to 1.$

Therefore, ℙ(B) → 1. This in fact means that ℙ(E|B) − ℙ(E) → 0 for any sequence of events E, so only the weak convergence of (T_{k₀+1}, …, T_{k₀+d}) remains to be proved. For this, we let m = p − k₀, and V₁ ≥ V₂ ≥ ⋯ ≥ V_m denote the order statistics of the sample |U_j|/σ, j ∉ A*, of independent χ₁ variates. Then, on the event B, we have

$T_{k_0+i} = V_i(V_i - V_{i+1}) \quad \text{for } i = 1, \ldots, d.$

As ℙ(B) → 1, we have in general

$T_{k_0+i} = V_i(V_i - V_{i+1}) + o_P(1) \quad \text{for } i = 1, \ldots, d.$

Hence, we are essentially back in the setting of the last section, and the desired convergence result follows from the same arguments as those for Lemma 3. □

4. A general predictor matrix X

In this section, we consider a general predictor matrix X, with columns in general position. Recall that our proposed covariance test statistic (5) is closely intertwined with the knots λ₁ ≥ ⋯ ≥ λ_r in the lasso path, as it was defined in terms of the difference between fitted values at successive knots. Moreover, Lemma 1 showed that (provided there are no sign changes in the reduced lasso problem over [λ_{k+1}, λ_k]) this test statistic can be expressed even more explicitly in terms of the values of these knots. As was the case in the last section, this knot form is quite important for our analysis here. Therefore, it is helpful to recall [Efron et al. (2004), Tibshirani (2013)] the precise formulae for the knots in the lasso path. If A denotes the active set and s_A denotes the signs of active coefficients at a knot λ_k,

$A = \operatorname{supp}\bigl(\hat{\beta}(\lambda_k)\bigr), \qquad s_A = \operatorname{sign}\bigl(\hat{\beta}_A(\lambda_k)\bigr),$

then the next knot λk+1 is given by

$\lambda_{k+1} = \max\bigl\{\lambda_{k+1}^{\mathrm{join}},\ \lambda_{k+1}^{\mathrm{leave}}\bigr\},$  (14)

where λ_{k+1}^{join} and λ_{k+1}^{leave} are the values of λ at which, if we were to decrease the tuning parameter from λ_k and continue along the current (linear) trajectory for the lasso coefficients, a variable would join and leave the active set A, respectively. These values are

$\lambda_{k+1}^{\mathrm{join}} = \max_{j\notin A,\ s\in\{-1,1\}}\ \frac{X_j^T(I - P_A)y}{s - X_j^T(X_A^T)^+ s_A}\cdot 1\Bigl\{\frac{X_j^T(I - P_A)y}{s - X_j^T(X_A^T)^+ s_A} < \lambda_k\Bigr\},$  (15)

where recall P_A = X_A(X_A^T X_A)^{-1}X_A^T, and (X_A^T)^+ = X_A(X_A^T X_A)^{-1}; and

$\lambda_{k+1}^{\mathrm{leave}} = \max_{j\in A}\ \frac{[(X_A)^+y]_j}{[(X_A^T X_A)^{-1}s_A]_j}\cdot 1\Bigl\{\frac{[(X_A)^+y]_j}{[(X_A^T X_A)^{-1}s_A]_j} < \lambda_k\Bigr\}.$  (16)
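As a concrete rendering of (15), the next sketch computes λ_{k+1}^{join} by brute force over the candidate variables and signs, given the current active set A (a list of column indices), its signs s_A, and the current knot λ_k; the function name and the (ignored) handling of degenerate denominators are our own choices.

```python
import numpy as np

def next_join_knot(X, y, A, s_A, lam_k):
    """lambda_{k+1}^join from (15), by enumerating j not in A and s in {-1, +1}."""
    n, p = X.shape
    if len(A) > 0:
        XA = X[:, A]
        G = XA.T @ XA
        resid = y - XA @ np.linalg.solve(G, XA.T @ y)         # (I - P_A) y
        eq = XA @ np.linalg.solve(G, np.asarray(s_A, float))  # (X_A^T)^+ s_A
    else:
        resid, eq = y, np.zeros(n)
    best = 0.0
    for j in set(range(p)) - set(A):
        for s in (-1.0, 1.0):
            val = (X[:, j] @ resid) / (s - X[:, j] @ eq)
            if val < lam_k:                                   # the indicator in (15)
                best = max(best, val)
    return best
```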

As we did in Section 3 with the orthogonal X case, we begin by studying the asymptotic distribution of the covariance statistic in the special case k = 1 (i.e., the first model along the path), wherein the expressions for the next knot (14), (15), (16) greatly simplify. Following this, we study the more difficult case k ≥ 1. For the sake of readability, we defer the proofs and most technical details until the Appendix.

4.1. The first step, k = 1

We assume here that X has unit normed columns: ‖X_i‖₂ = 1 for i = 1,…, p; we do this mostly for simplicity of presentation, and the generalization to a matrix X whose columns are not unit normed is given in the next section (though the exponential limit is now a conservative upper bound). As per our discussion following Lemma 1 (see the second point), we know that the first predictor to enter the active set along the lasso path cannot leave at the next step, so the constant sign condition (8) holds, and by Lemma 1 the covariance statistic for testing the entry of the first variable can be written as

$T_1 = \lambda_1(\lambda_1 - \lambda_2)/\sigma^2$

(the leading factor C being equal to one since we assumed that X has unit normed columns). Now let U_j = X_j^T y, j = 1,…, p, and R = X^T X. With λ₀ = ∞, we have A = ∅, and trivially, no variables can leave the active set. The first knot is hence given by (15), which can be expressed as

$\lambda_1 = \max_{j=1,\ldots,p,\ s\in\{-1,1\}} sU_j.$  (17)

Letting j1, s1 be the first variable to enter and its sign (i.e., they achieve the maximum in the above expression), and recalling that j1 cannot leave the active set immediately after it has entered, the second knot is again given by (15), written as

$\lambda_2 = \max_{j\neq j_1,\ s\in\{-1,1\}}\ \frac{sU_j - sR_{j,j_1}U_{j_1}}{1 - ss_1R_{j,j_1}}\cdot 1\Bigl\{\frac{sU_j - sR_{j,j_1}U_{j_1}}{1 - ss_1R_{j,j_1}} < s_1U_{j_1}\Bigr\}.$

The general position assumption on X implies that |R_{j,j_1}| < 1, and so 1 − ss_1R_{j,j_1} > 0, for all j ≠ j_1, s ∈ {−1,1}. It is easy to show then that the indicator inside the maximum above can be dropped, and hence

$\lambda_2 = \max_{j\neq j_1,\ s\in\{-1,1\}}\ \frac{sU_j - sR_{j,j_1}U_{j_1}}{1 - ss_1R_{j,j_1}}.$  (18)

Our goal now is to calculate the asymptotic distribution of T₁ = λ₁(λ₁ − λ₂)/σ², with λ₁ and λ₂ as above, under the null hypothesis; to be clear, since we are testing the significance of the first variable to enter along the lasso path, the null hypothesis is

$H_0 : y \sim N(0, \sigma^2 I).$  (19)

The strategy that we use here for the general X case—which differs from our extreme value theory approach for the orthogonal X case—is to treat the quantities inside the maxima in expressions (17), (18) for λ1, λ2 as discrete-time Gaussian processes. First, we consider the zero mean Gaussian process

$g(j, s) = sU_j \quad \text{for } j = 1, \ldots, p,\ s\in\{-1,1\}.$  (20)

We can easily compute the covariance function of this process:

$\mathbb{E}[g(j,s)\,g(j',s')] = ss'R_{j,j'}\sigma^2,$

where the expectation is taken over the null distribution in (19). From (17), we know that the first knot is simply

$\lambda_1 = \max_{j,s}\, g(j,s).$

In addition to (20), we consider the process

$h^{(j_1,s_1)}(j,s) = \frac{g(j,s) - ss_1R_{j,j_1}\,g(j_1,s_1)}{1 - ss_1R_{j,j_1}} \quad \text{for } j\neq j_1,\ s\in\{-1,1\}.$  (21)

An important property: for fixed j₁, s₁, the entire process h^{(j_1,s_1)}(j,s) is independent of g(j₁, s₁). This can be seen by verifying that

$\mathbb{E}\bigl[g(j_1,s_1)\,h^{(j_1,s_1)}(j,s)\bigr] = 0$

and noting that g(j₁, s₁) and h^{(j_1,s_1)}(j,s), all j ≠ j₁, s ∈ {−1, 1}, are jointly normal. Now consider the maximum of this second process,

$M(j_1,s_1) = \max_{j\neq j_1,\ s\in\{-1,1\}} h^{(j_1,s_1)}(j,s),$  (22)

and from the above we know that for fixed j₁, s₁, M(j₁, s₁) is independent of g(j₁, s₁). If j₁, s₁ are instead treated as random variables that maximize g(j, s) (the argument maximizers being almost surely unique), then from (18) we see that the second knot is λ₂ = M(j₁, s₁). Therefore, to study the distribution of T₁ = λ₁(λ₁ − λ₂)/σ², we are interested in the random variable

$g(j_1,s_1)\bigl(g(j_1,s_1) - M(j_1,s_1)\bigr)/\sigma^2$

on the event

$\bigl\{g(j_1,s_1) > g(j,s)\ \text{for all } (j,s)\neq(j_1,s_1)\bigr\}.$

It turns out that this event, which concerns the argument maximizers of g, can be rewritten as an event concerning only the relative values of g and M [see Taylor, Takemura and Adler (2005) for the analogous result for continuous-time processes].

Lemma 4

With g, M as defined in (20), (21), (22), we have

$\bigl\{g(j_1,s_1) > g(j,s)\ \text{for all } (j,s)\neq(j_1,s_1)\bigr\} = \bigl\{g(j_1,s_1) > M(j_1,s_1)\bigr\}.$

This is an important realization because the dual representation {g(j₁, s₁) > M(j₁, s₁)} is more tractable, once we partition the space over the possible argument maximizers j₁, s₁, and use the fact that M(j₁, s₁) is independent of g(j₁, s₁) for fixed j₁, s₁. In this vein, we express the distribution of T₁ = λ₁(λ₁ − λ₂)/σ² in terms of the sum

$P(T_1 > t) = \sum_{j_1,s_1} P\Bigl(g(j_1,s_1)\bigl(g(j_1,s_1) - M(j_1,s_1)\bigr)/\sigma^2 > t,\ g(j_1,s_1) > M(j_1,s_1)\Bigr).$

The terms in the above sum can be simplified: dropping for notational convenience the dependence on j1, s1, we have

$g(g - M)/\sigma^2 > t,\ g > M \iff g/\sigma > u(t, M/\sigma),$

where u(a, b) = (b + √(b² + 4a))/2, which follows by simply solving for g in the quadratic equation g(g − M)/σ² = t [indeed, g² − Mg − tσ² = 0 gives g = (M + √(M² + 4tσ²))/2, that is, g/σ = u(t, M/σ)]. Therefore,

$P(T_1 > t) = \sum_{j_1,s_1} P\bigl(g(j_1,s_1)/\sigma > u(t, M(j_1,s_1)/\sigma)\bigr) = \sum_{j_1,s_1}\int_0^\infty \bar{\Phi}\bigl(u(t, m/\sigma)\bigr)\,F_{M(j_1,s_1)}(dm),$  (23)

where Φ̄ is the standard normal survival function (i.e., Φ̄ = 1 − Φ, for Φ the standard normal CDF), F_{M(j_1,s_1)} is the distribution of M(j₁, s₁), and we have used the fact that g(j₁, s₁) and M(j₁, s₁) are independent for fixed j₁, s₁, as well as M(j₁, s₁) > 0. Continuing from (23), we can write the difference between P(T₁ > t) and the standard exponential tail, P(Exp(1) > t) = e^{−t}, as

$\bigl|P(T_1 > t) - e^{-t}\bigr| = \biggl|\sum_{j_1,s_1}\int_0^\infty\Bigl(\frac{\bar{\Phi}(u(t, m/\sigma))}{\bar{\Phi}(m/\sigma)} - e^{-t}\Bigr)\bar{\Phi}(m/\sigma)\,F_{M(j_1,s_1)}(dm)\biggr|,$  (24)

where we used the fact that

$\sum_{j_1,s_1}\int_0^\infty \bar{\Phi}(m/\sigma)\,F_{M(j_1,s_1)}(dm) = \sum_{j_1,s_1} P\bigl(g(j_1,s_1) > M(j_1,s_1)\bigr) = 1.$

We now examine the term inside the braces in (24), the difference between a ratio of normal survival functions and et; our next lemma shows that this term vanishes as m → ∞.

Lemma 5

For any t ≥ 0,

$\frac{\bar{\Phi}(u(t,m))}{\bar{\Phi}(m)} \to e^{-t} \quad \text{as } m\to\infty.$

Hence, loosely speaking, if each M(j1, s1) → ∞ fast enough as p → ∞, then the right-hand side in (24) converges to zero, and T1 converges weakly to Exp(1). This is made precise below.

Lemma 6

Consider M(j1, s1) defined in (21), (22) over j1 = 1, …, p and s1 ∈ {−1,1}. If for any fixed m0 > 0

$\sum_{j_1,s_1} P\bigl(M(j_1,s_1) \le m_0\bigr) \to 0 \quad \text{as } p\to\infty,$  (25)

then the right-hand side in (24) converges to zero as p → ∞, and so ℙ(T1 > t) → et for all t ≥ 0.

The assumption in (25) is written in terms of random variables whose distributions are induced by the steps along the lasso path; to make our assumptions more transparent, we show that (25) is implied by a conditional variance bound involving the predictor matrix X alone, and arrive at the main result of this section.

Theorem 2

Assume that X ∈ ℝ^{n×p} has unit normed columns in general position, and let R = X^T X. Assume also that there is some δ > 0 such that for each j = 1, …, p, there exists a subset of indices S ⊆ {1,…, p} \ {j} with

$1 - R_{i,S\setminus\{i\}}\bigl(R_{S\setminus\{i\},\,S\setminus\{i\}}\bigr)^{-1}R_{S\setminus\{i\},\,i} \ge \delta^2 \quad \text{for all } i\in S,$  (26)

and the size of S growing faster than log p,

$|S| \ge d_p, \quad \text{where } d_p/\log p \to \infty\ \text{as } p\to\infty.$  (27)

Then, under the null distribution in (19) [i.e., y is drawn from the regression model (1) with β* = 0], we have ℙ(T₁ > t) → e^{−t} as p → ∞ for all t ≥ 0.

Remark

Conditions (26) and (27) are sufficient to ensure (25), or in other words, that each M(j₁, s₁) grows in the sense that ℙ(M(j₁, s₁) ≤ m₀) = o(1/p), for any fixed m₀. While it is true that 𝔼[M(j₁, s₁)] will typically grow as p grows, some assumption is required so that M(j₁, s₁) concentrates around its mean faster than standard Gaussian concentration results (such as the Borell–TIS inequality) imply.

Generally speaking, the assumptions (26) and (27) are not very strong. Stated differently, (26) is a lower bound on the variance of U_i = X_i^T y, conditional on U_ℓ = X_ℓ^T y for all ℓ ∈ S \ {i}. Hence, for any j, we require the existence of a subset S not containing j such that the variables U_i, i ∈ S, are not too correlated, in the sense that the conditional variance of any one given all the others is bounded below. This subset S has to be larger in size than log p, as made clear in (27). Note that, in fact, it suffices to find a total of two disjoint subsets S₁, S₂ with the properties (26) and (27), because then for any j, either one or the other will not contain j.

An example of a matrix X that does not satisfy (26) and (27) is one with fixed rank as p grows. (This, of course, would also not satisfy the general position assumption.) In this case, we would not be able to find a subset of the variables Ui=XiTy, i = 1, …, p, that is both linearly independent and has size larger than r = rank(X), which violates the conditions. We note that in general, since |S| ≤ rank(X) ≤ n, and |S|/ log p → ∞, conditions (26) and (27) require that n/ log p → ∞.
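For a given design, condition (26) can be checked numerically for a candidate subset S by computing the conditional variances directly. A small diagnostic sketch (not part of the test itself), for unit normed X:

```python
import numpy as np

def min_conditional_variance(X, S):
    r"""Smallest value over i in S of 1 - R_{i,S\{i}} (R_{S\{i},S\{i}})^{-1} R_{S\{i},i},
    the left-hand side of condition (26); here R = X^T X."""
    R = X.T @ X
    vals = []
    for i in S:
        rest = [j for j in S if j != i]
        r = R[i, rest]
        vals.append(1.0 - r @ np.linalg.solve(R[np.ix_(rest, rest)], r))
    return min(vals)
```

Condition (26) then asks that this quantity stay bounded below by some δ² > 0 for a suitable S of size growing faster than log p.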

4.2. A general step, k ≥ 1

In this section, we no longer assume that X has unit normed columns (in any case, this provides no simplification in deriving the null distribution of the test statistic at a general step in the lasso path). Our arguments here have more or less the same form as they did in the last section, but overall the calculations are more complicated.

Fix an integer k₀ ≥ 0, a subset A₀ ⊆ {1, …, p} containing the true active set, A₀ ⊇ A* = supp(β*), and a sign vector s_{A₀} ∈ {−1,1}^{|A₀|}. Consider the event

$B = \Bigl\{\text{the solution at step } k_0 \text{ in the lasso path has active set } A = A_0,\ \text{signs } s_A = \operatorname{sign}\bigl((X_{A_0})^+y\bigr) = s_{A_0},\ \text{and the next two knots are given by}$
$\lambda_{k_0+1} = \max_{j\notin A_0,\ s\in\{-1,1\}}\ \frac{X_j^T(I - P_{A_0})y}{s - X_j^T(X_{A_0}^T)^+ s_{A_0}},\qquad \lambda_{k_0+2} = \lambda_{k_0+2}^{\mathrm{join}}\Bigr\}.$  (28)

We assume that ℙ(B) → 1 as p → ∞. In words, this is assuming that with probability approaching one: the lasso estimate at step k₀ in the path has support A₀ and signs s_{A₀}; the least squares estimate on A₀ has the same signs as this lasso estimate; the knots at steps k₀ + 1 and k₀ + 2 correspond to joining events; and in particular, the maximization defining the joining event at step k₀ + 1 can be taken to be unrestricted, that is, without the indicators constraining the individual arguments to be less than λ_{k₀}. Our goal is to characterize the asymptotic distribution of the covariance statistic T_k at the step k = k₀ + 1, under the null hypothesis (i.e., conditional on the event B). We will comment on the stringency of the assumption that ℙ(B) → 1 following our main result in Theorem 3.

First note that on B, we have s_A = sign((X_A)^+y), and as discussed in the third point following Lemma 1, this implies that the solution of the reduced problem (4) on X_A cannot incur any sign changes over the interval [λ_{k+1}, λ_k]. Hence, we can apply Lemma 1 to write the covariance statistic on B as

$T_k = C(A, s_A, j_k, s_k)\,\lambda_k(\lambda_k - \lambda_{k+1})/\sigma^2,$

where C(A, s_A, j_k, s_k) = ‖(X_{A∪{j_k}}^T)^+ s_{A∪{j_k}} − (X_A^T)^+ s_A‖₂², A and s_A are the active set and signs at step k − 1, and j_k is the variable added to the active set at step k, with sign s_k. Now, analogous to our definition in the last section, we define the discrete-time Gaussian process

$g_{(A,s_A)}(j,s) = \frac{X_j^T(I - P_A)y}{s - X_j^T(X_A^T)^+ s_A} \quad \text{for } j\notin A,\ s\in\{-1,1\}.$  (29)

For any fixed A, sA, the above process has mean zero provided that AA*. Additionally, for any such fixed A, sA, we can compute its covariance function

$\mathbb{E}\bigl[g_{(A,s_A)}(j,s)\,g_{(A,s_A)}(j',s')\bigr] = \frac{X_j^T(I - P_A)X_{j'}\,\sigma^2}{\bigl[s - X_j^T(X_A^T)^+ s_A\bigr]\bigl[s' - X_{j'}^T(X_A^T)^+ s_A\bigr]}.$  (30)

Note that on the event B, the kth knot in the lasso path is

$\lambda_k = \max_{j\notin A,\ s\in\{-1,1\}} g_{(A,s_A)}(j,s).$

For fixed jk, sk, we also consider the process

$g_{(A\cup\{j_k\},\,s_{A\cup\{j_k\}})}(j,s) = \frac{X_j^T(I - P_{A\cup\{j_k\}})y}{s - X_j^T(X_{A\cup\{j_k\}}^T)^+ s_{A\cup\{j_k\}}} \quad \text{for } j\notin A\cup\{j_k\},\ s\in\{-1,1\}$  (31)

(above, s_{A∪{j_k}} is the concatenation of s_A and s_k) and its achieved maximum value, subject to being less than the maximum of g_{(A,s_A)},

$M_{(A,s_A)}(j_k,s_k) = \max_{j\notin A\cup\{j_k\},\ s\in\{-1,1\}} g_{(A\cup\{j_k\},\,s_{A\cup\{j_k\}})}(j,s)\cdot 1\Bigl\{g_{(A\cup\{j_k\},\,s_{A\cup\{j_k\}})}(j,s) < \max_{j\notin A,\ s\in\{-1,1\}} g_{(A,s_A)}(j,s)\Bigr\}.$  (32)

If j_k, s_k indeed maximize g_{(A,s_A)}, that is, they correspond to the variable added to the active set at λ_k and its sign (note that these are almost surely unique), then on B, we have λ_{k+1} = M_{(A,s_A)}(j_k, s_k). To study the distribution of T_k on B, we are therefore interested in the random variable

$C(A,s_A,j_k,s_k)\,g_{(A,s_A)}(j_k,s_k)\bigl(g_{(A,s_A)}(j_k,s_k) - M_{(A,s_A)}(j_k,s_k)\bigr)/\sigma^2$

on the event

$E(j_k,s_k) = \bigl\{g_{(A,s_A)}(j_k,s_k) > g_{(A,s_A)}(j,s)\ \text{for all } (j,s)\neq(j_k,s_k)\bigr\}.$  (33)

Equivalently, we may write

$P(\{T_k > t\}\cap B) = \sum_{j_k,s_k} P\Bigl(\bigl\{C(A,s_A,j_k,s_k)\,g_{(A,s_A)}(j_k,s_k)\bigl(g_{(A,s_A)}(j_k,s_k) - M_{(A,s_A)}(j_k,s_k)\bigr)/\sigma^2 > t\bigr\}\cap E(j_k,s_k)\Bigr).$

Since ℙ(B) → 1, we have in general

$P(T_k > t) = \sum_{j_k,s_k} P\Bigl(\bigl\{C(A_0,s_{A_0},j_k,s_k)\,g_{(A_0,s_{A_0})}(j_k,s_k)\bigl(g_{(A_0,s_{A_0})}(j_k,s_k) - M_{(A_0,s_{A_0})}(j_k,s_k)\bigr)/\sigma^2 > t\bigr\}\cap E(j_k,s_k)\Bigr) + o(1),$  (34)

where we have replaced all instances of A and sA on the right-hand side above with the fixed subset A0 and sign vector sA0. This is a helpful simplification, because in what follows we may now take A = A0 and sA = sA0 as fixed, and consider the distribution of the random processes g(A0,sA0) and M(A0,sA0). With A = A0 and sA = sA0 fixed, we drop the notational dependence on them and write these processes as g and M. We also write the scaling factor C(A0, sA0, jk, sk) as C(jk, sk).

The setup in (34) looks very much like the one in the last section [and to draw an even sharper parallel, the scaling factor C(j_k, s_k) is actually equal to one over the variance of g(j_k, s_k), with variance measured in units of σ², meaning that √C(j_k, s_k)·g(j_k, s_k)/σ is standard normal for fixed j_k, s_k, a fact that we will use later in the proof of Lemma 8]. However, a major complication is that g(j_k, s_k) and M(j_k, s_k) are no longer independent for fixed j_k, s_k. Next, we derive a dual representation for the event (33) (analogous to Lemma 4 in the last section), introducing a triplet of random variables M⁺, M⁻, M⁰—it turns out that g is independent of this triplet, for fixed j_k, s_k.

Lemma 7

Let g be as defined in (29) (with A, s_A fixed at A₀, s_{A₀}). Let Σ_{j,j'} denote the covariance function of g [short form for the expression in (30)]. Define

$S^+(j,s) = \bigl\{(j',s') : j'\notin A\cup\{j\},\ \Sigma_{j',j}/\Sigma_{j,j} < 1\bigr\}, \qquad M^+(j,s) = \max_{(j',s')\in S^+(j,s)}\ \frac{g(j',s') - (\Sigma_{j',j}/\Sigma_{j,j})\,g(j,s)}{1 - \Sigma_{j',j}/\Sigma_{j,j}},$  (35)
$S^-(j,s) = \bigl\{(j',s') : j'\notin A\cup\{j\},\ \Sigma_{j',j}/\Sigma_{j,j} > 1\bigr\}, \qquad M^-(j,s) = \min_{(j',s')\in S^-(j,s)}\ \frac{g(j',s') - (\Sigma_{j',j}/\Sigma_{j,j})\,g(j,s)}{1 - \Sigma_{j',j}/\Sigma_{j,j}},$  (36)
$S^0(j,s) = \bigl\{(j',s') : j'\notin A\cup\{j\},\ \Sigma_{j',j}/\Sigma_{j,j} = 1\bigr\}, \qquad M^0(j,s) = \max_{(j',s')\in S^0(j,s)}\ g(j',s') - (\Sigma_{j',j}/\Sigma_{j,j})\,g(j,s).$  (37)

Then the event E(j_k, s_k) in (33), that j_k, s_k maximize g, can be written as an intersection of events involving M⁺, M⁻, M⁰:

$\bigl\{g(j_k,s_k) > g(j,s)\ \text{for all } (j,s)\neq(j_k,s_k)\bigr\} = \bigl\{g(j_k,s_k) > 0\bigr\}\cap\bigl\{g(j_k,s_k) > M^+(j_k,s_k)\bigr\}\cap\bigl\{g(j_k,s_k) < M^-(j_k,s_k)\bigr\}\cap\bigl\{0 > M^0(j_k,s_k)\bigr\}.$  (38)

As a result of Lemma 7, continuing from (34), we can decompose the tail probability of Tk as

$P(T_k > t) = \sum_{j_k,s_k} P\Bigl(C(j_k,s_k)\,g(j_k,s_k)\bigl(g(j_k,s_k) - M(j_k,s_k)\bigr)/\sigma^2 > t,\ g(j_k,s_k) > 0,\ g(j_k,s_k) > M^+(j_k,s_k),\ g(j_k,s_k) < M^-(j_k,s_k),\ 0 > M^0(j_k,s_k)\Bigr) + o(1).$  (39)

A key point here is that, for fixed j_k, s_k, the triplet M⁺(j_k, s_k), M⁻(j_k, s_k), M⁰(j_k, s_k) is independent of g(j_k, s_k), which is true because

$\mathbb{E}\Bigl[g(j_k,s_k)\bigl(g(j,s) - (\Sigma_{j,j_k}/\Sigma_{j_k,j_k})\,g(j_k,s_k)\bigr)\Bigr] = 0$

and g(j_k, s_k), along with g(j,s) − (Σ_{j,j_k}/Σ_{j_k,j_k})g(j_k,s_k), for all j, s, form a jointly Gaussian collection of random variables. If we were to now replace M by M⁺ in the first line of (39), and define a modified statistic T̃_k via its tail probability,

$P(\tilde{T}_k > t) = \sum_{j_k,s_k} P\Bigl(C(j_k,s_k)\,g(j_k,s_k)\bigl(g(j_k,s_k) - M^+(j_k,s_k)\bigr)/\sigma^2 > t,\ g(j_k,s_k) > 0,\ g(j_k,s_k) > M^+(j_k,s_k),\ g(j_k,s_k) < M^-(j_k,s_k),\ 0 > M^0(j_k,s_k)\Bigr),$  (40)

then arguments similar to those in the second half of Section 4.1 give a (conservative) exponential limit for P(T~k>t).

Lemma 8

Consider g as defined in (29) (with A, s_A fixed at A_0, s_{A_0}), and M^+, M^−, M^0 as defined in (35), (36), (37). Assume that for any fixed m_0,

\sum_{j_k,s_k} P\big( M^+(j_k,s_k) ≤ m_0\, σ/\sqrt{C(j_k,s_k)} \big) → 0 \quad \text{as } p → ∞.  (41)

Then the modified statistic \tilde T_k in (40) satisfies \lim_{p→∞} P(\tilde T_k > t) ≤ e^{−t}, for all t ≥ 0.

Of course, deriving the limiting distribution of \tilde T_k was not the goal, and it remains to relate P(\tilde T_k > t) to P(T_k > t). A fortuitous calculation shows that the two seemingly different quantities M^+ and M, the former defined as the maximum of particular functionals of g, and the latter concerned with the joining event at step k + 1, admit a very simple relationship: M^+(j_k, s_k) ≤ M(j_k, s_k) for the maximizing j_k, s_k. We use this to bound the tail of T_k.

Lemma 9

Consider g, M as defined in (29), (31), (32) (with A, s_A fixed at A_0, s_{A_0}), and consider M^+ as defined in (35). Then for any fixed j_k, s_k, on the event E(j_k, s_k) in (33), we have

M^+(j_k,s_k) ≤ M(j_k,s_k).

Hence, if we assume as in Lemma 8 the condition (41), then limp→∞ ℙ(Tk > t) ≤ et for all t ≥ 0.

Though Lemma 9 establishes a (conservative) exponential limit for the covariance statistic Tk, it does so by enforcing assumption (41), which is phrased in terms of the tail distribution of a random process defined at the kth step in the lasso path. We translate this into an explicit condition on the covariance structure in (30), to make the stated assumptions for exponential convergence more concrete.

Theorem 3

Assume that X ∈ ℝ^{n×p} has columns in general position, and y ∈ ℝ^n is drawn from the normal regression model (1). Assume that for a fixed integer k_0 ≥ 0, subset A_0 ⊆ {1, …, p} with A_0 ⊇ A* = supp(β*), and sign vector s_{A_0} ∈ {−1, 1}^{|A_0|}, the event B in (28) satisfies ℙ(B) → 1 as p → ∞. Assume that there exists a constant 0 < η ≤ 1 such that

\big\| (X_{A_0})^+ X_j \big\|_1 ≤ 1 − η \quad \text{for all } j ∉ A_0.  (42)

Define the matrix R by

R_{ij} = X_i^T (I − P_{A_0}) X_j \quad \text{for } i, j ∉ A_0.

Assume that the diagonal elements of R are all of the same order, that is, R_{ii}/R_{jj} ≤ C for all i, j and some constant C > 0. Finally assume that, for each fixed j ∉ A_0, there is a set S ⊆ {1, …, p} \ (A_0 ∪ {j}) such that for all i ∈ S,

\big[ R_{ii} − R_{i,S∖\{i\}} (R_{S∖\{i\},S∖\{i\}})^{−1} R_{S∖\{i\},i} \big]/R_{ii} ≥ δ^2,  (43)
|R_{ij}|/R_{jj} < η/(2 − η),  (44)
\big\| (X_{A_0∪\{j\}})^+ X_i \big\|_1 < 1,  (45)

where δ > 0 is a constant (not depending on j), and the size of S grows faster than log p,

|S| ≥ d_p, \quad \text{where } d_p/\log p → ∞ \text{ as } p → ∞.  (46)

Then at step k = k0 + 1, we have limp→∞ ℙ(Tk > t) ≤ et for all t ≥ 0. The same result holds for the tail of Tk conditional on B.

Remark 5

If X has unit normed columns, then by taking k_0 = 0 (and accordingly, A_0 = ∅, s_{A_0} = ∅) in Theorem 3, we essentially recover the result of Theorem 2. To see this, note that with k_0 = 0 (and A_0 = ∅, s_{A_0} = ∅), we have ℙ(B) = 1 for all finite p (recall the arguments given at the beginning of Section 4.1). Also, condition (42) trivially holds with η = 1 because A_0 = ∅. Next, the matrix R defined in the theorem reduces to R = X^T X, again because A_0 = ∅; note that R has all diagonal elements equal to one, because X has unit normed columns. Hence, (43) is the same as condition (26) in Theorem 2. Finally, conditions (44) and (45) both reduce to |R_{ij}| < 1, which always holds as X has columns in general position. Therefore, when k_0 = 0, Theorem 3 imposes the same conditions as Theorem 2 and gives essentially the same result; we say "essentially" here because the former gives a conservative exponential limit for T_1, while the latter gives an exact exponential limit.

Remark 6

If X is orthogonal, then for any A_0, conditions (42) and (43)–(46) are trivially satisfied [for the latter set of conditions, we can take, e.g., S = {1, …, p} \ (A_0 ∪ {j})]. With an additional condition on the strength of the true nonzero coefficients, we can ensure that ℙ(B) → 1 as p → ∞ with A_0 = A*, s_{A_0} = sign(β*_{A_0}), and k_0 = |A_0|, and hence prove a conservative exponential limit for T_k; note that this is precisely what is done in Theorem 1 (except that in that case, the exponential limit is proven to be exact).

Remark 7

Defining U_i = X_i^T(I − P_{A_0})y for i ∉ A_0, the condition (43) is a lower bound on the ratio of the conditional variance of U_i given U_ℓ, ℓ ∈ S \ {i}, to the unconditional variance of U_i. Loosely speaking, conditions (43), (44) and (45) can all be interpreted as requiring, for any j ∉ A_0, the existence of a subset S not containing j (and disjoint from A_0) such that the variables U_i, i ∈ S, are not very correlated. This subset has to be large in size compared to log p, by (46). An implicit consequence of (43)–(46), as argued in the remark following Theorem 2, is that n/log p → ∞.

Remark 8

Some readers will likely recognize condition (42) as that of mutual incoherence or strong irrepresentability, commonly used in the lasso literature on exact support recovery [see, e.g., Wainwright (2009), Zhao and Yu (2006)]. This condition, in addition to a lower bound on the magnitudes of the true coefficients, is sufficient for the lasso solution to recover the true active set A* with probability tending to one, at a carefully chosen value of λ. It is important to point out that we do not place any requirements on the magnitudes of the true nonzero coefficients; instead, we assume directly that the lasso converges (with probability approaching one) to some fixed model defined by A_0, s_{A_0} at the (k_0)th step in the path. Here, A_0 is large enough that it contains the true support, A_0 ⊇ A*, and the signs s_{A_0} are arbitrary; they may or may not match the signs of the true coefficients over A_0. In a setting in which the nonzero coefficients in β* are well separated from zero, a condition quite similar to the irrepresentable condition can be used to show that the lasso converges to the model with support A_0 = A* and signs s_{A_0} = sign(β*_{A_0}), at step k_0 = |A_0| of the path. Our result extends beyond this case, and allows for situations in which the lasso model converges to a possibly larger set of "screened" variables A_0, and fixed signs s_{A_0}.

Remark 9

In fact, one can modify the above arguments to account for the case that A0 does not contain the entire set A* of truly nonzero coefficients, but rather, only the “strong” coefficients. While “strong” is rather vague, a more precise way of stating this is to assume that β* has nonzero coefficients both large and small in magnitude, and with A0 corresponding to the set of large coefficients, we assume that the (left-out) small coefficients must be small enough that the mean of the process g in (29) (with A = A0 and sA = sA0) grows much faster than M+. The details, though not the main ideas, of the arguments would change, and the result would still be a conservative exponential limit for the covariance statistic Tk at step k = k0 + 1. We may pursue this extension in future work.

5. Simulation of the null distribution

We investigate the null distribution of the covariance statistic through simulations, starting with an orthogonal predictor matrix X, and then considering more general forms of X.

5.1. Orthogonal predictor matrix

Similar to our example from the start of Section 2, we generated n = 100 observations with p = 10 orthogonal predictors. The true coefficient vector β* contained 3 nonzero components equal to 6, and the rest zero. The error variance was σ2 = 1, so that the truly active predictors had strong effects and always entered the model first, with both forward stepwise and the lasso. Figure 2 shows the results for testing the 4th (truly inactive) predictor to enter, averaged over 500 simulations; the left panel shows the chi-squared test (drop in RSS) applied at the 4th step in forward stepwise regression, and the right panel shows the covariance test applied at the 4th step of the lasso path. We see that the Exp(1) distribution provides a good finite-sample approximation for the distribution of the covariance statistic, while χ12 is a poor approximation for the drop in RSS.
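To make the orthogonal-design experiment easy to reproduce, here is a minimal R sketch (not the code used to produce Figure 2; all settings are illustrative). It uses the fact that for orthogonal predictors the covariance statistic at the kth step reduces to |U|_(k)(|U|_(k) − |U|_(k+1))/σ², where |U|_(1) ≥ |U|_(2) ≥ ⋯ are the sorted values of |X_j^T y|; this is the γ = 0 case of the elastic net expression in Section 8.1. With coefficients of size 6 and σ = 1, the three truly active variables essentially always occupy the top three positions, so the statistic below is a null statistic at step 4.

## Minimal sketch: null distribution of the covariance statistic, orthogonal X.
## T_k = |U|_(k) (|U|_(k) - |U|_(k+1)) / sigma^2, with |U| the sorted |X_j^T y|.
set.seed(1)
n <- 100; p <- 10; sigma <- 1; k <- 4; nsim <- 500
X <- qr.Q(qr(matrix(rnorm(n * p), n, p)))     # orthonormal columns
beta <- c(rep(6, 3), rep(0, p - 3))            # 3 strong truly active variables
Tk <- replicate(nsim, {
  y <- X %*% beta + sigma * rnorm(n)
  U <- sort(abs(t(X) %*% y), decreasing = TRUE)
  U[k] * (U[k] - U[k + 1]) / sigma^2
})
qqplot(qexp(ppoints(nsim)), Tk, xlab = "Exp(1) quantiles", ylab = "Covariance statistic")
abline(0, 1)   # the points should lie close to this line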

FIG. 2. An example with n = 100 and p = 10 orthogonal predictors, and the true coefficient vector having 3 nonzero, large components. Shown are quantile–quantile plots for the drop in RSS test applied to forward stepwise regression at the 4th step and the covariance test for the lasso path at the 4th step.

Figure 3 shows the results for testing the 5th, 6th and 7th predictors to enter the lasso model. An Exp(1)-based test will now be conservative: at a nominal 5% level, the actual type I errors are about 1%, 0.2% and 0.0%, respectively. The solid line has slope 1, and the broken lines have slopes 1/2, 1/3, 1/4, as predicted by Theorem 1.

FIG. 3. The same setup as in Figure 2, but here we show the covariance test at the 5th, 6th and 7th steps along the lasso path, from left to right, respectively. The solid line has slope 1, while the broken lines have slopes 1/2, 1/3, 1/4, as predicted by Theorem 1.

5.2. General predictor matrix

In Table 2, we simulated null data (i.e., β* = 0), and examined the distribution of the covariance test statistic T1 for the first predictor to enter. We varied the numbers of predictors p, correlation parameter ρ, and structure of the predictor correlation matrix. In the first two correlation setups, the correlation between each pair of predictors was ρ, in the data and population, respectively. In the AR(1) setup, the correlation between predictors j and j′ is ρ|j−j′|. Finally, in the block diagonal setup, the correlation matrix has two equal-sized blocks, with population correlation ρ in each block. We computed the mean, variance and tail probability of the covariance statistic T1 over 500 simulated data sets for each setup. We see that the Exp(1) distribution is a reasonably good approximation throughout.
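The following R sketch (again illustrative, not the simulation code behind Table 2) computes T_1 under the global null for a general predictor matrix. It uses the closed form for the first two knots, λ_1 = max_j |X_j^T y| and λ_2 given by (31)–(32) with A = {j_1}, together with the first-step identity T_1 = λ_1(λ_1 − λ_2)/σ² for unit-norm predictors (the first-step case of Lemma 1). The AR(1) setup and the choices of n, p, ρ are just examples.

## Sketch: null distribution of T_1 for a general X (AR(1) population correlation).
set.seed(1)
n <- 100; p <- 50; sigma <- 1; rho <- 0.4; nsim <- 500
Sig <- rho^abs(outer(1:p, 1:p, "-")); cS <- chol(Sig)
T1 <- replicate(nsim, {
  X <- matrix(rnorm(n * p), n, p) %*% cS
  X <- scale(X, center = TRUE, scale = FALSE)
  X <- sweep(X, 2, sqrt(colSums(X^2)), "/")    # unit-norm columns
  y <- sigma * rnorm(n)                        # global null: beta* = 0
  U <- drop(t(X) %*% y); R <- crossprod(X)
  j1 <- which.max(abs(U)); s1 <- sign(U[j1]); lam1 <- abs(U[j1])
  ## candidate joining values g_{({j1},s1)}(j,s) for j != j1, s in {+1, -1}
  cand <- c((U[-j1] - R[-j1, j1] * U[j1]) / ( 1 - s1 * R[-j1, j1]),
            (U[-j1] - R[-j1, j1] * U[j1]) / (-1 - s1 * R[-j1, j1]))
  lam2 <- max(cand[cand < lam1])               # second knot, as in (32)
  lam1 * (lam1 - lam2) / sigma^2
})
c(mean(T1), var(T1), mean(T1 > qexp(0.95)))    # compare to 1, 1, 0.05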

TABLE 2.

Simulation results for the first predictor to enter for a global null true model. We vary the number of predictors p, correlation parameter ρ and structure of the predictor correlation matrix. Shown are the mean, variance and tail probability P(T1>q0.95) of the covariance statistic T1, where q0.95 is the 95% quantile of the Exp(1) distribution, computed over 500 simulated data sets for each setup. Standard errors are given by “se.” (The panel in the bottom left corner is missing because the equal data correlation setup is not defined for p > n.)

Setup:   Equal data corr      Equal pop'n corr     AR(1)                Block diagonal
ρ        Mean  Var  Tail pr   Mean  Var  Tail pr   Mean  Var  Tail pr   Mean  Var  Tail pr
n = 100, p = 10
0 0.966 1.157 0.062 1.120 1.951 0.090 1.017 1.484 0.070 1.058 1.548 0.060
0.2 0.972 1.178 0.066 1.119 1.844 0.086 1.034 1.497 0.074 1.069 1.614 0.078
0.4 0.963 1.219 0.060 1.115 1.724 0.092 1.045 1.469 0.060 1.077 1.701 0.076
0.6 0.960 1.265 0.070 1.095 1.648 0.086 1.048 1.485 0.066 1.074 1.719 0.086
0.8 0.958 1.367 0.060 1.062 1.624 0.092 1.034 1.471 0.062 1.062 1.687 0.072
se 0.007 0.015 0.001 0.010 0.049 0.001 0.013 0.043 0.001 0.010 0.047 0.001
n = 100, p = 50
0 0.929 1.058 0.048 1.078 1.721 0.074 1.039 1.415 0.070 0.999 1.578 0.048
0.2 0.920 1.032 0.038 1.090 1.476 0.074 0.998 1.391 0.054 1.064 2.062 0.052
0.4 0.928 1.033 0.040 1.079 1.382 0.068 0.985 1.373 0.060 1.076 2.168 0.062
0.6 0.950 1.058 0.050 1.057 1.312 0.060 0.978 1.425 0.054 1.060 2.138 0.060
0.8 0.982 1.157 0.056 1.035 1.346 0.056 0.973 1.439 0.060 1.046 2.066 0.068
se 0.010 0.030 0.001 0.011 0.037 0.001 0.009 0.041 0.001 0.011 0.103 0.001
n = 100, p = 200
0 1.004 1.017 0.054 1.029 1.240 0.062 0.930 1.166 0.042
0.2 0.996 1.164 0.052 1.000 1.182 0.062 0.927 1.185 0.046
0.4 1.003 1.262 0.058 0.984 1.016 0.058 0.935 1.193 0.048
0.6 1.007 1.327 0.062 0.954 1.000 0.050 0.915 1.231 0.044
0.8 0.989 1.264 0.066 0.961 1.135 0.060 0.914 1.258 0.056
se 0.008 0.039 0.001 0.009 0.028 0.001 0.007 0.032 0.001

In Table 3, the setup was the same as in Table 2, except that we set the first k coefficients of the true coefficient vector equal to 4, and the rest zero, for k = 1, 2, 3. The dimensions were also fixed at n = 100 and p = 50. We computed the mean, variance, and tail probability of the covariance statistic Tk+1 for entering the next (truly inactive) (k + 1)st predictor, discarding those simulations in which a truly inactive predictor was selected in the first k steps. (This occurred 1.7%, 4.0% and 7.0% of the time, resp.) Again, we see that the Exp(1) approximation is reasonably accurate throughout.

TABLE 3.

Simulation results for the (k + 1)st predictor to enter for a model with k truly nonzero coefficients, across k = 1, 2, 3. The rest of the setup is the same as in Table 2 except that the dimensions were fixed at n = 100 and p = 50. The values are conditional on the event that the k truly active variables enter in the first k steps

Setup:   Equal data corr      Equal pop'n corr     AR(1)                Block diagonal
ρ        Mean  Var  Tail pr   Mean  Var  Tail pr   Mean  Var  Tail pr   Mean  Var  Tail pr
k = 1 and 2nd predictor to enter
0 0.933 1.091 0.048 1.105 1.628 0.078 1.023 1.146 0.064 1.039 1.579 0.060
0.2 0.940 1.051 0.046 1.039 1.554 0.082 1.017 1.175 0.060 1.062 2.015 0.062
0.4 0.952 1.126 0.056 1.016 1.548 0.084 0.984 1.230 0.056 1.042 2.137 0.066
0.6 0.938 1.129 0.064 0.997 1.518 0.079 0.964 1.247 0.056 1.018 1.798 0.068
0.8 0.818 0.945 0.039 0.815 0.958 0.044 0.914 1.172 0.062 0.822 0.966 0.037
se 0.010 0.024 0.002 0.011 0.036 0.002 0.010 0.030 0.002 0.015 0.087 0.002
k = 2 and 3rd predictor to enter
0 0.927 1.051 0.046 1.119 1.724 0.094 0.996 1.108 0.072 1.072 1.800 0.064
0.2 0.928 1.088 0.044 1.070 1.590 0.080 0.996 1.113 0.050 1.043 2.029 0.060
0.4 0.918 1.160 0.050 1.042 1.532 0.085 1.008 1.198 0.058 1.024 2.125 0.066
0.6 0.897 1.104 0.048 0.994 1.371 0.077 1.012 1.324 0.058 0.945 1.568 0.054
0.8 0.719 0.633 0.020 0.781 0.929 0.042 1.031 1.324 0.068 0.771 0.823 0.038
se 0.011 0.034 0.002 0.014 0.049 0.003 0.009 0.022 0.002 0.013 0.073 0.002
k = 3 and 4th predictor to enter
0 0.925 1.021 0.046 1.080 1.571 0.086 1.044 1.225 0.070 1.003 1.604 0.060
0.2 0.926 1.159 0.050 1.031 1.463 0.069 1.025 1.189 0.056 1.010 1.991 0.060
0.4 0.922 1.215 0.048 0.987 1.351 0.069 0.980 1.185 0.050 0.918 1.576 0.053
0.6 0.905 1.158 0.048 0.888 1.159 0.053 0.947 1.189 0.042 0.837 1.139 0.052
0.8 0.648 0.503 0.008 0.673 0.699 0.026 0.940 1.244 0.062 0.647 0.593 0.015
se 0.014 0.037 0.002 0.016 0.044 0.003 0.014 0.031 0.003 0.016 0.073 0.002

In Figure 4, we estimate the power curves for significance testing via the drop in RSS test for forward stepwise regression, and the covariance test for the lasso. In the former, we use simulation-derived cutpoints, and in the latter we use the theoretically-based Exp(1) cutpoints, to control the type I error at the 5% level. We find that the tests have similar power, though the cutpoints for forward stepwise would not be typically available in practice. For more details, see the figure caption.

FIG. 4. Estimated power curves for significance tests using forward stepwise regression and the drop in RSS statistic, as well as the lasso and the covariance statistic. The results are averaged over 1000 simulations with n = 100 and p = 10 predictors drawn i.i.d. from N(0, 1) and σ² = 1. On the left, there is one truly nonzero regression coefficient, and we varied its magnitude (the effect size parameter on the x-axis). We examined the first step of the forward stepwise and lasso procedures. On the right, in addition to a nonzero coefficient with varying effect size (on the x-axis), there are 3 additional large coefficients in the true model. We examined the 4th step in forward stepwise and the lasso, after the 3 strong variables have been entered. For the power curves in both panels, we use simulation-based cutpoints for forward stepwise to control the type I error at the 5% level; for the lasso we do the same, but also display the results for the theoretically-based [Exp(1)] cutpoint. Note that in practice, simulation-based cutpoints would not typically be available.

6. The case of unknown σ2

Up until now, we have assumed that the error variance σ² is known; in practice it will typically be unknown. In this case, provided that n > p, we can easily estimate it and proceed by analogy to standard linear model theory. In particular, we can estimate σ² by the mean squared residual error \hatσ^2 = ‖y − X\hatβ_{LS}‖_2^2/(n − p), with \hatβ_{LS} being the coefficients from the least squares regression of y on X (i.e., the full model). Plugging this estimate into the covariance statistic in (5) yields a new statistic F_k that has an asymptotic F-distribution under the null:

F_k = \frac{⟨y, X\hatβ(λ_{k+1})⟩ − ⟨y, X_A \tildeβ_A(λ_{k+1})⟩}{\hatσ^2} \overset{d}{→} F_{2, n−p}.  (47)

This follows because F_k = T_k/(\hatσ^2/σ^2), the numerator T_k being asymptotically Exp(1) = χ_2^2/2, the denominator \hatσ^2/σ^2 being χ_{n−p}^2/(n−p), and we claim that the two are independent. Why? Note that the lasso solution path is unchanged if we replace y by P_X y, so the lasso fitted values in T_k are functions of P_X y; meanwhile, \hatσ^2 is a function of (I − P_X)y. The quantities P_X y and (I − P_X)y are uncorrelated, and hence independent (recalling normality of y), so T_k and \hatσ^2 are functions of independent quantities and, therefore, independent.

As an example, consider one of the setups from Table 2, with n = 100, p = 80 and predictor correlation of the AR(1) form ρ|j−j′|. The true model is null, and we test the first predictor to enter along the lasso path. (We choose n, p of roughly equal sizes here to expose the differences between the σ2 known and unknown cases.) Table 4 shows the results of 1000 simulations from each of the ρ = 0 and ρ = 0.8 scenarios. We see that with σ2 estimated, the F2,n−p distribution provides a more accurate finite-sample approximation than does Exp(1).
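For reference, the F_{2,n−p} and Exp(1) rows of Table 4 can be reproduced directly from the distributions themselves; below is a quick R check (n = 100 and p = 80 as in the table; this only tabulates reference values, not the simulation).

## Reference mean, variance and 95% quantile of F_{2, n-p} and Exp(1), n = 100, p = 80.
n <- 100; p <- 80; d2 <- n - p
c(mean = d2 / (d2 - 2),                                         # about 1.11
  var  = 2 * d2^2 * (2 + d2 - 2) / (2 * (d2 - 2)^2 * (d2 - 4)), # about 1.54
  q95  = qf(0.95, 2, d2))                                       # about 3.49
c(mean = 1, var = 1, q95 = qexp(0.95))                          # Exp(1): 1, 1, 2.99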

TABLE 4.

Comparison of Exp(1), F2,N−p, and the observed (empirical) null distribution of the covariance statistic, when σ2 has been estimated. We examined 1000 simulated data sets with n = 100, p = 80 and the correlation between predictors j and j′ equal to ρ|jj′|. We are testing the first step of the lasso path, and the true model is the global null. Results are shown for ρ = 0.0 and 0.8. The third column shows the tail probability P(T1>q0.95) computed over the 1000 simulations, where q0.95 is the 95% quantile from the appropriate distribution [either Exp(1) or F2,n−p]

Mean Variance 95% quantile Tail prob
ρ = 0
Observed 1.17 2.10 3.75
Exp(1) 1.00 1.00 2.99 0.082
F2,n−p 1.11 1.54 3.49 0.054
ρ = 0.8
Observed 1.14 1.70 3.77
Exp(1) 1.00 1.00 2.99 0.097
F2,n−p 1.11 1.54 3.49 0.064

When p ≥ n, estimation of σ² is not nearly as straightforward; one idea is to estimate σ² from the least squares fit on the support of the model selected by cross-validation. One would then hope that the resulting statistic, with this plug-in estimate of σ², is close in distribution to F_{2,n−r} under the null, where r is the size of the model chosen by cross-validation. This is by analogy to the low-dimensional n > p case in (47), but is not supported by rigorous theory. Simulations (withheld for brevity) show that this approximation is not too far off, but that the variance of the observed statistic is sometimes inflated compared with that of an F_{2,n−r} distribution (this unaccounted variability is likely due to the model selection process via cross-validation). Other authors have argued that using cross-validation to estimate σ² when p ≥ n is not necessarily a good approach, as it can be anti-conservative; see, for example, Fan, Guo and Hao (2012), Sun and Zhang (2012) for alternative techniques. In future work, we will address the important issue of estimating σ² in the context of the covariance statistic, when p ≥ n.
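A possible implementation of the cross-validation-based estimate of σ² described above is sketched below in R. It relies on the glmnet package; the function name, the degrees-of-freedom adjustment for the intercept, and the fallback rule are our own illustrative choices, not a procedure prescribed here, and (as noted above) the resulting plug-in statistic is not backed by rigorous theory.

## Hypothetical sketch: estimate sigma^2 by least squares on the CV-selected support.
library(glmnet)
estimate_sigma2_cv <- function(X, y) {
  cvfit <- cv.glmnet(X, y)
  b <- as.numeric(coef(cvfit, s = "lambda.min"))[-1]   # drop the intercept
  supp <- which(b != 0); r <- length(supp)
  if (r == 0 || r >= nrow(X) - 1) return(var(y))       # crude fallback
  fit <- lm(y ~ X[, supp, drop = FALSE])
  sum(residuals(fit)^2) / (nrow(X) - r - 1)            # df adjusted for intercept
}

The estimate would then replace σ² in the covariance statistic, with the null distribution compared (heuristically) to F_{2,n−r}.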

7. Real data examples

We demonstrate the use of the covariance test with some real data examples. As mentioned previously, in any serious application of significance testing over many variables (many steps of the lasso path), we would need to consider the issue of multiple comparisons, which we do not do here. This is a topic for future work.

7.1. Wine data

Table 5 shows the results for the wine quality data taken from the UCI database. There are p = 11 predictors, and n = 1599 observations, which we split randomly into approximately equal-sized training and test sets. The outcome is a wine quality rating, on a scale between 0 and 10. The table shows the training set p-values from forward stepwise regression (with the chi-squared test) and the lasso (with the covariance test). Forward stepwise enters 6 predictors at the 0.05 level, while the lasso enters only 3.

TABLE 5.

Wine data: forward stepwise and lasso p-values. The values are rounded to 3 decimal places. For the lasso, we only show p-values for the steps in which a predictor entered the model and stayed in the model for the remainder of the path (i.e., if a predictor entered the model at a step but then later left, we do not show this step—we only show the step corresponding to its last entry point)

         Forward stepwise                         Lasso
Step   Predictor   RSS test   p-value     Step   Predictor   Cov test   p-value
1 Alcohol 315.216 0.000 1 Alcohol 79.388 0.000
2 Volatile_acidity 137.412 0.000 2 Volatile_acidity 77.956 0.000
3 Sulphates 18.571 0.000 3 Sulphates 10.085 0.000
4 Chlorides 10.607 0.001 4 Chlorides 1.757 0.173
5 pH 4.400 0.036 5 Total_sulfur_dioxide 0.622 0.537
6 Total_sulfur_dioxide 3.392 0.066 6 pH 2.590 0.076
7 Residual_sugar 0.607 0.436 7 Residual_sugar 0.318 0.728
8 Citric_acid 0.878 0.349 8 Citric_acid 0.516 0.597
9 Density 0.288 0.592 9 Density 0.184 0.832
10 Fixed_acidity 0.116 0.733 10 Free_sulfur_dioxide 0.000 1.000
11 Free_sulfur_dioxide 0.000 0.997 11 Fixed_acidity 0.114 0.892

In the left panel of Figure 5, we repeated this p-value computation over 500 random splits into training and test sets. The right panel shows the corresponding test set prediction error for the models of each size. The lasso test error decreases sharply once the 3rd predictor is added, but then somewhat flattens out from the 4th predictor onward; this is in general qualitative agreement with the lasso p-values in the left panel, the first 3 being very small, and the 4th p-value being about 0.2. This also echoes the well-known difference between hypothesis testing and minimizing prediction error. For example, the Cp statistic stops entering variables when the p-value is larger than about 0.16.

FIG. 5. Wine data: the data were randomly divided 500 times into roughly equal-sized training and test sets. The left panel shows the training set p-values for forward stepwise regression and the lasso. The right panel shows the test set error for the corresponding models of each size.

7.2. HIV data

Rhee et al. (2003) study six nucleotide reverse transcriptase inhibitors (NRTIs) that are used to treat HIV-1. The target of these drugs can become resistant through mutation, and they compare a collection of models for predicting the (log) susceptibility of the drugs, a measure of drug resistance, based on the location of mutations. We focused on the first drug (3TC), for which there are p = 217 sites and n = 1057 samples. To examine the behavior of the covariance test in the p > n setting, we divided the data at random into training and test sets of size 150 and 907, respectively, a total of 50 times. Figure 6 shows the results, in the same format as Figure 5. We used the model chosen by cross-validation to estimate σ2. The covariance test for the lasso suggests that there are only one or two important predictors (in marked contrast to the chi-squared test for forward stepwise), and this is confirmed by the test error plot in the right panel.

FIG. 6. HIV data: the data were randomly divided 50 times into training and test sets of size 150 and 907, respectively. The left panel shows the training set p-values for forward stepwise regression and the lasso. The right panel shows the test set error for the corresponding models of each size.

8. Extensions

We discuss some extensions of the covariance statistic, beyond significance testing for the lasso. The proposals here are supported by simulations [in terms of having an Exp(1) null distribution], but we do not offer any theory. This may be a direction for future work.

8.1. The elastic net

The elastic net estimate [Zou and Hastie (2005)] is defined as

\hatβ^{en} = \arg\min_{β ∈ ℝ^p} \tfrac{1}{2} ‖y − Xβ‖_2^2 + λ‖β‖_1 + \tfrac{γ}{2} ‖β‖_2^2,  (48)

where γ ≥ 0 is a second tuning parameter. It is not hard to see that this can actually be cast as a lasso estimate with predictor matrix \tilde X = [X;\ \sqrt{γ}\, I] ∈ ℝ^{(n+p)×p} (the rows of X stacked above \sqrt{γ}\, I) and outcome \tilde y = (y, 0) ∈ ℝ^{n+p}. This shows that, for a fixed γ, the elastic net solution path is piecewise linear over λ, with each knot marking the entry (or deletion) of a variable from the active set. We therefore define the covariance statistic in the same manner as we did for the lasso; fixing γ, to test the predictor entering at the kth step (knot λ_k) in the elastic net path, we consider the statistic

T_k = \big( ⟨y, X\hatβ^{en}(λ_{k+1}, γ)⟩ − ⟨y, X_A \tildeβ_A^{en}(λ_{k+1}, γ)⟩ \big)/σ^2,

where as before, λ_{k+1} is the next knot in the path, A is the active set of predictors just before λ_k, and \tildeβ_A^{en} is the elastic net solution using only the predictors X_A. The precise expression for the elastic net solution in (48), for a given active set and signs, is the same as it is for the lasso (see Section 2.3), but with (X_A^T X_A)^{−1} replaced by (X_A^T X_A + γI)^{−1}. This generally creates a complication for the theory in Sections 3 and 4. But in the orthogonal X case, we have (X_A^T X_A + γI)^{−1} = I/(1 + γ) and so

T_k = \frac{1}{1+γ}\, |U_{(k)}| \big( |U_{(k)}| − |U_{(k+1)}| \big)/σ^2

with U_j = X_j^T y, j = 1, …, p. This means that for an orthogonal X, under the null,

(1 + γ)\, T_k \overset{d}{→} \mathrm{Exp}(1)

and one is tempted to use this approximation beyond the orthogonal setting as well. In Figure 7, we evaluated the distribution of (1 + γ)T1 (for the first predictor to enter), for orthogonal and correlated scenarios, and for three different values of γ. Here, n = 100, p = 10 and the true model was null. It seems to be reasonably close to Exp(1) in all cases.
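The augmentation used above is easy to state in code. The R sketch below (a generic construction, not tied to any particular path algorithm) builds \tilde X = [X; \sqrt{γ} I] and \tilde y = (y, 0), so that any lasso path solver applied to (\tilde X, \tilde y) traces the elastic net path in λ for the given γ; the final check verifies that the augmented residual sum of squares equals the original one plus the ridge penalty.

## Minimal sketch of the elastic-net-as-lasso augmentation for fixed gamma.
augment_for_enet <- function(X, y, gamma) {
  p <- ncol(X)
  Xtil <- rbind(X, sqrt(gamma) * diag(p))   # (n + p) x p predictor matrix
  ytil <- c(y, rep(0, p))                   # (n + p) outcome vector
  list(X = Xtil, y = ytil)
}
## Check: augmented RSS = original RSS + gamma * ||beta||_2^2, for any beta.
set.seed(1)
X <- matrix(rnorm(20), 5, 4); y <- rnorm(5); beta <- rnorm(4); gamma <- 2
aug <- augment_for_enet(X, y, gamma)
all.equal(sum((aug$y - aug$X %*% beta)^2),
          sum((y - X %*% beta)^2) + gamma * sum(beta^2))   # TRUE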

FIG. 7. Elastic net: an example with n = 100 and p = 10, for orthogonal and correlated predictors (having pairwise population correlation 0.5), and three different values of the ridge penalty parameter γ.

8.2. Generalized linear models and the Cox model

Consider the estimate from an ℓ_1-penalized generalized linear model:

\hatβ^{glm} = \arg\min_{β ∈ ℝ^p} −\sum_{i=1}^n \log f(y_i; x_i, β) + λ‖β‖_1,  (49)

where f(y_i; x_i, β) is an exponential family density, a function of the predictor measurements x_i ∈ ℝ^p and parameter β ∈ ℝ^p. Note that the usual lasso estimate in (2) is a special case of this form when f is the Gaussian density with known variance σ². The natural parameter in (49) is η_i = x_i^T β, for i = 1, …, n, related to the mean of y_i via a link function g(E[y_i|x_i]) = η_i.

Having solved (49) with λ = 0 (i.e., this is simply maximum likelihood), producing a vector of fitted values \hatη = X\hatβ^{glm} ∈ ℝ^n, we might define degrees of freedom as^{12}

df(\hatη) = \sum_{i=1}^n \mathrm{Cov}(y_i, \hatη_i).  (50)

This is the implicit concept used by Efron (1986) in his definition of the “optimism” of the training error. The same idea could be used to define degrees of freedom for the penalized estimate in (49) for any λ > 0, and this motivates the definition of the covariance statistic, as follows. If the tuning parameter value λ = λk marks the entry of a new predictor into the active set A, then we define the covariance statistic

T_k = ⟨y, X\hatβ^{glm}(λ_{k+1})⟩ − ⟨y, X_A \tildeβ_A^{glm}(λ_{k+1})⟩,  (51)

where λ_{k+1} is the next value of the tuning parameter at which the model changes (a variable enters or leaves the active set), and \tildeβ_A^{glm} is the estimate from the penalized generalized linear model (49) using only predictors in A. Unlike in the Gaussian case, the solution path in (49) is not generally piecewise linear over λ, and there is no algorithm that delivers the exact values of λ at which variables enter the model (we still refer to these as knots in the path). However, one can numerically approximate these knot values; for example, see Park and Hastie (2007). By analogy to the Gaussian case, we would hope that T_k has an asymptotic Exp(1) distribution under the null. Though we have not rigorously investigated this conjecture, simulations seem to support it.

As an example, consider the logistic regression model for binary data. Now η_i = log(μ_i/(1 − μ_i)), with μ_i = P(y_i = 1 | x_i). Figure 8 shows the simulation results from comparing the null distribution of the covariance test statistic in (51) to Exp(1). Here, we used the glmpath package in R [Park and Hastie (2007)] to compute an approximate solution path and locations of knots. The null distribution of the test statistic looks fairly close to Exp(1).

FIG. 8. Lasso logistic regression: an example with n = 100 and p = 10 predictors, i.i.d. from N(0, 1). In the left panel, all true coefficients are zero; on the right, the first coefficient is large, and the rest are zero. Shown are quantile–quantile plots of the covariance test statistic (at the first and second steps, resp.), generated over 500 data sets, versus its conjectured asymptotic distribution, Exp(1).

For general likelihood-based regression problems, let η = Xβ and let ℓ(η) denote the log-likelihood. We can view maximum likelihood estimation as an iteratively reweighted least squares procedure using the outcome variable

z(η) = η + I_η^{−1} S_η,  (52)

where S_η = ∇ℓ(η) and I_η = −∇²ℓ(η). This applies, for example, to the class of generalized linear models and Cox's proportional hazards model. For the general ℓ_1-penalized estimator
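As a concrete instance of (52), consider logistic regression, where S_η = y − μ and I_η = diag(μ_i(1 − μ_i)) with μ_i = 1/(1 + e^{−η_i}); the working response is then z(η) = η + (y − μ)/(μ(1 − μ)). The short R sketch below (our own illustration, with toy data, not part of the paper's procedure) computes z and performs one weighted least squares step, i.e., one IRLS iteration.

## Working response (52) for logistic regression and one IRLS step.
working_response <- function(eta, y) {
  mu <- 1 / (1 + exp(-eta))
  eta + (y - mu) / (mu * (1 - mu))
}
set.seed(1)
n <- 50; X <- cbind(1, rnorm(n)); eta <- drop(X %*% c(-0.5, 1))
y <- rbinom(n, 1, 1 / (1 + exp(-eta)))
z <- working_response(eta, y)
w <- (1 / (1 + exp(-eta))) * (1 - 1 / (1 + exp(-eta)))   # weights mu (1 - mu)
coef(lm(z ~ X - 1, weights = w))   # one IRLS update starting from eta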

\hatβ^{lik} = \arg\min_{β ∈ ℝ^p} −ℓ(Xβ) + λ‖β‖_1,  (53)

we can analogously define the covariance test statistic at a knot λk, marking the entry of a predictor into the active set A, as

T_k = \big( ⟨I_0^{−1/2} S_0,\; X\hatβ^{lik}(λ_{k+1})⟩ − ⟨I_0^{−1/2} S_0,\; X_A \tildeβ_A^{lik}(λ_{k+1})⟩ \big)/2  (54)

with λ_{k+1} being the next knot in the path (at which a variable is added to or deleted from the active set), and \tildeβ_A^{lik} the solution of the general penalized likelihood problem (53) with predictor matrix X_A. For the binomial model, the statistic (54) reduces to expression (51). In Figure 9, we computed this statistic for Cox's proportional hazards model, using a similar setup to that in Figure 8. The Exp(1) approximation for its null distribution looks reasonably accurate.

FIG. 9. Lasso Cox model estimate: the basic setup is the same as in Figure 8 (n, p, the distribution of the predictors X, and the true coefficient vector: on the left, entirely zero, and on the right, one large coefficient). Shown are quantile–quantile plots of the covariance test statistic (at the first and second steps, resp.), generated over 500 data sets, versus the Exp(1) distribution.

9. Discussion

We proposed a simple covariance statistic for testing the significance of predictor variables as they enter the active set, along the lasso solution path. We showed that the distribution of this statistic is asymptotically Exp(1), under the null hypothesis that all truly active predictors are contained in the current active set. [See Theorems 1, 2 and 3; the conditions required for this convergence result vary depending on the step k along the path that we are considering, and the covariance structure of the predictor matrix X; the Exp(1) limiting distribution is in some cases a conservative upper bound under the null.] Such a result accounts for the adaptive nature of the lasso procedure, which is not true for the usual chi-squared test (or F -test) applied to, for example, forward stepwise regression.

We feel that our work has shed light not only on the lasso path (as given by LARS), but also, at a high level, on forward stepwise regression. Both the lasso and forward stepwise start by entering the predictor variable most correlated with the outcome (thinking of standardized predictors), but the two differ in what they do next. Forward stepwise is greedy, and once it enters this first variable, it proceeds to fit the first coefficient fully, ignoring the effects of other predictors. The lasso, on the other hand, increases (or decreases) the coefficient of the first variable only as long as its correlation with the residual is larger than that of the inactive predictors. Subsequent steps follow similarly. Intuitively, it seems that forward stepwise regression inflates coefficients unfairly, while the lasso takes more appropriately sized steps. This intuition is confirmed in one sense by looking at degrees of freedom (recall Section 2.4). The covariance test and its simple asymptotic null distribution reveal another way in which the step sizes used by the lasso are “just right.”

The problem of assessing significance in an adaptive linear model fit by the lasso is a difficult one, and what we have presented in this paper is by no means a complete solution. We describe some current work and ideas for future projects below.

  • Significance test for generic lasso models. A natural direction to consider is the generic lasso testing problem: given a lasso model computed at some fixed value of λ, how do we carry out a significance test for each predictor in the active set? Work on this is in progress.

  • Nonasymptotic null distributions. A geometric characterization of the first knot in the lasso path provides an alternative test for the global null hypothesis, β* = 0. When all predictors have unit norm, ‖X_i‖_2 = 1 for i = 1, …, p, this test has the form
    \frac{1 − Φ(λ_1/σ)}{1 − Φ(λ_2/σ)} \sim \mathrm{Unif}(0,1).
    Remarkably, this result is exact (nonasymptotic), valid for any n and p, requiring (essentially) only Gaussianity of the errors, and no real assumptions about the matrix X. For most reasonably behaved predictor matrices X, the Exp(1) approximation agrees closely with this test. Details are in Taylor, Loftus and Tibshirani (2013). Work to extend this formula to subsequent steps along the solution path, that is, to test a hypothesis beyond the global null, is underway. (A small numerical illustration of this exact test appears after this list.)
  • Generalizations to other penalties and models. The manuscript of Taylor, Loftus and Tibshirani (2013) applies to a regularized regression setting with a general seminorm penalty, and derives explicit results for the group lasso and nuclear norm penalties (in addition to the lasso penalty). The nuclear norm result yields a test for principal components and matrix completion. The recent work of Grazier G’Sell, Taylor and Tibshirani (2013) studies the covariance test for graphical models, based on a sparse estimate of the inverse covariance matrix.

  • Sequential procedures with false discovery rate control. It is also interesting to consider how the sequence of covariance test p-values can be used to construct a sequential test with good power properties, and a guaranteed bound on its false discovery rate. A number of such approaches are proposed in Grazier G’Sell et al. (2013).

  • Proper p-values for forward stepwise. Perhaps surprisingly, a test analogous to the covariance test can be used in forward stepwise regression, to construct valid p-values for this greedy procedure. This work is in progress.

  • Other related problems include: estimation of σ² when p ≥ n, in the context of the covariance test; power calculations and confidence interval estimation; theory for linear models having strong and weak signals (large and small true coefficients); theory for the elastic net, generalized linear models, and the Cox model.
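As promised in the bullet on nonasymptotic null distributions above, the exact first-step p-value is simple to simulate. The R sketch below (our own illustration) restricts to an orthogonal design for simplicity, where the first two knots λ_1, λ_2 are just the two largest values of |X_j^T y|; under the global null the resulting p-values should be exactly Unif(0, 1).

## Illustration of the exact first-step p-value (1 - Phi(lambda_1/sigma)) / (1 - Phi(lambda_2/sigma)).
set.seed(1)
n <- 100; p <- 10; sigma <- 1; nsim <- 2000
X <- qr.Q(qr(matrix(rnorm(n * p), n, p)))   # orthonormal (hence unit-norm) columns
pvals <- replicate(nsim, {
  y <- sigma * rnorm(n)                      # global null
  U <- sort(abs(t(X) %*% y), decreasing = TRUE)
  (1 - pnorm(U[1] / sigma)) / (1 - pnorm(U[2] / sigma))
})
hist(pvals, breaks = 20)   # approximately flat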

As is clear from the above discussion, the covariance test work has created much excitement and activity among our close collaborators and students. It is our hope that the current paper will also broadly stimulate other researchers’ interest in this area, and that at some point, the joint efforts of the community will yield a full set of inferential tools for the lasso and other commonly used adaptive procedures.

Acknowledgments

We thank Jacob Bien, Trevor Hastie, Fred Huffer and Larry Wasserman for helpful comments.

APPENDIX

A.1. Proof of Lemma 1

By continuity of the lasso solution path at λk,

P_A y − λ_k (X_A^T)^+ s_A = P_{A∪\{j\}} y − λ_k (X_{A∪\{j\}}^T)^+ s_{A∪\{j\}}

and, therefore,

(P_{A∪\{j\}} − P_A)\, y = λ_k \big( (X_{A∪\{j\}}^T)^+ s_{A∪\{j\}} − (X_A^T)^+ s_A \big).  (55)

From this, we can obtain two identities: the first is

y^T (P_{A∪\{j\}} − P_A)\, y = λ_k^2 \big\| (X_{A∪\{j\}}^T)^+ s_{A∪\{j\}} − (X_A^T)^+ s_A \big\|_2^2,  (56)

obtained by squaring both sides in (55) (more precisely, taking the inner product of the left-hand side with itself and the right-hand side with itself), and noting that (P_{A∪\{j\}} − P_A)^2 = P_{A∪\{j\}} − P_A; the second is

y^T \big( (X_{A∪\{j\}}^T)^+ s_{A∪\{j\}} − (X_A^T)^+ s_A \big) = λ_k \big\| (X_{A∪\{j\}}^T)^+ s_{A∪\{j\}} − (X_A^T)^+ s_A \big\|_2^2,  (57)

obtained by taking the inner product of both sides in (55) with y, and then using (56). Plugging (56) and (57) in for the first and second terms in (7), respectively, then gives the result in (9).

A.2. Proof of Lemma 4

Note that

g(j_1,s_1) > g(j,s) \iff \frac{g(j_1,s_1) − s s_1 R_{j,j_1}\, g(j_1,s_1)}{1 − s s_1 R_{j,j_1}} > \frac{g(j,s) − s s_1 R_{j,j_1}\, g(j_1,s_1)}{1 − s s_1 R_{j,j_1}} \iff g(j_1,s_1) > h_{(j_1,s_1)}(j,s),

the first step following since 1 − ss1Rj,j1 > 0, and the second step following from the definition of h(j1,s1). The intersection of the right-hand side above, over all (j, s) ≠ (j1, s1), is equivalent to

g(j_1,s_1) > g(j_1,−s_1), \qquad g(j_1,s_1) > M(j_1,s_1).

But the former inequality is the same as g(j1, s1) > 0, because g(j1, s1) and g(j1, −s1) have opposite signs. Further, the inequality g(j1, s1) > 0 is redundant, as M(j1, s1) ≥ 0. This gives the result.

A.3. Proof of Lemma 5

By l’Hôpital’s rule,

\lim_{m→∞} \frac{\bar{Φ}(u(t,m))}{\bar{Φ}(m)} = \lim_{m→∞} \frac{ϕ(u(t,m))}{ϕ(m)} \cdot \frac{∂u(t,m)}{∂m},

where ϕ is the standard normal density. First, note that

\frac{∂u(t,m)}{∂m} = \frac{1}{2} + \frac{m}{2\sqrt{m^2 + 4t}} → 1 \quad \text{as } m → ∞.

Also, a straightforward calculation shows

\log ϕ(u(t,m)) − \log ϕ(m) = \frac{m^2}{4}\Big(1 − \sqrt{1 + 4t/m^2}\Big) − \frac{t}{2} → −t \quad \text{as } m → ∞,

where in the last step we used the fact that (1 − \sqrt{1 + 4t/m^2})/(4/m^2) → −t/2, again by l'Hôpital's rule. Therefore, ϕ(u(t,m))/ϕ(m) → e^{−t}, which completes the proof.

A.4. Proof of Lemma 6

Fix ε > 0, and choose m0 large enough that

\bigg| \frac{\bar{Φ}(u(t,m/σ))}{\bar{Φ}(m/σ)} − e^{−t} \bigg| ≤ ε \quad \text{for all } m ≥ m_0.

Starting from (24),

|P(T_1 > t) − e^{−t}| ≤ \sum_{j_1,s_1} \int_0^∞ \bigg| \frac{\bar{Φ}(u(t,m/σ))}{\bar{Φ}(m/σ)} − e^{−t} \bigg| \bar{Φ}(m/σ)\, F_{M(j_1,s_1)}(dm) ≤ ε \sum_{j_1,s_1} \int_{m_0}^∞ \bar{Φ}(m/σ)\, F_{M(j_1,s_1)}(dm) + \sum_{j_1,s_1} \int_0^{m_0} F_{M(j_1,s_1)}(dm) ≤ ε \sum_{j_1,s_1} P\big( g(j_1,s_1) > M(j_1,s_1) \big) + \sum_{j_1,s_1} P\big( M(j_1,s_1) ≤ m_0 \big).

Above, the term multiplying ε is equal to 1, and the second term can be made arbitrarily small (say, less than ε) by taking p sufficiently large.

A.5. Proof of Theorem 2

We will show that for any fixed m0 > 0 and j1, s1,

P\big( M(j_1,s_1) ≤ m_0 \big) ≤ c^{|S|},  (58)

where S ⊆ {1, …, p} \ {j1} is as in the theorem for j = j1, with size |S| ≥ dp, and c < 1 is a constant (not depending on j1). This would imply that

\sum_{j_1,s_1} P\big( M(j_1,s_1) ≤ m_0 \big) ≤ 2p\, c^{d_p} → 0 \quad \text{as } p → ∞,

where we used the fact that dp/ log p → ∞ by (27). The above sum tending to zero now implies the desired convergence result by Lemma 6, and hence it suffices to show (58). To this end, consider

M(j_1,s_1) = \max_{j ≠ j_1,\, s} \frac{sU_j − sR_{j,j_1} U_{j_1}}{1 − s s_1 R_{j,j_1}} ≥ \max_{j ≠ j_1} \frac{|U_j − R_{j,j_1} U_{j_1}|}{1 + |R_{j,j_1}|} ≥ \max_{j ∈ S} \frac{|U_j − R_{j,j_1} U_{j_1}|}{2},

where in both inequalities above we used the fact that |R_{j,j_1}| < 1. We can therefore use the bound

P\big( M(j_1,s_1) ≤ m_0 \big) ≤ P\big( |V_j| ≤ m_0,\ j ∈ S \big),

where we define V_j = (U_j − R_{j,j_1} U_{j_1})/2 for j ∈ S. Let r = |S|, and without loss of generality, let S = {1, …, r}. We will show that

P\big( |V_1| ≤ m_0, …, |V_r| ≤ m_0 \big) ≤ c^r  (59)

for c = Φ (2m0/(σδ))−Φ(−2m0/(σδ)) < 1, by induction; this would complete the proof, as it would imply (58). Before presenting this argument, we note a few important facts. First, the condition in (26) is really a statement about conditional variances:

\mathrm{Var}(U_i \mid U_ℓ,\ ℓ ∈ S∖\{i\}) = σ^2 \big[ 1 − R_{i,S∖\{i\}} (R_{S∖\{i\},S∖\{i\}})^{−1} R_{S∖\{i\},i} \big] ≥ σ^2 δ^2 \quad \text{for all } i ∈ S,

where recall that U_j = X_j^T y, j = 1, …, p. Second, since U_1, …, U_r are jointly normal, we have

\mathrm{Var}(U_i \mid U_ℓ,\ ℓ ∈ S′) ≥ \mathrm{Var}(U_i \mid U_ℓ,\ ℓ ∈ S∖\{i\}) ≥ σ^2 δ^2 \quad \text{for any } S′ ⊆ S∖\{i\} \text{ and } i ∈ S,  (60)

which can be verified using the conditional variance formula (i.e., the law of total variance). Finally, the collection V1, …, Vr is independent of Uj1, because these random variables are jointly normal, and E[VjUj1]=0 for all j = 1, …, r.

Now we give the inductive argument for (59). For the base case, note that V1~N(0,τ12), where its variance is

τ_1^2 = \mathrm{Var}(V_1) = \mathrm{Var}(V_1 \mid U_{j_1}) = \mathrm{Var}(U_1)/4 ≥ σ^2 δ^2/4,

the second equality is due to the independence of V1 and Uj1, and the last inequality comes from the fact that conditioning can only decrease the variance, as stated above in (60). Hence,

P(|V_1| ≤ m_0) = Φ(m_0/τ_1) − Φ(−m_0/τ_1) ≤ Φ\big(2m_0/(σδ)\big) − Φ\big(−2m_0/(σδ)\big) = c.

Assume as the inductive hypothesis that P(|V_1| ≤ m_0, …, |V_q| ≤ m_0) ≤ c^q. Then

P\big( |V_1| ≤ m_0, …, |V_{q+1}| ≤ m_0 \big) ≤ P\big( |V_{q+1}| ≤ m_0 \,\big|\, |V_1| ≤ m_0, …, |V_q| ≤ m_0 \big)\, c^q.

We have, using the independence of V1, …, Vq+1 and Uj1,

V_{q+1} \mid V_1, …, V_q \overset{d}{=} V_{q+1} \mid V_1, …, V_q, U_{j_1} \overset{d}{=} V_{q+1} \mid U_1, …, U_q, U_{j_1} \overset{d}{=} N(0, τ_{q+1}^2),

where the variance is

τ_{q+1}^2 = \mathrm{Var}(V_{q+1} \mid U_1, …, U_q, U_{j_1}) = \mathrm{Var}(U_{q+1} \mid U_1, …, U_q)/4 ≥ σ^2 δ^2/4

and here we again used the fact that conditioning further can only reduce the variance, as in (60). Therefore,

P\big( |V_{q+1}| ≤ m_0 \,\big|\, V_1, …, V_q \big) ≤ Φ\big(2m_0/(σδ)\big) − Φ\big(−2m_0/(σδ)\big) = c

and so

P\big( |V_1| ≤ m_0, …, |V_{q+1}| ≤ m_0 \big) ≤ c \cdot c^q = c^{q+1},

completing the inductive step.

A.6. Proof of Lemma 7

Notice that

g(j_k,s_k) > g(j,s) \iff g(j_k,s_k)\big(1 − Σ_{j_k,j}/Σ_{j_k,j_k}\big) > g(j,s) − (Σ_{j_k,j}/Σ_{j_k,j_k})\, g(j_k,s_k).

We now handle division by 1 − Σ_{j_k,j}/Σ_{j_k,j_k} in three cases:

  • if 1 − Σ_{j_k,j}/Σ_{j_k,j_k} > 0, then
    g(j_k,s_k) > g(j,s) \iff g(j_k,s_k) > \frac{g(j,s) − (Σ_{j_k,j}/Σ_{j_k,j_k})\, g(j_k,s_k)}{1 − Σ_{j_k,j}/Σ_{j_k,j_k}};
  • if 1 − Σ_{j_k,j}/Σ_{j_k,j_k} < 0, then
    g(j_k,s_k) > g(j,s) \iff g(j_k,s_k) < \frac{g(j,s) − (Σ_{j_k,j}/Σ_{j_k,j_k})\, g(j_k,s_k)}{1 − Σ_{j_k,j}/Σ_{j_k,j_k}};
  • if 1 − Σ_{j_k,j}/Σ_{j_k,j_k} = 0, then
    g(j_k,s_k) > g(j,s) \iff 0 > g(j,s) − (Σ_{j_k,j}/Σ_{j_k,j_k})\, g(j_k,s_k).

Using this breakdown, we see that the statement g(jk, sk) > g(j, s) for all (j, s) ≠ (jk, sk) is then equivalent to

g(j_k,s_k) > g(j_k,−s_k), \quad g(j_k,s_k) > M^+(j_k,s_k), \quad g(j_k,s_k) < M^−(j_k,s_k), \quad 0 > M^0(j_k,s_k).

Noting that g(jk, sk) and g(jk, −sk) must have opposite signs, the above is equivalent to

g(j_k,s_k) > 0, \quad g(j_k,s_k) > M^+(j_k,s_k), \quad g(j_k,s_k) < M^−(j_k,s_k), \quad 0 > M^0(j_k,s_k),

which gives the result in the lemma.

A.7. Proof of Lemma 8

Define σ_k = σ/\sqrt{C(j_k,s_k)} and u(a,b) = \big(b + \sqrt{b^2 + 4a}\big)/2. Exactly as before (dropping for simplicity the notational dependence of g, M^+ on j_k, s_k),

g(g − M^+)/σ_k^2 > t,\ g > M^+ \iff g/σ_k > u(t, M^+/σ_k).

Therefore, we can rewrite (40) as

P(\tilde T_k > t) = \sum_{j_k,s_k} P\big( g(j_k,s_k)/σ_k > u(t, M^+(j_k,s_k)/σ_k),\ g(j_k,s_k) < M^−(j_k,s_k),\ 0 > M^0(j_k,s_k) \big).

Note that we have dropped the inequality g(j_k,s_k) > 0 from each term, as it is implied by the first inequality g(j_k,s_k)/σ_k > u(t, M^+(j_k,s_k)/σ_k) ≥ 0. We can upper bound the right-hand side above by replacing g(j_k,s_k) < M^−(j_k,s_k) with

g(j_k,s_k) < M^−(j_k,s_k) + u\big(tσ_k^2, M^+(j_k,s_k)\big) − M^+(j_k,s_k),

because u(a, b) ≥ b for all a ≥ 0 and b. Furthermore, Lemma 10 (Appendix A.10) shows that indeed σ_k^2 = σ^2/C(j_k,s_k) = \mathrm{Var}(g(j_k,s_k)) for fixed j_k, s_k, and hence g(j_k,s_k)/σ_k is standard normal for fixed j_k, s_k. Therefore,

P(\tilde T_k > t) ≤ \sum_{j_k,s_k} \int \big[ Φ\big(m^−/σ_k + u(t, m^+/σ_k) − m^+/σ_k\big) − Φ\big(u(t, m^+/σ_k)\big) \big]\, G_{j_k,s_k}(dm^+, dm^−, dm^0),  (61)

where

G_{j_k,s_k}(dm^+, dm^−, dm^0) = 1\{m^+ < m^−,\ m^0 < 0\}\, F_{M^+(j_k,s_k),\,M^−(j_k,s_k),\,M^0(j_k,s_k)}(dm^+, dm^−, dm^0),

with F_{M^+(j_k,s_k),\,M^−(j_k,s_k),\,M^0(j_k,s_k)} the joint distribution of M^+(j_k,s_k), M^−(j_k,s_k), M^0(j_k,s_k), and we used the fact that g is independent of M^+, M^−, M^0 for fixed j_k, s_k. From (61),

P(\tilde T_k > t) − e^{−t} ≤ \sum_{j_k,s_k} \int \bigg( \frac{Φ(m^−/σ_k + u(t,m^+/σ_k) − m^+/σ_k) − Φ(u(t,m^+/σ_k))}{Φ(m^−/σ_k) − Φ(m^+/σ_k)} − e^{−t} \bigg) \big[ Φ(m^−/σ_k) − Φ(m^+/σ_k) \big]\, G_{j_k,s_k}(dm^+, dm^−, dm^0),  (62)

where we here used the fact that

\sum_{j_k,s_k} \int \big[ Φ(m^−/σ_k) − Φ(m^+/σ_k) \big]\, G_{j_k,s_k}(dm^+, dm^−, dm^0) = \sum_{j_k,s_k} P\big( g(j_k,s_k) > M^+(j_k,s_k),\ g(j_k,s_k) < M^−(j_k,s_k),\ 0 > M^0(j_k,s_k) \big) ≥ \sum_{j_k,s_k} P\big( g(j_k,s_k) > 0,\ g(j_k,s_k) > M^+(j_k,s_k),\ g(j_k,s_k) < M^−(j_k,s_k),\ 0 > M^0(j_k,s_k) \big) = 1,

the last equality following by Lemma 7 (i.e., each term in the last sum is exactly the probability of jk, sk maximizing g). We show in Lemma 11 (Appendix A.11) that

\lim_{m^+ → ∞} \frac{Φ(m^− + u(t,m^+) − m^+) − Φ(u(t,m^+))}{Φ(m^−) − Φ(m^+)} ≤ e^{−t},

provided that m^− > m^+. Hence, fix ε > 0, and choose m_0 sufficiently large, so that for each k,

\frac{Φ(m^−/σ_k + u(t,m^+/σ_k) − m^+/σ_k) − Φ(u(t,m^+/σ_k))}{Φ(m^−/σ_k) − Φ(m^+/σ_k)} − e^{−t} ≤ ε \quad \text{for all } m^−/σ_k > m^+/σ_k ≥ m_0.

Working from (62),

P(\tilde T_k > t) − e^{−t} ≤ ε \sum_{j_k,s_k} \int_{m^+/σ_k ≥ m_0} \big[ Φ(m^−/σ_k) − Φ(m^+/σ_k) \big]\, G_{j_k,s_k}(dm^+, dm^−, dm^0) + \sum_{j_k,s_k} \int_{m^+/σ_k ≤ m_0} G_{j_k,s_k}(dm^+, dm^−, dm^0).

Note that the first term on the right-hand side above is ≤ ε, and the second term is bounded by \sum_{j_k,s_k} P\big(M^+(j_k,s_k) ≤ m_0 σ_k\big), which by assumption can be made arbitrarily small (smaller than, say, ε) by taking p large enough.

A.8. Proof of Lemma 9

For now, we reintroduce the notational dependence of the process g on A, sA, as this will be important. We show in Lemma 12 (Appendix A.12) that for any fixed jk, sk, j, s,

\frac{g_{(A,s_A)}(j,s) − (Σ_{j_k,j}/Σ_{j_k,j_k})\, g_{(A,s_A)}(j_k,s_k)}{1 − Σ_{j_k,j}/Σ_{j_k,j_k}} = g_{(A∪\{j_k\}, s_{A∪\{j_k\}})}(j,s),

where Σ_{j_k,j} = E[g_{(A,s_A)}(j_k,s_k)\, g_{(A,s_A)}(j,s)], as given in (30), and as usual, s_{A∪\{j_k\}} denotes the concatenation of s_A and s_k. According to its definition in (35), therefore,

M^+(j_k,s_k) = \max_{(j,s) ∈ S^+(j_k,s_k)} g_{(A∪\{j_k\}, s_{A∪\{j_k\}})}(j,s),

and hence on the event E(j_k,s_k), since we have g_{(A,s_A)}(j_k,s_k) > M^+(j_k,s_k),

M^+(j_k,s_k) = \max_{(j,s) ∈ S^+(j_k,s_k)} g_{(A∪\{j_k\}, s_{A∪\{j_k\}})}(j,s) \cdot 1\big\{ g_{(A∪\{j_k\}, s_{A∪\{j_k\}})}(j,s) < g_{(A,s_A)}(j_k,s_k) \big\} ≤ \max_{j ∉ A∪\{j_k\},\, s} g_{(A∪\{j_k\}, s_{A∪\{j_k\}})}(j,s) \cdot 1\big\{ g_{(A∪\{j_k\}, s_{A∪\{j_k\}})}(j,s) < g_{(A,s_A)}(j_k,s_k) \big\} = M(j_k,s_k).

This means that (now we return to writing g_{(A,s_A)} as g, for brevity)

\sum_{j_k,s_k} P\Big( \big\{ C(j_k,s_k)\, g(j_k,s_k)\big(g(j_k,s_k) − M(j_k,s_k)\big)/σ^2 > t \big\} ∩ E(j_k,s_k) \Big) ≤ \sum_{j_k,s_k} P\Big( \big\{ C(j_k,s_k)\, g(j_k,s_k)\big(g(j_k,s_k) − M^+(j_k,s_k)\big)/σ^2 > t \big\} ∩ E(j_k,s_k) \Big),

and so \lim_{p→∞} P(T_k > t) ≤ \lim_{p→∞} P(\tilde T_k > t) ≤ e^{−t}, the desired conclusion.

A.9. Proof of Theorem 3

Since we are assuming that P(B) → 1, we know that P(T_k > t \mid B) − P(T_k > t) → 0, so we only need to consider the marginal limiting distribution of T_k. We write A = A_0 and s_A = s_{A_0}. The general idea here is similar to that used in the proof of Theorem 2. Fixing m_0 and j_k, s_k, we will show that

P\big( M^+(j_k,s_k) ≤ m_0 σ_k \big) ≤ c^{|S|},  (63)

where S ⊆ {1, …, p} \ (A ∪ {j_k}) is as in the statement of the theorem for j = j_k, with size |S| ≥ d_p, and c < 1 is a constant (not depending on j_k). Also, as in the proof of Lemma 8, we abbreviated σ_k = σ/\sqrt{C(j_k,s_k)}. This bound would imply that

\sum_{j_k,s_k} P\big( M^+(j_k,s_k) ≤ m_0 σ_k \big) ≤ 2p\, c^{d_p} → 0 \quad \text{as } p → ∞,

since d_p/\log p → ∞ by (46). The above sum converging to zero is precisely the condition required by Lemma 9, which then gives the desired (conservative) exponential limit for T_k. Hence, it suffices to show (63). For this, we start by recalling the definition of M^+ in (35):

M^+(j_k,s_k) = \max_{(j,s) ∈ S^+(j_k,s_k)} \frac{g(j,s) − (Σ_{j_k,j}/Σ_{j_k,j_k})\, g(j_k,s_k)}{1 − Σ_{j_k,j}/Σ_{j_k,j_k}},

where S^+(j_k,s_k) = \{(j,s) : j ∉ A∪\{j_k\},\ Σ_{j_k,j}/Σ_{j_k,j_k} < 1\}.

Here, we write Σ_{j_k,j} = E[g(j_k,s_k)\, g(j,s)]; note that Σ_{j_k,j_k} = σ_k^2 (as shown in Lemma 10). First, we show that the conditions of the theorem actually imply that S^+(j_k, s_k) ⊇ S × \{−1, 1\}. This is true because for j ∈ S and any s ∈ \{−1, 1\}, we have |R_{j,j_k}|/R_{j_k,j_k} < η/(2 − η) by (44), and

\frac{|R_{j,j_k}|}{R_{j_k,j_k}} < \frac{η}{2 − η} \implies \bigg| \frac{R_{j,j_k}}{R_{j_k,j_k}} \cdot \frac{s_k − X_{j_k}^T (X_A^T)^+ s_A}{s − X_j^T (X_A^T)^+ s_A} \bigg| < 1 \implies \frac{Σ_{j_k,j}}{Σ_{j_k,j_k}} < 1.

The first implication uses the assumption (42), as |s_k − X_{j_k}^T(X_A^T)^+ s_A| ≤ 1 + ‖(X_A)^+ X_{j_k}‖_1 ≤ 2 − η and |s − X_j^T(X_A^T)^+ s_A| ≥ 1 − ‖(X_A)^+ X_j‖_1 ≥ η, and the second simply follows from the definition of Σ_{j_k,j} and Σ_{j_k,j_k}. Therefore,

M^+(j_k,s_k) ≥ \max_{j ∈ S,\, s} \frac{g(j,s) − (Σ_{j_k,j}/Σ_{j_k,j_k})\, g(j_k,s_k)}{1 − Σ_{j_k,j}/Σ_{j_k,j_k}}.

Let Uj=XjT(IPA)y and θjk,j=Rjk,j/Rjk,jk for jS. By the arguments given in the proof of Lemma 12, we can rewrite the right-hand side above, yielding

M^+(j_k,s_k) ≥ \max_{j ∈ S,\, s} \frac{U_j − θ_{j_k,j} U_{j_k}}{s − X_j^T (X_{A∪\{j_k\}}^T)^+ s_{A∪\{j_k\}}} = \max_{j ∈ S,\, s} \frac{s(U_j − θ_{j_k,j} U_{j_k})}{1 − s X_j^T (X_{A∪\{j_k\}}^T)^+ s_{A∪\{j_k\}}} ≥ \max_{j ∈ S} \frac{|U_j − θ_{j_k,j} U_{j_k}|}{1 + |X_j^T (X_{A∪\{j_k\}}^T)^+ s_{A∪\{j_k\}}|} ≥ \max_{j ∈ S} \frac{|U_j − θ_{j_k,j} U_{j_k}|}{2},

where the last two inequalities above follow as |X_j^T (X_{A∪\{j_k\}}^T)^+ s_{A∪\{j_k\}}| < 1 for all j ∈ S, which itself follows from the assumption that ‖(X_{A∪\{j_k\}})^+ X_j‖_1 < 1 for all j ∈ S, in (45). Hence,

P\big( M^+(j_k,s_k) ≤ m_0 σ_k \big) ≤ P\big( |V_j| ≤ m_0 σ_k,\ j ∈ S \big),

where V_j = (U_j − θ_{j_k,j} U_{j_k})/2. Writing without loss of generality r = |S| and S = {1, …, r}, it now remains to show that

P\big( |V_1| ≤ m_0 σ_k, …, |V_r| ≤ m_0 σ_k \big) ≤ c^r.  (64)

Similar to the arguments in the proof of Theorem 2, we will show (64) by induction, for the constant c = Φ\big(2m_0\sqrt{C}/(δη)\big) − Φ\big(−2m_0\sqrt{C}/(δη)\big) < 1. Before this, it is helpful to discuss three important facts. First, we note that (43) is actually a lower bound on the ratio of conditional to unconditional variances:

\mathrm{Var}(U_i \mid U_ℓ,\ ℓ ∈ S∖\{i\})/\mathrm{Var}(U_i) = \big[ R_{ii} − R_{i,S∖\{i\}} (R_{S∖\{i\},S∖\{i\}})^{−1} R_{S∖\{i\},i} \big]/R_{ii} ≥ δ^2 \quad \text{for all } i ∈ S.

Second, conditioning on a smaller set of variables can only increase the conditional variance:

\mathrm{Var}(U_i \mid U_ℓ,\ ℓ ∈ S′) ≥ \mathrm{Var}(U_i \mid U_ℓ,\ ℓ ∈ S∖\{i\}) ≥ δ^2 σ^2 R_{ii} \quad \text{for any } S′ ⊆ S∖\{i\} \text{ and } i ∈ S,

which holds as U1, …, Ur are jointly normal. Third, and lastly, the collection V1, …, Vr is independent of Ujk, since these variables are all jointly normal, and it is easily verified that E[VjUjk]=0 for each j = 1, …, r.

We give the inductive argument for (64). For the base case, we have V1~N(0,τ12), where

τ_1^2 = \mathrm{Var}(V_1) = \mathrm{Var}(V_1 \mid U_{j_k}) = \mathrm{Var}(U_1)/4 ≥ δ^2 σ^2 R_{11}/4.

Above, in the second equality, we used that V_1 and U_{j_k} are independent, and in the last inequality, that conditioning on fewer variables (here, none) only increases the variance. This means that

P\big( |V_1| ≤ m_0 σ_k \big) ≤ P\big( |Z| ≤ 2m_0 σ_k/(δσ\sqrt{R_{11}}) \big) ≤ P\big( |Z| ≤ 2m_0\sqrt{C}/(δη) \big) = c,

where Z is standard normal; note that in the last inequality above, we applied the upper bound

\frac{σ_k^2}{σ^2 R_{11}} = \frac{Σ_{j_k,j_k}}{σ^2 R_{11}} = \frac{R_{j_k,j_k}}{R_{11}} \cdot \frac{1}{[s_k − X_{j_k}^T (X_A^T)^+ s_A]^2} ≤ \frac{C}{η^2}.

Now, for the inductive hypothesis, assume that P(|V_1| ≤ m_0 σ_k, …, |V_q| ≤ m_0 σ_k) ≤ c^q. Consider

P\big( |V_1| ≤ m_0 σ_k, …, |V_{q+1}| ≤ m_0 σ_k \big) ≤ P\big( |V_{q+1}| ≤ m_0 σ_k \,\big|\, |V_1| ≤ m_0 σ_k, …, |V_q| ≤ m_0 σ_k \big)\, c^q.

Using the independence of V1, …, Vq+1 and Ujk,

V_{q+1} \mid V_1, …, V_q \overset{d}{=} V_{q+1} \mid V_1, …, V_q, U_{j_k} \overset{d}{=} V_{q+1} \mid U_1, …, U_q, U_{j_k} \overset{d}{=} N(0, τ_{q+1}^2).

The variance τ_{q+1}^2 is

τ_{q+1}^2 = \mathrm{Var}(V_{q+1} \mid U_1, …, U_q, U_{j_k}) = \mathrm{Var}(U_{q+1} \mid U_1, …, U_q)/4 ≥ δ^2 σ^2 R_{q+1,q+1}/4,

where we again used the fact that conditioning on a smaller set of variables only makes the variance larger. Finally,

P\big( |V_{q+1}| ≤ m_0 σ_k \,\big|\, V_1, …, V_q \big) ≤ P\big( |Z| ≤ 2m_0 σ_k/(δσ\sqrt{R_{q+1,q+1}}) \big) ≤ P\big( |Z| ≤ 2m_0\sqrt{C}/(δη) \big) = c,

where we used σ_k^2/(σ^2 R_{q+1,q+1}) ≤ C/η^2 as above, and so

P\big( |V_1| ≤ m_0 σ_k, …, |V_{q+1}| ≤ m_0 σ_k \big) ≤ c \cdot c^q = c^{q+1}.

This completes the inductive proof.

A.10. Statement and proof of Lemma 10

Lemma 10

For any fixed A, sA, and any j ∉ A, s ∈ {−1, 1}, we have

\mathrm{Var}(g(j,s)) = \frac{X_j^T (I − P_A) X_j\, σ^2}{[s − X_j^T (X_A^T)^+ s_A]^2} = \frac{σ^2}{\big\| (X_{A∪\{j\}}^T)^+ s_{A∪\{j\}} − (X_A^T)^+ s_A \big\|_2^2},

where s_{A∪\{j\}} denotes the concatenation of s_A and s.

Proof

We will show that

\frac{[s − X_j^T (X_A^T)^+ s_A]^2}{X_j^T (I − P_A) X_j} = \big\| (X_{A∪\{j\}}^T)^+ s_{A∪\{j\}} − (X_A^T)^+ s_A \big\|_2^2.  (65)

The right-hand side above, after a straightforward calculation, is shown to be equal to

s_{A∪\{j\}}^T (X_{A∪\{j\}}^T X_{A∪\{j\}})^{−1} s_{A∪\{j\}} − s_A^T (X_A^T X_A)^{−1} s_A.  (66)

Now let z = (X_{A∪\{j\}}^T X_{A∪\{j\}})^{−1} s_{A∪\{j\}}. In block form,

\begin{bmatrix} X_A^T X_A & X_A^T X_j \\ X_j^T X_A & X_j^T X_j \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} s_A \\ s \end{bmatrix}.  (67)

Solving for z_1 in the first row yields

z_1 = (X_A^T X_A)^{−1} s_A − (X_A)^+ X_j\, z_2

and, therefore, (66) is equal to

s_A^T z_1 + s z_2 − s_A^T (X_A^T X_A)^{−1} s_A = \big[ s − s_A^T (X_A)^+ X_j \big]\, z_2.  (68)

Solving for z_2 in the second row of (67) gives

z_2 = \frac{s − s_A^T (X_A)^+ X_j}{X_j^T (I − P_A) X_j}.

Plugging this value into (68) produces the left-hand side in (65), completing the proof. □
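The identity (65) is easy to spot-check numerically. The R snippet below (a sanity check with arbitrary illustrative choices of A, j and signs, not part of the proof) compares the two sides for a random X whose columns are in general position with probability one.

## Numerical spot-check of identity (65) in Lemma 10.
set.seed(1)
n <- 30; p <- 8
X <- matrix(rnorm(n * p), n, p)
A <- c(2, 5, 7); j <- 4; sA <- c(1, -1, 1); s <- -1
XA  <- X[, A];  XAj <- cbind(XA, X[, j])
pinvT <- function(M) M %*% solve(crossprod(M))   # (M^T)^+ for full column rank M
PA  <- XA %*% solve(crossprod(XA)) %*% t(XA)
lhs <- (s - drop(t(X[, j]) %*% pinvT(XA) %*% sA))^2 /
       drop(t(X[, j]) %*% (diag(n) - PA) %*% X[, j])
rhs <- sum((pinvT(XAj) %*% c(sA, s) - pinvT(XA) %*% sA)^2)
all.equal(lhs, rhs)   # TRUE up to numerical error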

A.11. Statement and proof of Lemma 11

Lemma 11

If v = v(m) satisfies v > m, then for any t ≥ 0,

\lim_{m→∞} \frac{Φ(v + u(t,m) − m) − Φ(u(t,m))}{Φ(v) − Φ(m)} ≤ e^{−t}.

Proof

First note, using a Taylor series expansion of \sqrt{1 + 4t/m^2}, that for sufficiently large m,

u(t,m) ≥ m + \frac{t}{m} − \frac{t^2}{m^3}.  (69)

Also, a simple calculation shows that ∂\big(u(t,m) − m\big)/∂m ≤ 0 for all m, so that

u(t,w) − w ≤ u(t,m) − m \quad \text{for all } w ≥ m.  (70)

Now consider

Φ(v + u(t,m) − m) − Φ(u(t,m)) = \int_{u(t,m)}^{v + u(t,m) − m} \frac{e^{−z^2/2}}{\sqrt{2π}}\, dz = \int_m^v \frac{e^{−(w + u(t,m) − m)^2/2}}{\sqrt{2π}}\, dw ≤ \int_m^v \frac{e^{−u(t,w)^2/2}}{\sqrt{2π}}\, dw ≤ \int_m^v \frac{e^{−(w + t/w − t^2/w^3)^2/2}}{\sqrt{2π}}\, dw,

where the first inequality follows from (70), and the second from (69) (assuming m is large enough). Continuing from the last upper bound,

\int_m^v \frac{e^{−(w + t/w − t^2/w^3)^2/2}}{\sqrt{2π}}\, dw = e^{−t} \int_m^v \frac{e^{−w^2/2}}{\sqrt{2π}}\, f(w,t)\, dw,

where

f(w,t) = \exp\Big( \frac{t^2}{2w^2} + \frac{t^3}{w^4} − \frac{t^4}{2w^6} \Big).

Therefore, we have

\frac{Φ(v + u(t,m) − m) − Φ(u(t,m))}{Φ(v) − Φ(m)} − e^{−t} ≤ \bigg( \frac{\int_m^v (e^{−w^2/2}/\sqrt{2π})\, f(w,t)\, dw}{\int_m^v (e^{−w^2/2}/\sqrt{2π})\, dw} − 1 \bigg)\, e^{−t}.  (71)

It is clear that f(w,t) → 1 as w → ∞. Fixing ε > 0, choose m_0 large enough so that for all w ≥ m_0, we have |f(w,t) − 1| ≤ ε. Then the term multiplying e^{−t} on the right-hand side in (71), for m ≥ m_0, is

\bigg| \frac{\int_m^v (e^{−w^2/2}/\sqrt{2π})\, f(w,t)\, dw}{\int_m^v (e^{−w^2/2}/\sqrt{2π})\, dw} − 1 \bigg| ≤ \frac{\int_m^v (e^{−w^2/2}/\sqrt{2π})\, |f(w,t) − 1|\, dw}{\int_m^v (e^{−w^2/2}/\sqrt{2π})\, dw} ≤ ε,

which shows that the right-hand side in (71) is ≤ ε · e^{−t} ≤ ε, and completes the proof. □

A.12. Statement and proof of Lemma 12

Lemma 12

For any fixed jk, sk, j, s (and fixed A, sA), we have

\frac{g_{(A,s_A)}(j,s) − (Σ_{j_k,j}/Σ_{j_k,j_k})\, g_{(A,s_A)}(j_k,s_k)}{1 − Σ_{j_k,j}/Σ_{j_k,j_k}} = g_{(A∪\{j_k\}, s_{A∪\{j_k\}})}(j,s),  (72)

where Σ_{j_k,j} denotes the covariance between g_{(A,s_A)}(j_k,s_k) and g_{(A,s_A)}(j,s),

Σ_{j_k,j} = \frac{X_{j_k}^T (I − P_A) X_j\, σ^2}{[s_k − s_A^T (X_A)^+ X_{j_k}]\,[s − s_A^T (X_A)^+ X_j]}.

Proof

Simple manipulations of the left-hand side in (72) yield the expression

\frac{X_j^T (I − P_A)\, y − θ_{j_k,j}\, X_{j_k}^T (I − P_A)\, y}{s − s_A^T (X_A)^+ X_j − θ_{j_k,j}\, [s_k − s_A^T (X_A)^+ X_{j_k}]},  (73)

where θ_{j_k,j} = X_{j_k}^T (I − P_A) X_j / \big(X_{j_k}^T (I − P_A) X_{j_k}\big). Now it remains to show that (73) is equal to

\frac{X_j^T (I − P_{A∪\{j_k\}})\, y}{s − s_{A∪\{j_k\}}^T (X_{A∪\{j_k\}})^+ X_j}.  (74)

We show individually that the numerators and denominators in (73) and (74) are equal. First the denominators: starting with (73), notice that

s − s_A^T (X_A)^+ X_j − θ_{j_k,j}\, [s_k − s_A^T (X_A)^+ X_{j_k}] = s − s_{A∪\{j_k\}}^T \begin{bmatrix} (X_A)^+ (X_j − θ_{j_k,j} X_{j_k}) \\ θ_{j_k,j} \end{bmatrix}.  (75)

By the well-known formula for partial regression coefficients,

θ_{j_k,j} = \frac{X_{j_k}^T (I − P_A) X_j}{X_{j_k}^T (I − P_A) X_{j_k}} = \big[ (X_{A∪\{j_k\}})^+ X_j \big]_{j_k},

that is, θ_{j_k,j} is the (j_k)th coefficient in the regression of X_j on X_{A∪\{j_k\}}. Hence, to show that (75) is equal to the denominator in (74), we need to show that (X_A)^+(X_j − θ_{j_k,j} X_{j_k}) gives the coefficients on A in the regression of X_j on X_{A∪\{j_k\}}. This follows by simply noting that the coefficients (X_{A∪\{j_k\}})^+ X_j = (θ_{A,j}, θ_{j_k,j}) satisfy the equation

X_A θ_{A,j} + X_{j_k} θ_{j_k,j} = P_{A∪\{j_k\}} X_j,

and so solving for θA,j,

θ_{A,j} = (X_A)^+ \big( P_{A∪\{j_k\}} X_j − θ_{j_k,j} X_{j_k} \big) = (X_A)^+ \big( X_j − θ_{j_k,j} X_{j_k} \big).

Now for the numerators: again beginning with (73), its numerator is

y^T (I − P_A)\big( X_j − θ_{j_k,j} X_{j_k} \big),  (76)

and by essentially the same argument as above, we have

P_A\big( X_j − θ_{j_k,j} X_{j_k} \big) = P_{A∪\{j_k\}} X_j − θ_{j_k,j} X_{j_k};

therefore, (76) matches the numerator in (74). □

Footnotes

1

Discussed in 10.1214/13-AOS1175A, 10.1214/13-AOS1175B, 10.1214/13-AOS1175C, 10.1214/13-AOS1175D, 10.1214/13-AOS1175E and 10.1214/14-AOS1175F; rejoinder at 10.1214/14-AOS1175REJ.

6

It is important to mention that a simple application of sample splitting can yield proper p-values for an adaptive procedure like forward stepwise: for example, run forward stepwise regression on one-half of the observations to construct a sequence of models, and use the other half to evaluate significance via the usual chi-squared test. Some of the related work mentioned in Section 2.5 does essentially this, but with more sophisticated splitting schemes. Our proposal uses the entire data set as given, and we do not consider sample splitting or resampling techniques. Aside from adding a layer of complexity, the use of sample splitting can result in a loss of power in significance testing.

7

Points X_1, …, X_p ∈ ℝ^n are said to be in general position provided that no k-dimensional affine subspace L ⊆ ℝ^n, k < min{n, p}, contains more than k + 1 elements of {±X_1, …, ±X_p}, excluding antipodal pairs. Equivalently: the affine span of any k + 1 points s_1 X_{i_1}, …, s_{k+1} X_{i_{k+1}}, for any signs s_1, …, s_{k+1} ∈ {−1, 1}, does not contain any element of the set {±X_i : i ≠ i_1, …, i_{k+1}}.

8

From its definition in (5), we get T_k = \big( ⟨y − μ, X\hatβ(λ_{k+1})⟩ − ⟨y − μ, X_A \tildeβ_A(λ_{k+1})⟩ + ⟨μ, X\hatβ(λ_{k+1}) − X_A \tildeβ_A(λ_{k+1})⟩ \big)/σ^2 by expanding y = y − μ + μ, with μ = Xβ* denoting the true mean. The first two terms are now really empirical covariances, and the last term is typically small. In fact, when X is orthogonal, it is not hard to see that this last term is exactly zero under the null hypothesis.

9

In principle, fixed hypothesis tests can be used along with the appropriate correction for multiple comparisons in order to test a random null hypotheses. Aside from being conservative, it is unclear how to efficiently carry out such a procedure when the random null hypothesis consists of a group of coefficients (as opposed to a single one).

10

In expressing the joining and leaving times in the forms (15) and (16), we are implicitly assuming that λk+1 < λk, with strict inequality. Since X has columns in general position, this is true for (Lebesgue) almost every y, or in other words, with probability one taken over the normally distributed errors in (1).

11

To be perfectly clear, here Σj,j′ actually depends on s, s′, but our notation suppresses this dependence for brevity.

12

Note that in the Gaussian case, this definition is actually σ2 times the usual notion of degrees of freedom; hence in the presence of a scale parameter, we would divide the right-hand side in the definition (50) by this scale parameter, and we would do the same for the covariance statistic as defined in (50).

Contributor Information

Richard Lockhart, Email: lockhart@sfu.ca, Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada.

Jonathan Taylor, Email: jonathan.taylor@stanford.edu, Department of Statistics, Stanford University, Stanford, California 94305, USA.

Ryan J. Tibshirani, Email: ryantibs@cmu.edu, Departments of Statistics and Machine Learning, Carnegie Mellon University, 229B Baker Hall, Pittsburgh, Pennsylvania 15213, USA.

Robert Tibshirani, Email: tibs@stanford.edu, Department of Health, Research & Policy, Department of Statistics, Stanford University, Stanford, California 94305, USA.

References

  1. Beck A, Teboulle M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J Imaging Sci. 2009;2:183–202. MR2486527.
  2. Becker S, Bobin J, Candès EJ. NESTA: A fast and accurate first-order method for sparse recovery. SIAM J Imaging Sci. 2011;4:1–39. MR2765668.
  3. Becker SR, Candès EJ, Grant MC. Templates for convex cone problems with applications to sparse signal recovery. Math Program Comput. 2011;3:165–218. MR2833262.
  4. Boyd S, Parikh N, Chu E, Peleato B, Eckstein J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found Trends Mach Learn. 2011;3:1–122.
  5. Bühlmann P. Statistical significance in high-dimensional linear models. Bernoulli. 2013;19:1212–1242. MR3102549.
  6. Candès EJ, Plan Y. Near-ideal model selection by ℓ1 minimization. Ann Statist. 2009;37:2145–2177. MR2543688.
  7. Candès EJ, Tao T. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans Inform Theory. 2006;52:5406–5425. MR2300700.
  8. Chen SS, Donoho DL, Saunders MA. Atomic decomposition by basis pursuit. SIAM J Sci Comput. 1998;20:33–61. MR1639094.
  9. de Haan L, Ferreira A. Extreme Value Theory: An Introduction. Springer; New York: 2006. MR2234156.
  10. Donoho DL. Compressed sensing. IEEE Trans Inform Theory. 2006;52:1289–1306. MR2241189.
  11. Efron B. How biased is the apparent error rate of a prediction rule? J Amer Statist Assoc. 1986;81:461–470. MR0845884.
  12. Efron B, Hastie T, Johnstone I, Tibshirani R. Least angle regression. Ann Statist. 2004;32:407–499. MR2060166.
  13. Fan J, Guo S, Hao N. Variance estimation using refitted cross-validation in ultrahigh-dimensional regression. J R Stat Soc Ser B Stat Methodol. 2012;74:37–65. doi: 10.1111/j.1467-9868.2011.01005.x. MR2885839.
  14. Friedman J, Hastie T, Tibshirani R. Regularization paths for generalized linear models via coordinate descent. J Stat Softw. 2010;33:1–22.
  15. Friedman J, Hastie T, Höfling H, Tibshirani R. Pathwise coordinate optimization. Ann Appl Stat. 2007;1:302–332. MR2415737.
  16. Fuchs JJ. Recovery of exact sparse representations in the presence of bounded noise. IEEE Trans Inform Theory. 2005;51:3601–3608. MR2237526.
  17. Grazier G’Sell M, Taylor J, Tibshirani R. Adaptive testing for the graphical lasso. 2013 Preprint. Available at arXiv:1307.4765.
  18. Grazier G’Sell M, Wager S, Chouldechova A, Tibshirani R. False discovery rate control for sequential selection procedures, with application to the lasso. 2013 Preprint. Available at arXiv:1309.5352.
  19. Greenshtein E, Ritov Y. Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli. 2004;10:971–988. MR2108039.
  20. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. Springer; New York: 2008. MR2722294.
  21. Javanmard A, Montanari A. Confidence intervals and hypothesis testing for high-dimensional regression. 2013a Preprint. Available at arXiv:1306.3171.
  22. Javanmard A, Montanari A. Hypothesis testing in high-dimensional regression under the Gaussian random design model: Asymptotic theory. 2013b Preprint. Available at arXiv:1301.4240.
  23. Meinshausen N, Bühlmann P. Stability selection. J R Stat Soc Ser B Stat Methodol. 2010;72:417–473. MR2758523.
  24. Meinshausen N, Meier L, Bühlmann P. p-values for high-dimensional regression. J Amer Statist Assoc. 2009;104:1671–1681. MR2750584.
  25. Minnier J, Tian L, Cai T. A perturbation method for inference on regularized regression estimates. J Amer Statist Assoc. 2011;106:1371–1382. doi: 10.1198/jasa.2011.tm10382. MR2896842.
  26. Osborne MR, Presnell B, Turlach BA. A new approach to variable selection in least squares problems. IMA J Numer Anal. 2000a;20:389–403. MR1773265.
  27. Osborne MR, Presnell B, Turlach BA. On the LASSO and its dual. J Comput Graph Statist. 2000b;9:319–337. MR1822089.
  28. Park MY, Hastie T. L1-regularization path algorithm for generalized linear models. J R Stat Soc Ser B Stat Methodol. 2007;69:659–677. MR2370074.
  29. Rhee SY, Gonzales MJ, Kantor R, Betts BJ, Ravela J, Shafer RW. Human immunodeficiency virus reverse transcriptase and protease sequence database. Nucleic Acids Res. 2003;31:298–303. doi: 10.1093/nar/gkg100.
  30. Sun T, Zhang CH. Scaled sparse linear regression. Biometrika. 2012;99:879–898. MR2999166.
  31. Taylor J, Loftus J, Tibshirani RJ. Tests in adaptive regression via the Kac–Rice formula. 2013 Preprint. Available at arXiv:1308.3020.
  32. Taylor J, Takemura A, Adler RJ. Validity of the expected Euler characteristic heuristic. Ann Probab. 2005;33:1362–1396. MR2150192.
  33. Tibshirani R. Regression shrinkage and selection via the lasso. J Roy Statist Soc Ser B. 1996;58:267–288. MR1379242.
  34. Tibshirani RJ. The lasso problem and uniqueness. Electron J Stat. 2013;7:1456–1490. MR3066375.
  35. Tibshirani RJ, Taylor J. Degrees of freedom in lasso problems. Ann Statist. 2012;40:1198–1232. MR2985948.
  36. van de Geer S, Bühlmann P. On asymptotically optimal confidence regions and tests for high-dimensional models. 2013 Preprint. Available at arXiv:1303.0518.
  37. Wainwright MJ. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Trans Inform Theory. 2009;55:2183–2202. MR2729873.
  38. Wasserman L, Roeder K. High-dimensional variable selection. Ann Statist. 2009;37:2178–2201. doi: 10.1214/08-aos646. MR2543689.
  39. Weissman I. Estimation of parameters and large quantiles based on the k largest observations. J Amer Statist Assoc. 1978;73:812–815. MR0521329.
  40. Zhang CH, Zhang S. Confidence intervals for low dimensional parameters in high dimensional linear models. J R Stat Soc Ser B Stat Methodol. 2014;76:217–242. MR3153940.
  41. Zhao P, Yu B. On model selection consistency of Lasso. J Mach Learn Res. 2006;7:2541–2563. MR2274449.
  42. Zou H, Hastie T. Regularization and variable selection via the elastic net. J R Stat Soc Ser B Stat Methodol. 2005;67:301–320. MR2137327.
  43. Zou H, Hastie T, Tibshirani R. On the “degrees of freedom” of the lasso. Ann Statist. 2007;35:2173–2192. MR2363967.
