Published in final edited form as: J Am Stat Assoc. 2019 Sep 17;115(532):1822–1834. doi: 10.1080/01621459.2019.1654878

IPAD: Stable Interpretable Forecasting with Knockoffs Inference

Yingying Fan 1, Jinchi Lv 1, Mahrad Sharifvaghefi 1, Yoshimasa Uematsu 2

Abstract

Interpretability and stability are two important features that are desired in many contemporary big data applications arising in statistics, economics, and finance. While the former is enjoyed to some extent by many existing forecasting approaches, the latter, in the sense of controlling the fraction of wrongly discovered features (which can greatly enhance interpretability), is still largely underdeveloped. To this end, in this paper we exploit the general framework of model-X knockoffs introduced recently in Candès, Fan, Janson and Lv (2018), which is nonconventional for reproducible large-scale inference in that it is completely free of the use of p-values for significance testing, and suggest a new method of intertwined probabilistic factors decoupling (IPAD) for stable interpretable forecasting with knockoffs inference in high-dimensional models. The recipe of the method is to construct the knockoff variables by assuming a latent factor model, which is widely used in economics and finance, for the association structure of the covariates. Our method and work are distinct from the existing literature in that we estimate the covariate distribution from data instead of assuming that it is known when constructing the knockoff variables, our procedure does not require any sample splitting, we provide theoretical justifications for the asymptotic false discovery rate control, and the theory for the power analysis is also established. Several simulation examples and a real data analysis further demonstrate that the newly suggested method has appealing finite-sample performance with the desired interpretability and stability compared to some popularly used forecasting methods.

Keywords: Reproducibility, Power, Latent factors, Model-X knockoffs, Large-scale inference and FDR, Stability

1. Introduction

Forecasting is a fundamental problem that arises in statistics, economics, and finance. With the availability of big data, many machine learning algorithms such as the Lasso and random forests can be employed for this purpose by exploring a large pool of potential features. Many of these existing procedures provide a certain measure of feature importance, which can then be used to judge the relative importance of the selected features for the goal of interpretability. Yet the issue of stability, in the sense of controlling the fraction of wrongly discovered features, is still largely underdeveloped. As argued in [20] in econometric settings, it is difficult to obtain interpretability and stability simultaneously even in simple Lasso forecasting. A natural question is how to ensure both interpretability and stability for flexible forecasting.

Naturally, stability is related to statistical inference. Recent years have witnessed a growing body of work on high-dimensional inference in the statistics and econometrics literature; see, for example, [42], [38], [43], [16], [18], [17], [36], and [29]. Most existing work on high-dimensional inference for interpretable models has focused primarily on post-selection inference, known as selective inference, and on debiasing for regularization and machine learning methods. In real applications, one is often interested in conducting global inference relative to the full model as opposed to local inference conditional on the selected model. Moreover, many statistical inferences are based on p-values from significance testing. However, obtaining valid p-values even for the Lasso in relatively complicated high-dimensional nonlinear models remains largely unresolved, let alone for more complicated model fitting procedures such as random forests. Indeed, high-dimensional inference is intrinsically challenging even in parametric settings [27].

The desired property of stability for interpretable forecasting in this paper concentrates on global inference that precisely controls the fraction of wrongly discovered features in high-dimensional models, which is also known as reproducible large-scale inference. Such a problem involves testing the joint significance of a large number of features simultaneously, which is widely known as the problem of multiple testing in statistical inference. For this problem, the null hypothesis for each feature states that the feature is unimportant in the joint model, which can be understood as the property that this individual feature and the response are independent conditional on all the remaining features, while the corresponding alternative hypothesis states the opposite. Conventionally, p-values from hypothesis testing are used to decide whether or not to reject each null hypothesis at a significance level chosen to control the probability of a false discovery in a single hypothesis test, that is, rejecting the null hypothesis when it is true. When performing multiple hypothesis tests, the probability of making at least one false discovery, known as the family-wise error rate, can be inflated compared to that of a single hypothesis test. The work on controlling this error rate for multiple testing dates back to [13], where the simple, useful idea is to lower the significance level for each individual test to the target level divided by the total number of tests to be performed. The Bonferroni correction procedure is, however, well known to be conservative with relatively low power. Later on, [30] proposed a stepdown procedure that is less conservative than the Bonferroni procedure. More recently, [35] suggested a procedure in which the critical values of the individual tests are constructed sequentially.

A more powerful and extremely popular approach to multiple testing is the Benjamini–Hochberg (BH) procedure for controlling the false discovery rate (FDR), which originated in [9]; the FDR is defined as the expectation of the fraction of falsely rejected null hypotheses, known as the false discovery proportion. Given the p-values from the multiple hypothesis tests, this procedure sorts the p-values from low to high and chooses a simple, intuitive cutoff point for rejecting the null hypotheses, which can be viewed as an adaptive extension of the Bonferroni correction for multiple comparisons. The BH procedure was shown to control the FDR at the desired level for independent test statistics in [9] and under positive regression dependency among the test statistics in [10], where it was also shown that a simple modification of the procedure controls the FDR under other forms of dependency, although such a modification is generally conservative. There is a huge literature on the theory, applications, and various extensions of the original BH procedure for FDR control; see, for instance, [8], [24], [7], and [19].
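For concreteness, the BH step-up rule described above can be sketched in a few lines of code; this is background material rather than part of IPAD, and the function name and interface below are purely illustrative.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.2):
    """Indices of hypotheses rejected by the BH step-up rule at target FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                      # sort p-values from low to high
    thresholds = q * np.arange(1, m + 1) / m   # BH critical values q*i/m
    passed = p[order] <= thresholds
    if not passed.any():
        return np.array([], dtype=int)
    cutoff = np.max(np.nonzero(passed)[0])     # largest i with p_(i) <= q*i/m
    return order[: cutoff + 1]                 # reject all hypotheses up to the cutoff
```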

The aforementioned econometric and statistical inference methods, including the BH-type procedures for FDR control, are all rooted in the availability and validity of computable p-values for evaluating variable importance. As mentioned before, such a prerequisite can be a luxury that is largely unclear how to obtain in high dimensions, even for the Lasso in general nonlinear models or for random forests. In contrast, [4] proposed a novel procedure named the knockoff filter for FDR control that bypasses the use of p-values in the Gaussian linear model with a deterministic design matrix, where the dimensionality is no larger than the sample size, and [5] generalized the method to high-dimensional linear models as a two-step procedure based on sample splitting, where a feature screening approach is first used to reduce the dimensionality to below the sample size (see, e.g., [23] and [25]) and the knockoff filter is then applied to the set of features retained after the screening step for selective inference. The key ingredient of the knockoff filter is the construction of the so-called knockoff variables, built in a geometric way so that they perfectly mimic the correlation structure among the original covariates and can be used as control variables to evaluate the importance of the original variables. Recently, [15] extended the work of [4] by introducing the framework of model-X knockoffs for FDR control in general high-dimensional nonlinear models. A crucial distinction is that the knockoff variables are constructed in a probabilistic fashion such that the joint dependency structure of the original variables and their knockoff copies is invariant to swapping any set of original variables with their knockoff counterparts, which makes it possible to go beyond linear models and handle high dimensionality. As a result, model-X knockoffs enjoys exact finite-sample FDR control at the target level. However, a major assumption in [15] is that the joint distribution of all the covariates needs to be known for valid FDR control.

Motivated by applications in economics and finance, in this paper we model the association structure of the covariates using a latent factor model, which effectively reduces the dimensionality and enables reliable estimation of the unknown joint distribution of all the covariates. Taking the latent factor model structure into account, we first estimate the association structure of the covariates and then construct the empirical knockoffs matrix using the estimated dependency structure. Our empirical knockoffs matrix can be regarded as an approximation to the oracle knockoffs matrix in [15], which requires knowledge of the true covariate distribution. Exploiting the general framework of model-X knockoffs in [15], we suggest the new method of intertwined probabilistic factors decoupling (IPAD) for stable interpretable forecasting with knockoffs inference in high-dimensional models. The innovations of our method and work are fourfold. First, we estimate the covariate distribution from data instead of assuming that it is known when constructing the knockoff variables. Second, our procedure does not require any sample splitting and is thus more practical when the sample size is limited. Third, we provide theoretical justifications for the asymptotic false discovery rate control when the estimated dependency structure is employed. Fourth, the theory for the power analysis is also established, which reveals that there can be asymptotically no power loss in applying the knockoffs procedure compared to the underlying variable selection method. Therefore, FDR control by knockoffs can be a pure gain. Compared to earlier work, an additional challenge of our study is that knowing the true underlying distribution does not lead to the most efficient construction of the oracle knockoffs matrix due to the presence of latent factors.

The rest of the paper is organized as follows. Section 2 introduces the model setting and presents the new IPAD procedure. We establish the asymptotic properties of IPAD in Section 3. Sections 4 and 5 present several simulation and real data examples to showcase the finite-sample performance and the advantages of our newly suggested procedure compared to some popularly used ones. We discuss some implications and extensions of our work in Section 6. The proofs of the main results are relegated to the Appendix. Additional technical details and numerical results are provided in the Supplementary Material.

2. Intertwined probabilistic factors decoupling

To facilitate the technical presentation, we will introduce the model setting for the high-dimensional FDR control problem in Section 2.1 and present the new IPAD procedure in Section 2.2.

2.1. Model setting

Consider the high-dimensional linear regression model

y = X\beta + \varepsilon, \qquad (1)

where y ∈ ℝn is the response vector, X ∈ ℝn×p is the random matrix of a large number of potential regressors, β = (β1, ⋯, βp)′ ∈ ℝp is the regression coefficient vector, ε ∈ ℝn is the vector of model errors, and n and p denote the sample size and dimensionality, respectively. Here without loss of generality, we assume that both the response and the covariates are centered with mean zero and thus there is no intercept. Motivated by many applications in economics and finance, we further assume that the design matrix X follows the exact factor model

X = F^0 \Lambda^{0\prime} + E = C^0 + E, \qquad (2)

where F^0 = (f_1^0, …, f_n^0)′ ∈ ℝ^{n×r} is a random matrix of latent factors, Λ^0 = (λ_1^0, …, λ_p^0)′ ∈ ℝ^{p×r} is a matrix of deterministic factor loadings, and the error matrix E ∈ ℝ^{n×p} captures the remaining variation that cannot be explained by these latent factors. We assume that the number of factors r is fixed but unknown and that the components of E are independent and identically distributed (i.i.d.) from some parametric distribution with cumulative distribution function G(·;η^0), where η^0 ∈ ℝ^m is a finite-dimensional unknown parameter vector. For technical simplicity, models (1) and (2) are assumed to have no endogeneity, and F^0 is assumed to have i.i.d. rows and to be independent of E.

In this paper, we focus on the high-dimensional scenario where the dimensionality p can be much larger than the sample size n. Therefore, to ensure model identifiability we impose the sparsity assumption that the true regression coefficient vector β has only a small portion of nonzeros; specifically, β takes nonzero values only on some (unknown) index set S_0 ⊂ {1, …, p} and β_j = 0 for all j ∈ S_1 := {1, …, p} \ S_0. Denote by s = |S_0| the size of S_0. We assume that s = o(n) throughout the paper.

We are interested in identifying the index set S0 with a theoretically guaranteed error rate. To be more precise, we try to select variables in S0 while keeping the false discovery rate (FDR) under some prespecified desired level q ∈ (0,1), where the FDR is defined as

\mathrm{FDR} := \mathbb{E}[\mathrm{FDP}] \quad \text{with} \quad \mathrm{FDP} := \frac{|\widehat{S} \cap S_1|}{|\widehat{S}| \vee 1}. \qquad (3)

Here FDP stands for the false discovery proportion and Ŝ represents the set of variables selected by some procedure using the observed data (X, y). A slightly modified version of the FDR is defined as

\mathrm{mFDR} := \mathbb{E}\left[\frac{|\widehat{S} \cap S_1|}{|\widehat{S}| + q^{-1}}\right]. \qquad (4)

Clearly, FDR is more conservative than mFDR in that the latter is always under control if the former is.

It is easy to see that FDR is a measurement of type I error for variable selection. The other important aspect of variable selection is power, which is defined as

\mathrm{Power} := \mathbb{E}\left[\frac{|\widehat{S} \cap S_0|}{|S_0|}\right] = \mathbb{E}\left[\frac{|\widehat{S} \cap S_0|}{s}\right]. \qquad (5)

It is well known that FDR and power are two sides of the same coin. We aim at developing a variable selection procedure with theoretically guaranteed FDR control and meanwhile achieving high power.
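To fix ideas, the empirical FDP in (3) and the true discovery proportion underlying the power in (5) can be computed for a single replication as follows; this helper is purely illustrative and is not part of the IPAD procedure itself.

```python
def fdp_and_tdp(selected, true_support):
    """Empirical FDP as in (3) and TDP (whose expectation is the power in (5))."""
    selected, true_support = set(selected), set(true_support)
    fdp = len(selected - true_support) / max(len(selected), 1)    # |S_hat| v 1 in (3)
    tdp = len(selected & true_support) / max(len(true_support), 1)
    return fdp, tdp
```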

2.2. IPAD

The key ingredient of the model-X knockoffs framework introduced originally in [15] is the construction of the so-called model-X knockoff variables defined as follows.

Definition 1 (Model-X knockoff variables [15])

For a set of random variables x = (X_1, …, X_p), a new set of random variables x̃ = (X̃_1, …, X̃_p) is called a set of model-X knockoff variables if it satisfies the following properties:

  1. For any subset S ⊂ {1, …, p}, we have [x, x̃]_swap(S) =_d [x, x̃], where =_d denotes equality in distribution and the vector [x, x̃]_swap(S) is obtained by swapping X_j and X̃_j for each j ∈ S.

  2. Conditional on x, the knockoffs vector x˜ is independent of response Y.

See Section B of Supplementary Material for a brief review of the model-X knockoffs framework. In theory, if the distribution of C0 and the value of η0 are known, the SCIP algorithm proposed in [15] can be used to construct the knockoff variables. However, the computational cost can be high depending on the exact distributions. Instead we introduce a more efficient and practically implementable approach for constructing the knockoff variables below.

We start by introducing the knockoff generating function: for each given augmented parameter vector θ = vec(vec(C), η), define

\widetilde{X}(\theta) = C + E_{\eta}, \qquad (6)

where E_η is a matrix composed of i.i.d. random samples from the distribution G(·;η). To gain some insight, let us first consider the ideal situation where the factor model structure (2) is fully available; that is, we know the realization C^0 and the true distribution G(·;η^0) of the error matrix E. In such a case, the oracle (ideal) knockoffs matrix X̃(θ^0) can be constructed as

\widetilde{X}(\theta^0) = C^0 + E_{\eta^0}, \qquad (7)

where Eη0 is an i.i.d. copy of E. Note that Eη0 itself is not a function of η0, but we slightly abuse the notation to emphasize the dependence of the distribution function on parameter η0. In practice, θ0 is unknown and needs to be estimated. Letting θ^ denote an estimator (obtained using data X) of θ0, we name X˜(θ^) as the empirical knockoffs matrix:

\widetilde{X}(\widehat{\theta}) = \widehat{C} + E_{\widehat{\eta}}, \qquad (8)

where Ĉ is an empirical estimate of C^0 and E_η̂ ∈ ℝ^{n×p} is composed of i.i.d. random variables drawn from the plug-in estimate of the distribution function, G(·;η̂), and is independent of (X, y) conditional on η̂. The following proposition justifies the validity of the oracle knockoffs matrix.

Proposition 1

Under model setting (2), the oracle knockoffs matrix defined in (7) satisfies Definition 1.

However, the empirical knockoffs matrix given in (8) generally does not satisfy the exchangeability property because of the dependence of θ^ on the training data X. Although the oracle knockoffs matrix is generally unavailable, it plays an important role in our theoretical developments as a proxy of the empirical knockoffs matrix. We remark that in the construction above, we slightly misuse the concept and call C0 a parameter. This is because although C0 is a random matrix, for the construction of valid knockoff variables it is the particular realization C0 leading to the observed data matrix X that matters. In other words, a valid construction of knockoff variables requires the knowledge of the specific realization C0 instead of the distribution of C0. To understand this, consider the scenario where the underlying parameter η0 and the exact distribution of C0 are fully available. If we independently generate random variables from this known distribution and form a new data matrix X1, because of the independence between X1 and X, the exchangeability assumption in Definition 1 will be violated and thus X1 cannot be a valid knockoffs matrix. On the other hand, as long as we know the realization C0 and parameter η0, a valid knockoffs matrix X˜(θ0) can be constructed using (7) regardless of whether the exact distribution of C0 is available or not.

In practice, however, θ^0 is unavailable and consequently X̃(θ^0) is inaccessible. To overcome this difficulty, we next introduce our new method IPAD. With the aid of the empirical knockoffs matrix, we suggest the following IPAD procedure for FDR control with knockoffs inference.

Procedure 1 (IPAD)

  1. (Estimation of parameters) Estimate the unknown parameters in θ0 using the design matrix X. Denote by θ^=(C^,η^) the resulting estimated parameter vector.

  2. (Construction of empirical knockoffs matrix) Construct the empirical knockoffs matrix (8) by applying the knockoff generating function in (6) to the estimated parameter θ^.

  3. (Application of knockoffs inference) Calculate knockoff statistics Wj(θ^) using data ([X,X˜(θ^)],y) and then construct S^ by applying knockoffs inference to Wj(θ^).

Intuitively, the accuracy of the estimator θ̂ in Step 1 affects the performance of our IPAD procedure. In fact, as shown later in Theorem 1 in Section 3, the consistency rate of θ̂ is indeed reflected in the asymptotic FDR control of the IPAD procedure. There are various ways to construct the estimator θ̂. A natural and popularly used one is the principal component (PC) estimator (F̂, Λ̂) studied in [2]. Specifically, we first estimate the number of factors r, denoted as r̂, using some method such as the information criterion in [3] or the approach in [1], and then set Ĉ = F̂Λ̂′, where F̂ is n^{1/2} times the matrix of eigenvectors corresponding to the top r̂ largest eigenvalues of XX′ and Λ̂ = X′F̂/n. As for the estimation of η^0, existing methods such as the method of moments can be used based on the residual matrix Ê = (ê_ij) = X − Ĉ. As a concrete example, consider the case where E has i.i.d. N(0, σ²) entries. Then the unknown population parameter is η^0 = σ², which can be estimated naturally as (np)^{-1} Σ_{i=1}^n Σ_{j=1}^p ê_ij².
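To make Steps 1 and 2 concrete, a minimal sketch in the Gaussian-error special case discussed above might look as follows. The helper name is ours, plain PCA is used for (F̂, Λ̂), and the Gaussian form of G(·;η) is an assumption of this example rather than a requirement of IPAD.

```python
import numpy as np

def empirical_knockoffs(X, r_hat, rng=None):
    """Steps 1-2 of IPAD under i.i.d. N(0, sigma^2) factor-model errors (assumed special case).

    Step 1: PC estimation of C^0 = F^0 Lambda^0' and of eta^0 = sigma^2.
    Step 2: empirical knockoffs matrix (8): X_tilde = C_hat + E_eta_hat.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    # F_hat is sqrt(n) times the eigenvectors of XX' for its r_hat largest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)
    top = np.argsort(eigvals)[::-1][:r_hat]
    F_hat = np.sqrt(n) * eigvecs[:, top]
    Lambda_hat = X.T @ F_hat / n
    C_hat = F_hat @ Lambda_hat.T
    # Method-of-moments estimate of the error variance from the factor-model residuals.
    sigma2_hat = np.mean((X - C_hat) ** 2)
    # Independent errors drawn from the plug-in distribution G(.; eta_hat).
    return C_hat + rng.normal(scale=np.sqrt(sigma2_hat), size=(n, p))
```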

In Step 3, various methods can be used to construct the knockoff statistics. For illustration purposes, we use the Lasso coefficient difference (LCD) statistic as in [15]. Specifically, with y the response vector and [X, X̃(θ̂)] the augmented design matrix, we consider the Lasso [39], which solves the following optimization problem

\widehat{\beta}^{\mathrm{aug}}(\widehat{\theta};\lambda) = \operatorname*{arg\,min}_{b \in \mathbb{R}^{2p}} \left\{ \|y - [X, \widetilde{X}(\widehat{\theta})]b\|_2^2 + \lambda\|b\|_1 \right\}, \qquad (9)

where λ ≥ 0 is the regularization parameter and ∥ · ∥m with m ≥ 1 denotes the vector ℓm-norm. Then for each variable xj, the knockoff statistic can be constructed as

W_j(\widehat{\theta};\lambda) = |\widehat{\beta}_j^{\mathrm{aug}}(\widehat{\theta};\lambda)| - |\widehat{\beta}_{p+j}^{\mathrm{aug}}(\widehat{\theta};\lambda)|, \qquad (10)

where β̂_ℓ^aug(θ̂;λ) is the ℓth component of the Lasso regression coefficient vector β̂^aug(θ̂;λ). Intuitively, the LCD knockoff statistic evaluates the relative importance of the jth original variable by comparing its Lasso coefficient β̂_j^aug(θ̂;λ) with that of its knockoff copy, β̂_{j+p}^aug(θ̂;λ). In the ideal case when the oracle knockoffs matrix X̃(θ^0) is used instead of X̃(θ̂) in (9), it is easy to verify that the LCD is a valid construction of knockoff statistics and satisfies the sign-flip property in (A.2). Consequently, the general theory in [15] can be applied to show that the FDR is controlled in finite samples. We next show that even with the empirical knockoffs matrix employed in (9), the FDR can still be asymptotically controlled after delicate technical analyses.
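For illustration, Step 3 with the LCD statistics and the knockoff+ threshold can be sketched as below. The sketch assumes a standard Lasso solver (scikit-learn's `Lasso`, whose `alpha` corresponds to λ only up to the solver's internal 1/(2n) scaling of the squared loss), and the threshold function is a generic implementation of the knockoff+ rule T_2 from [15], not the authors' code.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lcd_statistics(X, X_knockoff, y, alpha):
    """LCD knockoff statistics (10) from the augmented Lasso regression (9)."""
    X_aug = np.hstack([X, X_knockoff])
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(X_aug, y).coef_
    p = X.shape[1]
    return np.abs(coef[:p]) - np.abs(coef[p:])

def knockoff_plus_select(W, q=0.2):
    """Selected set {j : W_j >= T2} with the knockoff+ threshold T2."""
    for t in np.sort(np.abs(W[W != 0])):          # candidate thresholds
        fdp_hat = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return np.nonzero(W >= t)[0]
    return np.array([], dtype=int)                # no feasible threshold: select nothing
```

A typical call chains the two hypothetical helpers together, e.g., `W = lcd_statistics(X, empirical_knockoffs(X, r_hat), y, alpha)` followed by `S_hat = knockoff_plus_select(W, q=0.2)`.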

3. Asymptotic properties of IPAD

We now provide theoretical justifications for our IPAD procedure suggested in Section 2 with the LCD knockoff statistics Wj(θ^;λ)=wj([X,X˜(θ^)],y;λ) defined in (10). We will first present some technical conditions in Section 3.1, then prove in Section 3.2 that the FDR is asymptotically under control at desired target level q, and finally in Section 3.3 show that asymptotically IPAD has no power loss compared to the Lasso under some regularity conditions.

3.1. Technical conditions

We first introduce some notation and definitions that will be used later on. We write X ~ subG(C_x²) to denote that X is a sub-Gaussian random variable with variance proxy C_x² > 0, that is, E[X] = 0 and its tail probability satisfies P(|X| > u) ≤ 2 exp(−u²/C_x²) for each u ≥ 0. In all technical assumptions below, we use M > 1 to denote a large enough generic constant. Throughout the paper, for any vector v = (v_i) we denote by ∥v∥_1, ∥v∥_2, and ∥v∥_max the ℓ_1-norm, ℓ_2-norm, and max-norm defined as ∥v∥_1 = Σ_i |v_i|, ∥v∥_2 = (Σ_i v_i²)^{1/2}, and ∥v∥_max = max_i |v_i|, respectively. For any matrix M = (m_ij), we denote by ∥M∥_F, ∥M∥_1, ∥M∥_2, and ∥M∥_max the Frobenius norm, entrywise ℓ_1-norm, spectral norm, and entrywise ℓ_∞-norm defined as ∥M∥_F = ∥vec(M)∥_2, ∥M∥_1 = ∥vec(M)∥_1, ∥M∥_2 = sup_{v≠0} ∥Mv∥_2/∥v∥_2, and ∥M∥_max = ∥vec(M)∥_max, respectively, where vec(M) represents the vectorization of M. For a symmetric matrix M, vech(M) stands for the vectorization of its lower triangular part.

Condition 1 (Regression errors)

The model error vector ε has i.i.d. components distributed as subG(C_ε²).

Condition 2 (Latent factors)

The rows of F^0 consist of mean-zero i.i.d. random vectors f_i^0 ∈ ℝ^r such that ∥F^0∥_max ≤ M almost surely (a.s.) and ∥Σ_f∥_2 + ∥Σ_f^{-1}∥_2 ≤ M, where Σ_f := E[f_i^0 f_i^{0′}].

Condition 3 (Factor loadings)

The rows of Λ^0 consist of deterministic vectors λ_j^0 ∈ ℝ^r such that ∥Λ^0∥_max ≤ M and ∥p^{-1}Λ^{0′}Λ^0∥_2 + ∥(p^{-1}Λ^{0′}Λ^0)^{-1}∥_2 ≤ M.

Condition 4 (Factor errors)

The entries of matrix E_{η^0} are i.i.d. copies of e_{η^0} ~ subG(C_e²) with continuous distribution function G(·;η^0). For each 1 ≤ ℓ ≤ m, the ℓth element of η^0 is specified as η_ℓ^0 = h_ℓ(E[e_{η^0}], …, E[e_{η^0}^m]) with h_ℓ : ℝ^m → ℝ some locally Lipschitz continuous function in the sense that

|h_\ell(t_1,\ldots,t_m) - h_\ell(\mathbb{E}[e_{\eta^0}],\ldots,\mathbb{E}[e_{\eta^0}^m])| \le M \max_{k \in \{1,\ldots,m\}} |t_k - \mathbb{E}[e_{\eta^0}^k]|

for each t_k ∈ {t : |t − E[e_{η^0}^k]| ≤ M c_{np}} and 1 ≤ k ≤ m, where c_{np} := (p^{-1} log n)^{1/2} + (n^{-1} log p)^{1/2}. Moreover, there exists some stochastic process (e_η)_η such that

  1. for each η ∈ {η ∈ ℝ^m : ∥η − η^0∥_max ≤ M c_{np}}, the entries of E_η in (6) are identically distributed as e_η,

  2. for some sub-Gaussian random variable Z ~ subG(c_e²) with some positive constant c_e,

\sup_{\eta:\, \|\eta - \eta^0\|_{\max} \le M c_{np}} |e_\eta - e_{\eta^0}| \le M^{1/2} c_{np}^{1/2} |Z|. \qquad (11)

Condition 5 (Eigenseparation)

The r eigenvalues of p^{-1}Λ^{0′}Λ^0 Σ_f are distinct for all p.

The number of factors r is assumed to be known for developing the theory with simplification, but in practice it can be estimated consistently using methods such as the information criteria in [3] and the test statistics in [1]. The sub-Gaussian assumptions in Conditions 1 and 4 can be replaced with some other tail conditions as long as similar concentration inequalities hold. Condition 3 is standard in the analysis of factor models. Stochastic loadings can be assumed in Condition 3 under some appropriate distributional assumption, such as sub-Gaussianity, at the cost of much more tedious technical arguments. The boundedness of the eigenvalues of Σ_f in Condition 2 is standard, while the i.i.d. assumption and the boundedness of f_i^0 are stronger compared to the existing literature (e.g., [3] and [2]). However, these conditions are imposed mostly for technical simplicity. In fact, the boundedness condition on f_i^0 can be replaced with an (unbounded) sub-Gaussian or other heavier-tail assumption whenever concentration inequalities are available, at the cost of slower convergence rates and stronger sample size requirements. Our theory on FDR control is based on that in [15], which applies only to the case of i.i.d. rows of the design matrix X. This is the main reason for imposing the i.i.d. assumptions on ε_i and f_i^0 in Conditions 1 and 2. However, we conjecture that similar results can also hold in the presence of sufficiently weak serial dependence in ε_i and f_i^0. Condition 4 introduces a sub-Gaussian process e_η with respect to η. The norm in (11) can be replaced with any other norm since η is finite dimensional. In the specific case when the components of E have a Gaussian distribution so that η is a scalar parameter representing the variance, by the reflection principle for the Wiener process ([12], p. 511), e_η can be constructed as a Wiener process and inequality (11) is satisfied. For more information on sub-Gaussian processes, see, e.g., [41]. To understand why we need Condition 5, note from the proof of Lemma 3 that the PC estimator (F̂, Λ̂) is only consistent for (F^0 H, Λ^0 H^{-1}), where H = (Λ^{0′}Λ^0/p)(F^{0′}F̂/n)V^{-1} with V the r × r diagonal matrix of the r largest eigenvalues of XX′/(np). Condition 5 guarantees that F̂′F^0/n is asymptotically unique and invertible, which has been proved in [2], and this fact is used in the proof of Lemma 6 in the Appendix. This ultimately ensures that C^0 can be estimated well, which in turn guarantees that η^0 can be estimated accurately.

Recall that in the IPAD procedure, we first obtain the augmented Lasso estimator β̂^aug(θ;λ) ∈ ℝ^{2p} by regressing y on [X, X̃(θ)]. Denote by A^aug(θ;λ) = supp(β̂^aug(θ;λ)) ⊂ {1, …, 2p} the active set of the augmented Lasso regression coefficient vector. Throughout this section, we content ourselves with sparse estimates satisfying

|A^{\mathrm{aug}}(\theta;\lambda)| \le k/2 \qquad (12)

for some positive integer k which may diverge with n at an order to be specified later; see, e.g., [28] and [32] for a similar constraint and justifications therein. This can always be achieved since users have the freedom to choose the size of the Lasso model.

3.2. FDR control

To develop the theory for IPAD, we consider the PC estimator Ĉ of the realization C^0 summarized in Section 2.2. The estimator η̂ = (η̂_1, …, η̂_m)′ is constructed as η̂_ℓ = h_ℓ(E_np[ê], …, E_np[ê^m]) with h_ℓ, 1 ≤ ℓ ≤ m, introduced in Condition 4 and E_np[ê^k] = (np)^{-1} Σ_{1≤i≤n, 1≤j≤p} ê_ij^k the empirical kth moment of the ê_ij. Throughout our theoretical analysis, we fix the regularization parameter at λ = C_0 n^{-1/2} log p with C_0 some large enough constant for all the Lasso procedures. Therefore, we will drop the dependence of various quantities on λ whenever there is no confusion. For example, we will write A^aug(θ;λ) and β̂^aug(θ;λ) as A^aug(θ) and β̂^aug(θ), respectively.

Denote by U(θ) := n^{-1}[X, X̃(θ)]′[X, X̃(θ)] and v(θ) := n^{-1}[X, X̃(θ)]′y, and define T(θ) := vec(vech U(θ), v(θ)) ∈ ℝ^P with P := p(2p + 3). The following lemma shows that the statistic T(θ) plays a crucial role in our procedure.

Lemma 1

The set of variables S^ selected by Procedure 1 depends only on T(θ).

For any given θ, define the active set A(θ) := A_1^aug(θ) ∪ A_2^aug(θ) ⊂ {1, …, p}, where A_1^aug(θ) := {j : j ∈ {1, …, p} ∩ A^aug(θ)} and A_2^aug(θ) := {j − p : j ∈ {p+1, …, 2p} ∩ A^aug(θ)}. That is, A(θ) equals the support of the knockoff statistics (W_1(θ), …, W_p(θ))′ if there are no ties in the magnitudes of the components of the augmented Lasso coefficient vector β̂^aug(θ).

We next focus on the low-dimensional structure of T(θ) inherited from the augmented Lasso because, as will become clear, this is the key to controlling the FDR without sample splitting. For any subset A ⊂ {1, …, p}, define the lower-dimensional version of the vector as T_A(θ) := vec(vech U_A(θ), v_A(θ)) with U_A(θ) the principal submatrix of U(θ) formed by the columns and rows in A and v_A(θ) the subvector of v(θ) formed by the components in A. Then it is easy to see that U_A(θ) = n^{-1}[X_A, X̃_A(θ)]′[X_A, X̃_A(θ)] and v_A(θ) = n^{-1}[X_A, X̃_A(θ)]′y. Motivated by Lemma 1, we define a family of mappings indexed by A that describes the selection algorithm of Procedure 1 applied to the data set ([X_A, X̃_A(θ)], y) that forms T_A(θ). Formally, define a mapping S_A : ℝ^{|A|(2|A|+3)} → 2^A as t_A ↦ S_A(t_A) for given T_A(θ) = t_A, where 2^A refers to the power set of A. That is, S_A(t_A) represents the outcome of first restricting ourselves to the smaller set of variables A and then applying IPAD to T_A(θ) = t_A to further select variables from the set A.

Lemma 2

Under Conditions 1–4, for any subset A ⊇ A(θ) we have S_{{1,…,p}}(T(θ)) = S_A(T_A(θ)).

When restricting on set A, we can apply Procedure 1 to a lower-dimensional data set ([XA,X˜A(θ)],y) that forms TA(θ) to further select variables from A. The previous two lemmas ensure that this gives us a subset of A that is identical to S{1,⋯,p}(T(θ)). Note that the lower-dimensional problem based on TA(θ) can be easier compared to the original one. We also would like to emphasize that the dimensionality reduction to a smaller model A is only for assisting the theoretical analysis and our Procedure 1 does not need any knowledge of such set A.

It is convenient to define t_0 = E[T(θ^0)] ∈ ℝ^P. Denote by

\mathcal{I} := \{t \in \mathbb{R}^P : \|t - t_0\|_{\max} \le a_{np} := C_1(k^{1/2} + s^{3/2})\widetilde{c}_{np}\}, \qquad (13)

where C_1 is some positive constant and c̃_{np} = p^{-1/2} log n + n^{-1/2} log p. For any subset A ⊂ {1, …, p}, let I_A be the subspace of I obtained by keeping only the coordinates corresponding to E[T_A(θ^0)]. Thus I_A ⊂ ℝ^{|A|(2|A|+3)}. In addition to Conditions 1–5, we need an assumption on the algorithmic stability of Procedure 1.

Condition 6 (Algorithmic stability)

For any subset A ⊂ {1, …, p} that satisfies |A| ≤ k, there exists a positive sequence ρ_{np} → 0 as n ∧ p → ∞ such that

\sup_{|A| \le k}\ \sup_{t_1, t_2 \in \mathcal{I}_A} \frac{|S_A(t_2)\, \Delta\, S_A(t_1)|}{|S_A(t_1)| \vee |S_A(t_2)|} = O(\rho_{np}),

where Δ stands for the symmetric difference between two sets.

Intuitively the above condition assumes that the knockoffs procedure is stable with respect to a small perturbation to the input t in any lower-dimensional subspace IA. Under these regularity conditions, the asymptotic FDR control of our IPAD procedure can be established.

Theorem 1 (Robust FDR control)

Assume that Conditions 1–6 hold and fix an arbitrary positive constant ν. If (s, k, n, p) satisfies s ≤ k ≤ n ∧ p, c_{np} ≤ c/[r²M²C(ν + 2)]^{1/2}, and (k^{1/2} + s^{3/2})c̃_{np} → 0 as n ∧ p → ∞, with c and C some positive constants defined in Lemma 7 in the Appendix, then the set of variables Ŝ obtained by Procedure 1 (IPAD) with the LCD knockoff statistics controls the FDR (3) at level no larger than q + O(ρ_{np} + n^{-ν} + p^{-ν}).

Recall that by definition, the FDR is a function of T(θ̂) and can be written as E[FDP(T(θ̂))], while the FDR computed with the oracle knockoffs, E[FDP(T(θ^0))], is perfectly controlled at level no larger than q. This observation motivates us to first establish the asymptotic equivalence of T(θ̂) and T(θ^0) with large probability. A natural idea is then to show that E[FDP(T(θ̂))] converges to E[FDP(T(θ^0))], which turns out to be highly nontrivial because of the discontinuity of FDP(·) (the convergence would be straightforward via the Portmanteau lemma if FDP(·) were continuous). Condition 6 above provides a remedy to this issue by imposing the algorithmic stability assumption.

3.3. Power analysis

We have established the asymptotic FDR control for our IPAD procedure in Section 3.2. We now look at the other side of the coin – the power (5). Recall that in IPAD, we apply the knockoffs inference procedure to the knockoff statistics LCD, which are constructed using the augmented Lasso in (9). Therefore the final set of variables selected by IPAD is a subset of variables picked by the augmented Lasso. For this reason, the power of IPAD is always upper bounded by that of Lasso. We will show in this section that there is in fact no power loss relative to the augmented Lasso in the asymptotic sense.

Condition 7 (Signal strength I)

For any subset A ⊂ S_0 that satisfies |A|/s > 1 − γ for some γ ∈ (0, 1], it holds that ∥β_A∥_1 > b_{np} s n^{-1/2} log p for some positive sequence b_{np} → ∞.

Condition 8 (Signal strength II)

There exists some constant C_2 ∈ (2(qs)^{-1}, 1) such that |S_2| ≥ C_2 s, where S_2 = {j : |β_j| ≥ (s/n)^{1/2} log p}.

Condition 7 requires that the overall signal not be too weak, but it is weaker than the conventional beta-min condition min_{j∈S_0} |β_j| ≳ n^{-1/2} log p. Under Condition 8, we can show that |Ŝ| ≥ C_2 s with probability at least 1 − O(p^{-ν} + n^{-ν}) using techniques similar to those for Lemma 6 in [26]. The intuition is that, given s → ∞, a variable selection procedure with high power should select at least a reasonably large number of variables. The result |Ŝ| ≥ C_2 s will be used to derive the asymptotic order of the threshold T, which is in turn crucial for establishing the theorem below on power.

Theorem 2 (Power guarantee)

Assume that Conditions 1–5 and 7–8 hold and fix an arbitrary positive constant ν. If (s, k, n, p) satisfies 2s ≤ k ≤ n ∧ p, c_{np} ≤ c/[r²M²C(ν + 2)]^{1/2}, and s k^{1/2} c̃_{np} → 0 as n ∧ p → ∞, with c and C some positive constants defined in Lemma 7, then both the Lasso procedure based on (X, y) and our IPAD procedure (Procedure 1) have power bounded from below by γ − o(1) as n ∧ p → ∞. In particular, if γ = 1, then IPAD has asymptotically no power loss compared to the Lasso.

4. Simulation studies

We have shown in Section 3 that IPAD can asymptotically control the FDR in the high-dimensional setting and that there can be no power loss in applying the procedure. We next numerically investigate the finite-sample performance of IPAD using synthetic data sets. We compare IPAD with the knockoff filter in [4] (BCKnockoff) and the high-dimensional knockoff filter in [5] (HD-BCKnockoff). In what follows, we first explain in detail the model setups and simulation settings, then discuss the implementation of the aforementioned methods, and finally summarize the comparison results.

4.1. Simulation designs and settings

In all simulations, the design matrix X ∈ ℝn×p is generated from the factor model

X = F^0(\Lambda^0)' + r\theta E = C^0 + r\theta E, \qquad (14)

where F^0 = (f_1^0, …, f_n^0)′ ∈ ℝ^{n×r} is the matrix of latent factors, Λ^0 = (λ_1^0, …, λ_p^0)′ ∈ ℝ^{p×r} is the matrix of factor loadings, E ∈ ℝ^{n×p} is the matrix of model errors, and θ is a constant controlling the signal-to-noise ratio. The factor r is included to single out the effect of the number of factors when calculating the signal-to-noise ratio in factor model (14). We then rescale each column of X to have ℓ_2-norm one and simulate the response vector y = (y_1, …, y_n)′ from the following model

y_i = f(x_i) + c\varepsilon_i, \quad i = 1, \ldots, n, \qquad (15)

where f : ℝ^p → ℝ is the link function, which can be linear or nonlinear, c > 0 is a constant controlling the signal-to-noise ratio, and ε = (ε_1, …, ε_n)′ is the vector of model errors. We next explain the four different designs of our simulation studies.

4.1.1. Design 1: linear model with normal factor design matrix

The elements of F^0, Λ^0, E, and ε are drawn independently from N(0, 1). The link function takes a linear form, that is, y = Xβ + cε, where the coefficient vector β = (β_1, …, β_p)′ ∈ ℝ^p is generated by first choosing s random locations for the true signals and then setting β_j at each of these locations to be either A or −A at random with A some positive value. The remaining p − s components of β are set to zero.
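As a concrete illustration, a minimal data-generating sketch for this design is given below; it follows (14)–(15) exactly as printed (in particular the rθE noise term) with the parameter values listed in Section 4.1.5, and the function name is purely illustrative.

```python
import numpy as np

def design1_data(n=2000, p=2000, r=3, s=50, A=4.0, c=0.2, theta=1.0, rng=None):
    """One replication of Design 1: Gaussian factor design matrix and a sparse linear model."""
    rng = np.random.default_rng(rng)
    F = rng.standard_normal((n, r))                 # latent factors
    Lam = rng.standard_normal((p, r))               # factor loadings
    E = rng.standard_normal((n, p))                 # factor-model errors
    X = F @ Lam.T + r * theta * E                   # factor model (14) as printed
    X = X / np.linalg.norm(X, axis=0)               # rescale each column to unit l2-norm
    beta = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)  # random locations of the true signals
    beta[support] = rng.choice([A, -A], size=s)     # random signs with magnitude A
    y = X @ beta + c * rng.standard_normal(n)       # linear link with noise level c
    return X, y, support
```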

4.1.2. Design 2: linear model with fat-tail factor matrix and serial dependence

The elements of E are generated as

e_{ij} = \sqrt{(\nu - 2)/\chi_{\nu,j}^2}\; u_{ij}, \qquad (16)

where u_ij are i.i.d. N(0, 1) for all i = 1, …, n and j = 1, …, p, and χ²_{ν,j}, j = 1, …, p, are i.i.d. chi-square random variables with ν = 8 degrees of freedom. The rest of the design is the same as in Design 1. It is worth mentioning that in this case the entries of matrix E have a fat-tailed distribution with serial dependence within each column because of the common factor χ²_{ν,j}. This design is used to check the robustness of the IPAD method with respect to serial dependence and fat-tailed distributions in E.
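A short sketch of this error-generating mechanism is given below; it assumes that (16) reads e_ij = sqrt((ν−2)/χ²_{ν,j}) u_ij, which yields unit-variance, fat-tailed entries sharing one chi-square draw per column.

```python
import numpy as np

def design2_errors(n, p, nu=8, rng=None):
    """Fat-tailed errors with within-column dependence through a shared chi-square factor."""
    rng = np.random.default_rng(rng)
    chi2 = rng.chisquare(df=nu, size=p)       # one common draw per column j
    u = rng.standard_normal((n, p))
    return np.sqrt((nu - 2) / chi2) * u       # scaled-t type entries, broadcast over rows
```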

4.1.3. Design 3: linear model with misspecified design matrix

To evaluate the robustness of the IPAD procedure to misspecification of the factor model structure (14), we set Λ^0 = 0 so that X has no factor structure, and we simulate the rows of matrix E independently from N(0, Σ) with Σ = (σ_ij) and σ_ij = ρ^{|i−j|} for 1 ≤ i, j ≤ p. The remaining design is the same as in Design 1. It is seen that our assumption on the independence of the entries of E is violated. This design is used to test the robustness of IPAD to misspecification of the factor model structure of X.

4.1.4. Design 4: nonlinear model with normal factor design matrix

Our last design is used to evaluate the performance of IPAD method when the link function f is nonlinear. To be more specific, we assume the following nonlinear model between the response and covariates

y = \sin(X\beta)\exp(X\beta) + c\varepsilon,

where the coefficient vector β, design matrix X, and model error ε are generated similarly as in Design 1.

4.1.5. Simulation settings

The target FDR level is set to q = 0.2 in all simulations. For Designs 1 and 2, we set n = 2000, p = 2000, A = 4, s = 50, c = 0.2, r = 3, and θ = 1. To evaluate the sensitivity of our method to the dimensionality p and the model sparsity s, we also explore the settings of p = 1000, 3000 and s = 100, 150. In Design 3, we set r = 0 and ρ = 0, 0.5. In Design 4, since the model is nonlinear, we use the nonparametric method of random forests [14] to fit the model and consider the lower-dimensional settings of p = 50, 250, and 500. We also decrease the number of observations to n = 1000 and the number of true variables to s = 10. Moreover, we set θ = 1, 2 and c = 0.1, 0.2, 0.3 to examine the effects of the signal-to-noise ratio on the performance of the IPAD procedure in Design 4. The implementation details for the estimation procedure of IPAD are provided in Section E.1 of the Supplementary Material.

4.2. Simulation results

For each method, we use 100 simulated data sets to calculate its empirical FDR and power, which are the average FDP and TDP (true discovery proportion as in (5)) over 100 repetitions, respectively. Two different thresholds, knockoff and knockoff+ (T1 and T2 in Result 1, respectively), are used in the knockoffs inference implementation. It is worth mentioning that as shown in [15] and summarized in Result 1, knockoff+ controls FDR (3) exactly while knockoff controls only the modified FDR (4).

Tables 1 and 2 summarize the results from Designs 1 and 2, respectively. As shown in Table 1, all approaches control the empirical FDR at the target level (q = 0.2), and knockoff+, which is more conservative, reduces power only negligibly. It is worth mentioning that even for Design 2, in which the design matrix X is drawn from a fat-tailed distribution with serial dependence, the FDR remains under control with a decent level of power. This suggests that the no-serial-correlation assumption in our theoretical analysis could be merely technical. Compared to the results of BCKnockoff and HD-BCKnockoff, we see that using the extra information from the factor structure in constructing knockoff variables helps with both FDR and power. Table 2 also shows the effects of model sparsity on the performance of the various approaches. It can be seen that when the number of true signals increases from 50 to 150, the FDR remains under control and the empirical power of IPAD stays steady.

Table 1:

Simulation results for Designs 1 and 2 of Section 4.1 with different values of dimensionality p

Design 1
Design 2
FDR Power FDR+ Power+ R2 FDR Power FDR+ Power+ R2

p = 1000
IPAD 0.195 0.991 0.180 0.990 0.659 0.199 0.961 0.180 0.960 0.652
BCKnockoff 0.207 0.942 0.192 0.938 0.659 0.172 0.887 0.152 0.885 0.653

p = 2000
IPAD 0.194 0.979 0.179 0.979 0.649 0.199 0.935 0.183 0.933 0.656
HD-BCKnockoff 0.142 0.706 0.127 0.691 0.649 0.136 0.607 0.113 0.581 0.644

p = 3000
IPAD 0.191 0.964 0.176 0.963 0.652 0.188 0.913 0.171 0.911 0.658
HD-BCKnockoff 0.172 0.668 0.149 0.658 0.652 0.125 0.559 0.099 0.524 0.651

Note that FDR+ and Power+ are the values of FDR and Power corresponding to the knockoff+ threshold T2.

Table 2:

Simulation results for Designs 1 and 2 of Section 4.1 with different sparsity level s

Design 1
Design 2
FDR Power FDR+ Power+ R2 FDR Power FDR+ Power+ R2

s = 50
IPAD 0.194 0.979 0.179 0.979 0.649 0.199 0.935 0.183 0.933 0.656
HD-BCKnockoff 0.142 0.706 0.127 0.691 0.649 0.136 0.607 0.113 0.581 0.644

s = 100
IPAD 0.191 0.978 0.183 0.977 0.783 0.181 0.937 0.174 0.936 0.789
HD-BCKnockoff 0.152 0.703 0.140 0.698 0.787 0.106 0.583 0.097 0.573 0.778

s = 150
IPAD 0.183 0.973 0.178 0.972 0.842 0.188 0.935 0.182 0.935 0.848
HD-BCKnockoff 0.139 0.660 0.130 0.654 0.858 0.115 0.578 0.106 0.570 0.843

Table 3 is devoted to Design 3, where the rows of matrix X are generated independently from a multivariate normal distribution with an AR(1) correlation structure. This is a setting where the factor model structure in X is misspecified. Since BCKnockoff and HD-BCKnockoff make no use of the factor structure in generating knockoff variables, both methods control the FDR at the target level in both the low- and high-dimensional examples. The IPAD-based methods have empirical FDR slightly above the target level, which may be caused by the misspecification of the factor structure. On the other hand, the IPAD-based approaches have much higher empirical power than the comparison methods.

Table 3:

Simulation results for Design 3 of Section 4.1

ρ = 0
ρ = 0.5
FDR Power FDR+ Power+ R2 FDR Power FDR+ Power+ R2

p = 1000
IPAD 0.204 0.995 0.189 0.995 0.444 0.226 0.984 0.216 0.984 0.446
BCKnockoff 0.188 0.919 0.172 0.917 0.444 0.137 0.827 0.117 0.821 0.445

p = 2000
IPAD 0.203 0.993 0.189 0.993 0.447 0.220 0.982 0.202 0.980 0.445
HD-BCKnockoff 0.151 0.630 0.126 0.603 0.449 0.115 0.522 0.090 0.467 0.442

p = 3000
IPAD 0.225 0.988 0.205 0.987 0.445 0.219 0.979 0.206 0.978 0.443
HD-BCKnockoff 0.150 0.589 0.126 0.560 0.446 0.092 0.439 0.064 0.381 0.447

Table 4 corresponds to Design 4, in which the response y is related to X nonlinearly. Since BCKnockoff and HD-BCKnockoff are designed for linear models, only the results from the IPAD method are reported. It can be seen from Table 4 that the IPAD approach can control the FDR with reasonably high power even in the nonlinear setting. We also observe that in the nonlinear setting the power of IPAD deteriorates faster as the dimensionality p increases compared to the linear setting, due to the use of a fully nonparametric approach for estimation.

Table 4:

Simulation results for Design 4 of Section 4.1

θ = 1
θ = 2
FDR Power FDR+ Power+ R2 FDR Power FDR+ Power+ R2

p = 50
c = 0.1 0.109 0.839 0.081 0.720 0.707 0.110 0.943 0.061 0.858 0.707
c = 0.2 0.137 0.847 0.068 0.726 0.547 0.097 0.920 0.061 0.837 0.547
c = 0.3 0.137 0.765 0.091 0.582 0.451 0.123 0.907 0.076 0.774 0.451

p = 250
c = 0.1 0.189 0.740 0.104 0.504 0.702 0.174 0.876 0.139 0.788 0.702
c = 0.2 0.218 0.666 0.131 0.522 0.552 0.209 0.831 0.118 0.660 0.552
c = 0.3 0.200 0.569 0.101 0.361 0.451 0.224 0.766 0.141 0.599 0.451

p = 500
c = 0.1 0.243 0.661 0.169 0.497 0.702 0.223 0.831 0.173 0.740 0.702
c = 0.2 0.204 0.507 0.111 0.266 0.543 0.216 0.749 0.126 0.594 0.543
c = 0.3 0.247 0.478 0.128 0.299 0.451 0.241 0.691 0.156 0.550 0.451

5. Empirical analysis

Our simulation results in Section 4 suggest that IPAD is a powerful approach with asymptotic FDR control. We further examine the application of IPAD to the quarterly data on 109 macroeconomic variables in the United States from the third quarter of 1960 (1960Q3) to the fourth quarter of 2008 (2008Q4) discussed in [37]. These variables are transformed by taking logarithms and/or differencing following [37]. Our real data analysis consists of two parts. In the first part, we focus on the performance of the IPAD method in terms of empirical FDR and power; to save space, the numerical results for this real-data-based simulation study are presented in Section E.2 of the Supplementary Material. In the second part, we evaluate the forecasting performance of the IPAD method.

We now apply the IPAD approach to this macroeconomic data set for forecasting. One-step-ahead prediction is conducted using a rolling window of size 120. More specifically, one of the 109 variables is chosen as the response and the remaining 108 variables are treated as predictors. For each quarter between 1990Q3 and 2008Q4, we use the previous 120 periods for model fitting and then conduct one-step-ahead prediction based on the fitted model. We compare IPAD with the competing methods of autoregression of order one (AR(1)), factor-augmented AR(1) (FAR), and Lasso, where each method is implemented in the same way as IPAD for one-step-ahead prediction; see Section E.3 of the Supplementary Material for the implementation details of all the methods.
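The rolling-window evaluation itself can be summarized by the generic loop below. The exact alignment of predictors and response across quarters and the rule used to turn IPAD's selected variables into a forecast follow Section E.3 of the Supplementary Material and are not reproduced here; they are abstracted into the user-supplied fit_predict argument, so this sketch illustrates the evaluation scheme rather than the authors' implementation.

```python
import numpy as np

def rolling_one_step_rmse(Z, target_col, fit_predict, window=120):
    """Rolling-window one-step-ahead forecasts of column target_col of the data matrix Z.

    fit_predict(X_train, y_train, x_new) can be any forecasting rule (AR(1), FAR, Lasso,
    or IPAD followed by refitting on the selected variables).
    """
    forecasts, actuals = [], []
    for t in range(window, Z.shape[0]):
        train = Z[t - window:t]
        y_train = train[:, target_col]
        X_train = np.delete(train, target_col, axis=1)
        x_new = np.delete(Z[t], target_col)        # simplifying alignment assumption
        forecasts.append(fit_predict(X_train, y_train, x_new))
        actuals.append(Z[t, target_col])
    errors = np.array(actuals) - np.array(forecasts)
    return np.sqrt(np.mean(errors ** 2))           # RMSE as reported in Table 5
```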

The number of factors r̂ is chosen by the PC_p1 criterion in [3]. For the Lasso and IPAD, the regularization parameter λ is selected with tenfold cross-validation. Table 5 shows the root mean squared prediction error (RMSE) of these methods. As can be seen, the RMSE of IPAD is very close to those of the comparison methods. To statistically compare the relative prediction accuracy of IPAD versus the other approaches, we use the Diebold–Mariano test [21], where the squared one-step-ahead prediction error is used as the loss function. Table 6 reports the test results, which indicate that the one-step-ahead prediction accuracy of IPAD is comparable to that of the other approaches.

Table 5:

Root mean-squared error of one-period ahead forecast of various macroeconomic variables

AR FAR Lasso IPAD

RGDP 2.245 1.929 2.070 2.106
CPI-ALL 1.526 1.552 1.579 1.571
Imports 7.549 5.871 6.595 6.993
IP: cons dble 9.683 8.353 8.424 9.175
Emp: TTU 1.112 0.989 1.167 1.100
U: mean duration 0.573 0.487 0.502 0.494
HStarts: South 0.074 0.071 0.076 0.074
NAPM new ordrs 4.800 4.378 4.659 4.673
PCED-NDUR-ENERGY 31.927 32.121 33.546 32.164
Emp. Hours 2.102 1.899 2.080 1.944
FedFunds 0.421 0.396 0.406 0.392
Cons credit 2.573 2.537 2.648 2.580
EX rate: Canada 10.132 10.139 10.122 10.113
DJIA 23.117 23.997 24.585 23.398
Consumer expect 6.496 6.888 6.681 6.661

Table 6:

Diebold–Mariano test for comparing prediction accuracy of IPAD against other procedures

IPAD vs. AR IPAD vs. FAR IPAD vs. Lasso

RGDP −0.780 1.160 0.462
CPI-ALL 0.521 0.394 −0.218
Imports −0.976 2.631** 1.464
IP: cons dble −1.026 1.567 2.487*
Emp: TTU −0.140 1.692 −1.845
U: mean duration −3.383*** 0.672 −0.505
HStarts: South 0.096 0.821 −0.766
NAPM new ordrs −0.517 1.814 0.076
PCED-NDUR-ENERGY 0.753 0.049 −1.759
Emp. Hours −1.200 0.297 −2.063*
FedFunds −0.971 −0.134 −0.625
Cons credit 0.207 0.359 −0.661
EX rate: Canada −0.466 −0.138 −0.037
DJIA 0.585 −0.959 −1.428
Consumer expect 1.212 −1.038 −0.277

It is worth mentioning that one main advantage of IPAD is its interpretability and stability. Using IPAD for forecasting, we not only enjoy the same level of accuracy as the other methods but also obtain information on variable importance with stability. Recall that for each one-step-ahead prediction, we apply IPAD 100 times and obtain 100 sets of selected variables, so we can calculate the selection frequency of each variable in each one-step-ahead prediction. Figure 1 depicts the frequencies of the top five selected variables in predicting real GDP growth before and after year 2000, where variable importance is ranked according to the frequencies aggregated over the entire period before or after 2000. We experimented with different cutoff years around 2000, and the top five ranked variables stay the same, so only the results corresponding to cutoff year 2000 are reported. Changes in the index of help-wanted advertising in newspapers, percentage changes in real personal consumption of services, and percentage changes in real gross private domestic investment in the residential sector were the top three important variables in predicting real GDP growth during the whole period. It is interesting to see that the percentage change in the residential price index was among the top five important variables in predicting GDP growth during the 1990s, and starting from year 2000 it was replaced by changes in the index of consumer expectations about the stability of the economy. Moreover, the percentage change in industrial production of fuels was of great importance for predicting real GDP growth during some periods but not others.
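The selection frequencies plotted in Figure 1 are simple to compute once the repeated IPAD runs are available; a sketch (with an illustrative helper name) is:

```python
import numpy as np

def selection_frequencies(selected_sets, p):
    """Fraction of repeated IPAD runs in which each of the p variables is selected."""
    counts = np.zeros(p)
    for s in selected_sets:    # one selected set per IPAD run
        for j in s:
            counts[j] += 1
    return counts / len(selected_sets)
```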

Figure 1:


Frequencies of top selected variables in predicting real GDP growth. The set of selected variables are index of help-wanted advertising in newspapers (Help wanted indx), real personal consumption expenditures - services (Cons-Serv), real gross private domestic investment - residential (Res.Inv), residential price index (PFI-RES), industrial production index - fuels (IP:fuels), and University of Michigan index of consumer expectations (Consumer expect).

As a comparison, it is very difficult to interpret the results of FAR. As for the Lasso-based method, there is no theoretical guarantee on FDR control and, in addition, the Lasso usually yields models of much larger size. For instance, in predicting real GDP growth, IPAD selects 5.42 macroeconomic variables on average while the Lasso selects 13.32 variables on average. To summarize, our real data analysis indicates that IPAD is a practical approach for controlling the FDR with competitive prediction performance and high interpretability and stability.

6. Discussions

We have suggested in this paper a new procedure, IPAD, for feature selection in high-dimensional linear models that achieves asymptotic FDR control while retaining high power. Our model setting involves a latent factor model that is motivated by applications in economics and finance. Our method falls into the general model-X knockoffs framework in [15], but allows the covariate distribution to be unknown when constructing the knockoff variables. With the LCD knockoff statistics, we have shown that the FDR of IPAD can be asymptotically controlled while the power can be asymptotically the same as that of the Lasso. Our simulation study and empirical analysis also suggest that IPAD has highly competitive performance compared to some widely used forecasting methods such as the Lasso and FAR, but with much higher interpretability and stability.

Our work has focused on the scenario of static models. It would be interesting to extend the IPAD procedure to high-dimensional dynamic models with time series data. It is also interesting to consider nonlinear models and more flexible machine learning methods for forecasting as well as more refined factor model structures on the covariates for the knockoffs inference with IPAD, and develop theoretical guarantees for the IPAD framework in these more general model settings. These extensions are beyond the scope of the current paper and are interesting topics for future research.

Acknowledgments

The author names are alphabetically ordered. This work was supported by NIH Grant 1R01GM131407-01, NSF CAREER Award DMS-1150318, a grant from the Simons Foundation, Adobe Data Science Research Award, and a Grant-in-Aid for JSPS Overseas Research Fellowship 29-60. Most of this work was completed while Uematsu visited USC as a JSPS Overseas Research Fellow and Postdoctoral Scholar. The authors sincerely thank the Joint Editor, Associate Editor, and referees for their valuable comments that helped improve the paper substantially.

A. Proofs of main results

We provide the proofs of Theorems 1–2 in this appendix. The proofs of Proposition 1 and Lemmas 1–2 and additional technical details are included in the Supplementary Material.

To ease the technical presentation, let us introduce some notation. We denote by ≲ an inequality that holds up to some positive constant factor. Restricting the columns of X and X̃(θ̂) to the variables in an index set A with |A| ≤ k, we obtain the submatrices X_A and X̃_A(θ̂), respectively. Moreover, we define T_A(θ̂) := vec(vech U_A(θ̂), v_A(θ̂)) ∈ ℝ^{|A|(2|A|+3)} with U_A(θ̂) the principal submatrix of U(θ̂) formed by the columns and rows in set A, and v_A(θ̂) the subvector of v(θ̂) formed by the components in set A. Then it is easy to see that U_A(θ̂) = n^{-1}[X_A, X̃_A(θ̂)]′[X_A, X̃_A(θ̂)] and v_A(θ̂) = n^{-1}[X_A, X̃_A(θ̂)]′y. For the oracle factor loading matrix Λ^0, with a slight abuse of notation we use Λ_A^0 to denote its rows restricted to the variables in A. Recall that ν > 0 is a fixed positive constant, c_{np} = (p^{-1} log n)^{1/2} + (n^{-1} log p)^{1/2}, and c̃_{np} = p^{-1/2} log n + n^{-1/2} log p. We define π_{np} = n^{-ν} + p^{-ν}. Since λ is fixed at C_0 n^{-1/2} log p, in all the proofs we will drop the dependence of various quantities on λ whenever there is no confusion.

A.1. Proof of Theorem 1

Recall that for a given θ, A(θ) is the support of the knockoff statistics (W_1(θ), …, W_p(θ))′. Define the set Â(θ̂) := A(θ̂) ∪ A(θ^0). It follows from (12) that the cardinality of Â(θ̂) is bounded by k. Hereafter we write Â(θ̂) as Â for notational simplicity.

By Lemmas 1–2 and the definition of the FDP, we know that S_{{1,…,p}}(T(θ̂)) = S_Â(T_Â(θ̂)) and thus the resulting FDRs are the same. Therefore, we can restrict ourselves to the smaller model Â when studying the FDR of IPAD. The same arguments also hold for the oracle knockoffs; that is, the FDR of IPAD applied to T(θ^0) is the same as that applied to T_Â(θ^0). Note that all the FDRs we discuss here are with respect to the full model {1, …, p}. For this reason, in what follows we will abuse the notation and use FDR_Â(T_Â(θ̂)) and FDR_Â(T_Â(θ^0)) to denote the FDR of IPAD based on T_Â(θ̂) and T_Â(θ^0), respectively. We want to emphasize that although we put a subscript Â on these FDRs, their values are still deterministic as argued above. Summarizing these facts, we obtain

\mathrm{FDR}_{\widehat{A}}(T_{\widehat{A}}(\widehat{\theta})) = \mathrm{FDR}_{\{1,\ldots,p\}}(T(\widehat{\theta})), \qquad \mathrm{FDR}_{\widehat{A}}(T_{\widehat{A}}(\theta^0)) = \mathrm{FDR}_{\{1,\ldots,p\}}(T(\theta^0)).

Meanwhile, by construction X̃(θ^0) satisfies the two properties in Definition 1 and is thus a valid model-X knockoffs matrix. Therefore, for any value of the regularization parameter, the LCD statistics W_j(θ^0) based on ([X, X̃(θ^0)], y) together with Result 1 ensure exact FDR control at the target level q ∈ (0, 1). It follows that the FDR of IPAD applied to T(θ^0) is controlled at the target level q.

Combining the arguments in the previous two paragraphs, we deduce

\mathrm{FDR}_{\widehat{A}}(T_{\widehat{A}}(\theta^0)) = \mathrm{FDR}_{\{1,\ldots,p\}}(T(\theta^0)) \le q.

Thus the desired results follow automatically if we can prove that FDRA^(TA^(θ^)) is asymptotically close to FDRA^(TA^(θ0)). We next proceed to prove it.

Recall the definitions of I and IA as in (13). Define the event

\mathcal{E}_{np} = \{T_{\widehat{A}}(\widehat{\theta}) \in \mathcal{I}_{\widehat{A}}\} \cap \{T_{\widehat{A}}(\theta^0) \in \mathcal{I}_{\widehat{A}}\}.

Lemma 3 in Section C.4 establishes that θ̂ ∈ Θ_{np} with probability at least 1 − O(π_{np}) and that θ^0 ∈ Θ_{np}. Hence, Lemma 4 in Section C.5 guarantees that

\mathbb{P}(\mathcal{E}_{np}^c) \le 2\, \mathbb{P}\Big( \sup_{|A| \le k,\, \theta \in \Theta_{np}} \|T_A(\theta) - \mathbb{E}[T_A(\theta^0)]\|_{\max} > a_{np} \Big) = O(\pi_{np}), \qquad (17)

where anp=C1(k1/2+s3/2)c˜np for some constant C1 > 0.

For a given deterministic set A ⊂ {1, …, p}, let FDP_A(·) be the FDP function corresponding to the selection mapping S_A(·). By the definition of the FDP function, we have for any t_1, t_2 ∈ ℝ^{|A|(2|A|+3)},

\mathrm{FDP}_A(t_2) - \mathrm{FDP}_A(t_1) = \frac{|S_1 \cap S_A(t_2)|}{|S_A(t_2)|} - \frac{|S_1 \cap S_A(t_1)|}{|S_A(t_1)|} = \frac{|S_1 \cap S_A(t_2)|\,(|S_A(t_1)| - |S_A(t_2)|)}{|S_A(t_1)|\,|S_A(t_2)|} + \frac{|S_1 \cap S_A(t_2)| - |S_1 \cap S_A(t_1)|}{|S_A(t_1)|}.

Further, note that

|S_1 \cap S_A(t_2)|/|S_A(t_2)| \le 1, \quad \big||S_A(t_2)| - |S_A(t_1)|\big| \le |S_A(t_2)\, \Delta\, S_A(t_1)|, \quad \big||S_1 \cap S_A(t_2)| - |S_1 \cap S_A(t_1)|\big| \le |\{S_A(t_2)\, \Delta\, S_A(t_1)\} \cap S_1|.

Combining the results above yields

|\mathrm{FDP}_A(t_1) - \mathrm{FDP}_A(t_2)| \le \frac{\big||S_A(t_1)| - |S_A(t_2)|\big|}{|S_A(t_1)|} + \frac{|\{S_A(t_2)\, \Delta\, S_A(t_1)\} \cap S_1|}{|S_A(t_1)|} \le \frac{2\,|S_A(t_2)\, \Delta\, S_A(t_1)|}{|S_A(t_1)|}.

Similarly we have

|\mathrm{FDP}_A(t_1) - \mathrm{FDP}_A(t_2)| \le \frac{2\,|S_A(t_2)\, \Delta\, S_A(t_1)|}{|S_A(t_2)|}.

Thus it holds that

\sup_{|A| \le k}\ \sup_{t_1, t_2 \in \mathcal{I}_A} |\mathrm{FDP}_A(t_1) - \mathrm{FDP}_A(t_2)| \le \sup_{|A| \le k}\ \sup_{t_1, t_2 \in \mathcal{I}_A} \frac{2\,|S_A(t_2)\, \Delta\, S_A(t_1)|}{|S_A(t_1)| \vee |S_A(t_2)|} = O(\rho_{np}), \qquad (18)

where the last two steps are due to Condition 6. Therefore, (17) and (18) together with the fact that FDP(·) ∈ [0, 1] entail that

|\mathrm{FDR}_{\widehat{A}}(T_{\widehat{A}}(\widehat{\theta})) - \mathrm{FDR}_{\widehat{A}}(T_{\widehat{A}}(\theta^0))| = |\mathbb{E}\,\mathrm{FDP}_{\widehat{A}}(T_{\widehat{A}}(\widehat{\theta})) - \mathbb{E}\,\mathrm{FDP}_{\widehat{A}}(T_{\widehat{A}}(\theta^0))| \le \mathbb{E}\,|\mathrm{FDP}_{\widehat{A}}(T_{\widehat{A}}(\widehat{\theta})) - \mathrm{FDP}_{\widehat{A}}(T_{\widehat{A}}(\theta^0))|
\le \mathbb{E}\big[|\mathrm{FDP}_{\widehat{A}}(T_{\widehat{A}}(\widehat{\theta})) - \mathrm{FDP}_{\widehat{A}}(T_{\widehat{A}}(\theta^0))| \,\big|\, \mathcal{E}_{np}\big]\, \mathbb{P}(\mathcal{E}_{np}) + 2\,\mathbb{P}(\mathcal{E}_{np}^c)
\le \sup_{|A| \le k}\ \sup_{t_1, t_2 \in \mathcal{I}_A} |\mathrm{FDP}_A(t_1) - \mathrm{FDP}_A(t_2)| + O(\pi_{np}) = O(\rho_{np}) + O(\pi_{np}).

This completes the proof of Theorem 1.

A.2. Proof of Theorem 2

By the definition of the LCD statistics, we construct the augmented Lasso estimator for each θ ∈ Θnp, which is defined as

\[
\hat\beta^{\mathrm{aug}}(\theta)=\underset{b\in\mathbb{R}^{2p}}{\arg\min}\ \|y-[X,\tilde X(\theta)]b\|_2^2+\lambda\|b\|_1.\tag{19}
\]

The Lasso estimator from regressing $y$ on $X$ alone is given by

\[
\hat\beta=\underset{b\in\mathbb{R}^{p}}{\arg\min}\ \|y-Xb\|_2^2+\lambda\|b\|_1,\tag{20}
\]

where $\lambda=O(n^{-1/2}\log p)$. According to the true model $S_0$, the underlying true parameter vector corresponding to $\hat\beta^{\mathrm{aug}}(\theta)$ is $\beta^{\mathrm{aug}}:=(\beta',0')'\in\mathbb{R}^{2p}$ with $\beta=(\beta_{S_0}',0')'\in\mathbb{R}^{p}$ and $|S_0|=s$ for any $\theta\in\Theta_{np}$. By Lemma 5 in Section C.6, with probability at least $1-O(\pi_{np})$ the Lasso estimators satisfy

\[
\sup_{\theta\in\Theta_{np}}\|\hat\beta^{\mathrm{aug}}(\theta)-\beta^{\mathrm{aug}}\|_1=O(s\lambda),\qquad
\|\hat\beta-\beta\|_1=O(s\lambda),
\]

where $\lambda=O(n^{-1/2}\log p)$.

We now prove that under Condition 7, the power of the augmented Lasso (19) is bounded from below by $\gamma\in[0,1]$ asymptotically; that is,

\[
\mathbb{E}\,|\hat S_{\mathrm{auglasso}}\cap S_0|/s\ge\gamma(1-O(\pi_{np})),\tag{21}
\]

where $\hat S_{\mathrm{auglasso}}=\{1\le j\le p:\hat\beta_j^{\mathrm{aug}}(\theta)\ne0\}$. To this end, we first show that with asymptotic probability one,

\[
|\hat S_{\mathrm{auglasso}}^c\cap S_0|/s\le1-\gamma.\tag{22}
\]

The key is to use proof by contradiction. Suppose $|\hat S_{\mathrm{auglasso}}^c\cap S_0|/s>1-\gamma$. Then we can see that

\[
\sup_{\theta\in\Theta_{np}}\|\hat\beta^{\mathrm{aug}}(\theta)-\beta^{\mathrm{aug}}\|_1
\ge\sup_{\theta\in\Theta_{np}}\|\hat\beta^{\mathrm{aug}}_{\hat S_{\mathrm{auglasso}}^c}(\theta)-\beta^{\mathrm{aug}}_{\hat S_{\mathrm{auglasso}}^c}\|_1
=\|\beta^{\mathrm{aug}}_{\hat S_{\mathrm{auglasso}}^c}\|_1
\ge\|\beta^{\mathrm{aug}}_{\hat S_{\mathrm{auglasso}}^c\cap S_0}\|_1
>b_{np}\,s\,n^{-1/2}\log p,
\]

where the last step is by Condition 7. However, by Lemma 5, with probability at least $1-O(\pi_{np})$ the left-hand side above is bounded from above by $O(s\lambda)$ with $\lambda=O(n^{-1/2}\log p)$. These two results contradict each other since $b_{np}\to\infty$. Hence (22) is proved. Therefore, the result in (21) follows immediately since $|\hat S_{\mathrm{auglasso}}\cap S_0|=s-|\hat S_{\mathrm{auglasso}}^c\cap S_0|$ and

\[
\mathbb{E}\,|\hat S_{\mathrm{auglasso}}\cap S_0|/s
\ge\gamma\,\mathbb{P}\big(|\hat S_{\mathrm{auglasso}}\cap S_0|/s\ge\gamma\big)
=\gamma\,\mathbb{P}\big(|\hat S_{\mathrm{auglasso}}^c\cap S_0|/s\le1-\gamma\big)
=\gamma(1-O(\pi_{np})).
\]

Let $\hat S_{\mathrm{lasso}}=\{j:\hat\beta_j\ne0\}$. Using the same argument, we can show that the power of the Lasso (20) is also bounded from below by $\gamma(1-O(\pi_{np}))$ under Condition 7. That is, we have

\[
\mathbb{E}\,|\hat S_{\mathrm{lasso}}\cap S_0|/s\ge\gamma(1-O(\pi_{np})).
\]

Next we show that our knockoffs procedure has at least the same asymptotic power as the augmented Lasso and hence the Lasso itself. Namely, we prove

\[
\mathbb{E}\,|\hat S\cap S_0|/s\ge\gamma(1-o(1))\tag{23}
\]

with threshold $T_2$. Note that the same argument remains valid for $T_1$. Let $|W_{(1)}|\ge\cdots\ge|W_{(p)}|$ and define $j^*$ by $|W_{(j^*)}|=T_2$. Then by the definition of $T_2$, it holds that $-T_2<W_{(j^*+1)}\le0$. Here we have assumed that there are no ties in the magnitudes of the $W_j$'s, which is a reasonable assumption in view of the continuity of the Lasso solution. As in the proof of Theorem 3 in [26], it suffices to consider the following two cases.

Case 1

Consider the case of $-T_2<W_{(j^*+1)}<0$. In this case, from the definition of the threshold $T_2$ we have

\[
\frac{2+|\{j:W_j\le-T_2\}|}{|\{j:W_j\ge T_2\}|}>q.
\]

Using the same argument as in Lemma 6 of [26] together with Lemma 5, we can prove from Condition 8 that $|\hat S|\ge C_2s$ with probability at least $1-O(\pi_{np})$. This leads to $|\{j:W_j\le-T_2\}|>C_2qs-2$ with the same probability. Now from the same argument as in A.5 of [26], we can obtain $T_2=O(\lambda)$. On the other hand, Lemma 5 and some algebra establish that

\[
\begin{aligned}
O(s\lambda)=\|\hat\beta^{\mathrm{aug}}(\hat\theta)-\beta^{\mathrm{aug}}\|_1
&=\sum_{j=1}^{p}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)-\beta_j|+\sum_{j=1}^{p}|\hat\beta_{j+p}^{\mathrm{aug}}(\hat\theta)|\\
&=\sum_{j\in\hat S\cap S_0}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)-\beta_j|+\sum_{j\in S_1}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)|
+\sum_{j\in\hat S^c\cap S_0}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)-\beta_j|+\sum_{j=1}^{p}|\hat\beta_{j+p}^{\mathrm{aug}}(\hat\theta)|.
\end{aligned}\tag{24}
\]

We then consider a lower bound for the last term in (24). For any $j\in\hat S^c$, it holds that $|\hat\beta_{j+p}^{\mathrm{aug}}(\hat\theta)|>|\hat\beta_j^{\mathrm{aug}}(\hat\theta)|-T_2$. Hence we obtain

\[
\begin{aligned}
\sum_{j=1}^{p}|\hat\beta_{j+p}^{\mathrm{aug}}(\hat\theta)|
&\ge\sum_{j\in\hat S\cap S_0}|\hat\beta_{j+p}^{\mathrm{aug}}(\hat\theta)|+\sum_{j\in\hat S^c\cap S_0}|\hat\beta_{j+p}^{\mathrm{aug}}(\hat\theta)|
\ge\sum_{j\in\hat S\cap S_0}|\hat\beta_{j+p}^{\mathrm{aug}}(\hat\theta)|+\sum_{j\in\hat S^c\cap S_0}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)|-T_2|\hat S^c\cap S_0|.
\end{aligned}\tag{25}
\]

Plugging (25) into (24) and applying the triangle inequality yield

\[
\begin{aligned}
O(s\lambda)
&\ge\sum_{j\in\hat S\cap S_0}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)-\beta_j|+\sum_{j\in S_1}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)|
+\sum_{j\in\hat S^c\cap S_0}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)-\beta_j|
+\sum_{j\in\hat S\cap S_0}|\hat\beta_{j+p}^{\mathrm{aug}}(\hat\theta)|
+\sum_{j\in\hat S^c\cap S_0}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)|-T_2|\hat S^c\cap S_0|\\
&\ge\sum_{j\in\hat S\cap S_0}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)-\beta_j|+\sum_{j\in S_1}|\hat\beta_j^{\mathrm{aug}}(\hat\theta)|
+\sum_{j\in\hat S^c\cap S_0}|\beta_j|
+\sum_{j\in\hat S\cap S_0}|\hat\beta_{j+p}^{\mathrm{aug}}(\hat\theta)|-T_2|\hat S^c\cap S_0|\\
&\ge\sum_{j\in\hat S^c\cap S_0}|\beta_j|-T_2|\hat S^c\cap S_0|
=\|\beta_{\hat S^c\cap S_0}\|_1-T_2|\hat S^c\cap S_0|.
\end{aligned}
\]

Since $T_2|\hat S^c\cap S_0|=O(s\lambda)$ for $\lambda=O(n^{-1/2}\log p)$ due to the discussion above, we obtain

\[
\|\beta_{\hat S^c\cap S_0}\|_1=O(s\,n^{-1/2}\log p).\tag{26}
\]

Suppose $|\hat S^c\cap S_0|/s>1-\gamma$. Then Condition 7 gives $\|\beta_{\hat S^c\cap S_0}\|_1>b_{np}\,s\,n^{-1/2}\log p$ for some positive diverging sequence $b_{np}$; this contradicts (26). Thus we obtain $|\hat S^c\cap S_0|/s\le1-\gamma$ with asymptotic probability one, which leads to (23) by taking expectation.

Case 2

Consider the case of $W_{(j^*+1)}=0$. In this case, by the definition of the threshold $T_2$,

\[
\frac{1+|\{j:W_j<0\}|}{|\{j:W_j>0\}|}\le q.\tag{27}
\]

If $|\{j:W_j<0\}|>C_3s$ for some constant $C_3>0$, then from the same argument as in A.5 of [26] we can obtain $T_2=O(\lambda)$, and the rest of the proof is the same as in Case 1. On the other hand, if $|\{j:W_j<0\}|=o(s)$, we have

\[
|\{j:W_j\ne0\}\cap S_0|=|\{j:W_j>0\}\cap S_0|+|\{j:W_j<0\}\cap S_0|\le|\hat S\cap S_0|+o(s).
\]

Now note that $|\{j:W_j\ne0\}|\ge|\{j:\hat\beta_j^{\mathrm{aug}}\ne0,\ j=1,\dots,p\}|$. Then we can see that with asymptotic probability one,

\[
|\{j:W_j\ne0\}\cap S_0|\ge|\{j:\hat\beta_j^{\mathrm{aug}}\ne0,\ j=1,\dots,p\}\cap S_0|=|\hat S_{\mathrm{auglasso}}\cap S_0|\ge\gamma s(1-o(1)).
\]

Consequently, we obtain $|\hat S\cap S_0|/s\ge\gamma(1-o(1))$, which leads to (23) by taking expectation. Combining these two cases concludes the proof of Theorem 2.

B. Review of model-X knockoffs framework

The key idea of the model-X knockoffs framework is to construct the so-called model-X knockoff variables, a concept introduced originally in [15]; its definition is stated formally as follows for completeness.

Definition 1 (Model-X knockoff variables [15])

For a set of random variables $x=(X_1,\dots,X_p)$, a new set of random variables $\tilde x=(\tilde X_1,\dots,\tilde X_p)$ is called a set of model-X knockoff variables if it satisfies the following properties:

  1. For any subset $S\subset\{1,\dots,p\}$, we have $[x,\tilde x]_{\mathrm{swap}(S)}\overset{d}{=}[x,\tilde x]$, where $\overset{d}{=}$ denotes equality in distribution and the vector $[x,\tilde x]_{\mathrm{swap}(S)}$ is obtained by swapping $X_j$ and $\tilde X_j$ for each $j\in S$.

  2. Conditional on $x$, the knockoffs vector $\tilde x$ is independent of the response $Y$.

An important consequence is that the null regressors $\{X_j:j\in S_1\}$ can be swapped with their knockoffs without changing the joint distribution of the original variables $x$, their knockoffs $\tilde x$, and the response $Y$. That is, for any $S\subset S_1$,

\[
([x,\tilde x]_{\mathrm{swap}(S)},Y)\overset{d}{=}([x,\tilde x],Y).\tag{A.1}
\]

Such a property is known as the exchangeability property in the terminology of [15]. For more details, see Lemma 3.2 therein. Following [15], one can obtain a knockoffs matrix $\tilde X\in\mathbb{R}^{n\times p}$ given the observed design matrix $X$.

Using the augmented design matrix $[X,\tilde X]$ and the response vector $y$ constructed by stacking the $n$ observations, [15] suggested constructing knockoff statistics $W_j=w_j([X,\tilde X],y)$, $j\in\{1,\dots,p\}$, for measuring the importance of the $j$th variable, where $w_j$ is a function with the property that swapping $x_j\in\mathbb{R}^n$ with its corresponding knockoff variable $\tilde x_j\in\mathbb{R}^n$ changes the sign of $W_j$; that is,

\[
w_j([X,\tilde X]_{\mathrm{swap}(S)},y)=
\begin{cases}
w_j([X,\tilde X],y), & j\notin S,\\
-\,w_j([X,\tilde X],y), & j\in S.
\end{cases}\tag{A.2}
\]

The knockoff statistics $W_j=w_j([X,\tilde X],y)$ constructed above satisfy the so-called sign-flip property; that is, conditional on the $|W_j|$'s, the signs of the null $W_j$'s with $j\in S_1$ are i.i.d. coin flips (with equal chance $1/2$). For examples of valid constructions of knockoff statistics, see [15].

Let $t>0$ be a fixed threshold and define $\hat S=\{j:W_j\ge t\}$ as the set of discovered variables. Then intuitively, the sign-flip property entails

\[
|\hat S\cap S_1|\overset{d}{=}|\{j:-W_j\ge t\}\cap S_1|\le|\{j:W_j\le-t\}|.
\]

Therefore, the FDP function can be estimated (conservatively) as

\[
\mathrm{FDP}=\frac{|\hat S\cap S_1|}{|\hat S|\vee1}\le\frac{|\{j:W_j\le-t\}|}{|\hat S|\vee1}=:\widehat{\mathrm{FDP}}
\]

for each $t$.

for each t. In light of this observation, [15] proposed to choose the threshold by resorting to the above FDP^. Their results are summarized formally as follows.

Result 1 ([15])

Let $q\in(0,1)$ denote the target FDR level. Assume that we choose a threshold $T_1>0$ such that

\[
T_1=\min\Big\{t>0:\ \frac{|\{j:W_j\le-t\}|}{|\{j:W_j\ge t\}|\vee1}\le q\Big\}
\]

or $T_1=+\infty$ if the set is empty. Then the procedure selecting the variables $\hat S=\{j:W_j\ge T_1\}$ controls the mFDR in (4) at no more than $q$. Moreover, assume that we choose a slightly more conservative threshold $T_2>0$ such that

\[
T_2=\min\Big\{t>0:\ \frac{1+|\{j:W_j\le-t\}|}{|\{j:W_j\ge t\}|\vee1}\le q\Big\}
\]

or $T_2=+\infty$ if the set is empty. Then the procedure selecting the variables $\hat S=\{j:W_j\ge T_2\}$ controls the FDR in (3) at no more than $q$.
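To make Result 1 concrete, the following sketch (our own illustration rather than the authors' code; the function and variable names are hypothetical) computes the two data-dependent thresholds from a given vector of knockoff statistics and returns the corresponding selected set.

```python
import numpy as np

def knockoff_threshold(W, q, offset):
    """Return the knockoff threshold computed from statistics W.

    offset = 0 gives T1 (mFDR control); offset = 1 gives the more
    conservative T2 (FDR control). Candidate thresholds are the
    nonzero magnitudes |W_j|, as in Result 1.
    """
    candidates = np.sort(np.unique(np.abs(W[W != 0])))
    for t in candidates:
        fdp_hat = (offset + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return t
    return np.inf  # no feasible threshold: select nothing

# Toy usage with simulated knockoff statistics.
rng = np.random.default_rng(0)
W = np.concatenate([rng.uniform(1, 3, size=10),    # "signal-like" statistics
                    rng.normal(0, 0.5, size=90)])  # "null-like", roughly symmetric about 0
T2 = knockoff_threshold(W, q=0.2, offset=1)
selected = np.where(W >= T2)[0]
```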

It is worth noting that Result 1 was derived under the assumption that the joint distribution of the $p$ covariates is known. In our model setting (1) and (2), however, there exist unknown parameters that need to be estimated from data. In such a case, it is natural to construct the knockoff variables and knockoff statistics with the estimated distribution of the $p$ covariates. Such a plug-in principle usually leads to a breakdown of the exchangeability property in Definition 1, preventing us from directly using Result 1. To address this challenging issue, we will introduce our new method in the next section and provide detailed theoretical analysis for it.

It is also worth mentioning that [6] recently provided an elegant new line of theory which ensures FDR control of the model-X knockoffs procedure under an approximate exchangeability assumption, which is weaker than the exact exchangeability condition required in Definition 1. However, the conditions they need on the estimation error of the joint distribution of $x$ are difficult to satisfy in high dimensions. [26] investigated the robustness of the model-X knockoffs procedure with respect to an unknown covariate distribution when the covariates $x$ follow a joint Gaussian distribution. Their procedure needs data splitting and their proofs rely heavily on the Gaussian distribution assumption, so their development may not be suitable for economic data with limited sample sizes and heavy-tailed distributions. For these reasons, our results complement substantially those in [15], [26], and [6].

C. Proofs of Proposition 1 and some key lemmas

C.1. Proof of Proposition 1

Observe that the second property of Definition 1 holds naturally since $\tilde X(\theta_0)$ is constructed without using the information of $y$. Thus it remains to verify the first property of Definition 1. Since $F_0$ and $E_{\eta_0}$ have i.i.d. rows, let us consider the case of a single observation and show that $[x,\tilde x(\theta_0)]_{\mathrm{swap}(S)}\overset{d}{=}[x,\tilde x(\theta_0)]$ for any subset $S\subset\{1,\dots,p\}$. By Proposition 2 of [15], it suffices to consider the case of $S=\{j\}$ for an arbitrary $j\in\{1,\dots,p\}$. It follows from the definition of model (2) and the construction of $\tilde x(\theta_0)$ that

\[
[x,\tilde x(\theta_0)]_{\mathrm{swap}(\{j\})}=[c_0+e,\,c_0+e_{\eta_0}]_{\mathrm{swap}(\{j\})}=[c_0+\tilde e^{(j)},\,c_0+\tilde e_{\eta_0}^{(j)}],\tag{A.3}
\]

where $\tilde e^{(j)}$ and $\tilde e_{\eta_0}^{(j)}$ are defined such that $[e,e_{\eta_0}]_{\mathrm{swap}(\{j\})}=[\tilde e^{(j)},\tilde e_{\eta_0}^{(j)}]$. Since model (2) assumes that $e$ has i.i.d. components and $e_{\eta_0}$ is an independent copy of $e$, it holds that

\[
[\tilde e^{(j)},\tilde e_{\eta_0}^{(j)}]\overset{d}{=}[e,e_{\eta_0}].\tag{A.4}
\]

Therefore, in view of (A.3) and (A.4) and the independence between $(e,e_{\eta_0})$ and $c_0$, we have

\[
[x,\tilde x(\theta_0)]_{\mathrm{swap}(\{j\})}\overset{d}{=}[c_0+e,\,c_0+e_{\eta_0}]=[x,\tilde x(\theta_0)],
\]

which completes the proof of Proposition 1.

C.2. Proof of Lemma 1

For $\lambda$ fixed at $C_0n^{-1/2}\log p$ and each given $\theta$, $W_j(\theta)=w_j([X,\tilde X(\theta)],y)$ depends only on $\hat\beta^{\mathrm{aug}}(\theta)$ by the LCD construction. Moreover, the Lasso solution $\hat\beta^{\mathrm{aug}}(\theta)$ satisfies the Karush–Kuhn–Tucker (KKT) conditions

\[
v(\theta)-U(\theta)\hat\beta^{\mathrm{aug}}(\theta)=n^{-1}\lambda z,\tag{A.5}
\]
\[
\text{where } z=(z_1,\dots,z_{2p})'\ \text{with}\
z_j\in
\begin{cases}
\{\mathrm{sgn}(\hat\beta_j^{\mathrm{aug}}(\theta))\} & \text{if }\hat\beta_j^{\mathrm{aug}}(\theta)\ne0,\\
[-1,1] & \text{if }\hat\beta_j^{\mathrm{aug}}(\theta)=0,
\end{cases}
\quad\text{for }j=1,\dots,2p.\tag{A.6}
\]

This means that $\hat\beta^{\mathrm{aug}}(\theta)$ depends on the data $([X,\tilde X(\theta)],y)$ only through $U(\theta)$ and $v(\theta)$. Thus, using the notation $T(\theta)=\mathrm{vec}(\mathrm{vech}\,U(\theta),v(\theta))$ together with the fact that $U(\theta)$ is symmetric, we can reparametrize $w_j([X,\tilde X(\theta)],y)$ as $w_j(T(\theta))$ with a slight abuse of notation. Furthermore, note that the thresholds $T_1$ and $T_2$ are both completely determined by the $w_j(T(\theta))$'s. Consequently, by the construction of $\hat S$ we see that $\hat S$ depends only on $T(\theta)$, which completes the proof of Lemma 1.

C.3. Proof of Lemma 2

We continue to use the same $\lambda$ and $\theta$ as in Lemma 1 and its proof. Recall that $S_A(t_A)$ represents the outcome of first restricting ourselves to the smaller set of variables $A$ and then applying IPAD to $T_A(\theta)=t_A$ to further select variables from $A$. Also recall that $A(\theta)$ is the support of the knockoff statistics $W_j(\theta)$. Thus the knockoff threshold $T_1$ or $T_2$ depends only on the $W_j(\theta)$'s with $j\in A(\theta)$.

On the other hand, when we restrict ourselves to $A\supset A(\theta)$ we solve the following KKT conditions with respect to $\tilde\beta:=(\tilde\beta_1,\dots,\tilde\beta_{2|A|})'\in\mathbb{R}^{2|A|}$ to get the Lasso solution:

\[
\tilde\beta=(U_A(\theta))^{-1}\big(v_A(\theta)-n^{-1}\lambda\tilde z\big),\tag{A.7}
\]
\[
\text{where } \tilde z=(\tilde z_1,\dots,\tilde z_{2|A|})'\ \text{with}\
\tilde z_j\in
\begin{cases}
\{\mathrm{sgn}(\tilde\beta_j)\} & \text{if }\tilde\beta_j\ne0,\\
[-1,1] & \text{if }\tilde\beta_j=0,
\end{cases}
\quad\text{for }j=1,\dots,2|A|.\tag{A.8}
\]

Since $\lambda$ is always fixed at the same value $C_0n^{-1/2}\log p$, the solution to the above KKT conditions is identical to $\hat\beta^{\mathrm{aug}}_{A\cup(p+A)}(\theta)$, where the latter denotes the subvector of $\hat\beta^{\mathrm{aug}}(\theta)$ formed by stacking $\hat\beta_{j_1}^{\mathrm{aug}}(\theta)$, $j_1\in A$, and $\hat\beta_{p+j_2}^{\mathrm{aug}}(\theta)$, $j_2\in A$, all together. Therefore, the Lasso solution to (A.7)–(A.8) and the Lasso solution to (A.5)–(A.6) have identical supports (when viewed in the original $2p$-dimensional space) and, in addition, identical values on the support. This guarantees that $S_{\{1,\dots,p\}}(T(\theta))$ and $S_A(T_A(\theta))$ are identical and thus concludes the proof of Lemma 2.

C.4. Lemma 3 and its proof

Lemma 3

Assume that Conditions 2–5 hold. Then with probability at least $1-O(\pi_{np})$, the estimator $\hat\theta=(\mathrm{vec}(\hat C),\hat\eta)$ lies in the shrinking set

\[
\Theta_{np}=\big\{\theta=(\mathrm{vec}(C),\eta):\ \|C-C_0\|_{\max}+\|\eta-\eta_0\|_{\max}\le O(c_{np})\big\},
\]

where $c_{np}=(n^{-1}\log p)^{1/2}+(p^{-1}\log n)^{1/2}$ and $\pi_{np}=p^{-\nu}+n^{-\nu}$.

Proof. We divide the proof into two parts. We prove the bound for C^C0max in Part 1 and then for η^η0max in Part 2.

Part 1

Note that $\|\hat C-C_0\|_{\max}=\max_{i,j}|\hat c_{ij}-c_{ij}^0|$, where the maximum is taken over $i\in\{1,\dots,n\}$ and $j\in\{1,\dots,p\}$. We write $f_i^*=H'f_i^0$ and $\lambda_j^*=H^{-1}\lambda_j^0$ with the rotation matrix $H$ defined in Lemma 6 in Section D.1. From the definition of $c_{ij}$, it holds that

\[
\hat c_{ij}-c_{ij}^0=(\hat f_i-f_i^*)'\lambda_j^*+\hat f_i'(\hat\lambda_j-\lambda_j^*).
\]

From Lemma 6, we can assume $\|H\|_2+\|H^{-1}\|_2+\|V\|_2+\|V^{-1}\|_2\lesssim1$, which occurs with probability at least $1-O(p^{-\nu})$. We also have $\max_{i\in\{1,\dots,n\}}\|\hat f_i\|_2^2\lesssim1$ a.s. by the assumed restriction $\hat F'\hat F/n=I_r$ as mentioned on p. 213 of [3]. Hence, the triangle and Cauchy–Schwarz inequalities with Conditions 2 and 3 give

\[
\max_{i,j}|\hat c_{ij}-c_{ij}^0|\le\max_i\|\hat f_i-f_i^*\|_2\max_j\|\lambda_j^*\|_2+\max_i\|\hat f_i\|_2\max_j\|\hat\lambda_j-\lambda_j^*\|_2
\lesssim\max_i\|\hat f_i-f_i^*\|_2+\max_j\|\hat\lambda_j-\lambda_j^*\|_2.\tag{A.9}
\]

Then it is sufficient to derive upper bounds for $\max_i\|\hat f_i-f_i^*\|_2$ and $\max_j\|\hat\lambda_j-\lambda_j^*\|_2$ that hold with high probability. Using the decomposition of A.1 in [2] along with taking the maximum over $i,\ell\in\{1,\dots,n\}$, we can deduce

\[
\begin{aligned}
\max_i\|\hat f_i-f_i^*\|_2
&\le\|V^{-1}\|_2\max_i\Big((\sigma_e^2/n)\|\hat f_i\|_2
+n^{-1}\sum_{\ell=1}^n\|\hat f_\ell\|_2\Big|p^{-1}\sum_{j=1}^p\big(e_{\ell j}e_{ij}-\mathbb{E}[e_{\ell j}e_{ij}]\big)\Big|\\
&\qquad\qquad+n^{-1}\sum_{\ell=1}^n\|\hat f_\ell\|_2\|f_\ell^0\|_2\Big\|p^{-1}\sum_{j=1}^p\lambda_j^0e_{ij}\Big\|_2
+n^{-1}\sum_{\ell=1}^n\|\hat f_\ell\|_2\|f_i^0\|_2\Big\|p^{-1}\sum_{j=1}^p\lambda_j^0e_{\ell j}\Big\|_2\Big)\\
&\lesssim O(n^{-1})+\max_{i,\ell}\Big|p^{-1}\sum_{j=1}^p\big(e_{\ell j}e_{ij}-\mathbb{E}[e_{\ell j}e_{ij}]\big)\Big|
+\max_{i,k}\Big|p^{-1}\sum_{j=1}^p\lambda_{jk}^0e_{ij}\Big|
\lesssim O(n^{-1})+R_1+R_2,
\end{aligned}\tag{A.10}
\]

where we have used the boundedness of $\|\hat f_\ell\|_2$ discussed above and $\|f_\ell^0\|_2\le r^{1/2}\|f^0\|_{\max}\lesssim1$ in Condition 2 for the second inequality, and defined $R_1=\max_{i,\ell}|p^{-1}\sum_{j=1}^p(e_{\ell j}e_{ij}-\mathbb{E}[e_{\ell j}e_{ij}])|$ and $R_2=\max_{i,k}|p^{-1}\sum_{j=1}^p\lambda_{jk}^0e_{ij}|$. Similarly, the expression on p. 165 of [2] with the maximum taken over $i\in\{1,\dots,n\}$ and $j\in\{1,\dots,p\}$ leads to

\[
\begin{aligned}
\max_j\|\hat\lambda_j-\lambda_j^*\|_2
&\le\|H\|_2\max_j\Big\|n^{-1}\sum_{i=1}^nf_i^0e_{ij}\Big\|_2
+\Big\|n^{-1}\sum_{i=1}^n\hat f_i(\hat f_i-f_i^*)'\Big\|_2\|H^{-1}\|_2\max_j\|\lambda_j^0\|_2
+\max_j\Big\|n^{-1}\sum_{i=1}^n(\hat f_i-f_i^*)e_{ij}\Big\|_2\\
&\lesssim\max_j\Big\|n^{-1}\sum_{i=1}^nf_i^0e_{ij}\Big\|_2
+\max_i\|\hat f_i-f_i^*\|_2
+\max_i\|\hat f_i-f_i^*\|_2\max_j\Big(n^{-1}\sum_{i=1}^ne_{ij}^2\Big)^{1/2}
\lesssim R_3+\max_i\|\hat f_i-f_i^*\|_2(1+R_4),
\end{aligned}\tag{A.11}
\]

where $R_3=\max_{j,k}|n^{-1}\sum_{i=1}^nf_{ik}^0e_{ij}|$ and $R_4=\max_j(n^{-1}\sum_{i=1}^ne_{ij}^2)^{1/2}$, and the Cauchy–Schwarz inequality has been used to obtain the second inequality. To evaluate $R_4$, we note that

\[
R_4^2\le\max_j\mathbb{E}e_{ij}^2+\max_j\Big|n^{-1}\sum_{i=1}^n\big(e_{ij}^2-\mathbb{E}e_{ij}^2\big)\Big|.
\]

The first term is bounded by $2C_e^2$. For the second term, Lemma 7(a) in Section D.2 with $p$ replaced by $n$ and the union bound give

\[
\mathbb{P}\Big(\max_j\Big|n^{-1}\sum_{i=1}^n\big(e_{ij}^2-\mathbb{E}e_{ij}^2\big)\Big|>u\Big)
\le p\max_j\mathbb{P}\Big(\Big|n^{-1}\sum_{i=1}^n\big(e_{ij}^2-\mathbb{E}e_{ij}^2\big)\Big|>u\Big)
\le2p\exp(-nu^2/C)
\]

for all $0\le u\le c$. Thus, putting $u=(C(\nu+1)n^{-1}\log p)^{1/2}$ and using the condition $c_{np}\le c/(r^2M^2C(\nu+2))^{1/2}$, we obtain $R_4^2=O(1)+O((n^{-1}\log p)^{1/2})=O(1)$ with probability at least $1-O(p^{-\nu})$. This together with (A.9)–(A.11) yields

\[
\max_{i,j}|\hat c_{ij}-c_{ij}^0|\lesssim R_3+\{R_1+R_2+O(n^{-1})\}(1+R_4)\lesssim R_1+R_2+R_3+O(n^{-1}).
\]

Hence the convergence rate of $\max_{i,j}|\hat c_{ij}-c_{ij}^0|$ is determined by the slowest term among $R_1$, $R_2$, $R_3$, and $O(n^{-1})$. We evaluate these terms by Lemma 7 in Section D.2 and the union bound with the condition $c_{np}\le c/(r^2M^2C(\nu+2))^{1/2}$ as above. First, for $R_1$, Lemma 7(a) with $u_1=(C(\nu+2)p^{-1}\log n)^{1/2}$ results in

\[
\mathbb{P}(R_1>u_1)\le2n^2\exp\{-p(\nu+2)p^{-1}\log n\}=O(n^{-\nu}).
\]

Next, for $R_2$, Lemma 7(c) with $u_2=(2(\nu+1)p^{-1}\log n)^{1/2}$ gives

\[
\mathbb{P}(R_2>u_2)\le2rn\exp\{-p(\nu+1)p^{-1}\log n\}=O(n^{-\nu}).
\]

Finally, for $R_3$, Lemma 7(b) with $u_3=(C(\nu+1)n^{-1}\log p)^{1/2}$ leads to

\[
\mathbb{P}(R_3>u_3)\le2rp\exp\{-n(\nu+1)n^{-1}\log p\}=O(p^{-\nu}).
\]

Consequently, we obtain the first result $\|\hat C-C_0\|_{\max}=O(c_{np})$, which holds with probability at least $1-O(\pi_{np})$.

Part 2

Next we derive the convergence rate of $\hat\eta$. It suffices to prove the case when $\eta_0$ is a scalar (so that we write $\eta_0=\eta_1^0$), since the dimensionality $m$ is fixed and the $\eta_k^0$'s share the same property thanks to Condition 4. Recall the notation $\mathbb{E}_{np}e^k=(np)^{-1}\sum_{i,j}e_{ij}^k$. Letting $\delta_{ij}=c_{ij}^0-\hat c_{ij}$, we have $\hat e_{ij}=x_{ij}-\hat c_{ij}=e_{ij}+\delta_{ij}$. For an arbitrary fixed $k\in\{1,\dots,m\}$, the binomial expansion entails

\[
\begin{aligned}
|\mathbb{E}_{np}\hat e^k-\mathbb{E}e^k|
&=|\mathbb{E}_{np}(e+\delta)^k-\mathbb{E}e^k|
=\Big|\mathbb{E}_{np}(e^k-\mathbb{E}e^k)+\mathbb{E}_{np}\sum_{\ell=0}^{k-1}\binom{k}{\ell}e^\ell\delta^{k-\ell}\Big|\\
&\le|\mathbb{E}_{np}(e^k-\mathbb{E}e^k)|+\sum_{\ell=0}^{k-1}\binom{k}{\ell}\max_{i,j}|\delta_{ij}|^{k-\ell}\,\mathbb{E}_{np}|e|^\ell
\le|\mathbb{E}_{np}(e^k-\mathbb{E}e^k)|+O\big(\max_{i,j}|\delta_{ij}|\big)\sum_{\ell=0}^{k-1}\mathbb{E}_{np}|e|^\ell.
\end{aligned}\tag{A.12}
\]

For all $k\in\{1,\dots,m\}$, the strong law of large numbers with Theorem 2.5.7 in [22] entails $|\mathbb{E}_{np}e^k-\mathbb{E}e^k|=o((np)^{-1/2}\log(np))$ a.s. under Condition 4. Furthermore, the second term of (A.12) is $O(c_{np})$ with probability at least $1-O(\pi_{np})$ from Part 1 and the same law of large numbers. Consequently, we obtain

\[
|\mathbb{E}_{np}\hat e^k-\mathbb{E}e^k|\lesssim c_{np}.
\]

Therefore, by the construction of $\hat\eta_1$ and the local Lipschitz continuity of $h_1$ in Condition 4, we see that

\[
|\hat\eta_1-\eta_1^0|=|h_1(\mathbb{E}_{np}\hat e,\dots,\mathbb{E}_{np}\hat e^m)-h_1(\mathbb{E}e,\dots,\mathbb{E}e^m)|
\lesssim\max_{k\in\{1,\dots,m\}}|\mathbb{E}_{np}\hat e^k-\mathbb{E}e^k|
\]

with probability at least $1-O(\pi_{np})$. This completes the proof of Lemma 3.

C.5. Lemma 4 and its proof

Lemma 4

Assume that Conditions 1–4 hold. Then with probability at least $1-O(\pi_{np})$, the following statements hold:

\[
\text{(a)}\quad\sup_{|A|\le k,\,\theta\in\Theta_{np}}\|U_A(\theta)-\mathbb{E}[U_A(\theta_0)]\|_{\max}=O(k^{1/2}\tilde c_{np}),
\qquad
\text{(b)}\quad\sup_{|A|\le k,\,\theta\in\Theta_{np}}\|v_A(\theta)-\mathbb{E}[v_A(\theta_0)]\|_{\max}=O(s^{3/2}\tilde c_{np}),
\]

where $\Theta_{np}$ was defined in Lemma 3 and $\tilde c_{np}=n^{-1/2}\log p+p^{-1/2}\log n$. Consequently, we have

\[
\sup_{|A|\le k,\,\theta\in\Theta_{np}}\|T_A(\theta)-\mathbb{E}[T_A(\theta_0)]\|_{\max}=O\big((k^{1/2}+s^{3/2})\tilde c_{np}\big).
\]

Proof. To complete the proof of (a), we verify the following:

\[
\text{(a-i)}\quad\sup_{|A|\le k,\,\theta\in\Theta_{np}}\|U_A(\theta)-U_A(\theta_0)\|_{\max}\lesssim k^{1/2}\tilde c_{np},
\qquad
\text{(a-ii)}\quad\|U(\theta_0)-\mathbb{E}[U(\theta_0)]\|_{\max}\lesssim(n^{-1}\log p)^{1/2}.
\]

From (a-i) and (a-ii), we can conclude that

\[
\begin{aligned}
\sup_{|A|\le k,\,\theta\in\Theta_{np}}\|U_A(\theta)-\mathbb{E}[U_A(\theta_0)]\|_{\max}
&\le\sup_{|A|\le k,\,\theta\in\Theta_{np}}\|U_A(\theta)-U_A(\theta_0)\|_{\max}
+\sup_{|A|\le k}\|U_A(\theta_0)-\mathbb{E}[U_A(\theta_0)]\|_{\max}\\
&\le\sup_{|A|\le k,\,\theta\in\Theta_{np}}\|U_A(\theta)-U_A(\theta_0)\|_{\max}
+\|U(\theta_0)-\mathbb{E}[U(\theta_0)]\|_{\max}
\lesssim k^{1/2}\tilde c_{np},
\end{aligned}
\]

which yields result (a).

We begin by showing (a-i); this is the uniform extension of Lemma 8(a) in Section D.3 over $|A|\le k$. In fact, the proof is almost the same, with the only difference that the bound (A.23) should be replaced with the bound derived in Lemma 9(c); that is,

\[
\max_{|A|\le k}n^{-1/2}\|E_A\|_2\lesssim1+(kn^{-1}\log p)^{1/2},\tag{A.13}
\]

which holds with probability at least $1-O(p^{-\nu})$. Notice that $(kn^{-1}\log p)^{1/2}\le\log^{1/2}p$. Therefore, even if we use (A.13) instead of (A.23) in the proof of Lemma 8(a), we can still derive the same convergence rate $k^{1/2}\tilde c_{np}$ as in Lemma 8(a), and hence (a-i) holds with probability at least $1-O(\pi_{np})$.

For (a–ii), we see that

\[
\begin{aligned}
\|U(\theta_0)-\mathbb{E}[U(\theta_0)]\|_{\max}
&\le\|n^{-1}X'X-\mathbb{E}[n^{-1}X'X]\|_{\max}
+\|n^{-1}\tilde X(\theta_0)'\tilde X(\theta_0)-\mathbb{E}[n^{-1}\tilde X(\theta_0)'\tilde X(\theta_0)]\|_{\max}\\
&\quad+2\|n^{-1}X'\tilde X(\theta_0)-\mathbb{E}[n^{-1}X'\tilde X(\theta_0)]\|_{\max}
=:W_1+W_2+2W_3.
\end{aligned}\tag{A.14}
\]

We derive the bounds for each of these terms. First, $W_1$ is bounded as

\[
W_1\le\|n^{-1}C_0'C_0-\mathbb{E}[n^{-1}C_0'C_0]\|_{\max}
+\|n^{-1}E'E-\mathbb{E}[n^{-1}E'E]\|_{\max}
+2\|n^{-1}E'C_0\|_{\max}
=:W_{1,1}+W_{1,2}+W_{1,3}.
\]

Under Condition 3, we deduce

\[
W_{1,1}=\max_{j,\ell\in\{1,\dots,p\}}\Big|\sum_{k,m=1}^r\lambda_{jk}^0\lambda_{\ell m}^0\,n^{-1}\sum_{i=1}^n\big(f_{ik}^0f_{im}^0-\mathbb{E}f_{ik}^0f_{im}^0\big)\Big|
\le r^2M^2\max_{k,m\in\{1,\dots,r\}}\Big|n^{-1}\sum_{i=1}^n\big(f_{ik}^0f_{im}^0-\mathbb{E}f_{ik}^0f_{im}^0\big)\Big|.
\]

From Lemma 7(d) with Condition 2 and the union bound, we have

\[
\mathbb{P}\Big(\max_{k,m\in\{1,\dots,r\}}\Big|n^{-1}\sum_{i=1}^n\big(f_{ik}^0f_{im}^0-\mathbb{E}f_{ik}^0f_{im}^0\big)\Big|>u\Big)
\le r^2\max_{k,m}\mathbb{P}\Big(\Big|n^{-1}\sum_{i=1}^n\big(f_{ik}^0f_{im}^0-\mathbb{E}f_{ik}^0f_{im}^0\big)\Big|>u\Big)
\le2p^2\exp(-nu^2/C).
\]

Hence, letting $u=(C(\nu+2)n^{-1}\log p)^{1/2}$ above yields the bound $W_{1,1}\lesssim(n^{-1}\log p)^{1/2}$ with probability at least $1-O(p^{-\nu})$. Next, for $W_{1,2}$, we find from Lemma 7(a) with $p$ replaced by $n$ and the union bound that

\[
\mathbb{P}\big(\|n^{-1}E'E-\mathbb{E}n^{-1}E'E\|_{\max}>u\big)
\le p^2\max_{j,\ell}\mathbb{P}\Big(\Big|n^{-1}\sum_{i=1}^n\big(e_{ij}e_{i\ell}-\mathbb{E}e_{ij}e_{i\ell}\big)\Big|>u\Big)
\le2p^2\exp(-nu^2/C).
\]

Letting $u=(C(\nu+2)n^{-1}\log p)^{1/2}$ and using $n^{-1}\log p\le c^2/(C(\nu+2))$, we obtain $W_{1,2}\lesssim(n^{-1}\log p)^{1/2}$ with probability at least $1-O(p^{-\nu})$. Next, for $W_{1,3}$, the union bound gives

\[
\begin{aligned}
\mathbb{P}\big(\|n^{-1}E'F_0\Lambda_0'\|_{\max}>u\big)
&=\mathbb{P}\Big(\max_{j,\ell\in\{1,\dots,p\}}\Big|n^{-1}\sum_{k=1}^r\sum_{i=1}^ne_{ij}f_{ik}^0\lambda_{\ell k}^0\Big|>u\Big)
\le\mathbb{P}\Big(r\max_{j,\ell\in\{1,\dots,p\}}\max_{k\in\{1,\dots,r\}}\Big|n^{-1}\sum_{i=1}^ne_{ij}f_{ik}^0\Big|\,|\lambda_{\ell k}^0|>u\Big)\\
&\le rp\max_{k\in\{1,\dots,r\}}\max_{j\in\{1,\dots,p\}}\mathbb{P}\Big(\Big|n^{-1}\sum_{i=1}^ne_{ij}f_{ik}^0\Big|>u/(rM)\Big).
\end{aligned}
\]

Lemma 7(b) states that for all $0\le u/(rM)\le c/(rM)$ it holds that

\[
\mathbb{P}\Big(\Big|n^{-1}\sum_{i=1}^ne_{ij}f_{ik}^0\Big|>u/(rM)\Big)\le2\exp\{-nu^2/(Cr^2M^2)\}.
\]

Therefore, if we put $u=rM(C(\nu+1)n^{-1}\log p)^{1/2}$ using $n^{-1}\log p\le c^2/(r^2M^2C(\nu+1))$, the upper bound on the probability is further bounded by $2rp^{-\nu}$. Thus we obtain $W_{1,3}\lesssim(n^{-1}\log p)^{1/2}$ with probability at least $1-O(p^{-\nu})$. Consequently, the bound on $W_1$ is

\[
W_1\le W_{1,1}+W_{1,2}+W_{1,3}\lesssim(n^{-1}\log p)^{1/2}
\]

with probability at least $1-O(p^{-\nu})$. Note that we have the same result for $W_2$ since it has the same distribution as $W_1$. Finally, $W_3$ is bounded as

\[
W_3\le\|n^{-1}C_0'C_0-\mathbb{E}[n^{-1}C_0'C_0]\|_{\max}
+\|n^{-1}E'E_{\eta_0}\|_{\max}
+\|n^{-1}E'C_0\|_{\max}
+\|n^{-1}E_{\eta_0}'C_0\|_{\max}
=:W_{1,1}+W_{3,1}+W_{1,3}+W_{3,2}.
\]

The upper bound on $W_{3,1}$ turns out to be $O((n^{-1}\log p)^{1/2})$, which holds with probability at least $1-O(p^{-\nu})$. We check this claim. Using the union bound and the inequality of Lemma 7(a) with $p$ replaced by $n$, and putting $u=(C(\nu+2)n^{-1}\log p)^{1/2}$, yield

\[
\mathbb{P}\big(\|n^{-1}E'E_{\eta_0}\|_{\max}>u\big)
\le p^2\max_{j,\ell}\mathbb{P}\Big(\Big|n^{-1}\sum_{i=1}^ne_{ij}e_{\eta_0,i\ell}\Big|>u\Big)
\le2p^{-\nu}.
\]

Finally, $W_{3,2}$ has the same bound as $W_{1,3}$ because $E_{\eta_0}$ is an independent copy of $E$. Consequently, with probability at least $1-O(p^{-\nu})$, we obtain

\[
\|U(\theta_0)-\mathbb{E}[U(\theta_0)]\|_{\max}\lesssim(n^{-1}\log p)^{1/2}.
\]

This completes the proof of (a) since $p^{-\nu}=O(\pi_{np})$.

Next we show (b) by verifying the following:

\[
\text{(b-i)}\quad\sup_{|A|\le k,\,\theta\in\Theta_{np}}\|v_A(\theta)-v_A(\theta_0)\|_{\max}\lesssim s^{3/2}\tilde c_{np},
\qquad
\text{(b-ii)}\quad\|v(\theta_0)-\mathbb{E}[v(\theta_0)]\|_{\max}\lesssim s(n^{-1}\log p)^{1/2}.
\]

Similar to the proof of (a), we need to modify the proof of Lemma 8(b) in Section D.3 to obtain the uniform bound with respect to $A$, but the bound obtained there is already uniform over the choice of $A$. Thus the same upper bound holds and (b-i) follows. Next we show (b-ii). It holds that

\[
\begin{aligned}
\|v(\theta_0)-\mathbb{E}v(\theta_0)\|_{\max}
&\le\|n^{-1}X'y-\mathbb{E}n^{-1}X'y\|_{\max}+\|n^{-1}\tilde X(\theta_0)'y-\mathbb{E}n^{-1}\tilde X(\theta_0)'y\|_{\max}\\
&\le\|(n^{-1}X'X-\mathbb{E}n^{-1}X'X)\beta\|_{\max}+\|n^{-1}X'\varepsilon-\mathbb{E}n^{-1}X'\varepsilon\|_{\max}\\
&\quad+\|(n^{-1}\tilde X(\theta_0)'X-\mathbb{E}[n^{-1}\tilde X(\theta_0)'X])\beta\|_{\max}+\|n^{-1}\tilde X(\theta_0)'\varepsilon-\mathbb{E}[n^{-1}\tilde X(\theta_0)'\varepsilon]\|_{\max}
=:Z_1+Z_2+Z_3+Z_4.
\end{aligned}
\]

These terms can be bounded by the results obtained in the proof of (a-ii). We see that

\[
Z_1\le s^{1/2}\|n^{-1}X'X_{S_0}-\mathbb{E}n^{-1}X'X_{S_0}\|_{\max}\|\beta_{S_0}\|_2\lesssim sW_1\lesssim s(n^{-1}\log p)^{1/2}
\]

with probability at least $1-O(p^{-\nu})$. Next we deduce

\[
Z_2\le\|n^{-1}\Lambda_0F_0'\varepsilon\|_{\max}+\|n^{-1}E'\varepsilon\|_{\max}.
\]

The first and second terms can be bounded in the same ways as $W_{1,3}$ and $W_{3,1}$ in the proof of (a) above, with $E$ and $E_{\eta_0}$ replaced by $\varepsilon$, respectively. Then the first term dominates the second and hence $Z_2\lesssim(n^{-1}\log p)^{1/2}$ with probability at least $1-O(p^{-\nu})$. Similarly, we can obtain

\[
Z_3\le s^{1/2}\|n^{-1}\tilde X(\theta_0)'X_{S_0}-\mathbb{E}n^{-1}\tilde X(\theta_0)'X_{S_0}\|_{\max}\|\beta_{S_0}\|_2\lesssim sW_3\lesssim s(n^{-1}\log p)^{1/2}
\]

with probability at least $1-O(p^{-\nu})$. Note that $Z_4$ has the same bound as $Z_2$. Consequently, collecting terms leads to the result $Z_1+\cdots+Z_4\lesssim s(n^{-1}\log p)^{1/2}$ with probability at least $1-O(p^{-\nu})$. This proves (b-ii) and concludes the proof of Lemma 4.

C.6. Lemma 5 and its proof

Lemma 5

Assume that all the conditions of Theorem 2 hold. Then with probability at least 1 − O(πnp), the Lasso solution in (19) satisfies

\[
\sup_{\theta\in\Theta_{np}}\|\hat\beta^{\mathrm{aug}}(\theta)-\beta^{\mathrm{aug}}\|_2=O(s^{1/2}\lambda),
\qquad
\sup_{\theta\in\Theta_{np}}\|\hat\beta^{\mathrm{aug}}(\theta)-\beta^{\mathrm{aug}}\|_1=O(s\lambda),
\]

where $\lambda=c_1n^{-1/2}\log p$ with $c_1$ some positive constant.

Proof. Let $\delta(:=\delta(\theta)):=\hat\beta^{\mathrm{aug}}(\theta)-\beta^{\mathrm{aug}}$. We start by introducing two inequalities:

\[
\sup_{\theta\in\Theta_{np}}\|n^{-1}[X,\tilde X(\theta)]'\varepsilon\|_{\max}\le2^{-1}\lambda,\tag{A.15}
\]
\[
\inf_{\theta\in\Theta_{np},\,\delta\in\mathcal V}\delta'U(\theta)\delta/\|\delta\|_2^2\ge\sigma_e^2(1+o(1)),\tag{A.16}
\]

where $\lambda=c_1n^{-1/2}\log p$ for some positive constant $c_1$ and

\[
\mathcal V=\{\delta\in\mathbb{R}^{2p}:\ \|\delta_{S_1}\|_1\le3\|\delta_{S_0}\|_1,\ \|\delta\|_0\le k\}.\tag{A.17}
\]

It is well known that the rate of convergence of the Lasso estimator can be obtained provided that (A.15) and (A.16) hold. Thus we show that these two inequalities actually hold with high probability in Step 1, and then derive the convergence rate using (A.15) and (A.16) in Step 2.

Step 1

We check whether (A.15) and (A.16) actually hold with high probability. We first verify (A.15). By the proofs of Lemmas 8 and 4, we have

\[
\sup_{\theta\in\Theta_{np}}\|n^{-1}[X,\tilde X(\theta)]'\varepsilon\|_{\max}
\le\|n^{-1}X'\varepsilon\|_{\max}
+\sup_{\theta\in\Theta_{np}}\|n^{-1}\tilde X(\theta)'\varepsilon-n^{-1}\tilde X(\theta_0)'\varepsilon\|_{\max}
+\|n^{-1}\tilde X(\theta_0)'\varepsilon\|_{\max}.
\]

The first and third terms can both be bounded from above by $O(n^{-1/2}\log p)$ with probability at least $1-O(p^{-\nu})$, following the same lines used to bound $Z_2$ in the proof of Lemma 4. To evaluate the second term, we can use the argument about $V_2$ and its upper bound (A.24) in the proof of Lemma 8. That bound still holds with the same rate $O(n^{-1/2}\log p)$ even if we take $A=\{1,\dots,p\}$. Thus we conclude that (A.15) is true for the given $\lambda$ by choosing an appropriately large positive constant $c_1$, with probability at least $1-O(\pi_{np})$.

Next, to verify (A.16), we derive the population lower bound first and then show that the difference is negligible. From the construction, we have

\[
\mathbb{E}[n^{-1}\tilde X(\theta_0)'\tilde X(\theta_0)]=\mathbb{E}[n^{-1}X'X]=\Lambda_0\Sigma_f\Lambda_0'+\sigma_e^2I_p,
\qquad
\mathbb{E}[n^{-1}\tilde X(\theta_0)'X]=\mathbb{E}[n^{-1}X'\tilde X(\theta_0)]=\Lambda_0\Sigma_f\Lambda_0'.
\]

Using these equations, we obtain the lower bound

\[
\inf_{\delta\in\mathcal V}\delta'\mathbb{E}[U(\theta_0)]\delta/\|\delta\|_2^2
=\inf_{\delta\in\mathcal V}\delta'
\begin{pmatrix}
\Lambda_0\Sigma_f\Lambda_0'+\sigma_e^2I_p & \Lambda_0\Sigma_f\Lambda_0'\\
\Lambda_0\Sigma_f\Lambda_0' & \Lambda_0\Sigma_f\Lambda_0'+\sigma_e^2I_p
\end{pmatrix}
\delta/\|\delta\|_2^2
=\inf_{\delta\in\mathcal V}\delta'\Big\{
\begin{pmatrix}1&1\\1&1\end{pmatrix}\otimes\Lambda_0\Sigma_f\Lambda_0'+\sigma_e^2I_{2p}\Big\}\delta/\|\delta\|_2^2
\ge\sigma_e^2.\tag{A.18}
\]

Because $\delta\in\mathcal V$ is sparse and satisfies $|B|\le k$ for $B:=\mathrm{supp}(\delta)$, it holds that $\delta'U(\theta_0)\delta=\delta_B'U_B(\theta_0)\delta_B$ and $\delta'\mathbb{E}[U(\theta_0)]\delta=\delta_B'\mathbb{E}[U_B(\theta_0)]\delta_B$. Hence, from Lemma 4 together with the condition on the dimensionality, we obtain

\[
\sup_{|B|\le k,\,\theta\in\Theta_{np}}\|U_B(\theta)-\mathbb{E}[U_B(\theta_0)]\|_{\max}=O(k^{1/2}\tilde c_{np})=o(s^{-1})\tag{A.19}
\]

with probability at least $1-O(\pi_{np})$. Thus, using (A.19), we have for any $\delta\in\mathcal V$,

\[
\begin{aligned}
\delta'\mathbb{E}[U(\theta_0)]\delta-\delta'U(\theta)\delta
&=\delta_B'\{\mathbb{E}[U_B(\theta_0)]-U_B(\theta)\}\delta_B
\le\|\delta\|_1^2\sup_{|B|\le k,\,\theta\in\Theta_{np}}\|U_B(\theta)-\mathbb{E}[U_B(\theta_0)]\|_{\max}\\
&=(\|\delta_{S_0}\|_1+\|\delta_{S_1}\|_1)^2\,o(s^{-1})
\lesssim\|\delta_{S_0}\|_1^2\,o(s^{-1})
\le\|\delta_{S_0}\|_2^2\,o(1)
\le\|\delta\|_2^2\,o(1).
\end{aligned}
\]

Rearranging the terms with (A.18) yields

\[
\inf_{\theta\in\Theta_{np},\,\delta\in\mathcal V}\delta'U(\theta)\delta/\|\delta\|_2^2
\ge\inf_{\delta\in\mathcal V}\delta'\mathbb{E}[U(\theta_0)]\delta/\|\delta\|_2^2-|o(1)|
\ge\sigma_e^2-|o(1)|,
\]

resulting in (A.16). In consequence, two inequalities (A.15) and (A.16) hold with probability at least 1 − O(πnp).

Step 2

This part is well known in the literature (e.g., [33]), so we briefly give the proof, omitting the details. Because the objective function is given by

\[
\hat\beta^{\mathrm{aug}}(\theta)=\underset{b\in\mathbb{R}^{2p}}{\arg\min}\ n^{-1}\|y-[X,\tilde X(\theta)]b\|_2^2+\lambda\|b\|_1,
\]

the global optimality of the Lasso estimator implies

\[
(2n)^{-1}\|y-[X,\tilde X(\theta)]\hat\beta^{\mathrm{aug}}(\theta)\|_2^2+\lambda\|\hat\beta^{\mathrm{aug}}(\theta)\|_1
\le(2n)^{-1}\|y-[X,\tilde X(\theta)]\beta^{\mathrm{aug}}\|_2^2+\lambda\|\beta^{\mathrm{aug}}\|_1,
\]

where the true parameter vector $\beta^{\mathrm{aug}}$ was defined in the proof of Theorem 2. Note that $\sup_{\theta\in\Theta_{np}}\|\delta(\theta)\|_0\le k$ by assumption. Expanding the inequality and collecting terms with (A.15) yield

\[
2^{-1}\delta'U(\theta)\delta\le\|n^{-1}\varepsilon'[X,\tilde X(\theta)]\|_{\max}\|\delta\|_1+\lambda\|\delta\|_1\le(3/2)\lambda\|\delta\|_1.\tag{A.20}
\]

On the other hand, applying Lemma 1 of [33] to our model reveals that $\delta\in\mathcal V$. Thus we can use (A.16), (A.20), and (A.17) to get

\[
\|\delta\|_2^2\,(\sigma_e^2+o(1))\le3\lambda\|\delta\|_1=3\lambda(\|\delta_{S_1}\|_1+\|\delta_{S_0}\|_1)\le12\lambda\|\delta_{S_0}\|_1.
\]

Since $|S_0|=s$ and $\|\delta_{S_0}\|_1\le s^{1/2}\|\delta_{S_0}\|_2$, it holds that $\|\delta\|_2\le12s^{1/2}\lambda/(\sigma_e^2+o(1))$. Since $\|\delta_{S_0}\|_2\le\|\delta\|_2$, we obtain the desired bound $\|\delta\|_1\le48s\lambda/(\sigma_e^2+o(1))$. This bound holds uniformly over $\theta\in\Theta_{np}$, which completes the proof of Lemma 5.

D. Additional technical lemmas and their proofs

D.1. Lemma 6 and its proof

Lemma 6

Denote by $V\in\mathbb{R}^{r\times r}$ the diagonal matrix whose entries are the $r$ largest eigenvalues of $(np)^{-1}XX'$, and define $H=(\Lambda_0'\Lambda_0/p)(F_0'\hat F/n)V^{-1}$. Assume that Conditions 2–5 hold. Then $\|H\|_2+\|H^{-1}\|_2+\|V\|_2+\|V^{-1}\|_2$ is bounded from above by some constant with probability at least $1-O(p^{-\nu})$.

Proof. Let $\lambda_k[A]$ denote the $k$th largest eigenvalue of a square matrix $A$ throughout the proof. Because $\|\Lambda_0'\Lambda_0/p\|_2\le M$ and

\[
\|F_0'\hat F/n\|_2\le n^{-1/2}\|F_0\|_2\cdot n^{-1/2}\|\hat F\|_2
\le(rn)^{1/2}n^{-1/2}\|F_0\|_{\max}\big(\lambda_1[n^{-1}\hat F'\hat F]\big)^{1/2}\le r^{1/2}M
\]

by Conditions 2–3 and $\hat F'\hat F/n=I_r$, we have

\[
\|H\|_2\le\|\Lambda_0'\Lambda_0/p\|_2\,\|F_0'\hat F/n\|_2\,\|V^{-1}\|_2\lesssim\|V^{-1}\|_2,
\]

where $\|V^{-1}\|_2$ is equal to the reciprocal of the $r$th largest eigenvalue of $(np)^{-1}XX'$. Similarly, under Conditions 2–3 we also have

\[
\|H^{-1}\|_2\le\|V\|_2\,\|(F_0'\hat F/n)^{-1}\|_2\,\|(\Lambda_0'\Lambda_0/p)^{-1}\|_2\lesssim\|V\|_2\,\|(F_0'\hat F/n)^{-1}\|_2,
\]

where $\|V\|_2$ is equal to the largest eigenvalue of $(np)^{-1}XX'$ and the inverse matrices in the upper bound are well defined by [2]. To see that $\|(F_0'\hat F/n)^{-1}\|_2$ is bounded from above, it suffices to bound the minimum eigenvalue of $F_0'\hat F\hat F'F_0/n^2$ away from zero uniformly in $n$. Regarding the $r$ eigenvalues of this matrix, Sylvester's law of inertia (e.g., [31], Theorem 4.5.8) entails that all $r$ eigenvalues are positive for all $n$. Moreover, by Proposition 1 of [2] we know that the limiting matrix of $\hat F'F_0/n$ is nonsingular under Conditions 2 and 5. Therefore, we can conclude that $\liminf_{n\to\infty}\lambda_r[F_0'\hat F\hat F'F_0/n^2]>0$ a.s., and hence $\|H^{-1}\|_2\lesssim\|V\|_2$ follows.

To complete the proof, it suffices to show that the largest and the $r$th largest eigenvalues of $(np)^{-1}XX'$ are bounded from above and away from zero, respectively, for all large $n$ and $p$. By the definition of the spectral norm and the triangle inequality, we have

\[
\big\{\lambda_1[(np)^{-1}XX']\big\}^{1/2}=(np)^{-1/2}\|X\|_2
\le(np)^{-1/2}\|F_0\Lambda_0'\|_2+(np)^{-1/2}\|E\|_2
\le n^{-1/2}\|F_0\|_2\,p^{-1/2}\|\Lambda_0\|_2+(np)^{-1/2}\|E\|_2.
\]

By Conditions 2 and 3, the first term is a.s. bounded by a constant as discussed above. The second term is $O((np)^{-1/2})=o(1)$ with probability at least $1-2\exp(-|O(np)|)$ by Lemma 9(a) under Condition 4. Therefore, the largest eigenvalue of $(np)^{-1}XX'$ is bounded from above by some constant with probability at least $1-2\exp(-|O(np)|)$.

Next we bound the $r$th largest eigenvalue of $(np)^{-1}XX'$ away from zero. Since the matrix is symmetric, Weyl's inequality (e.g., [31], Theorem 4.3.1) yields

\[
\begin{aligned}
\lambda_r[(np)^{-1}XX']
&=\lambda_r\big[(np)^{-1}\{F_0\Lambda_0'\Lambda_0F_0'+(E\Lambda_0F_0'+F_0\Lambda_0'E')+EE'\}\big]\\
&\ge\lambda_r[(np)^{-1}F_0\Lambda_0'\Lambda_0F_0']+\lambda_n[(np)^{-1}(E\Lambda_0F_0'+F_0\Lambda_0'E')]+\lambda_n[(np)^{-1}EE'].
\end{aligned}\tag{A.21}
\]

The third term of the lower bound (A.21) is obviously nonnegative. For the first term of the lower bound (A.21), let $\mathcal V$ denote a subspace of $\mathbb{R}^n$. Because $F_0\Lambda_0'\Lambda_0F_0'$ is symmetric, the Courant–Fischer min-max theorem (e.g., [31], Theorem 4.2.6) yields

\[
\begin{aligned}
\lambda_r[(np)^{-1}F_0\Lambda_0'\Lambda_0F_0']
&=\max_{\mathcal V:\dim(\mathcal V)=r}\ \min_{v\in\mathcal V\setminus\{0\}}\Big\{(np)^{-1}\frac{v'F_0\Lambda_0'\Lambda_0F_0'v}{v'v}\Big\}
\ge\max_{\mathcal V:\dim(\mathcal V)=r}\ \min_{v\in\mathcal V\setminus\{0\}}\Big(n^{-1}\frac{v'F_0F_0'v}{v'v}\Big)\ \min_{F_0'v\in\mathbb{R}^r\setminus\{0\}}\Big(p^{-1}\frac{v'F_0\Lambda_0'\Lambda_0F_0'v}{v'F_0F_0'v}\Big)\\
&=\lambda_r[n^{-1}F_0'F_0]\,\lambda_r[p^{-1}\Lambda_0'\Lambda_0]
\ge\big(\lambda_r[\Sigma_f]-\|n^{-1}F_0'F_0-\Sigma_f\|_2\big)\lambda_r[p^{-1}\Lambda_0'\Lambda_0]
\ge\big(\lambda_r[\Sigma_f]-r\|n^{-1}F_0'F_0-\Sigma_f\|_{\max}\big)\lambda_r[p^{-1}\Lambda_0'\Lambda_0].
\end{aligned}
\]

In this lower bound, the first term is bounded away from zero by Conditions 2–3. Meanwhile, to evaluate the second term we use Lemma 7(d) in Section D.2, which together with the union bound establishes

\[
\mathbb{P}\big(\|n^{-1}F_0'F_0-\Sigma_f\|_{\max}>u\big)
\le r^2\max_{k,\ell\in\{1,\dots,r\}}\mathbb{P}\Big(\Big|n^{-1}\sum_{i=1}^n\big(f_{ik}^0f_{i\ell}^0-\mathbb{E}f_{ik}^0f_{i\ell}^0\big)\Big|>u\Big)
\le2r^2\exp(-nu^2/C)
\]

for any $0\le u\le c$. Thus the second term turns out to be $O((n^{-1}\log p)^{1/2})=o(1)$ with probability at least $1-O(p^{-\nu})$ once we set $u=(C\nu n^{-1}\log p)^{1/2}$ and assume $n^{-1}\log p\le c^2/(C\nu)$ without loss of generality. Therefore, the first term of the lower bound (A.21) is bounded away from zero eventually. For the second term of (A.21), since the spectral norm bounds the spectral radius from above, we have

\[
\big|\lambda_n[(np)^{-1}(E\Lambda_0F_0'+F_0\Lambda_0'E')]\big|
\le\|(np)^{-1}(E\Lambda_0F_0'+F_0\Lambda_0'E')\|_2
\le2(np)^{-1/2}\|E\|_2\,p^{-1/2}\|\Lambda_0\|_2\,n^{-1/2}\|F_0\|_2
=O((np)^{-1/2})\,O(1)\,O(1)=o(1),
\]

which holds with probability at least $1-2\exp(-|O(np)|)$ by Lemma 9(a) in Section D.4. As a consequence, the desired result holds with probability at least $1-O(p^{-\nu})$, and this concludes the proof of Lemma 6.

D.2. Lemma 7 and its proof

Lemma 7

Assume that Conditions 2–4 hold. Then there exist some positive constants $c$ and $C$ such that the following inequalities hold:

  1. For all $\ell,i\in\{1,\dots,n\}$ and $0\le u\le c$, we have
    $\mathbb{P}\big(\big|p^{-1}\sum_{j=1}^p(e_{\ell j}e_{ij}-\mathbb{E}[e_{\ell j}e_{ij}])\big|>u\big)\le2\exp(-pu^2/C)$.
  2. For all $k\in\{1,\dots,r\}$, $j\in\{1,\dots,p\}$, and $0\le u\le c$, we have
    $\mathbb{P}\big(\big|n^{-1}\sum_{i=1}^nf_{ik}^0e_{ij}\big|>u\big)\le2\exp(-nu^2/C)$.
  3. For all $k\in\{1,\dots,r\}$, $i\in\{1,\dots,n\}$, and $u\ge0$, we have
    $\mathbb{P}\big(\big|p^{-1}\sum_{j=1}^p\lambda_{jk}^0e_{ij}\big|>u\big)\le2\exp(-pu^2/C)$.
  4. For all $k,\ell\in\{1,\dots,r\}$ and $0\le u\le c$, we have
    $\mathbb{P}\big(\big|n^{-1}\sum_{i=1}^n(f_{ik}^0f_{i\ell}^0-\mathbb{E}[f_{ik}^0f_{i\ell}^0])\big|>u\big)\le2\exp(-nu^2/C)$.

Proof. (a) To obtain the first result, we rely on the Hanson–Wright inequality. Let $\xi=(\xi_1,\dots,\xi_m)'\in\mathbb{R}^m$ denote a random vector whose components are independent copies of $e\sim\mathrm{subG}(C_e^2)$. Then the inequality states that for any (nonrandom) matrix $A\in\mathbb{R}^{m\times m}$,

\[
\mathbb{P}\big(|\xi'A\xi-\mathbb{E}\xi'A\xi|>u\big)
\le2\exp\Big\{-\tilde C_H\min\Big(\frac{u^2}{K^4\|A\|_F^2},\frac{u}{K^2\|A\|_2}\Big)\Big\},\tag{A.22}
\]

where $K$ is a positive constant such that $\sup_{k\ge1}k^{-1/2}(\mathbb{E}|e|^k)^{1/k}\le K$ and $\tilde C_H$ is a positive constant. In our setting, we can take $K=3C_e$ (e.g., Lemma 1.4 of [34]). Using this inequality, we first prove the case $\ell=i$. If we set $m=p$ and $A=\mathrm{diag}(p^{-1},\dots,p^{-1})$, then we have

\[
|\xi'A\xi-\mathbb{E}\xi'A\xi|=\Big|p^{-1}\sum_{j=1}^p\big(\xi_j^2-\mathbb{E}\xi_j^2\big)\Big|
\overset{d}{=}\Big|p^{-1}\sum_{j=1}^p\big(e_{ij}^2-\mathbb{E}[e_{ij}^2]\big)\Big|
\]

for all $i$. Moreover, we have $\|A\|_F^2=p^{-1}$ and $\|A\|_2=p^{-1}$ in this case. The assumed condition $0<u\le9C_e^2=K^2$ entails that $u^2/K^4\le u/K^2$, so the result follows from (A.22) with $\tilde C_H$ replaced by $C_H=81C_e^4/\tilde C_H$.

Similarly, we prove the case $\ell\ne i$. We set $m=p+1$ and $A=(a_1,\dots,a_{p+1})$, where $a_1=(0,p^{-1},\dots,p^{-1})'$ and $a_j=0$ for $j=2,\dots,p+1$. That is, the entries of $A$ are all zero except that the second to $(p+1)$th components of the first column are $p^{-1}$. Under this setting, we observe that

\[
|\xi'A\xi-\mathbb{E}\xi'A\xi|=\Big|p^{-1}\sum_{j=2}^{p+1}\xi_1\xi_j\Big|
\overset{d}{=}\Big|p^{-1}\sum_{j=1}^pe_{\ell j}e_{ij}\Big|
\]

for all $\ell\ne i$. Moreover, we have $\|A\|_F^2=\|A\|_2=p^{-1}$ in this case. Therefore, the same bound holds as in the case $\ell=i$ from (A.22). Consequently, for any $0\le u\le9C_e^2$ we have

\[
\mathbb{P}\Big(\Big|p^{-1}\sum_{j=1}^p\big(e_{\ell j}e_{ij}-\mathbb{E}[e_{\ell j}e_{ij}]\big)\Big|>u\Big)\le2\exp(-pu^2/C_H).
\]

(b) We prove the second assertion by Bernstein's inequality for sums of martingale differences (e.g., Theorem 3.14 in [11]). Fix $k=1$ and $j=1$. Define $\mathcal F_i$ as the $\sigma$-field generated by $\{f_\ell^0:\ell=i,i-1,\dots\}$. Then $(f_{i1}^0e_{i1},\mathcal F_i)$ forms a martingale difference sequence because $\mathbb{E}|f_{i1}^0e_{i1}|<\infty$ and $\mathbb{E}[f_{i1}^0e_{i1}\mid\mathcal F_{i-1}]=0$ under Conditions 2 and 4. Since the sub-Gaussianity of $e_{i1}$ implies $\mathbb{E}e_{i1}^2\le4C_e^2$ (e.g., Lemma 1.4 of [34]), we have $V_i:=\mathbb{E}[(f_{i1}^0)^2e_{i1}^2\mid\mathcal F_{i-1}]\le4C_e^2M^2$, and hence $\sum_{i=1}^nV_i\le4nC_e^2M^2$ a.s. due to the boundedness $|f_{i1}^0|\le M$ a.s. On the other hand, by the sub-Gaussianity of $e_{i1}$ and the boundedness of $|f_{i1}^0|$ again, we observe that for all $p\ge3$ and $i\in\{1,\dots,n\}$,

\[
\mathbb{E}\big[(f_{i1}^0e_{i1})^{p}\mid\mathcal F_{i-1}\big]
\le M^{p}(2C_e^2)^{p/2}\,p\,\Gamma(p/2)
\le p!\,(2C_eM)^{p-2}\,V_i/2,
\]

where $\Gamma$ denotes the Gamma function and we have used the estimates $p\,\Gamma(p/2)\le p!$ and $2^{p/2-2}\le2^{p-2}/2$ for $p\ge3$ in the last inequality. Then an application of Theorem 3.14 in [11] with $x=u$, $y=4M^2C_e^2$, and $c=2MC_e$ in their notation gives the one-sided result. Doubling the bound yields

\[
\mathbb{P}\Big(\Big|n^{-1}\sum_{i=1}^nf_{ik}^0e_{ij}\Big|>u\Big)\le2\exp\Big(-\frac{nu^2}{8M^2C_e^2+4MC_eu}\Big).
\]

For all $0\le u\le MC_e^2$, the upper bound is further bounded by $2\exp(-nu^2/(12M^2C_e^2))$. We set $C_I=12M^2C_e^2$. Consequently, for any $0\le u\le MC_e^2$ we have

\[
\mathbb{P}\Big(\Big|n^{-1}\sum_{i=1}^nf_{ik}^0e_{ij}\Big|>u\Big)\le2\exp(-nu^2/C_I).
\]

(c) We prove the third inequality. Note that

\[
\mathbb{P}\big(|\lambda_{jk}^0e_{ij}|>u\big)\le2\exp\Big\{-\frac{u^2}{2(\lambda_{jk}^0)^2C_e^2}\Big\}\le2\exp\Big\{-\frac{u^2}{2M^2C_e^2}\Big\}.
\]

This implies that the $\lambda_{jk}^0e_{ij}$, $j=1,\dots,p$, form a sequence of independent $\mathrm{subG}(M^2C_e^2)$ random variables. Thus the result is obtained directly by Bernstein's inequality for the sum of independent sub-Gaussian random variables. Consequently, for any $u\ge0$, putting $C_J=M^2C_e^2$ leads to

\[
\mathbb{P}\Big(\Big|p^{-1}\sum_{j=1}^p\lambda_{jk}^0e_{ij}\Big|>u\Big)\le2\exp(-pu^2/C_J).
\]

(d) We show the last inequality. Note that for each $k$, $(f_{ik}^0)_i\sim$ i.i.d. $\mathrm{subG}(M^2)$ since $|f_{ik}^0|\le M$ a.s. by Lemma 1.8 of [34] under Condition 2. Thus the rest is the same as in (a). Set $C_K=81M^4/\tilde C_H$ here. Then for any $0\le u\le9M^2$, we have

\[
\mathbb{P}\Big(\Big|n^{-1}\sum_{i=1}^n\big(f_{ik}^0f_{i\ell}^0-\mathbb{E}[f_{ik}^0f_{i\ell}^0]\big)\Big|>u\Big)\le2\exp(-nu^2/C_K).
\]

Finally, the obtained inequalities hold even if the constant in the upper bound is replaced with an arbitrary fixed constant $C$ such that $C\ge\max\{C_H,C_I,C_J,C_K\}$. Similarly, we can also restrict the range of $u$ for each inequality to $0\le u\le c$ for an arbitrary fixed constant $c$ that satisfies $0<c\le\min(9C_e^2,MC_e^2,9M^2)$. This completes the proof of Lemma 7.

D.3. Lemma 8 and its proof

Lemma 8

Assume that Conditions 1–4 hold. Then for any set $A$ satisfying $|A|\le k$, the following statements hold with probability at least $1-O(\pi_{np})$:

\[
\text{(a)}\quad\sup_{\theta\in\Theta_{np}}\|U_A(\theta)-U_A(\theta_0)\|_{\max}=O(k^{1/2}\tilde c_{np}),
\qquad
\text{(b)}\quad\sup_{\theta\in\Theta_{np}}\|v_A(\theta)-v_A(\theta_0)\|_{\max}=O(s^{3/2}\tilde c_{np}),
\]

where $\Theta_{np}$ was defined in Lemma 3 and $\tilde c_{np}=n^{-1/2}\log p+p^{-1/2}\log n$. Consequently, we have

\[
\sup_{\theta\in\Theta_{np}}\|T_A(\theta)-T_A(\theta_0)\|_{\max}=O\big((k^{1/2}+s^{3/2})\tilde c_{np}\big).
\]

Proof. We first state some results that are useful in the proof. Since $\|n^{-1/2}F_0\|_2=O(1)$ a.s. by Condition 2 and $k^{-1/2}\|\Lambda_{A0}\|_2=O(1)$ for any $A$ such that $|A|\le k$ under Condition 3, we first have

\[
n^{-1/2}\|C_{A0}\|_2\le n^{-1/2}\|F_0\|_2\,k^{1/2}\,k^{-1/2}\|\Lambda_{A0}\|_2\lesssim k^{1/2}.
\]

Next, Lemma 9(b) in Section D.4 gives directly

\[
n^{-1/2}\|E_{\eta_0A}\|_2\lesssim1\tag{A.23}
\]

with probability at least $1-O(p^{-\nu})$. By Condition 4, we also deduce

\[
\mathbb{P}\Big(\sup_{\eta\in\Theta_{np}}\|E_\eta-E_{\eta_0}\|_{\max}>u\Big)
\le np\max_{i,j}\mathbb{P}\Big(\sup_{\eta\in\Theta_{np}}|e_{\eta,ij}-e_{\eta_0,ij}|>u\Big)
\le np\max_{i,j}\mathbb{P}\big(|Z|>u/(M^{1/2}c_{np}^{1/2})\big)
\le2np\exp\big(-u^2/(c_e^2Mc_{np})\big)
\]

for any $u\ge0$. Thus, setting $u=2c_eM^{1/2}c_{np}^{1/2}\log^{1/2}(np)$ with some large enough positive constant $M$, we obtain that with probability at least $1-O((np)^{-\nu})$,

\[
\sup_{\eta\in\Theta_{np}}\|E_\eta-E_{\eta_0}\|_{\max}\lesssim c_{np}\log^{1/2}(np)=O(\tilde c_{np}).
\]

We will use these results and Lemma 10 in Section D.5 in the proofs below.

To prove (a), we have

\[
\|U_A(\theta)-U_A(\theta_0)\|_{\max}
\le\|n^{-1}\tilde X_A(\theta)'\tilde X_A(\theta)-n^{-1}\tilde X_A(\theta_0)'\tilde X_A(\theta_0)\|_{\max}
+2\|n^{-1}X_A'\tilde X_A(\theta)-n^{-1}X_A'\tilde X_A(\theta_0)\|_{\max}
=:U_1+U_2.
\]

Observe that $U_1$ is further bounded as

\[
U_1\le\|n^{-1}C_A'C_A-n^{-1}C_{A0}'C_{A0}\|_{\max}
+\|n^{-1}E_{\eta A}'E_{\eta A}-n^{-1}E_{\eta_0A}'E_{\eta_0A}\|_{\max}
+2\|n^{-1}E_{\eta A}'C_A-n^{-1}E_{\eta_0A}'C_{A0}\|_{\max}
=:U_{11}+U_{12}+U_{13}.
\]

By Lemma 10, it is easy to see that

\[
\begin{aligned}
U_{11}&\le\|n^{-1}(C_A-C_{A0})'(C_A-C_{A0})\|_{\max}+2\|n^{-1}C_{A0}'(C_A-C_{A0})\|_{\max}\\
&\le n^{-1/2}\|C_A-C_{A0}\|_{\max}\|C_A-C_{A0}\|_2+2\,n^{-1/2}\|C_{A0}\|_2\,\|C_A-C_{A0}\|_{\max}\\
&\lesssim k^{1/2}\|C-C_0\|_{\max}^2+k^{1/2}\|C-C_0\|_{\max}
=O(k^{1/2}c_{np}^2+k^{1/2}c_{np})=O(k^{1/2}c_{np}),
\end{aligned}
\]

where the last estimate follows from Lemma 3. Similarly, we deduce

\[
\begin{aligned}
U_{12}&\le\|n^{-1}(E_{\eta A}-E_{\eta_0A})'(E_{\eta A}-E_{\eta_0A})\|_{\max}+2\|n^{-1}E_{\eta_0A}'(E_{\eta A}-E_{\eta_0A})\|_{\max}\\
&\le n^{-1/2}\|E_{\eta A}-E_{\eta_0A}\|_{\max}\|E_{\eta A}-E_{\eta_0A}\|_2+2\,n^{-1/2}\|E_{\eta_0A}\|_2\,\|E_{\eta A}-E_{\eta_0A}\|_{\max}\\
&\lesssim k^{1/2}\|E_\eta-E_{\eta_0}\|_{\max}^2+\|E_\eta-E_{\eta_0}\|_{\max}
=O(k^{1/2}\tilde c_{np}^2+\tilde c_{np})
\end{aligned}
\]

and

\[
\begin{aligned}
U_{13}&\le\|n^{-1}(E_{\eta A}-E_{\eta_0A})'(C_A-C_{A0})\|_{\max}+\|n^{-1}E_{\eta_0A}'(C_A-C_{A0})\|_{\max}+\|n^{-1}C_{A0}'(E_{\eta A}-E_{\eta_0A})\|_{\max}\\
&\lesssim k^{1/2}\|E_\eta-E_{\eta_0}\|_{\max}\|C-C_0\|_{\max}+n^{-1/2}\|E_{\eta_0A}\|_2\,\|C-C_0\|_{\max}+n^{-1/2}\|C_{A0}\|_2\,\|E_\eta-E_{\eta_0}\|_{\max}\\
&=O(k^{1/2}\tilde c_{np}c_{np}+c_{np}+k^{1/2}\tilde c_{np})=O(k^{1/2}\tilde c_{np}).
\end{aligned}
\]

Combining these bounds on $U_{11}$–$U_{13}$, we have

\[
U_1\le U_{11}+U_{12}+U_{13}\lesssim k^{1/2}\tilde c_{np}.
\]

This holds uniformly in θ ∈ Θnp with probability at least 1 − O(πnp) by Lemma 3 and the discussion above. Next we obtain

\[
\begin{aligned}
U_2&\le\|n^{-1}C_{A0}'(C_A-C_{A0})\|_{\max}+\|n^{-1}C_{A0}'(E_{\eta A}-E_{\eta_0A})\|_{\max}
+\|n^{-1}E_A'(C_A-C_{A0})\|_{\max}+\|n^{-1}E_A'(E_{\eta A}-E_{\eta_0A})\|_{\max}\\
&\le n^{-1/2}\|C_{A0}\|_2\|C_A-C_{A0}\|_{\max}+n^{-1/2}\|C_{A0}\|_2\|E_{\eta A}-E_{\eta_0A}\|_{\max}
+n^{-1/2}\|E_A\|_2\|C_A-C_{A0}\|_{\max}+n^{-1/2}\|E_A\|_2\|E_\eta-E_{\eta_0}\|_{\max}\\
&=O(k^{1/2}c_{np}+k^{1/2}\tilde c_{np}+c_{np}+\tilde c_{np})=O(k^{1/2}\tilde c_{np}).
\end{aligned}
\]

This also holds uniformly in $\theta\in\Theta_{np}$ with probability at least $1-O(\pi_{np})$ by Lemma 3 and the discussion above. Consequently, it holds that

\[
\sup_{\theta\in\Theta_{np}}\|U_A(\theta)-U_A(\theta_0)\|_{\max}\lesssim k^{1/2}\tilde c_{np}
\]

with probability at least 1 − O(πnp).

To prove (b), we have

\[
\|v_A(\theta)-v_A(\theta_0)\|_{\max}
=\|n^{-1}\tilde X_A(\theta)'y-n^{-1}\tilde X_A(\theta_0)'y\|_{\max}
\le\|n^{-1}\tilde X_A(\theta)'X\beta-n^{-1}\tilde X_A(\theta_0)'X\beta\|_{\max}
+\|n^{-1}\tilde X_A(\theta)'\varepsilon-n^{-1}\tilde X_A(\theta_0)'\varepsilon\|_{\max}
=:V_1+V_2.
\]

First, because $X\beta=X_{S_0}\beta_{S_0}$, we see that

\[
V_1\le s^{1/2}\|n^{-1}\tilde X_A(\theta)'X_{S_0}-n^{-1}\tilde X_A(\theta_0)'X_{S_0}\|_{\max}\|\beta_{S_0}\|_2
\lesssim s\,\|n^{-1}\tilde X_A(\theta)'X_{S_0}-n^{-1}\tilde X_A(\theta_0)'X_{S_0}\|_{\max}.
\]

Recall that $|S_0|=s$ and $s\le n\wedge p$. By a bound similar to that for $U_2$, the norm just above can be bounded further as

\[
\begin{aligned}
&n^{-1/2}\|C_{S_00}\|_2\|C_A-C_{A0}\|_{\max}+n^{-1/2}\|C_{S_00}\|_2\|E_{\eta A}-E_{\eta_0A}\|_{\max}
+n^{-1/2}\|E_{\eta_0S_0}\|_2\|C_A-C_{A0}\|_{\max}+n^{-1/2}\|E_{\eta_0S_0}\|_2\|E_{\eta A}-E_{\eta_0A}\|_{\max}\\
&\qquad\lesssim s^{1/2}\|C-C_0\|_{\max}+s^{1/2}\|E_\eta-E_{\eta_0}\|_{\max}+\|C-C_0\|_{\max}+\|E_\eta-E_{\eta_0}\|_{\max}
=O(s^{1/2}c_{np}+s^{1/2}\tilde c_{np}+c_{np}+\tilde c_{np})=O(s^{1/2}\tilde c_{np}).
\end{aligned}
\]

Thus we have

\[
V_1\lesssim s\cdot s^{1/2}\tilde c_{np}=s^{3/2}\tilde c_{np}
\]

with probability at least $1-O(\pi_{np})$. Next, the same procedure yields

\[
V_2\le\|\tilde X_A(\theta)-\tilde X_A(\theta_0)\|_{\max}\;n^{-1/2}\|\varepsilon\|_2
\lesssim\|C-C_0\|_{\max}+\|E_\eta-E_{\eta_0}\|_{\max}\lesssim\tilde c_{np},\tag{A.24}
\]

where $n^{-1/2}\|\varepsilon\|_2=(\mathbb{E}\varepsilon^2)^{1/2}+o(1)$ a.s. by the law of large numbers for independent random variables. Since the results hold uniformly in $\theta\in\Theta_{np}$, combining them leads to

\[
\sup_{\theta\in\Theta_{np}}\|v_A(\theta)-v_A(\theta_0)\|_{\max}\lesssim s^{3/2}\tilde c_{np}
\]

with probability at least $1-O(\pi_{np})$. This concludes the proof of Lemma 8.

D.4. Lemma 9 and its proof

Lemma 9

Assume that Condition 4 holds. Then the following statements hold

  1. We have
    $\mathbb{P}\big((np)^{-1/2}\|E\|_2\lesssim1\big)\ge1-2\exp(-|O(np)|)$;
  2. For any fixed set $A$ with $|A|\le k\le n$, we have
    $\mathbb{P}\big(n^{-1/2}\|E_A\|_2\lesssim1\big)\ge1-2p^{-\nu}$;
  3. For all $k\le n$, we have
    $\mathbb{P}\big(\max_{|A|\le k}n^{-1/2}\|E_A\|_2\lesssim1+(n^{-1}k\log p)^{1/2}\big)\ge1-2p^{-\nu}$,

where $\nu>0$ is a predetermined constant.

Proof. Result (a) is obtained by Theorem 5.39 of [40]. Moreover, by the same theorem there exist some positive constants $c$ and $C$ such that for any $A$ with $|A|\le k\le n$ and every $t\ge0$,

\[
\mathbb{P}\big(\sigma_e^{-1}n^{-1/2}\|E_A\|_2>1+C+n^{-1/2}t\big)\le2\exp(-ct^2),
\]

where $\sigma_e^2=\mathbb{E}e^2$. Therefore, result (b) is immediately obtained by putting $t^2=c^{-1}\nu\log p$, since $n^{-1/2}t=o(1)$ and $\exp(-ct^2)=p^{-\nu}$ in this case.

For (c), taking the union bound leads to

\[
\mathbb{P}\Big(\sigma_e^{-1}\max_{|A|\le k}n^{-1/2}\|E_A\|_2>1+C+n^{-1/2}t\Big)
\le\binom{p}{k}\max_{|A|\le k}\mathbb{P}\big(\sigma_e^{-1}n^{-1/2}\|E_A\|_2>1+C+n^{-1/2}t\big)
\le2p^k\exp(-ct^2).
\]

Set $t^2=c^{-1}(\nu+k)\log p$ in this inequality. Then we have $n^{-1/2}t=O((n^{-1}k\log p)^{1/2})$ and

\[
2p^k\exp(-ct^2)\le2p^k\exp\{-(\nu+k)\log p\}=2p^{-\nu},
\]

which gives result (c) and completes the proof of Lemma 9.

D.5. Lemma 10 and its proof

Lemma 10

For matrices $A\in\mathbb{R}^{k_1\times n}$ and $B\in\mathbb{R}^{n\times k_2}$, we have $\|AB\|_{\max}\le n^{1/2}\|A\|_2\|B\|_{\max}$ and $\|AB\|_{\max}\le n^{1/2}\|A\|_{\max}\|B\|_2$.

Proof. For any matrix $M=(m_{ij})\in\mathbb{R}^{k\times n}$, let $\|M\|_{\infty,\infty}$ denote the induced $\ell_\infty$-norm. First, we have

\[
\|M\|_{\infty,\infty}:=\sup_{v\in\mathbb{R}^n\setminus\{0\}}\frac{\|Mv\|_{\max}}{\|v\|_{\max}}
\le\sup_{v\in\mathbb{R}^n\setminus\{0\}}\frac{\|Mv\|_2}{\|v\|_2}\cdot\frac{\|v\|_2}{\|v\|_{\max}}
\le n^{1/2}\|M\|_2.
\]

Therefore, by a simple calculation we see that

\[
\|AB\|_{\max}=\|\mathrm{vec}(AB)\|_{\max}=\|(I_{k_2}\otimes A)\,\mathrm{vec}(B)\|_{\max}
\le\|I_{k_2}\otimes A\|_{\infty,\infty}\,\|\mathrm{vec}(B)\|_{\max}
=\|A\|_{\infty,\infty}\,\|B\|_{\max}
\le n^{1/2}\|A\|_2\,\|B\|_{\max}.
\]

The second assertion follows from applying this inequality to $B'A'$. This concludes the proof of Lemma 10.

E. Additional numerical details and results

E.1. Estimation procedure

In implementing the IPAD algorithm suggested in Section 2, we use the $PC_{p1}$ criterion proposed in [3] to estimate the number of factors $r$. With an estimated number of factors $\hat r$, we use the principal component method discussed in Section 3.2 to obtain an estimate $\hat C$ of the matrix $C_0$. Denote by $\hat E=(\hat e_{ij})=X-\hat C$. Recall that in the construction of knockoff variables, the distribution of $E$ needs to be estimated. Throughout our simulation studies, we misspecify the model and treat the entries of $E$ as i.i.d. Gaussian random variables. Under this working model assumption, the only unknown parameter is the variance, which can be estimated by the maximum likelihood estimator

\[
\hat\sigma^2=(np)^{-1}\sum_{i=1}^n\sum_{j=1}^p\hat e_{ij}^2.
\]

Then the knockoffs matrix $\hat X$ is constructed using (8) with the entries of $E_{\hat\eta}$ drawn independently from $N(0,\hat\sigma^2)$. For the two comparison methods BCKnockoff and HD-BCKnockoff, we follow the implementations in [4] and [5], respectively. Thus neither BCKnockoff nor HD-BCKnockoff uses the factor structure in $X$ when constructing the knockoff variables.
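The following sketch illustrates this construction under the i.i.d. Gaussian working model; it is our own illustration rather than the authors' code, the number of factors is taken as given instead of being chosen by the $PC_{p1}$ criterion, and the function name is hypothetical.

```python
import numpy as np

def ipad_knockoffs(X, r, rng=None):
    """Construct factor-model knockoffs: estimated common component plus fresh Gaussian noise.

    X : (n, p) design matrix; r : assumed number of factors.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    # Principal component estimates of factors and loadings
    # (F_hat scaled so that F_hat' F_hat / n = I_r).
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    F_hat = np.sqrt(n) * U[:, :r]
    Lam_hat = X.T @ F_hat / n
    C_hat = F_hat @ Lam_hat.T
    # MLE of the noise variance under the i.i.d. Gaussian working model.
    E_hat = X - C_hat
    sigma2_hat = np.mean(E_hat ** 2)
    # Knockoffs: common component plus independently drawn idiosyncratic noise.
    return C_hat + rng.normal(0.0, np.sqrt(sigma2_hat), size=(n, p))

# Toy usage on simulated factor data.
rng = np.random.default_rng(1)
n, p, r = 200, 100, 3
F = rng.normal(size=(n, r)); L = rng.normal(size=(p, r))
X = F @ L.T + rng.normal(size=(n, p))
X_knock = ipad_knockoffs(X, r, rng=rng)
```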

In Designs 1–3, with the constructed empirical knockoffs matrix $\hat X$ we apply the Lasso method to fit the model with $y$ the response vector and $[X,\hat X]$ the augmented design matrix. The value of the regularization parameter $\lambda$ is chosen by tenfold cross-validation. Then the LCD discussed in Section 2.2 is used in the construction of the knockoff statistics. In Design 4, we assume a nonlinear relationship between the response and the covariates. In this case, random forest is used for estimation of the model. To construct the knockoff statistics, we use the variable importance measure of mean decrease in accuracy (MDA) introduced in [14]. This measure is based on the idea that if a variable is unimportant, then rearranging its values should not degrade the prediction accuracy. The MDA for the $j$th variable, denoted as $\widehat{\mathrm{MDA}}_j$, measures the increase in prediction error when the values of the $j$th variable in the out-of-sample prediction are permuted randomly. Intuitively, $\widehat{\mathrm{MDA}}_j$ will be small and close to zero if the $j$th variable is unimportant in predicting the response. For each original variable $x_j$, we compute the statistic $W_j=|\widehat{\mathrm{MDA}}_j|-|\widehat{\mathrm{MDA}}_{j+p}|$, $j=1,\dots,p$.
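As a rough illustration of this Design 4 construction (not the authors' implementation), the sketch below approximates MDA by scikit-learn's out-of-sample permutation importance on a held-out split and forms the $W_j$ statistics from the augmented design; the split, forest settings, and function name are our own choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def mda_knockoff_stats(X, X_knock, y, random_state=0):
    """Return W_j = |MDA_j| - |MDA_{j+p}| from a random forest fit on [X, X_knock]."""
    p = X.shape[1]
    XX = np.hstack([X, X_knock])                      # augmented design with 2p columns
    X_tr, X_te, y_tr, y_te = train_test_split(XX, y, test_size=0.3,
                                              random_state=random_state)
    rf = RandomForestRegressor(n_estimators=500, random_state=random_state)
    rf.fit(X_tr, y_tr)
    # Out-of-sample permutation importance as a stand-in for MDA.
    imp = permutation_importance(rf, X_te, y_te, n_repeats=10,
                                 random_state=random_state).importances_mean
    return np.abs(imp[:p]) - np.abs(imp[p:])
```

The resulting vector of $W_j$'s can then be thresholded exactly as in the sketch following Result 1 in Section B.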

E.2. Simulation study

To evaluate the performance of the IPAD approach in terms of empirical FDR and power with real economic data, we set up one additional Monte Carlo simulation study. In this design, we use the transformed macroeconomic variables described above as the design matrix $X$, but simulate the response $y$ from the model in Design 1 in Section 4.1. We set the number of true signals, the amplitude of signals, and the target FDR level to $s=10$, $A=4$, and $q=0.2$, respectively.

Table 7 shows the results for the IPAD and HD-BCKnockoff approaches. As expected, HD-BCKnockoff controls FDR but suffers from a lack of power. On the other hand, IPAD has empirical FDR slightly higher than the target level ($q=0.2$) while its power is reasonably high. These results are consistent with our theory in Section 3 because IPAD only controls FDR asymptotically. An additional reason for the slightly higher FDR can be deviations of the design matrix from our factor model assumption. Overall, this simulation study indicates that IPAD can control FDR at around the target level with reasonably high power when applied to the macroeconomic data set. In the next section, using the same data set we compare the forecasting performance of IPAD with that of some commonly used forecasting methods in the literature.

Table 7:

Real data simulation results with (n,p) = (195,109)

                    FDR     Power   FDR+    Power+  R2

c = 0.2
  IPAD              0.278   0.812   0.223   0.796   0.747
  HD-BCKnockoff     0.096   0.009   0.010   0.002   0.758

c = 0.3
  IPAD              0.280   0.757   0.221   0.723   0.665
  HD-BCKnockoff     0.149   0.121   0.027   0.036   0.678

c = 0.5
  IPAD              0.286   0.661   0.215   0.571   0.560
  HD-BCKnockoff     0.119   0.009   0.008   0.001   0.554

E.3. Methods of comparison in empirical analysis

We compare the following methods in the empirical analysis presented in Section 5, where each method is implemented in the same way as IPAD for one-step-ahead prediction.

  1. Autoregression of order one (AR(1)). Assume that
    $y_t=\alpha_0+\rho y_{t-1}+\varepsilon_t$,
    where $y_t$ is regressed on $y_{t-1}$, and $\alpha_0$ and $\rho$ are the AR(1) coefficients that need to be estimated. With the ordinary least squares estimates $\hat\alpha_0$ and $\hat\rho$, the one-step-ahead prediction based on this model is $\hat y_{T+1}=\hat\alpha_0+\hat\rho y_T$.
  2. Factor-augmented AR(1) (FAR). We first extract $m$ factors $f_1,\dots,f_m$ from the 109 transformed macroeconomic variables by principal component analysis (PCA). Denote by $\tilde f_t\in\mathbb{R}^m$ the factor vector at time $t$ extracted from the rows of the matrix $[f_1,\dots,f_m]\in\mathbb{R}^{n\times m}$. Then we regress $y_t$ on $y_{t-1}$ and $\tilde f_{t-1}$ and fit the following model
    $y_t=\alpha_0+\rho y_{t-1}+\gamma'\tilde f_{t-1}+\varepsilon_t$
    with $\gamma\in\mathbb{R}^m$. The number of factors $m$ is determined using the $PC_{p1}$ criterion in [3]. Similar to the AR(1) model, the one-step-ahead forecast of $y_t$ at time $T$ is
    $\hat y_{T+1}=\hat\alpha_0+\hat\rho y_T+\hat\gamma'\tilde f_T$.
    (A minimal sketch of the one-step-ahead forecasts for these first two methods appears after this list.)
  3. Lasso method. Here $y_t$ is regressed on $y_{t-1}$, $\tilde f_{t-1}$, and the 108 transformed macroeconomic variables $z_{t-1}\in\mathbb{R}^{108}$ at time $t-1$,
    $y_t=\alpha_0+\rho y_{t-1}+\gamma'\tilde f_{t-1}+\delta'z_{t-1}+\varepsilon_t$,
    where $\tilde f_t$ is the same as in the FAR model, and $\alpha_0$, $\rho$, $\gamma$, and $\delta\in\mathbb{R}^{108}$ are regression coefficients that need to be estimated. The coefficients are estimated by the Lasso method with the regularization parameter chosen by cross-validation. With the estimated Lasso coefficient vector $\hat\beta_{\mathrm{Lasso}}$, the one-step-ahead forecast of $y_t$ at time $T$ is
    $\hat y_{T+1}=\hat\beta_{\mathrm{Lasso}}'x_T$,
    where $x_T$ is the augmented predictor vector at time $T$.
  4. IPAD method. We regress $y_t$ on the augmented vector $(y_{t-1},z_{t-1}')'$. The lagged variable $y_{t-1}$ is assumed to be always in the model. To account for this, we implement IPAD in three steps. First, we regress $y_t$ on $y_{t-1}$ and obtain the residuals $e_{y,t}$. Second, we regress each of the 108 variables in $z_{t-1}$ on $y_{t-1}$ and obtain the residual vector $e_{z,t-1}$. Finally, we fit model (1)–(2) using the IPAD approach by treating $e_{y,t}$ as the response and $e_{z,t-1}$ as predictors, which returns a set of selected variables (a subset of the 108 macroeconomic variables). With the set of variables $\hat S$ selected by IPAD, we fit the following model by least-squares regression
    $y_t=\alpha_0+\rho y_{t-1}+\delta'z_{t-1,\hat S}+\varepsilon_t$,  (A.25)
    where $z_{t,\hat S}$ stands for the subvector of $z_t$ corresponding to the set of variables $\hat S$ selected by IPAD at time $t$. Since $\hat S$ from IPAD is random due to the randomness in generating the knockoff variables, we apply the IPAD procedure 100 times, compute the 100 one-step-ahead predictions based on (A.25), and use their average as the final predicted value of $y_{T+1}$.
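For concreteness, here is a minimal numpy sketch (our own, with hypothetical function names) of the one-step-ahead forecasts for the AR(1) and FAR benchmarks; the number of factors is fixed rather than chosen by the $PC_{p1}$ criterion.

```python
import numpy as np

def ols_forecast(Z, y, z_next):
    """Fit y ~ [1, Z] by OLS and predict at the new predictor vector z_next."""
    Z1 = np.column_stack([np.ones(len(y)), Z])
    coef, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    return np.concatenate([[1.0], np.atleast_1d(z_next)]) @ coef

def pca_factors(Z, m):
    """First m principal-component factors of the standardized panel Z."""
    Zc = (Z - Z.mean(0)) / Z.std(0)
    U, d, Vt = np.linalg.svd(Zc, full_matrices=False)
    return np.sqrt(Zc.shape[0]) * U[:, :m]

def ar1_forecast(y):
    """One-step-ahead AR(1) forecast of y_{T+1} from the series y_1, ..., y_T."""
    return ols_forecast(y[:-1, None], y[1:], y[-1])

def far_forecast(y, Z, m=3):
    """One-step-ahead factor-augmented AR(1) forecast of y_{T+1}."""
    F = pca_factors(Z, m)
    X = np.column_stack([y[:-1], F[:-1]])   # regress y_t on y_{t-1} and factors at t-1
    return ols_forecast(X, y[1:], np.concatenate([[y[-1]], F[-1]]))
```

The Lasso and IPAD forecasts in the comparison replace the OLS step above with penalized or knockoff-filtered regressions on the corresponding augmented predictor sets.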

References

  • [1] Ahn SC and Horenstein AR (2013). Eigenvalue ratio test for the number of factors. Econometrica 81, 1203–1227.
  • [2] Bai J (2003). Inferential theory for factor models of large dimensions. Econometrica 71, 135–171.
  • [3] Bai J and Ng S (2002). Determining the number of factors in approximate factor models. Econometrica 70, 191–221.
  • [4] Barber RF and Candès EJ (2015). Controlling the false discovery rate via knockoffs. The Annals of Statistics 43, 2055–2085.
  • [5] Barber RF and Candès EJ (2016). A knockoff filter for high-dimensional selective inference. arXiv preprint arXiv:1602.03574.
  • [6] Barber RF, Candès EJ, and Samworth RJ (2018). Robust inference with knockoffs. arXiv preprint arXiv:1801.03896.
  • [7] Belloni A, Chernozhukov V, Chetverikov D, Hansen C, and Kato K (2018). High-dimensional econometrics and regularized GMM. arXiv preprint arXiv:1806.01888.
  • [8] Benjamini Y (2010). Discovering the false discovery rate. Journal of the Royal Statistical Society Series B 72, 405–416.
  • [9] Benjamini Y and Hochberg Y (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B 57, 289–300.
  • [10] Benjamini Y and Yekutieli D (2001). The control of the false discovery rate in multiple testing under dependency. The Annals of Statistics 29, 1165–1188.
  • [11] Bercu B, Delyon B, and Rio E (2015). Concentration Inequalities for Sums and Martingales (1st ed.). Springer.
  • [12] Billingsley P (1995). Probability and Measure (3rd ed.). Wiley-Interscience.
  • [13] Bonferroni CE (1935). Il calcolo delle assicurazioni su gruppi di teste. Studi in Onore del Professore Salvatore Ortu Carboni, 13–60.
  • [14] Breiman L (2001). Random forests. Machine Learning 45, 5–32.
  • [15] Candès EJ, Fan Y, Janson L, and Lv J (2018). Panning for gold: ‘model-X’ knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society Series B 80, 551–577.
  • [16] Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C, Newey W, and Robins J (2018). Double/debiased machine learning for treatment and structural parameters. Econometrics Journal 21, C1–C68.
  • [17] Chernozhukov V, Härdle WK, Huang C, and Wang W (2018). Lasso-driven inference in time and space. arXiv preprint arXiv:1806.05081.
  • [18] Chernozhukov V, Newey W, and Robins J (2018). Double/de-biased machine learning using regularized Riesz representers. arXiv preprint arXiv:1802.08667.
  • [19] Chudik A, Kapetanios G, and Pesaran H (2018). A one covariate at a time, multiple testing approach to variable selection in high-dimensional linear regression models. Econometrica, to appear.
  • [20] De Mol C, Giannone D, and Reichlin L (2008). Forecasting using a large number of predictors: Is Bayesian shrinkage a valid alternative to principal components? Journal of Econometrics 146, 318–328.
  • [21] Diebold FX and Mariano RS (1995). Comparing predictive accuracy. Journal of Business & Economic Statistics 20, 134–144.
  • [22] Durrett R (2010). Probability: Theory and Examples (4th ed.). Cambridge University Press.
  • [23] Fan J and Fan Y (2008). High-dimensional classification using features annealed independence rules. The Annals of Statistics 36, 2605–2637.
  • [24] Fan J, Han X, and Gu W (2012). Estimating false discovery proportion under arbitrary covariance dependence (with discussion). Journal of the American Statistical Association 107, 1019–1045.
  • [25] Fan J and Lv J (2008). Sure independence screening for ultrahigh dimensional feature space (with discussion). Journal of the Royal Statistical Society Series B 70, 849–911.
  • [26] Fan Y, Demirkaya E, Li G, and Lv J (2019). RANK: large-scale inference with graphical nonlinear knockoffs. Journal of the American Statistical Association, to appear.
  • [27] Fan Y, Demirkaya E, and Lv J (2019). Nonuniformity of p-values can occur early in diverging dimensions. Journal of Machine Learning Research, to appear.
  • [28] Fan Y and Lv J (2013). Asymptotic equivalence of regularization methods in thresholded parameter space. Journal of the American Statistical Association 108, 1044–1061.
  • [29] Guo Z, Kang H, Cai TT, and Small DS (2018). Confidence intervals for causal effects with invalid instruments by using two-stage hard thresholding with voting. Journal of the Royal Statistical Society Series B 80, 793–815.
  • [30] Holm S (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70.
  • [31] Horn RA and Johnson CR (2012). Matrix Analysis (2nd ed.). Cambridge University Press.
  • [32] Lv J (2013). Impacts of high dimensionality in finite samples. The Annals of Statistics 41, 2236–2262.
  • [33] Negahban SN, Ravikumar P, Wainwright MJ, and Yu B (2012). A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science 27, 538–557.
  • [34] Rigollet P and Hütter J-C (2017). High Dimensional Statistics. Massachusetts Institute of Technology, MIT OpenCourseWare.
  • [35] Romano JP and Wolf M (2005). Exact and approximate stepdown methods for multiple hypothesis testing. Journal of the American Statistical Association 100, 94–108.
  • [36] Shah RD and Bühlmann P (2018). Goodness-of-fit tests for high dimensional linear models. Journal of the Royal Statistical Society Series B 80, 113–135.
  • [37] Stock JH and Watson MW (2012). Generalized shrinkage methods for forecasting using many predictors. Journal of Business & Economic Statistics 30, 481–493.
  • [38] Stucky B and van de Geer S (2018). Asymptotic confidence regions for high-dimensional structured sparsity. IEEE Transactions on Signal Processing 66, 2178–2190.
  • [39] Tibshirani R (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B 58, 267–288.
  • [40] Vershynin R (2012). Introduction to the non-asymptotic analysis of random matrices. In Eldar YC and Kutyniok G (Eds.), Compressed Sensing: Theory and Practice, pp. 210–268. Cambridge University Press.
  • [41] Vizcarra AB and Viens FG (2007). Some applications of the Malliavin calculus to sub-Gaussian and non-sub-Gaussian random fields. In Dalang RC, Dozzi M, and Russo F (Eds.), Seminar on Stochastic Analysis, Random Fields and Applications V, pp. 363–395. Springer Science & Business Media.
  • [42] Wooldridge JM and Zhu Y (2018). Inference in approximately sparse correlated random effects probit models. Journal of Business & Economic Statistics, to appear.
  • [43] Zhang X and Cheng G (2017). Simultaneous inference for high-dimensional linear models. Journal of the American Statistical Association 112, 757–768.
