Abstract
Interpretability and stability are two important features that are desired in many contemporary big data applications arising in statistics, economics, and finance. While the former is enjoyed to some extent by many existing forecasting approaches, the latter, in the sense of controlling the fraction of wrongly discovered features, which can greatly enhance interpretability, is still largely underdeveloped. To this end, in this paper we exploit the general framework of model-X knockoffs introduced recently in Candès, Fan, Janson and Lv (2018), which is nonconventional for reproducible large-scale inference in that the framework is completely free of the use of p-values for significance testing, and suggest a new method of intertwined probabilistic factors decoupling (IPAD) for stable interpretable forecasting with knockoffs inference in high-dimensional models. The recipe of the method is to construct the knockoff variables by assuming a latent factor model, which is exploited widely in economics and finance, for the association structure of the covariates. Our method and work are distinct from the existing literature in that we estimate the covariate distribution from data instead of assuming that it is known when constructing the knockoff variables, our procedure does not require any sample splitting, we provide theoretical justifications on the asymptotic false discovery rate control, and the theory for the power analysis is also established. Several simulation examples and the real data analysis further demonstrate that the newly suggested method has appealing finite-sample performance with desired interpretability and stability compared to some popularly used forecasting methods.
Keywords: Reproducibility, Power, Latent factors, Model-X knockoffs, Large-scale inference and FDR, Stability
1. Introduction
Forecasting is a fundamental problem that arises in statistics, economics, and finance. With the availability of big data, many machine learning algorithms such as the Lasso and random forest can be resorted to for such a purpose by exploring a large pool of potential features. Many of these existing procedures provide a certain measure of feature importance which can then be utilized to judge the relative importance of selected features for the goal of interpretability. Yet the issue of stability in the sense of controlling the fraction of wrongly discovered features is still largely underdeveloped. As argued in [20] in the econometric settings, it is difficult to obtain interpretability and stability simultaneously even in simple Lasso forecasting. A natural question is how to ensure both interpretability and stability for flexible forecasting.
Naturally stability is related to statistical inference. The recent years have witnessed a growing body of work on high-dimensional inference in the statistics and econometrics literature; see, for example, [42], [38], [43], [16], [18], [17], [36], and [29]. Most existing work on high-dimensional inference for interpretable models has focused primarily on the aspects of post-selection inference known as selective inference and debiasing for regularization and machine learning methods. In real applications, one is often interested in conducting global inference relative to the full model as opposed to local inference conditional on the selected model. Moreover, many statistical inferences are based on p-values from significance testing. However, oftentimes obtaining valid p-values even for the Lasso in relatively complicated high-dimensional nonlinear models also remains largely unresolved, not to mention for the case of more complicated model fitting procedures such as random forest. Indeed high-dimensional inference is intrinsically challenging even in the parametric settings [27].
The desired property of stability for interpretable forecasting in this paper concentrates on global inference by controlling precisely the fraction of wrongly discovered features in high-dimensional models, which is also known as reproducible large-scale inference. Such a problem involves testing the joint significance of a large number of features simultaneously, which is known widely as the problem of multiple testing in statistical inference. For this problem, the null hypothesis for each feature states that the feature is unimportant in the joint model, which can be understood as the property that this individual feature and the response are independent conditional on all the remaining features, while the corresponding alternative hypothesis states the opposite. Conventionally, p-values from the hypothesis testing are used to decide whether or not to reject each null hypothesis with a significance level to control the probability of false discovery in a single hypothesis test, meaning rejecting the null hypothesis when it is true. When performing multiple hypothesis tests, the probability of making at least one false discovery, which is known as the family-wise error rate, can be inflated compared to that for the case of a single hypothesis test. The work on controlling such an error rate for multiple testing dates back to [13], where a simple, useful idea is lowering the significance level for each individual test to the target level divided by the total number of tests to be performed. The Bonferroni correction procedure is, however, well known to be conservative with relatively low power. Later on, [30] proposed a stepdown procedure which is less conservative than the Bonferroni procedure. More recently, [35] suggested a procedure in which the critical values of individual tests are constructed sequentially.
A more powerful and extremely popular approach to multiple testing is the Benjamini–Hochberg (BH) procedure for controlling the false discovery rate (FDR) which was originated in [9], where the FDR is defined as the expectation of the fraction of falsely rejected null hypotheses known as the false discovery proportion. Given the p-values from the multiple hypothesis tests, this procedure sorts the p-values from low to high and chooses a simple, intuitive cutoff point, which can be viewed as an adaptive extension of the Bonferroni correction for multiple comparisons, of the p-values for rejecting the null hypotheses. The BH procedure was shown to be capable of controlling the FDR at the desired level for independent test statistics in [9] and for positive regression dependency among the test statistics in [10], where it was shown that a simple modification of the procedure can control the FDR under other forms of dependency but such a modification is generally conservative. There is a huge literature on the theory, applications, and various extensions of the original BH procedure for FDR control; see, for instance, [8], [24], [7], and [19].
The aforementioned econometric and statistical inference methods including the BH-type procedures for FDR control are all rooted in the availability and validity of computable p-values for evaluating variable importance. As mentioned before, such a prerequisite can become a luxury that is largely unclear how to obtain in high dimensions even for the case of Lasso in general nonlinear models and random forest. In contrast, [4] proposed a novel procedure named the knockoff filter for FDR control that bypasses the use of p-values in the Gaussian linear model with a deterministic design matrix, where the dimensionality is no larger than the sample size, and [5] generalized the method to high-dimensional linear models as a two-step procedure based on sample splitting, where a feature screening approach is used to reduce the dimensionality to below the sample size (see, e.g., [23] and [25]) and then the knockoff filter is applied to the set of selected features after the screening step for selective inference. The key ingredient of the knockoff filter is constructing the so-called knockoff variables in a geometrical way that mimics perfectly the correlation structure among the original covariates and can be used as control variables to evaluate the importance of the original variables. Recently, [15] extended the work of [4] by introducing the framework of model-X knockoffs for FDR control in general high-dimensional nonlinear models. A crucial distinction is that the knockoff variables are constructed in a probabilistic fashion such that the joint dependency structure of the original variables and their knockoff copies is invariant to the swapping of any set of original variables and their knockoff counterparts, which enables us to go beyond linear models and handle high dimensionality. As a result, model-X knockoffs enjoys exact finite-sample FDR control at the target level. However, a major assumption in [15] is that the joint distribution of all the covariates needs to be known for the valid FDR control.
Motivated by applications in economics and finance, in this paper we model the association structure of the covariates using the latent factor model, which reduces effectively the dimensionality and enables reliable estimation of the unknown joint distribution of all the covariates. By taking into account the latent factor model structure, we first estimate the association structure of covariates and then construct empirical knockoffs matrix using the estimated dependency structure. Our empirical knockoffs matrix can be regarded as an approximation to the oracle knockoffs matrix in [15] that requires the knowledge of the true covariate distribution. Exploiting the general framework of model-X knockoffs in [15], we suggest the new method of intertwined probabilistic factors decoupling (IPAD) for stable interpretable forecasting with knockoffs inference in high-dimensional models. The innovations of our method and work are fourfold. First, we estimate the covariate distribution from data instead of assuming that it is known when constructing the knockoff variables. Second, our procedure does not require any sample splitting and is thus more practical when the sample size is limited. Third, we provide theoretical justifications on the asymptotic false discovery rate control when the estimated dependency structure is employed. Fourth, the theory for power analysis is also established which reveals that there can be asymptotically no power loss in applying the knockoffs procedure compared to the underlying variable selection method. Therefore, FDR control by knockoffs can be a pure gain. Compared to earlier work, an additional challenge of our study is that knowing the true underlying distribution does not lead to the most efficient construction of the oracle knockoffs matrix due to the presence of latent factors.
The rest of the paper is organized as follows. Section 2 introduces the model setting and presents the new IPAD procedure. We establish the asymptotic properties of IPAD in Section 3. Sections 4 and 5 present several simulation and real data examples to showcase the finite-sample performance and the advantages of our newly suggested procedure compared to some popularly used ones. We discuss some implications and extensions of our work in Section 6. The proofs of the main results are relegated to the Appendix. Additional technical details and numerical results are provided in the Supplementary Material.
2. Intertwined probabilistic factors decoupling
To facilitate the technical presentation, we will introduce the model setting for the high-dimensional FDR control problem in Section 2.1 and present the new IPAD procedure in Section 2.2.
2.1. Model setting
Consider the high-dimensional linear regression model
y = Xβ + ε,    (1)
where y ∈ ℝn is the response vector, X ∈ ℝn×p is the random matrix of a large number of potential regressors, β = (β1, ⋯, βp)′ ∈ ℝp is the regression coefficient vector, ε ∈ ℝn is the vector of model errors, and n and p denote the sample size and dimensionality, respectively. Here without loss of generality, we assume that both the response and the covariates are centered with mean zero and thus there is no intercept. Motivated by many applications in economics and finance, we further assume that the design matrix X follows the exact factor model
X = F0Λ0′ + E,    (2)
where F0 ∈ ℝn×r is a random matrix of latent factors, Λ0 ∈ ℝp×r is a matrix of deterministic factor loadings, and the error matrix E ∈ ℝn×p captures the remaining variation that cannot be explained by these latent factors. We assume that the number of factors r is fixed but unknown and the components of E are independent and identically distributed (i.i.d.) from some parametric distribution with cumulative distribution function G(·;η0), where η0 ∈ ℝm is a finite-dimensional unknown parameter vector. For technical simplicity, models (1) and (2) are assumed to have no endogeneity and satisfy that F0 has i.i.d. rows and is independent of E.
In this paper, we focus on the high-dimensional scenario when the dimensionality p can be much larger than sample size n. Therefore, to ensure model identifiability we impose the sparsity assumption that the true regression coefficient vector β has only a small portion of nonzeros; specifically, β takes nonzero values only on some (unknown) index set S0 ⊂ {1, ⋯, p} and βj = 0 for all j ∉ S0. Denote by s = |S0| the size of S0. We assume that s = o(n) throughout the paper.
We are interested in identifying the index set S0 with a theoretically guaranteed error rate. To be more precise, we try to select variables in S0 while keeping the false discovery rate (FDR) under some prespecified desired level q ∈ (0,1), where the FDR is defined as
FDR = E[FDP]  with  FDP = |Ŝ \ S0| / (|Ŝ| ∨ 1).    (3)
Here the FDP stands for the false discovery proportion and Ŝ represents the set of variables selected by some procedure using the observed data (X, y). A slightly modified version of the FDR is defined as
mFDR = E[ |Ŝ \ S0| / (|Ŝ| + q−1) ].    (4)
Clearly, FDR is more conservative than mFDR in that the latter is always under control if the former is.
It is easy to see that FDR is a measurement of type I error for variable selection. The other important aspect of variable selection is power, which is defined as
Power = E[ |Ŝ ∩ S0| / |S0| ].    (5)
It is well known that FDR and power are two sides of the same coin. We aim at developing a variable selection procedure with theoretically guaranteed FDR control and meanwhile achieving high power.
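To make definitions (3)-(5) concrete, the following minimal sketch computes the FDP and the empirical power of a selected set for a single data set; the FDR and power in the text are the expectations of these quantities over repeated samples, and the function and variable names here are ours.

```python
import numpy as np

def fdp_and_power(selected, true_support):
    """FDP = |S_hat \\ S_0| / (|S_hat| v 1); power (TDP) = |S_hat ∩ S_0| / |S_0|."""
    selected, true_support = set(selected), set(true_support)
    fdp = len(selected - true_support) / max(len(selected), 1)
    power = len(selected & true_support) / max(len(true_support), 1)
    return fdp, power

# Example: fdp_and_power([1, 2, 3, 10], range(5)) returns (0.25, 0.6).
```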
2.2. IPAD
The key ingredient of the model-X knockoffs framework introduced originally in [15] is the construction of the so-called model-X knockoff variables defined as follows.
Definition 1 (Model-X knockoff variables [15])
For a set of random variables x = (X1,⋯, Xp), a new set of random variables x̃ = (X̃1,⋯, X̃p) is called a set of model-X knockoff variables if it satisfies the following properties:
For any subset S ⊂ {1,⋯, p}, we have (x, x̃)swap(S) =d (x, x̃), where =d denotes equality in distribution and the vector (x, x̃)swap(S) is obtained by swapping Xj and X̃j for each j ∈ S.
Conditional on x, the knockoffs vector x̃ is independent of the response Y.
See Section B of Supplementary Material for a brief review of the model-X knockoffs framework. In theory, if the distribution of C0 and the value of η0 are known, the SCIP algorithm proposed in [15] can be used to construct the knockoff variables. However, the computational cost can be high depending on the exact distributions. Instead we introduce a more efficient and practically implementable approach for constructing the knockoff variables below.
We start by introducing the knockoff generating function: for each given augmented parameter vector θ = vec(vec C, η), define
X̃(θ) := C + Eη,    (6)
where Eη is a matrix composed of i.i.d. random samples from distribution G(·;η). To gain some insights, let us first consider the ideal situation where the factor model structure (2) is fully available; that is, we know the realization C0 and the true distribution G(·;η0) for the error matrix E. In such a case, the oracle (ideal) knockoffs matrix can be constructed as
X̃(θ0) := C0 + Eη0,    (7)
where Eη0 is an i.i.d. copy of E. Note that Eη0 itself is not a function of η0, but we slightly abuse the notation to emphasize the dependence of the distribution function on parameter η0. In practice, θ0 is unknown and needs to be estimated. Letting θ̂ denote an estimator (obtained using data X) of θ0, we name X̃(θ̂) as the empirical knockoffs matrix:
X̃(θ̂) := Ĉ + Eη̂,    (8)
where Ĉ is an empirical estimate of C0 and Eη̂ is composed of i.i.d. random variables from the plug-in estimate G(·;η̂) of the distribution function, generated independently of (X, y) conditional on θ̂. The following proposition justifies the validity of the oracle knockoffs matrix.
Proposition 1
Under model setting (2), the oracle knockoffs matrix defined in (7) satisfies Definition 1.
However, the empirical knockoffs matrix given in (8) generally does not satisfy the exchangeability property because of the dependence of θ̂ on the training data X. Although the oracle knockoffs matrix is generally unavailable, it plays an important role in our theoretical developments as a proxy of the empirical knockoffs matrix. We remark that in the construction above, we slightly misuse the concept and call C0 a parameter. This is because although C0 is a random matrix, for the construction of valid knockoff variables it is the particular realization C0 leading to the observed data matrix X that matters. In other words, a valid construction of knockoff variables requires the knowledge of the specific realization C0 instead of the distribution of C0. To understand this, consider the scenario where the underlying parameter η0 and the exact distribution of C0 are fully available. If we independently generate random variables from this known distribution and form a new data matrix X1, because of the independence between X1 and X, the exchangeability assumption in Definition 1 will be violated and thus X1 cannot be a valid knockoffs matrix. On the other hand, as long as we know the realization C0 and parameter η0, a valid knockoffs matrix can be constructed using (7) regardless of whether the exact distribution of C0 is available or not.
In practice, however, θ0 is unavailable and consequently, X̃(θ0) is inaccessible. To overcome this difficulty, we next introduce our new method IPAD. With the aid of the empirical knockoffs matrix, we suggest the following IPAD procedure for FDR control with knockoffs inference.
Procedure 1 (IPAD)
(Estimation of parameters) Estimate the unknown parameters in θ0 using the design matrix X. Denote by θ̂ the resulting estimated parameter vector.
(Construction of empirical knockoffs matrix) Construct the empirical knockoffs matrix X̃(θ̂) in (8) by applying the knockoff generating function in (6) to the estimated parameter θ̂.
(Application of knockoffs inference) Calculate knockoff statistics using the data (X, X̃(θ̂), y) and then construct Ŝ by applying knockoffs inference to these statistics.
Intuitively, the accuracy of the estimator θ̂ in Step 1 will affect the performance of our IPAD procedure. In fact, as shown later in our Theorem 1 in Section 3, the consistency rate of θ̂ is indeed reflected in the asymptotic FDR control of the IPAD procedure. There are various ways to construct estimator θ̂. A natural and popularly used one is the principal component (PC) estimator studied in [2]. Specifically, we first estimate the number of factors r, denoted as r̂, using some method such as the information criterion in [3] or the approach in [1], and then set Ĉ = F̂Λ̂′, where F̂ is n1/2 times the matrix of eigenvectors corresponding to the r̂ largest eigenvalues of XX′, and Λ̂ = n−1X′F̂. As for the estimation of η0, existing methods such as the method of moments can be used based on the residual matrix Ê = X − Ĉ. As a concrete example, consider the case where E has i.i.d. entries with variance σ2. Then the unknown population parameter is η0 = σ2 and can be estimated naturally as σ̂2 = (np)−1∥X − Ĉ∥F2.
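The following sketch illustrates Steps 1-2 of Procedure 1 under the concrete example above, assuming i.i.d. Gaussian idiosyncratic errors so that η0 = σ2; the function and variable names are ours, and the plain PCA-plus-method-of-moments construction is only one possible implementation of the knockoff generating function.

```python
import numpy as np

def empirical_knockoffs(X, r_hat, rng=None):
    """Construct the empirical knockoffs matrix X_tilde = C_hat + E_eta_hat."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    # PC estimator: F_hat is sqrt(n) times the top eigenvectors of X X'.
    _, eigvecs = np.linalg.eigh(X @ X.T)
    F_hat = np.sqrt(n) * eigvecs[:, -r_hat:]          # n x r_hat factor estimate
    Lambda_hat = X.T @ F_hat / n                      # p x r_hat loading estimate
    C_hat = F_hat @ Lambda_hat.T                      # estimated common component
    # Method-of-moments estimate of the error variance (Gaussian assumption).
    resid = X - C_hat
    sigma2_hat = np.mean(resid ** 2)
    # Fresh i.i.d. draws from the plug-in error distribution G(.; eta_hat).
    E_new = rng.normal(scale=np.sqrt(sigma2_hat), size=(n, p))
    return C_hat + E_new

# Usage: X_tilde = empirical_knockoffs(X, r_hat=3); the pair (X, X_tilde) is then
# fed into any knockoff-statistic construction such as the LCD statistic below.
```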
In Step 3, various methods can be used to construct knockoff statistics. For illustration purposes, we use the Lasso coefficient difference (LCD) statistic as in [15]. Specifically, with y the response vector and [X, X̃(θ̂)] the augmented design matrix, we consider the variable selection procedure Lasso [39] which solves the following optimization problem
β̂aug = argmin_{b ∈ ℝ2p} { (2n)−1∥y − [X, X̃(θ̂)]b∥22 + λ∥b∥1 },    (9)
where λ ≥ 0 is the regularization parameter and ∥ · ∥m with m ≥ 1 denotes the vector ℓm-norm. Then for each variable xj, the knockoff statistic can be constructed as
Wj = |β̂j| − |β̂p+j|,    (10)
where β̂ℓ is the ℓth component of the Lasso regression coefficient vector β̂aug in (9). It is seen that intuitively the LCD knockoff statistics evaluate the relative importance of the jth original variable xj by comparing its Lasso coefficient with that of its knockoff copy x̃j. In the ideal case when the oracle knockoffs matrix X̃(θ0) is used instead of X̃(θ̂) in (9), it is easy to verify that the LCD is a valid construction of knockoff statistics and satisfies the sign-flip property in (A.2). Consequently, the general theory in [15] can be applied to show that the FDR is controlled in finite samples. We next show that even with the empirical knockoffs matrix employed in (9), the FDR can still be asymptotically controlled with delicate technical analyses.
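As an illustration of Step 3, the sketch below computes the LCD statistics from the augmented Lasso fit and applies the knockoff+ threshold T2 reviewed in Result 1 of Section B. The use of scikit-learn's LassoCV (tenfold cross-validation, as in the real data analysis of Section 5) is an illustrative choice rather than the fixed-λ tuning used in our theory.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lcd_knockoff_select(X, X_tilde, y, q=0.2, seed=0):
    """Select variables with LCD statistics and the knockoff+ threshold T2."""
    n, p = X.shape
    X_aug = np.hstack([X, X_tilde])                  # augmented design [X, X_tilde]
    beta = LassoCV(cv=10, random_state=seed).fit(X_aug, y).coef_
    W = np.abs(beta[:p]) - np.abs(beta[p:])          # LCD statistics W_j
    # knockoff+ threshold: smallest t with (1 + #{W_j <= -t}) / #{W_j >= t} <= q
    T2 = np.inf
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            T2 = t
            break
    return np.where(W >= T2)[0]                      # indices of selected variables
```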
3. Asymptotic properties of IPAD
We now provide theoretical justifications for our IPAD procedure suggested in Section 2 with the LCD knockoff statistics defined in (10). We will first present some technical conditions in Section 3.1, then prove in Section 3.2 that the FDR is asymptotically under control at desired target level q, and finally in Section 3.3 show that asymptotically IPAD has no power loss compared to the Lasso under some regularity conditions.
3.1. Technical conditions
We first introduce some notation and definitions which will be used later on. We write X ~ subG(σ2) to denote that X is a sub-Gaussian random variable with variance proxy σ2 if EX = 0 and its tail probability satisfies P(|X| > u) ≤ 2 exp(−u2/(2σ2)) for each u ≥ 0. In all technical assumptions below, we use M > 1 to denote a large enough generic constant. Throughout the paper, for any vector v = (vi) let us denote by ∥v∥1, ∥v∥2, and ∥v∥max the ℓ1-norm, ℓ2-norm, and max-norm defined as ∥v∥1 = Σi |vi|, ∥v∥2 = (Σi vi2)1/2, and ∥v∥max = maxi |vi|, respectively. For any matrix M = (mij), we denote by ∥M∥F, ∥M∥1, ∥M∥2, and ∥M∥max the Frobenius norm, entrywise ℓ1-norm, spectral norm, and entrywise ℓ∞-norm defined as ∥M∥F = ∥vec(M)∥2, ∥M∥1 = ∥vec(M)∥1, ∥M∥2 = supv≠0 ∥Mv∥2/∥v∥2, and ∥M∥max = ∥vec(M)∥max, respectively, where vec(M) represents the vectorization of M. For a symmetric matrix M, vech(M) stands for the vectorization of its lower triangular part.
Condition 1 (Regression errors)
The model error vector ε has i.i.d. components drawn from some mean-zero sub-Gaussian distribution.
Condition 2 (Latent factors)
The rows of F0 consist of mean zero i.i.d. random vectors such that ∥F0∥max ≤ M almost surely (a.s.) and the eigenvalues of Σf are bounded between M−1 and M, where Σf denotes the common covariance matrix of the rows of F0.
Condition 3 (Factor loadings)
The rows of Λ0 consist of deterministic vectors such that ∥Λ0∥max ≤ M and p−1Λ0′Λ0 converges to some positive definite matrix as p → ∞.
Condition 4 (Factor errors)
The entries of the matrix E are i.i.d. copies of a random variable e with continuous distribution function G(·;η0). For each 1 ≤ ℓ ≤ m, the ℓth element of η0 is specified as η0ℓ = hℓ(μ), where μ ∈ ℝm collects moments of e and hℓ : ℝm → ℝ is some locally Lipschitz continuous function in the sense that
for each and 1 ≤ k ≤ m, where cnp := (p−1 logn)1/2 + (n−1 log p)1/2. Moreover, there exists some stochastic process (eη)η such that
for each η, the entries of Eη in (6) have identical distribution to eη,
for some sub-Gaussian random variable with some positive constant ce,
(11) |
Condition 5 (Eigenseparation)
The r eigenvalues of are distinct for all p.
The number of factors r is assumed to be known for developing the theory with simplification, but in practice it can be estimated consistently using methods such as information criteria [3] and test statistics [1]. The sub-Gaussian assumptions in Conditions 1 and 4 can be replaced with some other tail conditions as long as similar concentration inequalities hold. Condition 3 is standard in the analysis of factor models. Stochastic loadings can be assumed in Condition 3 with some appropriate distributional assumption, such as sub-Gaussianity, at the cost of much more tedious technical arguments. The boundedness of the eigenvalues of Σf in Condition 2 is standard, while the i.i.d. assumption and the boundedness of the factors are stronger compared to the existing literature (e.g., [3] and [2]). However, these conditions are imposed mostly for technical simplicity. In fact, the boundedness condition on the factors can be replaced with an (unbounded) sub-Gaussian or other heavier-tail assumption whenever concentration inequalities are available, at the cost of slower convergence rates and a stronger sample size requirement. Our theory on FDR control is based on that in [15], which applies only to the case of i.i.d. rows of the design matrix X. This is the main reason for imposing the i.i.d. assumption on εi and fi in Conditions 1 and 2. However, we conjecture that similar results can also hold in the presence of some sufficiently weak serial dependence in εi and fi. Condition 4 introduces a sub-Gaussian process eη with respect to η. The norm in (11) can be replaced with any other norm since η is finite dimensional. In the specific case when the components of E have Gaussian distribution such that η is a scalar parameter representing the variance, by the reflection principle for the Wiener process ([12], p.511), eη can be constructed as a Wiener process and the inequality (11) can be satisfied. For more information on sub-Gaussian processes, see, e.g., [41]. To understand why we need Condition 5, note from the proof of Lemma 3 that the PC estimator is only consistent for (F0H, Λ0H−1), where H is an r × r rotation matrix defined with V, the r × r diagonal matrix of the r largest eigenvalues of XX′/(np). Condition 5 guarantees that H is asymptotically unique and invertible, which has been proved in [2], and this fact is used in the proof of Lemma 6 in the Appendix. This ultimately ensures that C0 can be estimated well, which in turn guarantees that η0 can be estimated accurately.
Recall that in the IPAD procedure, we first obtain the augmented Lasso estimator β̂aug by regressing y on the augmented design matrix [X, X̃(θ̂)]. Denote by Ŝaug the active set of the augmented Lasso regression coefficient vector. Throughout this section, we content ourselves with sparse estimates satisfying
|Ŝaug| ≤ k    (12)
for some positive integer k which may diverge with n at an order to be specified later; see, e.g., [28] and [32] for a similar constraint and justifications therein. This can always be achieved since users have the freedom to choose the size of the Lasso model.
3.2. FDR control
To develop the theory for IPAD, we consider the PC estimator Ĉ of the realization C0 summarized in Section 2.2. The estimator η̂ is constructed by applying the functions hℓ, 1 ≤ ℓ ≤ m, introduced in Condition 4 to the empirical moments of the residual matrix Ê = X − Ĉ. Throughout our theoretical analysis, we consider the regularization parameter fixed at λ = C0n−1/2 log p with C0 some large enough constant for all the Lasso procedures. Therefore, we will drop the dependence of various quantities on λ whenever there is no confusion. For example, we will write β̂aug(λ) and Wj(θ; λ) as β̂aug and Wj(θ), respectively.
Denote by U(θ) := n−1[X, X̃(θ)]′[X, X̃(θ)] and v(θ) := n−1[X, X̃(θ)]′y, and define T(θ) := vec(vech U(θ), v(θ)) ∈ ℝP with P := p(2p + 3). The following lemma shows that the statistic T(θ) plays a crucial role in our procedure.
Lemma 1
The set of variables selected by Procedure 1 depends only on T(θ).
For any given θ, define the active set , where and . That is, is equal to the support of knockoff statistics (W1(θ), ⋯, Wp(θ))′ if there are no ties on the magnitudes of the augmented Lasso coefficient vector .
We next focus on the low-dimensional structure of T(θ) inherited from the augmented Lasso because it will be made clear that this is the key to controlling the FDR without sample splitting. For any subset , define a lower-dimensional expression of the vector as with the principal submatrix of U(θ) formed by columns and rows in and the subvector of v(θ) formed by components in . Then it is easy to see that and . Motivated by Lemma 1, we define a family of mappings indexed by that describes the selection algorithm of Procedure 1 with given data set that forms . Formally, define a mapping as for given , where refers to the power set of . That is, represents the outcome of first restricting ourselves to the smaller set of variables and then applying IPAD to to further select variables from set .
Lemma 2
Under Conditions 1–4, for any subset we have .
When restricting on set , we can apply Procedure 1 to a lower-dimensional data set that forms to further select variables from . The previous two lemmas ensure that this gives us a subset of that is identical to S{1,⋯,p}(T(θ)). Note that the lower-dimensional problem based on can be easier compared to the original one. We also would like to emphasize that the dimensionality reduction to a smaller model is only for assisting the theoretical analysis and our Procedure 1 does not need any knowledge of such set .
It is convenient to define . Denote by
(13) |
where C1 is some positive constant and . For any subset , let be the subspace of when taking out the coordinates corresponding to . Thus . In addition to Conditions 1–5, we need an assumption on the algorithmic stability of Procedure 1.
Condition 6 (Algorithmic stability)
For any subset that satisfies , there exists a positive sequence ρnp → 0 as n ∧ p → ∞ such that
where Δ stands for the symmetric difference between two sets.
Intuitively the above condition assumes that the knockoffs procedure is stable with respect to a small perturbation to the input t in any lower-dimensional subspace . Under these regularity conditions, the asymptotic FDR control of our IPAD procedure can be established.
Theorem 1 (Robust FDR control)
Assume that Conditions 1–6 hold. Fix an arbitrary positive constant ν. If (s, k, n, p) satisfies s ∨ k ≤ n ∧ p, cnp ≤ c/[r2M2C(ν + 2)]1/2, and as n ∧ p → ∞ with c and C some positive constants defined in Lemma 7 in Appendix, then the set of variables obtained by Procedure 1 (IPAD) with the LCD knockoff statistics controls the FDR (3) to be no larger than q + O (ρnp + n−ν + p−ν).
Recall that by definition, the FDR is a function of T(θ̂) and can be written as FDR(T(θ̂)), while the FDR computed with the oracle knockoffs, FDR(T(θ0)), is perfectly controlled to be no larger than q. This observation motivates us to first establish the asymptotic equivalence of T(θ̂) and T(θ0) with large probability. Then a natural idea is to show that FDP(T(θ̂)) converges to FDP(T(θ0)) in probability, which turns out to be highly nontrivial because of the discontinuity of FDP(·) (the convergence would be straightforward via the Portmanteau lemma if FDP(·) were continuous). Condition 6 above provides a remedy to this issue by imposing the algorithmic stability assumption.
3.3. Power analysis
We have established the asymptotic FDR control for our IPAD procedure in Section 3.2. We now look at the other side of the coin – the power (5). Recall that in IPAD, we apply the knockoffs inference procedure to the knockoff statistics LCD, which are constructed using the augmented Lasso in (9). Therefore the final set of variables selected by IPAD is a subset of variables picked by the augmented Lasso. For this reason, the power of IPAD is always upper bounded by that of Lasso. We will show in this section that there is in fact no power loss relative to the augmented Lasso in the asymptotic sense.
Condition 7 (Signal strength I)
For any subset that satisfies for some γ ∈ (0, 1], it holds that for some positive sequence bnp → ∞.
Condition 8 (Signal strength II)
There exists some constant C2 ∈ (2(qs)−1,1) such that with .
Condition 7 requires that the overall signal is not too weak, but it is weaker than the conventional beta-min condition on minj∈S0 |βj|. Under Condition 8, we can show that the number of variables selected by the augmented Lasso is at least of order s with probability at least 1 − O(p−ν + n−ν) using similar techniques to those for Lemma 6 in [26]. The intuition is that given s → ∞, for a variable selection procedure to have high power it should select at least a reasonably large number of variables. This result will be used to derive the asymptotic order of the threshold T, which is in turn crucial for establishing the theorem below on power.
Theorem 2 (Power guarantee)
Assume that Conditions 1–5 and 7–8 hold. Fix an arbitrary positive constant ν. If (s,k,n,p) satisfies 2s ≤ k ≤ n ∧ p, cnp ≤ c/(r2M2C(ν + 2))1/2, and as n ∧ p → ∞ with c and C some positive constants defined in Lemma 7, then both the Lasso procedure based on (X, y) and our IPAD procedure (Procedure 1) have power bounded from below by γ − o(1) as n ∧ p → ∞. In particular, if γ = 1 IPAD has no power loss compared to Lasso asymptotically.
4. Simulation studies
We have shown in Section 3 that IPAD can asymptotically control the FDR in high-dimensional setting and there can be no power loss in applying the procedure. We next move on to numerically investigate the finite-sample performance of IPAD using synthetic data sets. We will compare IPAD with the knockoff filter in [4] (BCKnockoff) and the high-dimensional knockoff filter in [5] (HD-BCKnockoff). In what follows, we will first explain in detail the model setups and simulation settings, then discuss the implementation of the aforementioned methods, and finally summarize the comparison results.
4.1. Simulation designs and settings
In all simulations, the design matrix X ∈ ℝn×p is generated from the factor model
X = (θ/r)1/2 F0Λ0′ + E,    (14)
where F0 ∈ ℝn×r is the matrix of latent factors, Λ0 ∈ ℝp×r is the matrix of factor loadings, E ∈ ℝn×p is the matrix of model errors, and θ is a constant controlling the signal-to-noise ratio. The term r−1/2 is used to single out the effect of the number of factors in calculating the signal-to-noise ratio in factor model (14). We then rescale each column of X to have ℓ2-norm one and simulate the response vector y = (y1,⋯, yn)′ from the following model
yi = f(xi) + cεi,  i = 1, ⋯, n,    (15)
where xi′ denotes the ith row of X, f : ℝp → ℝ is the link function which can be linear or nonlinear, c > 0 is a constant controlling the signal-to-noise ratio, and ε = (ε1,⋯, εn)′ is the vector of model errors. We next explain the four different designs of our simulation studies.
4.1.1. Design 1: linear model with normal factor design matrix
The elements of F0, Λ0, E, and ε are drawn independently from the standard normal distribution. The link function takes a linear form, that is, f(xi) = xi′β, where the coefficient vector β = (β1,⋯, βp)′ ∈ ℝp is generated by first choosing s random locations for the true signals and then setting βj at each location to be either A or −A randomly with A some positive value. The remaining p − s components of β are set to zero.
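A compact sketch of the Design 1 data-generating process is given below; the standard normal draws, the placement of the scaling factor (θ/r)1/2 in (14), and the role of c as the noise scale in (15) follow the reconstruction above and should be read as illustrative rather than as the only possible specification.

```python
import numpy as np

def generate_design1(n=2000, p=2000, r=3, s=50, A=4, theta=1.0, c=0.2, seed=0):
    rng = np.random.default_rng(seed)
    F = rng.standard_normal((n, r))
    Lam = rng.standard_normal((p, r))
    E = rng.standard_normal((n, p))
    X = np.sqrt(theta / r) * F @ Lam.T + E           # factor structure as in (14)
    X /= np.linalg.norm(X, axis=0)                   # rescale columns to unit l2-norm
    beta = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)   # s random signal locations
    beta[support] = rng.choice([A, -A], size=s)      # coefficients set to +/- A
    y = X @ beta + c * rng.standard_normal(n)        # linear link with noise scale c
    return X, y, support
```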
4.1.2. Design 2: linear model with fat-tail factor matrix and serial dependence
The elements of E are generated as
(16) |
where for all i = 1,⋯, n and j = 1,⋯, p, and are i.i.d. random variables from chi-square distribution with ν = 8 degrees of freedom. The rest of the design is the same as in Design 1. It is worth mentioning that in this case, the entries of matrix E have fat-tail distribution with serial dependence in each column because of the common factor . This design is used to check the robustness of IPAD method with respect to the serial dependence and the fat-tail distribution of E.
4.1.3. Design 3: linear model with misspecified design matrix
To evaluate the robustness of the IPAD procedure to misspecification of the factor model structure (14), we set Λ0 = 0, rθ = 1 and simulate the rows of matrix E independently from the multivariate normal distribution N(0, Σ) with Σ = (σij), σij = ρ|i−j| for 1 ≤ i, j ≤ p. The remaining design is the same as in Design 1. It is seen that our assumption on the independence of the entries of E is violated. This design is used to test the robustness of IPAD to misspecification of the factor model structure of X.
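For completeness, here is a short sketch of the Design 3 covariate generation: the rows of E (and hence of X, since the factor part is switched off) are drawn i.i.d. from N(0, Σ) with σij = ρ|i−j|. The helper name is ours.

```python
import numpy as np

def generate_design3_X(n=2000, p=2000, rho=0.5, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])   # sigma_ij = rho^|i-j|
    L = np.linalg.cholesky(Sigma)
    X = rng.standard_normal((n, p)) @ L.T                 # rows ~ N(0, Sigma)
    return X / np.linalg.norm(X, axis=0)                  # rescale columns as before
```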
4.1.4. Design 4: nonlinear model with normal factor design matrix
Our last design is used to evaluate the performance of IPAD method when the link function f is nonlinear. To be more specific, we assume the following nonlinear model between the response and covariates
where the coefficient vector β, design matrix X, and model error ε are generated similarly as in Design 1.
4.1.5. Simulation settings
The target FDR level is set to be q = 0.2 in all simulations. For Design 1 and Design 2, we set n = 2000, p = 2000, A = 4, s = 50, c = 0.2, r = 3, and θ = 1. In order to evaluate the sensitivity of our method to the dimensionality p and the model sparsity s, we also explore the settings of p = 1000,3000 and s = 100,150. In Design 3, we set r = 0 and ρ = 0,0.5. In Design 4, since the model is nonlinear, we use the nonparametric method of random forest [14] to fit the model and consider lower-dimensional settings of p = 50, 250, and 500. We also decrease the number of observations to n = 1000 and number of true variables to s = 10. Moreover, we set θ = 1, 2 and c = 0.1, 0.2, 0.3 to test the effects of signal-to-noise ratio on the performance of IPAD procedure in Design 4. The implementation details for the estimation procedure of IPAD are provided in Section E.1 of Supplementary Material.
4.2. Simulation results
For each method, we use 100 simulated data sets to calculate its empirical FDR and power, which are the average FDP and TDP (true discovery proportion as in (5)) over 100 repetitions, respectively. Two different thresholds, knockoff and knockoff+ (T1 and T2 in Result 1, respectively), are used in the knockoffs inference implementation. It is worth mentioning that as shown in [15] and summarized in Result 1, knockoff+ controls FDR (3) exactly while knockoff controls only the modified FDR (4).
Tables 1 and 2 summarize the results from Designs 1 and 2, respectively. As shown in Table 1, all approaches can control empirical FDR at the target level (q = 0.2) and knockoff+, which is more conservative, reduces power negligibly. It is worth mentioning that even for Design 2, in which the design matrix X is drawn from fat-tail distribution with serial dependence, we still have FDR under control with decent level of power. This suggests that the no serial correlation assumption in our theoretical analysis could just be technical. Compared to the results by BCKnockoff and HD-BCKnockoff, we see that using the extra information from the factor structure in constructing knockoff variables can help with both FDR and power. Table 2 also shows the effects of model sparsity on the performance of various approaches. It can be seen that when the number of true signals is increased from 50 to 150, the FDR is still under control and the empirical power of IPAD remains steady.
Table 1:
Simulation results for Designs 1 and 2 of Section 4.1 with different values of dimensionality p
| | Design 1 | | | | | Design 2 | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | FDR | Power | FDR+ | Power+ | R2 | FDR | Power | FDR+ | Power+ | R2 |
| p = 1000 | | | | | | | | | | |
| IPAD | 0.195 | 0.991 | 0.180 | 0.990 | 0.659 | 0.199 | 0.961 | 0.180 | 0.960 | 0.652 |
| BCKnockoff | 0.207 | 0.942 | 0.192 | 0.938 | 0.659 | 0.172 | 0.887 | 0.152 | 0.885 | 0.653 |
| p = 2000 | | | | | | | | | | |
| IPAD | 0.194 | 0.979 | 0.179 | 0.979 | 0.649 | 0.199 | 0.935 | 0.183 | 0.933 | 0.656 |
| HD-BCKnockoff | 0.142 | 0.706 | 0.127 | 0.691 | 0.649 | 0.136 | 0.607 | 0.113 | 0.581 | 0.644 |
| p = 3000 | | | | | | | | | | |
| IPAD | 0.191 | 0.964 | 0.176 | 0.963 | 0.652 | 0.188 | 0.913 | 0.171 | 0.911 | 0.658 |
| HD-BCKnockoff | 0.172 | 0.668 | 0.149 | 0.658 | 0.652 | 0.125 | 0.559 | 0.099 | 0.524 | 0.651 |
Note that FDR+ and Power+ are the values of FDR and Power corresponding to the knockoff+ threshold T2.
Table 2:
Simulation results for Designs 1 and 2 of Section 4.1 with different sparsity level s
| | Design 1 | | | | | Design 2 | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | FDR | Power | FDR+ | Power+ | R2 | FDR | Power | FDR+ | Power+ | R2 |
| s = 50 | | | | | | | | | | |
| IPAD | 0.194 | 0.979 | 0.179 | 0.979 | 0.649 | 0.199 | 0.935 | 0.183 | 0.933 | 0.656 |
| HD-BCKnockoff | 0.142 | 0.706 | 0.127 | 0.691 | 0.649 | 0.136 | 0.607 | 0.113 | 0.581 | 0.644 |
| s = 100 | | | | | | | | | | |
| IPAD | 0.191 | 0.978 | 0.183 | 0.977 | 0.783 | 0.181 | 0.937 | 0.174 | 0.936 | 0.789 |
| HD-BCKnockoff | 0.152 | 0.703 | 0.140 | 0.698 | 0.787 | 0.106 | 0.583 | 0.097 | 0.573 | 0.778 |
| s = 150 | | | | | | | | | | |
| IPAD | 0.183 | 0.973 | 0.178 | 0.972 | 0.842 | 0.188 | 0.935 | 0.182 | 0.935 | 0.848 |
| HD-BCKnockoff | 0.139 | 0.660 | 0.130 | 0.654 | 0.858 | 0.115 | 0.578 | 0.106 | 0.570 | 0.843 |
Table 3 is devoted to the case of Design 3, where the rows of matrix X are generated independently from multivariate normal distribution with AR(1) correlation structure. This is a setting where the factor model structure in X is misspecified. Since BCknockoff and HD-BCknockoff make no use of the factor structure in generating knockoff variables, in both low- and high-dimensional examples both methods control FDR exactly at the target level. IPAD based methods have empirical FDR slightly over the target level, which may be caused by the misspecification of the factor structure. On the other hand, IPAD based approaches have much higher empirical power than comparison methods.
Table 3:
Simulation results for Design 3 of Section 4.1
| | ρ = 0 | | | | | ρ = 0.5 | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | FDR | Power | FDR+ | Power+ | R2 | FDR | Power | FDR+ | Power+ | R2 |
| p = 1000 | | | | | | | | | | |
| IPAD | 0.204 | 0.995 | 0.189 | 0.995 | 0.444 | 0.226 | 0.984 | 0.216 | 0.984 | 0.446 |
| BCKnockoff | 0.188 | 0.919 | 0.172 | 0.917 | 0.444 | 0.137 | 0.827 | 0.117 | 0.821 | 0.445 |
| p = 2000 | | | | | | | | | | |
| IPAD | 0.203 | 0.993 | 0.189 | 0.993 | 0.447 | 0.220 | 0.982 | 0.202 | 0.980 | 0.445 |
| HD-BCKnockoff | 0.151 | 0.630 | 0.126 | 0.603 | 0.449 | 0.115 | 0.522 | 0.090 | 0.467 | 0.442 |
| p = 3000 | | | | | | | | | | |
| IPAD | 0.225 | 0.988 | 0.205 | 0.987 | 0.445 | 0.219 | 0.979 | 0.206 | 0.978 | 0.443 |
| HD-BCKnockoff | 0.150 | 0.589 | 0.126 | 0.560 | 0.446 | 0.092 | 0.439 | 0.064 | 0.381 | 0.447 |
Table 4 corresponds to Design 4 in which the response y is related to X nonlinearly. Since BCKnockoff and HD-BCKnockoff are designed for linear models, only the results from the IPAD method are reported. It can be seen from Table 4 that the IPAD approach can control FDR with reasonably high power even in the nonlinear setting. We also observe that in the nonlinear setting, the power of IPAD deteriorates faster as dimensionality p increases compared to the linear setting due to the use of the fully nonparametric approach for estimation.
Table 4:
Simulation results for Design 4 of Section 4.1
| | θ = 1 | | | | | θ = 2 | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | FDR | Power | FDR+ | Power+ | R2 | FDR | Power | FDR+ | Power+ | R2 |
| p = 50 | | | | | | | | | | |
| c = 0.1 | 0.109 | 0.839 | 0.081 | 0.720 | 0.707 | 0.110 | 0.943 | 0.061 | 0.858 | 0.707 |
| c = 0.2 | 0.137 | 0.847 | 0.068 | 0.726 | 0.547 | 0.097 | 0.920 | 0.061 | 0.837 | 0.547 |
| c = 0.3 | 0.137 | 0.765 | 0.091 | 0.582 | 0.451 | 0.123 | 0.907 | 0.076 | 0.774 | 0.451 |
| p = 250 | | | | | | | | | | |
| c = 0.1 | 0.189 | 0.740 | 0.104 | 0.504 | 0.702 | 0.174 | 0.876 | 0.139 | 0.788 | 0.702 |
| c = 0.2 | 0.218 | 0.666 | 0.131 | 0.522 | 0.552 | 0.209 | 0.831 | 0.118 | 0.660 | 0.552 |
| c = 0.3 | 0.200 | 0.569 | 0.101 | 0.361 | 0.451 | 0.224 | 0.766 | 0.141 | 0.599 | 0.451 |
| p = 500 | | | | | | | | | | |
| c = 0.1 | 0.243 | 0.661 | 0.169 | 0.497 | 0.702 | 0.223 | 0.831 | 0.173 | 0.740 | 0.702 |
| c = 0.2 | 0.204 | 0.507 | 0.111 | 0.266 | 0.543 | 0.216 | 0.749 | 0.126 | 0.594 | 0.543 |
| c = 0.3 | 0.247 | 0.478 | 0.128 | 0.299 | 0.451 | 0.241 | 0.691 | 0.156 | 0.550 | 0.451 |
5. Empirical analysis
Our simulation results in Section 4 suggest that IPAD is a powerful approach with asymptotic FDR control. We further examine the application of IPAD to the quarterly data on 109 macroeconomic variables from the third quarter of year 1960 (1960Q3) to the fourth quarter of year 2008 (2008Q4) in the United States discussed in [37]. These variables are transformed by taking logarithms and/or differencing following [37]. Our real data analysis consists of two parts. In the first part, we focus on the performance of the IPAD method in terms of empirical FDR and power. To save space, the numerical results for the real data based simulation study are presented in Section E.2 of the Supplementary Material. In the second part, the forecasting performance of the IPAD method will be evaluated.
We now apply the IPAD approach to the real economic data set for forecasting. One-step ahead prediction is conducted using a rolling window of size 120. More specifically, one of the 109 variables is chosen as the response and the remaining 108 variables are treated as predictors. For each quarter between 1990Q3 and 2008Q4, we use the previous 120 periods for model fitting and then one-step ahead prediction is conducted based on the fitted model. We compare IPAD with the competing methods of autoregression of order one (AR(1)), factor augmented AR(1) (FAR), and Lasso, where each method is implemented in the same way as IPAD for one-step ahead prediction; see Section E.3 of the Supplementary Material for the implementation details of all the methods.
The number of factors is chosen by the PCp1 criterion in [3]. For the Lasso and IPAD, the regularization parameter λ is selected with the tenfold cross-validation. Table 5 shows the root mean-squared prediction error (RMSE) of these methods. As can be seen, the RMSE of IPAD is very close to those of comparison methods. To statistically compare the relative prediction accuracy of IPAD versus other approaches, we have used the Diebold–Mariano test [21], where the square of one-step ahead prediction error is used as the loss function. Table 6 reports the test results. The results indicate that one-step ahead prediction accuracy of IPAD is comparable to other approaches.
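The rolling-window comparison can be schematized as follows; `fit_and_predict` is a hypothetical stand-in for any of the compared methods (AR(1), FAR, Lasso, or IPAD), and the Diebold–Mariano statistic is written in its simplest form with squared-error loss and no HAC correction, so this is a sketch rather than the exact implementation used for Tables 5 and 6.

```python
import numpy as np

def rolling_one_step_errors(y, X, fit_and_predict, window=120):
    """Collect one-step-ahead forecast errors over a rolling window."""
    errors = []
    for t in range(window, len(y)):
        y_train, X_train = y[t - window:t], X[t - window:t]
        y_hat = fit_and_predict(X_train, y_train, X[t])   # forecast for period t
        errors.append(y[t] - y_hat)
    return np.asarray(errors)

def diebold_mariano(e1, e2):
    """DM statistic with squared-error loss; positive values favor method 2."""
    d = e1 ** 2 - e2 ** 2
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))

# rmse = np.sqrt(np.mean(rolling_one_step_errors(y, X, ipad_predict) ** 2))
```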
Table 5:
Root mean-squared error of one-period ahead forecast of various macroeconomic variables
| | AR | FAR | Lasso | IPAD |
|---|---|---|---|---|
RGDP | 2.245 | 1.929 | 2.070 | 2.106 |
CPI-ALL | 1.526 | 1.552 | 1.579 | 1.571 |
Imports | 7.549 | 5.871 | 6.595 | 6.993 |
IP: cons dble | 9.683 | 8.353 | 8.424 | 9.175 |
Emp: TTU | 1.112 | 0.989 | 1.167 | 1.100 |
U: mean duration | 0.573 | 0.487 | 0.502 | 0.494 |
HStarts: South | 0.074 | 0.071 | 0.076 | 0.074 |
NAPM new ordrs | 4.800 | 4.378 | 4.659 | 4.673 |
PCED-NDUR-ENERGY | 31.927 | 32.121 | 33.546 | 32.164 |
Emp. Hours | 2.102 | 1.899 | 2.080 | 1.944 |
FedFunds | 0.421 | 0.396 | 0.406 | 0.392 |
Cons credit | 2.573 | 2.537 | 2.648 | 2.580 |
EX rate: Canada | 10.132 | 10.139 | 10.122 | 10.113 |
DJIA | 23.117 | 23.997 | 24.585 | 23.398 |
Consumer expect | 6.496 | 6.888 | 6.681 | 6.661 |
Table 6:
Diebold–Mariano test for comparing prediction accuracy of IPAD against other procedures
| | IPAD vs. AR | IPAD vs. FAR | IPAD vs. Lasso |
|---|---|---|---|
RGDP | −0.780 | 1.160 | 0.462 |
CPI-ALL | 0.521 | 0.394 | −0.218 |
Imports | −0.976 | 2.631** | 1.464 |
IP: cons dble | −1.026 | 1.567 | 2.487* |
Emp: TTU | −0.140 | 1.692 | −1.845 |
U: mean duration | −3.383*** | 0.672 | −0.505 |
HStarts: South | 0.096 | 0.821 | −0.766 |
NAPM new ordrs | −0.517 | 1.814 | 0.076 |
PCED-NDUR-ENERGY | 0.753 | 0.049 | −1.759 |
Emp. Hours | −1.200 | 0.297 | −2.063* |
FedFunds | −0.971 | −0.134 | −0.625 |
Cons credit | 0.207 | 0.359 | −0.661 |
EX rate: Canada | −0.466 | −0.138 | −0.037 |
DJIA | 0.585 | −0.959 | −1.428 |
Consumer expect | 1.212 | −1.038 | −0.277 |
It is worth mentioning that one main advantage of IPAD is its interpretability and stability. Using IPAD for forecasting, we not only enjoy the same level of accuracy as other methods but also obtain the information on variable importance with stability. Recall that for each one-step ahead prediction, we apply IPAD 100 times and obtain 100 sets of selected variables. Thus we can calculate the selection frequency of each variable in each one-step ahead prediction. Figure 1 depicts the frequencies of top five selected variables in predicting real GDP growth before and after year 2000, where the variable importance is ranked according to the aggregated frequencies over the entire time period before or after 2000. We have experimented with different cutoff years around year 2000, and the top five ranked variables stay the same so only the results corresponding to cutoff year 2000 are reported. Changes in index of help wanted advertising in newspapers, percentages of changes in real personal consumption of services, and percentage of changes in real gross private domestic investment in residential sector were the top three important variables in predicting real GDP growth during the whole period. It is interesting to see that percentage of changes in residential price index was among top five important variables in predicting GDP growth during the 90s, and then starting from year 2000 it was replaced by changes in index of consumer expectations about stability of economy. Moreover, it is also seen that the percentage of changes in industrial production of fuels was of great importance for predicting real GDP growth during some periods but not the others.
Figure 1:
Frequencies of top selected variables in predicting real GDP growth. The selected variables are the index of help-wanted advertising in newspapers (Help wanted indx), real personal consumption expenditures - services (Cons-Serv), real gross private domestic investment - residential (Res.Inv), residential price index (PFI-RES), industrial production index - fuels (IP:fuels), and the University of Michigan index of consumer expectations (Consumer expect).
As a comparison, it is very difficult to interpret the results of FAR. As for Lasso based method, there is no theoretical guarantee on FDR control and in addition, Lasso usually gives us models with much larger size. For instance, in predicting real GDP growth, IPAD on average selects 5.42 macroeconomic variables while Lasso on average selects 13.32 variables. To summarize, our real data analysis indicates that IPAD is an applicable approach for controlling FDR with competitive prediction power and high interpretability and stability.
6. Discussions
We have suggested in this paper a new procedure IPAD for feature selection in high-dimensional linear models that achieves asymptotic FDR control while retaining high power. Our model setting involves a latent factor model that is motivated by applications in economics and finance. Our method falls into the general model-X knockoffs framework in [15], but allows for an unknown covariate distribution in the knockoff variable construction. With the LCD knockoff statistics, we have shown that the FDR of IPAD can be asymptotically under control while the power can be asymptotically the same as that of Lasso. Our simulation study and empirical analysis also suggest that IPAD has highly competitive performance compared to many widely used forecasting methods such as Lasso and FAR, but with much higher interpretability and stability.
Our work has focused on the scenario of static models. It would be interesting to extend the IPAD procedure to high-dimensional dynamic models with time series data. It is also interesting to consider nonlinear models and more flexible machine learning methods for forecasting as well as more refined factor model structures on the covariates for the knockoffs inference with IPAD, and develop theoretical guarantees for the IPAD framework in these more general model settings. These extensions are beyond the scope of the current paper and are interesting topics for future research.
Acknowledgments
The author names are alphabetically ordered. This work was supported by NIH Grant 1R01GM131407-01, NSF CAREER Award DMS-1150318, a grant from the Simons Foundation, Adobe Data Science Research Award, and a Grant-in-Aid for JSPS Overseas Research Fellowship 29-60. Most of this work was completed while Uematsu visited USC as a JSPS Overseas Research Fellow and Postdoctoral Scholar. The authors sincerely thank the Joint Editor, Associate Editor, and referees for their valuable comments that helped improve the paper substantially.
A. Proofs of main results
We provide the proofs of Theorems 1–2 in this appendix. The proofs of Proposition 1 and Lemmas 1–2 and additional technical details are included in the Supplementary Material.
To ease the technical presentation, let us introduce some notation. We denote by ≲ the inequality up to some positive constant factor. Restricting the columns of X and to the variables in index set such that , we obtain the n × k submatrices and , respectively. Moreover, we define with the principal submatrix of formed by columns and rows in set , and the subvector of formed by components in set . Then it is easy to see that and . For the oracle factor loading matrix Λ0, with a slight abuse of notation we use to denote the row restricted to the variables in for notational convenience. Recall that ν > 0 is a fixed positive number, cnp = (p−1 log n)1/2 + (n−1 log p)1/2, and . We define πnp = n−ν + p−ν. Since λ is fixed at C0n−1/2 log p, in all the proofs we will drop the dependence of various quantities on λ whenever there is no confusion.
A.1. Proof of Theorem 1
Recall that for a given θ, is the support of knockoff statistics (W1(θ),⋯, Wp(θ))′. Define set . It follows from (12) that the cardinality of is bounded by k. Hereafter we write as for notational simplicity.
By Lemmas 1–2 and the definition of the FDP, we know that and thus the resulting FDR’s are the same. Therefore, we can restrict ourselves to the smaller model when studying the FDR of IPAD. The same arguments as above also hold for the oracle knockoffs; that is, the FDR of IPAD applied to T(θ0) is the same as that applied to . Note that all the FDR’s we discuss here are with respect to the full model {1,⋯, p}. For this reason, in what follows we will abuse the notation and use and to denote the FDR of IPAD based on and , respectively. We want to emphasize that although we put a subscript in FDR’s, their values are still deterministic as argued above. Summarizing the facts, we obtain
Meanwhile, by construction X̃(θ0) satisfies the two properties in Definition 1 and is a valid model-X knockoffs matrix. Therefore, for any value of the regularization parameter, the LCD statistics Wj(θ0) based on (X, X̃(θ0), y) together with Result 1 ensure the exact FDR control at some target level q ∈ (0, 1). Summarizing this, we obtain that the FDR of IPAD applied to T(θ0) is controlled at the target level q.
Combining the arguments in the previous two paragraphs, we deduce
Thus the desired results follow automatically if we can prove that is asymptotically close to . We next proceed to prove it.
Recall the definitions of and as in (13). Define the event
Lemma 3 in Section C.4 establishes that θ̂ ∈ Θnp with probability at least 1 − O(πnp) and that θ0 ∈ Θnp. Hence, Lemma 4 in Section C.5 guarantees that
(17) |
where for some constant C1 > 0.
For a given deterministic set , let be the FDP function corresponding to . By the definition of FDP function, we have for any t1, ,
Further, note that
Combining the results above yields
Similarly we have
Thus it holds that
(18) |
where the last two steps are due to Condition 6. Therefore, (17) and (18) together with the fact that FDP(·) ∈ [0, 1] entail that
This completes the proof of Theorem 1.
A.2. Proof of Theorem 2
By the definition of the LCD statistics, we construct the augmented Lasso estimator β̂aug(θ) for each θ ∈ Θnp, which is defined as
β̂aug(θ) = argmin_{b ∈ ℝ2p} { (2n)−1∥y − [X, X̃(θ)]b∥22 + λ∥b∥1 }.    (19)
The Lasso estimator β̂ from regressing y on only X is also given by
β̂ = argmin_{b ∈ ℝp} { (2n)−1∥y − Xb∥22 + λ∥b∥1 },    (20)
where λ = O(n−1/2 log p). According to the true model , the underlying true parameter vector corresponding to should be given by βaug := (β′, 0′)′ ∈ ℝ2p with and for any θ ∈ Θnp. By Lemma 5 in Section C.6, with probability at least 1 − O(πnp) the Lasso estimators satisfy
where λ = O(n−1/2 log p).
We now prove that under Condition 7, the power of the augmented Lasso (19) is bounded from below by γ ∈ [0, 1]; that is,
(21) |
where . To this end, we first show that with asymptotic probability one,
(22) |
The key is to use proof by contradiction. Suppose . Then we can see that
where the last step is by Condition 7. However, by Lemma 5 with probability at least 1−O(πnp), the left hand side above is bounded from above by O(sλ) with λ = O(n−1/2 log p). These two results contradict with each other since bnp → ∞. Hence (22) is proved. Therefore, the result in (21) follows immediately since and
Let . Using the same argument, we can show that the power of the Lasso (20) is also bounded from below by γ(1−O(πnp)) under Condition 7. That is, we have
Next we show that our knockoffs procedure has at least the same power as the augmented Lasso and hence the Lasso itself. Namely, we prove
(23) |
with threshold T2. Note that the same argument is still valid for T1. Let |W(1)| ≥ ⋯ ≥ |W(p)| and define j∗ as . Then by the definition of T2, it holds that . Here we have assumed that there are no ties on the magnitudes of Wj’s which should be a reasonable assumption considering the continuity of the Lasso solution. As in the proof of Theorem 3 in [26], it is sufficient to consider the following two cases.
Case 1
Consider the case of . In this case, from the definition of threshold T2 we have
Using the same argument as in Lemma 6 of [26] together with Lemma 5, we can prove from Condition 8 that with probability at least 1 − O(πnp). This leads to |{j : W(j) ≤ −T2}| > C2qs−2 with the same probability. Now from the same argument as in A.5 of [26], we can obtain T2 = O(λ). On the other hand, Lemma 5 and some algebra establish that
(24) |
We then consider the lower bound of the last term in (24). For any , it holds that . Hence we obtain
(25) |
Plugging (25) into (24) and applying the triangle inequality yield
Since for λ = O(n−1/2 log p) due to the discussion above, we obtain
(26)
Suppose . Then Condition 7 gives for some positive diverging sequence bnp; this contradicts (26). Thus we obtain with asymptotic probability one, which leads to (23) by taking expectation.
Case 2
Consider the case of . In this case, by the definition of threshold T2
(27)
If |{j : W(j) < 0}| > C3s for some constant C3 > 0, then from the same argument as in A.5 of [26], we can obtain T2 = O(λ), and the rest of the proof is the same as in Case 1. On the other hand, if |{j : W(j) < 0}| ≤ o(s) we have
Now note that . Then we can see that with asymptotic probability one,
Consequently, we obtain , which leads to (23) by taking expectation. Combining these two cases concludes the proof of Theorem 2.
B. Review of model-X knockoffs framework
The key idea of the model-X knockoffs framework is to construct the so-called model-X knockoff variables, a concept introduced originally in [15]; its definition is stated formally as follows for completeness.
Definition 1 (Model-X knockoff variables [15])
For a set of random variables x = (X1,⋯, Xp), a new set of random variables is called a set of model-X knockoff variables if it satisfies the following properties:
For any subset , we have , where denotes equality in distribution and the vector is obtained by swapping Xj and for each .
Conditional on x, the knockoffs vector is independent of response Y.
An important consequence is that the null regressors can be swapped with their knockoffs without changing the joint distribution of the original variables x, their knockoffs , and response Y. That is, we can obtain for any ,
(A.1)
Such a property is known as the exchangeability property using the terminology in [15]. For more details, see Lemma 3.2 therein. Following [15], one can obtain a knockoffs matrix given observed design matrix X.
Using the augmented design matrix and response vector y constructed by stacking the n observations, [15] suggested constructing knockoff statistics , j ∈ {1,⋯, p}, for measuring the importance of the jth variable, where wj is some function that satisfies the property that swapping xj ∈ ℝn with its corresponding knockoff variable changes the sign of Wj; that is,
(A.2)
The knockoff statistics constructed above satisfy the so-called sign-flip property; that is, conditional on the |Wj|’s, the signs of the null Wj’s with are i.i.d. coin flips (each with probability 1/2). For examples of valid constructions of knockoff statistics, see [15].
Let t > 0 be a fixed threshold and define as the set of discovered variables. Then intuitively, the sign-flip property entails
Therefore, the FDP function can be estimated (conservatively) as
for each t. In light of this observation, [15] proposed to choose the threshold by resorting to the above . Their results are summarized formally as follows.
Result 1 ([15])
Let q ∈ (0,1) denote the target FDR level. Assume that we choose a threshold T1 > 0 such that
or T1 = +∞ if the set is empty. Then the procedure selecting the variables controls the mFDR in (4) to no larger than q. Moreover, assume that we choose a slightly more conservative threshold T2 > 0 such that
or T2 = +∞ if the set is empty. Then the procedure selecting the variables controls the FDR in (3) to no larger than q.
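To make the threshold selection in Result 1 concrete, the following is a minimal sketch (our own illustration, not code from [15]) of how T1 and T2 can be computed from the knockoff statistics W1,⋯, Wp and the target level q, using the conservatively estimated FDP described above:

```python
import numpy as np

def knockoff_thresholds(W, q):
    """Compute the knockoff threshold T1 (mFDR control) and the more
    conservative knockoff+ threshold T2 (FDR control) from knockoff
    statistics W and target level q, following the estimated-FDP rule."""
    W = np.asarray(W, dtype=float)
    # Candidate thresholds: the nonzero magnitudes of the W_j's.
    ts = np.sort(np.unique(np.abs(W[W != 0])))
    T1 = T2 = np.inf
    for t in ts:  # scan from the smallest candidate upward
        denom = max(np.sum(W >= t), 1)
        neg = np.sum(W <= -t)
        if T1 == np.inf and neg / denom <= q:
            T1 = t
        if T2 == np.inf and (1 + neg) / denom <= q:
            T2 = t
    return T1, T2

# Toy usage: variables with W_j >= T2 are selected (FDR controlled at level q).
rng = np.random.default_rng(0)
W = np.concatenate([rng.normal(3.0, 1.0, 10), rng.normal(0.0, 1.0, 90)])
T1, T2 = knockoff_thresholds(W, q=0.2)
selected = np.where(W >= T2)[0]
```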
It is worth noting that Result 1 was derived under the assumption that the joint distribution of the p covariates is known. In our model setting (1) and (2), however, there exist unknown parameters that need to be estimated from data. In such a case, it is natural to construct the knockoff variables and knockoff statistics with the estimated distribution of the p covariates. Such a plug-in principle usually leads to the breakdown of the exchangeability property in Definition 1, preventing us from applying Result 1 directly. To address this challenging issue, we will introduce our new method in the next section and provide detailed theoretical analysis for it.
It is also worth mentioning that recently, [6] provided an elegant new line of theory which ensures FDR control of the model-X knockoffs procedure under an approximate exchangeability assumption, which is weaker than the exact exchangeability condition required in Definition 1. However, the conditions they require on the estimation error of the joint distribution of x are difficult to satisfy in high dimensions. [26] investigated the robustness of the model-X knockoffs procedure with respect to unknown covariate distribution when the covariates x follow a joint Gaussian distribution. Their procedure requires data splitting and their proofs rely heavily on the Gaussian distribution assumption, and thus their development may not be suitable for economic data with limited sample size and heavy-tailed distributions. For these reasons, our results complement substantially those in [15], [26], and [6].
C. Proofs of Proposition 1 and some key lemmas
C.1. Proof of Proposition 1
Observe that the second property of Definition 1 holds naturally since is constructed without using the information of y. Thus it remains to verify the first property of Definition 1. Since F0 and have i.i.d. rows, let us consider the case of a single observation and show that for any subset . By Proposition 2 of [15], it suffices to consider the case of for an arbitrary j ∈ {1,⋯, p}. It follows from the definition of model (2) and the construction of that
(A.3)
where and are defined such that . Since model (2) assumes that e has i.i.d. components and is an independent copy of e, it holds that
(A.4)
Therefore, in view of (A.3) and (A.4) and the independence between and c0, we have
which completes the proof of Proposition 1.
C.2. Proof of Lemma 1
For λ fixed at C0n−1/2 log p and each given θ, depends only on by the LCD construction. Moreover, the Lasso solution satisfies the Karush–Kuhn–Tucker (KKT) conditions:
(A.5)
(A.6)
This means that depends on the data only through U(θ) and v(θ). Thus, using the notation T(θ) = vec(vech U(θ), v(θ)) together with the fact that U(θ) is symmetric, we can reparametrize as wj(T(θ)) with a slight abuse of notation. Furthermore, note that the thresholds T1 and T2 are both completely determined by the wj(T(θ))’s. Consequently, by the construction of we can see that depends only on T(θ), which completes the proof of Lemma 1.
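For the reader's convenience, here is a standard way of writing the Lasso KKT conditions that makes this dependence explicit; it is only a sketch in generic notation, with U(θ) and v(θ) understood as the Gram matrix and cross-product of the augmented design, which is our reading and may differ in constants from (A.5)–(A.6):

```latex
% Lasso objective: \min_{b \in \mathbb{R}^{2p}} (2n)^{-1}\|y - [X, \widetilde{X}(\theta)]\, b\|_2^2 + \lambda \|b\|_1 .
% With U(\theta) = n^{-1}[X,\widetilde{X}(\theta)]'[X,\widetilde{X}(\theta)] and
% v(\theta) = n^{-1}[X,\widetilde{X}(\theta)]'\, y, the stationarity and subgradient conditions read
\begin{aligned}
  v(\theta) - U(\theta)\,\widehat{b}(\theta) &= \lambda\,\widehat{z},\\
  \widehat{z}_j &= \operatorname{sign}\bigl(\widehat{b}_j(\theta)\bigr) \quad \text{if } \widehat{b}_j(\theta)\neq 0,
  \qquad |\widehat{z}_j|\le 1 \quad \text{otherwise}.
\end{aligned}
```

In particular, the Lasso solution, and hence each wj, is a function of (U(θ), v(θ)) alone.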
C.3. Proof of Lemma 2
We continue to use the same λ and θ as in Lemma 1 and its proof. Recall that represents the outcome of first restricting ourselves to the smaller set of variables and then applying IPAD to to further select variables from . Also recall that is the support of knockoff statistics Wj(θ). Thus the knockoff threshold T1 or T2 depends only on Wj(θ) with .
On the other hand, when we restrict ourselves to we solve the following KKT conditions with respect to to get the Lasso solution:
(A.7)
(A.8)
Since λ is always fixed at the same value C0n−1/2 log p, it is seen that the solution to the above KKT conditions is identical to , where the latter denotes the subvector of formed by stacking , and , all together. Therefore, the Lasso solution to (A.7)–(A.8) and the Lasso solution to (A.5)–(A.6) have the identical support (when viewed in the original 2p-dimensional space) and in addition, identical values on the support. This guarantees that S{1,⋯,p}(T(θ)) and are identical and thus concludes the proof of Lemma 2.
C.4. Lemma 3 and its proof
Lemma 3
Assume that Conditions 2–5 hold. Then with probability at least 1−O(πnp), the estimator lies in the shrinking set given by
where cnp = (n−1 log p)1/2 + (p−1 log n)1/2 and πnp = p−ν + n−ν.
Proof. We divide the proof into two parts. We prove the bound for in Part 1 and then for in Part 2.
Part 1
Note that , where the maximum is taken over i ∈ {1,⋯, n} and j ∈ {1,⋯, p}. We write and with rotation matrix H defined in Lemma 6 in Section D.1. From the definition of cij, it holds that
From Lemma 6, we can assume ‖H‖2 + ‖H−1‖2 + ‖V‖2 + ‖V−1‖2 ≲ 1, which occurs with probability at least 1 − O(p−ν). We also have a.s. by the assumed restriction as mentioned on p.213 of [3]. Hence, the triangle and Cauchy–Schwarz inequalities with Conditions 2 and 3 give
(A.9)
Then it is sufficient to derive upper bounds for maxi ‖f̂i − fi∗‖2 and that hold with high probability. Using the decomposition of A.1 in [2] along with taking maximum over i, ℓ ∈ {1,⋯, n}, we can deduce
(A.10)
where we have used the boundedness of discussed above and in Condition 2 for the second inequality, and defined and . Similarly, the expression on p.165 of [2] with taking maximum over i ∈ {1,⋯, n} and j ∈ {1,⋯, p} leads to
(A.11)
where and , and the Cauchy–Schwarz inequality has been used to obtain the second inequality. To evaluate R4, we note that
The first term is bounded by . For the second term, Lemma 7(a) in Section D.2 with p replaced by n and the union bound give
for all 0 ≤ u ≤ c. Thus putting u = (C(ν + 1)n−1 log p)1/2 and using condition cnp ≤ c/(r2M2C(ν + 2))1/2, we obtain with probability at least 1 − O(p−ν). This together with the observation from (A.9)–(A.11) yields
Hence the convergence rate of is determined by the slowest term out of R1, R2, R3, and O(n−1). We evaluate these terms by Lemma 7 in Section D.2 and the union bound with condition cnp ≤ c/(r2M2C(ν + 2))1/2 as above. First for R1, Lemma 7(a) by letting u1 = (C(ν + 2)p−1 log n)1/2 results in
Next for R2, Lemma 7(c) with u2 = (2(ν + 1)p−1 log n)1/2 gives
Finally for R3, Lemma 7(b) with putting u3 = (C(ν + 1)n−1 log p)1/2 leads to
Consequently, we obtain the first result , which holds with probability at least 1 − O(πnp).
Part 2
Next we derive the convergence rate of . It is sufficient to prove only the case when η0 is a scalar (so that we write ) since the dimensionality m is fixed and the components share the identical property thanks to Condition 4. Recall the notation . Letting , we have . For an arbitrary fixed k ∈ {1,⋯, m}, the binomial expansion entails
(A.12)
For all k ∈ {1,⋯, m}, the strong law of large numbers with Theorem 2.5.7 in [22] entails a.s. under Condition 4. Furthermore, the second term of (A.12) is O(cnp) with probability at least 1 − O(πnp) from Part 1 and the same law of large numbers. Consequently, we obtain
Therefore by the construction of and local Lipschitz continuity of h1 in Condition 4, we see that
with probability at least 1 − O(πnp). This completes the proof of Lemma 3.
C.5. Lemma 4 and its proof
Lemma 4
Assume that Conditions 1–4 hold. Then with probability at least 1−O(πnp), the following statements hold
where Θnp was defined in Lemma 3 and . Consequently, we have
Proof. To complete the proof of (a), we verify the following
From (a–i) and (a–ii), we can conclude that
which yields result (a).
We begin by showing (a–i); this is the uniform extension of Lemma 8(a) in Section D.3 over . In fact, the proof is almost the same, the only difference being that bound (A.23) should be replaced with the bound derived in Lemma 9(c); that is,
(A.13)
which holds with probability at least 1 − O(p−ν). Notice that (kn−1 log p)1/2 ≤ log1/2 p. Therefore, even if we use (A.13) instead of (A.23) in the proof of Lemma 8(a) we can still derive the same convergence rate as in Lemma 8(a), and hence (a–i) holds with probability at least 1 − O(πnp).
For (a–ii), we see that
(A.14)
We derive the bounds for each of these terms. First, W1 is bounded as
Under Condition 3, we deduce
From Lemma 7(d) with Condition 2 and the union bound, we have
Hence, letting u = (C(ν + 2)n−1 log p)1/2 above yields the bound W1,1 ≲ (n−1 log p)1/2 with probability at least 1−O(p−ν). Next for W1,2, we can find from Lemma 7(a) with p replaced by n and the union bound that
Letting u = (C(ν + 2)n−1 log p)1/2 and using n−1 log p ≤ c2/(C(ν + 2)), we obtain W1,2 ≲ (n−1 log p)1/2 with probability at least 1 − O(p−ν). Next for W1,3, the union bound gives
Lemma 7(b) states that for all 0 ≤ u/(rM) ≤ c/(rM) it holds that
Therefore, if we put u = rM(C(ν + 1)n−1 log p)1/2 using n−1 log p ≤ c2/(r2M2C(ν + 1)), the upper bound of the probability is further bounded by 2rp−ν. Thus we obtain W1,3 ≲ (n−1 log p)1/2 with probability at least 1 − O(p−ν). Consequently, the bound of W1 is
with probability at least 1 − O(p−ν). Note that we have the same result for W2 since it has the same distribution as W1. Finally, W3 is bounded as
The upper bound of W3,1 turns out to be O((n−1 log p)1/2), which holds with probability at least 1−O(p−ν). We check this claim. Using the union bound and the inequality of Lemma 7(a) with p replaced by n and putting u = (C(ν + 2)n−1 log p)1/2 yields
Finally, W3,2 is found to have the same bound as W1,3 because is an independent copy of E. Consequently, with probability at least 1 − O(p−ν), we obtain
This completes the proof of (a) since p−ν/πnp = O(1).
Next we show (b) by verifying the following
Similarly to the proof of (a), we need to modify the proof of Lemma 8(b) in Section D.3 to obtain the uniform bound with respect to , but the obtained result is already uniform over the choice of . Thus the same upper bound holds and (b–i) follows. Next we show (b–ii). It holds that
These terms can be bounded by the results obtained in the proof of (a–ii). We see that
with probability at least 1 − O(p−ν). Next we deduce
The first and second terms can be bounded in the same way as W1,3 and W3,1 in the proof of (a) above, with E and replaced by ε, respectively. Then the first term dominates the second and hence Z2 ≲ (n−1 log p)1/2 with probability at least 1−O(p−ν). Similarly, we can obtain
with probability at least 1−O(p−ν). Note that Z4 has the same bound as Z2. Consequently, collecting terms leads to the result, Z1 + ⋯ + Z4 ≲ s(n−1 log p)1/2 with probability at least 1 − O(p−ν). This proves (b–ii) and concludes the proof of Lemma 4.
C.6. Lemma 5 and its proof
Lemma 5
Assume that all the conditions of Theorem 2 hold. Then with probability at least 1 − O(πnp), the Lasso solution in (19) satisfies
where λ = c1n−1/2 log p with c1 some positive constant.
Proof. Let . We start by introducing two inequalities
(A.15)
(A.16)
where λ = c1n−1/2 log p for some positive constant c1 and
(A.17)
It is well known that the rate of convergence of the Lasso estimator can be obtained provided that (A.15) and (A.16) hold. Thus we show that these two inequalities actually hold with high probability in Step 1, and then derive the convergence rate using (A.15) and (A.16) in Step 2.
Step 1
We check whether (A.15) and (A.16) actually hold with high probability. We first verify (A.15). By the proofs of Lemmas 8 and 4, we have
The first and third terms can both be upper bounded by O(n−1/2 log p) with probability at least 1−O(p−ν), following the same lines as in deriving the bound for Z2 in the proof of Lemma 4. To evaluate the second term, we can use the argument about V2 and its upper bound (A.24) in the proof of Lemma 8. That bound still holds with the same rate O(n−1/2 log p) even if we take . Thus we conclude that (A.15) holds for the given λ by choosing an appropriately large positive constant c1, with probability at least 1 − O(πnp).
Next to verify (A.16), we derive the population lower bound first and then show that the difference is negligible. From the construction, we have
Using these equations, we obtain the lower bound
(A.18)
Because is sparse and satisfies for , it holds that and . Hence from Lemma 4 together with the condition on dimensionality, we obtain
(A.19)
with probability at least 1 − O(πnp). Thus using (A.19), we have for any ,
Rearranging the terms with (A.18) yields
resulting in (A.16). In consequence, two inequalities (A.15) and (A.16) hold with probability at least 1 − O(πnp).
Step 2
This part is well known in the literature (e.g., [33]), so we only sketch the proof and omit the details. Because the objective function is given by
the global optimality of the Lasso estimator implies
where the true parameter vector βaug was defined in the proof of Theorem 2. Note that by the assumption. Expanding the inequality and collecting terms with (A.15) yield
(A.20)
On the other hand, applying Lemma 1 of [33] to our model reveals that . Thus we can use (A.16), (A.20), and (A.17) to get
Since and , it holds that . Since , we obtain the desired bound . This bound holds uniformly over θ ∈ Θnp, which completes the proof of Lemma 5.
D. Additional technical lemmas and their proofs
D.1. Lemma 6 and its proof
Lemma 6
Denote by V ∈ ℝr×r the diagonal matrix whose entries are the r largest eigenvalues of (np)−1XX′ and define . Assume that Conditions 2–5 hold. Then ‖H‖2 + ‖H−1‖2 + ‖V‖2 + ‖V−1‖2 is bounded from above by some constant with probability at least 1 − O(p−ν).
Proof. Let λk[A] denote the kth largest eigenvalue of a square matrix A throughout the proof. Because ‖Λ0′Λ0/p‖2 ≤ M and
by Conditions 2–3, and , we have
where ‖V−1‖2 is equal to the reciprocal of the rth largest eigenvalue of (np)−1XX′. Similarly, under Conditions 2–3 we also have
where ‖V‖2 is equal to the largest eigenvalue of (np)−1XX′ and the inverse matrix in the upper bound is well defined by [2]. To see that is bounded from above, it suffices to bound the minimum eigenvalue of away from zero uniformly in n. Regarding the r eigenvalues of this matrix, Sylvester’s law of inertia (e.g., [31], Theorem 4.5.8) entails that all the r eigenvalues are positive for all n. Moreover, by Proposition 1 of [2] we know that the limiting matrix of is nonsingular under Conditions 2 and 5. Therefore, we can conclude that a.s., and hence ‖H−1‖2 ≲ ‖V‖2 follows.
To complete the proof, it is sufficient to show that the maximum and rth largest eigenvalues of (np)−1XX′ are bounded from above and away from zero, respectively, for all large n and p. By the definition of the spectral norm and triangle inequality, we have
By Conditions 2 and 3, the first term is a.s. bounded by a constant as discussed above. The second term is O((n ∧ p)−1/2) = o(1) with probability at least 1 − 2exp(−|O(n ∨ p)|) by Lemma 9(a) under Condition 4. Therefore, the largest eigenvalue of (np)−1XX′ is bounded from above by some constant with probability at least 1 − 2exp(−|O(n ∨ p)|).
Next we bound the rth largest eigenvalue of (np)−1XX′ away from zero. Since the matrix is symmetric, Weyl’s inequality (e.g., [31], Theorem 4.3.1) yields
(A.21)
The third term of lower bound (A.21) is obviously nonnegative. For the first term of lower bound (A.21), let denote a subspace of ℝn. Because is symmetric, the Courant–Fischer min-max Theorem (e.g., [31], Theorem 4.2.6) yields
In this lower bound, the first term is bounded away from zero by Conditions 2–3. Meanwhile, to evaluate the second term we use Lemma 7(d) in Section D.2, which together with the union bound establishes
for any 0 ≤ u ≤ c. Thus the second one turns out to be O((n−1 log p)1/2) = o(1) with probability at least 1 − O(p−ν) once we set u = (Cνn−1 log p)1/2 and assume n−1 log p ≤ c2/(Cν) without loss of generality. Therefore, the first term of lower bound (A.21) is bounded away from zero eventually. For the second term of (A.21), since the spectral norm gives the upper bound of the spectral radius we have
which holds with probability at least 1 − 2exp(−|O(n ∨ p)|) by Lemma 9(a) in Section D.4. As a consequence, the desired result holds with probability at least 1 − O(p−ν) and this concludes the proof of Lemma 6.
D.2. Lemma 7 and its proof
Lemma 7
Assume that Conditions 2–4 hold. Then there exist some positive constants c and C such that the following inequalities hold
- For all ℓ,i ∈ {1,⋯, n} and 0 ≤ u ≤ c, we have
- For all k ∈ {1,⋯, r}, j ∈ {1,⋯, p}, and 0 ≤ u ≤ c, we have
- For all k ∈ {1,⋯, r}, i ∈ {1,⋯, n}, and u ≥ 0, we have
- For all k, ℓ ∈ {1,⋯, r} and 0 ≤ u ≤ c, we have
Proof. (a) To obtain the first result, we rely on the Hanson–Wright inequality. Let ξ = (ξ1,⋯, ξm)′ ∈ ℝm denote a random vector whose components are independent copies of . Then the inequality states that for any (nonrandom) matrix A ∈ ℝm×m,
(A.22)
where K is a positive constant such that and is a positive constant. In our setting, we can take (e.g., Lemma 1.4 of [34]). Using this inequality, we first prove the case when ℓ = i. If we set m = p and A = diag(p−1, …, p−1), then we have
for all i. Moreover, we obtain and ‖A‖2 = p−1 in this case. The assumed condition entails that u2/K4 ≤ u/K2 so the result follows from (A.22) with replaced by .
Similarly, we prove the case when ℓ ≠ i. We set m = p + 1 and A = (a1,⋯, ap+1), where a1 = (0,p−1,⋯, p−1)′ and aj = 0 for j = 2,⋯, p + 1. That is, the entries of A are all zero except that the second to (p + 1)th components in the first column vector are p−1. Under this setting, we observe that
for all ℓ ≠ i. Moreover, we obtain in this case. Therefore, the same bound holds as in the case of ℓ = i from (A.22) again. Consequently, for any we have
(b) We prove the second assertion by Bernstein’s inequality for the sum of a martingale difference sequence (e.g., Theorem 3.14 in [11]). Fix k = 1 and j = 1. Define as the σ-field generated from . Then forms a martingale difference sequence because and under Conditions 2 and 4. Since the sub-Gaussianity of ei1 implies (e.g., Lemma 1.4 of [34]), we have , and hence a.s. due to boundedness a.s. On the other hand, by the sub-Gaussianity of eij and boundedness of again we observe that for all p ≥ 3 and i ∈ {1,⋯, n},
where Γ denotes the Gamma function and we have used the estimates pΓ(p/2) ≤ p! and 2p/2−2 ≤ 2p−2/2 for p ≥ 3 in the last inequality. Then an application of Theorem 3.14 in [11] by putting x = u, , and c = 2MCe in their notation gives the one-sided result. Doubling the bound yields
For all , the upper bound is further bounded by . We set . Consequently, for any we have
(c) We prove the third inequality. Note that
This implies that is a sequence of i.i.d. . Thus the result is obtained directly by Bernstein’s inequality for the sum of independent sub-Gaussian random variables. Consequently, for any u ≥ 0 putting leads to
(d) We show the last inequality. Note that for each k, (fik)i ∼ i.i.d. subG(M2) since a.s. by Lemma 1.8 of [34] under Condition 2. Thus the rest of the argument is the same as in (a). Set here. Then for any 0 ≤ u ≤ 9M2, we have
Finally, the obtained inequalities hold even if the constant in the upper bound is replaced with an arbitrary fixed constant C such that C ≥ max{CH, CI, CJ, CK}. Similarly, we can also restrict the range of u in each inequality to 0 ≤ u ≤ c for an arbitrary fixed constant c that satisfies . This completes the proof of Lemma 7.
D.3. Lemma 8 and its proof
Lemma 8
Assume that Conditions 1–4 hold. Then for any set satisfying , the following statements hold with probability at least 1 − O(πnp)
where Θnp was defined in Lemma 3 and . Consequently, we have
Proof. We first state some results that are useful in the proof. Since ‖n−1/2F0‖2 = O(1) a.s. by Condition 2 and for any such that under Condition 3, we have
Next Lemma 9(b) in Section D.4 gives directly
(A.23)
with probability at least 1 − O(p−ν). By Condition 4, we also deduce
for any u ≥ 0. Thus setting with some large enough positive constant M, we obtain that with probability at least 1 − O((np)−ν),
We will use these results and Lemma 10 in Section D.5 in the proofs below.
To prove (a), we have
Observe that U1 is further bounded as
By Lemma 10, it is easy to see that
where the last estimate follows from Lemma 3. Similarly, we deduce
and
Combining these bounds of U11–U13, we have
This holds uniformly in θ ∈ Θnp with probability at least 1 − O(πnp) by Lemma 3 and the discussion above. Next we obtain
This also holds uniformly in θ ∈ Θnp with probability at least 1 − O(πnp) by Lemma 3 and the discussion above. Consequently, it holds that
with probability at least 1 − O(πnp).
To prove (b), we have
First, because we see that
Recall that and s ≤ n ∧ p. By a bound similar to that of U2, the norm just above can be bounded further as
Thus we have
with probability at least 1 − O(πnp). Next the same procedure yields
(A.24)
where a.s. by the law of large numbers for independent random variables. Since the results hold uniformly in θ ∈ Θnp, combining them leads to
with probability at least 1 − O(πnp). This concludes the proof of Lemma 8.
D.4. Lemma 9 and its proof
Lemma 9
Assume that Condition 4 holds. Then the following statements hold
- We have
- For any fixed set with , we have
- For all k ≤ n, we have
where ν > 0 is a predetermined constant.
Proof. Result (a) is obtained by Theorem 5.39 of [40]. Moreover, by the same theorem there exist some positive constants c and C such that for any with and every t ≥ 0,
where . Therefore, result (b) is immediately obtained by putting t2 = c−1ν log p since n−1/2t = o(1) and exp (−ct2) = p−v in this case.
For (c), taking the union bound leads to
Set t2 = c−1(ν + k) log p in this inequality. Then we have n−1/2t = O((n−1k log p)1/2) and
which gives result (c) and completes the proof of Lemma 9.
D.5. Lemma 10 and its proof
Lemma 10
For matrices A ∈ ℝk×n and B ∈ ℝn×m, we have ‖AB‖max ≤ n1/2‖A‖2‖B‖max and ‖AB‖max ≤ n1/2‖A‖max‖B‖2.
Proof. For any matrix M = (mij) ∈ ℝk×n, let ‖M‖∞,∞ denote the induced ℓ∞-norm. First, we have
Therefore, by a simple calculation we see that
The second assertion follows from applying this inequality to B′A′. This concludes the proof of Lemma 10.
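As a quick numerical sanity check of Lemma 10 (our own illustration under the assumption A ∈ ℝk×n and B ∈ ℝn×m, not part of the proof), one can verify both inequalities on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, m = 7, 50, 11
A = rng.standard_normal((k, n))
B = rng.standard_normal((n, m))

max_norm_AB = np.max(np.abs(A @ B))      # ||AB||_max
spec_A = np.linalg.norm(A, 2)            # ||A||_2 (spectral norm)
spec_B = np.linalg.norm(B, 2)            # ||B||_2
max_A = np.max(np.abs(A))                # ||A||_max
max_B = np.max(np.abs(B))                # ||B||_max

# Both bounds from Lemma 10 should hold (up to floating point error).
assert max_norm_AB <= np.sqrt(n) * spec_A * max_B + 1e-10
assert max_norm_AB <= np.sqrt(n) * max_A * spec_B + 1e-10
```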
E. Additional numerical details and results
E.1. Estimation procedure
In implementing the IPAD algorithm suggested in Section 2, we use the PCp1 criterion proposed in [3] to estimate the number of factors r. With an estimated number of factors , we use the principal component method discussed in Section 3.2 to obtain an estimate of matrix C0. Denote by . Recall that in the construction of the knockoff variables, the distribution of E needs to be estimated. Throughout our simulation studies, we misspecify the model and treat the entries of E as i.i.d. Gaussian random variables. Under this working model assumption, the only unknown parameter is the variance, which can be estimated by the following maximum likelihood estimator
Then the knockoffs matrix is constructed using (8) with the entries of drawn independently from . For the two comparison methods BCKnockoff and HD-BCKnockoff, we follow the implementation in [4] and [5], respectively. Thus it is seen that neither BCKnockoff nor HD-BCKnockoff uses the factor structure in X when constructing the knockoff variables.
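For concreteness, the following is a minimal sketch of this knockoff construction under the i.i.d. Gaussian working model for E; the function name and the plain SVD-based principal components estimator are our illustrative choices and need not match the exact implementation:

```python
import numpy as np

def ipad_knockoffs(X, r_hat, rng=None):
    """Sketch of the IPAD knockoff construction: estimate the common
    component by principal components with r_hat factors, estimate the
    idiosyncratic variance under an i.i.d. Gaussian working model, and
    add freshly drawn errors to the estimated common component."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    # Principal component estimation of the factor structure:
    # rank-r_hat reconstruction from the top singular vectors of X.
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    C_hat = (U[:, :r_hat] * d[:r_hat]) @ Vt[:r_hat, :]   # estimated common component
    E_hat = X - C_hat                                    # estimated idiosyncratic errors
    sigma2_hat = np.mean(E_hat ** 2)                     # Gaussian MLE of the error variance
    # Knockoffs: same estimated common component plus fresh Gaussian errors.
    E_tilde = rng.normal(scale=np.sqrt(sigma2_hat), size=(n, p))
    return C_hat + E_tilde
```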
In Designs 1–3, with the constructed empirical knockoffs matrix we apply the Lasso method to fit the model with y the response vector and the augmented design matrix. The value of the regularization parameter λ is chosen by tenfold cross-validation. Then the LCD discussed in Section 2.2 is used in the construction of the knockoff statistics. In Design 4, we assume a nonlinear relationship between the response and the covariates. In this case, random forest is used for estimation of the model. To construct the knockoff statistics, we use the variable importance measure of mean decrease accuracy (MDA) introduced in [14]. This measure is based on the idea that if a variable is unimportant, then rearranging its values should not degrade the prediction accuracy. The MDA for the jth variable, denoted as , measures the amount of increase in prediction error when the values of the jth variable in the out-of-sample prediction are permuted randomly. Then intuitively, will be small and around zero if the jth variable is unimportant in predicting the response. For each original variable xj, we compute the Wj statistic as .
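As an illustration of the LCD construction used in Designs 1–3, a minimal sketch is given below (relying on scikit-learn's LassoCV, which is our choice and not necessarily the original implementation); for Design 4, a natural analogue replaces the Lasso coefficient magnitudes by the MDA importances of xj and of its knockoff.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lcd_statistics(X, X_knock, y, cv=10, seed=0):
    """Lasso coefficient-difference (LCD) knockoff statistics: fit the Lasso
    on the augmented design [X, X_knock] with lambda chosen by tenfold
    cross-validation, then set W_j = |beta_j| - |beta_{j+p}|."""
    p = X.shape[1]
    X_aug = np.hstack([X, X_knock])
    beta = LassoCV(cv=cv, random_state=seed).fit(X_aug, y).coef_
    return np.abs(beta[:p]) - np.abs(beta[p:])
```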
E.2. Simulation study
To evaluate the performance of the IPAD approach in terms of empirical FDR and power with real economic data, we set up one additional Monte Carlo simulation study. In this design, we use the transformed macroeconomic variables described above as the design matrix X, but simulate the response y from the model in Design 1 in Section 4.1. We set the number of true signals, the amplitude of signals, and the target FDR level to s = 10, A = 4, and q = 0.2, respectively.
Table 7 shows the results for the IPAD and HD-BCKnockoff approaches. As expected, HD-BCKnockoff can control FDR but suffers from lack of power. On the other hand, IPAD has empirical FDR slightly higher than the target level (q = 0.2) while its power is reasonably high. These results are consistent with our theory in Section 3 because IPAD controls FDR only asymptotically. An additional reason for the slightly higher FDR than the target level can be the deviation of the design matrix from our factor model assumption. Overall, this simulation study indicates that IPAD can control FDR at around the target level with reasonably high power when we use the macroeconomic data set. In the next section, using the same data set we will compare the forecasting performance of IPAD with that of some commonly used forecasting methods in the literature.
Table 7:
Real data simulation results with (n,p) = (195,109)
| | FDR | Power | FDR+ | Power+ | R2 |
|---|---|---|---|---|---|
| c = 0.2 | | | | | |
| IPAD | 0.278 | 0.812 | 0.223 | 0.796 | 0.747 |
| HD-BCKnockoff | 0.096 | 0.009 | 0.010 | 0.002 | 0.758 |
| c = 0.3 | | | | | |
| IPAD | 0.280 | 0.757 | 0.221 | 0.723 | 0.665 |
| HD-BCKnockoff | 0.149 | 0.121 | 0.027 | 0.036 | 0.678 |
| c = 0.5 | | | | | |
| IPAD | 0.286 | 0.661 | 0.215 | 0.571 | 0.560 |
| HD-BCKnockoff | 0.119 | 0.009 | 0.008 | 0.001 | 0.554 |
E.3. Methods of comparison in empirical analysis
We compare the following methods in the empirical analysis presented in Section 5, where each method is implemented in the same way as IPAD for one-step ahead prediction.
- Autoregression of order one (AR(1)). Assume that
where yt is regressed on yt−1, and α0 and ρ are the AR(1) coefficients that need to be estimated. With the ordinary least squares estimates and , the one-step ahead prediction based on this model is .
- Factor augmented AR(1) (FAR). We first extract m factors f1,⋯, fm from the 109 transformed macroeconomic variables by principal component analysis (PCA). Denote by the factor vector at time t extracted from the rows of the matrix [f1,⋯, fm] ∈ ℝn×m. Then we regress yt on yt−1 and the extracted factors, and fit the following model
with γ ∈ ℝm. The number of factors m is determined using the PCp1 criterion in [3]. Similar to the AR(1) model, the one-step ahead forecast of yt at time T is .
- Lasso method. Here yt is regressed on yt−1, , and the 108 transformed macroeconomic variables zt−1 ∈ ℝ108 at time t − 1
where is the same as in the FAR model, and α0, ρ, and δ ∈ ℝ108 are regression coefficients that need to be estimated. The coefficients are estimated by the Lasso method with the regularization parameter chosen by cross-validation. With the estimated Lasso coefficient vector , the one-step ahead forecast of yt at time T is
where xT is the augmented predictor vector at time T.
- IPAD method. We regress yt on the augmented vector . The lagged variable yt−1 is assumed to always be in the model. To account for this, we implement IPAD in three steps. First, we regress yt on yt−1 and obtain the residuals ey,t. Second, we regress each of the 108 variables in zt−1 on yt−1 and obtain the residual vector ez,t−1. Last, we fit model (1)–(2) using the IPAD approach by treating ey,t as the response and ez,t−1 as the predictors, which returns a set of selected variables (a subset of the 108 macroeconomic variables). With the set of variables selected by IPAD, we fit the following model by least-squares regression
(A.25)
where stands for the subvector of zt corresponding to the set of variables selected by IPAD at time t. Since the set selected by IPAD is random due to the randomness in generating the knockoff variables, we apply the IPAD procedure 100 times, compute the corresponding 100 one-step ahead predictions based on (A.25), and use their average as the final predicted value of yT+1. A sketch of this three-step procedure is given after this list.
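The sketch below illustrates the three-step IPAD forecasting procedure just described; ipad_select is a hypothetical helper standing in for one run of the IPAD selection of Section 2, and the inclusion of intercepts is our own simplifying assumption.

```python
import numpy as np

def ipad_forecast(y, Z, ipad_select, n_repeat=100, q=0.2, rng=None):
    """Sketch of the IPAD forecast of the next value of y.

    y : response history of length T; Z : (T, 108) matrix whose row t holds z_t;
    ipad_select(e_y, E_z, q, rng) : hypothetical helper returning the indices
    selected by a single IPAD run on the residualized data."""
    rng = np.random.default_rng() if rng is None else rng
    y_t, y_lag, Z_lag = y[1:], y[:-1], Z[:-1, :]
    D = np.column_stack([np.ones_like(y_lag), y_lag])
    # Step 1: residualize y_t on (1, y_{t-1}).
    e_y = y_t - D @ np.linalg.lstsq(D, y_t, rcond=None)[0]
    # Step 2: residualize each macro variable z_{j,t-1} on (1, y_{t-1}).
    E_z = Z_lag - D @ np.linalg.lstsq(D, Z_lag, rcond=None)[0]
    preds = []
    for _ in range(n_repeat):
        # Step 3: one IPAD run (random because the knockoffs are redrawn).
        sel = ipad_select(e_y, E_z, q, rng)
        # Refit (A.25) by least squares and form the one-step ahead forecast.
        X_fit = np.column_stack([np.ones_like(y_lag), y_lag, Z_lag[:, sel]])
        b = np.linalg.lstsq(X_fit, y_t, rcond=None)[0]
        x_new = np.concatenate([[1.0, y[-1]], Z[-1, sel]])
        preds.append(float(x_new @ b))
    # Average the repeated forecasts to obtain the final prediction.
    return float(np.mean(preds))
```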
References
- [1].Ahn SC and Horenstein AR (2013). Eigenvalue ratio test for the number of factors. Econometrica 81, 1203–1227.
- [2].Bai J (2003). Inferential theory for factor models of large dimensions. Econometrica 71, 135–171.
- [3].Bai J and Ng S (2002). Determining the number of factors in approximate factor models. Econometrica 70, 191–221.
- [4].Barber RF and Candès EJ (2015). Controlling the false discovery rate via knockoffs. The Annals of Statistics 43, 2055–2085.
- [5].Barber RF and Candès EJ (2016). A knockoff filter for high-dimensional selective inference. arXiv preprint arXiv:1602.03574.
- [6].Barber RF, Candès EJ, and Samworth RJ (2018). Robust inference with knockoffs. arXiv preprint arXiv:1801.03896.
- [7].Belloni A, Chernozhukov V, Chetverikov D, Hansen C, and Kato K (2018). High-dimensional econometrics and regularized GMM. arXiv preprint arXiv:1806.01888.
- [8].Benjamini Y (2010). Discovering the false discovery rate. Journal of the Royal Statistical Society Series B 72, 405–416.
- [9].Benjamini Y and Hochberg Y (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B 57, 289–300.
- [10].Benjamini Y and Yekutieli D (2001). The control of the false discovery rate in multiple testing under dependency. The Annals of Statistics 29, 1165–1188.
- [11].Bercu B, Delyon B, and Rio E (2015). Concentration Inequalities for Sums and Martingales (1st ed.). Springer.
- [12].Billingsley P (1995). Probability and Measure (3rd ed.). Wiley-Interscience.
- [13].Bonferroni CE (1935). Il calcolo delle assicurazioni su gruppi di teste. Studi in Onore del Professore Salvatore Ortu Carboni, 13–60.
- [14].Breiman L (2001). Random forests. Machine Learning 45, 5–32.
- [15].Candès EJ, Fan Y, Janson L, and Lv J (2018). Panning for gold: ‘model-X’ knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society Series B 80, 551–577.
- [16].Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C, Newey W, and Robins J (2018). Double/debiased machine learning for treatment and structural parameters. Econometrics Journal 21, C1–C68.
- [17].Chernozhukov V, Härdle WK, Huang C, and Wang W (2018). Lasso-driven inference in time and space. arXiv preprint arXiv:1806.05081.
- [18].Chernozhukov V, Newey W, and Robins J (2018). Double/de-biased machine learning using regularized Riesz representers. arXiv preprint arXiv:1802.08667.
- [19].Chudik A, Kapetanios G, and Pesaran H (2018). A one covariate at a time, multiple testing approach to variable selection in high-dimensional linear regression models. Econometrica, to appear.
- [20].De Mol C, Giannone D, and Reichlin L (2008). Forecasting using a large number of predictors: Is Bayesian shrinkage a valid alternative to principal components? Journal of Econometrics 146, 318–328.
- [21].Diebold FX and Mariano RS (1995). Comparing predictive accuracy. Journal of Business & Economic Statistics 20, 134–144.
- [22].Durrett R (2010). Probability: Theory and Examples (4th ed.). Cambridge University Press.
- [23].Fan J and Fan Y (2008). High-dimensional classification using features annealed independence rules. The Annals of Statistics 36, 2605–2637.
- [24].Fan J, Han X, and Gu W (2012). Estimating false discovery proportion under arbitrary covariance dependence (with discussion). Journal of the American Statistical Association 107, 1019–1045.
- [25].Fan J and Lv J (2008). Sure independence screening for ultrahigh dimensional feature space (with discussion). Journal of the Royal Statistical Society Series B 70, 849–911.
- [26].Fan Y, Demirkaya E, Li G, and Lv J (2019). RANK: large-scale inference with graphical nonlinear knockoffs. Journal of the American Statistical Association, to appear.
- [27].Fan Y, Demirkaya E, and Lv J (2019). Nonuniformity of p-values can occur early in diverging dimensions. Journal of Machine Learning Research, to appear.
- [28].Fan Y and Lv J (2013). Asymptotic equivalence of regularization methods in thresholded parameter space. Journal of the American Statistical Association 108, 1044–1061.
- [29].Guo Z, Kang H, Cai TT, and Small DS (2018). Confidence intervals for causal effects with invalid instruments by using two-stage hard thresholding with voting. Journal of the Royal Statistical Society Series B 80, 793–815.
- [30].Holm S (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70.
- [31].Horn RA and Johnson CR (2012). Matrix Analysis (2nd ed.). Cambridge University Press.
- [32].Lv J (2013). Impacts of high dimensionality in finite samples. The Annals of Statistics 41, 2236–2262.
- [33].Negahban SN, Ravikumar P, Wainwright MJ, and Yu B (2012). A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science 27, 538–557.
- [34].Rigollet P and Hütter J-C (2017). High Dimensional Statistics. Massachusetts Institute of Technology, MIT OpenCourseWare.
- [35].Romano JP and Wolf M (2005). Exact and approximate stepdown methods for multiple hypothesis testing. Journal of the American Statistical Association 100, 94–108.
- [36].Shah RD and Bühlmann P (2018). Goodness-of-fit tests for high dimensional linear models. Journal of the Royal Statistical Society Series B 80, 113–135.
- [37].Stock JH and Watson MW (2012). Generalized shrinkage methods for forecasting using many predictors. Journal of Business & Economic Statistics 30, 481–493.
- [38].Stucky B and van de Geer S (2018). Asymptotic confidence regions for high-dimensional structured sparsity. IEEE Transactions on Signal Processing 66, 2178–2190.
- [39].Tibshirani R (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B 58, 267–288.
- [40].Vershynin R (2012). Introduction to the non-asymptotic analysis of random matrices. In Eldar YC and Kutyniok G (Eds.), Compressed Sensing: Theory and Practice, pp. 210–268. Cambridge University Press.
- [41].Vizcarra AB and Viens FG (2007). Some applications of the Malliavin calculus to sub-Gaussian and non-sub-Gaussian random fields. In Dalang RC, Dozzi M, and Russo F (Eds.), Seminar on Stochastic Analysis, Random Fields and Applications V, pp. 363–395. Springer Science & Business Media.
- [42].Wooldridge JM and Zhu Y (2018). Inference in approximately sparse correlated random effects probit models. Journal of Business & Economic Statistics, to appear.
- [43].Zhang X and Cheng G (2017). Simultaneous inference for high-dimensional linear models. Journal of the American Statistical Association 112, 757–768.