Published in final edited form as: J Am Stat Assoc. 2019 Apr 11;115(529):362–379. doi: 10.1080/01621459.2018.1546589

RANK: Large-Scale Inference with Graphical Nonlinear Knockoffs *

Yingying Fan 1, Emre Demirkaya 1, Gaorong Li 2, Jinchi Lv 1
PMCID: PMC7394464  NIHMSID: NIHMS1021107  PMID: 32742045

Abstract

Power and reproducibility are key to enabling refined scientific discoveries in contemporary big data applications with general high-dimensional nonlinear models. In this paper, we provide theoretical foundations on the power and robustness for the model-X knockoffs procedure introduced recently in Candès, Fan, Janson and Lv (2018) in the high-dimensional setting where the covariate distribution is characterized by a Gaussian graphical model. We establish that under mild regularity conditions, the power of the oracle knockoffs procedure with known covariate distribution in high-dimensional linear models is asymptotically one as the sample size goes to infinity. When moving away from the ideal case, we suggest a modified model-X knockoffs method called graphical nonlinear knockoffs (RANK) to accommodate the unknown covariate distribution. We provide theoretical justifications on the robustness of our modified procedure by showing that the false discovery rate (FDR) is asymptotically controlled at the target level and the power is asymptotically one with the estimated covariate distribution. To the best of our knowledge, this is the first formal theoretical result on the power for the knockoffs procedure. Simulation results demonstrate that compared to existing approaches, our method performs competitively in both FDR control and power. A real data set is analyzed to further assess the performance of the suggested knockoffs procedure.

Keywords: Power, Reproducibility, Big data, High-dimensional nonlinear models, Robustness, Large-scale inference and FDR, Graphical nonlinear knockoffs

1. Introduction

Feature selection with big data is of fundamental importance to many contemporary applications from different disciplines of social sciences, health sciences, and engineering [36, 24, 8]. Over the past two decades, various feature selection methods, theory, and algorithms have been extensively developed and investigated for a wide spectrum of flexible models ranging from parametric to semiparametric and nonparametric linking a high-dimensional covariate vector x = (X1, · · · , Xp)T of p features Xj’s to a response Y of interest, where the dimensionality p can be large compared to the available sample size n or even greatly exceed n. The success of feature selection for enhanced prediction in practice can be attributed to the reduction of noise accumulation associated with high-dimensional data through dimensionality reduction. In particular, most existing studies have focused on the power perspective of feature selection procedures such as the sure screening property, model selection consistency, oracle property, and oracle inequalities. When the model is correctly specified, researchers and practitioners often would like to know whether the estimated model involving a subset of the p covariates enjoys reproducibility in that the fraction of noise features in the discovered model is controlled. Yet such a practical issue of reproducibility is much less well understood for the settings of general high-dimensional nonlinear models. Moreover, it is no longer clear whether the power of feature selection procedures can be retained when one intends to ensure reproducibility.

Indeed, the issues of power and reproducibility are key to enabling refined scientific discoveries in big data applications utilizing general high-dimensional nonlinear models. To characterize the reproducibility of statistical inference, the seminal paper of [4] introduced an elegant concept of false discovery rate (FDR), defined as the expectation of the fraction of false discoveries among all the discoveries, and proposed the widely used Benjamini–Hochberg procedure for FDR control by resorting to the p-values for large-scale multiple testing returned by some statistical estimation and testing procedure. There is a huge literature on FDR control for large-scale inference, and various generalizations and extensions of the original FDR procedure have been developed and investigated for different settings and applications [5, 15, 57, 58, 1, 13, 14, 20, 64, 12, 32, 27, 48, 66, 21, 44, 59]. Most existing work either assumes a specific functional form such as linearity on the dependence structure of response Y on covariates Xj’s, or relies on the p-values for evaluating the significance of covariates Xj’s. Yet in high-dimensional settings, we often do not have such a luxury since response Y could depend on covariate vector x through very complicated forms, and even when Y and x have a simple dependence structure, high dimensionality of covariates can render classical p-value calculation procedures no longer justified or simply invalid [39, 26, 60]. These intrinsic challenges can make the p-value based methods difficult to apply or even fail [9].

To accommodate arbitrary dependence structure of Y on x and bypass the need of calculating accurate p-values for covariate significance, [9] recently introduced the model-X knockoffs framework for FDR control in general high-dimensional nonlinear models. Their work was inspired by and builds upon the ingenious development of the knockoff filter in [2], which provides effective FDR control in the setting of Gaussian linear model with dimensionality p no larger than sample size n. The knockoff filter was later extended in [3] to high-dimensional linear model using the ideas of data splitting and feature screening. The salient idea of [2] is to construct the so-called “knockoff” variables which mimic the dependence structure of the original covariates but are independent of response Y conditional on the original covariates. These knockoff variables can be used as control variables. By comparing the regression outcomes for original variables with those for control variables, the relevant set of variables can be identified more accurately and thus the FDR can be better controlled. The model-X knockoffs framework introduced in [9] greatly expands the applicability of the original knockoff filter in that the response Y and covariates x can have arbitrarily complicated dependence structure and the dimensionality p can be arbitrarily large compared to sample size n. It was theoretically justified in [9] that the model-X knockoffs procedure controls FDR exactly in finite samples of arbitrary dimensions. However, one important assumption in their theoretical development is that the joint distribution of covariates x should be known. Moreover, formal power analysis of the knockoffs framework is still lacking even for the setting of Gaussian linear model.

Despite the importance of known covariate distribution in their theoretical development, [9] empirically explored the scenario of unknown covariate distribution for the specific setting of generalized linear model (GLM) [46] with Gaussian design matrix and discovered that the estimation error of the covariate distribution can have negligible effect on FDR control. Yet there exist no formal theoretical justifications on the robustness of the model-X knockoffs method, and it is also unclear to what extent such robustness can hold beyond the GLM setting. To address these fundamental challenges, our paper serves as a first attempt to provide theoretical foundations on the power and robustness for the model-X knockoffs framework. Specifically, the major innovations of the paper are twofold. First, we will formally investigate the power of the knockoffs framework in high-dimensional linear models with both known and unknown covariate distribution. Second, we will provide theoretical support on the robustness of the model-X knockoffs procedure with unknown covariate distribution in general high-dimensional nonlinear models.

More specifically, in the ideal case of known covariate distribution, we prove that the model-X knockoffs procedure in [9] has asymptotic power one under mild regularity conditions in high-dimensional linear models. When moving away from the ideal scenario, to accommodate the difficulty caused by unknown covariate distribution we suggest the modified model-X knockoffs method called graphical nonlinear knockoffs (RANK). The modified knockoffs procedure exploits the data splitting idea, where the first half of the sample is used to estimate the unknown covariate distribution and reduce the model size, and the second half of the sample is employed to globally construct the knockoff variables and apply the knockoffs procedure. We establish that the modified knockoffs procedure asymptotically controls the FDR regardless of whether the reduced model contains the true model or not. Such a feature makes our work intrinsically different from that in [3], which requires the sure screening property [23] of the reduced model; see Section 3.1 for more detailed discussions on the differences. In our theoretical analysis of FDR, we still allow for arbitrary dependence structure of response Y on covariates x and assume that the joint distribution of x is characterized by a Gaussian graphical model with unknown precision matrix [41]. In the specific case of high-dimensional linear models with unknown covariate distribution, we also provide robustness analysis on the power of our modified procedure.

The rest of the paper is organized as follows. Section 2 reviews the model-X knockoffs framework and provides theoretical justifications on its power in high-dimensional linear models. We introduce the modified model-X knockoffs procedure RANK and investigate its robustness on both FDR control and power with respect to the estimation of unknown covariate distribution in Section 3. Section 4 presents several simulation examples of both linear and nonlinear models to verify our theoretical results. We demonstrate the performance of our procedure on a real data set in Section 5. Section 6 discusses some implications and extensions of our work. The proofs of main results are relegated to the Appendix. Additional technical details are provided in the Supplementary Material.

2. Power analysis for oracle model-X knockoffs

Suppose we have a sample $(x_i, Y_i)_{i=1}^n$ of n independent and identically distributed (i.i.d.) observations from the population (x, Y), where the dimensionality p of covariate vector x = (X1, · · · , Xp)T can greatly exceed the available sample size n. To ensure model identifiability, it is common to assume that only a small fraction of the p covariates Xj’s are truly relevant to response Y. To be more precise, [9] defined the set of irrelevant features $S_1$ as that consisting of Xj’s such that Xj is independent of Y conditional on all remaining p − 1 covariates Xk’s with k ≠ j, and thus the set of truly relevant features $S_0$ is given naturally by $S_1^c$, the complement of set $S_1$. Features in sets $S_0$ and $S_0^c = S_1$ are also referred to as important and noise features, respectively.

We aim at accurately identifying these truly relevant features in set S0 that is assumed to be identifiable while keeping the false discovery rate (FDR) [4] under control. The FDR for a feature selection procedure is defined as

$$\mathrm{FDR} = E[\mathrm{FDP}] \quad \text{with} \quad \mathrm{FDP} = \frac{|\hat{S} \cap S_0^c|}{|\hat{S}|}, \qquad (1)$$

where $\hat{S}$ denotes the sparse model returned by the feature selection procedure, |·| stands for the cardinality of a set, and the convention 0/0 = 0 is used in the definition of the false discovery proportion (FDP), which is the fraction of noise features in the discovered model. Here the feature selection procedure can be any sparse modeling method of the user's choice.

2.1. Review of model-X knockoffs framework

Our suggested graphical nonlinear knockoffs procedure in Section 3 falls in the general framework of model-X knockoffs introduced in [9], which we briefly review in this section. The key ingredient of model-X knockoffs framework is the construction of the so-called model-X knockoff variables that are defined as follows.

Definition 1 ([9]). Model-X knockoffs for the family of random variables x = (X1, · · ·, Xp)T is a new family of random variables $\tilde{x} = (\tilde{X}_1, \ldots, \tilde{X}_p)^T$ that satisfies two properties: (1) $(x^T, \tilde{x}^T)_{\mathrm{swap}(S)} \overset{d}{=} (x^T, \tilde{x}^T)$ for any subset $S \subset \{1, \ldots, p\}$, where swap(S) means swapping components $X_j$ and $\tilde{X}_j$ for each $j \in S$ and $\overset{d}{=}$ denotes equality in distribution, and (2) $\tilde{x} \perp\!\!\!\perp Y \mid x$.

We see from Definition 1 that model-X knockoff variables X˜j's mimic the probabilistic dependency structure among the original features Xj’s and are independent of response Y given Xj’s. When the covariate distribution is characterized by Gaussian graphical model [41], that is,

$$x \sim N\big(0, \Omega_0^{-1}\big) \qquad (2)$$

with $p \times p$ precision matrix $\Omega_0$ encoding the graphical structure of the conditional dependency among the covariates Xj’s, we can construct the p-variate model-X knockoff random variable $\tilde{x}$ characterized in Definition 1 as

$$\tilde{x} \mid x \sim N\big(x - \mathrm{diag}\{s\}\,\Omega_0\, x,\; 2\,\mathrm{diag}\{s\} - \mathrm{diag}\{s\}\,\Omega_0\,\mathrm{diag}\{s\}\big), \qquad (3)$$

where s is a p-dimensional vector with nonnegative components chosen in a suitable way. In fact, in view of (2) and (3) it is easy to show that the original features and model-X knockoff variables have the following joint distribution

$$\begin{pmatrix} x \\ \tilde{x} \end{pmatrix} \sim N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \Sigma_0 & \Sigma_0 - \mathrm{diag}\{s\} \\ \Sigma_0 - \mathrm{diag}\{s\} & \Sigma_0 \end{pmatrix} \right) \qquad (4)$$

with $\Sigma_0 = \Omega_0^{-1}$ the covariance matrix of covariates x. Intuitively, larger components of s mean that the constructed knockoff variables deviate further from the original features, resulting in higher power in distinguishing them. The p-dimensional vector s in (3) should be chosen in a way such that $\Sigma_0 - 2^{-1}\mathrm{diag}\{s\}$ is positive definite, and can be selected using the methods in [9]. We will treat it as a nuisance parameter throughout our theoretical analysis.
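For concreteness, the conditional construction in (3) can be coded in a few lines. The sketch below is ours and not part of the paper; it assumes $\Omega_0$ and a valid vector s are given, and draws the knockoff rows through the Cholesky factor of the conditional covariance. In practice s could be chosen, for example, by the equicorrelated or SDP constructions discussed in [9].

```python
import numpy as np

def sample_gaussian_knockoffs(X, Omega0, s, rng=None):
    """Draw Gaussian model-X knockoffs row by row, following (3)-(4).

    X      : (n, p) matrix of covariate observations, rows ~ N(0, Omega0^{-1}).
    Omega0 : (p, p) precision matrix of the covariates.
    s      : (p,) nonnegative vector keeping the joint covariance in (4)
             positive definite.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    D = np.diag(s)
    # Conditional mean of each knockoff row: (I - diag{s} Omega0) x_i.
    mean = X - X @ (D @ Omega0).T
    # Conditional covariance: 2 diag{s} - diag{s} Omega0 diag{s}.
    cond_cov = 2.0 * D - D @ Omega0 @ D
    # Add Gaussian noise with that covariance via its Cholesky factor.
    L = np.linalg.cholesky(cond_cov)
    return mean + rng.standard_normal((n, p)) @ L.T
```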

With the constructed knockoff variables $\tilde{x}$, the knockoffs inference framework proceeds as follows. We select important variables by resorting to the knockoff statistics $W_j = f_j(Z_j, \tilde{Z}_j)$ defined for each 1 ≤ j ≤ p, where $Z_j$ and $\tilde{Z}_j$ represent feature importance measures for the jth covariate $X_j$ and its knockoff counterpart $\tilde{X}_j$, respectively, and $f_j(\cdot, \cdot)$ is an antisymmetric function satisfying $f_j(z_j, \tilde{z}_j) = -f_j(\tilde{z}_j, z_j)$. For example, in linear regression models, one can choose $Z_j$ and $\tilde{Z}_j$ as the Lasso [61] regression coefficients of $X_j$ and $\tilde{X}_j$, respectively, and a valid knockoff statistic is $W_j = f_j(Z_j, \tilde{Z}_j) = |Z_j| - |\tilde{Z}_j|$. There are also many other options for defining the feature importance measures. Observe that all model-X knockoff variables $\tilde{X}_j$'s are just noise features by the second property in Definition 1. Thus intuitively, a large positive value of knockoff statistic $W_j$ indicates that the jth covariate $X_j$ is important, while a small magnitude of $W_j$ usually corresponds to a noise feature.

The final step of the knockoffs inference framework is to sort |Wj|’s from high to low and select features whose Wj’s are at or above some threshold T, which results in the discovered model

$$\hat{S} = \hat{S}(T) = \{1 \le j \le p : W_j \ge T\}. \qquad (5)$$

Following [2] and [9], one can choose the threshold T in the following two ways

$$T = \min\left\{t \in \mathcal{W} : \frac{|\{j : W_j \le -t\}|}{|\{j : W_j \ge t\}| \vee 1} \le q\right\}, \qquad (6)$$
$$T_+ = \min\left\{t \in \mathcal{W} : \frac{1 + |\{j : W_j \le -t\}|}{|\{j : W_j \ge t\}| \vee 1} \le q\right\}, \qquad (7)$$

where $\mathcal{W} = \{|W_j| : 1 \le j \le p\} \setminus \{0\}$ is the set of unique nonzero values attained by the $|W_j|$'s and q ∈ (0, 1) is the desired FDR level specified by the user. The procedures using threshold T in (6) and threshold $T_+$ in (7) are referred to as the knockoffs and knockoffs+ methods, respectively. It was proved in [9] that the model-X knockoffs procedure controls a modified FDR that replaces $|\hat{S}|$ in the denominator of (1) by $q^{-1} + |\hat{S}|$, and the model-X knockoffs+ procedure achieves exact FDR control in finite samples regardless of dimensionality p and the dependence structure of response Y on covariates x. The major assumption needed in [9] is that the distribution of covariates x is known. Throughout the paper, we implicitly use the threshold $T_+$ defined in (7) for FDR control in the knockoffs inference framework but still write it as T for notational simplicity.
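To make the thresholding step concrete, here is a minimal sketch (ours, not from [9]) of how the data-dependent thresholds (6)–(7) and the selection rule (5) can be computed from a vector of knockoff statistics.

```python
import numpy as np

def knockoff_threshold(W, q, plus=True):
    """Compute the knockoffs (plus=False) or knockoffs+ (plus=True) threshold
    in (6)-(7) from the vector of knockoff statistics W."""
    W = np.asarray(W, dtype=float)
    candidates = np.unique(np.abs(W[W != 0]))   # the set calligraphic-W, sorted
    offset = 1.0 if plus else 0.0
    for t in candidates:
        fdp_hat = (offset + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return t
    return np.inf                               # no admissible threshold: select nothing

def knockoff_select(W, q, plus=True):
    """Return the discovered set {j : W_j >= T} as in (5)."""
    T = knockoff_threshold(W, q, plus=plus)
    return np.where(np.asarray(W) >= T)[0]
```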

2.2. Power analysis in linear models

Although the knockoffs procedures were proved rigorously to have controlled FDR in [2, 3, 9], their power advantages over popularly used approaches have been demonstrated only numerically therein. In fact, formal power analysis for the knockoffs framework is still lacking even in simple model settings such as linear regression. We aim to fill this gap as a first attempt and provide theoretical foundations on the power analysis for the model-X knockoffs framework. In this section, we will focus on the oracle model-X knockoffs procedure for the ideal case when the true precision matrix $\Omega_0$ for the covariate distribution in (2) is known, which is the setting assumed in [9]. The robustness analysis for the case of unknown precision matrix $\Omega_0$ will be undertaken in Section 3.

We would like to remark that the power analysis for the knockoffs framework is necessary and nontrivial. The FDR and power are two sides of the same coin, just like type I and type II errors in hypothesis testing. The knockoffs framework is a wrapper and can be combined with most model selection methods to achieve FDR control. Yet the theoretical properties of power after applying the knockoffs procedure are completely unknown for the case of correlated covariates and unknown covariate distribution. For example, when the knockoffs framework is combined with the Lasso, it further selects variables from the set of variables picked by Lasso applied with the augmented design matrix to achieve the FDR control. For this reason, the power of knockoffs is usually lower than that of Lasso. The main focus of this section is to investigate how much power loss the knockoffs framework would encounter when combined with Lasso.

Since the power analysis for the knockoffs framework is nontrivial and challenging, we content ourselves with the setting of high-dimensional linear models for the technical analysis of power. The linear regression model assumes that

y=Xβ0+ε, (8)

where y = (Y1, · · ·, Yn)T is an n-dimensional response vector, X = (x1, · · ·, xn)T is an n × p design matrix consisting of p covariates Xj’s, β0 = (β0,1, · · ·, β0,p)T is a p-dimensional true regression coefficient vector, and ε = (ε1, · · ·, εn)T is an n-dimensional error vector independent of X. As mentioned before, the true model S0=supp(β0) which is the support of β0 is assumed to be sparse with size s=|S0|, and the n rows of design matrix X are i.i.d. observations generated from Gaussian graphical model (2). Without loss of generality, all the diagonal entries of covariance matrix Σ0 are assumed to be ones.

As discussed in Section 2.1, there are many choices of the feature selection procedure up to the user for producing the feature importance measures Zj and Z˜j for covariates Xj and knockoff variables X˜j, respectively, and there are also different ways to construct the knockoff statistics Wj. For the illustration purpose, we adopt the Lasso coefficient difference (LCD) as the knockoff statistics in our power analysis. The specific choice of LCD for knockoff statistics was proposed and recommended in [9], in which it was demonstrated empirically to outperform some other choices in terms of power. The LCD is formally defined as

$$W_j = |\hat{\beta}_j(\lambda)| - |\hat{\beta}_{p+j}(\lambda)|, \qquad (9)$$

where β^j(λ) and β^p+j(λ) denote the jth and (p + j)th components, respectively, of the Lasso [61] regression coefficient vector

$$\hat{\beta}(\lambda) = \operatorname*{arg\,min}_{b \in \mathbb{R}^{2p}} \left\{ (2n)^{-1} \big\| y - [X, \tilde{X}]\, b \big\|_2^2 + \lambda \|b\|_1 \right\} \qquad (10)$$

with λ ≥ 0 the regularization parameter, $\tilde{X} = (\tilde{x}_1, \ldots, \tilde{x}_n)^T$ an n × p matrix whose n rows are independent random vectors of model-X knockoff variables generated from (3), and $\|\cdot\|_r$ for r ≥ 0 the $L_r$-norm of a vector. To simplify the technical analysis, we assume that with asymptotic probability one, there are no ties in the magnitude of nonzero $W_j$'s and no ties in the magnitude of nonzero components of the Lasso solution in (10), which is a mild condition in light of the continuity of the underlying distributions.
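As an illustration, the LCD statistics (9)–(10) can be computed with any Lasso solver; the sketch below (ours, not part of the paper) uses scikit-learn, whose penalized least-squares objective uses the same 1/(2n) scaling as (10). In practice the penalty level lam would be tuned, e.g., by cross-validation as in Section 4.2.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lcd_statistics(X, X_knock, y, lam):
    """Lasso coefficient difference (LCD) statistics in (9)-(10).

    Fits the Lasso on the augmented design [X, X_knock] at penalty level lam
    and returns W_j = |beta_j| - |beta_{p+j}| for j = 1, ..., p.
    """
    n, p = X.shape
    X_aug = np.hstack([X, X_knock])
    beta = Lasso(alpha=lam, fit_intercept=False, max_iter=100000).fit(X_aug, y).coef_
    return np.abs(beta[:p]) - np.abs(beta[p:])
```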

To facilitate the power analysis, we impose some basic regularity conditions.

Condition 1. The components of ε are i.i.d. with sub-Gaussian distribution.

Condition 2. It holds that $\min_{j \in S_0} |\beta_{0,j}| \ge \kappa_n \{(\log p)/n\}^{1/2}$ for some slowly diverging sequence $\kappa_n \to \infty$ as n → ∞.

Condition 3. There exists some constant $c \in (2(qs)^{-1}, 1)$ such that with asymptotic probability one, $|\hat{S}| \ge cs$ for $\hat{S}$ given in (5).

Condition 1 can be relaxed to heavier-tailed distributions at the cost of slower convergence rates as long as similar concentration inequalities used in the proofs continue to hold. Condition 2 is assumed to ensure that the Lasso solution $\hat{\beta}(\lambda)$ does not miss a large portion of the important features in $S_0$. This is necessary since the knockoffs procedure under investigation builds upon the Lasso solution and thus its power is naturally upper bounded by that of Lasso. To see this, recall the well-known oracle inequality for Lasso [7, 8] that with asymptotic probability one, $\|\hat{\beta}(\lambda) - \beta_0\|_2 = O(s^{1/2}\lambda)$ for λ chosen of order $\{(\log p)/n\}^{1/2}$. Then Condition 2 entails that for some $\kappa_n \to \infty$, $O(s\lambda^2) = \|\hat{\beta}(\lambda) - \beta_0\|_2^2 \ge \sum_{j \in \hat{S}_L^c \cap S_0} \beta_{0,j}^2 \ge n^{-1}(\log p)\,\kappa_n^2\, |\hat{S}_L^c \cap S_0|$ with $\hat{S}_L = \mathrm{supp}\{\hat{\beta}(\lambda)\}$. Thus the number of important features missed by Lasso, $|\hat{S}_L^c \cap S_0|$, is upper bounded by $O(s\kappa_n^{-2})$ with asymptotic probability one. This guarantees that the power of Lasso is lower bounded by $1 - O(\kappa_n^{-2})$; that is, Lasso has asymptotic power one. However, as discussed previously the power of knockoffs is always upper bounded by that of Lasso. So we are interested in the relative power of knockoffs compared to that of Lasso. For this reason, Condition 2 is imposed to simplify the technical analysis of the knockoffs power by ensuring that the asymptotic power of Lasso is one. We will show in Theorem 1 that there is almost no power loss when applying the model-X knockoffs procedure.

Condition 3 imposes a lower bound on the size of the sparse model selected by the knockoffs procedure. Recall that we assume the number of true variables s can diverge with sample size n. The rationale behind Condition 3 is that any method with high power should at least be able to select a large number of variables which are not necessarily true ones though. Since it is not straightforward to check, we provide a sufficient condition that is more intuitive in Lemma 1 below, which shows that Condition 3 can hold as long as there exist enough strong signals in the model. We acknowledge that Lemma 1 may not be a necessary condition for Condition 3.

Lemma 1. Assume that Condition 1 holds and there exists some constant $c \in (2(qs)^{-1}, 1)$ such that $|S_2| \ge cs$ with $S_2 = \{j : |\beta_{0,j}| \ge [s n^{-1}(\log p)]^{1/2}\}$. Then Condition 3 holds.

We would like to mention that the conditions of Lemma 1 are not stronger than Condition 2. We require a few strong signals, and yet still allow for many very weak ones. In other words, the set of strong signals S2 is only a large enough proper subset of the set of all signals S0.

We are now ready to characterize the statistical power of the knockoffs procedure in high-dimensional linear model (8). Formally speaking, the power of a feature selection procedure is defined as

$$\mathrm{Power}(\hat{S}) = E\left[\frac{|\hat{S} \cap S_0|}{|S_0|}\right], \qquad (11)$$

where S^ denotes the discovered sparse model returned by the feature selection procedure.
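For a single replication, the FDP in (1) and the power in (11) are straightforward to evaluate once the selected set and the true support are known; the FDR and the power are then their averages over replications. A minimal sketch (ours):

```python
import numpy as np

def fdp_and_power(selected, true_support):
    """Empirical FDP (1) and power (11) for one replication.

    selected, true_support : iterables of selected / truly relevant indices.
    """
    S_hat, S0 = set(selected), set(true_support)
    fdp = len(S_hat - S0) / max(len(S_hat), 1)      # convention 0/0 = 0
    power = len(S_hat & S0) / max(len(S0), 1)
    return fdp, power
```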

Theorem 1. Assume that Conditions 1–3 hold, all the eigenvalues of $\Omega_0$ are bounded away from 0 and ∞, the smallest eigenvalue of $2\,\mathrm{diag}\{s\} - \mathrm{diag}\{s\}\,\Omega_0\,\mathrm{diag}\{s\}$ is positive and bounded away from 0, and $\lambda = C_\lambda\{(\log p)/n\}^{1/2}$ with $C_\lambda > 0$ some constant. Then the oracle model-X knockoffs procedure satisfies that with probability at least $1 - c_3 p^{-c_3}$,

$$|\hat{S} \cap S_0| / |S_0| \ge 1 - C_{l_1} C_\lambda (\varphi + 1)\, \kappa_n^{-1}$$

and therefore,

$$\mathrm{Power}(\hat{S}) = E\left[\frac{|\hat{S} \cap S_0|}{|S_0|}\right] \ge 1 - C_{l_1} C_\lambda (\varphi + 1)\, \kappa_n^{-1} - c_3 p^{-c_3} + o(\kappa_n^{-1}) \to 1$$

as n → ∞, where φ is the golden ratio and $C_{l_1}$ is some positive constant.

Theorem 1 reveals that the oracle model-X knockoffs procedure in [9] knowing the true precision matrix $\Omega_0$ for the covariate distribution can indeed have asymptotic power one under some mild regularity conditions. Since parameter $\kappa_n$ characterizes the signal strength, it is seen that the stronger the signal, the faster the convergence of power to one. This shows that for the ideal case, the model-X knockoffs procedure can enjoy appealing FDR control and power properties simultaneously.

3. Robustness of graphical nonlinear knockoffs

When moving away from the ideal scenario considered in Section 2, a natural question is whether both properties of FDR control and power can continue to hold with no access to the knowledge of the true covariate distribution. To gain insights into such a question, we now turn to investigating the robustness of the model-X knockoffs framework. Hereafter we assume that the true precision matrix $\Omega_0$ for the covariate distribution in (2) is unknown. We will begin with the FDR analysis and then move on to the power analysis.

3.1. Modified model-X knockoffs

We would like to emphasize that the linear model assumption is no longer needed here and arbitrary dependence structure of response y on covariates x is allowed. As mentioned in the Introduction, to overcome the difficulty caused by the unknown precision matrix $\Omega_0$ we modify the model-X knockoffs procedure described in Section 2.1 and suggest the method of graphical nonlinear knockoffs (RANK).

To ease the presentation, we first introduce some notation. For each given p × p symmetric positive definite matrix Ω, denote by $C_\Omega = I_p - \mathrm{diag}\{s\}\,\Omega$ and by $B_\Omega = \big(2\,\mathrm{diag}\{s\} - \mathrm{diag}\{s\}\,\Omega\,\mathrm{diag}\{s\}\big)^{1/2}$ the corresponding square root matrix. We define the n × p matrix $\tilde{X}^\Omega = (\tilde{x}_1^\Omega, \ldots, \tilde{x}_n^\Omega)^T$ by independently generating $\tilde{x}_i^\Omega$ from the conditional distribution

$$\tilde{x}_i^\Omega \mid x_i \sim N\big(C_\Omega\, x_i,\; B_\Omega^2\big), \qquad (12)$$

where X = (x1, · · ·, xn)T is the original n × p design matrix generated from Gaussian graphical model (2). It is easy to show that the (2p)-variate random vectors $(x_i^T, (\tilde{x}_i^\Omega)^T)^T$ are i.i.d. with Gaussian distribution of mean 0 and covariance matrix given by $\mathrm{cov}(x_i) = \Sigma_0$, $\mathrm{cov}(x_i, \tilde{x}_i^\Omega) = \Sigma_0 C_\Omega^T$, and $\mathrm{cov}(\tilde{x}_i^\Omega) = B_\Omega^2 + C_\Omega \Sigma_0 C_\Omega^T$.

Our modified knockoffs method RANK exploits the idea of data splitting, in which one half of the sample is used to estimate unknown precision matrix 0 and reduce the model dimensionality, and the other half of the sample is employed to construct the knockoff variables and implement the knockoffs inference procedure, with the steps detailed below.

  • Step 1. Randomly split the data (X, y) into two folds (X(k), y(k)) with 1 ≤ k ≤ 2 each of sample size n/2.

  • Step 2. Use the first fold of data (X(1), y(1)) to obtain an estimate Ω^ of the precision matrix and a reduced model with support S˜.

  • Step 3. With the estimated precision matrix $\hat{\Omega}$ from Step 2, construct an (n/2) × p knockoffs matrix $\hat{X}$ using $X^{(2)}$ with rows independently generated from (12); that is, $\hat{X} = X^{(2)} C_{\hat{\Omega}}^T + Z B_{\hat{\Omega}}$ with Z an (n/2) × p matrix with i.i.d. N(0, 1) components.

  • Step 4. Construct knockoff statistics $W_j$'s using only data on the support $\tilde{S}$, that is, $W_j = W_j(y^{(2)}, X_{\tilde{S}}^{(2)}, \hat{X}_{\tilde{S}})$ for $j \in \tilde{S}$ and $W_j = 0$ for $j \in \tilde{S}^c$. Then apply the knockoffs inference procedure to the $W_j$'s to obtain the final set of features $\hat{S}$.

Here for any matrix A and subset $S \subset \{1, \ldots, p\}$, the compact notation $A_S$ stands for the submatrix of A consisting of the columns in set S.

As discussed in Section 2.1, the model-X knockoffs framework utilizes sparse regression procedures such as the Lasso. For this reason, even in the original model-X knockoffs procedure the knockoff statistics Wj’s (see, e.g., (9)) take nonzero values only over a much smaller model than the full model. This observation motivates us to estimate such a smaller model using the first half of the sample in Step 2 of our modified procedure. When implementing this modified procedure, we limit ourselves to sparse models S˜ with size bounded by some positive integer Kn that diverges with n; see, for example, [30, 45] for detailed discussions and justifications on similar consideration of sparse models. In addition to sparse regression procedures, feature screening methods such as [23, 17] can also be used to obtain the reduced model S˜.

The above modified knockoffs method differs from the original model-X knockoffs procedure [9] in that we use an independent sample to obtain the estimated precision matrix $\hat{\Omega}$ and reduced model $\tilde{S}$. In particular, the independence between the estimates $(\hat{\Omega}, \tilde{S})$ and the data $(X^{(2)}, y^{(2)})$ plays an important role in our theoretical analysis of the robustness of the knockoffs procedure. In fact, the idea of data splitting has been popularly used in the literature for various purposes [25, 19, 55, 3]. Although the work of [3] has the closest connection to ours, there are several key differences between the two methods. Specifically, [3] considered the high-dimensional linear model with fixed design, where the data is split into two portions with the first portion used for feature screening and the second portion employed for applying the original knockoff filter in [2] on the reduced model. To ensure FDR control, it was required in [3] that the feature screening method should enjoy the sure screening property [23], that is, the reduced model after the screening step contains the true model $S_0$ with asymptotic probability one. In contrast, one major advantage of our method is that the asymptotic FDR control can be achieved without requiring the sure screening property; see Theorem 2 in Section 3.2 for more details. Such a major distinction is rooted in the difference in constructing knockoff variables; that is, we construct model-X knockoff variables globally in Step 3 above, whereas [3] constructed knockoff variables locally on the reduced model. Another major difference is that our method works with random design and does not need any assumption on how response y depends upon covariates x, while the method in [3] requires the linear model assumption and cannot be extended to nonlinear models.
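The four steps above can be summarized in the following sketch (ours, for the linear-model case with LCD statistics). GraphicalLassoCV and LassoCV stand in for generic precision-matrix and sparse-regression estimators; the paper's simulations use ISEE and the Lasso instead, and the helpers sample_gaussian_knockoffs and knockoff_select are the ones sketched in Section 2.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.linear_model import LassoCV

def rank_procedure(X, y, q=0.2, rng=None):
    """Sketch of RANK Steps 1-4 under placeholder estimator choices."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    idx = rng.permutation(n)
    idx1, idx2 = idx[: n // 2], idx[n // 2:]
    X1, y1, X2, y2 = X[idx1], y[idx1], X[idx2], y[idx2]

    # Step 2: estimate the precision matrix and a reduced model on fold 1.
    Omega_hat = GraphicalLassoCV().fit(X1).precision_
    S_tilde = np.where(LassoCV(fit_intercept=False).fit(X1, y1).coef_ != 0)[0]

    # Step 3: construct knockoffs for fold 2 globally with Omega_hat;
    # s_j = 1 / largest eigenvalue of Omega_hat, as in Section 4.2.
    s = np.full(p, 1.0 / np.linalg.eigvalsh(Omega_hat)[-1])
    X2_knock = sample_gaussian_knockoffs(X2, Omega_hat, s, rng=rng)

    # Step 4: LCD statistics on the reduced support only, then threshold.
    fit = LassoCV(fit_intercept=False).fit(
        np.hstack([X2[:, S_tilde], X2_knock[:, S_tilde]]), y2)
    k = len(S_tilde)
    W = np.zeros(p)
    W[S_tilde] = np.abs(fit.coef_[:k]) - np.abs(fit.coef_[k:])
    # W vanishes off S_tilde, so the selected set automatically lies in S_tilde.
    return knockoff_select(W, q, plus=True)
```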

3.2. Robustness of FDR control for graphical nonlinear knockoffs

We begin with investigating the robustness of FDR control for the modified model-X knockoffs procedure RANK. To simplify the notation, we rewrite $(X^{(2)}, y^{(2)})$ as (X, y) with sample size n whenever there is no confusion, where n now represents half of the original sample size. For each given p × p symmetric positive definite matrix Ω, an n × p knockoffs matrix $\tilde{X}^\Omega = (\tilde{x}_1^\Omega, \ldots, \tilde{x}_n^\Omega)^T$ can be constructed with n rows independently generated according to (12) and the modified knockoffs procedure proceeds with a given reduced model S. Then the FDP and FDR functions in (1) can be rewritten as

$$\mathrm{FDR}_n(\Omega, S) = E\big[\mathrm{FDP}_n\big(y, X_S, \tilde{X}_S^\Omega\big)\big], \qquad (13)$$

where the subscript n is used to emphasize the dependence of the FDP and FDR functions on the sample size. It is easy to check that the knockoffs procedure based on $(y, X_S, \tilde{X}_S^{\Omega_0})$ satisfies all the conditions in [9] for FDR control for any reduced model S that is independent of X and $\tilde{X}^{\Omega_0}$, which ensures that $\mathrm{FDR}_n(\Omega_0, S)$ can be controlled at the target level q. To study the robustness of our modified knockoffs procedure, we will make a connection between the functions $\mathrm{FDR}_n(\Omega, S)$ and $\mathrm{FDR}_n(\Omega_0, S)$.

To ease the presentation, denote by $\tilde{X}_0 = \tilde{X}^{\Omega_0}$ the oracle knockoffs matrix with $\Omega = \Omega_0$, $C_0 = C_{\Omega_0}$, and $B_0 = B_{\Omega_0}$. The following proposition establishes a formal characterization of the FDR as a function of the precision matrix used in generating the knockoff variables and the reduced model S.

Proposition 1. For any given symmetric positive definite $\Omega \in \mathbb{R}^{p \times p}$ and $S \subset \{1, \ldots, p\}$, it holds that

$$\mathrm{FDR}_n(\Omega, S) = E\big[g_n\big(X_{\mathrm{aug}}^S H_\Omega\big)\big], \qquad (14)$$

where $X_{\mathrm{aug}}^S = [X, \tilde{X}_{0,S}] \in \mathbb{R}^{n \times (p + |S|)}$, function $g_n(\cdot)$ is some conditional expectation of the FDP function whose functional form is free of Ω and S, and

$$H_\Omega = \begin{pmatrix} I_p & C_{\Omega,S} - C_{0,S}\big(B_{0,S}^T B_{0,S}\big)^{-1/2}\big(B_{\Omega,S}^T B_{\Omega,S}\big)^{1/2} \\ 0 & \big(B_{0,S}^T B_{0,S}\big)^{-1/2}\big(B_{\Omega,S}^T B_{\Omega,S}\big)^{1/2} \end{pmatrix}.$$

We see from Proposition 1 that when $\Omega = \Omega_0$, it holds that $H_{\Omega_0} = I_{p+|S|}$ and thus the value of the FDR function at the point $\Omega_0$ reduces to

$$\mathrm{FDR}_n(\Omega_0, S) = E\big[g_n\big(X_{\mathrm{aug}}^S\big)\big],$$

which can be shown to be bounded from above by the target FDR level q using the results proved in [9]. Since the dependence of the FDR function on Ω is completely through the matrix $H_\Omega$, we can reparameterize the FDR function as $\mathrm{FDR}_n(H_\Omega, S)$. In view of (14), $\mathrm{FDR}_n(H_\Omega, S)$ is the expectation of some measurable function with respect to the probability law of $X_{\mathrm{aug}}^S$, which has a matrix normal distribution with independent rows, and thus is expected to be a smooth function of the entries of $H_\Omega$ by measure theory. Motivated by such an observation, we make the following Lipschitz continuity assumption.

Condition 4. There exists some constant L > 0 such that for all $|S| \le K_n$ and $\|\Omega - \Omega_0\|_2 \le C_2 a_n$ with some constant $C_2 > 0$ and $a_n \to 0$, $|\mathrm{FDR}_n(H_\Omega, S) - \mathrm{FDR}_n(H_{\Omega_0}, S)| \le L \|H_\Omega - H_{\Omega_0}\|_F$, where $\|\cdot\|_2$ and $\|\cdot\|_F$ denote the matrix spectral norm and matrix Frobenius norm, respectively.

Condition 5. Assume that the estimated precision matrix $\hat{\Omega}$ satisfies $\|\hat{\Omega} - \Omega_0\|_2 \le C_2 a_n$ with probability $1 - O(p^{-c_1})$ for some constants $C_2, c_1 > 0$ and $a_n \to 0$, and that $|\tilde{S}| \le K_n$.

The error rate of precision matrix estimation assumed in Condition 5 is quite flexible. We would like to emphasize that no sparsity assumption has been made on the true precision matrix 0. Bounding the size of sparse models is also important for ensuring model identifiability and stability; see, for instance, [30, 45] for more detailed discussions.

Theorem 2. Assume that all the eigenvalues of $\Omega_0$ are bounded away from 0 and ∞ and the smallest eigenvalue of $2\,\mathrm{diag}\{s\} - \mathrm{diag}\{s\}\,\Omega_0\,\mathrm{diag}\{s\}$ is bounded from below by some positive constant. Then under Condition 4, it holds that

$$\sup_{|S| \le K_n,\; \|\Omega - \Omega_0\|_2 \le C_2 a_n} \big|\mathrm{FDR}_n(H_\Omega, S) - \mathrm{FDR}_n(H_{\Omega_0}, S)\big| \le O\big(K_n^{1/2} a_n\big). \qquad (15)$$

Moreover, under Conditions 4–5 with $K_n^{1/2} a_n \to 0$, the FDR of RANK is bounded from above by $q + O(K_n^{1/2} a_n) + O(p^{-c_1})$, where q ∈ (0, 1) is the target FDR level.

Theorem 2 establishes the robustness of the FDR with respect to the precision matrix Ω; see the uniform bound in (15). As a consequence, it shows that our modified model-X knockoffs procedure RANK can indeed have FDR asymptotically controlled at the target level q. We remark that the term $K_n^{1/2}$ in Theorem 2 arises because Condition 4 is imposed through the matrix Frobenius norm, which is motivated by results on the smoothness of integral functions from calculus. If one is willing to impose the assumption through the matrix spectral norm instead of the Frobenius norm, then the extra term $K_n^{1/2}$ can be dropped and the set S can be taken as the full model {1, · · ·, p}.

We would like to stress that Theorem 2 allows for arbitrarily complicated dependence structure of response y on covariates x and for any valid construction of knockoff statistics $W_j$'s. This is different from the conditions needed for the power analysis in Section 2.2 (that is, the linear model setting and LCD knockoff statistics). Moreover, the asymptotic FDR control in Theorem 2 does not need the sure screening property that $P\{\tilde{S} \supset S_0\} \to 1$ as n → ∞.

3.3. Robustness of power in linear models

We are now curious about the other side of the coin; that is, the robustness theory for the power of our modified knockoffs procedure RANK. As argued at the beginning of Section 2.2, to ease the presentation and simplify the technical derivations we come back to the high-dimensional linear model (8) and use the LCD in (9) as the knockoff statistics. The difference with the setting in Section 2.2 is that we no longer assume that the true precision matrix $\Omega_0$ is known and we use the modified knockoffs procedure introduced in Section 3.1 to achieve asymptotic FDR control.

Recall that for the RANK procedure, the reduced model $\tilde{S}$ is first obtained from an independent subsample and then the knockoffs procedure is applied on the second fold of data to further select features from $\tilde{S}$. Clearly if $\tilde{S}$ does not have the sure screening property that $P\{\tilde{S} \supset S_0\} \to 1$ as n → ∞, then the Lasso solution based on $[X_{\tilde{S}}^{(2)}, \tilde{X}_{\tilde{S}}^{(2)}]$ as given in (18) is no longer a consistent estimate of $\beta_0$ even when the true precision matrix $\Omega_0$ is used to generate the knockoff variables. In addition, the final power of our modified knockoffs procedure will always be upper bounded by $s^{-1}|\tilde{S} \cap S_0|$. Nevertheless, the results in this section are still useful in the sense that model (8) can be viewed as the projected model on the support $\tilde{S}$. Thus our power analysis here is a relative power analysis with respect to the reduced model $\tilde{S}$. In other words, we will focus on how much power loss would occur after we apply the model-X knockoffs procedure to $(X_{\tilde{S}}^{(2)}, \tilde{X}_{\tilde{S}}^\Omega, y^{(2)})$ when compared to the power of $s^{-1}|\tilde{S} \cap S_0|$. Since our focus is the relative power loss, without loss of generality we will condition on the event

$$\{\tilde{S} \supset S_0\}. \qquad (16)$$

We would like to point out that all conditions and results in this section can be adapted correspondingly when we view model (8) as the projected model if $\tilde{S} \not\supset S_0$. Similarly as in the FDR analysis, we restrict ourselves to sparse models with size bounded by $K_n$ that diverges as n → ∞, that is, $|\tilde{S}| \le K_n$.

With Ω taken as the estimated precision matrix $\hat{\Omega}$, we can generate the knockoff variables from (12). Then the Lasso procedure can be applied to the augmented data $(X^{(2)}, \hat{X}, y^{(2)})$ with $\hat{X}$ constructed in Step 3 of our modified knockoffs procedure, and the LCD can be defined as

$$\hat{W}_j = W_j^{\hat{\Omega}, \tilde{S}} = \big|\hat{\beta}_j(\lambda; \hat{\Omega}, \tilde{S})\big| - \big|\hat{\beta}_{p+j}(\lambda; \hat{\Omega}, \tilde{S})\big|, \qquad (17)$$

where β^j(λ;Ω^,S˜) and β^p+j(λ;Ω^,S˜) are the jth and (j + p)th components, respectively, of the Lasso estimator

$$\hat{\beta}(\lambda; \hat{\Omega}, \tilde{S}) = \operatorname*{arg\,min}_{b \in \mathbb{R}^{2p}:\, b_{\tilde{S}_1} = 0} \left\{ n^{-1} \big\| y^{(2)} - [X^{(2)}, \hat{X}]\, b \big\|_2^2 + \lambda \|b\|_1 \right\} \qquad (18)$$

with λ ≥ 0 the regularization parameter and $\tilde{S}_1 = \{1 \le j \le 2p : j \notin \tilde{S} \text{ and } j - p \notin \tilde{S}\}$.

Unlike the FDR analysis in Section 3.2, we now need a sparsity assumption on the true precision matrix $\Omega_0$.

Condition 6. Assume that $\Omega_0$ is $L_p$-sparse with each row having at most $L_p$ nonzeros for some diverging $L_p$, and all the eigenvalues of $\Omega_0$ are bounded away from 0 and ∞.

For each given precision matrix Ω and reduced model S, we define $W_j^{\Omega, S}$ similarly as in (17) except that Ω is used to generate the knockoff variables and set S is used in (18) to calculate the Lasso solution. Denote by $\hat{S}^\Omega = \{j \in S : W_j^{\Omega, S} \ge T\}$ the final set of selected features using the LCD $W_j^{\Omega, S}$ in the knockoffs inference framework. We further define a class of precision matrices $\Omega \in \mathbb{R}^{p \times p}$

$$\mathcal{A} = \big\{\Omega : \Omega \text{ is } L_p\text{-sparse and } \|\Omega - \Omega_0\|_2 \le C_2 a_n\big\}, \qquad (19)$$

where $C_2$ and $a_n$ are the same as in Theorem 2 and $L_p$ is some positive integer that diverges with n. Similarly as in Section 2.2, in the technical analysis we assume implicitly that with asymptotic probability one, for all valid constructions of the knockoff variables there are no ties in the magnitude of nonzero knockoff statistics and no ties in the magnitude of nonzero components of the Lasso solution uniformly over all $\Omega \in \mathcal{A}$ and $|S| \le K_n$.

Condition 7. It holds that $P\{\hat{\Omega} \in \mathcal{A}\} \ge 1 - c_2 p^{-c_2}$ for some constant $c_2 > 0$.

The assumption on the estimated precision matrix Ω^ made in Condition 7 is mild and flexible. A similar class of precision matrices was considered in [29] with detailed discussions on the choices of the estimation procedures. See, for example, [31, 10, 52] for some more recent developments on large precision matrix estimation and inference. In parallel to Theorem 1, we have the following results on the power of our modified knockoffs procedure with the estimated precision matrix Ω^.

Theorem 3. Assume that Conditions 1–2 and 6–7 hold, the smallest eigenvalue of $2\,\mathrm{diag}\{s\} - \mathrm{diag}\{s\}\,\Omega_0\,\mathrm{diag}\{s\}$ is positive and bounded away from 0, $|\{j : |\beta_{0,j}| \ge [s n^{-1}(\log p)]^{1/2}\}| \ge cs$, and $\lambda = C_\lambda\{(\log p)/n\}^{1/2}$ with $c \in ((qs)^{-1}, 1)$ and $C_\lambda > 0$ some constants. Then if $(L_p^{1/2} + K_n^{1/2})\, a_n = o(1)$ and $s\{a_n + (K_n + L_p)[n^{-1}(\log p)]^{1/2}\} = o(1)$, RANK with estimated precision matrix $\hat{\Omega}$ and reduced model $\tilde{S}$ has power satisfying

$$\mathrm{Power}(\hat{\Omega}, \tilde{S}) = E\left[\frac{|\hat{S}^{\hat{\Omega}} \cap S_0|}{|S_0|}\right] \ge 1 - C_{l_1} C_\lambda (\varphi + 1)\, \kappa_n^{-1} - c_2 p^{-c_2} - c_3 p^{-c_3} + o(\kappa_n^{-1}) = 1 - o(1),$$

where φ is the golden ratio and $C_{l_1}$ is some positive constant.

Theorem 3 establishes the robustness of the power for the RANK method. In view of Theorems 2–3, we see that our modified knockoffs procedure RANK can enjoy appealing properties of FDR control and power simultaneously when the true covariate distribution is unknown and needs to be estimated in high dimensions.

4. Simulation studies

So far we have seen that our suggested RANK method admits appealing theoretical properties for large-scale inference in high-dimensional nonlinear models. We now examine the finite-sample performance of RANK through four simulation examples.

4.1. Model setups and simulation settings

Recall that the original knockoff filter (KF) in [2] was designed for linear regression model with dimensionality p not exceeding sample size n, while the high-dimensional knockoff filter (HKF) in [3] considers linear model with p possibly larger than n. To compare RANK with the HKF procedure in high-dimensional setting, our first simulation example adopts the linear regression model

y=Xβ+ε, (20)

where y is an n-dimensional response vector, X is an n × p design matrix, β = (β1, · · ·, βp)T is a p-dimensional regression coefficient vector, and ε is an n-dimensional error vector. Non-linear models provide useful and flexible alternatives to linear models and are widely used in real applications. Our second through fourth simulation examples are devoted to three popular nonlinear model settings: the partially linear model, the single-index model, and the additive model, respectively. As a natural extension of linear model (20), the partially linear model assumes that

y=Xβ+g(U)+ε, (21)

where g(U) = (g(U1), · · ·, g(Un))T is an n-dimensional vector-valued function with covariate vector U = (U1, · · ·, Un)T, g(·) is some unknown smooth nonparametric function, and the rest of notation is the same as in model (20). In particular, the partially linear model is a semiparametric regression model that has been commonly used in many areas such as economics, finance, medicine, epidemiology, and environmental science [16, 33].

The third and fourth simulation examples drop the linear component. As a popular tool for dimension reduction, the single-index model assumes that

y=g(Xβ)+ε, (22)

where $g(X\beta) = (g(x_1^T\beta), \ldots, g(x_n^T\beta))^T$ with X = (x1, · · ·, xn)T, g(·) is an unknown link function, and the remaining notation is the same as in model (20). In particular, the single-index model provides a flexible extension of the GLM by relaxing the parametric form of the link function [40, 56, 34, 42, 37]. To bring more flexibility while alleviating the curse of dimensionality, the additive model assumes that

$$y = \sum_{j=1}^p g_j(X_j) + \varepsilon, \qquad (23)$$

where gj(θ) = (gj(θ1), · · ·, gj(θn))T for θ = (θ1, · · ·, θn)T, Xj represents the jth covariate vector with X = (X1, · · ·, Xp), gj (·)’s are some unknown smooth functions, and the rest of notation is the same as in model (20). The additive model has been widely employed for nonparametric modeling of high-dimensional data [35, 51, 47, 11].

For the linear model (20) in simulation example 1, the rows of the n × p design matrix X are generated as i.i.d. copies of N(0, Σ) with precision matrix $\Sigma^{-1} = (\rho^{|j-k|})_{1 \le j,k \le p}$ for ρ = 0 and 0.5. We set the true regression coefficient vector $\beta_0 \in \mathbb{R}^p$ as a sparse vector with s = 30 nonzero components, where the signal locations are chosen randomly and each nonzero coefficient is selected randomly from {±A} with A = 1.5 and 3.5. The error vector ε is assumed to be $N(0, \sigma^2 I_n)$ with σ = 1. We set sample size n = 400 and consider the high-dimensional scenario with dimensionality p = 200, 400, 600, 800, and 1000. For the partially linear model (21) in simulation example 2, we choose the true function as g(U) = sin(2πU), generate U = (U1, · · ·, Un)T with i.i.d. Ui from the uniform distribution on [0, 1], and set A = 1.5 with the remaining setting the same as in simulation example 1.
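As a reference, the data-generating process for simulation example 1 can be sketched as follows (our code, not part of the paper; simulation example 2 adds the nonparametric term g(U) = sin(2πU) on top of the linear part).

```python
import numpy as np

def simulate_linear_example(n=400, p=1000, s=30, A=1.5, rho=0.5, sigma=1.0,
                            rng=None):
    """One replication of simulation example 1: rows of X are i.i.d. N(0, Sigma)
    with precision matrix Sigma^{-1} = (rho^{|j-k|}), beta0 has s nonzero entries
    equal to +/-A at random locations, and y = X beta0 + eps, eps ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(rng)
    Omega0 = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega0), size=n)
    beta0 = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)
    beta0[support] = rng.choice([-A, A], size=s)
    y = X @ beta0 + sigma * rng.standard_normal(n)
    return X, y, beta0, Omega0
```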

Since the single-index model and additive model are more complex than the linear model and partially linear model, we reduce the true model size s while keeping sample size n = 400 in both simulation examples 3 and 4. For the single-index model (22) in simulation example 3, we consider the true link function $g(x) = x^3/2$ and set p = 200, 400, 600, 800, and 1000. The true p-dimensional regression coefficient vector β0 is generated similarly with s = 10 and A = 1.5. For the additive model (23) in simulation example 4, we assume that s = 10 of the functions gj(·)’s are nonzero with j’s chosen randomly from {1, · · ·, p} and the remaining p − 10 functions gj(·)’s vanish. Specifically, each nonzero function gj(·) is taken to be a polynomial of degree 3 and all coefficients under the polynomial basis functions are generated independently as $N(0, 10^2)$ as in [11]. The dimensionality p is allowed to vary with values 200, 400, 600, 800, and 1000. For each simulation example, we set the number of repetitions to 100.

4.2. Estimation procedures

To implement the RANK procedure described in Section 3.1, we need to construct a precision matrix estimator $\hat{\Omega}$ and obtain the reduced model $\tilde{S}$ using the first fold of data $(X^{(1)}, y^{(1)})$. Among all available estimators in the literature, we employ the ISEE method in [31] for precision matrix estimation due to its scalability, simple tuning, and nice theoretical properties. For simplicity, we choose $s_j = 1/\Lambda_{\max}(\hat{\Omega})$ for all 1 ≤ j ≤ p, where $\hat{\Omega}$ denotes the ISEE estimator for the true precision matrix $\Omega_0$ and $\Lambda_{\max}$ stands for the largest eigenvalue of a matrix. Then we can obtain an (n/2) × (2p) augmented design matrix $[X^{(2)}, \hat{X}]$, where $\hat{X}$ represents the (n/2) × p knockoffs matrix constructed in Step 3 of our modified knockoffs procedure in Section 3.1. To construct the reduced model $\tilde{S}$ using the first fold of data $(X^{(1)}, y^{(1)})$, we borrow strength from the recent literature on feature selection methods. After $\tilde{S}$ is obtained, we employ the reduced data $(X_{\mathrm{aug}}^{\tilde{S}}, y^{(2)})$ with $X_{\mathrm{aug}}^{\tilde{S}} = [X_{\tilde{S}}^{(2)}, \hat{X}_{\tilde{S}}]$ to fit a model and construct the knockoff statistics. In what follows, we will discuss feature selection methods for obtaining $\tilde{S}$ for the linear model (20), partially linear model (21), single-index model (22), and additive model (23) in simulation examples 1–4, respectively. We will also discuss the construction of knockoff statistics in each model setting.

For the linear model (20) in simulation example 1, we obtain the reduced model S˜ by first applying the Lasso procedure

$$\hat{\beta}^{(1)} = \operatorname*{arg\,min}_{b \in \mathbb{R}^{p}} \left\{ n^{-1} \big\| y^{(1)} - X^{(1)} b \big\|_2^2 + \lambda \|b\|_1 \right\} \qquad (24)$$

with λ ≥ 0 the regularization parameter and then taking the support $\tilde{S} = \mathrm{supp}(\hat{\beta}^{(1)})$. Then with the estimated $\hat{\Omega}$ and $\tilde{S}$, we construct the knockoff statistics as the LCD in (17), where the estimated regression coefficient vector is obtained by applying the Lasso procedure on the reduced model as described in (18). The regularization parameter λ in the Lasso is tuned using K-fold cross-validation (CV).

For the partially linear model (21) in simulation example 2, we employ the profiling method in semiparametric regression based on the first fold of data $(X^{(1)}, U^{(1)}, y^{(1)})$ by observing that model (21) becomes a linear model when conditioning on the covariate vector $U^{(1)}$. Consequently we need to estimate both the profiled response $E(y^{(1)} \mid U^{(1)})$ and the profiled covariates $E(X^{(1)} \mid U^{(1)})$. To this end, we adopt the local linear smoothing estimators [18] $\widehat{E(y^{(1)} \mid U^{(1)})}$ and $\widehat{E(X^{(1)} \mid U^{(1)})}$ of $E(y^{(1)} \mid U^{(1)})$ and $E(X^{(1)} \mid U^{(1)})$ using the Epanechnikov kernel $K(u) = 0.75(1 - u^2)_+$ with the optimal bandwidth selected by generalized cross-validation (GCV). Then we define the Lasso estimator $\hat{\beta}^{(1)}$ for the p-dimensional regression coefficient vector similarly as in (24) with $y^{(1)}$ and $X^{(1)}$ replaced by $y^{(1)} - \widehat{E(y^{(1)} \mid U^{(1)})}$ and $X^{(1)} - \widehat{E(X^{(1)} \mid U^{(1)})}$, respectively. The reduced model is then taken as $\tilde{S} = \mathrm{supp}(\hat{\beta}^{(1)})$. For the knockoff statistics $\hat{W}_j$, we set $\hat{W}_j = 0$ for all $j \notin \tilde{S}$. On the support $\tilde{S}$, we construct $\hat{W}_j = |\hat{\beta}_j| - |\hat{\beta}_{p+j}|$ with $\hat{\beta}_j$ and $\hat{\beta}_{p+j}$ the Lasso coefficients obtained by applying the model fitting procedure described above to the reduced data $(X_{\mathrm{aug}}^{\tilde{S}}, U^{(2)}, y^{(2)})$ in the second subsample with $X_{\mathrm{aug}}^{\tilde{S}} = [X_{\tilde{S}}^{(2)}, \hat{X}_{\tilde{S}}]$.
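For reference, a compact local linear smoother with the Epanechnikov kernel is sketched below (our code; the bandwidth h would be chosen by GCV as stated above). It can be applied to the response and to each covariate column to carry out the profiling step; the commented usage at the end, with the hypothetical names y1, X1, U1, and h, shows how the residuals would feed into the Lasso (24).

```python
import numpy as np

def local_linear_smooth(u, z, h):
    """Local linear estimate of E(z | U) at each observed U_i, using the
    Epanechnikov kernel K(t) = 0.75 (1 - t^2)_+ and bandwidth h.
    u : (m,) nonparametric covariate; z : (m,) variable to be profiled."""
    u, z = np.asarray(u, float), np.asarray(z, float)
    fitted = np.empty_like(z)
    for i, u0 in enumerate(u):
        d = u - u0
        w = 0.75 * np.clip(1.0 - (d / h) ** 2, 0.0, None)     # kernel weights
        s0, s1, s2 = w.sum(), (w * d).sum(), (w * d ** 2).sum()
        # Standard local linear weights; small ridge guards the denominator.
        fitted[i] = ((w * (s2 - d * s1)) @ z) / (s0 * s2 - s1 ** 2 + 1e-12)
    return fitted

# Profiling step (hypothetical variable names): residualize the response and
# every covariate column on U, then run the Lasso (24) on the residuals.
# y_prof = y1 - local_linear_smooth(U1, y1, h)
# X_prof = np.column_stack([X1[:, j] - local_linear_smooth(U1, X1[:, j], h)
#                           for j in range(X1.shape[1])])
```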

To fit the single-index model (22) in simulation example 3, we employ the Lasso-SIR method in [43]. The Lasso-SIR method first divides the sample of m = n/2 observations in the first subsample $(X^{(1)}, y^{(1)})$ into H slices of equal length c, and constructs the matrix $\Lambda_H = \frac{1}{mc}(X^{(1)})^T M M^T X^{(1)}$, where $M = I_H \otimes \mathbf{1}_c$ is an m × H matrix given by the Kronecker product of the identity matrix $I_H$ and the constant vector $\mathbf{1}_c$ of ones. Then the Lasso-SIR estimates the p-dimensional regression coefficient vector $\hat{\beta}^{(1)}$ using the Lasso procedure similarly as in (24) with the original response vector $y^{(1)}$ replaced by a new response vector $\tilde{y}^{(1)} = (c\lambda_1)^{-1} M M^T X^{(1)} \eta_1$, where $\lambda_1$ denotes the largest eigenvalue of matrix $\Lambda_H$ and $\eta_1$ is the corresponding eigenvector. We set the number of slices H = 5. Then the reduced model is taken as $\tilde{S} = \mathrm{supp}(\hat{\beta}^{(1)})$. We then apply the Lasso-SIR fitting procedure discussed above to the reduced data $(X_{\mathrm{aug}}^{\tilde{S}}, y^{(2)})$ with $X_{\mathrm{aug}}^{\tilde{S}} = [X_{\tilde{S}}^{(2)}, \hat{X}_{\tilde{S}}]$ and construct knockoff statistics in a similar way as in the partially linear model.
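Under our reading of the description above, the Lasso-SIR pseudo-response can be formed as follows (a sketch, not the authors' code; we slice by the order statistics of the response and assume m is divisible by H). The reduced-model coefficient vector is then obtained by running the Lasso of this pseudo-response on the first-fold design, as in (24).

```python
import numpy as np

def lasso_sir_pseudo_response(X, y, H=5):
    """Construct the Lasso-SIR pseudo-response y_tilde from (X, y)."""
    m, p = X.shape
    c = m // H                      # slice length (assumes m divisible by H)
    order = np.argsort(y)
    # M plays the role of I_H kron 1_c, up to the ordering of observations.
    M = np.zeros((m, H))
    for h in range(H):
        M[order[h * c:(h + 1) * c], h] = 1.0
    Lambda_H = X.T @ M @ M.T @ X / (m * c)
    eigval, eigvec = np.linalg.eigh(Lambda_H)
    lam1, eta1 = eigval[-1], eigvec[:, -1]      # top eigenpair of Lambda_H
    return (M @ M.T @ X @ eta1) / (c * lam1)
```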

To fit the additive model (23) in simulation example 4, we apply the GAMSEL procedure in [11] for sparse additive regression. In particular, we choose 6 basis functions each with 6 degrees of freedom for the smoothing splines using orthogonal polynomials for each additive component and set the penalty mixing parameter γ = 0.9 in GAMSEL to obtain estimators of the true functions gj(·)’s. The GAMSEL procedure is first applied to the first subsample (X(1), y(1)) to obtain the reduced model S˜, and then applied to the reduced data (XaugS˜,y(2)) with XaugS˜=[XS˜(2),X^S˜] to obtain estimates g^j and g^p+j for the additive functions corresponding to the jth covariate and its knockoff counterpart with jS˜, respectively. The knockoff statistics are then constructed as

$$\hat{W}_j = \big\|\hat{g}_j\big\|_{n/2}^2 - \big\|\hat{g}_{p+j}\big\|_{n/2}^2 \quad \text{for } j \in \tilde{S} \qquad (25)$$

and $\hat{W}_j = 0$ for $j \notin \tilde{S}$, where $\|\hat{g}_j\|_{n/2}$ represents the empirical norm of the estimated function $\hat{g}_j(\cdot)$ evaluated at its observed points and n/2 stands for the size of the second subsample.
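Given the fitted component functions from GAMSEL (or any sparse additive fit), the statistics in (25) reduce to differences of mean squared fitted values. A small sketch (ours), where the squared empirical norm is taken as the mean of the squared fitted values:

```python
import numpy as np

def additive_knockoff_stats(g_hat, g_hat_knock, S_tilde, p):
    """Knockoff statistics (25) for the additive model.

    g_hat, g_hat_knock : dicts mapping j in S_tilde to the length-(n/2) arrays
    of fitted values of g_j and of its knockoff counterpart, respectively.
    Returns the length-p vector of W_j, with W_j = 0 off the reduced model."""
    W = np.zeros(p)
    for j in S_tilde:
        W[j] = (np.mean(np.asarray(g_hat[j]) ** 2)
                - np.mean(np.asarray(g_hat_knock[j]) ** 2))
    return W
```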

It is seen that in all four examples above, intuitively large positive values of knockoff statistics W^j provide strong evidence against the jth null hypothesis H0,j: βj = 0 or H0,j: gj = 0. For all simulation examples, we set the target FDR level at q = 0.2.

4.3. Simulation results

To gain some insights into the effect of data splitting, we also implemented our procedure without the data splitting step. To differentiate, we use RANKs to denote the procedure with data splitting and RANK to denote the procedure without data splitting. To examine the feature selection performance, we look at both measures of FDR and power. The empirical versions of FDR and power based on 100 replications are reported in Tables 1–2 for simulation example 1 and Tables 3–5 for simulation examples 2–4, respectively. In particular, Table 1 compares the performance of RANK and RANK+ with that of RANKs and RANKs+, where the subscript + stands for the corresponding method when the modified knockoff threshold $T_+$ is used. We see from Table 1 that RANK and RANK+ mimic closely RANKs and RANKs+, respectively, suggesting that data splitting is more of a technical assumption. In addition, the FDR is approximately controlled at the target level of q = 0.2 with high power, which is in line with our theory. Table 2 summarizes the comparison of RANKs with the HKF procedure for the high-dimensional linear regression model. Although both methods are based on data splitting, their practical performance is very different. It is seen that although controlling the FDR below the target level, HKF suffers from a loss of power due to the use of the screening step, and the power deteriorates as dimensionality p increases. In contrast, the performance of RANKs is robust across different correlation levels ρ and dimensionality p. It is worth mentioning that the HKF procedure with data recycling performed generally better than that with data splitting alone. Thus only the results for the former version are reported in Table 2 for simplicity.

Table 1:

Simulation results for linear model (20) in simulation example 1 with A = 1.5 in Section 4.1

ρ     p      RANK (FDR, Power)    RANK+ (FDR, Power)    RANKs (FDR, Power)    RANKs+ (FDR, Power)
0 200 0.2054 1.00 0.1749 1.00 0.1909 1.00 0.1730 1.00
400 0.2062 1.00 0.1824 1.00 0.2010 1.00 0.1801 1.00
600 0.2263 1.00 0.1940 1.00 0.2206 1.00 0.1935 1.00
800 0.2385 1.00 0.1911 1.00 0.2247 1.00 0.1874 1.00
1000 0.2413 1.00 0.2083 1.00 0.2235 1.00 0.1970 1.00
0.5 200 0.2087 1.00 0.1844 1.00 0.1875 1.00 0.1692 1.00
400 0.2144 1.00 0.1879 1.00 0.1954 1.00 0.1703 1.00
600 0.2292 1.00 0.1868 1.00 0.2062 1.00 0.1798 1.00
800 0.2398 1.00 0.1933 1.00 0.2052 0.9997 0.1805 0.9997
1000 0.2412 1.00 0.2019 1.00 0.2221 0.9984 0.2034 0.9984

Table 2:

Simulation results for linear model (20) in simulation example 1 with A = 3.5 in Section 4.1

ρ     p      RANKs (FDR, Power)    RANKs+ (FDR, Power)    HKF (FDR, Power)    HKF+ (FDR, Power)
0 200 0.1858 1.00 0.1785 1.00 0.1977 0.9849 0.1749 0.9837
400 0.1895 1.00 0.1815 1.00 0.2064 0.9046 0.1876 0.8477
600 0.2050 1.00 0.1702 1.00 0.1964 0.8424 0.1593 0.7668
800 0.2149 1.00 0.1921 1.00 0.1703 0.7513 0.1218 0.6241
1000 0.2180 1.00 0.1934 1.00 0.1422 0.7138 0.1010 0.5550
0.5 200 0.1986 1.00 0.1618 1.00 0.1992 0.9336 0.1801 0.9300
400 0.1971 1.00 0.1805 1.00 0.1657 0.8398 0.1363 0.7825
600 0.2021 1.00 0.1757 1.00 0.1253 0.7098 0.0910 0.6068
800 0.2018 1.00 0.1860 1.00 0.1374 0.6978 0.0917 0.5792
1000 0.2097 0.9993 0.1920 0.9993 0.1552 0.6486 0.1076 0.5524

Table 3:

Simulation results for partially linear model (21) in simulation example 2 in Section 4.1

ρ     p      RANK (FDR, Power)    RANK+ (FDR, Power)    RANKs (FDR, Power)    RANKs+ (FDR, Power)
0 200 0.2117 1.00 0.1923 1.00 0.1846 0.9976 0.1699 0.9970
400 0.2234 1.00 0.1977 1.00 0.1944 0.9970 0.1747 0.9966
600 0.2041 1.00 0.1776 1.00 0.2014 0.9968 0.1802 0.9960
800 0.2298 1.00 0.1810 1.00 0.2085 0.9933 0.1902 0.9930
1000 0.2322 1.00 0.1979 1.00 0.2113 0.9860 0.1851 0.9840
0.5 200 0.2180 1.00 0.1929 1.00 0.1825 0.9952 0.1660 0.9949
400 0.2254 1.00 0.1966 1.00 0.1809 0.9950 0.1628 0.9948
600 0.2062 1.00 0.1814 1.00 0.2038 0.9945 0.1898 0.9945
800 0.2264 1.00 0.1948 1.00 0.2019 0.9916 0.1703 0.9906
1000 0.2316 1.00 0.2033 1.00 0.2127 0.9830 0.1857 0.9790

Table 5:

Simulation results for additive model (23) in simulation example 4 in Section 4.1

ρ     p      RANK (FDR, Power)    RANK+ (FDR, Power)    RANKs (FDR, Power)    RANKs+ (FDR, Power)
0 200 0.1926 0.9780 0.1719 0.9690 0.2207 0.9490 0.1668 0.9410
400 0.2094 0.9750 0.1773 0.9670 0.2236 0.9430 0.1639 0.9340
600 0.2155 0.9670 0.1729 0.9500 0.2051 0.9310 0.1620 0.9220
800 0.2273 0.9590 0.1825 0.9410 0.2341 0.9280 0.1905 0.9200
1000 0.2390 0.9570 0.1751 0.9350 0.2350 0.9140 0.1833 0.9070
0.5 200 0.1904 0.9680 0.1733 0.9590 0.2078 0.9370 0.1531 0.9330
400 0.2173 0.9650 0.1701 0.9540 0.2224 0.9360 0.1591 0.9280
600 0.2267 0.9600 0.1656 0.9360 0.2366 0.9340 0.1981 0.9270
800 0.2306 0.9540 0.1798 0.9320 0.2332 0.9150 0.1740 0.9110
1000 0.2378 0.9330 0.1793 0.9270 0.2422 0.8970 0.1813 0.8880

For the high-dimensional nonlinear settings of the partially linear model, single-index model, and additive model in simulation examples 2–4, we see from Tables 3–5 that RANKs and RANKs+ performed well and similarly to RANK and RANK+ in terms of both FDR control and power across different scenarios. These results demonstrate the model-X feature of our procedure for large-scale inference in nonlinear models.

5. Real data analysis

In addition to the simulation examples presented in Section 4, we also demonstrate the practical utility of our RANK procedure on a gene expression data set, which is based on Affymetrix GeneChip microarrays for the plant Arabidopsis thaliana in [63]. It is well known that isoprenoids play a key role in plant and animal physiological processes, such as photosynthesis, respiration, regulation of growth, and defense against pathogens. In particular, [38] found that many of the genes expressed preferentially in mature leaves are readily recognizable as genes involved in photosynthesis, including rubisco activase (AT2G39730), fructose bisphosphate aldolase (AT4G38970), and two glycine hydroxymethyltransferase genes (AT4G37930 and AT5G26780). Moreover, isoprenoids have become important ingredients in various drugs (e.g., against cancer and malaria), fragrances (e.g., menthol), and food colorants (e.g., carotenoids). See, for instance, [63, 53, 49] on studying the mechanism of isoprenoid synthesis in a wide range of applications.

The aforementioned data set in [63] consists of 118 gene expression patterns under various experimental conditions for 39 isoprenoid genes, 15 of which are assigned to the regulatory pathway, 19 to the plastidal pathway, and the remaining 5 isoprenoid genes encode proteins located in the mitochondrion. Moreover, 795 additional genes from 56 metabolic pathways are incorporated into the isoprenoid genetic network. Thus the combined data set comprises a sample of n = 118 gene expression patterns for 834 genes. This data set was studied in [65] for identifying genes that exhibit significant association with the specific isoprenoid gene GGPPS11 (AGI code AT4G36810). Motivated by [65], we choose the expression level of isoprenoid gene GGPPS11 as the response and treat the remaining p = 833 genes from 58 different metabolic pathways as the covariates, so that the dimensionality p is much larger than the sample size n. All the variables are logarithmically transformed. To identify important genes associated with isoprenoid gene GGPPS11, we employ the RANK method using the Lasso procedure with target FDR level q = 0.2. The implementation of RANK is the same as that in Section 4 for the linear model. Since the sample size of this data set is relatively small, we choose to implement RANK without sample splitting, which has been demonstrated in Section 4 to be capable of controlling the FDR at the desired level.

Table 6 lists the genes selected by RANK, RANK+, and Lasso along with their associated pathways. We see from Table 6 that RANK, RANK+, and Lasso selected 9, 7, and 17 genes, respectively. The common set of four genes, AT4G38970, AT2G27820, AT2G01880, and AT5G19220, was selected by all three methods. The values of the adjusted R2 for the three selected models are 0.7523, 0.7515, and 0.7843, respectively, showing a similar level of goodness of fit. In particular, among the top 20 genes selected using the Elem-OLS method with entrywise transformed Gram matrix in [65], five genes (AT1G57770, AT1G78670, AT3G56960, AT2G27820, and AT4G13700) selected by RANK are included in that list, and three genes (AT1G57770, AT1G78670, and AT2G27820) picked by RANK+ are contained in the same list.

Table 6:

Selected genes and their associated pathways for real data analysis in Section 5

RANK (9 genes)
Pathway  Gene
Calvin   AT4G38970
Carote   AT1G57770
Folate   AT1G78670
Inosit   AT3G56960
Phenyl   AT2G27820
Purine   AT3G01820
Ribo     AT4G13700
Ribo     AT2G01880
Starch   AT5G19220

RANK+ (7 genes)
Pathway  Gene
Calvin   AT4G38970
Carote   AT1G57770
Folate   AT1G78670
Phenyl   AT2G27820
Purine   AT3G01820
Ribo     AT2G01880
Starch   AT5G19220

Lasso (17 genes)
Pathway  Gene
Berber   AT2G34810
Calvin   AT4G38970
Calvin   AT3G04790
Glutam   AT5G18170
Glycol   AT4G27600
Pentos   AT3G04790
Phenyl   AT2G27820
Porphy   AT1G03475
Porphy   AT3G51820
Porphy   AT4G18480
Pyrimi   AT5G59440
Ribo     AT2G01880
Starch   AT5G19220
Starch   AT2G21590
Trypto   AT5G48220
Trypto   AT5G17980
Mevalo   AT5G47720

To gain some scientific insights into the selected genes, we conducted Gene Ontology (GO) enrichment analysis to interpret, from the biological point of view, the influence of the selected genes on isoprenoid gene GGPPS11, which is known as a precursor to chloroplast, carotenoids, tocopherols, and abscisic acid. Specifically, in the enrichment test of GO biological process, gene AT1G57770 is involved in the carotenoid biosynthetic process. In the GO cellular component enrichment test, genes AT4G38970 and AT5G19220 are located in the chloroplast, chloroplast envelope, and chloroplast stroma; gene AT1G57770 is located in the chloroplast and mitochondrion; and gene AT2G27820 is located in the chloroplast, chloroplast stroma, and cytosol. The GO molecular function enrichment test shows that gene AT4G38970 has fructose-bisphosphate aldolase activity and gene AT1G57770 has carotenoid isomerase activity and oxidoreductase activity. These scientific insights in terms of biological process, cellular component, and molecular function suggest that the selected genes may have a meaningful biological relationship with the target isoprenoid gene GGPPS11. See, for example, [38, 50, 62] for more discussions on these genes.

6. Discussions

Our analysis in this paper reveals that the suggested RANK method exploiting the general framework of model-X knockoffs introduced in [9] can asymptotically control the FDR in general high-dimensional nonlinear models with unknown covariate distribution. The robustness of the FDR control under estimated covariate distribution is enabled by imposing the Gaussian graphical structure on the covariates. Such a structural assumption has been widely employed to model the association networks among the covariates and extensively studied in the literature. Our method and theoretical results are powered by scalable large precision matrix estimation with statistical efficiency. It would be interesting to extend the robustness theory of the FDR control beyond Gaussian designs as well as for heavy-tailed data and dependent observations.

Our work also provides a first attempt at power analysis for the model-X knockoffs framework. The nontrivial technical analysis establishes that RANK can have asymptotic power one in the high-dimensional linear model setting when the Lasso is used for sparse regression. It would be interesting to extend the power analysis for RANK to a wide class of sparse regression and feature screening methods including SCAD, SIS, and many other concave regularization methods [22, 23, 17, 30]. Though more challenging, it is also important to investigate the power property of RANK beyond linear models. The power analysis in general high-dimensional nonlinear models is highly challenging for several reasons. First, the minimum signal strength needs to be characterized precisely in the power analysis. Yet unlike the beta-min measure in the linear model, there is no widely accepted measure with an explicit formula for the minimum signal strength in general high-dimensional nonlinear models. Second, the estimation error associated with each covariate plays an important role in the power analysis. However, in general nonlinear models it is unclear how to disentangle the individual estimation error corresponding to each covariate. Third, the knockoffs procedure builds on some underlying variable selection method, which itself is highly challenging both empirically and theoretically in general high-dimensional nonlinear models.

Our RANK procedure utilizes the idea of data splitting, which plays an important role in our technical analysis. Our numerical examples, however, suggest that data splitting is more of a technical assumption than a practical necessity. It would be interesting to develop theoretical guarantees for RANK without data splitting. These extensions are interesting topics for future research.

Supplementary Material


Table 4:

Simulation results for single-index model (22) in simulation example 3 in Section 4.1

ρ p RANK (FDR, Power) RANK+ (FDR, Power) RANKs (FDR, Power) RANKs+ (FDR, Power)
0 200 0.1893 1 0.1413 1 0.1899 1 0.1383 1
400 0.2163 1 0.1598 1 0.245 0.998 0.1676 0.997
600 0.2166 1 0.1358 1 0.2314 0.999 0.1673 0.998
800 0.1964 1 0.1406 1 0.2443 0.992 0.1817 0.992
1000 0.2051 1 0.134 1 0.2431 0.969 0.1611 0.962
0.5 200 0.2189 1 0.1591 1 0.2322 1 0.1626 1
400 0.2005 1 0.1314 1 0.2099 0.996 0.1615 0.995
600 0.2064 1 0.1426 1 0.2331 0.998 0.1726 0.998
800 0.2049 1 0.1518 1 0.2288 0.994 0.1701 0.994
1000 0.2259 1 0.1423 1 0.2392 0.985 0.185 0.983

Acknowledgments

This work was supported by NIH Grant 1R01GM131407-01, NSF CAREER Award DMS-1150318, a grant from the Simons Foundation, and Adobe Data Science Research Award. The authors sincerely thank the Joint Editor, Associate Editor, and referees for their valuable comments that helped improve the article substantially.

A. Proofs of main results

We provide the proofs of Theorems 1–3, Propositions 1–2, and Lemmas 1–2 in this appendix. Additional technical details for the proofs of Lemmas 3–8 are included in the Supplementary Material. To ease the technical presentation, we first introduce some notation. Let $\Lambda_{\min}(\cdot)$ and $\Lambda_{\max}(\cdot)$ denote the smallest and largest eigenvalues of a symmetric matrix. For any matrix $A = (a_{ij})$, denote by $\|A\|_1 = \max_j \sum_i |a_{ij}|$, $\|A\|_{\max} = \max_{i,j} |a_{ij}|$, $\|A\|_2 = \Lambda_{\max}^{1/2}(A^T A)$, and $\|A\|_F = [\operatorname{tr}(A^T A)]^{1/2}$ the matrix 1-norm, entrywise maximum norm, spectral norm, and Frobenius norm, respectively. For any set $S \subset \{1, \ldots, p\}$, we use $A_S$ to represent the submatrix of $A$ formed by the columns in set $S$ and $A_{S,S}$ to denote the principal submatrix formed by the columns and rows in set $S$.
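As a small illustration (added here for convenience, not part of the original appendix), these four matrix norms can be computed directly with NumPy as follows.

```python
# Numerical illustration of the matrix norms defined above (illustrative only).
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])

norm_1   = np.abs(A).sum(axis=0).max()                 # ||A||_1: maximum absolute column sum
norm_max = np.abs(A).max()                             # ||A||_max: entrywise maximum
norm_2   = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())  # ||A||_2: largest singular value
norm_F   = np.sqrt(np.trace(A.T @ A))                  # ||A||_F: Frobenius norm

print(norm_1, norm_max, norm_2, norm_F)                # 6.0  4.0  ~5.12  ~5.48
```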

A.1. Proofs of Lemma 1 and Theorem 1

Observe that the choice of $\tilde{S} = \{1, \ldots, p\}$ certainly satisfies the sure screening property. We see that Lemma 1 and Theorem 1 are specific cases of Lemma 6 in Section B.4 of the Supplementary Material and Theorem 3, respectively. Thus we only prove the latter ones.

A.2. Proof of Proposition 1

In this proof, we will consider $\Omega$ and $S$ as deterministic parameters and focus only on the second half of the sample $(X^{(2)}, y^{(2)})$ used in FDR control. Thus, we will drop the superscripts in $(X^{(2)}, y^{(2)})$ whenever there is no confusion. For a given precision matrix $\Omega$, the matrix of knockoff variables

$$\tilde{X}^{\Omega} = [\tilde{x}_1^{\Omega}, \ldots, \tilde{x}_n^{\Omega}]^T$$

can be generated using (12) with $\Omega_0$ replaced by $\Omega$. Here, we use the superscript $\Omega$ to emphasize the dependence of the knockoffs matrix on $\Omega$. Recall that for a given set $S$ with $k = |S|$, we calculate the knockoff statistics $W_j$'s using $(y, X_S, \tilde{X}_S^{\Omega})$. Thus, the FDR function can be written as

$$\mathrm{FDR}_n(\Omega, S) = E[\mathrm{FDP}_n(y, X_S, \tilde{X}_S^{\Omega})] = E[g_{1,n}(X, \tilde{X}_S^{\Omega})], \tag{26}$$

where $g_{1,n}(X, \tilde{X}_S^{\Omega}) = E[\mathrm{FDP}_n(y, X_S, \tilde{X}_S^{\Omega}) \,|\, X, \tilde{X}_S^{\Omega}]$. It is seen that the function $g_{1,n}$ is the conditional FDP when the knockoff variables $\tilde{x}_i^{\Omega}$, $1 \le i \le n$, are simulated using $\Omega$ and only variables in set $S$ are used to construct the knockoff statistics $W_j$. We want to emphasize that since, given $X$, the response $y$ is independent of $\tilde{X}_S^{\Omega}$, the functional form of $g_{1,n}$ is free of the matrix $\Omega$ used to generate the knockoff variables.

Using the technical arguments in [9], we can show that $\mathrm{FDR}_n(\Omega_0, S) \le q$ for any sample size $n$ and all subsets $S \subset \{1, \ldots, p\}$ that are independent of the original data $(X^{(2)}, y^{(2)})$ used in the knockoffs procedure. Observe that the only difference between $\mathrm{FDR}_n(\Omega, S)$ and $\mathrm{FDR}_n(\Omega_0, S)$ is that different precision matrices are used to generate the knockoff variables. We restrict ourselves to the following data generating scheme

$$\tilde{x}_i^{\Omega} = (C^{\Omega})^T x_i + B^{\Omega} z_i, \quad i = 1, \ldots, n,$$

where $C^{\Omega} = I_p - \Omega\,\mathrm{diag}\{s\}$, the $z_i \sim N(0, I_p)$ are i.i.d. normal random vectors that are independent of the $x_i$'s, and $B^{\Omega} = (2\,\mathrm{diag}\{s\} - \mathrm{diag}\{s\}\Omega\,\mathrm{diag}\{s\})^{1/2}$. For simplicity, write $\tilde{x}_i^{(0)} = \tilde{x}_i^{\Omega_0}$, $B_0 = B^{\Omega_0}$, and $C_0 = C^{\Omega_0}$, i.e., the matrices corresponding to the oracle case. Then restricted to set $S$,

$$\tilde{x}_{i,S}^{\Omega} = (C_S^{\Omega})^T x_i + (B_S^{\Omega})^T z_i, \qquad \tilde{x}_{i,S}^{(0)} = (C_{0,S})^T x_i + (B_{0,S})^T z_i,$$

where the subscript $S$ means the submatrix (subvector) formed by the columns (components) in set $S$. We want to make connections between $\tilde{x}_{i,S}^{\Omega}$ and $\tilde{x}_{i,S}^{(0)}$. To this end, construct

$$x_{i,S}^{\Omega} = (C_S^{\Omega})^T x_i + \tilde{B}^T B_{0,S}^T z_i, \tag{27}$$

where $\tilde{B} = (B_{0,S}^T B_{0,S})^{-1/2}((B_S^{\Omega})^T B_S^{\Omega})^{1/2}$. Then it is seen that $(x_i, \tilde{x}_{i,S}^{\Omega})$ and $(x_i, x_{i,S}^{\Omega})$ have identical joint distribution. Although $x_{i,S}^{\Omega}$ cannot be calculated in practice for a given $\Omega$ due to its dependency on $\Omega_0$, the random vector $x_{i,S}^{\Omega}$ acts as a proxy of $\tilde{x}_{i,S}^{\Omega}$ in studying the FDR function. In fact, by construction (26) can be further written as

$$\mathrm{FDR}_n(\Omega, S) = E[g_{1,n}(X, \tilde{X}_S^{\Omega})] = E[g_{1,n}(X, X_S^{\Omega})], \tag{28}$$

where $X_S^{\Omega} = [x_{1,S}^{\Omega}, \ldots, x_{n,S}^{\Omega}]^T$.

Observe that the randomness in both $\tilde{X}_S^{(0)}$ and $X_S^{\Omega}$ is fully determined by the same random matrices $X$ and $ZB_{0,S}$, which are independent of each other and whose rows are i.i.d. copies from $N(0, \Sigma_0)$ and $N(0, B_{0,S}^T B_{0,S})$, respectively. For this reason, we can rewrite the FDR function in (28) as

$$\mathrm{FDR}_n(\Omega, S) = E[g_{1,n}(X_{\mathrm{aug}}^S H^{\Omega})],$$

where $X_{\mathrm{aug}}^S = [X, \tilde{X}_{0,S}] = [X, XC_{0,S} + ZB_{0,S}] \in \mathbb{R}^{n \times (p+k)}$ is the augmented matrix collecting the columns of $X$ and $\tilde{X}_{0,S}$, and

$$H^{\Omega} = \begin{pmatrix} I_p & C_S^{\Omega} - C_{0,S}\tilde{B} \\ 0 & \tilde{B} \end{pmatrix},$$

which completes the proof of Proposition 1.
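The following short numerical check (an illustrative sketch with toy matrices, not taken from the paper) verifies the key property behind (27): the proxy variables share the same conditional mean term as the knockoff variables, and the noise covariance $\tilde{B}^T B_{0,S}^T B_{0,S}\tilde{B}$ reproduces $(B_S^{\Omega})^T B_S^{\Omega}$ exactly, so the two constructions indeed have the same joint distribution with $x_i$.

```python
# Sanity check that the proxy noise covariance in (27) equals the knockoff noise covariance.
import numpy as np

rng = np.random.default_rng(1)
p, k = 6, 3
S = np.arange(k)                                    # a reduced model: the first k features

A = rng.standard_normal((p, p))
Omega0 = A @ A.T / p + np.eye(p)                    # toy true precision matrix
Omega = Omega0 + 0.05 * np.eye(p)                   # a perturbed precision matrix
Sigma0 = np.linalg.inv(Omega0)

gamma = 0.9 * 2.0 * np.linalg.eigvalsh(Sigma0).min()
D = gamma * np.eye(p)                               # diag(s), equi-correlated choice

def sqrt_sym(M):
    # symmetric square root of a positive semidefinite matrix
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T

def B_of(Om):
    # B^Omega = (2 diag(s) - diag(s) Omega diag(s))^{1/2}
    return sqrt_sym(2.0 * D - D @ Om @ D)

B_Om, B_0 = B_of(Omega), B_of(Omega0)
BS_Om, BS_0 = B_Om[:, S], B_0[:, S]                 # column submatrices B_S^Omega and B_{0,S}

M0 = BS_0.T @ BS_0
w, U = np.linalg.eigh(M0)
M0_inv_half = U @ np.diag(1.0 / np.sqrt(w)) @ U.T   # (B_{0,S}^T B_{0,S})^{-1/2}
B_tilde = M0_inv_half @ sqrt_sym(BS_Om.T @ BS_Om)   # B_tilde as defined below (27)

cov_knockoff = BS_Om.T @ BS_Om                      # noise covariance of the knockoffs on S
cov_proxy = B_tilde.T @ M0 @ B_tilde                # noise covariance of the proxy (27)
print(np.allclose(cov_knockoff, cov_proxy))         # True
```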

A.3. Lemma 2 and its proof

Lemma 2. Assume that $\|\Omega - \Omega_0\|_2 = O(a_n)$ with $a_n \to 0$ some deterministic sequence, and let all the notation be the same as in Proposition 1. If $\Lambda_{\min}\{2\,\mathrm{diag}(s) - \mathrm{diag}(s)\Omega_0\,\mathrm{diag}(s)\} \ge c_0$ and $\Lambda_{\max}(\Sigma_0) \le c_0^{-1}$ for some constant $c_0 > 0$, then it holds that

$$\|\tilde{B} - I_k\|_2 \le c_1\|\Omega - \Omega_0\|_2 = O(a_n),$$

where $\tilde{B}$ is given in (27) and $c_1 > 0$ is some uniform constant independent of the set $S$.

Proof. We use $C$ to denote some generic positive constant whose value may change from line to line. First note that

$$(B_S^{\Omega})^T B_S^{\Omega} - B_{0,S}^T B_{0,S} = -\left(\mathrm{diag}(s)(\Omega - \Omega_0)\,\mathrm{diag}(s)\right)_{S,S}. \tag{29}$$

Further, since $\Sigma_0 - 2^{-1}\mathrm{diag}(s)$ is positive definite, it follows that $\|s\|_\infty \le 2\Lambda_{\max}(\Sigma_0) \le 2c_0^{-1}$. Thus it holds that

$$\|(B_S^{\Omega})^T B_S^{\Omega} - B_{0,S}^T B_{0,S}\|_2 \le C\|(\Omega - \Omega_0)_{S,S}\|_2 \le C\|\Omega - \Omega_0\|_2 = O(a_n).$$

For $n$ large enough, by the triangle inequality we have

$$\Lambda_{\min}((B_S^{\Omega})^T B_S^{\Omega}) \ge \Lambda_{\min}(B_{0,S}^T B_{0,S}) + \Lambda_{\min}((B_S^{\Omega})^T B_S^{\Omega} - B_{0,S}^T B_{0,S}) \ge \Lambda_{\min}(B_0^T B_0) - O(a_n) = \Lambda_{\min}(2\,\mathrm{diag}(s) - \mathrm{diag}(s)\Omega_0\,\mathrm{diag}(s)) - O(a_n) \ge c_0/2.$$

In addition, $\Lambda_{\min}((B_{0,S})^T B_{0,S}) \ge \Lambda_{\min}((B_0)^T B_0) = \Lambda_{\min}(2\,\mathrm{diag}(s) - \mathrm{diag}(s)\Omega_0\,\mathrm{diag}(s)) \ge c_0/2$. The above two inequalities together with Lemma 2.2 in [54] entail that

$$\|((B_S^{\Omega})^T B_S^{\Omega})^{1/2} - ((B_{0,S})^T B_{0,S})^{1/2}\|_2 \le (\sqrt{c_0/2} + \sqrt{c_0/2})^{-1}\|(B_S^{\Omega})^T B_S^{\Omega} - (B_{0,S})^T B_{0,S}\|_2 \le C\|\Omega - \Omega_0\|_2 = O(a_n), \tag{30}$$

where the last step is because of (29). Thus it follows that

$$\|\tilde{B} - I_k\|_2 \le \|((B_S^{\Omega})^T B_S^{\Omega})^{1/2} - ((B_{0,S})^T B_{0,S})^{1/2}\|_2\,\|((B_{0,S})^T B_{0,S})^{-1/2}\|_2 \le C\|\Omega - \Omega_0\|_2\,\Lambda_{\min}^{-1/2}((B_0)^T B_0) \le C\|\Omega - \Omega_0\|_2, \tag{31}$$

where the last step comes from the assumption $\Lambda_{\min}(B_0^T B_0) = \Lambda_{\min}(2\,\mathrm{diag}(s) - \mathrm{diag}(s)\Omega_0\,\mathrm{diag}(s)) \ge c_0$. This concludes the proof of Lemma 2.
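For intuition, the matrix square-root perturbation bound of Schmitt [54] used in (30) states that $\|A^{1/2} - B^{1/2}\|_2 \le \|A - B\|_2 / (\Lambda_{\min}^{1/2}(A) + \Lambda_{\min}^{1/2}(B))$ for symmetric positive definite $A$ and $B$. The following toy numerical check (an added illustration, not from the paper) confirms the inequality on random matrices.

```python
# Illustrative check of the matrix square-root perturbation bound used in (30).
import numpy as np

def sym_sqrt(M):
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(w)) @ U.T

rng = np.random.default_rng(2)
k = 5
R = rng.standard_normal((k, k))
A = R @ R.T + np.eye(k)                  # symmetric positive definite
B = A + 0.1 * rng.standard_normal((k, k))
B = (B + B.T) / 2 + np.eye(k)            # symmetrize and keep positive definite

lhs = np.linalg.norm(sym_sqrt(A) - sym_sqrt(B), 2)
rhs = np.linalg.norm(A - B, 2) / (np.sqrt(np.linalg.eigvalsh(A).min())
                                  + np.sqrt(np.linalg.eigvalsh(B).min()))
print(lhs <= rhs + 1e-12, lhs, rhs)      # the bound holds
```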

A.4. Proof of Theorem 2

We now proceed to prove Theorem 2 with the aid of Lemma 2 in Section A.3. We use the same notation as in the proof of Proposition 1 and use C > 0 to denote a generic constant whose value may change from line to line.

We start with proving (15). By Condition 4, we have

$$|\mathrm{FDR}(H^{\Omega}, S) - \mathrm{FDR}(H^{\Omega_0}, S)| \le L\|H^{\Omega} - H^{\Omega_0}\|_F, \tag{32}$$

where the constant $L$ is uniform over all $\|\Omega - \Omega_0\|_2 \le C_2 a_n$ and $|S| \le K_n$. Denote by $k = |S|$. By the definition of $H^{\Omega}$, it holds that

$$H^{\Omega} - H^{\Omega_0} = H^{\Omega} - I_{p+k} = \begin{pmatrix} 0 & C_S^{\Omega} - C_{0,S}\tilde{B} \\ 0 & \tilde{B} - I_k \end{pmatrix}.$$

By this definition and standard matrix norm inequalities, we deduce

$$\|H^{\Omega} - H^{\Omega_0}\|_F \le \|C_S^{\Omega} - C_{0,S}\tilde{B}\|_F + \|\tilde{B} - I_k\|_F \le \sqrt{k}\,\|C_S^{\Omega} - C_{0,S}\tilde{B}\|_2 + \sqrt{k}\,\|\tilde{B} - I_k\|_2 \le \sqrt{K_n}\left(\|C_S^{\Omega} - C_{0,S}\|_2 + \|C_{0,S}(\tilde{B} - I_k)\|_2 + \|\tilde{B} - I_k\|_2\right) \le \sqrt{K_n}\left((1 + \|C_{0,S}\|_2)\|\tilde{B} - I_k\|_2 + \|C^{\Omega} - C_0\|_2\right).$$

Since $\Sigma_0 - 2^{-1}\mathrm{diag}(s)$ is positive definite, it follows that $s_j \le 2\Lambda_{\max}(\Sigma_0) \le 2/\Lambda_{\min}(\Omega_0) \le C$. Thus $\|C_{0,S}\|_2 \le \|C_0\|_2 = \|I_p - \Omega_0\,\mathrm{diag}(s)\|_2 \le 1 + \|\Omega_0\|_2\|\mathrm{diag}(s)\|_2 \le C$. This along with $\|C^{\Omega} - C_0\|_2 = \|(\Omega - \Omega_0)\,\mathrm{diag}(s)\|_2 \le Ca_n$ and Lemma 2 entails that $\|H^{\Omega} - H^{\Omega_0}\|_F$ can be further bounded as

$$\|H^{\Omega} - H^{\Omega_0}\|_F \le C\sqrt{K_n}\,a_n.$$

Combining the above result with (32) leads to

$$\sup_{\{|S| \le K_n,\; \|\Omega - \Omega_0\|_2 \le C_2 a_n\}} |\mathrm{FDR}(H^{\Omega}, S) - \mathrm{FDR}(H^{\Omega_0}, S)| \le O(\sqrt{K_n}\,a_n), \tag{33}$$

which completes the proof of (15).

We next establish the FDR control for RANK. By Condition 7, the event $\mathcal{E}_0 = \{\|\hat{\Omega} - \Omega_0\|_2 \le C_2 a_n\}$ occurs with probability at least $1 - O(p^{-c_1})$. Since $\hat{\Omega}$ and $\tilde{S}$ are estimates from the independent subsample $(X^{(1)}, y^{(1)})$, it follows from (15) that

$$\left|E[\mathrm{FDP}_n(X_{\tilde{S}}^{(2)}, \hat{X}_{\tilde{S}}) \,|\, \mathcal{E}_0] - E[\mathrm{FDP}_n(X_{\tilde{S}}^{(2)}, \tilde{X}_{0,\tilde{S}}) \,|\, \mathcal{E}_0]\right| \le \sup_{|S| \le K_n,\, \|\Omega - \Omega_0\|_2 \le C_2 a_n} \left|E[\mathrm{FDP}_n(X_S^{(2)}, \tilde{X}_S^{\Omega}) \,|\, \mathcal{E}_0] - E[\mathrm{FDP}_n(X_S^{(2)}, \tilde{X}_{0,S}) \,|\, \mathcal{E}_0]\right| = \sup_{|S| \le K_n,\, \|\Omega - \Omega_0\|_2 \le C_2 a_n} \left|E[\mathrm{FDP}_n(X_S^{(2)}, \tilde{X}_S^{\Omega})] - E[\mathrm{FDP}_n(X_S^{(2)}, \tilde{X}_{0,S})]\right| = \sup_{|S| \le K_n,\, \|\Omega - \Omega_0\|_2 \le C_2 a_n} \left|\mathrm{FDR}(H^{\Omega}, S) - \mathrm{FDR}(H^{\Omega_0}, S)\right| \le O(\sqrt{K_n}\,a_n). \tag{34}$$

Now note that by the property of conditional expectation, we have

$$\mathrm{FDR}_n(\hat{\Omega}, \tilde{S}) - \mathrm{FDR}_n(\Omega_0, \tilde{S}) = \left(E[\mathrm{FDP}_n(X_{\tilde{S}}^{(2)}, \hat{X}_{\tilde{S}}) \,|\, \mathcal{E}_0] - E[\mathrm{FDP}_n(X_{\tilde{S}}^{(2)}, \tilde{X}_{0,\tilde{S}}) \,|\, \mathcal{E}_0]\right)\mathbb{P}(\mathcal{E}_0) + \left(E[\mathrm{FDP}_n(X_{\tilde{S}}^{(2)}, \hat{X}_{\tilde{S}}) \,|\, \mathcal{E}_0^c] - E[\mathrm{FDP}_n(X_{\tilde{S}}^{(2)}, \tilde{X}_{0,\tilde{S}}) \,|\, \mathcal{E}_0^c]\right)\mathbb{P}(\mathcal{E}_0^c) \equiv I_1 + I_2.$$

Let us first consider term $I_1$. By (34), it holds that

$$|I_1| \le \left|E[\mathrm{FDP}_n(X_{\tilde{S}}^{(2)}, \hat{X}_{\tilde{S}}) \,|\, \mathcal{E}_0] - E[\mathrm{FDP}_n(X_{\tilde{S}}^{(2)}, \tilde{X}_{0,\tilde{S}}) \,|\, \mathcal{E}_0]\right| \le O(\sqrt{K_n}\,a_n).$$

We next consider term $I_2$. Since the FDP is always bounded between 0 and 1, we have

$$|I_2| \le 2\,\mathbb{P}(\mathcal{E}_0^c) \le O(p^{-c_1}).$$

Combining the above two results yields

$$|\mathrm{FDR}_n(\hat{\Omega}, \tilde{S}) - \mathrm{FDR}_n(\Omega_0, \tilde{S})| \le O(\sqrt{K_n}\,a_n) + O(p^{-c_1}).$$

This together with the result $\mathrm{FDR}_n(\Omega_0, \tilde{S}) \le q$ mentioned in the proof of Proposition 1 in Section A.2 completes the proof of Theorem 2.

A.5. Proof of Theorem 3

In this proof, we will drop the superscripts in $(X^{(2)}, y^{(2)})$ whenever there is no confusion. By the definition of power, for any given precision matrix $\Omega$ and reduced model $S$ the power can be written as

$$\mathrm{Power}(\Omega, S) = E[f(X_S, \tilde{X}_S^{\Omega}, y)],$$

where $f$ is some function describing how the empirical power depends on the data. Note that $f(X_S, \tilde{X}_S^{\Omega}, y)$ is a stochastic process indexed by $\Omega$, and we care about the mean of this process. Our main idea is to construct another stochastic process indexed by $\Omega$ which has the same mean but possibly different distribution. Then by studying the mean of this new stochastic process, we can prove the desired result.

We next provide more technical details of the proof. The proxy process is defined as

$$X_S^{\Omega} = XC_S^{\Omega} + ZB_{0,S}(B_{0,S}^T B_{0,S})^{-1/2}((B_S^{\Omega})^T B_S^{\Omega})^{1/2}, \tag{35}$$

where $C_S^{\Omega}$ is the submatrix of $C^{\Omega} = I_p - \Omega\,\mathrm{diag}\{s\}$, $B_S^{\Omega}$ is the submatrix of $B^{\Omega} = (2\,\mathrm{diag}(s) - \mathrm{diag}(s)\Omega\,\mathrm{diag}(s))^{1/2}$, and $B_0 = B^{\Omega_0}$. It is easy to see that $X_S^{\Omega}$ and $\tilde{X}_S^{\Omega}$ defined using (12) have the same distribution. Since $Z$ is independent of $(X, y)$, we can further conclude that $(X_S, \tilde{X}_S^{\Omega}, y)$ and $(X_S, X_S^{\Omega}, y)$ have the same joint distribution for each given $\Omega$ and $S$. Thus the power function can be further written as

$$\mathrm{Power}(\Omega, S) = E[f(X_S, \tilde{X}_S^{\Omega}, y)] = E[f(X_S, X_S^{\Omega}, y)].$$

Therefore, we only need to study the power of the knockoffs procedure based on the pseudo data $(X_S, X_S^{\Omega}, y)$.

To simplify the technical presentation, we will slightly abuse the notation and still use $\hat{\beta} = \hat{\beta}(\lambda) = \hat{\beta}(\lambda; \Omega, S)$ to represent the Lasso solution based on the pseudo data $(X_S, X_S^{\Omega}, y)$. We will use $c$ and $C$ to denote some generic positive constants whose values may change from line to line. Define

$$\tilde{G} = \frac{1}{n}\tilde{X}_{\mathrm{KO}}^T\tilde{X}_{\mathrm{KO}} \in \mathbb{R}^{(2p)\times(2p)} \quad \text{and} \quad \tilde{\rho} = \frac{1}{n}\tilde{X}_{\mathrm{KO}}^T y \in \mathbb{R}^{2p} \tag{36}$$

with $\tilde{X}_{\mathrm{KO}} = [X, X^{\Omega}] \in \mathbb{R}^{n\times(2p)}$ the augmented design matrix. For any given set $S \subset \{1, \ldots, p\}$ with $k = |S|$, $(2p)\times(2p)$ matrix $A$, and $(2p)$-vector $a$, we will abuse the notation and denote by $A_{S,S} \in \mathbb{R}^{(2k)\times(2k)}$ the principal submatrix formed by the columns and rows in the set $\{j : j \in S \text{ or } j - p \in S\}$ and by $a_S \in \mathbb{R}^{2k}$ the subvector formed by the components in the set $\{j : j \in S \text{ or } j - p \in S\}$. For any $p\times p$ matrix $B$ (or $p$-vector $b$), we define $B_S$ (or $b_S$) in the same way, meaning that the columns (or components) in set $S$ are taken to form the submatrix (or subvector).

With the above notation, note that the Lasso solution $\hat{\beta} = (\hat{\beta}_1, \ldots, \hat{\beta}_{2p})^T = \hat{\beta}(\lambda; \Omega, S)$ restricted to the variables in $S$ can be obtained by setting $\hat{\beta}_j = 0$ for $j \in \{1 \le j \le 2p : j \notin S \text{ and } j - p \notin S\}$ and minimizing the following objective function

$$\hat{\beta}_S = \arg\min_{b \in \mathbb{R}^{2k}}\left\{\frac{1}{2}b^T\tilde{G}_{S,S}b - \tilde{\rho}_S^T b + \lambda\|b\|_1\right\}. \tag{37}$$

By Proposition 2 in Section A.6, it holds that with probability at least $1 - c_3 p^{-c_3}$,

$$\sup_{\Omega \in \mathcal{A},\, |S| \le K_n}\|\hat{\beta}(\lambda; \Omega, S) - \beta_T\|_1 \le C_{l_1}s\lambda, \tag{38}$$

where $\lambda = C_\lambda\{(\log p)/n\}^{1/2}$ with $C_\lambda > 0$ some constant and $C_{l_1}$ is some positive constant. By Condition 2 and the assumption $\lambda = C_\lambda\{(\log p)/n\}^{1/2}$, we have

$$\min_{j \in S_0}|\beta_{0,j}| \ge \kappa_n\lambda/C_\lambda. \tag{39}$$

Denote by $W_j^{\Omega,S}$ the LCD based on the above $\hat{\beta}(\lambda; \Omega, S)$. Recall that by assumption, there are no ties in the magnitude of the nonzero $W_j^{\Omega,S}$'s and no ties in the nonzero components of the Lasso solution with asymptotic probability one. Let $|W_{(1)}^{\Omega,S}| \ge \cdots \ge |W_{(p)}^{\Omega,S}|$ be the knockoff statistics ordered according to magnitude. Denote by $j^*$ the index such that $|W_{(j^*)}^{\Omega,S}| = T$. Then by the definition of $T$, it holds that $-T < W_{(j^*+1)}^{\Omega,S} \le 0$. We next analyze the two cases $W_{(j^*+1)}^{\Omega,S} = 0$ and $-T < W_{(j^*+1)}^{\Omega,S} < 0$ separately.
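As an aside (a small illustrative sketch with synthetic data, not part of the original proof), the restricted problem (37) is just the ordinary Lasso on the augmented design written through its Gram matrix. The following Python snippet solves (37) by coordinate descent and checks that it matches scikit-learn's Lasso, whose objective is $\|y - Xb\|_2^2/(2n) + \lambda\|b\|_1$.

```python
# Check that the Gram-matrix formulation (37) coincides with the Lasso on the augmented design.
import numpy as np
from sklearn.linear_model import Lasso

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_gram(G, rho, lam, n_iter=500):
    b = np.zeros(len(rho))
    for _ in range(n_iter):                        # cyclic coordinate descent on (37)
        for j in range(len(b)):
            r_j = rho[j] - G[j] @ b + G[j, j] * b[j]
            b[j] = soft(r_j, lam) / G[j, j]
    return b

rng = np.random.default_rng(3)
n, m = 200, 10                                     # m = 2k columns of [X_S, knockoffs]
X_aug = rng.standard_normal((n, m))
y = X_aug[:, 0] - 2.0 * X_aug[:, 1] + rng.standard_normal(n)

G_tilde = X_aug.T @ X_aug / n
rho_tilde = X_aug.T @ y / n
lam = 0.1

b_gram = lasso_gram(G_tilde, rho_tilde, lam)
b_sklearn = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(X_aug, y).coef_
print(np.allclose(b_gram, b_sklearn, atol=1e-3))   # True, up to solver tolerance
```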

Case 1. Consider first the case $W_{(j^*+1)}^{\Omega,S} = 0$. Then $\hat{S}^{\Omega} = \mathrm{supp}(W^{\Omega,S}) \setminus \{j : W_j^{\Omega,S} < 0\}$ and $\{j : W_j^{\Omega,S} \le -T\} = \{j : W_j^{\Omega,S} < 0\}$. Let $\tilde{q}_n = \varphi C_{l_1}C_\lambda\kappa_n^{-1}$ with $\varphi$ the positive solution to the equation $\varphi^2 - \varphi - 1 = 0$, which is known as the golden ratio. If $|\{j : W_j^{\Omega,S} < 0\}| > \tilde{q}_n s$, then we can prove that $T \le \kappa_n\lambda/(C_\lambda\varphi)$ using the same arguments as in equation (44), since $\{j : W_j^{\Omega,S} \le -T\} = \{j : W_j^{\Omega,S} < 0\}$. Thus this situation reduces to Case 2 below and the arguments therein follow.

On the contrary, if $|\{j : W_j^{\Omega,S} < 0\}| \le \tilde{q}_n s$ then we have

$$|\hat{S}^{\Omega} \cap S_0| \ge |\mathrm{supp}(W^{\Omega,S}) \cap S_0| - |\{j : W_j^{\Omega,S} < 0\} \cap S_0| \ge |\mathrm{supp}(W^{\Omega,S}) \cap S_0| - \tilde{q}_n s \tag{40}$$

since $\hat{S}^{\Omega} = \mathrm{supp}(W^{\Omega,S}) \setminus \{j : W_j^{\Omega,S} < 0\}$. Let us now focus on $|\mathrm{supp}(W^{\Omega,S}) \cap S_0|$. We observe that

$$\mathrm{supp}(W^{\Omega,S}) \supset \{1, \ldots, p\} \setminus S_1^{\Omega}, \tag{41}$$

where $S_1^{\Omega} = \{1 \le j \le p : \hat{\beta}_j(\lambda; \Omega, S) = 0\}$. Meanwhile, note that in view of (38) we have with probability at least $1 - c_3 p^{-c_3}$,

$$C_{l_1}s\lambda \ge \sup_{\Omega \in \mathcal{A},\, |S| \le K_n}\|\hat{\beta}(\lambda; \Omega, S) - \beta_T\|_1 \ge \sup_{\Omega \in \mathcal{A},\, |S| \le K_n}\sum_{j \in S_1^{\Omega} \cap S_0}|\hat{\beta}_j(\lambda; \Omega, S) - \beta_{0,j}| = \sum_{j \in S_1^{\Omega} \cap S_0}|\beta_{0,j}| \ge |S_1^{\Omega} \cap S_0|\,\min_{j \in S_0}|\beta_{0,j}|.$$

By (39), we can further deduce from the above inequality that

$$|S_1^{\Omega} \cap S_0| \le C_{l_1}C_\lambda\kappa_n^{-1}s,$$

which together with $|S_0| = s$ entails that

$$|(\{1, \ldots, p\} \setminus S_1^{\Omega}) \cap S_0| \ge (1 - C_{l_1}C_\lambda\kappa_n^{-1})s.$$

Combining this result with (41) yields

$$|\mathrm{supp}(W^{\Omega,S}) \cap S_0| \ge |(\{1, \ldots, p\} \setminus S_1^{\Omega}) \cap S_0| \ge (1 - C_{l_1}C_\lambda\kappa_n^{-1})s. \tag{42}$$

Thus in view of inequalities (40) and (42), with probability at least $1 - c_3 p^{-c_3}$ it holds uniformly over all $\Omega \in \mathcal{A}$ and $|S| \le K_n$ that

$$\frac{|\hat{S}^{\Omega} \cap S_0|}{s} \ge 1 - C_{l_1}C_\lambda(\varphi + 1)\kappa_n^{-1}.$$

Case 2. We next consider the case $-T < W_{(j^*+1)}^{\Omega,S} < 0$. By the definitions of $T$ and $j^*$, we have

$$\frac{|\{j : W_j^{\Omega,S} \le -T\}| + 2}{|\{j : W_j^{\Omega,S} \ge T\}|} > q \tag{43}$$

since otherwise we could reduce $T$ to $|W_{(j^*+1)}^{\Omega,S}|$ to get a new smaller threshold with the criterion still satisfied. We next bound $T$ using the results in Lemma 6 in Section B.4 of the Supplementary Material. Observe that (43) and Lemma 6 lead to $|\{j : W_j^{\Omega,S} \le -T\}| > q|\{j : W_j^{\Omega,S} \ge T\}| - 2 \ge qcs - 2$ with asymptotic probability one. Moreover, when $W_j^{\Omega,S} \le -T$ we have $|\hat{\beta}_j(\lambda; \Omega, S)| - |\hat{\beta}_{j+p}(\lambda; \Omega, S)| \le -T$ and thus $|\hat{\beta}_{j+p}(\lambda; \Omega, S)| \ge T$. Using (38), we obtain

$$C_{l_1}s\lambda \ge \|\hat{\beta}(\lambda; \Omega, S) - \beta_T\|_1 \ge \sum_{j : W_j^{\Omega,S} \le -T}|\hat{\beta}_{j+p}(\lambda; \Omega, S)| \ge T\,|\{j : W_j^{\Omega,S} \le -T\}|. \tag{44}$$

Combining these results leads to $C_{l_1}s\lambda \ge T(qcs - 2)$ and thus it holds that

$$T \le \frac{C_{l_1}s\lambda}{qcs - 2} \le \frac{\kappa_n\lambda}{C_\lambda\varphi} \tag{45}$$

for large enough $n$, since $\kappa_n \to \infty$ as $n \to \infty$ and $C_\lambda$ is some positive constant.

We now proceed to prove the theorem by showing that the Type II error is small. In light of (38), we derive

$$C_{l_1}s\lambda \ge \|\hat{\beta}(\lambda; \Omega, S) - \beta_T\|_1 = \sum_{j=1}^p\left[|\hat{\beta}_j(\lambda; \Omega, S) - \beta_{0,j}| + |\hat{\beta}_{j+p}(\lambda; \Omega, S)|\right] \ge \sum_{j \in S_0 \cap (\hat{S}^{\Omega})^c}\left[|\hat{\beta}_j(\lambda; \Omega, S) - \beta_{0,j}| + |\hat{\beta}_{j+p}(\lambda; \Omega, S)|\right] \ge \sum_{j \in S_0 \cap (\hat{S}^{\Omega})^c}\left[|\hat{\beta}_j(\lambda; \Omega, S) - \beta_{0,j}| + |\hat{\beta}_j(\lambda; \Omega, S)| - T\right]$$

since $|\hat{\beta}_{j+p}(\lambda; \Omega, S)| \ge |\hat{\beta}_j(\lambda; \Omega, S)| - T$ when $j \in (\hat{S}^{\Omega})^c$. Using the triangle inequality and noting that $|\beta_{0,j}| \ge C_\lambda^{-1}\lambda\kappa_n$ for $j \in S_0$, we can conclude that

$$C_{l_1}s\lambda \ge \sum_{j \in S_0 \cap (\hat{S}^{\Omega})^c}(|\beta_{0,j}| - T) \ge (C_\lambda^{-1}\lambda\kappa_n - T)\,|(\hat{S}^{\Omega})^c \cap S_0|.$$

Thus it follows that

$$\frac{|\hat{S}^{\Omega} \cap S_0|}{s} = 1 - \frac{|(\hat{S}^{\Omega})^c \cap S_0|}{s} \ge 1 - \frac{C_{l_1}\lambda}{C_\lambda^{-1}\lambda\kappa_n - T} \ge 1 - C_{l_1}C_\lambda\frac{\varphi}{\varphi - 1}\kappa_n^{-1}$$

uniformly over all $\Omega \in \mathcal{A}$ and $|S| \le K_n$, since $T \le \kappa_n\lambda/(C_\lambda\varphi)$.

Combining the above two scenarios, we have shown that with asymptotic probability one, it holds uniformly over all $\Omega \in \mathcal{A}$ and $|S| \le K_n$, with probability at least $1 - c_3 p^{-c_3}$, that

$$\frac{|\hat{S}^{\Omega} \cap S_0|}{s} \ge 1 - C_{l_1}C_\lambda(\varphi + 1)\kappa_n^{-1} \tag{46}$$

since $\varphi + 1 = \varphi/(\varphi - 1)$ by the definition of $\varphi$. This along with the assumption $\mathbb{P}\{\hat{\Omega} \in \mathcal{A}\} \ge 1 - c_2 p^{-c_2}$ in Condition 7 gives

$$\mathrm{Power}(\hat{\Omega}, \tilde{S}) = E\left[\frac{|\hat{S}^{\hat{\Omega}} \cap S_0|}{s}\right] \ge E\left[\frac{|\hat{S}^{\hat{\Omega}} \cap S_0|}{s}\,\Big|\,\hat{\Omega} \in \mathcal{A}\right]\mathbb{P}\{\hat{\Omega} \in \mathcal{A}\} \ge \left[1 - C_{l_1}C_\lambda(\varphi + 1)\kappa_n^{-1}\right](1 - c_3 p^{-c_3})(1 - c_2 p^{-c_2}) = 1 - C_{l_1}C_\lambda(\varphi + 1)\kappa_n^{-1} - c_2 p^{-c_2} - c_3 p^{-c_3} + o(\kappa_n^{-1}) = 1 - o(1),$$

which concludes the proof of Theorem 3.
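For completeness, here is a worked check (added for the reader; it uses only the definition of $\varphi$ above) of the golden-ratio identity invoked in the last display, which explains why Case 1 and Case 2 produce the same bound (46).

```latex
% phi is the positive root of phi^2 - phi - 1 = 0, so
\[
  \varphi^2 = \varphi + 1
  \qquad\text{and}\qquad
  \varphi - 1 = \frac{1}{\varphi},
\]
% hence the factors appearing in Case 1 and Case 2 coincide:
\[
  \frac{\varphi}{\varphi - 1} \;=\; \varphi \cdot \varphi \;=\; \varphi^2 \;=\; \varphi + 1 .
\]
```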

A.6. Proposition 2 and its proof

Proposition 2. Assume that Conditions 1 and 6 hold, the smallest eigenvalue of $2\,\mathrm{diag}(s) - \mathrm{diag}(s)\Omega_0\,\mathrm{diag}(s)$ is positive and bounded away from 0, and $\lambda = C_\lambda\{(\log p)/n\}^{1/2}$ with $C_\lambda > 0$ some constant. Let $\beta_T = (\beta_0^T, 0, \ldots, 0)^T \in \mathbb{R}^{2p}$ be the expanded vector of the true regression coefficient vector. If $[(L_p + L_p')^{1/2} + K_n^{1/2}]a_n = o(1)$ and $s\{a_n + L_p'[(\log p)/n]^{1/2} + [K_n(\log p)/n]^{1/2}\} = o(1)$, then with probability at least $1 - c_3 p^{-c_3}$,

$$\sup_{\Omega \in \mathcal{A},\, |S| \le K_n}\|\hat{\beta}(\lambda; \Omega, S) - \beta_T\|_1 \le C_{l_1}s\lambda \quad\text{and}\quad \sup_{\Omega \in \mathcal{A},\, |S| \le K_n}\|\hat{\beta}(\lambda; \Omega, S) - \beta_T\|_2 \le C_{l_2}s^{1/2}\lambda,$$

where $\hat{\beta}(\lambda; \Omega, S)$ is defined in the proof of Theorem 3 in Section A.5 and $c_3$, $c_4$, $C_{l_1}$, and $C_{l_2}$ are all positive constants.

Proof. We adopt the same notation as used in the proof of Theorem 3 in Section A.5. Let us introduce some key events which will be used in the technical analysis. Define

$$\mathcal{E}_3 = \Big\{\sup_{\|\Omega - \Omega_0\|_2 \le C_2 a_n,\, |S| \le K_n}\|\tilde{\rho}_S - \tilde{G}_{S,S}\beta_{T,S}\|_\infty \le \lambda_0\Big\}, \tag{47}$$
$$\mathcal{E}_4 = \Big\{\sup_{\|\Omega - \Omega_0\|_2 \le C_2 a_n,\, |S| \le K_n}\|\tilde{G}_{S,S} - G_{S,S}\|_{\max} \le C_5 a_{2,n}\Big\}, \tag{48}$$

where $\lambda_0 = C_4\{(\log p)/n\}^{1/2}$ and $a_{2,n} = a_n + (L_p' + K_n^{1/2})\{(\log p)/n\}^{1/2}$ with $C_4, C_5 > 0$ some constants. Then by Lemmas 4 and 7 in Sections B.2 and B.5 of the Supplementary Material,

$$\mathbb{P}(\mathcal{E}_3 \cap \mathcal{E}_4) \ge 1 - c_3 p^{-c_3} \tag{49}$$

for some constant $c_3 > 0$. Hereafter we will condition on the event $\mathcal{E}_3 \cap \mathcal{E}_4$.

Since $\hat{\beta}_S$ is the minimizer of the objective function in (37), we have

$$\frac{1}{2}\hat{\beta}_S^T\tilde{G}_{S,S}\hat{\beta}_S - \tilde{\rho}_S^T\hat{\beta}_S + \lambda\|\hat{\beta}_S\|_1 \le \frac{1}{2}\beta_{T,S}^T\tilde{G}_{S,S}\beta_{T,S} - \tilde{\rho}_S^T\beta_{T,S} + \lambda\|\beta_{T,S}\|_1.$$

Some routine calculations lead to

$$\frac{1}{2}(\hat{\beta}_S - \beta_{T,S})^T\tilde{G}_{S,S}(\hat{\beta}_S - \beta_{T,S}) + \lambda\|\hat{\beta}_S\|_1 \le -\beta_{T,S}^T\tilde{G}_{S,S}\hat{\beta}_S + \beta_{T,S}^T\tilde{G}_{S,S}\beta_{T,S} + \tilde{\rho}_S^T(\hat{\beta}_S - \beta_{T,S}) + \lambda\|\beta_{T,S}\|_1 = (\tilde{\rho}_S - \tilde{G}_{S,S}\beta_{T,S})^T(\hat{\beta}_S - \beta_{T,S}) + \lambda\|\beta_{T,S}\|_1 \le \|\hat{\beta}_S - \beta_{T,S}\|_1\|\tilde{\rho}_S - \tilde{G}_{S,S}\beta_{T,S}\|_\infty + \lambda\|\beta_{T,S}\|_1. \tag{50}$$

Let $\hat{\delta} = \hat{\beta} - \beta_T$. Then we can simplify (50) as

$$\frac{1}{2}\hat{\delta}_S^T\tilde{G}_{S,S}\hat{\delta}_S + \lambda\|\hat{\beta}_S\|_1 \le \|\hat{\delta}_S\|_1\|\tilde{\rho}_S - \tilde{G}_{S,S}\beta_{T,S}\|_\infty + \lambda\|\beta_{T,S}\|_1 \le \lambda_0\|\hat{\delta}_S\|_1 + \lambda\|\beta_{T,S}\|_1. \tag{51}$$

Observe that $\|\hat{\beta}_S\|_1 = \|\hat{\beta}_{S_0}\|_1 + \|\hat{\beta}_{S\setminus S_0}\|_1$ and $\|\beta_{T,S}\|_1 = \|\beta_{T,S_0}\|_1 + \|\beta_{T,S\setminus S_0}\|_1 = \|\beta_{T,S_0}\|_1$, with $S_0$ the support of the true regression coefficient vector. Then it follows from $\|\hat{\beta}_{S_0} - \beta_{T,S_0}\|_1 \ge \|\beta_{T,S_0}\|_1 - \|\hat{\beta}_{S_0}\|_1$ that

$$\frac{1}{2}\hat{\delta}_S^T\tilde{G}_{S,S}\hat{\delta}_S + \lambda\|\hat{\beta}_{S\setminus S_0}\|_1 \le \lambda_0\|\hat{\delta}_S\|_1 + \lambda\|\hat{\beta}_{S_0} - \beta_{T,S_0}\|_1.$$

Note that $\|\hat{\delta}_{S_0}\|_1 = \|\hat{\beta}_{S_0} - \beta_{T,S_0}\|_1$ and $\|\hat{\delta}_{S\setminus S_0}\|_1 = \|\hat{\beta}_{S\setminus S_0} - \beta_{T,S\setminus S_0}\|_1 = \|\hat{\beta}_{S\setminus S_0}\|_1$. Then we can further deduce

$$\frac{1}{2}\hat{\delta}_S^T\tilde{G}_{S,S}\hat{\delta}_S + \lambda\|\hat{\delta}_{S\setminus S_0}\|_1 \le \lambda_0\|\hat{\delta}_S\|_1 + \lambda\|\hat{\delta}_{S_0}\|_1 = \lambda_0\|\hat{\delta}_{S_0}\|_1 + \lambda_0\|\hat{\delta}_{S\setminus S_0}\|_1 + \lambda\|\hat{\delta}_{S_0}\|_1;$$

that is,

$$\frac{1}{2}\hat{\delta}_S^T\tilde{G}_{S,S}\hat{\delta}_S + (\lambda - \lambda_0)\|\hat{\delta}_{S\setminus S_0}\|_1 \le (\lambda + \lambda_0)\|\hat{\delta}_{S_0}\|_1. \tag{52}$$

When $\lambda \ge 2\lambda_0$, it holds that

$$\frac{1}{2}\hat{\delta}_S^T\tilde{G}_{S,S}\hat{\delta}_S + \frac{\lambda}{2}\|\hat{\delta}_{S\setminus S_0}\|_1 \le \frac{3\lambda}{2}\|\hat{\delta}_{S_0}\|_1. \tag{53}$$

Since $\hat{\delta}_S^T\tilde{G}_{S,S}\hat{\delta}_S \ge 0$, we obtain the basic inequality

$$\|\hat{\delta}_{S\setminus S_0}\|_1 \le 3\|\hat{\delta}_{S_0}\|_1 \tag{54}$$

on the event $\mathcal{E}_3$. It follows from (53) that

$$\hat{\delta}_S^T G_{S,S}\hat{\delta}_S \le 3\lambda\|\hat{\delta}_{S_0}\|_1 + \hat{\delta}_S^T(G_{S,S} - \tilde{G}_{S,S})\hat{\delta}_S \tag{55}$$

with

$$G = \begin{pmatrix} \Sigma_0 & \Sigma_0 - \mathrm{diag}(s) \\ \Sigma_0 - \mathrm{diag}(s) & \Sigma_0 \end{pmatrix}.$$

With some matrix calculations, we can show that $\Lambda_{\min}(G)$ is bounded from below by a positive constant determined by $\Lambda_{\min}(\Sigma_0)$ and $\Lambda_{\min}\{2\,\mathrm{diag}(s) - \mathrm{diag}(s)\Omega_0\,\mathrm{diag}(s)\}$, since both $\Sigma_0$ and $2\,\mathrm{diag}(s) - \mathrm{diag}(s)\Omega_0\,\mathrm{diag}(s)$ have their eigenvalues bounded away from 0. Thus the left-hand side of (55) can be bounded from below by $c_0 C\|\hat{\delta}_S\|_2^2$.

It remains to bound the right-hand side of (55). For the first term, it follows from the Cauchy–Schwarz inequality that

$$3\lambda\|\hat{\delta}_{S_0}\|_1 \le 3\lambda\sqrt{s}\,\|\hat{\delta}_{S_0}\|_2 \le 3\lambda\sqrt{s}\,\|\hat{\delta}_S\|_2. \tag{56}$$

For the last term $\hat{\delta}_S^T(G_{S,S} - \tilde{G}_{S,S})\hat{\delta}_S$ on the right-hand side of (55), by conditioning on the event $\mathcal{E}_4$ and using the Cauchy–Schwarz inequality, the triangle inequality, and the basic inequality (54), we can obtain

$$|\hat{\delta}_S^T(G_{S,S} - \tilde{G}_{S,S})\hat{\delta}_S| \le \|G_{S,S} - \tilde{G}_{S,S}\|_{\max}\|\hat{\delta}_S\|_1^2 \le \|G_{S,S} - \tilde{G}_{S,S}\|_{\max}(\|\hat{\delta}_{S_0}\|_1 + \|\hat{\delta}_{S\setminus S_0}\|_1)^2 \le 16\|G_{S,S} - \tilde{G}_{S,S}\|_{\max}\|\hat{\delta}_{S_0}\|_1^2 \le 16s\|G_{S,S} - \tilde{G}_{S,S}\|_{\max}\|\hat{\delta}_{S_0}\|_2^2 \le 16s\|G_{S,S} - \tilde{G}_{S,S}\|_{\max}\|\hat{\delta}_S\|_2^2 \le 16C_5 s\,a_{2,n}\|\hat{\delta}_S\|_2^2.$$

Combining the above results, we can reduce inequality (55) to

$$C\|\hat{\delta}_S\|_2^2 \le 3\lambda\sqrt{s}\,\|\hat{\delta}_S\|_2 + 16C_5 s\,a_{2,n}\|\hat{\delta}_S\|_2^2.$$

Since $sa_{2,n} \to 0$, there exists a positive constant $C_{l_2}$ such that for $n$ large enough,

$$\|\hat{\delta}_S\|_2 = \|\hat{\beta}_S - \beta_{T,S}\|_2 \le C_{l_2}s^{1/2}\lambda.$$

Further, by (56) we have

$$\|\hat{\beta}_S - \beta_{T,S}\|_1 \le C_{l_1}s\lambda$$

for some constant $C_{l_1}$ and for $n$ large enough. Note that by definition, $\hat{\beta}_{S^c} = 0$ and $\beta_{T,S^c} = 0$. Therefore, summarizing the above results completes the proof of Proposition 2.

References

  • [1].Abramovich F, Benjamini Y, Donoho DL, and Johnstone IM (2006). Adapting to unknown sparsity by controlling the false discovery rate. Ann. Statist 34, 584–653. [Google Scholar]
  • [2].Barber RF and Candès EJ (2015). Controlling the false discovery rate via knockoffs. The Annals of Statistics 43, 2055–2085. [Google Scholar]
  • [3].Barber RF and Candès EJ (2016). A knockoff filter for high-dimensional selective inference arXiv:1602.03574.
  • [4].Benjamini Y and Hochberg Y (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B 57, 289–300. [Google Scholar]
  • [5].Benjamini Y and Yekutieli D (2001). The control of the false discovery rate in multiple testing under dependency. Ann. Statist 29, 1165–1188. [Google Scholar]
  • [6].Bickel PJ and Levina E (2008). Regularized estimation of large covariance matrices. The Annals of Statistics 36, 199–227. [Google Scholar]
  • [7].Bickel PJ, Ritov Y, and Tsybakov AB (2009). Simultaneous analysis of lasso and dantzig selector. The Annals of Statistics 37, 1705–1732. [Google Scholar]
  • [8].Bühlmann P and van de Geer S (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications Springer. [Google Scholar]
  • [9].Candès EJ, Fan Y, Janson L, and Lv J (2018). Panning for gold: ‘model-X’ knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society Series B 80, 551–577. [Google Scholar]
  • [10].Chen M, Ren Z, Zhao H, and Zhou HH (2016). Asymptotically normal and efficient estimation of covariate-adjusted Gaussian graphical model. Journal of the American Statistical Association 111, 394–406. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [11].Chouldechova A and Hastie T (2015). Generalized additive model selection arXiv:1506.03850.
  • [12].Clarke S and Hall P (2009). Robustness of multiple testing procedures against dependence. Ann. Statist 37, 332–358. [Google Scholar]
  • [13].Efron B (2007a). Correlation and large-scale simultaneous significance testing. J. Amer. Statist. Assoc 102, 93–103. [Google Scholar]
  • [14].Efron B (2007b). Size, power and false discovery rates. Ann. Statist 35, 1351–1377. [Google Scholar]
  • [15].Efron B and Tibshirani R (2002). Empirical bayes methods and false discovery rates for microarrays. Genetic Epidemiology 23, 70–86. [DOI] [PubMed] [Google Scholar]
  • [16].Engle R, Granger C, Rice J, and Weiss A (1986). Semiparametric estimates of the relation between weather and electricity sales. Journal of the American Statistical Association 81, 310–320. [Google Scholar]
  • [17].Fan J and Fan Y (2008). High-dimensional classification using features annealed independence rules. The Annals of Statistics 36, 2605–2637. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [18].Fan J and Gijbels I (1996). Local Polynomial Modelling and Its Applications London: Chapman & Hall/CRC. [Google Scholar]
  • [19].Fan J, Guo S, and Hao N (2012). Variance estimation using refitted cross-validation in ultrahigh dimensional regression. J. Roy. Statist. Soc. Ser. B 74, 37–65. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [20].Fan J, Hall P, and Yao Q (2007). To how many simultaneous hypothesis tests can normal, student’s t or bootstrap calibration be applied? Journal of the American Statistical Association 102, 1282–1288. [Google Scholar]
  • [21].Fan J, Han X, and Gu W (2012). Control of the false discovery rate under arbitrary covariance dependence (with discussion). Journal of American Statistical Association 107, 1019–1045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [22].Fan J and Li R (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of American Statistical Association 96, 1348–1360. [Google Scholar]
  • [23].Fan J and Lv J (2008). Sure independence screening for ultrahigh dimensional feature space (with discussion). Journal of the Royal Statistical Society Series B 70, 849–911. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [24].Fan J and Lv J (2010). A selective overview of variable selection in high dimensional feature space (invited review article). Statistica Sinica 20, 101–148. [PMC free article] [PubMed] [Google Scholar]
  • [25].Fan J, Samworth RJ, and Wu Y (2009). Ultrahigh dimensional variable selection: beyond the linear model. J. Mach. Learn. Res 10, 1829–1853. [PMC free article] [PubMed] [Google Scholar]
  • [26].Fan Y, Demirkaya E, and Lv J (2017). Nonuniformity of p-values can occur early in diverging dimensions arXiv preprint arXiv:1705.03604. [PMC free article] [PubMed]
  • [27].Fan Y and Fan J (2011). Testing and detecting jumps based on a discretely observed process. Journal of Econometrics 164, 331–344. [Google Scholar]
  • [28].Fan Y, Kong Y, Li D, and Lv J (2016). Interaction pursuit with feature screening and selection arXiv preprint arXiv:1605.08933.
  • [29].Fan Y, Kong Y, Li D, and Zheng Z (2015). Innovated interaction screening for high-dimensional nonlinear classification. The Annals of Statistics 43, 1243–1272. [Google Scholar]
  • [30].Fan Y and Lv J (2013). Asymptotic equivalence of regularization methods in thresholded parameter space. Journal of the American Statistical Association 108, 1044–1061. [Google Scholar]
  • [31].Fan Y and Lv J (2016). Innovated scalable efficient estimation in ultra-large Gaussian graphical models. The Annals of Statistics 44, 2098–2126. [Google Scholar]
  • [32].Hall P and Wang Q (2010). Strong approximations of level exceedences related to multiple hypothesis testing. Bernoulli 16, 418–434. [Google Scholar]
  • [33].Härdle W, Liang H, and Gao JT (2000). Partially Linear Models Heidelberg: Springer Physica Verlag. [Google Scholar]
  • [34].Härdle W and Stoker TM (1989). Investigating smooth multiple regression by the method of average derivatives. Journal of the American statistical Association 84, 986–995. [Google Scholar]
  • [35].Hastie T and Tibshirani R (1990). Generalized Additive Models London: Chapman & Hall/CRC. [DOI] [PubMed] [Google Scholar]
  • [36].Hastie T, Tibshirani R, and Friedman J (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd edition). Springer. [Google Scholar]
  • [37].Horowitz JL (2009). Semiparametric and nonparametric methods in econometrics Springer. [Google Scholar]
  • [38].Horvath DP, Schaffer R, and Wisman E (2003). Identification of genes induced in emerging tillers of wild oat (avena fatua) using arabidopsis microarrays. Weed Science 51, 503–508. [Google Scholar]
  • [39].Huber PJ (1973). Robust regression: Asymptotics, conjectures and monte carlo. The Annals of Statistics 1, 799–821. [Google Scholar]
  • [40].Ichimura H (1993). Semiparametric least squares (sls) and weighted sls estimation of single-index models. Journal of Econometrics 58, 71–120. [Google Scholar]
  • [41].Lauritzen SL (1996). Graphical Models Oxford University Press. [Google Scholar]
  • [42].Li Q and Racine JS (2007). Nonparametric econometrics: theory and practice Princeton University Press. [Google Scholar]
  • [43].Lin Q, Zhao Z, and Liu JS (2016). Sparse sliced inverse regression for high dimensional data arXiv:1611.06655.
  • [44].Liu W and Shao Q-M (2014). Phase transition and regularized bootstrap in large-scale t-tests with false discovery rate control. Ann. Statist 42, 2003–2025. [Google Scholar]
  • [45].Lv J (2013). Impacts of high dimensionality in finite samples. The Annals of Statistics 41, 2236–2262. [Google Scholar]
  • [46].McCullagh P and Nelder JA (1989). Generalized Linear Models Chapman and Hall, London. [Google Scholar]
  • [47].Meier L, van de Geer S, and Bühlmann P(2009). High-dimensional additive modeling. The Annals of Statistics 37, 3779–3821. [Google Scholar]
  • [48].Meng L, Sun F, Zhang X, and Waterman MS (2011). Sequence alignment as hypothesis testing. J. Comput. Biol 18, 677–691. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [49].Prelić A, Bleuler S, Zimmermann P, Wille A, Bühlmann P, Gruissem W, Hennig L, Thiele L, and Zitzler E (2006). A systematic comparison and evaluation of biclustering methods for gene expression data. Bioinformatics 22, 1122–1129. [DOI] [PubMed] [Google Scholar]
  • [50].Ramel F, Sulmon C, Bogard M, Couèe I, and Gouesbet G (2009). Differential patterns of reactive oxygen species and antioxidative mechanisms during atrazine injury and sucrose-induced tolerance in arabidopsis thaliana plantlets. BMC Plant Biology 9, 1–18. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [51].Ravikumar P, Liu H, Lafferty J, and Wasserman L (2009). SpAM: sparse additive models. Journal of the Royal Statistical Society Series B 71, 1009–1030. [Google Scholar]
  • [52].Ren Z, Kang Y, Fan Y, and Lv J (2018). Tuning-free heterogeneous inference in massive networks. Journal of the American Statistical Association, to appear
  • [53].Schäfer J and Strimmer K (2005). A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology 4, 1544–1615. [DOI] [PubMed] [Google Scholar]
  • [54].Schmitt BA (1992). Perturbation bounds for matrix square roots and pythagorean sums. Linear algebra and its applications 174, 215–227. [Google Scholar]
  • [55].Shah RD and Samworth RJ (2013). Variable selection with error control: Another look at stability selection. J. Roy. Statist. Soc. Ser. B 75, 55–80. [Google Scholar]
  • [56].Stoker TM (1986). Consistent estimation of scaled coefficients. Econometrica, 1461–1481.
  • [57].Storey JD (2002). A direct approach to false discovery rates. J. Roy. Statist. Soc. Ser. B 64, 479–498. [Google Scholar]
  • [58].Storey JD, Taylor JE, and Siegmund D (2004). Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach. J. Roy. Statist. Soc. Ser. B 66, 187–205. [Google Scholar]
  • [59].Su W and Candès EJ (2016). Slope is adaptive to unknown sparsity and asymptotically minimax. Ann. Statist 44, 1038–1068. [Google Scholar]
  • [60].Sur P, Chen Y, and Candès EJ (2017). The likelihood ratio test in high-dimensional logistic regression is asymptotically a rescaled chi-square arXiv preprint arXiv:1706.01191.
  • [61].Tibshirani R (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58, 267–288. [Google Scholar]
  • [62].Wienkoop S, Glinski M, Tanaka N, Tolstikov V, Fiehn O, and Weckwerth W (2004). Linking protein fractionation with multidimensional monolithic reversed-phase peptide chromatography/mass spectrometry enhances protein identification from complex mixtures even in the presence of abundant proteins. Rapid Commun. Mass Spectrom 18, 643–650. [DOI] [PubMed] [Google Scholar]
  • [63].Wille A, Zimmermann P, Vranová E, Fürholz A, Laule O, Bleuler S, Hennig L, Prelić A, von Rohr P, Thiele L, et al. (2004). Sparse graphical Gaussian modeling of the isoprenoid gene network in arabidopsis thaliana. Genome biology 5, R92. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [64].Wu WB (2008). On false discovery control under dependence. Ann. Statist 36, 364–380. [Google Scholar]
  • [65].Yang E, Lozano A, and Ravikumar P (2014). Elementary estimators for high-dimensional linear regression. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), 388–396. [Google Scholar]
  • [66].Zhang Y and Liu JS (2011). Fast and accurate approximation to significance tests in genome-wide association studies. Journal of the American Statistical Association 106, 846–857. [DOI] [PMC free article] [PubMed] [Google Scholar]
