Abstract
Power and reproducibility are key to enabling refined scientific discoveries in contemporary big data applications with general high-dimensional nonlinear models. In this paper, we provide theoretical foundations on the power and robustness of the model-X knockoffs procedure introduced recently in Candès, Fan, Janson and Lv (2018) in the high-dimensional setting where the covariate distribution is characterized by a Gaussian graphical model. We establish that under mild regularity conditions, the power of the oracle knockoffs procedure with known covariate distribution in high-dimensional linear models is asymptotically one as the sample size goes to infinity. When moving away from the ideal case, we suggest a modified model-X knockoffs method called graphical nonlinear knockoffs (RANK) to accommodate the unknown covariate distribution. We provide theoretical justifications of the robustness of our modified procedure by showing that the false discovery rate (FDR) is asymptotically controlled at the target level and the power is asymptotically one with the estimated covariate distribution. To the best of our knowledge, this is the first formal theoretical result on the power of the knockoffs procedure. Simulation results demonstrate that compared to existing approaches, our method performs competitively in both FDR control and power. A real data set is analyzed to further assess the performance of the suggested knockoffs procedure.
Keywords: Power, Reproducibility, Big data, High-dimensional nonlinear models, Robustness, Large-scale inference and FDR, Graphical nonlinear knockoffs
1. Introduction
Feature selection with big data is of fundamental importance to many contemporary applications from different disciplines of social sciences, health sciences, and engineering [36, 24, 8]. Over the past two decades, various feature selection methods, theory, and algorithms have been extensively developed and investigated for a wide spectrum of flexible models ranging from parametric to semiparametric and nonparametric, linking a high-dimensional covariate vector x = (X1, · · · , Xp)T of p features Xj’s to a response Y of interest, where the dimensionality p can be large compared to the available sample size n or even greatly exceed n. The success of feature selection for enhanced prediction in practice can be attributed to the reduction of noise accumulation associated with high-dimensional data through dimensionality reduction. In particular, most existing studies have focused on the power perspective of feature selection procedures, such as the sure screening property, model selection consistency, oracle property, and oracle inequalities. When the model is correctly specified, researchers and practitioners often would like to know whether the estimated model involving a subset of the p covariates enjoys reproducibility in the sense that the fraction of noise features in the discovered model is controlled. Yet such a practical issue of reproducibility is much less well understood in the settings of general high-dimensional nonlinear models. Moreover, it is no longer clear whether the power of feature selection procedures can be retained when one intends to ensure reproducibility.
Indeed, the issues of power and reproducibility are key to enabling refined scientific discoveries in big data applications utilizing general high-dimensional nonlinear models. To characterize the reproducibility of statistical inference, the seminal paper of [4] introduced an elegant concept of false discovery rate (FDR), which is defined as the expectation of the fraction of false discoveries among all the discoveries, and proposed the popularly used Benjamini–Hochberg procedure for FDR control by resorting to the p-values for large-scale multiple testing returned by some statistical estimation and testing procedure. There is a huge literature on FDR control for large-scale inference, and various generalizations and extensions of the original FDR procedure have been developed and investigated for different settings and applications [5, 15, 57, 58, 1, 13, 14, 20, 64, 12, 32, 27, 48, 66, 21, 44, 59]. Most existing work either assumes a specific functional form such as linearity on the dependence structure of response Y on covariates Xj’s, or relies on the p-values for evaluating the significance of covariates Xj’s. Yet in high-dimensional settings, we often do not have such a luxury since response Y could depend on covariate vector x through very complicated forms, and even when Y and x have a simple dependence structure, high dimensionality of covariates can render classical p-value calculation procedures no longer justified or simply invalid [39, 26, 60]. These intrinsic challenges can make the p-value based methods difficult to apply or even fail [9].
To accommodate arbitrary dependence structure of Y on x and bypass the need of calculating accurate p-values for covariate significance, [9] recently introduced the model-X knockoffs framework for FDR control in general high-dimensional nonlinear models. Their work was inspired by and builds upon the ingenious development of the knockoff filter in [2], which provides effective FDR control in the setting of Gaussian linear model with dimensionality p no larger than sample size n. The knockoff filter was later extended in [3] to high-dimensional linear model using the ideas of data splitting and feature screening. The salient idea of [2] is to construct the so-called “knockoff” variables which mimic the dependence structure of the original covariates but are independent of response Y conditional on the original covariates. These knockoff variables can be used as control variables. By comparing the regression outcomes for original variables with those for control variables, the relevant set of variables can be identified more accurately and thus the FDR can be better controlled. The model-X knockoffs framework introduced in [9] greatly expands the applicability of the original knockoff filter in that the response Y and covariates x can have arbitrarily complicated dependence structure and the dimensionality p can be arbitrarily large compared to sample size n. It was theoretically justified in [9] that the model-X knockoffs procedure controls FDR exactly in finite samples of arbitrary dimensions. However, one important assumption in their theoretical development is that the joint distribution of covariates x should be known. Moreover, formal power analysis of the knockoffs framework is still lacking even for the setting of Gaussian linear model.
Despite the importance of the known covariate distribution in their theoretical development, [9] empirically explored the scenario of unknown covariate distribution for the specific setting of the generalized linear model (GLM) [46] with Gaussian design matrix and discovered that the estimation error of the covariate distribution can have negligible effect on FDR control. Yet there exist no formal theoretical justifications on the robustness of the model-X knockoffs method, and it is also unclear to what extent such robustness can hold beyond the GLM setting. To address these fundamental challenges, our paper is intended as a first attempt to provide theoretical foundations on the power and robustness of the model-X knockoffs framework. Specifically, the major innovations of the paper are twofold. First, we will formally investigate the power of the knockoffs framework in high-dimensional linear models with both known and unknown covariate distribution. Second, we will provide theoretical support on the robustness of the model-X knockoffs procedure with unknown covariate distribution in general high-dimensional nonlinear models.
More specifically, in the ideal case of known covariate distribution, we prove that the model-X knockoffs procedure in [9] has asymptotic power one under mild regularity conditions in high-dimensional linear models. When moving away from the ideal scenario, to accommodate the difficulty caused by the unknown covariate distribution we suggest a modified model-X knockoffs method called graphical nonlinear knockoffs (RANK). The modified knockoffs procedure exploits the data splitting idea, where the first half of the sample is used to estimate the unknown covariate distribution and reduce the model size, and the second half of the sample is employed to globally construct the knockoff variables and apply the knockoffs procedure. We establish that the modified knockoffs procedure asymptotically controls the FDR regardless of whether the reduced model contains the true model or not. Such a feature makes our work intrinsically different from that in [3], which requires the sure screening property [23] of the reduced model; see Section 3.1 for more detailed discussions on the differences. In our theoretical analysis of the FDR, we still allow for arbitrary dependence structure of response Y on covariates x and assume that the joint distribution of x is characterized by a Gaussian graphical model with unknown precision matrix [41]. In the specific case of high-dimensional linear models with unknown covariate distribution, we also provide robustness analysis on the power of our modified procedure.
The rest of the paper is organized as follows. Section 2 reviews the model-X knockoffs framework and provides theoretical justifications on its power in high-dimensional linear models. We introduce the modified model-X knockoffs procedure RANK and investigate its robustness on both FDR control and power with respect to the estimation of unknown covariate distribution in Section 3. Section 4 presents several simulation examples of both linear and nonlinear models to verify our theoretical results. We demonstrate the performance of our procedure on a real data set in Section 5. Section 6 discusses some implications and extensions of our work. The proofs of main results are relegated to the Appendix. Additional technical details are provided in the Supplementary Material.
2. Power analysis for oracle model-X knockoffs
Suppose we have a sample of n independent and identically distributed (i.i.d.) observations from the population (x, Y), where the dimensionality p of covariate vector x = (X1, · · · , Xp)T can greatly exceed the available sample size n. To ensure model identifiability, it is common to assume that only a small fraction of the p covariates Xj’s are truly relevant to response Y. To be more precise, [9] defined the set S1 of irrelevant features as that consisting of Xj’s such that Xj is independent of Y conditional on all remaining p − 1 covariates Xk’s with k ≠ j, and thus the set of truly relevant features is given naturally by S0, the complement of set S1. Features in sets S0 and S1 are also referred to as important and noise features, respectively.
We aim at accurately identifying the truly relevant features in set S0, which is assumed to be identifiable, while keeping the false discovery rate (FDR) [4] under control. The FDR for a feature selection procedure is defined as
$$\mathrm{FDR} = E\left[\mathrm{FDP}\right] \quad \text{with} \quad \mathrm{FDP} = \frac{|\widehat{S} \cap S_1|}{|\widehat{S}|}, \tag{1}$$
where Ŝ denotes the sparse model returned by the feature selection procedure, | · | stands for the cardinality of a set, and the convention 0/0 = 0 is used in the definition of the false discovery proportion (FDP), which is the fraction of noise features in the discovered model. Here the feature selection procedure can be any sparse modeling method of the user’s choice.
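To fix ideas, the following small Python sketch computes the empirical FDP in (1), together with the empirical power used later in Section 2.2, for a single selected model; the function and variable names are ours and not from the paper.

```python
import numpy as np

def fdp_and_power(selected, true_support):
    """Empirical FDP (fraction of noise features among discoveries, with the
    convention 0/0 = 0) and empirical power (fraction of true features found)."""
    selected, true_support = set(selected), set(true_support)
    n_sel = len(selected)
    fdp = len(selected - true_support) / n_sel if n_sel > 0 else 0.0
    power = len(selected & true_support) / len(true_support)
    return fdp, power

# Toy example: 30 true features among 1000, a procedure returning 35 features.
rng = np.random.default_rng(0)
truth = rng.choice(1000, size=30, replace=False)
picked = np.concatenate([truth[:28], rng.choice(1000, size=7, replace=False)])
print(fdp_and_power(picked, truth))
```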
2.1. Review of model-X knockoffs framework
Our suggested graphical nonlinear knockoffs procedure in Section 3 falls in the general framework of model-X knockoffs introduced in [9], which we briefly review in this section. The key ingredient of model-X knockoffs framework is the construction of the so-called model-X knockoff variables that are defined as follows.
Definition 1 ([9]). Model-X knockoffs for the family of random variables x = (X1, · · ·, Xp)T is a new family of random variables x̃ = (X̃1, · · ·, X̃p)T that satisfies two properties: (1) (x, x̃)swap(S) =d (x, x̃) for any subset S ⊂ {1, · · ·, p}, where swap(S) means swapping components Xj and X̃j for each j ∈ S and =d denotes equal in distribution, and (2) x̃ is independent of response Y conditional on x.
We see from Definition 1 that model-X knockoff variables mimic the probabilistic dependency structure among the original features Xj’s and are independent of response Y given Xj’s. When the covariate distribution is characterized by Gaussian graphical model [41], that is,
$$x \sim N\!\left(0,\; \Omega_0^{-1}\right), \tag{2}$$
with p×p precision matrix Ω0 encoding the graphical structure of the conditional dependency among the covariates Xj’s, we can construct the p-variate model-X knockoff random vector x̃ characterized in Definition 1 as
$$\widetilde{x} \,\big|\, x \sim N\!\left( \left(I_p - \mathrm{diag}\{s\}\,\Omega_0\right) x,\; 2\,\mathrm{diag}\{s\} - \mathrm{diag}\{s\}\,\Omega_0\,\mathrm{diag}\{s\} \right), \tag{3}$$
where s is a p-dimensional vector with nonnegative components chosen in a suitable way. In fact, in view of (2) and (3) it is easy to show that the original features and model-X knockoff variables have the following joint distribution
$$\begin{pmatrix} x \\ \widetilde{x} \end{pmatrix} \sim N\!\left( 0,\; \begin{pmatrix} \Sigma_0 & \Sigma_0 - \mathrm{diag}\{s\} \\ \Sigma_0 - \mathrm{diag}\{s\} & \Sigma_0 \end{pmatrix} \right), \tag{4}$$
with Σ0 = Ω0^{-1} the covariance matrix of covariates x. Intuitively, larger components of s mean that the constructed knockoff variables deviate further from the original features, resulting in higher power in distinguishing between them. The p-dimensional vector s in (3) should be chosen in a way such that Σ0 − 2^{-1}diag{s} is positive definite, and can be selected using the methods in [9]. We will treat it as a nuisance parameter throughout our theoretical analysis.
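To make the construction in (3)–(4) concrete, here is a minimal Python sketch that draws Gaussian model-X knockoffs for a design matrix X given a precision matrix Ω and the vector s. The helper equi_s shown for choosing s is only a simple placeholder that keeps the conditional covariance positive definite; the paper treats s as a nuisance parameter selected by the methods in [9].

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_knockoffs(X, Omega, s, rng=None):
    """Draw model-X knockoffs row by row from the conditional law (3):
    x_tilde | x ~ N((I - diag(s) Omega) x, 2 diag(s) - diag(s) Omega diag(s))."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    D = np.diag(s)
    C = np.eye(p) - D @ Omega                 # conditional mean map
    V = 2.0 * D - D @ Omega @ D               # conditional covariance
    B = np.real(sqrtm(V))                     # symmetric square root of V
    Z = rng.standard_normal((n, p))
    return X @ C.T + Z @ B                    # row i equals C x_i + B z_i

def equi_s(Omega):
    """A simple placeholder choice of s (equal components) keeping V positive
    definite; the paper selects s by the methods in [9]."""
    lam_max = np.linalg.eigvalsh(Omega).max()
    return np.full(Omega.shape[0], min(1.0, 1.0 / lam_max))
```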
With the constructed knockoff variables x̃, the knockoffs inference framework proceeds as follows. We select important variables by resorting to the knockoff statistics Wj = fj(Zj, Z̃j) defined for each 1 ≤ j ≤ p, where Zj and Z̃j represent feature importance measures for the jth covariate Xj and its knockoff counterpart X̃j, respectively, and fj(·, ·) is an antisymmetric function satisfying fj(u, v) = −fj(v, u). For example, in linear regression models, one can choose Zj and Z̃j as the Lasso [61] regression coefficients of Xj and X̃j, respectively, and a valid knockoff statistic is Wj = |Zj| − |Z̃j|. There are also many other options for defining the feature importance measures. Observe that all model-X knockoff variables are just noise features by the second property in Definition 1. Thus intuitively, a large positive value of knockoff statistic Wj indicates that the jth covariate Xj is important, while a small magnitude of Wj usually corresponds to noise features.
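As an illustration of the LCD-type statistic just described, the sketch below fits the Lasso on the augmented design [X, X̃] and forms Wj = |β̂j| − |β̂p+j|; it uses scikit-learn's cross-validated Lasso purely as a convenient stand-in for whichever tuning rule one prefers.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lcd_statistics(X, X_knockoff, y, seed=0):
    """Lasso coefficient difference W_j = |beta_hat_j| - |beta_hat_{p+j}|
    from a single Lasso fit on the augmented design [X, X_knockoff]."""
    p = X.shape[1]
    X_aug = np.hstack([X, X_knockoff])
    beta = LassoCV(cv=5, random_state=seed).fit(X_aug, y).coef_
    return np.abs(beta[:p]) - np.abs(beta[p:])
```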
The final step of the knockoffs inference framework is to sort |Wj|’s from high to low and select features whose Wj’s are at or above some threshold T, which results in the discovered model
$$\widehat{S} = \left\{ 1 \le j \le p : W_j \ge T \right\}. \tag{5}$$
Following [2] and [9], one can choose the threshold T in the following two ways
$$T = \min\left\{ t \in \mathcal{W} : \frac{\#\{j : W_j \le -t\}}{\#\{j : W_j \ge t\} \vee 1} \le q \right\}, \tag{6}$$
$$T_+ = \min\left\{ t \in \mathcal{W} : \frac{1 + \#\{j : W_j \le -t\}}{\#\{j : W_j \ge t\} \vee 1} \le q \right\}, \tag{7}$$
where 𝒲 denotes the set of unique nonzero values attained by the |Wj|’s and q ∈ (0, 1) is the desired FDR level specified by the user. The procedures using threshold T in (6) and threshold T+ in (7) are referred to as the knockoffs and knockoffs+ methods, respectively. It was proved in [9] that the model-X knockoffs procedure controls a modified FDR that replaces |Ŝ| in the denominator by |Ŝ| + 1/q in (1), and the model-X knockoffs+ procedure achieves exact FDR control in finite samples regardless of dimensionality p and the dependence structure of response Y on covariates x. The major assumption needed in [9] is that the distribution of covariates x is known. Throughout the paper, we implicitly use the threshold T+ defined in (7) for FDR control in the knockoffs inference framework but still write it as T for notational simplicity.
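The thresholds in (6) and (7) admit a direct implementation. The sketch below computes the knockoffs+ threshold T+ and the resulting selected set in (5); the function names are ours.

```python
import numpy as np

def knockoff_plus_threshold(W, q):
    """Knockoffs+ threshold T+ in (7): the smallest nonzero |W_j| value t with
    (1 + #{j: W_j <= -t}) / max(#{j: W_j >= t}, 1) <= q."""
    W = np.asarray(W)
    for t in np.sort(np.unique(np.abs(W[W != 0]))):
        if (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1) <= q:
            return t
    return np.inf                             # no feasible threshold: select nothing

def knockoff_select(W, q):
    """Selected model (5): features with W_j >= T+."""
    return np.flatnonzero(np.asarray(W) >= knockoff_plus_threshold(W, q))
```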
2.2. Power analysis in linear models
Although the knockoffs procedures were proved rigorously to have controlled FDR in [2, 3, 9], their power advantages over popularly used approaches have been demonstrated only numerically therein. In fact, formal power analysis for the knockoffs framework is still lacking even in simple model settings such as linear regression. We aim to fill in this gap as a first attempt and provide theoretical foundations on the power analysis for model-X knockoffs framework. In this section, we will focus on the oracle model-X knockoffs procedure for the ideal case when the true precision matrix Ω0 for the covariate distribution in (2) is known, which is the setting assumed in [9]. The robustness analysis for the case of unknown precision matrix Ω0 will be undertaken in Section 3.
We would like to remark that the power analysis for the knockoffs framework is necessary and nontrivial. The FDR and power are two sides of the same coin, just like type I and type II errors in hypothesis testing. The knockoffs framework is a wrapper and can be combined with most model selection methods to achieve FDR control. Yet the theoretical properties of power after applying the knockoffs procedure are completely unknown for the case of correlated covariates and unknown covariate distribution. For example, when the knockoffs framework is combined with the Lasso, it further selects variables from the set of variables picked by Lasso applied with the augmented design matrix to achieve the FDR control. For this reason, the power of knockoffs is usually lower than that of Lasso. The main focus of this section is to investigate how much power loss the knockoffs framework would encounter when combined with Lasso.
Since the power analysis for the knockoffs framework is nontrivial and challenging, we content ourselves with the setting of high-dimensional linear models for the technical analysis of power. The linear regression model assumes that
$$y = X\beta_0 + \varepsilon, \tag{8}$$
where y = (Y1, · · ·, Yn)T is an n-dimensional response vector, X = (x1, · · ·, xn)T is an n × p design matrix consisting of p covariates Xj’s, β0 = (β0,1, · · ·, β0,p)T is a p-dimensional true regression coefficient vector, and ε = (ε1, · · ·, εn)T is an n-dimensional error vector independent of X. As mentioned before, the true model S0, which is the support of β0, is assumed to be sparse with size s = |S0|, and the n rows of design matrix X are i.i.d. observations generated from the Gaussian graphical model (2). Without loss of generality, all the diagonal entries of covariance matrix Σ0 are assumed to be ones.
As discussed in Section 2.1, there are many choices of the feature selection procedure up to the user for producing the feature importance measures Zj and for covariates Xj and knockoff variables , respectively, and there are also different ways to construct the knockoff statistics Wj. For the illustration purpose, we adopt the Lasso coefficient difference (LCD) as the knockoff statistics in our power analysis. The specific choice of LCD for knockoff statistics was proposed and recommended in [9], in which it was demonstrated empirically to outperform some other choices in terms of power. The LCD is formally defined as
$$W_j = |\widehat{\beta}_j(\lambda)| - |\widehat{\beta}_{p+j}(\lambda)|, \tag{9}$$
where β̂j(λ) and β̂p+j(λ) denote the jth and (p + j)th components, respectively, of the Lasso [61] regression coefficient vector
$$\widehat{\beta}(\lambda) = \underset{b \in \mathbb{R}^{2p}}{\arg\min}\; \left\{ (2n)^{-1} \big\| y - [X, \widetilde{X}]\, b \big\|_2^2 + \lambda \| b \|_1 \right\}, \tag{10}$$
with λ ≥ 0 the regularization parameter, X̃ an n × p matrix whose n rows are independent random vectors of model-X knockoff variables generated from (3), and ‖ · ‖r for r ≥ 0 the Lr-norm of a vector. To simplify the technical analysis, we assume that with asymptotic probability one, there are no ties in the magnitude of nonzero Wj’s and no ties in the magnitude of nonzero components of the Lasso solution β̂(λ) in (10), which is a mild condition in light of the continuity of the underlying distributions.
To facilitate the power analysis, we impose some basic regularity conditions.
Condition 1. The components of ε are i.i.d. with sub-Gaussian distribution.
Condition 2. It holds that min_{j∈S0} |β0,j| ≥ κn{(log p)/n}^{1/2} for some slowly diverging sequence κn → ∞ as n → ∞.
Condition 3. There exists some constant c ∈ (2(qs)^{-1}, 1) such that with asymptotic probability one, |Ŝ| ≥ cs for Ŝ given in (5).
Condition 1 can be relaxed to heavier-tailed distributions at the cost of slower convergence rates, as long as similar concentration inequalities used in the proofs continue to hold. Condition 2 is assumed to ensure that the Lasso solution does not miss a large portion of the important features in S0. This is necessary since the knockoffs procedure under investigation builds upon the Lasso solution and thus its power is naturally upper bounded by that of Lasso. To see this, recall the well-known oracle inequality for Lasso [7, 8] stating that, with asymptotic probability one, the estimation error of the Lasso solution is of order {s(log p)/n}^{1/2} for λ chosen in the order of {(log p)/n}^{1/2}. Combined with the signal strength requirement in Condition 2, this entails that the number of important features missed by Lasso is asymptotically negligible compared to s with asymptotic probability one, which guarantees that the power of Lasso is lower bounded by 1 − o(1); that is, Lasso has asymptotic power one. However, as discussed previously the power of knockoffs is always upper bounded by that of Lasso. So we are interested in the relative power of knockoffs compared to that of Lasso. For this reason, Condition 2 is imposed to simplify the technical analysis of the knockoffs power by ensuring that the asymptotic power of Lasso is one. We will show in Theorem 1 that there is almost no power loss when applying the model-X knockoffs procedure.
Condition 3 imposes a lower bound on the size of the sparse model selected by the knockoffs procedure. Recall that we assume the number of true variables s can diverge with sample size n. The rationale behind Condition 3 is that any method with high power should at least be able to select a large number of variables which are not necessarily true ones though. Since it is not straightforward to check, we provide a sufficient condition that is more intuitive in Lemma 1 below, which shows that Condition 3 can hold as long as there exist enough strong signals in the model. We acknowledge that Lemma 1 may not be a necessary condition for Condition 3.
Lemma 1. Assume that Condition 1 holds and there exists some constant c ∈ (2(qs)^{-1}, 1) such that the model contains at least cs strong signals. Then Condition 3 holds.
We would like to mention that the conditions of Lemma 1 are not stronger than Condition 2. We require a few strong signals, and yet still allow for many very weak ones. In other words, the set of strong signals is only a large enough proper subset of the set of all signals .
We are now ready to characterize the statistical power of the knockoffs procedure in high-dimensional linear model (8). Formally speaking, the power of a feature selection procedure is defined as
$$\mathrm{Power} = E\left[ \frac{|\widehat{S} \cap S_0|}{|S_0|} \right], \tag{11}$$
where Ŝ denotes the discovered sparse model returned by the feature selection procedure.
Theorem 1. Assume that Conditions 1–3 hold, all the eigenvalues of Ω0 are bounded away from 0 and ∞, the smallest eigenvalue of 2diag(s) − diag(s)Ω0diag(s) is positive and bounded away from 0, and λ = Cλ{(log p)/n}^{1/2} with Cλ > 0 some constant. Then the oracle model-X knockoffs procedure satisfies that with probability at least ,
and therefore,
as n → ∞, where φ is the golden ratio and is some positive constant.
Theorem 1 reveals that the oracle model-X knockoffs procedure in [9] knowing the true precision matrix Ω0 for the covariate distribution can indeed have asymptotic power one under some mild regularity conditions. Since parameter κn characterizes the signal strength, it is seen that the stronger the signal, the faster the convergence of power to one. This shows that for the ideal case, model-X knockoffs procedure can enjoy appealing FDR control and power properties simultaneously.
3. Robustness of graphical nonlinear knockoffs
When moving away from the ideal scenario considered in Section 2, a natural question is whether both properties of FDR control and power can continue to hold with no access to the knowledge of true covariate distribution. To gain insights into such a question, we now turn to investigating the robustness of model-X knockoffs framework. Hereafter we assume that the true precision matrix Ω0 for the covariate distribution in (2) is unknown. We will begin with the FDR analysis and then move on to the power analysis.
3.1. Modified model-X knockoffs
We would like to emphasize that the linear model assumption is no longer needed here and arbitrary dependence structure of response y on covariates x is allowed. As mentioned in Introduction, to overcome the difficulty caused by unknown precision matrix Ω0 we modify the model-X knockoffs procedure described in Section 2.1 and suggest the method of graphical nonlinear knockoffs (RANK).
To ease the presentation, we first introduce some notation. For each given p × p symmetric positive definite matrix Ω, denote by CΩ = Ip − diag{s}Ω and by BΩ = (2diag{s} − diag{s}Ωdiag{s})^{1/2} the corresponding square root matrix. We define the n × p knockoffs matrix X̃ by independently generating its rows x̃i from the conditional distribution
$$\widetilde{x}_i \,\big|\, x_i \sim N\!\left( C_{\Omega}\, x_i,\; 2\,\mathrm{diag}\{s\} - \mathrm{diag}\{s\}\,\Omega\,\mathrm{diag}\{s\} \right), \tag{12}$$
where X = (x1, · · ·, xn)T is the original n × p design matrix generated from the Gaussian graphical model (2). It is easy to show that the (2p)-variate random vectors (xiT, x̃iT)T are i.i.d. with Gaussian distribution of mean 0 and covariance matrix given by cov(xi) = Σ0, cov(xi, x̃i) = Σ0CΩT, and cov(x̃i) = CΩΣ0CΩT + BΩ^2.
Our modified knockoffs method RANK exploits the idea of data splitting, in which one half of the sample is used to estimate unknown precision matrix Ω0 and reduce the model dimensionality, and the other half of the sample is employed to construct the knockoff variables and implement the knockoffs inference procedure, with the steps detailed below.
Step 1. Randomly split the data (X, y) into two folds (X(k), y(k)) with 1 ≤ k ≤ 2 each of sample size n/2.
Step 2. Use the first fold of data (X(1), y(1)) to obtain an estimate Ω̂ of the precision matrix and a reduced model, represented by its support.
Step 3. With the estimated precision matrix Ω̂ from Step 2, construct an (n/2) × p knockoffs matrix X̃(2) using X(2) with rows independently generated from (12); that is, X̃(2) = X(2)CΩ̂T + Z BΩ̂ with Z an (n/2) × p matrix with i.i.d. N(0, 1) components.
Step 4. Construct knockoff statistics Wj’s using only data on the support of the reduced model; that is, compute Wj for each feature j in the reduced model and set Wj = 0 for each feature j outside of it. Then apply the knockoffs inference procedure to the Wj’s to obtain the final set of selected features.
Here for any matrix A and subset S ⊂ {1, · · ·, p}, the compact notation AS stands for the submatrix of A consisting of the columns in set S.
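Putting Steps 1–4 together, the following schematic Python sketch conveys the flow of the RANK procedure in the linear-model illustration with LCD statistics; it reuses gaussian_knockoffs, equi_s, and knockoff_select from the sketches above and takes scikit-learn's graphical lasso and Lasso as stand-ins for the user's precision matrix estimator (e.g., ISEE) and sparse regression method, so it should be read as an outline rather than the exact implementation used in the paper.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.linear_model import LassoCV

def rank_procedure(X, y, q=0.2, seed=0):
    """Schematic RANK: split the sample, estimate the precision matrix and a
    reduced model on fold 1, then construct knockoffs and run the knockoffs
    filter with LCD statistics on fold 2."""
    n, p = X.shape
    rs = np.random.default_rng(seed)
    idx = rs.permutation(n)
    fold1, fold2 = idx[: n // 2], idx[n // 2:]

    # Step 2: precision matrix estimate and reduced model from the first fold.
    Omega_hat = GraphicalLassoCV().fit(X[fold1]).precision_
    beta1 = LassoCV(cv=5, random_state=seed).fit(X[fold1], y[fold1]).coef_
    reduced = np.flatnonzero(beta1)              # support of the fold-1 Lasso
    if reduced.size == 0:
        return np.array([], dtype=int)

    # Step 3: knockoffs for the second fold using the estimated precision matrix.
    s = equi_s(Omega_hat)
    X2, y2 = X[fold2], y[fold2]
    X2_ko = gaussian_knockoffs(X2, Omega_hat, s, rng=seed)

    # Step 4: LCD statistics on the reduced model only; W_j = 0 elsewhere.
    cols = np.concatenate([reduced, reduced + p])
    beta2 = LassoCV(cv=5, random_state=seed).fit(
        np.hstack([X2, X2_ko])[:, cols], y2).coef_
    k = reduced.size
    W = np.zeros(p)
    W[reduced] = np.abs(beta2[:k]) - np.abs(beta2[k:])
    return knockoff_select(W, q)
```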
As discussed in Section 2.1, the model-X knockoffs framework utilizes sparse regression procedures such as the Lasso. For this reason, even in the original model-X knockoffs procedure the knockoff statistics Wj’s (see, e.g., (9)) take nonzero values only over a much smaller model than the full model. This observation motivates us to estimate such a smaller model using the first half of the sample in Step 2 of our modified procedure. When implementing this modified procedure, we limit ourselves to sparse models with size bounded by some positive integer Kn that diverges with n; see, for example, [30, 45] for detailed discussions and justifications on similar consideration of sparse models. In addition to sparse regression procedures, feature screening methods such as [23, 17] can also be used to obtain the reduced model .
The above modified knockoffs method differs from the original model-X knockoffs procedure [9] in that we use an independent sample to obtain the estimated precision matrix and the reduced model. In particular, the independence between these estimates and the data (X(2), y(2)) plays an important role in our theoretical analysis of the robustness of the knockoffs procedure. In fact, the idea of data splitting has been popularly used in the literature for various purposes [25, 19, 55, 3]. Although the work of [3] has the closest connection to ours, there are several key differences between the two methods. Specifically, [3] considered the high-dimensional linear model with fixed design, where the data is split into two portions with the first portion used for feature screening and the second portion employed for applying the original knockoff filter in [2] on the reduced model. To ensure FDR control, it was required in [3] that the feature screening method should enjoy the sure screening property [23], that is, the reduced model after the screening step contains the true model with asymptotic probability one. In contrast, one major advantage of our method is that the asymptotic FDR control can be achieved without requiring the sure screening property; see Theorem 2 in Section 3.2 for more details. Such a major distinction is rooted in the difference in constructing knockoff variables; that is, we construct model-X knockoff variables globally in Step 3 above, whereas [3] constructed knockoff variables locally on the reduced model. Another major difference is that our method works with random design and does not need any assumption on how response y depends upon covariates x, while the method in [3] requires the linear model assumption and cannot be extended to nonlinear models.
3.2. Robustness of FDR control for graphical nonlinear knockoffs
We begin with investigating the robustness of FDR control for the modified model-X knockoffs procedure RANK. To simplify the notation, we rewrite (X(2), y(2)) as (X, y) with sample size n whenever there is no confusion, where n now represents half of the original sample size. For each given p × p symmetric positive definite matrix Ω, an n × p knockoffs matrix can be constructed with n rows independently generated according to (12) and the modified knockoffs procedure proceeds with a given reduced model . Then the FDP and FDR functions in (1) can be rewritten as
(13) |
where the subscript n is used to emphasize the dependence of FDP and FDR functions on sample size. It is easy to check that the knockoffs procedure based on (y, , ) satisfies all the conditions in [9] for FDR control for any reduced model that is independent of X and , which ensures that can be controlled at the target level q. To study the robustness of our modified knockoffs procedure, we will make a connection between functions and .
To ease the presentation, denote by the oracle knockoffs matrix with Ω =Ω0, , and . The following proposition establishes a formal characterization of the FDR as a function of the precision matrix Ω used in generating the knockoff variables and the reduced model .
Proposition 1. For any given symmetric positive definite and , it holds that
(14) |
where , function gn(·) is some conditional expectation of the FDP function whose functional form is free of Ω and , and
We see from Proposition 1 that when Ω=Ω0, it holds that and thus the value of the FDR function at point Ω0 reduces to
which can be shown to be bounded from above by the target FDR level q using the results proved in [9]. Since the dependence of FDR function on Ω is completely through matrix HΩ, we can reparameterize the FDR function as . In view of (14), is the expectation of some measurable function with respect to the probability law of which has matrix normal distribution with independent rows, and thus is expected to be a smooth function of entries of HΩ by measure theory. Motivated by such an observation, we make the following Lipschitz continuity assumption.
Condition 4. There exists some constant L > 0 such that for all and with some constant C2 > 0 an → 0, , where and denote the matrix spectral norm and matrix Frobenius norm, respectively
Condition 5. Assume that the estimated precision matrix satisfies with probability for some constants C2, c1 > 0 and an → 0, and that .
The error rate of precision matrix estimation assumed in Condition 5 is quite flexible. We would like to emphasize that no sparsity assumption has been made on the true precision matrix Ω0. Bounding the size of sparse models is also important for ensuring model identifiability and stability; see, for instance, [30, 45] for more detailed discussions.
Theorem 2. Assume that all the eigenvalues of Ω0 are bounded away from 0 and ∞ and the smallest eigenvalue of 2diag(s) − diag(s)Ω0diag(s) is bounded from below by some positive constant. Then under Condition 4, it holds that
(15) |
Moreover, under Conditions 4–5, the FDR of RANK is bounded from above by q + o(1), where q ∈ (0, 1) is the target FDR level.
Theorem 2 establishes the robustness of the FDR with respect to the precision matrix Ω; see the uniform bound in (15). As a consequence, it shows that our modified model-X knockoffs procedure RANK can indeed have FDR asymptotically controlled at the target level q. We remark that the extra term in Theorem 2 arises because Condition 4 is imposed through the matrix Frobenius norm, which is motivated by results from calculus on the smoothness of integral functions. If one is willing to impose the assumption through the matrix spectral norm instead of the Frobenius norm, then the extra term can be dropped and the set can be taken as the full model {1, · · ·, p}.
We would like to stress that Theorem 2 allows for arbitrarily complicated dependence structure of response y on covariates x and for any valid construction of knockoff statistics Wj’s. This is different from the conditions needed for power analysis in Section 2.2 (that is, the linear model setting and LCD knockoff statistics). Moreover, the asymptotic FDR control in Theorem 2 does not need the sure screening property of as n → ∞.
3.3. Robustness of power in linear models
We are now curious about the other side of the coin, that is, the robustness theory for the power of our modified knockoffs procedure RANK. As argued at the beginning of Section 2.2, to ease the presentation and simplify the technical derivations we come back to the high-dimensional linear model (8) and use the LCD in (9) as the knockoff statistic. The difference from the setting in Section 2.2 is that we no longer assume that the true precision matrix Ω0 is known, and we use the modified knockoffs procedure introduced in Section 3.1 to achieve asymptotic FDR control.
Recall that for the RANK procedure, the reduced model is first obtained from an independent subsample and then the knockoffs procedure is applied on the second fold of data to further select features from . Clearly if does not have the sure screening property of as n → ∞, then the Lasso solution based on as given in (18) is no longer a consistent estimate of β0 even when the true precision matrix Ω0 is used to generate the knockoff variables. In addition, the final power of our modified knockoffs procedure will always be upper bounded by . Nevertheless, the results in this section are still useful in the sense that model (8) can be viewed as the projected model on support . Thus our power analysis here is relative power analysis with respect to the reduced model . In other words, we will focus on how much power loss would occur after we apply the model-X knockoffs procedure to (, , y(2)) when compared to the power of . Since our focus is relative power loss, without loss of generality we will condition on the event
(16) |
We would like to point out that all conditions and results in this section can be adapted correspondingly when we view model (8) as the projected model if . Similarly as in FDR analysis, we restrict ourselves to sparse models with size bounded by Kn that diverges as n → ∞, that is, .
With Ω taken as the estimated precision matrix Ω̂, we can generate the knockoff variables from (12). Then the Lasso procedure can be applied to the augmented data (X(2), X̃(2), y(2)) with X̃(2) constructed in Step 3 of our modified knockoffs procedure, and the LCD can be defined as
$$W_j = |\widehat{\beta}_j(\lambda)| - |\widehat{\beta}_{p+j}(\lambda)|, \tag{17}$$
where β̂j(λ) and β̂p+j(λ) are the jth and (j + p)th components, respectively, of the Lasso estimator
(18) |
with λ ≥ 0 the regularization parameter and .
Unlike the FDR analysis in Section 3.2, we now need sparsity assumption on the true precision matrix Ω0.
Condition 6. Assume that Ω0 is Lp-sparse with each row having at most Lp nonzeros for some diverging Lp and all the eigenvalues of Ω0 are bounded away from 0 and ∞.
For each given precision matrix Ω and reduced model , we define similarly as in (17) except that Ω is used to generate the knockoff variables and set is used in (18) to calculate the Lasso solution. Denote by the final set of selected features using the LCD in the knockoffs inference framework. We further define a class of precision matrices
(19) |
where C2 and an are the same as in Theorem 2 and is some positive integer that diverges with n. Similarly as in Section 2.2, in the technical analysis we assume implicitly that with asymptotic probability one, for all valid constructions of the knockoff variables there are no ties in the magnitude of nonzero knockoff statistics and no ties in the magnitude of nonzero components of Lasso solution uniformly over all and .
Condition 7. It holds that for some constant c2 > 0.
The assumption on the estimated precision matrix made in Condition 7 is mild and flexible. A similar class of precision matrices was considered in [29] with detailed discussions on the choices of the estimation procedures. See, for example, [31, 10, 52] for some more recent developments on large precision matrix estimation and inference. In parallel to Theorem 1, we have the following results on the power of our modified knockoffs procedure with the estimated precision matrix .
Theorem 3. Assume that Conditions 1–2 and 6–7 hold, the smallest eigenvalue of 2diag(s)−diag(s)Ω0diag(s) is positive and bounded away from 0, , and λ = Cλ{(log p)/n}1/2 with c ∈ ((qs)−1, 1) and Cλ > 0 some constants. Then if and , RANK with estimated precision matrix and reduced model has power satisfying
where φ is the golden ratio and is some positive constant.
Theorem 3 establishes the robustness of the power for the RANK method. In view of Theorems 2–3, we see that our modified knockoffs procedure RANK can enjoy appealing properties of FDR control and power simultaneously when the true covariate distribution is unknown and needs to be estimated in high dimensions.
4. Simulation studies
So far we have seen that our suggested RANK method admits appealing theoretical properties for large-scale inference in high-dimensional nonlinear models. We now examine the finite-sample performance of RANK through four simulation examples.
4.1. Model setups and simulation settings
Recall that the original knockoff filter (KF) in [2] was designed for the linear regression model with dimensionality p not exceeding sample size n, while the high-dimensional knockoff filter (HKF) in [3] considers the linear model with p possibly larger than n. To compare RANK with the HKF procedure in the high-dimensional setting, our first simulation example adopts the linear regression model
$$y = X\beta + \varepsilon, \tag{20}$$
where y is an n-dimensional response vector, X is an n × p design matrix, β = (β1, · · ·, βp)T is a p-dimensional regression coefficient vector, and ε is an n-dimensional error vector. Non-linear models provide useful and flexible alternatives to linear models and are widely used in real applications. Our second through fourth simulation examples are devoted to three popular nonlinear model settings: the partially linear model, the single-index model, and the additive model, respectively. As a natural extension of linear model (20), the partially linear model assumes that
$$y = g(U) + X\beta + \varepsilon, \tag{21}$$
where g(U) = (g(U1), · · ·, g(Un))T is an n-dimensional vector-valued function with covariate vector U = (U1, · · ·, Un)T, g(·) is some unknown smooth nonparametric function, and the rest of notation is the same as in model (20). In particular, the partially linear model is a semiparametric regression model that has been commonly used in many areas such as economics, finance, medicine, epidemiology, and environmental science [16, 33].
The third and fourth simulation examples drop the linear component. As a popular tool for dimension reduction, the single-index model assumes that
$$y = g(X\beta) + \varepsilon, \tag{22}$$
where g(Xβ) = (g(x1Tβ), · · ·, g(xnTβ))T with X = (x1, · · ·, xn)T, g(·) is an unknown link function, and the remaining notation is the same as in model (20). In particular, the single-index model provides a flexible extension of the GLM by relaxing the parametric form of the link function [40, 56, 34, 42, 37]. To bring more flexibility while alleviating the curse of dimensionality, the additive model assumes that
$$y = \sum_{j=1}^{p} g_j(X_j) + \varepsilon, \tag{23}$$
where gj(θ) = (gj(θ1), · · ·, gj(θn))T for θ = (θ1, · · ·, θn)T, Xj represents the jth covariate vector with X = (X1, · · ·, Xp), gj (·)’s are some unknown smooth functions, and the rest of notation is the same as in model (20). The additive model has been widely employed for nonparametric modeling of high-dimensional data [35, 51, 47, 11].
For the linear model (20) in simulation example 1, the rows of the n × p design matrix X are generated as i.i.d. copies of N(0, Σ) with precision matrix Σ^{-1} = (ρ^{|j−k|})1≤j,k≤p for ρ = 0 and 0.5. We set the true regression coefficient vector as a sparse vector with s = 30 nonzero components, where the signal locations are chosen randomly and each nonzero coefficient is selected randomly from {±A} with A = 1.5 and 3.5. The error vector ε is assumed to be N(0, σ^2 In) with σ = 1. We set sample size n = 400 and consider the high-dimensional scenario with dimensionality p = 200, 400, 600, 800, and 1000. For the partially linear model (21) in simulation example 2, we choose the true function as g(U) = sin(2πU), generate U = (U1, · · ·, Un)T with i.i.d. Ui from the uniform distribution on [0, 1], and set A = 1.5 with the remaining setting the same as in simulation example 1.
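For reference, a minimal sketch of the data-generating process of simulation example 1 is given below; the seed and helper name are ours. The rows of X are drawn from N(0, Σ) with Σ^{-1} = (ρ^{|j−k|}), the coefficient vector has s nonzero entries at random locations with random signs ±A, and the noise is N(0, σ^2 In).

```python
import numpy as np

def simulate_linear_example(n=400, p=1000, s=30, A=1.5, rho=0.5, sigma=1.0, seed=0):
    """Generate (X, y, beta, support) as in simulation example 1 of Section 4.1."""
    rng = np.random.default_rng(seed)
    # Precision matrix Sigma^{-1} = (rho^{|j-k|}); the covariance is its inverse.
    idx = np.arange(p)
    Omega = rho ** np.abs(idx[:, None] - idx[None, :])
    Sigma = np.linalg.inv(Omega)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    # Sparse coefficient vector with random signs +/- A at random locations.
    beta = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)
    beta[support] = rng.choice([-A, A], size=s)
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y, beta, support
```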
Since the single-index model and additive model are more complex than the linear model and partially linear model, we reduce the true model size s while keeping sample size n = 400 in both simulation examples 3 and 4. For the single-index model (22) in simulation example 3, we consider the true link function g(x) = x3/2 and set p = 200, 400, 600, 800, and 1000. The true p-dimensional regression coefficient vector β0 is generated similarly with s = 10 and A = 1.5. For the additive model (23) in simulation example 4, we assume that s = 10 of the functions gj(·)’s are nonzero with j’s chosen randomly from {1, · · ·, p} and the remaining p − 10 functions gj(·)’s vanish. Specifically, each nonzero function gj (·) is taken to be a polynomial of degree 3 and all coefficients under the polynomial basis functions are generated independently as N (0, 102) as in [11]. The dimensionality p is allowed to vary with values 200, 400, 600, 800, and 1000. For each simulation example, we set the number of repetitions as 100.
4.2. Estimation procedures
To implement the RANK procedure described in Section 3.1, we need to construct a precision matrix estimator and obtain the reduced model using the first fold of data (X(1), y(1)). Among all available estimators in the literature, we employ the ISEE method in [31] for precision matrix estimation due to its scalability, simple tuning, and nice theoretical properties. For simplicity, we choose sj = 1/Λmax(Ω̂) for all 1 ≤ j ≤ p, where Ω̂ denotes the ISEE estimator of the true precision matrix Ω0 and Λmax(·) stands for the largest eigenvalue of a matrix. Then we can obtain an (n/2) × (2p) augmented design matrix [X(2), X̃(2)], where X̃(2) represents the (n/2) × p knockoffs matrix constructed in Step 3 of our modified knockoffs procedure in Section 3.1. To construct the reduced model using the first fold of data (X(1), y(1)), we borrow strength from the recent literature on feature selection methods. After the reduced model is obtained, we employ the reduced data restricted to it to fit a model and construct the knockoff statistics. In what follows, we will discuss feature selection methods for obtaining the reduced model for the linear model (20), partially linear model (21), single-index model (22), and additive model (23) in simulation examples 1–4, respectively. We will also discuss the construction of knockoff statistics in each model setting.
For the linear model (20) in simulation example 1, we obtain the reduced model by first applying the Lasso procedure
$$\widehat{\beta}^{(1)} = \underset{b \in \mathbb{R}^{p}}{\arg\min}\; \left\{ n^{-1} \big\| y^{(1)} - X^{(1)} b \big\|_2^2 + \lambda \| b \|_1 \right\}, \tag{24}$$
with λ ≥ 0 the regularization parameter, and then taking the reduced model to be the support of β̂(1). Then with the estimated precision matrix Ω̂ and the reduced model, we construct the knockoff statistics as the LCD in (17), where the estimated regression coefficient vector is obtained by applying the Lasso procedure on the reduced model as described in (18). The regularization parameter λ in the Lasso is tuned using K-fold cross-validation (CV).
For the partially linear model (21) in simulation example 2, we employ the profiling method in semiparametric regression based on the first fold of data (X(1), U(1), y(1)), by observing that model (21) becomes a linear model when conditioning on the covariate vector U(1). Consequently we need to estimate both the profiled response and the profiled covariates, that is, y(1) and X(1) with their conditional means given U(1) subtracted off. To this end, we adopt the local linear smoothing estimators [18] of these conditional means using the Epanechnikov kernel K(u) = 0.75(1 − u^2)+ with the optimal bandwidth selected by generalized cross-validation (GCV). Then we define the Lasso estimator for the p-dimensional regression coefficient vector similarly as in (24) with y(1) and X(1) replaced by their profiled counterparts. The reduced model is then taken as the support of this Lasso estimator. For the knockoff statistics, we set Wj = 0 for all features outside of the reduced model. On the support of the reduced model, we construct the Wj’s as the LCD with the Lasso coefficients obtained by applying the model fitting procedure described above to the reduced data in the second subsample.
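To make the profiling step concrete, the sketch below smooths the response and each covariate column on U with a local linear estimator using the Epanechnikov kernel and returns the profiled (residual) response and covariates, to which the Lasso in (24) can then be applied; the fixed bandwidth is a placeholder for the GCV-selected one described above.

```python
import numpy as np

def local_linear_fit(u_train, v_train, u_eval, h):
    """Local linear smoother with the Epanechnikov kernel K(t) = 0.75(1 - t^2)_+.
    v_train may be a vector (n,) or a matrix (n, d); returns fits at u_eval."""
    v_train = np.atleast_2d(v_train.T).T          # ensure shape (n, d)
    out = np.empty((len(u_eval), v_train.shape[1]))
    for i, u0 in enumerate(u_eval):
        t = (u_train - u0) / h
        w = np.maximum(0.75 * (1.0 - t ** 2), 0.0)
        sw, swt, swt2 = w.sum(), (w * t).sum(), (w * t * t).sum()
        # Equivalent local linear weights from the weighted least squares fit.
        l = w * (swt2 - t * swt) / (sw * swt2 - swt ** 2 + 1e-12)
        out[i] = l @ v_train
    return out

def profile_out(X, U, y, h=0.1):
    """Return profiled covariates X - E_hat[X|U] and response y - E_hat[y|U]."""
    y_fit = local_linear_fit(U, y, U, h)[:, 0]
    X_fit = local_linear_fit(U, X, U, h)
    return X - X_fit, y - y_fit
```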
To fit the single-index model (22) in simulation example 3, we employ the Lasso-SIR method in [43]. The Lasso-SIR first divides the sample of m = n/2 observations in the first subsample (X(1), y(1)) into H slices of equal length c and constructs a matrix ΛH from the sliced data, where M = IH ⊗ 1c is an m × H matrix given by the Kronecker product of the identity matrix IH and the constant vector 1c of ones. Then the Lasso-SIR estimates the p-dimensional regression coefficient vector using the Lasso procedure similarly as in (18), with the original response vector y(1) replaced by a new response vector constructed from the leading eigenpair of ΛH, where λ1 denotes the largest eigenvalue of matrix ΛH and η1 is the corresponding eigenvector. We set the number of slices H = 5. The reduced model is then taken as the support of this estimator. We then apply the Lasso-SIR fitting procedure discussed above to the reduced data in the second subsample and construct the knockoff statistics in a similar way as in the partially linear model.
To fit the additive model (23) in simulation example 4, we apply the GAMSEL procedure in [11] for sparse additive regression. In particular, we choose 6 basis functions, each with 6 degrees of freedom, for the smoothing splines using orthogonal polynomials for each additive component, and set the penalty mixing parameter γ = 0.9 in GAMSEL to obtain estimators of the true functions gj(·)’s. The GAMSEL procedure is first applied to the first subsample (X(1), y(1)) to obtain the reduced model, and then applied to the reduced data in the second subsample to obtain estimates ĝj and g̃j of the additive functions corresponding to the jth covariate and its knockoff counterpart, respectively, for each feature j in the reduced model. The knockoff statistics are then constructed as
(25) |
and Wj = 0 for each feature j outside of the reduced model, where ‖ · ‖ represents the empirical norm of the estimated function evaluated at its observed points and n/2 stands for the size of the second subsample.
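Since the display in (25) is not fully legible here, the sketch below only illustrates the empirical-norm comparison described in the text: given the fitted component functions for the features in the reduced model and for their knockoff copies, evaluated at the observed points of the second subsample, it takes the difference of their empirical norms as the knockoff statistic and sets Wj = 0 elsewhere; the exact functional form used in the paper (e.g., squared versus unsquared norms) may differ.

```python
import numpy as np

def additive_knockoff_statistics(g_hat, g_hat_knockoff, reduced, p):
    """g_hat and g_hat_knockoff: arrays of shape (n2, k) holding the estimated
    component functions for the k features in `reduced` and for their knockoff
    copies, evaluated at the n2 observed points of the second subsample."""
    emp_norm = lambda G: np.sqrt((G ** 2).mean(axis=0))   # empirical norms
    W = np.zeros(p)
    W[np.asarray(reduced)] = emp_norm(g_hat) - emp_norm(g_hat_knockoff)
    return W
```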
It is seen that in all four examples above, intuitively large positive values of knockoff statistics provide strong evidence against the jth null hypothesis H0,j: βj = 0 or H0,j: gj = 0. For all simulation examples, we set the target FDR level at q = 0.2.
4.3. Simulation results
To gain some insights into the effect of data splitting, we also implemented our procedure without the data splitting step. To differentiate, we use RANKs to denote the procedure with data splitting and RANK to denote the procedure without data splitting. To examine the feature selection performance, we look at both measures of FDR and power. The empirical versions of FDR and power based on 100 replications are reported in Tables 1–2 for simulation example 1 and Tables 3–5 for simulation examples 2–4, respectively. In particular, Table 1 compares the performance of RANK and RANK+ with that of RANKs and RANKs+, where the subscript + stands for the corresponding method when the modified knockoff threshold T+ is used. We see from Table 1 that RANK and RANK+ closely mimic RANKs and RANKs+, respectively, suggesting that data splitting is more of a technical assumption. In addition, the FDR is approximately controlled at the target level of q = 0.2 with high power, which is in line with our theory. Table 2 summarizes the comparison of RANKs with the HKF procedure for the high-dimensional linear regression model. Although both methods are based on data splitting, their practical performance is very different. It is seen that although HKF controls the FDR below the target level, it suffers from a loss of power due to the use of the screening step, and the power deteriorates as dimensionality p increases. In contrast, the performance of RANKs is robust across different correlation levels ρ and dimensionality p. It is worth mentioning that the HKF procedure with data recycling performed generally better than that with data splitting alone. Thus only the results for the former version are reported in Table 2 for simplicity.
Table 1: Simulation example 1 (linear model (20)): empirical FDR and power of RANK, RANK+, RANKs, and RANKs+ at target FDR level q = 0.2.
ρ | p | RANK FDR | RANK Power | RANK+ FDR | RANK+ Power | RANKs FDR | RANKs Power | RANKs+ FDR | RANKs+ Power
---|---|---|---|---|---|---|---|---|---
0 | 200 | 0.2054 | 1.00 | 0.1749 | 1.00 | 0.1909 | 1.00 | 0.1730 | 1.00 |
400 | 0.2062 | 1.00 | 0.1824 | 1.00 | 0.2010 | 1.00 | 0.1801 | 1.00 | |
600 | 0.2263 | 1.00 | 0.1940 | 1.00 | 0.2206 | 1.00 | 0.1935 | 1.00 | |
800 | 0.2385 | 1.00 | 0.1911 | 1.00 | 0.2247 | 1.00 | 0.1874 | 1.00 | |
1000 | 0.2413 | 1.00 | 0.2083 | 1.00 | 0.2235 | 1.00 | 0.1970 | 1.00 | |
0.5 | 200 | 0.2087 | 1.00 | 0.1844 | 1.00 | 0.1875 | 1.00 | 0.1692 | 1.00 |
400 | 0.2144 | 1.00 | 0.1879 | 1.00 | 0.1954 | 1.00 | 0.1703 | 1.00 | |
600 | 0.2292 | 1.00 | 0.1868 | 1.00 | 0.2062 | 1.00 | 0.1798 | 1.00 | |
800 | 0.2398 | 1.00 | 0.1933 | 1.00 | 0.2052 | 0.9997 | 0.1805 | 0.9997 | |
1000 | 0.2412 | 1.00 | 0.2019 | 1.00 | 0.2221 | 0.9984 | 0.2034 | 0.9984 |
Table 2: Simulation example 1 (linear model (20)): empirical FDR and power of RANKs and RANKs+ compared with HKF and HKF+ at target FDR level q = 0.2.
ρ | p | RANKs FDR | RANKs Power | RANKs+ FDR | RANKs+ Power | HKF FDR | HKF Power | HKF+ FDR | HKF+ Power
---|---|---|---|---|---|---|---|---|---
0 | 200 | 0.1858 | 1.00 | 0.1785 | 1.00 | 0.1977 | 0.9849 | 0.1749 | 0.9837 |
400 | 0.1895 | 1.00 | 0.1815 | 1.00 | 0.2064 | 0.9046 | 0.1876 | 0.8477 | |
600 | 0.2050 | 1.00 | 0.1702 | 1.00 | 0.1964 | 0.8424 | 0.1593 | 0.7668 | |
800 | 0.2149 | 1.00 | 0.1921 | 1.00 | 0.1703 | 0.7513 | 0.1218 | 0.6241 | |
1000 | 0.2180 | 1.00 | 0.1934 | 1.00 | 0.1422 | 0.7138 | 0.1010 | 0.5550 | |
0.5 | 200 | 0.1986 | 1.00 | 0.1618 | 1.00 | 0.1992 | 0.9336 | 0.1801 | 0.9300 |
400 | 0.1971 | 1.00 | 0.1805 | 1.00 | 0.1657 | 0.8398 | 0.1363 | 0.7825 | |
600 | 0.2021 | 1.00 | 0.1757 | 1.00 | 0.1253 | 0.7098 | 0.0910 | 0.6068 | |
800 | 0.2018 | 1.00 | 0.1860 | 1.00 | 0.1374 | 0.6978 | 0.0917 | 0.5792 | |
1000 | 0.2097 | 0.9993 | 0.1920 | 0.9993 | 0.1552 | 0.6486 | 0.1076 | 0.5524 |
Table 3: Simulation example 2 (partially linear model (21)): empirical FDR and power of RANK, RANK+, RANKs, and RANKs+ at target FDR level q = 0.2.
ρ | p | RANK FDR | RANK Power | RANK+ FDR | RANK+ Power | RANKs FDR | RANKs Power | RANKs+ FDR | RANKs+ Power
---|---|---|---|---|---|---|---|---|---
0 | 200 | 0.2117 | 1.00 | 0.1923 | 1.00 | 0.1846 | 0.9976 | 0.1699 | 0.9970 |
400 | 0.2234 | 1.00 | 0.1977 | 1.00 | 0.1944 | 0.9970 | 0.1747 | 0.9966 | |
600 | 0.2041 | 1.00 | 0.1776 | 1.00 | 0.2014 | 0.9968 | 0.1802 | 0.9960 | |
800 | 0.2298 | 1.00 | 0.1810 | 1.00 | 0.2085 | 0.9933 | 0.1902 | 0.9930 | |
1000 | 0.2322 | 1.00 | 0.1979 | 1.00 | 0.2113 | 0.9860 | 0.1851 | 0.9840 | |
0.5 | 200 | 0.2180 | 1.00 | 0.1929 | 1.00 | 0.1825 | 0.9952 | 0.1660 | 0.9949 |
400 | 0.2254 | 1.00 | 0.1966 | 1.00 | 0.1809 | 0.9950 | 0.1628 | 0.9948 | |
600 | 0.2062 | 1.00 | 0.1814 | 1.00 | 0.2038 | 0.9945 | 0.1898 | 0.9945 | |
800 | 0.2264 | 1.00 | 0.1948 | 1.00 | 0.2019 | 0.9916 | 0.1703 | 0.9906 | |
1000 | 0.2316 | 1.00 | 0.2033 | 1.00 | 0.2127 | 0.9830 | 0.1857 | 0.9790 |
Table 5: Simulation example 4 (additive model (23)): empirical FDR and power of RANK, RANK+, RANKs, and RANKs+ at target FDR level q = 0.2.
ρ | p | RANK FDR | RANK Power | RANK+ FDR | RANK+ Power | RANKs FDR | RANKs Power | RANKs+ FDR | RANKs+ Power
---|---|---|---|---|---|---|---|---|---
0 | 200 | 0.1926 | 0.9780 | 0.1719 | 0.9690 | 0.2207 | 0.9490 | 0.1668 | 0.9410 |
400 | 0.2094 | 0.9750 | 0.1773 | 0.9670 | 0.2236 | 0.9430 | 0.1639 | 0.9340 | |
600 | 0.2155 | 0.9670 | 0.1729 | 0.9500 | 0.2051 | 0.9310 | 0.1620 | 0.9220 | |
800 | 0.2273 | 0.9590 | 0.1825 | 0.9410 | 0.2341 | 0.9280 | 0.1905 | 0.9200 | |
1000 | 0.2390 | 0.9570 | 0.1751 | 0.9350 | 0.2350 | 0.9140 | 0.1833 | 0.9070 | |
0.5 | 200 | 0.1904 | 0.9680 | 0.1733 | 0.9590 | 0.2078 | 0.9370 | 0.1531 | 0.9330 |
400 | 0.2173 | 0.9650 | 0.1701 | 0.9540 | 0.2224 | 0.9360 | 0.1591 | 0.9280 | |
600 | 0.2267 | 0.9600 | 0.1656 | 0.9360 | 0.2366 | 0.9340 | 0.1981 | 0.9270 | |
800 | 0.2306 | 0.9540 | 0.1798 | 0.9320 | 0.2332 | 0.9150 | 0.1740 | 0.9110 | |
1000 | 0.2378 | 0.9330 | 0.1793 | 0.9270 | 0.2422 | 0.8970 | 0.1813 | 0.8880 |
For the high-dimensional nonlinear settings of the partially linear model, single-index model, and additive model in simulation examples 2–4, we see from Tables 3–5 that RANKs and RANKs+ performed well and similarly to RANK and RANK+ in terms of both FDR control and power across different scenarios. These results demonstrate the model-X feature of our procedure for large-scale inference in nonlinear models.
5. Real data analysis
In addition to the simulation examples presented in Section 4, we also demonstrate the practical utility of our RANK procedure on a gene expression data set based on Affymetrix GeneChip microarrays for the plant Arabidopsis thaliana [63]. It is well known that isoprenoids play a key role in plant and animal physiological processes, such as photosynthesis, respiration, regulation of growth, and defense against pathogens. In particular, [38] found that many of the genes expressed preferentially in mature leaves are readily recognizable as genes involved in photosynthesis, including rubisco activase (AT2G39730), fructose bisphosphate aldolase (AT4G38970), and two glycine hydroxymethyltransferase genes (AT4G37930 and AT5G26780). Isoprenoids also serve as important ingredients in various drugs (e.g., against cancer and malaria), fragrances (e.g., menthol), and food colorants (e.g., carotenoids). See, for instance, [63, 53, 49] for studies of the mechanism of isoprenoid synthesis in a wide range of applications.
The aforementioned data set in [63] consists of 118 gene expression patterns under various experimental conditions for 39 isoprenoid genes, 15 of which are assigned to the regulatory pathway, 19 to the plastidal pathway, and the remaining 5 isoprenoid genes encode proteins located in the mitochondrion. Moreover, 795 additional genes from 56 metabolic pathways are incorporated into the isoprenoid genetic network. Thus the combined data set is comprised of a sample of n = 118 gene expression patterns for 834 genes. This data set was studied in [65] for identifying genes that exhibit significant association with the specific isoprenoid gene GGPPS11 (AGI code AT4G36810). Motivated by [65], we choose the expression level of isoprenoid gene GGPPS11 as the response and treat the remaining p = 833 genes from 58 different metabolic pathways as the covariates, so that the dimensionality p is much larger than sample size n. All the variables are logarithmically transformed. To identify important genes associated with isoprenoid gene GGPPS11, we employ the RANK method using the Lasso procedure with target FDR level q = 0.2. The implementation of RANK is the same as that in Section 4 for the linear model. Since the sample size of this data set is relatively small, we choose to implement RANK without sample splitting, which has been demonstrated in Section 4 to be capable of controlling the FDR at the desired level.
Table 6 lists the selected genes by RANK, RANK+, and Lasso along with their associated pathways. We see from Table 6 that RANK, RANK+, and Lasso selected 9 genes, 7 genes, and 17 genes, respectively. The common set of four genes, AT4G38970, AT2G27820, AT2G01880, and AT5G19220, was selected by all three methods. The values of the adjusted R2 for these three selected models are equal to 0.7523, 0.7515, and 0.7843, respectively, showing similar levels of goodness of fit. In particular, among the top 20 genes selected using the Elem-OLS method with entrywise transformed Gram matrix in [65], we found that five genes (AT1G57770, AT1G78670, AT3G56960, AT2G27820, and AT4G13700) selected by RANK are included in this list of top 20 genes, and three genes (AT1G57770, AT1G78670, and AT2G27820) picked by RANK+ are contained in the same list.
Table 6: Genes selected by RANK, RANK+, and Lasso for the real data set, along with their associated metabolic pathways.
Pathway (RANK) | Gene (RANK) | Pathway (RANK+) | Gene (RANK+)
---|---|---|---
Calvin | AT4G38970 | Calvin | AT4G38970 |
Carote | AT1G57770 | Carote | AT1G57770 |
Folate | AT1G78670 | Folate | AT1G78670 |
Inosit | AT3G56960 | ||
Phenyl | AT2G27820 | Phenyl | AT2G27820 |
Purine | AT3G01820 | Purine | AT3G01820 |
Ribo | AT4G13700 | ||
Ribo | AT2G01880 | Ribo | AT2G01880 |
Starch | AT5G19220 | Starch | AT5G19220 |
Lasso
Pathway | Gene | Pathway | Gene
---|---|---|---
Berber | AT2G34810 | Porphy | AT4G18480 |
Calvin | AT4G38970 | Pyrimi | AT5G59440 |
Calvin | AT3G04790 | Ribo | AT2G01880 |
Glutam | AT5G18170 | Starch | AT5G19220 |
Glycol | AT4G27600 | Starch | AT2G21590 |
Pentos | AT3G04790 | Trypto | AT5G48220 |
Phenyl | AT2G27820 | Trypto | AT5G17980 |
Porphy | AT1G03475 | Mevalo | AT5G47720 |
Porphy | AT3G51820 |
To gain some scientific insights into the selected genes, we conducted Gene Ontology (GO) enrichment analysis to interpret, from the biological point of view, the influence of selected genes on isoprenoid gene GGPPS11, which is known as a precursor to chloroplast, carotenoids, tocopherols, and abscisic acids. Specifically, in the enrichment test of GO biological process, gene AT1G57770 is involved in carotenoid biosynthetic process. In the GO cellular component enrichment test, genes AT4G38970 and AT5G19220 are located in chloroplast, chloroplast envelope, and chloroplast stroma; gene AT1G57770 is located in chloroplast and mitochondrion; and gene AT2G27820 is located in chloroplast, chloroplast stroma, and cytosol. The GO molecular function enrichment test shows that gene AT4G38970 has fructose-bisphosphate aldolase activity and gene AT1G57770 has carotenoid isomerase activity and oxidoreductase activity. These scientific insights in terms of biological process, cellular component, and molecular function suggest that the selected genes may have meaningful biological relationship with the target isoprenoid gene GGPPS11. See, for example, [38, 50, 62] for more discussions on these genes.
6. Discussions
Our analysis in this paper reveals that the suggested RANK method exploiting the general framework of model-X knockoffs introduced in [9] can asymptotically control the FDR in general high-dimensional nonlinear models with unknown covariate distribution. The robustness of the FDR control under estimated covariate distribution is enabled by imposing the Gaussian graphical structure on the covariates. Such a structural assumption has been widely employed to model the association networks among the covariates and extensively studied in the literature. Our method and theoretical results are powered by scalable large precision matrix estimation with statistical efficiency. It would be interesting to extend the robustness theory of the FDR control beyond Gaussian designs as well as for heavy-tailed data and dependent observations.
Our work also provides a first attempt at the power analysis for the model-X knockoffs framework. The nontrivial technical analysis establishes that RANK can have asymptotic power one in the high-dimensional linear model setting when the Lasso is used for sparse regression. It would be interesting to extend the power analysis for RANK to a wide class of sparse regression and feature screening methods, including SCAD, SIS, and many other concave regularization methods [22, 23, 17, 30]. Though more challenging, it is also important to investigate the power property of RANK beyond linear models. The power analysis in general high-dimensional nonlinear models is highly challenging for several reasons. First, the minimum signal strength needs to be characterized precisely in the power analysis. Yet unlike the beta-min measure in the linear model, there is no widely accepted measure with an explicit formula for the minimum signal strength in general high-dimensional nonlinear models. Second, the estimation error associated with each covariate plays an important role in the power analysis. However, in general nonlinear models it is unclear how to disentangle the individual estimation error corresponding to each covariate. Third, the knockoffs procedure builds on some underlying variable selection method, which is itself highly challenging both empirically and theoretically in general high-dimensional nonlinear models.
Our RANK procedure utilizes the idea of data splitting, which plays an important role in our technical analysis. Our numerical examples, however, suggest that data splitting is more of a technical assumption than a practical necessity. It would be interesting to develop theoretical guarantees for RANK without data splitting. These extensions are interesting topics for future research.
Supplementary Material
Table 4: FDR and power of RANK, RANK+, RANKs, and RANKs+ for different values of ρ and p.

| ρ | p | RANK FDR | RANK power | RANK+ FDR | RANK+ power | RANKs FDR | RANKs power | RANKs+ FDR | RANKs+ power |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 200 | 0.1893 | 1 | 0.1413 | 1 | 0.1899 | 1 | 0.1383 | 1 |
| 0 | 400 | 0.2163 | 1 | 0.1598 | 1 | 0.2450 | 0.998 | 0.1676 | 0.997 |
| 0 | 600 | 0.2166 | 1 | 0.1358 | 1 | 0.2314 | 0.999 | 0.1673 | 0.998 |
| 0 | 800 | 0.1964 | 1 | 0.1406 | 1 | 0.2443 | 0.992 | 0.1817 | 0.992 |
| 0 | 1000 | 0.2051 | 1 | 0.1340 | 1 | 0.2431 | 0.969 | 0.1611 | 0.962 |
| 0.5 | 200 | 0.2189 | 1 | 0.1591 | 1 | 0.2322 | 1 | 0.1626 | 1 |
| 0.5 | 400 | 0.2005 | 1 | 0.1314 | 1 | 0.2099 | 0.996 | 0.1615 | 0.995 |
| 0.5 | 600 | 0.2064 | 1 | 0.1426 | 1 | 0.2331 | 0.998 | 0.1726 | 0.998 |
| 0.5 | 800 | 0.2049 | 1 | 0.1518 | 1 | 0.2288 | 0.994 | 0.1701 | 0.994 |
| 0.5 | 1000 | 0.2259 | 1 | 0.1423 | 1 | 0.2392 | 0.985 | 0.1850 | 0.983 |
Acknowledgments
This work was supported by NIH Grant 1R01GM131407–01, NSF CAREER Award DMS-1150318, a grant from the Simons Foundation, and Adobe Data Science Research Award. The authors sincerely thank the Joint Editor, Associate Editor, and referees for their valuable comments that helped improve the article substantially.
A. Proofs of main results
We provide the proofs of Theorems 1–3, Propositions 1–2, and Lemmas 1–2 in this appendix. Additional technical details for the proofs of Lemmas 3–8 are included in the Supplementary Material. To ease the technical presentation, we first introduce some notation. Let Λmin(·) and Λmax(·) be the smallest and largest eigenvalues of a symmetric matrix. For any matrix A = (aij), denote by ‖A‖1, ‖A‖max, ‖A‖2, and ‖A‖F the matrix ℓ1-norm, entrywise maximum norm, spectral norm, and Frobenius norm, respectively. For any index set S, we use AS to represent the submatrix of A formed by the columns in S and AS,S to denote the principal submatrix of A formed by the columns and rows in S.
A.1. Proofs of Lemma 1 and Theorem 1
Observe that the choice of certainly satisfies the sure screening property. We see that Lemma 1 and Theorem 1 are specific cases of Lemma 6 in Section B.4 of Supplementary Material and Theorem 3, respectively. Thus we only prove the latter ones.
A.2. Proof of Proposition 1
In this proof, we will consider Ω and as deterministic parameters and focus only on the second half of sample (X(2), y(2)) used in FDR control. Thus, we will drop the superscripts in (X(2), y(2)) whenever there is no confusion. For a given precision matrix Ω, the matrix of knockoff variables
can be generated using (12) with Ω0 replaced by Ω. Here, we use the superscript Ω to emphasize the dependence of knockoffs matrix on Ω. Recall that for a given set with , we calculate the knockoff statistics Wj’s using (y, XS, ). Thus, the FDR function can be written as
(26)
where . It is seen that the function gn is the conditional FDP when knockoff variables , 1 ≤ i ≤ n, are simulated using Ω and only variables in set are used to construct knockoff statistics Wj. We want to emphasize that since given X the response y is independent of , the functional form of g1,n is free of the matrix Ω used to generate knockoff variables.
Using the technical arguments in [9], we can show that for any sample size n and all subsets that are independent of the original data (X(2), y(2)) used in the knockoffs procedure. Observe that the only difference between and is that different precision matrices are used to generate knockoff variables. We restrict ourselves to the following data generating scheme
where CΩ = Ip − Ωdiag{s}, are i.i.d. normal random vectors that are independent of the xi’s, and BΩ = (2diag{s} − diag{s}Ωdiag{s})^{1/2}. For simplicity, write , and , i.e., the matrices corresponding to the oracle case. Then restricted to set ,
where the subscript means the submatrix (subvector) formed by columns (components) in set . We want to make connections between and . To this end, construct
(27)
where . Then it is seen that (xi, ) and (xi, ) have identical joint distribution. Although cannot be calculated in practice for a given Ω due to its dependency on Ω0, the random vector acts as a proxy of in studying the FDR function. In fact, by construction (26) can be further written as
(28)
where .
Observe that the randomness in both and is fully determined by the same random matrices X and , which are independent of each other and whose rows are i.i.d. copies from N (0, Σ0) and N(0, , ), respectively. For this reason, we can rewrite the FDR function in (28) as
where is the augmented matrix collecting columns of X and , and
which completes the proof of Proposition 1.
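As an empirical check on the generation scheme appearing in this proof, the short Python sketch below draws (X, X̃) with Ω = Ω0 via the construction with CΩ and BΩ above and verifies that the augmented rows have covariance close to the target knockoff covariance [[Σ0, Σ0 − diag(s)], [Σ0 − diag(s), Σ0]]. The AR(1) choice of Σ0 and the slightly conservative equicorrelated-type choice of s are assumptions made purely for illustration.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
n, p, rho = 100000, 5, 0.5
Sigma0 = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # AR(1) covariance (illustrative)
Omega0 = np.linalg.inv(Sigma0)
s_val = 0.99 * min(1.0, 2 * np.linalg.eigvalsh(Sigma0).min())          # stay strictly inside the feasible region
D = np.diag(np.full(p, s_val))

C = np.eye(p) - Omega0 @ D                    # C_Omega0 = I_p - Omega0 diag(s)
B = np.real(sqrtm(2 * D - D @ Omega0 @ D))    # B_Omega0 = (2 diag(s) - diag(s) Omega0 diag(s))^{1/2}

X = rng.multivariate_normal(np.zeros(p), Sigma0, size=n)
Z = rng.standard_normal((n, p))
X_tilde = X @ C + Z @ B                       # rows follow x_tilde_i = C_Omega0' x_i + B_Omega0 z_i

G_hat = np.cov(np.hstack([X, X_tilde]), rowvar=False)
G = np.block([[Sigma0, Sigma0 - D], [Sigma0 - D, Sigma0]])
print("max entrywise deviation from target covariance:", np.abs(G_hat - G).max())
```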
A.3. Lemma 2 and its proof
Lemma 2. Assume that with an → 0 some deterministic sequence and all the notation the same as in Proposition 1. If Λmin{2diag(s) − diag(s)Ω0diag(s)} ≥ c0 and for some constant c0 > 0, then it holds that
where is given in (27) and c1 > 0 is some uniform constant independent of set .
Proof. We use C to denote some generic positive constant whose value may change from line to line. First note that
(29)
Further, since Σ0 − 2^{−1}diag(s) is positive definite, it follows that . Thus it holds that
For n large enough, by the triangle inequality we have
In addition, . The above two inequalities together with Lemma 2.2 in [54] entail that
(30)
where the last step is because of (29). Thus it follows that
(31)
where the last step comes from assumption . This concludes the proof of Lemma 2.
A.4. Proof of Theorem 2
We now proceed to prove Theorem 2 with the aid of Lemma 2 in Section A.3. We use the same notation as in the proof of Proposition 1 and use C > 0 to denote a generic constant whose value may change from line to line.
We start with proving (15). By Condition 4, we have
(32)
where the constant L is uniform over all and . Denote by . By the definition of HΩ, it holds that
By the definition and matrix norm inequality, we deduce
Since Σ0 − 2^{−1}diag(s) is positive definite, it follows that sj ≤ 2Λmax(Σ0) ≤ 2/Λmin(Ω0) ≤ C. Thus . This along with and Lemma 2 entails that can be further bounded as
Combining the above result with (32) leads to
(33)
which completes the proof of (15).
We next establish the FDR control for RANK. By Condition 7, the event occurs with probability at least . Since and are estimates from independent subsample (X(1), y(1)), it follows from (15) that
(34)
Now note that by the property of conditional expectation, we have
Let us first consider term I1. By (34), it holds that
We next consider term I2. Since FDP is always bounded between 0 and 1, we have
Combining the above two results yields
This together with the result of mentioned in the proof of Proposition 1 in Section A.2 completes the proof of Theorem 2.
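For clarity, the decomposition just used can be summarized schematically in our own notation, writing \(\mathcal{E}_n\) for the good event from Condition 7 on which the bound (34) applies; this display is only a summary of the two terms I1 and I2 treated above, not a restatement of the paper's exact equations:
\[
\mathbb{E}[\mathrm{FDP}] \;=\; \underbrace{\mathbb{E}\big[\mathrm{FDP}\mid \mathcal{E}_n\big]\,\mathbb{P}(\mathcal{E}_n)}_{I_1} \;+\; \underbrace{\mathbb{E}\big[\mathrm{FDP}\mid \mathcal{E}_n^{c}\big]\,\mathbb{P}(\mathcal{E}_n^{c})}_{I_2},
\qquad I_2 \;\le\; \mathbb{P}(\mathcal{E}_n^{c}),
\]
where the bound on \(I_2\) uses only that the FDP lies between 0 and 1.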
A.5. Proof of Theorem 3
In this proof, we will drop the superscripts in (X(2), y(2)) whenever there is no confusion. By the definition of power, for any given precision matrix Ω and reduced model the power can be written as
where f is some function describing how the empirical power depends on the data. Note that is a stochastic process indexed by Ω, and we care about the mean of this process. Our main idea is to construct another stochastic process indexed by Ω which has the same mean but possibly different distribution. Then by studying the mean of this new stochastic process, we can prove the desired result.
We next provide more technical details of the proof. The proxy process is defined as
(35)
where is the submatrix of CΩ = Ip − Ωdiag{s}, is the submatrix of BΩ = (2diag(s) − diag(s)Ωdiag(s))^{1/2}, and . It is easy to see that and defined using (12) have the same distribution. Since Z is independent of (X, y), we can further conclude that and have the same joint distribution for each given Ω and . Thus the power function can be further written as
Therefore, we only need to study the power of the knockoffs procedure based on the pseudo data .
To simplify the technical presentation, we will slightly abuse the notation and still use to represent the Lasso solution based on pseudo data . We will use c and C to denote some generic positive constants whose values may change from line to line. Define
(36)
with the augmented design matrix. For any given set with , (2p) × (2p) matrix A, and (2p)-vector a, we will abuse the notation and denote by the principal submatrix formed by columns and rows in set { or } and the subvector formed by components in set { or }. For any p × p matrix B (or p-vector), we define (or )in the same way meaning that columns (or components) in set will be taken to form the submatrix (or subvector).
With the above notation, note that the Lasso solution restricted to variables in can be obtained by setting for j ∈ {1 ≤ j ≤ 2p : and } and minimizing the following objective function
(37)
By Proposition 2 in Section A.6, it holds that with probability at least ,
(38)
where λ = Cλ{(log p)/n}^{1/2} with Cλ > 0 some constant and is some positive constant. By Condition 2 and the assumption λ = Cλ{(log p)/n}^{1/2}, we have
(39)
Denote by the LCD based on the above . Recall that by assumption, there are no ties in the magnitude of nonzero and no ties in the nonzero components of the Lasso solution with asymptotic probability one. Let be the ordered knockoff statistics according to magnitude. Denote by j* the index such that . Then by the definition of T, it holds that . We next analyze the two cases of and separately.
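Before treating the two cases, it may help to recall the generic form of the LCD statistics and of the threshold T in the model-X knockoffs framework of [9]; the display below uses our own schematic notation (with \(\widehat\beta\) the Lasso solution on the augmented design) and is intended only as a reminder of the quantities being manipulated, not as a restatement of the paper's exact definitions:
\[
W_j \;=\; \big|\widehat\beta_j\big| - \big|\widehat\beta_{j+p}\big|, \qquad
T \;=\; \min\Big\{ t \in \mathcal{W} : \frac{\#\{j : W_j \le -t\}}{\#\{j : W_j \ge t\} \vee 1} \le q \Big\},
\]
where \(\mathcal{W} = \{|W_j| : |W_j| > 0\}\) and q is the target FDR level.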
Case 1. For the case of , , and . Let with φ the positive solution to the equation φ^2 − φ − 1 = 0, that is, the golden ratio. If , then we can prove that using the same arguments as in equation (44) since . Thus it reduces to Case 2 below and the arguments therein follow.
On the contrary, if then we have
(40)
since . Let us now focus on . We observe that
(41)
where . Meanwhile, note that in view of (38) we have with probability at least ,
By (39), we can further deduce from the above inequality that
which together with entails that
Combining this result with (41) yields
(42)
Thus in view of inequalities (40) and (42), with probability at least it holds uniformly over all and that
Case 2. We next consider the case of . By the definitions of T and j*, we have
(43)
since otherwise we would reduce T to to get the new smaller threshold with the criterion still satisfied. We next bound T using the results in Lemma 6 in Section B.4 of Supplementary Material. Observe that (43) and Lemma 6 lead to with asymptotic probability one. Moreover, when we have and thus . Using (38), we obtain
(44)
Combining these results leads to and thus it holds that
(45)
for large enough n since κn → ∞ as n → ∞ and Cλ is some positive constant.
We now proceed to prove the theorem by showing that the Type II error is small. In light of (38), we derive
since when . Using the triangle inequality and noting that for , we can conclude that
Thus it follows that
uniformly over all and since T ≤ κnλ/(Cλφ).
Combining the above two scenarios, we have shown that with asymptotic probability one, uniformly over all and it holds that with probability at least ,
(46)
since by the definition of φ. This along with the assumption in Condition 7 gives
which concludes the proof of Theorem 3.
A.6. Proposition 2 and its proof
Proposition 2. Assume that Conditions 1 and 6 hold, the smallest eigenvalue of 2diag(s) − diag(s)Ω0diag(s) is positive and bounded away from 0, and λ = Cλ{(log p)/n}^{1/2} with Cλ > 0 some constant. Let denote the expanded true regression coefficient vector. If and , then with probability at least ,
where is defined in the proof of Theorem 3 in Section A.5 and c3, c4, , and are all positive constants.
Proof. We adopt the same notation as used in the proof of Theorem 3 in Section A.5. Let us introduce some key events which will be used in the technical analysis. Define
(47)
(48)
where and with C4, C5 > 0 some constants. Then by Lemmas 4 and 7 in Sections B.2 and B.5 of Supplementary Material,
(49)
for some constant c3 > 0. Hereafter we will condition on the event .
Since is the minimizer of the objective function in (37), we have
Some routine calculations lead to
(50)
Let . Then we can simplify (50) as
(51)
Observe that and with the support of the true regression coefficient vector. Then it follows from that
Denote by and . Then we can further deduce
that is,
(52)
When λ ≥ 2λ0, it holds that
(53)
Since , we obtain the basic inequality
(54)
on event . It follows from (53) that
(55)
with
With some matrix calculations, we can show that
since both Σ0 and 2diag(s)−diag(s)Ω0diag(s) have eigenvalues bounded away from 0. Thus the left hand side of (55) can be bounded from below by .
It remains to bound the right hand side of (55). For the first term, it follows from the Cauchy–Schwarz inequality that
(56)
For the last term on the right-hand side of (55), by conditioning on event and using the Cauchy–Schwarz inequality, the triangle inequality, and the basic inequality (54), we can obtain
Combining the above results, we can reduce inequality (55) to
Since sa2,n → 0, there exists a positive constant such that it holds for n large enough that
Further, by (56) we have
for some constant and for large enough n. Note that by definition, and . Therefore, summarizing the above results completes the proof of Proposition 2.
References
- [1]. Abramovich F, Benjamini Y, Donoho DL, and Johnstone IM (2006). Adapting to unknown sparsity by controlling the false discovery rate. Ann. Statist. 34, 584–653.
- [2]. Barber RF and Candès EJ (2015). Controlling the false discovery rate via knockoffs. The Annals of Statistics 43, 2055–2085.
- [3]. Barber RF and Candès EJ (2016). A knockoff filter for high-dimensional selective inference. arXiv:1602.03574.
- [4]. Benjamini Y and Hochberg Y (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B 57, 289–300.
- [5]. Benjamini Y and Yekutieli D (2001). The control of the false discovery rate in multiple testing under dependency. Ann. Statist. 29, 1165–1188.
- [6]. Bickel PJ and Levina E (2008). Regularized estimation of large covariance matrices. The Annals of Statistics 36, 199–227.
- [7]. Bickel PJ, Ritov Y, and Tsybakov AB (2009). Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics 37, 1705–1732.
- [8]. Bühlmann P and van de Geer S (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer.
- [9]. Candès EJ, Fan Y, Janson L, and Lv J (2018). Panning for gold: ‘model-X’ knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society Series B 80, 551–577.
- [10]. Chen M, Ren Z, Zhao H, and Zhou HH (2016). Asymptotically normal and efficient estimation of covariate-adjusted Gaussian graphical model. Journal of the American Statistical Association 111, 394–406.
- [11]. Chouldechova A and Hastie T (2015). Generalized additive model selection. arXiv:1506.03850.
- [12]. Clarke S and Hall P (2009). Robustness of multiple testing procedures against dependence. Ann. Statist. 37, 332–358.
- [13]. Efron B (2007a). Correlation and large-scale simultaneous significance testing. J. Amer. Statist. Assoc. 102, 93–103.
- [14]. Efron B (2007b). Size, power and false discovery rates. Ann. Statist. 35, 1351–1377.
- [15]. Efron B and Tibshirani R (2002). Empirical Bayes methods and false discovery rates for microarrays. Genetic Epidemiology 23, 70–86.
- [16]. Engle R, Granger C, Rice J, and Weiss A (1986). Semiparametric estimates of the relation between weather and electricity sales. Journal of the American Statistical Association 81, 310–320.
- [17]. Fan J and Fan Y (2008). High-dimensional classification using features annealed independence rules. The Annals of Statistics 36, 2605–2637.
- [18]. Fan J and Gijbels I (1996). Local Polynomial Modelling and Its Applications. London: Chapman & Hall/CRC.
- [19]. Fan J, Guo S, and Hao N (2012). Variance estimation using refitted cross-validation in ultrahigh dimensional regression. J. Roy. Statist. Soc. Ser. B 74, 37–65.
- [20]. Fan J, Hall P, and Yao Q (2007). To how many simultaneous hypothesis tests can normal, Student's t or bootstrap calibration be applied? Journal of the American Statistical Association 102, 1282–1288.
- [21]. Fan J, Han X, and Gu W (2012). Control of the false discovery rate under arbitrary covariance dependence (with discussion). Journal of the American Statistical Association 107, 1019–1045.
- [22]. Fan J and Li R (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96, 1348–1360.
- [23]. Fan J and Lv J (2008). Sure independence screening for ultrahigh dimensional feature space (with discussion). Journal of the Royal Statistical Society Series B 70, 849–911.
- [24]. Fan J and Lv J (2010). A selective overview of variable selection in high dimensional feature space (invited review article). Statistica Sinica 20, 101–148.
- [25]. Fan J, Samworth RJ, and Wu Y (2009). Ultrahigh dimensional variable selection: beyond the linear model. J. Mach. Learn. Res. 10, 1829–1853.
- [26]. Fan Y, Demirkaya E, and Lv J (2017). Nonuniformity of p-values can occur early in diverging dimensions. arXiv preprint arXiv:1705.03604.
- [27]. Fan Y and Fan J (2011). Testing and detecting jumps based on a discretely observed process. Journal of Econometrics 164, 331–344.
- [28]. Fan Y, Kong Y, Li D, and Lv J (2016). Interaction pursuit with feature screening and selection. arXiv preprint arXiv:1605.08933.
- [29]. Fan Y, Kong Y, Li D, and Zheng Z (2015). Innovated interaction screening for high-dimensional nonlinear classification. The Annals of Statistics 43, 1243–1272.
- [30]. Fan Y and Lv J (2013). Asymptotic equivalence of regularization methods in thresholded parameter space. Journal of the American Statistical Association 108, 1044–1061.
- [31]. Fan Y and Lv J (2016). Innovated scalable efficient estimation in ultra-large Gaussian graphical models. The Annals of Statistics 44, 2098–2126.
- [32]. Hall P and Wang Q (2010). Strong approximations of level exceedences related to multiple hypothesis testing. Bernoulli 16, 418–434.
- [33]. Härdle W, Liang H, and Gao JT (2000). Partially Linear Models. Heidelberg: Springer Physica Verlag.
- [34]. Härdle W and Stoker TM (1989). Investigating smooth multiple regression by the method of average derivatives. Journal of the American Statistical Association 84, 986–995.
- [35]. Hastie T and Tibshirani R (1990). Generalized Additive Models. London: Chapman & Hall/CRC.
- [36]. Hastie T, Tibshirani R, and Friedman J (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd edition). Springer.
- [37]. Horowitz JL (2009). Semiparametric and Nonparametric Methods in Econometrics. Springer.
- [38]. Horvath DP, Schaffer R, and Wisman E (2003). Identification of genes induced in emerging tillers of wild oat (Avena fatua) using Arabidopsis microarrays. Weed Science 51, 503–508.
- [39]. Huber PJ (1973). Robust regression: asymptotics, conjectures and Monte Carlo. The Annals of Statistics 1, 799–821.
- [40]. Ichimura H (1993). Semiparametric least squares (SLS) and weighted SLS estimation of single-index models. Journal of Econometrics 58, 71–120.
- [41]. Lauritzen SL (1996). Graphical Models. Oxford University Press.
- [42]. Li Q and Racine JS (2007). Nonparametric Econometrics: Theory and Practice. Princeton University Press.
- [43]. Lin Q, Zhao Z, and Liu JS (2016). Sparse sliced inverse regression for high dimensional data. arXiv:1611.06655.
- [44]. Liu W and Shao Q-M (2014). Phase transition and regularized bootstrap in large-scale t-tests with false discovery rate control. Ann. Statist. 42, 2003–2025.
- [45]. Lv J (2013). Impacts of high dimensionality in finite samples. The Annals of Statistics 41, 2236–2262.
- [46]. McCullagh P and Nelder JA (1989). Generalized Linear Models. Chapman and Hall, London.
- [47]. Meier L, van de Geer S, and Bühlmann P (2009). High-dimensional additive modeling. The Annals of Statistics 37, 3779–3821.
- [48]. Meng L, Sun F, Zhang X, and Waterman MS (2011). Sequence alignment as hypothesis testing. J. Comput. Biol. 18, 677–691.
- [49]. Prelić A, Bleuler S, Zimmermann P, Wille A, Bühlmann P, Gruissem W, Hennig L, Thiele L, and Zitzler E (2006). A systematic comparison and evaluation of biclustering methods for gene expression data. Bioinformatics 22, 1122–1129.
- [50]. Ramel F, Sulmon C, Bogard M, Couèe I, and Gouesbet G (2009). Differential patterns of reactive oxygen species and antioxidative mechanisms during atrazine injury and sucrose-induced tolerance in Arabidopsis thaliana plantlets. BMC Plant Biology 9, 1–18.
- [51]. Ravikumar P, Liu H, Lafferty J, and Wasserman L (2009). Sparse additive models. Journal of the Royal Statistical Society Series B 71, 1009–1030.
- [52]. Ren Z, Kang Y, Fan Y, and Lv J (2018). Tuning-free heterogeneous inference in massive networks. Journal of the American Statistical Association, to appear.
- [53]. Schäfer J and Strimmer K (2005). A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology 4, 1544–1615.
- [54]. Schmitt BA (1992). Perturbation bounds for matrix square roots and Pythagorean sums. Linear Algebra and its Applications 174, 215–227.
- [55]. Shah RD and Samworth RJ (2013). Variable selection with error control: another look at stability selection. J. Roy. Statist. Soc. Ser. B 75, 55–80.
- [56]. Stoker TM (1986). Consistent estimation of scaled coefficients. Econometrica, 1461–1481.
- [57]. Storey JD (2002). A direct approach to false discovery rates. J. Roy. Statist. Soc. Ser. B 64, 479–498.
- [58]. Storey JD, Taylor JE, and Siegmund D (2004). Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach. J. Roy. Statist. Soc. Ser. B 66, 187–205.
- [59]. Su W and Candès EJ (2016). SLOPE is adaptive to unknown sparsity and asymptotically minimax. Ann. Statist. 44, 1038–1068.
- [60]. Sur P, Chen Y, and Candès EJ (2017). The likelihood ratio test in high-dimensional logistic regression is asymptotically a rescaled chi-square. arXiv preprint arXiv:1706.01191.
- [61]. Tibshirani R (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58, 267–288.
- [62]. Wienkoop S, Glinski M, Tanaka N, Tolstikov V, Fiehn O, and Weckwerth W (2004). Linking protein fractionation with multidimensional monolithic reversed-phase peptide chromatography/mass spectrometry enhances protein identification from complex mixtures even in the presence of abundant proteins. Rapid Commun. Mass Spectrom. 18, 643–650.
- [63]. Wille A, Zimmermann P, Vranová E, Fürholz A, Laule O, Bleuler S, Hennig L, Prelić A, von Rohr P, Thiele L, et al. (2004). Sparse graphical Gaussian modeling of the isoprenoid gene network in Arabidopsis thaliana. Genome Biology 5, R92.
- [64]. Wu WB (2008). On false discovery control under dependence. Ann. Statist. 36, 364–380.
- [65]. Yang E, Lozano A, and Ravikumar P (2014). Elementary estimators for high-dimensional linear regression. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), 388–396.
- [66]. Zhang Y and Liu JS (2011). Fast and accurate approximation to significance tests in genome-wide association studies. Journal of the American Statistical Association 106, 846–857.