Abstract
Genetic studies often involve quantitative traits. Identifying genetic features that influence quantitative traits can help to uncover the etiology of diseases. Quantile regression considers the conditional quantiles of the response variable and is able to characterize the underlying regression structure in a more comprehensive manner. On the other hand, genetic studies often involve high-dimensional genomic features, and the underlying regression structure may be heterogeneous in terms of both effect sizes and sparsity. To account for the potential genetic heterogeneity, including the heterogeneous sparsity, a regularized quantile regression method is introduced. The theoretical properties of the proposed method are investigated, and its performance is examined through a series of simulation studies. A real dataset is analyzed to demonstrate the application of the proposed method.
Keywords: Heterogeneous sparsity, Quantitative traits, Variable selection, Quantile regression, Genomic features
1. Introduction
In many genetic studies, quantitative traits are collected for studying the associations between the traits and certain genomic features. For example, body mass index, lipids and blood pressure have been investigated with respect to single nucleotide polymorphisms (SNPs) (Avery et al., 2011). With the rapid progress of high-throughput genome technology, new types of quantitative traits have emerged and attracted considerable research interest, such as gene expression, DNA methylation, and protein quantification (Landmark-Høyvik et al., 2013). The analysis of these quantitative traits yields new insight into biological processes and sheds light on the genetic basis of diseases.
Typically, quantitative genetic traits are analyzed by least-square based methods, which seek to estimate E(Y | Z), where Y is the trait and Z is the set of covariates of interest. Quantile regression (Koenker and Bassett, 1978) instead considers the conditional quantile function of Y given Z, Qτ(Y | Z), at a given τ ∈ (0, 1). When τ is fixed at 0.5, quantile regression reduces to median regression, which is well known to be more robust than least-square estimation. By examining quantiles at different τ, quantile regression provides a more complete picture of the underlying regression structure between Y and Z.
Like least-square methods, traditional quantile regression methods only consider a handful of covariates. With the emergence of high-dimensional data, penalized quantile regression methods have been developed in recent years; they can be broadly classified into two classes. The first class seeks to harness the information shared among different quantiles to jointly estimate the regression coefficients. Jiang et al. (2014) proposed two novel methods, the Fused Adaptive Lasso (FAL) and the Fused Adaptive Sup-norm estimator (FAS), for interquantile shrinkage and variable selection in quantile regression. FAL combines the LASSO penalty (Tibshirani, 1996) with the fused lasso penalty (Tibshirani et al., 2005), while FAS imposes the grouped sup-norm penalty and the fused lasso penalty. The fused lasso penalty smooths the regression slopes of adjacent quantiles, so FAL and FAS can be used to identify common quantile slopes when such smoothing is desired. Zou and Yuan (2008) adopted an F∞ penalty, which either eliminates or retains all the regression coefficients for a covariate at multiple quantiles. The method by Jiang et al. (2013) seeks to shrink the differences among adjacent quantiles by resorting to the fused lasso penalty; however, this method does not perform variable selection at the covariate level, i.e., it does not remove any covariates from the model.
The second class of methods focuses on a single quantile at a time. Koenker (2004) imposed a LASSO penalty on the random effects in a mixed-effect quantile regression model; Li and Zhu (2008) adopted the LASSO penalty; Wu and Liu (2009) explored the SCAD penalty (Fan and Li, 2001) and the adaptive LASSO penalty (Zou, 2006), and proved the selection consistency and asymptotic normality of the proposed estimators for a fixed dimension of covariates. Wang et al. (2012) investigated several penalties for quantile regression in the scenario of p > n, i.e., where the dimension exceeds the sample size, and proved the selection consistency of their proposed methods through a novel use of subgradient theory. A recent approach by Peng et al. (2014) shares characteristics with both classes: its loss function targets a single τ, while its penalty borrows information across different quantiles; their penalty was shown to achieve more accurate estimation than one that uses information from only a single quantile.
If the regression coefficients associated with a covariate are treated as a group, then some groups may be entirely zero while other groups may be partially zero. Thus, sparsity can occur both at the group level and within the group level, and we refer to this type of sparsity as heterogeneous sparsity. In this paper, we propose an approach that conducts joint variable selection and estimation for multiple quantiles in the setting where p can diverge with n. Our proposed method is able to achieve sparsity both at the group level and within the group level. We note that FAL can potentially yield sparsity at the two levels, but this approach has not been evaluated in scenarios where the dimension p is high. To the best of our knowledge, this is the first paper that explicitly investigates heterogeneous sparsity for quantile regression. We show that our method tends to be more effective than the compared methods in handling heterogeneous sparsity when the dimension is high. We also provide theoretical justification for the proposed method. The paper is organized as follows. In Section 2, we describe the proposed method and the implementation details. In Section 3, we prove the theoretical properties of the proposed method. In Section 4, we show the results of simulation studies regarding several related methods, and in Section 5, we present an example of real data analysis for the proposed method.
2. Method
2.1. Data and model
Let Z represent the vector consisting of p covariates, such as SNPs or genes. Let γτ be the p-dimensional coefficient vector at the τth quantile. Let Y be the random variable that denotes the phenotype of interest, such as a quantitative trait in genetic studies. For a given τ ∈ (0, 1), the linear quantile regression model is

Qτ(Y | Z) = γ0 + ZTγτ,
where γ0 is the intercept, and Qτ(Y |Z) is the τ-th conditional quantile of Y given Z, that is, PY|Z(Y ≤ Qτ(Y |Z)) = τ.
The dimension p can be potentially very high in genomic studies, but typically it is assumed that only a limited number of genomic features contribute to the phenotype. For this reason, one needs to find a sparse estimation of γτ to identify those important genomic features. On the other hand, we also wish to consider multiple quantile levels simultaneously so that information shared among different quantile levels can be utilized. To this end, we propose the following model for the joint estimation of the regression coefficients for multiple quantiles. Given M quantile levels, 0 < τ1 < ⋯ < τM < 1, our linear quantile regression model is defined as, for τm (m = 1, …, M),
Qτm(Y | Z) = γm0 + ZTγτm,  (1)
where γm0 is the intercept, and γτm is the p-dimensional coefficient vector. For ease of notation, we write γm = γτm. For the above model, we further define γ ≡ (γ1T, …, γMT)T with γm = (γm1, …, γmp)T, and the intercept parameter γ0 ≡ (γ10, …, γM0)T.
We now focus on the sample version of model (1). Let {(Yi, ZiT)T, i = 1, …, n} be an i.i.d. random sample of size n from the population (Y, ZT)T, where Zi = (Zi1, Zi2, …, Zip)T. The sample quantile loss function is defined as

Qn(γ0, γ) = ∑_{m=1}^{M} ∑_{i=1}^{n} ρm(Yi − γm0 − ZiTγm),
where ρm(u) = u(τm − I(u < 0)) is the quantile check loss function, with I(·) being the indicator function. To introduce sparsity to the model, we add to the loss function a penalty Pn(γ) that involves a tuning parameter λn and a weight vector ωn = (ωmj : 1 ≤ m ≤ M, 1 ≤ j ≤ p), whose component ωmj > 0 is the weight of parameter γmj. Note that the penalty is a nonconvex function. It essentially divides the regression coefficients into p groups, where the jth group consists of the M parameters associated with the jth covariate. The motivation is that, while each quantile may have its own set of regression parameters, we wish to borrow strength across quantiles to select covariates that are important across all quantiles as well as covariates that are important to only some of the quantiles. This type of nonconvex penalty has been considered in the Cox regression model and other settings (see Wang et al. (2009) for an example), but to the best of our knowledge it has not been studied in the quantile regression model. We can choose ωmj = 1/|γ̃mj|, where γ̃mj is a consistent estimate of γmj; for example, we may use the estimates from unpenalized quantile regression conducted at each individual quantile level. When p < n and p is fixed, the consistency of the unpenalized estimates was proved by Koenker and Bassett (1978); when p < n but p diverges with n, the estimates from unpenalized quantile regression are consistent by adapting Lemma A.1 of Wang et al. (2012). Thus, our objective function is defined as the sum Qn(γ0, γ) + Pn(γ).
For the sake of convenience, we define θ ≡ (θ1T, …, θMT)T with θm = (γm0, γmT)T, m = 1, …, M, and denote the corresponding parameter space by Θn ⊂ ℝM(p+1). Further define Ui = (1, ZiT)T, i = 1, …, n. Then, Qn(γ0, γ) can be written as Qn(θ). Emphasizing that γ is a subvector of θ, we can write the objective function as
Ln(θ) = Qn(θ) + Pn(γ).  (2)
Let θ̂ be a local minimizer of Ln(θ) in (2) over θ ∈ Θn. Because the heterogeneity of sparsity is explicitly taken into account in this model, we name our proposed method Heterogeneous Quantile Regression (Het-QR). Our model can be modified to accommodate different weights for the losses at different quantiles; that is, Qn(θ) may take the form ∑_{m=1}^{M} πm ∑_{i=1}^{n} ρm(Yi − γm0 − ZiTγm), where πm is the weight for the mth quantile. Some examples of the choice of the weights πm can be found in Koenker (2004) and Zhao and Xiao (2014).
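To make these quantities concrete, the following is a minimal sketch (not the paper's code): it evaluates the multi-quantile check loss Qn and computes the adaptive weights ωmj = 1/|γ̃mj| from unpenalized per-quantile fits, assuming quantreg's rq() is used for those fits.

```r
library(quantreg)

## check loss rho_m(u) = u * (tau_m - I(u < 0))
check_loss <- function(u, tau) u * (tau - (u < 0))

## Q_n(theta): check loss summed over quantile levels and observations;
## gamma0 is the M-vector of intercepts, Gamma the p x M matrix of slopes
Qn <- function(Y, Z, gamma0, Gamma, taus) {
  sum(sapply(seq_along(taus), function(m) {
    sum(check_loss(Y - gamma0[m] - Z %*% Gamma[, m], taus[m]))
  }))
}

## adaptive weights w_mj = 1 / |tilde(gamma)_mj|, with tilde(gamma)_mj taken
## from unpenalized quantile regression run separately at each quantile level
adaptive_weights <- function(Y, Z, taus) {
  sapply(taus, function(tau) {
    gamma_tilde <- coef(rq(Y ~ Z, tau = tau))[-1]  # drop the intercept
    1 / abs(gamma_tilde)
  })  # returns a p x M matrix of weights
}
```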
2.2. Implementation
We design the following algorithm to implement the proposed method. First, we show in the Appendix that the objective function can be transformed into an equivalent form that involves newly introduced nonnegative parameters ξ = (ξ1, …, ξp). The new objective function can then be solved by the following iterative algorithm:
Step 1: Fix θ and solve for ξj, j = 1, …, p; each ξj has a closed-form solution.
Step 2: Fix ξj, j = 1, …, p, and solve for θ by minimizing the resulting objective function in θ.
We can formulate this objective function as a linear program and derive its dual form (see the Appendix for details); the optimization can then be carried out with existing linear programming software, and we utilize the quantreg R package (Koenker, 2015) in our implementation.
Step 3: Iterate Steps 1 and 2 until convergence. Due to the nonconvexity of the penalty function, the resulting estimate is a local minimizer.
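For readers who want to see the linear-programming step spelled out, below is a minimal sketch, not the paper's implementation: it solves a single-quantile weighted L1-penalized quantile regression (the Step 2 subproblem at one quantile level, with the penalty multipliers `lam` treated as placeholders already fixed by Step 1) as a primal linear program via the lpSolve package, whereas the actual implementation works with the dual form of the joint M-quantile problem through quantreg.

```r
library(lpSolve)

## Variables, all nonnegative: (a+, a-, s, t, u, v), with intercept = a+ - a-,
## slopes gamma = s - t, and residuals Y - intercept - Z %*% gamma = u - v.
## Objective: tau*sum(u) + (1-tau)*sum(v) + sum(lam*(s+t)), i.e. the check loss
## plus a weighted L1 penalty on the slopes.
weighted_l1_qr_lp <- function(Y, Z, tau, lam) {
  n <- nrow(Z); p <- ncol(Z)
  obj <- c(0, 0, lam, lam, rep(tau, n), rep(1 - tau, n))
  A   <- cbind(1, -1, Z, -Z, diag(n), -diag(n))     # equality constraints
  fit <- lp(direction = "min", objective.in = obj,
            const.mat = A, const.dir = rep("=", n), const.rhs = Y)
  sol <- fit$solution
  list(intercept = sol[1] - sol[2],
       slopes    = sol[2 + (1:p)] - sol[2 + p + (1:p)])
}

## toy usage
set.seed(1)
Z <- matrix(runif(200 * 5), 200, 5)
Y <- 1 + Z[, 1] + 2 * Z[, 5] + rnorm(200)
weighted_l1_qr_lp(Y, Z, tau = 0.5, lam = rep(5, 5))$slopes
```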
3. Theoretical properties
Now we investigate the asymptotic properties of the proposed method. FAL and FAS considered p to be fixed; we study the situation where p can diverge with n. Let the true value of θ be θ*, with the corresponding true values of γm0, γmj, and γ denoted by γ*m0, γ*mj, and γ*, respectively. Let the number of nonzero elements in γ* be s. To emphasize that s and p can go to infinity, we use sn and pn when necessary. For Theorems 1 and 2 (to be shown), pn is of order lower than O(n1/2); for Theorem 3, pn is of order lower than O(n1/6).
We define some index sets to be used in our theorems. Let 𝒩 = {(m, j) : 1 ≤ m ≤ M, 1 ≤ j ≤ pn}. For the true parameters, define the oracle index set I = {(m, j) ∈ 𝒩 : γ*mj ≠ 0} and its complementary set II = 𝒩 \ I. Assume that I has cardinality |I| = sn.
We also define some notation used in our theorems. Define dnI = max(m,j)∈I ωmj and dnII = min(m,j)∈II ωmj. Define θ*I and θ̂I as the subvectors of the vectors θ* and θ̂ corresponding to the oracle index set I, respectively. For every fixed 1 ≤ m ≤ M, define the index set Im = {1 ≤ j ≤ pn : (m, j) ∈ I}. Let
| (3) |
| (4) |
where UimI = (1, ZimIT)T, with ZimI being the subvector of Zi corresponding to the index set Im.
Let F(y|z) and f (y|z) be the conditional distribution function and the conditional density function of Y given Z = z, respectively. For any proper square matrix A, let λmin(A) and λmax(A) denote the minimum and maximum eigenvalue of A, respectively.
Before stating the main theorems, we need the following regularity conditions labeled by ℒ:
(L1) The conditional density f(y|z) has a first-order derivative f′(y|z) with respect to y, and f(y|z) and f′(y|z) are uniformly bounded away from 0 and ∞ on the support of Y and the support of Z.

(L2) For the random sample Zi = (Zi1, Zi2, …, Zip)T, 1 ≤ i ≤ n, there exists a positive constant C1 such that max1≤i≤n, 1≤j≤p |Zij| ≤ C1.

(L3) For Ui = (1, ZiT)T, i = 1, …, n, let Sn = ∑_{i=1}^{n} UiUiT. There exist positive constants C2 < C3 such that C2 ≤ λmin(n−1Sn) ≤ λmax(n−1Sn) ≤ C3.

(L4) The dimension sn satisfies sn = a0nα0, and the dimension pn satisfies pn = a1nα1, where 0 ≤ α0 ≤ α1 and a0 and a1 are two positive constants.

(L5) The matrix Bn given in (6) (see Appendix) satisfies C4 ≤ λmin(n−1Bn) ≤ λmax(n−1Bn) ≤ C5, where C4 and C5 are positive constants.

(L6) The matrix Σn satisfies λmin(Σn) ≥ C6, where C6 is a positive constant.
Conditions (L1)–(L3) and (L5)–(L6) are typical in theoretical investigations of quantile regression. Condition (L4) specifies the magnitude of sn and pn with respect to the sample size. Under the aforementioned regularity conditions, we present the following three theorems; the proofs are relegated to the Appendix. Define θ*II and θ̂II as the subvectors of the vectors θ* and θ̂ corresponding to the index set II, respectively. Clearly, θ*II = 0. Due to the nonconvexity of the penalty function, all the following theorems and their proofs (in the Appendix) concern a local minimizer of the objective function.
Theorem 1
Under conditions (L1)–(L5), if , then the estimator θ̂ of θ* exists, is a local minimizer, and satisfies the estimation consistency that .
Theorem 1 shows that the proposed method is consistent in parameter estimation. The convergence rate is typical for the settings where p diverges with n.
Theorem 2
Under conditions (L1)–(L5), if and , then P(θ̂II = 0) → 1.
Theorem 2 indicates that our method can distinguish the truly zero coefficients from the nonzero coefficients with probability tending to 1. It can be seen that the penalty weight dnII plays a critical role in the property of selection consistency.
Theorem 3
Under conditions (L1)–(L3) and (L5)–(L6), if , and , and the powers of sn and pn in condition (L4) satisfy , then for any unit vector b ∈ ℝM+sn we have
Theorem 3 suggests that the estimated nonzero coefficients are asymptotically normal. Heuristically, for given n, λn and ωmj, the penalty in (2) has a slope that tends to infinity as γmj goes to 0, so the penalty tends to dominate small γmj. On the other hand, when λn is sufficiently small, the penalty has little impact on the estimation of relatively large γmj. These properties, in combination with a proper choice of the tuning parameter, play major roles in the oracle property of the proposed estimator. The oracle property for coefficients within a group is mainly due to the penalty weights, which place a large penalty on small coefficients (and a small penalty on large coefficients).
4. Simulation studies
We conduct simulation studies to evaluate the proposed method along with the following methods: the QR method, which applies quantile regression to each individual quantile level without any variable selection; the QR-LASSO method, which adopts the L1-penalized quantile regression for each quantile level; the QR-aLASSO method, which imposes the adaptive LASSO penalty on each quantile level (Wu and Liu, 2009); the FAL and the FAS method (Jiang et al., 2014). Both FAL and FAS contain a fused-LASSO type of penalty, which encourages the equality of the regression coefficients among different quantiles. FAL allows within-group sparsity, while FAS generates sparsity only at the group level. For Het-QR, we set the penalty weight ωmj to be the inverse of the estimate from the unpenalized quantile regression (unless specified otherwise).
We first consider a model where important covariates have nonzero regression coefficients across all (or almost all) quantiles. We simulate 6 independent covariates, each of which follows the uniform(0,1) distribution. Then we simulate the trait as
where β1 = 1, β2 = 1, β6 = 2, κ = 2 and ε ~ N(0, 1). Under this set up, Z1 and Z2 have constant regression coefficients across all quantiles, while Z6’s regression coefficient is determined by 2 + 2 × Φ−1(τ), which varies across different quantiles. That is, the τth quantile of Y given Z1, Z2 and Z6 is
This model is in line with the model considered by Jiang et al. (2014). All the other 3 covariates, Z3, Z4 and Z5, have no contribution to Y. The sample size n is set to 500. To select the tuning parameter, we follow the lines of Mazumder et al. (2011) and Wang et al. (2012) to generate another dataset with sample size of 10n, and then pick the tuning parameter at which the check loss function is minimized. The total number of simulations for each experiment is 100.
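A minimal sketch of this data-generating setup is given below. The location-scale form used for Y (error scale proportional to Z6) is our assumption, not taken from the paper's display, but it reproduces exactly the stated quantile slopes: constant slopes of 1 for Z1 and Z2, and a Z6 slope of 2 + 2Φ−1(τ).

```r
set.seed(123)
n <- 500; p <- 6
beta1 <- 1; beta2 <- 1; beta6 <- 2; kappa <- 2
Z   <- matrix(runif(n * p), n, p)      # six independent Uniform(0,1) covariates
eps <- rnorm(n)
## assumed form: error scale proportional to Z6, so that
## Q_tau(Y | Z) = Z1 + Z2 + (beta6 + kappa * qnorm(tau)) * Z6
Y <- beta1 * Z[, 1] + beta2 * Z[, 2] + beta6 * Z[, 6] + kappa * Z[, 6] * eps
```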
We consider various criteria to evaluate the performance of the compared methods, such as the model size and the parameter estimation error (PEE). The model size refers to the number of estimated non-zero coefficients among the M quantile levels. The PEE is calculated by . To evaluate the prediction error, we simulate an independent dataset, (Ypred, Zpred), with sample size of 100n, and then calculate the F-measure (FM) (Gasso et al., 2009), the quantile prediction error (QPE) and the prediction error (PE). The FM is equal to 2 × Sa/Ma, where Sa is the number of truly nonzero slopes being captured, and Ma is the sum of the estimated model size and the true model size. The QPE is defined as the sample version of the , averaged across all the subjects. The PE is defined as Qn (θ̂)/n, i.e., the check loss averaged across all considered quantiles for all the test samples, evaluated on (Ypred, Zpred).
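As a concrete reference for two of the criteria whose formulas are fully stated above, here is a small sketch (our own, with a tolerance argument tol added to handle numerical zeros) of the F-measure and the prediction error PE = Qn(θ̂)/n.

```r
## FM = 2 * Sa / Ma, with Sa the number of truly nonzero slopes captured and
## Ma the estimated model size plus the true model size; `est` and `truth`
## are p x M matrices of estimated and true slopes
f_measure <- function(est, truth, tol = 1e-8) {
  Sa <- sum(abs(est) > tol & abs(truth) > tol)
  Ma <- sum(abs(est) > tol) + sum(abs(truth) > tol)
  2 * Sa / Ma
}

## PE: check loss of the fitted model on test data, averaged over test samples
prediction_error <- function(Ypred, Zpred, gamma0_hat, Gamma_hat, taus) {
  rho <- function(u, tau) u * (tau - (u < 0))
  loss <- sum(sapply(seq_along(taus), function(m) {
    sum(rho(Ypred - gamma0_hat[m] - Zpred %*% Gamma_hat[, m], taus[m]))
  }))
  loss / length(Ypred)
}
```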
For the purpose of illustration, we consider three quantiles, τ = 0.25, 0.5, 0.75. The results are shown in the upper panel of Table 1. It can be seen that when p is 6, FAL has the lowest parameter estimation error and FAS has the lowest PE, though the differences between these two methods and the other compared methods are generally quite small. Next, we increase p to 100 to evaluate the methods under a higher dimension. As shown in the lower panel of Table 1, when p is equal to 100, both FAL and FAS have deteriorated performance; for instance, their model sizes tend to be twice (or more) the true model size, and their PEE and PE are higher than those of Het-QR. This experiment shows that the performance of FAL and FAS is suboptimal when the dimensionality grows large; one potential explanation is that the penalties of FAL and FAS may overemphasize the interquantile shrinkage, which makes them less efficient when many noise covariates are present. Further research is merited. As to computation, we did not observe non-convergence for Het-QR in our experiments.
Table 1.
Comparison of Het-QR and other methods in the absence of within-group sparsity (standard error of the sample mean shown in the parenthesis).
| Method | Model-size | FM (%) | PEE × 100 | QPE × 103 | PE × 103 |
|---|---|---|---|---|---|
| p = 6 | |||||
| QR | 18 | – | 53.3(1.5) | 10.2(0.6) | 1041.2(0.6) |
| QR-LASSO | 15.0(0.2) | 75(0.7) | 36.1(1.2) | 6.7(0.5) | 1038.4(0.6) |
| QR-aLASSO | 10.9(0.2) | 91(0.7) | 25.6(0.9) | 5.8(0.4) | 1037.2(0.5) |
| FAL | 11.1(0.2) | 91(0.9) | 25.3(1.0) | 6.2(0.5) | 1037.2(0.5) |
| FAS | 12.1(0.2) | 86(0.9) | 26.4(1.1) | 5.9(0.4) | 1037.1(0.5) |
| Het-QR | 9.6(0.1) | 97(0.6) | 26.0(0.9) | 6.5(0.5) | 1037.5(0.5) |
| p = 100 | |||||
| QR | 300 | – | 1556.4(12.7) | 325.0(5.1) | 1242.1(2.7) |
| QR-LASSO | 47.3(1.2) | 33(0.7) | 120.5(3.5) | 23.4(1.2) | 1052.4(0.9) |
| QR-aLASSO | 16.8(0.4) | 72(1.1) | 46.9(1.9) | 10.5(0.8) | 1042.1(0.7) |
| FAL | 17.7(0.6) | 70(1.4) | 41.9(1.8) | 10.2(0.8) | 1041.0(0.6) |
| FAS | 23.7(0.7) | 58(1.2) | 58.0(2.4) | 13.9(0.9) | 1043.4(0.7) |
| Het-QR | 9.3(0.1) | 99(0.4) | 29.5(1.2) | 8.4(0.6) | 1039.9(0.6) |
Next, we systematically evaluate the situation where within-group sparsity exists. To introduce correlations into the covariates, we simulate 20 blocks of covariates, each block containing 5 correlated covariates. For each block, we first simulate from a multivariate normal distribution with mean equal to the unit vector and covariance matrix following either the compound symmetry or the auto-regressive correlation structure with correlation coefficient ρ = 0.5; next, we take the absolute values of the simulated normal variables as the covariates Z. The total number of covariates is 100. We specify the conditional quantile regression coefficient function γ(τ) as follows. For τ ∈ (0, 0.3], the first 8 regression slopes for Z are (0.5, 0, 0, 0, 0, 0.6, 0, 0); for τ ∈ (0.3, 0.7], the first 8 regression slopes are (0.5, 0, 0, 0, 0, 0.6, 0, 0.7); for τ ∈ (0.7, 1.0), the corresponding slopes are (0.6, 0, 0, 0, 0, 0.7, 0, 0.7). All other regression slopes are 0. Thus, the first and the sixth covariates are active at all quantiles, while the eighth covariate is active only for the last two quantile levels. To generate Y, we first simulate a random number τ ~ Uniform(0, 1), and then determine γ(τ) based on τ; subsequently, we obtain
where F−1 is the inverse cumulative distribution function of some distribution F. That is, the τth quantile of Y given Z is
We explore different distributions for F: the standard normal distribution, the T-distribution with degrees of freedom equal to 3 (T3), and the exponential distribution with shape parameter equal to 1.
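A sketch of this simulation design is given below. The covariate construction (blocks of correlated covariates taken in absolute value) and the piecewise slopes follow the description above; the line generating Y reflects our assumption Y = ZTγ(U) + F−1(U) with U ~ Uniform(0, 1), which yields the comonotone structure needed for ZTγ(τ) + F−1(τ) to be the τth conditional quantile.

```r
library(MASS)
set.seed(123)
n <- 500; blocks <- 20; bsize <- 5; rho <- 0.5

ar1 <- rho^abs(outer(1:bsize, 1:bsize, "-"))      # auto-regressive block
# cs <- matrix(rho, bsize, bsize); diag(cs) <- 1  # compound-symmetry alternative
Z <- do.call(cbind, replicate(blocks,
       abs(mvrnorm(n, mu = rep(1, bsize), Sigma = ar1)), simplify = FALSE))

gamma_tau <- function(tau) {                      # piecewise slope function, p = 100
  g <- numeric(blocks * bsize)
  if (tau <= 0.3)      g[c(1, 6)]    <- c(0.5, 0.6)
  else if (tau <= 0.7) g[c(1, 6, 8)] <- c(0.5, 0.6, 0.7)
  else                 g[c(1, 6, 8)] <- c(0.6, 0.7, 0.7)
  g
}

U <- runif(n)
Y <- sapply(1:n, function(i) sum(Z[i, ] * gamma_tau(U[i]))) + qnorm(U)  # F = N(0, 1)
## for the T3 or exponential designs, replace qnorm(U) with qt(U, df = 3) or qexp(U)
```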
We first consider the normal distribution for F. The results are shown in Table 2. Because no variable selection is conducted, QR has much larger PEE, QPE, and PE than the other methods; for example, the PEE and QPE of QR are more than 10 times those of the compared methods. QR-LASSO, QR-aLASSO, FAL, and FAS have more moderate model sizes, but still retain a number of noise features. Het-QR yields a model that is closer to the true model, in which the three considered quantiles contain 2, 2, and 3 nonzero slopes, respectively. Het-QR also has the highest FM and the lowest errors for parameter estimation and prediction. Next, we consider the T3 distribution (Table 3) and the exponential distribution (Table 4), and the results show a similar pattern. These experiments indicate that Het-QR handles higher dimensions as well as heterogeneous sparsity better than the other methods.
Table 2.
Comparison of Het-QR and other methods for p = 100 under the normal distribution (standard error of the sample mean shown in the parenthesis).
| Method | Model-size | FM (%) | PEE × 100 | QPE × 103 | PE × 103 |
|---|---|---|---|---|---|
| Correlation structure: auto-regressive | |||||
| QR | 300 | – | 1189.3(7.2) | 791.6(9.1) | 1606.5(3.1) |
| QR-LASSO | 41.1(1.3) | 34(0.9) | 108.7(3.1) | 78.5(3.1) | 1353.1(1.2) |
| QR-aLASSO | 17.8(0.5) | 63(1.2) | 60.4(2.5) | 49.2(3.2) | 1341.5(1.2) |
| FAL | 20.3(0.8) | 60(1.6) | 59.5(2.6) | 49.9(3.5) | 1340.4(1.2) |
| FAS | 24.8(1.1) | 53(1.5) | 85.1(2.6) | 86.4(3.1) | 1351.2(1.1) |
| Het-QR | 8.6(0.1) | 96(0.7) | 34.7(1.6) | 32.6(2.8) | 1334.9(1.1) |
| Correlation structure: compound symmetry | |||||
| QR | 300 | – | 1218.1(8.4) | 792.7(9.5) | 1606.1(3.4) |
| QR-LASSO | 40.1(1.1) | 35.0(0.9) | 105.9(3.1) | 74.8(3.0) | 1351.9(1.3) |
| QR-aLASSO | 18.9(0.6) | 61(1.3) | 61.2(2.7) | 48.0(3.1) | 1341.6(1.3) |
| FAL | 19.9(0.8) | 61(1.4) | 61.0(2.9) | 54.4(4.0) | 1341.2(1.3) |
| FAS | 24.1(1.1) | 55(1.6) | 83.8(2.8) | 89.3(3.2) | 1351.3(1.2) |
| Het-QR | 8.9(0.2) | 94(0.8) | 34.6(1.9) | 31.4(2.9) | 1334.7(1.2) |
Table 3.
Comparison of the Het-QR and other methods for p = 100 under the T3 distribution (standard error of the sample mean shown in the parenthesis).
| Method | Model-size | FM (%) | PEE × 100 | QPE × 103 | PE × 103 |
|---|---|---|---|---|---|
| Correlation structure: auto-regressive | |||||
| QR | 300 | – | 1452.1(10.2) | 1149.1(15.2) | 2114.3(4.5) |
| QR-LASSO | 39.4(1.3) | 35(0.9) | 123.6(3.3) | 103.7(3.7) | 1796.8(1.4) |
| QR-aLASSO | 19.7(0.6) | 58(1.3) | 81.0(3.0) | 77.3(4.2) | 1787.7(1.6) |
| FAL | 20.6(0.7) | 58(1.3) | 76.4(3.1) | 74.8(4.6) | 1786.2(1.6) |
| FAS | 24.2(0.9) | 53(1.3) | 99.6(3.4) | 107.8(4.2) | 1795.6(1.6) |
| Het-QR | 8.9(0.2) | 92(1.0) | 47.3(2.3) | 53.5(4.0) | 1779.4(1.5) |
| Correlation structure: compound symmetry | |||||
| QR | 300 | – | 1487.4(11.2) | 1156.3(15.6) | 2115.2(4.7) |
| QR-LASSO | 38.7(1.0) | 35(1.2) | 120.5(3.2) | 99.4(3.4) | 1795.5(1.6) |
| QR-aLASSO | 20.3(0.7) | 56(1.3) | 81.6(3.3) | 76.0(4.1) | 1787.6(1.8) |
| FAL | 22.5(0.9) | 56(1.5) | 82.5(3.9) | 78.1(5.1) | 1786.6(1.8) |
| FAS | 25.6(1.0) | 51(1.3) | 99.1(3.5) | 107.0(4.1) | 1794.7(1.7) |
| Het-QR | 9.1(0.3) | 91(1.1) | 48.6(2.9) | 55.4(4.7) | 1780.1(1.8) |
Table 4.
Comparison of Het-QR and other methods for p = 100 under the exponential distribution (standard error of the sample mean shown in the parenthesis).
| Method | Model-size | FM (%) | PEE × 100 | QPE × 103 | PE × 103 |
|---|---|---|---|---|---|
| Correlation structure: auto-regressive | |||||
| QR | 300 | – | 1024.2(7.1) | 618.4(8.8) | 1446.4(3.1) |
| QR-LASSO | 39.3(1.1) | 36(0.9) | 88.7(2.5) | 60.6(2.2) | 1220.9(1.0) |
| QR-aLASSO | 17.1(0.5) | 66(1.2) | 47.9(1.9) | 37.0(2.3) | 1210.2(0.9) |
| FAL | 18.9(0.7) | 63(1.5) | 43.0(1.9) | 33.3(3.0) | 1209.5(1.0) |
| FAS | 26.9(1.2) | 51(1.7) | 74.5(2.3) | 79.1(3.5) | 1223.8(1.1) |
| Het-QR | 8.7(0.1) | 96(0.6) | 28.8(1.3) | 24.2(1.8) | 1205.4(0.8) |
| Correlation structure: compound symmetry | |||||
| QR | 300 | – | 1041.6(7.5) | 610.9(8.3) | 1444.4(2.9) |
| QR-LASSO | 39.6(1.2) | 36(0.9) | 86.1(2.6) | 58.0(2.4) | 1220.2(1.1) |
| QR-aLASSO | 17.0(0.5) | 66(1.3) | 46.6(2.0) | 35.7(2.3) | 1210.1(1.1) |
| FAL | 18.7(0.7) | 63(1.4) | 41.8(2.0) | 31.0(3.0) | 1208.8(1.1) |
| FAS | 27.6(1.4) | 51(1.8) | 74.3(2.5) | 79.6(3.3) | 1223.5(1.1) |
| Het-QR | 8.7(0.1) | 96(0.7) | 27.2(1.3) | 22.3(1.9) | 1205.1(0.9) |
We finally consider the situation where p > n. While theoretical development is still needed for this setting, our experiment evaluates the practical performance of the proposed approach. We let n = 500 and p = 600. For τ ∈ (0, 0.3], the first 8 regression slopes for Z are (0.6, 0, 0, 0, 0, 0.6, 0, 0); for τ ∈ (0.3, 0.7], the first 8 regression slopes are (0.6, 0, 0.8, 0, 0, 0.7, 0, 0.8); for τ ∈ (0.7, 1.0), the corresponding slopes are (0.8, 0, 0.8, 0, 0, 0.8, 0, 1.0). In this scenario, Z3 and Z8 have zero coefficients at the first quantile but nonzero coefficients at the other two quantiles. That is, the τth quantile of Y given Z is
We omit QR, FAL and FAS because they are not designed to handle the ‘p > n’ setting. QR-LASSO can be directly applied to data whose dimension exceeds the sample size. For QR-aLASSO, we derive the penalty weights from the QR-LASSO estimates. For Het-QR, we first run Het-QR with all penalty weights equal to 1 to obtain initial estimators for γm, m = 1, …, M, and then use the inverses of the initial estimators as the penalty weights; finally, we run Het-QR again to obtain θ̂. The results are shown in Table 5. Het-QR tends to yield a smaller model than the compared methods and has better performance in estimating the regression coefficients as well as in prediction.
Table 5.
Comparison of Het-QR and other methods for p > n (standard error of the sample mean shown in the parenthesis).
| Method | Model-size | FM (%) | PEE × 100 | QPE × 103 | PE × 103 |
|---|---|---|---|---|---|
| Correlation structure: auto-regressive | |||||
| QR-LASSO | 67.6(1.8) | 25(0.6) | 199.0(4.2) | 215.5(6.0) | 1801.4(1.7) |
| QR-aLASSO | 15.2(0.3) | 73(1.1) | 81.3(3.0) | 113.3(6.2) | 1768.8(1.6) |
| Het-QR | 10.6(0.1) | 96(0.6) | 44.6(2.5) | 61.0(7.0) | 1753.9(1.6) |
| Correlation structure: compound symmetry | |||||
| QR-LASSO | 62.9(1.8) | 27(0.7) | 182.4(4.2) | 199.8(5.9) | 1796.4(1.8) |
| QR-aLASSO | 15.2(0.3) | 74(1.0) | 76.7(2.9) | 103.5(5.9) | 1766.6(1.6) |
| Het-QR | 10.6(0.1) | 96(0.7) | 46.4(3.0) | 63.1(7.8) | 1754.2(1.7) |
5. Real data analysis
We collect data on 206 brain tumor patients, each with 91 gene expression levels. All patients were de-identified. All patients were diagnosed with glioma, one of the deadliest cancers among all cancer types; indeed, many patients died within 1 year after diagnosis. Glioma is associated with a number of genes. We focus on the PDGFRA gene, which encodes the alpha-type platelet-derived growth factor receptor and has been shown to be an important gene for brain tumors (Holland, 2000; Puputti et al., 2006). We use this dataset to investigate how the expression of PDGFRA is influenced by other genes.
For demonstration, we set τ to 0.25, 0.5, and 0.75. For QR-LASSO, QR-aLASSO, and Het-QR, we use cross-validation to select the tuning parameter. That is, (1) we divide the data into 3 folds; (2) we use 2 folds to build the model and 1 fold to calculate the prediction error, and this is done three times; (3) we choose the λn that minimizes the prediction error as the best tuning parameter (we were not able to obtain an independent dataset with sample size 10n to determine the tuning parameter; it would be meaningful to compare the two procedures when such a dataset becomes available in the future). For FAL and FAS, we follow Jiang et al. (2014) and use BIC and AIC to determine the tuning parameter; the corresponding methods are named FAL-BIC, FAL-AIC, FAS-BIC, and FAS-AIC. Hence, in total 7 approaches are compared. We first examine the model sizes. For all three quantiles combined, the numbers of nonzero coefficients in the seven models are 89 (QR-LASSO), 47 (QR-aLASSO), 25 (Het-QR), 49 (FAL-BIC), 182 (FAL-AIC), 93 (FAS-BIC), and 167 (FAS-AIC). For illustration, we list some of the estimated regression coefficients in Table 6. It can be seen that the coefficients for a given gene often differ among quantiles. For a better view of the regression coefficients across quantiles, we plot the estimated coefficients for the first 30 covariates (Fig. 1). Table 6 and Fig. 1 show that most models (except FAS-BIC and FAS-AIC) exhibit heterogeneous sparsity, i.e., some covariates have nonzero effects in only one or two of the three quantiles. FAS-BIC and FAS-AIC do not show this type of sparsity because of the sup-norm penalty they adopt, which either selects or removes a covariate for all the quantiles. The FAL-AIC and FAS-AIC models contain more nonzero estimates than FAL-BIC and FAS-BIC, consistent with the fact that BIC favors smaller models than AIC. Compared to the other methods, Het-QR yields a smaller model, which may be easier to interpret and may help prioritize candidate genes for further functional study.
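For concreteness, a sketch of the 3-fold cross-validation described above is given below, illustrated with QR-LASSO as the model being tuned; using rq() with method = "lasso" to fit QR-LASSO is our assumption for illustration, and the same loop would apply to QR-aLASSO or Het-QR with the corresponding fitting routine substituted. The tuning parameter that minimizes the out-of-fold check loss, summed over the three quantiles, is retained.

```r
library(quantreg)

cv_lambda <- function(Y, Z, taus, lambdas, K = 3) {
  folds <- sample(rep(1:K, length.out = length(Y)))   # random fold assignment
  cv_err <- sapply(lambdas, function(lam) {
    mean(sapply(1:K, function(k) {
      tr <- folds != k                                 # 2 folds to build the model
      sum(sapply(taus, function(tau) {
        fit  <- rq(Y[tr] ~ Z[tr, ], tau = tau, method = "lasso", lambda = lam)
        pred <- cbind(1, Z[!tr, ]) %*% coef(fit)       # held-out fold predictions
        r <- Y[!tr] - pred
        sum(r * (tau - (r < 0)))                       # out-of-fold check loss
      }))
    }))
  })
  lambdas[which.min(cv_err)]
}

## usage: best_lambda <- cv_lambda(Y, Z, taus = c(0.25, 0.5, 0.75),
##                                 lambdas = c(1, 5, 10, 20, 50))
```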
Table 6.
A snapshot of the estimated regression coefficients (only 5 covariates are shown).
| Gene | τ | QR-LASSO | QR-aLASSO | Het-QR | FAL-BIC | FAL-AIC | FAS-BIC | FAS-AIC |
|---|---|---|---|---|---|---|---|---|
| 0.25 | 0.10 | 0.06 | ||||||
| POLR2A | 0.5 | 0.20 | 0.06 | |||||
| 0.75 | 0.20 | 0.06 | ||||||
| 0.25 | 0.13 | 0.30 | 0.09 | 0.19 | ||||
| SDHA | 0.5 | 0.13 | 0.26 | 0.09 | 0.19 | |||
| 0.75 | 0.03 | 0.32 | 0.2 | 0.13 | 0.58 | 0.09 | 0.19 | |
| 0.25 | 0.09 | 0.01 | 0.04 | |||||
| CDKN2A | 0.5 | 0.03 | 0.03 | 0.01 | 0.04 | |||
| 0.75 | 0.02 | 0.01 | 0.04 | |||||
| 0.25 | 0.10 | 0.10 | 0.19 | |||||
| CDKN2C | 0.5 | 0.02 | 0.07 | 0.13 | 0.19 | 0.26 | 0.10 | 0.19 |
| 0.75 | 0.05 | 0.33 | 0.25 | 0.19 | 0.34 | 0.10 | 0.19 | |
| 0.25 | 0.07 | 0.03 | 0.05 | |||||
| DLL3 | 0.5 | 0.06 | 0.03 | 0.05 | ||||
| 0.75 | 0.06 | 0.12 | 0.07 | 0.01 | 0.16 | 0.03 | 0.05 | |
Note: Zero estimates are left blank.
Fig. 1.
Graphic view of the first 30 regression coefficients estimated by different methods. Estimates are thresholded at 0.4 and −0.4, and only nonzero estimates are shown.
The covariates selected by Het-QR are shown in Table 7. Consistent with the model assumption, the estimated regression coefficients show heterogeneity among quantiles. For example, the CDKN2C gene has a zero coefficient at τ = 0.25 and nonzero coefficients at τ = 0.5 and 0.75. In contrast, some other genes, such as BMP2 and SLC4A4, have nonzero coefficients across all the considered quantiles. This suggests that the expression of PDGFRA is influenced by other genes in a delicate manner that may not be fully characterized by least-squares methods or by quantile regression methods that fail to account for genetic heterogeneity. CDKN2C encodes a cyclin-dependent kinase inhibitor, and BMP2 and SLC4A4 encode a bone morphogenetic protein and a sodium bicarbonate cotransporter, respectively. This indicates that PDGFRA’s expression is associated with genes covering a wide spectrum of cellular functions. The gene EGFR has non-positive regression coefficients, suggesting that there may be some negative control between PDGFRA and EGFR. Future biological studies may provide new insight into the gene regulation of PDGFRA.
Table 7.
The model selected by the Het-QR method.
| Estimated regression coefficients | |||
|---|---|---|---|
| Gene | τ = 0.25 | τ = 0.5 | τ = 0.75 |
| SDHA | 0.2 | ||
| BMP2 | 0.35 | 0.31 | 0.34 |
| CDKN2C | 0.13 | 0.25 | |
| DLL3 | 0.07 | ||
| EGFR | −0.08 | −0.29 | |
| GRIA2 | 0.28 | 0.22 | 0.18 |
| LTF | 0.07 | ||
| OLIG2 | 0.14 | 0.30 | 0.38 |
| PLAT | 0.20 | 0.21 | 0.25 |
| SLC4A4 | −0.21 | −0.25 | −0.24 |
| TAGLN | −0.20 | ||
| TMEM100 | 0.20 | 0.17 | |
One main purpose of variable selection is to apply the variables selected from one dataset to other datasets to guide statistical analysis. Along this line, we further collect brain tumor data from The Cancer Genome Atlas (TCGA) project, which contains 567 subjects. We apply the models selected by the different methods from the training data to the TCGA data to assess their prediction accuracy. We randomly split the TCGA data into two halves, use one half to estimate the regression coefficients and the other half to calculate the prediction error, and then average the prediction error across the two halves. We repeat the random splitting 400 times and calculate the average of the prediction errors. Het-QR appears to have a slightly lower prediction error than the other compared methods, but the differences among the seven methods are generally small; in detail, the observed prediction errors are 1.349 (QR-LASSO), 1.351 (QR-aLASSO), 1.345 (Het-QR), 1.362 (FAL-BIC), 1.513 (FAL-AIC), 1.355 (FAS-BIC), and 1.430 (FAS-AIC).
6. Discussion
In this article, we have proposed a variable selection method that is able to conduct joint variable selection and estimation for multiple quantiles simultaneously. The joint selection/estimation allows one to harness the strength shared among multiple quantiles and to achieve a model that is closer to the truth. In particular, our approach is able to handle the heterogeneous sparsity, under which a covariate contributes to some (but not all) of the quantiles. By considering the heterogeneous sparsity, one can better dissect the regression structure of the trait over the covariates, which in turn leads to more accurate characterization of the underlying biological mechanism.
We have conducted a series of simulation studies to evaluate the performance of our proposed approach and other approaches. Our simulation studies show that the proposed method has superior performance to its peer methods. In real data analysis, our method tends to yield a sparser model than the compared methods. Achieving a sparse model is of great importance to biological studies, because it helps biological investigators narrow down important candidate covariates (such as genes or proteins), so that research efforts can be leveraged more efficiently. Our analysis indicates that the regression coefficients at different quantiles can be quite heterogeneous. We suggest that the interpretation of the results be guided by biological knowledge and scientific insight, and that the observed variability be examined through experimental studies. FAL and FAS were mainly designed to generate interquantile shrinkage for quantile regression; when a γ(τ) that is smooth in τ is desired, these two methods are highly suitable and are indeed the only available methods to achieve such a goal.
We have also provided theoretical justification for the proposed method in the situation where p can grow to infinity. Our exploratory experiments suggest that Het-QR can potentially be applied to the ‘p > n’ setting, although theoretical work is still needed to guide future experiments in this direction. Wang et al. (2012) proposed a novel approach for studying asymptotics under the ‘p > n’ situation, focusing on penalties that can be written as the difference of two convex functions; the group penalty considered herein does not seem to fall into their framework. Further theoretical development is merited. While we have imposed equal weights for the multiple quantiles in this paper, our method can be easily extended to accommodate different weights for different quantiles. Properly chosen weights may lead to improved efficiency of the estimated parameters (Zhao and Xiao, 2014).
Acknowledgments
The authors are grateful to the AE and two reviewers for many helpful and constructive comments. Dr. He’s research is supported by the Institutional Support from the Fred Hutchinson Cancer Research Center. Dr. Kong’s research is supported by Natural Sciences and Engineering Research Council of Canada. The authors thank Dr. Huixia Judy Wang for generously providing the code for FAL and FAS, which greatly expands the breadth of this manuscript. The authors thank Dr. Li Hsu for helpful discussions.
The results shown here are in part based upon data generated by the TCGA Research Network: http://cancergenome.nih.gov/.
Appendix
Transformation of the objective function
Our proof follows the same lines as the proof of Proposition 1 in Huang et al. (2009). Consider the transformed objective function
| (5) |
By Cauchy–Schwarz inequality, we have
then it follows that (5) is equivalent to
| (6) |
Now, let , then (6) is identical to the original objective function (2).
Derivation of the primal and dual problem
Let , then in step 2 of Section 2.2, we aim to solve
Let en denote the n × 1 vector of ones, and λm the vector of λmj (j = 1, …, p). With a slight abuse of notation, let Y be the n × 1 vector consisting of the Yi, and Z the n × p matrix with rows ZiT. The above objective function is equivalent to
subject to um − υm = Y − Zγm − γm0en, sm − tm = γm, um ≥ 0, υm ≥ 0, sm ≥ 0, and tm ≥ 0.
Let 0r be the zero vector of length r, and er the vector of ones of length r. Let Y(M) denote the vector in which Y is stacked M times. Let
, and . Then, the linear program primal of the above objective function can be written as
subject to Ax = b and (s*T, t*T, u*T, υ*T)T ≥ 0, where A is defined as follows. A is a matrix consisting of two rows of blocks. The first row of A consists of 6 blocks, A11 = IM ⊗ Z, A12 = IM ⊗ en, A13 = [0]nM×Mp, A14 = [0]nM×Mp, A15 = InM, and A16 = −InM. The second row of A consists of the following 6 blocks, A21 = IMp, A22 = [0]Mp×M, A23 = −IMp, A24 = IMp, A25 = [0]Mp×nM, and A26 = [0]Mp×nM.
Then using standard linear program arguments, we obtain the dual as
subject to Z̃Td̃ = S1 + S2 and d̃ ∈ [0, 1]nM+Mp, where
S2 = 1/2 × (R, [0]Mp×M)T eMp, , and Z̃ is defined as follows. Z̃ consists of two rows of blocks. The first row of Z̃ includes two blocks, Z̃11 = IM ⊗ Z and Z̃12 = IM ⊗ en. The second row includes two blocks, Z̃21 = R and Z̃22 = [0]Mp×M.
Computation time
For p = 100 under the auto-regressive structure, i.e., Table 2, we calculate the summary statistics of the CPU time (seconds). The average time (and the standard error of the sample mean) for QR, QR-LASSO, QR-aLASSO, FAL, FAS, Het-QR is 1.6(0.003), 10.2(0.017), 10.0(0.030), 867.0(8.059), 455.3(2.641), 90.3(0.391), respectively. For p = 600 under the auto-regressive structure, the CPU time for QR-LASSO is 372.6(1.453); the time for QR-aLASSO and Het-QR is 362.0(1.729) and 3717.5(36.401), respectively (excluding the time for calculating penalty weights).
Proof of the theorems
We now give the proofs of the theorems. We note that throughout the proofs, the capital letter C stands for a generic constant whose value may differ from formula to formula.
Recall that the definitions of 𝒩, I, II, sn and Im have been given in the main text. Define the index set J = {1 ≤ j ≤ pn : there exists 1 ≤ m ≤ M such that γ*mj ≠ 0} with cardinality |J| = kn. For every fixed j ∈ J, define the index set Mj = {1 ≤ m ≤ M : γ*mj ≠ 0}. Clearly, the oracle index set I = {(m, j) : m ∈ Mj, j ∈ J}. For every fixed 1 ≤ m ≤ M, define IIm = {1 ≤ j ≤ pn : (m, j) ∈ II}.
We need to define some notations for our proof. Let the vectors γmI, and γ̂mI be the subvectors of γm, and γ̂m corresponding to the index set Im, respectively. Define the subvectors of γ, γ* and γ̂ corresponding to the oracle index set I as and . Let and . Define the vector θI as the subvector of the parameter vector θ corresponding to the oracle index set I. Recall that the vectors and θ̂I are the subvectors of the vectors θ* and θ̂ corresponding to the oracle index set I. Clearly, and .
Similarly, let the vectors γmII, and γ̂mII be the subvectors of γm, and γ̂m corresponding to the index set IIm, respectively. Define the subvectors of γ, γ* and γ̂ corresponding to the index set II as and . Define the vector θII as the subvector of the parameter vector θ corresponding to the index set II. Recall that the vectors and θ̂II are the subvectors of the vectors θ* and θ̂ corresponding to the index set II, respectively. Clearly, θII = γII, and θ̂II = γ̂II.
For convenience, write and ; and and .
We first give a lemma related to the loss function Qn(θ). The lemma plays an important role in the proof of our theorems.
Lemma .1
Under conditions (L1)–(L4), we have
where and
| (7) |
Proof of Lemma .1
Let ψm(u) be a sub-derivative of the quantile function ρm(u), then ψm(u) = τm − I(u < 0) + lm I(u = 0) with lm ∈ [−1, 0]. Let Tn = Qn(θ) − Qn(θ*). Then, there exists an lmi ∈ [−1, 0] for every 1 ≤ m ≤ M and 1 ≤ i ≤ n such that
| (8) |
where θ̄m is on the linear segment between θm and , and may be written as with ηm ∈ (0, 1). For Tn3, note that Yi has a continuous conditional distribution given Zi, hence almost surely for all i = 1, …, n and m = 1, …, M, thus Tn3 = 0 almost surely. Subsequently, we can write
where E(Tn) = E(Tn1) + E(Tn2), and E denotes the conditional expectation given Z. Note that E(Tn1) = 0 because from (1). Rename (Tn2 − E(Tn2)) as Rn2 and E(Tn) as Tn4, then we have
| (9) |
For Rn2 (recall Tn2 in (8)), let , then
| (10) |
where . Note that and that f (t|Zi) is bounded under condition (L1) and under condition (L2). Hence, making use of independence, under conditions (L1) and (L2), for all 1 ≤ m ≤ M and , we can see
Note that
where ξmi is between and . Thus, . By Chebyshev’s inequality, we get . Together with (10), by Cauchy–Schwarz’s inequality, we get
| (11) |
For Tn4, the third term in (9), write E(Tn) = en(θ) − en(θ*), where
is second order differentiable with respect to θm under condition (L1), with gradient , and Hessian matrix .
Let G(θ) and H(θ) be gradient and Hessian matrix of en(θ), then and . It is easy to see that G(θ*) = 0 by in (1). By Taylor expansion of Tn4 = E(Tn) = en(θ) − en(θ*) at θ*, we have
| (12) |
where ξ ∈ (0, 1) and ζmi is between and . Trivially,
| (13) |
Note that f ′(t|Zi) is bounded under condition (L1), under condition (L2), and under condition (L3). Hence, for all 1 ≤ i ≤ n, 1 ≤ m ≤ M, and , we have , and . Hence, from (13), we get . Together with (9), (11) and (12), we obtain Tn = Tn1 + Tn41 + Rn(θ), where , and
This completes the proof of the lemma.
Proof of Theorem 1
Recall the definition of Ln(θ) = Qn(θ) + Pn(γ) in (2). Let θ − θ* = νn u, where νn > 0, u ∈ ℝM(p+1) and ‖u‖2 = 1. It is easy to see that ‖θ − θ*‖2 = νn. Based on the continuity of Ln, if we can prove that in probability
| (14) |
then the minimal value point of Ln(θ* + νnu) on {u : ‖u‖2 ≤ 1} exists and lies in the unit ball {u : ‖u‖2 ≤ 1} in probability. We will prove that (14) holds.
For Qn(θ), because for all m = 1, 2, …, M, by Lemma .1 with ηn = νn under conditions (L1) to (L4), we get
| (15) |
where .
For Pn(γ), let pn(γ) = Pn(γ) − Pn(γ*). Define p1n(γ) = pn(γI, 0), that is,
| (16) |
Clearly, p1n(γ) ≤ pn(γ) and p1n(γ*) = pn(γ*) = 0.
Define ln(θ) = qn(θ) + pn(γ) and l1n(θ) = qn(θ) + p1n(γ), both of which are continuous. Clearly, l1n(θ) ≤ ln(θ) and l1n(θ*) = ln(θ*) = 0. Note that (14) is equivalent to that in probability
| (17) |
Note that
where the last inequality follows from the fact that . Note that where umj is a component of u. Hence,
| (18) |
where kn = |J| ≤ sn. By (15) and the above, we have
| (19) |
where .
For the quadratic term in (19), from condition (L5),
| (20) |
For the linear term in (19), by the independence of (Zi, Yi) and (Zj, Yj) for all i ≠ j and the fact that , we get
Then, it follows that
| (21) |
which implies that . Together with (19) and (20), in probability,
| (22) |
Now take where C0 is a sufficiently large constant. Under condition (L4), i.e., , for the last three terms in (22), we can check that , and . Hence,
| (23) |
Therefore, in probability there exists a local minimizer θ̂ of Ln(θ) such that ‖θ̂ − θ*‖2 < νn. This completes the proof of the theorem.
Proof of Theorem 2
For the quantile function Qn(θ), because for all 1 ≤ m ≤ M, by Lemma .1 with ηn = νn, under conditions (L1)–(L4), we have
| (24) |
where .
Let θ − θ* = νnu where νn > 0 and u ∈ ℝM(p+1). Then ‖θ − θ*‖2 ≤ νn if and only if ‖u‖2 ≤ 1. Let where uI and uII are subvectors of u corresponding to the index sets I and II, respectively. Clearly, . Note that and .
Define the ball Θ̃n = {θ = θ* + νnu ∈ Θn : ‖u‖2 ≤ 1} with . For any , we can see , and ‖θII‖2 = νn‖θII‖2, where ‖θII‖2 ≤ 1.
Consider that Qn(θI, θII) − Qn(θI, 0) = Qn(θI, θII) − Qn(θ*) − (Qn(θI, 0) − Qn(θ*))
From (21), we can see . Under condition (L5), . we have . From (24), we have . Hence,
| (25) |
where . Recall Ln(θI, θII) − Ln(θI, 0) = Qn(θI, θII) − Qn(θI, 0) + Pn(γI, γII) − Pn(γI, 0). From (25), we get
| (26) |
Note that (nλn)−1 (P(γI, γII) − Pn(γI, 0))
| (27) |
For all θ ∈ Θ̃n, we have , which implies that for all j = 1, 2, …, p, where ‖ωn‖∞ = max1≤m≤M, 1≤j≤p |ωmj|. Recall that . From (27), it follows that for all θ ∈ Θ̃n,
| (28) |
Define Ωn = {θ = θ* + νnu ∈ Θ̃n : ‖uII‖2 > 0} and . Clearly, . From (26) and (28), we obtain in probability
where C̃1 and C̃2 are positive constants. Under the given conditions, and as n → ∞. Hence, infθ∈Ωn (Ln(θI, θII) − Ln(θI, 0)) > 0 in probability. Thus, infθ∈Ωn Ln(θ) ≥ infθ∈Ωn (Ln(θI, θII) − Ln(θI, 0)) + infθ∈Ωn Ln(θI, 0) > infθ∈Ωn Ln(θI, 0) = infθ∈Θ̃n Ln(θI, 0) ≥ infθ∈Θ̃n Ln(θ), which implies that . Therefore, the minimal value point of Ln(θ) on Θ̃n only lies in its subset .
From Theorem 1, we know that in probability θ̂ ∈ Θ̃n and that θ̂ is a local minimizer of Ln(θ). Hence, in probability, which implies that γ̂II = 0.
Proof of Theorem 3
Let where νn > 0 and u ∈ ℝM+sn. Because for all 1 ≤ m ≤ M, and due to conditions (L1)–(L3) and that , Lemma .1 implies that
| (29) |
where , BnI is given in (4),
| (30) |
and UimI is given in Section 3. Note that AnI and BnI are the sub-vector(matrix) of An and Bn in (6) corresponding to the index set I, respectively.
For Pn(γ), define pn(γ) = Pn(γ) − Pn(γ*). Let uI be the subvector of u corresponding to the subvector γI in θI. Then, we have
which is p1n(γ) given in (16). By (18) in the proof of Theorem 1, we have . This, combined with (29), implies that
| (31) |
where .
Define the ball with , where u ∈ ℝM+sn and C0 is a positive constant. Given that , and , we see that and . Hence, considering (31), for all we have , which implies that for all ,
| (32) |
By Theorems 1 and 2, we know that in probability a local minimizer θ̂ of Ln(θ) − Ln(θ*) lies in the ball ΘnI, which implies that in probability. Hence, from (32), we have
| (33) |
where t ∈ ℝM+sn is a unit vector. Since tTBnI t ≤ λmax(BnI) ≤ λmax(Bn) ≤ Cn by condition (L5), we have . Multiplying both sides of (33) by a vector , where b ∈ ℝM+sn is any unit vector, we obtain
| (34) |
Let ξn = bTAnI, and write where bm is the subvector of b corresponding to the subvector of . By the definition of AnI in (30), we see that
where with . Clearly, {ζi, i = 1, …, n} is an independent sequence. Next, we will verify that ξn satisfies the Lindeberg condition
| (35) |
where .
For , it is easy to see that from the fact , and that
where Σn is given in (3). Hence, we obtain that and that, by independence . Under condition (L6), we obtain
| (36) |
Note that under condition (L2). By Cauchy–Schwarz’s inequality, , which implies that from the fact . Hence, we have
Together with (36), this implies that, as n → ∞,
which shows that Lindeberg’s condition (35) holds. Hence,
Together with (34) and (36), this implies that
This completes the proof of the theorem.
References
- Avery CL, He Q, North KE, Ambite JL, Boerwinkle E, Fornage M, Hindorff LA, Kooperberg C, Meigs JB, Pankow JS, et al. A phenomics-based strategy identifies loci on APOC1, BRAP, and PLCG1 associated with metabolic syndrome phenotype domains. PLoS Genet. 2011;7:e1002322. doi: 10.1371/journal.pgen.1002322.
- Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 2001;96:1348–1360.
- Gasso G, Rakotomamonjy A, Canu S. Recovering sparse signals with a certain family of nonconvex penalties and DC programming. IEEE Trans. Signal Process. 2009;57:4686–4698.
- Holland EC. Glioblastoma multiforme: the terminator. Proc. Natl. Acad. Sci. 2000;97:6242–6244. doi: 10.1073/pnas.97.12.6242.
- Huang J, Ma S, Xie H, Zhang CH. A group bridge approach for variable selection. Biometrika. 2009;96:339–355. doi: 10.1093/biomet/asp020.
- Jiang L, Bondell HD, Wang HJ. Interquantile shrinkage and variable selection in quantile regression. Comput. Statist. Data Anal. 2014;69:208–219. doi: 10.1016/j.csda.2013.08.006.
- Jiang L, Wang HJ, Bondell HD. Interquantile shrinkage in regression models. J. Comput. Graph. Statist. 2013;22:970–986. doi: 10.1080/10618600.2012.707454.
- Koenker R. Quantile regression for longitudinal data. J. Multivariate Anal. 2004;91:74–89.
- Koenker R. quantreg: Quantile Regression. 2015. URL: http://CRAN.R-project.org/package=quantreg. R package version 5.11.
- Koenker R, Bassett JG. Regression quantiles. Econometrica. 1978;46:33–50.
- Landmark-Høyvik H, Dumeaux V, Nebdal D, Lund E, Tost J, Kamatani Y, Renault V, Børresen-Dale AL, Kristensen V, Edvardsen H. Genome-wide association study in breast cancer survivors reveals SNPs associated with gene expression of genes belonging to MHC class I and II. Genomics. 2013;102:278–287. doi: 10.1016/j.ygeno.2013.07.006.
- Li Y, Zhu J. L1-norm quantile regression. J. Comput. Graph. Statist. 2008;17:163–185.
- Mazumder R, Friedman JH, Hastie T. Sparsenet: Coordinate descent with nonconvex penalties. J. Amer. Statist. Assoc. 2011;106:1125–1138. doi: 10.1198/jasa.2011.tm09738.
- Peng L, Xu J, Kutner N. Shrinkage estimation of varying covariate effects based on quantile regression. Stat. Comput. 2014;24:853–869. doi: 10.1007/s11222-013-9406-4.
- Puputti M, Tynninen O, Sihto H, Blom T, Mäenpää H, Isola J, Paetau A, Joensuu H, Nupponen NN. Amplification of KIT, PDGFRA, VEGFR2, and EGFR in gliomas. Mol. Cancer Res. 2006;4:927–934. doi: 10.1158/1541-7786.MCR-06-0085.
- Tibshirani R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B. 1996;58:267–288.
- Tibshirani R, Saunders M, Rosset S, Zhu J, Knight K. Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. B. 2005;67:91–108.
- Wang S, Nan B, Zhu N, Zhu J. Hierarchically penalized Cox regression with grouped variables. Biometrika. 2009;96:307–322.
- Wang L, Wu Y, Li R. Quantile regression for analyzing heterogeneity in ultra-high dimension. J. Amer. Statist. Assoc. 2012;107:214–222. doi: 10.1080/01621459.2012.656014.
- Wu Y, Liu Y. Variable selection in quantile regression. Statist. Sinica. 2009;19:801–817.
- Zhao Z, Xiao Z. Efficient regressions via optimally combining quantile information. Econometric Theory. 2014;30:1272–1314. doi: 10.1017/S0266466614000176.
- Zou H. The adaptive Lasso and its oracle properties. J. Amer. Statist. Assoc. 2006;101:1418–1429.
- Zou H, Yuan M. Regularized simultaneous model selection in multiple quantiles regression. Comput. Statist. Data Anal. 2008;52:5296–5304.