Author manuscript; available in PMC: 2017 Jan 26.
Published in final edited form as: Comput Stat Data Anal. 2015 Oct 24;95:222–239. doi: 10.1016/j.csda.2015.10.007

Regularized quantile regression under heterogeneous sparsity with application to quantitative genetic traits

Qianchuan He a,*,1, Linglong Kong b,1, Yanhua Wang c, Sijian Wang d, Timothy A Chan e, Eric Holland f
PMCID: PMC5267342  NIHMSID: NIHMS807036  PMID: 28133403

Abstract

Genetic studies often involve quantitative traits. Identifying genetic features that influence quantitative traits can help to uncover the etiology of diseases. Quantile regression considers the conditional quantiles of the response variable and is able to characterize the underlying regression structure in a more comprehensive manner. On the other hand, genetic studies often involve high-dimensional genomic features, and the underlying regression structure may be heterogeneous in terms of both effect sizes and sparsity. To account for this potential genetic heterogeneity, including heterogeneous sparsity, a regularized quantile regression method is introduced. The theoretical properties of the proposed method are investigated, and its performance is examined through a series of simulation studies. A real dataset is analyzed to demonstrate the application of the proposed method.

Keywords: Heterogeneous sparsity, Quantitative traits, Variable selection, Quantile regression, Genomic features

1. Introduction

In many genetic studies, quantitative traits are collected for studying the associations between the traits and certain genomic features. For example, body mass index, lipids and blood pressure have been investigated with respect to single nucleotide polymorphisms (SNPs) (Avery et al., 2011). With the rapid progress of high-throughput genome technology, new types of quantitative traits have emerged and attracted considerable research interest, such as gene expression, DNA methylation, and protein quantification (Landmark-Høyvik et al., 2013). The analysis of these quantitative traits yields new insight into biological processes and sheds light on the genetic basis of diseases.

Typically, quantitative genetic traits are analyzed by least-squares based methods, which seek to estimate E(Y | Z), where Y is the trait and Z is the set of covariates of interest. Quantile regression (Koenker and Bassett, 1978) instead considers the conditional quantile function of Y given Z, Qτ(Y | Z), at a given τ ∈ (0, 1). When τ is fixed at 0.5, quantile regression is simply median regression, which is well known to be more robust than least-squares estimation. By examining quantiles at different τ, quantile regression provides a more complete picture of the underlying regression structure between Y and Z.
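As a toy illustration (not part of our analysis), the following sketch contrasts least-squares and quantile fits under heteroscedastic errors; it assumes the statsmodels package, and the simulated trait and covariate are purely hypothetical.

```python
# Hypothetical example: quantile regression at several tau versus least squares.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
z = rng.uniform(size=n)
y = 1.0 + 2.0 * z + (1.0 + z) * rng.normal(size=n)   # heteroscedastic error

X = sm.add_constant(z)
for tau in (0.25, 0.5, 0.75):
    fit = sm.QuantReg(y, X).fit(q=tau)
    print(tau, fit.params)        # slope varies with tau under heteroscedasticity
print(sm.OLS(y, X).fit().params)  # least squares only targets E(Y | Z)
```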

Like least-squares methods, traditional quantile regression methods only consider a handful of covariates. With the emergence of high-dimensional data, penalized quantile regression methods have been developed in recent years, and they can be broadly classified into two classes. The first class seeks to harness the information shared among different quantiles to jointly estimate the regression coefficients. Jiang et al. (2014) proposed two novel methods, the Fused Adaptive Lasso (FAL) and the Fused Adaptive Sup-norm estimator (FAS), for variable selection in interquantile regression. FAL combines the LASSO penalty (Tibshirani, 1996) with the fused lasso penalty (Tibshirani et al., 2005), while FAS imposes the grouped sup-norm penalty and the fused lasso penalty. The fused lasso penalty has the effect of smoothing the regression slopes of adjacent quantiles, hence FAL and FAS can be used to identify common quantile slopes when such a smoothing property is desired. Zou and Yuan (2008) adopted an F∞-norm penalty, which either eliminates or retains all the regression coefficients for a covariate at multiple quantiles. The method of Jiang et al. (2013) shrinks the differences among adjacent quantiles through the fused lasso penalty; however, this method does not perform variable selection at the covariate level, i.e., it does not remove any covariates from the model.

The second class of methods focuses on a single quantile at a time. Koenker (2004) imposed a LASSO penalty on the random effects in the mixed-effect quantile regression model; Li and Zhu (2008) adopted the LASSO penalty; Wu and Liu (2009) explored the SCAD penalty (Fan and Li, 2001) and the adaptive LASSO penalty (Zou, 2006), and proved the selection consistency and asymptotic normality of the proposed estimators (for a fixed dimension of covariates). Wang et al. (2012) investigated several penalties for quantile regression under the scenario of p > n, i.e., the dimension is larger than the sample size, and proved the selection consistency of their proposed methods through a novel use of subgradient theory. A recent approach by Peng et al. (2014) shares characteristics with both classes; its loss function targets a single τ, while its penalty borrows information across different quantiles; their proposed penalty was shown to achieve more accurate estimation than one that uses information from only a single quantile.

If the regression coefficients associated with a covariate are treated as a group, then some groups may be entirely zero and some other groups may be partially zero. Thus, sparsity can occur both at the group level and within the group level, and we refer to this type of sparsity as heterogeneous sparsity. In this paper, we propose an approach that conducts joint variable selection and estimation for multiple quantiles in the situation where p can diverge with n. Our proposed method is able to achieve sparsity both at the group level and within the group level. We note that FAL can potentially yield sparsity at the two levels, but this approach has not been evaluated under the scenario where the dimension p is high. To the best of our knowledge, this is the first paper that explicitly investigates heterogeneous sparsity for quantile regression. We show that our method tends to be more effective than the compared methods in handling heterogeneous sparsity when the dimension is high. We also provide theoretical justification for the proposed method. The paper is organized as follows. In Section 2, we describe the proposed method and the implementation details. In Section 3, we prove the theoretical properties of the proposed method. In Section 4, we show the results of simulation studies regarding several related methods, and in Section 5, we present an example of real data analysis for the proposed method.

2. Method

2.1. Data and model

Let Z represent the vector of p covariates, such as SNPs or genes. Let $\gamma_\tau$ be the p-dimensional coefficient vector at the τth quantile. Let Y be the random variable denoting the phenotype of interest, such as a quantitative trait in genetic studies. For a given τ ∈ (0, 1), the linear quantile regression model is

$$Q_\tau(Y\,|\,Z) = \gamma_0 + Z^T\gamma_\tau,$$

where $\gamma_0$ is the intercept, and $Q_\tau(Y\,|\,Z)$ is the τth conditional quantile of Y given Z, that is, $P_{Y|Z}\big(Y \le Q_\tau(Y\,|\,Z)\big) = \tau$.

The dimension p can be potentially very high in genomic studies, but typically it is assumed that only a limited number of genomic features contribute to the phenotype. For this reason, one needs to find a sparse estimation of γτ to identify those important genomic features. On the other hand, we also wish to consider multiple quantile levels simultaneously so that information shared among different quantile levels can be utilized. To this end, we propose the following model for the joint estimation of the regression coefficients for multiple quantiles. Given M quantile levels, 0 < τ1 < ⋯ < τM < 1, our linear quantile regression model is defined as, for τm (m = 1, …, M),

$$Q_{\tau_m}(Y\,|\,Z) = \gamma_{m0} + Z^T\gamma_{\tau_m}, \tag{1}$$

where $\gamma_{m0}$ is the intercept, and $\gamma_{\tau_m}$ is the p-dimensional coefficient vector. For ease of notation, we write $\gamma_m = \gamma_{\tau_m}$. For the above model, we further define $\gamma \equiv (\gamma_1^T, \ldots, \gamma_M^T)^T$ with $\gamma_m = (\gamma_{m1}, \ldots, \gamma_{mp})^T$, and the intercept parameter $\gamma_0 \equiv (\gamma_{10}, \ldots, \gamma_{M0})^T$.

We now focus on the sample version of the model (1). Let $\{(Y_i, Z_i^T)^T\}_{i=1}^n$ be an i.i.d. random sample of size n from the population $(Y, Z^T)^T$, where $Z_i = (Z_{i1}, Z_{i2}, \ldots, Z_{ip})^T$. The sample quantile loss function is defined as

$$Q_n(\gamma_0, \gamma) = \sum_{m=1}^M\sum_{i=1}^n \rho_m\big(Y_i - Z_i^T\gamma_m - \gamma_{m0}\big),$$

where $\rho_m(u) = u\big(\tau_m - I(u<0)\big)$ is the quantile check loss function at level $\tau_m$, with I(·) being the indicator function. To introduce sparsity to the model, we add to the loss function a penalty function

$$P_n(\gamma) = n\lambda_n\sum_{j=1}^p\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big)^{1/2},$$

where $\lambda_n$ is the tuning parameter, and $\omega_n = (\omega_{mj} : 1\le m\le M,\ 1\le j\le p)$ is the weight vector whose component $\omega_{mj} > 0$ is the weight of parameter $\gamma_{mj}$. Note that the penalty is a nonconvex function. It essentially divides the regression coefficients into p groups, where the jth group consists of the M parameters associated with the jth covariate. The motivation is that, while each quantile may have its own set of regression parameters, we wish to borrow strength across quantiles to select covariates that are important at all quantiles as well as covariates that are important at only some of the quantiles. This type of nonconvex penalty has been considered in the Cox regression model and other settings (see Wang et al. (2009) for an example), but to the best of our knowledge it has not been studied in the quantile regression model. We can choose $\omega_{mj} = |\tilde\gamma_{mj}|^{-1}$, where $\tilde\gamma_{mj}$ is some consistent estimate of $\gamma_{mj}$; for example, we may use the estimates from the unpenalized quantile regression conducted at each individual quantile level. When p < n and p is fixed, the consistency of the unpenalized estimates has been proved by Koenker and Bassett (1978). When p < n but p diverges with n, the estimates from unpenalized quantile regression remain consistent by adapting Lemma A.1 of Wang et al. (2012). Thus, our objective function is defined as

$$\sum_{m=1}^M\sum_{i=1}^n \rho_m\big(Y_i - Z_i^T\gamma_m - \gamma_{m0}\big) + n\lambda_n\sum_{j=1}^p\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big)^{1/2}.$$

For the sake of convenience, we define $\theta = (\theta_1^T,\ldots,\theta_M^T)^T$ with $\theta_m = (\gamma_{m0}, \gamma_m^T)^T$, m = 1, …, M, and denote the corresponding parameter space by $\Theta_n \subset \mathbb{R}^{M(p+1)}$. Further define $U_i = (1, Z_i^T)^T$, i = 1, …, n. Then, $Q_n(\gamma_0, \gamma)$ can be written as $Q_n(\theta)$. Emphasizing that γ is a subvector of θ, we can write the objective function as

$$L_n(\theta) \equiv Q_n(\theta) + P_n(\gamma) = \sum_{m=1}^M\sum_{i=1}^n \rho_m\big(Y_i - U_i^T\theta_m\big) + n\lambda_n\sum_{j=1}^p\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big)^{1/2}. \tag{2}$$

Let θ̂ be a local minimizer of $L_n(\theta)$ in (2) for θ ∈ Θn. Because the heterogeneity of sparsity is explicitly taken into account in this model, we call our proposed method Heterogeneous Quantile Regression (Het-QR). Our model can be modified to accommodate different weights for the losses at different quantiles; that is, $Q_n(\theta)$ may take the form $\sum_{m=1}^M\pi_m\sum_{i=1}^n\rho_m(Y_i - U_i^T\theta_m)$, where $\pi_m$ is the weight for the mth quantile. Some examples of the choice of the weights $\pi_m$ can be found in Koenker (2004) and Zhao and Xiao (2014).

2.2. Implementation

We design the following algorithm to implement the proposed method. First, we show in the Appendix that the objective function can be transformed into

$$\operatorname*{arg\,min}_{\theta,\,\xi}\ \sum_{m=1}^M\sum_{i=1}^n \rho_m\big(Y_i - U_i^T\theta_m\big) + \lambda_1\sum_{j=1}^p\xi_j + \sum_{j=1}^p\xi_j^{-1}\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big),$$

where ξ = (ξ1, …, ξp) are newly introduced nonnegative parameters. Then, the new objective function can be solved by the following iterative algorithm:

  • Step 1: We first fix θ and solve for ξj, j = 1, …, p. In this case, each ξj has a closed-form solution, namely $\hat\xi_j = \big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\big)^{1/2}\lambda_1^{-1/2}$, j = 1, …, p.

  • Step 2: We fix ξj, j = 1, …, p, and solve for θ. That is, we aim to solve
    $$\operatorname*{arg\,min}_{\theta}\ \sum_{m=1}^M\sum_{i=1}^n \rho_m\big(Y_i - U_i^T\theta_m\big) + \sum_{j=1}^p\sum_{m=1}^M \xi_j^{-1}\omega_{mj}|\gamma_{mj}|.$$
    We can formulate this objective function as a linear program and derive its dual form (see the Appendix for details); the optimization can then be carried out by existing linear programming routines, and we use the quantreg R package (Koenker, 2015) in our implementation.
  • Step 3: Iterate Steps 1 and 2 until convergence. Due to the nonconvexity of the penalty function, the resulting estimate is a local minimizer; a sketch of the full iteration is given below.
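The following is a minimal sketch of this iteration, assuming NumPy/SciPy; it replaces the quantreg dual solver used in our implementation with a generic linear-programming formulation of Step 2, and the function names (`weighted_l1_qr`, `het_qr`) are illustrative rather than part of any released package. Note that, given ξ, the Step 2 objective separates across the M quantiles into M weighted-L1 penalized quantile regressions.

```python
# Sketch of the Het-QR iteration (Section 2.2); names and defaults are illustrative.
import numpy as np
from scipy.optimize import linprog


def weighted_l1_qr(Y, Z, tau, w):
    """One Step-2 subproblem: min_{g0,g} sum_i rho_tau(Y_i - g0 - Z_i'g) + sum_j w_j |g_j|,
    written as an LP in (g0, s, t, u, v) with g = s - t and residual = u - v."""
    n, p = Z.shape
    c = np.concatenate([[0.0], w, w, tau * np.ones(n), (1 - tau) * np.ones(n)])
    # equality constraints: g0*1 + Z(s - t) + u - v = Y
    A_eq = np.hstack([np.ones((n, 1)), Z, -Z, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] + [(0, None)] * (2 * p + 2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=Y, bounds=bounds, method="highs")
    g0 = res.x[0]
    g = res.x[1:1 + p] - res.x[1 + p:1 + 2 * p]
    return g0, g


def het_qr(Y, Z, taus, lam, omega, n_iter=20, eps=1e-8):
    """Iterate Step 1 (closed-form xi_j) and Step 2 (M weighted-L1 quantile
    regressions); omega is the M x p matrix of penalty weights omega_mj."""
    n, p = Z.shape
    lam1 = (n * lam / 2.0) ** 2          # from 2*sqrt(lambda_1) = n*lambda_n (Appendix)
    gamma = np.ones((len(taus), p))      # crude start; unpenalized fits also work
    gamma0 = np.zeros(len(taus))
    for _ in range(n_iter):
        # Step 1: xi_j = (sum_m omega_mj |gamma_mj|)^{1/2} / lambda_1^{1/2}
        xi = np.sqrt(np.maximum((omega * np.abs(gamma)).sum(axis=0), eps) / lam1)
        # Step 2: given xi, solve one weighted-L1 quantile regression per quantile
        for m, tau in enumerate(taus):
            gamma0[m], gamma[m] = weighted_l1_qr(Y, Z, tau, omega[m] / xi)
    return gamma0, gamma
```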

3. Theoretical properties

Now we investigate the asymptotic properties of the proposed method. FAL and FAS considered p to be fixed; we study the situation where p can diverge with n. Let the true value of θ be θ*, with corresponding true values $\gamma_{m0}^*$, $\gamma_{mj}^*$ and γ* of $\gamma_{m0}$, $\gamma_{mj}$ and γ, respectively. Let the number of nonzero elements in γ* be s. To emphasize that s and p can go to infinity, we use $s_n$ and $p_n$ when necessary. For Theorems 1 and 2 (to be shown), $p_n$ is of order lower than $n^{1/2}$; for Theorem 3, $p_n$ is of order lower than $n^{1/6}$.

We define some index sets to be used in our theorems. Let $\mathcal{N} = \{(m, j) : 1\le m\le M,\ 1\le j\le p_n\}$. For the true parameters, define the oracle index set $I = \{(m,j)\in\mathcal{N} : \gamma_{mj}^*\ne 0\}$ and its complement $II = \{(m,j)\in\mathcal{N} : \gamma_{mj}^* = 0\}$. Assume that I has cardinality $|I| = s_n$.

We also define some notation used in our theorems.

Define $d_{nI} = \max_{(m,j)\in I}\omega_{mj}$ and $d_{nII} = \min_{(m,j)\in II}\omega_{mj}\big(\max_{(m,j)\in\mathcal{N}}\omega_{mj}\big)^{-1/2}$. Define $\theta_I^*$ and $\hat\theta_I$ as the subvectors of θ* and θ̂ corresponding to the oracle index set I, respectively. For every fixed 1 ≤ m ≤ M, define the index set $I_m = \{1\le j\le p_n : \gamma_{mj}^*\ne 0\}$. Let

$$\Sigma_n = (\Sigma_{lm})_{M\times M} \quad\text{with}\quad \Sigma_{lm} = \big(\min(\tau_m,\tau_l) - \tau_m\tau_l\big)\,E\big(U_{ilI}U_{imI}^T\big), \tag{3}$$
$$B_{nI} = \mathrm{Diag}(B_1, B_2, \ldots, B_M) \quad\text{with}\quad B_m = \sum_{i=1}^n f\big(U_i^T\theta_m^*\,|\,Z_i\big)\,U_{imI}U_{imI}^T, \tag{4}$$

where $U_{imI} = (1, Z_{imI}^T)^T$ with $Z_{imI}$ being the subvector of $Z_i$ corresponding to the index set $I_m$.

Let F(y|z) and f (y|z) be the conditional distribution function and the conditional density function of Y given Z = z, respectively. For any proper square matrix A, let λmin(A) and λmax(A) denote the minimum and maximum eigenvalue of A, respectively.

Before stating the main theorems, we need the following regularity conditions labeled by ℒ:

  • (L1)

    The conditional density f(y|z) has a first-order derivative f′(y|z) with respect to y, and f(y|z) and f′(y|z) are uniformly bounded away from 0 and ∞ on the support set of Y and the support set of Z;

  • (L2)

    For the random sample $Z_i = (Z_{i1}, Z_{i2}, \ldots, Z_{ip})^T$, $1\le i\le n$, there exists a positive constant $C_1$ such that $\max_{1\le i\le n,\,1\le j\le p}|Z_{ij}| \le C_1$;

  • (L3)

    For $U_i = (1, Z_i^T)^T$, i = 1, …, n, let $S_n = \sum_{i=1}^n U_iU_i^T$. There exist positive constants $C_2 < C_3$ such that $C_2 \le \lambda_{\min}(n^{-1}S_n) \le \lambda_{\max}(n^{-1}S_n) \le C_3$;

  • (L4)

    The dimension $s_n$ satisfies $s_n = a_0n^{\alpha_0}$, and the dimension $p_n$ satisfies $p_n = a_1n^{\alpha_1}$, where $0 < \alpha_0 < \alpha_1 < \tfrac12$, and $a_0$ and $a_1$ are two positive constants;

  • (L5)

    The matrix $B_n$ given in (7) (see Appendix) satisfies $C_4 \le \lambda_{\min}(n^{-1}B_n) \le \lambda_{\max}(n^{-1}B_n) \le C_5$, where $C_4$ and $C_5$ are positive constants.

  • (L6)

    The matrix $\Sigma_n$ satisfies $\lambda_{\min}(\Sigma_n) \ge C_6$, where $C_6$ is a positive constant.

Conditions (L1)–(L3) and (L5)–(L6) are standard in theoretical investigations of quantile regression. Condition (L4) specifies the magnitude of $s_n$ and $p_n$ with respect to the sample size. Under the aforementioned regularity conditions, we present the following three theorems; the proofs are relegated to the Appendix. Define $\theta_{II}^*$ and $\hat\theta_{II}$ as the subvectors of θ* and θ̂ corresponding to the index set II, respectively. Clearly, $\theta_{II}^* = 0$. Due to the nonconvexity of the penalty function, all the following theorems and their proofs (in the Appendix) pertain to a local minimizer of the objective function.

Theorem 1

Under conditions (L1)–(L5), if $\lambda_nd_{nI}^{1/2} = o\big(s_n^{-3/4}p_n^{3/4}n^{-3/4}\big)$, then the estimator θ̂ of θ* exists, is a local minimizer, and satisfies the estimation consistency $\|\hat\theta - \theta^*\|_2 = O_p\big(n^{-1/2}p_n^{1/2}\big)$.

Theorem 1 shows that the proposed method is consistent in parameter estimation. The convergence rate $O_p\big(n^{-1/2}p_n^{1/2}\big)$ is typical for settings where p diverges with n.

Theorem 2

Under conditions (L1)–(L5), if $\lambda_nd_{nI}^{1/2} = o\big(n^{-3/4}s_n^{-3/4}p_n^{3/4}\big)$ and $n^{-1/2}p_n^{1/2} = o(\lambda_nd_{nII})$, then P(θ̂II = 0) → 1.

Theorem 2 indicates that our method can distinguish the truly zero coefficients from the nonzero coefficients with probability tending to 1. It can be seen that the penalty weight dnII plays a critical role in the property of selection consistency.

Theorem 3

Under conditions (L1)–(L3) and (L5)–(L6), if $\lambda_nd_{nI}^{1/2} = O\big(n^{-1}p_n^{1/2}\big)$, $n^{-1/2}p_n^{1/2} = o(\lambda_nd_{nII})$, and the powers of $s_n$ and $p_n$ in condition (L4) satisfy $0 < \alpha_0 < \alpha_1 < \tfrac16$, then for any unit vector $b\in\mathbb{R}^{M+s_n}$ we have

$$\big(nb^T\Sigma_nb\big)^{-1/2}\,b^TB_{nI}\big(\hat\theta_I - \theta_I^*\big) \xrightarrow{d} N(0,1).$$

Theorem 3 states that the estimated nonzero coefficients are asymptotically normal. Heuristically, for given n, $\lambda_n$ and $\omega_{mj}$, the penalty in (2) has a slope tending to infinity as $\gamma_{mj}$ goes to 0, so the penalty tends to dominate small $\gamma_{mj}$; on the other hand, when $\lambda_n$ is sufficiently small, the penalty has little impact on the estimation of relatively large $\gamma_{mj}$. These properties, in combination with a proper choice of the tuning parameter, play major roles in the oracle property of the proposed estimator. The oracle property for coefficients within a group is mainly due to the penalty weights, which put a large penalty on small coefficients (and a small penalty on large coefficients).
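As a concrete check of this heuristic, differentiating the penalty in (2) with respect to a nonzero coefficient gives

$$\frac{\partial P_n(\gamma)}{\partial \gamma_{mj}} = \frac{n\lambda_n\,\omega_{mj}\,\mathrm{sign}(\gamma_{mj})}{2\big(\sum_{l=1}^M\omega_{lj}|\gamma_{lj}|\big)^{1/2}},$$

so the penalty slope diverges as the weighted group sum $\sum_l\omega_{lj}|\gamma_{lj}|$ shrinks toward zero, while for groups whose weighted size stays bounded away from zero the slope is of order $n\lambda_n\omega_{mj}$ and becomes negligible for sufficiently small $\lambda_n$ and weights built from consistent initial estimates.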

4. Simulation studies

We conduct simulation studies to evaluate the proposed method along with the following methods: the QR method, which applies quantile regression to each individual quantile level without any variable selection; the QR-LASSO method, which adopts the L1-penalized quantile regression for each quantile level; the QR-aLASSO method, which imposes the adaptive LASSO penalty on each quantile level (Wu and Liu, 2009); the FAL and the FAS method (Jiang et al., 2014). Both FAL and FAS contain a fused-LASSO type of penalty, which encourages the equality of the regression coefficients among different quantiles. FAL allows within-group sparsity, while FAS generates sparsity only at the group level. For Het-QR, we set the penalty weight ωmj to be the inverse of the estimate from the unpenalized quantile regression (unless specified otherwise).

We first consider a model where important covariates have nonzero regression coefficients across all (or almost all) quantiles. We simulate 6 independent covariates, each of which follows the uniform(0,1) distribution. Then we simulate the trait as

$$Y = 1.0 + \beta_1Z_1 + \beta_2Z_2 + \beta_6Z_6 + \kappa Z_6\varepsilon,$$

where $\beta_1 = 1$, $\beta_2 = 1$, $\beta_6 = 2$, $\kappa = 2$ and ε ~ N(0, 1). Under this setup, $Z_1$ and $Z_2$ have constant regression coefficients across all quantiles, while the regression coefficient of $Z_6$ at the τth quantile is $2 + 2\Phi^{-1}(\tau)$, which varies across quantiles. That is, the τth quantile of Y given $Z_1$, $Z_2$ and $Z_6$ is

$$Q_\tau(Y\,|\,Z_1,Z_2,Z_6) = 1.0 + Z_1 + Z_2 + \big(2 + 2\Phi^{-1}(\tau)\big)Z_6.$$

This model is in line with the model considered by Jiang et al. (2014). All the other 3 covariates, Z3, Z4 and Z5, have no contribution to Y. The sample size n is set to 500. To select the tuning parameter, we follow the lines of Mazumder et al. (2011) and Wang et al. (2012) to generate another dataset with sample size of 10n, and then pick the tuning parameter at which the check loss function is minimized. The total number of simulations for each experiment is 100.

We consider various criteria to evaluate the performance of the compared methods, such as the model size and the parameter estimation error (PEE). The model size refers to the number of estimated non-zero coefficients among the M quantile levels. The PEE is calculated as $\sum_{m=1}^M\sum_{j=1}^p|\hat\gamma_{mj} - \gamma_{mj}^*|/M$. To evaluate the prediction error, we simulate an independent dataset, $(Y_{pred}, Z_{pred})$, with sample size 100n, and then calculate the F-measure (FM) (Gasso et al., 2009), the quantile prediction error (QPE) and the prediction error (PE). The FM is equal to $2\times S_a/M_a$, where $S_a$ is the number of truly nonzero slopes being captured, and $M_a$ is the sum of the estimated model size and the true model size. The QPE is defined as the sample version of $\sum_{m=1}^M\big(Q_{\tau_m}(Y_{pred}\,|\,Z_{pred}) - Z_{pred}^T\hat\gamma_{\tau_m} - \hat\gamma_{m0}\big)^2/M$, averaged across all the subjects. The PE is defined as $Q_n(\hat\theta)/n$, i.e., the check loss averaged across all considered quantiles for all the test samples, evaluated on $(Y_{pred}, Z_{pred})$.
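As a small computational sketch (assuming NumPy; the helper names are purely illustrative), the selection and estimation criteria above can be computed from the estimated and true slope matrices as follows.

```python
# Illustrative computation of model size, FM, and PEE for M x p slope matrices.
import numpy as np

def model_size(gamma_hat):
    return int((gamma_hat != 0).sum())

def f_measure(gamma_hat, gamma_true):
    est_nz, true_nz = gamma_hat != 0, gamma_true != 0
    s_a = int((est_nz & true_nz).sum())        # truly nonzero slopes captured
    m_a = int(est_nz.sum() + true_nz.sum())    # estimated model size + true model size
    return 2.0 * s_a / m_a

def pee(gamma_hat, gamma_true):
    M = gamma_true.shape[0]
    return np.abs(gamma_hat - gamma_true).sum() / M
```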

For the purpose of illustration, we consider three quantiles, τ = 0.25, 0.5, 0.75. The results are shown in the upper panel of Table 1. It can be seen that when p is 6, FAL has the lowest parameter estimation error and FAS has the lowest PE, though the difference between these two methods and the other compared methods is generally quite small. Next, we increase p to 100 to evaluate the methods under a higher dimension. As shown in the lower panel of Table 1, when p is equal to 100, both FAL and FAS have deteriorated performance; for instance, their model sizes tend to be twice (or more) the true model size, and their PEE and PE are higher than those of Het-QR. This experiment shows that the performance of FAL and FAS is suboptimal when the dimensionality grows large; one potential explanation is that the penalties of FAL and FAS may overemphasize the interquantile shrinkage, which makes them less efficient when many noise covariates are present. Further research is merited. As to computation, we did not observe non-convergence for Het-QR in our experiments.

Table 1.

Comparison of Het-QR and other methods in the absence of within-group sparsity (standard error of the sample mean shown in the parenthesis).

Method Model-size FM (%) PEE ×100 QPE ×10³ PE ×10³
p = 6

QR 18 53.3(1.5) 10.2(0.6) 1041.2(0.6)
QR-LASSO 15.0(0.2) 75(0.7) 36.1(1.2) 6.7(0.5) 1038.4(0.6)
QR-aLASSO 10.9(0.2) 91(0.7) 25.6(0.9) 5.8(0.4) 1037.2(0.5)
FAL 11.1(0.2) 91(0.9) 25.3(1.0) 6.2(0.5) 1037.2(0.5)
FAS 12.1(0.2) 86(0.9) 26.4(1.1) 5.9(0.4) 1037.1(0.5)
Het-QR 9.6(0.1) 97(0.6) 26.0(0.9) 6.5(0.5) 1037.5(0.5)
p = 100

QR 300 1556.4(12.7) 325.0(5.1) 1242.1(2.7)
QR-LASSO 47.3(1.2) 33(0.7) 120.5(3.5) 23.4(1.2) 1052.4(0.9)
QR-aLASSO 16.8(0.4) 72(1.1) 46.9(1.9) 10.5(0.8) 1042.1(0.7)
FAL 17.7(0.6) 70(1.4) 41.9(1.8) 10.2(0.8) 1041.0(0.6)
FAS 23.7(0.7) 58(1.2) 58.0(2.4) 13.9(0.9) 1043.4(0.7)
Het-QR 9.3(0.1) 99(0.4) 29.5(1.2) 8.4(0.6) 1039.9(0.6)

Next, we systematically evaluate the situation where within-group sparsity exists. To introduce correlation into the covariates, we simulate 20 blocks of covariates, each block containing 5 correlated covariates. For each block, we first simulate from a multivariate normal distribution with mean equal to the unit vector and covariance matrix following either the compound symmetry or the auto-regressive correlation structure with correlation coefficient ρ = 0.5; next, we take the absolute value of the simulated normal variables as the covariates Z. The total number of covariates is 100. We specify the conditional quantile regression coefficient function γ(τ) as follows. For τ ∈ (0, 0.3], the first 8 regression slopes for Z are (0.5, 0, 0, 0, 0, 0.6, 0, 0); for τ ∈ (0.3, 0.7], the first 8 regression slopes are (0.5, 0, 0, 0, 0, 0.6, 0, 0.7); for τ ∈ (0.7, 1.0), the corresponding slopes are (0.6, 0, 0, 0, 0, 0.7, 0, 0.7). All other regression slopes are 0. Thus, the first and the sixth covariates are active at all quantiles, while the eighth covariate is active only for the last two quantile levels. To generate Y, we first simulate a random number τ ~ Uniform(0, 1) and determine γ(τ) accordingly; subsequently, we obtain

$$Y = 1.0 + Z^T\gamma(\tau) + F^{-1}(\tau),$$

where $F^{-1}$ is the inverse cumulative distribution function of some distribution F. That is, the τth quantile of Y given Z is

$$Q_\tau(Y\,|\,Z) = \begin{cases} 1.0 + F^{-1}(\tau) + 0.5Z_1 + 0.6Z_6 & \text{if } 0 < \tau \le 0.3,\\ 1.0 + F^{-1}(\tau) + 0.5Z_1 + 0.6Z_6 + 0.7Z_8 & \text{if } 0.3 < \tau \le 0.7,\\ 1.0 + F^{-1}(\tau) + 0.6Z_1 + 0.7Z_6 + 0.7Z_8 & \text{if } 0.7 < \tau < 1. \end{cases}$$

We explore different distributions for F: the standard normal distribution, the T-distribution with degrees of freedom equal to 3 (T3), and the exponential distribution with shape parameter equal to 1.
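A data-generation sketch for this design (assuming NumPy/SciPy; only the auto-regressive block correlation is shown, and the helper names are illustrative) is:

```python
# Generate covariates in correlated blocks and a trait with piecewise-constant gamma(tau).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_blocks, block, rho, p = 500, 20, 5, 0.5, 100
cov = rho ** np.abs(np.subtract.outer(np.arange(block), np.arange(block)))  # AR(1) block
Z = np.hstack([np.abs(rng.multivariate_normal(np.ones(block), cov, size=n))
               for _ in range(n_blocks)])

def gamma_tau(tau):
    g = np.zeros(p)
    g[0], g[5] = (0.5, 0.6) if tau <= 0.7 else (0.6, 0.7)  # Z1, Z6 active at all quantiles
    if tau > 0.3:
        g[7] = 0.7                                         # Z8 active only for tau > 0.3
    return g

tau = rng.uniform(size=n)
F_inv = stats.norm.ppf                  # or stats.t(3).ppf, stats.expon.ppf
Y = 1.0 + np.einsum('ij,ij->i', Z, np.vstack([gamma_tau(t) for t in tau])) + F_inv(tau)
```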

We first consider the normal distribution for F. The results are shown in Table 2. Because no variable selection is conducted, QR has much larger PEE, QPE, and PE than the other methods; for example, the PEE and QPE of QR are more than 10 times higher than those of the compared methods. QR-LASSO, QR-aLASSO, FAL, and FAS have more moderate model sizes, but still contain a number of noise features. Het-QR yields a model that is closer to the true model, in which the three considered quantiles contain 2, 2, and 3 nonzero slopes, respectively. Het-QR also appears to have the highest FM and the lowest errors for parameter estimation and prediction. Next, we consider the T3 distribution (Table 3) and the exponential distribution (Table 4), and the results show a similar pattern. These experiments indicate that Het-QR handles higher dimensions as well as heterogeneous sparsity better than the other methods.

Table 2.

Comparison of Het-QR and other methods for p = 100 under the normal distribution (standard error of the sample mean shown in the parenthesis).

Method Model-size FM (%) PEE ×100 QPE ×10³ PE ×10³
Correlation structure: auto-regressive

QR 300 1189.3(7.2) 791.6(9.1) 1606.5(3.1)
QR-LASSO 41.1(1.3) 34(0.9) 108.7(3.1) 78.5(3.1) 1353.1(1.2)
QR-aLASSO 17.8(0.5) 63(1.2) 60.4(2.5) 49.2(3.2) 1341.5(1.2)
FAL 20.3(0.8) 60(1.6) 59.5(2.6) 49.9(3.5) 1340.4(1.2)
FAS 24.8(1.1) 53(1.5) 85.1(2.6) 86.4(3.1) 1351.2(1.1)
Het-QR 8.6(0.1) 96(0.7) 34.7(1.6) 32.6(2.8) 1334.9(1.1)
Correlation structure: compound symmetry

QR 300 1218.1(8.4) 792.7(9.5) 1606.1(3.4)
QR-LASSO 40.1(1.1) 35.0(0.9) 105.9(3.1) 74.8(3.0) 1351.9(1.3)
QR-aLASSO 18.9(0.6) 61(1.3) 61.2(2.7) 48.0(3.1) 1341.6(1.3)
FAL 19.9(0.8) 61(1.4) 61.0(2.9) 54.4(4.0) 1341.2(1.3)
FAS 24.1(1.1) 55(1.6) 83.8(2.8) 89.3(3.2) 1351.3(1.2)
Het-QR 8.9(0.2) 94(0.8) 34.6(1.9) 31.4(2.9) 1334.7(1.2)

Table 3.

Comparison of the Het-QR and other methods for p = 100 under the T3 distribution (standard error of the sample mean shown in the parenthesis).

Method Model-size FM (%) PEE ×100 QPE ×10³ PE ×10³
Correlation structure: auto-regressive

QR 300 1452.1(10.2) 1149.1(15.2) 2114.3(4.5)
QR-LASSO 39.4(1.3) 35(0.9) 123.6(3.3) 103.7(3.7) 1796.8(1.4)
QR-aLASSO 19.7(0.6) 58(1.3) 81.0(3.0) 77.3(4.2) 1787.7(1.6)
FAL 20.6(0.7) 58(1.3) 76.4(3.1) 74.8(4.6) 1786.2(1.6)
FAS 24.2(0.9) 53(1.3) 99.6(3.4) 107.8(4.2) 1795.6(1.6)
Het-QR 8.9(0.2) 92(1.0) 47.3(2.3) 53.5(4.0) 1779.4(1.5)
Correlation structure: compound symmetry

QR 300 1487.4(11.2) 1156.3(15.6) 2115.2(4.7)
QR-LASSO 38.7(1.0) 35(1.2) 120.5(3.2) 99.4(3.4) 1795.5(1.6)
QR-aLASSO 20.3(0.7) 56(1.3) 81.6(3.3) 76.0(4.1) 1787.6(1.8)
FAL 22.5(0.9) 56(1.5) 82.5(3.9) 78.1(5.1) 1786.6(1.8)
FAS 25.6(1.0) 51(1.3) 99.1(3.5) 107.0(4.1) 1794.7(1.7)
Het-QR 9.1(0.3) 91(1.1) 48.6(2.9) 55.4(4.7) 1780.1(1.8)

Table 4.

Comparison of Het-QR and other methods for p = 100 under the exponential distribution (standard error of the sample mean shown in the parenthesis).

Method Model-size FM (%) PEE ×100 QPE ×10³ PE ×10³
Correlation structure: auto-regressive

QR 300 1024.2(7.1) 618.4(8.8) 1446.4(3.1)
QR-LASSO 39.3(1.1) 36(0.9) 88.7(2.5) 60.6(2.2) 1220.9(1.0)
QR-aLASSO 17.1(0.5) 66(1.2) 47.9(1.9) 37.0(2.3) 1210.2(0.9)
FAL 18.9(0.7) 63(1.5) 43.0(1.9) 33.3(3.0) 1209.5(1.0)
FAS 26.9(1.2) 51(1.7) 74.5(2.3) 79.1(3.5) 1223.8(1.1)
Het-QR 8.7(0.1) 96(0.6) 28.8(1.3) 24.2(1.8) 1205.4(0.8)
Correlation structure: compound symmetry

QR 300 1041.6(7.5) 610.9(8.3) 1444.4(2.9)
QR-LASSO 39.6(1.2) 36(0.9) 86.1(2.6) 58.0(2.4) 1220.2(1.1)
QR-aLASSO 17.0(0.5) 66(1.3) 46.6(2.0) 35.7(2.3) 1210.1(1.1)
FAL 18.7(0.7) 63(1.4) 41.8(2.0) 31.0(3.0) 1208.8(1.1)
FAS 27.6(1.4) 51(1.8) 74.3(2.5) 79.6(3.3) 1223.5(1.1)
Het-QR 8.7(0.1) 96(0.7) 27.2(1.3) 22.3(1.9) 1205.1(0.9)

We finally consider the situation where p > n. While theoretical development is still needed for this setting, this experiment evaluates the practical performance of the proposed approach. We let n = 500 and p = 600. For τ ∈ (0, 0.3], the first 8 regression slopes for Z are (0.6, 0, 0, 0, 0, 0.6, 0, 0); for τ ∈ (0.3, 0.7], the first 8 regression slopes are (0.6, 0, 0.8, 0, 0, 0.7, 0, 0.8); for τ ∈ (0.7, 1.0), the corresponding slopes are (0.8, 0, 0.8, 0, 0, 0.8, 0, 1.0). In this scenario, $Z_3$ and $Z_8$ have zero coefficients for the first quantile range, but nonzero coefficients for the other two. That is, the τth quantile of Y given Z is

$$Q_\tau(Y\,|\,Z) = \begin{cases} 1.0 + F^{-1}(\tau) + 0.6Z_1 + 0.6Z_6 & \text{if } 0 < \tau \le 0.3,\\ 1.0 + F^{-1}(\tau) + 0.6Z_1 + 0.8Z_3 + 0.7Z_6 + 0.8Z_8 & \text{if } 0.3 < \tau \le 0.7,\\ 1.0 + F^{-1}(\tau) + 0.8Z_1 + 0.8Z_3 + 0.8Z_6 + 1.0Z_8 & \text{if } 0.7 < \tau < 1. \end{cases}$$

We omit QR, FAL and FAS because they are not designed to handle the setting of ‘p > n’. QR-LASSO can be directly applied to data whose dimension exceeds the sample size. For QR-aLASSO, we derive the penalty weights from the QR-LASSO estimates. For Het-QR, we first run Het-QR with all penalty weights equal to 1 to obtain initial estimators of $\gamma_m^*$, m = 1, …, M, and then use the inverses of the initial estimates as the penalty weights; finally, we run Het-QR again to obtain θ̂. The results are shown in Table 5. Het-QR tends to yield a smaller model than the compared methods and to perform better in estimating the regression coefficients as well as in prediction.

Table 5.

Comparison of Het-QR and other methods for p > n (standard error of the sample mean shown in the parenthesis).

Method Model-size FM (%) PEE ×100 QPE ×10³ PE ×10³
Correlation structure: auto-regressive

QR-LASSO 67.6(1.8) 25(0.6) 199.0(4.2) 215.5(6.0) 1801.4(1.7)
QR-aLASSO 15.2(0.3) 73(1.1) 81.3(3.0) 113.3(6.2) 1768.8(1.6)
Het-QR 10.6(0.1) 96(0.6) 44.6(2.5) 61.0(7.0) 1753.9(1.6)
Correlation structure: compound symmetry
QR-LASSO 62.9(1.8) 27(0.7) 182.4(4.2) 199.8(5.9) 1796.4(1.8)
QR-aLASSO 15.2(0.3) 74(1.0) 76.7(2.9) 103.5(5.9) 1766.6(1.6)
Het-QR 10.6(0.1) 96(0.7) 46.4(3.0) 63.1(7.8) 1754.2(1.7)

5. Real data analysis

We collected data on 206 brain tumor patients, each with 91 gene expression levels. All patients were de-identified. All patients were diagnosed with glioma, one of the deadliest cancers. Indeed, many patients died within 1 year after diagnosis. Glioma is associated with a number of genes. We focus on the PDGFRA gene, which encodes the alpha-type platelet-derived growth factor receptor and has been shown to be an important gene for brain tumors (Holland, 2000; Puputti et al., 2006). We use this dataset to investigate how the expression of PDGFRA is influenced by other genes.

For demonstration, we set τ to 0.25, 0.5, and 0.75. For QR-LASSO, QR-aLASSO, and Het-QR, we use cross-validation to select the tuning parameter. That is, (1) we divide the data into 3 folds; (2) we use 2 folds to build the model and 1 fold to calculate the prediction error, and this is done three times; (3) we choose the $\lambda_n$ that minimizes the prediction error as the best tuning parameter (we were not able to obtain an independent dataset with sample size 10n to determine the tuning parameter; it would be meaningful to compare the two procedures when such a dataset becomes available in the future). For FAL and FAS, we follow Jiang et al. (2014) and use BIC and AIC to determine the tuning parameter; the corresponding methods are named FAL-BIC, FAL-AIC, FAS-BIC, and FAS-AIC. Hence, in total 7 approaches are compared. We first examine the model sizes. For all three quantiles combined, the numbers of nonzero coefficients in the seven models are 89 (QR-LASSO), 47 (QR-aLASSO), 25 (Het-QR), 49 (FAL-BIC), 182 (FAL-AIC), 93 (FAS-BIC), and 167 (FAS-AIC). For illustration, we list some of the estimated regression coefficients in Table 6. It can be seen that the coefficients for a given gene often differ among quantiles. For a better view of the regression coefficients across quantiles, we plot the estimated coefficients for the first 30 covariates (Fig. 1). Table 6 and Fig. 1 show that most models (except FAS-BIC and FAS-AIC) demonstrate heterogeneous sparsity, i.e., some covariates have nonzero effects in only one or two of the three quantiles. FAS-BIC and FAS-AIC do not show this type of sparsity because of the sup-norm penalty they adopt, which either selects or removes a covariate for all the quantiles. The FAL-AIC and FAS-AIC models contain more nonzero estimates than FAL-BIC and FAS-BIC, consistent with the fact that BIC favors smaller models than AIC. Compared with the other methods, Het-QR yields a smaller model, which may be easier to interpret and may help prioritize candidate genes for further functional study.
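A sketch of this cross-validation step (with `fit` and `check_loss` as hypothetical placeholders for a model-fitting routine and the multi-quantile check loss used as the prediction error) is:

```python
# Illustrative 3-fold cross-validation for choosing lambda_n.
import numpy as np

def cv_select_lambda(Y, Z, taus, lambdas, fit, check_loss, n_folds=3, seed=0):
    rng = np.random.default_rng(seed)
    folds = rng.permutation(len(Y)) % n_folds
    cv_err = []
    for lam in lambdas:
        err = 0.0
        for k in range(n_folds):
            train, test = folds != k, folds == k
            model = fit(Y[train], Z[train], taus, lam)        # build on two folds
            err += check_loss(model, Y[test], Z[test], taus)  # evaluate on the held-out fold
        cv_err.append(err / n_folds)
    return lambdas[int(np.argmin(cv_err))]
```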

Table 6.

A snapshot of the estimated regression coefficients (only 5 covariates are shown).

Gene τ QR-LASSO QR-aLASSO Het-QR FAL-BIC FAL-AIC FAS-BIC FAS-AIC
0.25 0.10 0.06
POLR2A 0.5 0.20 0.06
0.75 0.20 0.06

0.25 0.13 0.30 0.09 0.19
SDHA 0.5 0.13 0.26 0.09 0.19
0.75 0.03 0.32 0.2 0.13 0.58 0.09 0.19

0.25 0.09 0.01 0.04
CDKN2A 0.5 0.03 0.03 0.01 0.04
0.75 0.02 0.01 0.04

0.25 0.10 0.10 0.19
CDKN2C 0.5 0.02 0.07 0.13 0.19 0.26 0.10 0.19
0.75 0.05 0.33 0.25 0.19 0.34 0.10 0.19

0.25 0.07 0.03 0.05
DLL3 0.5 0.06 0.03 0.05
0.75 0.06 0.12 0.07 0.01 0.16 0.03 0.05

Note: Zero estimates are left blank.

Fig. 1.


Graphic view of the first 30 regression coefficients estimated by different methods. Estimates are thresholded at 0.4 and −0.4, and only nonzero estimates are shown.

The covariates selected by Het-QR are shown in Table 7. Consistent with the model assumption, the estimated regression coefficients show heterogeneity among quantiles. For example, the CDKN2C gene has a zero coefficient at τ = 0.25, and nonzero coefficients at τ = 0.5 and 0.75. In contrast, some other genes, such as BMP2 and SLC4A4, have nonzero coefficients across all the considered quantiles. This suggests that the expression of PDGFRA is influenced by other genes in a delicate manner that may not be fully characterized by least-squares methods or by quantile regression methods that fail to account for the genetic heterogeneity. CDKN2C encodes a cyclin-dependent kinase inhibitor, and BMP2 and SLC4A4 encode a bone morphogenetic protein and a sodium bicarbonate cotransporter, respectively. This indicates that PDGFRA’s expression is associated with genes with a wide spectrum of cellular functions. The gene EGFR has non-positive regression coefficients, suggesting that there may be some negative control between PDGFRA and EGFR. Future biological studies may provide new insight into the gene regulation of PDGFRA.

Table 7.

The model selected by the Het-QR method.

Estimated regression coefficients

Gene τ = 0.25 τ = 0.5 τ = 0.75
SDHA 0.2
BMP2 0.35 0.31 0.34
CDKN2C 0.13 0.25
DLL3 0.07
EGFR −0.08 −0.29
GRIA2 0.28 0.22 0.18
LTF 0.07
OLIG2 0.14 0.30 0.38
PLAT 0.20 0.21 0.25
SLC4A4 −0.21 −0.25 −0.24
TAGLN −0.20
TMEM100 0.20 0.17

One main purpose of variable selection is to apply the variables selected from one dataset to other datasets to guide statistical analysis. Along this line, we further collected brain tumor data from The Cancer Genome Atlas (TCGA) project, which contains 567 subjects. We apply the models selected by the different methods from the training data to the TCGA data to assess the prediction accuracy of the different models. We randomly split the TCGA data into two halves, use one half to estimate the regression coefficients and the other half to calculate the prediction error, and then average the prediction error across the two halves. We repeat the random splitting 400 times and calculate the average of the prediction errors. Het-QR appears to have a slightly lower prediction error than the other compared methods, but the difference among the seven methods is generally small; in detail, the observed prediction errors are 1.349 (QR-LASSO), 1.351 (QR-aLASSO), 1.345 (Het-QR), 1.362 (FAL-BIC), 1.513 (FAL-AIC), 1.355 (FAS-BIC), and 1.430 (FAS-AIC).

6. Discussion

In this article, we have proposed a variable selection method that is able to conduct joint variable selection and estimation for multiple quantiles simultaneously. The joint selection/estimation allows one to harness the strength shared among multiple quantiles and to achieve a model that is closer to the truth. In particular, our approach is able to handle the heterogeneous sparsity, under which a covariate contributes to some (but not all) of the quantiles. By considering the heterogeneous sparsity, one can better dissect the regression structure of the trait over the covariates, which in turn leads to more accurate characterization of the underlying biological mechanism.

We have conducted a series of simulation studies to evaluate the performance of our proposed approach and other approaches. Our simulation studies show that the proposed method has superior performance to its peer methods. In real data analysis, our method tends to yield a sparser model than the compared methods. The benefit of achieving a sparse model is of great importance to biological studies, because it helps biological investigators to narrow down important candidate covariates (such as genes or proteins), so that research efforts can be leveraged more efficiently. Our analysis indicates that the regression coefficients at different quantiles can be quite heterogeneous. We suggest that the interpretation of the results be guided by biological knowledge and scientific insight, and that the variability be examined by experimental studies. FAL and FAS were mainly designed to generate interquantile shrinkage for quantile regression; when a smooth γ(τ) (with respect to τ) is desired, these two methods are highly suitable and are indeed the only available methods to achieve such a goal.

We have also provided theoretical proof for the proposed method under the situation that p can grow to infinity. Our exploratory experiments suggest that Het-QR can be potentially applied to ‘p > n’, although theoretical work is still needed to guide future experiments in this direction. Wang et al. (2012) proposed a novel approach for studying asymptotics under the ‘p > n’ situation, and they focused on the penalties that can be written as the difference of two convex functions. The group penalty considered herein does not seem to fall into their framework. Further theoretical development is merited. While we have imposed equal weights for multiple quantiles in this paper, our method can be easily extended to accommodate different weights for different quantiles. Properly chosen weights may lead to improved efficiency of the estimated parameters (Zhao and Xiao, 2014).

Acknowledgments

The authors are grateful to the AE and two reviewers for many helpful and constructive comments. Dr. He’s research is supported by the Institutional Support from the Fred Hutchinson Cancer Research Center. Dr. Kong’s research is supported by Natural Sciences and Engineering Research Council of Canada. The authors thank Dr. Huixia Judy Wang for generously providing the code for FAL and FAS, which greatly expands the breadth of this manuscript. The authors thank Dr. Li Hsu for helpful discussions.

The results shown here are in part based upon data generated by the TCGA Research Network: http://cancergenome.nih.gov/.

Appendix

Transformation of the objective function

Our proof parallels the proof of Proposition 1 in Huang et al. (2009). Consider the transformed objective function

$$\begin{aligned} \min_{\theta,\,\xi}\ &\sum_{m=1}^M\sum_{i=1}^n \rho_m\big(Y_i - U_i^T\theta_m\big) + \lambda_1\sum_{j=1}^p\xi_j + \sum_{j=1}^p\xi_j^{-1}\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big)\\ &= \min_{\theta}\Bigg\{\sum_{m=1}^M\sum_{i=1}^n \rho_m\big(Y_i - U_i^T\theta_m\big) + \min_{\xi}\Big\{\lambda_1\sum_{j=1}^p\xi_j + \sum_{j=1}^p\xi_j^{-1}\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big)\Big\}\Bigg\}. \end{aligned}\tag{5}$$

By the Cauchy–Schwarz inequality, we have

$$\lambda_1\sum_{j=1}^p\xi_j + \sum_{j=1}^p\xi_j^{-1}\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big) \ \ge\ 2\sqrt{\lambda_1}\,\sum_{j=1}^p\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big)^{1/2},$$

and it follows that (5) is equivalent to

$$\min_{\theta}\ \sum_{m=1}^M\sum_{i=1}^n \rho_m\big(Y_i - U_i^T\theta_m\big) + 2\sqrt{\lambda_1}\,\sum_{j=1}^p\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big)^{1/2}. \tag{6}$$

Now, let $2\sqrt{\lambda_1} = n\lambda_n$; then (6) is identical to the original objective function (2).

Derivation of the primal and dual problem

Let $\lambda_{mj} = \xi_j^{-1}\omega_{mj}$; then, in Step 2 of Section 2.2, we aim to solve

$$\min_{\gamma,\,\gamma_0}\Big\{\sum_{m=1}^M\sum_{i=1}^n \rho_m\big(Y_i - Z_i^T\gamma_m - \gamma_{m0}\big) + \sum_{j=1}^p\sum_{m=1}^M\lambda_{mj}|\gamma_{mj}|\Big\}.$$

Let $e_n$ denote the vector of ones of length n, and $\lambda_m$ the vector of $\lambda_{mj}$ (j = 1, …, p). With slight abuse of notation, let Y be the n × 1 vector consisting of the $Y_i$, and Z the n × p matrix consisting of the $Z_i$. The above objective function is equivalent to

$$\min_{u_m,\,v_m,\,\gamma_m,\,s_m,\,t_m}\ \sum_{m=1}^M \tau_m e_n^Tu_m + (1-\tau_m)e_n^Tv_m + \lambda_m^Ts_m + \lambda_m^Tt_m,$$

subject to $u_m - v_m = Y - Z\gamma_m - \gamma_{m0}e_n$, $s_m - t_m = \gamma_m$, $u_m \ge 0$, $v_m \ge 0$, $s_m \ge 0$, and $t_m \ge 0$.

Let $0_r$ be the zero vector of length r, $\lambda_* = (\lambda_1^T,\ldots,\lambda_M^T)^T$, $s_* = (s_1^T,\ldots,s_M^T)^T$, $t_* = (t_1^T,\ldots,t_M^T)^T$, $u_* = (u_1^T,\ldots,u_M^T)^T$, $v_* = (v_1^T,\ldots,v_M^T)^T$. Let $Y_{(M)}$ denote the vector in which Y is stacked M times. Let

$$c = \big(0_{Mp}^T,\ 0_M^T,\ \lambda_*^T,\ \lambda_*^T,\ \tau_1e_n^T,\ \ldots,\ \tau_Me_n^T,\ (1-\tau_1)e_n^T,\ \ldots,\ (1-\tau_M)e_n^T\big)^T,$$

$x = (\gamma^T, \gamma_0^T, s_*^T, t_*^T, u_*^T, v_*^T)^T$, and $b = (Y_{(M)}^T, 0_{Mp}^T)^T$. Then, the linear programming primal of the above objective function can be written as

$$\min_x\ c^Tx,$$

subject to $Ax = b$ and $(s_*^T, t_*^T, u_*^T, v_*^T)^T \ge 0$, where A is a matrix consisting of two rows of blocks. The first row of A consists of 6 blocks, $A_{11} = I_M\otimes Z$, $A_{12} = I_M\otimes e_n$, $A_{13} = [0]_{nM\times Mp}$, $A_{14} = [0]_{nM\times Mp}$, $A_{15} = I_{nM}$, and $A_{16} = -I_{nM}$. The second row of A consists of the following 6 blocks, $A_{21} = I_{Mp}$, $A_{22} = [0]_{Mp\times M}$, $A_{23} = -I_{Mp}$, $A_{24} = I_{Mp}$, $A_{25} = [0]_{Mp\times nM}$, and $A_{26} = [0]_{Mp\times nM}$.

Then using standard linear program arguments, we obtain the dual as

$$\max_{\tilde d}\ b^T\tilde d,$$

subject to $\tilde A^T\tilde d = S_1 + S_2$ and $\tilde d \in [0, 1]^{nM+Mp}$, where

$$S_1 = \big((1-\tau_1)e_n^TZ,\ \ldots,\ (1-\tau_M)e_n^TZ,\ n(1-\tau_1),\ \ldots,\ n(1-\tau_M)\big)^T,$$

$S_2 = \tfrac12\big(R,\ [0]_{Mp\times M}\big)^Te_{Mp}$, $R = \mathrm{diag}\big((2\lambda_1^T,\ldots,2\lambda_M^T)^T\big)$, and $\tilde A$ is defined as follows. $\tilde A$ consists of two rows of blocks. The first row of $\tilde A$ includes two blocks, $\tilde A_{11} = I_M\otimes Z$ and $\tilde A_{12} = I_M\otimes e_n$. The second row includes two blocks, $\tilde A_{21} = R$ and $\tilde A_{22} = [0]_{Mp\times M}$.

Computation time

For p = 100 under the auto-regressive structure, i.e., Table 2, we calculate the summary statistics of the CPU time (seconds). The average time (and the standard error of the sample mean) for QR, QR-LASSO, QR-aLASSO, FAL, FAS, Het-QR is 1.6(0.003), 10.2(0.017), 10.0(0.030), 867.0(8.059), 455.3(2.641), 90.3(0.391), respectively. For p = 600 under the auto-regressive structure, the CPU time for QR-LASSO is 372.6(1.453); the time for QR-aLASSO and Het-QR is 362.0(1.729) and 3717.5(36.401), respectively (excluding the time for calculating penalty weights).

Proof of the theorems

We now give the proofs of the theorems. Throughout the proofs, the letter C denotes a generic positive constant whose value may differ from formula to formula.

Recall that the definitions of $\mathcal{N}$, I, II, $s_n$ and $I_m$ have been given in the main text. Define the index set $J = \{1\le j\le p_n : \text{there exists } 1\le m\le M \text{ such that } \gamma_{mj}^*\ne 0\}$ with cardinality $|J| = k_n$. For every fixed $j\in J$, define the index set $M_j = \{1\le m\le M : \gamma_{mj}^*\ne 0\}$. Clearly, the oracle index set $I = \{(m,j) : m\in M_j,\ j\in J\}$. For every fixed $1\le m\le M$, define $II_m = \{1\le j\le p_n : \gamma_{mj}^* = 0\}$.

We need some further notation for the proofs. Let the vectors $\gamma_{mI}$, $\gamma_{mI}^*$ and $\hat\gamma_{mI}$ be the subvectors of $\gamma_m$, $\gamma_m^*$ and $\hat\gamma_m$ corresponding to the index set $I_m$, respectively. Define the subvectors of γ, γ* and γ̂ corresponding to the oracle index set I as $\gamma_I = (\gamma_{1I}^T,\ldots,\gamma_{MI}^T)^T$, $\gamma_I^* = (\gamma_{1I}^{*T},\ldots,\gamma_{MI}^{*T})^T$ and $\hat\gamma_I = (\hat\gamma_{1I}^T,\ldots,\hat\gamma_{MI}^T)^T$. Let $\theta_{mI} = (\gamma_{m0}, \gamma_{mI}^T)^T$, $\theta_{mI}^* = (\gamma_{m0}^*, \gamma_{mI}^{*T})^T$ and $\hat\theta_{mI} = (\hat\gamma_{m0}, \hat\gamma_{mI}^T)^T$. Define the vector $\theta_I$ as the subvector of the parameter vector θ corresponding to the oracle index set I. Recall that $\theta_I^*$ and $\hat\theta_I$ are the subvectors of θ* and θ̂ corresponding to I. Clearly, $\theta_I = (\theta_{1I}^T,\ldots,\theta_{MI}^T)^T$, $\theta_I^* = (\theta_{1I}^{*T},\ldots,\theta_{MI}^{*T})^T$ and $\hat\theta_I = (\hat\theta_{1I}^T,\ldots,\hat\theta_{MI}^T)^T$.

Similarly, let the vectors $\gamma_{mII}$, $\gamma_{mII}^*$ and $\hat\gamma_{mII}$ be the subvectors of $\gamma_m$, $\gamma_m^*$ and $\hat\gamma_m$ corresponding to the index set $II_m$, respectively. Define the subvectors of γ, γ* and γ̂ corresponding to the index set II as $\gamma_{II} = (\gamma_{1II}^T,\ldots,\gamma_{MII}^T)^T$, $\gamma_{II}^* = (\gamma_{1II}^{*T},\ldots,\gamma_{MII}^{*T})^T$ and $\hat\gamma_{II} = (\hat\gamma_{1II}^T,\ldots,\hat\gamma_{MII}^T)^T$. Define the vector $\theta_{II}$ as the subvector of θ corresponding to II. Recall that $\theta_{II}^*$ and $\hat\theta_{II}$ are the subvectors of θ* and θ̂ corresponding to II. Clearly, $\theta_{II} = \gamma_{II}$, $\theta_{II}^* = \gamma_{II}^* = 0$ and $\hat\theta_{II} = \hat\gamma_{II}$.

For convenience, write $\gamma = (\gamma_I^T, \gamma_{II}^T)^T$, $\gamma^* = (\gamma_I^{*T}, \gamma_{II}^{*T})^T$ and $\hat\gamma = (\hat\gamma_I^T, \hat\gamma_{II}^T)^T$; and $\theta = (\theta_I^T, \theta_{II}^T)^T$, $\theta^* = (\theta_I^{*T}, \theta_{II}^{*T})^T$ and $\hat\theta = (\hat\theta_I^T, \hat\theta_{II}^T)^T$.

We first give a lemma related to the loss function Qn(θ). The lemma plays an important role in the proof of our theorems.

Lemma .1

Under conditions (L1)–(L4), we have

$$Q_n(\theta) = Q_n(\theta^*) + A_n^T(\theta - \theta^*) + \tfrac12(\theta - \theta^*)^TB_n(\theta - \theta^*) + R_n(\theta),$$

where $\sup_{1\le m\le M}\sup_{\|\theta_m - \theta_m^*\|_2\le\eta_n}|R_n(\theta)| = O_p\big(n^{1/2}p^{3/4}\eta_n^{3/2}\big) + O_p\big(np^{1/2}\eta_n^3\big)$ and

$$A_n = (A_1^T,\ldots,A_M^T)^T \quad\text{with}\quad A_m = \sum_{i=1}^n\big(I(Y_i < U_i^T\theta_m^*) - \tau_m\big)U_i,$$
$$B_n = \mathrm{Diag}(B_{11}, B_{22}, \ldots, B_{MM}) \quad\text{with}\quad B_{mm} = \sum_{i=1}^n f\big(U_i^T\theta_m^*\,|\,Z_i\big)\,U_iU_i^T. \tag{7}$$

Proof of Lemma .1

Let $\psi_m(u)$ be a sub-derivative of the check function $\rho_m(u)$, so that $\psi_m(u) = \tau_m - I(u < 0) + l_mI(u = 0)$ with $l_m\in[-1, 0]$. Let $T_n = Q_n(\theta) - Q_n(\theta^*)$. Then, there exists an $l_{mi}\in[-1, 0]$ for every $1\le m\le M$ and $1\le i\le n$ such that

$$\begin{aligned} T_n &= -\sum_{m=1}^M\sum_{i=1}^n \psi_m\big(Y_i - U_i^T\bar\theta_m\big)U_i^T(\theta_m - \theta_m^*)\\ &= \sum_{m=1}^M\sum_{i=1}^n\big(I(Y_i < U_i^T\bar\theta_m) - \tau_m - l_{mi}I(Y_i = U_i^T\bar\theta_m)\big)U_i^T(\theta_m - \theta_m^*)\\ &= \sum_{m=1}^M\sum_{i=1}^n\big(I(Y_i < U_i^T\theta_m^*) - \tau_m\big)U_i^T(\theta_m - \theta_m^*) - \sum_{m=1}^M\sum_{i=1}^n\big(I(Y_i < U_i^T\theta_m^*) - I(Y_i < U_i^T\bar\theta_m)\big)U_i^T(\theta_m - \theta_m^*)\\ &\qquad - \sum_{m=1}^M\sum_{i=1}^n l_{mi}I(Y_i = U_i^T\bar\theta_m)U_i^T(\theta_m - \theta_m^*)\\ &\equiv T_{n1} - T_{n2} - T_{n3}, \end{aligned}\tag{8}$$

where $\bar\theta_m$ lies on the line segment between $\theta_m$ and $\theta_m^*$ and may be written as $\bar\theta_m = \theta_m^* + \eta_m(\theta_m - \theta_m^*)$ with $\eta_m\in(0, 1)$. For $T_{n3}$, note that $Y_i$ has a continuous conditional distribution given $Z_i$, hence almost surely $I(Y_i = U_i^T\bar\theta_m) = 0$ for all i = 1, …, n and m = 1, …, M; thus $T_{n3} = 0$ almost surely. Subsequently, we can write

$$T_n = \big(T_{n1} - E(T_{n1})\big) - \big(T_{n2} - E(T_{n2})\big) + E(T_n),$$

where $E(T_n) = E(T_{n1}) - E(T_{n2})$, and E denotes the conditional expectation given Z. Note that $E(T_{n1}) = 0$ because $E\big(I(Y_i < U_i^T\theta_m^*) - \tau_m\,\big|\,Z_i\big) = F\big(U_i^T\theta_m^*\,|\,Z_i\big) - \tau_m = 0$ from (1). Rename $T_{n2} - E(T_{n2})$ as $R_{n2}$ and $E(T_n)$ as $T_{n4}$; then we have

$$T_n = T_{n1} - R_{n2} + T_{n4}. \tag{9}$$

For $R_{n2}$ (recall $T_{n2}$ in (8)), let $\zeta_{mi}(t) = I\big(Y_i - U_i^T\theta_m^* < 0\big) - I\big(Y_i - U_i^T\theta_m^* < U_i^Tt\big)$; then

$$R_{n2} = \sum_{m=1}^M\Big(\sum_{i=1}^n\big\{\zeta_{mi}\big(\eta_m(\theta_m - \theta_m^*)\big) - E\big(\zeta_{mi}(\eta_m(\theta_m - \theta_m^*))\big)\big\}U_i\Big)^T(\theta_m - \theta_m^*) = \sum_{m=1}^M \phi_n(m)^T(\theta_m - \theta_m^*), \tag{10}$$

where $\phi_n(m) = \sum_{i=1}^n\big\{\zeta_{mi}(\eta_m(\theta_m - \theta_m^*)) - E\big(\zeta_{mi}(\eta_m(\theta_m - \theta_m^*))\big)\big\}U_i$. Note that $|\zeta_{mi}(t)| = \big|I(Y_i - U_i^T\theta_m^* < 0) - I(Y_i - U_i^T\theta_m^* < U_i^Tt)\big| \le I\big(|Y_i - U_i^T\theta_m^*| \le |U_i^Tt|\big)$, that $f(t\,|\,Z_i)$ is bounded under condition (L1), and that $\|U_i\|_2 \le Cp^{1/2}$ under condition (L2). Hence, making use of independence, under conditions (L1) and (L2), for all $1\le m\le M$ and $\|\theta_m - \theta_m^*\|_2 \le \eta_n$, we can see

$$\begin{aligned} E\|\phi_n(m)\|_2^2 &= E\Big\|\sum_{i=1}^n\big\{\zeta_{mi}(\eta_m(\theta_m - \theta_m^*)) - E\big(\zeta_{mi}(\eta_m(\theta_m - \theta_m^*))\big)\big\}U_i\Big\|_2^2 = \sum_{i=1}^n E\big\{\zeta_{mi}(\eta_m(\theta_m - \theta_m^*)) - E\big(\zeta_{mi}(\eta_m(\theta_m - \theta_m^*))\big)\big\}^2\|U_i\|_2^2\\ &\le Cnp\,E\,\zeta_{mi}\big(\eta_m(\theta_m - \theta_m^*)\big)^2 \le Cnp\,P\big(|Y_i - U_i^T\theta_m^*| \le |U_i^T(\theta_m - \theta_m^*)|\big) \le Cnp\,P\big(|Y_i - U_i^T\theta_m^*| \le Cp^{1/2}\|\theta_m - \theta_m^*\|_2\big). \end{aligned}$$

Note that

$$P\big(|Y_i - U_i^T\theta_m^*| \le Cp^{1/2}\|\theta_m - \theta_m^*\|_2\big) = F\big(U_i^T\theta_m^* + Cp^{1/2}\|\theta_m - \theta_m^*\|_2\big) - F\big(U_i^T\theta_m^* - Cp^{1/2}\|\theta_m - \theta_m^*\|_2\big) \le |f(\xi_{mi}\,|\,Z_i)|\,p^{1/2}\|\theta_m - \theta_m^*\|_2 \le Cp^{1/2}\|\theta_m - \theta_m^*\|_2,$$

where $\xi_{mi}$ lies between $U_i^T\theta_m^* + Cp^{1/2}\|\theta_m - \theta_m^*\|_2$ and $U_i^T\theta_m^* - Cp^{1/2}\|\theta_m - \theta_m^*\|_2$. Thus, $E\|\phi_n(m)\|_2^2 \le Cnp^{3/2}\eta_n$. By Chebyshev's inequality, we get $\sup_{1\le m\le M}\sup_{\|\theta_m - \theta_m^*\|_2\le\eta_n}\|\phi_n(m)\|_2 = O_p\big(n^{1/2}p^{3/4}\eta_n^{1/2}\big)$. Together with (10), by the Cauchy–Schwarz inequality, we get

$$\sup_{1\le m\le M}\sup_{\|\theta_m - \theta_m^*\|_2\le\eta_n}|R_{n2}| = O_p\big(n^{1/2}p^{3/4}\eta_n^{1/2}\big)\,O(\eta_n) = O_p\big(n^{1/2}p^{3/4}\eta_n^{3/2}\big). \tag{11}$$

For $T_{n4}$, the third term in (9), write $E(T_n) = e_n(\theta) - e_n(\theta^*)$, where

$$e_n(\theta) \equiv \sum_{m=1}^M\sum_{i=1}^n E\,\rho_m\big(Y_i - U_i^T\theta_m\big).$$

$E\rho_m(Y_i - U_i^T\theta_m)$ is second-order differentiable with respect to $\theta_m$ under condition (L1), with gradient $G_{mi}(\theta) = E\big\{-\big(\tau_m - I(Y_i < U_i^T\theta_m)\big)U_i\big\} = \big(F(U_i^T\theta_m\,|\,Z_i) - \tau_m\big)U_i$ and Hessian matrix $H_{mi}(\theta) = f\big(U_i^T\theta_m\,|\,Z_i\big)U_iU_i^T$.

Let G(θ) and H(θ) be the gradient and Hessian matrix of $e_n(\theta)$; then $G(\theta) = \sum_{m=1}^M\sum_{i=1}^n G_{mi}(\theta)$ and $H(\theta) = \sum_{m=1}^M\sum_{i=1}^n H_{mi}(\theta) = \sum_{m=1}^M\sum_{i=1}^n f\big(U_i^T\theta_m\,|\,Z_i\big)U_iU_i^T$. It is easy to see that $G(\theta^*) = 0$ because $F(U_i^T\theta_m^*\,|\,Z_i) = \tau_m$ by (1). By a Taylor expansion of $T_{n4} = E(T_n) = e_n(\theta) - e_n(\theta^*)$ at θ*, we have

$$\begin{aligned} T_{n4} &= \tfrac12(\theta - \theta^*)^TH\big(\theta^* + \xi(\theta - \theta^*)\big)(\theta - \theta^*) = \tfrac12\sum_{m=1}^M\sum_{i=1}^n f(\zeta_{mi}\,|\,Z_i)(\theta_m - \theta_m^*)^TU_iU_i^T(\theta_m - \theta_m^*)\\ &= \tfrac12\sum_{m=1}^M\sum_{i=1}^n\big\{f(\zeta_{mi}\,|\,Z_i) - f(U_i^T\theta_m^*\,|\,Z_i)\big\}(\theta_m - \theta_m^*)^TU_iU_i^T(\theta_m - \theta_m^*) + \tfrac12\sum_{m=1}^M\sum_{i=1}^n f(U_i^T\theta_m^*\,|\,Z_i)(\theta_m - \theta_m^*)^TU_iU_i^T(\theta_m - \theta_m^*)\\ &\equiv R_{n4} + T_{n41}, \end{aligned}\tag{12}$$

where ξ ∈ (0, 1) and $\zeta_{mi}$ lies between $U_i^T\theta_m$ and $U_i^T\theta_m^*$. Trivially,

$$|R_{n4}| \le C\sum_{m=1}^M\sup_{1\le i\le n}\big|f(\zeta_{mi}\,|\,Z_i) - f(U_i^T\theta_m^*\,|\,Z_i)\big|\,(\theta_m - \theta_m^*)^T\Big(\sum_{i=1}^n U_iU_i^T\Big)(\theta_m - \theta_m^*). \tag{13}$$

Note that $f'(t\,|\,Z_i)$ is bounded under condition (L1), $\|U_i\|_2 \le Cp^{1/2}$ under condition (L2), and $\lambda_{\max}\big(n^{-1}\sum_{i=1}^n U_iU_i^T\big) \le C$ under condition (L3). Hence, for all $1\le i\le n$, $1\le m\le M$, and $\|\theta_m - \theta_m^*\|_2 \le \eta_n$, we have $\big|f(\zeta_{mi}\,|\,Z_i) - f(U_i^T\theta_m^*\,|\,Z_i)\big| \le C|U_i^T(\theta_m - \theta_m^*)| \le C\|U_i\|_2\|\theta_m - \theta_m^*\|_2 \le Cp^{1/2}\eta_n$, and $(\theta_m - \theta_m^*)^T\sum_{i=1}^n U_iU_i^T(\theta_m - \theta_m^*) \le n\lambda_{\max}\big(n^{-1}\sum_{i=1}^n U_iU_i^T\big)\|\theta_m - \theta_m^*\|_2^2 \le Cn\eta_n^2$. Hence, from (13), we get $\sup_{1\le m\le M}\sup_{\|\theta_m - \theta_m^*\|_2\le\eta_n}|R_{n4}| \le Cnp^{1/2}\eta_n^3$. Together with (9), (11) and (12), we obtain $T_n = T_{n1} + T_{n41} + R_n(\theta)$, where $\sup_{1\le m\le M}\sup_{\|\theta_m - \theta_m^*\|_2\le\eta_n}|R_n(\theta)| = O_p\big(n^{1/2}p^{3/4}\eta_n^{3/2}\big) + O_p\big(np^{1/2}\eta_n^3\big)$, and

$$T_{n1} = \sum_{m=1}^M\sum_{i=1}^n\big(I(Y_i < U_i^T\theta_m^*) - \tau_m\big)U_i^T(\theta_m - \theta_m^*) = A_n^T(\theta - \theta^*),$$
$$T_{n41} = \tfrac12\sum_{m=1}^M\sum_{i=1}^n f\big(U_i^T\theta_m^*\,|\,Z_i\big)(\theta_m - \theta_m^*)^TU_iU_i^T(\theta_m - \theta_m^*) = \tfrac12(\theta - \theta^*)^TB_n(\theta - \theta^*).$$

This completes the proof of the lemma.

Proof of Theorem 1

Recall the definition of $L_n(\theta) = Q_n(\theta) + P_n(\gamma)$ in (2). Let $\theta - \theta^* = \nu_nu$, where $\nu_n > 0$, $u\in\mathbb{R}^{M(p+1)}$ and $\|u\|_2 = 1$. It is easy to see that $\|\theta - \theta^*\|_2 = \nu_n$. Based on the continuity of $L_n$, if we can prove that in probability

$$\inf_{\|u\|_2 = 1} L_n(\theta^* + \nu_nu) > L_n(\theta^*), \tag{14}$$

then the minimal value point of $L_n(\theta^* + \nu_nu)$ on $\{u : \|u\|_2 \le 1\}$ exists and lies in the unit ball $\{u : \|u\|_2 \le 1\}$ in probability. We will prove that (14) holds.

For $Q_n(\theta)$, because $\|\theta_m - \theta_m^*\|_2 \le \|\theta - \theta^*\|_2$ for all m = 1, 2, …, M, by Lemma .1 with $\eta_n = \nu_n$ under conditions (L1) to (L4), we get

$$q_n(\theta) \equiv Q_n(\theta) - Q_n(\theta^*) = A_n^T(\theta - \theta^*) + \tfrac12(\theta - \theta^*)^TB_n(\theta - \theta^*) + R_n(\theta), \tag{15}$$

where $\sup_{\|\theta - \theta^*\|_2\le\nu_n}|R_n(\theta)| = O_p\big(n^{1/2}p^{3/4}\nu_n^{3/2}\big) + O_p\big(np^{1/2}\nu_n^3\big)$.

For $P_n(\gamma)$, let $p_n(\gamma) = P_n(\gamma) - P_n(\gamma^*)$. Define $p_{1n}(\gamma) = p_n(\gamma_I, 0)$, that is,

$$p_{1n}(\gamma) = n\lambda_n\sum_{j\in J}\Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj}|\Big)^{1/2} - n\lambda_n\sum_{j\in J}\Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj}^*|\Big)^{1/2}. \tag{16}$$

Clearly, $p_{1n}(\gamma) \le p_n(\gamma)$ and $p_{1n}(\gamma^*) = p_n(\gamma^*) = 0$.

Define $l_n(\theta) = q_n(\theta) + p_n(\gamma)$ and $l_{1n}(\theta) = q_n(\theta) + p_{1n}(\gamma)$, both of which are continuous. Clearly, $l_{1n}(\theta) \le l_n(\theta)$ and $l_{1n}(\theta^*) = l_n(\theta^*) = 0$. Note that (14) is equivalent to the statement that, in probability,

$$\inf_{\|u\|_2 = 1} l_{1n}(\theta^* + \nu_nu) > 0. \tag{17}$$

Note that

$$|p_{1n}(\gamma)| = n\lambda_n\Big|\sum_{j\in J}\Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj}|\Big)^{1/2} - \sum_{j\in J}\Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj}^*|\Big)^{1/2}\Big| \le n\lambda_n\sum_{j\in J}\Big|\Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj}|\Big)^{1/2} - \Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj}^*|\Big)^{1/2}\Big| \le n\lambda_n\sum_{j\in J}\Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj} - \gamma_{mj}^*|\Big)^{1/2},$$

where the last inequality follows from the fact that $|\sqrt{x} - \sqrt{y}| \le \sqrt{|x - y|}$. Note that $\gamma_{mj} - \gamma_{mj}^* = \nu_nu_{mj}$, where $u_{mj}$ is a component of u. Hence,

$$|p_{1n}(\gamma)| \le n\lambda_nd_{nI}^{1/2}\nu_n^{1/2}\sum_{j\in J}\Big(\sum_{m\in M_j}|u_{mj}|\Big)^{1/2} \le n\lambda_nd_{nI}^{1/2}\nu_n^{1/2}k_n^{1/2}\Big(\sum_{(m,j)\in I}|u_{mj}|\Big)^{1/2} \le n\lambda_nd_{nI}^{1/2}\nu_n^{1/2}k_n^{1/2}s_n^{1/4}\|u\|_2^{1/2} \le n\lambda_nd_{nI}^{1/2}\nu_n^{1/2}s_n^{3/4}\|u\|_2^{1/2}, \tag{18}$$

where $k_n = |J| \le s_n$. By (15) and the above, we have

$$l_{1n}(\theta^* + \nu_nu) = q_n(\theta^* + \nu_nu) + p_{1n}(\theta^* + \nu_nu) = \nu_nA_n^Tu + \tfrac12\nu_n^2u^TB_nu + R_L(u), \tag{19}$$

where $\sup_{\|u\|_2 = 1}|R_L(u)| = O_p\big(n^{1/2}p^{3/4}\nu_n^{3/2}\big) + O_p\big(np^{1/2}\nu_n^3\big) + O\big(n\lambda_nd_{nI}^{1/2}\nu_n^{1/2}s_n^{3/4}\big)$.

For the quadratic term in (19), from condition (L5),

$$\tfrac12\nu_n^2u^TB_nu \ge \tfrac{C_4}{2}\,n\nu_n^2\|u\|_2^2. \tag{20}$$

For the linear term $A_n^Tu$ in (19), by the independence of $(Z_i, Y_i)$ and $(Z_j, Y_j)$ for all $i\ne j$ and the fact that $E\big(I(Y_i < U_i^T\theta_m^*) - \tau_m\,\big|\,Z_i\big) = 0$, we get

$$E(A_n^TA_n) = E\sum_{m=1}^M\sum_{i=1}^n\big(I(Y_i < U_i^T\theta_m^*) - \tau_m\big)U_i^T\sum_{j=1}^n\big(I(Y_j < U_j^T\theta_m^*) - \tau_m\big)U_j = \sum_{m=1}^M\sum_{i=1}^n E\Big(\big(I(Y_i < U_i^T\theta_m^*) - \tau_m\big)^2\|U_i\|_2^2\Big) \le Cnp.$$

Then, it follows that

$$\|A_n\|_2 = O_p\big((np)^{1/2}\big), \tag{21}$$

which implies that $\sup_{\|u\|_2\le 1}|A_n^Tu| = O_p\big((np)^{1/2}\big)$. Together with (19) and (20), in probability,

$$\begin{aligned}\inf_{\|u\|_2 = 1} l_{1n}(\theta^* + \nu_nu) &\ge \tfrac{C_4}{2}n\nu_n^2 - C(np)^{1/2}\nu_n - Cn^{1/2}p^{3/4}\nu_n^{3/2} - Cnp^{1/2}\nu_n^3 - Cn\lambda_nd_{nI}^{1/2}\nu_n^{1/2}s_n^{3/4}\\ &\ge \tfrac{C_4}{2}n\nu_n\Big\{\nu_n - Cn^{-1/2}p^{1/2} - Cn^{-1/2}p^{3/4}\nu_n^{1/2} - Cp^{1/2}\nu_n^2 - C\lambda_nd_{nI}^{1/2}s_n^{3/4}\nu_n^{-1/2}\Big\}. \end{aligned}\tag{22}$$

Now take $\nu_n = C_0\big(n^{-1/2}p^{1/2}\big)$, where $C_0$ is a sufficiently large constant. Under condition (L4) and the assumption $\lambda_nd_{nI}^{1/2} = o\big(s_n^{-3/4}p_n^{3/4}n^{-3/4}\big)$, for the last three terms in (22) we can check that $n^{-1/2}p^{3/4}\nu_n^{1/2} \le Cn^{-1/4 + \alpha_1/2}\nu_n = o(\nu_n)$, $p^{1/2}\nu_n^2 \le Cn^{-1/2 + \alpha_1}\nu_n = o(\nu_n)$, and $\lambda_nd_{nI}^{1/2}s_n^{3/4}\nu_n^{-1/2} \le C\lambda_nd_{nI}^{1/2}s_n^{3/4}p_n^{-3/4}n^{3/4}\nu_n = o(\nu_n)$. Hence,

$$\inf_{\|u\|_2 = 1} l_{1n}(\theta^* + \nu_nu) \ge Cn\nu_n^2 \quad\text{in probability}. \tag{23}$$

Therefore, in probability there exists a local minimizer θ̂ of $L_n(\theta)$ such that $\|\hat\theta - \theta^*\|_2 < \nu_n$. This completes the proof of the theorem.

Proof of Theorem 2

For the quantile loss function $Q_n(\theta)$, because $\|\theta_m - \theta_m^*\|_2 \le \|\theta - \theta^*\|_2$ for all $1\le m\le M$, by Lemma .1 with $\eta_n = \nu_n$, under conditions (L1)–(L4), we have

$$Q_n(\theta) - Q_n(\theta^*) = A_n^T(\theta - \theta^*) + \tfrac12(\theta - \theta^*)^TB_n(\theta - \theta^*) + R_n(\theta), \tag{24}$$

where $\sup_{\|\theta - \theta^*\|_2\le\nu_n}|R_n(\theta)| = O_p\big(n^{1/2}p^{3/4}\nu_n^{3/2}\big) + O_p\big(np^{1/2}\nu_n^3\big)$.

Let $\theta - \theta^* = \nu_nu$ where $\nu_n > 0$ and $u\in\mathbb{R}^{M(p+1)}$. Then $\|\theta - \theta^*\|_2 \le \nu_n$ if and only if $\|u\|_2 \le 1$. Let $u = (u_I^T, u_{II}^T)^T$, where $u_I$ and $u_{II}$ are the subvectors of u corresponding to the index sets I and II, respectively. Clearly, $\|u\|_2^2 = \|u_I\|_2^2 + \|u_{II}\|_2^2$. Note that $\theta_I = \theta_I^* + \nu_nu_I$ and $\theta_{II} = \theta_{II}^* + \nu_nu_{II} = \nu_nu_{II}$.

Define the ball $\tilde\Theta_n = \{\theta = \theta^* + \nu_nu\in\Theta_n : \|u\|_2\le 1\}$ with $\nu_n = C_0n^{-1/2}p^{1/2}$. For any $\theta = (\theta_I^T, \theta_{II}^T)^T\in\tilde\Theta_n$, we can see $\|(\theta_I^T, 0^T)^T - \theta^*\|_2 = \|\theta_I - \theta_I^*\|_2 \le \nu_n$ and $\|\theta_{II}\|_2 = \nu_n\|u_{II}\|_2$, where $\|u_{II}\|_2 \le 1$.

Consider $Q_n(\theta_I, \theta_{II}) - Q_n(\theta_I, 0) = Q_n(\theta_I, \theta_{II}) - Q_n(\theta^*) - \big(Q_n(\theta_I, 0) - Q_n(\theta^*)\big)$

$$= \tfrac12(0^T, \theta_{II}^T)B_n(0^T, \theta_{II}^T)^T + A_n^T(0^T, \theta_{II}^T)^T + (\theta_I^T - \theta_I^{*T}, 0^T)B_n(0^T, \theta_{II}^T)^T + \big(R_n(\theta) - R_n(\theta_I, 0)\big) \equiv I_{n1} + I_{n2} + I_{n3} + r_{1n}(\theta).$$

From (21), we can see $\sup_{\theta\in\tilde\Theta_n}|I_{n2}| \le \|A_n\|_2\|\theta_{II}\|_2 = O_p\big((np)^{1/2}\big)\nu_n\|u_{II}\|_2 = O_p\big(p\|u_{II}\|_2\big)$. Under condition (L5), $\|B_n(0^T, \theta_{II}^T)^T\|_2^2 = (0^T, \theta_{II}^T)B_n^2(0^T, \theta_{II}^T)^T \le n^2\lambda_{\max}^2(n^{-1}B_n)\|\theta_{II}\|_2^2 \le Cn^2\nu_n^2\|u_{II}\|_2^2$, so $\sup_{\theta\in\tilde\Theta_n}|I_{n3}| \le \|\theta_I - \theta_I^*\|_2\,\|B_n(0^T, \theta_{II}^T)^T\|_2 \le Cn\nu_n^2 \le Cp$. From (24), we have $\sup_{\theta\in\tilde\Theta_n}|r_{1n}(\theta)| = O_p\big(n^{-1/4}p^{3/2}\big)$. Hence,

$$Q_n(\theta_I, \theta_{II}) - Q_n(\theta_I, 0) = I_{n1} + r_{2n}(\theta), \tag{25}$$

where $\sup_{\theta\in\tilde\Theta_n}|r_{2n}(\theta)| = O_p\big(p\|u_{II}\|_2\big) + O_p\big(n^{-1/4}p^{3/2}\big)$. Recall that $L_n(\theta_I, \theta_{II}) - L_n(\theta_I, 0) = Q_n(\theta_I, \theta_{II}) - Q_n(\theta_I, 0) + P_n(\gamma_I, \gamma_{II}) - P_n(\gamma_I, 0)$. From (25), we get

$$L_n(\theta_I, \theta_{II}) - L_n(\theta_I, 0) \ge P_n(\gamma_I, \gamma_{II}) - P_n(\gamma_I, 0) + r_{2n}(\theta). \tag{26}$$

Note that $(n\lambda_n)^{-1}\big(P_n(\gamma_I, \gamma_{II}) - P_n(\gamma_I, 0)\big)$

$$\begin{aligned} &= \sum_{j=1}^p\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big)^{1/2} - \sum_{j\in J}\Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj}|\Big)^{1/2}\\ &= \sum_{j\in J^c}\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big)^{1/2} + \sum_{j\in J}\Big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\Big)^{1/2} - \sum_{j\in J}\Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj}|\Big)^{1/2}\\ &\ge \sum_{j\in J^c}\frac{\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|}{2\big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\big)^{1/2}} + \sum_{j\in J}\frac{\sum_{m\in M_j^c}\omega_{mj}|\gamma_{mj}|}{2\big(\sum_{m=1}^M\omega_{mj}|\gamma_{mj}|\big)^{1/2}}. \end{aligned}\tag{27}$$

For all $\theta\in\tilde\Theta_n$, we have $|\gamma_{mj}| \le |\gamma_{mj}^*| + 1 \le C$, which implies that $\sum_{m=1}^M\omega_{mj}|\gamma_{mj}| \le C\|\omega_n\|_\infty$ for all j = 1, 2, …, p, where $\|\omega_n\|_\infty = \max_{1\le m\le M,\,1\le j\le p}|\omega_{mj}|$. Recall that $d_{nII} = \min_{(m,j)\in II}\{\omega_{mj}\}\,\|\omega_n\|_\infty^{-1/2}$. From (27), it follows that for all $\theta\in\tilde\Theta_n$,

$$P_n(\gamma_I, \gamma_{II}) - P_n(\gamma_I, 0) \ge Cn\lambda_n\|\omega_n\|_\infty^{-1/2}\Big(\sum_{j\in J^c}\sum_{m=1}^M\omega_{mj}|\gamma_{mj}| + \sum_{j\in J}\sum_{m\in M_j^c}\omega_{mj}|\gamma_{mj}|\Big) = Cn\lambda_n\|\omega_n\|_\infty^{-1/2}\sum_{(m,j)\in II}\omega_{mj}|\gamma_{mj}| \ge Cn\lambda_nd_{nII}\sum_{(m,j)\in II}|\gamma_{mj}| \ge Cn\lambda_nd_{nII}\|\gamma_{II}\|_2 = C\lambda_nd_{nII}(np)^{1/2}\|u_{II}\|_2. \tag{28}$$

Define $\Omega_n = \{\theta = \theta^* + \nu_nu\in\tilde\Theta_n : \|u_{II}\|_2 > 0\}$ and $\Omega_n^c = \{\theta = \theta^* + \nu_nu\in\tilde\Theta_n : u_{II} = 0\}$. Clearly, $\tilde\Theta_n = \Omega_n\cup\Omega_n^c$. From (26) and (28), we obtain in probability

$$\begin{aligned}\inf_{\theta\in\Omega_n}\big(L_n(\theta_I, \theta_{II}) - L_n(\theta_I, 0)\big) &\ge \inf_{\theta\in\Omega_n}\big(P_n(\gamma_I, \gamma_{II}) - P_n(\gamma_I, 0)\big) - \sup_{\theta\in\tilde\Theta_n}|r_{2n}(\theta)|\\ &\ge \tilde C_1\lambda_nd_{nII}(np)^{1/2}\|u_{II}\|_2 - \tilde C_2p\|u_{II}\|_2 - \tilde C_2n^{-1/4}p^{3/2}\\ &\ge p\Big(\|u_{II}\|_2\big(\tilde C_1\lambda_nd_{nII}n^{1/2}p^{-1/2} - \tilde C_2\big) - \tilde C_2n^{-1/4}p^{1/2}\Big),\end{aligned}$$

where $\tilde C_1$ and $\tilde C_2$ are positive constants. Under the given conditions, $\lambda_nd_{nII}n^{1/2}p^{-1/2}\to\infty$ and $n^{-1/4}p^{1/2}\to 0$ as n → ∞. Hence, $\inf_{\theta\in\Omega_n}\big(L_n(\theta_I, \theta_{II}) - L_n(\theta_I, 0)\big) > 0$ in probability. Thus, $\inf_{\theta\in\Omega_n} L_n(\theta) \ge \inf_{\theta\in\Omega_n}\big(L_n(\theta_I, \theta_{II}) - L_n(\theta_I, 0)\big) + \inf_{\theta\in\Omega_n} L_n(\theta_I, 0) > \inf_{\theta\in\Omega_n} L_n(\theta_I, 0) = \inf_{\theta\in\tilde\Theta_n} L_n(\theta_I, 0) \ge \inf_{\theta\in\tilde\Theta_n} L_n(\theta)$, which implies that $\inf_{\theta\in\tilde\Theta_n} L_n(\theta) = \inf_{\theta\in\Omega_n^c} L_n(\theta)$. Therefore, the minimal value point of $L_n(\theta)$ on $\tilde\Theta_n$ can only lie in its subset $\Omega_n^c$.

From Theorem 1, we know that in probability θ̂ ∈ Θ̃n and that θ̂ is a local minimizer of $L_n(\theta)$. Hence, $\hat\theta\in\Omega_n^c$ in probability, which implies that $\hat\gamma_{II} = 0$.

Proof of Theorem 3

Let $\theta_I - \theta_I^* = \nu_nu$, where $\nu_n > 0$ and $u\in\mathbb{R}^{M+s_n}$. Because $\|\theta_{mI} - \theta_{mI}^*\|_2 \le \|\theta_I - \theta_I^*\|_2 = \nu_n\|u\|_2$ for all $1\le m\le M$, and due to conditions (L1)–(L3) and the fact that $0 < \alpha_0 < \alpha_1 < \tfrac16$, Lemma .1 implies that

$$Q_n(\theta_I, 0) - Q_n(\theta_I^*, 0) = Q_n(\theta_I^* + \nu_nu, 0) - Q_n(\theta_I^*, 0) = \nu_nA_{nI}^Tu + \tfrac12\nu_n^2u^TB_{nI}u + r_n(u), \tag{29}$$

where $\sup_{\|u\|_2\le 1}|r_n(u)| = O_p\big(n^{1/2}s_n^{3/4}\nu_n^{3/2}\big) + O_p\big(ns_n^{1/2}\nu_n^3\big)$, $B_{nI}$ is given in (4),

$$A_{nI} = (A_1^T, \ldots, A_M^T)^T \quad\text{with}\quad A_m = \sum_{i=1}^n\big(I(Y_i < U_i^T\theta_m^*) - \tau_m\big)U_{imI}, \tag{30}$$

and $U_{imI}$ is given in Section 3. Note that $A_{nI}$ and $B_{nI}$ are the subvector and submatrix of $A_n$ and $B_n$ in (7), respectively, corresponding to the index set I.

For $P_n(\gamma)$, define $p_n(\gamma) = P_n(\gamma) - P_n(\gamma^*)$. Let $u_I$ be the subvector of u corresponding to the subvector $\gamma_I$ of $\theta_I$. Then, we have

$$p_n(\gamma_I, 0) = p_n(\gamma_I^* + \nu_nu_I, 0) = n\lambda_n\sum_{j\in J}\Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj}|\Big)^{1/2} - n\lambda_n\sum_{j\in J}\Big(\sum_{m\in M_j}\omega_{mj}|\gamma_{mj}^*|\Big)^{1/2},$$

which is $p_{1n}(\gamma)$ given in (16). By (18) in the proof of Theorem 1, we have $|p_n(\gamma_I, 0)| = |p_n(\gamma_I^* + \nu_nu_I, 0)| \le n\lambda_nd_{nI}^{1/2}\nu_n^{1/2}s_n^{3/4}\|u\|_2^{1/2}$. This, combined with (29), implies that

$$L_n(\theta_I^* + \nu_nu, 0) - L_n(\theta_I^*, 0) = \nu_nA_{nI}^Tu + \tfrac12\nu_n^2u^TB_{nI}u + r_l(u), \tag{31}$$

where $\sup_{\|u\|_2\le 1}|r_l(u)| = O_p\big(n^{1/2}s_n^{3/4}\nu_n^{3/2}\big) + O_p\big(ns_n^{1/2}\nu_n^3\big) + O\big(n\lambda_nd_{nI}^{1/2}\nu_n^{1/2}s_n^{3/4}\big)$.

Define the ball $\Theta_{nI} = \{(\theta_I^T, 0^T)^T : \theta_I - \theta_I^* = \nu_nu\}$ with $\nu_n = C_0(n^{-1}p_n)^{1/2}$, where $u\in\mathbb{R}^{M+s_n}$ and $C_0$ is a positive constant. Given that $\nu_n = C_0(n^{-1}p_n)^{1/2}$, $\lambda_nd_{nI}^{1/2} = O\big(n^{-1}p_n^{1/2}\big)$, and $0 < \alpha_0 < \alpha_1 < \tfrac16$, we see that $n^{1/2}s_n^{3/4}\nu_n^{3/2} \le Cn^{-1/4}s_n^{3/4}p_n^{3/4} = o_p(1)$, $ns_n^{1/2}\nu_n^3 \le Cn^{-1/2}s_n^{1/2}p_n^{3/2} = o\big(n^{-1/4}s_n^{3/4}p_n^{3/4}\big) = o_p(1)$, and $n\lambda_nd_{nI}^{1/2}\nu_n^{1/2}s_n^{3/4} \le C\lambda_nd_{nI}^{1/2}n^{3/4}s_n^{3/4}p_n^{1/4} = O\big(n^{-1/4}s_n^{3/4}p_n^{3/4}\big) = o_p(1)$. Hence, considering (31), for all $(\theta_I^T, 0^T)^T\in\Theta_{nI}$ we have $L_n(\theta_I, 0) - L_n(\theta_I^*, 0) = A_{nI}^T(\theta_I - \theta_I^*) + \tfrac12(\theta_I - \theta_I^*)^TB_{nI}(\theta_I - \theta_I^*) + o_p(1)$, which implies that for all $(\theta_I^T, 0^T)^T\in\Theta_{nI}$,

$$L_n(\theta_I, 0) - L_n(\theta_I^*, 0) + \tfrac12A_{nI}^TB_{nI}^{-1}A_{nI} = \tfrac12\big(B_{nI}^{-1/2}A_{nI} + B_{nI}^{1/2}(\theta_I - \theta_I^*)\big)^T\big(B_{nI}^{-1/2}A_{nI} + B_{nI}^{1/2}(\theta_I - \theta_I^*)\big) + o_p(1). \tag{32}$$

By Theorems 1 and 2, we know that in probability a local minimizer θ̂ of $L_n(\theta) - L_n(\theta^*)$ lies in the ball $\Theta_{nI}$, which implies that $\hat\theta = (\hat\theta_I^T, 0^T)^T\in\Theta_{nI}$ in probability. Hence, from (32), we have

$$B_{nI}^{1/2}\big(\hat\theta_I - \theta_I^*\big) = -B_{nI}^{-1/2}A_{nI} + o_p(1)\,t, \tag{33}$$

where $t\in\mathbb{R}^{M+s_n}$ is a unit vector. Since $t^TB_{nI}t \le \lambda_{\max}(B_{nI}) \le \lambda_{\max}(B_n) \le Cn$ by condition (L5), we have $\|B_{nI}^{1/2}t\|_2 = O_p\big(n^{1/2}\big)$. Multiplying both sides of (33) by $b^TB_{nI}^{1/2}$, where $b\in\mathbb{R}^{M+s_n}$ is any unit vector, we obtain

$$b^TB_{nI}\big(\hat\theta_I - \theta_I^*\big) = -b^TA_{nI} + o_p\big(n^{1/2}\big). \tag{34}$$

Let $\xi_n = b^TA_{nI}$, and write $b = (b_1^T, \ldots, b_M^T)^T$, where $b_m$ is the subvector of b corresponding to the subvector $\theta_{mI}^*$ of $\theta_I^*$. By the definition of $A_{nI}$ in (30), we see that

$$\xi_n = \sum_{m=1}^M\sum_{i=1}^n\big(\psi_{mi}^*\,U_{imI}^Tb_m\big) = \sum_{i=1}^n\zeta_i,$$

where $\zeta_i = \sum_{m=1}^M\psi_{mi}^*U_{imI}^Tb_m$ with $\psi_{mi}^* = I\big(Y_i < U_{imI}^T\theta_{mI}^*\big) - \tau_m$. Clearly, $\{\zeta_i, i = 1, \ldots, n\}$ is an independent sequence. Next, we verify that $\xi_n$ satisfies the Lindeberg condition: for any ε > 0,

$$\sigma_n^{-2}\sum_{i=1}^n E\big(\zeta_i^2\,I(|\zeta_i| \ge \epsilon\sigma_n)\big) \to 0, \tag{35}$$

where $\sigma_n^2 = \mathrm{Var}(\xi_n)$.

For $\zeta_i = \sum_{m=1}^M\psi_{mi}^*U_{imI}^Tb_m$, it is easy to see that $E(\zeta_i) = \sum_{m=1}^M E\big(E(\psi_{mi}^*\,|\,Z_i)U_{imI}^Tb_m\big) = 0$ from the fact $E(\psi_{mi}^*\,|\,Z_i) = F\big(U_{imI}^T\theta_{mI}^*\,|\,Z_i\big) - \tau_m = F\big(U_i^T\theta_m^*\,|\,Z_i\big) - \tau_m = 0$, and that

$$E(\zeta_i^2) = E\Big(\sum_{m=1}^M\psi_{mi}^*U_{imI}^Tb_m\Big)^2 = \sum_{m=1}^M\sum_{l=1}^M E\big(E(\psi_{mi}^*\psi_{li}^*\,|\,Z_i)\,b_m^T(U_{imI}U_{ilI}^T)b_l\big) = \sum_{m=1}^M\sum_{l=1}^M E\big(\big(\min(\tau_m, \tau_l) - \tau_m\tau_l\big)\,b_m^T(U_{imI}U_{ilI}^T)b_l\big) = b^T\Sigma_nb,$$

where $\Sigma_n$ is given in (3). Hence, we obtain that $E(\xi_n) = \sum_{i=1}^n E(\zeta_i) = 0$ and that, by independence, $\mathrm{Var}(\xi_n) = \sum_{i=1}^n E(\zeta_i^2) = nb^T\Sigma_nb$. Under condition (L6), we obtain

$$\sigma_n^2 = nb^T\Sigma_nb \ge n\lambda_{\min}(\Sigma_n)\,b^Tb \ge Cn. \tag{36}$$

Note that $(U_{imI}^Tb_m)^2 \le \|U_{imI}\|_2^2\|b_m\|_2^2 \le Cs_n\|b\|_2^2 \le Cs_n$ under condition (L2). By the Cauchy–Schwarz inequality, $\zeta_i^2 = \big(\sum_{m=1}^M\psi_{mi}^*U_{imI}^Tb_m\big)^2 \le M\sum_{m=1}^M\big(\psi_{mi}^*U_{imI}^Tb_m\big)^2 \le Cs_n\sum_{m=1}^M(\psi_{mi}^*)^2$, which implies that $\zeta_i^2 \le Cs_n$ from the fact $|\psi_{mi}^*| \le 1 + \tau_m \le 2$. Hence, we have

$$\sum_{i=1}^n E\big(\zeta_i^2\,I(|\zeta_i| \ge \epsilon\sigma_n)\big) \le Cs_n\sum_{i=1}^n E\big(I(\zeta_i^2 \ge \epsilon^2\sigma_n^2)\big) \le Cs_n\sum_{i=1}^n P\Big(Cs_n\sum_{m=1}^M(\psi_{mi}^*)^2 \ge \epsilon^2\sigma_n^2\Big) \le Cs_n\sum_{i=1}^n\frac{E\big(Cs_n\sum_{m=1}^M(\psi_{mi}^*)^2\big)}{\epsilon^2\sigma_n^2} \le \frac{Cns_n^2}{\epsilon^2\sigma_n^2}.$$

Together with (36), this implies that, as n → ∞,

$$\sigma_n^{-2}\sum_{i=1}^n E\big(\zeta_i^2\,I(|\zeta_i| \ge \epsilon\sigma_n)\big) \le \frac{Cns_n^2}{\epsilon^2\sigma_n^4} \le \frac{Cs_n^2}{\epsilon^2 n} \le C\epsilon^{-2}n^{-1+2\alpha_0} \to 0,$$

which shows that Lindeberg’s condition (35) holds. Hence,

$$\big(nb^T\Sigma_nb\big)^{-1/2}b^TA_{nI} = \frac{b^TA_{nI}}{\sigma_n} = \frac{\xi_n - E(\xi_n)}{\sigma_n} \xrightarrow{d} N(0, 1).$$

Together with (34) and (36), this implies that

$$\big(nb^T\Sigma_nb\big)^{-1/2}b^TB_{nI}\big(\hat\theta_I - \theta_I^*\big) = -\big(nb^T\Sigma_nb\big)^{-1/2}b^TA_{nI} + o_p(1) \xrightarrow{d} N(0, 1).$$

This completes the proof of the theorem.

References

  1. Avery CL, He Q, North KE, Ambite JL, Boerwinkle E, Fornage M, Hindorff LA, Kooperberg C, Meigs JB, Pankow JS, et al. A phenomics-based strategy identifies loci on APOC1, BRAP, and PLCG1 associated with metabolic syndrome phenotype domains. PLoS Genet. 2011;7:e1002322. doi: 10.1371/journal.pgen.1002322.
  2. Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 2001;96:1348–1360.
  3. Gasso G, Rakotomamonjy A, Canu S. Recovering sparse signals with a certain family of nonconvex penalties and DC programming. IEEE Trans. Signal Process. 2009;57:4686–4698.
  4. Holland EC. Glioblastoma multiforme: the terminator. Proc. Natl. Acad. Sci. 2000;97:6242–6244. doi: 10.1073/pnas.97.12.6242.
  5. Huang J, Ma S, Xie H, Zhang CH. A group bridge approach for variable selection. Biometrika. 2009;96:339–355. doi: 10.1093/biomet/asp020.
  6. Jiang L, Bondell HD, Wang HJ. Interquantile shrinkage and variable selection in quantile regression. Comput. Statist. Data Anal. 2014;69:208–219. doi: 10.1016/j.csda.2013.08.006.
  7. Jiang L, Wang HJ, Bondell HD. Interquantile shrinkage in regression models. J. Comput. Graph. Statist. 2013;22:970–986. doi: 10.1080/10618600.2012.707454.
  8. Koenker R. Quantile regression for longitudinal data. J. Multivariate Anal. 2004;91:74–89.
  9. Koenker R. quantreg: Quantile Regression. R package version 5.11; 2015. URL: http://CRAN.R-project.org/package=quantreg.
  10. Koenker R, Bassett G Jr. Regression quantiles. Econometrica. 1978;46:33–50.
  11. Landmark-Høyvik H, Dumeaux V, Nebdal D, Lund E, Tost J, Kamatani Y, Renault V, Børresen-Dale AL, Kristensen V, Edvardsen H. Genome-wide association study in breast cancer survivors reveals SNPs associated with gene expression of genes belonging to MHC class I and II. Genomics. 2013;102:278–287. doi: 10.1016/j.ygeno.2013.07.006.
  12. Li Y, Zhu J. L1-norm quantile regression. J. Comput. Graph. Statist. 2008;17:163–185.
  13. Mazumder R, Friedman JH, Hastie T. SparseNet: Coordinate descent with nonconvex penalties. J. Amer. Statist. Assoc. 2011;106:1125–1138. doi: 10.1198/jasa.2011.tm09738.
  14. Peng L, Xu J, Kutner N. Shrinkage estimation of varying covariate effects based on quantile regression. Stat. Comput. 2014;24:853–869. doi: 10.1007/s11222-013-9406-4.
  15. Puputti M, Tynninen O, Sihto H, Blom T, Mäenpää H, Isola J, Paetau A, Joensuu H, Nupponen NN. Amplification of KIT, PDGFRA, VEGFR2, and EGFR in gliomas. Mol. Cancer Res. 2006;4:927–934. doi: 10.1158/1541-7786.MCR-06-0085.
  16. Tibshirani R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B. 1996;58:267–288.
  17. Tibshirani R, Saunders M, Rosset S, Zhu J, Knight K. Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. B. 2005;67:91–108.
  18. Wang S, Nan B, Zhu N, Zhu J. Hierarchically penalized Cox regression with grouped variables. Biometrika. 2009;96:307–322.
  19. Wang L, Wu Y, Li R. Quantile regression for analyzing heterogeneity in ultra-high dimension. J. Amer. Statist. Assoc. 2012;107:214–222. doi: 10.1080/01621459.2012.656014.
  20. Wu Y, Liu Y. Variable selection in quantile regression. Statist. Sinica. 2009;19:801–817.
  21. Zhao Z, Xiao Z. Efficient regressions via optimally combining quantile information. Econometric Theory. 2014;30:1272–1314. doi: 10.1017/S0266466614000176.
  22. Zou H. The adaptive Lasso and its oracle properties. J. Amer. Statist. Assoc. 2006;101:1418–1429.
  23. Zou H, Yuan M. Regularized simultaneous model selection in multiple quantiles regression. Comput. Statist. Data Anal. 2008;52:5296–5304.
