Abstract
Data subject to heavy-tailed errors are commonly encountered in various scientific fields. To address this problem, procedures based on quantile regression and Least Absolute Deviation (LAD) regression have been developed in recent years. These methods essentially estimate the conditional median (or quantile) function, which can be very different from the conditional mean function, especially when distributions are asymmetric and heteroscedastic. How can we efficiently estimate the mean regression function in the ultra-high dimensional setting when only the second moment of the error exists? To solve this problem, we propose a penalized Huber loss with a diverging parameter to reduce the bias created by the traditional Huber loss. Such a penalized robust approximate quadratic (RA-quadratic) loss will be called RA-Lasso. In the ultra-high dimensional setting, where the dimensionality can grow exponentially with the sample size, our results reveal that the RA-Lasso estimator is consistent and converges at the same rate as the optimal rate under the light-tail situation. We further study the computational convergence of RA-Lasso and show that the composite gradient descent algorithm indeed produces a solution that admits the same optimal rate after a sufficient number of iterations. As a by-product, we also establish a concentration inequality for estimating the population mean when only the second moment exists. We compare RA-Lasso with other regularized robust estimators based on quantile regression and LAD regression. Extensive simulation studies demonstrate the satisfactory finite-sample performance of RA-Lasso.
Keywords: High dimension, Huber loss, M-estimator, Optimal rate, Robust regularization
1 Introduction
Our era has witnessed a massive explosion of data and a dramatic improvement of technology in collecting and processing large data sets. We often encounter huge data sets in which the number of features greatly surpasses the number of observations. This makes many traditional statistical analysis tools infeasible and poses great challenges for developing new tools. Regularization methods have been widely used for the analysis of high-dimensional data. These methods penalize the least squares or the likelihood function with the L1-penalty on the unknown parameters (Lasso, Tibshirani (1996)), or with a folded concave penalty function such as the SCAD (Fan and Li, 2001) and the MCP (Zhang, 2010). However, these penalized least-squares methods are sensitive to the tails of the error distributions, particularly for ultrahigh dimensional covariates, as the maximum spurious correlation between the covariates and the realized noises can be large in those cases. As a result, theoretical properties are often obtained under light-tailed error distributions (Bickel, Ritov and Tsybakov, 2009; Fan and Lv, 2011). Besides regularization methods, traditional stagewise selection methods (e.g., forward selection) have also been extended to the high-dimensional setting. For instance, Fan and Lv (2008) proposed a Sure Independence Screening method and Wang (2009) studied stagewise selection methods in the high-dimensional setting. These methods are usually built on marginal correlations between the response and covariates, hence they also need light-tail assumptions on the errors.
To tackle the problem of heavy-tailed errors, robust regularization methods have been extensively studied. Li and Zhu (2008), Wu and Liu (2009) and Zou and Yuan (2008) developed robust regularized estimators based on quantile regression for the case of fixed dimensionality. Belloni and Chernozhukov (2011) studied the L1-penalized quantile regression in high dimensional sparse models. Fan, Fan, and Barut (2014) further considered an adaptively weighted L1 penalty to alleviate the bias problem and established the oracle property and asymptotic normality of the corresponding estimator. Other robust estimators have been developed based on Least Absolute Deviation (LAD) regression. Wang (2013) studied the L1-penalized LAD regression and showed that the estimator achieves near-oracle risk performance in the high-dimensional setting.
The above methods essentially estimate the conditional median (or quantile) regression, instead of the conditional mean regression function. In applications where the mean regression is of interest, these methods are not applicable unless a strong assumption is made that the distribution of the errors is symmetric around zero. A simple example is the heteroscedastic linear model with an asymmetric noise distribution. Another example is the estimation of the conditional variance function, such as in the ARCH model (Engle, 1982). In these cases, the conditional mean and conditional median are very different. Another important example is the estimation of a large covariance matrix without assuming light tails. We explain this in more detail in Section 4. In addition, LAD-based methods penalize small errors heavily. If only a small proportion of the samples are outliers, they are expected to be less efficient than least-squares based methods.
A natural question is then how to conduct ultrahigh dimensional mean regression when the tails of the errors are not light. How can one estimate the population mean with very fast concentration when the distribution has only a bounded second moment? These simple questions have not been carefully studied. LAD-based methods do not intend to answer them, as they alter the problem under study. This leads us to consider the Huber loss as another way of robustification. The Huber loss (Huber, 1964) is a hybrid of the squared loss for relatively small errors and the absolute loss for relatively large errors, where the degree of hybridization is controlled by one tuning parameter. Lambert-Lacroix and Zwald (2011) proposed to use the Huber loss together with the adaptive LASSO penalty for robust estimation. However, they needed the strong assumption that the distribution of the errors is symmetric around zero. Unlike their method, we waive the symmetry requirement by allowing the robustification parameter to diverge (equivalently, its reciprocal converges to zero) in order to reduce the bias induced by the Huber loss when the distribution is asymmetric. In this paper, we consider the resulting regularized approximate quadratic (RA-Lasso) estimator with an L1 penalty and show that it admits the same L2 error rate as the optimal error rate in the light-tail situation. In particular, if the distribution of the errors is indeed symmetric around 0 (where the median and mean agree), this rate is the same as that of the regularized LAD estimator obtained in Wang (2013). Therefore, the RA-Lasso estimator does not lose efficiency in this special case. In practice, since the distribution of the errors is unknown, RA-Lasso is more flexible than the existing methods in terms of estimating the conditional mean regression function.
A by-product of our method is that the RA-Lasso estimator of the population mean exhibits exponential-type concentration even when only the second moment is finite. Catoni (2012) studied this type of problem and proposed a class of losses that yield robust M-estimators of the mean with exponential-type concentration. We further extend his idea to the sparse linear regression setting and show that the Catoni loss is another choice that attains the optimal rate.
As in many other papers, estimators with nice sampling properties are typically defined through the optimization of a target function such as penalized least squares. However, the estimators whose properties are established are not necessarily the same as the ones that are actually computed. Following the framework of Agarwal, Negahban, and Wainwright (2012), we propose a composite gradient descent algorithm for computing the RA-Lasso estimator and develop its sampling properties by taking the computational error into consideration. We show that the algorithm indeed produces a solution that admits the same optimal L2 error rate as the theoretical estimator after a sufficient number of iterations.
This paper is organized as follows. First, in Section 2, we introduce the RA-Lasso estimator and give a non-asymptotic upper bound for its L2 error. We show that it has the same rate as the optimal rate under light tails. In Section 3, we study the properties of the composite gradient descent algorithm for solving our problem and show that the algorithm produces a solution that performs as well as the theoretical solution. In Section 4, we apply the idea to robust estimation of the mean and of a large covariance matrix. In Section 5, we show similar results for the Catoni loss in robust sparse regression. Section 6 gives the estimation of the residual variance. Numerical studies are given in Sections 7 and 8 to compare our method with two competitors. Proofs of Theorems 1 and 2 are given in the Appendix, which together imply the main result (Theorem 3). The proof of Theorem 5, regarding the concentration of the robust mean estimator, is also given in the Appendix. Proofs of supporting lemmas and the remaining theorems are given in an on-line supplementary file. The relevant Matlab code is available at http://orfe.princeton.edu/~jqfan/papers/15/RA-Lasso.zip.
2 RA-Lasso estimator
We consider the linear regression model
$$y_i = x_i^T\beta^* + \varepsilon_i, \qquad i = 1, \ldots, n, \qquad\qquad (2.1)$$
where $x_1, \ldots, x_n$ are independent and identically distributed (i.i.d.) $p$-dimensional covariate vectors, $\varepsilon_1, \ldots, \varepsilon_n$ are i.i.d. errors, and $\beta^*$ is a $p$-dimensional regression coefficient vector. The i.i.d. assumption on $\varepsilon_i$ indeed allows conditional heteroscedastic models, where $\varepsilon_i$ can depend on $x_i$. For example, it can be $\varepsilon_i = \sigma(x_i)\epsilon_i$, where $\sigma(x_i)$ is a function of $x_i$ and $\epsilon_i$ is independent of $x_i$. We consider the high-dimensional setting, where log(p) = O(nb) for some constant 0 < b < 1. The distributions of x and ε|x are assumed to both have mean 0. Under this assumption, β* is related to the conditional mean effect of y given x, which is assumed to be of interest. It differs from the conditional median effect of y given x, especially under heteroscedastic or more general models. Therefore, the LAD-based methods are not applicable.
To adapt for different magnitude of errors and robustify the estimation, we propose to use the Huber loss (Huber, 1964):
$$\ell_\alpha(x) = \begin{cases} 2\alpha^{-1}|x| - \alpha^{-2}, & \text{if } |x| > \alpha^{-1}, \\ x^2, & \text{if } |x| \le \alpha^{-1}. \end{cases} \qquad\qquad (2.2)$$
The Huber loss is quadratic for small values of x and linear for large values of x. The parameter α controls the blending of quadratic and linear penalization. The least squares and the LAD losses can be regarded as two extremes of the Huber loss, corresponding to α = 0 and α = ∞, respectively. Departing from the traditional Huber estimator, the parameter α converges to zero in order to reduce the bias of estimating the mean regression function when the conditional distribution of εi is not symmetric. On the other hand, α cannot shrink too fast, in order to maintain robustness. In this paper, we regard α as a tuning parameter, whose optimal value will be discussed later in this section. In practice, α needs to be tuned by some data-driven method. Since α is allowed to vary, we call $\ell_\alpha$ the robust approximate quadratic (RA-quadratic) loss.
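As a concrete illustration, here is a minimal Python sketch of the RA-quadratic loss and its derivative, assuming the parameterization displayed in (2.2); the function names are ours, not taken from the paper's code.

```python
import numpy as np

def ra_quadratic_loss(x, alpha):
    """RA-quadratic (Huber-type) loss with robustification parameter alpha.

    Quadratic on |x| <= 1/alpha and linear beyond; alpha -> 0 recovers the
    squared loss, while large alpha approaches a rescaled absolute loss.
    """
    x = np.asarray(x, dtype=float)
    quad = x ** 2
    lin = 2.0 * np.abs(x) / alpha - 1.0 / alpha ** 2
    return np.where(np.abs(x) <= 1.0 / alpha, quad, lin)

def ra_quadratic_grad(x, alpha):
    """Derivative of the RA-quadratic loss: a clipped linear influence function."""
    x = np.asarray(x, dtype=float)
    return 2.0 * np.clip(x, -1.0 / alpha, 1.0 / alpha)
```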
To estimate β*, we propose to solve the following convex optimization problem:
$$\hat\beta = \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{n}\sum_{i=1}^n \ell_\alpha(y_i - x_i^T\beta) + \lambda_n\|\beta\|_1 \right\}. \qquad\qquad (2.3)$$
To assess the performance of $\hat\beta$, we study the property of $\|\hat\beta - \beta^*\|_2$, where ∥·∥2 is the Euclidean norm of a vector. When λn converges to zero sufficiently fast, $\hat\beta$ is a natural M-estimator of $\beta^*_\alpha$, the population minimizer under the RA-quadratic loss, which varies with α. In general, $\beta^*_\alpha$ differs from β*. But, since the RA-quadratic loss approximates the quadratic loss as α tends to 0, $\beta^*_\alpha$ is expected to converge to β*. This property will be established in Theorem 1. Therefore, we decompose the statistical error $\|\hat\beta - \beta^*\|_2$ into the approximation error $\|\beta^*_\alpha - \beta^*\|_2$ and the estimation error $\|\hat\beta - \beta^*_\alpha\|_2$. The statistical error is then bounded by
$$\|\hat\beta - \beta^*\|_2 \le \|\hat\beta - \beta^*_\alpha\|_2 + \|\beta^*_\alpha - \beta^*\|_2.$$
In the following, we give upper bounds for the approximation error and the estimation error, respectively. We show that $\|\hat\beta - \beta^*\|_2$ is bounded by the same rate as the optimal rate under light tails, as long as the two tuning parameters α and λn are properly chosen. We first give the upper bound of the approximation error under some moment conditions on x and ε|x. We assume that $\|\beta^*\|_2 \le \rho_2$, where the radius ρ2 is a sufficiently large constant. This is a mild assumption, which is implied by (C2) and the reasonable assumption that var(y) < ∞, since $\kappa_l\|\beta^*\|_2^2 \le \mathrm{var}(x^T\beta^*) \le \mathrm{var}(y)$.
Theorem 1
Under the following conditions:

- (C1) $E\{E(|\varepsilon|^k \mid x)\}^2 \le M_k < \infty$, for some $k \ge 2$;

- (C2) $0 < \kappa_l \le \lambda_{\min}(E[xx^T]) \le \lambda_{\max}(E[xx^T]) \le \kappa_u < \infty$;

- (C3) for any $\nu \in \mathbb{R}^p$, $x^T\nu$ is sub-Gaussian with parameter at most $\kappa_0^2\|\nu\|_2^2$, i.e. $E\exp(t\,x^T\nu) \le \exp(t^2\kappa_0^2\|\nu\|_2^2/2)$ for any $t \in \mathbb{R}$;

there exists a universal positive constant $C_1$ such that $\|\beta^*_\alpha - \beta^*\|_2 \le C_1\,\alpha^{k-1}$.
Theorem 1 reveals that the approximation error vanishes faster if higher moments of ε|x exist. We next give a non-asymptotic upper bound of the estimation error $\|\hat\beta - \beta^*_\alpha\|_2$. This part differs from the existing work on the estimation error of high dimensional regularized M-estimators (Negahban, et al., 2012; Agarwal, Negahban, and Wainwright, 2012), as the population minimizer $\beta^*_\alpha$ now varies with α. However, we will show that the upper bound of the estimation error does not depend on α, given a uniform sparsity condition.
In order to be solvable in the high-dimensional setting, β* is usually assumed to be sparse or weakly sparse, i.e. many elements of β* are zero or small. By Theorem 1, $\beta^*_\alpha$ converges to β* as α goes to 0. In view of this fact, we assume that $\beta^*_\alpha$ is uniformly weakly sparse when α is sufficiently small. In particular, we assume that there exists a small constant r > 0 such that $\beta^*_\alpha$ belongs to an $L_q$-ball with a uniform radius $R_q$, in that
$$\sum_{j=1}^p |\beta^*_{\alpha,j}|^q \le R_q, \qquad\qquad (2.4)$$
for all $\alpha \in (0, r]$ and some $q \in (0, 1]$. When the conditional distribution of $\varepsilon_i$ is symmetric, $\beta^*_{\alpha,j} = \beta^*_j$ for all α and j, so the condition reduces to β* lying in the $L_q$-ball. When the conditional distribution of $\varepsilon_i$ is asymmetric, we give a sufficient condition: if β* belongs to an $L_q$-ball with radius $R_q/2$, then (2.4) holds for all α below a threshold determined by a positive constant c. In fact, $(a + b)^q \le a^q + b^q$ for any $a, b \ge 0$ and $q \in (0, 1]$. Using this,
$$\sum_{j=1}^p |\beta^*_{\alpha,j}|^q \le \sum_{j=1}^p |\beta^*_j|^q + \sum_{j=1}^p |\beta^*_{\alpha,j} - \beta^*_j|^q \le \frac{R_q}{2} + \sum_{j=1}^p |\beta^*_{\alpha,j} - \beta^*_j|^q.$$
By Theorem 1, $\|\beta^*_\alpha - \beta^*\|_2 \le C_1\alpha^{k-1}$, so the second term vanishes as $\alpha \to 0$. Hence, for all α small enough that this term is at most $R_q/2$, we have (2.4).
Since the RA-quadratic loss is convex, we can show that, with high probability, the estimation error belongs to a star-shaped set, which depends on α and the threshold level η of the signals.
Lemma 1
Under Conditions (C1) and (C3), with the choice of and , where v and L are positive constants depending on M2 and κ0, and κλ is a sufficiently large constant such that , it holds with probability greater than 1 – 2 exp(–c0n) that,
where $S_{\alpha\eta} = \{j : |\beta^*_{\alpha,j}| > \eta\}$, η is a positive constant, and $\Delta_{S_{\alpha\eta}}$ denotes the subvector of Δ with indices in the set $S_{\alpha\eta}$.
We further verify a restricted strong convexity (RSC) condition, which has been shown to be critical in the study of high dimensional regularized M-estimators (Negahban, et al., 2012; Agarwal, Negahban, and Wainwright, 2012). Let

$$\delta\mathcal{L}_n(\Delta, \beta) = \mathcal{L}_n(\beta + \Delta) - \mathcal{L}_n(\beta) - \nabla\mathcal{L}_n(\beta)^T\Delta, \qquad\qquad (2.5)$$

where $\mathcal{L}_n(\beta) = n^{-1}\sum_{i=1}^n \ell_\alpha(y_i - x_i^T\beta)$, Δ is a p-dimensional vector, and $\nabla\mathcal{L}_n(\beta)$ is the gradient of $\mathcal{L}_n$ at the point β.
Definition 1
The loss function $\mathcal{L}_n$ satisfies the RSC condition on a set S with curvature γ and tolerance τ if $\delta\mathcal{L}_n(\Delta, \beta) \ge \gamma\|\Delta\|_2^2 - \tau\|\Delta\|_1^2$ for all $\Delta \in S$.
Next, we show that with high probability the RA-quadratic loss (2.2) satisfies RSC for and all with uniform constants and that do not depend on α. To prove the RSC at and a stronger version in Lemma 4, we first give a uniform lower bound of for all , and , where cu is a positive constant, depending on Mk, κl, κu and κ0.
Lemma 2
Under conditions (C1)-(C3), for all , and , there exist uniform positive constants and such that, with probability at least ,
| (2.6) |
Lemma 3
Suppose conditions (C1)-(C3) hold and assume that
| (2.7) |
by choosing η = λn, with probability at least , the RSC condition holds for for any with and .
Lemma 3 shows that, even though $\beta^*_\alpha$ is unknown and the set $S_{\alpha\eta}$ depends on α, the RSC condition holds with uniform constants that do not depend on α. This further gives the following upper bound of the estimation error $\|\hat\beta - \beta^*_\alpha\|_2$, which also does not depend on α.
Theorem 2
Under the conditions of Lemmas 1 and 3, there exist positive constants $c_1$, $c_2$ and $C_2$ such that, with probability at least $1 - c_1\exp(-c_2 n)$,
$$\|\hat\beta - \beta^*_\alpha\|_2 \le C_2\sqrt{R_q}\,\lambda_n^{1 - q/2}.$$
Finally, Theorems 1 and 2 together lead to the following main result, which gives a non-asymptotic upper bound of the statistical error $\|\hat\beta - \beta^*\|_2$.
Theorem 3
Under the conditions of Lemmas 1 and 3, with probability at least $1 - c_1\exp(-c_2 n)$,
$$\|\hat\beta - \beta^*\|_2 \le C_1\,\alpha^{k-1} + C_2\sqrt{R_q}\,\lambda_n^{1 - q/2}, \qquad\qquad (2.8)$$
where the constants $C_1$ and $C_2$ are as in Theorems 1 and 2, respectively.
Next, we compare our result with the existing results on robust estimation of the high dimensional linear regression model.

When the conditional distribution of ε is symmetric around 0, $\beta^*_\alpha = \beta^*$ for any α, so there is no approximation error. If ε has heavy tails in addition to being symmetric, we would like to choose α sufficiently large to robustify the estimation. Theorem 2 implies that $\hat\beta$ has a convergence rate of $\sqrt{R_q}\,(n^{-1}\log p)^{1/2 - q/4}$, where $\lambda_n \asymp \sqrt{n^{-1}\log p}$. This rate is the same as the minimax rate (Raskutti, Wainwright, and Yu, 2011) for weakly sparse models under light tails. In the special case q = 0, $\hat\beta$ converges at a rate of $\sqrt{s\,n^{-1}\log p}$, where s is the number of nonzero elements in β*. This is the same rate as that of the regularized LAD estimator in Wang (2013) and the regularized quantile estimator in Belloni and Chernozhukov (2011). It suggests that our method does not lose efficiency for symmetric heavy-tailed errors.
If the conditional distribution of ε is asymmetric around 0, the quantile and LAD based methods are inconsistent, since they estimate the median instead of the mean. Theorem 3 shows that our estimator still achieves the optimal rate as long as α is chosen small enough that the approximation error is of no larger order than the estimation error. Recall from the conditions of Lemmas 1 and 3 that α also needs to be chosen within a range determined by some constants $c_l$ and $c_u$. Given the sparsity condition (2.7), α can be chosen to meet the above three requirements. In terms of estimating the conditional mean effect, errors with heavy and asymmetric tails constitute the case where RA-Lasso has the biggest advantage over the existing estimators.
In practice, the distribution of the errors is unknown. Our method is more flexible than the existing methods as it does not require symmetry or light-tail assumptions. The tuning parameter α plays a key role in adapting to errors with different shapes and tails. In reality, the optimal values of the tuning parameters α and λn can be chosen by a two-dimensional grid search using cross-validation or an information criterion, for example the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC). More specifically, the search grid is formed by partitioning a rectangle in the scale of (log(α), log(λn)). The optimal values are then given by the combination that minimizes AIC, BIC or the cross-validated measure (such as mean squared error).
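A minimal sketch of such a two-dimensional grid search is given below. It assumes a generic fitting routine ra_lasso(X, y, alpha, lam) that returns a coefficient vector; this interface is hypothetical and only stands in for whatever solver is used (e.g. the composite gradient descent of Section 3).

```python
import numpy as np
from itertools import product

def cv_grid_search(X, y, ra_lasso, alphas, lams, n_folds=5, seed=0):
    """Pick the (alpha, lambda) pair minimizing cross-validated mean squared error."""
    n = len(y)
    fold_id = np.random.default_rng(seed).permutation(n) % n_folds
    best_pair, best_err = None, np.inf
    for alpha, lam in product(alphas, lams):
        err = 0.0
        for k in range(n_folds):
            train, test = fold_id != k, fold_id == k
            beta = ra_lasso(X[train], y[train], alpha, lam)
            err += np.mean((y[test] - X[test] @ beta) ** 2)
        if err < best_err:
            best_pair, best_err = (alpha, lam), err
    return best_pair

# Grids laid out on the log scale, as described in the text (ranges are illustrative):
# alphas = np.exp(np.linspace(np.log(1e-2), np.log(10), 10))
# lams   = np.exp(np.linspace(np.log(1e-3), np.log(1), 10))
```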
3 Geometric convergence of computational error
The gradient descent algorithm (Nesterov, 2007; Agarwal, Negahban, and Wainwright, 2012) is usually applied to solve the convex problem (2.3). For example, we can replace the RA-quadratic loss with its local isotropic quadratic approximation (LQA) and iteratively solve the following optimization problem:
$$\beta^{t+1} = \arg\min_{\|\beta\|_1 \le \rho} \left\{ \mathcal{L}_n(\beta^t) + \nabla\mathcal{L}_n(\beta^t)^T(\beta - \beta^t) + \frac{\gamma_u}{2}\|\beta - \beta^t\|_2^2 + \lambda_n\|\beta\|_1 \right\}, \qquad\qquad (3.1)$$
where γu is a sufficiently large fixed constant whose condition is specified in (3.3), the side constraint ∥β∥1 ≤ ρ is introduced to guarantee good performance in the first few iterations, and ρ is allowed to be sufficiently large such that β* is feasible. The isotropic local quadratic approximation allows an expedient computation. To solve (3.1), the update can be computed by a two-step procedure. We first solve (3.1) without the norm constraint, whose solution is the soft-thresholding of the vector $\beta^t - \gamma_u^{-1}\nabla\mathcal{L}_n(\beta^t)$ at level $\lambda_n/\gamma_u$; call this solution $\tilde\beta^{t+1}$. If $\|\tilde\beta^{t+1}\|_1 \le \rho$, set $\beta^{t+1} = \tilde\beta^{t+1}$. Otherwise, $\beta^{t+1}$ is obtained by further projecting $\tilde\beta^{t+1}$ onto the L1-ball {β : ∥β∥1 ≤ ρ}. The projection can be done (Duchi, et al., 2008) by soft-thresholding $\tilde\beta^{t+1}$ at a level πn, where πn is given by the following procedure: (1) sort $\{|\tilde\beta^{t+1}_j|\}_{j=1}^p$ into the decreasing sequence $b_1 \ge b_2 \ge \cdots \ge b_p$; (2) find $J = \max\{1 \le j \le p : b_j - j^{-1}(\sum_{r=1}^j b_r - \rho) > 0\}$ and set $\pi_n = J^{-1}(\sum_{r=1}^J b_r - \rho)$.
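A minimal sketch of one such iteration is given below, assuming the RA-quadratic loss as parameterized in (2.2) and the update (3.1) with step size 1/γu; the L1-ball projection follows the sorting procedure of Duchi et al. (2008). This is an illustrative sketch, not the paper's Matlab implementation.

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def project_l1_ball(v, rho):
    """Euclidean projection of v onto {beta : ||beta||_1 <= rho} (Duchi et al., 2008)."""
    if np.sum(np.abs(v)) <= rho:
        return v
    b = np.sort(np.abs(v))[::-1]              # magnitudes sorted in decreasing order
    css = np.cumsum(b)
    j = np.arange(1, len(v) + 1)
    J = j[b - (css - rho) / j > 0][-1]        # largest index kept positive
    pi_n = (css[J - 1] - rho) / J             # soft-threshold level pi_n
    return soft_threshold(v, pi_n)

def composite_gradient_step(beta, X, y, alpha, lam, gamma_u, rho):
    """One composite gradient update for the RA-Lasso objective."""
    n = X.shape[0]
    resid = y - X @ beta
    # gradient of n^{-1} sum_i ell_alpha(y_i - x_i' beta): average of clipped residuals times -x_i
    grad = -X.T @ (2.0 * np.clip(resid, -1.0 / alpha, 1.0 / alpha)) / n
    beta_tilde = soft_threshold(beta - grad / gamma_u, lam / gamma_u)
    return project_l1_ball(beta_tilde, rho)
```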
Agarwal, Negahban, and Wainwright (2012) considered the computational error of such first-order gradient descent methods. They showed that, for a convex and differentiable loss function and a decomposable penalty function, the optimization error $\|\beta^t - \hat\beta\|_2$ has the same rate as the statistical error for all sufficiently large t. Different from their setup, our population minimizer $\beta^*_\alpha$ varies with α. Nevertheless, as $\beta^*_\alpha$ converges to the true effect β*, by a careful control of α we can still show that $\|\beta^t - \hat\beta\|_2$ has the same rate as $\|\hat\beta - \beta^*\|_2$, where $\hat\beta$ is the theoretical solution of (2.3) and $\beta^t$ is as defined in (3.1).
The key is that the RA-quadratic loss function satisfies the restricted strong convexity (RSC) condition and the restricted smoothness (RSM) condition with some uniform constants; namely, $\delta\mathcal{L}_n(\Delta, \beta)$ as defined in (2.5) satisfies the following conditions:

$$\delta\mathcal{L}_n(\Delta, \beta) \ge \gamma_l\|\Delta\|_2^2 - \tau_l\|\Delta\|_1^2, \qquad\qquad (3.2)$$

$$\delta\mathcal{L}_n(\Delta, \beta) \le \gamma_u\|\Delta\|_2^2 + \tau_u\|\Delta\|_1^2, \qquad\qquad (3.3)$$
for all β and Δ in some set of interest, with parameters γl, τl, γu and τu that do not depend on α. We show that such conditions hold with high probability.
Lemma 4
Under condition (C1)-(C3), for all , and , with probability greater than 1 – c1 exp(–c2n), (3.2) and (3.3) hold with γl = κ1, , γu = 3κu, τu = κu(log p)/n.
We further give an upper bound on the computational error in Theorem 4. It shows that, with high probability, $\|\beta^t - \hat\beta\|_2$ is dominated by the statistical error $\|\hat\beta - \beta^*\|_2$ after sufficiently many iterations, as long as the statistical error itself tends to zero, which is required for consistency of any method over the weakly sparse $L_q$-ball by the known minimax results (Raskutti, Wainwright, and Yu, 2011). Theorems 3 and 4 below imply that, with high probability,
$$\|\beta^t - \beta^*\|_2 \le \|\beta^t - \hat\beta\|_2 + \|\hat\beta - \beta^*\|_2 \lesssim \|\hat\beta - \beta^*\|_2,$$
when the sample size is large enough. Therefore, $\|\beta^t - \beta^*\|_2$ has the same rate as $\|\hat\beta - \beta^*\|_2$. Hence, from a statistical point of view, there is no need to iterate beyond t steps.
Theorem 4
Under conditions of Theorem 3, suppose we choose λn as in Lemma 1 and also satisfying
where |Sαη| denotes the cardinality of set Sαη and , then with probability at least 1 – c1 exp(–c2n), there exists a generic positive constant d3 such that
| (3.4) |
for all iterations
where and is the initial value satisfying , is the tolerance level, κ, and ε are some constants as will be defined in (19) and (20) in the on-line supplementary file, respectively.
4 Robust estimation of mean and covariance matrix
The estimation of a mean can be regarded as a univariate linear regression in which the covariate equals 1. In that special case, we have a more explicit concentration result for the RA-mean estimator, i.e. the estimator that minimizes the RA-quadratic loss. Let $y_1, \ldots, y_n$ be an i.i.d. sample from some unknown distribution with $E(y_i) = \mu$ and $\mathrm{var}(y_i) = \sigma^2$. The RA-mean estimator $\hat\mu$ of μ is the solution of
$$\sum_{i=1}^n \psi(\alpha(y_i - \mu)) = 0$$
for parameter α → 0, where the influence function is given by ψ(x) = x if |x| ≤ 1, ψ(x) = 1 if x > 1, and ψ(x) = −1 if x < −1. The following theorem gives the exponential-type concentration of $\hat\mu$ around μ.
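A minimal sketch of the RA-mean estimator follows. It solves the estimating equation above by bisection, using the fact that the clipped influence function makes the left-hand side non-increasing in μ; the scalar root-finder is an illustrative choice, not the paper's implementation, and the value of α is left to the user (Theorem 5 discusses a theoretical choice).

```python
import numpy as np

def ra_mean(y, alpha, tol=1e-10, max_iter=200):
    """Robust mean estimate: root of sum_i psi(alpha * (y_i - mu)) = 0, psi clipped to [-1, 1]."""
    y = np.asarray(y, dtype=float)

    def score(mu):
        return np.sum(np.clip(alpha * (y - mu), -1.0, 1.0))

    lo, hi = y.min(), y.max()          # the root is bracketed by the data range
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:             # score is non-increasing in mu
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```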
Theorem 5
Assume and let where v ≥ σ. Then,
| . |
The above result provides fast concentration of the mean estimate under only a second-moment assumption. This is very useful for large-scale hypothesis testing (Efron, 2010; Fan, Han, and Gu, 2012) and covariance matrix estimation (Bickel and Levina, 2008; Fan, Liao and Mincheva, 2013), where uniform convergence is required. Taking the estimation of a large covariance matrix as an example, in order for the elements of the sample covariance matrix to converge uniformly, the aforementioned authors require the underlying multivariate distribution to be sub-Gaussian. This restrictive assumption can be removed if we apply the robust estimation with the above concentration bound. Regarding $\sigma_{ij} = E(X_iX_j)$ as the expected value of the random variable $X_iX_j$ (which is typically not the same as the median of $X_iX_j$), it can be estimated with accuracy
$$P\left(|\hat\sigma_{ij} - \sigma_{ij}| > v\sqrt{\frac{2\log(1/\delta)}{n}}\right) \le 2\delta,$$
where $v^2 \ge \mathrm{var}(X_iX_j)$ and $\hat\sigma_{ij}$ is the RA-mean estimator applied to the data $\{X_{ki}X_{kj}\}_{k=1}^n$. Since there are only O(p2) elements, by taking δ = p–a for a > 2 and applying the union bound, we have
$$\max_{i,j} |\hat\sigma_{ij} - \sigma_{ij}| = O_P\left(\sqrt{\frac{\log p}{n}}\right)$$
when $\max_{i,j}\mathrm{var}(X_iX_j)$ is bounded. This robustified covariance estimator requires much weaker conditions than the sample covariance and has far wider applicability. It can be further regularized in the same way as the sample covariance matrix.
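A minimal sketch of this elementwise robustified covariance estimator, reusing the ra_mean routine sketched above (assumed in scope); the default scaling of α is purely illustrative, and the columns of X are assumed to have mean zero, consistent with treating σij as E(XiXj).

```python
import numpy as np

def robust_covariance(X, alpha=None):
    """Apply the RA-mean estimator entrywise to the products X_i * X_j."""
    n, p = X.shape
    if alpha is None:
        alpha = np.sqrt(np.log(max(p, 2)) / n)   # illustrative default scaling
    sigma = np.empty((p, p))
    for i in range(p):
        for j in range(i, p):
            sigma[i, j] = sigma[j, i] = ra_mean(X[:, i] * X[:, j], alpha)
    return sigma
```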
5 Connection with Catoni loss
Catoni (2012) considered the estimation of the mean of a heavy-tailed distribution with fast concentration. He proposed an M-estimator of μ as a solution of
$$\sum_{i=1}^n \psi_c(\alpha(y_i - \mu)) = 0,$$
where the influence function $\psi_c(x)$ is chosen such that $-\log(1 - x + x^2/2) \le \psi_c(x) \le \log(1 + x + x^2/2)$. He showed that this M-estimator has exponential-type concentration while only requiring the existence of the variance. It performs as well as the sample mean in the light-tail case.
Catoni's idea can also be extended to the linear regression setting. Suppose we replace the RA-quadratic loss in (2.3) with a Catoni loss constructed from the influence function $\psi_c(t)$, given by
$$\psi_c(t) = \begin{cases} \log(1 + t + t^2/2), & t \ge 0, \\ -\log(1 - t + t^2/2), & t < 0. \end{cases}$$
Let $\hat\beta_c$ be the corresponding solution. Then $\hat\beta_c$ has the same non-asymptotic upper bound as the RA-Lasso estimator, as stated below.
Theorem 6
Suppose condition (C1) holds for k = 2 or 3, and that (C2), (C3) and (2.7) hold. Then there exist generic positive constants $c_1$, $c_2$, $d_4$ and $d_5$, depending on $M_k$, $\kappa_0$, $\kappa_l$, $\kappa_u$ and $\kappa_\lambda$, such that with probability at least $1 - c_1\exp(-c_2 n)$,
$$\|\hat\beta_c - \beta^*\|_2 \le d_4\,\alpha^{k-1} + d_5\sqrt{R_q}\,\lambda_n^{1 - q/2}.$$
Unlike the RA-Lasso, the order of the bias of $\hat\beta_c$ cannot be improved further, even when conditional moments of the errors beyond the third order exist. The reason is that the Catoni loss is not exactly the quadratic loss on any finite interval. Similar results regarding the computational error of $\hat\beta_c$ could also be established as in Theorem 4, since the RSC/RSM conditions also hold for the Catoni loss with uniform constants.
6 Variance Estimation
We estimate the unconditional variance $\sigma^2 = E\varepsilon^2$ based on the RA-Lasso estimator and a cross-validation scheme. To ease the presentation, we assume the data set can be evenly divided into J folds with m observations in each fold. Then, we estimate σ2 by
$$\hat\sigma^2 = \frac{1}{n}\sum_{j=1}^J \sum_{i \in I_j} \left(y_i - x_i^T\hat\beta^{(-j)}\right)^2,$$
where $I_j$ is the index set of the j-th fold and $\hat\beta^{(-j)}$ is the RA-Lasso estimator obtained by using the data points outside the j-th fold. We show that $\hat\sigma^2$ is asymptotically efficient. Different from the existing cross-validation based method (Fan, Guo, and Hao, 2012), a light-tail assumption is not needed here, due to the utilization of the RA-Lasso estimator.
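A minimal sketch of this cross-validated variance estimator, again assuming the hypothetical fitting interface ra_lasso(X, y, alpha, lam) used earlier and a plain average of out-of-fold squared residuals (no degrees-of-freedom correction):

```python
import numpy as np

def cv_variance(X, y, ra_lasso, alpha, lam, n_folds=5, seed=0):
    """Estimate sigma^2 = E[eps^2] by out-of-fold squared residuals of RA-Lasso fits."""
    n = len(y)
    fold_id = np.random.default_rng(seed).permutation(n) % n_folds
    sse = 0.0
    for j in range(n_folds):
        train, test = fold_id != j, fold_id == j
        beta = ra_lasso(X[train], y[train], alpha, lam)   # fit without the j-th fold
        sse += np.sum((y[test] - X[test] @ beta) ** 2)
    return sse / n
```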
Theorem 7
Under conditions of Theorem 3, if for q ε (0,1), and , then
7 Simulation Studies
In this section, we assess the finite sample performance of the RA-Lasso and compare it with other methods through various models. We simulated data from the following high dimensional model
$$y_i = x_i^T\beta^* + \varepsilon_i, \qquad i = 1, \ldots, n, \qquad\qquad (7.1)$$
where we generated n = 100 observations and the number of parameters was chosen to be p = 400. We chose the true regression coefficient vector as
$$\beta^* = (\underbrace{3, \ldots, 3}_{20}, 0, \ldots, 0)^T,$$
where the first 20 elements are all equal to 3 and the rest are all equal to 0. To cover various shapes of error distributions, we considered the following five scenarios:
Normal with mean 0 and variance 4 (N(0,4));
Two times the t-distribution with degrees of freedom 3 (2t3);
Mixture of Normal distributions (MixN): 0.5N(−1, 4) + 0.5N(8, 1);
Log-normal distribution (LogNormal): ε = e^{1+1.2Z}, where Z is standard normal;
Weibull distribution with shape parameter = 0.3 and scale parameter = 0.5 (Weibull).
In order to meet the model assumption, the errors were standardized to have mean 0. Table 1 categorizes the five scenarios according to the shapes and tails of the error distributions.
Table 1.
Summary of the shapes and tails of five error distributions
| Light Tail | Heavy Tail | |
|---|---|---|
| Symmetric | N (0, 4) | 2t3 |
| Asymmetric | MixN | LogNormal, Weibull |
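As an illustration of the setup, a minimal data-generation sketch for model (7.1) under the log-normal scenario is given below; the standard Gaussian design is an assumption for illustration, since the covariate distribution is not restated in this excerpt.

```python
import numpy as np

def simulate_lognormal_scenario(n=100, p=400, seed=0):
    """Generate (X, y) from model (7.1) with centered log-normal errors."""
    rng = np.random.default_rng(seed)
    beta_star = np.zeros(p)
    beta_star[:20] = 3.0                       # first 20 coefficients equal 3, rest 0
    X = rng.standard_normal((n, p))            # illustrative Gaussian design
    eps = np.exp(1.0 + 1.2 * rng.standard_normal(n))
    eps -= np.exp(1.0 + 1.2 ** 2 / 2.0)        # center: subtract E[e^{1 + 1.2 Z}]
    y = X @ beta_star + eps
    return X, y, beta_star
```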
To obtain our estimator, we iteratively applied the composite gradient descent algorithm. We compared RA-Lasso with two other methods in the high-dimensional setting: (a) Lasso: the penalized least-squares estimator with the L1-penalty as in Tibshirani (1996); and (b) R-Lasso: the R-Lasso estimator in Fan, Fan, and Barut (2014), which is the same as the regularized LAD estimator with the L1-penalty as in Wang (2013). Their performance under the five scenarios was evaluated by the following four measurements:
- (1) L2 error, defined as $\|\hat\beta - \beta^*\|_2$.

- (2) L1 error, defined as $\|\hat\beta - \beta^*\|_1$.

- (3) Number of false positives (FP): the number of noise covariates that are selected.

- (4) Number of false negatives (FN): the number of signal covariates that are not selected.
We also measured the relative gain of RA-Lasso with respect to R-Lasso and Lasso, in terms of the excess error over the oracle estimator. The oracle estimator is defined to be the least-squares estimator using the first 20 covariates only. Then, the relative gain of RA-Lasso with respect to Lasso (RGA,L) in the L2 and L1 norms is defined as
$$RG_{A,L} = \frac{\|\hat\beta_{\mathrm{Lasso}} - \beta^*\| - \|\hat\beta_{\mathrm{oracle}} - \beta^*\|}{\|\hat\beta_{\mathrm{RA\text{-}Lasso}} - \beta^*\| - \|\hat\beta_{\mathrm{oracle}} - \beta^*\|},$$
with $\|\cdot\|$ taken to be the L2 or L1 norm, respectively.
The relative gain of RA-Lasso with respect to R-Lasso (RGA,R) is defined similarly.
For RA-Lasso, the tuning parameters λn and α were chosen optimally based on 100 independent validation datasets. We ran a two-dimensional grid search to find the best (λn, α) pair that minimizes the mean L2-loss over the 100 validation datasets. Such an optimal pair was then used in the simulations. A similar method was applied in choosing the tuning parameters for Lasso and R-Lasso.
The above simulation model is based on the additive model (7.1), in which the error distribution is independent of the covariates. However, this homoscedastic model makes the conditional mean and the conditional median differ only by a constant. To further examine the deviations between mean regression and median regression, we also simulated data from the heteroscedastic model
| (7.2) |
where the constant c is chosen so that the average noise level is the same as that of εi in model (7.1). For both the homoscedastic and the heteroscedastic models, we ran 100 simulations for each scenario. The mean of each performance measurement is reported in Tables 2 and 3, respectively.
Table 2.
Simulation results of Lasso, R-Lasso and RA-Lasso under the homoscedastic model (7.1).
| | | Lasso | R-Lasso | RA-Lasso | RGA,L | RGA,R |
|---|---|---|---|---|---|---|
| N(0, 4) | L2 loss | 4.54 | 4.40 | 4.53 | 1.00 | 0.96 |
| | L1 loss | 27.21 | 29.11 | 27.21 | 1.00 | 1.08 |
| | FP, FN | 52.10, 0.09 | 66.36, 0.17 | 52.10, 0.09 | | |
| 2t3 | L2 loss | 6.04 | 5.10 | 5.47 | 1.14 | 0.91 |
| | L1 loss | 35.22 | 33.07 | 30.42 | 1.19 | 1.10 |
| | FP, FN | 47.13, 0.34 | 65.84, 0.22 | 41.34, 0.28 | | |
| MixN | L2 loss | 6.14 | 6.44 | 6.13 | 1.00 | 1.06 |
| | L1 loss | 40.46 | 46.18 | 38.48 | 1.06 | 1.23 |
| | FP, FN | 65.99, 0.34 | 80.31, 0.33 | 58.05, 0.39 | | |
| LogNormal | L2 loss | 11.08 | 12.16 | 10.10 | 1.14 | 1.30 |
| | L1 loss | 53.17 | 57.18 | 51.58 | 1.04 | 1.14 |
| | FP, FN | 26.5, 15.00 | 27.20, 6.90 | 37.20, 3.90 | | |
| Weibull | L2 loss | 7.77 | 7.11 | 6.62 | 1.23 | 1.10 |
| | L1 loss | 55.65 | 50.49 | 42.93 | 1.34 | 1.20 |
| | FP, FN | 78.70, 0.71 | 77.13, 0.56 | 62.27, 0.52 | | |
Table 3.
Simulation results of Lasso, R-Lasso and RA-Lasso under heteroscedastic model (7.2).
| | | Lasso | R-Lasso | RA-Lasso | RGA,L | RGA,R |
|---|---|---|---|---|---|---|
| N(0, 4) | L2 loss | 4.60 | 4.34 | 4.60 | 1.00 | 0.93 |
| | L1 loss | 27.16 | 27.14 | 27.15 | 1.00 | 1.00 |
| | FP, FN | 48.78, 0.10 | 58.25, 0.27 | 48.78, 0.10 | | |
| 2t3 | L2 loss | 8.08 | 6.71 | 6.70 | 1.26 | 1.01 |
| | L1 loss | 41.16 | 42.76 | 38.52 | 1.08 | 1.12 |
| | FP, FN | 55.33, 0.67 | 71.67, 0.33 | 45.33, 0.33 | | |
| MixN | L2 loss | 6.26 | 6.54 | 6.25 | 1.00 | 1.06 |
| | L1 loss | 41.26 | 46.95 | 39.25 | 1.06 | 1.23 |
| | FP, FN | 65.98, 0.34 | 80.30, 0.32 | 58.80, 0.34 | | |
| LogNormal | L2 loss | 10.86 | 9.19 | 8.48 | 1.43 | 1.13 |
| | L1 loss | 57.52 | 57.18 | 53.20 | 1.10 | 1.09 |
| | FP, FN | 29.70, 5.70 | 54.10, 2.00 | 54.30, 1.50 | | |
| Weibull | L2 loss | 7.40 | 8.81 | 5.53 | 1.53 | 1.92 |
| | L1 loss | 40.95 | 47.82 | 34.65 | 1.23 | 1.48 |
| | FP, FN | 38.87, 0.96 | 35.31, 2.90 | 58.15, 0.39 | | |
Tables 2 and 3 indicate that our method had the biggest advantage when the errors were asymmetric and heavy-tailed (LogNormal and Weibull). In this case, R-Lasso had larger L1 and L2 errors due to the bias of estimating the conditional median instead of the mean. Even though Lasso had no bias in its loss component (the quadratic loss), it did not perform well due to its sensitivity to outliers. The advantage of our method was more pronounced in the heteroscedastic model than in the homoscedastic model. Both models clearly indicate that if the errors come from asymmetric and heavy-tailed distributions, our method is better than both Lasso and R-Lasso. When the errors were symmetric and heavy-tailed (2t3), our estimator performed similarly to R-Lasso, and both outperformed Lasso. These two cases show that RA-Lasso was robust to outliers and did not lose efficiency when the errors were indeed symmetric. Under the light-tailed scenarios, if the errors were asymmetric (MixN), our method performed similarly to Lasso; R-Lasso performed worse, since it was biased. For the regular setting (N(0, 4)), where the errors were light-tailed and symmetric, the three methods were comparable with each other.
In conclusion, RA-Lasso is more flexible than Lasso and R-Lasso. The tuning parameter α automatically adapts to errors with different shapes and tails. It enables RA-Lasso to render consistently satisfactory results under all scenarios.
8 Real Data Example
In this section, we use a microarray data set to illustrate the performance of Lasso, R-Lasso and RA-Lasso. Huang, et al. (2011) studied the role of the innate immune system in the development of atherosclerosis by examining gene profiles from the peripheral blood of 119 patients. The data were collected using the Illumina HumanRef8 V2.0 Bead Chip and are available on Gene Expression Omnibus. The original study showed that the toll-like receptor (TLR) signaling pathway plays an important role in triggering the innate immune system in the face of atherosclerosis. Within this pathway, the "TLR8" gene was found to be a key atherosclerosis-associated gene. To further study the relationship between this key gene and other genes, we regressed its expression on that of another 464 genes from 12 different pathways (TLR, CCC, CIR, IFNG, MAPK, RAPO, EXAPO, INAPO, DRS, NOD, EPO, CTR) that are related to the TLR pathway. We applied Lasso, R-Lasso and RA-Lasso to these data. The tuning parameters for all methods were chosen by five-fold cross validation. Figure 1 shows our choice of the penalization parameter based on the cross-validation results. For RA-Lasso, the results were insensitive to the choice of α, which was fixed at 5. We then applied the three methods with the above choice of tuning parameters to select significant genes. The QQ-plots of the residuals from the three methods are shown in Figure 2. The genes selected by the three methods are reported in Table 4. After the selection, we regressed the expression of the TLR8 gene on the selected genes; the t-values from these refits are also reported in Table 4.
Figure 1.
Five-fold cross validation results: black dot marks the choice of the penalization parameter.
Figure 2.
QQ plots of the residuals from three methods.
Table 4.
Selected genes by Lasso, R-Lasso and RA-Lasso.
| Lasso | CRK 0.23 | ||||||
| R-Lasso | CSF3 | IL10 | AKT1 | KPNB1 | TLR2 | GRB2 | MAPK1 |
| −2.46 | 2.24 | 1.68 | 1.49 | 1.41 | −1.06 | 0.98 | |
| DAPK2 | TOLLIP | TLR1 | TLR3 | SHC1 | PSMD1 | F12 | |
| 0.7 | −0.68 | 0.52 | 0.33 | −0.28 | 0.27 | 0.24 | |
| EPOR | TJP1 | GAB2 | |||||
| −0.17 | −0.12 | −0.01 | |||||
| RA-Lasso | CSF3 | CD3E | BTK | CLSPN | RELA | AKT1 | IRS2 |
| −2.95 | 2.67 | 2.37 | 1.93 | 1.88 | 1.61 | 1.55 | |
| IL10 | MAP2K4 | PMAIP1 | BCL2L11 | AKT3 | DUSP10 | IRF4 | |
| 1.52 | 1.17 | −1.14 | −1.13 | −1.01 | 0.97 | −0.95 | |
| IFI6 | TLR1 | PSMB8 | KPNB1 | IFNG | FADD | TJP1 | |
| 0.86 | 0.82 | 0.79 | 0.77 | −0.74 | 0.65 | −0.57 | |
| CR2 | IL2 | PSMC2 | HSPA8 | SHC1 | SPI1 | IFNA6 | |
| 0.57 | −0.47 | 0.38 | −0.35 | −0.33 | −0.28 | 0.28 | |
| FYN | EPOR | MASP1 | PRKCZ | TOLLIP | BAK1 | ||
| −0.24 | 0.24 | −0.24 | 0.24 | −0.19 | 0.14 | ||
Table 4 shows that Lasso selected only one gene, R-Lasso selected 17 genes, and our proposed RA-Lasso selected 34 genes. Eight genes (CSF3, IL10, AKT1, TOLLIP, TLR1, SHC1, EPOR, and TJP1) found by R-Lasso were also selected by RA-Lasso. Compared with Lasso and R-Lasso, our method selected more genes, which could be useful for a second-stage confirmatory study. It is clearly seen from Figure 2 that the residuals from the fitted regressions had a heavy right tail and a skewed distribution. We learned from the simulation studies in Section 7 that RA-Lasso tends to perform better than Lasso and R-Lasso in this situation. For further investigation, we randomly chose 24 patients as a test set, applied the three methods to the remaining patients to obtain the estimated coefficients, and used these in turn to predict the responses of the 24 test patients. We repeated the random splitting 100 times; boxplots of the mean absolute and mean squared prediction errors are shown in Figure 3. RA-Lasso yields better predictions than Lasso and R-Lasso.
Figure 3.
Boxplot of the Mean Absolute/Squared Error of predictions.
Supplementary Material
Acknowledgment
The authors thank the Joint Editor, the Associate Editor and two referees for their valuable comments, which led to great improvement of the paper.
Appendix: Proofs of Theorem 1, 2 and 5
Proof of Theorem 1
Let . Since β* minimizes , it follows from condition (C2) that
| (A.1) |
Let . Then, since is the minimizer of , we have
By Taylor's expansion, we have
| (A.2) |
where is a vector lying between β* and and . With Pε denoting the distribution of ε conditioning on x and Eε the corresponding expectation, we have
Therefore, is further bounded by
| (A.3) |
Note that,
where the last inequality follows from (C1) and (C2). On the other hand, by (C3), is sub-Gaussian, hence its 2k-th moment is bounded by , for a universal positive constant c depending on k only. Then,
These results, together with (A.1) and (A.3), complete the proof.
Proof of Theorem 2
Let A1 and A2 denote the events that Lemma 1 and Lemma 3 hold, respectively. By Theorem 1 of Negahban, et al. (2012), within , it holds that
where (i) follows from the choice of η = λn, in (ii) we assume that the sample size n is large enough such that 2κ1κ2(n−1 log p)1/2 ≤ 1 and observe that κ1 = κl/4. On the other hand, by Lemma 1 and 3, , where and .
Proof of Theorem 5
The proof follows the same spirit as the proof of Proposition 2.4 in Catoni (2012). The influence function ψ(x) of RA-quadratic loss satisfies
Using this and independence, with , we have
Similarly, . Define
By Chebyshev inequality,
Similarly, P(r(θ) < B−(θ)) ≤ δ.
Let θ+ be the smallest solution of the quadratic equation B+(θ+) = 0 and θ− be the largest solution of the equation B−(θ−) = 0. Under the assumption that and the choice of , we have . Therefore,
Similarly,
With , , Since the map is non-increasing, under event
i.e. . Meanwhile, .
Footnotes
Supported in part by NSF Grants DMS-1206464 and DMS-1406266 and NIH grants R01-GM072611-9 and NIH R01-GM100474-4.
References
- Agarwal A, Negahban S, Wainwright MJ. Fast global convergence of gradient methods for high-dimensional statistical recovery. The Annals of Statistics. 2012;40:2452–2482.
- Alexander KS. Rates of growth and sample moduli for weighted empirical processes indexed by sets. Probability Theory and Related Fields. 1987;75:379–423.
- Belloni A, Chernozhukov V. L1-penalized quantile regression in high-dimensional sparse models. The Annals of Statistics. 2011;39:82–130.
- Bickel PJ, Levina E. Covariance regularization by thresholding. The Annals of Statistics. 2008;36:2577–2604.
- Bickel PJ, Ritov Y, Tsybakov A. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics. 2009;37:1705–1732.
- Bühlmann P, Van De Geer S. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer; 2011.
- Catoni O. Challenging the empirical mean and empirical variance: a deviation study. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques. 2012;48:1148–1185.
- Duchi J, Shalev-Shwartz S, Singer Y, Chandra T. Efficient projections onto the L1-ball for learning in high dimensions. Proceedings of the 25th International Conference on Machine Learning. 2008:272–279.
- Efron B. Correlated z-values and the accuracy of large-scale statistical estimates. Journal of the American Statistical Association. 2010;105:1042–1055.
- Engle RF. Autoregressive conditional heteroscedasticity with estimates of the variance of U.K. inflation. Econometrica. 1982;50:987–1008.
- Fan J, Fan Y, Barut E. Adaptive robust variable selection. The Annals of Statistics. 2014;42:324–351.
- Fan J, Guo S, Hao N. Variance estimation using refitted cross-validation in ultrahigh dimensional regression. Journal of the Royal Statistical Society, Series B. 2012;74:37–65.
- Fan J, Han X, Gu W. Estimating false discovery proportion under arbitrary covariance dependence (with discussion). Journal of the American Statistical Association. 2012;107:1019–1035.
- Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association. 2001;96:1348–1360.
- Fan J, Liao Y, Mincheva M. Large covariance estimation by thresholding principal orthogonal complements (with discussion). Journal of the Royal Statistical Society, Series B. 2013;75:603–680.
- Fan J, Lv J. Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society, Series B. 2008;70:849–911.
- Fan J, Lv J. Non-concave penalized likelihood with NP-dimensionality. IEEE Transactions on Information Theory. 2011;57:5467–5484.
- Huang CC, Liu K, Pope RM, Du P, Lin S, Rajamannan NM, et al. Activated TLR signaling in atherosclerosis among women with lower Framingham risk score: the multi-ethnic study of atherosclerosis. PLoS ONE. 2011;6:e21067.
- Huber PJ. Robust estimation of a location parameter. The Annals of Mathematical Statistics. 1964;35:73–101.
- Lambert-Lacroix S, Zwald L. Robust regression through the Huber's criterion and adaptive lasso penalty. Electronic Journal of Statistics. 2011;5:1015–1053.
- Ledoux M, Talagrand M. Probability in Banach Spaces: Isoperimetry and Processes. Springer; 1991.
- Li Y, Zhu J. L1-norm quantile regression. Journal of Computational and Graphical Statistics. 2008;17:163–185.
- Loh P-L, Wainwright MJ. Regularized M-estimators with nonconvexity: statistical and algorithmic theory for local optima. Advances in Neural Information Processing Systems. 2013:476–484.
- Massart P, Picard J. Concentration Inequalities and Model Selection. Springer; 2007.
- Negahban SN, Ravikumar P, Wainwright MJ, Yu B. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science. 2012;27:538–557.
- Nesterov Y. Gradient methods for minimizing composite objective function. Technical Report 76, Center for Operations Research and Econometrics (CORE), Catholic Univ. Louvain (UCL); 2007.
- Raskutti G, Wainwright MJ, Yu B. Minimax rates of estimation for high-dimensional linear regression over Lq-balls. IEEE Transactions on Information Theory. 2011;57:6976–6994.
- Rivasplata O. Subgaussian random variables: an expository note. 2012.
- Tibshirani R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B. 1996;58:267–288.
- Van de Geer S. Empirical Processes in M-estimation. Cambridge University Press; 2000.
- Wang H. Forward regression for ultra-high dimensional variable screening. Journal of the American Statistical Association. 2009;104:1512–1524.
- Wang L. The L1 penalized LAD estimator for high dimensional linear regression. Journal of Multivariate Analysis. 2013;120:135–151.
- Wu Y, Liu Y. Variable selection in quantile regression. Statistica Sinica. 2009;19:801.
- Zhang C-H. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics. 2010;38:894–942.
- Zou H, Yuan M. Composite quantile regression and the oracle model selection theory. The Annals of Statistics. 2008;36:1108–1126.