Published in final edited form as: Ann Appl Stat. 2011 Jan 1;5(1):232–253. doi: 10.1214/10-AOAS388

COORDINATE DESCENT ALGORITHMS FOR NONCONVEX PENALIZED REGRESSION, WITH APPLICATIONS TO BIOLOGICAL FEATURE SELECTION

Patrick Breheny and Jian Huang

Abstract

A number of variable selection methods have been proposed involving nonconvex penalty functions. These methods, which include the smoothly clipped absolute deviation (SCAD) penalty and the minimax concave penalty (MCP), have been demonstrated to have attractive theoretical properties, but model fitting is not a straightforward task, and the resulting solutions may be unstable. Here, we demonstrate the potential of coordinate descent algorithms for fitting these models, establishing theoretical convergence properties and demonstrating that they are significantly faster than competing approaches. In addition, we demonstrate the utility of convexity diagnostics to determine regions of the parameter space in which the objective function is locally convex, even though the penalty is not. Our simulation study and data examples indicate that nonconvex penalties like MCP and SCAD are worthwhile alternatives to the lasso in many applications. In particular, our numerical results suggest that MCP is the preferred approach among the three methods.

Key words and phrases: Coordinate descent, penalized regression, lasso, SCAD, MCP, optimization

1. Introduction

Variable selection is an important issue in regression. Typically, measurements are obtained for a large number of potential predictors in order to avoid missing an important link between a predictive factor and the outcome. This practice has only increased in recent years, as the low cost and easy implementation of automated methods for data collection and storage has led to an abundance of problems for which the number of variables is large in comparison to the sample size.

To reduce variability and obtain a more interpretable model, we often seek a smaller subset of important variables. However, searching through subsets of potential predictors for an adequate smaller model can be unstable [Breiman (1996)] and is computationally infeasible even in modest dimensions.

To avoid these drawbacks, a number of penalized regression methods have been proposed in recent years that perform subset selection in a continuous fashion. Penalized regression procedures accomplish this by shrinking coefficients toward zero in addition to setting some coefficients exactly equal to zero (thereby selecting the remaining variables). The most popular penalized regression method is the lasso [Tibshirani (1996)]. Although the lasso has many attractive properties, the shrinkage introduced by the lasso results in significant bias toward 0 for large regression coefficients.

Other authors have proposed alternative penalties, designed to diminish this bias. Two such proposals are the smoothly clipped absolute deviation (SCAD) penalty [Fan and Li (2001)] and the minimax concave penalty [MCP; Zhang (2010)]. In proposing SCAD and MCP, their authors established that SCAD and MCP regression models have the so-called oracle property, meaning that, in the asymptotic sense, they perform as well as if the analyst had known in advance which coefficients were zero and which were nonzero.

However, the penalty functions for SCAD and MCP are nonconvex, which introduces numerical challenges in fitting these models. For the lasso, which does possess a convex penalty, least angle regression [LARS; Efron et al. (2004)] is a remarkably efficient method for computing an entire path of lasso solutions in the same order of time as a least squares fit. For nonconvex penalties, Zou and Li (2008) have proposed making a local linear approximation (LLA) to the penalty, thereby yielding an objective function that can be optimized using the LARS algorithm.

More recently, coordinate descent algorithms for fitting lasso-penalized models have been shown to be competitive with the LARS algorithm, particularly in high dimensions [Friedman et al. (2007); Wu and Lange (2008); Friedman, Hastie and Tibshirani (2010)]. In this paper we investigate the application of coordinate descent algorithms to SCAD and MCP regression models, for which the penalty is nonconvex. We provide implementations of these algorithms through the publicly available R package, ncvreg (available at http://cran.r-project.org).

Methods for high-dimensional regression and variable selection have applications in many scientific fields, particularly those in high-throughput biomedical studies. In this article we apply the methods to two such studies—a genetic association study and a gene expression study—each possessing a different motivation for sparse regression models. In genetic association studies, very few genetic markers are expected to be associated with the phenotype of interest. Thus, the underlying data-generating process is likely to be highly sparse, and sparse regression methods are likely to perform well. Gene expression studies may also have sparse underlying representations; however, even when they are relatively “dense,” there may be separate motivations for sparse regression models. For example, data from microarray experiments may be used to discover biomarkers and design diagnostic assays. To be practical in clinical settings, such assays must use only a small number of probes [Yu et al. (2007)].

In Section 2 we describe algorithms for fitting linear regression models penalized by MCP and SCAD, and discuss their convergence. In Section 3 we discuss the modification of those algorithms for fitting logistic regression models. Issues of stability, local convexity and diagnostic measures for investigating these concerns are discussed, along with the selection of tuning parameters, in Section 4. The numerical efficiency of the proposed algorithm is investigated in Section 5 and compared with LLA algorithms. The statistical properties of lasso, SCAD and MCP are also investigated and compared using simulated data (Section 5) and applied to biomedical data (Section 6).

2. Linear regression with nonconvex penalties

Suppose we have n observations, each of which contains measurements of an outcome yi and p features {xi1, …, xip}. We assume without loss of generality that the features have been standardized such that $\sum_{i=1}^n x_{ij} = 0$, $n^{-1}\sum_{i=1}^n x_{ij}^2 = 1$ and $\sum_{i=1}^n y_i = 0$. This ensures that the penalty is applied equally to all covariates in an equivariant manner, and eliminates the need for an intercept. This is standard practice in regularized estimation; estimates are then transformed back to their original scale after the penalized models have been fit, at which point an intercept is introduced.
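
As a concrete illustration, the following is a minimal R sketch of this standardization step (column means of zero, n−1Σi x²ij = 1, and a centered response); the function name and return format are illustrative choices, not taken from the paper or from the ncvreg package.

```r
## Standardize features so each column of X has mean 0 and
## n^{-1} * sum(x_ij^2) = 1, and center the response.
## Illustrative helper; assumes no constant columns.
standardize <- function(X, y) {
  n <- nrow(X)
  Xc <- scale(X, center = TRUE, scale = FALSE)   # subtract column means
  scl <- sqrt(colSums(Xc^2) / n)                 # so that n^{-1} x_j'x_j = 1
  Xs <- sweep(Xc, 2, scl, "/")
  list(X = Xs, y = y - mean(y),
       center = attr(Xc, "scaled:center"), scale = scl)
}
```

Estimates obtained on this scale can be mapped back to the original scale by dividing each coefficient by the corresponding element of `scale` and recomputing the intercept from the stored centers.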

We will consider models in which the expected value of the outcome depends on the covariates through the linear function E(y) = η = Xβ. The problem of interest involves estimating the vector of regression coefficients β. Penalized regression methods accomplish this by minimizing an objective function Q that is composed of a loss function plus a penalty function. In this section we take the loss function to be squared error loss:

$$Q_{\lambda,\gamma}(\beta) = \frac{1}{2n}\sum_{i=1}^n (y_i - \eta_i)^2 + \sum_{j=1}^p p_{\lambda,\gamma}(|\beta_j|), \tag{2.1}$$

where pλ, γ (·) is a function of the coefficients indexed by a parameter λ that controls the tradeoff between the loss function and penalty, and that also may be shaped by one or more tuning parameters γ. This approach produces a spectrum of solutions depending on the value of λ; such methods are often referred to as regularization methods, with λ the regularization parameter.

To find the value of β that optimizes (2.1), the LLA algorithm makes a linear approximation to the penalty, then uses the LARS algorithm to compute the solution. This process is repeated iteratively until convergence for each value of λ over a grid (see footnote 1). Details of the algorithm and its implementation may be found in Zou and Li (2008).

The LLA algorithm is inherently inefficient to some extent, in that it uses the path-tracing LARS algorithm to produce updates to the regression coefficients. For example, over a grid of 100 values for λ that averages 10 iterations until convergence at each point, the LLA algorithm must calculate 1000 lasso paths to produce a single approximation to the MCP or SCAD path.

An alternative to LLA is to use a coordinate descent approach. Coordinate descent algorithms optimize a target function with respect to a single parameter at a time, iteratively cycling through all parameters until convergence is reached. The idea is simple but efficient—each pass over the parameters requires only O(np) operations. If the number of iterations is smaller than p, the solution is reached with even less computational burden than the np² operations required to solve a linear regression problem by QR decomposition. Furthermore, since the computational burden increases only linearly with p, coordinate descent algorithms can be applied to very high-dimensional problems.

Coordinate descent algorithms are ideal for problems that have a simple closed-form solution in a single dimension but lack one in higher dimensions. The basic structure of a coordinate descent algorithm is simple: for each j in {1, …, p}, partially optimize Q with respect to βj with the remaining elements of β fixed at their most recently updated values. Like the LLA algorithm, coordinate descent algorithms iterate until convergence is reached, and this process is repeated over a grid of values for λ to produce a path of solutions.

The efficiency of coordinate descent algorithms comes from two sources: (1) updates can be computed very rapidly, and (2) if we are computing a continuous path of solutions (see Section 2.4), our initial values will never be far from the solution and few iterations will be required. Rapid updates are possible because the minimization of Q with respect to βj can be obtained from the univariate regression of the current residuals r = y − Xβ on xj, at a cost of O(n) operations. The specific form of these updates depends on the penalty and whether linear or logistic regression is being performed, and will be discussed further in their respective sections.

In this section we describe coordinate descent algorithms for least squares regression penalized by SCAD and MCP, as well as investigate the convergence of these algorithms.

2.1. MCP

Zhang (2010) proposed the MCP, defined on [0, ∞) by

$$
p_{\lambda,\gamma}(\theta) =
\begin{cases}
\lambda\theta - \dfrac{\theta^2}{2\gamma}, & \text{if } \theta \le \gamma\lambda,\\[6pt]
\tfrac{1}{2}\gamma\lambda^2, & \text{if } \theta > \gamma\lambda,
\end{cases}
\qquad
p'_{\lambda,\gamma}(\theta) =
\begin{cases}
\lambda - \dfrac{\theta}{\gamma}, & \text{if } \theta \le \gamma\lambda,\\[6pt]
0, & \text{if } \theta > \gamma\lambda
\end{cases}
\tag{2.2}
$$

for λ ≥ 0 and γ > 1. The rationale behind the penalty can be understood by considering its derivative: MCP begins by applying the same rate of penalization as the lasso, but continuously relaxes that penalization until, when θ > γλ, the rate of penalization drops to 0. The penalty is illustrated in Figure 1.
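
For reference, the MCP penalty and its derivative in (2.2) can be evaluated with the short R helpers below; these are illustrative functions, not part of the ncvreg package.

```r
## MCP penalty and derivative on [0, Inf), following (2.2); gamma > 1.
mcp_penalty <- function(theta, lambda, gamma) {
  ifelse(theta <= gamma * lambda,
         lambda * theta - theta^2 / (2 * gamma),   # penalization being relaxed
         0.5 * gamma * lambda^2)                   # constant: rate has dropped to 0
}
mcp_deriv <- function(theta, lambda, gamma) {
  ifelse(theta <= gamma * lambda, lambda - theta / gamma, 0)
}
```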

Fig. 1. Shapes of the lasso, SCAD and MCP penalty functions. The panel on the left plots the penalties themselves, whereas the panel on the right plots the derivative of the penalty. Note that none of the penalties are differentiable at βj = 0.

The rationale behind the MCP can also be understood by considering its univariate solution. Consider the simple linear regression of y upon x, with unpenalized least squares solution z = n−1x′y (recall that x has been standardized so that x′x = n). For this simple linear regression problem, the MCP estimator has the following closed form:

$$
\hat{\beta} = f_{\mathrm{MCP}}(z,\lambda,\gamma) =
\begin{cases}
\dfrac{S(z,\lambda)}{1 - 1/\gamma}, & \text{if } |z| \le \gamma\lambda,\\[6pt]
z, & \text{if } |z| > \gamma\lambda,
\end{cases}
\tag{2.3}
$$

where S is the soft-thresholding operator [Donoho and Johnstone (1994)] defined for λ ≥ 0 by

$$
S(z,\lambda) =
\begin{cases}
z - \lambda, & \text{if } z > \lambda,\\
0, & \text{if } |z| \le \lambda,\\
z + \lambda, & \text{if } z < -\lambda.
\end{cases}
\tag{2.4}
$$

Noting that S(z, λ) is the univariate solution to the lasso, we can observe by comparison that MCP scales the lasso solution back toward the unpenalized solution by an amount that depends on γ. As γ → ∞, the MCP and lasso solutions are the same. As γ → 1, the MCP solution becomes the hard thresholding estimate zI{|z| > λ}. Thus, in the univariate sense, the MCP produces the “firm shrinkage” estimator of Gao and Bruce (1997).
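
A direct R transcription of the soft-thresholding operator (2.4) and the univariate MCP (“firm shrinkage”) solution (2.3) follows; the function names are illustrative.

```r
## Soft thresholding (2.4) and the univariate MCP solution (2.3),
## where z is the unpenalized (least squares) univariate solution.
soft_threshold <- function(z, lambda) sign(z) * pmax(abs(z) - lambda, 0)

f_mcp <- function(z, lambda, gamma) {
  ifelse(abs(z) <= gamma * lambda,
         soft_threshold(z, lambda) / (1 - 1 / gamma),  # scaled back toward z
         z)                                 # no penalization beyond gamma*lambda
}
```

For example, with λ = 1 and γ = 3, f_mcp(0.5, 1, 3) returns 0 (thresholded to zero), while f_mcp(4, 1, 3) returns 4, the unpenalized solution.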

In the special case of an orthonormal design matrix, subset selection is equivalent to hard-thresholding, the lasso is equivalent to soft-thresholding, and MCP is equivalent to firm-thresholding. Thus, the lasso may be thought of as performing a kind of multivariate soft-thresholding, subset selection as multivariate hard-thresholding, and the MCP as multivariate firm-thresholding.

The univariate solution of the MCP is employed by the coordinate descent algorithm to obtain the coordinate-wise minimizer of the objective function. In this setting, however, the role of the unpenalized solution is played by the unpenalized regression of xj’s partial residuals on xj, denoted zj. Introducing the notation −j to refer to the portion that remains after the jth column or element is removed, the partial residuals of xj are r−j = y − X−jβ−j, where β−j denotes the most recently updated value of β with its jth element removed. Thus, at step j of iteration m, the following three calculations are made:

  1. calculate
     $z_j = n^{-1}x_j'r_{-j} = n^{-1}x_j'r + \beta_j^{(m)},$
  2. update $\beta_j^{(m+1)} \leftarrow f_{\mathrm{MCP}}(z_j,\lambda,\gamma),$

  3. update $r \leftarrow r - \big(\beta_j^{(m+1)} - \beta_j^{(m)}\big)x_j,$

where the last step ensures that r always holds the current values of the residuals. Note that zj can be obtained by regressing either the partial residuals r−j or the current residuals r on xj; using the current residuals is more computationally efficient, however, as it does not require recalculating the partial residuals for each update.
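
Putting the three steps together, the following is a minimal, unoptimized R sketch of coordinate descent for MCP-penalized linear regression at a single value of λ, assuming X has been standardized and y centered as described at the start of this section. It is a sketch of the algorithm described above, not the ncvreg implementation, and the function and argument names are illustrative.

```r
## Coordinate descent for MCP-penalized least squares (single lambda).
## X: standardized n x p matrix; y: centered response.
cd_mcp <- function(X, y, lambda, gamma = 3, beta = rep(0, ncol(X)),
                   tol = 1e-8, max_iter = 1000) {
  soft <- function(z, l) sign(z) * max(abs(z) - l, 0)
  firm <- function(z, l, g) if (abs(z) <= g * l) soft(z, l) / (1 - 1 / g) else z
  n <- nrow(X)
  r <- drop(y - X %*% beta)                             # current residuals
  for (iter in seq_len(max_iter)) {
    max_change <- 0
    for (j in seq_len(ncol(X))) {
      z_j   <- drop(crossprod(X[, j], r)) / n + beta[j] # step 1: z_j
      b_new <- firm(z_j, lambda, gamma)                 # step 2: update beta_j
      r     <- r - (b_new - beta[j]) * X[, j]           # step 3: update residuals
      max_change <- max(max_change, abs(b_new - beta[j]))
      beta[j] <- b_new
    }
    if (max_change < tol) break                         # convergence check
  }
  beta
}
```

A call such as `cd_mcp(X, y, lambda = 0.1)` returns the coordinate-wise minimizer for one λ; the pathwise strategy of Section 2.4 simply calls such a routine over a decreasing grid of λ values, using each solution as the warm start for the next.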

2.2. SCAD

The SCAD penalty [Fan and Li (2001)], defined on [0, ∞), is given by

$$
p_{\lambda,\gamma}(\theta) =
\begin{cases}
\lambda\theta, & \text{if } \theta \le \lambda,\\[4pt]
\dfrac{\gamma\lambda\theta - 0.5(\theta^2 + \lambda^2)}{\gamma - 1}, & \text{if } \lambda < \theta \le \gamma\lambda,\\[8pt]
\dfrac{\lambda^2(\gamma^2 - 1)}{2(\gamma - 1)}, & \text{if } \theta > \gamma\lambda,
\end{cases}
\qquad
p'_{\lambda,\gamma}(\theta) =
\begin{cases}
\lambda, & \text{if } \theta \le \lambda,\\[4pt]
\dfrac{\gamma\lambda - \theta}{\gamma - 1}, & \text{if } \lambda < \theta \le \gamma\lambda,\\[8pt]
0, & \text{if } \theta > \gamma\lambda
\end{cases}
\tag{2.5}
$$

for λ ≥ 0 and γ > 2. The rationale behind the penalty is similar to that of MCP. Both penalties begin by applying the same rate of penalization as the lasso, and reduce that rate to 0 as θ gets further away from zero; the difference is in the way that they make the transition. The SCAD penalty is also illustrated in Figure 1.

The univariate solution for a SCAD-penalized regression coefficient is as follows, where again z is the unpenalized linear regression solution:

$$
\hat{\beta} = f_{\mathrm{SCAD}}(z,\lambda,\gamma) =
\begin{cases}
S(z,\lambda), & \text{if } |z| \le 2\lambda,\\[4pt]
\dfrac{S(z,\gamma\lambda/(\gamma-1))}{1 - 1/(\gamma-1)}, & \text{if } 2\lambda < |z| \le \gamma\lambda,\\[8pt]
z, & \text{if } |z| > \gamma\lambda.
\end{cases}
\tag{2.6}
$$

This solution is similar to, although not the same as, the MCP solution/firm shrinkage estimator. Both penalties rescale the soft-thresholding solution toward the unpenalized solution. However, although SCAD has soft-thresholding as the limiting case when γ → ∞, it does not have hard thresholding as the limiting case when γ → 2. This univariate solution plays the same role in the coordinate descent algorithm for SCAD as fMCP(zj, λ, γ) played for MCP.
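
The univariate SCAD solution (2.6) can be transcribed in R in the same way as the MCP solution; again, this is an illustrative helper rather than the ncvreg code.

```r
## Univariate SCAD solution (2.6); z is the unpenalized univariate solution,
## with lambda >= 0 and gamma > 2.
f_scad <- function(z, lambda, gamma) {
  soft <- function(z, l) sign(z) * pmax(abs(z) - l, 0)
  ifelse(abs(z) <= 2 * lambda,
         soft(z, lambda),                                 # lasso-like region
         ifelse(abs(z) <= gamma * lambda,
                soft(z, gamma * lambda / (gamma - 1)) /
                  (1 - 1 / (gamma - 1)),                  # transition region
                z))                                       # unpenalized region
}
```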

2.3. Convergence

We now consider the convergence of coordinate descent algorithms for SCAD and MCP. We begin with the following lemma.

Lemma 1

Let Qj,λ,γ(β) denote the objective function Q, defined in (2.1), as a function of the single variable βj, with the remaining elements of β fixed. For the SCAD penalty with γ > 2 and for the MCP with γ > 1, Qj,λ,γ(β) is a convex function of βj for all j.

From this lemma, we can establish the following convergence properties of coordinate descent algorithms for SCAD and MCP.

Proposition 1

Let {β(k)} denote the sequence of coefficients produced at each iteration of the coordinate descent algorithms for SCAD and MCP. For all k = 0, 1, 2, …,

$$Q_{\lambda,\gamma}\big(\beta^{(k+1)}\big) \le Q_{\lambda,\gamma}\big(\beta^{(k)}\big).$$

Furthermore, the sequence is guaranteed to converge to a point that is both a local minimum and a global coordinate-wise minimum of Qλ, γ.

Because the penalty functions of SCAD and MCP are not convex, neither the coordinate descent algorithms proposed here nor the LLA algorithm are guaranteed to converge to a global minimum in general. However, it is possible for the objective function Q to be convex with respect to β even though it contains a nonconvex penalty component. In particular, letting c* denote the minimum eigenvalue of n−1X′X, the MCP objective function is convex if γ > 1/c*, while the SCAD objective function is convex if γ > 1 + 1/c* [Zhang (2010)]. If this is the case, then the coordinate descent algorithms converge to the global minimum. Section 4 discusses this issue further with respect to high-dimensional settings.

2.4. Pathwise optimization and initial values

Usually, we are interested in obtaining β̂ not just for a single value of λ, but for a range of values extending from a maximum value λmax for which all penalized coefficients are 0 down to λ = 0 or to a minimum value λmin at which the model becomes excessively large or ceases to be identifiable. When the objective function is strictly convex, the estimated coefficients vary continuously with λ ∈ [λmin, λmax] and produce a path of solutions regularized by λ. Examples of such paths may be seen in Figures 2 and 5.

Fig. 2. Example MCP coefficient paths for simulated data where p > n. The shaded region is the region in which the objective function is not locally convex. Note that the solutions are continuous and stable in the locally convex regions, but discontinuous and erratic otherwise.

Fig. 5. MCP coefficient paths for various values of γ for the two studies of Section 6. The shaded region depicts areas that are not locally convex, and a vertical line is drawn at the value of λ selected by BIC. For the sparse genetic association data, small values of γ produce the best fit; for the dense microarray data, large values are preferred.

Because the coefficient paths are continuous (under strictly convex objective functions), a reasonable approach to choosing initial values is to start at one extreme of the path and use the estimate β̂ from the previous value of λ as the initial value for the next value of λ. For MCP and SCAD, we can easily determine λmax, the smallest value for which all penalized coefficients are 0. From (2.3) and (2.6), it is clear that λmax = zmax, where zmax = maxj {|n−1x′jy|} [for logistic regression (Section 3), λmax is again equal to maxj {|zj|}, albeit with zj as defined in (3.4) and the quadratic approximation taken with respect to the intercept-only model]. Thus, by starting at λmax with β̂ = 0 and proceeding toward λmin, we can ensure that the initial values will never be far from the solution.

For all the numerical results in this paper, we follow the approach of Friedman, Hastie and Tibshirani (2010) and compute solutions along a grid of 100 λ values that are equally spaced on the log scale.
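
As a sketch of this setup, the R fragment below computes λmax = maxj |n−1x′jy| for a standardized design and returns a grid of 100 λ values equally spaced on the log scale. The smallest grid value (controlled by lambda_min_ratio) is an illustrative assumption, not a quantity specified in the paper.

```r
## Log-spaced lambda grid from lambda_max down; X standardized, y centered.
lambda_grid <- function(X, y, nlambda = 100, lambda_min_ratio = 0.001) {
  n <- nrow(X)
  lambda_max <- max(abs(crossprod(X, y))) / n   # max_j |n^{-1} x_j' y|
  exp(seq(log(lambda_max), log(lambda_max * lambda_min_ratio),
          length.out = nlambda))
}
## Warm starts: solve at the first (largest) lambda with beta = 0, then pass
## each fitted beta as the initial value for the next, smaller lambda.
```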

3. Logistic regression with nonconvex penalties

For logistic regression, it is not possible to eliminate the need for an intercept by centering the response variable. For logistic regression, then, y will denote the original vector of 0–1 responses. Correspondingly, although X is still standardized, it now contains an unpenalized column of 1’s for the intercept, with corresponding coefficient β0. The expected value of y once again depends on the linear function η = Xβ, although the model is now

$$P(y_i = 1 \mid x_{i1},\ldots,x_{ip}) = \pi_i = \frac{e^{\eta_i}}{1 + e^{\eta_i}}. \tag{3.1}$$

Estimation of the coefficients is now accomplished via minimization of the objective function

$$Q_{\lambda,\gamma}(\beta) = -\frac{1}{n}\sum_{i=1}^n \big\{y_i\log\pi_i + (1-y_i)\log(1-\pi_i)\big\} + \sum_{j=1}^p p_{\lambda,\gamma}(|\beta_j|). \tag{3.2}$$

Minimization can be approached by first obtaining a quadratic approximation to the loss function based on a Taylor series expansion about the value of the regression coefficients at the beginning of the current iteration, β(m). Doing so results in the familiar form of the iteratively reweighted least squares algorithm commonly used to fit generalized linear models [McCullagh and Nelder (1989)]:

$$Q_{\lambda,\gamma}(\beta) \approx \frac{1}{2n}(\tilde{y} - X\beta)'W(\tilde{y} - X\beta) + \sum_{j=1}^p p_{\lambda,\gamma}(|\beta_j|), \tag{3.3}$$

where ỹ, the working response, is defined by

$$\tilde{y} = X\beta^{(m)} + W^{-1}(y - \pi)$$

and W is a diagonal matrix of weights, with elements

$$w_i = \pi_i(1 - \pi_i),$$

and π is evaluated at β(m). We focus on logistic regression here, but the same approach could be applied to fit penalized versions of any generalized linear model, provided that the quantities ỹ and W [as well as the residuals r = ỹ − Xβ(m), which depend on ỹ implicitly] are replaced with expressions specific to the particular response distribution and link function.

With this representation, the local linear approximation (LLA) and coordinate descent (CD) algorithms can be extended to logistic regression in a rather straightforward manner. At iteration m, the following two steps are taken:

  1. Approximate the loss function based on β(m).

  2. Execute one iteration of the LLA/CD algorithm, obtaining β(m+1).

These two steps are then iterated until convergence for each value of λ. Note that for the coordinate descent algorithm, step (2) loops over all the covariates. Note also that step 2 must now involve the updating of the intercept term, which may be accomplished without modification of the underlying framework by setting λ = 0 for the intercept term.
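
The fragment below illustrates step 1 for penalized logistic regression: given the current β(m), it forms the weights wi, the working response ỹ and the residuals r used in (3.3). It is an illustrative sketch (X is assumed to include the unpenalized column of 1’s), not the ncvreg implementation; in practice the weights would also be bounded away from zero as a safeguard against saturation.

```r
## One quadratic approximation step for penalized logistic regression.
irls_step <- function(X, y, beta) {
  eta    <- drop(X %*% beta)            # linear predictor
  pi_hat <- 1 / (1 + exp(-eta))         # fitted probabilities (3.1)
  w      <- pi_hat * (1 - pi_hat)       # diagonal elements of W
  y_tilde <- eta + (y - pi_hat) / w     # working response
  r       <- y_tilde - eta              # residuals, equal to W^{-1}(y - pi)
  list(w = w, y_tilde = y_tilde, r = r)
}
## A single coordinate descent pass using (3.6)/(3.7) then updates beta,
## and the approximation is re-formed at the new value.
```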

The local linear approximation is extended in a straightforward manner to reweighted least squares by distributing W, obtaining the transformed covariates and response variable W1/2X and W1/2ỹ, respectively. The implications for coordinate descent algorithms are discussed in the next section.

Briefly, we note that, as is the case for the traditional iteratively reweighted least squares algorithm applied to generalized linear models, neither algorithm (LLA/CD) is guaranteed to converge for logistic regression. However, provided that adequate safeguards are in place to protect against model saturation, we have not observed failure to converge to be a problem for either algorithm.

3.1. Fixed scale solution

The presence of observation weights changes the form of the coordinate-wise updates. Let vj = n−1x′jWxj, and redefine r = W−1(y − π) and

$$
\begin{aligned}
z_j &= \tfrac{1}{n}x_j'W(\tilde{y} - X_{-j}\beta_{-j}) &\quad (3.4)\\
    &= \tfrac{1}{n}x_j'Wr + v_j\beta_j^{(m)}. &\quad (3.5)
\end{aligned}
$$

Now, the coordinate-descent update for MCP is

$$
\beta_j \leftarrow
\begin{cases}
\dfrac{S(z_j,\lambda)}{v_j - 1/\gamma}, & \text{if } |z_j| \le v_j\gamma\lambda,\\[8pt]
\dfrac{z_j}{v_j}, & \text{if } |z_j| > v_j\gamma\lambda
\end{cases}
\tag{3.6}
$$

for γ > 1/vj, and for SCAD,

$$
\beta_j \leftarrow
\begin{cases}
\dfrac{S(z_j,\lambda)}{v_j}, & \text{if } |z_j| \le \lambda(v_j + 1),\\[8pt]
\dfrac{S(z_j,\gamma\lambda/(\gamma-1))}{v_j - 1/(\gamma-1)}, & \text{if } \lambda(v_j + 1) < |z_j| \le v_j\gamma\lambda,\\[8pt]
\dfrac{z_j}{v_j}, & \text{if } |z_j| > v_j\gamma\lambda
\end{cases}
\tag{3.7}
$$

for γ > 1 + 1/vj. Updating of r proceeds as in the linear regression case.
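
For reference, the fixed-scale updates (3.6) and (3.7) can be written as small R helpers that take zj and vj as inputs; the function names are illustrative.

```r
## Fixed-scale coordinate-wise updates for the weighted (logistic) case.
soft <- function(z, l) sign(z) * max(abs(z) - l, 0)

update_mcp_wls <- function(z_j, v_j, lambda, gamma) {
  if (abs(z_j) <= v_j * gamma * lambda)
    soft(z_j, lambda) / (v_j - 1 / gamma)     # requires gamma > 1 / v_j
  else z_j / v_j
}

update_scad_wls <- function(z_j, v_j, lambda, gamma) {
  if (abs(z_j) <= lambda * (v_j + 1)) {
    soft(z_j, lambda) / v_j
  } else if (abs(z_j) <= v_j * gamma * lambda) {
    soft(z_j, gamma * lambda / (gamma - 1)) /
      (v_j - 1 / (gamma - 1))                 # requires gamma > 1 + 1 / v_j
  } else {
    z_j / v_j
  }
}
```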

As is evident from comparing (2.3)/(2.6) with (3.6)/(3.7), portions of both numerator and denominator are being reweighted in logistic regression. In comparison, for linear regression, vj is always equal to 1 and this term drops out of the solution.

This reweighting, however, introduces some difficulties with respect to the choice and interpretation of the γ parameter. In linear regression, the scaling factor by which solutions are adjusted toward their unpenalized solution is a constant [1 – 1/γ for MCP, 1 – 1/(γ − 1) for SCAD] for all values of λ and for each covariate. Furthermore, for standardized covariates, this constant has a universal interpretation for all linear regression problems, meaning that theoretical arguments and numerical simulations investigating γ do not need to be rescaled and reinterpreted in the context of applied problems.

In logistic regression, however, this scaling factor is constantly changing, and is different for each covariate. This makes choosing an appropriate value for γ difficult in applied settings and robs the parameter of a consistent interpretation.

To illustrate the consequences of this issue, consider an attempt to perform logistic regression updates using γ = 3.7, the value suggested for linear regression in Fan and Li (2001). Because wi cannot exceed 0.25, neither can vj, so 1/vj ≥ 4 > 3.7 = γ; the coordinate-wise objective is therefore never convex, and the solution is discontinuous and unstable. Note that this problem does not arise from the use of any particular algorithm—it is a direct consequence of the poorly behaved objective function with this value of γ.

3.2. Adaptive rescaling

To resolve these difficulties, we propose an adaptive rescaling of the penalty parameter γ to match the continually changing scale of the covariates. This can be accomplished by simply replacing pλ,γ(|βj|) with pλ,γ(|vjβj|). The algorithmic consequences for the LLA algorithm are straightforward. For coordinate descent, the updating steps become simple extensions of the linear regression solutions:

$$\beta_j \leftarrow \frac{f(z_j,\lambda,\gamma)}{v_j}.$$

Note that, for MCP,

$$\frac{S(z_j,\lambda)}{v_j(1 - 1/\gamma)} = \frac{S(z_j,\lambda)}{v_j - 1/\gamma^*},$$

where γ* = γ/vj. Thus, the adaptively rescaled solution is still minimizing the objective function (3.3), albeit with an alternate set of shape parameters {γ*j} that are unknown until convergence is attained.
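
A minimal sketch of the adaptively rescaled MCP update, βj ← fMCP(zj, λ, γ)/vj, written as an R helper with illustrative names:

```r
## Adaptively rescaled MCP update: apply the linear-regression solution
## f_MCP(z_j, lambda, gamma) and divide by v_j.
update_mcp_adaptive <- function(z_j, v_j, lambda, gamma) {
  soft <- function(z, l) sign(z) * max(abs(z) - l, 0)
  firm <- function(z, l, g) if (abs(z) <= g * l) soft(z, l) / (1 - 1 / g) else z
  firm(z_j, lambda, gamma) / v_j
}
```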

Note that rescaling by vj does not affect the magnitude of the penalty (λ), only the range over which the penalty is applied (γ). Is it logical to apply different scales to different variables? Keep in mind that, since x′jxj = n for all j, the rescaling factor vj will tend to be quite similar for all covariates. However, consider the case where a covariate is predominantly associated with observations for which π̂i is close to 0 or 1. For such a covariate, adaptive rescaling will extend the range over which the penalization is applied. This seems to be a reasonable course of action, as large changes in this coefficient produce only small changes in the model’s fit, and provide less compelling evidence of a covariate’s importance.

SCAD does not have the property that its adaptively rescaled solution is equal to a solution of the regular SCAD objective function with different shape parameters. This is due to the fact that the scale of the penalty is tied to the scale of the coefficient by the θ < λ clause. One could make this clause more flexible by reparameterizing SCAD so that p′(θ) = λ in the region θ < γ2λ, where γ2 would be an additional tuning parameter. In this generalized case, adaptively rescaled SCAD would minimize a version of the original objective function in which the γ parameters are rescaled by νj, as in the MCP case.

As we will see in Section 5, this adaptive rescaling increases interpretability and makes it easier to select γ.

4. Convexity, stability and selection of tuning parameters

4.1. Convexity and stability

As mentioned in Section 2.3, the MCP/SCAD-penalized linear regression objective function is convex provided that γ > 1/c* (MCP) or γ > 1 + 1/c* (SCAD). This result can be extended to logistic regression as well via the following proposition.

Proposition 2

Let c*(β) denote the minimum eigenvalue of n−1X′WX, where W is evaluated at β. Then the objective function defined in (3.2) is a convex function of β on the region where c*(β) > 1/γ for MCP, and where c*(β) > 1/(γ − 1) for SCAD.

A convex objective function is desirable for (at least) two reasons. First, for any algorithm (such as coordinate descent) which converges to a critical point of the objective function, convexity ensures that the algorithm converges to the unique global minimum. Second, convexity ensures that β̂ is continuous with respect to λ, which in turn ensures good initial values for the scheme described in Section 2.4, thereby reducing the number of iterations required by the algorithm.

There are obvious practical benefits to using an algorithm that converges rapidly to the unique global solution. However, convexity may also be desirable from a statistical perspective. In the absence of convexity, β̂ is not necessarily continuous with respect to the data—that is, a small change in the data may produce a large change in the estimate. Such estimators tend to have high variance [Breiman (1996); Bruce and Gao (1996)] in addition to being unattractive from a logical perspective. Furthermore, discontinuity with respect to λ increases the difficulty of choosing a good value for the regularization parameter.

4.2. Local convexity diagnostics

However, it is not always necessary to attain global convexity. In high-dimensional settings where p > n, global convexity is neither possible nor relevant. In such settings, sparse solutions for which the number of nonzero coefficients is much lower than p are desired. Provided that the objective function is convex in the local region that contains these sparse solutions, we will still have stable estimates and smooth coefficient paths in the parameter space of interest.

Of considerable practical interest, then, is a diagnostic that would indicate, for nonconvex penalties such as SCAD and MCP, which regions of the coefficient paths are locally convex and which are not. Here, we introduce a definition for local convexity and a diagnostic measure which can be easily computed from the data and which can indicate which regions of the coefficient paths retain the benefits of convexity and which do not.

Recall the conditions for global convexity: γ must be greater than 1/c* for MCP (1 + 1/c* for SCAD), where c* denotes the minimum eigenvalue of n−1X′X. We propose modifying this cutoff so that only the covariates with nonzero coefficients (the covariates which are “active” in the model) are included in the calculation of c*. Note that neither γ nor X changes with λ. What does vary with λ is the set of active covariates; generally speaking, this set will grow as λ decreases (with correlated/collinear data, however, exceptions are possible). Thus, local convexity of the objective function will not be an issue for large λ, but may cease to hold as λ is lowered past some critical value λ*.

Specifically, let β̂(λ) denote the minimizer of (2.1) for a given value of λ, let A(λ) = {j: β̂j(λ) ≠ 0} denote the active set of covariates, and let U(λ) = A(λ) ∪ A(λ−) denote the set of covariates that are either currently active at λ or that will become active imminently upon the lowering of λ by an infinitesimal amount. Let XU(λ) denote the design matrix formed from only those covariates for which j ∈ U(λ), with c*(λ) denoting the minimum eigenvalue of n−1X′U(λ)XU(λ). Now, letting

$$\lambda^* = \inf\{\lambda : \gamma > 1/c^*(\lambda)\} \quad \text{for MCP}$$

and

$$\lambda^* = \inf\{\lambda : \gamma > 1 + 1/c^*(\lambda)\} \quad \text{for SCAD},$$

we define the λ interval over which the objective function is “locally convex” to be (λ*, ∞). Correspondingly, the objective function is locally nonconvex (and globally nonconvex) in the region [0, λ*]. Because c*(λ) changes only when the composition of U(λ) changes, it is clear that λ* must be a value of λ for which A(λ) ≠ A(λ−).

For logistic regression, let c*(λ) represent the minimum eigenvalue of n−1X′U(λ)WXU(λ) − Dp, where W is evaluated at β̂(λ) and Dp is a diagonal matrix that depends on the penalty function. For the fixed scale estimation of Section 3.1, Dp has diagonal elements {1/γ} for MCP and {1/(γ − 1)} for SCAD, while for the adaptively rescaled estimation of Section 3.2, Dp has diagonal elements {n−1x′kWxk/γ} for MCP and {n−1x′kWxk/(γ − 1)} for SCAD. For c*(λ) defined in this manner, λ* equals the smallest value of λ such that c*(λ) > 0.
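
The diagnostic is straightforward to compute. The sketch below checks local convexity for linear regression by forming the active-set submatrix and its minimum eigenvalue; the function name, the penalty argument and the treatment of an empty active set are illustrative choices.

```r
## Local convexity check for linear regression: c*(lambda) is the minimum
## eigenvalue of n^{-1} X_U' X_U for the current set of (nearly) active columns.
locally_convex <- function(X, active, gamma, penalty = c("MCP", "SCAD")) {
  penalty <- match.arg(penalty)
  if (length(active) == 0) return(TRUE)          # empty model: trivially convex
  n  <- nrow(X)
  XU <- X[, active, drop = FALSE]
  c_star <- min(eigen(crossprod(XU) / n, symmetric = TRUE,
                      only.values = TRUE)$values)
  if (penalty == "MCP") gamma > 1 / c_star else gamma > 1 + 1 / c_star
}
## Along a decreasing grid of lambda values, lambda* can be approximated by
## the smallest grid value at which this check still passes.
```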

The practical benefit of these diagnostics can be seen in Figure 2, which depicts examples of coefficient paths from simulated data in which n = 20 and p = 50. As is readily apparent, solutions are smooth and well behaved in the unshaded, locally convex region, but discontinuous and noisy in the shaded region which lies to the right of λ*. Figure 2 contains only MCP paths; the corresponding figures for SCAD paths look very similar. Figure 2 displays linear regression paths; paths for logistic regression can be seen in Figure 5.

The noisy solutions in the shaded region of Figure 2 may arise from numerical convergence to suboptimal solutions, inherent statistical variability arising from minimization of a nonconvex function, or a combination of both; either way, however, the figure makes a compelling argument that practitioners of regression methods involving nonconvex penalties should know which region their solution resides in.

4.3. Selection of γ and λ

Estimation using MCP and SCAD models depends on the choice of the tuning parameters γ and λ. This is usually accomplished with cross-validation or using an information criterion such as AIC or BIC. Each approach has its shortcomings.

Information criteria derived using asymptotic arguments for unpenalized regression models are on questionable ground when applied to penalized regression problems where p > n. Furthermore, defining the number of parameters in models with penalization and shrinkage present is complicated and affects lasso, MCP and SCAD differently. Finally, we have observed that AIC and BIC have a tendency, in some settings, to select local minima in the nonconvex region of the objective function.

Cross-validation does not suffer from these issues; on the other hand, it is computationally intensive, particularly when performed over a two-dimensional grid of values for γ and λ, some of which may not possess convex objective functions and, as a result, converge slowly. This places a barrier in the way of examining the choice of γ, and may lead practitioners to use default values that are not appropriate in the context of a particular problem.

It is desirable to select a value of γ that produces parsimonious models while avoiding the aforementioned pitfalls of nonconvexity. Thus, we suggest the following hybrid approach, using BIC, cross-validation and convexity diagnostics in combination, which we have observed to work well in practice. For a path of solutions with a given value of γ, use AIC/BIC to select λ and use the aforementioned convexity diagnostics to determine the locally convex regions of the solution path. If the chosen solution lies in the region below λ*, increase γ to make the penalty more convex. On the other hand, if the chosen solution lies well above λ*, one can lower γ without fear of nonconvexity. Once this process has been iterated a few times to find a value of γ that seems to produce an appropriate balance between parsimony and convexity, use cross-validation to choose λ for this value of γ. This approach is illustrated in Section 6.

5. Numerical results

5.1. Computational efficiency

In this section we assess the computational efficiency of the coordinate descent and LLA algorithms for fitting MCP and SCAD regression models. We examine the time required to fit the entire coefficient path for linear and logistic regression models.

In these simulations, the response for linear regression was generated from the standard normal distribution, while for logistic regression, the response was generated as a Bernoulli random variable with P (yi = 1) = 0.5 for all i.

To investigate whether or not the coordinate descent algorithm experiences difficulty in the presence of correlated predictors, covariate values were generated in one of two ways: independently from the standard normal distribution (i.e., no correlation between the covariates), and from a multivariate normal distribution with a pairwise correlation of ρ = 0.9 between all covariates.

To ensure that the comparison between the algorithms was fair and not unduly influenced by failures to converge brought on by nonconvexity or model saturation, n was chosen to equal 1000 and γ was set equal to 1/c*, thereby ensuring a convex objective function for linear regression. This does not necessarily ensure that the logistic regression objective function is convex, although with adaptive rescaling, it works reasonably well. In all of the cases investigated, the LLA and coordinate descent algorithms converged to the same path (within the accuracy of the convergence criteria). The median times required to fit the entire coefficient path are presented as a function of p in Figure 3.

Fig. 3. Time required (in seconds) to fit the entire coefficient paths for linear and logistic regression, employing either the local linear approximation (LLA) algorithm or the coordinate descent (CD) algorithm. Both axes are on the log scale. Times displayed are averaged over 100 independently generated data sets. The coordinate descent algorithm is at least 100 times faster than the LLA algorithm at all points.

Interestingly, the slope of both lines in Figure 3 is close to 1 in each setting (on the log scale), implying that both coordinate descent and LLA exhibit a linear increase in computational burden with respect to p over the range of p investigated here. However, the coordinate descent algorithm is drastically more efficient—up to 1000 times faster. The coordinate descent algorithm is somewhat (2–5 times) slower in the presence of highly correlated covariates, although it is still at least 100 times faster than LLA in all settings investigated.

5.2. Comparison of MCP, SCAD and lasso

We now turn our attention to the statistical properties of MCP and SCAD in comparison with the lasso. We will examine two instructive sets of simulations, comparing these nonconvex penalty methods to the lasso. The first set is a simple series designed to illustrate the primary differences between the methods. The second set is more complex, and designed to mimic the applications to real data in Section 6.

In the simple settings, covariate values were generated independently from the standard normal distribution. The sample sizes were n = 100 for linear regression and n = 400 for logistic regression. In the complex settings, the design matrices from the actual data sets were used.

For each data set, the MCP and SCAD coefficient paths were fit using the coordinate descent algorithm described in this paper, and lasso paths were fit using the glmnet package [Friedman, Hastie and Tibshirani (2010)]. Tenfold cross-validation was used to choose λ.

Setting 1

We begin with a collection of simple models in which there are four nonzero coefficients, two equal to +s and two equal to −s. Given these values of the coefficient vector, responses were generated from the normal distribution with mean ηi and variance 1 for linear regression, and for logistic regression, responses were generated according to (3.1).

For the simulations in this setting, we used the single value γ = 3 in order to illustrate that reasonable results can be obtained without investigation of the tuning parameter if adaptive rescaling is used for penalized logistic regression. Presumably, the performance of both MCP and SCAD would be improved if γ was chosen in a more careful manner, tailored to each setting; nevertheless, the simulations succeed in demonstrating the primary statistical differences between the methods.

We investigate the estimation efficiency of MCP, SCAD and lasso as s and p are varied. These results are shown in Figure 4.

Fig. 4. Relative (to the lasso) mean squared error for MCP- and SCAD-penalized linear and logistic regression. MSE was calculated for each penalty on 100 independently generated data sets, and the ratio of the medians is plotted. MCP and SCAD greatly outperform lasso for large coefficients, but not necessarily for small coefficients.

Figure 4 illustrates the primary difference between MCP/SCAD and lasso. MCP and SCAD are more aggressive modeling strategies, in the sense that they allow coefficients to take on large values much more easily than lasso. As a result, they are capable of greatly outperforming the lasso when large coefficients are present in the underlying model. However, the shrinkage applied by the lasso is beneficial when coefficients are small, as seen in the regions of Figure 4 in which s is near zero. In such settings, MCP and SCAD are more likely to overfit the noisy data. This is true for both linear and logistic regression. This tendency is worse for logistic regression than it is for linear regression, worse for MCP than it is for SCAD, and worse when γ is closer to 1 (results supporting this final comparison not shown).

Setting 2

In this setting we examine the performance of MCP, SCAD and lasso for the kinds of design matrices often seen in biomedical applications, where covariates contain complicated patterns of correlation and high levels of multi-collinearity. Section 6 describes in greater detail the two data sets, which we will refer to here as “Genetic Association” and “Microarray,” but it is worth noting here that for the Genetic Association case, n = 800 and p = 532, while for the Microarray case, n = 38 and p = 7129.

In both simulations the design matrix was held constant while the coefficient and response vectors changed from replication to replication. In the Genetic Association case, 5 coefficient values were randomly generated from the exponential distribution with rate parameter 3, given a random (+/−) sign, and randomly assigned to one of the 532 SNPs (the rest of the coefficients were zero). In the Microarray case, 100 coefficient values were randomly generated from the normal distribution with mean 0 and standard deviation 3, and randomly assigned among the 7129 features. Once the coefficient vectors were generated, responses were generated according to (3.1). This was repeated 500 times for each case, and the results are displayed in Table 1.

Table 1.

Simulation results for Setting 2. “Correct (%)” refers to the percentage of variables selected by the model that had nonzero coefficients in the generating model

Penalty Misclassification error (%) Model size Correct (%)
Genetic association
 MCP (γ = 1.5) 38.7 2.1 54.7
 SCAD (γ = 2.5) 38.7 9.5 25.0
 Lasso 38.8 12.6 20.6
Microarray
 MCP (γ = 5) 19.7 4.3 3.2
 MCP (γ = 20) 16.7 7.5 4.3
 SCAD (γ = 20) 16.4 8.7 4.3
 Lasso 16.2 8.8 4.3

The Genetic Association simulation—reflecting the presumed underlying biology in these kinds of studies—was designed to be quite sparse. We expected the more sparse MCP method to outperform the lasso here, and it does. All methods achieve similar predictive accuracy, but MCP does so using far fewer variables and with a much lower percentage of spurious covariates in the model.

The Microarray simulation was designed to be unfavorable to sparse modeling—a large number of nonzero coefficients, many of which are close to zero. As is further discussed in Section 6, γ should be large in this case; when γ = 5, MCP produces models with diminished accuracy in terms of both prediction and variable selection. However, it is not the case that lasso is clearly superior to MCP here; if γ is chosen appropriately, the predictive accuracy of the lasso can be retained, with gains in parsimony, by applying MCP with γ = 20.

In the microarray simulation, the percentage of variables selected that have a nonzero coefficient in the generating model is quite low for all methods. This is largely due to high correlation among the features. The methods achieve good predictive accuracy by finding predictors that are highly correlated with true covariates, but are unable to distinguish the causative factors from the rest in the network of interdependent gene expression levels. This is not an impediment for screening and diagnostic applications, although it may or may not be helpful for elucidating the pathway of disease.

SCAD, the design of which has similarities to both MCP and lasso, lies in between the two methodologies. However, in these two cases, SCAD is more similar to lasso than it is to MCP.

6. Applications

6.1. Genetic association

Genetic association studies have become a widely used tool for detecting links between genetic markers and diseases. The example that we will consider here involves data from a case-control study of age-related macular degeneration consisting of 400 cases and 400 controls. We confine our analysis to 30 genes that previous biological studies have suggested may be related to the disease. These genes contained 532 markers with acceptably high call rates and minor allele frequencies. Logistic regression models penalized by lasso, SCAD and MCP were fit to the data assuming an additive effect for all markers (i.e., for a homozygous dominant marker, xij = 2, for a heterozygous marker, xij = 1, and for a homozygous recessive marker, xij = 0). As an illustration of computational savings in a practical setting, the LLA algorithm required 17 minutes to produce a single MCP path of solutions, as compared to 2.5 seconds for coordinate descent.

As described in Section 4.3, we used BIC and convexity diagnostics to choose an appropriate value of γ; this is illustrated in the top half of Figure 5. The biological expectation is that few markers are associated with the disease; this is observed empirically as well, as a low value of γ = 1.5 was observed to balance sparsity and convexity in this setting. In a similar fashion, γ = 2.5 was chosen for SCAD.

Ten-fold cross-validation was then used to choose λ for MCP, SCAD and lasso. The number of parameters in the selected model, along with the corresponding cross-validation errors, are listed in Table 2. The results indicate that MCP is better suited to handle this sparse regression problem than either SCAD or lasso, achieving a modest reduction in cross-validated prediction error while producing a much more sparse model. As one would expect, the SCAD results are in between MCP and lasso.

Table 2.

Genetic association results

Penalty Model size CV error (%)
MCP 7 39.4
SCAD 25 40.6
Lasso 103 40.9

6.2. Gene expression

Next, we examine the gene expression study of leukemia patients presented in Golub et al. (1999). In the study the expression levels of 7129 genes were recorded for 27 patients with acute lymphoblastic leukemia (ALL) and 11 patients with acute myeloid leukemia (AML). Expression levels for an additional 34 patients were measured and reserved as a test set. Logistic regression models penalized by lasso, SCAD and MCP were fit to the training data.

Biologically, this problem is more dense than the earlier application. Potentially, a large number of genes are affected by the two types of leukemia. In addition, the sample size is much smaller for this problem. These two factors suggest that a higher value of γ is appropriate, an intuition borne out in Figure 5. The figure suggests that γ ≈ 20 may be needed in order to obtain an adequately convex objective function for this problem (we used γ = 20 for both MCP and SCAD).

With γ so large, MCP and SCAD are quite similar to lasso; indeed, all three methods classify the test set observations in the same way, correctly classifying 31/34. Analyzing the same data, Park and Hastie (2007) find that lasso-penalized logistic regression is comparable with or more accurate than several other competing methods often applied to high-dimensional classification problems. The same would therefore apply to MCP and SCAD as well; however, MCP achieved its success using only 11 predictors, compared to 13 for lasso and SCAD. This is an important consideration for screening and diagnostic applications such as this one, where the goal is often to develop an accurate test using as few features as possible in order to control cost.

Note that, even in a problem such as this one with a sample size of 38 and a dozen features selected, there may still be an advantage to the sparsity of MCP and the parsimonious models it produces. To take advantage of MCP, however, it is essential to choose γ wisely—using the value γ = 5 (much too sparse for this problem) tripled the test error to 9/34. We are not aware of any method that can achieve prediction accuracy comparable to MCP while using only 11 features or fewer.

7. Discussion

The results from the simulation studies and data examples considered in this paper provide compelling evidence that nonconvex penalties like MCP and SCAD are worthwhile alternatives to the lasso in many applications. In particular, the numerical results suggest that MCP is often the preferred approach among the three methods.

Many researchers and practitioners have been reluctant to embrace these methods due to their lack of convexity, and for good reason: nonconvex objective functions are difficult to optimize and often produce unstable solutions. However, we provide here a fast, efficient and stable algorithm for fitting MCP and SCAD models, as well as introduce diagnostic measures to indicate which regions of a coefficient path are locally convex and which are not. Furthermore, we introduce an adaptive rescaling for logistic regression which makes selection of the tuning parameter γ much easier and more intuitive. All of these innovations are publicly available as an open-source R package (http://cran.r-project.org) called ncvreg. We hope that these efforts remove some of the barriers to the further study and use of these methods in both research and practice.

Acknowledgments

The authors would like to thank Professor Cun-Hui Zhang for providing us an early version of his paper on MCP and sharing his insights on the related topics, and Rob Mullins for the genetic association data analyzed in Section 6. We also thank the editor, associate editor and two referees for comments that improved both the clarity of the writing and the quality of the research.

APPENDIX

Although the objective functions under consideration in this paper are not differentiable, they possess directional derivatives and directional second derivatives at all points β and in all directions u, for β, u ∈ ℝp. We use duQ and d²uQ to represent the first and second derivatives of Q in the direction u.

Proof of Lemma 1

For all βj ∈ (−∞, ∞),

$$
\min\big\{d_-^2 Q_{j,\lambda,\gamma}(\beta),\, d_+^2 Q_{j,\lambda,\gamma}(\beta)\big\} \ge
\begin{cases}
1 - \dfrac{1}{\gamma} & \text{for MCP},\\[6pt]
1 - \dfrac{1}{\gamma - 1} & \text{for SCAD}.
\end{cases}
$$

Thus, Qj,λ,γ(β) is a strictly convex function on (−∞, ∞) if γ > 1 for MCP and if γ > 2 for SCAD.

Proof of Proposition 1

Tseng (2001) establishes sufficient conditions for the convergence of cyclic coordinate descent algorithms to coordinate-wise minima. The strict convexity in each coordinate direction established in Lemma 1 satisfies the conditions of Theorems 4.1 and 5.1 of that article. Because Q is continuous, either theorem can be directly applied to demonstrate that the coordinate descent algorithms proposed in this paper converge to coordinate-wise minima. Furthermore, because all directional derivatives exist, every coordinate-wise minimum is also a local minimum.

Proof of Proposition 2

For all β ∈ ℝp,

$$
\min_u\big\{d_u^2 Q_{\lambda,\gamma}(\beta)\big\} \ge
\begin{cases}
\text{the minimum eigenvalue of } \tfrac{1}{n}X'WX - \tfrac{1}{\gamma}I & \text{for MCP},\\[6pt]
\text{the minimum eigenvalue of } \tfrac{1}{n}X'WX - \tfrac{1}{\gamma - 1}I & \text{for SCAD}.
\end{cases}
$$

Thus, Qλ, γ (β) is a convex function on the region for which c*(β) > 1/γ for MCP, and where c*(β) > 1/(γ − 1) for SCAD.

Footnotes

1. In Zou and Li (2008), the authors also discuss a one-step version of LLA starting from an initial estimate satisfying certain conditions. Although this is an interesting topic, we focus here on algorithms that minimize the specified MCP/SCAD objective functions and thereby possess the oracle properties demonstrated in Fan and Li (2001) and Zhang (2010) without requiring a separate initial estimator.

Contributor Information

Patrick Breheny, Email: patrick.breheny@uky.edu.

Jian Huang, Email: jian-huang@uiowa.edu.

References

  1. Breiman L. Heuristics of instability and stabilization in model selection. Ann Statist. 1996;24:2350–2383.
  2. Bruce AG, Gao HY. Understanding WaveShrink: Variance and bias estimation. Biometrika. 1996;83:727–745.
  3. Donoho DL, Johnstone JM. Ideal spatial adaptation by wavelet shrinkage. Biometrika. 1994;81:425–455.
  4. Efron B, Hastie T, Johnstone I, Tibshirani R. Least angle regression. Ann Statist. 2004;32:407–451.
  5. Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J Amer Statist Assoc. 2001;96:1348–1361.
  6. Friedman J, Hastie T, Tibshirani R. Regularization paths for generalized linear models via coordinate descent. J Statist Softw. 2010;33:1–22.
  7. Friedman J, Hastie T, Höfling H, Tibshirani R. Pathwise coordinate optimization. Ann Appl Statist. 2007;1:302–332.
  8. Gao HY, Bruce AG. WaveShrink with firm shrinkage. Statist Sinica. 1997;7:855–874.
  9. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA, et al. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science. 1999;286:531–536. doi:10.1126/science.286.5439.531.
  10. McCullagh P, Nelder JA. Generalized Linear Models. Boca Raton, FL: Chapman and Hall/CRC; 1989.
  11. Park MY, Hastie T. L1 regularization path algorithm for generalized linear models. J Roy Statist Soc Ser B. 2007;69:659–677.
  12. Tibshirani R. Regression shrinkage and selection via the lasso. J Roy Statist Soc Ser B. 1996;58:267–288.
  13. Tseng P. Convergence of a block coordinate descent method for nondifferentiable minimization. J Optim Theory Appl. 2001;109:475–494.
  14. Wu TT, Lange K. Coordinate descent algorithms for lasso penalized regression. Ann Appl Statist. 2008;2:224–244.
  15. Yu J, Yu J, Almal AA, Dhanasekaran SM, Ghosh D, Worzel WP, Chinnaiyan AM. Feature selection and molecular classification of cancer using genetic programming. Neoplasia. 2007;9:292–303. doi:10.1593/neo.07121.
  16. Zhang CH. Nearly unbiased variable selection under minimax concave penalty. Ann Statist. 2010;38:894–942.
  17. Zou H, Li R. One-step sparse estimates in nonconcave penalized likelihood models. Ann Statist. 2008;36:1509–1533. doi:10.1214/009053607000000802.
