Published in final edited form as: Electron J Stat. 2012 Nov 9;6:2125–2149. doi: 10.1214/12-EJS740

The graphical lasso: New insights and alternatives

Rahul Mazumder1,2,* and Trevor Hastie3

Abstract

The graphical lasso [5] is an algorithm for learning the structure in an undirected Gaussian graphical model, using ℓ1 regularization to control the number of zeros in the precision matrix Θ = Σ−1 [2, 11]. The R package GLASSO [5] is popular, fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of GLASSO can be tricky; the converged precision matrix might not be the inverse of the estimated covariance, and occasionally it fails to converge with warm starts. In this paper we explain this behavior, and propose new algorithms that appear to outperform GLASSO.

By studying the “normal equations” we see that GLASSO is solving the dual of the graphical lasso penalized likelihood, by block coordinate ascent; a result which can also be found in [2]. In this dual, the target of estimation is Σ, the covariance matrix, rather than the precision matrix Θ. We propose similar primal algorithms P-GLASSO and DP-GLASSO, that also operate by block-coordinate descent, where Θ is the optimization target. We study all of these algorithms, and in particular different approaches to solving their coordinate sub-problems. We conclude that DP-GLASSO is superior from several points of view.

Keywords and phrases: Graphical lasso, sparse inverse covariance selection, precision matrix, convex analysis/optimization, positive definite matrices, sparsity, semidefinite programming

1. Introduction

Consider a data matrix Xn×p, a sample of n realizations from a p-dimensional Gaussian distribution with zero mean and positive definite covariance matrix Σ. The task is to estimate the unknown Σ based on the n samples — a challenging problem especially when n < p, when the ordinary maximum likelihood estimate does not exist. Even if it does exist (for p < n), the MLE is often poorly behaved, and regularization is called for. The Graphical Lasso [5] is a regularization framework for estimating the covariance matrix Σ, under the assumption that its inverse Θ = Σ−1 is sparse [2, 11, 8]. Θ is called the precision matrix; if an element θjk = 0, this implies that the corresponding variables Xj and Xk are conditionally independent, given the rest. Our algorithms focus either on the restricted version of Θ or its inverse W = Θ−1. The graphical lasso problem minimizes an ℓ1-regularized negative log-likelihood:

\mathrm{minimize}_{\Theta \succ 0} \; f(\Theta) := -\log\det(\Theta) + \mathrm{tr}(S\Theta) + \lambda\|\Theta\|_1.  (1.1)

Here S is the sample covariance matrix, ‖Θ‖1 denotes the sum of the absolute values of Θ, and λ is a tuning parameter controlling the amount of ℓ1 shrinkage. This is a semidefinite programming problem (SDP) in the variable Θ [4].
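For concreteness, the objective in (1.1) is cheap to evaluate. The following minimal numpy sketch (our own helper, not part of any package discussed here) computes f(Θ) and guards against candidates that leave the positive definite cone.

```python
import numpy as np

def graphical_lasso_objective(Theta, S, lam):
    """f(Theta) of (1.1): -log det(Theta) + tr(S Theta) + lam * ||Theta||_1.

    The l1 penalty is applied to all entries, including the diagonal, as written in (1.1).
    """
    sign, logdet = np.linalg.slogdet(Theta)
    if sign <= 0:
        return np.inf  # Theta is outside the positive definite cone
    return -logdet + np.trace(S @ Theta) + lam * np.abs(Theta).sum()
```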

In this paper we revisit the GLASSO algorithm proposed by Friedman, Hastie and Tibshirani [5] for solving (1.1); we analyze its properties, expose problems and issues, and propose alternative algorithms more suitable for the task.

Some of the results and conclusions of this paper can be found in [2], both explicitly and implicitly. We re-derive some of the results and derive new results, insights and algorithms, using a unified and more elementary framework.

Notation

We denote the entries of a matrix An×n by aij. ‖A‖1 denotes the sum of its absolute values, ‖A‖∞ the maximum absolute value of its entries, ‖A‖F is its Frobenius norm, and abs(A) is the matrix with elements |aij|. For a vector u ∈ ℜq, ‖u‖1 denotes the ℓ1 norm, and so on.

From now on, unless otherwise specified, we will assume that λ > 0.

2. Review of the GLASSO algorithm

We use the framework of “normal equations” as in [6, 5]. Using sub-gradient notation, we can write the optimality conditions (aka “normal equations”) for a solution to (1.1) as

-\Theta^{-1} + S + \lambda\Gamma = 0,  (2.1)

where Γ is a matrix of component-wise signs of Θ:

\gamma_{jk} = \mathrm{sign}(\theta_{jk}) \ \text{if } \theta_{jk} \neq 0; \qquad \gamma_{jk} \in [-1, 1] \ \text{if } \theta_{jk} = 0  (2.2)

(we use the notation γjk ∈ Sign(θjk)). Since the global stationary conditions of (2.1) require θjj to be positive, this implies that

w_{ii} = s_{ii} + \lambda, \quad i = 1, \ldots, p,  (2.3)

where W = Θ−1.

GLASSO uses a block-coordinate method for solving (2.1). Consider a partitioning of Θ and Γ:

\Theta = \begin{pmatrix} \Theta_{11} & \theta_{12} \\ \theta_{21} & \theta_{22} \end{pmatrix}, \qquad \Gamma = \begin{pmatrix} \Gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{pmatrix}  (2.4)

where Θ11 is (p − 1) × (p − 1), θ12 is (p − 1) × 1 and θ22 is scalar. W and S are partitioned the same way. Using properties of inverses of block-partitioned matrices, observe that W = Θ−1 can be written in two equivalent forms:

\begin{pmatrix} W_{11} & w_{12} \\ w_{21} & w_{22} \end{pmatrix} = \begin{pmatrix} \left(\Theta_{11} - \frac{\theta_{12}\theta_{21}}{\theta_{22}}\right)^{-1} & -W_{11}\frac{\theta_{12}}{\theta_{22}} \\ \cdot & \frac{1}{\theta_{22}} + \frac{\theta_{21}W_{11}\theta_{12}}{\theta_{22}^{2}} \end{pmatrix}  (2.5)

= \begin{pmatrix} \Theta_{11}^{-1} + \frac{\Theta_{11}^{-1}\theta_{12}\theta_{21}\Theta_{11}^{-1}}{\theta_{22} - \theta_{21}\Theta_{11}^{-1}\theta_{12}} & -\frac{\Theta_{11}^{-1}\theta_{12}}{\theta_{22} - \theta_{21}\Theta_{11}^{-1}\theta_{12}} \\ \cdot & \frac{1}{\theta_{22} - \theta_{21}\Theta_{11}^{-1}\theta_{12}} \end{pmatrix}  (2.6)
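The two partitioned forms are easy to confirm numerically; the short sketch below (purely illustrative) builds a random positive definite Θ and checks that both (2.5) and (2.6) reproduce the blocks of W = Θ^{−1}.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6
B = rng.standard_normal((p, p))
Theta = B @ B.T + p * np.eye(p)                    # a positive definite test matrix
W = np.linalg.inv(Theta)

Theta11, theta12, theta22 = Theta[:-1, :-1], Theta[:-1, -1], Theta[-1, -1]

# (2.5): blocks of W through W11 and the last row/column of Theta
W11 = np.linalg.inv(Theta11 - np.outer(theta12, theta12) / theta22)
w12 = -W11 @ theta12 / theta22
w22 = 1.0 / theta22 + theta12 @ W11 @ theta12 / theta22**2

# (2.6): the same blocks through Theta11^{-1} and the Schur complement
Theta11_inv = np.linalg.inv(Theta11)
d = theta22 - theta12 @ Theta11_inv @ theta12      # equals 1 / w22
w12_alt = -Theta11_inv @ theta12 / d
W11_alt = Theta11_inv + np.outer(Theta11_inv @ theta12, Theta11_inv @ theta12) / d

print(np.allclose(W[:-1, :-1], W11), np.allclose(W[:-1, -1], w12), np.isclose(W[-1, -1], w22))
print(np.allclose(W[:-1, :-1], W11_alt), np.allclose(W[:-1, -1], w12_alt), np.isclose(W[-1, -1], 1.0 / d))
```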

GLASSO solves for a row/column of (2.1) at a time, holding the rest fixed. Considering the pth column of (2.1), we get

-w_{12} + s_{12} + \lambda\gamma_{12} = 0.  (2.7)

Reading off w12 from (2.5) we have

w_{12} = -W_{11}\theta_{12}/\theta_{22}  (2.8)

and plugging into (2.7), we have:

W_{11}\frac{\theta_{12}}{\theta_{22}} + s_{12} + \lambda\gamma_{12} = 0.  (2.9)

GLASSO operates on the above gradient equation, as described below.

As a variation consider reading off w12 from (2.6):

\frac{\Theta_{11}^{-1}\theta_{12}}{\theta_{22} - \theta_{21}\Theta_{11}^{-1}\theta_{12}} + s_{12} + \lambda\gamma_{12} = 0.  (2.10)

The above simplifies to

\Theta_{11}^{-1}\theta_{12}\,w_{22} + s_{12} + \lambda\gamma_{12} = 0,  (2.11)

where w22 = 1/(θ22 − θ21Θ11^{−1}θ12) is fixed (by the global stationarity conditions (2.3)). We will see that these two apparently similar estimating equations (2.9) and (2.11) lead to very different algorithms.

The GLASSO algorithm solves (2.9) for β = θ12/θ22, that is

W_{11}\beta + s_{12} + \lambda\gamma_{12} = 0,  (2.12)

where γ12 ∈ Sign(β), since θ22 > 0. (2.12) is the stationarity equation for the following ℓ1 regularized quadratic program:

\mathrm{minimize}_{\beta \in \Re^{p-1}} \left\{ \tfrac{1}{2}\beta' W_{11}\beta + \beta's_{12} + \lambda\|\beta\|_1 \right\},  (2.13)

where W11 ≻ 0 is assumed to be fixed. This is analogous to a lasso regression problem of the last variable on the rest, except the cross-product matrix S11 is replaced by its current estimate W11. This problem itself can be solved efficiently using elementwise coordinate descent, exploiting the sparsity in β. From β̂, it is easy to obtain ŵ12 from (2.8). Using the lower-right element of (2.5), θ̂22 is obtained by

\frac{1}{\hat{\theta}_{22}} = w_{22} + \hat{\beta}'\hat{w}_{12}.  (2.14)

Finally, θ̂12 can now be recovered from β̂ and θ̂22. Notice, however, that having solved for β and updated w12, GLASSO can move onto the next block; disentangling θ12 and θ22 can be done at the end, when the algorithm over all blocks has converged. The GLASSO algorithm is outlined in Algorithm 1. We show in Lemma 3 in Section 8 that the successive updates in GLASSO keep W positive definite.
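The sketch below spells out one such row/column update in numpy: a cyclical coordinate-descent solver for the lasso problem (2.13), followed by (2.8) and (2.14). The function names and organization are ours, for illustration only; the actual GLASSO code is structured differently (and written in Fortran).

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(W11, s12, lam, beta0=None, n_sweeps=200, tol=1e-8):
    """Cyclical coordinate descent for (2.13):
       minimize_beta 0.5 * beta' W11 beta + beta' s12 + lam * ||beta||_1."""
    q = len(s12)
    beta = np.zeros(q) if beta0 is None else beta0.copy()
    for _ in range(n_sweeps):
        max_delta = 0.0
        for j in range(q):
            r_j = s12[j] + W11[j] @ beta - W11[j, j] * beta[j]   # partial residual, j-th term removed
            new = soft_threshold(-r_j, lam) / W11[j, j]
            max_delta = max(max_delta, abs(new - beta[j]))
            beta[j] = new
        if max_delta < tol:
            break
    return beta

def glasso_block_update(W11, s12, w22, lam, beta0=None):
    """One GLASSO row/column update: solve (2.13), then apply (2.8) and (2.14)."""
    beta = lasso_cd(W11, s12, lam, beta0)
    w12 = -W11 @ beta                        # (2.8), with beta = theta12 / theta22
    theta22 = 1.0 / (w22 + beta @ w12)       # (2.14)
    theta12 = beta * theta22
    return w12, theta12, theta22, beta
```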

Figure 1 (left panel, black curve) plots the objective f(Θ(k)) for the sequence of solutions produced by GLASSO on an example. Surprisingly, the curve is not monotone decreasing, as confirmed by the middle plot. If GLASSO were solving (1.1) by block coordinate-descent, we would not anticipate this behavior.

FIG 1.

[Left panel] The objective values of the primal criterion (1.1) and the dual criterion (4.1) corresponding to the covariance matrix W produced by GLASSO algorithm as a function of the iteration index (each column/row update). [Middle Panel] The successive differences of the primal objective values — the zero crossings indicate non-monotonicity. [Right Panel] The successive differences in the dual objective values — there are no zero crossings, indicating that GLASSO produces a monotone sequence of dual objective values.

A closer look at steps (2.8) and (2.9) of the GLASSO algorithm leads to the following observations:


Algorithm 1 GLASSO algorithm [5]
  1. Initialize W = S + λI.

  2. Cycle around the columns repeatedly, performing the following steps till convergence:
    1. Rearrange the rows/columns so that the target column is last (implicitly).
    2. Solve the lasso problem (2.13), using as warm starts the solution from the previous round for this column.
    3. Update the row/column (off-diagonal) of the covariance using ŵ12 (2.8).
    4. Save β̂ for this column in the matrix B.
  3. Finally, for every row/column, compute the diagonal entries θ̂jj using (2.14), and convert the B matrix to Θ.


  1. We wish to solve (2.7) for θ12. However θ12 is entangled in W11, which is (incorrectly) treated as a constant.

  2. After updating θ12, we see from (2.6) that the entire (working) covariance matrix W changes. GLASSO however updates only w12 and w21.

These two observations explain the non-monotone behavior of GLASSO in minimizing f(Θ). Section 3 shows a corrected block-coordinate descent algorithm for Θ, and Section 4 shows that the GLASSO algorithm is actually optimizing the dual of problem (1.1), with the optimization variable being W.

3. A corrected GLASSO block coordinate-descent algorithm

Recall that (2.11) is a variant of (2.9), where the dependence of the covariance sub-matrix W11 on θ12 is explicit. With α = θ12w22 (where w22 > 0 is fixed) and Θ11 ≻ 0, (2.11) is equivalent to the stationarity condition for


Algorithm 2 P-GLASSO Algorithm
  1. Initialize W = diag(S) + λI, and Θ = W−1.

  2. Cycle around the columns repeatedly, performing the following steps till convergence:
    1. Rearrange the rows/columns so that the target column is last (implicitly).
    2. Compute Θ11^{−1} using (3.3).
    3. Solve (3.1) for α, using as warm starts the solution from the previous round of row/column updates. Update θ̂12 = α̂/w22, and θ̂22 using (3.2).
    4. Update Θ and W using (2.6), ensuring that ΘW = Ip.
  3. Output the solution Θ (precision) and its exact inverse W (covariance).


\mathrm{minimize}_{\alpha \in \Re^{p-1}} \left\{ \tfrac{1}{2}\alpha'\Theta_{11}^{-1}\alpha + \alpha's_{12} + \lambda\|\alpha\|_1 \right\}.  (3.1)

If α̂ is the minimizer of (3.1), then θ̂12 = α̂/w22. To complete the optimization for the entire row/column we need to update θ22. This follows simply from (2.6)

\hat{\theta}_{22} = \frac{1}{w_{22}} + \hat{\theta}_{21}\Theta_{11}^{-1}\hat{\theta}_{12},  (3.2)

with w22 = s22 + λ.

To solve (3.1) we need Θ11^{−1} for each block update. We achieve this by maintaining W = Θ−1 as the iterations proceed. Then for each block

  • we obtain Θ11^{−1} from
    \Theta_{11}^{-1} = W_{11} - w_{12}w_{21}/w_{22};  (3.3)
  • once θ12 is updated, the entire working covariance matrix W is updated (in particular the portions W11 and w12), via the identities in (2.6), using the known Θ11^{−1}.

Both these steps are simple rank-one updates with a total cost of O(p2) operations.
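The sketch below spells out these two steps for the case where the target row/column is the last one (the algorithm cycles through all columns with implicit reordering); the helper names are ours.

```python
import numpy as np

def theta11_inv_from_W(W):
    """(3.3): Theta11^{-1} = W11 - w12 w21 / w22, a rank-one downdate of W11."""
    W11, w12, w22 = W[:-1, :-1], W[:-1, -1], W[-1, -1]
    return W11 - np.outer(w12, w12) / w22

def refresh_W(Theta11_inv, theta12, theta22):
    """Rebuild the working covariance W from the updated last row/column via (2.6)."""
    p = len(theta12) + 1
    d = theta22 - theta12 @ Theta11_inv @ theta12     # Schur complement, equals 1 / w22
    u = Theta11_inv @ theta12
    W = np.empty((p, p))
    W[:-1, :-1] = Theta11_inv + np.outer(u, u) / d
    W[:-1, -1] = -u / d
    W[-1, :-1] = -u / d
    W[-1, -1] = 1.0 / d
    return W
```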

We refer to this as the primal graphical lasso or P-GLASSO, which we present in Algorithm 2.

The P-GLASSO algorithm requires slightly more work than GLASSO, since an additional O(p2) operations have to be performed before and after each block update. In return we have that after every row/column update, Θ and W are positive definite (for λ > 0) and ΘW = Ip.

4. What is GLASSO actually solving?

Building upon the framework developed in Section 2, we now proceed to establish that GLASSO solves the convex dual of problem (1.1), by block coordinate ascent. We reach this conclusion via elementary arguments, closely aligned with the framework developed in Section 2. The approach we present here is intended for an audience without much familiarity with convex duality theory [4].

Figure 1 illustrates that GLASSO is an ascent algorithm on the dual of problem (1.1). The red curve in the left plot shows the dual objective rising monotonically, and the rightmost plot shows that the increments are indeed positive. There is an added twist though: in solving the block-coordinate update, GLASSO solves instead the dual of that subproblem.

4.1. Dual of the ℓ1 regularized log-likelihood

We present below the following lemma, the conclusion of which also appears in [2], but we use the framework developed in Section 2.

Lemma 1

Consider the primal problem (1.1) and its stationarity conditions (2.1). These are equivalent to the stationarity conditions for the box-constrained SDP

\mathrm{maximize}_{\tilde{\Gamma}:\, \|\tilde{\Gamma}\|_\infty \le \lambda} \; g(\tilde{\Gamma}) := \log\det(S + \tilde{\Gamma}) + p  (4.1)

under the transformation S + Γ̃ = Θ^{−1}.

Proof

The (sub)gradient conditions (2.1) can be rewritten as:

-(S + \lambda\Gamma)^{-1} + \Theta = 0  (4.2)

where Γ = sgn(Θ). We write Γ̃ = λΓ and observe that ‖Γ̃‖∞ ≤ λ. Denote by abs(Θ) the matrix with element-wise absolute values.

Hence if (Θ, Γ) satisfy (4.2), the substitutions

\tilde{\Gamma} = \lambda\Gamma; \qquad P = \mathrm{abs}(\Theta)  (4.3)

satisfy the following set of equations:

-(S + \tilde{\Gamma})^{-1} + P * \mathrm{sgn}(\tilde{\Gamma}) = 0; \quad P * (\mathrm{abs}(\tilde{\Gamma}) - \lambda\mathbf{1}_p\mathbf{1}_p') = 0; \quad \|\tilde{\Gamma}\|_\infty \le \lambda.  (4.4)

In the above, P is a symmetric p × p matrix with non-negative entries, 1p1p′ denotes a p × p matrix of ones, and the operator ‘∗’ denotes element-wise product. We observe that (4.4) are the KKT optimality conditions for the box-constrained SDP (4.1). Similarly, the transformations Θ = P ∗ sgn(Γ̃) and Γ = Γ̃/λ show that conditions (4.4) imply condition (4.2). Based on (4.2) the optimal solutions of the two problems (1.1) and (4.1) are related by S + Γ̃ = Θ^{−1}.  □

Notice that for the dual, the optimization variable is Γ̃, with S + Γ̃ = Θ^{−1} = W. In other words, the dual problem solves for W rather than Θ, a fact that is suggested by the GLASSO algorithm.

Remark 1

The equivalence of the solutions to problems (4.1) and (1.1) as described above can also be derived via convex duality theory [4], which shows that (4.1) is a dual function of the ℓ1 regularized negative log-likelihood (1.1). Strong duality holds, hence the optimal solutions of the two problems coincide [2].
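This equivalence can be checked concretely in the one case where the solution of (1.1) is available in closed form: when λ equals the largest absolute off-diagonal entry of S, the primal solution is the diagonal matrix with entries 1/(sii + λ), the corresponding Γ̃ = Θ̂^{−1} − S is feasible for (4.1), and the duality gap is zero. The snippet below (with our own helper names) verifies this numerically.

```python
import numpy as np

def primal_obj(Theta, S, lam):
    return -np.linalg.slogdet(Theta)[1] + np.trace(S @ Theta) + lam * np.abs(Theta).sum()

def dual_obj(Gamma_tilde, S):
    return np.linalg.slogdet(S + Gamma_tilde)[1] + S.shape[0]

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 10))
S = np.cov(X, rowvar=False)
lam = np.max(np.abs(S - np.diag(np.diag(S))))        # largest absolute off-diagonal entry

Theta_hat = np.diag(1.0 / (np.diag(S) + lam))        # primal solution for this lambda
Gamma_tilde = np.diag(np.diag(S) + lam) - S          # = Theta_hat^{-1} - S

print(np.max(np.abs(Gamma_tilde)) <= lam + 1e-12)    # box feasibility for (4.1)
print(np.isclose(primal_obj(Theta_hat, S, lam), dual_obj(Gamma_tilde, S)))  # zero duality gap
```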

We now consider solving (4.4) for the last block γ̃12 (excluding diagonal), holding the rest of Γ̃ fixed. The corresponding equations are

-\theta_{12} + p_{12} * \mathrm{sgn}(\tilde{\gamma}_{12}) = 0; \quad p_{12} * (\mathrm{abs}(\tilde{\gamma}_{12}) - \lambda\mathbf{1}_{p-1}) = 0; \quad \|\tilde{\gamma}_{12}\|_\infty \le \lambda.  (4.5)

The only non-trivial translation is the θ12 in the first equation. We must express this in terms of the optimization variable γ̃12. Since s12 + γ̃12 = w12, using the identities in (2.5), we have W11^{−1}(s12 + γ̃12) = −θ12/θ22. Since θ22 > 0, we can redefine p̃12 = p12/θ22, to get

W_{11}^{-1}(s_{12} + \tilde{\gamma}_{12}) + \tilde{p}_{12} * \mathrm{sgn}(\tilde{\gamma}_{12}) = 0; \quad \tilde{p}_{12} * (\mathrm{abs}(\tilde{\gamma}_{12}) - \lambda\mathbf{1}_{p-1}) = 0; \quad \|\tilde{\gamma}_{12}\|_\infty \le \lambda.  (4.6)

The following lemma shows that a block update of GLASSO solves (4.6) (and hence (4.5)), a block of stationary conditions for the dual of the graphical lasso problem. Curiously, GLASSO does this not directly, but by solving the dual of the QP corresponding to this block of equations.

Lemma 2

Assume W11 ≻ 0. The stationarity equations

W_{11}\hat{\beta} + s_{12} + \lambda\hat{\gamma}_{12} = 0,  (4.7)

where γ̂12 ∈ Sign(β̂), correspond to the solution of the ℓ1-regularized QP:

\mathrm{minimize}_{\beta \in \Re^{p-1}} \; \tfrac{1}{2}\beta' W_{11}\beta + \beta's_{12} + \lambda\|\beta\|_1.  (4.8)

Solving (4.8) is equivalent to solving the following box-constrained QP:

\mathrm{minimize}_{\gamma \in \Re^{p-1}} \; \tfrac{1}{2}(s_{12} + \gamma)' W_{11}^{-1}(s_{12} + \gamma) \quad \text{subject to } \|\gamma\|_\infty \le \lambda,  (4.9)

with stationarity conditions given by (4.6), where β̂ and γ̃12 are related by

\hat{\beta} = -W_{11}^{-1}(s_{12} + \tilde{\gamma}_{12}).  (4.10)
Proof

(4.7) is the KKT optimality condition for the ℓ1 regularized QP (4.8). We rewrite (4.7) as

\hat{\beta} + W_{11}^{-1}(s_{12} + \lambda\hat{\gamma}_{12}) = 0.  (4.11)

Observe that β̂i = sgn(β̂i)|β̂i| for all i, and ‖γ̂12‖∞ ≤ 1. Suppose β̂, γ̂12 satisfy (4.11); then the substitutions

\tilde{\gamma}_{12} = \lambda\hat{\gamma}_{12}, \qquad \tilde{p}_{12} = \mathrm{abs}(\hat{\beta})  (4.12)

in (4.11) satisfy the stationarity conditions (4.6). It turns out that (4.6) is equivalent to the KKT optimality conditions of the box-constrained QP (4.9). Similarly, we note that if γ̃12, p̃12 satisfy (4.6), then the substitution

\hat{\gamma}_{12} = \tilde{\gamma}_{12}/\lambda; \qquad \hat{\beta} = \tilde{p}_{12} * \mathrm{sgn}(\tilde{\gamma}_{12})

satisfies (4.11). Hence β̂ and γ̃12 are related by (4.10).  □

Remark 2

The above result can also be derived via convex duality theory [4], where (4.9) is actually the Lagrange dual of the ℓ1 regularized QP (4.8), with (4.10) denoting the primal-dual relationship. [2, Section 3.3] interpret (4.9) as an ℓ1 penalized regression problem (using convex duality theory) and explore connections with the setup of [8].
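Lemma 2 can be checked directly: solve the box-constrained QP (4.9) by cyclical coordinate descent, map its solution to β̂ via (4.10), and verify the stationarity conditions (4.7). The solver below is a simple illustration with our own naming, not the Fortran code used in the experiments.

```python
import numpy as np

def box_qp_cd(A, s, lam, n_sweeps=500, tol=1e-10):
    """Coordinate descent for: minimize_gamma 0.5 * (s + gamma)' A (s + gamma),  ||gamma||_inf <= lam."""
    gamma = np.zeros(len(s))
    for _ in range(n_sweeps):
        max_delta = 0.0
        for j in range(len(s)):
            u = s + gamma
            off = A[j] @ u - A[j, j] * u[j]           # gradient contribution of the other coordinates
            new = np.clip(-s[j] - off / A[j, j], -lam, lam)
            max_delta = max(max_delta, abs(new - gamma[j]))
            gamma[j] = new
        if max_delta < tol:
            break
    return gamma

rng = np.random.default_rng(1)
q, lam = 8, 0.3
M = rng.standard_normal((q, q))
W11 = M @ M.T + q * np.eye(q)
s12 = rng.standard_normal(q)

gamma = box_qp_cd(np.linalg.inv(W11), s12, lam)       # solve (4.9)
beta = -np.linalg.solve(W11, s12 + gamma)             # (4.10)
resid = W11 @ beta + s12                              # equals -lam * sign(beta) on the support of beta
support = np.abs(beta) > 1e-6
print(np.allclose(resid[support], -lam * np.sign(beta[support]), atol=1e-6))
print(np.all(np.abs(resid) <= lam + 1e-8))            # subgradient bound everywhere
```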

Note that the QP (4.9) is a (partial) optimization over the variable w12 only (since s12 is fixed); the sub-matrix W11 remains fixed in the QP. Exactly one row/column of W changes when the block-coordinate algorithm of GLASSO moves to a new row/column, unlike an explicit full matrix update in W11, which is required if θ12 is updated. This again emphasizes that GLASSO is operating on the covariance matrix instead of Θ. We thus arrive at the following conclusion:

Theorem 4.1

GLASSO performs block-coordinate ascent on the box-constrained SDP (4.1), the Lagrange dual of the primal problem (1.1). Each of the block steps are themselves box-constrained QPs, which GLASSO optimizes via their Lagrange duals.

In our annotation perhaps GLASSO should be called DD-GLASSO, since it performs dual block updates for the dual of the graphical lasso problem. Banerjee, Ghaoui and d’Aspremont [2], the paper that inspired the original GLASSO article [5], also operates on the dual. They however solve the block-updates directly (which are box constrained QPs) using interior-point methods.

5. A new algorithm — DP-GLASSO

In Section 3, we described P-GLASSO, a primal coordinate-descent method. For every row/column we need to solve a lasso problem (3.1), which operates on a quadratic form corresponding to the square matrix Θ11^{−1}. There are two problems with this approach:

  • the matrix Θ11^{−1} needs to be constructed at every row/column update with complexity O(p²);

  • Θ11^{−1} is dense.

We now show how a simple modification of the ℓ1-regularized QP leads to a box-constrained QP with attractive computational properties.

The KKT optimality conditions for (3.1), following (2.11), can be written as:

\Theta_{11}^{-1}\alpha + s_{12} + \lambda\,\mathrm{sgn}(\alpha) = 0.  (5.1)

Algorithm 3 DP-GLASSO algorithm
  1. Initialize Θ = diag(S + λI)−1.

  2. Cycle around the columns repeatedly, performing the following steps till convergence:
    1. Rearrange the rows/columns so that the target column is last (implicitly).
    2. Solve (5.3) for γ̃ and update
      θ̂12 = −Θ11(s12 + γ̃)/w22
    3. Solve for θ22 using (5.5).
    4. Update the working covariance w12 = s12 + γ̃.

Along the same lines of the derivations used in Lemma 2, the condition above is equivalent to

\tilde{q}_{12} * \mathrm{sgn}(\tilde{\gamma}) + \Theta_{11}(s_{12} + \tilde{\gamma}) = 0; \quad \tilde{q}_{12} * (\mathrm{abs}(\tilde{\gamma}) - \lambda\mathbf{1}_{p-1}) = 0; \quad \|\tilde{\gamma}\|_\infty \le \lambda  (5.2)

for some vector q̃12 with non-negative entries. (5.2) are the KKT optimality conditions for the following box-constrained QP:

\mathrm{minimize}_{\gamma \in \Re^{p-1}} \; \tfrac{1}{2}(s_{12} + \gamma)'\Theta_{11}(s_{12} + \gamma) \quad \text{subject to } \|\gamma\|_\infty \le \lambda.  (5.3)

The optimal solutions of (5.3) and (5.1) are related by

\hat{\alpha} = -\Theta_{11}(s_{12} + \tilde{\gamma}),  (5.4)

a consequence of (5.1), with α̂ = θ̂12 · w22 and w22 = s22 + λ. The diagonal θ22 of the precision matrix is updated via (2.6):

\hat{\theta}_{22} = \frac{1 - (s_{12} + \tilde{\gamma})'\hat{\theta}_{12}}{w_{22}}  (5.5)

By strong duality, the box-constrained QP (5.3) with its optimality conditions (5.2) is equivalent to the lasso problem (3.1). Now both of the problems listed at the beginning of the section are removed: the problem matrix Θ11 is sparse, and no O(p²) updating is required after each block.

The solutions returned at step 2(b) for θ̂12 need not be exactly sparse, even though they purport to produce the solution to the primal block problem (3.1), which is sparse. One needs to use a tight convergence criterion when solving (5.3). In addition, one can threshold those elements of θ̂12 for which γ̃ is away from the box boundary, since those values are known to be zero.
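The sketch below puts the pieces of one DP-GLASSO row/column update together under our own naming: solve (5.3) by cyclical coordinate descent, recover θ̂12 and θ̂22 via (5.4) and (5.5), and zero out the coordinates whose γ̃ lies strictly inside the box.

```python
import numpy as np

def box_qp_cd(Q, s, lam, gamma0=None, n_sweeps=500, tol=1e-10):
    """Coordinate descent for (5.3): minimize 0.5 * (s + g)' Q (s + g)  s.t.  ||g||_inf <= lam."""
    gamma = np.zeros(len(s)) if gamma0 is None else gamma0.copy()
    for _ in range(n_sweeps):
        max_delta = 0.0
        for j in range(len(s)):
            u = s + gamma
            off = Q[j] @ u - Q[j, j] * u[j]
            new = np.clip(-s[j] - off / Q[j, j], -lam, lam)
            max_delta = max(max_delta, abs(new - gamma[j]))
            gamma[j] = new
        if max_delta < tol:
            break
    return gamma

def dpglasso_block_update(Theta11, s12, s22, lam, gamma0=None):
    """One DP-GLASSO row/column update (target column last): (5.3), then (5.4) and (5.5)."""
    w22 = s22 + lam                                    # from (2.3)
    gamma = box_qp_cd(Theta11, s12, lam, gamma0)
    theta12 = -Theta11 @ (s12 + gamma) / w22           # alpha_hat / w22, from (5.4)
    theta12[np.abs(gamma) < lam - 1e-12] = 0.0         # interior gamma coordinates give exact zeros
    theta22 = (1.0 - (s12 + gamma) @ theta12) / w22    # (5.5)
    w12 = s12 + gamma                                  # working covariance row/column
    return theta12, theta22, w12, gamma
```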

Note that DP-GLASSO does to the primal formulation (1.1) what GLASSO does to the dual. DP-GLASSO operates on the precision matrix, whereas GLASSO operates on the covariance matrix.

6. Computational costs in solving the block QPs

The ℓ1 regularized QPs appearing in (2.13) and (3.1) are of the generic form

\mathrm{minimize}_{u \in \Re^{q}} \; \tfrac{1}{2}u'Au + a'u + \lambda\|u\|_1,  (6.1)

for A ≻ 0. In this paper, we choose to use cyclical coordinate descent for solving (6.1), as is used in the GLASSO algorithm implementation of Friedman, Hastie and Tibshirani [5]. Moreover, cyclical coordinate descent methods perform well with good warm-starts. These are available for both (2.13) and (3.1), since they both maintain working copies of the precision matrix, updated after every row/column update. There are other efficient ways for solving (6.1), capable of scaling to large problems — for example first-order proximal methods [3, 9], but we do not pursue them in this paper.

The box-constrained QPs appearing in (4.9) and (5.3) are of the generic form:

\mathrm{minimize}_{v \in \Re^{q}} \; \tfrac{1}{2}(v + b)'\tilde{A}(v + b) \quad \text{subject to } \|v\|_\infty \le \lambda  (6.2)

for some Ã ⪰ 0. As in the case above, we will use cyclical coordinate-descent for optimizing (6.2).

In general it is more efficient to solve (6.1) than (6.2) for larger values of λ. This is because a large value of λ in (6.1) results in sparse solutions û; the coordinate descent algorithm can easily detect when a zero stays zero, and no further work gets done for that coordinate on that pass. If the solution to (6.1) has κ non-zeros, then on average κ coordinates need to be updated. This leads to a cost of O(κq) for one full sweep across all the q coordinates.

On the other hand, a large λ for (6.2) corresponds to a weakly-regularized solution. Cyclical coordinate procedures for this task are not as effective. Every coordinate update of v results in updating the gradient, which requires adding a scalar multiple of a column of Ã. If Ã is dense, this leads to a cost of O(q) per coordinate, and for one full cycle across all the coordinates this costs O(q²), rather than the O(κq) for (6.1).

However, our experimental results show that DP-GLASSO is more efficient than GLASSO, so there are some other factors in play. When Ã is sparse, there are computational savings. If Ã has κq non-zeros, the cost of one full cycle reduces on average from O(q²) to O(κq). For the formulation (5.3) Ã is Θ11, which is sparse for large λ. Hence for large λ, GLASSO and DP-GLASSO have similar costs.

For smaller values of λ, the box-constrained QP (6.2) is particularly attractive. Most of the coordinates in the optimal solution v̂ will pile up at the boundary points {−λ, λ}, which means that those coordinates need not be updated frequently. For problem (5.3) this number is also κ, the number of non-zero coefficients in the corresponding column of the precision matrix. If κ of the coordinates pile up at the boundary, then one full sweep of cyclical coordinate descent across all the coordinates will require updating gradients corresponding to the remaining q − κ coordinates. Using similar calculations as before, this will cost O(q(q − κ)) operations per full cycle (since for small λ, Ã will be dense). For the ℓ1 regularized problem (6.1), no such saving is achieved, and the cost is O(q²) per cycle.

Note that to solve problem (1.1), we need to solve a QP of a particular type (6.1) or (6.2) for a certain number of outer cycles (i.e. full sweeps across rows/columns). For every row/column update, the associated QP requires a varying number of iterations to converge. It is hard to characterize all these factors and come up with precise estimates of convergence rates of the overall algorithm. However, we have observed that with warm-starts, on a relatively dense grid of λs, the complexities given above are fairly accurate for DP-GLASSO (with warm-starts), especially when one is interested in solutions of small to moderate accuracy. Our experimental results in Section 9.1 and Appendix Section B support this observation.

We will now have a more critical look at the updates of the GLASSO algorithm and study their properties.

7. GLASSO: Positive definiteness, sparsity and exact inversion

As noted earlier, GLASSO operates on W — it does not explicitly compute the inverse W−1. It does however keep track of the estimates for θ12 after every row/column update. The copy of Θ retained by GLASSO along the row/column updates is not the exact inverse of the optimization variable W. Figure 2 illustrates this by plotting the squared error ‖Θ − W^{−1}‖F² as a function of the iteration index. Only upon (asymptotic) convergence will Θ be equal to W−1. This can have important consequences.

FIG 2.

Figure illustrating some negative properties of GLASSO using a typical numerical example. [Left Panel] The precision matrix produced after every row/column update need not be the exact inverse of the working covariance matrix — the squared Frobenius norm of the error is being plotted across iterations. [Right Panel] The estimated precision matrix Θ produced by GLASSO need not be positive definite along iterations; plot shows minimal eigen-value.

In many real-life problems one only needs an approximate solution to (1.1):

  • for computational reasons it might be impractical to obtain a solution of high accuracy;

  • from a statistical viewpoint it might be sufficient to obtain an approximate solution for Θ that is both sparse and positive definite

It turns out that the GLASSO algorithm is not suited to this purpose.

Since the GLASSO is a block coordinate procedure on the covariance matrix, it maintains a positive definite covariance matrix at every row/column update. However, since the estimated precision matrix is not the exact inverse of W, it need not be positive definite. Although it is relatively straightforward to maintain an exact inverse of W along the row/column updates (via simple rank-one updates as before), this inverse W−1 need not be sparse. Arbitrary thresholding rules may be used to set some of the entries to zero, but that might destroy the positive-definiteness of the matrix. Since a principal motivation of solving (1.1) is to obtain a sparse precision matrix (which is also positive definite), returning a dense W−1 to (1.1) is not desirable.

Figure 2 illustrates the above observations on a typical example.
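The quantities behind Figure 2 are straightforward to monitor for any working pair (Θ, W); a small illustrative helper (ours) is sketched below.

```python
import numpy as np

def block_update_diagnostics(Theta, W):
    """Inversion error, minimum eigenvalue of Theta (negative values mean positive
       definiteness has been lost), and the proportion of exactly zero entries."""
    inversion_error = np.linalg.norm(Theta - np.linalg.inv(W), 'fro') ** 2
    min_eigenvalue = np.linalg.eigvalsh((Theta + Theta.T) / 2).min()
    prop_zero = np.mean(Theta == 0.0)
    return inversion_error, min_eigenvalue, prop_zero
```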

The DP-GLASSO algorithm operates on the primal (1.1). Instead of optimizing the ℓ1 regularized QP (3.1), which requires computing Θ111, DP-GLASSO optimizes (5.3). After every row/column update the precision matrix Θ is positive definite. The working covariance matrix maintained by DP-GLASSO via w12:=s12+γ^ need not be the exact inverse of Θ. Exact covariance matrix estimates, if required, can be obtained by tracking Θ−1 via simple rank-one updates, as described earlier.

Unlike GLASSO, DP-GLASSO (and P-GLASSO) return a sparse and positive definite precision matrix even if the row/column iterations are terminated prematurely.

8. Warm starts and path-seeking strategies

Since we seldom know in advance a good value of λ, we often compute a sequence of solutions to (1.1) for a (typically) decreasing sequence of values λ1 > λ2 > ⋯ > λK. Warm-start or continuation methods use the solution at λi as an initial guess for the solution at λi+1, and often yield great efficiency. It turns out that for algorithms like GLASSO which operate on the dual problem, not all warm-starts necessarily lead to a convergent algorithm. We address this aspect in detail in this section.

The following lemma states the conditions under which the row/column updates of the GLASSO algorithm will maintain positive definiteness of the covariance matrix W.

Lemma 3

Suppose Z is used as a warm-start for the GLASSO algorithm. If Z ≻ 0 and ‖Z − S‖∞ ≤ λ, then every row/column update of GLASSO maintains positive definiteness of the working covariance matrix W.

Proof

Recall that the GLASSO solves the dual (4.1). Assume Z is partitioned as in (2.4), and the pth row/column is being updated. Since Z ≻ 0, we have both

Z_{11} \succ 0 \quad \text{and} \quad \left(z_{22} - z_{21}(Z_{11})^{-1}z_{12}\right) > 0.  (8.1)

Since Z11 remains fixed, it suffices to show that after the row/column update, the expression (ŵ22 − ŵ21(Z11)^{−1}ŵ12) remains positive. Recall that, via standard optimality conditions, we have ŵ22 = s22 + λ, which makes ŵ22 ≥ z22 (since by assumption |z22 − s22| ≤ λ and z22 > 0). Furthermore, ŵ21 = s21 + γ̂, where γ̂ is the optimal solution to the corresponding box-QP (4.9). Since the starting solution z21 satisfies the box-constraint of (4.9), ‖z21 − s21‖∞ ≤ λ, the optimal solution of the QP (4.9) improves the objective:

\hat{w}_{21}(Z_{11})^{-1}\hat{w}_{12} \le z_{21}(Z_{11})^{-1}z_{12}

Combining the above along with the fact that ŵ22 ≥ z22 we see

\hat{w}_{22} - \hat{w}_{21}(Z_{11})^{-1}\hat{w}_{12} > 0,  (8.2)

which implies that the new covariance estimate Ŵ ≻ 0.  □

Remark 3

If the condition ‖Z − S‖∞ ≤ λ appearing in Lemma 3 is violated, then the row/column update of GLASSO need not maintain PD of the covariance matrix W.

We have encountered many counter-examples that show this to be true, see the discussion below.

The R package implementation of GLASSO allows the user to specify a warm-start as a tuple (Θ0, W0). This option is typically used in the construction of a path algorithm.

If (Θ̂λ, Ŵλ) is provided as a warm-start for λ′ < λ, then the GLASSO algorithm is not guaranteed to converge. It is easy to find numerical examples by choosing the gap λ − λ′ to be large enough. Among the various examples we encountered, we briefly describe one here. Details of the experiment/data and other examples can be found in Appendix A.1. We generated a data-matrix Xn×p, with n = 2, p = 5, with iid standard Gaussian entries. S is the sample covariance matrix. We solved problem (1.1) using GLASSO for λ = 0.9 × maxi≠j |sij|. We took the estimated covariance and precision matrices Ŵλ and Θ̂λ as a warm-start for the GLASSO algorithm with λ′ = λ × 0.01. The GLASSO algorithm failed to converge with this warm-start. We note that ‖Ŵλ − S‖∞ = 0.0402 > λ′ (hence violating the sufficient condition in Lemma 3), and after updating the first row/column via the GLASSO algorithm we observed that the “covariance matrix” W has negative eigen-values — leading to a non-convergent algorithm.

The above phenomenon is not surprising and is easy to explain and generalize. If the warm-start fails to satisfy ‖Ŵλ − S‖∞ ≤ λ′, then during the course of the row/column updates the working covariance matrix may lose positive definiteness. In such a case, the block problems (QPs) may not correspond to valid convex programs (due to the lack of positive-definiteness of the quadratic forms). This seems to be the fundamental reason behind the non-convergence of the algorithm. Since Ŵλ solves the dual (4.1), it is necessarily of the form Ŵλ = S + Γ̃, for ‖Γ̃‖∞ ≤ λ. In the light of Lemma 3 and also Remark 3, the warm-start needs to be dual-feasible in order to guarantee that the iterates Ŵ remain PD and hence for the sub-problems to be well-defined convex programs. Clearly Ŵλ need not satisfy the box-constraint ‖Ŵλ − S‖∞ ≤ λ′ for λ′ < λ. However, in practice the GLASSO algorithm is usually seen to converge (numerically) when λ′ is quite close to λ. This is probably because the working covariance matrix remains positive definite and the block QPs are valid convex programs. If the difference between λ′ and λ is large then the algorithm may very likely get into trouble.
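The sufficient condition of Lemma 3 is cheap to check before accepting a warm start; a minimal sketch follows, with a hypothetical fallback to the cold start of Algorithm 1 when the check fails.

```python
import numpy as np

def safe_glasso_warm_start(W0, S, lam):
    """Accept W0 only if it is positive definite and ||W0 - S||_inf <= lam (Lemma 3);
       otherwise fall back to the cold start W = S + lam * I of Algorithm 1."""
    positive_definite = np.all(np.linalg.eigvalsh((W0 + W0.T) / 2) > 0)
    dual_feasible = np.max(np.abs(W0 - S)) <= lam
    if positive_definite and dual_feasible:
        return W0
    return S + lam * np.eye(S.shape[0])
```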

The following lemma establishes that any PD matrix can be taken as a warm-start for P-GLASSO or DP-GLASSO to ensure a convergent algorithm.

Lemma 4

Suppose Φ ≻ 0 is used as a warm-start for the P-GLASSO (or DP-GLASSO) algorithm. Then every row/column update of P-GLASSO (or DP-GLASSO) maintains positive definiteness of the working precision matrix Θ.

Proof

Consider updating the pth row/column of the precision matrix. The condition Φ ≻ 0 is equivalent to both

\Phi_{11} \succ 0 \quad \text{and} \quad \left(\phi_{22} - \phi_{21}(\Phi_{11})^{-1}\phi_{12}\right) > 0.

Note that the block Φ11 remains fixed; only the pth row/column of Θ changes: φ21 gets updated to θ̂21, as does φ12 to θ̂12. From (2.6) the updated diagonal entry θ̂22 satisfies:

\hat{\theta}_{22} - \hat{\theta}_{21}(\Phi_{11})^{-1}\hat{\theta}_{12} = \frac{1}{s_{22} + \lambda} > 0.

Thus the updated matrix Θ^ remains PD. The result for the DP-GLASSO algorithm follows, since both the versions P-GLASSO and DP-GLASSO solve the same block coordinate problem.  □

Remark 4

A simple consequence of Lemmas 3 and 4 is that the QPs arising in the process, namely the ℓ1 regularized QPs (2.13), (3.1) and the box-constrained QPs (4.9) and (5.3), are all valid convex programs, since all the respective matrices W11, Θ11^{−1} and W11^{−1}, Θ11 appearing in the quadratic forms are PD.

As exhibited in Lemma 4, both the algorithms DP-GLASSO and P-GLASSO are guaranteed to converge from any positive-definite warm start. This is due to the unconstrained formulation of the primal problem (1.1).

GLASSO really only requires an initialization for W, since it constructs Θ on the fly. Likewise DP-GLASSO only requires an initialization for Θ. Having the other half of the tuple assists in the block-updating algorithms. For example, GLASSO solves a series of lasso problems, where Θ plays the role of the parameters. By supplying Θ along with W, the block-wise lasso problems can be given starting values close to the solutions. The same applies to DP-GLASSO. In neither case do the pairs have to be inverses of each other to serve this purpose.

If we wish to start with inverse pairs, and maintain such a relationship, we have described earlier how O(p2) updates after each block optimization can achieve this. One caveat for GLASSO is that starting with an inverse pair costs O(p3) operations, since we typically start with W = S + λI. For DP-GLASSO, we typically start with a diagonal matrix, which is trivial to invert.

9. Experimental results & timing comparisons

We compared the performances of algorithms GLASSO and DP-GLASSO (both with and without warm-starts) on different examples with varying (n, p) values. While most of the results are presented in this section, some are relegated to the Appendix B. Section 9.1 describes some synthetic examples and Section 9.2 presents comparisons on a real-life micro-array data-set.

9.1. Synthetic experiments

In this section we present examples generated from two different covariance models — as characterized by the covariance matrix Σ or equivalently the precision matrix Θ. We create a data matrix Xn×p by drawing n independent samples from a p dimensional normal distribution MVN(0, Σ). The sample covariance matrix is taken as the input S to problem (1.1). The two covariance models are described below:

  • Type-1 The population concentration matrix Θ = Σ−1 has uniform sparsity with approximately 77 % of the entries zero.

    We created the covariance matrix as follows. We generated a matrix B with iid standard Gaussian entries, symmetrized it via ½(B + B′) and set approximately 77% of the entries of this matrix to zero, to obtain B̃ (say). We added a scalar multiple of the p-dimensional identity matrix to B̃ to get the precision matrix Θ = B̃ + ηIp×p, with η chosen such that the minimum eigen-value of Θ is one. (A code sketch of both constructions follows this list.)

  • Type-2 This example, taken from [11], is an auto-regressive process of order two — the precision matrix being banded with bandwidth two:
    \theta_{ij} = \begin{cases} 0.5 & \text{if } |j - i| = 1, \; i = 2, \ldots, (p-1); \\ 0.25 & \text{if } |j - i| = 2, \; i = 3, \ldots, (p-2); \\ 1 & \text{if } i = j, \; i = 1, \ldots, p; \\ 0 & \text{otherwise.} \end{cases}
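A sketch of how the two precision models can be generated; the symmetric random sparsification pattern used for Type-1 is our reading of the description above and should be treated as an assumption.

```python
import numpy as np

def type1_precision(p, prop_zero=0.77, seed=0):
    """Type-1: symmetrized Gaussian matrix, sparsified, shifted so the minimum eigenvalue is one.
       The symmetric random zeroing pattern is an assumption on our part."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((p, p))
    B = (B + B.T) / 2.0
    mask = rng.random((p, p)) < prop_zero
    mask = np.triu(mask) | np.triu(mask, 1).T          # keep the zero pattern symmetric
    B_tilde = np.where(mask, 0.0, B)
    eta = 1.0 - np.linalg.eigvalsh(B_tilde).min()
    return B_tilde + eta * np.eye(p)

def type2_precision(p):
    """Type-2: the banded AR(2) precision matrix of Section 9.1."""
    Theta = np.eye(p)
    for i in range(p - 1):
        Theta[i, i + 1] = Theta[i + 1, i] = 0.5
    for i in range(p - 2):
        Theta[i, i + 2] = Theta[i + 2, i] = 0.25
    return Theta
```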

For each of the two set-ups Type-1 and Type-2 we consider twelve different combinations of (n, p):

  1. p = 1000, n ∈ {1500, 1000, 500}.

  2. p = 800, n ∈ {1000, 800, 500}.

  3. p = 500, n ∈ {800, 500, 200}.

  4. p = 200, n ∈ {500, 200, 50}.

For every (n, p) we solved (1.1) on a grid of twenty λ values linearly spaced on the log-scale, with λi = 0.8^i × (0.9 λmax), i = 1, …, 20, where λmax = maxi≠j |sij| is the off-diagonal entry of S with largest absolute value. λmax is the smallest value of λ for which the solution to (1.1) is a diagonal matrix.
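A minimal sketch of this grid construction (helper name ours):

```python
import numpy as np

def lambda_grid(S, n_values=20, frac=0.9, ratio=0.8):
    """lambda_i = ratio**i * (frac * lambda_max), i = 1, ..., n_values, with lambda_max
       the largest absolute off-diagonal entry of S (the grid used in Section 9.1)."""
    lam_max = np.max(np.abs(S - np.diag(np.diag(S))))
    return np.array([ratio ** i * frac * lam_max for i in range(1, n_values + 1)])
```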

Since this article focuses on the GLASSO algorithm, its properties and alternatives that stem from the main idea of block-coordinate optimization, we present here the performances of the following algorithms:

  • Dual-Cold GLASSO with initialization W = S + λIp×p, as suggested in [5].

  • Dual-Warm The path-wise version of GLASSO with warm-starts, as suggested in [5]. Although this path-wise version need not converge in general, this was not a problem in our experiments, probably due to the fine-grid of λ values.

  • Primal-Cold DP-GLASSO with diagonal initialization Θ = (diag(S) + λI)−1.

  • Primal-Warm The path-wise version of DP-GLASSO with warm-starts.

We did not include P-GLASSO in the comparisons above since P-GLASSO requires additional matrix rank-one updates after every row/column update, which makes it more expensive. None of the above listed algorithms require matrix inversions (via rank-one updates). Furthermore, DP-GLASSO and P-GLASSO are quite similar, as both perform a block coordinate optimization on the primal (1.1). Hence we only included DP-GLASSO in our comparisons. We used our own implementations of the GLASSO and DP-GLASSO algorithms in R. The entire program is written in R, except the inner block-update solvers, which are the real work-horses:

  • For GLASSO we used the lasso code crossProdLasso written in FORTRAN by [5];

  • For DP-GLASSO we wrote our own FORTRAN code to solve the box QP.

An R package dpglasso that implements DP-GLASSO is available on CRAN.

In the figure and tables that follow, for every algorithm, at a fixed λ we report the total time taken by all the QPs (the ℓ1 regularized QP for GLASSO and the box-constrained QP for DP-GLASSO) till convergence. All computations were done on a Linux machine with model specs: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz.

Convergence Criterion

Since DP-GLASSO operates on the primal formulation and GLASSO operates on the dual, to make the convergence criteria comparable across examples we based them on the relative change in the primal objective value f(Θ) (1.1) across two successive iterations:

\frac{|f(\Theta_k) - f(\Theta_{k-1})|}{|f(\Theta_{k-1})|} \le \mathrm{TOL},  (9.1)

where one iteration refers to a full sweep across p rows/columns of the precision matrix (for DP-GLASSO) and covariance matrix (for GLASSO); and TOL denotes the tolerance level or level of accuracy of the solution. To compute the primal objective value for the GLASSO algorithm, the precision matrix is computed from W^ via direct inversion (the time taken for inversion and objective value computation is not included in the timing comparisons).

Computing the objective function is quite expensive relative to the computational cost of the iterations. In our experience, convergence criteria based on a relative change in the precision matrix for DP-GLASSO and the covariance matrix for GLASSO seemed to be a practical choice for the examples we considered. However, for the reasons described above, we used criterion (9.1) in the experiments.
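For completeness, criterion (9.1) as a tiny helper (ours), where the two arguments are the primal objective values after successive full sweeps:

```python
def relative_change_converged(f_curr, f_prev, tol=1e-4):
    """Convergence criterion (9.1): relative change of the primal objective (1.1)."""
    return abs(f_curr - f_prev) / abs(f_prev) <= tol
```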

Observations

Figure 3 presents the times taken by the algorithms to converge to an accuracy of TOL = 10−4 on a grid of λ values.

FIG 3.

The timings in seconds for the four different algorithmic versions: GLASSO (with and without warm-starts) and DP-GLASSO (with and without warm-starts) for a grid of λ values on the log-scale. [Left Panel] Covariance model for Type-1, [Right Panel] Covariance model for Type-2. The horizontal axis is indexed by the proportion of zeros in the solution. The vertical dashed lines correspond to the optimal λ values for which the estimated errors ‖Θ̂λ − Θ‖1 (green) and ‖Θ̂λ − Θ‖F (blue) are minimum.

The figure shows eight different scenarios with p > n, corresponding to the two different covariance models Type-1 (left panel) and Type-2 (right panel). It is quite evident that DP-GLASSO with warm-starts (Primal-Warm) outperforms all the other algorithms across all the different examples. All the algorithms converge quickly for large values of λ (typically high sparsity) and become slower with decreasing λ. For large p and small λ, convergence is slow; however for p > n, the non-sparse end of the regularization path is really not that interesting from a statistical viewpoint. Warm-starts apparently do not always help in speeding up the convergence of GLASSO; for example see Figure 3 with (n, p) = (500, 1000) (Type 1) and (n, p) = (500, 800) (Type 2). This probably further validates the fact that warm-starts in the case of GLASSO need to be carefully designed in order for them to speed up convergence. Note, however, that GLASSO with the prescribed warm-starts is not even guaranteed to converge; we did not come across any such instance among the experiments presented in this section.

Based on the suggestion of a referee we annotated the plots in Figure 3 with locations in the regularization path that are of interest. For each plot, two vertical dotted lines are drawn which correspond to the λs at which the distance of the estimated precision matrix Θ̂λ from the population precision matrix is minimized with respect to the ‖·‖1 norm (green) and the ‖·‖F norm (blue). The optimal λ corresponding to the ‖·‖1 metric chooses sparser models than those chosen by ‖·‖F; the performance gains achieved by DP-GLASSO seem to be more prominent for the latter λ.

Table 1 presents the timings for all the four algorithmic variants on the twelve different (n, p) combinations listed above for Type 1. For every example, we report the total time till convergence on a grid of twenty λ values for two different tolerance levels: TOL ∈ {10−4, 10−5}. Note that the DP-GLASSO returns positive definite and sparse precision matrices even if the algorithm is terminated at a relatively small/moderate accuracy level — this is not the case in GLASSO. The rightmost column presents the proportion of non-zeros averaged across the entire path of solutions Θ^λ, where Θ^λ is obtained by solving (1.1) to a high precision i.e. 10−6, by algorithms GLASSO and DP-GLASSO and averaging the results.

TABLE 1.

Table showing the performances of the four algorithms GLASSO (Dual-Warm/Cold) and DP-GLASSO (Primal-Warm/Cold) for the covariance model Type-1. We present the times (in seconds) required to compute a path of solutions to (1.1) (on a grid of twenty λ values) for different (n,p) combinations and relative errors (as in (9.1)). The rightmost column gives the averaged sparsity level across the grid of λ values. DP-GLASSO with warm-starts is consistently the winner across all the examples

| p / n | relative error (TOL) | Dual-Cold | Dual-Warm | Primal-Cold | Primal-Warm | Average % zeros in path |
|---|---|---|---|---|---|---|
| 1000 / 500 | 10−4 | 3550.71 | 6592.63 | 2558.83 | 2005.25 | 80.2 |
| | 10−5 | 4706.22 | 8835.59 | 3234.97 | 2832.15 | |
| 1000 / 1000 | 10−4 | 2788.30 | 3158.71 | 2206.95 | 1347.05 | 83.0 |
| | 10−5 | 3597.21 | 4232.92 | 2710.34 | 1865.57 | |
| 1000 / 1500 | 10−4 | 2447.19 | 4505.02 | 1813.61 | 932.34 | 85.6 |
| | 10−5 | 2764.23 | 6426.49 | 2199.53 | 1382.64 | |
| 800 / 500 | 10−4 | 1216.30 | 2284.56 | 928.37 | 541.66 | 78.8 |
| | 10−5 | 1776.72 | 3010.15 | 1173.76 | 798.93 | |
| 800 / 800 | 10−4 | 1135.73 | 1049.16 | 788.12 | 438.46 | 80.0 |
| | 10−5 | 1481.36 | 1397.25 | 986.19 | 614.98 | |
| 800 / 1000 | 10−4 | 1129.01 | 1146.63 | 786.02 | 453.06 | 80.2 |
| | 10−5 | 1430.77 | 1618.41 | 992.13 | 642.90 | |
| 500 / 200 | 10−4 | 605.45 | 559.14 | 395.11 | 191.88 | 75.9 |
| | 10−5 | 811.58 | 795.43 | 520.98 | 282.65 | |
| 500 / 500 | 10−4 | 427.85 | 241.90 | 252.83 | 123.35 | 75.2 |
| | 10−5 | 551.11 | 315.86 | 319.89 | 182.81 | |
| 500 / 800 | 10−4 | 359.78 | 279.67 | 207.28 | 111.92 | 80.9 |
| | 10−5 | 416.87 | 402.61 | 257.06 | 157.13 | |
| 200 / 50 | 10−4 | 65.87 | 50.99 | 37.40 | 23.32 | 75.6 |
| | 10−5 | 92.04 | 75.06 | 45.88 | 35.81 | |
| 200 / 200 | 10−4 | 35.29 | 25.70 | 17.32 | 11.72 | 66.8 |
| | 10−5 | 45.90 | 33.23 | 22.41 | 17.16 | |
| 200 / 300 | 10−4 | 32.29 | 23.60 | 16.30 | 10.77 | 66.0 |
| | 10−5 | 38.37 | 33.95 | 20.12 | 15.12 | |

Again we see that in all the examples DP-GLASSO with warm-starts is the clear winner among its competitors. For a fixed p, the total time to trace out the path generally decreases with increasing n. There is no clear winner between GLASSO with warm-starts and GLASSO without warm-starts. It is often seen that DP-GLASSO without warm-starts converges faster than both the variants of GLASSO (with and without warm-starts).

Table 2 reports the timing comparisons for Type 2. Once again we see that in all the examples Primal-Warm turns out to be the clear winner.

TABLE 2.

Table showing comparative timings of the four algorithmic variants of glasso and DP-GLASSO for the covariance model in Type-2. This table is similar to Table 1, displaying results for Type-1. DP-GLASSO with warm-starts consistently outperforms all its competitors

| p / n | relative error (TOL) | Dual-Cold | Dual-Warm | Primal-Cold | Primal-Warm | Average % zeros in path |
|---|---|---|---|---|---|---|
| 1000 / 500 | 10−4 | 6093.11 | 5483.03 | 3495.67 | 1661.93 | 75.6 |
| | 10−5 | 7707.24 | 7923.80 | 4401.28 | 2358.08 | |
| 1000 / 1000 | 10−4 | 4773.98 | 3582.28 | 2697.38 | 1015.84 | 76.70 |
| | 10−5 | 6054.21 | 4714.80 | 3444.79 | 1593.54 | |
| 1000 / 1500 | 10−4 | 4786.28 | 5175.16 | 2693.39 | 1062.06 | 78.5 |
| | 10−5 | 6171.01 | 6958.29 | 3432.33 | 1679.16 | |
| 800 / 500 | 10−4 | 2914.63 | 3466.49 | 1685.41 | 1293.18 | 74.3 |
| | 10−5 | 3674.73 | 4572.97 | 2083.20 | 1893.22 | |
| 800 / 800 | 10−4 | 2021.55 | 1995.90 | 1131.35 | 618.06 | 74.4 |
| | 10−5 | 2521.06 | 2639.62 | 1415.95 | 922.93 | |
| 800 / 1000 | 10−4 | 3674.36 | 2551.06 | 1834.86 | 885.79 | 75.9 |
| | 10−5 | 4599.59 | 3353.78 | 2260.58 | 1353.28 | |
| 500 / 200 | 10−4 | 1200.24 | 885.76 | 718.75 | 291.61 | 70.5 |
| | 10−5 | 1574.62 | 1219.12 | 876.45 | 408.41 | |
| 500 / 500 | 10−4 | 575.53 | 386.20 | 323.30 | 130.59 | 72.2 |
| | 10−5 | 730.54 | 535.58 | 421.91 | 193.08 | |
| 500 / 800 | 10−4 | 666.75 | 474.12 | 373.60 | 115.75 | 73.7 |
| | 10−5 | 852.54 | 659.58 | 485.47 | 185.60 | |
| 200 / 50 | 10−4 | 110.18 | 98.23 | 48.98 | 26.97 | 73.0 |
| | 10−5 | 142.77 | 133.67 | 55.27 | 33.95 | |
| 200 / 200 | 10−4 | 50.63 | 40.68 | 23.94 | 9.97 | 63.7 |
| | 10−5 | 66.63 | 56.71 | 31.57 | 14.70 | |
| 200 / 300 | 10−4 | 47.63 | 36.18 | 21.24 | 8.19 | 65.0 |
| | 10−5 | 60.98 | 50.52 | 27.41 | 12.22 | |

For n ≤ p = 1000, we observe that Primal-Warm is generally faster for Type-2 than for Type-1. This, however, is reversed for smaller values of p ∈ {800, 500}. Primal-Cold has a smaller overall computation time for Type-1 than for Type-2. In some cases (for example n ≤ p = 1000), we see that Primal-Warm in Type-2 converges much faster than its competitors on a relative scale than in Type-1 — this difference is due to the variations in the structure of the covariance matrix.

9.2. Micro-array example

We consider the data-set introduced in [1] and further studied in [10, 7]. In this experiment, tissue samples were analyzed using an Affymetrix Oligonucleotide array. The data was processed, filtered and reduced to a subset of 2000 gene expression values. The number of Colon Adenocarcinoma tissue samples is n = 62. For the purpose of the experiments presented in this section, we pre-screened the genes to a size of p = 725. We obtained this subset of genes using the idea of exact covariance thresholding introduced in our paper [7]. We thresholded the sample correlation matrix obtained from the 62 × 2000 microarray data-matrix into connected components with a threshold of 0.003641 — the genes belonging to the largest connected component formed our pre-screened gene pool of size p = 725. This (subset) data-matrix of size (n, p) = (62, 725) is used for our experiments.

The results presented below in Table 3 show timing comparisons of the four different algorithms: Primal-Warm/Cold and Dual-Warm/Cold on a grid of fifteen λ values in the log-scale. Once again we see that the Primal-Warm outperforms the others in terms of speed and accuracy. Dual-Warm performs quite well in this example.

TABLE 3.

Comparisons among algorithms for a microarray dataset with n = 62 and p = 725, for different tolerance levels (TOL). We took a grid of fifteen λ values, the average % of zeros along the whole path is 90.8

| relative error (TOL) | Dual-Cold | Dual-Warm | Primal-Cold | Primal-Warm |
|---|---|---|---|---|
| 10−3 | 515.15 | 406.57 | 462.58 | 334.56 |
| 10−4 | 976.16 | 677.76 | 709.83 | 521.44 |

10. Conclusions

This paper explores some of the apparent mysteries in the behavior of the GLASSO algorithm introduced in [5]. These have been explained by leveraging the fact that the GLASSO algorithm is solving the dual of the graphical lasso problem (1.1), by block coordinate ascent. Each block update, itself the solution to a convex program, is solved via its own dual, which is equivalent to a lasso problem. The optimization variable is W, the covariance matrix, rather than the target precision matrix Θ. During the course of the iterations, a working version of Θ is maintained, but it may not be positive definite, and its inverse is not W. Tight convergence is therefore essential for the solution Θ̂ to be a proper inverse covariance. There are issues using warm starts with GLASSO when computing a path of solutions: unless successive values of λ are sufficiently close, the warm starts need not be dual feasible, and the algorithm can get into trouble.

We have also developed two primal algorithms P-GLASSO and DP-GLASSO. The former is more expensive, since it maintains the relationship W = Θ−1 at every step, an O(p3) operation per sweep across all row/columns. DP-GLASSO is similar in flavor to GLASSO except its optimization variable is Θ. It also solves the dual problem when computing its block update, in this case a box-QP. This box-QP has attractive sparsity properties at both ends of the regularization path, as evidenced in some of our experiments. It maintains a positive definite Θ throughout its iterations, and can be started at any positive definite matrix. Our experiments show in addition that DP-GLASSO is faster than GLASSO.

An R package dpglasso that implements DP-GLASSO is available on CRAN.

Acknowledgments

We would like to thank Robert Tibshirani and his research group at Stanford Statistics for helpful discussions. We are also thankful to the anonymous referees whose comments led to improvements in this presentation.

Appendix A: Additional numerical illustrations and examples

This section complements the examples provided in the paper with further experiments and illustrations.

A.1. Examples: Non-convergence of GLASSO with warm-starts

This section illustrates with examples that warm-starts for the GLASSO need not converge. This is a continuation of examples presented in Section 8.

Example 1

We took (n, p) = (2, 5) and setting the seed of the random number generator in R as set.seed(2008) we generated a data-matrix Xn×p with iid standard Gaussian entries. The sample covariance matrix S is given below:

0.03597652  0.03792221  0.1058585   0.08360659  0.1366725
0.03597652  0.03792221  0.1058585   0.08360659  0.1366725
0.10585853  0.11158361  0.3114814   0.24600689  0.4021497
0.08360659  0.08812823  0.2460069   0.19429514  0.3176160
0.13667246  0.14406402  0.4021497   0.31761603  0.5192098

With q denoting the maximum off-diagonal entry of S (in absolute value), we solved (1.1) using GLASSO at λ = 0.9 × q. The covariance matrix for this λ was taken as a warm-start for the GLASSO algorithm with λ′ = λ × 0.01. The smallest eigen-value of the working covariance matrix W produced by the GLASSO algorithm, upon updating the first row/column was: −0.002896128, which is clearly undesirable for the convergence of the algorithm GLASSO. This is why the algorithm GLASSO breaks down.

Example 2

The example is similar to above, with (n, p) = (10, 50), the seed of random number generator in R being set to set.seed(2008) and Xn×p is the data-matrix with iid Gaussian entries. If the covariance matrix W^λ which solves problem (1.1) with λ = 0.9 × maxij |sij| is taken as a warm-start to the GLASSO algorithm with λ′ = λ×0.1 — the algorithm fails to converge. Like the previous example, after the first row/column update, the working covariance matrix has negative eigen-values.

Appendix B: More examples and comparisons

This section is a continuation of Section 9, in that it provides further examples comparing the performance of algorithms GLASSO and DP-GLASSO. The experimental data is generated as follows. For a fixed value of p, we generate a matrix Ap×p with random Gaussian entries. The matrix is symmetrized by A ← (A + A′)/2. Approximately half of the off-diagonal entries of the matrix are set to zero, uniformly at random. All the eigen-values of the matrix A are lifted so that the smallest eigen-value is zero. The noiseless version of the precision matrix is given by Θ = A + τIp×p. We generated the sample covariance matrix S by adding symmetric positive semi-definite random noise N to Θ−1; i.e. S = Θ−1 + N, where this noise is generated in the same manner as A. We considered four different values of p ∈ {300, 500, 800, 1000} and two different values of τ ∈ {1, 4}.

For every p, τ combination we considered a path of twenty λ values on the geometric scale. For every such case four experiments were performed: Primal-Cold, Primal-Warm, Dual-Cold and Dual-Warm (as described in Section 9). Each combination was run 5 times, and the results averaged, to avoid dependencies on machine loads. Figure 4 shows the results. Overall, DP-GLASSO with warm starts performs the best, especially at the extremes of the path. We gave some explanation for this in Section 6. For the largest problems (p = 1000) their performances are comparable in the central part of the path (though DP-GLASSO dominates), but at the extremes DP-GLASSO dominates by a large margin.

FIG 4.

The timings in seconds for the four different algorithmic versions GLASSO (with and without warm-starts) and DP-GLASSO (with and without warm-starts) for a grid of twenty λ values on the log-scale. The horizontal axis is indexed by the proportion of zeros in the solution.

Footnotes

AMS 2000 subject classifications: Primary 62H99, 62-09; secondary 62-04.

1. This is the largest value of the threshold for which the size of the largest connected component is smaller than 800.

Contributor Information

Rahul Mazumder, Email: rahul.mazumder@gmail.com, Massachusetts Institute of Technology, Cambridge, MA 02139; Department of Statistics Stanford, University Stanford, CA 94305.

Trevor Hastie, Email: hastie@stanford.edu, Departments of Statistics and Health Research and Policy, Stanford University, Stanford, CA 94305.

References

  • 1. Alon U, Barkai N, Notterman DA, Gish K, Ybarra S, Mack D, Levine AJ. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences of the United States of America. 1999;96:6745–6750. doi: 10.1073/pnas.96.12.6745.
  • 2. Banerjee O, Ghaoui LE, d’Aspremont A. Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data. Journal of Machine Learning Research. 2008;9:485–516. MR2417243.
  • 3. Beck A, Teboulle M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences. 2009;2:183–202. MR2486527.
  • 4. Boyd S, Vandenberghe L. Convex Optimization. Cambridge University Press; 2004. MR2061575.
  • 5. Friedman J, Hastie T, Tibshirani R. Sparse inverse covariance estimation with the graphical lasso. Biostatistics. 2007;9:432–441. doi: 10.1093/biostatistics/kxm045.
  • 6. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. Springer; New York: 2009. MR2722294.
  • 7. Mazumder R, Hastie T. Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso. Journal of Machine Learning Research. 2012;13:781–794. MR2913718.
  • 8. Meinshausen N, Bühlmann P. High-dimensional graphs and variable selection with the lasso. Annals of Statistics. 2006;34:1436–1462. MR2278363.
  • 9. Nesterov Y. Gradient methods for minimizing composite objective function. Technical Report 76, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain; 2007.
  • 10. Rothman AJ, Bickel PJ, Levina E, Zhu J. Sparse Permutation Invariant Covariance Estimation. Electronic Journal of Statistics. 2008;2:494–515. MR2417391.
  • 11. Yuan M, Lin Y. Model selection and estimation in the Gaussian graphical model. Biometrika. 2007;94:19–35. MR2367824.
