Author manuscript; available in PMC: 2017 May 2.
Published in final edited form as: Electron J Stat. 2016 May 31;10(1):1341–1392. doi: 10.1214/16-EJS1137

Joint Estimation of Precision Matrices in Heterogeneous Populations

Takumi Saegusa 1, Ali Shojaie 2
PMCID: PMC5412991  NIHMSID: NIHMS844108  PMID: 28473876

Abstract

We introduce a general framework for estimation of inverse covariance, or precision, matrices from heterogeneous populations. The proposed framework uses a Laplacian shrinkage penalty to encourage similarity among estimates from disparate, but related, subpopulations, while allowing for differences among matrices. We propose an efficient alternating direction method of multipliers (ADMM) algorithm for parameter estimation, as well as its extension for faster computation in high dimensions by thresholding the empirical covariance matrix to identify the joint block diagonal structure in the estimated precision matrices. We establish both variable selection and norm consistency of the proposed estimator for distributions with exponential or polynomial tails. Further, to extend the applicability of the method to settings with unknown population structure, we propose a Laplacian penalty based on hierarchical clustering, and discuss conditions under which this data-driven choice results in consistent estimation of precision matrices in heterogeneous populations. Extensive numerical studies and applications to gene expression data from subtypes of cancer with distinct clinical outcomes indicate the potential advantages of the proposed method over existing approaches.

Keywords and phrases: Graph Laplacian, graphical modeling, heterogeneous populations, hierarchical clustering, high-dimensional estimation, precision matrix, sparsity

1. Introduction

Estimation of large inverse covariance, or precision, matrices has received considerable attention in recent years. This interest is in part driven by the advent of high-dimensional data in many scientific areas, including high throughput omics measurements, functional magnetic resonance images (fMRI), and applications in finance and industry. Applications of various statistical methods in such settings require an estimate of the (inverse) covariance matrix. Examples include dimension reduction using principal component analysis (PCA), classification using linear or quadratic discriminant analysis (LDA/QDA), and discovering conditional independence relations in Gaussian graphical models (GGM).

In high-dimensional settings, where the data dimension p is often comparable to or larger than the sample size n, regularized estimation procedures often result in more reliable estimates. Of particular interest is the use of sparsity inducing penalties, specifically the ℓ1 or lasso penalty [30], which encourages sparsity in off-diagonal elements of the precision matrix [7, 8, 33, 34]. Theoretical properties of ℓ1-penalized precision matrix estimation have been studied under both multivariate normality, as well as some relaxations of this assumption [4, 19, 25, 26].

Sparse estimation is particularly relevant in the setting of GGMs, where conditional independencies among variables correspond to zero off-diagonal elements of the precision matrix [14]. The majority of existing approaches for estimation of high-dimensional precision matrices, including those cited in the previous paragraph, assume that the observations are identically distributed, and correspond to a single population. However, data sets in many application areas include observations from several distinct subpopulations. For instance, gene expression measurements are often collected for both healthy subjects, as well as patients diagnosed with different subtypes of cancer. Despite increasing evidence for differences among genetic networks of cancer and healthy subjects [11, 27], the networks are also expected to share many common edges. Separate estimation of graphical models for each of the subpopulations would ignore the common structure of the precision matrices, and may thus be inefficient; this inefficiency can be particularly significant in high-dimensional low sample settings, where p ≫ n.

To address the need for estimation of graphical models in related subpopulations, a few methods have recently been proposed for joint estimation of K precision matrices Ω^{(k)} = (ω_{ij}^{(k)})_{i,j=1}^p ∈ ℝ^{p×p}, k = 1, …, K [6, 9]. These methods extend the penalized maximum likelihood approach by combining the Gaussian likelihoods for the K subpopulations

$$\ell_n(\Omega) = \frac{1}{n}\sum_{k=1}^K n_k\Big(\log\det\big(\Omega^{(k)}\big) - \operatorname{tr}\big(\hat\Sigma_n^{(k)}\Omega^{(k)}\big)\Big). \qquad (1)$$

Here, n_k and Σ̂_n^{(k)} are the number of observations and the sample covariance matrix for the kth subpopulation, respectively, n = ∑_{k=1}^K n_k is the total sample size, and tr(·) and det(·) denote the matrix trace and determinant.
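For concreteness, the pooled objective in (1) can be evaluated directly from the per-subpopulation sample covariance matrices; the following minimal sketch (with illustrative function and variable names, not from the paper) does exactly that.

```python
import numpy as np

def joint_log_likelihood(Omega_list, Sigma_hat_list, n_list):
    """Evaluate (1): (1/n) * sum_k n_k * (log det Omega^(k) - tr(Sigma_hat^(k) Omega^(k)))."""
    n = float(sum(n_list))
    total = 0.0
    for Omega, Sigma_hat, n_k in zip(Omega_list, Sigma_hat_list, n_list):
        sign, logdet = np.linalg.slogdet(Omega)
        if sign <= 0:                      # Omega^(k) must be positive definite
            return -np.inf
        total += n_k * (logdet - np.trace(Sigma_hat @ Omega))
    return total / n

# Toy usage: K = 2 subpopulations, p = 4 variables, identity candidate precision matrices.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(60, 4)), rng.normal(size=(40, 4))
Sigma_hats = [np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)]
print(joint_log_likelihood([np.eye(4), np.eye(4)], Sigma_hats, [60, 40]))
```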

To encourage similarity among estimated precision matrices, Guo et al. [9] modeled the (i, j)-element of Ω^{(k)} as the product of a common factor δ_{ij} and group-specific parameters γ_{ij}^{(k)}, i.e., ω_{ij}^{(k)} = δ_{ij}γ_{ij}^{(k)}. Identifiability of the estimates is ensured by assuming δ_{ij} ≥ 0. A zero common factor δ_{ij} = 0 induces sparsity across all subpopulations, whereas γ_{ij}^{(k)} = 0 results in condition-specific sparsity for ω_{ij}^{(k)}. This reparametrization results in a non-convex optimization problem based on the Gaussian likelihood with ℓ1-penalties ∑_{i≠j} δ_{ij} and ∑_{i≠j}∑_{k=1}^K |γ_{ij}^{(k)}|. Danaher et al. [6] proposed two alternative estimators by adding an additional convex penalty to the graphical lasso objective function: either a fused lasso penalty ∑_{i≠j}∑_{k<k′} |ω_{ij}^{(k)} − ω_{ij}^{(k′)}| (FGL), or a group lasso penalty ∑_{i≠j}{∑_{k=1}^K (ω_{ij}^{(k)})^2}^{1/2} (GGL). The fused lasso penalty has also been used by Kolar et al. [13] for joint estimation of multiple graphical models over multiple time points. The fused lasso penalty strongly encourages the values of ω_{ij}^{(k)} to be similar across all subpopulations, both in values as well as sparsity patterns. On the other hand, the group lasso penalty results in similar estimates by shrinking all ω_{ij}^{(k)} across subpopulations to zero if {∑_{k=1}^K (ω_{ij}^{(k)})^2}^{1/2} is small.

Despite their differences, the methods of Guo et al. [9] and Danaher et al. [6] inherently assume that precision matrices in the K subpopulations are equally similar to each other, in that they encourage ω_{ij}^{(k)} and ω_{ij}^{(k′)} to be equally similar for every pair of subpopulations k and k′. However, when K > 2, some subpopulations are expected to be more similar to each other than others. For instance, it is expected that genetic networks of two subtypes of cancer be more similar to each other than to the network of normal cells. Similarly, differences among genetic networks of various strains of a virus or bacterium are expected to correspond to the evolutionary lineages of their phylogenetic trees. Unfortunately, existing methods for joint estimation of multiple graphical models ignore this heterogeneity in multiple subpopulations. Furthermore, existing methods assume subpopulation memberships are known, which limits their applicability in settings with complex but unknown population structures; an important example is estimation of genetic networks of cancer cells with unknown subtypes.

In this paper, we propose a general framework for joint estimation of multiple precision matrices by capturing the heterogeneity among subpopulations. In this framework, similarities among disparate subpopulations are represented using a subpopulation network G(V, E, W), a weighted graph whose node set V is the set of subpopulations. The edges in E and the weights W_{kk′} for (k, k′) ∈ E represent the degree of similarity between any two subpopulations k, k′. In the special case where W_{kk′} = 1 for all k, k′, the subpopulation similarities are only captured by the structure of the graph G. An example of such a subpopulation network is the line graph corresponding to observations over multiple time points, which is used in estimation of time-varying graphical models [13]. As we will show in Section 2.3, other existing methods for joint estimation of multiple graphical models, e.g. proposals of Danaher et al. [6], can also be seen as special cases of this general framework.

Our proposed estimator is the solution to a convex optimization problem based on the Gaussian likelihood with both ℓ1 and graph Laplacian [15] penalties. The graph Laplacian has been used in other applications for incorporating a priori knowledge in classification [24], for principal component analysis on network data [28], and for penalized linear regression with correlated covariates [10, 15, 17, 18, 32, 37]. The Laplacian penalty encourages similarity among estimated precision matrices according to the subpopulation network G. The ℓ1-penalty, on the other hand, encourages sparsity in the estimated precision matrices. Together, these two penalties capture both unique patterns specific to each subpopulation, as well as common patterns shared among different subpopulations.

We first discuss the setting where G(V, E, W) is known from external information, e.g. known phylogenetic trees (Section 2), and later discuss the estimation of the subpopulation memberships and similarities using hierarchical clustering (Section 4). We propose an alternating direction method of multipliers (ADMM) algorithm [3] for parameter estimation, as well as its extension for efficient computation in high dimensions by decomposing the problem into block-diagonal matrices. Although we use the Gaussian likelihood, our theoretical results also hold for non-Gaussian distributions. We establish model selection and norm consistency of the proposed estimator under different model assumptions (Section 3), with improved rates of convergence over existing methods based on penalized likelihood. We also establish the consistency of the proposed algorithm for the estimation of multiple precision matrices in settings where the subpopulation network G or subpopulation memberships are unknown. To achieve this, we establish the consistency of hierarchical clustering in high dimensions, by generalizing recent results of Borysov et al. [1] to the setting of arbitrary covariance matrices, which is of independent interest.

The rest of the paper is organized as follows. In Section 2 we describe the formal setup of the problem and present our estimator. Theoretical properties of the proposed estimator are studied in Section 3, and Section 4 discusses the extension of the method to the setting where the subpopulation network is unknown. The ADMM algorithm for parameter estimation and its extension for efficient computation in high dimensions are presented in Section 5. Results of the numerical studies, using both simulated and real data examples, are presented in Section 6. Section 7 concludes the paper with a discussion. Technical proofs are collected in the Appendix.

2. Model and Estimator

2.1. Problem Setup

Consider K subpopulations with distributions ℘^{(k)}, k = 1, …, K. Let X^{(k)} = (X^{(k),1}, …, X^{(k),p})^T ∈ ℝ^p be a random vector from the kth subpopulation with mean μ^{(k)} and covariance matrix Σ_0^{(k)} = (σ_{ij}^{(k)})_{i,j=1}^p. Suppose that an observation comes from the kth subpopulation with probability π_k > 0.

Our goal is to estimate the precision matrices Ω_0^{(k)} ≡ (Σ_0^{(k)})^{-1} = (ω_{ij}^{(k)})_{i,j=1}^p, k = 1, …, K. To this end, we use the Gaussian log-likelihood based on the correlation matrix (see Rothman et al. [26]) as a working model for estimation of the true Ω_0^{(k)}, k = 1, …, K. Let X_i^{(k)}, i = 1, …, n_k, be independent and identically distributed (i.i.d.) copies from ℘^{(k)}, k = 1, …, K. We denote the correlation matrices and their inverses by Ψ^{(k)} = (ψ_{ij}^{(k)})_{i,j=1}^p and Θ^{(k)} = (θ_{ij}^{(k)})_{i,j=1}^p, k = 1, …, K, respectively. The Gaussian log-likelihood based on the correlation matrix can then be written as

$$\tilde\ell_n(\Theta) = \frac{1}{n}\sum_{k=1}^K n_k\Big(\log\det\big(\Theta^{(k)}\big) - \operatorname{tr}\big(\Psi_n^{(k)}\Theta^{(k)}\big)\Big), \qquad (2)$$

where Ψn(k), k = 1, …, K is the sample correlation matrix for subpopulation k.

Examining the derivative of (2), which consists of the terms (Θ^{(k)})^{-1} − Ψ_n^{(k)}, k = 1, …, K, justifies its use as a working model for non-Gaussian data: the stationary point of (2) satisfies (Θ^{(k)})^{-1} = Ψ_n^{(k)}, which gives a consistent estimate of Ψ_0^{(k)}. Thus we do not, in general, need to assume multivariate normality. However, in certain applications, for instance LDA/QDA and GGM, the resulting estimate is useful only if the data follow a multivariate normal distribution.

2.2. The Laplacian Shrinkage Estimator

Let Θ = (Θ^{(1)}, …, Θ^{(K)}) and write Θ_{ij} = (θ_{ij}^{(1)}, …, θ_{ij}^{(K)})^T ∈ ℝ^K, i, j = 1, …, p, for the vector of (i, j)-elements across subpopulations. Our proposed estimator, Laplacian Shrinkage for Inverse Covariance matrices from Heterogeneous populations (LASICH), first estimates the inverse of the correlation matrices for each of the K subpopulations, and then transforms them into the estimator of inverse covariance matrices, as in Rothman et al. [26]. In particular, we first obtain the estimate Θ̂ of the true inverse correlation matrix by solving the following optimization problem

$$\hat\Theta_{\rho_n} \in \operatorname*{arg\,min}_{\Theta = \Theta^T,\ \Theta \succ 0}\ -\tilde\ell_n(\Theta) + \rho_n\|\Theta\|_1 + \rho_n\rho_2\|\Theta\|_L \equiv \operatorname*{arg\,min}_{\Theta = \Theta^T,\ \Theta \succ 0}\ -\tilde\ell_n(\Theta) + \rho_n\sum_{k=1}^K\sum_{i\neq j}\big|\theta_{ij}^{(k)}\big| + \rho_n\rho_2\sum_{i\neq j}\big\|\Theta_{ij}\big\|_L, \qquad (3)$$

where Θ = Θ^T enforces the symmetry of individual inverse correlation matrices, i.e. Θ^{(k)} = (Θ^{(k)})^T, and Θ ≻ 0 requires that Θ^{(k)} is positive definite for k = 1, …, K. The ℓ1-penalty ‖Θ‖_1 = ∑_{k=1}^K ∑_{i≠j} |θ_{ij}^{(k)}| in (3) encourages sparsity in the estimated inverse correlation matrices. The graph Laplacian penalty, on the other hand, exploits the information in the subpopulation network G to encourage similarity among the values of θ_{ij}^{(k)} and θ_{ij}^{(k′)}. The tuning parameters ρ_n and ρ_nρ_2 control the size of each penalty term.

Figure 1 illustrates the motivation for the graph Laplacian penalty ‖Θ_{ij}‖_L in (3). The gray-scale images in the figure show the hypothetical sparsity patterns of precision matrices Θ^{(1)}, Θ^{(2)}, Θ^{(3)} for three related subpopulations. Here, Θ^{(1)} consists of two blocks with one “hub” node in each block; in Θ^{(2)} and Θ^{(3)} one of the blocks is changed into a “banded” structure. It can be seen that one of the two blocks in both Θ^{(2)} and Θ^{(3)} has a similar sparsity pattern as Θ^{(1)}. However, Θ^{(2)} and Θ^{(3)} are not similar. The subpopulation network G in this figure captures the relationship among precision matrices of the three subpopulations. Such complex relationships cannot be captured using the existing approaches, e.g. Danaher et al. [6], Guo et al. [9], which encourage all precision matrices to be equally similar to each other. More generally, G can be a weighted graph, G(V, E, W), whose nodes represent the subpopulations 1, …, K. The edge weights W : E → ℝ_+ represent the similarity among pairs of subpopulations, with larger values of W_{kk′} ≡ W(k, k′) > 0 corresponding to more similarity between the precision matrices of subpopulations k and k′.

Fig 1.
Illustration of similarities in the sparsity patterns of precision matrices Ω^{(1)}, Ω^{(2)} and Ω^{(3)}. Nonzero and zero off-diagonal entries are colored in black and white, respectively, while diagonal entries are colored in gray. The associated subpopulation network G reflects the similarities between the precision matrices of subpopulations 1 and 2 and of subpopulations 1 and 3. The simulation experiments in Section 6.1 use a similar subpopulation network in a high-dimensional setting.

In this section, we assume that the weighted graph G is externally available, and defer the discussion of data-driven choices of G, based on hierarchical clustering, to Section 4. Given G, the (unnormalized) graph Laplacian penalty ‖Θ_{ij}‖_L is defined as

$$\|\Theta_{ij}\|_L = \Big\{\sum_{k,k'=1}^K W_{kk'}\big(\theta_{ij}^{(k)} - \theta_{ij}^{(k')}\big)^2\Big\}^{1/2}, \qquad (4)$$

where W_{kk′} = 0 if k and k′ are not connected. The Laplacian shrinkage penalty can alternatively be written as $\|\Theta_{ij}\|_L = \sqrt{\Theta_{ij}^T L\,\Theta_{ij}}$, where L = (l_{kk′})_{k,k′=1}^K ∈ ℝ^{K×K} is the Laplacian matrix [5] of the subpopulation network G, defined as

$$l_{kk'} = \begin{cases} d_k - W_{kk}, & k = k',\ d_k \neq 0,\\ -W_{kk'}, & k \neq k',\\ 0, & \text{otherwise}, \end{cases}$$

where d_k = ∑_{k′≠k} W_{kk′} is the degree of node k in G, with W_{kk′} = 0 if k and k′ are not connected. The Laplacian shrinkage penalty can also be defined in terms of the normalized graph Laplacian, I − D^{-1/2}W D^{-1/2}, where D = diag(d_1, …, d_K) is the diagonal degree matrix. The normalized Laplacian penalty,

$$\|\Theta_{ij}\|_L = \bigg\{\sum_{k,k'=1}^K W_{kk'}\bigg(\frac{\theta_{ij}^{(k)}}{\sqrt{d_k}} - \frac{\theta_{ij}^{(k')}}{\sqrt{d_{k'}}}\bigg)^2\bigg\}^{1/2},$$

which we also denote by ‖Θ_{ij}‖_L, imposes smaller shrinkage on coefficients associated with highly connected subpopulations. We henceforth primarily focus on the normalized penalty.
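For concreteness, the normalized Laplacian of a subpopulation network and the resulting penalty, taken here in its quadratic-form version (Θ_{ij}^T L Θ_{ij})^{1/2}, can be computed from a weight matrix W as in the following minimal sketch; the function names and example weights are illustrative, and constant-factor conventions may differ from the explicit sum displayed above.

```python
import numpy as np

def normalized_laplacian(W):
    """I - D^{-1/2} W D^{-1/2} for a symmetric weight matrix W with zero diagonal."""
    d = W.sum(axis=1)                                  # node degrees d_k
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(d)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

def laplacian_penalty(theta_ij, L):
    """(theta_ij^T L theta_ij)^{1/2} for the K-vector theta_ij = (theta_ij^(1), ..., theta_ij^(K))."""
    return float(np.sqrt(theta_ij @ L @ theta_ij))

# Subpopulation network of Figure 1: subpopulation 1 is connected to 2 and 3.
W = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
L = normalized_laplacian(W)
print(laplacian_penalty(np.array([0.5, 0.4, -0.1]), L))
```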

Given estimates of the inverse correlation matrices Θ̂^{(1)}, …, Θ̂^{(K)} from (3), we obtain estimates of the precision matrices Ω^{(k)} by noting that Ω^{(k)} = Ξ^{(k)}Θ^{(k)}Ξ^{(k)}, where Ξ^{(k)} is the diagonal matrix of reciprocals of the standard deviations, Ξ^{(k)} = diag({σ_{11}^{(k)}}^{-1/2}, …, {σ_{pp}^{(k)}}^{-1/2}). Our estimator Ω̂_{ρ_n} = (Ω̂_{ρ_n}^{(1)}, …, Ω̂_{ρ_n}^{(K)}) of the precision matrices Ω is thus defined as

$$\hat\Omega_{\rho_n}^{(k)} = \hat\Xi^{(k)}\,\hat\Theta_{\rho_n}^{(k)}\,\hat\Xi^{(k)}, \qquad k = 1, \dots, K,$$

where Ξ̂^{(k)} = diag(1/{σ̂_{11}^{(k)}}^{1/2}, …, 1/{σ̂_{pp}^{(k)}}^{1/2}), with σ̂_{ii}^{(k)} the sample variance of the ith variable in the kth subpopulation.
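This back-transformation is a simple rescaling by reciprocal sample standard deviations; the sketch below (illustrative names, not the authors' code) verifies it on a toy example by checking that the inverse sample correlation matrix, rescaled in this way, recovers the inverse sample covariance matrix.

```python
import numpy as np

def correlation_inverse_to_precision(Theta_hat, X):
    """Map an estimated inverse correlation matrix to a precision matrix estimate."""
    sd = X.std(axis=0, ddof=1)             # sample standard deviations
    Xi_hat = np.diag(1.0 / sd)             # diag(1 / sqrt(sigma_hat_ii))
    return Xi_hat @ Theta_hat @ Xi_hat

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5)) * np.array([1.0, 2.0, 0.5, 3.0, 1.5])
Theta_hat = np.linalg.inv(np.corrcoef(X, rowvar=False))
Omega_hat = correlation_inverse_to_precision(Theta_hat, X)
print(np.allclose(Omega_hat, np.linalg.inv(np.cov(X, rowvar=False))))   # True
```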

A number of alternative strategies can be used instead of the graph Laplacian penalty in (3). First, similarity among coefficients of precision matrices can also be imposed using a ridge-type penalty, ‖Θ_{ij}‖_L^2. The main difference is that our penalty ‖Θ_{ij}‖_L discourages the inclusion of edges θ_{ij}^{(1)}, …, θ_{ij}^{(K)} if they are very different across the K subpopulations. Another option is to use graph trend filtering [31], which imposes a fused lasso penalty over the subpopulation graph G. Finally, ignoring the weights W_{kk′} in (4), the Laplacian shrinkage penalty resembles the Markov random field (MRF) prior used in Bayesian variable selection with structured covariates [16]. While our paper was under review, we became aware of the recent work by Peterson et al. [23], who utilize an MRF prior to develop a Bayesian framework for estimation of multiple Gaussian graphical models. This method assumes that edges between pairs of random variables are formed independently, and is hence more suited for Erdős-Rényi networks. Our penalized estimation framework can be seen as an alternative to using an MRF prior to estimate the precision matrices in a mixture of Gaussian distributions.

2.3. Connections to Other Estimators

To connect our proposed estimator to existing methods for joint estimation of multiple graphical models, we first give an alternative interpretation of the graph Laplacian penalty ‖Θ_{ij}‖_L = (Θ_{ij}^T L Θ_{ij})^{1/2} as a norm of a transformed version of the θ_{ij}^{(k)}'s. More specifically, consider the mapping g_G : ℝ^K → ℝ^{K×K} defined based on the Laplacian matrix for the graph G by

$$\big[g_G(\Theta_{ij})\big]_{kk'} = \begin{cases} 0, & k = k',\\[4pt] \sqrt{W_{kk'}}\bigg(\dfrac{\theta_{ij}^{(k)}}{\sqrt{2d_k}} - \dfrac{\theta_{ij}^{(k')}}{\sqrt{2d_{k'}}}\bigg), & k \neq k', \end{cases}$$

if G has at least one edge. For a graph with no edges, define g_G(Θ_{ij}) = diag(Θ_{ij}), the K × K diagonal matrix whose kth diagonal element is θ_{ij}^{(k)}. It can then be seen that the graph Laplacian penalty can be rewritten as

$$\|\Theta_{ij}\|_L = \big\|g_G(\Theta_{ij})\big\|_F,$$

where ‖·‖F is the Frobenius norm.

Using the above interpretation, other methods for joint estimation of multiple graphical models can be seen as penalties on transformations g_G(Θ_{ij}) corresponding to different graphs G. We illustrate this connection using the hypothetical subpopulation network shown in Figure 2a.

Fig 2.
Comparison of subpopulation networks used in the penalty for different methods for joint estimation of multiple precision matrices: a) the true network, modeled by LASICH; b) FGL; c) GGL & Guo et al; and d) estimation of time-varying networks (Kolar & Xing, 2009); see Section 2.3 for details.

Consider first the FGL penalty of Danaher et al. [6], applied to the elements of the inverse correlation matrices, |θ_{ij}^{(k)} − θ_{ij}^{(k′)}|. Let G_C be the complete unweighted graph (W_{kk′} = 1 for all k ≠ k′), in which all K(K−1)/2 node pairs are connected to each other (Figure 2b). It is then easy to see that

$$\sum_{k\neq l}\big|\theta_{ij}^{(k)} - \theta_{ij}^{(l)}\big| = \sqrt{2(K-1)}\,\big\|g_{G_C}(\Theta_{ij})\big\|_1,$$

where the factor of √(2(K−1)) can be absorbed into the tuning parameter for the FGL penalty. A similar argument can also be applied to the GGL penalty of Danaher et al. [6], ‖Θ_{ij}‖, by considering instead an empty graph G_e with no edges between nodes (Figure 2c). In this case, the mapping g_{G_e} gives a diagonal matrix with elements θ_{ij}^{(k)}, and hence ‖Θ_{ij}‖ = ‖g_{G_e}(Θ_{ij})‖_F.

Unlike the proposals of Danaher et al. [6], the estimator of Guo et al. [9] is based on a non-convex penalty, and does not naturally fit into the above framework. However, Lemma 2 in Guo et al. [9] establishes a connection between the optimal solutions of the original optimization problem and those obtained by considering a single penalty of the form {∑_{k=1}^K |θ_{ij}^{(k)}|}^{1/2} ≡ ‖Θ_{ij}‖_{1,2}. Similar to GGL, the connection with the method of Guo et al. [9] can be built based on the above alternative formulation, by considering again the empty graph G_e (Figure 2c), but instead the ‖·‖_{1,2} penalty, which is a member of the CAP family of penalties [36]. More specifically,

$$\Big\{\sum_{k=1}^K\big|\theta_{ij}^{(k)}\big|\Big\}^{1/2} = \big\|g_{G_e}(\Theta_{ij})\big\|_{1,2}.$$

Using the above framework, it is also easy to see the connection between our proposed estimator and the proposal of Kolar et al. [13]: the total variation penalty in Kolar et al. [13] is closely related to FGL, with summation over differences in consecutive time points. It is therefore clear that the penalty of Kolar et al. [13] (up to constant multipliers) can be obtained by applying the graph Laplacian penalty defined for a line graph connecting the time points (Figure 2d).

The above discussion highlights the generality of the proposed estimator, and its connection to existing methods. In particular, while FGL and GGL/Guo et al. [9] consider extreme cases with isolated, or fully connected nodes, one can obtain more flexibility in estimation of multiple precision matrices by defining the penalty based on the known subpopulation network, e.g. based on phylogenetic trees or spatio-temporal similarities between fMRI samples. The clustering-based approach of Section 4 further extends the applicability of the proposed estimator to the settings where the subpopulation network is not known a priori. The simulation results in Section 6 show that the additional flexibility of the proposed estimator can result in significant improvements in estimation of multiple precision matrices, when K > 2. The above discussion also suggests that other variants of the proposed estimator can be defined, by considering other norms. We leave such extensions to future work.

3. Theoretical Properties

In this section, we establish norm and model selection consistency of the LASICH estimator. We consider a high-dimensional setting, p ≫ n_k, k = 1, …, K, where both n and p go to infinity. As mentioned in the Introduction, the normality assumption is not required for establishing these results. We instead require conditions on the tails of the random vectors X^{(k)} for each k = 1, …, K. We consider two cases, exponential tails and polynomial tails, which both allow for distributions other than multivariate normal.

Condition 1 (Exponential Tails)

There exists a constant c1 ∈ (0, ∞) such that

$$\mathbb{E}\Big[\exp\Big\{t\big(X_j^{(k)} - \mu_j^{(k)}\big)\big/\big(\sigma_{jj}^{(k)}\big)^{1/2}\Big\}\Big] \le e^{c_1^2 t^2/2}, \qquad \forall t \in \mathbb{R},\ k = 1,\dots,K,\ j = 1,\dots,p.$$

Condition 2 (Polynomial Tails)

There exist constants c_2, c_3 > 0 and c_4 such that

$$\mathbb{E}\Big[\Big\{X_j^{(k)}\big/\big(\sigma_{jj}^{(k)}\big)^{1/2}\Big\}^{4(c_2+c_3+1)}\Big] \le c_4, \qquad k = 1,\dots,K,\ j = 1,\dots,p.$$

Since we adopt the correlation-based Gaussian log-likelihood, we require the boundedness of the true variances to control the error between true and sample correlation matrices.

Condition 3 (Bounded variance)

There exist constants c_5 > 0 and c_6 < ∞ such that c_5 ≤ min_{k,j} σ_{jj}^{(k)} and max_{k,j} σ_{jj}^{(k)} ≤ c_6.

Condition 4 (Sample size)

Let λ_Θ ≡ max_k ‖Θ_0^{(k)}‖_2, and let

$$C_1 \equiv \Big\{2c_5^2 + c_5 + c_6^{3/2} + 2c_5^{5/2}c_6 + \big(c_5^4 + 2c_5^5 c_6\big)^{1/2}\Big\}^{-1}.$$
  1. (Exponential tails). It holds that
    $$n \ge \max\Big\{\frac{12}{\min_k \pi_k},\ 2^{18}3^3\,C_1^{-2}\big(1+4c_1^2\big)^2 c_6^2\,\lambda_\Theta^4\big(1+\|L\|_2^{1/2}\big)^2 s\Big\}\log p,$$
    and log p/n → 0.
  2. (Polynomial tails). Let C_2 = sup_n{ρ_n √(n/log p)} = O(1), where ρ_n is given in Lemma 1 in the Appendix, and let c_7 > 0 be some constant. It holds that
    $$n \ge \max\Big\{p^{1/c_2}c_7^{1/c_2},\ \frac{2^7 3^2\,C_1^{-2}C_2^2\,K}{\min_k \pi_k}\,\lambda_\Theta^4\big(1+\|L\|_2^{1/2}\big)^2 s\log p\Big\}.$$

Condition 4 determines the sufficient sample size n = ∑_k n_k for consistent estimation of the precision matrices Θ^{(1)}, …, Θ^{(K)} in relation to, among other quantities, the number of variables p, the sparsity s and the spectral norm ‖L‖_2 of the Laplacian matrix of the subpopulation network G. While a general characterization of ‖L‖_2 is difficult, investigating its value in special cases provides insight into the effect of the underlying population structure on the required sample size. Consider, for instance, two extreme cases: for a fully connected graph G associated with K subpopulations, ‖L‖_2 = 1/(K − 1); for a minimally connected “line” graph, corresponding to e.g. multiple time points, ‖L‖_2 = 2: with K = 5, 30% more samples are needed for the line graph, compared to a fully connected network. The above calculations match our intuition that fewer samples are needed to consistently estimate precision matrices of K subpopulations that share greater similarities. This, of course, makes sense, as information can be better shared when estimating parameters of similar subpopulations. Note that, here, L represents the Laplacian matrix of the true subpopulation network capturing the underlying population structure. The above conditions thus do not provide any insight into the effect of misspecifying the relationship between subpopulations, i.e., when an incorrect L is used. This is indeed an important issue that warrants additional investigation; see Zhao and Shojaie [37] for some insight in the context of inference for high-dimensional regression. In Section 4, we will discuss a data-driven choice of L that results in consistent estimation of precision matrices.

Before presenting the asymptotic results, we introduce some additional notation. For a matrix A = (a_{ij})_{i,j=1}^p ∈ ℝ^{p×p}, we denote the spectral norm by ‖A‖_2 = max_{x∈ℝ^p, ‖x‖=1} ‖Ax‖ and the element-wise ℓ_∞-norm by ‖A‖_∞ = max_{i,j} |a_{ij}|, where ‖x‖ is the Euclidean norm of a vector x. We also write the induced ℓ_∞-norm as ‖A‖_{∞/∞} = sup_{‖x‖_∞=1} ‖Ax‖_∞, where ‖x‖_∞ = max_i |x_i| for x = (x_1, …, x_p). For ease of presentation, the results in this section are presented in asymptotic form; non-asymptotic results and proofs are deferred to the Appendix.

3.1. Consistency in Spectral Norm

Let s ≡ |{(i, j) : ω_{0,ij}^{(k)} ≠ 0, i, j = 1, …, p, i ≠ j, k = 1, …, K}| and d ≡ max_{k,i} |{j : ω_{0,ij}^{(k)} ≠ 0, j = 1, …, p, j ≠ i}|. The following theorem establishes the rate of convergence of the LASICH estimator, in spectral norm, under either exponential or polynomial tail conditions (Condition 1 or 2). Convergence rates for LASICH in the ℓ_∞- and Frobenius norms are discussed in Section 3.3.

Theorem 1

Suppose Conditions 3 and 4 hold. Under Condition 1 or 2,

$$\sum_{k=1}^K\big\|\hat\Omega_{\rho_n}^{(k)} - \Omega_0^{(k)}\big\|_2 = O_P\Bigg(\sqrt{\frac{\lambda_\Theta^4\,(s+1)\log p}{n}}\Bigg),$$

as n, p → ∞, where ρ_n is given in Lemma 1 in the Appendix with γ = min_k π_k/2.

Theorem 1 is proved in the Appendix. The proof builds on tools from Negahban et al. [20]. However, our estimation procedure does not match their general framework: First, we do not penalize the diagonal elements of the inverse correlation matrices; our penalty is thus not a norm. Second, the Laplacian matrix is not positive definite (it is only positive semidefinite), so the Laplacian shrinkage penalty is not strictly convex. The results from Negahban et al. [20] are thus not directly applicable to our problem. To establish the estimation consistency, we first show, in Lemma 3, that the function r(·) = ‖·‖_1 + ρ_2‖·‖_L is a seminorm, and is, moreover, convex and decomposable. We also characterize the subdifferential of this seminorm in Lemma 6, based on the spectral decomposition of the graph Laplacian L. The rest of the proof uses tools from Negahban et al. [20], Rothman et al. [26] and Ravikumar et al. [25], as well as new inequalities and concentration bounds. In particular, in Lemma 4 we establish a new ℓ_∞ bound for the empirical covariance matrix for random variables with polynomial tails, which is used to establish consistency in the spectral norm under Condition 2.

The convergence rate in Theorem 1 compares favorably to several other methods based on penalized likelihood. Few results are currently available for estimation of multiple precision matrices. An exception is Guo et al. [9], who obtained a slower rate of convergence, O_p({(s + p) log p/n}^{1/2}), under the normality assumption and based on a bound on the Frobenius norm. Our rates of convergence are comparable to the results of Rothman et al. [26] for spectral norm convergence of a single precision matrix, obtained under the normality assumption. Ravikumar et al. [25], on the other hand, assumed the irrepresentability condition to obtain the rates O_p({min{s + p, d^2} log p/n}^{1/2}) and O_p({min{s + p, d^2} p^{τ/(c_2+c_3+1)}/n}^{1/2}), under exponential and polynomial tail conditions, respectively, where τ > 2 is some scalar. The rate in Theorem 1 is obtained without assuming the irrepresentability condition. In fact, our rates of convergence are faster than those of Ravikumar et al. [25] given the irrepresentability condition (Condition 5); see Corollary 1. Cai et al. [4] obtained improved rates of convergence under both tail conditions for an estimator that is not found by minimizing the penalized likelihood objective function, and may not be positive definite. Finally, note that the results in [4, 25, 26] are for separate estimation of precision matrices and hold for the minimum sample size across subpopulations, min_k n_k, whereas our results hold for the total sample size ∑_k n_k.

3.2. Model Selection Consistency

Let S^{(k)} = {(i, j) : ω_{0,ij}^{(k)} ≠ 0, i, j = 1, …, p} be the support of Ω_0^{(k)}, and denote by d the maximum number of nonzero elements in any row of Ω_0^{(k)}, k = 1, …, K. Define the event

$$\mathcal{M}\big(\hat\Omega_{\rho_n}, \Omega_0\big) \equiv \Big\{\operatorname{sign}\big(\hat\omega_{\rho_n,ij}^{(k)}\big) = \operatorname{sign}\big(\omega_{0,ij}^{(k)}\big),\ i, j = 1,\dots,p,\ k = 1,\dots,K\Big\}, \qquad (5)$$

where sign(a) is 1 if a > 0, 0 if a = 0 and −1 if a < 0. We say that an estimator Ω̂_{ρ_n} of Ω_0 is model-selection consistent if P{ℳ(Ω̂_{ρ_n}, Ω_0)} → 1.

We begin by discussing an irrepresentability condition for estimation of multiple graphical models. This restrictive condition is commonly assumed to establish model selection consistency of lasso-type estimators, and is known to be almost necessary [19, 35]. For the graphical lasso, Ravikumar et al. [25] showed that the irrepresentability condition amounts to a constraint on the correlation between entries of the Hessian matrix Γ = Ω−1 ⊗ Ω−1 in the set S corresponding to nonzero elements of Ω, and those outside this set. Our irrepresentability condition is motivated by that in Ravikumar et al. [25], however, we adjust the index set S to also account for covariances of “non-edge variables” that are correlated with each other. More specifically, the description of irrepresentability condition in Ravikumar et al. [25] involves ΓSS consisting only of elements σijσkl with (i, j) ∈ S and (k, l) ∈ S. However, σij ≠ 0 for (i, j) ∉ S is not taken into account by this definition. We thus adjust the index set S so that ΓSS also includes elements σijσkl if (i, k) ∈ S and (j, l) ∈ S. This definition is based on the crucial observations that Γ = Σ ⊗ Σ involves the covariance matrix Σ instead of the precision matrix Ω, and that some variables are correlated (i.e., σij ≠ 0) even though they may be conditionally independent (i.e., ωij = 0). Defining S(k) for k = 1, …, K as above, we assume the following condition.

Condition 5 (Irrepresentability condition)

The inverse Θ_0^{(k)} of the correlation matrix Ψ_0^{(k)} satisfies the irrepresentability condition for S^{(k)} with parameter α: (a) (Θ_0^{(k)} ⊗ Θ_0^{(k)})_{S^{(k)}S^{(k)}} and (Ψ_0^{(k)} ⊗ Ψ_0^{(k)})_{S^{(k)}S^{(k)}} are invertible, and (b) there exists some α ∈ (0, 1] such that

$$\max_{(i,j)\in (S^{(k)})^c}\ \Big\|\Gamma^{(k)}_{\{(i,j)\}\times S^{(k)}}\big\{\Gamma^{(k)}_{S^{(k)}S^{(k)}}\big\}^{-1}\Big\|_1 \le 1 - \alpha, \qquad (6)$$

for k = 1, …, K, where Γ^{(k)} ≡ Ψ_0^{(k)} ⊗ Ψ_0^{(k)}.

In addition to the irrepresentability condition, we require lower bounds on the magnitudes of the nonzero θ_{0,ij}^{(k)} and of their normalized differences.

Condition 6 (Lower bounds for the inverse correlation matrices)

There exists a constant c_8 ∈ ℝ such that

$$\theta_{\min} \equiv \min_{k=1,\dots,K}\ \min_{(i,j):\,\theta_{0,ij}^{(k)}\neq 0,\ i\neq j}\big|\theta_{0,ij}^{(k)}\big| \ge c_8 > 0.$$

Moreover, for Ω_{0,ij} ≠ 0 with LΩ_{0,ij} ≠ 0, there exists a constant c_9 > 0 such that

$$\min_{\substack{l_{kk'}\neq 0,\ \omega_{0,ij}^{(k)}/\sqrt{d_k}\,-\,\omega_{0,ij}^{(k')}/\sqrt{d_{k'}}\neq 0}}\ \bigg|\frac{\theta_{0,ij}^{(k)}}{\sqrt{d_k}} - \frac{\theta_{0,ij}^{(k')}}{\sqrt{d_{k'}}}\bigg| \ge c_9.$$

The first lower bound in Condition 6 is the usual “min-beta” condition for model selection consistency of lasso-type estimators. The second lower bound, which is stated here for the normalized Laplacian penalty, is a mild condition which ensures that estimates based on inverse correlation matrices can be mapped to precision matrices. For any pair of subpopulations k and k′ connected in G, it requires that if the difference in (normalized) entries of the precision matrices is nonzero, then the difference in (normalized) entries of the inverse correlation matrices is bounded away from zero. In other words, the bound guarantees that Θ_{0,ij} is not in the null space of L whenever Ω_{0,ij} is outside the null space. This bound can be relaxed if we use a positive definite matrix L_ε = L + εI for some small ε > 0.

Our last condition for establishing model selection consistency concerns the minimum sample size and the tuning parameter for the graph Laplacian penalty. This condition is necessary to control the ℓ_∞-bound of the error Θ̂_{ρ_n} − Θ_0, as in Ravikumar et al. [25]. Our minimum sample size requirement is related to the irrepresentability condition. Let κ_Γ be the maximum of the absolute column sums of the matrices {(Γ^{(k)})^{-1}}_{S^{(k)}S^{(k)}}, k = 1, …, K, and κ_Ψ be the maximum of the absolute column sums of the matrices Ψ_0^{(k)}, k = 1, …, K. The minimum sample size in Ravikumar et al. [25] is also a function of the irrepresentability constant; in particular, their κ_Γ involves {Γ^{(k)}_{S^{(k)}S^{(k)}}}^{-1}. There is, therefore, a subtle difference between our definition and theirs: in our definition, the matrix is first inverted and then partitioned, while in Ravikumar et al. [25], the matrix is first partitioned and then inverted. Corollary 2 establishes the model selection consistency under a weaker sample size requirement, by exploiting instead the control of the spectral norm in Theorem 1.

Condition 7 (Sample size and regularization parameters)

Let

$$C_3 = \max\Bigg\{\frac{2^6 3^4\,\kappa_\Psi^2\kappa_\Gamma^2}{\min_k\pi_k^2}\max\bigg\{1,\ \frac{2^6 7^2\,\kappa_\Psi^4\kappa_\Gamma^2}{\alpha^2\min_k\pi_k^2}\bigg\},\ \frac{3^6}{c_8^2},\ \frac{2^4 3^2}{c_9^2\min_k d_k}\Bigg\}.$$
  1. (Exponential tails). It holds that
    $$n > \frac{12\log p}{\min_k\pi_k}\max\Big\{1,\ 2^6 3^2\,C_1^{-2}\big(1+c_1^2\big)^2 c_6^2\,C_3\,d^2\Big\}.$$
  2. (Polynomial tails). It holds that $n > \max\big\{p^{1/c_2}c_7^{1/c_2},\ C_1^{-2}C_2^2\,C_3\,d^2\log p\big\}$.

  3. It holds that ρ_2 ≤ α^2/{4‖L‖_2^{1/2}(2 − α)}.

With these conditions, we obtain

Theorem 2

Suppose that Conditions 3, 5, 6 and 7 hold. Under Condition 1 or 2, P(ℳ(Ω̂_{ρ_n}, Ω_0)) → 1 as n, p → ∞, where ρ_n is given in Lemma 1 in the Appendix with γ = min_k π_k/2.

3.3. Additional Results

In this section, we establish norm and variable selection consistency of LASICH under alternative assumptions. Our first result gives better rates of convergence for consistency in the ℓ_∞-, spectral and Frobenius norms, under the conditions for model selection consistency. Our rates in Corollary 1 improve the previous results of Ravikumar et al. [25], and are comparable to those of Cai et al. [4] in the ℓ_∞- and spectral norms under both tail conditions.

Corollary 1

Suppose the conditions in Theorem 2 hold. Then, under Condition 1 or 2,

$$\sum_{k=1}^K\big\|\hat\Omega_{\rho_n}^{(k)} - \Omega_0^{(k)}\big\|_F = O_P\Bigg(\sqrt{\frac{\min\big\{\lambda_\Theta^4\,p(s+1),\ \kappa_\Gamma^2(s+p)\big\}\log p}{n}}\Bigg),$$
$$\sum_{k=1}^K\big\|\hat\Omega_{\rho_n}^{(k)} - \Omega_0^{(k)}\big\|_2 = O_P\Bigg(\sqrt{\frac{\min\big\{\lambda_\Theta^4(s+1),\ \kappa_\Gamma^2 d^2\big\}\log p}{n}}\Bigg),$$
$$\sum_{k=1}^K\big\|\hat\Omega_{\rho_n}^{(k)} - \Omega_0^{(k)}\big\|_\infty = O_P\Bigg(\sqrt{\frac{\kappa_\Gamma^2\log p}{n}}\Bigg).$$

Our next result, Corollary 2, establishes model selection consistency under a weaker version of the irrepresentability condition (Condition 5). Aside from the difference in the index sets S^{(k)}, the form of Condition 5 and the assumption of invertibility of (Ψ_0^{(k)} ⊗ Ψ_0^{(k)})_{S^{(k)}S^{(k)}} are similar to those in Ravikumar et al. [25]. On the other hand, Ravikumar et al. [25] do not require invertibility of (Θ_0^{(k)} ⊗ Θ_0^{(k)})_{S^{(k)}S^{(k)}}. However, their proof is based on an application of Brouwer's fixed point theorem, which does not hold for the corresponding function (Eq. (70) on page 973), since it involves a matrix inverse and is hence not continuous on its range. The additional invertibility assumption in Condition 5 is used to address this issue in Lemma 11. The condition can be relaxed if we assume the alternative scaling of the sample size stated in Condition 8 below instead of Condition 7.

Condition 8

Let λ_Ψ ≡ max_k ‖Ψ_0^{(k)}‖_{∞/∞}. Suppose ρ_2 ≤ α^2/{4‖L‖_2^{1/2}(2 − α)} and

  1. (Exponential tails)
    $$n > \frac{2^{19}3^3}{\{\min_k\pi_k\}^3 C_1^2}\big(1+4c_1^2\big)^2 c_6^2\,\lambda_\Theta^4\big(1+\rho_2\|L\|_2^{1/2}\big)^2 s\log p\,\max\big\{\lambda_\Psi,\ 4\lambda_\Theta^4\alpha^{-1}\big\},$$
    or
  2. (Polynomial tails)
    $$n > \frac{2^{12}3^3 K^2}{\{\min_k\pi_k\}^2 C_1^2}\,C_2^2\,\lambda_\Theta^4\big(1+\rho_2\|L\|_2^{1/2}\big)^2 s\log p\,\max\big\{\lambda_\Psi,\ 4\lambda_\Theta^4\alpha^{-1}\big\}.$$

Corollary 2

Suppose that Conditions 3, 6 and 8 hold. Suppose also that Condition 5 holds without requiring the invertibility of (Θ_0^{(k)} ⊗ Θ_0^{(k)})_{S^{(k)}S^{(k)}}. Then, under Condition 1 or 2, P(ℳ(Ω̂_{ρ_n}, Ω_0)) → 1 as n, p → ∞, where ρ_n is given in Lemma 1 in the Appendix with γ = min_k π_k/2.

4. Laplacian Shrinkage based on Hierarchical Clustering

Our proposed LASICH approach utilizes the information in the subpopulation network G. In practice, however, similarity between subpopulations may be difficult to ascertain or quantify. In this section, we present a modified LASICH framework, called HC-LASICH, which utilizes hierarchical clustering to learn the relationships among subpopulations. The information from hierarchical clustering is then used to define the weighted subpopulation network. Importantly, HC-LASICH can even be used in settings where the subpopulation membership is unavailable, for instance, to learn the genetic network of cancer patients, where cancer subtypes may be unknown.

We use hierarchical clustering with complete, single or average linkage to estimate both the subpopulation memberships and the weighted subpopulation network G. Specifically, the length of the path between two subpopulations in the dendrogram is used as a measure of dissimilarity between the two subpopulations; the weights for the subpopulation network are then simply defined by taking the reciprocals of these lengths. Throughout this section, we assume that the number of subpopulations K is known. While a number of methods have been proposed for estimating the number of subpopulations in hierarchical clustering (see e.g. Borysov et al. [1] and the references therein), this problem is beyond the scope of this paper.
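As an illustration, the estimation of memberships and of the weighted subpopulation network from a dendrogram can be sketched as follows; here the cophenetic (dendrogram) distance between clusters plays the role of the path length, an implementation detail the text leaves open, and all names are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, cophenet
from scipy.spatial.distance import pdist, squareform

def hc_subpopulation_network(X, K, method="complete"):
    """Estimate memberships and a weighted subpopulation network from the dendrogram."""
    D = pdist(X)                                        # pairwise sample distances
    Z = linkage(D, method=method)
    labels = fcluster(Z, t=K, criterion="maxclust")     # estimated subpopulation memberships
    coph = squareform(cophenet(Z, D)[1])                # dendrogram (cophenetic) distances
    W = np.zeros((K, K))
    for k in range(1, K + 1):
        for kp in range(k + 1, K + 1):
            # distance between two clusters: height at which their branches merge
            dist = coph[np.ix_(labels == k, labels == kp)].min()
            W[k - 1, kp - 1] = W[kp - 1, k - 1] = 1.0 / dist   # reciprocal as similarity weight
    return labels, W
```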

Let I = (I^{(1)}, …, I^{(K)}) be the subpopulation membership indicator, so that I follows the multinomial distribution Mult_K(1, (π_1, …, π_K)) with parameter 1 and subpopulation membership probabilities (π_1, …, π_K) ∈ (0, 1)^K. Note that I is missing and is to be estimated. Let I_i, i = 1, …, n, be i.i.d. copies of I, and let Î_i = (Î_{i1}, …, Î_{iK}) be the estimated subpopulation indicator for the ith observation obtained via hierarchical clustering. Based on the estimated subpopulation memberships and subpopulation network Ĝ, we apply our method to obtain the estimator, HC-LASICH, Ω̂_{HC,ρ_n} = (Ω̂_{HC,ρ_n}^{(1)}, …, Ω̂_{HC,ρ_n}^{(K)}). Interestingly, HC-LASICH enjoys the same theoretical properties as LASICH, under the normality assumption. To show this, we first establish the consistency of hierarchical clustering in high dimensions, which is of independent interest. Our result is motivated by the recent work of [1], who study the consistency of hierarchical clustering for independent normal variables X^{(k)} ~ N(μ^{(k)}, σ^{(k)}I); we establish similar results for multivariate normal distributions with arbitrary covariance structures. We make the following assumption.

Condition 9

For k, k′ = 1, …, K, let

$$\bar\lambda_{(k)} = p^{-1}\sum_{j=1}^p\lambda_{(k),j},$$
$$\mu(k, k') = p^{-1}\Big\|\Lambda_{k,k'}^{-1/2}Q_{k,k'}^T\big[\Sigma^{(k)} + \Sigma^{(k')}\big]^{1/2}\big[\mu^{(k)} - \mu^{(k')}\big]\Big\|^2,$$

where λ_{(k),j} are the eigenvalues of Σ^{(k)} with λ_{(k),1} ≤ λ_{(k),2} ≤ … ≤ λ_{(k),p}, and the spectral decomposition of Σ^{(k)} + Σ^{(k′)} is Σ^{(k)} + Σ^{(k′)} = Q_{k,k′}Λ_{k,k′}Q_{k,k′}^T. It holds that

$$\mu(k,k') > 2\min\big\{\bar\lambda_{(k)},\,\bar\lambda_{(k')}\big\} - \lambda_{(k),p} - \lambda_{(k'),p}, \qquad k \neq k',\ k, k' = 1,\dots,K,$$
$$0 < c_{10} \le \lambda_{(k),j} \le c_{11} < \infty, \qquad \|\mu^{(k)}\|_\infty \le c_{11}, \qquad k = 1,\dots,K,\ j = 1,\dots,p,$$

for constants c_{10} and c_{11}.

Under the normality assumption, the following result shows that the probability of successful clustering converges to 1 as p, n → ∞.

Theorem 3

Suppose that X^{(k)}, k = 1, …, K, is normally distributed. Under Condition 9,

$$P\big(\hat I_i = I_i,\ i = 1,\dots,n\big) \to 1, \qquad (7)$$

as n, p → ∞.

The proof of Theorem 3 generalizes recent results of Borysov et al. [1] to the case of arbitrary covariance structures. A key component of the proof is a new bound on the ℓ_2 norm of a multivariate normal random variable with arbitrary mean and covariance matrix, established in Lemma 14. The proof of the lemma uses new concentration inequalities for high-dimensional problems from [2], and may be of independent interest.

Note that the consistent estimation of subpopulation memberships (7) implies that the estimated hierarchy among clusters also matches the true hierarchy. Thus, with successful clustering established in Theorem 3, theoretical properties of Ω̂HC, ρn naturally follow.

Theorem 4

Suppose that X(k), k = 1, …, K, is normally distributed and that Condition 9 holds. (i) Under the conditions of Theorem 1,

$$\sum_{k=1}^K\big\|\hat\Omega_{HC,\rho_n}^{(k)} - \Omega_0^{(k)}\big\|_2 = O_P\Bigg(\sqrt{\frac{\lambda_\Theta^4\,(s+1)\log p}{n}}\Bigg).$$

Suppose, moreover, that the conditions of Theorem 2 hold. Then

$$\sum_{k=1}^K\big\|\hat\Omega_{HC,\rho_n}^{(k)} - \Omega_0^{(k)}\big\|_F = O_P\Bigg(\sqrt{\frac{\min\big\{\lambda_\Theta^4\,p(s+1),\ \kappa_\Gamma^2(s+p)\big\}\log p}{n}}\Bigg),$$
$$\sum_{k=1}^K\big\|\hat\Omega_{HC,\rho_n}^{(k)} - \Omega_0^{(k)}\big\|_2 = O_P\Bigg(\sqrt{\frac{\min\big\{\lambda_\Theta^4(s+1),\ \kappa_\Gamma^2 d^2\big\}\log p}{n}}\Bigg),$$
$$\sum_{k=1}^K\big\|\hat\Omega_{HC,\rho_n}^{(k)} - \Omega_0^{(k)}\big\|_\infty = O_P\Bigg(\sqrt{\frac{\kappa_\Gamma^2\log p}{n}}\Bigg).$$

(ii) Under the conditions of Theorem 2,

$$P\big(\mathcal{M}(\hat\Omega_{HC,\rho_n}, \Omega_0)\big) \to 1, \qquad \text{as } n, p \to \infty.$$

5. Algorithms

We develop an alternating direction method of multipliers (ADMM) algorithm to efficiently solve the convex optimization problem (3).

Let A^{(k)} = (a_{ij}^{(k)})_{i,j=1}^p ∈ ℝ^{p×p}, B^{(k)} = (b_{ij}^{(k)})_{i,j=1}^p ∈ ℝ^{p×p}, C^{(k)} = (c_{ij}^{(k)})_{i,j=1}^p ∈ ℝ^{p×p} and D^{(k)} = (d_{ij}^{(k)})_{i,j=1}^p ∈ ℝ^{p×p}, k = 1, …, K. Define A = (A^{(1)}, …, A^{(K)}), B = (B^{(1)}, …, B^{(K)}), C = (C^{(1)}, …, C^{(K)}), D = (D^{(1)}, …, D^{(K)}), and c_{ij} ≡ (c_{ij}^{(1)}, …, c_{ij}^{(K)})^T ∈ ℝ^K, d_{ij} ≡ (d_{ij}^{(1)}, …, d_{ij}^{(K)})^T ∈ ℝ^K, e_{C,ij} ≡ (e_{C,ij}^{(1)}, …, e_{C,ij}^{(K)})^T ∈ ℝ^K, where E_C^{(k)} = (e_{C,ij}^{(k)})_{i,j=1}^p.

To facilitate the computation, we consider instead a perturbed graph Laplacian L_ε = L + εI, where I is the identity matrix and ε > 0 is a small perturbation. The difference between the solutions to the original and modified optimization problems is largely negligible for small ε; however, the positive definiteness of L_ε results in more efficient computation. A similar idea was used in Guo et al. [9] and Rothman et al. [26] to avoid division by zero. The optimization problem (3) with L replaced by L_ε can then be written as

$$\text{minimize}\quad \sum_{k=1}^K\frac{n_k}{n}\Big(\operatorname{tr}\big(\Psi_n^{(k)}A^{(k)}\big) - \log\det\big(A^{(k)}\big)\Big) + \rho_n\sum_{k=1}^K\big\|B^{(k)}\big\|_1 + \rho_n\rho_2\sum_{i\neq j}\big(c_{ij}^T L_\varepsilon c_{ij}\big)^{1/2} \qquad (8)$$
$$\text{s.t.}\quad A^{(k)} = D^{(k)},\quad B^{(k)} = D^{(k)},\quad L_\varepsilon c_{ij} = L_\varepsilon d_{ij},\qquad k = 1,\dots,K,\ i, j = 1,\dots,p.$$

Using Lagrange multipliers E = (E_A, E_B, E_C), where E_A = (E_A^{(1)}, …, E_A^{(K)}) with E_A^{(k)} ∈ ℝ^{p×p}, k = 1, …, K, E_B = (E_B^{(1)}, …, E_B^{(K)}) with E_B^{(k)} ∈ ℝ^{p×p}, k = 1, …, K, and E_C = (E_C^{(1)}, …, E_C^{(K)}) with E_C^{(k)} ∈ ℝ^{p×p}, k = 1, …, K, the augmented Lagrangian in scaled form is given by

$$\begin{aligned} L_\varrho(A, B, C, D, E) \equiv{}& n^{-1}\sum_{k=1}^K n_k\Big(\operatorname{tr}\big(\Psi_n^{(k)}A^{(k)}\big) - \log\det\big(A^{(k)}\big)\Big) + \rho_n\sum_{k=1}^K\big\|B^{(k)}\big\|_1 + \rho_n\rho_2\sum_{i\neq j}\big(c_{ij}^T L_\varepsilon c_{ij}\big)^{1/2} \\ &+ \frac{\varrho}{2}\sum_{k=1}^K\big\|A^{(k)} - D^{(k)} + E_A^{(k)}\big\|_F^2 + \frac{\varrho}{2}\sum_{k=1}^K\big\|B^{(k)} - D^{(k)} + E_B^{(k)}\big\|_F^2 + \frac{\varrho}{2}\sum_{i,j}\big\|L_\varepsilon^{1/2}c_{ij} - L_\varepsilon^{1/2}d_{ij} + e_{C,ij}\big\|^2. \end{aligned}$$

Here ϱ > 0 is a regularization parameter and L_ε^{1/2} is the square root of L_ε, i.e., L_ε = (L_ε^{1/2})^T L_ε^{1/2}.

The proposed ADMM algorithm is as follows.

  • Step 0. Initialize A^{(k)} = A^{(k),0}, B^{(k)} = B^{(k),0}, C^{(k)} = C^{(k),0}, D^{(k)} = D^{(k),0}, E_A^{(k)} = E_A^{(k),0}, E_B^{(k)} = E_B^{(k),0}, E_C^{(k)} = E_C^{(k),0}, and choose a scalar ϱ > 0.

  • Step m. Given the (m − 1)th estimates,
    • (Update A^{(k)}) Find A^m minimizing $-\tilde\ell_n(A) + (\varrho/2)\sum_{k=1}^K\|A^{(k)} - D^{(k),m-1} + E_A^{(k),m-1}\|_F^2$; this subproblem has a closed-form solution via an eigendecomposition (see pages 46–47 of Boyd et al. [3] for details).
    • (Update B^{(k)}) Compute $B_{ij}^{(k),m} = S_{\rho_n/\varrho}\big(D_{ij}^{(k),m-1} - E_{B,ij}^{(k),m-1}\big)$, where S_y(x) is x − y if x > y, 0 if |x| ≤ y, and x + y if x < −y.
    • (Update C^{(k)}) For (x)_+ = max{x, 0}, compute
      $$c_{ij}^m = \bigg(1 - \frac{\rho_n\rho_2}{\varrho\,\big\|L_\varepsilon^{1/2}d_{ij}^{m-1} - e_{C,ij}^{m-1}\big\|}\bigg)_+\Big(d_{ij}^{m-1} - L_\varepsilon^{-1/2}e_{C,ij}^{m-1}\Big).$$
    • (Update D^{(k)}) Compute
      $$d_{ij}^m = (2I + L_\varepsilon)^{-1}\Big\{a_{ij}^m + e_{A,ij}^{m-1} + b_{ij}^m + e_{B,ij}^{m-1} + L_\varepsilon c_{ij}^m + \big(L_\varepsilon^{1/2}\big)^T e_{C,ij}^{m-1}\Big\}.$$
    • (Update E_A) Compute $E_A^{(k),m} = E_A^{(k),m-1} + A^{(k),m} - D^{(k),m}$.
    • (Update E_B) Compute $E_B^{(k),m} = E_B^{(k),m-1} + B^{(k),m} - D^{(k),m}$.
    • (Update E_C) Compute $e_{C,ij}^{m} = e_{C,ij}^{m-1} + L_\varepsilon^{1/2}\big(c_{ij}^{m} - d_{ij}^{m}\big)$.
  • Repeat the iteration until the maximum of the errors $r_A^{(k),m} = A^{(k),m} - D^{(k),m}$, $r_B^{(k),m} = B^{(k),m} - D^{(k),m}$, $r_C^{(k),m} = C^{(k),m} - D^{(k),m}$ and $s^{(k),m} = \varrho\big(D^{(k),m} - D^{(k),m-1}\big)$ in the Frobenius norm is less than a specified tolerance level; a sketch of these updates in code is given below.
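The following is a minimal sketch of these updates in code, assuming the sample correlation matrices, the sample sizes and a perturbed Laplacian L_ε are supplied; the closed-form A-update via eigendecomposition follows the standard ADMM treatment of the graphical-lasso subproblem referenced above, and details such as initialization, the diagonal handling and the stopping rule are illustrative rather than the authors' reference implementation.

```python
import numpy as np

def lasich_admm(Psi, n_k, L_eps, rho_n, rho_2, varrho=1.0, n_iter=500, tol=1e-4):
    """Psi: (K, p, p) sample correlation matrices; n_k: length-K sample sizes;
    L_eps: (K, K) perturbed (normalized) Laplacian; returns the consensus variable D."""
    K, p, _ = Psi.shape
    w = np.asarray(n_k, dtype=float) / np.sum(n_k)          # weights n_k / n
    lam, Q = np.linalg.eigh(L_eps)
    Lsqrt = (Q * np.sqrt(lam)) @ Q.T                         # symmetric square root of L_eps
    Linv_sqrt = (Q * (1.0 / np.sqrt(lam))) @ Q.T
    M2 = np.linalg.inv(2.0 * np.eye(K) + L_eps)              # (2I + L_eps)^{-1}
    D = np.stack([np.eye(p)] * K)
    A, B, C = D.copy(), D.copy(), D.copy()
    EA, EB, EC = np.zeros((K, p, p)), np.zeros((K, p, p)), np.zeros((K, p, p))
    off = ~np.eye(p, dtype=bool)

    for _ in range(n_iter):
        D_old = D
        # A-update: closed-form solution of the smooth subproblem via eigendecomposition
        A = np.empty((K, p, p))
        for k in range(K):
            M = varrho * (D[k] - EA[k]) - w[k] * Psi[k]
            mu, U = np.linalg.eigh((M + M.T) / 2.0)
            a = (mu + np.sqrt(mu ** 2 + 4.0 * varrho * w[k])) / (2.0 * varrho)
            A[k] = (U * a) @ U.T
        # B-update: soft-thresholding of the off-diagonal entries (diagonal unpenalized)
        B = D - EB
        B[:, off] = np.sign(B[:, off]) * np.maximum(np.abs(B[:, off]) - rho_n / varrho, 0.0)
        # C-update: groupwise shrinkage of each K-vector c_ij
        V = np.einsum('kl,lij->kij', Lsqrt, D) - EC          # L_eps^{1/2} d_ij - e_C,ij
        norms = np.maximum(np.linalg.norm(V, axis=0), 1e-12)
        shrink = np.maximum(1.0 - rho_n * rho_2 / (varrho * norms), 0.0)
        shrink[np.eye(p, dtype=bool)] = 1.0                  # no shrinkage on diagonal entries
        C = shrink * (D - np.einsum('kl,lij->kij', Linv_sqrt, EC))
        # D-update: (2I + L_eps)^{-1} applied to each K-vector of right-hand sides
        R = (A + EA + B + EB + np.einsum('kl,lij->kij', L_eps, C)
             + np.einsum('kl,lij->kij', Lsqrt, EC))
        D = np.einsum('kl,lij->kij', M2, R)
        # scaled dual updates
        EA = EA + A - D
        EB = EB + B - D
        EC = EC + np.einsum('kl,lij->kij', Lsqrt, C - D)
        res = max(np.linalg.norm(A - D), np.linalg.norm(B - D),
                  np.linalg.norm(C - D), varrho * np.linalg.norm(D - D_old))
        if res < tol:
            break
    return D
```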

The proposed ADMM algorithm facilitates the estimation of parameters of moderately large problems. However, parameter estimation in high dimensions can be computationally challenging. We next present a result that determines whether the solution to the optimization problem (3), for given values of tuning parameters ρn, ρ2, is block diagonal. (Note that this result is an exact statement about the solution to (3), and does not assume block sparsity of the true precision matrices; see Theorems 1 and 2 of Danaher et al. [6] for similar results.) More specifically, the condition in Proposition 1 provides a very fast check, based on the entries of the empirical correlation matrices Ψn(k), k = 1, …, K, to identify the block sparsity pattern in Ω^ρn(k), k = 1, …, K after some permutation of the features.

Let U_L = [u_1 … u_K] ∈ ℝ^{K×K}, where u_1, …, u_K are the eigenvectors of L corresponding to the eigenvalues 0, λ_{L,2}, …, λ_{L,K}. Define Λ_L^{1/2} as the diagonal matrix with diagonal elements 0, λ_{L,2}^{1/2}, …, λ_{L,K}^{1/2}.

Proposition 1

The solution Ω̂_{ρ_n}^{(k)}, k = 1, …, K, to the optimization problem (3) consists of block diagonal matrices with the same block structure diag(Ω_1, …, Ω_B) among all groups if and only if, for Ψ_{n,ij} = (ψ_{n,ij}^{(1)}, …, ψ_{n,ij}^{(K)})^T,

$$\min_{\upsilon\in[-1,1]^K}\ \bigg\|\Lambda_L^{1/2}U_L^T\Big(\frac{n_k}{n}\Psi_{n,ij} - \rho_n\upsilon\Big)\bigg\| \le \rho_n\rho_2, \qquad (9)$$

for all i, j such that the (i, j) element is outside the blocks.

The proof of the Proposition is similar to that of Theorem 1 of Danaher et al. [6] and is hence omitted. The condition in (9) can be easily verified by applying quadratic programming to the left hand side of the inequality. The solution to (3) can then be equivalently found by solving the optimization problem separately for each of the blocks; this can result in significant computational advantages for moderate to large values of ρ_nρ_2.
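Under the reconstruction of (9) given above, the screening rule can be sketched as follows: for each off-diagonal pair (i, j), the left hand side of (9) is computed by bound-constrained least squares, pairs for which (9) fails are linked, and connected components of the resulting feature graph give the common block structure. All names are illustrative, and the sketch should be read as a heuristic rendering of Proposition 1 rather than a verified implementation.

```python
import numpy as np
from scipy.optimize import lsq_linear
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def block_structure(Psi, n_k, L, rho_n, rho_2):
    """Psi: (K, p, p) sample correlation matrices; L: (K, K) graph Laplacian."""
    K, p, _ = Psi.shape
    lam, U = np.linalg.eigh(L)                               # eigenvalues 0 <= ... <= lam_K
    T = np.sqrt(np.maximum(lam, 0.0))[:, None] * U.T         # Lambda_L^{1/2} U_L^T
    w = (np.asarray(n_k, dtype=float) / np.sum(n_k))[:, None, None]
    Psi_w = w * Psi                                          # kth component: (n_k / n) psi_{n,ij}^{(k)}
    adj = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(i + 1, p):
            # minimize || T (Psi_w[:, i, j] - rho_n * v) || over v in [-1, 1]^K
            res = lsq_linear(rho_n * T, T @ Psi_w[:, i, j], bounds=(-1.0, 1.0))
            if np.sqrt(2.0 * res.cost) > rho_n * rho_2:      # (9) fails: i and j stay in one block
                adj[i, j] = adj[j, i] = 1
    _, labels = connected_components(csr_matrix(adj), directed=False)
    return labels                                            # block label for each feature
```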

6. Numerical Results

6.1. Simulation Experiments

We compare our method with four existing methods, graphical lasso, the method of Guo et al. [9], FGL and GGL of Danaher et al. [6]. For graphical lasso, estimation was carried out separately for each group with the same regularization parameter.

Our simulation setting is motivated by estimation of gene networks for healthy subjects and patients with two similar diseases caused by inactivation of certain biological pathways. We consider K = 3 groups with sample sizes n = (50, 100, 50) and dimension p = 100. Data are generated from multivariate normal distributions N(μ^{(k)}, (Ω_0^{(k)})^{-1}), k = 1, 2, 3; all precision matrices Ω_0^{(k)} are block diagonal with 4 blocks of equal size.

To create the precision matrices, we first generated a graph with 4 components of equal size, each either an Erdős-Rényi or a scale free graph, with 95 total edges. We randomly assigned Unif((−.7, −.5) ∪ (.5, .7)) values to the nonzero entries of the corresponding adjacency matrix A and obtained a matrix Ã. We then added 0.1 to the diagonal of à to obtain a positive definite matrix Ω_0^{(1)}. For each of subpopulations 2 and 3, we removed one of the components of the graph by setting the corresponding off-diagonal entries of à to zero, and added a perturbation from Unif(−.2, .2) to the nonzero entries in Ã. Positive definite matrices Ω_0^{(2)} and Ω_0^{(3)} were obtained by adding 0.1 to the diagonal elements. All partial correlations range from .28 to .54 in absolute value. A similar setting was considered in Danaher et al. [6], where the graph included more components, but no perturbation was added. We consider two simulation settings, with known and unknown subpopulation network G.
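A sketch of this simulation design is given below; the Erdős-Rényi graph generation, the uniform ranges and the removal of one component per disease group follow the description above, while other details (which component is removed for each group, and loading the diagonal until positive definiteness holds) are illustrative choices rather than the authors' exact procedure.

```python
import numpy as np

def er_edges(b, m, rng):
    # choose m distinct edges uniformly at random among the b*(b-1)/2 pairs
    pairs = [(u, v) for u in range(b) for v in range(u + 1, b)]
    return [pairs[t] for t in rng.choice(len(pairs), size=m, replace=False)]

def simulate_precisions(p=100, n_blocks=4, n_edges=95, seed=0):
    rng = np.random.default_rng(seed)
    b = p // n_blocks
    A = np.zeros((p, p))
    for blk in range(n_blocks):
        for (u, v) in er_edges(b, n_edges // n_blocks, rng):
            val = rng.choice([-1, 1]) * rng.uniform(0.5, 0.7)   # Unif((-.7,-.5) U (.5,.7))
            A[blk * b + u, blk * b + v] = A[blk * b + v, blk * b + u] = val
    def make_pd(M):                      # diagonal loading to guarantee positive definiteness
        return M + (0.1 + max(0.0, -np.linalg.eigvalsh(M).min())) * np.eye(p)
    Omegas = [make_pd(A)]
    for removed in (1, 2):               # groups 2 and 3 each lose one graph component
        Ak = A.copy()
        idx = slice(removed * b, (removed + 1) * b)
        Ak[idx, idx] = 0.0
        mask = Ak != 0
        Ak[mask] += rng.uniform(-0.2, 0.2, size=mask.sum())     # perturb nonzero entries
        Ak = (Ak + Ak.T) / 2.0
        Omegas.append(make_pd(Ak))
    return Omegas
```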

6.1.1. Known subpopulation network G

In this case, we set μ(k) = 0, k = 1, 2, 3 and use the graph in Figure 1 as the subpopulation network.

Figures 3a,c show the average number of true positive edges versus the average number of detected edges over 50 simulated data sets. Results for multiple choices of the second tuning parameter are presented for FGL, GGL and LASICH. It can be seen that in both cases, LASICH outperforms other methods, when using relatively large values of ρ2. Smaller values of ρ2, on the other hand, give similar results as other methods of joint estimation of multiple graphical models. These results indicate that, when the available subpopulation network is informative, the Laplacian shrinkage constraint can result in significant improvement in estimation of the underlying network.

Fig 3.
Simulation results for joint estimation of multiple precision matrices with known subpopulation memberships. Results show the average number of true positive edges (a & c) and estimation error, in Frobenius norm (b & d) over 50 data sets with n = 200 multivariate normal observations generated from a graphical model with p = 100 features; results in top row (a & b) are for an Erdős-Rényi graph and those in bottom row (c & d) are for a scale free (power-law) graph.

Figures 3b,d show the estimation error, in Frobenius norm, versus the number of detected edges. LASICH has larger errors when the estimated graphs have very few edges, but its error decreases as the number of detected edges increases, eventually yielding smaller errors than other methods. The non-convex penalty of Guo et al. [9] performs well in terms of estimation error, although determining the appropriate range of the tuning parameter for this method may be difficult.

6.1.2. Unknown subpopulation network G

In this case, the subpopulation memberships and the subpopulation network G are estimated based on hierarchical clustering. We randomly generated μ^{(1)} from a multivariate normal distribution with covariance matrix σ^2I. For subpopulations 2 and 3, the elements of μ^{(1)} corresponding to the empty components of the graph were set to zero to obtain μ^{(2)} and μ^{(3)}. Hierarchical clustering with complete linkage was applied to the data to obtain the dendrogram; we took the reciprocals of the distances in the dendrogram to obtain the similarity weights used in the graph Laplacian.

Figure 4 compares the performance of HC-LASICH, in terms of support recovery, to competing methods in the setting where the subpopulation memberships and network are estimated from data (Section 4). Here the differences in subpopulation means μ(k, k′) are set up to evaluate the effect of clustering accuracy. The four settings considered correspond to average Rand indices of .6, .7, .8 and .9 across 50 data sets, respectively. Here the second tuning parameter for HC-LASICH, GGL and FGL is chosen according to the best performing model in Figure 3. As expected, changing the mean structure, and correspondingly the Rand index, does not affect the performance of the other methods. The results indicate that, as long as features can be clustered in a meaningful way, HC-LASICH can result in improved support recovery. Data-adaptive choices of the tuning parameter corresponding to the Laplacian shrinkage penalty may result in further improvements in the performance of HC-LASICH. However, we do not pursue such choices here.

Fig 4.
Simulation results for joint estimation of multiple precision matrices with unknown subpopulation memberships. Results show the average number of true positive edges over 50 data sets with n = 200 multivariate normal observations generated from a graphical model over an Erdős-Rényi graph with p = 100 features. Results for HC-LASICH and FGL/GGL correspond to the best choice of the second tuning parameter among those in Figure 3a. The Rand indices for HC-LASICH are averages over 50 generated data sets.

6.2. Genetic Networks of Cancer Subtypes

Breast cancer is heterogeneous, with multiple clinically verified subtypes [22]. Jönsson et al. [12] used copy number variation and gene expression measurements to identify new subtypes of breast cancer and showed that the identified subtypes have distinct clinical outcomes. The genetic networks of these different subtypes are expected to share similarities, but to also have unique features. Moreover, the similarities among the networks are expected to corroborate with the clustering of the subtypes based on their molecular profiles. We applied the network estimation methods of Section 6.1 to a subset of the microarray gene expression data from Jönsson et al. [12], containing data for 218 patients classified into three previously known subtypes of breast cancer: 46 Luminal-simple, 105 Luminal-complex and 67 Basal-complex samples. For ease of presentation, we focused on 50 genes with largest variances. The hierarchical clustering results of Jönsson et al. [12], reproduced in Figure 5 for the above three subtypes, were used to identify the subpopulation membership; reciprocals of distances in the dendrogram were used to define similarities among subtypes used in the graph Laplacian penalty.

Fig 5.
Dendrogram of hierarchical clustering of three subtypes of breast cancer from Jönsson et al. (2010), along with estimated gene networks using graphical lasso (Glasso), the method of Guo et al., FGL and GGL of Danaher et al. (2014), and LASICH. Blue edges are common to the Luminal subtypes and black edges are shared by all three subtypes; condition-specific edges are drawn in gray.

To facilitate the comparison, tuning parameters were selected such that the estimated networks of the three subtypes using each method contained a total of 150 edges. For methods with two tuning parameters, pairs of tuning parameters were determined using the Bayesian information criterion (BIC), as described in Guo et al. [9]. Estimated genetic networks of the three cancer subtypes are shown in Figure 5. For each method, edges common in all three subtypes, those common in Luminal subtypes and subtype specific edges are distinguished.

In this example, separate graphical lasso estimates and FGL/GGL estimates are two extremes. Estimated network topologies from graphical lasso vary from subtype to subtype, and common structures are obscured; this variability may be because similarities among subtypes are not incorporated in the estimation. In contrast, FGL and GGL give identical networks for all subtypes, perhaps because both methods encourage the estimated networks of all subtypes to be equally similar. Intermediate results are obtained using LASICH and the method of Guo et al. [9]. The main difference between these two methods is that Guo et al. [9] finds more edges common to all three subtypes, whereas LASICH finds more edges common to the Luminal subtypes. This difference is likely because LASICH prioritizes the similarity between the Luminal subtypes via graph Laplacian while the method of Guo et al. [9] does not distinguish between the three subtypes. The above example highlights the potential advantages of LASICH in providing network estimates that better corroborate with the known hierarchy of subpopulations.

7. Discussion

We introduced a flexible method for joint estimation of multiple precision matrices, called LASICH, which is particularly suited for settings where observations belong to three or more subpopulations. In the proposed method, the relationships among heterogeneous subpopulations are captured by a weighted network, whose nodes correspond to subpopulations, and whose edges capture their similarities. As a result, LASICH can model complex relationships among subpopulations, defined, for example, based on hierarchical clustering of samples.

We established asymptotic properties of the proposed estimator in the setting where the relationship among subpopulations is externally defined. We also extended the method to the setting of unknown relationships among subpopulations, by showing that clusters estimated from the data can accurately capture the true relationships. The proposed method generalizes existing convex penalties for joint estimation of graphical models, and can be particularly advantageous in settings with multiple subpopulations.

A particularly appealing feature of the proposed extension of LASICH is that it can also be applied in settings where the subpopulation memberships are unknown. The latter setting is closely related to estimation of precision matrices for mixture of Gaussian distributions. Both approaches have limitations and drawbacks: on the one hand, the extension of LASICH to unknown subpopulation memberships requires certain assumptions on differences of population means (Section 4). On the other hand, estimation of precision matrices for mixture of Gaussians is computationally challenging, and known rates of convergence of parameter estimation in mixture distributions (e.g. in Städler et al. [29]) are considerably slower.

Throughout this paper we assumed that the number of subpopulations is known. Extensions of this method to estimation of graphical models in populations with an unknown number of subpopulations would be particularly interesting for analysis of genetic networks associated with heterogeneity in cancer samples, and are left for future research.

Acknowledgments

This work was partially supported by NSF grants DMS-1161565 & DMS-1561814 to AS.

Appendix

8. Appendix: Proofs and Technical Details

We denote the true inverse correlation matrices by Θ_0 = (Θ_0^{(1)}, …, Θ_0^{(K)}) and the true correlation matrices by Ψ_0 = (Ψ_0^{(1)}, …, Ψ_0^{(K)}), where Θ_0^{(k)} ≡ (Ψ_0^{(k)})^{-1} = (θ_{0,ij}^{(k)})_{i,j=1}^p and Ψ_0^{(k)} = (ψ_{0,ij}^{(k)})_{i,j=1}^p. The estimates of the population parameters are denoted by Σ̂_n^{(k)} = (σ̂_{ij})_{i,j=1}^p, Ψ_n^{(k)} = (ψ_{n,ij})_{i,j=1}^p, and Θ̂_{ρ_n}^{(k)} = (θ̂_{ρ_n,ij}^{(k)})_{i,j=1}^p. For a vector x = (x_1, …, x_p)^T and J ⊂ {1, …, p}, we denote x_J = (x_j, j ∈ J)^T. For a matrix A, λ_k(A) is the kth smallest eigenvalue and A⃗ is the vectorization of A. For J ⊂ {(i, j) : i, j = 1, …, p} and A ∈ ℝ^{p×p}, A⃗_J is the vector in ℝ^{|J|} obtained by removing the elements corresponding to (i, j) ∉ J from A⃗. A zero-filled matrix A_J ∈ ℝ^{p×p} is obtained from A by replacing a_{ij} by 0 for (i, j) ∉ J.

8.1. Consistency in Matrix Norms

Theorem 1 is a direct consequence of the following result.

Lemma 1
  1. Suppose that Condition 1 holds. Let γ ∈ (0, mink πk) be arbitrary. For
    $n\ge\max\Bigl\{\tfrac{6}{\gamma}\log p,\ \tfrac{2^{15}3^{3}C_1^{2}}{\gamma^{3}}\bigl(1+4c_1^{2}\bigr)^{2}\max_{k,i}\bigl\{\sigma_{ii}^{(k)}\bigr\}^{2}\lambda_\Theta^{4}\bigl(1+\rho_2\|L\|_2^{1/2}\bigr)^{2}s\log p\Bigr\}$
    and $\rho_n=2^{3}\sqrt{6}\,C_1\bigl(1+4c_1^{2}\bigr)\gamma^{-1/2}\max_{k,i}\sigma_{ii}^{(k)}\sqrt{\log p/n}$, we have with probability $(1-2K/p)\bigl(1-2K\exp\bigl(-2n(\min_k\pi_k-\gamma)^{2}\bigr)\bigr)$ that
    $\sum_{k=1}^K\bigl\|\hat\Theta_{\rho_n}^{(k)}-\Theta_0^{(k)}\bigr\|_F\le\frac{2^{15/2}3^{3/2}C_1}{\gamma^{3/2}}\bigl(1+4c_1^{2}\bigr)\max_{k,i}\sigma_{ii}^{(k)}\,\lambda_\Theta^{2}\bigl(1+\rho_2\|L\|_2^{1/2}\bigr)\sqrt{\frac{s\log p}{n}}.$
  2. Suppose that Condition 2 holds with $p\le c_7n^{c_2}$ and $c_2,c_3,c_7>0$. For $\rho_n=C_1K\delta_n$ satisfying
    $2^{4}3^{2}C_1\rho_n^{2}\gamma^{-2}s\bigl(1+\rho_2\|L\|_2^{1/2}\bigr)^{2}\lambda_\Theta^{4}\le1/4$
    and $\tau>\bigl(27+2\sqrt{3\bigl(1+2^{4}3^{2}c_4\max_{k,i}\{\sigma_{ii}^{(k)}\}^{2}\bigr)}\bigr)\big/\bigl(9c_4\max_{k,i}\{\sigma_{ii}^{(k)}\}^{2}\bigr)$, we have with probability $\bigl(1-2K\exp\bigl(-2n(\min_k\pi_k-\gamma)^{2}\bigr)\bigr)-\nu_n$ that
    $\sum_{k=1}^K\bigl\|\hat\Theta_{\rho_n}^{(k)}-\Theta_0^{(k)}\bigr\|_F\le2^{4}3^{3/2}C_1\gamma^{-2}K\bigl(1+\rho_2\|L\|_2^{1/2}\bigr)\lambda_\Theta^{2}s^{1/2}\delta_n,$
    where
    δnmaxk,i{σii(k)}2c4(4+τ)γ1log pn+(1+2maxk,i|μ(k),i|)maxk,i{σii(k)}2c4(4+τ)γ1log pn+2maxk,i,j𝔼|X(k),iX(k),j|I(|X(k),iX(k),j|γnlog p)+4{maxk,i𝔼|X(k),i|I(|X(k),i|γnlog p)}2+2(1+2maxk,i|μ(k),i|)maxk,i𝔼|X(k),i|I(|X(k),i|γnlog p)=O(log pn),
    and
    νn3c7c4 maxk,i{σii(k)}2(log p)c2+c3+1γc3nc3+c7c4 maxk,iσii(k)(log p)2(c2+c3+1)nc2+c3+1+8p2 exp (maxk,iσii(k)c4(4+γ) log p2 maxk,iσii(k)c4+maxk,i{σii(k)}2c4(64+16τ)/3)=o(1).

Our proofs adopt several tools from Negahban et al. [20]. Note, however, that our penalty does not penalize the diagonal elements and is hence a seminorm; thus, their results do not apply directly to our case. We first introduce some additional notation. To treat multiple precision matrices in a unified way, our parameter space is defined to be the set ℝ̃(pK)×(pK) of (pK) × (pK) symmetric block diagonal matrices, where the kth diagonal block is a p × p matrix corresponding to the precision matrix of subpopulation k. We write A ∈ ℝ̃(pK)×(pK) for a K-tuple $(A^{(k)})_{k=1}^K$ of diagonal blocks $A^{(k)}\in\mathbb R^{p\times p}$. Note that for A, B ∈ ℝ̃(pK)×(pK), $\langle A,B\rangle_{pK}=\sum_{k=1}^K\langle A^{(k)},B^{(k)}\rangle_p$, where 〈·, ·〉p is the trace inner product on ℝp×p. In this parameter space, we evaluate the following map from ℝ̃(pK)×(pK) to ℝ given by

$f(\Delta)=-\tilde\ell_n(\Theta_0+\Delta)+\tilde\ell_n(\Theta_0)+\rho_n\bigl\{r(\Theta_0+\Delta)-r(\Theta_0)\bigr\},$

where r : ℝ̃(pK)×(pK) ↦ ℝ is given by $r(\Theta)=\|\Theta\|_1+\rho_2\|\Theta\|_L$. This map provides information on the behavior of our criterion function in a neighborhood of Θ0. A similar map with a different penalty was studied in Rothman et al. [26]. A key observation is that $f(0)=0$ and $f(\hat\Delta_n)\le0$, where $\hat\Delta_n=\hat\Theta_{\rho_n}-\Theta_0$.
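As a small numerical illustration of the block-diagonal parameter space introduced above, the sketch below (with arbitrary made-up blocks, which are assumptions for illustration only) checks that the trace inner product of two stacked block-diagonal matrices agrees with the sum of the blockwise trace inner products.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
p, K = 4, 3

# Arbitrary symmetric blocks standing in for the K diagonal blocks.
A_blocks = [(M + M.T) / 2 for M in rng.normal(size=(K, p, p))]
B_blocks = [(M + M.T) / 2 for M in rng.normal(size=(K, p, p))]

A = block_diag(*A_blocks)   # element of the block-diagonal space of size (pK) x (pK)
B = block_diag(*B_blocks)

lhs = np.trace(A.T @ B)                                   # <A, B> on the stacked matrices
rhs = sum(np.trace(a.T @ b) for a, b in zip(A_blocks, B_blocks))
assert np.isclose(lhs, rhs)
```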

The following lemma provides a non-asymptotic bound on the Frobenius norm of Δ (see Lemma 4 in Negahban et al. [21] for a similar lemma in a different context). Let $S=\bigcup_{k=1}^KS^{(k)}$ be the union of the supports of the $\Omega_0^{(k)}$. Define the model subspace $\mathcal M=\{\Omega\in\tilde{\mathbb R}^{(pK)\times(pK)}:\omega_{ij}^{(k)}=0,\ (i,j)\notin S,\ k=1,\ldots,K\}$ and its orthocomplement $\mathcal M^\perp=\{\Omega\in\tilde{\mathbb R}^{(pK)\times(pK)}:\omega_{ij}^{(k)}=0,\ (i,j)\in S,\ k=1,\ldots,K\}$ under the trace inner product in ℝ̃(pK)×(pK). For $A=(a_{ij})_{i,j=1}^{pK}\in\tilde{\mathbb R}^{(pK)\times(pK)}$, we write $A=A_{\mathcal M}+A_{\mathcal M^\perp}$, where $A_{\mathcal M}$ and $A_{\mathcal M^\perp}$ are the projections of A onto ℳ and ℳ⊥ in the Frobenius norm, respectively. In other words, the (i, j)-element of $A_{\mathcal M}$ is $a_{ij}$ if (i, j) ∈ S and zero otherwise, and the (i, j)-element of $A_{\mathcal M^\perp}$ is $a_{ij}$ if (i, j) ∉ S and zero otherwise. Note that Θ0 ∈ ℳ. Define the set $\mathcal C=\{\Delta\in\tilde{\mathbb R}^{(pK)\times(pK)}:r(\Delta_{\mathcal M^\perp})\le3r(\Delta_{\mathcal M})\}$.

Lemma 2

Let ε > 0 be arbitrary. Suppose $\rho_n\ge2\max_{1\le k\le K}\|\hat\Psi_n^{(k)}-\Psi_0^{(k)}\|_\infty$. If $f(\Delta)>0$ for all elements $\Delta\in\mathcal C\cap\{\Delta\in\tilde{\mathbb R}^{(pK)\times(pK)}:\|\Delta\|_F=\varepsilon\}$, then $\|\hat\Delta_n\|_F\le\varepsilon$.

Proof

We first show that Δ̂n ∈ 𝒞. We have by the convexity of −ℓ̃n(Θ) that

$-\tilde\ell_n(\Theta_0+\hat\Delta_n)+\tilde\ell_n(\Theta_0)\ge-\bigl|\bigl\langle\nabla\tilde\ell_n(\Theta_0),\hat\Delta_n\bigr\rangle\bigr|.$

It follows from Lemma 3(iv) with our choice of ρn that the right hand side of the inequality is further bounded below by $-2^{-1}\rho_n\bigl(r(\hat\Delta_{n,\mathcal M})+r(\hat\Delta_{n,\mathcal M^\perp})\bigr)$. Applying Lemma 3(iii), we obtain

$0\ge f(\hat\Delta_n)=-\tilde\ell_n(\Theta_0+\hat\Delta_n)+\tilde\ell_n(\Theta_0)+\rho_n\bigl\{r(\Theta_0+\hat\Delta_n)-r(\Theta_0)\bigr\}\ge\frac{\rho_n}{2}\,r\bigl(\hat\Delta_{n,\mathcal M^\perp}\bigr)-\frac{3\rho_n}{2}\,r\bigl(\hat\Delta_{n,\mathcal M}\bigr),$

or $r(\hat\Delta_{n,\mathcal M^\perp})\le3r(\hat\Delta_{n,\mathcal M})$. This verifies that Δ̂n ∈ 𝒞. Note that f, as a function of Δ, is the sum of two convex functions and is hence convex. Thus, the rest of the proof follows exactly as in Lemma 4 of Negahban et al. [21].

Lemma 3

Let Δ ∈ ℝ̃(pK)×(pK).

  1. The gradient of ℓ̃n at Θ0 is the block diagonal matrix given by
    $\nabla\tilde\ell_n(\Theta_0)=n^{-1}\,\mathrm{diag}\bigl\{n_1\bigl(\Psi_0^{(1)}-\hat\Psi_n^{(1)}\bigr),\ldots,n_K\bigl(\Psi_0^{(K)}-\hat\Psi_n^{(K)}\bigr)\bigr\}.$ (10)
  2. Let c > 0 be a constant. For ‖Δ‖F ≤ c and nk/n ≥ γ > 0 for all k and n,
    $-\tilde\ell_n(\Theta_0+\Delta)+\tilde\ell_n(\Theta_0)+\bigl\langle\nabla\tilde\ell_n(\Theta_0),\Delta\bigr\rangle\ge\frac{\gamma}{2}\{\lambda_\Theta+c\}^{-2}\|\Delta\|_F^2\equiv\kappa_{n,c}\|\Delta\|_F^2.$ (11)
  3. The map r is a seminorm, convex, and decomposable with respect to (ℳ, ℳ⊥) in the sense that r(Θ1 + Θ2) = r(Θ1) + r(Θ2) for every Θ1 ∈ ℳ and Θ2 ∈ ℳ⊥. Moreover,
    $r(\Theta_0+\Delta)-r(\Theta_0)\ge r(\Delta_{\mathcal M^\perp})-r(\Delta_{\mathcal M}).$
  4. For Δ ∈ ℝ̃(pK)×(pK),
    $\bigl|\bigl\langle\nabla\tilde\ell_n(\Theta_0),\Delta\bigr\rangle\bigr|\le r(\Delta)\max_{1\le k\le K}\bigl\|\hat\Psi_n^{(k)}-\Psi_0^{(k)}\bigr\|_\infty.$ (12)
  5. For Θ ∈ ℳ,
    $r(\Theta)\le(s+1)^{1/2}\bigl(1+\rho_2\|L\|_2^{1/2}\bigr)\|\Theta\|_F.$
Proof
  1. The result follows by taking derivatives blockwise.

  2. Rothman et al. [26] (pages 500–502) showed that
    $-\tilde\ell_n(\Theta_0+\Delta)+\tilde\ell_n(\Theta_0)+\bigl\langle\nabla\tilde\ell_n(\Theta_0),\Delta\bigr\rangle=\sum_{k=1}^K\frac{n_k}{n}\Bigl(-\log\det\bigl(\Theta_0^{(k)}+\Delta^{(k)}\bigr)+\log\det\bigl(\Theta_0^{(k)}\bigr)+\bigl\langle\Psi_0^{(k)},\Delta^{(k)}\bigr\rangle\Bigr)\ge\sum_{k=1}^K\frac{n_k}{n}\,\frac{\|\Delta^{(k)}\|_F^2}{2}\min_{0\le\upsilon\le1}\bigl\{\|\Theta_0^{(k)}\|_2+\upsilon\|\Delta^{(k)}\|_2\bigr\}^{-2}.$
    Since ‖A‖2 ≤ ‖A‖F, nk/n ≥ γ and ‖Δ‖F ≤ c, this is further bounded below by
    $\sum_{k=1}^K\frac{\gamma}{2}\,\frac{\|\Delta^{(k)}\|_F^2}{\bigl\{\|\Theta_0^{(k)}\|_2+\|\Delta^{(k)}\|_F\bigr\}^2}\ge\kappa_{n,c}\|\Delta\|_F^2.$
  3. Because the graph Laplacian L is positive semidefinite, the triangle inequality r(Θ1 + Θ2) ≤ r(Θ1) + r(Θ2) holds. To see this, let $L=\tilde L\tilde L^T$ be any Cholesky decomposition of L. Then
    $\bigl\{(x+y)^TL(x+y)\bigr\}^{1/2}=\bigl\|\tilde L^T(x+y)\bigr\|\le\bigl\|\tilde L^Tx\bigr\|+\bigl\|\tilde L^Ty\bigr\|=\bigl\{x^TLx\bigr\}^{1/2}+\bigl\{y^TLy\bigr\}^{1/2}.$
    It is clear that r(cΘ) = cr(Θ) for any constant c. Thus, given that r does not penalize the diagonal elements, it is a seminorm (a numerical sanity check of the triangle inequality for the Laplacian term is sketched after this proof). The decomposability follows from the definition of r. The convexity follows from the same argument as the triangle inequality. Since $\Theta_0+\Delta=\Theta_0+\Delta_{\mathcal M}+\Delta_{\mathcal M^\perp}$, the triangle inequality and the decomposability of r yield
    $r(\Theta_0+\Delta)-r(\Theta_0)\ge r\bigl(\Theta_0+\Delta_{\mathcal M^\perp}\bigr)-r\bigl(\Delta_{\mathcal M}\bigr)-r(\Theta_0)=r\bigl(\Delta_{\mathcal M^\perp}\bigr)-r\bigl(\Delta_{\mathcal M}\bigr).$
  4. We show that, for A, B ∈ ℝ̃(pK)×(pK) with diag(B) = 0, $\langle A,B\rangle\le r(A)\|B\|_\infty$. If A is a diagonal matrix (or if A = 0), the inequality trivially holds since 〈A, B〉 = 0. If not, r(A) ≠ 0, so that
    $\frac{\langle A,B\rangle}{r(A)}\le\frac{\|A\|_1\|B\|_\infty}{\|A\|_1}=\|B\|_\infty.$

    Since the diagonal elements of ∇ℓ̃n0) are all zero, the result follows.

  5. For s ≠ 0, we have
    $\frac{r(\Theta)}{\|\Theta\|_F}\le\sup_{\Theta}\frac{\sum_{k=1}^K\|\Theta^{(k)}\|_1}{\|\Theta-\mathrm{diag}(\Theta)\|_F}+\sup_{\Theta}\frac{\rho_2\sum_{i\ne j}\sqrt{\theta_{ij}^TL\theta_{ij}}}{\|\Theta\|_F}\le s^{1/2}+\rho_2\sup_{\Theta}\frac{\sum_{i\ne j}\sqrt{\|L\|_2\,\|\theta_{ij}\|^2}}{\|\Theta\|_F}\le s^{1/2}\bigl(1+\rho_2\|L\|_2^{1/2}\bigr).$

    In the last inequality we used that $\sum_{j=1}^J\bigl(\sum_{i=1}^Ia_{ij}^2\bigr)^{1/2}\le J^{1/2}\bigl(\sum_{j=1}^J\sum_{i=1}^Ia_{ij}^2\bigr)^{1/2}$, which follows by the concavity of the square root function. For s = 0, we trivially have $0=r(\Theta)\le s^{1/2}\bigl\{1+\rho_2\|L\|_2^{1/2}\bigr\}\|\Theta\|_F$. Combining these two cases yields the desired result.
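As referenced in part 3 of the proof above, the triangle inequality for the Laplacian term $x\mapsto\sqrt{x^TLx}$ can be checked numerically. The sketch below uses a randomly generated weighted graph; the random weights are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5

# Random weighted graph on K nodes and its (positive semidefinite) Laplacian.
W = rng.uniform(size=(K, K))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

def lap_seminorm(x):
    # sqrt(x' L x); the max() guards against tiny negative values from rounding.
    return np.sqrt(max(x @ L @ x, 0.0))

for _ in range(1000):
    x, y = rng.normal(size=K), rng.normal(size=K)
    assert lap_seminorm(x + y) <= lap_seminorm(x) + lap_seminorm(y) + 1e-10
```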

Next, we obtain an upper bound for $\max_{1\le k\le K}\|\hat\Psi_n^{(k)}-\Psi_0^{(k)}\|_\infty$, which holds with high probability under the tail conditions on the random vectors.

Lemma 4

Suppose that nk/n ≥ γ > 0 for all k and n.

  1. Suppose that Condition 1 holds. Then for n ≥ 6γ−1 log p we have
    $P\Bigl(\|\hat\Sigma_n-\Sigma_0\|_\infty\ge2^{3}\sqrt{6}\,\bigl(1+4c_1^{2}\bigr)^{2}\gamma^{-1/2}\max_{k,i}\sigma_{ii}^{(k)}\sqrt{\frac{\log p}{\gamma n}}\Bigr)\le2K/p.$ (13)
  2. Suppose that Condition 2 holds with c2, c3 > 0 and $p\le c_7n^{c_2}$. Then for $\tau>\max_k\bigl(27+2\sqrt{3\bigl(1+2^{4}3^{2}c_4\max_{k,i}\{\sigma_{ii}^{(k)}\}^{2}\bigr)}\bigr)\big/\bigl(9c_4\max_{k,i}\{\sigma_{ii}^{(k)}\}^{2}\bigr)$ we have
    $P\Bigl(\|\hat\Sigma_n-\Sigma_0\|_\infty\ge\sum_{k=1}^K\delta_n^{(k)}\Bigr)\le K\nu_n,$ (14)
    where
    $\delta_n^{(k)}\equiv\bigl(1+2\max_i|\mu^{(k),i}|\bigr)\bigl(2\delta_{n,1}^{(k)}+\delta_{n,2}^{(k)}\bigr)+\bigl(\delta_{n,1}^{(k)}\bigr)^2+\bigl(\delta_{n,2}^{(k)}\bigr)^2+2\delta_{n,3}^{(k)},$
    with
    $\delta_{n,1}^{(k)}\equiv\max_{i,j}\mathbb E\bigl|X_l^{(k),i}X_l^{(k),j}\bigr|I\bigl(\bigl|X_l^{(k),i}X_l^{(k),j}\bigr|\ge n_k^{1/2}(\log p)^{1/2}\bigr),$
    $\delta_{n,2}^{(k)}\equiv\bigl\{c_4\max_{k,i}\{\sigma_{ii}^{(k)}\}^{2}(4+\tau)\log p/n_k\bigr\}^{1/2},$
    $\delta_{n,3}^{(k)}\equiv\max_i\mathbb E\bigl|X_l^{(k),i}\bigr|I\bigl(\bigl|X_l^{(k),i}\bigr|\ge n_k^{1/2}(\log p)^{1/2}\bigr).$
  3. Suppose that Condition 3 holds and that $P(\|\hat\Sigma_n-\Sigma_0\|_\infty\ge b_n)=o(1)$ and $b_n=o(1)$ as n → ∞. Then $P(\|\hat\Psi_n-\Psi_0\|_\infty\ge C_1b_n)=o(1)$.

Proof
  1. This was proved by Ravikumar et al. [25].

  2. Note that
    $\hat\Sigma_n^{(k)}-\Sigma^{(k)}=n_k^{-1}\sum_{l=1}^{n_k}X_l^{(k)}\bigl(X_l^{(k)}\bigr)^T-\mathbb E\,X^{(k)}\bigl(X^{(k)}\bigr)^T-\bigl(\bar X^{(k)}-\mu^{(k)}\bigr)\bigl(\bar X^{(k)}-\mu^{(k)}\bigr)^T-\mu^{(k)}\bigl(\bar X^{(k)}-\mu^{(k)}\bigr)^T-\bigl(\bar X^{(k)}-\mu^{(k)}\bigr)\bigl(\mu^{(k)}\bigr)^T.$
    We first evaluate the probability in (14) for $n_k^{-1}\sum_{l=1}^{n_k}X_l^{(k)}(X_l^{(k)})^T-\mathbb E\,X^{(k)}(X^{(k)})^T$. Let
    $Y_l^{(k),ij}\equiv X_l^{(k),i}X_l^{(k),j}-\mathbb E\,X_l^{(k),i}X_l^{(k),j},$
    $\bar Y_l^{(k),ij}\equiv X_l^{(k),i}X_l^{(k),j}I\bigl(|X_l^{(k),i}X_l^{(k),j}|\le\sqrt{n_k\log p}\bigr)-\mathbb E\,X_l^{(k),i}X_l^{(k),j}I\bigl(|X_l^{(k),i}X_l^{(k),j}|\le\sqrt{n_k\log p}\bigr),$
    $\tilde Y_l^{(k),ij}\equiv Y_l^{(k),ij}-\bar Y_l^{(k),ij}.$
    We have
    P(maxi,j|l=1nkl(k),ij|2nkδn,1(k))P(maxi,j|l=1nkXl(k),iXl(k),jI(|Xl(k),iXl(k),j|nklog p)|nkδn,1(k))P(maxl,i(Xl(k),i)2nk1/2(log p)1/2)(xymax{x2,y2})pnk𝔼X0i4(c2+c3+1)(log p)c2+c3+1nkc2+c3+1(Markovs inequality)c7c4maxk,i{σii(k)}2(log p)c2+c3+1nkc3(pc7nc2)c7c4maxk,i{σii(k)}2(log p)c2+c3+1γc3nc3νn,1, (15)
    where the first inequality follows from the triangle inequality. Note that
    $\mathbb E\bigl(\bar Y_l^{(k),ij}\bigr)^2\le\mathbb E\bigl[X_l^{(k),i}X_l^{(k),j}I\bigl(|X_l^{(k),i}X_l^{(k),j}|\le\sqrt{n_k\log p}\bigr)\bigr]^2\le\mathbb E\bigl|X_l^{(k),i}X_l^{(k),j}\bigr|^2\le2^{-1}\bigl(\mathbb E(X_l^{(k),i})^4+\mathbb E(X_l^{(k),j})^4\bigr)\le c_4\max_{k,i}\{\sigma_{ii}^{(k)}\}^2.$
    It follows from Bernstein’s inequality that
    P(maxi,j|l=1nkȲl(k),ij|nkδn,2(k))2p2 exp (c4maxk,i{σii(k)}2(4+τ) log p2c4maxk,i{σii(k)}2+2c4maxk,i{σii(k)}2(64+16τ)/3)νn,2. (16)
    Now, for τ>(27+231+2432c4 maxk,i{σii(k)}2)/(9c4 maxk,i{σii(k)}2), νn,2 → 0 as p → ∞. Note that for this to hold it suffices to have
    3c4maxk,i{σii(k)}2(4+τ)6c4maxk,i{σii(k)}2+8c4maxk,i{σii(k)}2(4+τ)>2,
    so that the power in the exponent is negative. This inequality reduces to
    3c4maxk,i{σii(k)}2τ>16c4maxk,i{σii(k)}2(4+τ).
    We can solve this by changing a quadratic equation for τ, since τ of our interest is positive. Combining (15) and (16) yields
    P(1nki=1nk(Xl(k))2𝔼(X(k))22δn,1(k)+δn,2(k))νn,1+νn,2. (17)
    Let
    $Z_l^{(k),i}\equiv X_l^{(k),i}-\mathbb E\,X_l^{(k),i},$
    $\bar Z_l^{(k),i}\equiv X_l^{(k),i}I\bigl(|X_l^{(k),i}|\le n_k^{1/2}(\log p)^{1/2}\bigr)-\mathbb E\,X_l^{(k),i}I\bigl(|X_l^{(k),i}|\le n_k^{1/2}(\log p)^{1/2}\bigr),$
    $\tilde Z_l^{(k),i}\equiv Z_l^{(k),i}-\bar Z_l^{(k),i}.$
    Proceeding as for Yl(k),ij’s, we have
    P(maxi|l=1nkZ˜l(k),i|2nkδn,3(k))c7c4 maxk,i{σii(k)}2(log p)2(c2+c3+1)γc2+c3+1nc2+c3+1νn,3,
    and
    P(maxi|k=1nZ¯l(k),i|nkδn,2(k))νn,2.
    Thus, we have
    P((X¯(k)μ(k))2(δn,2(k))2+(2δn,3(k))2)P(maxi|X¯(k),iμ(k),i|(δn,1(k))2+(δn,2(k))2)P(maxi|k=1nZ¯l(k),i|nkδn,2(k))+P(maxi|l=1nkZ˜l(k),i|2nkδn,3(k))νn,2+νn,3, (18)
    and
    P((X¯(k)μ(k))(μ(k))Tmaxi|μ(k),i|(2δn,1(k)+δn,2(k)))P(maxi|X¯(k),iμ(k),i|2δn,1(k)+δn,2(k))νn,1+νn,2. (19)
    Combining (17)(19) yields
    P(Σ^n(k)Σ(k)(1+2 maxi|μ(k),i|)(2δn,1(k)+δn,2(k))+(δn,2(k))2+(2δn,3(k))2)3νn,1+4νn,2+νn,3=νn.

    Note that δn,1(k),δn,2(k),δn,3(k), νn,1, νn,2, νn,3 → 0 as n, p → ∞ if log p/n → 0. Note also that δn,1(k),δn,2(k) and (δn,3(k))2 are O(log p/n) on the set where nk/n ≥ γ.

    For example, we have by Jensen’s inequality that
    nlog p(δn,3(k))2=nlog pmaxi{𝔼|X(k),i|I{|X(k),i|nk1/2(log p)1/2}}2maxi𝔼nnknklog p|X(k),i|2I{|X(k),i|nk1/2(log p)1/2}γ1maxi𝔼|X(k),i|3I{|X(k),i|nk1/2(log p)1/2}c4γ1maxi{σii(k)}2.
  3. Given that $|\sigma_{0,ij}^{(k)}|\le\sqrt{\sigma_{0,ii}^{(k)}\sigma_{0,jj}^{(k)}}$,
    |ψn,ij(k)ψ0,ij(k)|=|σ^n,ij(k)σ^n,ii(k)σ^n,jj(k)σ0,ij(k)σ0,ii(k)σ0,jj(k)|=|σ0,ii(k)σ0,jj(k)(σ^n,ij(k)σ0,ij(k))+σ0,ij(k)(σ0,ii(k)σ0,jj(k)σ^n,ii(k)σ^n,jj(k))|σ^n,ii(k)σ^n,jj(k)σ0,ii(k)σ0,jj(k)σ0,ii(k)σ0,jj(k)σ^n,ii(k)σ^n,jj(k)σ0,ii(k)σ0,jj(k){|σ^n,ij(k)σ0,ij(k)|+|σ0,ii(k)σ0,jj(k)σ^n,ii(k)σ^n,jj(k)|},
    wherein
    σ0,ii(k)σ0,jj(k)σ^n,ii(k)σ^n,jj(k)=σ0,jj(k)σ0,ii(k)+σ^n,ii(k)(σ0,ii(k)σ^n,ii(k))+σ^n,ii(k)σ0,jj(k)+σ^n,jj(k)(σ0,jj(k)σ^n,jj(k)).
    Since bn → 0, we have $b_n\le c_5/2$ for n sufficiently large by Condition 3. On the event $\|\hat\Sigma_n-\Sigma_0\|_\infty\le b_n$ with n large, $0<c_5/2\le\sigma_{0,ii}^{(k)}-c_5/2\le\hat\sigma_{n,ii}^{(k)}\le\sigma_{0,ii}^{(k)}+c_5/2\le c_6+c_5/2$. Thus,
    σ0,ii(k)σ0,jj(k)σ^n,ii(k)σ^n,jj(k)σ0,ii(k)σ0,jj(k)2(c5+2c6)c52
    σ0,jj(k)σ0,ii(k)+σ^n,ii(k)c62c5
    σ^n,ii(k)σ0,jj(k)+σ^n,jj(k)c5+2c62c5.
    It follows that
    |ψn,ij(k)ψ0,ij(k)|{2c52+c5+c63/2+2c55/2c6+(c54+2c55c6)1/2}maxk,i,j|σ^n,ij(k)σ0,ij(k)|.
    Thus, we have
    $P\bigl(\|\hat\Psi_n-\Psi_0\|_\infty\ge C_1b_n\bigr)\le P\bigl(\|\hat\Psi_n-\Psi_0\|_\infty\ge C_1b_n,\ \|\hat\Sigma_n-\Sigma_0\|_\infty<b_n\bigr)+P\bigl(\|\hat\Sigma_n-\Sigma_0\|_\infty\ge b_n\bigr)\le2P\bigl(\|\hat\Sigma_n-\Sigma_0\|_\infty\ge b_n\bigr)\to0.$
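The conversion from a sample covariance matrix to the sample correlation matrix used in this step is the usual rescaling by the diagonal. A minimal sketch follows, using simulated placeholder data (the data-generating choices are assumptions for illustration only).

```python
import numpy as np

rng = np.random.default_rng(2)
n_k, p = 200, 6
X = rng.normal(size=(n_k, p)) @ rng.normal(size=(p, p))  # placeholder data for one subpopulation

Sigma_hat = np.cov(X, rowvar=False)      # sample covariance matrix
d = np.sqrt(np.diag(Sigma_hat))
Psi_hat = Sigma_hat / np.outer(d, d)     # sample correlation: D^{-1/2} Sigma_hat D^{-1/2}

assert np.allclose(np.diag(Psi_hat), 1.0)
```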

So far we have assumed nk/n ≥ γ in the preceding lemmas. We now evaluate the probability of this event, noting that nk ~ Binomial(n, πk).

Lemma 5

Let ε > 0 be such that γ ≡ mink πk − ε > 0. Then

$P\Bigl(\min_kn_k/n\le\min_k\pi_k-\varepsilon\Bigr)\le2K\exp\bigl(-2n\varepsilon^2\bigr).$ (20)
Proof

We have by Hoeffding’s inequality that

$P\Bigl(\min_kn_k/n\le\min_k\pi_k-\varepsilon\Bigr)\le P\Bigl(\exists k:\ n_k/n\le\min_k\pi_k-\varepsilon\Bigr)\le P\Bigl(\exists k:\ n_k/n\le\pi_k-\varepsilon\Bigr)\le P\Bigl(\exists k:\ |n_k/n-\pi_k|\ge\varepsilon\Bigr)\le\sum_{k=1}^KP\bigl(|n_k/n-\pi_k|\ge\varepsilon\bigr)\le2K\exp\bigl(-2n\varepsilon^2\bigr).$
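As an informal sanity check of this bound, the short simulation below draws multinomial subpopulation counts and compares the empirical frequency of the event {mink nk/n ≤ mink πk − ε} with 2K exp(−2nε²). The particular values of n, π, and ε are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, eps, reps = 500, 0.05, 20000
pi = np.array([0.5, 0.3, 0.2])
K = len(pi)

counts = rng.multinomial(n, pi, size=reps)               # (n_1, ..., n_K) for each replicate
freq = np.mean(counts.min(axis=1) / n <= pi.min() - eps) # empirical probability of the event
bound = 2 * K * np.exp(-2 * n * eps**2)                  # Hoeffding-type bound from Lemma 5
print(f"empirical: {freq:.4f}  bound: {bound:.4f}")
```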
Proof of Lemma 1

We apply Lemma 2 to obtain the non-asymptotic error bounds.

We first compute a lower bound for f(Δ). Suppose ε ≤ c. For Δ ∈ 𝒞 ∩ {Δ ∈ ℝ̃(pK)×(pK) : ‖Δ‖F = ε}, we have by Lemma 3(ii) and (iii) that

$f(\Delta)\ge-\bigl\langle\nabla\tilde\ell_n(\Theta_0),\Delta\bigr\rangle+\kappa_{n,c}\|\Delta\|_F^2+\rho_n\bigl\{r\bigl(\Delta_{\mathcal M^\perp}\bigr)-r\bigl(\Delta_{\mathcal M}\bigr)\bigr\}.$

The assumption on ρn and Lemma 3(iii) and (iv) then yield

$\bigl|\bigl\langle\nabla\tilde\ell_n(\Theta_0),\Delta\bigr\rangle\bigr|\le\frac{\rho_n}{2}\bigl\{r\bigl(\Delta_{\mathcal M}\bigr)+r\bigl(\Delta_{\mathcal M^\perp}\bigr)\bigr\}.$

From this inequality and Lemma 3(v) we have

$f(\Delta)\ge\kappa_{n,c}\|\Delta\|_F^2-\frac{3\rho_n}{2}\,r\bigl(\Delta_{\mathcal M}\bigr)\ge\kappa_{n,c}\|\Delta\|_F^2-\frac{3\rho_n}{2}(s+1)^{1/2}\bigl(1+\rho_2\|L\|_2^{1/2}\bigr)\|\Delta\|_F.$

Viewing the right hand side of the above inequality as a quadratic function of ‖Δ‖F, we have f(Δ) > 0 if

$\|\Delta\|_F\ge\frac{3\rho_n}{\kappa_{n,c}}(s+1)^{1/2}\bigl(1+\rho_2\|L\|_2^{1/2}\bigr)\equiv\varepsilon_c>0.$

Thus, if we show that there exists a $c_0>0$ such that $\varepsilon_{c_0}\le c_0$, Lemma 2 yields that $\|\hat\Theta_{\rho_n}-\Theta_0\|_F\le\varepsilon_{c_0}$.

Consider the inequality $(x+y)^2z^{1/2}\le y$, where x, y, z ≥ 0. This inequality holds for (x, y, z) such that $x=y$ and $xz^{1/2}\le1/4$. We apply the inequality above with $x=\lambda_\Theta$, $y=c$, $z=2^{4}3^{2}\rho_n^2\gamma^{-2}s\bigl(1+\rho_2\|L\|_2^{1/2}\bigr)^2$, and solve $xz^{1/2}\le1/4$ for n. (i) For $\rho_n=2^{3}\sqrt{6}\,C_1\bigl(1+4c_1^{2}\bigr)\gamma^{-1/2}\max_{k,i}\sigma_{ii}^{(k)}\sqrt{\log p/n}$, the condition $xz^{1/2}\le1/4$ yields

$n\ge\max\Bigl\{\frac{6\log p}{\gamma},\ \frac{2^{15}3^{3}C_1^{2}\bigl(1+4c_1^{2}\bigr)^{2}}{\gamma^{3}}\max_{k,i}\bigl\{\sigma_{ii}^{(k)}\bigr\}^{2}\lambda_\Theta^{4}\bigl(1+\rho_2\|L\|_2^{1/2}\bigr)^{2}s\log p\Bigr\},$

and (x + y)4z becomes

εmaxk{Θ0(k)2}221533(1+c12)2maxk,i(σii(k))2(1+ρ2L21/2)2γ3λΘ4s log pn.

(ii) For ρn = C1Kδn, there is no closed-form solution for n. Note that δn → 0 if log p/n → 0, so that the condition $xz^{1/2}\le1/4$ holds for n sufficiently large, given that $\sum_{k=1}^K\delta_n^{(k)}\le K\delta_n$.

Computing appropriate probabilities using Lemmas 4 and 5 completes the proof.

Proof of Theorem 1

The estimation error $\|\hat\Omega_{\rho_n}^{(k)}-\Omega_0^{(k)}\|_2$ in the spectral norm can be bounded and evaluated in the same way as in the proof of Theorem 2 of Rothman et al. [26], together with Lemma 1.

8.2. Model Selection Consistency

Our proof is based on the primal-dual witness approach of Ravikumar et al. [25], with some modifications to overcome a difficulty in their proof when applying the fixed point theorem to a discontinuous function. First, we define the oracle estimator $\check\Theta_{\rho_n}=(\check\Theta_{\rho_n}^{(1)},\ldots,\check\Theta_{\rho_n}^{(K)})$ by

$\check\Theta_{\rho_n}=\mathop{\arg\min}_{\Theta^{(k)}>0,\ \Theta^{(k)}=(\Theta^{(k)})^T,\ \Theta^{(k)}_{(S^{(k)})^c}=0}\ n^{-1}\sum_{k=1}^Kn_k\Bigl(\mathrm{tr}\bigl(\hat\Psi_n^{(k)}\Theta^{(k)}\bigr)-\log\det\bigl(\Theta^{(k)}\bigr)\Bigr)+\rho_n\sum_{k=1}^K\bigl\|\Theta^{(k)}\bigr\|_1+\rho_n\rho_2\sum_{i\ne j}\sqrt{\Theta_{ij}^TL\Theta_{ij}},$ (21)

where $\Theta^{(k)}_{(S^{(k)})^c}=0$ indicates that $\theta^{(k)}_{ij}=0$ for $(i,j)\notin S^{(k)}$.

Lemma 6
  1. Let A ∈ ℝp×p be a positive semidefinite matrix with eigenvalues 0 ≤ λ1 ≤ λ2 ≤ ⋯ ≤ λp and corresponding eigenvectors ui satisfying ui ⊥ uj, i ≠ j, and ‖ui‖ = 1. The subdifferential $\partial\sqrt{x^TAx}$ of $f(x)=\sqrt{x^TAx}$ is
    $\partial\sqrt{x^TAx}=\begin{cases}Ax/\sqrt{x^TAx}, & Ax\ne0,\\[2pt] \bigl\{U\Lambda^{1/2}y:\|y\|\le1\bigr\}, & Ax=0,\end{cases}$
    where U ∈ ℝp×p has ui as its ith column and $\Lambda^{1/2}$ is the diagonal matrix with $\lambda_i^{1/2}$, i = 1, …, p, as diagonal elements. Furthermore, the subgradients are bounded above, i.e.,
    $\|\nabla f(x)\|_\infty\le\|A\|_2^{1/2},\quad\text{for all }\nabla f(x)\in\partial\sqrt{x^TAx}.$
  2. Let A ∈ ℝp×p be a positive semidefinite matrix and S = {Si} ⊂ {1, …, p}. Suppose ASS has eigenvalues 0 ≤ λ1,S ≤ λ2,S ≤ ⋯ ≤ λ|S|,S and corresponding eigenvectors ui,S satisfying ui,S ⊥ uj,S, i ≠ j, and ‖ui,S‖ = 1. Let gS : ℝ|S| → ℝp be the map defined by gS(x) = y, where $y_{S_j}=x_j$ for j = 1, …, |S| and yi = 0 for i ∉ S. The subdifferential of $h_{A,S}(x)=\sqrt{g_S(x)^TAg_S(x)}$ equals the subdifferential of $\sqrt{x^TA_{SS}x}$, given by
    $\partial\sqrt{x^TA_{SS}x}=\begin{cases}A_{SS}x/\sqrt{x^TA_{SS}x}, & A_{SS}x\ne0,\\[2pt] \bigl\{U_S\Lambda_S^{1/2}y:\|y\|\le1\bigr\}, & A_{SS}x=0,\end{cases}$
    where US ∈ ℝ|S|×|S| has ui,S as its ith column and $\Lambda_S^{1/2}$ is the diagonal matrix with $\lambda_{i,S}^{1/2}$, i = 1, …, |S|, as diagonal elements. For x with ASSx ≠ 0, there is a relationship between $\partial\sqrt{x^TA_{SS}x}$ and $\partial\sqrt{y^TAy}$ at y = gS(x), given by
    $\Bigl\{\frac{Ay}{\sqrt{y^TAy}}\Bigr\}_S=\frac{A_{SS}x}{\sqrt{x^TA_{SS}x}},\qquad\Bigl\{\frac{Ay}{\sqrt{y^TAy}}\Bigr\}_{S^c}=\frac{A_{S^cS}x}{\sqrt{x^TA_{SS}x}}.$
    The subgradients are bounded above:
    $\|\nabla h_{A,S}(x)\|_\infty\le\|A_{SS}\|_2^{1/2}\le\|A\|_2^{1/2},\quad\text{for all }\nabla h_{A,S}(x)\in\partial\sqrt{x^TA_{SS}x}.$
Proof
  1. For x with Ax ≠ 0, f(x) is differentiable and the subgradient of f at x is simply the matrix derivative. By definition, for x with Ax = 0, the subgradient υ of f at x satisfies the following inequality
    yTAyyx,υ, (22)
    for all y. Choosing y = 2x and y = 0 yield 0 ≥ 〈x, υ〉 and 0 ≥ − 〈x, υ〉, implying 〈x, υ〉 = 0. The inequality (22) reduces to yTAyy,υ, for any y. If Ay = 0, a similar argument implies that 〈y, υ〉 = 0. Hence υ ⊥ y for every y with Ay = 0.
    Let j0 be the smallest index such that λj0 > 0. Because uj ’s form an orthonormal basis, any arbitrary vector y can be written as y=j=1pβjuj. Moreover, the null space of A is the span of u1, …, uj0−1. Thus, the subgradient υ can be written as υ=j=j0pαjuj. Thus, using the spectral decomposition of A as A=j=j0pλjujujT, we can write f(y)={j=j0pλjβj2}1/2. On the other hand, y,υ=j=j0pαjβj. Thus, the inequality (22) further reduces to
    $\Bigl\{\sum_{j=j_0}^p\lambda_j\beta_j^2\Bigr\}^{1/2}\ge\sum_{j=j_0}^p\alpha_j\beta_j,\quad\text{for all }\beta_j.$
    It follows from the Cauchy–Schwarz inequality that the right hand side of the inequality is bounded from above:
    $\sum_{j=j_0}^p\alpha_j\beta_j=\sum_{j=j_0}^p\frac{\alpha_j}{\lambda_j^{1/2}}\,\lambda_j^{1/2}\beta_j\le\Bigl\{\sum_{j=j_0}^p\frac{\alpha_j^2}{\lambda_j}\Bigr\}^{1/2}\Bigl\{\sum_{j=j_0}^p\lambda_j\beta_j^2\Bigr\}^{1/2}.$
    Thus,
    $\partial f(x)=\Bigl\{\upsilon:\upsilon=\sum_{j=j_0}^p\alpha_ju_j,\ \sum_{j=j_0}^p\frac{\alpha_j^2}{\lambda_j}\le1,\ \alpha_j\in\mathbb R\Bigr\}.$

    It is easy to see that this set is the image of the map UΛ1/2 on the closed ball of radius 1.

    Given that ‖x ≤ ‖x‖, to establish the bound in the ℓ-norm, we compute the bound in the Euclidean norm. We use the same notation as in (i). For x with Ax ≠ 0,
    AxxTAx=UΛ1/2Λ1/2UTxΛ1/2UTxUΛ1/22.

    But UΛ1/22=supx=1UΛ1/2x=supxKUΛ1/2(UTx)/UTx=A21/2, because ‖UT x‖ = ‖x‖. For x with Ax = 0, Λ1/2y/yA21/2 for every y. Because of the form of the subdifferential and the fact that ‖U x‖ = ‖x‖, the result follows.

  2. Let BS be a product of elementary matrices for row and column exchange such that BSgS(x) = (x, 0). Notice that BS=BS1 and that BS=BST since BS only rearranges elements of vectors and exchanges rows by multiplication from the left. Note also that ‖BS2 ≤ ‖BS∞/∞ = 1, since ‖C2 ≤ ‖C∞/∞ for C = CT and each row of BS has only one element with value 1. Because
    {hA,S(x)}2=gS(x)TAgS(x)=(BSgS(x))T(BSABS)(BSgS(x))=xTASSx,
    the subdifferential of hA,S(x) follows from (ii). For x with ASSxx and y = gS(x), Ay=BSABS(x,0)T=BS(ASSx,AScSTx)T0 because of invertibility of BS. The relationship holds since
    [(Ay/yTAy)S(Ay/yTAy)Sc]=BSAyyTAy=1xTASSx[ASSxAScSx]

    An ℓ-bound follows from (i) and the fact that ASS2BS22A2=A2.

Lemma 7

For sample correlation matrices Ψ^n=(Ψ^n(1),,Ψ^n(K))and any ρn > 0, the convex problem(3)has a unique solution Θ^ρn=(Θ^ρn(1),,Θ^ρn(K)) with Θ^ρn(k)>0, k = 1, …, K, characterized by

n1nk(ψn,ij(k)[{Θ^ρn(k)}1]ij)+ρnÛ1,ij(k)+ρnρ2Û2,ij(k)=0, (23)

with Û1,ij(k)|θ^ρn,ij(k)|and (Û2,ij(1),,Û2,ij(K))TΘ^ρn,ijTLΘ^ρn,ijfor every ij and k = 1, …, K. Moreover,

n1nk(ψn,ii(k)[{Θ^ρn(k)}1]ii)+ρnÛ1,ij(k)+ρnρ2Û2,ij(k)=0, (24)

with Û1,ij(k)=Û2,ij(k)=0for every i = 1, …, p, andk = 1, …, K.

For each (i, j) ∈ S, let Sij={k:Θ0,ij(k)0}. The convex problem(21)has a unique solution Θˇρn=(Θˇρn(1),,Θˇρn(K))with Θˇρn(k)>0characterized by

n1nk(ψn,ij(k)[{Θˇρn(k)}1]ij)+ρnǓ1,ij(k)+ρnρ2Ǔ2,ij(k)=0, (25)

with Ǔ1,ij(k)|θˇρn,ij(k)| and Ǔ2,ij(k){Θˇρn,ij}SijTLSijSij{Θˇρn,ij}Sijfor every ij and k = 1, …, K. Moreover,

n1nk(ψn,ii(k)[{Θˇρn(k)}1]ii)+ρnǓ1,ij(k)+ρnρ2Ǔ2,ij(k)=0, (26)

with Ǔ1,ij(k)=Ǔ2,ij(k)=0for every i = 1, …, p, and k = 1, …, K.

Proof

A proof for the uniqueness of the solution is similar to the proof of Lemma 3 of Ravikumar et al. [25]. The rest is the KKT condition using Lemma 6.

We choose a pair Ũ = (Ũ1, Ũ2) of the subgradients of the first and second regularization terms evaluated at Θ̌ρn. For each (i, j) with Ω0,ij = 0 or with LΘ̌ρn,ij = 0, set

Ũ1,ij(k)=ρn1n1nk(ψn,ij(k)+[{Θˇρn(k)}1]ij),Ũ2,ij(k)=0,k=1,,K.

For (i, j) with ω0,ij(k)0, for all k = 1, …, K, set

Ũ1,ij(k)=Ǔ1,ij(k),Ũ2,ij(k)=Ǔ2,ij(k),k=1,,K.

For (i, j) with LΘ̌ρn,ij ≠ 0, Ω0,ij ≠ 0 but ω0,ij(k)=0 for some k′, set

Ũ1,ij(k)=ρn1n1nk(ψn,ij(k)+[{Θˇρn(k)}1]ij)ρ2lkΘˇρn,ijΘˇρn,ijTLΘˇρn,ij,

and

Ũ2,ij(k)=lkTΘˇρn,ijΘˇρn,ijTLΘˇρn,ij,

if ω0,ij(k)=0. Otherwise, let

Ũ1,ij(k)=Ǔ1,ij(k),Ũ2,ij(k)=lkTΘˇρn,ijΘˇρn,ijTLΘˇρn,ij.

Here, lk is the kth row of L.

The main idea of the proof is to show that (Θ̌ρn, Ũ) satisfies the optimality conditions of the original problem with probability tending to 1. In particular, we show the following equation, which holds by construction of Ũ1 and Ũ2, is in fact the KKT condition of the original problem (3):

n1nk(Ψ^n(k){Θˇρn(k)}1)+ρnŨ1(k)+ρnρ2Ũ2(k)=0. (27)

To this end, we show that Ũ1 and Ũ2 are both subgradients of the original problem. We can then conclude that the oracle estimator in the restricted problem (21) is the solution to the original problem (3). Then it follows from the uniqueness of the solution that Θ̌ρn = Θ̂ρn.

Let Ξ(k)=Ψ^n(k)Ψ0(k),R(k)(Δ(k))={Θˇρn(k)}1Ψ0(k)+Ψ0(k)Δ(k)Ψ0(k), and Δˇ(k)=Θˇρn(k)Θ0(k).

Lemma 8

Suppose that $\max\bigl\{\|\Xi^{(k)}\|_\infty,\ \|R^{(k)}(\check\Delta^{(k)})\|_\infty\bigr\}\le\alpha\rho_n/8$ and $\rho_2\le\alpha^2/\bigl\{4\|L\|_2^{1/2}(2-\alpha)\bigr\}$. Suppose moreover that $L\check\Theta_{\rho_n,ij}\ne0$ for (i, j) ∈ S. Then $|\tilde U_{1,ij}^{(k)}|<1$ for $(i,j)\in(S^{(k)})^c$.

Proof

We rewrite (27) to obtain

nknΨ0(k)Δˇ(k)Ψ0(k)+nknΞ(k)nknR(k)(Δˇ(k))+ρnŨ1(k)+ρnρ2Ũ2(k)=0.

We further rewrite the above equation via vectorization;

nkn(Ψ0(k)Ψ0(k))Δˇ(k)+nknΞ(k)nknR(k)(Δˇ(k))+ρnŨ1(k)+ρnρ2Ũ2(k)=0.

We separate this equation into two equations depending on S(k);

nknΓS(k)S(k)(k)ΔˇS(k)(k)+nknΞS(k)(k)nknRS(k)(k)(Δˇ(k))+ρnŨ1,S(k)(k)+ρnρ2Ũ2,S(k)(k)=0,
nknΓ(S(k))cS(k)(k)ΔˇS(k)(k)+nknΞ(S(k))c(k)nknR(S(k))c(k)(Δˇ(k))+ρnŨ1,(S(k))c(k)+ρnρ2Ũ2,(S(k))c(k)=0. (28)

where (Ũ⃗l)JŨ⃗k,J, l = 1, 2. Here we used Δˇ(S(k))c(k)=0. Since ΓS(k)S(k)(k) is invertible, we solve the first equation to obtain

nknΔˇS(k)(k)=(ΓS(k)S(k)(k))1{nknΞS(k)(k)+nknRS(k)(k)(Δˇ(k))ρnŨ1,S(k)(k)ρnρ2Ũ2,S(k)(k)}.

Substituting this expression into (28) yields

Ũ1,(S(k))c(k)=ρn1Γ(S(k))cS(k)(k)(ΓS(k)S(k)(k))1(nknΞS(k)(k)nknRS(k)(k)(Δˇ(k)))+Γ(S(k))cS(k)(k)(ΓS(k)S(k)(k))1Ũ1,S(k)(k)+ρ2Γ(S(k))cS(k)(k)(ΓS(k)S(k)(k))1Ũ2,S(k)(k)ρn1(nknΞ(S(k))c(k)nknR(S(k))c(k)(Δˇ(k)))ρ2Ũ2,(S(k))c(k).

Taking the ℓ-norm yields

Ũ1,(S(k))c(k)ρn1Γ(S(k))cS(k)(k)(ΓS(k)S(k)(k))1/(ΞS(k)(k)+RS(k)(k)(Δˇ(k)))+Γ(S(k))cS(k)(k)(ΓS(k)S(k)(k))1/(Ũ1,S(k)(k)+ρ2Ũ2,S(k)(k))+ρn1(Ξ(S(k))c(k)+R(S(k))c(k)(Δˇ(k)))+ρ2Ũ2,(S(k))c(k)2αρn(Ξ(S(k))c(k)+R(S(k))c(k)(Δˇ(k)))+1α+(2α)ρ2L21/2.

Here we used that ‖Ax ≤ ‖A∞/∞ ≤ ‖A and Γ(S(k))cS(k)(k)(ΓS(k)S(k)(k))1/1α, and applied Lemma 6 to bound ‖Ũ⃗2, (S(k))c and ‖Ũ⃗2, (S(k)) by L21/2. We also used Ũ1,S(k)(k)=Ǔ1,S(k)(k)1 by construction of Ũ1 and the assumption that Θˇρn(k)0. It follows by the assumption of the lemma that

$\bigl\|\tilde U_{1,(S^{(k)})^c}^{(k)}\bigr\|_\infty\le\frac{2-\alpha}{\rho_n}\cdot\frac{\alpha\rho_n}{4}+(1-\alpha)+(2-\alpha)\rho_2\|L\|_2^{1/2}\le1-\alpha+\frac{2\alpha-\alpha^2}{4}+\frac{\alpha^2}{4}<1.$
Lemma 9 (Lemma 5 of Ravikumar et al. [25])

Suppose that ‖Δ‖ ≤ 1/(3κΨd) with

(Δ(k))(S(k){(i,i):i=1,,p]})c=0.

ThenH(k)∞/∞ ≤ 3/2 where H(k)j=1(1)j(Ψ0(k)Δ(k))j, k = 1, …, K, andR(k)(k)) has representation R(k)(Δ(k))=Ψ0(k)Δ(k)Ψ0ΔH(k)Ψ0(k)with R(k)(Δ(k))(3/2)dΔ(k)2(κΨ)3.

Lemma 10

Suppose Δ21/(2 maxkΨ0(k)2)with Δ(S(k){(i,i)i=1p]})c(k)=0. ThenH(k)∞/∞ ≤ 2 where H(k)t=1(1)t(Ψ0(k)Δ(k))t, k = 1, …, K, andR(k)(k)) has representation R(k)(Δ(k))=Ψ0(k)Δ(k)Ψ0ΔH(k)Ψ0(k)with R(k)(Δ(k))2λΘ3Δ(k)22.

Proof

Note that the Neumann series for a matrix (IA)−1 converges if the operator norm of A is strictly less than 1, and that the ℓ-norm is bounded by the operator norm. A proof is similar to that of Lemma 5 of Ravikumar et al. [25] with the induced infinity norm ‖·‖∞/∞ replaced by the operator norm in appropriate inequalities.

The following lemma is similar to the statement of Lemma 6 of Ravikumar et al. [25].

Lemma 11

Suppose that

$r\equiv\frac{4}{\min_k\pi_k}\,\kappa_\Gamma\Bigl(\max_k\|\Xi^{(k)}\|_\infty+\rho_n+\rho_n\rho_2\|L\|_2^{1/2}\Bigr)<\frac{1}{6d\max\{\kappa_\Psi,\ \kappa_\Psi^3\kappa_\Gamma\}},$

for k = 1, …, K. Suppose moreover that $(\Theta_0^{(k)}\otimes\Theta_0^{(k)})_{S^{(k)}S^{(k)}}$ are invertible for k = 1, …, K. Then with probability $1-2K\exp\bigl(-n\min_k\pi_k^2/2\bigr)$,

$\max_k\bigl\|\check\Theta_{\rho_n}^{(k)}-\Theta_0^{(k)}\bigr\|_\infty\le(3/2)r.$
Proof

We apply Schauder's fixed point theorem on the event $\min_k\pi_k/2\le n_k/n$, which holds with probability $1-2K\exp\bigl(-n\min_k\pi_k^2/2\bigr)$ by Lemma 5 with $\varepsilon=\min_k\pi_k/2$. We first define the function fk and its domain 𝒟k to which the fixed point theorem applies. Let $\bar S^{(k)}=S^{(k)}\cup\{(i,i):1\le i\le p\}$, and define

$\mathcal D_k=\bigl\{A\in\mathbb S^{p\times p}:x^T\bigl(A+\Theta_0^{(k)}\bigr)x\ge0\ \text{for all }x\in\mathbb R^p,\ \|A_{\bar S^{(k)}}\|_\infty\le r,\ A_{(\bar S^{(k)})^c}=0\bigr\},$

where 𝕊p×p is the space of symmetric p × p matrices. Then 𝒟k is a convex, compact subset of the set of 𝕊p×p.

Let Ǔl(k)p×p, l = 1, 2, be zero-filled matrices whose (i, j)-element is Ǔl,ij(k) in Lemma 7 if (i, j) ∈ S(k) and zero otherwise. Define the map gk on the set of invertible matrices in ℝp×p by gk(B)=(nk/n)(B1Ψ^n(k))ρnǓ1(k)ρnρ2Ǔ2(k). Note that {gk(Θˇρn(k))}S(k)=0 is the KKT condition for the restricted problem (21). Let δ > 0 be a constant such that δ < min{1/2, 1/{10(4dr + 1)}}r and δ+r1/{6d max{κΨ,κΨ3κΓ}. Define a continuous function fk : 𝒟k ↦ 𝒟k as

(fk(A))ij={Aiji=j,{hk(A)Θ0(k)gk(A+Θ0(k)+δI)Θ0(k)+A}ij,ij,(i,j)S(k)0,otherwise,

where

hk(A)21 min{λ1(A+Θ0(k)),21}+21max{|λ1({Θ0(k)gk(A+Θ0(k)+δI)Θ0(k)}S(k)I)|,1}.

Let f˜k(A)=hk(A)Θ0(k)gk(A+Θ0(k)+δI)Θ0(k). Then fk(A) = (k(A))S(k) + A for A ∈ 𝒟k.

We now verify the conditions of Schauder's fixed point theorem below. Once these conditions are established, the theorem yields a fixed point, fk(A) = A. Since $(f_k(A))_{(\bar S^{(k)})^c}=A_{(\bar S^{(k)})^c}$ for any A ∈ 𝒟k, and hk(A) > 0, the solution A to fk(A) = A is determined by $\bigl(\Theta_0^{(k)}g_k(A+\Theta_0^{(k)}+\delta I)\Theta_0^{(k)}\bigr)_{S^{(k)}}=0$. Vectorizing this equation to obtain $\bigl(\Theta_0^{(k)}\otimes\Theta_0^{(k)}\bigr)_{S^{(k)}S^{(k)}}\bigl\{\mathrm{vec}\bigl(g_k(A+\Theta_0^{(k)}+\delta I)\bigr)\bigr\}_{S^{(k)}}=0$, it follows from the invertibility of $\bigl(\Theta_0^{(k)}\otimes\Theta_0^{(k)}\bigr)_{S^{(k)}S^{(k)}}$ that $\bigl\{\mathrm{vec}\bigl(g_k(A+\Theta_0^{(k)}+\delta I)\bigr)\bigr\}_{S^{(k)}}=0$. By the uniqueness of the KKT condition, the solution is $A=\check\Theta_{\rho_n}^{(k)}-\Theta_0^{(k)}-\delta I$. Since A ∈ 𝒟k and δ < r/2, we conclude $\|\check\Theta_{\rho_n}^{(k)}-\Theta_0^{(k)}\|_\infty\le(3/2)r$.

In the following, we write A⃗ = vec(A) for a matrix A for notational convenience. For J ⊂ {(i, j) : i, j = 1, …, p}, vec(A)J should be understood as A⃗J.

The function fk is continuous on 𝒟k. To see this, note first that A+Θ0(k)+δI is positive definite for every A ∈ 𝒟k so that the inversion is continuous. Note also that all elements in the matrices involved with eigenvalues in hk(A) are uniformly bounded in 𝒟k, and hence the eigenvalues are also uniformly bounded.

To show that fk(A) ∈ 𝒟k, first we show that fk(A)+Θ0(k) is positive semidefinite. This follows because for any x ∈ ℝp

xT(fk(A)+Θ0(k))x=xT{(f˜k(A))S(k)I}x+xT(A+Θ0(k))x+xTxhk(A)λ1({Θ0(k)gk(A+Θ0(k)+δI)Θ0(k)}S(k)I)x2+λ1(A+Θ0(k))x2+x20.

To see this, note that if λAλ1({Θ0(k)gk(A+Θ0(k)+δI)Θ0(k)}S(k)I) is positive, then the inequality easily follows. On the other hand, if λA < −1, we have

hk(A)λAx221 min{λ1(A+Θ0(k)),21}x221x2(λ1(A+Θ0(k))/2+1/2)x2.

Lastly, if −1 ≤ λA < 0, we have

hk(A)λAx2|λA|[21 min{λ1(A+Θ0(k)),21}+1/2]x2|λA|(λ1(A+Θ0(k))/2+1/2)x2.

Next, we show that ‖fk(A)(k)r. Because diag(fk(A)) = diag(A), it suffices to show ‖fk(A)S(k)r. Since δ+r1/{6d max{κΨ,κΨ3κΓ},

Ψ0(k)(A+δI)/κΨdA+δIκΨd(r+δ)1/3.

It then follows from Lemma 9 that

R(A+δI)=(A+δI+Θ0(k))1Ψ0(k)+Ψ0(k)(A+δI)Ψ0(k)={Ψ0(k)(A+δI)}2H(k)Ψ0(k).

Thus, adding and subtracting Ψ0(k) yields

f˜k(A)+A=hk(A)Θ0(k)((nk/n){Ψ0(k)(A+δI)}2H(k)Ψ0(k)(nk/n)Ξ(k)ρnǓ1(k)ρnρ2Ǔ2(k))Θ0(k)+(1(nk/n)hk(A))A(nk/n)δhk(A)I.

Vectorization and restriction on S(k) gives

vec(fk(A))S(k)=vec(f˜k(A)+A)S(k)(nk/n)hk(A){(Γ(k))1}S(k)S(k)vec({Ψ0(k)(A+δI)}2H(k)Ψ0(k))S(k)+(1(nk/n)hk(A))vec(A)S(k)+(nk/n)δ+hk(A){(Γ(k))1}S(k)S(k){vec((nk/n)Ξ(k))S(k)+ρnvec(Ǔ1(k))S(k)+ρnρ2vec(Ǔ2(k))S(k)}, (29)

where {(Γ(k))1}S(k)S(k)=(Θ(k)Θ0(k))S(k)S(k). Here we used hk(A) ≤ (1/4 + 1/2)/1 = 3/4. For the first term of the upper bound in (29), it follows by the inequality ‖Ax ≤ ‖A∞/∞x for A ∈ ℝp×p and x ∈ ℝp, Lemma 9 and the choice of δ satisfying δ+r1/{6d max{κΨ,κΨ3κΓ} that

{(Γ(k))1}S(k)S(k)vec({Ψ0(k)(A+δI)}2H(k)Ψ0(k))S(k)κΓR(k)(A+δI)κΓ32dA+δI2κΨ3κΓ32dA+δI(r+δ)κΨ3(r+δ)/4.

For the second term, it follows by the assumption, the inequality that ‖Ax ≤ ‖A∞/∞x for A ∈ ℝp×p and x ∈ ℝp, and Lemma 6 that

{(Γ(k))1}S(k)S(k){nknvec(Ξ(k))S(k)+ρnvec(Ǔ1(k))S(k)+ρnρ2vec(Ǔ2(k))S(k)}κΓ(Ξ(k)+ρn+ρnρ2L21/2)=(min kπk)r/4(nk/n)r/2.

Thus, we can further bound ‖vec((k(A) + A)S(k))‖ by

nknhk(A)r+δ4+nknhk(A)r2+(1nknhk(A))r+nknδ=r{1nknhk(A)4}+nkn{1+hk(A)4}δ. (30)

Since

(Θ0(k)gk(A+Θ0(k)+δI)Θ0(k))S(k)AS(k)+Θ0(k)gk(A+Θ0(k)+δI)Θ0(k)+A)S(k),

and δ ≤ r/2, a similar reasoning shows that

(Θ0(k)gk(A+Θ0(k)+δI)Θ0(k))S(k)(nk/n){(Γ(k))1}S(k)S(k) vec ({Ψ0(k)(A+δI)}2H(k)Ψ0(k))S(k)+(2(nk/n))vec(A)S(k)+(nk/n)δ+{(Γ(k))1}S(k)S(k){(nk/n)vec(Ξ(k))S(k)+ρnvec(Ǔ1(k))S(k)+ρnρ2vec ((Ǔ2(k))S(k))}r+δ4+r2+2r+δ4r.

Thus, the inequality ‖B2 ≤ ‖B∞/∞ for B = BT implies that

|λ1({Θ0(k)gk(A+Θ0(k)+δI)Θ0(k)}S(k)I)|λ1({Θ0(k)gk(A+Θ0(k)+δI)Θ0(k)}S(k))2+1λ1({Θ0(k)gk(A+Θ0(k)+δI)Θ0(k)}S(k))/+14dr+1.

Hence hk(A) ≥ 1/(8dr + 2) for every A ∈ 𝒟k.

Now (30) is further bounded by r:

r{1nknhk(A)4}+nkn{1+hk(A)4}δr{1nknhk(A)4}+nkn{1+hk(A)4}r10(4dr+1)r{1nknhk(A)4}+nkn{1+hk(A)4}hk(A)r5rnknhk(A)hk2(A)20rr.

Here we used the fact that δ ≤ r/{10(4dr + 1)} and 1/(8dr + 2) ≤ hk(A) < 1. Thus, ‖(fk(A))S(k)r.

Since (fk(A))(S(k))c = 0 by definition, all the conditions for the fixed point theorem are established. This completes the proof.

We are now ready to prove Theorem 2. Note that Condition 7 implies that

$\rho_n<\min\Bigl\{\frac{\min_k\pi_k}{72d\kappa_\Gamma}\min\Bigl\{\frac{1}{\kappa_\Psi},\ \frac{1}{\kappa_\Psi^3\kappa_\Gamma},\ \frac{\min_k\pi_k}{56\kappa_\Psi^3\kappa_\Gamma}\alpha\Bigr\},\ \frac{c_8}{6},\ \frac{c_9\min_kd_k}{12}\Bigr\}.$
Proof of Theorem 2

We prove that the oracle estimator Θ̌ρn satisfies (I) the model selection consistency and (II) the KKT conditions of the original problem (3) with (Θ̌ρn, Ũ1, Ũ2). The model selection consistency of Θ̂ρn = Θ̌ρn then follows by the uniqueness of the solution to the original problem. The following discussion is on the event that mink πk/2 ≤ nk/n, k = 1, …, K, and maxk‖Ξ(k) ≤ α/8. Note that this event has probability approaching 1 by Lemmas 4 and 5.

First we obtain an ℓ-bound of the error of the oracle estimator. Note that by Condition 7 and the fact that α ∈ [0, 1)

α8+1+ρ2L21/2α8+1+α24(2α)3.

Thus, it follows from Condition 7 that

4minkπkκΓ(Ξ(k)+ρn+ρnρ2L21/2)<12κΓminkπkminkπk72dκΓmin {1κΨ,1κΨ3κΓ}=16d max{κΨ,κΨ3κΓ}.

Because (Θ0(k)Θ0(k))S(k)S(k) is invertible by Condition 5, we can apply Lemma 11 to obtain Θˇρn(k)Θ0(k)(6/minkπk)κΓ(Ξ(k)+ρn+ρnρ2L21/2) with probability approaching 1.

As a consequence of the ℓ-bound, Θ̌ρn,ij ≠ 0 for (i, j) ∈ S, because Θˇρn(k)Θ0(k)3ρnc8/2<mink=1,,K,ij|θ0,ij(k)| by Conditions 6 and 7. This establishes the model selection consistency of the oracle estimator.

Next, we show that the Oracle estimator satisfies the KKT condition of the original problem (3). As the first step, we prove Ũ1,ij(k)Θˇρn(k) for every i, j, k with probability approaching 1. Since Θ̌ρn,ij ≠ 0 for (i, j) ∈ S with probability approaching 1, Ũ1,ij(k)=Ǔ1,ij(k) for (i, j) ∈ S(k) by construction. For (i, j) ∈ (S(k))c, we need to prove |Ũ1,ij(k)|<1 for every i, j, k. To this end, it suffices to verify that R(k)(Θˇρn(k)Θ0(k))α/8 and apply Lemma 8. Applying Lemma 9 with Θˇρn(k)Θ0(k)(6/minkπk)κΓ(Ξ(k)+ρn+ρnρ2L21/2) and Condition 7 gives

R(k)(Θˇρn(k)Θ0(k))32dκΨ3Θˇρn(k)Θ0(k)232dκΨ3324κΓ2minkπk2ρn2486dκΨ3κΓ2minkπk2{minkπk72dκΓminkπk56κΨ3κΓα}ρnα8α.

Next, we prove that Ũ2,ijΘˇρn,ijLΘˇρn,ij for every (i, j). For (i, j) with ω0,ij(k)0 for all k = 1, …, K, Ũ2,ij=ǓρnΘˇρn,ijLΘˇρn,ij. For (i, j) with Ω0,ij = 0, Ũ2,ij=0Θˇρn,ijLΘˇρn,ij by Lemma 6. For (i, j) with Ω0,ij ≠ 0 and ω0,ij(k)=0 for some k′,

Ũ2,ij=LΘˇρn,ij/Θˇρn,ijLΘˇρn,ijΘˇρn,ijLΘˇρn,ij

if LΘ̌ρn,ij ≠ 0. To see LΘ̌ρn,ij ≠ 0 holds with probability approaching 1, let (k, k′) ∈ S with kk′ such that Θ0,ij(k)/dkΘ0,ij(k)/dk0. This pair (k, k′) exists by Condition 6 and the assumption LΘ0,ij ≠ 0. We assume without loss of generality θ0,ij(k)/dkθ0,ij(k)/dk>0. Since Θˇρn(k)Θ0(k)3ρnc9minkdk/12, it follows from Condition 7 that

θˇρn,ij(k)dkθˇρn,ij(k)dkθ0,ij(k)dkθ0,ij(k)dk3ρn(1dk+1dk)c93ρn(maxWk,k01dk+1dk)12c9.

Hence, Θˇρn,ijTLΘˇρn,ijWkkc92/4>0 or LΘ̌ρn,ij ≠ 0.

Finally, we show that Equation (27) for the KKT condition holds. For the (i, j)-element of the equation with Ω0, ij = 0, this equation hold by construction for every k = 1, …, K. For the (i, j)-element with ω0,ij(k)0 for every k = 1, …, K, the equation holds for every k = 1, …, K, because it is the equation for the KKT condition of the corresponding element in a restricted problem (21). For (i, j)-element with Ω0, ij ≠ 0 and ω0,ij(k)=0 for some k′, note that Θ̌ρn,ij ≠ 0 with probability approaching 1 and that the rearrangement in Θij and corresponding exchange of rows and columns of L for each i, j does not change the original and restricted optimization problems (3) and (21). Thus, with the appropriate rearrangement of elements and exchange of rows and columns, Ũ2,ij(k) with ω0,ij(k)0 is in fact Ǔ2,ij(k). Thus for such k the equation holds because of the corresponding KKT condition in the restricted problem (21). For other k, the equation holds by construction. We thus conclude the oracle estimator satisfies the KKT condition of the original problem (3). This completes the proof.

Proof of Corollary 1

In the proof of Theorem 2, the ℓ-bound of the error yields

$\bigl\|\hat\Theta_{\rho_n}^{(k)}-\Theta_0^{(k)}\bigr\|_\infty=O_P(\kappa_\Gamma\rho_n).$

Note that if one of the two matrices A and B is diagonal, then $\|AB\|_\infty\le\|A\|_\infty\|B\|_\infty$. Thus, we can proceed in the same way as in the proof of Theorem 2 of Rothman et al. [26] to conclude that

$\bigl\|\hat\Omega_n^{(k)}-\Omega_0^{(k)}\bigr\|_\infty=O_P(\kappa_\Gamma\rho_n).$

The result follows from a similar argument to the proof of Corollary 3 in Ravikumar et al. [25].

Proof of Corollary 2

It follows from Condition 8 and Lemma 1 applied to Θ̌ρn that $\|\check\Theta_{\rho_n}^{(k)}-\Theta_0^{(k)}\|_2\le1/(2\lambda_\Theta)$. Then we can apply Lemma 10 instead of Lemma 9. The rest is similar to the proof of Theorem 2.

Hierarchical Clustering

For simplicity, we prove Theorem 3 for the case of K = 2; the proof can be easily generalized to K > 2. Let X and Y be random vectors from the first and second subpopulation, respectively. Suppose that X = (X1, …, Xp)T ~ N(μX, ΣX) with μX = (μ1,X, …, μp,X) and spectral decomposition $\Sigma_X=Q_X\Lambda_XQ_X^T$, where λ1,X, …, λp,X are the eigenvalues of ΣX, and that Y ~ N(μY, ΣY) with μY = (μ1,Y, …, μp,Y) and spectral decomposition $\Sigma_Y=Q_Y\Lambda_YQ_Y^T$, where λ1,Y, …, λp,Y are the eigenvalues of ΣY. Define Z = X − Y = (Z1, …, Zp)T ~ N(μZ, ΣZ) with μZ = (μ1,Z, …, μp,Z) and spectral decomposition $\Sigma_Z=Q_Z\Lambda_ZQ_Z^T$, where λ1,Z, …, λp,Z are the eigenvalues of ΣZ. Let $\tilde X=(\tilde X_1,\ldots,\tilde X_p)^T=\Lambda_X^{1/2}Q_X^T\Sigma_X^{-1/2}X$, $\tilde Y=(\tilde Y_1,\ldots,\tilde Y_p)^T=\Lambda_Y^{1/2}Q_Y^T\Sigma_Y^{-1/2}Y$, and $\tilde Z=(\tilde Z_1,\ldots,\tilde Z_p)^T=\Lambda_Z^{1/2}Q_Z^T\Sigma_Z^{-1/2}Z$. Then X̃ ~ N(μ̃X, ΛX), Ỹ ~ N(μ̃Y, ΛY) and Z̃ ~ N(μ̃Z, ΛZ), where

$\tilde\mu_X=(\tilde\mu_{1,X},\ldots,\tilde\mu_{p,X})^T\equiv\Lambda_X^{1/2}Q_X^T\Sigma_X^{-1/2}\mu_X,$
$\tilde\mu_Y=(\tilde\mu_{1,Y},\ldots,\tilde\mu_{p,Y})^T\equiv\Lambda_Y^{1/2}Q_Y^T\Sigma_Y^{-1/2}\mu_Y,$
$\tilde\mu_Z=(\tilde\mu_{1,Z},\ldots,\tilde\mu_{p,Z})^T\equiv\Lambda_Z^{1/2}Q_Z^T\Sigma_Z^{-1/2}\mu_Z.$

Let also

$\mu_{\tilde X}^2=\|\tilde\mu_X\|^2/p,\qquad\mu_{\tilde Y}^2=\|\tilde\mu_Y\|^2/p,\qquad\mu_{\tilde Z}^2=\|\tilde\mu_Z\|^2/p,$
$\bar\lambda_X=\sum_{k=1}^p\lambda_{k,X}/p,\qquad\bar\lambda_Y=\sum_{k=1}^p\lambda_{k,Y}/p,\qquad\bar\lambda_Z=\sum_{k=1}^p\lambda_{k,Z}/p.$
Lemma 12 (Lemma 1 of Borysov et al. [1])

Let W1, …, Wp be independent non-negative random variables with finite second moments. Let $S=\sum_{j=1}^p(W_j-\mathbb EW_j)$ and $\upsilon=\sum_{j=1}^p\mathbb EW_j^2$. Then for any t > 0, $P(S\le-t)\le\exp\bigl(-t^2/(2\upsilon)\bigr)$.

The following lemma is an extension of Lemma 2 in Borysov et al. [1].

Lemma 13

Let $0<a<\mu_{\tilde X}^2+\bar\lambda_X$. Then

$P\bigl(\|X\|^2<ap\bigr)\le\exp\Biggl(-\frac{p^2\bigl(\mu_{\tilde X}^2+\bar\lambda_X-a\bigr)^2}{2\sum_{j=1}^p\bigl(\tilde\mu_{j,X}^4+6\tilde\mu_{j,X}^2\lambda_{j,X}+3\lambda_{j,X}^2\bigr)}\Biggr).$
Proof

Note that the elements of X̃ are independent and that $\tilde X_j\sim N(\tilde\mu_{j,X},\lambda_{j,X})$. Thus, we have

$\mathbb E\tilde X_j^2=\tilde\mu_{j,X}^2+\lambda_{j,X},\qquad\mathrm{Var}\bigl(\tilde X_j^2\bigr)=2\bigl(\lambda_{j,X}^2+2\tilde\mu_{j,X}^2\lambda_{j,X}\bigr),$
$\mathbb E\tilde X_j^4=\tilde\mu_{j,X}^4+6\tilde\mu_{j,X}^2\lambda_{j,X}+3\lambda_{j,X}^2.$

Applying Lemma 12 with $W_j=\tilde X_j^2$, since $P(\|X\|^2<ap)=P(\|\tilde X\|^2<ap)$, we get

$P\bigl(\|X\|^2<ap\bigr)=P\Biggl[\sum_{j=1}^p\bigl(\tilde X_j^2-\tilde\mu_{j,X}^2-\lambda_{j,X}\bigr)<-p\bigl(\mu_{\tilde X}^2+\bar\lambda_X-a\bigr)\Biggr]\le\exp\Biggl(-\frac{p^2\bigl(\mu_{\tilde X}^2+\bar\lambda_X-a\bigr)^2}{2\sum_{j=1}^p\bigl(\tilde\mu_{j,X}^4+6\tilde\mu_{j,X}^2\lambda_{j,X}+3\lambda_{j,X}^2\bigr)}\Biggr).$

The following is an extension of Lemma 3 in Borysov et al. [1].

Lemma 14

Let $a>\bar\lambda_X+\mu_{\tilde X}^2$. Then

$P\bigl(\|X\|^2>ap\bigr)\le\exp\Biggl(-\frac12\Biggl(p+\sum_{j=1}^p\frac{a}{\lambda_{j,X}}-\sum_{j=1}^p\sqrt{1+\frac{2a}{\lambda_{j,X}}}\Biggr)\Biggr).$
Proof

By Markov’s inequality, for t>j=1pλj,X+μ˜j,X2, we get

P(j=1pXj2t)=P(j=1pX˜j2t)=P[exp (j=1pγX˜j2γλj,Xγμ˜j,X2)exp (γtγj=1p(λj,x+μ˜j,X2))]exp (γ(tj=1p(μ˜j,X2+λj,X)))j=1p𝔼 exp((γλj,X)X˜j2/λj,x)=exp (γ(tj=1pμ˜j,X2))j=1pexp (γλj,X12 log(12γλj,X))×exp (γμ˜j,X212γλj,X).

Since $-\log(1-u)-u\le u^2/\{2(1-u)\}$ for all u ∈ (0, 1) (see page 28 of Boucheron et al. [2]), the above display is bounded above by

exp (γ(ti=1pμ˜i,X2))i=1pexp (γ2λi,X212γλi,X) exp (γμ˜i,X212γλi,X).

Using the following result from Boucheron et al. [2],

$\inf_{\gamma\in(0,1/c)}\Bigl\{\frac{\upsilon\gamma^2}{2(1-c\gamma)}-t\gamma\Bigr\}=-\frac{\upsilon}{c^2}\,h\Bigl(\frac{ct}{\upsilon}\Bigr),$

where $h(u)=1+u-\sqrt{1+2u}$, u > 0, we further obtain the upper bound

exp (γi=1pμ˜i,X2)i=1pexp (12(1+tλi,Xp1+2tλi,Xp)) exp (γμ˜i,X212γλi,X).

Taking γ ↓ 0, the upper bound becomes

exp (12(p+i=1ptλi,Xpi=1p1+2tλi,Xp)).

Choosing t = ap, we have

P(i=1pX˜i2ap)exp (12(p+i=1paλi,Xi=1p1+2aλi,X)).

Note that $f(u)=(1+2u)^{1/2}-u\le1$ for u ≥ 0, because f(0) = 1 and f is decreasing for u > 0. Thus, $P\bigl(\sum_{i=1}^p\tilde X_i^2\ge ap\bigr)\to0$ as p → ∞.

Proof of Theorem 3

For simplicity, we present the proof for the case of K = 2; the proof can be easily generalized to K > 2. Let n1 and n2 be the sample sizes for the first and second subpopulations, respectively. Define

$E_1=\bigl\{\max_{i,j}\|X_i-X_j\|<\min_{k,l}\|X_k-Y_l\|\bigr\},\qquad E_3=\bigl\{\max_{i,j}\|X_i-X_j\|^2<ap\bigr\},$
$E_2=\bigl\{\max_{i,j}\|Y_i-Y_j\|<\min_{k,l}\|X_k-Y_l\|\bigr\},\qquad E_4=\bigl\{\max_{i,j}\|Y_i-Y_j\|^2<ap\bigr\},$
$E_5=\bigl\{\min_{k,l}\|X_k-Y_l\|^2>ap\bigr\},$

for a fixed a > 0 satisfying the assumption. The intersection E1 ∩ E2 is contained in the event that the hierarchical clustering joins the two subpopulations only in the last step. The intersection E3 ∩ E4 ∩ E5 is in turn contained in E1 ∩ E2; in other words, $P\bigl((E_1\cap E_2)^c\bigr)\le P(E_3^c)+P(E_4^c)+P(E_5^c)$. Thus, it suffices to show that $P(E_3^c)+P(E_4^c)+P(E_5^c)\to0$ as n, p → ∞.

For E3c and E4c we have by Lemma 14 that

P(E3c)i,jnP(XiXj2>ap)=n1(n11)2P(X1X22>ap)n1(n11)2 exp (12(p+l=1pa2λl,Xl=1p1+aλl,X))exp(12(p+l=1pa2λl,Xl=1p1+aλl,X)+2 log n1)= exp (p2(1+1pl=1pa2λl,X1pl=1p1+aλl,X+4 log n1p))

and that

P(E4c)exp (p2(1+1pl=1pa2λl,Y1pl=1p1+aλl,Y+4log n2p)).

for a satisfying a > 2 max{λ̄X, λ̄Y}.

Note that log nk/p → 0, k = 1, 2, as n1, n2, p → ∞. Moreover, $1+x-\sqrt{1+2x}\ge0$ for x > 0. Thus, $P(E_3^c)\to0$ and $P(E_4^c)\to0$ as n1, n2, p → ∞. For $E_5^c$, we have by Lemma 13 that

P(E5c)i,jP(XiYj2<ap)n1n2P(X1Y12<ap)exp(p2(μZ˜2+λ¯Za)22l=1p(μ˜i,Z4+6μ˜l,Z2λl,Z+3λl,Z2)+log n1n2)

for $a<\mu_{\tilde Z}^2+\bar\lambda_Z$. Given the assumption that c10 ≤ λj,X ≤ c11, c10 ≤ λj,Y ≤ c11, and max{|μj,X|, |μj,Y|} ≤ c11 for j = 1, 2, …, we thus get $P(E_5^c)\to0$ as n1, n2, p → ∞.

Since $2\bar\lambda_X-\lambda_{p,X}-\lambda_{p,Y}\ge2\bar\lambda_X-\bar\lambda_Z$ and $2\bar\lambda_Y-\lambda_{p,X}-\lambda_{p,Y}\ge2\bar\lambda_Y-\bar\lambda_Z$, the assumption that $\mu_{\tilde Z}^2>2\min\{\bar\lambda_X,\bar\lambda_Y\}-\lambda_{p,X}-\lambda_{p,Y}$ implies that there exists an a such that $a<\mu_{\tilde Z}^2+\bar\lambda_Z$ and $a>2\max\{\bar\lambda_X,\bar\lambda_Y\}$. This completes the proof.
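The event analyzed in this proof, namely that the two subpopulations are merged only in the final step of the dendrogram, can be examined empirically. The sketch below simulates two Gaussian subpopulations and applies complete-linkage hierarchical clustering, checking whether cutting the tree into two clusters recovers the subpopulation labels. The simulation settings (linkage choice, dimensions, and mean separation) are illustrative assumptions only and need not match the conditions of Theorem 3 exactly.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
n1, n2, p = 20, 25, 200

# Two subpopulations whose mean separation grows with the dimension p.
X = rng.normal(loc=0.0, scale=1.0, size=(n1, p))
Y = rng.normal(loc=1.0, scale=1.0, size=(n2, p))
data = np.vstack([X, Y])
labels = np.array([1] * n1 + [2] * n2)

Z = linkage(data, method="complete", metric="euclidean")
clusters = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram into two groups

# Agreement (up to label permutation) indicates that the two subpopulations
# were joined only at the last merge of the dendrogram.
agree = max(np.mean(clusters == labels), np.mean(clusters == (3 - labels)))
print(f"agreement with true subpopulations: {agree:.2f}")
```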

Contributor Information

Takumi Saegusa, Department of Mathematics, University of Maryland, College Park, MD 20742 USA.

Ali Shojaie, Department of Biostatistics, University of Washington, Seattle, WA 98195 USA.

References

  • 1. Borysov Petro, Hannig Jan, Marron JS. Asymptotics of hierarchical clustering for growing dimension. Journal of Multivariate Analysis. 2014;124:465–479.
  • 2. Boucheron Stéphane, Lugosi Gábor, Massart Pascal. Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press; 2013.
  • 3. Boyd Stephen, Parikh Neal, Chu Eric, Peleato Borja, Eckstein Jonathan. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning. 2011;3(1):1–122.
  • 4. Cai Tony, Liu Weidong, Luo Xi. A constrained ℓ1 minimization approach to sparse precision matrix estimation. J. Amer. Statist. Assoc. 2011;106(494):594–607.
  • 5. Chung Fan RK. Spectral graph theory. Vol. 92. American Mathematical Soc.; 1997.
  • 6. Danaher Patrick, Wang Pei, Witten Daniela M. The joint graphical lasso for inverse covariance estimation across multiple classes. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2014;76(2):373–397. doi: 10.1111/rssb.12033.
  • 7. d'Aspremont Alexandre, Banerjee Onureena, Ghaoui Laurent El. First-order methods for sparse covariance selection. SIAM J. Matrix Anal. Appl. 2008;30(1):56–66.
  • 8. Friedman Jerome, Hastie Trevor, Tibshirani Robert. Sparse inverse covariance estimation with the graphical lasso. Biostatistics. 2007;9(3):432–441. doi: 10.1093/biostatistics/kxm045.
  • 9. Guo Jian, Levina Elizaveta, Michailidis George, Zhu Ji. Joint estimation of multiple graphical models. Biometrika. 2011;98(1):1–15. doi: 10.1093/biomet/asq060.
  • 10. Huang Jian, Ma Shuangge, Li Hongzhe, Zhang Cun-Hui. The sparse Laplacian shrinkage estimator for high-dimensional regression. Ann. Statist. 2011;39(4):2021–2046. doi: 10.1214/11-aos897.
  • 11. Ideker Trey, Krogan Nevan J. Differential network biology. Molecular Systems Biology. 2012;8(1). doi: 10.1038/msb.2011.99.
  • 12. Jönsson Göran, Staaf Johan, Vallon-Christersson Johan, Ringnér Markus, Holm Karolina, Hegardt Cecilia, Gunnarsson Haukur, Fagerholm Rainer, Strand Carina, Agnarsson Bjarni A, et al. Genomic subtypes of breast cancer identified by array-comparative genomic hybridization display distinct molecular and clinical characteristics. Breast Cancer Research. 2010;12(3):1–14. doi: 10.1186/bcr2596.
  • 13. Kolar Mladen, Song Le, Xing Eric P. Sparsistent learning of varying-coefficient models with structural changes. Advances in Neural Information Processing Systems. 2009:1006–1014.
  • 14. Lauritzen Steffen L. Graphical models. Oxford University Press; 1996.
  • 15. Li Caiyan, Li Hongzhe. Variable selection and regression analysis for graph-structured covariates with an application to genomics. Ann. Appl. Stat. 2010;4(3):1498–1516. doi: 10.1214/10-AOAS332.
  • 16. Li Fan, Zhang Nancy R. Bayesian variable selection in structured high-dimensional covariate spaces with applications in genomics. Journal of the American Statistical Association. 2010;105(491):1202–1214.
  • 17. Liu F, Lozano AC, Chakraborty S, Li F. A graph Laplacian prior for variable selection and grouping. Biometrika. 2011;98(1):1–31.
  • 18. Liu Fei, Chakraborty Sounak, Li Fan, Liu Yan, Lozano Aurelie C, et al. Bayesian regularization via graph Laplacian. Bayesian Analysis. 2014;9(2):449–474.
  • 19. Meinshausen Nicolai, Bühlmann Peter. High-dimensional graphs and variable selection with the lasso. Ann. Statist. 2006;34(3):1436–1462.
  • 20. Negahban Sahand N, Ravikumar Pradeep, Wainwright Martin J, Yu Bin. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Stat. Sci. 2012;27(4):538–557.
  • 21. Negahban Sahand N, Ravikumar Pradeep, Wainwright Martin J, Yu Bin. Supplementary material for "A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers". Stat. Sci. 2012.
  • 22. Perou Charles M, Sørlie Therese, Eisen Michael B, van de Rijn Matt, Jeffrey Stefanie S, Rees Christian A, Pollack Jonathan R, Ross Douglas T, Johnsen Hilde, Akslen Lars A, et al. Molecular portraits of human breast tumours. Nature. 2000;406(6797):747–752. doi: 10.1038/35021093.
  • 23. Peterson Christine, Stingo Francesco C, Vannucci Marina. Bayesian inference of multiple Gaussian graphical models. Journal of the American Statistical Association. 2015;110(509):159–174. doi: 10.1080/01621459.2014.896806.
  • 24. Rapaport Franck, Zinovyev Andrei, Dutreix Marie, Barillot Emmanuel, Vert Jean-Philippe. Classification of microarray data using gene networks. BMC Bioinformatics. 2007;8. doi: 10.1186/1471-2105-8-35.
  • 25. Ravikumar Pradeep, Wainwright Martin J, Raskutti Garvesh, Yu Bin. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electron. J. Stat. 2011;5:935–980.
  • 26. Rothman Adam J, Bickel Peter J, Levina Elizaveta, Zhu Ji. Sparse permutation invariant covariance estimation. Electron. J. Stat. 2008;2:494–515.
  • 27. Sedaghat Nafiseh, Saegusa Takumi, Randolph Timothy, Shojaie Ali. Comparative study of computational methods for reconstructing genetic networks of cancer-related pathways. Cancer Informatics. 2014;13(Suppl 2):55–66. doi: 10.4137/CIN.S13781.
  • 28. Shojaie Ali, Michailidis George. Penalized principal component regression on graphs for analysis of subnetworks. In: Lafferty John D, Williams Christopher KI, Shawe-Taylor John, Zemel Richard S, Culotta Aron, editors. NIPS. Curran Associates, Inc.; 2010. pp. 2155–2163.
  • 29. Städler Nicolas, Bühlmann Peter, Van De Geer Sara. ℓ1-penalization for mixture regression models. Test. 2010;19(2):209–256.
  • 30. Tibshirani Robert. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological). 1996;58(1):267–288.
  • 31. Wang Yu-Xiang, Sharpnack James, Smola Alex, Tibshirani Ryan J. Trend filtering on graphs. arXiv preprint arXiv:1410.7690. 2014.
  • 32. Weinberger Kilian Q, Sha Fei, Zhu Qihui, Saul Lawrence K. Graph Laplacian regularization for large-scale semidefinite programming. Advances in Neural Information Processing Systems (NIPS). 2006:1489–1496.
  • 33. Yuan Ming. High dimensional inverse covariance matrix estimation via linear programming. J. Mach. Learn. Res. 2010;11:2261–2286.
  • 34. Yuan Ming, Lin Yi. Model selection and estimation in the Gaussian graphical model. Biometrika. 2007;94(1):19–35.
  • 35. Zhao Peng, Yu Bin. On model selection consistency of lasso. The Journal of Machine Learning Research. 2006;7:2541–2563.
  • 36. Zhao Peng, Rocha Guilherme, Yu Bin. The composite absolute penalties family for grouped and hierarchical variable selection. Annals of Statistics. 2009;37(6A):3468–3497.
  • 37. Zhao Sen, Shojaie Ali. A significance test for graph-constrained estimation. Biometrics (forthcoming). 2015. doi: 10.1111/biom.12418.
