Author manuscript; available in PMC: 2015 Oct 1.
Published in final edited form as: J Am Stat Assoc. 2014 May 13;109(508):1683–1696. doi: 10.1080/01621459.2014.921182

Table 3.

Average entropy loss (EL) and average quadratic loss (QL), with standard deviations in parentheses, over 100 simulations, for estimating multiple precision matrices in Example 3. Here "Smooth", "Lasso", "TLP", "Our-con", and "Ours" denote, respectively: estimation of the individual matrices by the kernel smoothing method proposed in [25]; estimation with the L1 sparseness penalty; estimation with the non-convex TLP penalty; the convex counterpart of our method, with the L1 penalty for sparseness and clustering; and our non-convex estimates obtained by solving (2) with penalty (6). The best performer in each setting is bold-faced.

| Set-up (p, L) | Method  | EL (n = 120)    | QL (n = 120)    | EL (n = 300)    | QL (n = 300)    |
|---------------|---------|-----------------|-----------------|-----------------|-----------------|
| (30, 4)       | Smooth  | **0.468(.042)** | **0.941(.097)** | **0.231(.056)** | **0.476(.034)** |
|               | Lasso   | 1.158(.062)     | 2.534(.175)     | 0.736(.036)     | 1.434(.086)     |
|               | TLP     | 1.546(.100)     | 3.625(.262)     | 0.575(.045)     | 1.301(.121)     |
|               | Our-con | 0.897(.066)     | 1.823(.166)     | 0.699(.038)     | 1.317(.085)     |
|               | Ours    | 0.501(.063)     | 1.143(.160)     | 0.247(.017)     | 0.524(.042)     |
| (200, 4)      | Smooth  | 6.882(.220)     | 13.26(.535)     | 2.578(.066)     | 4.843(.130)     |
|               | Lasso   | 10.37(.173)     | 21.92(.498)     | 5.449(.094)     | 10.84(.211)     |
|               | TLP     | 12.34(.202)     | 25.87(.560)     | 5.523(.153)     | 12.08(.365)     |
|               | Our-con | 6.091(.199)     | 12.34(.484)     | 4.625(.098)     | 8.625(.209)     |
|               | Ours    | **5.079(.265)** | **11.84(.658)** | **1.682(.038)** | **3.551(.096)** |
| (20, 30)      | Smooth  | 0.490(.021)     | 0.878(.042)     | 0.278(.008)     | 0.492(.015)     |
|               | Lasso   | 0.786(.020)     | 1.670(.052)     | 0.564(.012)     | 1.066(.027)     |
|               | TLP     | 0.987(.036)     | 2.454(.107)     | 0.355(.014)     | 0.819(.035)     |
|               | Our-con | 0.653(.023)     | 1.281(.055)     | 0.528(.012)     | 0.959(.027)     |
|               | Ours    | **0.317(.013)** | **0.730(.036)** | **0.183(.005)** | **0.391(.014)** |
| (10, 90)      | Smooth  | 0.230(.008)     | 0.409(.017)     | 0.115(.003)     | 0.203(.005)     |
|               | Lasso   | 0.318(.008)     | 0.694(.022)     | 0.205(.004)     | 0.398(.008)     |
|               | TLP     | 0.402(.012)     | 1.010(.043)     | 0.148(.004)     | 0.346(.011)     |
|               | Our-con | 0.240(.008)     | 0.487(.019)     | 0.180(.004)     | 0.335(.007)     |
|               | Ours    | **0.158(.005)** | **0.369(.014)** | **0.082(.002)** | **0.176(.005)** |
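The two loss functions in the table can be sketched as follows. This is a minimal illustration using the standard definitions of entropy (Stein) loss, EL(Ω̂, Ω) = tr(Ω̂Ω⁻¹) − log det(Ω̂Ω⁻¹) − p, and quadratic loss, QL(Ω̂, Ω) = tr((Ω̂Ω⁻¹ − I)²); the paper's exact conventions are not reproduced in this excerpt, so these formulas are an assumption.

```python
import numpy as np

def entropy_loss(omega_hat, omega):
    """Entropy (Stein) loss for a precision-matrix estimate.

    Standard definition (assumed, not taken from the paper):
    EL = tr(omega_hat @ inv(omega)) - log det(omega_hat @ inv(omega)) - p
    It is zero iff omega_hat equals the true precision matrix omega.
    """
    p = omega.shape[0]
    m = omega_hat @ np.linalg.inv(omega)
    # slogdet avoids overflow/underflow of the determinant for larger p
    _, logdet = np.linalg.slogdet(m)
    return np.trace(m) - logdet - p

def quadratic_loss(omega_hat, omega):
    """Quadratic loss (assumed definition): QL = tr((omega_hat @ inv(omega) - I)^2)."""
    p = omega.shape[0]
    m = omega_hat @ np.linalg.inv(omega) - np.eye(p)
    return np.trace(m @ m)

# Tiny check: a perfect estimate has zero loss under both criteria.
omega = np.eye(3)
omega_hat = np.diag([1.0, 1.2, 0.9])
el = entropy_loss(omega_hat, omega)   # > 0: estimate deviates from truth
ql = quadratic_loss(omega_hat, omega)  # = 0.2^2 + 0.1^2 = 0.05 here
```

In a simulation study such as Example 3, both losses would be computed for each of the L estimated precision matrices and averaged over replications, yielding the EL and QL columns above.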