. 2017 Jun 16;104(3):649–663. doi: 10.1093/biomet/asx030

Expandable factor analysis

Sanvesh Srivastava, Barbara E. Engelhardt, David B. Dunson
PMCID: PMC5793687  PMID: 29430037

Summary

Bayesian sparse factor models have proven useful for characterizing dependence in multivariate data, but scaling computation to large numbers of samples and dimensions is problematic. We propose expandable factor analysis for scalable inference in factor models when the number of factors is unknown. The method relies on a continuous shrinkage prior for efficient maximum a posteriori estimation of a low-rank and sparse loadings matrix. The structure of the prior leads to an estimation algorithm that accommodates uncertainty in the number of factors. We propose an information criterion to select the hyperparameters of the prior. Expandable factor analysis has better false discovery rates and true positive rates than its competitors across diverse simulation settings. We apply the proposed approach to a gene expression study of ageing in mice, demonstrating superior results relative to four competing methods.

Keywords: Expectation-maximization algorithm, Factor analysis, Shrinkage prior, Sparsity, Variable selection

1. Introduction

Factor analysis is a popular approach to modelling covariance matrices. Letting k, p and Ω denote the true number of factors, the number of dimensions and the p × p covariance matrix, respectively, factor models set Ω = ΛΛᵀ + Σ, where Λ is the p × k loadings matrix and Σ is a diagonal matrix of positive residual variances. To allow computation to scale to large p, Λ is commonly assumed to be of low rank and sparse. These assumptions imply that k ≪ p and that the number of nonzero loadings is small. A practical problem is that k and the locations of zeros in Λ are unknown. A number of Bayesian approaches exist to model this uncertainty in k and sparsity (Carvalho et al., 2008; Knowles & Ghahramani, 2011), but conventional approaches that rely on posterior sampling are intractable for large sample sizes n and dimensions p. Continuous shrinkage priors have been proposed that lead to computationally efficient sampling algorithms (Bhattacharya & Dunson, 2011), but the focus is on estimating Ω, with Λ treated as a nonidentifiable nuisance parameter. Our goal is to develop a computationally tractable approach for inference on Λ that models the uncertainty in k and the locations of zeros in Λ. To do this, we propose a novel shrinkage prior and a corresponding class of efficient inference algorithms for factor analysis.

Penalized likelihood methods provide computationally efficient approaches for point estimation of Λ and Σ. If k is known, then many such methods exist (Kneip & Sarda, 2011; Bai & Li, 2012). Sparse principal components analysis estimates a sparse Λ assuming Σ = σ²Iₚ, where Iₚ is the p × p identity matrix (Jolliffe et al., 2003; Zou et al., 2006; Shen & Huang, 2008; Witten et al., 2009). The assumptions of spherical residual covariance and known k are restrictive in practice. There are several approaches to estimating k. In econometrics, it is popular to rely on test statistics based on the eigenvalues of the empirical covariance matrix (Onatski, 2009; Ahn & Horenstein, 2013). It is also common to fit the model for different choices of k and choose the best value based on an information criterion (Bai & Ng, 2002). Recent approaches instead use the trace norm or the sum of column norms of Λ as a penalty in the objective function to estimate k (Caner & Han, 2014). Alternatively, Ročková & George (2016) use a spike-and-slab prior to induce sparsity in Λ, with an Indian buffet process allowing uncertainty in k; a parameter-expanded expectation-maximization algorithm is then used for estimation.

We propose a Bayesian approach for estimation of a low-rank and sparse Λ, allowing k to be unknown. Our approach relies on a novel multi-scale generalized double Pareto prior, inspired by the generalized double Pareto prior for variable selection (Armagan et al., 2013) and by the multiplicative gamma process prior for loadings matrices (Bhattacharya & Dunson, 2011). The latter approach focuses on estimation of Ω, but does not explicitly estimate k or a sparse Λ. The proposed prior leads to an efficient and scalable computational algorithm for obtaining a sparse estimate of Λ with appealing practical and theoretical properties. We refer to our method as expandable factor analysis because it allows the number of factors to increase as more dimensions are added and as the sample size n increases.

Expandable factor analysis combines the representational strengths of Bayesian approaches with the computational benefits of penalized likelihood methods. The multi-scale generalized double Pareto prior is concentrated near low-rank matrices; in particular, high probability is placed around matrices whose rank is bounded above. A local linear approximation of the penalty imposed by the prior equals a sum of weighted ℓ1 penalties on the elements of Λ. This facilitates maximum a posteriori estimation of a sparse Λ using an extension of the coordinate descent algorithm for weighted ℓ1-regularized regression (Zou & Li, 2008). The hyperparameters of our prior are selected using a version of the Bayesian information criterion for factor analysis. Under the theoretical set-up for high-dimensional factor analysis in Kneip & Sarda (2011), we show that the estimates of the loadings are consistent and that the estimates of the nonzero loadings are asymptotically normal.

2. Expandable factor analysis

2.1. Factor analysis model

Consider the usual factor model. Let Y ∈ ℝⁿˣᵖ, Z ∈ ℝⁿˣᵏ and E ∈ ℝⁿˣᵖ be the mean-centred data matrix, latent factor matrix and residual error matrix, respectively, where Z and k are unknown. We use index i = 1, …, n for samples, index d = 1, …, p for dimensions, and index j = 1, …, k for factors. If Σ = diag(σ_1^2, …, σ_p^2) is the residual error variance matrix, then the factor model for y_{id} is

y_{id} = \sum_{j=1}^{k} z_{ij} \lambda_{dj} + e_{id}, \quad z_{ij} \sim N(0, 1), \quad e_{id} \mid \sigma_d^2 \sim N(0, \sigma_d^2), \quad (1)

where z_{ij} and e_{id} are independent (i = 1, …, n; d = 1, …, p; j = 1, …, k). Equivalently,

y_i = \Lambda z_i + e_i, \quad y_i = (y_{i1}, \ldots, y_{ip})^{\mathrm{T}}, \quad z_i = (z_{i1}, \ldots, z_{ik})^{\mathrm{T}}, \quad e_i = (e_{i1}, \ldots, e_{ip})^{\mathrm{T}} \quad (2)

for sample i = 1, …, n, where Λ = (λ_{dj}) is the p × k loadings matrix. Similarly, model (1) reduces to regression in the space of latent factors,

y_d = Z \lambda_d + e_d, \quad y_d = (y_{1d}, \ldots, y_{nd})^{\mathrm{T}}, \quad \lambda_d = (\lambda_{d1}, \ldots, \lambda_{dk})^{\mathrm{T}}, \quad e_d = (e_{1d}, \ldots, e_{nd})^{\mathrm{T}} \quad (3)

for dimension d = 1, …, p. Unlike usual regression, the design matrix Z in (3) is unknown.
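Models (1)–(3) can be illustrated with a small simulation. The sketch below, in Python/NumPy rather than the paper's R, uses hypothetical sizes and loadings and checks that the empirical covariance of data drawn from (2) approaches ΛΛᵀ + Σ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 20000, 6, 2   # illustrative sizes, not the paper's settings
# Sparse p x k loadings matrix: each column loads on a disjoint block.
Lam = np.array([[2.0, 0.0], [2.0, 0.0], [2.0, 0.0],
                [0.0, 1.5], [0.0, 1.5], [0.0, 1.5]])
sigma2 = np.linspace(0.5, 1.0, p)          # heteroscedastic residual variances

Z = rng.standard_normal((n, k))            # latent factors, z_ij ~ N(0, 1)
E = rng.standard_normal((n, p)) * np.sqrt(sigma2)
Y = Z @ Lam.T + E                          # n x p data matrix, model (2)

Omega = Lam @ Lam.T + np.diag(sigma2)      # implied covariance Lambda Lambda^T + Sigma
S = Y.T @ Y / n                            # empirical covariance
err = np.abs(S - Omega).max()              # shrinks as n grows
```

With n this large, every entry of S is within sampling error of the corresponding entry of Ω.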

Penalized estimation of Λ is typically based on (2) or (3). The loss is estimated as the regression-type squared error after imputing Z using the eigendecomposition of the empirical covariance matrix or an expectation-maximization algorithm. The choice of penalty on Λ presents a variety of options. If the goal is to select factors that affect any of the p variables, then the sum of column norms of Λ can be used as a penalty; a recent example is the group bridge penalty, where k̄ is an upper bound on k. The selected factors correspond to the nonzero columns of the estimated Λ (Caner & Han, 2014). To further obtain elementwise sparsity, a nonconcave variable selection penalty can be applied to the elements of Λ. The estimate of k depends on the choice of criterion for selecting the tuning parameters (Hirose & Yamamoto, 2015).

Our expandable factor analysis differs from this typical approach in several important ways. We start from a Bayesian perspective, and place a prior on Λ that is structured to allow uncertainty in k while shrinking towards loadings matrices with many zeros and low rank. If k̄ is an upper bound on k, then the prior is designed to automatically allow a slow rate of growth in k as the number of dimensions p increases, by concentrating in neighbourhoods of matrices with rank bounded above by k̄. To our knowledge, this is a unique feature of our approach, justifying its name. Expandability is an appealing characteristic, as more factors should be needed to accurately model the dependence structure as the dimension of the data increases.

2.2. Multi-scale generalized double Pareto prior

We would like to design a prior on Λ such that maximum a posteriori estimates of Λ have the following four characteristics:

  • (a) the estimate of a loading with large magnitude should be nearly unbiased;

  • (b) a thresholding rule, such as soft-thresholding, is used to estimate the loadings so that loadings estimates with small magnitudes are automatically set to zero;

  • (c) the estimator of any loading is continuous in the data to limit instability; and

  • (d) the ℓ1-norm of the jth column of the estimated Λ does not increase as j increases.

The first three properties are related to nonconcave variable selection (Fan & Li, 2001). Properties (b) and (d) together ensure existence of a column index after which all estimated loadings are identically zero. Automatic relevance determination and multiplicative gamma process priors satisfy (d) but fail to satisfy (b). No existing prior for loadings matrices satisfies properties (a)–(d) simultaneously (Carvalho et al., 2008; Bhattacharya & Dunson, 2011; Knowles & Ghahramani, 2011).

In order to satisfy these four properties and obtain a computationally efficient inference procedure, it is convenient to start with a prior for a loadings matrix Λ having infinitely many columns; in practice, all of the elements will be estimated to be zero after a finite column index that corresponds to the estimated number of factors. Bhattacharya & Dunson (2011) showed that the set of loadings matrices Λ that leads to well-defined covariance matrices is

C = \Big\{ \Lambda : \max_{1 \le d \le p} \sum_{j=1}^{\infty} \lambda_{dj}^2 < \infty \Big\}.

We propose a multi-scale generalized double Pareto prior for Λ having support on C. This prior is constructed to concentrate near low-rank matrices, placing high probability around matrices whose rank is bounded above.

The multi-scale generalized double Pareto prior on Λ specifies independent generalized double Pareto priors on λ_{dj} (d = 1, …, p; j = 1, 2, …) so that the density of Λ is

p_{\mathrm{mgdP}}(\Lambda) = \prod_{d=1}^{p} \prod_{j=1}^{\infty} p_{\mathrm{gdP}}(\lambda_{dj} \mid \alpha_j, \eta_j), \quad p_{\mathrm{gdP}}(\lambda_{dj} \mid \alpha_j, \eta_j) = \frac{\alpha_j}{2\eta_j} \Big( 1 + \frac{|\lambda_{dj}|}{\eta_j} \Big)^{-(\alpha_j + 1)}, \quad (4)

where p_{gdP} is the generalized double Pareto density with parameters α_j and η_j (Armagan et al., 2013). This prior on Λ ensures that properties (a)–(c) are satisfied. Property (d) is satisfied by choosing parameter sequences α_j and η_j (j = 1, 2, …) such that two conditions hold: the prior measure on C has density p_{mgdP} in (4), and the prior has C as its support. These conditions hold for the forms of α_j and η_j (j = 1, 2, …) specified by the following lemma.
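The generalized double Pareto density in (4) is simple to evaluate directly. The following sketch, with illustrative parameter values rather than the hyperparameter sequences of Lemma 2, checks numerically that the density integrates to one and decays polynomially in |λ|.

```python
import numpy as np

def dgdp(lam, alpha, eta):
    """Generalized double Pareto density of (4):
    alpha / (2 eta) * (1 + |lam| / eta)^-(alpha + 1)."""
    return alpha / (2.0 * eta) * (1.0 + np.abs(lam) / eta) ** (-(alpha + 1.0))

# Numerical check that the density integrates to one; the tail mass beyond
# |lam| = 500 is of order (1 + 500)^(-alpha) and is negligible here.
grid = np.linspace(-500.0, 500.0, 2_000_001)
mass = np.trapz(dgdp(grid, alpha=3.0, eta=1.0), grid)
# Unlike the Laplace density, the tails decay polynomially, which limits
# the bias for loadings with large magnitudes (property (a)).
```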

Lemma 1

If Inline graphic, Inline graphic (Inline graphic) and Inline graphic, then Inline graphic.

The proof is given in the Supplementary Material, along with the other proofs.

As in Bhattacharya & Dunson (2011), we truncate to a finite number of columns for tractable computation. This truncation is accomplished by mapping Inline graphic to Inline graphic, with Inline graphic retaining the first Inline graphic columns of Inline graphic. The choice of Inline graphic is such that Inline graphic is arbitrarily close to Inline graphic, where distance between Inline graphic and Inline graphic is measured using the Inline graphic-norm of their elementwise difference. In addition, for computational convenience, we assume that the hyperparameters Inline graphic and Inline graphic (Inline graphic) are analytic functions of the parameters Inline graphic and Inline graphic, respectively, with these functions satisfying the conditions of Lemma 1.

The following lemma defines the forms of Inline graphic and Inline graphic (Inline graphic) in terms of Inline graphic and Inline graphic.

Lemma 2

If Inline graphic, Inline graphic, Inline graphic and Inline graphic for Inline graphic, then Inline graphic, where Inline graphic has density Inline graphic in (4) with hyperparameters Inline graphic and Inline graphic (Inline graphic). Furthermore, given Inline graphic, there exists a positive integer Inline graphic for every Inline graphic such that for all Inline graphic, Inline graphic, Inline graphic (Inline graphic) and Inline graphic, we have that Inline graphic where Inline graphic.

The penalty imposed on the loadings by the prior grows exponentially with the column index j. This property of the prior ensures that all the loadings are estimated to be zero after a finite column index, which corresponds to the estimated number of factors.
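To see how a column penalty that grows with the index truncates the number of factors, consider the toy calculation below. The geometric weights are purely illustrative (they are not the hyperparameter forms of Lemma 2), but they show that shrinkage with weights increasing in the column index zeroes every column after a finite index.

```python
import numpy as np

# Column norms of a hypothetical unpenalized fit: decreasing signal strength.
ls_norms = np.array([3.0, 2.0, 1.2, 0.6, 0.3, 0.1])
# Illustrative penalty weights growing geometrically with the column index j.
weights = 0.2 * 2.0 ** np.arange(6)            # 0.2, 0.4, 0.8, 1.6, 3.2, 6.4
shrunk = np.maximum(ls_norms - weights, 0.0)   # soft-thresholded column norms
k_hat = int(np.count_nonzero(shrunk))          # columns surviving shrinkage
```

Because the weights eventually dominate any bounded column norm, the surviving columns form a finite prefix, and their count plays the role of the estimated number of factors.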

3. Estimation algorithm

3.1. Expectation-maximization algorithm

We rely on an adaptation of the expectation-maximization algorithm to estimate Λ and Σ. Choose a positive integer k̄ of order Inline graphic as the upper bound on k; the estimate of the number of factors will be less than or equal to k̄. The results are not sensitive to the choice of k̄, due to the properties of the multi-scale generalized double Pareto prior, provided k̄ is sufficiently large. If k̄ is too small, then the estimated number of factors will be equal to the upper bound, suggesting that this bound should be increased. Given k̄, define α_j(δ) and η_j(ρ) (j = 1, …, k̄) as in Lemma 2, with δ and ρ being prespecified constants.

We present the objective function as a starting point for developing the coordinate descent algorithm and provide derivations in the Supplementary Material. Let Inline graphic and Inline graphic, where the superscript (t) denotes an estimate at iteration t and expectations are conditional on Y, Λ^{(t)} and Σ^{(t)} based on (1). The objective function for parameter updates in iteration t + 1 is

\operatorname*{arg\,min}_{\lambda_d, \sigma_d^2 \; (d = 1, \ldots, p)} \; \sum_{d=1}^{p} \left[ \frac{n+2}{2np\bar{k}} \log \sigma_d^2 + \frac{\| w_d^{(t)} - X^{(t)} \lambda_d \|^2 - w_d^{(t)\mathrm{T}} w_d^{(t)} + (Y^{\mathrm{T}} Y / n)_{dd}}{2p\bar{k}\sigma_d^2} + \sum_{j=1}^{\bar{k}} \frac{\alpha_j(\delta) + 1}{np\bar{k}} \log\Big\{ 1 + \frac{|\lambda_{dj}|}{\eta_j(\rho)} \Big\} \right], \quad (5)

where Inline graphic and Inline graphic (Inline graphic).

3.2. Estimating parameters using a convex objective function

The objective (5) is written as a sum of p terms. The dth term corresponds to the objective function for the regularized estimation of the dth row of the loadings matrix, λ_d, with a specific form of log penalty on λ_d (Zou & Li, 2008). Local linear approximation at λ_d^{(t)} of the log penalty on λ_d in (5) implies that each row of Λ is estimated separately at iteration t + 1:

\lambda_d^{\mathrm{lla}(t+1)} = \operatorname*{arg\,min}_{\lambda_d} \; \frac{\| w_d^{(t)} - X^{(t)} \lambda_d \|^2}{2p\bar{k}\sigma_d^{2(t)}} + \sum_{j=1}^{\bar{k}} \frac{\alpha_j(\delta) + 1}{np\bar{k} \{\eta_j(\rho) + |\lambda_{dj}^{(t)}|\}} \, |\lambda_{dj}| \quad (d = 1, \ldots, p). \quad (6)

This problem corresponds to regularized estimation of the regression coefficients λ_d, with w_d^{(t)} as the response, X^{(t)} as the design matrix, σ_d^{2(t)} as the error variance, and a weighted ℓ1 penalty on λ_d.
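Weighted ℓ1 problems of this form are solved coordinatewise by soft-thresholding, which is the shape of the update in (7). The standalone operator can be sketched as follows; this is the generic operator, not the full block coordinate descent of Algorithm 1.

```python
import numpy as np

def soft_threshold(z, c):
    """Soft-thresholding: sign(z) * (|z| - c)_+, the closed-form minimizer of
    (x - z)^2 / 2 + c |x| for a threshold c >= 0."""
    return np.sign(z) * np.maximum(np.abs(z) - c, 0.0)

# Small inputs are set exactly to zero (property (b)); large inputs are
# shifted by at most c, which an adaptive threshold makes negligible for
# large loadings (property (a)); the map is continuous in z (property (c)).
out = soft_threshold(np.array([-3.0, -0.2, 0.0, 0.1, 2.5]), 0.5)
```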

The solution to (6) is found using block coordinate descent. Let column j of X^{(t)} be written as X_j^{(t)}, and let row d of Λ^{(t)} without the jth element be written as λ_{d,−j}^{(t)}. Then the update to estimate λ_{dj} is

\lambda_{dj}^{\mathrm{lla}(t+1)} = \frac{\operatorname{sign}(\tilde{\lambda}_{dj}^{(t)})}{f_{jj}^{(t)}} \big( |\tilde{\lambda}_{dj}^{(t)}| - c_{dj}^{(t)} \big)_{+}, \quad c_{dj}^{(t)} = \frac{\sigma_d^{2(t)} \{\alpha_j(\delta) + 1\}}{n \{\eta_j(\rho) + |\lambda_{dj}^{(t)}|\}} \quad (j = 1, \ldots, \bar{k}), \quad (7)

where Inline graphic and Inline graphic. Fixing Λ at Λ^{lla(t+1)} in (5), σ_d² is updated in iteration t + 1 as

\sigma_d^{2(t+1)} = \frac{n}{n+2} \left\{ (Y^{\mathrm{T}} Y / n)_{dd} + \lambda_d^{\mathrm{lla}(t+1)\mathrm{T}} F^{(t)} \lambda_d^{\mathrm{lla}(t+1)} - 2 \, l_d^{(t)\mathrm{T}} \lambda_d^{\mathrm{lla}(t+1)} \right\}. \quad (8)

If any root-n-consistent estimate of Λ is used instead of Λ^{(t)} in (6), then it acts as a warm starting point for the estimation algorithm. This leads to a consistent estimate of Λ in one step of coordinate descent (Zou & Li, 2008). An implementation of this approach for known values of δ and ρ is summarized in steps (i)–(iv) of Algorithm 1 using the R (R Development Core Team, 2017) package glmnet (Friedman et al., 2010).

Algorithm 1

Estimation algorithm for expandable factor analysis.

  • Notation :

    • 1. Inline graphic is the diagonal matrix containing diagonal elements of a symmetric matrix Inline graphic.

    • 2. Chol(Inline graphic) is the upper triangular Cholesky factorization of a symmetric positive-definite matrix Inline graphic.

    • 3. Inline graphic is a block-diagonal matrix with Inline graphic forming the diagonal blocks.

    • 4. Inline graphic, where Inline graphic.

  • Input :

    • 1. Data Inline graphic and upper bound Inline graphic on the rank of the loadings matrix.

    • 2. The Inline graphic-Inline graphic grid with Inline graphic grid indices (Inline graphic; Inline graphic).

  • Do :

    • 1. Centre data about their mean Inline graphic (Inline graphic; Inline graphic).

    • 2. Let Inline graphic. Then estimate eigenvalues and eigenvectors of Inline graphic: Inline graphic and Inline graphic.

    • 3. Define Inline graphic to be the matrix Inline graphic.

    • 4. Begin estimation of Inline graphic, Inline graphic and Inline graphic across the Inline graphic-Inline graphic grid:

    • For Inline graphic

    •    For Inline graphic

    •     (i) Define Inline graphic, Inline graphic if Inline graphic, and Inline graphic if Inline graphic (Inline graphic).

    •     (ii) Initialize the following statistics required in (7):
      \Sigma^0 = \operatorname{diag}(S_{\hat{y}\hat{y}} - \Lambda^0 \Lambda^{0\mathrm{T}}), \quad \Omega^0 = \Lambda^0 \Lambda^{0\mathrm{T}} + \Sigma^0, \quad G^0 = (\Omega^0)^{-1} \Lambda^0, \quad L^0 = S_{\hat{y}\hat{y}} G^0, \quad \Delta^0 = I_{\bar{k}} - \Lambda^{0\mathrm{T}} G^0, \quad F^0 = \Delta^0 + G^{0\mathrm{T}} S_{\hat{y}\hat{y}} G^0, \quad R^0 = \operatorname{Chol}(F^0).
    •     (iii) Define Inline graphic, Inline graphic, Inline graphic and Inline graphic required to solve (6):
      X = \operatorname{bdiag}(\underbrace{R^0, \ldots, R^0}_{p \text{ times}}), \quad w = \{ \underbrace{(\Sigma^0)_{11}^{-1}, \ldots, (\Sigma^0)_{11}^{-1}}_{\bar{k} \text{ times}}, \ldots, \underbrace{(\Sigma^0)_{pp}^{-1}, \ldots, (\Sigma^0)_{pp}^{-1}}_{\bar{k} \text{ times}} \}, \quad y = \operatorname{vec}\{ (R^0)^{-\mathrm{T}} L^{0\mathrm{T}} \}, \quad v = \frac{1}{np\bar{k}} \left( \frac{\alpha_1 + 1}{\eta_1 + |\lambda_{11}^0|}, \ldots, \frac{\alpha_{\bar{k}} + 1}{\eta_{\bar{k}} + |\lambda_{1\bar{k}}^0|}, \ldots, \frac{\alpha_1 + 1}{\eta_1 + |\lambda_{p1}^0|}, \ldots, \frac{\alpha_{\bar{k}} + 1}{\eta_{\bar{k}} + |\lambda_{p\bar{k}}^0|} \right).
    •     (iv) Estimate Inline graphic in (7) and Inline graphic in (8) using the R package glmnet in three steps:

    •    result ← glmnet(x = X, y = y, weights = w, intercept = FALSE, standardize = FALSE, penalty.factor = v).

    •    Inline graphic Inline graphic coefInline graphicresult, s = Inline graphic, exact = TRUEInline graphic [-1, ].

    •    Inline graphic Inline graphic (Inline graphic).

    • (v) Set Inline graphic, Inline graphic, Inline graphic, and estimate the posterior weight Inline graphic in (10).

    • End for.

    • Set Inline graphic.

    • End for.

    • 5. Obtain grid index Inline graphic for the estimate of Inline graphic, where Inline graphic

  • Return :

    • Inline graphic , Inline graphic and Inline graphic.

The estimate of Λ obtained using (7) satisfies properties (a)–(d) described earlier. The adaptive threshold c_{dj}^{(t)} in (7) ensures that property (a) is satisfied. The soft-thresholding rule used to estimate λ_{dj} ensures that property (b) is satisfied. The local linear approximation (6) has continuous first derivatives in the parameter space excluding zero, so property (c) is also satisfied (Zou & Li, 2008). The estimate of Λ satisfies property (d) due to the structured penalty imposed by the prior.

We comment briefly on the choice of prior and uncertainty quantification. We build on the generalized double Pareto prior instead of other shrinkage priors not only because the estimate of Λ satisfies properties (a)–(d), but also because the local linear approximation of the resulting penalty has a weighted ℓ1 form. We exploit this for efficient computation and use a warm starting point to estimate a sparse Λ in one step using Algorithm 1. Uncertainty estimates of the nonzero loadings are obtained from a Laplace approximation, and the remaining loadings are estimated as zero without uncertainty quantification.

3.3. Root-n-consistent estimate of Λ

The root-Inline graphic-consistent estimate of Inline graphic exists under Assumptions A0–A4 given in the Appendix. If Inline graphic and Inline graphic (Inline graphic) are the eigenvalues and eigenvectors of the empirical covariance matrix Inline graphic, then Inline graphic is the eigendecomposition of Inline graphic. It is known that Inline graphic is a root-Inline graphic-consistent estimator of Inline graphic if Inline graphic is fixed and Inline graphic. If Inline graphic, Inline graphic and Inline graphic, then Inline graphic is a root-Inline graphic-consistent estimator of Inline graphic; see the Supplementary Material for a proof. Scaling by Inline graphic is required because the largest eigenvalue of Inline graphic tends to infinity as Inline graphic (Kneip & Sarda, 2011). This scaling does not change our estimation algorithm for Inline graphic in (7), except that Inline graphic is changed to Inline graphic (Inline graphic).
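A PCA-type warm start of this kind, built from scaled leading eigenvectors of the empirical covariance matrix, can be sketched as follows. The sizes, noise level and the subtraction of the residual variance from the eigenvalues are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 50000, 8, 2
Lam = np.zeros((p, k))
Lam[:4, 0] = 2.0                 # block-sparse true loadings (hypothetical)
Lam[4:, 1] = 1.5
Y = rng.standard_normal((n, k)) @ Lam.T + 0.3 * rng.standard_normal((n, p))

S = Y.T @ Y / n                  # empirical covariance matrix
evals, evecs = np.linalg.eigh(S) # eigenvalues in ascending order
top = np.argsort(evals)[::-1][:k]
# Scale the leading eigenvectors; subtracting the (known, illustrative)
# residual variance 0.3^2 = 0.09 corrects the scale of the warm start.
Lam0 = evecs[:, top] * np.sqrt(np.maximum(evals[top] - 0.09, 0.0))

# Lam0 recovers Lambda only up to column signs, so compare the implied
# low-rank covariance parts rather than the matrices themselves.
err = np.abs(Lam0 @ Lam0.T - Lam @ Lam.T).max()
```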

3.4. Bayesian information criterion to select δ and ρ

The parameter estimates in (7) and (8) depend on the hyperparameters through δ and ρ, both of which are unknown. To estimate δ and ρ, we use a grid search. Let δ_1, …, δ_R and ρ_1, …, ρ_S form a δ-ρ grid. If (δ_r, ρ_s) is the value of (δ, ρ) at grid index (r, s), then α_j(δ_r) and η_j(ρ_s) (j = 1, …, k̄) are the hyperparameters of our prior defined using Lemma 2, and Λ(r, s) and Σ(r, s) are the parameter estimates based on this prior. Algorithm 1 first estimates Λ(r, s) and Σ(r, s) for every (r, s) by choosing warm starting points, and then estimates (δ, ρ) using all the estimated Λ(r, s) and Σ(r, s). These two steps in the estimation of (δ, ρ) are described next.

The structured penalty imposed by our prior implies that Inline graphic has the maximum number of nonzero loadings. Algorithm 1 exploits this structure by first estimating Inline graphic and then the other loadings matrices along the δ-ρ grid by successively thresholding nonzero loadings in Inline graphic to 0. Let M(r, s) be the set that contains the locations of nonzero loadings in Λ(r, s). The estimation path of Algorithm 1 across the δ-ρ grid is such that Inline graphic (Inline graphic) and Inline graphic.

After the estimation of Λ(r, s) and Σ(r, s) (r = 1, …, R; s = 1, …, S), (δ, ρ) is set to (δ_r, ρ_s) if M(r, s) has the maximum posterior probability. Let |M(r, s)| be the cardinality of the set M(r, s). Given |M(r, s)|, there are binom(pk̄, |M(r, s)|) loadings matrices that have |M(r, s)| nonzero loadings but differ in the locations of the nonzero loadings. Assuming that each of these matrices is equally likely to represent the locations of nonzero loadings in the true loadings matrix, the prior for M(r, s) is

\operatorname{pr}\{ M(r, s) \mid \delta_r, \rho_s \} \propto \binom{p\bar{k}}{|M(r, s)|}^{-1} \quad (r = 1, \ldots, R; \; s = 1, \ldots, S). \quad (9)

Let pr{M(r, s) ∣ Y} be the posterior probability of M(r, s). Then an asymptotic approximation to −2 log pr{M(r, s) ∣ Y} is

-2 \log f(Y, \Lambda(r, s) \mid \delta_r, \rho_s) + |M(r, s)| \log n + 2 |M(r, s)| \log (p\bar{k}) \quad (10)

if terms of order smaller than Inline graphic are ignored, where f(Y, Λ(r, s) ∣ δ_r, ρ_s) is the joint density of Y and Λ(r, s) based on (1). The first term in (10) measures the goodness-of-fit, and the last two terms penalize the complexity of a factor model with n samples and |M(r, s)| nonzero loadings whose locations are in M(r, s). Theorem 3 in the next section shows that (10) and bic_γ have the same asymptotic order under certain regularity assumptions, where bic_γ is the extended Bayesian information criterion of Chen & Chen (2008) and γ is an unknown constant. The analytic forms of (10) and bic_γ are the same when γ = 1 and terms of order smaller than Inline graphic are ignored, so we use (10) for estimating (δ, ρ) in our numerical experiments.
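As a concrete illustration of how (10) trades off fit against support size, the sketch below scores three hypothetical candidate supports (the log-density values are made up for illustration) and selects the one minimizing the criterion.

```python
import numpy as np

def criterion(loglik, m, n, p, k):
    """Score from (10): -2 log f + |M| log n + 2 |M| log(p k)."""
    return -2.0 * loglik + m * np.log(n) + 2.0 * m * np.log(p * k)

n, p, k = 500, 100, 5
# Hypothetical support sizes |M| mapped to joint log-densities log f:
# larger supports fit better but pay a growing complexity penalty.
candidates = {10: -1500.0, 25: -1200.0, 300: -1150.0}
scores = {m: criterion(ll, m, n, p, k) for m, ll in candidates.items()}
best = min(scores, key=scores.get)   # support size with the smallest score
```

The middle support wins: the small one underfits, while the large one is overwhelmed by the 2|M| log(pk) term.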

4. Theoretical properties

Let Inline graphic and Inline graphic be the fixed points of Inline graphic and Inline graphic The updates (7) and (8) define the map Inline graphic, where Inline graphic. The following theorem shows that our estimation algorithm retains the convergence properties of the expectation-maximization algorithm.

Theorem 1

If Inline graphic represents the objective (5), then Inline graphic does not decrease at every iteration. Let Inline graphic be the local linear approximation of (5). Assume that Inline graphic only for stationary points of Inline graphic; then the sequence Inline graphic converges to its stationary point Inline graphic.

Let Inline graphic be the true loadings matrix and Inline graphic the residual variance matrix. We define Inline graphic (Inline graphic; Inline graphic) and express Inline graphic as having Inline graphic columns. The locations of true nonzero loadings are in the set Inline graphic. Let Inline graphic and Inline graphic be the estimates of Inline graphic and Inline graphic obtained using our estimation algorithm for a specific choice of Inline graphic and Inline graphic (Inline graphic); then Inline graphic is an estimator of Inline graphic. If Inline graphic and Inline graphic, then Inline graphic and Inline graphic retain elements of Inline graphic and Inline graphic with indices in the set Inline graphic. The following theorem specifies the asymptotic properties of Inline graphic, Inline graphic and Inline graphic.

Theorem 2

Suppose that Assumptions A0–A6 in the Appendix hold and that Inline graphic, Inline graphic and Inline graphic. Then, for any Inline graphic and Inline graphic

  • (i) Inline graphic, Inline graphic and Inline graphic are consistent estimators of Inline graphic, Inline graphic and Inline graphic, respectivelyInline graphic

  • (ii) Inline graphic and Inline graphic in distribution, where Inline graphic is a Inline graphic symmetric positive-definite matrix and Inline graphic.

Theorem 2 holds for any multi-scale generalized double Pareto prior with hyperparameters Inline graphic and Inline graphic (Inline graphic) that satisfies Assumption 5. In practice, the estimate of Inline graphic depends on the choice of Inline graphic and Inline graphic. Restricting the search to the hyperparameters indexed along the Inline graphic-Inline graphic grid, Algorithm 1 sets the values of the hyperparameters to Inline graphic and Inline graphic (Inline graphic), where Inline graphic achieves its maximum at grid index Inline graphic. The following theorem justifies this method of selecting hyperparameters and shows the asymptotic relationship between Inline graphic and Inline graphic.

Theorem 3

Suppose that the generalized double Pareto prior with hyperparameters defined using Inline graphic leads to estimation of Inline graphic. Let Inline graphic be another set that contains the locations of nonzero loadings in an estimated Inline graphic for a given Inline graphic. Define Inline graphic and Inline graphic. If Assumptions A0–A7 in the Appendix hold, then for any Inline graphic such that Inline graphic

  • (i) Inline graphic in probability as Inline graphic

  • (ii) Inline graphic as Inline graphic.

Let Inline graphic be a point on the Inline graphic-Inline graphic grid that leads to estimation of Inline graphic. Then, Theorem 3 shows that Algorithm 1 selects Inline graphic with probability tending to 1 because Inline graphic will be larger than any Inline graphic where Inline graphic is such that Inline graphic.

5. Data analysis

5.1. Set-up and comparison metrics

We compared our method with those of Caner & Han (2014), Hirose & Yamamoto (2015), Ročková & George (2016) and Witten et al. (2009). The first competitor was developed to estimate the rank of Λ, and the last three competitors were developed to estimate Λ. We used two versions of Ročková and George's method. The first version uses the expectation-maximization algorithm developed in Ročková & George (2016), and the second version adds an extra step in every iteration of the algorithm that rotates the loadings matrix using the varimax criterion.

We evaluated the performance of the methods for estimating Λ on simulated data using the root mean square error, the proportion of true positives, and the proportion of false discoveries:

\text{mean square error} = \sum_{d=1}^{p} \sum_{j=1}^{k} (|\lambda_{dj}^*| - |\hat{\lambda}_{dj}|)^2 / (pk), \quad \text{true positive rate} = |\hat{M} \cap M^*| / |M^*|, \quad \text{false discovery rate} = |\hat{M} \setminus M^*| / |\hat{M}|,

where Λ* and Λ̂ are the true and estimated loadings matrices, and M* and M̂ are the true and estimated locations of nonzero loadings. We assume that Inline graphic for any Inline graphic and Inline graphic. Since λ*_{dj} and λ̂_{dj} could differ in sign, the mean square error compared their magnitudes.
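These three metrics can be computed directly from the true and estimated loadings matrices; a minimal sketch with a made-up 4 × 2 example follows.

```python
import numpy as np

def metrics(L_true, L_hat):
    """Mean square error on magnitudes (sign-invariant), true positive
    rate, and false discovery rate for the supports of two loadings
    matrices of equal shape."""
    M_true = {tuple(ij) for ij in np.argwhere(L_true != 0)}
    M_hat = {tuple(ij) for ij in np.argwhere(L_hat != 0)}
    mse = np.mean((np.abs(L_true) - np.abs(L_hat)) ** 2)
    tpr = len(M_hat & M_true) / len(M_true)
    fdr = len(M_hat - M_true) / max(len(M_hat), 1)
    return mse, tpr, fdr

# Toy example: one missed loading, one false discovery, one sign flip.
L_true = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
L_hat = np.array([[2.0, 0.0], [2.0, 0.0], [0.0, 0.0], [0.5, 1.0]])
mse, tpr, fdr = metrics(L_true, L_hat)
```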

5.2. Simulated data analysis

The simulation settings were based on examples in Kneip & Sarda (2011). The number of dimensions p varied among Inline graphic. The rank of every simulated loadings matrix was fixed at k = 5. The magnitudes of the nonzero loadings in a column were equal and decreased as Inline graphic, Inline graphic, Inline graphic, Inline graphic and Inline graphic from the first to the fifth column. The signs of the nonzero loadings were chosen such that the columns of any loadings matrix were orthogonal, with a small fraction of overlapping nonzero loadings between adjacent columns:

\lambda_{dj} = \begin{cases} 2(6 - j), & 1 + (j - 1)p/k \le d \le jp/k, \; 1 \le j \le k, \\ -2(6 - j), & 1 + jp/k \le d \le (j + 1)p/k, \; 1 \le j \le k - 1, \\ -2(6 - j), & (j - 1)p/k \le d \le jp/k - 1, \; 2 \le j \le k, \\ 0, & \text{otherwise}. \end{cases}

The error variances σ_d² increased linearly from Inline graphic to Inline graphic for d = 1, …, p. With varying sample sizes n ∈ Inline graphic, data were simulated using model (1) for all combinations of n and p. The simulation set-up was replicated ten times, and all five methods were applied in every replication with the upper bound on the number of factors fixed at k̄ = Inline graphic. The δ-ρ grid had dimensions Inline graphic, and δ increased linearly from Inline graphic to Inline graphic while ρ increased linearly from Inline graphic to Inline graphic when Inline graphic and from Inline graphic to Inline graphic when Inline graphic.
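A simplified version of this loadings design can be generated as follows. For clarity the blocks here are disjoint, so column orthogonality is immediate, whereas the paper's design additionally overlaps adjacent blocks with signs chosen to preserve orthogonality; the sizes and magnitudes are illustrative.

```python
import numpy as np

p, k = 100, 5
Lam = np.zeros((p, k))
block = p // k
for j in range(k):
    # Column j + 1 loads on its own block of p/k rows, with magnitudes
    # decreasing from the first to the fifth column (here 10, 8, 6, 4, 2).
    Lam[j * block:(j + 1) * block, j] = 2.0 * (5 - j)
gram = Lam.T @ Lam
# Disjoint supports make the Gram matrix diagonal, i.e. orthogonal columns.
```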

All five methods had the same computational complexity of Inline graphic for one iteration, but their runtimes differed depending on their implementations, with the method of Witten et al. (2009) being the fastest. Figure 1 shows that Hirose and Yamamoto's method and both versions of Ročková and George's method significantly overestimated the number of factors for large Inline graphic. The method of Witten et al. slightly overestimated the number of factors across all settings. Caner and Han's method showed excellent performance and accurately estimated the number of factors across all simulation settings, except when Inline graphic and Inline graphic or Inline graphic. When Inline graphic was larger than 500, Assumption A4 was satisfied and our method accurately estimated the number of factors as 5 in every setting, performing better than Caner and Han's method when Inline graphic.

Fig. 1. Rank estimate averaged across simulation replications for the methods of Caner & Han (2014) (crosses), Hirose & Yamamoto (2015) (squares), Ročková & George (2016) varimax-free version (empty circles), Ročková & George (2016) varimax version (filled circles) and Witten et al. (2009) (diamonds), as well as our estimation algorithm (triangles). In each panel the horizontal grey line represents the true number of factors; error bars represent Monte Carlo errors.

The four methods for estimating Inline graphic differed significantly in their root mean square errors, true positive rates and false discovery rates; see Figs 2–4. Hirose and Yamamoto’s method had the highest false discovery rates and the lowest true positive rates across most settings. Both versions of Ročková and George’s method estimated an overly dense Inline graphic across most settings, resulting in high true positive rates and high false discovery rates. The extra rotation step in the second version of Ročková and George’s method yielded excellent mean square error performance; however, varimax rotation is a post-processing step, and a similar step could be added to our method, for example by rotating the Inline graphic in step 3 of Algorithm 1 using the varimax criterion. When Inline graphic and Inline graphic were small, the method of Witten et al. achieved the lowest false discovery rates while our method achieved the highest true positive rates. When Inline graphic and Inline graphic exceeded 250 and 100, respectively, Assumption A4 was satisfied and our method simultaneously achieved the highest true positive rates and the lowest false discovery rates while maintaining competitive mean square errors relative to the rotation-free methods.
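
The true positive and false discovery rates above compare the estimated and true sparsity patterns of the loadings matrix. A minimal sketch follows; the function name `support_metrics` is ours, and in practice the columns of the estimate would first be matched to those of the truth, up to the permutation and sign ambiguities of factor models.

```python
import numpy as np

def support_metrics(Lam_true, Lam_hat, tol=1e-8):
    """True positive rate and false discovery rate of the nonzero pattern."""
    truth = np.abs(Lam_true) > tol
    est = np.abs(Lam_hat) > tol
    tp = np.sum(truth & est)                          # correctly recovered nonzeros
    tpr = tp / max(np.sum(truth), 1)                  # fraction of true nonzeros found
    fdr = np.sum(~truth & est) / max(np.sum(est), 1)  # fraction of discoveries that are false
    return tpr, fdr
```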

Fig. 2. Root mean square error averaged across simulation replications for the methods of Hirose & Yamamoto (2015) (squares), Ročková & George (2016) varimax-free version (empty circles), Ročková & George (2016) varimax version (filled circles) and Witten et al. (2009) (diamonds), as well as our estimation algorithm (triangles). Error bars represent Monte Carlo errors.

Fig. 3. True positive rate averaged across simulation replications for the methods of Hirose & Yamamoto (2015) (squares), Ročková & George (2016) varimax-free version (empty circles), Ročková & George (2016) varimax version (filled circles) and Witten et al. (2009) (diamonds), as well as our estimation algorithm (triangles). Error bars represent Monte Carlo errors.

Fig. 4. False discovery rate averaged across simulation replications for the methods of Hirose & Yamamoto (2015) (squares), Ročková & George (2016) varimax-free version (empty circles), Ročková & George (2016) varimax version (filled circles) and Witten et al. (2009) (diamonds), as well as our estimation algorithm (triangles). Error bars represent Monte Carlo errors.

5.3. Microarray data analysis

We used gene expression data on ageing in mice from the AGEMAP database (Zahn et al., 2007). There were 40 mice aged 1, 6, 16 and 24 months in this study. Each age group included five male and five female mice. Tissue samples were collected from 16 different tissues, including the cerebrum and cerebellum, for every mouse. Gene expression levels in every tissue sample were measured on a microarray platform. After normalization and removal of missing data, gene expression data were available for all 8932 probes across 618 microarrays. We used a factor model to estimate the effect of latent biological processes on gene expression variation.

AGEMAP data were centred before analysis, following Perry & Owen (2010). Gene expression measurements were represented by Inline graphic, where Inline graphic and Inline graphic. Further, age_i represented the age of mouse i, and gender_i was 1 if mouse i was female and 0 otherwise. Least-squares estimates of the intercept, age effect and gender effect in the linear model Inline graphic (Inline graphic), with idiosyncratic error Inline graphic, were denoted by β̂_{0d}, β̂_{1d} and β̂_{2d}. Using these estimates, the mean-centred data were defined as

\[
\hat y_{id} = y_{id} - \bigl(\hat\beta_{0d} + \hat\beta_{1d}\,\mathrm{age}_i + \hat\beta_{2d}\,\mathrm{gender}_i\bigr) \quad (i=1,\ldots,n;\; d=1,\ldots,p).
\]
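
The centring step is an ordinary per-gene least-squares fit on an intercept, age and gender, followed by taking residuals. A sketch with toy data shaped like the study design (40 mice; the expression values and the function name `centre` are hypothetical):

```python
import numpy as np

def centre(Y, age, gender):
    """Residuals of column-wise least squares on intercept, age and gender."""
    n = len(age)
    X = np.column_stack([np.ones(n), age, gender])   # n x 3 design matrix
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)        # 3 x p coefficient matrix
    return Y - X @ B                                 # mean-centred data

rng = np.random.default_rng(1)
age = rng.choice([1, 6, 16, 24], size=40)            # ages in months
gender = rng.integers(0, 2, size=40)                 # 1 = female, 0 = male
Y = rng.standard_normal((40, 8))                     # toy expression matrix
Yc = centre(Y, age, gender)
```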

Four mice were randomly held out, and all tissue samples for these mice in Inline graphic were used as test data. The remaining samples were used as training data. This set-up was replicated ten times. All five methods were applied to the training data in every replication by fixing the upper bound on the number of factors at 10. The Inline graphic-Inline graphic grid had dimensions Inline graphic, and Inline graphic increased linearly from Inline graphic to Inline graphic while Inline graphic increased linearly from Inline graphic to Inline graphic.

The results for all five methods were stable across all ten folds of cross-validation. Caner and Han’s method, Hirose and Yamamoto’s method, both versions of Ročková and George’s method, the method of Witten et al. and our method selected 10, 10, 10, 4 and 1, respectively, as the number of latent biological processes Inline graphic across all folds. Our result matched that of Perry & Owen (2010), who confirmed the presence of one latent variable using rotation tests. Together with our simulation results, this strongly suggests that our method accurately estimated Inline graphic while the other methods overestimated it.

We also estimated the factors for the test data. With Inline graphic denoting test datum Inline graphic and Inline graphic denoting the singular value decomposition of Inline graphic, the factor estimate of test datum Inline graphic was Inline graphic, where Inline graphic denotes the number of samples in the training data. Perry & Owen (2010) found that the factor estimates for tissue samples from the cerebrum and cerebellum had bimodal densities. We used the density function in R with default settings to obtain kernel density estimates of the factors. Hirose and Yamamoto’s method and both versions of Ročková and George’s method estimated the number of factors as 10, which made their results challenging to interpret. The method of Witten et al. recovered bimodal densities in all four factors for both tissues, but it was unclear which of these four factors corresponded to the factor estimated by Perry & Owen (2010). Our method estimated the number of factors to be 1 and recovered the bimodal density in both tissues.
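
The out-of-sample factor estimate described above is, up to the scaling convention involving the training sample size (elided in the extraction and omitted here), the least-squares score computed from the singular value decomposition of the estimated loadings: with Λ̂ = U D Vᵀ, a test datum y* maps to V D⁻¹ Uᵀ y*. A sketch under that assumption:

```python
import numpy as np

def factor_scores(Lam_hat, Y_test):
    """Least-squares factor scores for test rows, via the SVD of Lam_hat."""
    U, d, Vt = np.linalg.svd(Lam_hat, full_matrices=False)
    keep = d > 1e-10                     # guard against numerically null columns
    # each test row y* maps to V D^{-1} U^T y*
    return (Y_test @ U[:, keep] / d[keep]) @ Vt[keep, :]
```

In the noiseless case y* = Λf, these scores recover f exactly whenever Λ has full column rank.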

Acknowledgement

This work was supported by the U.S. National Institute of Environmental Health Sciences, National Institutes of Health, and National Science Foundation. We are grateful to the referees, associate editor, and editor for their comments and suggestions.

Supplementary material

Supplementary material available at Biometrika online includes derivation of the expectation-maximization algorithm, proofs of Lemmas 1 and 2 and Theorems 1–3, supporting figures for the results in § 5.3, and the R code used for data analysis.

Appendix

Assumptions

Assumptions A0–A4 follow from the theoretical set-up for high-dimensional factor models in Kneip & Sarda (2011). Assumption A5 is based on results in Zou & Li (2008) for variable selection.

Assumption A0.

Let Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic and Inline graphic (Inline graphic).

Assumption A1.

There exist finite positive constants Inline graphic, Inline graphic and Inline graphic such that Inline graphic, Inline graphic and Inline graphic (Inline graphic; Inline graphic).

Assumption A2.

There exists a constant Inline graphic such that Inline graphic, Inline graphic, Inline graphic and Inline graphic are Inline graphic-sub-Gaussian for every Inline graphic. A random variable Inline graphic is Inline graphic-sub-Gaussian if Inline graphic for any Inline graphic.

Assumption A3.

Let Inline graphic be the eigenvalues of Inline graphic; then there exists a Inline graphic such that Inline graphic, Inline graphic, Inline graphic, and Inline graphic.

Assumption A4.

The sample size Inline graphic and dimension Inline graphic are large enough that Inline graphic and Inline graphic.

Assumption A5.

Let Inline graphic be the upper bound on Inline graphic and let Inline graphic, Inline graphic, Inline graphic and Inline graphic (Inline graphic) be defined as in Lemma 2. Then Inline graphic, Inline graphic, Inline graphic and Inline graphic (Inline graphic) as Inline graphic, Inline graphic and Inline graphic.

Assumption A6.

The elements of the set Inline graphic are fixed and do not change as Inline graphic or Inline graphic increases to Inline graphic.

Model (2) is recovered upon substituting Inline graphic into Assumption A0. Assumption A1 ensures that Inline graphic is positive definite. Assumption A2 ensures that the empirical covariances are good approximations of the true covariances. Specifically, for any Inline graphic,

\[
\sup_{1\le j,l\le p}\Bigl|\frac{1}{n}\sum_{i=1}^n w_{ij}w_{il}-\mathrm{cov}(w_{ij},w_{il})\Bigr|\le t,\qquad
\sup_{1\le j,l\le p}\Bigl|\frac{1}{n}\sum_{i=1}^n e_{ij}e_{il}-\mathrm{cov}(e_{ij},e_{il})\Bigr|\le t,
\]
\[
\sup_{1\le j,l\le p}\Bigl|\frac{1}{n}\sum_{i=1}^n w_{ij}e_{il}\Bigr|\le t,\qquad
\sup_{1\le j,l\le p}\Bigl|\frac{1}{n}\sum_{i=1}^n y_{ij}y_{il}-\mathrm{cov}(y_{ij},y_{il})\Bigr|\le t
\]

hold simultaneously with probability at least Inline graphic. If Inline graphic, then Inline graphic as Inline graphic and Inline graphic. Assumption A3 guarantees identifiability of Inline graphic when Inline graphic is large and Inline graphic. Assumption A4 is required to ensure that Inline graphic is a root-Inline graphic-consistent estimator of Inline graphic as Inline graphic, Inline graphic and Inline graphic.
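
The consequence of Assumption A2 can be illustrated numerically: for sub-Gaussian data the entrywise sup-norm deviation of the empirical covariance from the truth shrinks at roughly the (log p / n)^{1/2} rate, so a sixteen-fold increase in n should cut the maximal deviation by about a factor of four. A toy Gaussian check (identity covariance, an assumption made for simplicity):

```python
import numpy as np

def max_cov_deviation(n, p, rng):
    """Sup-norm gap between the empirical and true covariance (truth = identity)."""
    W = rng.standard_normal((n, p))      # n sub-Gaussian samples in p dimensions
    S = W.T @ W / n                      # empirical covariance
    return np.max(np.abs(S - np.eye(p)))

rng = np.random.default_rng(2)
dev_small = max_cov_deviation(200, 50, rng)
dev_large = max_cov_deviation(3200, 50, rng)   # 16x the samples, same p
```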

One additional assumption is required to relate Inline graphic and Inline graphic.

Assumption A7.

Let Inline graphic for a fixed constant Inline graphic such that Inline graphic.

Assumption A7 and equation (4.6) in Theorem 3 of Kneip & Sarda (2011) imply that Inline graphic for any Inline graphic such that Inline graphic, because Inline graphic as Inline graphic.

References

Ahn S. C. & Horenstein A. R. (2013). Eigenvalue ratio test for the number of factors. Econometrica 81, 1203–27.

Armagan A., Dunson D. B. & Lee J. (2013). Generalized double Pareto shrinkage. Statist. Sinica 23, 119–43.

Bai J. & Li K. (2012). Statistical analysis of factor models of high dimension. Ann. Statist. 40, 436–65.

Bai J. & Ng S. (2002). Determining the number of factors in approximate factor models. Econometrica 70, 191–221.

Bhattacharya A. & Dunson D. B. (2011). Sparse Bayesian infinite factor models. Biometrika 98, 291–306.

Caner M. & Han X. (2014). Selecting the correct number of factors in approximate factor models: The large panel case with group bridge estimators. J. Bus. Econ. Statist. 32, 359–74.

Carvalho C. M., Chang J., Lucas J. E., Nevins J. R., Wang Q. & West M. (2008). High-dimensional sparse factor modeling: Applications in gene expression genomics. J. Am. Statist. Assoc. 103, 1438–56.

Chen J. & Chen Z. (2008). Extended Bayesian information criteria for model selection with large model spaces. Biometrika 95, 759–71.

Fan J. & Li R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Statist. Assoc. 96, 1348–60.

Friedman J. H., Hastie T. J. & Tibshirani R. J. (2010). Regularization paths for generalized linear models via coordinate descent. J. Statist. Software 33, 1–22.

Hirose K. & Yamamoto M. (2015). Sparse estimation via nonconcave penalized likelihood in factor analysis model. Statist. Comp. 25, 863–75.

Jolliffe I. T., Trendafilov N. T. & Uddin M. (2003). A modified principal component technique based on the LASSO. J. Comp. Graph. Statist. 12, 531–47.

Kneip A. & Sarda P. (2011). Factor models and variable selection in high-dimensional regression analysis. Ann. Statist. 39, 2410–47.

Knowles D. & Ghahramani Z. (2011). Nonparametric Bayesian sparse factor models with application to gene expression modeling. Ann. Appl. Statist. 5, 1534–52.

Onatski A. (2009). Testing hypotheses about the number of factors in large factor models. Econometrica 77, 1447–79.

Perry P. O. & Owen A. B. (2010). A rotation test to verify latent structure. J. Mach. Learn. Res. 11, 603–24.

R Development Core Team (2017). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0. http://www.R-project.org.

Ročková V. & George E. I. (2016). Fast Bayesian factor analysis via automatic rotations to sparsity. J. Am. Statist. Assoc. 111, 1608–22.

Shen H. & Huang J. Z. (2008). Sparse principal component analysis via regularized low rank matrix approximation. J. Mult. Anal. 99, 1015–34.

Witten D. M., Tibshirani R. J. & Hastie T. J. (2009). A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics 10, 515–34.

Zahn J. M., Poosala S., Owen A. B., Ingram D. K., Lustig A., Carter A., Weeraratna A. T., Taub D. D., Gorospe M., Mazan-Mamczarz K. et al. (2007). AGEMAP: A gene expression database for aging in mice. PLoS Genet. 3, e201.

Zou H., Hastie T. J. & Tibshirani R. J. (2006). Sparse principal component analysis. J. Comp. Graph. Statist. 15, 265–86.

Zou H. & Li R. (2008). One-step sparse estimates in nonconcave penalized likelihood models. Ann. Statist. 36, 1509–33.
