Published in final edited form as: Ann Stat. 2017 Jun 13;45(3):1342–1374. doi: 10.1214/16-AOS1487

Asymptotics of empirical eigenstructure for high dimensional spiked covariance

Weichen Wang and Jianqing Fan

Abstract

We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitudes of the spiked eigenvalues, the sample size, and the dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and resolve the rates of convergence problems left open in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in the estimation of risks of large portfolios and of false discovery proportions for dependent test statistics, and are illustrated by simulation studies.

Keywords and phrases: Asymptotic distributions, Principal component analysis, Spiked covariance model, Diverging eigenvalues, Approximate factor model, Relative risk management, False discovery proportion

1. Introduction

Principal Component Analysis (PCA) is a powerful tool for dimensionality reduction and data visualization. Its theoretical properties, such as the consistency and asymptotic distributions of the empirical eigenvalues and eigenvectors, are challenging to establish, especially in the high dimensional regime. Over the past half century, a substantial amount of effort has been devoted to understanding the empirical eigen-structure. An early contribution came from Anderson (1963), who established the asymptotic normality of sample eigenvalues and eigenvectors under the classical regime with large sample size n and fixed dimension p. However, when the dimensionality diverges at the same rate as the sample size, the sample covariance matrix is a notoriously bad estimator, with an eigen-structure dramatically different from the population one. A large recent literature endeavors to understand the behavior of the empirical eigenvalues and eigenvectors in the high dimensional regime where both n and p go to infinity. See, for example, Baik, Ben Arous and Péché (2005); Bai (1999); Paul (2007); Johnstone and Lu (2009); Onatski (2012); Shen et al. (2013) and many related papers. For additional developments and references, see Bai and Silverstein (2009).

Among the different structures of population covariance, the spiked covariance model is of great interest. It typically assumes that several eigenvalues are larger than the remaining ones, and focuses on recovering only these leading eigenvalues and their associated eigenvectors. The spiked part is important, as we are usually interested in the directions that explain the most variation in the data. In this paper, we consider a high dimensional spiked covariance model whose leading eigenvalues are larger than the rest. We provide new understanding of how the spiked empirical eigenvalues and eigenvectors fluctuate around their theoretical counterparts and what their asymptotic biases are. Three quantities play an essential role in determining the asymptotic behavior of the empirical eigen-structure: the sample size n, the dimension p, and the magnitudes of the leading eigenvalues $\{\lambda_j\}_{j=1}^m$. The natural question to ask is how the asymptotics of the empirical eigen-structure depends on the interplay of these quantities. We will give a unified answer to this important question in principal component analysis. Theoretical properties of PCA have been investigated from three different perspectives: (i) random matrix theory, (ii) sparse PCA and (iii) the approximate factor model.

The first angle to analyze PCA is through random matrix theory, where it is typically assumed that p/n → γ ∈ (0, ∞) with bounded spike sizes. It is well known that if the true covariance matrix is the identity, the empirical spectral distribution converges almost surely to the Marcenko-Pastur distribution (Bai, 1999), and when γ < 1 the largest and smallest eigenvalues converge almost surely to $(1+\sqrt{\gamma})^2$ and $(1-\sqrt{\gamma})^2$ respectively (Bai and Yin, 1993; Johnstone, 2001). If the true covariance structure takes the form of a spiked matrix, Baik, Ben Arous and Péché (2005) showed that the asymptotic distribution of the top empirical eigenvalue exhibits an $n^{2/3}$ scaling when the eigenvalue lies below the threshold $1+\sqrt{\gamma}$, and an $n^{1/2}$ scaling when it is above the threshold (named the BBP phase transition after the authors). The phase transition was further studied by Benaych-Georges and Nadakuditi (2011) and Bai and Yao (2012) under more general assumptions. For the case of the regular scaling, Paul (2007) investigated the asymptotic behavior of the corresponding empirical eigenvectors and showed that the major part of an eigenvector is normally distributed with the regular scaling $n^{1/2}$. The convergence of principal component scores under this regime was considered by Lee, Zou and Wright (2010). The same random matrix regime has also been considered by Onatski (2012) in studying the principal component estimator for high-dimensional factor models. More recently, Koltchinskii and Lounici (2014a,b) revealed a profound link between concentration bounds for the empirical eigen-structure and the effective rank, defined as $r(\Sigma) = \operatorname{tr}(\Sigma)/\lambda_1$ (Vershynin, 2010). Their results extend the regime of bounded eigenvalues to a more general setting, although the asymptotic results in most cases still rely on the assumption $r(\Sigma) = o(n)$, which essentially requires low dimensionality, i.e. p/n → 0, if λ1 is bounded. In this paper, we consider the general regime of bounded $p/(n\lambda_1)$, which implies $r(\Sigma) = O(n)$ and allows diverging λ1. More discussion will be given in Section 3.

A second line of effort is through sparse PCA. According to Johnstone and Lu (2009), PCA does not generate consistent estimators of the leading eigenvectors if p/n → γ ∈ (0, 1) with bounded eigenvalues. This motivates the development of sparse PCA, which leverages an extra assumption on the sparsity of the eigenvectors. A large literature has contributed to the topic of sparse PCA, for example Amini and Wainwright (2008); Vu and Lei (2012); Birnbaum et al. (2013); Berthet and Rigollet (2013); Ma (2013). Specifically, Vu and Lei (2012) derived an optimal bound on the minimax estimation error of the first sparse leading eigenvector, while Cai, Ma and Wu (2015) conducted a more thorough study of the minimax optimal rates for estimating the top eigenvalues and eigenvectors of spiked covariance matrices with jointly k-sparse eigenvectors. This type of work assumes bounded eigenvalues, which ignores the contributions of the strong signals present in the data in many real applications; to make the problem solvable, sparsity assumptions on the eigenvectors are imposed instead. In contrast, driven by applications in genomics, economics, and finance, this paper studies the contributions of the diverging eigenvalues (signals) to the estimation of their associated eigenvectors, without relying on sparsity assumptions on the eigenvectors.

In order to illustrate the third perspective, let us briefly review the approximate factor model (Bai, 2003; Fan, Liao and Mincheva, 2013) and see how the spiked eigenvalues arise naturally from the model. Consider the following data generating model:

$$y_t = Bf_t + \varepsilon_t, \quad \text{for } t = 1, \ldots, T,$$

where $y_t$ is a p-dimensional vector observed at time t, $f_t \in \mathbb{R}^m$ is the vector of latent factors that drive the cross-sectional dependence at time t, B is the matrix of the corresponding loading coefficients, and $\varepsilon_t$ is the idiosyncratic part that cannot be explained by the factors. Assume without loss of generality that $\operatorname{var}(f_t) = I_m$, the m × m identity matrix. Then the model implies $\Sigma = \operatorname{var}(y_t) = BB' + \Sigma_\varepsilon$, where $\Sigma_\varepsilon = \operatorname{var}(\varepsilon_t)$. It admits a low-rank plus sparse structure when $\Sigma_\varepsilon$ is assumed to be sparse (Fan, Fan and Lv, 2008; Fan, Liao and Mincheva, 2013). The recovery of the low-rank and sparse matrices was considered thoroughly by Candès et al. (2011) and Chandrasekaran et al. (2011) under the incoherence condition in the noiseless setting, and by Agarwal, Negahban and Wainwright (2012) in the noisy case. If the factor loadings $\{b_i\}_{i \le p}$ (the transposes of the rows of B) are i.i.d. samples from a population with mean zero and covariance $\Sigma_b$ [this is a pervasiveness assumption commonly used in factor models (Fan, Liao and Mincheva, 2013)], then by the law of large numbers, $p^{-1}B'B = p^{-1}\sum_{i=1}^p b_ib_i' \to \Sigma_b$ as p → ∞. In other words, the eigenvalues of BB′ are approximately

$$p\lambda_1(\Sigma_b)(1+o(1)), \ldots, p\lambda_m(\Sigma_b)(1+o(1)), 0, \ldots, 0,$$

where $\lambda_j(\Sigma_b)$ is the jth eigenvalue of $\Sigma_b$. Then, by Weyl's theorem, we conclude that the eigenvalues of Σ satisfy

$$\lambda_j = p\lambda_j(\Sigma_b)(1+o(1)), \quad \text{for } j = 1, \ldots, m, \quad (1.1)$$

and the remaining eigenvalues are bounded if ‖Σε‖ is bounded. Therefore, the factor model implies a spiked covariance matrix with diverging leading eigenvalues. Fan, Liao and Mincheva (2013) showed that if the leading eigenvalues grow linearly with the dimension, then the corresponding eigenvectors can be consistently estimated as long as the sample size goes to infinity. See Section 4 for more details.
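The approximation (1.1) is easy to check numerically. The following is a minimal numpy sketch (not from the paper; the specific $\Sigma_b$ and dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 2000, 3
Sigma_b = np.diag([4.0, 2.0, 1.0])   # illustrative covariance of the loadings

# i.i.d. loadings b_i with mean zero and covariance Sigma_b; B is p x m
B = rng.multivariate_normal(np.zeros(m), Sigma_b, size=p)

# the nonzero eigenvalues of BB' equal those of the m x m matrix B'B
spikes = np.linalg.eigvalsh(B.T @ B)[::-1]
print(spikes / p)   # close to diag(Sigma_b) = (4, 2, 1), as (1.1) predicts
```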

Deviating from the classical random matrix and sparse PCA literature, we consider the high dimensional regime that allows p/n → ∞. To take into account the contributions of the signals to PCA, we allow λj → ∞ for the first m leading eigenvalues. This leads to the third perspective for understanding PCA in this high dimensional setting. Shen et al. (2013) adopted this point of view and considered the regime $p/(n\lambda_j) \to \gamma_j$, where $0 \le \gamma_j < \infty$ for the leading eigenvalues. This is more general than the bounded eigenvalue condition. Specifically, if the eigenvalues are bounded, we require the ratio p/n to converge to a bounded constant, as in the random matrix regime. On the other hand, if the dimension is much larger than the sample size, we offset the dimensionality by assuming increased signals or sample size, without the additional sparse eigenvector assumption of the sparse PCA regime. In particular, as shown in (1.1), the strong (or pervasive) factors considered in financial applications correspond to $\gamma_j = 0$ with the leading eigenvalues $\lambda_j \asymp p$; see for example Stock and Watson (2002); Bai (2003); Bai and Ng (2002); Fan, Liao and Mincheva (2013); Fan, Liao and Wang (2016). The weak or semi-strong factors considered by De Mol, Giannone and Reichlin (2008) and Onatski (2012) also imply bounded $p/(n\lambda_1)$, with p/n bounded and $\lambda_j \asymp p^\theta$ for some θ ∈ [0, 1).

Hall, Marron and Neeman (2005) and Jung and Marron (2009) initiated the research on the high dimension low sample size (HDLSS) regime. With n fixed, Jung and Marron (2009) concluded that consistency of the leading eigenvalues and eigenvectors is guaranteed if $\lambda_j \asymp p^\theta$ for θ > 1, which also corresponds to $\gamma_j = 0$. Shen et al. (2013) revealed the interesting fact that when $\gamma_j \ne 0$, the spiked sample eigenvalues converge almost surely to a biased version of the true eigenvalues; furthermore, the corresponding sample eigenvectors exhibit an asymptotic conical structure. However, their work focuses only on the consistency problem. In this study, we consider the same regime as theirs, but focus on the rates of convergence and the asymptotic distributions of the empirical eigen-structure, under more relaxed conditions. Our results can be viewed as a natural extension of Paul (2007) to the high dimensional setting.

We would like to emphasize the scope and importance of our contributions. Firstly, the regime we consider in this paper is $p/(n\lambda_j) \to \gamma_j \in [0, \infty)$ for j ≤ m, which permits high dimensionality p/n → ∞ and diverging eigenvalues without specifying their divergence rates. As we have argued, this encompasses many situations considered in the existing literature. It puts into the same framework the typical random matrix regime with bounded eigenvalues and the HDLSS analysis with fixed sample size. Secondly, the contributions of diverging eigenvalues are recognized and accounted for in our theoretical developments. This avoids restrictive assumptions on sparse eigenvectors. PCA without sparsity assumptions has been widely used in diverse fields such as population association studies (Yamaguchi-Kabata et al., 2008), genome-wide association studies (Ringnér, 2008), microarray data (Landgrebe, Wurst and Welzl, 2002; Price et al., 2006), fMRI data (Thomas, Harshman and Menon, 2002), and financial returns (Chen and Shimerda, 1981; Chamberlain and Rothschild, 1983). Our efforts contribute to the theoretical understanding of why plain PCA works in these diverse fields. Finally, by allowing this generality, we gain theoretical insights into how n, p, and the signal strength λj interplay.

The results are useful in two ways. On the one hand, they help quantify the biases of the empirical eigen-structure and explain where they come from. Specifically, in Theorem 3.1, the bias of the jth sample eigenvalue (j ≤ m) is quantified by $p/(n\lambda_j)$, which was also shown by Yata and Aoshima (2012, 2013) under different assumptions on the spiked covariance model. Our novel contribution lies in Theorem 3.2, revealing the bias of the jth sample eigenvector (j ≤ m). In (3.7), we provide a decomposition of each empirical eigenvector into a spiked part, which converges to the true eigenvector up to a deflation factor also quantified by $p/(n\lambda_j)$, and a non-spiked part, which creates a random bias distributed uniformly on an ellipse. More details will be presented in Section 3. On the other hand, the theoretical results provide new technical tools for analyzing factor models, which motivated this study. As we have seen, although it is natural to assume that the eigenvalues grow linearly with the dimension, this assumption imposes a strong signal. Note that when $p/(n\lambda_j) \to 0$, no biases occur. So in Section 4, we relax the order of the spikes to slightly faster than $\sqrt{p}$. By correcting the biases, we propose a new method called Shrinkage Principal Orthogonal complEment Thresholding (S-POET) and apply it to two problems: risk assessment of large portfolios (Pesaran and Zaffaroni, 2008; Fan, Liao and Shi, 2015) and false discovery proportion estimation for dependent test statistics (Leek and Storey, 2008; Fan, Han and Gu, 2012). Existing methodologies for these two problems rely on a rather strong signal level, but we are able to relax it with the help of S-POET.

The paper is organized as follows. Section 2 introduces the notation, the assumptions, and an important fact that serves as the basis of our proofs. Sections 3.1 and 3.2 are devoted to the theoretical results for the sample eigenvalues and eigenvectors of the spiked covariance matrix. In Section 4, we discuss several applications of the theory developed in the previous section. Simulations are conducted in Section 5 to demonstrate the theoretical results in finite samples and the performance of S-POET. Section 6 provides concluding remarks. The proofs for Section 3 are provided in the appendix and those for Section 4 are relegated to the supplementary material.

2. Assumptions and a simple fact

Assume that $\{Y_i\}_{i=1}^n$ is a sequence of i.i.d. random vectors with zero mean and covariance matrix $\Sigma_{p \times p}$. Let $\lambda_1, \ldots, \lambda_p$ be the eigenvalues of Σ in descending order. We consider the following spiked covariance model.

Assumption 2.1

$\lambda_1 > \lambda_2 > \cdots > \lambda_m > \lambda_{m+1} \ge \cdots \ge \lambda_p > 0$, where the non-spiked eigenvalues are bounded, i.e. $c_0 \le \lambda_j \le C_0$ for $j > m$ with constants $c_0, C_0 > 0$, and the spiked eigenvalues are well separated, i.e. there exists $\delta_0 > 0$ such that $\min_{j \le m}(\lambda_j - \lambda_{j+1})/\lambda_j \ge \delta_0$.

The eigenvalues are divided into the spiked ones and the bounded non-spiked ones. We do not need to specify the order of the leading eigenvalues, nor do we require them to diverge. Thus, our results in Section 3 are applicable to both bounded and diverging leading eigenvalues. For simplicity, we only consider distinct spiked eigenvalues (each of multiplicity 1) and a fixed number m of spikes, independent of n and p.

The factor model y = Bf + ε with pervasive factors considered in Fan, Liao and Mincheva (2013) implies a spiked covariance model with $\lambda_j \asymp p$ in (1.1) and satisfies the above assumption. For the interplay of the sample size n, the dimension p, and the spikes λj's, the following relationship is assumed, as in Shen et al. (2013).

Assumption 2.2

Assume p > n. For the spiked part 1 ≤ j ≤ m, $c_j = p/(n\lambda_j)$ is bounded, and for the non-spiked part, $(p-m)^{-1}\sum_{j=m+1}^p \lambda_j = \bar{c} + o(n^{-1/2})$.

We allow p/n → ∞ in any manner, though λj then needs to grow fast enough to keep cj bounded. In particular, $c_j = o(1)$ is allowed, as in the factor model. Unlike most of the spiked covariance model literature (e.g. Paul (2007); Johnstone and Lu (2009)), we do not assume that the non-spiked eigenvalues are identical.

By spectral decomposition, Σ = ΓΛΓ′, where the orthonormal matrix Γ is constructed by the eigenvectors of Σ and Λ = diag(λ1, …, λp). Let Xi = Γ′Yi. Since the empirical eigenvalues are invariant and the empirical eigenvectors are equivariant under an orthonormal transformation, we focus the analysis on the transformed domain of Xi and then translate the results into those of the original data. Note that var(Xi) = Λ. Let Zi = Λ−1/2Xi be the elementwise standardized random vector.

Assumption 2.3

$\{Z_i\}_{i=1}^n$ are i.i.d. copies of Z. The standardized random vector $Z = (Z_1, \ldots, Z_p)'$ is sub-Gaussian with independent entries of mean zero and variance one. The sub-Gaussian norms of all components are uniformly bounded: $\max_j \|Z_j\|_{\psi_2} \le C_0$, where $\|Z_j\|_{\psi_2} = \sup_{q \ge 1} q^{-1/2}(\mathbb{E}|Z_j|^q)^{1/q}$.

Since Var(Xi) = diag(λ1, λ2, …, λp), the first m population eigenvectors are simply unit vectors e1, e2, …, em with only one nonvanishing element. Denote the n by p transformed data matrix by X = (X1, X2, …, Xn)′. Then the sample covariance matrix is

$$\hat{\Sigma}_{p \times p} = \frac{1}{n}X'X = \frac{1}{n}\sum_{i=1}^n X_iX_i',$$

whose eigenvalues are denoted by λ̂1, λ̂2, …, λ̂p (with λ̂j = 0 for j > n) and whose corresponding eigenvectors are ξ̂1, ξ̂2, …, ξ̂p. Note that the empirical eigenvectors of the original data Yi's are $\hat{\xi}_j(Y) = \Gamma\hat{\xi}_j$.

Let Zj be the jth column of the standardized X. Then each Zj has i.i.d sub-Gaussian entries with zero mean and unit variance. Exchanging the roles of rows and columns, we get the n by n Gram matrix

$$\tilde{\Sigma}_{n \times n} = \frac{1}{n}XX' = \frac{1}{n}\sum_{j=1}^p \lambda_jZ_jZ_j',$$

with the same nonzero eigenvalues λ̂1, λ̂2, …, λ̂n as Σ̂ and the corresponding eigenvectors u1, u2, …, un. It is well known that for i = 1, 2, …, n

$$\hat{\xi}_i = (n\hat{\lambda}_i)^{-1/2}X'u_i \quad \text{and} \quad u_i = (n\hat{\lambda}_i)^{-1/2}X\hat{\xi}_i, \quad (2.1)$$

while the other eigenvectors of Σ̂ constitute a (pn)-dimensional orthogonal complement of ξ̂1, …, ξ̂n.
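This fact is also computationally convenient: the top eigenvectors of the p × p matrix Σ̂ can be recovered from the much smaller n × n matrix Σ̃. A minimal numpy sketch (the spectrum below is an illustrative assumption, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 2000
lam = np.concatenate([[50.0, 20.0], np.ones(p - 2)])    # spiked population spectrum
X = rng.standard_normal((n, p)) * np.sqrt(lam)          # rows X_i with var(X_i) = diag(lam)

Sigma_tilde = X @ X.T / n                               # n x n Gram matrix
lam_hat, U = np.linalg.eigh(Sigma_tilde)
lam_hat, U = lam_hat[::-1], U[:, ::-1]                  # descending order

# relationship (2.1): xi_hat_j = X' u_j / sqrt(n * lam_hat_j)
Xi = X.T @ U[:, :2] / np.sqrt(n * lam_hat[:2])
print(abs(Xi[0, 0]), (1 + p / (n * lam[0])) ** -0.5)    # |<xi_1, e_1>| vs (1 + c_1)^{-1/2}
```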

By using this simple fact, for the specific case with $c_0 = C_0 = 1$ in Assumption 2.1 (so that $\lambda_j = 1$ for j > m and $\bar{c} = 1$ in Assumption 2.2) and Gaussian data in Assumption 2.3, Shen et al. (2013) showed that

$$\frac{\hat{\lambda}_j}{\lambda_j} \xrightarrow{a.s.} 1 + c_j, \quad 1 \le j \le m;$$

and

$$|\langle \hat{\xi}_j, e_j \rangle| \xrightarrow{a.s.} (1 + c_j)^{-1/2},$$

where 〈a, b〉 denotes the inner product of two vectors. However, they did not establish any results on the convergence rates or asymptotic distributions of the empirical eigen-structure. This motivates the current paper.

The aim of this paper is to establish the asymptotic normality of the empirical eigenvalues and eigenvectors under more relaxed conditions. Our results are a natural extension, with new insights, of Paul (2007), where the asymptotic normality of the sample eigenvectors was derived using complicated random matrix techniques for Gaussian data under the regime p/n → γ ∈ [0, 1). In comparison, our proof, based on the relationship (2.1), is much simpler and more insightful for understanding the behavior of high dimensional PCA.

Here is some notation that we will use in the paper. For a general matrix M, we denote its entry-wise max norm by $\|M\|_{\max} = \max_{i,j}|M_{i,j}|$ and define $\|M\| = \lambda_{\max}^{1/2}(M'M)$, $\|M\|_F = (\sum_{i,j}M_{i,j}^2)^{1/2}$ and $\|M\|_\infty = \max_i \sum_j |M_{i,j}|$ to be its spectral, Frobenius and induced $\ell_\infty$ norms. If M is symmetric, we define $\lambda_j(M)$ to be the jth largest eigenvalue of M and $\lambda_{\max}(M)$, $\lambda_{\min}(M)$ to be the maximal and minimal eigenvalues respectively. We denote by $\operatorname{tr}(M)$ the trace of M. For any vector v, its $\ell_2$ norm is represented by $\|v\|$ while its $\ell_1$ norm is written as $\|v\|_1$. We use $\operatorname{diag}(v)$ to denote the diagonal matrix with diagonal entries given by v. For two random vectors a, b of the same length, we say $a = b + O_P(\delta)$ if $\|a - b\| = O_P(\delta)$ and $a = b + o_P(\delta)$ if $\|a - b\| = o_P(\delta)$. We write $a \xrightarrow{d} \mathcal{L}$ for a distribution $\mathcal{L}$ if there exists $b \sim \mathcal{L}$ such that $a = b + o_P(1)$. Throughout the paper, C is a generic constant that may differ from line to line.

3. Asymptotic behavior of empirical eigen-structure

3.1. Asymptotic normality of empirical eigenvalues

Let us first study the behavior of the leading m empirical eigenvalues of Σ̂. Denote by λj(A) the jth largest eigenvalue of matrix A and recall that λ̂j = λj(Σ̂). We have the following asymptotic normality of λ̂j.

Theorem 3.1

Under Assumptions 2.1 – 2.3, $\{\hat{\lambda}_j\}_{j=1}^m$ have independent limiting distributions. In addition,

$$\sqrt{n}\left\{\frac{\hat{\lambda}_j}{\lambda_j} - \left(1 + \bar{c}c_j + O_P\big(\lambda_j^{-1}\sqrt{p/n}\big)\right)\right\} \xrightarrow{d} N(0, \kappa_j - 1), \quad (3.1)$$

where $\kappa_j$ is the kurtosis of the jth component of $X_i$.

The theorem shows that the bias of $\hat{\lambda}_j/\lambda_j$ is $\bar{c}c_j + O_P(\lambda_j^{-1}\sqrt{p/n})$. The second term is dominated by the first since p > n, and it is of order $o_P(n^{-1/2})$ if $\sqrt{p} = o(\lambda_j)$. The latter assumption is satisfied by the strong factor model in Fan, Liao and Mincheva (2013) and by part of the weak or semi-strong factor models in Onatski (2012). The theorem reveals that the bias is controlled by a term of rate $p/(n\lambda_j)$. To get an asymptotically unbiased estimate, it requires $c_j = p/(n\lambda_j) \to 0$ for j ≤ m. This result is more general than that of Shen et al. (2013) and sheds a similar light to that of Koltchinskii and Lounici (2014a,b), i.e. ‖Σ̂ − Σ‖/‖Σ‖ → 0 almost surely if and only if the effective rank $r(\Sigma) = \operatorname{tr}(\Sigma)/\lambda_1$ is of order o(n), which is true when $c_1 = o(1)$. Our result here holds for each individual spike. Yata and Aoshima (2012, 2013) employed a similar technical trick and gave a comprehensive study of the asymptotic consistency and distributions of the eigenvalues; they obtained similar results under conditions different from ours. Our framework is more general here. If $c_j \not\to 0$, bias reduction can still be made; see Section 4.2, where an estimator for $\bar{c}$ is proposed. Under the bounded spiked covariance model considered in Baik, Ben Arous and Péché (2005), Johnstone and Lu (2009) and Paul (2007), it is assumed that $\lambda_j = c_0 = C_0$ for j > m, so that $\bar{c} = c_0$, the minimum eigenvalue of the population covariance matrix. Our result is also consistent with Anderson (1963)'s result that

$$\sqrt{n}(\hat{\lambda}_j - \lambda_j) \xrightarrow{d} N(0, 2\lambda_j^2),$$

for Gaussian data and fixed p and λj's, where the non-spiked part does not exist and thus the bias $O_P(\lambda_j^{-1}\sqrt{p/n})$ disappears. The proof is relegated to the appendix.

3.2. Behavior of empirical eigenvectors

Let us consider the asymptotic distributions of the empirical eigenvectors ξ̂j corresponding to λ̂j, j = 1, 2, …, m. As in Paul (2007), each ξ̂j is divided into two parts corresponding to the spiked and non-spiked components, i.e. $\hat{\xi}_j = (\hat{\xi}_{jA}', \hat{\xi}_{jB}')'$, where $\hat{\xi}_{jA}$ is of length m.

Theorem 3.2

Under Assumptions 2.1 – 2.3, we have

  1. For the spiked part, if m = 1,
    $$\frac{2(1+\bar{c}c_1)}{\bar{c}c_1}\sqrt{n}\left(\sqrt{1+\bar{c}c_1}\,\|\hat{\xi}_{1A}\| - 1 + O_P\Big(\frac{p}{n\lambda_1^2}\Big)\right) \xrightarrow{d} N(0, \kappa_1 - 1), \quad (3.2)$$
    while if m > 1,
    $$\sqrt{n}\left(\frac{\hat{\xi}_{jA}}{\|\hat{\xi}_{jA}\|} - e_{jA} + O_P\Big(\frac{p}{n\lambda_j^2}\Big)\right) \xrightarrow{d} N_m(0, \Sigma_j), \quad (3.3)$$
    for j = 1, 2, …, m, with
    $$\Sigma_j = \sum_{k \in [m]\setminus\{j\}} a_{jk}^2 e_{kA}e_{kA}',$$
    where [m] = {1, ⋯, m}, $e_{kA}$ is the vector of the first m elements of the unit vector $e_k$, and $a_{jk} = \lim_{\lambda_j,\lambda_k \to \infty} \sqrt{\lambda_j\lambda_k}/(\lambda_j - \lambda_k)$, which is assumed to exist.
  2. For the non-spiked part, if we further assume the data is Gaussian, there exists a (p − m)-dimensional vector $h_0 \sim \operatorname{Unif}(B_{p-m}(1))$ such that
    $$\Big\|D_0\frac{\hat{\xi}_{jB}}{\|\hat{\xi}_{jB}\|} - h_0\Big\| = O_P\Big(\sqrt{\frac{n}{p}}\Big) + o_P\Big(\frac{1}{\sqrt{n}}\Big), \quad (3.4)$$
    where $D_0 = \operatorname{diag}(\sqrt{\bar{c}/\lambda_{m+1}}, \ldots, \sqrt{\bar{c}/\lambda_p})$ is a diagonal matrix and $\operatorname{Unif}(B_k(r))$ denotes the uniform distribution over the centered sphere of radius r in $\mathbb{R}^k$. In addition, the max norm of ξ̂jB satisfies
    $$\|\hat{\xi}_{jB}\|_{\max} = O_P\big(\sqrt{p}/(n\lambda_j^{3/2}) + \sqrt{\log p/(n\lambda_j)}\big). \quad (3.5)$$
  3. Furthermore, $\|\hat{\xi}_{jA}\| = (1+\bar{c}c_j)^{-1/2} + O_P\big(\lambda_j^{-1}\sqrt{p/n} + p/(n^{3/2}\lambda_j)\big)$ and $\|\hat{\xi}_{jB}\| = \big(\frac{\bar{c}c_j}{1+\bar{c}c_j}\big)^{1/2} + O_P\big(1/\lambda_j + p/(n^2\lambda_j)\big)$. Together with (i), this implies that the inner product between the empirical eigenvector and the population one converges to $(1+\bar{c}c_j)^{-1/2}$ in probability and
    $$\langle\hat{\xi}_j, e_j\rangle - \frac{1}{\sqrt{1+\bar{c}c_j}} = O_P\big(\lambda_j^{-1}\sqrt{p/n} + p/(n^{3/2}\lambda_j)\big) + O_P(n^{-1})I\{m > 1\}. \quad (3.6)$$

In the above theorem, we assume that $a_{jk} = \lim_{\lambda_j,\lambda_k\to\infty}\sqrt{\lambda_j\lambda_k}/(\lambda_j - \lambda_k)$ exists. This is not restrictive if the eigenvalues are well separated, i.e. $\min_{j \ne k,\, j,k \le m}|\lambda_j - \lambda_k|/\lambda_j \ge \delta_0$ from Assumption 2.1. The assumption obviously holds for the pervasive factor model, in which $a_{jk} = \sqrt{\lambda_j(\Sigma_b)\lambda_k(\Sigma_b)}/(\lambda_j(\Sigma_b) - \lambda_k(\Sigma_b))$.

Theorem 3.2 is an extension of random matrix results to the high dimensional regime. Its proof sheds light on how to use the smaller n × n matrix Σ̃ as a tool to understand the behavior of the larger p × p covariance matrix Σ̂. Specifically, we start from $\tilde{\Sigma}u_j = \hat{\lambda}_ju_j$, or identity (A.3), and then use the simple fact (2.1) to obtain a relationship (A.4) for the eigenvector ξ̂j. Then (A.4) is rearranged as (A.6), which gives a clear separation of the dominating term, which is asymptotically normal, from the error term. This makes the whole proof much simpler in comparison with Paul (2007), who showed a similar type of result through a complicated representation of ξ̂j and λ̂j under more restrictive assumptions. From this simple trick, we gain a deeper understanding of how some important high and low dimensional quantities link together and differ from each other.

Several remarks are in order. Firstly, since $\hat{\xi}_j(Y) = \Gamma\hat{\xi}_j$ is the jth empirical eigenvector based on the observed data Y, we have the decomposition

$$\hat{\xi}_j(Y) = \Gamma_A\hat{\xi}_{jA} + \Gamma_B\hat{\xi}_{jB}, \quad (3.7)$$

where Γ = (ΓA, ΓB). Note that $\Gamma_A\hat{\xi}_{jA}$ converges to the true eigenvector deflated by a factor of $\sqrt{1+\bar{c}c_j}$, with convergence rate $O_P(p/(n\lambda_j^2) + p/(n^{3/2}\lambda_j) + n^{-1/2})$, while $\Gamma_B\hat{\xi}_{jB}$ creates a random bias, which is distributed uniformly on a (p − m)-dimensional ellipse projected into the p-dimensional space spanned by ΓB. The two parts intertwine in such a way that correcting the biases of the estimated eigenvectors is almost impossible. More details are discussed in Section 4 for factor models. Secondly, just as in the eigenvalue case, the bias term $O_P(p/(n\lambda_j^2))$ in Theorem 3.2 (i) clearly disappears when $\sqrt{p} = o(\lambda_j)$. In particular, for the strong factors given by (1.1), $\hat{\xi}_j(Y)$ is a consistent estimator. Thirdly, the situations m = 1 and m > 1 differ slightly in that multiple spikes can interact with each other. This is reflected especially in the convergence of the angle between the empirical eigenvector and its population counterpart: the angle converges to $(1+\bar{c}c_j)^{-1/2}$ with an extra rate $O_P(1/n)$, which stems from estimating $\hat{\xi}_{jk}$ for j ≠ k ≤ m (see the proof of Theorem 3.2 (iii)). The difference is visible only when the spike magnitude is higher than the order $\sqrt{pn} \vee pn^{-1/2}$. We will verify this by a simple simulation in Section 5. Finally, this is the first time that the max norm bound of the non-spiked part has been derived. This bound will be useful for analyzing factor models in Section 4.

Theorem 3.2 again implies the results of Shen et al. (2013). It also generalizes the asymptotic distribution of the non-spiked part from the purely orthogonally invariant case of Paul (2007) to a more general setting. In particular, when p/n → ∞, the asymptotic distribution of the normalized non-spiked component is no longer uniform over a sphere, but over an ellipse. In addition, our result can be compared with the low dimensional case, where Anderson (1963) showed that

$$\sqrt{n}(\hat{\xi}_j - e_j) \xrightarrow{d} N_p\Big(0, \sum_{k\in[m]\setminus\{j\}}\frac{\lambda_j\lambda_k}{(\lambda_j - \lambda_k)^2}e_ke_k'\Big), \quad (3.8)$$

for fixed p and λj's. Under our assumptions, since the spiked eigenvalues may go to infinity, the constants in the asymptotic covariance matrix are replaced by the limits $a_{jk}$. Similarly to the behavior of the eigenvalues, the spiked part $\hat{\xi}_{jA}$ preserves the normality property except for a deflation factor $1/\sqrt{1+\bar{c}c_j}$ caused by the high dimensionality. Also, the recent work of Koltchinskii and Lounici (2014b) provides general asymptotic results for the empirical eigenvectors from a spectral projector point of view, but they mainly focus on the regime $p/(n\lambda_j) \to 0$, or $r(\Sigma) = o(n)$. Last but not least, it has been shown by Johnstone and Lu (2009) that PCA generates consistent eigenvector estimates if and only if p/n → 0 when the spike sizes are fixed; this motivated the study of sparse PCA. We take the spike magnitude into account and provide additional insight by showing that PCA consistently estimates the eigenvalues and eigenvectors if and only if $p/(n\lambda_j) \to 0$. This explains why Fan, Liao and Mincheva (2013) can consistently estimate the eigenvalues and eigenvectors while Johnstone and Lu (2009) cannot.

4. Applications to factor models

In this section, we propose a method named Shrinkage Principal Orthogonal complEment Thresholding (S-POET) for estimating large covariance matrices induced by the approximate factor models. The estimator is based on correction of the bias of the empirical eigenvalues as specified in (3.1). We derive for the first time the bound for the relative estimation errors of covariance matrices under the spectral norm. The results are then applied to assessing large portfolio risks and estimating false discovery proportions, where the conditions in existing literature are significantly relaxed.

4.1. Approximate factor models

Factor models have been widely used in various disciplines. For example, they are used to extract information from financial markets for sufficient forecasting of other time series (Stock and Watson, 2002; Fan, Xue and Yao, 2015) and to adjust for heterogeneity when aggregating biological data from multiple sources (Leek et al., 2010; Fan et al., 2016). Consider the approximate factor model

$$y_{it} = b_i'f_t + u_{it}, \quad (4.1)$$

where $y_{it}$ is the observed datum for the ith (i = 1, …, p) individual (e.g. returns of stocks) or component (e.g. expressions of genes) at time t = 1, …, T; $f_t$ is an m × 1 vector of latent common factors and $b_i$ is the vector of factor loadings for the ith individual or component; $u_{it}$ is the idiosyncratic error, uncorrelated with the common factors. In genomic applications, t can also index repeated experiments. For simplicity, we assume there is no time dependency.

The factor model can be written into a matrix form as follows:

$$Y = BF' + U, \quad (4.2)$$

where $Y_{p \times T}$, $B_{p \times m}$, $F_{T \times m}$ and $U_{p \times T}$ are respectively the matrix forms of the observed data, the factor loading matrix, the factor matrix, and the error matrix. For identifiability, we impose the condition $\operatorname{cov}(f_t) = I_m$. Thus, the covariance matrix is given by

$$\Sigma = BB' + \Sigma_u, \quad (4.3)$$

where Σu is the covariance matrix of the idiosyncratic error at any time t.

Under the assumption that $\Sigma_u = (\sigma_{u,ij})_{i,j \le p}$ is sparse with its eigenvalues bounded away from zero and infinity, the population covariance exhibits a “low-rank plus sparse” structure. The sparsity is measured by the following quantity

$$m_p = \max_{i \le p}\sum_{j \le p}|\sigma_{u,ij}|^q,$$

for some q ∈ [0, 1] (Bickel and Levina, 2008). In particular, with q = 0, mp equals the maximum number of nonzero elements in each row of Σu.

In order to estimate the true covariance matrix with the above factor structure, Fan, Liao and Mincheva (2013) proposed a method called “POET” to recover the unknown factor matrix as well as the factor loadings. The idea is simply to first decompose the sample covariance matrix into the spiked and non-spiked part and estimate them separately. Specifically, recall Σ̂ = T−1YY′ and let {λ̂j} and {ξ̂j} be its corresponding eigenvalues and eigenvectors. They define

$$\hat{\Sigma} = \sum_{j=1}^m \hat{\lambda}_j\hat{\xi}_j\hat{\xi}_j' + \hat{\Sigma}_u^{\mathcal{T}}, \quad (4.4)$$

where $\hat{\Sigma}_u^{\mathcal{T}}$ is the matrix obtained by applying the thresholding method of Bickel and Levina (2008) to $\hat{\Sigma}_u = \hat{\Sigma} - \sum_{j=1}^m \hat{\lambda}_j\hat{\xi}_j\hat{\xi}_j'$.

They showed that the above estimation procedure is equivalent to the least-squares approach that minimizes

$$(\hat{B}, \hat{F}) = \operatorname*{arg\,min}_{B,F}\|Y - BF'\|_F^2 \quad \text{s.t.} \quad \frac{1}{T}F'F = I_m, \; B'B \text{ is diagonal}. \quad (4.5)$$

The columns of $\hat{F}/\sqrt{T}$ are the eigenvectors corresponding to the m largest eigenvalues of the T × T matrix $T^{-1}Y'Y$, and $\hat{B} = T^{-1}Y\hat{F}$. After B and F are estimated, the residual matrix $\hat{U} = Y - \hat{B}\hat{F}'$ and its sample covariance $\hat{\Sigma}_u = T^{-1}\hat{U}\hat{U}'$ can be formed. Finally, thresholding is applied to $\hat{\Sigma}_u$ to generate $\hat{\Sigma}_u^{\mathcal{T}} = (\hat{\sigma}_{u,ij}^{\mathcal{T}})_{p \times p}$, where

$$\hat{\sigma}_{u,ij}^{\mathcal{T}} = \begin{cases} \hat{\sigma}_{u,ij}, & i = j;\\ s_{ij}(\hat{\sigma}_{u,ij})I(|\hat{\sigma}_{u,ij}| \ge \tau_{ij}), & i \ne j. \end{cases} \quad (4.6)$$

Here sij(·) is the generalized shrinkage function (Antoniadis and Fan, 2001; Rothman, Levina and Zhu, 2009) and τij = τ(σ̂u,iiσ̂u,jj)1/2 is the entry-dependent threshold. The above adaptive threshold corresponds to applying thresholding with parameter τ to the correlation matrix of Σ̂u. The positive parameter τ will be determined later.
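A minimal numpy sketch of the adaptive thresholding rule (4.6), using the soft-shrinkage choice of $s_{ij}$ as one admissible member of the generalized shrinkage family (the function name and the soft choice are ours):

```python
import numpy as np

def adaptive_threshold(S_u, tau):
    """Apply (4.6) to a residual covariance estimate S_u with parameter tau.

    The entry-dependent threshold is tau_ij = tau * sqrt(sigma_ii * sigma_jj),
    i.e. plain thresholding at level tau on the correlation scale.
    """
    d = np.diag(S_u)
    tau_ij = tau * np.sqrt(np.outer(d, d))
    # soft shrinkage s_ij(x) = sign(x) * (|x| - tau_ij)_+ on the off-diagonal
    out = np.sign(S_u) * np.maximum(np.abs(S_u) - tau_ij, 0.0)
    np.fill_diagonal(out, d)          # diagonal entries are kept untouched
    return out
```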

Fan, Liao and Mincheva (2013) showed that, under the regularity conditions listed in the appendix of the supplementary material (Wang and Fan, 2015),

$$\|\hat{\Sigma} - \Sigma\|_{\Sigma,F} = O_P\left(\frac{\sqrt{p}\log p}{\sqrt{T}} + m_p\Big(\frac{\log p}{T} + \frac{1}{p}\Big)^{(1-q)/2}\right), \quad (4.7)$$

where $\|A\|_{\Sigma,F} = p^{-1/2}\|\Sigma^{-1/2}A\Sigma^{-1/2}\|_F$ and $\|\cdot\|_F$ is the Frobenius norm. Note that

$$\|\hat{\Sigma} - \Sigma\|_{\Sigma,F} = p^{-1/2}\|\Sigma^{-1/2}\hat{\Sigma}\Sigma^{-1/2} - I_p\|_F,$$

which measures the relative error in the Frobenius norm. A more natural metric is the relative error under the operator norm, $\|A\|_\Sigma = \|\Sigma^{-1/2}A\Sigma^{-1/2}\|$, which cannot be obtained with the technical device of Fan, Liao and Mincheva (2013). Note that $\|A\|_{\Sigma,F} \le \|A\|_\Sigma$. Via our new results in the last section, we will establish results under these two relative norms, under weaker conditions than their pervasiveness assumption. Note that relative error convergence is particularly meaningful for spiked covariance matrices, as the eigenvalues are on different scales.

4.2. Shrinkage POET under relative spectral norm

The discussion above reveals several drawbacks of POET. First, the spike size has to be of order p, which rules out relatively weak factors. Second, it is well known that the empirical eigenvalues are inconsistent if the spiked eigenvalues do not significantly dominate the non-spiked part. Therefore, a proper correction or shrinkage is needed. See the recent paper by Donoho, Gavish and Johnstone (2014) for optimal shrinkage of eigenvalues.

Regarding the first drawback, we relax the pervasiveness assumption $\|p^{-1}B'B - \Omega_0\| = o(1)$ of Fan, Liao and Mincheva (2013) to the following weaker assumption.

Assumption 4.1

$\|\Lambda_A^{-1/2}B'B\Lambda_A^{-1/2} - \Omega_0\| = o(1)$ for some $\Omega_0$ with eigenvalues bounded from above and below, where $\Lambda_A = \operatorname{diag}(\lambda_1, \ldots, \lambda_m)$. In addition, we assume $\lambda_m \to \infty$ and that $\lambda_1/\lambda_m$ is bounded from above and below.

This assumption does not require the first m eigenvalues of Σ to take any specific rate. They can still be much smaller than p, although for simplicity we require them to diverge and to share the same divergence rate. Since ‖Σu‖ is assumed to be bounded, the assumption λm → ∞ is imposed to avoid identifiability issues. When λm does not diverge, a more sophisticated condition is needed for identifiability (Chandrasekaran et al., 2011).

In order to handle the second drawback, we propose the Shrinkage POET (S-POET) method. Inspired by (3.1), the shrinkage POET modifies the first part of the POET estimator (4.4) as follows:

$$\hat{\Sigma}^S = \sum_{j=1}^m \hat{\lambda}_j^S\hat{\xi}_j\hat{\xi}_j' + \hat{\Sigma}_u^{\mathcal{T}}, \quad (4.8)$$

where $\hat{\lambda}_j^S = \max\{\hat{\lambda}_j - \bar{c}p/n, 0\}$, a simple soft thresholding correction. Obviously, if $\hat{\lambda}_j$ is sufficiently large, $\hat{\lambda}_j^S/\lambda_j = \hat{\lambda}_j/\lambda_j - \bar{c}c_j = 1 + o_P(1)$. Since $\bar{c}$ is unknown, a natural estimator $\hat{c}$ is chosen such that the total of the eigenvalues remains unchanged:

$$\operatorname{tr}(\hat{\Sigma}) = \sum_{j=1}^m(\hat{\lambda}_j - \hat{c}p/n) + (p - m)\hat{c},$$

i.e. $\hat{c} = \big(\operatorname{tr}(\hat{\Sigma}) - \sum_{j=1}^m\hat{\lambda}_j\big)/(p - m - pm/n)$. It has been shown by Lemma 7 of Yata and Aoshima (2012) that

$$(\hat{c} - \bar{c})\frac{p}{n\lambda_j} = O_P\left(\frac{\operatorname{tr}(\hat{\Sigma}) - \sum_{j=1}^m\hat{\lambda}_j}{(n - m)\lambda_m} - \frac{\bar{c}p}{n\lambda_m}\right) = O_P(n^{-1}).$$

Thus, replacing $\bar{c}$ by $\hat{c}$, we have $\hat{\lambda}_j^S/\lambda_j - 1 = O_P(\lambda_j^{-1}\sqrt{p/n} + n^{-1/2})$, i.e. the estimation error in $\hat{c}$ is negligible. From Theorem 3.1, we can easily obtain the asymptotic normality $\sqrt{n}(\hat{\lambda}_j^S/\lambda_j - 1) \xrightarrow{d} N(0, \kappa_j - 1)$ if $\sqrt{p} = o(\lambda_j)$.
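The eigenvalue correction of S-POET is a one-line modification on top of the eigen-decomposition. A minimal numpy sketch of (4.8) and the trace-preserving estimate $\hat{c}$ (the function name is ours):

```python
import numpy as np

def spoet_eigenvalues(Sigma_hat, m, n):
    """Shrink the m leading eigenvalues of Sigma_hat as in (4.8)."""
    p = Sigma_hat.shape[0]
    lam_hat = np.linalg.eigvalsh(Sigma_hat)[::-1]              # descending
    # trace-preserving estimate of c-bar from Section 4.2
    c_hat = (np.trace(Sigma_hat) - lam_hat[:m].sum()) / (p - m - p * m / n)
    lam_S = np.maximum(lam_hat[:m] - c_hat * p / n, 0.0)       # soft correction
    return lam_S, c_hat
```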

To get the convergence of relative errors under the operator norm, we also need the following additional assumptions:

Assumption 4.2

  1. $\{u_t, f_t\}_{t \ge 1}$ are independently and identically distributed with $\mathbb{E}[u_{it}] = \mathbb{E}[u_{it}f_{jt}] = 0$ for all i ≤ p, j ≤ m and t ≤ T.

  2. There exist positive constants $c_1$ and $c_2$ such that $\lambda_{\min}(\Sigma_u) > c_1$, $\|\Sigma_u\| < c_2$, and $\min_{i,j}\operatorname{Var}(u_{it}u_{jt}) > c_1$.

  3. There exist positive constants $r_1$, $r_2$, $b_1$ and $b_2$ such that for s > 0, i ≤ p, j ≤ m,
    $$\mathbb{P}(|u_{it}| > s) \le \exp(-(s/b_1)^{r_1}) \quad \text{and} \quad \mathbb{P}(|f_{jt}| > s) \le \exp(-(s/b_2)^{r_2}).$$
  4. There exists M > 0 such that for all i ≤ p, j ≤ m, $|b_{ij}| \le M\sqrt{\lambda_j/p}$.

  5. $\sqrt{p}(\log T)^{1/r_2} = o(\lambda_m)$.

The first three conditions are common in the factor model literature. If we write $B = (\tilde{b}_1, \ldots, \tilde{b}_m)$, by Weyl's inequality we have $\max_{1 \le j \le m}\|\tilde{b}_j\|^2/\lambda_j \le 1 + \|\Sigma_u\|/\lambda_j = 1 + o(1)$. Thus it is reasonable to assume, as in the fourth condition, that the magnitude $|b_{ij}|$ of the factor loadings is of order $\sqrt{\lambda_j/p}$. The last condition is imposed to ease the technical presentation.

Now we are ready to investigate $\|\hat{\Sigma}^S - \Sigma\|_\Sigma$. Consider the spectral decomposition of Σ:

$$\Sigma = (\Gamma_{p\times m}, \Omega_{p\times(p-m)})\begin{pmatrix}\Lambda_{m\times m} & 0\\ 0 & \Theta_{(p-m)\times(p-m)}\end{pmatrix}\begin{pmatrix}\Gamma'\\ \Omega'\end{pmatrix}.$$

Then obviously

$$\|\hat{\Sigma}^S - \Sigma\|_\Sigma \le \|\Sigma^{-\frac12}(\hat{\Gamma}\hat{\Lambda}^S\hat{\Gamma}' - BB')\Sigma^{-\frac12}\| + \|\Sigma^{-\frac12}(\hat{\Sigma}_u^{\mathcal{T}} - \Sigma_u)\Sigma^{-\frac12}\| \triangleq \Delta_L + \Delta_S, \quad (4.9)$$

and

$$\Delta_S \le \|\Sigma^{-1}\|\|\hat{\Sigma}_u^{\mathcal{T}} - \Sigma_u\| \le C\|\hat{\Sigma}_u^{\mathcal{T}} - \Sigma_u\|. \quad (4.10)$$

It can be shown

$$\Delta_L = \left\|\begin{pmatrix}\Lambda^{-\frac12}\Gamma'\\ \Theta^{-\frac12}\Omega'\end{pmatrix}(\hat{\Gamma}\hat{\Lambda}^S\hat{\Gamma}' - BB')(\Gamma\Lambda^{-\frac12}, \Omega\Theta^{-\frac12})\right\| \le \Delta_{L1} + \Delta_{L2}, \quad (4.11)$$

where $\Delta_{L1} = \|\Lambda^{-\frac12}\Gamma'(\hat{\Gamma}\hat{\Lambda}^S\hat{\Gamma}' - BB')\Gamma\Lambda^{-\frac12}\|$ and $\Delta_{L2} = \|\Theta^{-\frac12}\Omega'(\hat{\Gamma}\hat{\Lambda}^S\hat{\Gamma}' - BB')\Omega\Theta^{-\frac12}\|$. Thus, in order to find the convergence rate under the relative spectral norm, we need to consider the terms $\Delta_{L1}$, $\Delta_{L2}$ and $\Delta_S$ separately. Notice that $\Delta_{L1}$ measures the relative error of the estimated spiked eigenvalues, $\Delta_{L2}$ reflects the goodness of the estimated eigenvectors, and $\Delta_S$ controls the error of estimating the sparse idiosyncratic covariance matrix. To bound the relative Frobenius norm $\|\hat{\Sigma}^S - \Sigma\|_{\Sigma,F}$, we define analogous quantities $\tilde{\Delta}_{L1}$, $\tilde{\Delta}_{L2}$, $\tilde{\Delta}_S$, which replace the spectral norm by the Frobenius norm multiplied by $p^{-1/2}$. Note that (4.9)–(4.11) also hold for the relative Frobenius norm with $\tilde{\Delta}_{L1}$, $\tilde{\Delta}_{L2}$, $\tilde{\Delta}_S$. The following theorem gives the rate of each term. Its proof is provided in the supplementary material (Wang and Fan, 2015).

Theorem 4.1

Under Assumptions 2.1, 2.2, 2.3, 4.1 and 4.2, if $p\log p > \max\{T(\log T)^{4/r_2}, T(\log(pT))^{2/r_1}\}$, we have

$$\tilde{\Delta}_{L1} \le \Delta_{L1} = O_P(T^{-1/2}), \quad \Delta_{L2} = O_P\Big(\frac{p}{T}\Big), \quad \tilde{\Delta}_{L2} = O_P\Big(\sqrt{\frac{p}{T}}\Big),$$

and by applying the adaptive thresholding estimator (4.6) with

$$\tau_{ij} = C\omega_T(\hat{\sigma}_{u,ii}\hat{\sigma}_{u,jj})^{1/2}, \quad \text{and} \quad \omega_T = \sqrt{\log p/T} + 1/\sqrt{p},$$

we have

$$\tilde{\Delta}_S \le \Delta_S = O_P(m_p\omega_T^{1-q}).$$

Combining the three terms, $\|\hat{\Sigma}^S - \Sigma\|_\Sigma = O_P(p/T + m_p\omega_T^{1-q})$ and $\|\hat{\Sigma}^S - \Sigma\|_{\Sigma,F} = O_P(\sqrt{p/T} + m_p\omega_T^{1-q})$.

The relative error convergence characterizes the accuracy of estimation for spiked covariance matrices. In contrast with the known result under the relative Frobenius norm, this is the first time that the relative rate under the spectral norm has been derived. As long as λm is slightly above $\sqrt{p}$, we reach the same rates of convergence. Therefore, we conclude that S-POET is effective even under a much weaker signal level. Compared with (4.7), under the relative Frobenius norm we achieve a better rate without the artificial log term, thanks to the new asymptotic results.

4.3. Portfolio risk management

The risk of a given portfolio with allocation weight w is conventionally measured by its variance $w'\Sigma w$, where Σ is the volatility (covariance) matrix of the returns of the underlying assets. To estimate a large portfolio's risk, one needs to estimate a large covariance matrix Σ, and factor models are frequently used to reduce the dimensionality. This was the idea of Fan, Liao and Shi (2015), who used the POET estimator to estimate Σ. However, the basic bound for the risk error $|w'\hat{\Sigma}w - w'\Sigma w|$ in their paper is

$$|w'\hat{\Sigma}w - w'\Sigma w| \le \|w\|_1^2\|\hat{\Sigma} - \Sigma\|_{\max}.$$

They assumed that the gross exposure of the portfolio is bounded, i.e. $\|w\|_1 = O(1)$. Technically, when p is large, $w'\Sigma w$ can be small. What an investor cares about most is the relative risk error $RE(w) = |w'\hat{\Sigma}w/w'\Sigma w - 1|$. Often w is a data-driven investment strategy that depends on past data. Regardless of what w is,

$$\max_w RE(w) = \|\hat{\Sigma} - \Sigma\|_\Sigma,$$

which does not converge, by Theorem 4.1, for p > T. Thus the question of interest is what kind of portfolio w makes the relative error converge. Decompose w as a linear combination of the eigenvectors of Σ, namely w = (Γ, Ω)η with $\eta = (\eta_A', \eta_B')'$. We have the following useful result for risk management.

Theorem 4.2

Under Assumptions 2.1, 2.2, 4.1, 4.2 and the factor model (4.1) with Gaussian noise and factors, if there exists $C_1 > 0$ such that $\|\eta_B\|_1 \le C_1$, and if $\lambda_j \asymp p^\alpha$ for j = 1, …, m and $T \asymp p^\beta$ with α > 1/2, 0 < β < 1, α + β > 1, then the relative risk error is of order

$$RE(w) = \Big|\frac{w'\hat{\Sigma}^Sw}{w'\Sigma w} - 1\Big| = O_P\big(T^{-\min\{\frac{2(\alpha+\beta-1)}{\beta},\frac12\}} + m_p\omega_T^{1-q}\big),$$

for α < 1. If α ≥ 1 or $\|\eta_A\| \ge C_2$ for some $C_2 > 0$, then $RE(w) = O_P(m_p\omega_T^{1-q})$.

The condition $\|\eta_B\|_1 \le C_1$ is generally weaker than $\|w\|_1 = O(1)$. It does not limit the total exposure of the investor's position, but only puts a constraint on the investment in the non-spiked part. Note that under the conditions of Theorem 4.2, $p/(T\lambda_j) \to 0$, so S-POET and POET are approximately the same, and the stated result is valid for POET too.
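Checking the exposure condition of Theorem 4.2 for a given portfolio, and evaluating its relative risk error, are both direct computations. A minimal numpy sketch (the function names are ours; Sigma_hat stands for any estimator such as Σ̂S):

```python
import numpy as np

def exposure_split(w, Sigma, m):
    """Coefficients of w in the eigenbasis of Sigma: w = (Gamma, Omega) eta.
    Returns (||eta_A||, ||eta_B||_1), the quantities appearing in Theorem 4.2."""
    _, V = np.linalg.eigh(Sigma)
    V = V[:, ::-1]                      # eigenvectors, descending eigenvalue order
    eta = V.T @ w
    return np.linalg.norm(eta[:m]), np.linalg.norm(eta[m:], ord=1)

def relative_risk_error(w, Sigma_hat, Sigma):
    """RE(w) = |w' Sigma_hat w / (w' Sigma w) - 1|."""
    return abs((w @ Sigma_hat @ w) / (w @ Sigma @ w) - 1.0)
```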

4.4. Estimation of false discovery proportion

Another important application of the factor model is the estimation of the false discovery proportion. For simplicity, we assume Gaussian data $X_i \sim N(\mu, \Sigma)$ with an unknown correlation matrix Σ, and we wish to test which coordinates of μ are nonvanishing. Consider the test statistic $Z = \sqrt{n}\bar{X}$, where $\bar{X}$ is the sample mean of the data. Then $Z \sim N(\mu^*, \Sigma)$ with $\mu^* = \sqrt{n}\mu$. The problem is to test

$$H_{0j}: \mu_j^* = 0 \quad \text{vs.} \quad H_{1j}: \mu_j^* \ne 0.$$

Define the number of discoveries $R(t) = \#\{j : P_j \le t\}$ and the number of false discoveries $V(t) = \#\{\text{true null } j : P_j \le t\}$, where $P_j$ is the p-value associated with the jth test. Note that R(t) is observable while V(t) needs to be estimated. The false discovery proportion (FDP) is defined as FDP(t) = V(t)/R(t).

Fan and Han (2013) proposed to employ the factor structure

$$\Sigma = BB' + A, \quad (4.12)$$

where $B = (\sqrt{\lambda_1}\xi_1, \ldots, \sqrt{\lambda_m}\xi_m)$, and $\lambda_j$ and $\xi_j$ are respectively the jth eigenvalue and eigenvector of Σ, as before. Then Z can be stochastically decomposed as

$$Z = \mu^* + BW + K,$$

where $W \sim N(0, I_m)$ is the vector of m common factors and $K \sim N(0, A)$, independent of W, is the vector of idiosyncratic errors. For simplicity, assume the maximal number of nonzero elements in each row of A is bounded. Fan and Han (2013) argued that the asymptotic upper bound

$$\mathrm{FDP}_A(t) = \sum_{i=1}^p\big[\Phi(a_i(z_{t/2} + \eta_i)) + \Phi(a_i(z_{t/2} - \eta_i))\big]/R(t) \quad (4.13)$$

of FDP(t) should be a realistic target to estimate for dependent tests, where $z_{t/2}$ is the t/2-quantile of the standard normal distribution, $a_i = (1 - \|b_i\|^2)^{-1/2}$, $\eta_i = b_i'W$, and $b_i$ is the ith row of B.

The realized factors W and the loading matrix B are typically unknown. If a generic estimator Σ̂ is provided, then we are able to estimate B, and thus the $b_i$'s, from its empirical eigenvalues and eigenvectors $\hat{\lambda}_j$ and $\hat{\xi}_j$. W can then be estimated by the least-squares estimator $\hat{W} = (\hat{B}'\hat{B})^{-1}\hat{B}'Z$. Fan and Han (2013) proposed the following estimator for $\mathrm{FDP}_A(t)$:

$$\widehat{\mathrm{FDP}}_U(t) = \sum_{i=1}^p\big[\Phi(\hat{a}_i(z_{t/2} + \hat{\eta}_i)) + \Phi(\hat{a}_i(z_{t/2} - \hat{\eta}_i))\big]/R(t), \quad (4.14)$$

where $\hat{a}_i = (1 - \|\hat{b}_i\|^2)^{-1/2}$ and $\hat{\eta}_i = \hat{b}_i'\hat{W}$. The following assumptions are from their paper.
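A minimal numpy/scipy sketch of the estimator (4.14), assuming two-sided p-values $P_j = 2\Phi(-|Z_j|)$ and a loading estimate with full column rank and row norms below one (the function name is ours):

```python
import numpy as np
from scipy.stats import norm

def fdp_upper_estimate(Z, B_hat, t):
    """Evaluate (4.14) given test statistics Z and an estimated loading matrix B_hat."""
    W_hat = np.linalg.solve(B_hat.T @ B_hat, B_hat.T @ Z)   # least-squares factors
    eta_hat = B_hat @ W_hat                                 # eta_i = b_i' W
    a_hat = 1.0 / np.sqrt(1.0 - np.sum(B_hat**2, axis=1))   # a_i = (1 - ||b_i||^2)^{-1/2}
    z_half = norm.ppf(t / 2)                                # z_{t/2} (negative for small t)
    R = max(int(np.sum(2 * norm.cdf(-np.abs(Z)) <= t)), 1)  # number of discoveries R(t)
    V_hat = np.sum(norm.cdf(a_hat * (z_half + eta_hat)) +
                   norm.cdf(a_hat * (z_half - eta_hat)))
    return V_hat / R
```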

Assumption 4.3

There exists a constant h > 0 such that (i) $R(t)/p > hp^{-\theta}$ for some θ ≥ 0 as p → ∞ and (ii) $\hat{a}_i \le h$ and $a_i \le h$ for all i = 1, …, p.

They showed that if Σ̂ is based on the POET estimator with a spike size $\lambda_m \asymp p$, under the assumptions listed in the supplementary material and on the event that Assumption 4.3 holds,

$$|\widehat{\mathrm{FDP}}_{U,\mathrm{POET}}(t) - \mathrm{FDP}_A(t)| = O_P\Big(p^\theta\Big(\sqrt{\frac{\log p}{T}} + \frac{\|\mu^*\|}{\sqrt{p}}\Big)\Big). \quad (4.15)$$

Again we can relax the assumption on the spike magnitude from order p to much weaker Assumption 4.1. Since Σ is a correlation matrix, λ1 ≤ tr(Σ) = p. This, together with Assumption 4.1, leads us to consider leading eigenvalues of order pα for 1/2 < α ≤ 1.

Now we apply the proposed S-POET method to obtain Σ̂S and use it for FDP estimation. The following theorem shows the estimation error.

Theorem 4.3

If Assumptions 2.1, 2.2, 4.1, and 4.2 are applied to Gaussian independent data $X_i \sim N(\mu, \Sigma)$, with $\lambda_j \asymp p^\alpha$ for j = 1, …, m and $T \asymp p^\beta$ for 1/2 < α ≤ 1, 0 < β < 1, α + β > 1, then on the event that Assumption 4.3 holds, we have

$$|\widehat{\mathrm{FDP}}_{U,\mathrm{S\text{-}POET}}(t) - \mathrm{FDP}_A(t)| = O_P\Big(p^\theta\Big(\frac{\|\mu^*\|}{\sqrt{p}} + T^{-\min\{\frac{\alpha+\beta-1}{\beta},\frac12\}}\Big)\Big).$$

Compared with (4.15), the convergence rate attained by S-POET is more general. The only difference is the second term, which is $O(T^{-1/2})$ if $\alpha + \frac{1}{2}\beta \ge 1$ and $T^{-(\alpha+\beta-1)/\beta}$ otherwise. So we relax the condition from α = 1 in Fan and Han (2013) to α ∈ (1/2, 1]. This means that a signal weaker than order p still yields a consistent estimate of the false discovery proportion.

5. Simulations

We conducted simulations to demonstrate the finite sample behavior of the empirical eigen-structure, the performance of S-POET, and the validity of applying it to estimate the false discovery proportion.

5.1. Eigen-structure

In this simulation, we set n = 50, p = 500 and Σ = diag(50, 20, 10, 1, …, 1), which has three spiked eigenvalues (m = 3), λ1 = 50, λ2 = 20, λ3 = 10, and correspondingly c1 = 0.2, c2 = 0.5, c3 = 1. Data were generated from a multivariate Gaussian distribution. The number of simulations is 1000. The histograms of the standardized empirical eigenvalues $\sqrt{n/2}(\hat{\lambda}_j/\lambda_j - 1 - c_j)$ and their asymptotic distribution (standard normal) are plotted in Figure 1. The approximation is very good even for this low sample size n = 50.

Fig 1. Behavior of empirical eigenvalues. The empirical distributions of $\sqrt{n/2}(\hat{\lambda}_j/\lambda_j - 1 - c_j)$ for j = 1, 2, 3 are compared with their asymptotic distribution N(0, 1).
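A minimal numpy sketch reproducing the standardized eigenvalue statistics behind Figure 1 (for Gaussian data κj − 1 = 2, hence the √(n/2) scaling):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, m, nsim = 50, 500, 3, 1000
lam = np.concatenate([[50.0, 20.0, 10.0], np.ones(p - m)])
c = p / (n * lam[:m])                                  # c_1, c_2, c_3 = 0.2, 0.5, 1

stats = np.empty((nsim, m))
for s in range(nsim):
    X = rng.standard_normal((n, p)) * np.sqrt(lam)
    # nonzero sample eigenvalues via the small n x n Gram matrix, cf. (2.1)
    lam_hat = np.linalg.eigvalsh(X @ X.T / n)[-m:][::-1]
    stats[s] = np.sqrt(n / 2) * (lam_hat / lam[:m] - 1 - c)

print(stats.mean(axis=0), stats.std(axis=0))           # approximately (0, 1)
```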

Figure 2 shows the histograms of $\sqrt{n}(\hat{\xi}_{jA}/\|\hat{\xi}_{jA}\| - e_{jA})$ for the first three elements (the spiked part) of the first three eigenvectors. On the one hand, according to the asymptotic results, the values in the diagonal positions should converge stochastically to 0, as observed. On the other hand, the plots in the off-diagonal positions should converge in distribution to N(0, 1) for k ≠ j after standardization, which is indeed the case. We also report the correlations between the first three elements of the three eigenvectors based on those 1000 repetitions in Table 1. The correlations are all quite close to 0, which is consistent with the theory.

Fig 2. Behavior of empirical eigenvectors. The histogram of the kth element of the jth empirical eigenvector is depicted in location (k, j) for k, j ≤ 3. Off-diagonal plots show the values $\sqrt{n}(\hat{\xi}_{jk}/\|\hat{\xi}_{jA}\|)\big/\sqrt{c_jc_k/(c_j - c_k)^2}$ compared with their asymptotic distribution N(0, 1) for k ≠ j, while diagonal plots show the values $\sqrt{n}(\hat{\xi}_{jj}/\|\hat{\xi}_{jA}\| - 1)$, which converge stochastically to 0.

Table 1.

The correlations between the first three elements for each of the three empirical eigenvectors based on 1000 repetitions

1st & 2nd elements 1st & 3rd elements 2nd & 3rd elements
1st Eigenvector 0.00156 −0.00192 −0.04112
2nd Eigenvector −0.02318 −0.00403 0.01483
3rd Eigenvector −0.02529 −0.04004 0.12524

For the normalized non-spiked part ξ̂jB/‖ξ̂jB‖, it should be distributed uniformly over the unit sphere. This can be tested by the results of Cai, Fan and Jiang (2013). For any n data points $X_1, \ldots, X_n$ on the unit sphere in $\mathbb{R}^p$, define the normalized empirical distribution of the angles between each pair of vectors as

$$\mu_{n,p} = \binom{n}{2}^{-1}\sum_{1\le i<j\le n}\delta_{\sqrt{p-2}\,(\pi/2 - \Theta_{ij})},$$

where $\Theta_{ij} \in [0, \pi]$ is the angle between the vectors $X_i$ and $X_j$. When the data are generated uniformly from the sphere, $\mu_{n,p}$ converges to the standard normal distribution with probability 1. Figure 3 shows the empirical distributions of all pairwise angles of the realized ξ̂jB/‖ξ̂jB‖ (j = 1, 2, 3) in 1000 simulations. Since the number of such pairwise angles is $\binom{1000}{2}$, the empirical distributions and the asymptotic distribution N(0, 1) are almost identical. The normality holds even for a small subset of the angles.
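A minimal numpy sketch of this uniformity diagnostic (the function name is ours; the dimension 497 = p − m matches the simulation above):

```python
import numpy as np

def normalized_angles(V):
    """Return sqrt(p-2) * (pi/2 - Theta_ij) over all row pairs of V; these are
    approximately N(0,1) when the rows are uniform on the unit sphere."""
    U = V / np.linalg.norm(V, axis=1, keepdims=True)
    G = np.clip(U @ U.T, -1.0, 1.0)                 # pairwise cosines
    iu = np.triu_indices(len(U), k=1)
    return np.sqrt(V.shape[1] - 2) * (np.pi / 2 - np.arccos(G[iu]))

rng = np.random.default_rng(3)
a = normalized_angles(rng.standard_normal((1000, 497)))   # sanity check
print(a.mean(), a.std())                                  # approximately 0 and 1
```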

Fig 3. The empirical distributions of all pairwise angles of the 1000 realized ξ̂jB/‖ξ̂jB‖ (j = 1, 2, 3) compared with their asymptotic distribution N(0, 1).

Lastly, we ran simulations to verify the difference in the rate of convergence of $\langle\hat{\xi}_j, e_j\rangle$ for m = 1 and m > 1 revealed in Theorem 3.2 (iii). We chose $n = [10 \times 1.2^l]$ for l = 0, …, 9 and $p = [n^3/100]$, where [·] denotes rounding. We set λj = 1 for j ≥ 3 and considered two situations: (1) λ1 = p, λ2 = 1; (2) λ1 = 2λ2 = p. Under both settings, 500 simulations were carried out and the corresponding angle between the empirical eigenvector and its population counterpart was calculated for each simulation. The logarithm of the median absolute error of $\langle\hat{\xi}_1, e_1\rangle - 1/\sqrt{1+c_1}$ was plotted against log(n). Under the two settings, the rates of convergence are $O_P(n^{-3/2})$ and $O_P(n^{-1})$ respectively. Thus the slope of the curve should be −3/2 for a single spike and −1 for two spikes, which is indeed the case, as shown in Figure 4.

Fig 4. Difference in the convergence rate of $\langle\hat{\xi}_1, e_1\rangle - 1/\sqrt{1+c_1}$ for models with a single spike and with two spikes. The error is expected to decrease at the rates $O_P(n^{-3/2})$ and $O_P(n^{-1})$, respectively.

In short, all the simulation results match well with the theoretical results for the high dimensional regime.

5.2. Performance of S-POET

We now demonstrate the effectiveness of S-POET in comparison with POET. A setting similar to that of the last section is used, i.e. m = 3 and c1 = 0.2, c2 = 0.5, c3 = 1. The sample size T ranges from 50 to 150 and $p = [T^{3/2}]$. Note that when T = 150, p ≈ 1800. The spiked eigenvalues are determined from $p/(T\lambda_j) = c_j$, so that λj is of order $\sqrt{T}$, which is much smaller than p. For each pair of T and p, the following steps are used to generate the observed data from the factor model 200 times; a code sketch follows the list.

  1. Each row of B is simulated from the standard multivariate normal distribution, and the jth column is then normalized to have norm $\sqrt{\lambda_j}$ for j = 1, 2, 3.

  2. Each row of F is simulated from the standard multivariate normal distribution.

  3. Set $\Sigma_u = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_p^2)$, where the $\sigma_i^2$'s are generated from Gamma(α, β) with α = β = 100 (mean 1, standard deviation 0.1). The idiosyncratic error U is simulated from $N(0, \Sigma_u)$.

  4. Compute the observed data Y = BF′ + U.
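A minimal numpy sketch of the data-generating steps above (T = 100 shown; the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
T = 100; p = int(T ** 1.5); m = 3
c = np.array([0.2, 0.5, 1.0])
lam = p / (T * c)                               # spikes solving p/(T*lam_j) = c_j

B = rng.standard_normal((p, m))
B *= np.sqrt(lam) / np.linalg.norm(B, axis=0)   # column j scaled to norm sqrt(lam_j)
F = rng.standard_normal((T, m))
sigma2 = rng.gamma(shape=100.0, scale=0.01, size=p)   # Gamma(100, 100): mean 1, sd 0.1
U = rng.standard_normal((p, T)) * np.sqrt(sigma2)[:, None]

Y = B @ F.T + U                                 # p x T observed data panel
```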

Both S-POET and POET are applied to estimate the covariance matrix Σ = BB′ + Σu. Their mean estimation errors over the 200 simulations, measured in the relative spectral norm ‖Σ̂ − Σ‖Σ, the relative Frobenius norm ‖Σ̂ − Σ‖Σ,F, the spectral norm ‖Σ̂ − Σ‖, and the max norm ‖Σ̂ − Σ‖max, are reported in Figure 5. The errors of the sample covariance matrix are also depicted for comparison. First, notice that in every norm S-POET uniformly outperforms POET and the sample covariance. This affirms the claim that shrinkage of the spiked eigenvalues is necessary to maintain good performance when the spikes are not sufficiently large. Since the low-rank part is not shrunk in POET, its error under the spectral norm is comparable to, and even slightly larger than, that of the sample covariance matrix. The errors under the max norm and the relative Frobenius norm decrease, as expected, as T and p increase. However, the error under the relative spectral norm does not converge: our theory shows it should increase at the rate $p/T = \sqrt{T}$.

Fig 5. Estimation errors of the covariance matrix under the relative spectral, relative Frobenius, spectral and max norms using S-POET (red), POET (black) and the sample covariance (blue).

5.3. FDP estimation

In this section, we report simulation results on FDP estimation using both POET and S-POET. The data are simulated in a similar way as in Section 5.2, with p = 1000 and n = 100. The first m = 3 eigenvalues have spike sizes proportional to $p/\sqrt{n}$, which corresponds to α = β = 2/3 in Theorem 4.3. The true FDP is calculated using FDP(t) = V(t)/R(t) with t = 0.01. The approximate FDP, $\mathrm{FDP}_A(t)$, is calculated as in (4.13) with known B but estimated W given by $\hat{W} = (B'B)^{-1}B'Z$. This $\mathrm{FDP}_A(t)$, based on a known covariance matrix, serves as a benchmark for our estimated covariance matrices to be compared with. We employed POET and S-POET to obtain $\widehat{\mathrm{FDP}}_{U,\mathrm{POET}}(t)$ and $\widehat{\mathrm{FDP}}_{U,\mathrm{S\text{-}POET}}(t)$.

In Figure 6, three scatter plots compare $\mathrm{FDP}_A(t)$, $\widehat{\mathrm{FDP}}_{U,\mathrm{POET}}(t)$ and $\widehat{\mathrm{FDP}}_{U,\mathrm{S\text{-}POET}}(t)$ with the true FDP(t). The points are closely aligned along the 45-degree line, meaning that all of them are quite close to the true FDP. With the semi-strong signal $\lambda_j \asymp p/\sqrt{n}$, although much weaker than order p, POET accomplishes the task as well as S-POET. Both estimators perform as well as if we knew the covariance matrix Σ, the benchmark.

Fig 6. Comparison of estimated FDPs with the true values. The left plot assumes knowledge of B; the middle and right plots correspond to the POET and S-POET methods, respectively. The results are aligned along the 45-degree line, indicating the accuracy of the estimated FDP.

6. Conclusions

In this paper, we studied two closely related problems: the asymptotic behavior of the empirical eigenvalues and eigenvectors under the general regime of bounded $p/(n\lambda_j)$, and large covariance estimation for factor models with the relaxed signal level $\sqrt{p} = o(\lambda_j)$.

The first study provides new technical tools for the derivation of error bounds for large covariance estimation under the relative Frobenius norm (with a better rate) and the relative spectral norm (for the first time). The results motivate the newly proposed covariance estimator S-POET for the second problem, which corrects the biases of the estimated leading eigenvalues. S-POET is demonstrated to have better sampling properties than POET, and this is convincingly verified in the simulation study. In addition, we are able to apply S-POET to two important applications, risk management and false discovery control, and to relax the required signal level to nearly $\sqrt{p}$. These conclusions shed new light on applications of factor models.

On the other hand, the second problem was a key motivation for us to study the empirical eigen-structure in a more general high dimensional regime. We aim to understand why PCA works for pervasive factor models but fails in the classical random matrix regime, without sparsity assumptions. What is the fundamental limit of PCA in high dimensions? We clearly showed that, for both empirical eigenvalues and eigenvectors, consistency is guaranteed once $p/(n\lambda_j) \to 0$. Further, our theory gives a fine-grained characterization of the asymptotic behavior under the generalized and unified regime, which includes the situations of bounded eigenvalues, HDLSS, and pervasive factor models, especially for the empirical eigenvectors. The asymptotic rate of convergence is obtained as long as $p/(n\lambda_j)$ is bounded, while the asymptotic distribution is fully described when $\sqrt{p} = o(\lambda_j)$. Some interesting phenomena, such as the interaction between multiple spikes, are also revealed by our results. Our proofs are novel in that we clearly identify the terms that retain the low-dimensional asymptotic normality and the terms that generate the random biases. In sum, our results serve as a necessary complement to the random matrix literature when the signal diverges with the dimensionality.

Supplementary Material


Acknowledgments

The research was partially supported by NSF grants DMS-1206464 and DMS-1406266 and NIH grants R01-GM072611-11 and R01GM100474-04. We would like to thank the Editor, the Associate Editor, and the anonymous referees for constructive comments that led to substantial improvement of the presentation and the results of the paper.

APPENDIX A

PROOFS FOR SECTION 3

A.1. Proof of Theorem 3.1

We first provide three useful lemmas for the proof. Lemma A.1 provides non-asymptotic upper and lower bounds for the eigenvalues of a weighted Wishart matrix with sub-Gaussian distributions.

Lemma A.1

Let $A_1, \ldots, A_n$ be n independent p-dimensional sub-Gaussian random vectors with zero mean and identity variance, and with sub-Gaussian norms bounded by a constant $C_0$. Then for every t ≥ 0, with probability at least $1 - 2\exp(-ct^2)$, one has

$$\bar{w} - \max\{\delta, \delta^2\} \le \lambda_p\Big(\frac{1}{n}\sum_{i=1}^n w_iA_iA_i'\Big) \le \lambda_1\Big(\frac{1}{n}\sum_{i=1}^n w_iA_iA_i'\Big) \le \bar{w} + \max\{\delta, \delta^2\},$$

where $\delta = C\sqrt{p/n} + t/\sqrt{n}$ for constants C, c > 0 depending on $C_0$. Here the $|w_i|$'s are bounded for all i and $\bar{w} = n^{-1}\sum_{i=1}^n w_i$.

The above lemma extends the classical Davidson-Szarek bound [Theorem II.7 of Davidson and Szarek (2001)] to the weighted sample covariance with sub-Gaussian distributions. It was shown by Vershynin (2010) that the conclusion holds with $w_i = 1$ for all i. With techniques similar to those developed in Vershynin (2010), we can obtain the above lemma for general bounded weights. The details are omitted.

Now in order to prove the theorem, let us define two quantities and treat them separately in the following two lemmas. Let

$$A = n^{-1}\sum_{j=1}^m \lambda_jZ_jZ_j', \quad \text{and} \quad B = n^{-1}\sum_{j=m+1}^p \lambda_jZ_jZ_j',$$

where $Z_j$ is the jth column of $X\Lambda^{-1/2}$. Then,

$$\tilde{\Sigma} = \frac{1}{n}\sum_{j=1}^p \lambda_jZ_jZ_j' = A + B. \quad (A.1)$$
Lemma A.2

Under Assumptions 2.1 – 2.3, as n → ∞,

$$\sqrt{n}(\lambda_j(A)/\lambda_j - 1) \xrightarrow{d} N(0, \kappa_j - 1), \quad \text{for } j = 1, \ldots, m.$$

In addition, they are asymptotically independent.

Lemma A.3

Under Assumptions 2.1 – 2.3, for j = 1, ⋯, m, we have

$$\lambda_k(B)/\lambda_j = \bar{c}c_j + O_P(\lambda_j^{-1}\sqrt{p/n}) + o_P(n^{-1/2}), \quad \text{for } k = 1, 2, \ldots, n.$$

The proofs of the above two lemmas are given in the supplementary material (Wang and Fan, 2015).

Proof of Theorem 3.1

By Weyl's theorem, $\lambda_j(A) + \lambda_n(B) \le \hat{\lambda}_j \le \lambda_j(A) + \lambda_1(B)$. Therefore, from Lemma A.3,

$$\frac{\hat{\lambda}_j}{\lambda_j} = \frac{\lambda_j(A)}{\lambda_j} + \bar{c}c_j + O_P\Big(\lambda_j^{-1}\sqrt{\frac{p}{n}}\Big) + o_P(c_jn^{-1/2}).$$

By Lemma A.2 and Slutsky's theorem, we conclude that $\sqrt{n}\big(\hat{\lambda}_j/\lambda_j - (1 + \bar{c}c_j + O_P(\lambda_j^{-1}\sqrt{p/n}))\big)$ converges in distribution to N(0, κj − 1) and that the limiting distributions of the first m eigenvalues are independent.

A.2. Proofs of Theorem 3.2

The proof of Theorem 3.2 is mathematically involved. The basic idea for proving part (i) is outlined in Section 2. We relegate the less important technical Lemmas A.4 – A.6 to the supplementary material (Wang and Fan, 2015) in order not to distract the readers. The proof of part (ii) utilizes the invariance of the standard Gaussian distribution under orthogonal transformations.

Proof of Theorem 3.2

(i) Let us start by proving the asymptotic normality of ξ̂jA for the case m > 1. Write

$$X = (Z_A\Lambda_A^{1/2}, Z_B\Lambda_B^{1/2}) = (\sqrt{\lambda_1}Z_1, \ldots, \sqrt{\lambda_m}Z_m, \sqrt{\lambda_{m+1}}Z_{m+1}, \ldots, \sqrt{\lambda_p}Z_p),$$

where each Zj follows a sub-Gaussian distribution with mean 0 and identity variance In. Then by the eigenvalue relationship of equation (2.1), we have

$$\hat\xi_j^A = \frac{\Lambda_A^{1/2}Z_A'u_j}{\sqrt{n\hat\lambda_j}} \quad\text{and}\quad u_j = \frac{X\hat\xi_j}{\sqrt{n\hat\lambda_j}} = \frac{Z_A\Lambda_A^{1/2}\hat\xi_j^A}{\sqrt{n\hat\lambda_j}} + \frac{Z_B\Lambda_B^{1/2}\hat\xi_j^B}{\sqrt{n\hat\lambda_j}}. \tag{A.2}$$
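Relationship (A.2) is the standard duality between the eigenvectors of $n^{-1}X'X$ and $n^{-1}XX'$, and is easy to check numerically. The following sketch (ours, with arbitrary dimensions) verifies that $X'u_j/\sqrt{n\hat\lambda_j}$ is a unit eigenvector of $n^{-1}X'X$.

import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 80
X = rng.standard_normal((n, p))
lam_hat, U = np.linalg.eigh(X @ X.T / n)     # u_j: eigenvectors of the n x n matrix
j = -1                                       # take the leading pair
xi = X.T @ U[:, j] / np.sqrt(n * lam_hat[j]) # candidate eigenvector of X'X/n
S = X.T @ X / n
print(np.allclose(S @ xi, lam_hat[j] * xi))  # True: same eigenvalue lam_hat[j]
print(np.linalg.norm(xi))                    # 1.0: automatically unit length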

Recall that $u_j$ is an eigenvector of the matrix $\tilde\Sigma$, that is, $\frac{1}{n}XX'u_j = \hat\lambda_j u_j$. Using $X = (Z_A\Lambda_A^{1/2}, Z_B\Lambda_B^{1/2})$, we obtain

$$\Big(I_n - \frac{1}{n}Z_A\frac{\Lambda_A}{\lambda_j}Z_A'\Big)u_j = Du_j - \Delta u_j, \tag{A.3}$$

where we denote $D = (n\lambda_j)^{-1}Z_B\Lambda_BZ_B' - \bar c c_j I_n$ and $\Delta = \hat\lambda_j/\lambda_j - (1 + \bar c c_j)$. We then left-multiply equation (A.3) by $\Lambda_A^{1/2}Z_A'/\sqrt{n\hat\lambda_j}$ and employ relationship (A.2) to replace $u_j$ by $\hat\xi_j^A$ and $\hat\xi_j^B$ as follows:

$$\Big(I_m - \frac{\Lambda_A}{\lambda_j}\Big)\hat\xi_j^A = \frac{\Lambda_A^{1/2}\big(\frac{1}{n}Z_A'Z_A - I_m\big)\Lambda_A^{1/2}}{\lambda_j}\,\hat\xi_j^A + \frac{\Lambda_A^{1/2}Z_A'DZ_A\Lambda_A^{1/2}}{n\hat\lambda_j}\,\hat\xi_j^A + \frac{\Lambda_A^{1/2}Z_A'DZ_B\Lambda_B^{1/2}}{n\hat\lambda_j}\,\hat\xi_j^B - \Delta\hat\xi_j^A. \tag{A.4}$$

Further define

$$R = \sum_{k\in[m]\setminus\{j\}} \frac{\lambda_j}{\lambda_j - \lambda_k}\, e_k^A e_k^{A\prime}.$$

Then we have $R(I_m - \Lambda_A/\lambda_j) = I_m - e_j^A e_j^{A\prime}$. Note that $R$ is only well defined if $m > 1$. Therefore, by left-multiplying equation (A.4) by $R$,

$$\hat\xi_j^A - \langle\hat\xi_j^A, e_j^A\rangle e_j^A = R\Big(\frac{\Lambda_A}{\lambda_j}\Big)^{1/2}K\Big(\frac{\Lambda_A}{\lambda_j}\Big)^{1/2}\hat\xi_j^A + R\,\frac{\Lambda_A^{1/2}Z_A'DZ_B\Lambda_B^{1/2}}{n\hat\lambda_j}\,\hat\xi_j^B - \Delta R\,\hat\xi_j^A, \tag{A.5}$$

where $K = n^{-1}Z_A'Z_A - I_m + \lambda_j(n\hat\lambda_j)^{-1}Z_A'DZ_A$. Dividing both sides by $\|\hat\xi_j^A\|$, we are able to write

$$\frac{\hat\xi_j^A}{\|\hat\xi_j^A\|} - e_j^A = R\Big(\frac{\Lambda_A}{\lambda_j}\Big)^{1/2}K\Big(\frac{\Lambda_A}{\lambda_j}\Big)^{1/2}e_j^A + r_n, \tag{A.6}$$

where

$$r_n = \Big(\Big\langle\frac{\hat\xi_j^A}{\|\hat\xi_j^A\|}, e_j^A\Big\rangle - 1\Big)e_j^A + R\Big(\frac{\Lambda_A}{\lambda_j}\Big)^{1/2}K\Big(\frac{\Lambda_A}{\lambda_j}\Big)^{1/2}\Big(\frac{\hat\xi_j^A}{\|\hat\xi_j^A\|} - e_j^A\Big) + R\,\frac{\Lambda_A^{1/2}Z_A'DZ_B\Lambda_B^{1/2}}{n\hat\lambda_j}\,\frac{\hat\xi_j^B}{\|\hat\xi_j^A\|} - \Delta R\Big(\frac{\hat\xi_j^A}{\|\hat\xi_j^A\|} - e_j^A\Big). \tag{A.7}$$
Lemma A.4

As $n \to \infty$, $r_n = O_P\big(\lambda_j^{-1}\sqrt{p/n} + 1/n\big)$.

By Lemma A.4, $r_n$ is a term of smaller order. Since $(\Lambda_A/\lambda_j)^{1/2}e_j^A = e_j^A$ (the $j$th diagonal entry of $\Lambda_A/\lambda_j$ is one),

$$\sqrt{n}\Big(\frac{\hat\xi_j^A}{\|\hat\xi_j^A\|} - e_j^A + O_P\Big(\sqrt{\frac{p}{n\lambda_j^2}}\Big)\Big) = \sqrt{n}\,R\Big(\frac{\Lambda_A}{\lambda_j}\Big)^{1/2}Ke_j^A + o_P(1). \tag{A.8}$$

Now let us derive normality of the right hand side of (A.8). According to the definition of R,

$$R\Big(\frac{\Lambda_A}{\lambda_j}\Big)^{1/2} = \sum_{k\in[m]\setminus\{j\}} \frac{\sqrt{\lambda_j\lambda_k}}{\lambda_j - \lambda_k}\, e_k^A e_k^{A\prime} \equiv \sum_{k\in[m]\setminus\{j\}} a_{jk}\, e_k^A e_k^{A\prime}. \tag{A.9}$$
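For concreteness, consider the two-spike instance $m = 2$, $j = 1$ (our own illustration, not from the paper). Then $[m]\setminus\{1\} = \{2\}$, and (A.9) reduces to a single rank-one term:
$$R = \frac{\lambda_1}{\lambda_1 - \lambda_2}\, e_2^A e_2^{A\prime}, \qquad R\Big(\frac{\Lambda_A}{\lambda_1}\Big)^{1/2} = \frac{\lambda_1}{\lambda_1 - \lambda_2}\sqrt{\frac{\lambda_2}{\lambda_1}}\, e_2^A e_2^{A\prime} = \frac{\sqrt{\lambda_1\lambda_2}}{\lambda_1 - \lambda_2}\, e_2^A e_2^{A\prime} = a_{12}\, e_2^A e_2^{A\prime},$$
so the limiting covariance derived below degenerates to variance $a_{12}^2$ along the single direction $e_2^A$.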

Let $W = \sqrt{n}\,Ke_j^A = (W_1, \ldots, W_m)'$ and let $W^{(-j)}$ be the $(m-1)$-dimensional vector obtained by removing the $j$th element of $W$. Since the $j$th diagonal element of $R$ is zero, $R(\Lambda_A/\lambda_j)^{1/2}W$ depends only on $W^{(-j)}$.

Lemma A.5

$$W^{(-j)} + O_P\big(\lambda_j^{-1}\sqrt{p/n}\big) \xrightarrow{d} N(0, I_{m-1}).$$

Therefore, by Lemma A.5 and Slutsky’s theorem,

$$\sqrt{n}\,R\Big(\frac{\Lambda_A}{\lambda_j}\Big)^{1/2}Ke_j^A + O_P\Big(\sqrt{\frac{p}{n\lambda_j^2}}\Big) \xrightarrow{d} N_m\Big(0, \sum_{k\in[m]\setminus\{j\}} a_{jk}^2\, e_k^A e_k^{A\prime}\Big).$$

Together with (A.8), we conclude (3.3) for the case m > 1.

Now let us turn to the case of m = 1. Since R is not defined for m = 1, we need a different derivation. Equivalently, (A.3) can be written as

$$\frac{1}{n}Z_1Z_1'u_1 + \frac{1}{n\lambda_1}Z_B\Lambda_BZ_B'u_1 = \frac{\hat\lambda_1}{\lambda_1}u_1.$$

Left-multiplying by $u_1'$ and using relationship (A.2), we easily obtain

$$\|\hat\xi_1^A\|^2 = 1 - \frac{\bar c c_1}{\hat\lambda_1/\lambda_1} - \frac{\lambda_1}{\hat\lambda_1}\,u_1'Du_1 = 1 - \frac{\bar c c_1}{\hat\lambda_1/\lambda_1} + O_P\big(\lambda_1^{-1}\sqrt{p/n}\big),$$

where $D$ is defined as before and $\|D\| = O_P(\lambda_1^{-1}\sqrt{p/n})$ according to the proof of Lemma A.4. Expanding $\sqrt{1 - \bar c c_1/x}$ around the point $x = 1 + \bar c c_1$, we have

$$\|\hat\xi_1^A\| = \frac{1}{\sqrt{1 + \bar c c_1}} + \frac{\bar c c_1}{2(1 + \bar c c_1)^{3/2}}\big(\hat\lambda_1/\lambda_1 - (1 + \bar c c_1)\big) + O_P\Big(\sqrt{\frac{p}{n\lambda_1^2}} + c_1 n^{-1}\Big).$$

Note that from Lemmas A.2 and A.3, $\hat\lambda_1/\lambda_1 - (1 + \bar c c_1) = (\|Z_1\|^2/n - 1) + O_P\big(\lambda_1^{-1}(p/n)^{1/2}\big) + o_P\big(c_1 n^{-1/2}\big)$. Therefore, due to the fact that $\sqrt{n}(\|Z_1\|^2/n - 1)$ is asymptotically $N(0, \kappa_1 - 1)$, we conclude

$$\frac{2(1 + \bar c c_1)^{3/2}}{\bar c c_1}\,\sqrt{n}\Big(\|\hat\xi_1^A\| - \frac{1}{\sqrt{1 + \bar c c_1}} + O_P\Big(\sqrt{\frac{p}{n\lambda_1^2}}\Big)\Big) \xrightarrow{d} N(0, \kappa_1 - 1).$$

This completes the first part of the proof.

(ii) We now prove the conclusion for the non-spiked part $\hat\xi_j^B$. Recall that $X_i$ follows $N(0, \Lambda)$. Consider $X_i^R = \mathrm{diag}(I_m, D_0)X_i$, where, as defined in the theorem, $D_0 = \mathrm{diag}\big(\sqrt{\bar c/\lambda_{m+1}}, \ldots, \sqrt{\bar c/\lambda_p}\big)$. Here the superscript $R$ indicates data rescaled by $\mathrm{diag}(I_m, D_0)$. After rescaling, we have $X_i^R \sim N\big(0, \mathrm{diag}(\Lambda_A, \bar c I_{p-m})\big)$. Correspondingly, the $n \times p$ data matrix is $X^R = X\,\mathrm{diag}(I_m, D_0) = (X_A, X_BD_0)$, where $X_A = Z_A\Lambda_A^{1/2}$ and $X_B = Z_B\Lambda_B^{1/2}$ as before. Let $\hat\xi_j^R$ and $u_j^R$ be the eigenvectors given by $\hat\Sigma^R$ and $\tilde\Sigma^R$ of the rescaled data $X^R$, and write $\hat\xi_j^R = (\hat\xi_j^{AR\prime}, \hat\xi_j^{BR\prime})'$. It has been proved by Paul (2007) that $h_0 \equiv \hat\xi_j^{BR}/\|\hat\xi_j^{BR}\|$ is uniformly distributed over the unit sphere and is independent of $\|\hat\xi_j^{BR}\|$, due to the orthogonal invariance of the non-spiked part of $X^R$. Hence it only remains to link $\hat\xi_j^B/\|\hat\xi_j^B\|$ with $h_0$.

Note that Σ̃ = n−1XX′ and Σ̃R = n−1XRXR′, so

$$\tilde\Sigma - \tilde\Sigma^R = \frac{1}{n}X_B\big(I - D_0^2\big)X_B' = \frac{1}{n}\sum_{j=m+1}^p (\lambda_j - \bar c)Z_jZ_j',$$

where the last term is of order $O_P(\sqrt{p/n})$ by Lemma A.1. Thus, by the $\sin\theta$ theorem of Davis and Kahan (1970), $\|u_j - u_j^R\| = O_P(\lambda_j^{-1}\sqrt{p/n})$. Next we convert from $u_j$ to $\hat\xi_j^B$ using the basic relationship (2.1). We have

$$\Big\|\frac{D_0\hat\xi_j^B}{\|\hat\xi_j^B\|} - \frac{\hat\xi_j^{BR}}{\|\hat\xi_j^{BR}\|}\Big\| = \Big\|\frac{D_0X_B'u_j}{\sqrt{n\hat\lambda_j}\,\|\hat\xi_j^B\|} - \frac{D_0X_B'u_j^R}{\sqrt{n\hat\lambda_j^R}\,\|\hat\xi_j^{BR}\|}\Big\| \le \Big\|\frac{D_0X_B'u_j}{\sqrt{n\lambda_j}}\Big\|\,\Big|\sqrt{\frac{\lambda_j}{\hat\lambda_j\|\hat\xi_j^B\|^2}} - \sqrt{\frac{\lambda_j}{\hat\lambda_j^R\|\hat\xi_j^{BR}\|^2}}\Big| + \frac{\|D_0X_B'\|}{\sqrt{n\hat\lambda_j^R}\,\|\hat\xi_j^{BR}\|}\,\|u_j - u_j^R\| \equiv I + II.$$

First, it is not hard to see that $II = O_P(\lambda_j^{-1}\sqrt{p/n})$, since $\|u_j - u_j^R\| = O_P(\lambda_j^{-1}\sqrt{p/n})$, $\|X_B\|/\sqrt{n\lambda_j} = O_P(\sqrt{c_j})$, $\lambda_j/\hat\lambda_j^R = O_P(1)$ and $1/\|\hat\xi_j^{BR}\| = O_P(1/\sqrt{c_j})$. The last result is due to the following lemma.

Lemma A.6

$$\|\hat\xi_j^A\| = (1 + \bar c c_j)^{-1/2} + O_P\big(\lambda_j^{-1}\sqrt{p/n} + c_j n^{-1/2}\big) \quad\text{and}\quad \|\hat\xi_j^B\| = \Big(\frac{\bar c c_j}{1 + \bar c c_j}\Big)^{1/2} + O_P\big(1/\sqrt{\lambda_j} + \sqrt{c_j}\,n^{-1/2}\big).$$
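The two norm limits in Lemma A.6 are easy to visualize by simulation. Below is a minimal sketch of ours with one spike and illustrative sizes; it is an illustration of the lemma, not part of its proof.

import numpy as np

rng = np.random.default_rng(5)
n, p, lam1 = 200, 2000, 100.0
c1 = p / (n * lam1)                          # here c_bar = 1, so c_bar*c_1 = c1
X = rng.standard_normal((n, p))
X[:, 0] *= np.sqrt(lam1)                     # single spike along e_1
lam_hat, U = np.linalg.eigh(X @ X.T / n)
xi = X.T @ U[:, -1] / np.sqrt(n * lam_hat[-1])   # leading eigenvector of X'X/n
print("||xi_A|| =", abs(xi[0]), " theory:", (1 + c1) ** -0.5)
print("||xi_B|| =", np.linalg.norm(xi[1:]), " theory:", np.sqrt(c1 / (1 + c1)))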

We claim that $I = O_P(\sqrt{n/p}) + o_P(n^{-1/2})$. Actually, from the proof of Lemma A.6, we have

$$\hat\lambda_j\|\hat\xi_j^B\|^2/\lambda_j = \bar c c_j + O_P\big(\lambda_j^{-1}\sqrt{p/n}\big) + o_P\big(c_j n^{-1/2}\big).$$

Then some elementary calculation gives the rate of $I$. Therefore, $\big\|D_0\hat\xi_j^B/\|\hat\xi_j^B\| - h_0\big\| = O_P(\sqrt{n/p}) + o_P(n^{-1/2})$. The conclusion (3.4) follows.

To prove the max-norm bound (3.5) for $\|\hat\xi_j^B\|_{\max}$, we first show that $\|h_0\|_{\max} = O_P(\sqrt{\log p/p})$. Recall that $h_0$ is uniformly distributed on the unit sphere of dimension $p - m$, so the claim follows easily from its normal representation: letting $G$ be a $(p-m)$-dimensional standard multivariate normal vector, $h_0 =_d G/\|G\|$. It then follows that

$$\|h_0\|_{\max} = \max_{i \le p-m} |G_i|/\|G\| = O_P\big(\sqrt{\log p/p}\big).$$
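The $\sqrt{\log p/p}$ scaling is quickly confirmed by simulation via the same normal representation; the sketch below is ours, and the grid of dimensions is illustrative.

import numpy as np

rng = np.random.default_rng(6)
for p in [10**3, 10**4, 10**5]:
    G = rng.standard_normal(p)               # normal representation h_0 = G/||G||
    h0_max = np.abs(G).max() / np.linalg.norm(G)
    print(p, round(h0_max, 4), round(np.sqrt(np.log(p) / p), 4))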

From the derivation above,

$$\|\hat\xi_j^B\|_{\max} \le \sqrt{\hat\lambda_j^R/\hat\lambda_j}\,\|D_0^{-1}\|\,\|\hat\xi_j^{BR}\|\,\big(II + \|h_0\|_{\max}\big),$$

which gives $O_P\big(\sqrt{c_j}\big(\sqrt{p/(n\lambda_j^2)} + \sqrt{\log p/p}\big)\big) = O_P\big(p/(n\lambda_j^{3/2}) + \sqrt{\log p/(n\lambda_j)}\big)$, given the fact that $\|\hat\xi_j^{BR}\| = O_P(\sqrt{c_j})$ by Lemma A.6. This completes the second part of the proof.

(iii) The proofs for the convergence of $\|\hat\xi_j^A\|$ and $\|\hat\xi_j^B\|$ are given in Lemma A.6. If m = 1, the result for $\|\hat\xi_j^A\|$ directly gives (3.6) with the same rate. For m > 1, from Lemma A.6 we have

$$\|\hat\xi_j^A\|^2 = (1 + \bar c c_j)^{-1} + O_P\big(\sqrt{p/(n\lambda_j^2)} + c_j n^{-1/2}\big).$$

On the other hand, from Theorem 3.2(i), $\hat\xi_{jk}^2 = O_P\big(p/(n\lambda_j^2) + 1/n\big)$ for $k \le m$, $k \ne j$. So $\hat\xi_{jj}^2 = (1 + \bar c c_j)^{-1} + O_P\big(\sqrt{p/(n\lambda_j^2)} + c_j n^{-1/2} + 1/n\big)$, which implies (3.6).

Footnotes

SUPPLEMENTARY MATERIAL

Supplement: Technical proofs (Wang and Fan, 2015). This document contains the technical lemmas for Section 3, a comparison of assumptions, and the theoretical proofs for Section 4.

REFERENCES

1. Agarwal A, Negahban S, Wainwright MJ. Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions. The Annals of Statistics. 2012;40:1171–1197.
2. Amini AA, Wainwright MJ. High-dimensional analysis of semidefinite relaxations for sparse principal components. In: Information Theory, 2008. ISIT 2008. IEEE International Symposium on. IEEE; 2008. pp. 2454–2458.
3. Anderson TW. Asymptotic theory for principal component analysis. The Annals of Mathematical Statistics. 1963;34:122–148.
4. Antoniadis A, Fan J. Regularization of wavelet approximations. Journal of the American Statistical Association. 2001;96.
5. Bai Z. Methodologies in spectral analysis of large-dimensional random matrices, a review. Statistica Sinica. 1999;9:611–677.
6. Bai J. Inferential theory for factor models of large dimensions. Econometrica. 2003;71:135–171.
7. Bai J, Ng S. Determining the number of factors in approximate factor models. Econometrica. 2002;70:191–221.
8. Bai Z, Silverstein JW. Spectral Analysis of Large Dimensional Random Matrices. 2nd ed. Springer; 2009.
9. Bai Z, Yao J. On sample eigenvalues in a generalized spiked population model. Journal of Multivariate Analysis. 2012;106:167–177.
10. Bai Z, Yin Y. Limit of the smallest eigenvalue of a large dimensional sample covariance matrix. The Annals of Probability. 1993:1275–1294.
11. Baik J, Ben Arous G, Péché S. Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. The Annals of Probability. 2005:1643–1697.
12. Benaych-Georges F, Nadakuditi RR. The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices. Advances in Mathematics. 2011;227:494–521.
13. Berthet Q, Rigollet P. Optimal detection of sparse principal components in high dimension. The Annals of Statistics. 2013;41:1780–1815.
14. Bickel PJ, Levina E. Covariance regularization by thresholding. The Annals of Statistics. 2008:2577–2604.
15. Birnbaum A, Johnstone IM, Nadler B, Paul D. Minimax bounds for sparse PCA with noisy high-dimensional data. The Annals of Statistics. 2013;41:1055. doi: 10.1214/12-AOS1014.
16. Cai T, Fan J, Jiang T. Distributions of angles in random packing on spheres. The Journal of Machine Learning Research. 2013;14:1837–1864.
17. Cai T, Ma Z, Wu Y. Optimal estimation and rank detection for sparse spiked covariance matrices. Probability Theory and Related Fields. 2015;161:781–815. doi: 10.1007/s00440-014-0562-z.
18. Candès EJ, Li X, Ma Y, Wright J. Robust principal component analysis? Journal of the ACM. 2011;58:11.
19. Chamberlain G, Rothschild M. Arbitrage, factor structure, and mean-variance analysis on large asset markets. Econometrica. 1983;51:1305–1324.
20. Chandrasekaran V, Sanghavi S, Parrilo PA, Willsky AS. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization. 2011;21:572–596.
21. Chen KH, Shimerda TA. An empirical analysis of useful financial ratios. Financial Management. 1981:51–60.
22. Davidson KR, Szarek SJ. Local operator theory, random matrices and Banach spaces. In: Johnson WB, Lindenstrauss J, editors. Handbook of the Geometry of Banach Spaces. Vol. 1. Elsevier Science BV; 2001. pp. 317–366.
23. Davis C, Kahan WM. The rotation of eigenvectors by a perturbation. III. SIAM Journal on Numerical Analysis. 1970;7:1–46.
24. De Mol C, Giannone D, Reichlin L. Forecasting using a large number of predictors: is Bayesian shrinkage a valid alternative to principal components? Journal of Econometrics. 2008;146:318–328.
25. Donoho DL, Gavish M, Johnstone IM. Optimal shrinkage of eigenvalues in the spiked covariance model. arXiv preprint arXiv:1311.0851. 2014. doi: 10.1214/17-AOS1601.
26. Fan J, Fan Y, Lu J. High dimensional covariance matrix estimation using a factor model. Journal of Econometrics. 2008;147:186–197.
27. Fan J, Han X, Gu W. Estimating false discovery proportion under arbitrary covariance dependence. Journal of the American Statistical Association. 2012;107:1019–1035. doi: 10.1080/01621459.2012.720478.
28. Fan J, Han X. Estimation of false discovery proportion with unknown dependence. arXiv preprint arXiv:1305.7007. 2013. doi: 10.1111/rssb.12204.
29. Fan J, Liao Y, Mincheva M. Large covariance estimation by thresholding principal orthogonal complements. Journal of the Royal Statistical Society: Series B. 2013;75:1–44. doi: 10.1111/rssb.12016.
30. Fan J, Liao Y, Shi X. Risks of large portfolios. Journal of Econometrics. 2015;186:367–387. doi: 10.1016/j.jeconom.2015.02.015.
31. Fan J, Liao Y, Wang W. Projected principal component analysis in factor models. The Annals of Statistics. 2016;44:219–254. doi: 10.1214/15-AOS1364.
32. Fan J, Xue L, Yao J. Sufficient forecasting using factor models. arXiv preprint arXiv:1505.07414. 2015.
33. Fan J, Liu H, Wang W, Zhu Z. Heterogeneity adjustment with applications to graphical model inference. arXiv preprint arXiv:1602.05455. 2016. doi: 10.1214/18-EJS1466.
34. Hall P, Marron J, Neeman A. Geometric representation of high dimension, low sample size data. Journal of the Royal Statistical Society: Series B. 2005;67:427–444.
35. Johnstone IM. On the distribution of the largest eigenvalue in principal components analysis. The Annals of Statistics. 2001:295–327.
36. Johnstone IM, Lu AY. On consistency and sparsity for principal components analysis in high dimensions. Journal of the American Statistical Association. 2009;104:682–693. doi: 10.1198/jasa.2009.0121.
37. Jung S, Marron J. PCA consistency in high dimension, low sample size context. The Annals of Statistics. 2009;37:4104–4130.
38. Koltchinskii V, Lounici K. Concentration inequalities and moment bounds for sample covariance operators. arXiv preprint arXiv:1405.2468. 2014a.
39. Koltchinskii V, Lounici K. Asymptotics and concentration bounds for bilinear forms of spectral projectors of sample covariance. arXiv preprint arXiv:1408.4643. 2014b.
40. Landgrebe J, Wurst W, Welzl G. Permutation-validated principal components analysis of microarray data. Genome Biology. 2002;3:1–11. doi: 10.1186/gb-2002-3-4-research0019.
41. Lee S, Zou F, Wright FA. Convergence and prediction of principal component scores in high-dimensional settings. The Annals of Statistics. 2010;38:3605. doi: 10.1214/10-AOS821.
42. Leek JT, Storey JD. A general framework for multiple testing dependence. Proceedings of the National Academy of Sciences. 2008;105:18718–18723. doi: 10.1073/pnas.0808709105.
43. Leek JT, Scharpf RB, Bravo HC, Simcha D, Langmead B, Johnson WE, Geman D, Baggerly K, Irizarry RA. Tackling the widespread and critical impact of batch effects in high-throughput data. Nature Reviews Genetics. 2010;11:733–739. doi: 10.1038/nrg2825.
44. Ma Z. Sparse principal component analysis and iterative thresholding. The Annals of Statistics. 2013;41:772–801.
45. Onatski A. Asymptotics of the principal components estimator of large factor models with weakly influential factors. Journal of Econometrics. 2012;168:244–258.
46. Paul D. Asymptotics of sample eigenstructure for a large dimensional spiked covariance model. Statistica Sinica. 2007;17:1617–1642.
47. Pesaran MH, Zaffaroni P. Optimal asset allocation with factor models for large portfolios. 2008.
48. Price AL, Patterson NJ, Plenge RM, Weinblatt ME, Shadick NA, Reich D. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics. 2006;38:904–909. doi: 10.1038/ng1847.
49. Ringnér M. What is principal component analysis? Nature Biotechnology. 2008;26:303–304. doi: 10.1038/nbt0308-303.
50. Rothman AJ, Levina E, Zhu J. Generalized thresholding of large covariance matrices. Journal of the American Statistical Association. 2009;104:177–186.
51. Shen D, Shen H, Zhu H, Marron J. Surprising asymptotic conical structure in critical sample eigen-directions. arXiv preprint arXiv:1303.6171. 2013.
52. Stock JH, Watson MW. Forecasting using principal components from a large number of predictors. Journal of the American Statistical Association. 2002;97:1167–1179.
53. Thomas CG, Harshman RA, Menon RS. Noise reduction in BOLD-based fMRI using component analysis. NeuroImage. 2002;17:1521–1537. doi: 10.1006/nimg.2002.1200.
54. Vershynin R. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027. 2010.
55. Vu VQ, Lei J. Minimax rates of estimation for sparse PCA in high dimensions. arXiv preprint arXiv:1202.0786. 2012.
56. Wang W, Fan J. Supplementary appendix to the paper "Asymptotics of empirical eigen-structure for high dimensional spiked covariance". 2015.
57. Yamaguchi-Kabata Y, Nakazono K, Takahashi A, Saito S, Hosono N, Kubo M, Nakamura Y, Kamatani N. Japanese population structure, based on SNP genotypes from 7003 individuals compared to other ethnic groups: effects on population-based association studies. The American Journal of Human Genetics. 2008;83:445–456. doi: 10.1016/j.ajhg.2008.08.019.
58. Yata K, Aoshima M. Effective PCA for high-dimension, low-sample-size data with noise reduction via geometric representations. Journal of Multivariate Analysis. 2012;105:193–215.
59. Yata K, Aoshima M. PCA consistency for the power spiked model in high-dimensional settings. Journal of Multivariate Analysis. 2013;122:334–354.
