. Author manuscript; available in PMC: 2012 May 30.
Published in final edited form as: Ann Stat. 2011 Jan 1;39(6):3320–3356. doi: 10.1214/11-AOS944

HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

Jianqing Fan 1, Yuan Liao 1, Martina Mincheva 1
PMCID: PMC3363011  NIHMSID: NIHMS367753  PMID: 22661790

Abstract

The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for the presence of cross-sectional correlation even after the common factors are taken out, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique of Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.

Keywords and phrases: sparse estimation, thresholding, cross-sectional correlation, common factors, idiosyncratic, seemingly unrelated regression

1. Introduction

We consider a factor model defined as follows:

y_{it} = b_i' f_t + u_{it}, \qquad (1.1)

where y_{it} is the observed datum for the ith (i = 1, …, p) asset at time t = 1, …, T; b_i is a K × 1 vector of factor loadings; f_t is a K × 1 vector of common factors, and u_{it} is the idiosyncratic error component of y_{it}. Classical factor analysis assumes that both p and K are fixed, while T is allowed to grow. In recent decades, however, both economic and financial applications have encountered very large data sets with high-dimensional variables. For example, the World Bank has data for about two hundred countries over forty years; in portfolio allocation, the number of stocks can be in the thousands, larger than or of the same order as the sample size. In modeling housing prices in each zip code, the number of regions can be of the order of thousands, yet the sample size can be 240 months, or twenty years. A covariance matrix of order several thousand is critical for understanding the co-movement of housing price indices over these zip codes.

Inferential theory of factor analysis relies on estimating Σu, the variance-covariance matrix of the error term, and Σ, the variance-covariance matrix of yt = (y1t, …, ypt)′. In the literature, Σ = cov(yt) was traditionally estimated by the sample covariance matrix of yt:

\Sigma_{\mathrm{sam}} = \frac{1}{T-1} \sum_{t=1}^{T} (y_t - \bar{y})(y_t - \bar{y})',

which is pointwise root-T consistent. However, the sample covariance matrix is an inappropriate estimator in high-dimensional settings. For example, when p is larger than T, Σ_sam becomes singular while Σ is always strictly positive definite. Even if p < T, Fan, Fan and Lv (2008) showed that this estimator has a very slow convergence rate under the Frobenius norm. Realizing the limitation of the sample covariance estimator in high-dimensional factor models, Fan, Fan and Lv (2008) considered a more refined estimation of Σ by incorporating the common factor structure. One of the key assumptions they made was cross-sectional independence among the idiosyncratic components, which results in a diagonal matrix Σ_u = E[u_t u_t′]. Cross-sectional independence, however, is restrictive in many applications, as it rules out the approximate factor structure of Chamberlain and Rothschild (1983). In this paper, we relax this assumption and investigate the impact of the cross-sectional correlation of idiosyncratic noises on the estimation of Σ and Σ_u, when both p and T are allowed to diverge. We show that the estimated covariance matrices are still invertible with probability approaching one, even if p > T. In particular, when estimating Σ^{-1} and Σ_u^{-1}, we allow p to increase much faster than T, say, p = O(exp(T^α)) for some α ∈ (0, 1).
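As a quick numerical illustration of this rank deficiency (a sketch with simulated Gaussian data, not part of the paper's analysis):

```python
import numpy as np

# Illustrative sketch: when p > T, the sample covariance matrix has rank
# at most T - 1 after demeaning, hence it is singular even though the
# population covariance is positive definite.
rng = np.random.default_rng(0)
p, T = 50, 20                        # dimension exceeds sample size
Y = rng.standard_normal((T, p))      # rows are the observations y_t
Sigma_sam = np.cov(Y, rowvar=False)  # 1/(T-1) sum (y_t - ybar)(y_t - ybar)'

rank = np.linalg.matrix_rank(Sigma_sam)
```

Here the rank is at most T − 1 = 19, strictly less than p = 50.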

Sparsity is one of the most common assumptions in high-dimensional covariance matrix estimation: many off-diagonal entries are assumed to be zero, and the number of nonzero off-diagonal entries is restricted to grow slowly. Imposing the sparsity assumption directly on the covariance of y_t, however, is inappropriate for many applications in finance and economics. In this paper we use the factor model, assume that Σ_u is sparse, and estimate both Σ_u and Σ_u^{-1} by the thresholding method (Bickel and Levina (2008a), Cai and Liu (2011)) based on the estimated residuals of the factor model. The factors f_t are assumed observable, as in Fama and French (1992), Fan, Fan and Lv (2008), and many other empirical applications. We derive the rates of convergence of the estimated Σ and its inverse under various norms, to be defined later. In addition, we achieve better convergence rates than those in Fan, Fan and Lv (2008).

Various approaches have been proposed to estimate a large covariance matrix: Bickel and Levina (2008a, 2008b) constructed estimators based on regularization and thresholding respectively. Rothman, Levina and Zhu (2009) considered thresholding of the sample covariance matrix with more general thresholding functions. Lam and Fan (2009) proposed a penalized quasi-likelihood method to achieve both consistency and sparsistency of the estimation. More recently, Cai and Zhou (2010) derived the minimax rate for sparse matrix estimation, and showed that the thresholding estimator attains this optimal rate under the operator norm. Cai and Liu (2011) proposed a thresholding procedure which is adaptive to the variability of individual entries, and unveiled its improved rate of convergence.

The rest of the paper is organized as follows. Section 2 provides the asymptotic theory for estimating the error covariance matrix and its inverse. Section 3 considers estimating the covariance matrix of y_t. Section 4 extends the results to the seemingly unrelated regression model, a set of linear equations with correlated error terms in which the covariates differ across equations. Section 5 reports the simulation results. Finally, Section 6 concludes with discussions. All proofs are given in the appendix. Throughout the paper, we use λ_min(A) and λ_max(A) to denote the minimum and maximum eigenvalues of a matrix A. We denote by ‖A‖_F, ‖A‖ and ‖A‖_∞ the Frobenius norm, operator norm and elementwise norm of a matrix A, defined respectively as ‖A‖_F = tr^{1/2}(A′A), ‖A‖ = λ_max^{1/2}(A′A), and ‖A‖_∞ = max_{i,j} |A_{ij}|. Note that when A is a vector, both ‖A‖ and ‖A‖_F are equal to the Euclidean norm.

2. Estimation of Error Covariance Matrix

2.1. Adaptive thresholding

Consider the following approximate factor model, in which the cross-sectional correlation among the idiosyncratic error components is allowed:

y_{it} = b_i' f_t + u_{it}, \qquad (2.1)

where i = 1, …, p and t = 1, …, T; b_i is a K × 1 vector of factor loadings; f_t is a K × 1 vector of observable common factors, uncorrelated with u_{it}. Write

B = (b_1, \ldots, b_p)', \qquad y_t = (y_{1t}, \ldots, y_{pt})', \qquad u_t = (u_{1t}, \ldots, u_{pt})',

then model (2.1) can be written in a more compact form:

y_t = B f_t + u_t, \qquad (2.2)

with E(ut|ft) = 0.

In practical applications, p can be thought of as the number of assets or stocks, or the number of regions in spatial and temporal problems such as home price indices or sales of drugs, and in practice it can be of the same order as, or even larger than, T. For example, an asset pricing model may contain hundreds of assets while the sample size of daily returns is at most a few hundred. In the estimation of the optimal portfolio allocation, Fan, Fan and Lv (2008) observed that the effect of large p on the convergence rate can be quite severe. In contrast, the number of common factors, K, can be much smaller. For example, the rank theory of consumer demand systems implies no more than three factors (e.g., Gorman (1981) and Lewbel (1991)).

The error covariance matrix

\Sigma_u = \mathrm{cov}(u_t),

itself is of interest for the inferential theory of factor models. For example, the asymptotic covariance of the least squares estimator of B depends on Σ_u^{-1}, and in simulating home price indices over a certain time horizon for mortgage-backed securities, a good estimate of Σ_u is needed. When p is close to or larger than T, estimating Σ_u is very challenging. Therefore, following the literature on high-dimensional covariance matrix estimation, we assume it is sparse, i.e., many of its off-diagonal entries are zero. Specifically, let Σ_u = (σ_{ij})_{p×p}. Define

m_T = \max_{i \le p} \sum_{j \le p} I(\sigma_{ij} \ne 0). \qquad (2.3)

The sparsity assumption puts an upper bound restriction on mT. Specifically, we assume:

m_T^2 = o\left(\frac{T}{K^2 \log p}\right). \qquad (2.4)

In this formulation, we even allow the number of factors K to be large, possibly growing with T.

A more general treatment (e.g., Bickel and Levina (2008a) and Cai and Liu (2011)) is to assume that the l_q norm of the row vectors of Σ_u is uniformly bounded across rows by a slowly growing sequence, for some q ∈ [0, 1). In contrast, the assumption we make in this paper, i.e., q = 0, has a clearer economic interpretation. For example, firm returns can be modeled by the factor model, where u_{it} represents a firm's individual shock at time t. Driven by industry-specific components, these shocks are correlated among firms in the same industry, but can be assumed uncorrelated across industries, since the industry-specific components are not pervasive for the whole economy (Connor and Korajczyk (1993)).

We estimate Σu using the thresholding technique introduced and studied by Bickel and Levina (2008a), Rothman, Levina and Zhu (2009), Cai and Liu (2011), etc, which is summarized as follows: Suppose we observe data (X1, …, XT) of a p × 1 vector X, which follows a multivariate Gaussian distribution N (0, ΣX). The sample covariance matrix of X is thus given by:

S_X = \frac{1}{T} \sum_{i=1}^{T} (X_i - \bar{X})(X_i - \bar{X})' = (s_{ij})_{p \times p}.

Define the thresholding operator by 𝒯_t(M) = (M_{ij} I(|M_{ij}| ≥ t)) for any symmetric matrix M; then 𝒯_t preserves the symmetry of M. Let Σ̂_X^𝒯 = 𝒯_{ω_T}(S_X), where ω_T = O(√((log p)/T)). Bickel and Levina (2008a) then showed that:

\|\hat{\Sigma}_X^{\mathcal{T}} - \Sigma_X\| = O_p(\omega_T m_T).
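The universal thresholding rule above can be sketched in a few lines; the data-generating design and the choice of constant in ω_T below are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the universal hard-thresholding operator T_t(M) of
# Bickel and Levina (2008a); the data and the constant in omega_T are
# illustrative, not calibrated.
def hard_threshold(M, t):
    """Set entries with |M_ij| < t to zero; symmetry is preserved."""
    return M * (np.abs(M) >= t)

rng = np.random.default_rng(1)
p, T = 30, 200
X = rng.standard_normal((T, p))      # i.i.d. data with Sigma_X = I
S = np.cov(X, rowvar=False)          # sample covariance (s_ij)
omega_T = np.sqrt(np.log(p) / T)     # threshold of order sqrt(log p / T)
S_thr = hard_threshold(S, omega_T)
```

Small off-diagonal entries (of order 1/√T) are set to zero, while the diagonal entries, which are close to one, survive the threshold.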

In factor models, however, we do not observe the error term directly. Hence, when estimating the error covariance matrix of a factor model, we need to construct a sample covariance matrix based on the residuals û_{it} before thresholding. The residuals are obtained by the plug-in method, estimating the factor loadings first. Let b̂_i be the ordinary least squares (OLS) estimator of b_i, and

\hat{u}_{it} = y_{it} - \hat{b}_i' f_t.

Denote by ût = (û1t, …, ûpt)′. We then construct the residual covariance matrix as:

\hat{\Sigma}_u = \frac{1}{T} \sum_{t=1}^{T} \hat{u}_t \hat{u}_t' = (\hat{\sigma}_{ij}).

Note that the thresholding value ω_T = O(√((log p)/T)) in Bickel and Levina (2008a) is in fact obtained from the rate of convergence of max_{ij} |s_{ij} − Σ_{X,ij}|. This rate changes when s_{ij} is replaced by the residual-based estimate σ̂_{ij}, and becomes slower if the number of common factors K increases with T. Therefore, the thresholding value ω_T used in this paper is adjusted to account for the effect of estimating the residuals.

2.2. Asymptotic properties of the thresholding estimator

Bickel and Levina (2008a) used a universal constant as the thresholding value. As pointed out by Rothman, Levina and Zhu (2009) and Cai and Liu (2011), when the variances of the entries of the sample covariance matrix vary over a wide range, it is more desirable to use thresholds that capture the variability of the individual estimates. For this purpose, we apply the adaptive thresholding estimator of Cai and Liu (2011) to the error covariance matrix, which is given by

\hat{\Sigma}_u^{\mathcal{T}} = (\hat{\sigma}_{ij}^{\mathcal{T}}), \qquad \hat{\sigma}_{ij}^{\mathcal{T}} = \hat{\sigma}_{ij}\, I\left(|\hat{\sigma}_{ij}| \ge \hat{\theta}_{ij}^{1/2}\, \omega_T\right), \qquad \hat{\theta}_{ij} = \frac{1}{T} \sum_{t=1}^{T} (\hat{u}_{it}\hat{u}_{jt} - \hat{\sigma}_{ij})^2, \qquad (2.5)

for some ωT to be specified later.
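A minimal sketch of this estimator on simulated data follows; the design (Gaussian factors, loadings and errors) and the constant C_1 = 1 in ω_T are illustrative assumptions, not the paper's calibrated choices:

```python
import numpy as np

# Sketch of the adaptive thresholding estimator (2.5) applied to OLS
# residuals from an observable-factor model; all inputs are simulated.
rng = np.random.default_rng(2)
p, T, K = 40, 300, 3
B = rng.standard_normal((p, K))                 # loadings b_i
F = rng.standard_normal((T, K))                 # observable factors f_t
U = rng.standard_normal((T, p))                 # idiosyncratic errors u_t
Y = F @ B.T + U                                 # y_t = B f_t + u_t

B_hat = np.linalg.lstsq(F, Y, rcond=None)[0].T  # OLS loadings, p x K
U_hat = Y - F @ B_hat.T                         # residuals u_hat_it

Sigma_u_hat = U_hat.T @ U_hat / T               # (sigma_hat_ij)
# theta_hat_ij = (1/T) sum_t (u_hat_it u_hat_jt - sigma_hat_ij)^2
prod = U_hat[:, :, None] * U_hat[:, None, :]    # T x p x p array of products
theta_hat = ((prod - Sigma_u_hat) ** 2).mean(axis=0)

omega_T = np.sqrt(K * np.log(p) / T)            # rate from (3.4) with C1 = 1
keep = np.abs(Sigma_u_hat) >= np.sqrt(theta_hat) * omega_T
Sigma_u_thr = Sigma_u_hat * keep                # entry-adaptive thresholding
```

Since the simulated Σ_u is the identity, most off-diagonal entries fall below their individual thresholds and are set to zero, while the diagonal is retained.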

We impose the following assumptions:

Assumption 2.1. (i) {ut}t≥1 is stationary and ergodic such that each ut has zero mean vector and covariance matrix Σu. In addition, the strong mixing condition in Assumption 3.2 holds.

(ii) There exist constants c_1, c_2 > 0 such that c_1 < λ_min(Σ_u) ≤ λ_max(Σ_u) < c_2, and c_1 < var(u_{it}u_{jt}) < c_2 for all i ≤ p, j ≤ p.

(iii) There exist r_1 > 0 and b_1 > 0 such that for any s > 0 and i ≤ p,

P(|u_{it}| > s) \le \exp(-(s/b_1)^{r_1}). \qquad (2.6)

Condition (i) allows the idiosyncratic components to be weakly dependent. We will formally present the strong mixing condition in the next section. In order for the main results in this section to hold, it suffices to impose the strong mixing condition marginally on ut only. Roughly speaking, we require the mixing coefficient

\alpha(T) = \sup_{A \in \mathcal{F}_{-\infty}^{0},\, B \in \mathcal{F}_{T}^{\infty}} |P(A)P(B) - P(AB)|

to decrease exponentially fast as T → ∞, where ℱ_{−∞}^0 and ℱ_T^∞ are the σ-algebras generated by {u_t}_{t=−∞}^0 and {u_t}_{t=T}^∞ respectively.

Condition (ii) requires the nonsingularity of Σ_u. Note that Cai and Liu (2011) allowed max_j σ_{jj} to diverge when direct observations are available. Condition (ii), however, requires that σ_{jj} be uniformly bounded. In factor models, a uniform upper bound on the variance of u_{it} is needed when we estimate the covariance matrix of y_t later. This assumption is satisfied in most applications of factor models. Condition (iii) requires the distributions of (u_{1t}, …, u_{pt}) to have exponential-type tails, which allows us to apply large deviation theory to (1/T)Σ_{t=1}^T u_{it}u_{jt} − σ_{ij}.

Assumption 2.2. There exist positive sequences κ1(p, T) = o(1), κ2(p, T) = o(1) and aT = o(1), and a constant M > 0, such that for all C > M,

P\left(\max_{i \le p} \frac{1}{T} \sum_{t=1}^{T} |u_{it} - \hat{u}_{it}|^2 > C a_T^2\right) \le O(\kappa_1(p, T)), \qquad P\left(\max_{i \le p,\, t \le T} |u_{it} - \hat{u}_{it}| > C\right) \le O(\kappa_2(p, T)).

This assumption allows us to apply thresholding to the estimated error covariance matrix when direct observations are unavailable, without introducing too much extra estimation error. Note that it permits a general case in which the original “data” are contaminated, including any type of estimate of the data when direct observations are unavailable, as well as the case when the data are subject to measurement errors. We will show in the next section that in a linear factor model in which {u_{it}}_{i≤p,t≤T} are estimated by OLS, the rate of convergence is a_T^2 = (K^2 log p)/T.

The following theorem establishes the asymptotic properties of the thresholding estimator Σ̂_u^𝒯 based on observations with estimation errors. Let γ^{-1} = 3r_1^{-1} + r_2^{-1}, where r_1 and r_2 are defined in Assumptions 2.1 and 3.2 respectively.

Theorem 2.1. Suppose γ < 1 and (log p)^{6/γ − 1} = o(T). Then under Assumptions 2.1 and 2.2, there exist C_1 > 0 and C_2 > 0 such that for Σ̂_u^𝒯 defined in (2.5) with

\omega_T = C_1\left(\sqrt{\frac{\log p}{T}} + a_T\right),

we have,

P\left(\|\hat{\Sigma}_u^{\mathcal{T}} - \Sigma_u\| \le C_2\, \omega_T m_T\right) \ge 1 - O\left(\frac{1}{p^2} + \kappa_1(p, T) + \kappa_2(p, T)\right). \qquad (2.7)

In addition, if ω_T m_T = o(1), then with probability at least 1 − O(1/p² + κ_1(p, T) + κ_2(p, T)),

\lambda_{\min}(\hat{\Sigma}_u^{\mathcal{T}}) \ge 0.5\, \lambda_{\min}(\Sigma_u),

and

\|(\hat{\Sigma}_u^{\mathcal{T}})^{-1} - \Sigma_u^{-1}\| \le C_2\, \omega_T m_T.

Note that we derive result (2.7) without assuming sparsity of Σ_u, i.e., no restriction is imposed on m_T. When ω_T m_T ≠ o(1), (2.7) still holds, but Σ̂_u^𝒯 − Σ_u does not converge to zero in probability. On the other hand, the condition ω_T m_T = o(1) is required to preserve the nonsingularity of Σ̂_u^𝒯 asymptotically and to consistently estimate Σ_u^{-1}.

The rate of convergence also depends on the averaged estimation error of the residual terms. We will see in the next section that when the number of common factors K increases slowly, the convergence rate in Theorem 2.1 is close to the minimax optimal rate as in Cai and Zhou (2010).

3. Estimation of Covariance Matrix Using Factors

We now investigate the estimation of the covariance matrix Σ in the approximate factor model:

yt=Bft+ut,

where Σ = cov(y_t). This covariance matrix is of particular interest in many applications of factor models as well as the corresponding inferential theories. When estimating a large covariance matrix, sparsity and banding are two commonly used regularization assumptions (e.g., Bickel and Levina (2008a, 2008b)). In most applications in finance and economics, however, these two assumptions are inappropriate for Σ. For instance, US housing prices at the county level are generally associated with a few national indices, and there is no natural ordering among the counties. Hence neither sparsity nor banding is realistic for such a problem. On the other hand, it is natural to assume that Σ_u is sparse after controlling for the common factors. Our approach therefore combines the merits of both the sparsity and the factor structures.

Note that

\Sigma = B\, \mathrm{cov}(f_t)\, B' + \Sigma_u.

By the Sherman-Morrison-Woodbury formula,

\Sigma^{-1} = \Sigma_u^{-1} - \Sigma_u^{-1} B \left[\mathrm{cov}(f_t)^{-1} + B' \Sigma_u^{-1} B\right]^{-1} B' \Sigma_u^{-1}.

When the factors are observable, one can estimate B by least squares: B̂ = (b̂_1, …, b̂_p)′, where

\hat{b}_i = \arg\min_{b_i} \frac{1}{Tp} \sum_{t=1}^{T} \sum_{i=1}^{p} (y_{it} - b_i' f_t)^2.

The covariance matrix cov(ft) can be estimated by the sample covariance matrix

\widehat{\mathrm{cov}}(f_t) = T^{-1} X'X - T^{-2} X'\mathbf{1}\mathbf{1}'X,

where X = (f_1, \ldots, f_T)' and \mathbf{1} is a T-dimensional column vector of ones. Therefore, by employing the thresholding estimator Σ̂_u^𝒯 in (2.5), we obtain the substitution estimators

\hat{\Sigma}^{\mathcal{T}} = \hat{B}\, \widehat{\mathrm{cov}}(f_t)\, \hat{B}' + \hat{\Sigma}_u^{\mathcal{T}}, \qquad (3.1)

and

(\hat{\Sigma}^{\mathcal{T}})^{-1} = (\hat{\Sigma}_u^{\mathcal{T}})^{-1} - (\hat{\Sigma}_u^{\mathcal{T}})^{-1} \hat{B} \left[\widehat{\mathrm{cov}}(f_t)^{-1} + \hat{B}'(\hat{\Sigma}_u^{\mathcal{T}})^{-1}\hat{B}\right]^{-1} \hat{B}' (\hat{\Sigma}_u^{\mathcal{T}})^{-1}. \qquad (3.2)

In practice, one may apply a common threshold λ to the correlation matrix of Σ̂_u, and then use a substitution estimator similar to (3.1). When λ = 0 (no thresholding), the resulting estimator is the sample covariance, whereas when λ = 1 (all off-diagonals thresholded), the resulting estimator is based on the strict factor model (Fan, Fan and Lv (2008)). Thus we have created a path, indexed by λ, which connects the nonparametric estimate of the covariance matrix to the parametric estimate.
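The low-rank-plus-sparse structure behind (3.1)-(3.2) can be sketched as follows. For illustration we treat B, cov(f_t) and a diagonal (hence sparse) Σ_u as known; the point is that the Sherman-Morrison-Woodbury formula inverts the p × p matrix while explicitly inverting only a K × K matrix and a diagonal one:

```python
import numpy as np

# Sketch of the substitution estimators (3.1)-(3.2) with known inputs;
# B, cov(f_t) and the diagonal Sigma_u below are illustrative.
rng = np.random.default_rng(3)
p, K = 200, 3
B = rng.standard_normal((p, K))
cov_f = np.eye(K)
Sigma_u = np.diag(rng.uniform(0.5, 1.5, size=p))

Sigma = B @ cov_f @ B.T + Sigma_u                 # (3.1): low rank plus sparse

# (3.2): Sherman-Morrison-Woodbury, inverting only K x K and diagonal blocks
Su_inv = np.diag(1.0 / np.diag(Sigma_u))
middle = np.linalg.inv(np.linalg.inv(cov_f) + B.T @ Su_inv @ B)  # K x K
Sigma_inv = Su_inv - Su_inv @ B @ middle @ B.T @ Su_inv
```

The Woodbury route agrees with direct inversion of Σ but avoids factorizing a dense p × p matrix, which matters when p is in the thousands.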

The following assumptions are made.

Assumption 3.1. (i) {ft}t≥1 is stationary and ergodic.

(ii) {ut}t≥1 and {ft}t≥1 are independent.

In addition to the conditions above, we introduce strong mixing conditions to conduct the asymptotic analysis of the least squares estimates. Let ℱ_{−∞}^0 and ℱ_T^∞ denote the σ-algebras generated by {(f_t, u_t) : −∞ ≤ t ≤ 0} and {(f_t, u_t) : T ≤ t ≤ ∞} respectively. In addition, define the mixing coefficient

\alpha(T) = \sup_{A \in \mathcal{F}_{-\infty}^{0},\, B \in \mathcal{F}_{T}^{\infty}} |P(A)P(B) - P(AB)|.

The following strong mixing assumption enables us to apply Bernstein's inequality in the technical proofs.

Assumption 3.2. There exist positive constants r2 and C such that for all t ∈ ℤ+,

\alpha(t) \le \exp(-C t^{r_2}).

In addition, we impose the following regularity conditions.

Assumption 3.3. (i) There exists a constant M > 0 such that for all i, j and t, E y_{it}^2 < M, E f_{it}^2 < M, and |b_{ij}| < M.

(ii) There exist r_3 > 0 with 3r_3^{-1} + r_2^{-1} > 1, and b_2 > 0, such that for any s > 0 and i ≤ K,

P(|f_{it}| > s) \le \exp(-(s/b_2)^{r_3}). \qquad (3.3)

Condition (ii) allows us to apply the Bernstein type inequality for the weakly dependent data.

Assumption 3.4. There exists a constant C > 0 such that λmin(cov(ft)) > C.

Assumptions 3.4 and 2.1 ensure that both λmin(cov(ft)) and λmin(Σ) are bounded away from zero, which is needed to derive the convergence rate of ‖(Σ̂𝒯)−1Σ−1‖ below.

The following lemma verifies Assumption 2.2 by deriving the rate of convergence of the OLS estimator as well as that of the estimated residuals.

Let γ_2^{-1} = 1.5 r_1^{-1} + 1.5 r_3^{-1} + r_2^{-1}.

Lemma 3.1. Suppose K = o(p), K^4 (log p)^2 = o(T) and (log p)^{2/γ_2 − 1} = o(T). Then under the assumptions of Theorem 2.1 and Assumptions 3.1–3.4, there exists C > 0 such that

  1. P\left(\max_{i \le p} \|\hat{b}_i - b_i\| > C\sqrt{\frac{K \log p}{T}}\right) = O\left(\frac{1}{p^2} + \frac{1}{T^2}\right),
  2. P\left(\max_{i \le p} \frac{1}{T} \sum_{t=1}^{T} |u_{it} - \hat{u}_{it}|^2 > C\, \frac{K^2 \log p}{T}\right) = O\left(\frac{1}{p^2} + \frac{1}{T^2}\right),
  3. P\left(\max_{i \le p,\, t \le T} |u_{it} - \hat{u}_{it}| > C K (\log T)^{1/r_3} \sqrt{\frac{\log p}{T}}\right) = O\left(\frac{1}{p^2} + \frac{1}{T^2}\right).

By Lemma 3.1 and Assumption 2.2, a_T = K√((log p)/T) and κ_1(p, T) = κ_2(p, T) = p^{-2} + T^{-2}. Therefore, in the linear approximate factor model, the thresholding parameter ω_T defined in Theorem 2.1 simplifies, for some positive constant C_1, to

\omega_T = C_1 K \sqrt{\frac{\log p}{T}}. \qquad (3.4)

Now we can apply Theorem 2.1 to obtain the following theorem:

Theorem 3.1. Under the assumptions of Lemma 3.1, there exist C_1 > 0 and C_2 > 0 such that the adaptive thresholding estimator defined in (2.5) with ω_T^2 = C_1 K^2 (log p)/T satisfies

  1. P\left(\|\hat{\Sigma}_u^{\mathcal{T}} - \Sigma_u\| \le C_2\, m_T K \sqrt{\frac{\log p}{T}}\right) = 1 - O\left(\frac{1}{p^2} + \frac{1}{T^2}\right).
  2. If m_T K √((log p)/T) = o(1), then with probability at least 1 − O(1/p² + 1/T²),
    \lambda_{\min}(\hat{\Sigma}_u^{\mathcal{T}}) \ge 0.5\, \lambda_{\min}(\Sigma_u),
    and
    \|(\hat{\Sigma}_u^{\mathcal{T}})^{-1} - \Sigma_u^{-1}\| \le C_2\, m_T K \sqrt{\frac{\log p}{T}}.

Remark 3.1. We briefly comment on the terms in the convergence rate above.

  1. The term K appears as an effect of using the estimated residuals to construct the thresholding covariance estimator; it is typically small compared to p and T in many applications. For instance, the famous Fama-French three-factor model shows that K = 3 factors are adequate for the US equity market. In an empirical study of asset returns, Bai and Ng (2002) used monthly data containing the returns of 4883 stocks over sixty months; for their data set, T = 60 and p = 4883, and they determined K = 2 common factors.

  2. As in Bickel and Levina (2008a) and Cai and Liu (2011), m_T, the maximum number of nonzero components across the rows of Σ_u, also plays a role in the convergence rate. Note that when K is bounded, the convergence rate reduces to O_p(m_T √((log p)/T)), the same as the minimax rate derived by Cai and Zhou (2010).

One of our main objectives is to estimate Σ, the p × p covariance matrix of y_t, assumed to be time invariant. We can achieve better accuracy in estimating both Σ and Σ^{-1} by incorporating the factor structure than by using the sample covariance matrix, as shown by Fan et al. (2008) in the strict factor model case. When cross-sectional correlations among the idiosyncratic components (u_{1t}, …, u_{pt}) are present, we can still take advantage of the factor structure. This is particularly essential when a direct sparsity assumption on Σ is inappropriate.

Assumption 3.5. ‖p^{-1}B′B − Ω‖ = o(1) for some K × K symmetric positive definite matrix Ω such that λ_min(Ω) is bounded away from zero.

Assumption 3.5 requires that the factors be pervasive, i.e., that they impact every individual time series (Harding (2009)). It was imposed by Fan et al. (2008) only when they established the asymptotic normality of the covariance estimator. However, it also turns out to be helpful for obtaining a good upper bound on ‖(Σ̂^𝒯)^{-1} − Σ^{-1}‖, as it ensures that λ_max((B′Σ^{-1}B)^{-1}) = O(p^{-1}).

Fan et al. (2008) obtained an upper bound on ‖Σ̂^𝒯 − Σ‖_F under the Frobenius norm when Σ_u is diagonal, i.e., when there is no cross-sectional correlation among the idiosyncratic errors. For their upper bound to decrease to zero, p² < T is required. Even with this restrictive assumption, they showed that the convergence rate is the same as that of the usual sample covariance matrix of y_t, although the latter does not take the factor structure into account. Alternatively, they considered the entropy loss norm proposed by James and Stein (1961):

\|\hat{\Sigma}^{\mathcal{T}} - \Sigma\|_{\Sigma} = \left(p^{-1}\, \mathrm{tr}\left[(\hat{\Sigma}^{\mathcal{T}} \Sigma^{-1} - I)^2\right]\right)^{1/2} = p^{-1/2} \left\|\Sigma^{-1/2}(\hat{\Sigma}^{\mathcal{T}} - \Sigma)\Sigma^{-1/2}\right\|_F.

Here the factor p^{-1/2} is used for normalization, so that ‖Σ‖_Σ = 1. Under this norm, Fan et al. (2008) showed that the substitution estimator has a better convergence rate than the usual sample covariance matrix. Note that the normalization factor p^{-1/2} in the definition results in an averaged estimation error, which also cancels out the diverging dimensionality introduced by p. In addition, for any two p × p matrices A_1 and A_2:

\|A_1 - A_2\|_{\Sigma} = p^{-1/2} \left\|\Sigma^{-1/2}(A_1 - A_2)\Sigma^{-1/2}\right\|_F, \qquad \left\|\Sigma^{-1/2}(A_1 - A_2)\Sigma^{-1/2}\right\| \le \|A_1 - A_2\| \cdot \lambda_{\max}(\Sigma^{-1}).

Combined with the estimate of the low-rank matrix B cov(f_t)B′, Theorem 3.1 implies the main theorem of this section:

Theorem 3.2. Suppose log T = o(p). Under the assumptions of Theorem 3.1 and Assumption 3.5, we have

  1. P\left(\|\hat{\Sigma}^{\mathcal{T}} - \Sigma\|_{\Sigma}^2 \le \frac{C p K^2 (\log p)^2}{T^2} + \frac{C m_T^2 K^2 \log p}{T}\right) = 1 - O\left(\frac{1}{p^2} + \frac{1}{T^2}\right), \qquad P\left(\|\hat{\Sigma}^{\mathcal{T}} - \Sigma\|_{\infty}^2 \le \frac{C K^2 \log p + C K^4 \log T}{T}\right) = 1 - O\left(\frac{1}{p^2} + \frac{1}{T^2}\right).
  2. If m_T K √((log p)/T) = o(1), then with probability at least 1 − O(1/p² + 1/T²),
    \lambda_{\min}(\hat{\Sigma}^{\mathcal{T}}) \ge 0.5\, \lambda_{\min}(\Sigma_u),
    and
    \|(\hat{\Sigma}^{\mathcal{T}})^{-1} - \Sigma^{-1}\| \le C\, m_T K \sqrt{\frac{\log p}{T}}.

Note that we have derived a better convergence rate for (Σ̂^𝒯)^{-1} than that in Fan et al. (2008). When the operator norm is considered, p is allowed to grow exponentially fast in T for (Σ̂^𝒯)^{-1} to be consistent.

We have also derived the maximum elementwise estimation error ‖Σ̂^𝒯 − Σ‖_∞. This quantity appears in risk assessment as in Fan, Zhang and Yu (2008). For any portfolio with allocation vector w, the true and estimated portfolio variances are w′Σw and w′Σ̂^𝒯w respectively. The estimation error is bounded by

|w' \hat{\Sigma}^{\mathcal{T}} w - w' \Sigma w| \le \|\hat{\Sigma}^{\mathcal{T}} - \Sigma\|_{\infty}\, \|w\|_1^2,

where ‖w‖_1, the l_1 norm of w, is the gross exposure of the portfolio.
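This bound follows from bounding each entry of the error matrix by its maximum, so it holds for any symmetric estimate; a numerical check with an illustrative perturbation and allocation:

```python
import numpy as np

# Check of the risk bound |w' S w - w' Sigma w| <= ||S - Sigma||_max ||w||_1^2;
# the covariance, the perturbed estimate and the weights are illustrative.
rng = np.random.default_rng(4)
p = 100
A = rng.standard_normal((p, p))
Sigma = A @ A.T / p                           # a valid covariance matrix
noise = 0.01 * rng.standard_normal((p, p))
Sigma_hat = Sigma + (noise + noise.T) / 2     # symmetric perturbed estimate
w = rng.dirichlet(np.ones(p))                 # long-only weights, ||w||_1 = 1

err = abs(w @ Sigma_hat @ w - w @ Sigma @ w)  # portfolio-variance error
bound = np.abs(Sigma_hat - Sigma).max() * np.linalg.norm(w, 1) ** 2
```

For a gross exposure of one, the elementwise norm of the covariance error directly caps the error in the assessed portfolio risk.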

4. Extension: Seemingly Unrelated Regression

A seemingly unrelated regression model (Kmenta and Gilbert (1970)) is a set of linear equations in which the disturbances are correlated across equations. Specifically, we have

y_{it} = b_i' f_{it} + u_{it}, \qquad i \le p,\ t \le T, \qquad (4.1)

where bi and fit are both Ki × 1 vectors. The p linear equations (4.1) are related because their error terms uit are correlated, i.e., the covariance matrix

\Sigma_u = (E\, u_{it} u_{jt})_{p \times p}

is not diagonal.

Model (4.1) allows each variable y_{it} to have its own factors, which is important for many applications. In financial applications, the returns of an individual stock depend on common market factors and sector-specific factors. In housing price index modeling, housing price appreciations depend on both national factors and the local economy. When f_{it} = f_t for each i ≤ p, model (4.1) reduces to the approximate factor model (1.1) with common factors f_t.

Under mild conditions, running OLS equation by equation produces an unbiased and consistent estimator of each b_i. However, since OLS does not take into account the cross-sectional correlation among the noises, it is not efficient. Instead, the best linear unbiased estimator (BLUE) is obtained via generalized least squares (GLS). Write

y_i = (y_{i1}, \ldots, y_{iT})' \ (T \times 1), \qquad X_i = (f_{i1}, \ldots, f_{iT})' \ (T \times K_i), \qquad i \le p,
y = (y_1', \ldots, y_p')', \qquad X = \mathrm{diag}(X_1, \ldots, X_p) \ \text{(block diagonal)}, \qquad B = (b_1', \ldots, b_p')'.

The GLS estimator of B is given by Zellner (1962):

\hat{B}_{\mathrm{GLS}} = \left[X'(\hat{\Sigma}_u^{-1} \otimes I_T)X\right]^{-1} X'(\hat{\Sigma}_u^{-1} \otimes I_T)\, y, \qquad (4.2)

where IT denotes a T × T identity matrix, ⊗ represents the Kronecker product operation, and Σ̂u is a consistent estimator of Σu.

In the classical seemingly unrelated regression, where p does not grow with T, Σ_u is estimated by a two-stage procedure (Kmenta and Gilbert (1970)): in the first stage, estimate B via OLS and obtain the residuals

\hat{u}_{it} = y_{it} - \hat{b}_i' f_{it}. \qquad (4.3)

In the second stage, estimate Σ_u by

\hat{\Sigma}_u = (\hat{\sigma}_{ij}) = \left(\frac{1}{T} \sum_{t=1}^{T} \hat{u}_{it} \hat{u}_{jt}\right)_{p \times p}. \qquad (4.4)

In high dimensional seemingly unrelated regression in which p > T, however, Σ̂u is not invertible, and hence the GLS estimator (4.2) is infeasible.

Under the sparsity assumption on Σ_u, we can deal with this singularity problem by using the adaptive thresholding estimator, which produces a consistent nonsingular estimator of Σ_u:

\hat{\Sigma}_u^{\mathcal{T}} = \left(\hat{\sigma}_{ij}\, I\left(|\hat{\sigma}_{ij}| > \hat{\theta}_{ij}^{1/2}\, \omega_T\right)\right), \qquad \hat{\theta}_{ij} = \frac{1}{T} \sum_{t=1}^{T} (\hat{u}_{it}\hat{u}_{jt} - \hat{\sigma}_{ij})^2. \qquad (4.5)

To pursue this goal, we impose the following assumptions:

Assumption 4.1. For each ip,

  1. {fit}t≥1 is stationary and ergodic.

  2. {ut}t≥1 and {fit}t≥1 are independent.

Assumption 4.2. There exist positive constants C and r_2 such that for each i ≤ p, the strong mixing condition

\alpha(t) \le \exp(-C t^{r_2})

is satisfied by {(f_{it}, u_t)}.

Assumption 4.3. There exist constants M, C > 0 such that for all i ≤ p, j ≤ K_i and t ≤ T,

  1. E y_{it}^2 < M, |b_{ij}| < M, and E f_{it,j}^2 < M.

  2. min_{i ≤ p} λ_min(cov(f_{it})) > C.

Assumption 4.4. There exist r_4 > 0 with 3r_4^{-1} + r_2^{-1} > 1, and b_3 > 0, such that for any s > 0 and all i, j,

P(|f_{it,j}| > s) \le \exp(-(s/b_3)^{r_4}).

These assumptions are similar to those made in Section 3, except that here they are imposed on the sector-specific factors. The main theorem in this section is a direct application of Theorem 2.1, which shows that adaptive thresholding produces a consistent nonsingular estimator of Σ_u.

Theorem 4.1. Let K = max_{i≤p} K_i and γ_3^{-1} = 1.5 r_1^{-1} + 1.5 r_4^{-1} + r_2^{-1}; suppose K = o(p), K^4 (log p)^2 = o(T) and (log p)^{2/γ_3 − 1} = o(T). Under Assumptions 2.1 and 4.1–4.4, there exist constants C_1 > 0 and C_2 > 0 such that the adaptive thresholding estimator defined in (4.5) with ω_T^2 = C_1 K^2 (log p)/T satisfies

  1. P\left(\|\hat{\Sigma}_u^{\mathcal{T}} - \Sigma_u\| \le C_2\, m_T K \sqrt{\frac{\log p}{T}}\right) = 1 - O\left(\frac{1}{p^2} + \frac{1}{T^2}\right).
  2. If m_T K √((log p)/T) = o(1), then with probability at least 1 − O(1/p² + 1/T²),
    \lambda_{\min}(\hat{\Sigma}_u^{\mathcal{T}}) \ge 0.5\, \lambda_{\min}(\Sigma_u),
    and
    \|(\hat{\Sigma}_u^{\mathcal{T}})^{-1} - \Sigma_u^{-1}\| \le C_2\, m_T K \sqrt{\frac{\log p}{T}}.

Therefore, in the case p > T, Theorem 4.1 enables us to estimate B efficiently via feasible GLS:

\hat{B}_{\mathrm{GLS}}^{\mathcal{T}} = \left[X'\left((\hat{\Sigma}_u^{\mathcal{T}})^{-1} \otimes I_T\right)X\right]^{-1} X'\left((\hat{\Sigma}_u^{\mathcal{T}})^{-1} \otimes I_T\right) y.
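A small-scale sketch of this GLS computation follows; Σ_u is treated as known for illustration (in the feasible version it is replaced by the thresholded estimate Σ̂_u^𝒯), and the dimensions are illustrative:

```python
import numpy as np

# Sketch of the GLS estimator (4.2) for a SUR system with
# equation-specific regressors; all inputs are simulated.
rng = np.random.default_rng(5)
p, T, K = 3, 100, 2
Sigma_u = np.array([[1.0, 0.5, 0.2],
                    [0.5, 1.0, 0.3],
                    [0.2, 0.3, 1.0]])
L = np.linalg.cholesky(Sigma_u)
U = (L @ rng.standard_normal((p, T))).T        # cross-sectionally correlated errors

X_blocks = [rng.standard_normal((T, K)) for _ in range(p)]  # regressors f_it
b_true = [rng.standard_normal(K) for _ in range(p)]
y_blocks = [X_blocks[i] @ b_true[i] + U[:, i] for i in range(p)]

# stack: y = (y_1', ..., y_p')' and X = blockdiag(X_1, ..., X_p)
y = np.concatenate(y_blocks)
X = np.zeros((p * T, p * K))
for i in range(p):
    X[i * T:(i + 1) * T, i * K:(i + 1) * K] = X_blocks[i]

W = np.kron(np.linalg.inv(Sigma_u), np.eye(T))  # Sigma_u^{-1} kron I_T
b_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

For large p and T one would exploit the Kronecker and block-diagonal structure rather than form the pT × pT weight matrix explicitly, as done here for clarity.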

5. Monte Carlo Experiments

In this section, we use simulation to demonstrate the rates of convergence of the estimators Σ̂^𝒯 and (Σ̂^𝒯)^{-1} obtained so far. The simulation model is a modified version of the Fama-French three-factor model described in Fan, Fan and Lv (2008). We fix the number of factors K = 3 and the length of the time series T = 500, and let the dimensionality p gradually increase.

The Fama-French three-factor model (Fama and French (1992)) is given by

y_{it} = b_{i1} f_{1t} + b_{i2} f_{2t} + b_{i3} f_{3t} + u_{it},

which models the excess return (real rate of return minus the risk-free rate) of the ith stock of a portfolio, y_{it}, with respect to three factors. The first factor is the excess return of the whole stock market, for which the weighted excess return on all NASDAQ, AMEX and NYSE stocks is a commonly used proxy. The model extends the capital asset pricing model (CAPM) by adding two factors, SMB (“small minus big” cap) and HML (“high minus low” book/price), which were introduced after the observation that stocks of small-cap firms and stocks with a high book-to-price ratio tend to outperform the stock market as a whole.

We separate this section into three parts: calibration, simulation and results. As in Section 5 of Fan, Fan and Lv (2008), in the calibration part we compute realistic multivariate distributions from which to generate the factor loadings B, the idiosyncratic noises {u_t}_{t=1}^T and the observable factors {f_t}_{t=1}^T. The data were obtained from the data library on Kenneth French's website.

5.1. Calibration

To estimate the parameters of the Fama-French model, we use the two-year daily data (ỹ_t, f̃_t) from January 1, 2009 to December 31, 2010 (T = 500) on 30 industry portfolios.

  1. Calculate the least squares estimator B̂ of ỹ_t = Bf̃_t + u_t, and take the rows of B̂, namely b̂_1 = (b̂_{11}, b̂_{12}, b̂_{13}), …, b̂_30 = (b̂_{30,1}, b̂_{30,2}, b̂_{30,3}), to calculate the sample mean vector μ_B and sample covariance matrix Σ_B. The results are reported in Table 1. We then create a multivariate normal distribution N_3(μ_B, Σ_B), from which the factor loadings {b_i}_{i=1}^p are drawn.

  2. For each fixed p, create the sparse matrix Σ_u = D + ss′ − diag{s_1², …, s_p²} in the following way. Let û_t = ỹ_t − B̂f̃_t. For i = 1, …, 30, let σ̂_i denote the standard deviation of the residuals of the ith portfolio. We find min_i σ̂_i = 0.3533 and max_i σ̂_i = 1.5222, and calculate the mean and the standard deviation of the σ̂_i's, namely σ̄ = 0.6055 and σ_SD = 0.2621.

    Let D = diag{σ_1², …, σ_p²}, where σ_1, …, σ_p are generated independently from the Gamma distribution G(α, β), with mean αβ and standard deviation α^{1/2}β. We match these moments to σ̄ = 0.6055 and σ_SD = 0.2621 to get α = 5.6840 and β = 0.1503. Further, we create a loop that accepts the value of σ_i only if it is between min_i σ̂_i = 0.3533 and max_i σ̂_i = 1.5222.

    Create s = (s_1, …, s_p)′ to be a sparse vector. We set each s_i ~ N(0, 1) with probability 0.2√((log p)/p), and s_i = 0 otherwise, which leads to an average of 0.2√(p log p) nonzero elements per row of the error covariance matrix.

    Create a loop that generates Σu multiple times until it is positive definite.

  3. Assume the factors follow the vector autoregressive model VAR(1), f_t = μ + Φf_{t−1} + ε_t, for some 3 × 3 matrix Φ, where the ε_t's are i.i.d. N_3(0, Σ_ε). We estimate Φ, μ and Σ_ε from the data and obtain cov(f_t); the results are summarized in Table 2.
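Step 2 above can be sketched as follows; the Gamma parameters and the acceptance bounds are the calibrated values quoted in the text, while the sparsity probability 0.2√((log p)/p) for the entries of s is our assumed reading of the rule in step 2:

```python
import numpy as np

# Sketch of the calibration of the sparse error covariance
# Sigma_u = D + s s' - diag(s_1^2, ..., s_p^2); parameters as in step 2.
rng = np.random.default_rng(6)
p = 100
alpha, beta = 5.6840, 0.1503          # matched to sigma_bar and sigma_SD
lo, hi = 0.3533, 1.5222               # min and max of the fitted residual SDs

# accept-reject draws of sigma_i from Gamma(alpha, beta) restricted to [lo, hi]
sigma = np.empty(p)
for i in range(p):
    draw = rng.gamma(alpha, beta)
    while not (lo <= draw <= hi):
        draw = rng.gamma(alpha, beta)
    sigma[i] = draw
D = np.diag(sigma ** 2)

q = 0.2 * np.sqrt(np.log(p) / p)      # assumed sparsity level for s

def draw_sigma_u():
    s = np.where(rng.random(p) < q, rng.standard_normal(p), 0.0)
    return D + np.outer(s, s) - np.diag(s ** 2)

# regenerate Sigma_u until it is positive definite, as in the text
Sigma_u = draw_sigma_u()
while np.linalg.eigvalsh(Sigma_u).min() <= 0:
    Sigma_u = draw_sigma_u()
```

By construction the diagonal of Σ_u equals σ_1², …, σ_p², and the off-diagonal entries s_i s_j are nonzero only on the sparse support of s.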

Table 1.

Mean and covariance matrix used to generate b

μ_B = (1.0641, 0.1233, −0.0119)′

Σ_B:
  0.0475  0.0218  0.0488
  0.0218  0.0945  0.0215
  0.0488  0.0215  0.1261

Table 2.

Parameters of ft generating process

μ = (0.1074, 0.0357, 0.0033)′

cov(f_t):
  2.2540  0.2735  0.9197
  0.2735  0.3767  0.0430
  0.9197  0.0430  0.6822

Φ:
  −0.1149   0.0024  0.0776
   0.0016  −0.0162  0.0387
  −0.0399   0.0218  0.0351

5.2. Simulation

For each fixed p, we generate {b_i}_{i=1}^p independently from N_3(μ_B, Σ_B), and generate {f_t}_{t=1}^T and {u_t}_{t=1}^T independently. We keep T = 500 fixed and gradually increase p from 20 to 600 in multiples of 20 to illustrate the rates of convergence when the number of variables diverges relative to the sample size.

Repeat the following steps N = 200 times for each fixed p:

  1. Generate {bi}i=1p independently from N3(μB, ΣB), and set B = (b1, …, bp)′.

  2. Generate {ut}t=1T independently from Np(0, Σu).

  3. Generate {ft}t=1T independently from the VAR(1) model ft = μ + Φft−1 + εt.

  4. Calculate yt = Bft + ut for t = 1, …, T.

  5. Set ωT = 0.10K√(log p/T) to obtain the thresholding estimator (2.5) Σ̂u^𝒯, the sample covariance matrix cov̂(ft), and Σ̂y = (T − 1)^{-1} Σ_{t=1}^T (yt − ȳ)(yt − ȳ)′.
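The generation and thresholding steps above can be sketched as follows (an illustrative NumPy sketch, not the authors' code: `simulate_var1`, `adaptive_threshold` and the burn-in are our own simplifications; the thresholding rule follows estimator (2.5), with θ̂ij = T^{-1}Σ_t(ûitûjt − σ̂ij)²):

```python
import numpy as np

def simulate_var1(mu, Phi, Sig_eps, T, burn=200, rng=None):
    """Draw f_t = mu + Phi f_{t-1} + eps_t, discarding a burn-in period."""
    rng = rng or np.random.default_rng(0)
    K = len(mu)
    f, out = np.zeros(K), np.empty((T, K))
    for t in range(T + burn):
        f = mu + Phi @ f + rng.multivariate_normal(np.zeros(K), Sig_eps)
        if t >= burn:
            out[t - burn] = f
    return out

def adaptive_threshold(U, omega):
    """Adaptive thresholding of the sample covariance of the rows of U (T x p).

    Entry (i, j) is kept iff |sigma_hat_ij| >= omega * theta_hat_ij^{1/2},
    where theta_hat_ij = T^{-1} sum_t (u_it u_jt - sigma_hat_ij)^2.
    Diagonal entries are never thresholded.
    """
    T, p = U.shape
    S = U.T @ U / T                                       # sigma_hat_ij
    theta = ((U[:, :, None] * U[:, None, :] - S) ** 2).mean(axis=0)
    keep = np.abs(S) >= omega * np.sqrt(theta)
    np.fill_diagonal(keep, True)
    return S * keep
```

One Monte Carlo replication then draws B, {ut} and {ft} as in steps 1–3, forms yt = Bft + ut, and applies `adaptive_threshold` to the residual matrix with ωT as above.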

We graph the convergence of Σ̂𝒯 and Σ̂y to Σ, the covariance matrix of yt, under the entropy-loss norm ‖·‖Σ and the elementwise norm ‖·‖∞. We also graph the convergence of the inverses (Σ̂𝒯)^{-1} and (Σ̂y)^{-1} to Σ^{-1} under the operator norm; for the inverses we plot only p from 20 to 300. Since T = 500, the sample covariance matrix is singular for p > 500, and for p close to 500, Σ̂y is nearly singular, which leads to abnormally large values of the operator norm. Lastly, we record the standard deviations of these norms.
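The three error measures used in the figures can be computed as follows (a sketch; the entropy-loss norm is implemented as ‖A‖Σ = p^{-1/2}‖Σ^{-1/2}AΣ^{-1/2}‖F, the form that also appears in (B.11) of the Appendix, and the function names are ours):

```python
import numpy as np

def sigma_norm(A, Sigma):
    """Entropy-loss norm ||A||_Sigma = p^{-1/2} ||Sigma^{-1/2} A Sigma^{-1/2}||_F."""
    w, V = np.linalg.eigh(Sigma)                 # Sigma assumed symmetric PD
    Sigma_inv_half = V @ np.diag(w ** -0.5) @ V.T
    p = Sigma.shape[0]
    return np.linalg.norm(Sigma_inv_half @ A @ Sigma_inv_half, 'fro') / np.sqrt(p)

def max_norm(A):
    """Elementwise infinity norm: the largest absolute entry."""
    return np.abs(A).max()

def op_norm(A):
    """Operator norm: the largest singular value."""
    return np.linalg.norm(A, 2)
```

Each estimator's error is then, e.g., `sigma_norm(Sigma_hat - Sigma, Sigma)`, averaged over the N = 200 replications.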

5.3. Results

In Figures 1–3, the dashed curves correspond to Σ̂𝒯 and the solid curves to the sample covariance matrix Σ̂y. Figures 1 and 2 present the averages and standard deviations of the estimation errors of both matrices with respect to the Σ-norm and the infinity norm, respectively. Figure 3 presents the averages and standard deviations for the inverses with respect to the operator norm. Based on the simulation results, we can make the following observations:

  1. The standard deviations of the norms are negligible when compared to their corresponding averages.

  2. Under ‖·‖Σ, our estimator Σ̂𝒯 of the covariance matrix of y performs much better than the sample covariance matrix Σ̂y. Note that, in the proof of Theorem 2 in Fan, Fan and Lv (2008), it was shown that
    ‖Σ̂y − Σ‖Σ² = Op(K³/(T√p)) + Op(p/T) + Op(K^{3/2}/T). (5.1)

    For a small fixed value of K, such as K = 3, the dominating term in (5.1) is Op(p/T). From Theorem 4.1, and given that mT = o(p^{1/4}), the dominating term in the convergence of ‖Σ̂𝒯 − Σ‖Σ² is Op(p/T² + mT²(log p)/T). So we would expect our estimator to perform better, and the simulation results are consistent with the theory.

  3. Under the infinity norm, both estimators perform roughly the same. This is to be expected, since thresholding mainly affects the elements of the covariance matrix that are closest to 0, while the infinity norm captures the magnitude of the largest elementwise absolute error.

  4. Under the operator norm, the inverse of our estimator, (Σ̂𝒯)^{-1}, also performs significantly better than the inverse of the sample covariance matrix.

  5. Finally, when p > 500, the thresholding estimators Σ̂u^𝒯 and Σ̂𝒯 are still nonsingular.

Fig 1.


Averages and standard deviations of ‖Σ̂𝒯 − Σ‖Σ (dashed curve) and ‖Σ̂y − Σ‖Σ (solid curve) over N = 200 iterations, as a function of the dimensionality p.

Fig 3.


Averages and standard deviations of ‖(Σ̂𝒯)^{-1} − Σ^{-1}‖ (dashed curve) and ‖(Σ̂y)^{-1} − Σ^{-1}‖ (solid curve) over N = 200 iterations, as a function of the dimensionality p.

Fig 2.


Averages and standard deviations of ‖Σ̂𝒯 − Σ‖∞ (dashed curve) and ‖Σ̂y − Σ‖∞ (solid curve) over N = 200 iterations, as a function of the dimensionality p.

In conclusion, even after imposing less restrictive assumptions on the error covariance matrix, we still obtain an estimator Σ̂𝒯 that significantly outperforms the standard sample covariance matrix.

6. Conclusions and Discussions

We studied the rates of convergence of high dimensional covariance matrix estimators in approximate factor models under various norms. By assuming a sparse error covariance matrix, we allow for the presence of cross-sectional correlation even after the common factors are taken out. Since direct observations of the noises are not available, we first constructed the error sample covariance matrix from the estimation residuals, and then estimated the error covariance matrix using the adaptive thresholding method. We then constructed the covariance matrix of yt using the factor model, assuming that the factors follow a stationary and ergodic, possibly weakly dependent, process. It was shown that after thresholding, the estimated covariance matrices are still invertible even if p > T, and the rate of convergence of (Σ̂𝒯)^{-1} and (Σ̂u^𝒯)^{-1} is of order Op(KmT√(log p/T)), where the factor K comes from the impact of estimating the unobservable noise terms. This demonstrates that, when estimating the inverse covariance matrix, p is allowed to be much larger than T.

In fact, the rate of convergence in Theorem 2.1 reflects the impact of the unobservable idiosyncratic components on the thresholding method. Whether this rate is minimax when direct observations are unavailable and have to be estimated is an important open question, which we leave as a direction for future research.

Moreover, this paper uses the hard-thresholding technique, which takes the form σ̂ij I(|σ̂ij| > θij) for some pre-determined threshold θij. Recently, Rothman et al. (2009) and Cai and Liu (2011) studied the more general thresholding functions of Antoniadis and Fan (2001), which admit the form sθij(σ̂ij) and also allow for soft-thresholding. It is easy to apply the more general thresholding here as well, and the rate of convergence of the resulting covariance matrix estimators should be straightforward to derive.
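For illustration, the hard rule used here and the soft rule covered by the Antoniadis–Fan/Cai–Liu framework can be written as follows (a sketch; the function names are ours):

```python
import numpy as np

def hard_threshold(x, tau):
    """Hard thresholding: s(x) = x * 1{|x| > tau}."""
    x = np.asarray(x, dtype=float)
    return x * (np.abs(x) > tau)

def soft_threshold(x, tau):
    """Soft thresholding: s(x) = sign(x) * max(|x| - tau, 0)."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```

Both rules zero out entries no larger than tau in magnitude and satisfy |s(x) − x| ≤ tau; soft thresholding additionally shrinks the surviving entries toward zero by tau.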

Finally, we considered the case when the common factors are observable, as in Fama and French (1992). In some applications, the common factors are unobservable and need to be estimated (Bai (2003)). In that case, it is still possible to estimate the covariance matrices consistently using techniques similar to those in this paper. However, the impact of high dimensionality on the rate of convergence then also comes from the estimation error of the unobservable factors. We plan to address this problem in a separate paper.

Acknowledgments

The research was partially supported by NIH Grant R01-GM072611, NSF Grant DMS-0704337, and NIH grant R01GM100474.

APPENDIX A: PROOFS FOR SECTION 2

A.1. Lemmas. We first prove the following useful lemmas, in which we consider the operator norm ‖A‖² = λmax(A′A).

Lemma A.1. Let A be an m × m random matrix and B an m × m deterministic matrix, both positive semi-definite. Suppose there exists a positive sequence {cT}_{T=1}^∞ such that λmin(B) > cT for all large enough T. Then

P(λmin(A) ≥ 0.5cT) ≥ P(‖A − B‖ ≤ 0.5cT),   and   P(‖A^{-1} − B^{-1}‖ ≤ 2cT^{-2}‖A − B‖) ≥ P(‖A − B‖ ≤ 0.5cT).

Proof. For any v ∈ ℝm such that ‖v‖ = 1, under the event ‖AB‖ ≤ 0.5cT,

v′Av = v′Bv − v′(B − A)v ≥ λmin(B) − ‖A − B‖ ≥ 0.5cT.

Hence λmin(A) ≥ 0.5cT.

In addition, still under the event ‖AB‖ ≤ 0.5cT,

‖A^{-1} − B^{-1}‖ = ‖A^{-1}(B − A)B^{-1}‖ ≤ λmin(A)^{-1}‖A − B‖λmin(B)^{-1} ≤ 2cT^{-2}‖A − B‖.

Q.E.D.
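Lemma A.1 can be sanity-checked numerically (a sketch with arbitrary illustrative choices of m, cT and the perturbation; the constant 2cT^{-2} is the one appearing in the proof above):

```python
import numpy as np

rng = np.random.default_rng(1)
m, c_T = 10, 2.0

# deterministic positive definite B with lambda_min(B) = 2.5 > c_T
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
B = Q @ np.diag(np.linspace(2.5, 5.0, m)) @ Q.T
# a perturbation A with ||A - B|| = 0.05 <= 0.5 * c_T
A = B + 0.05 * np.eye(m)

op = lambda M: np.linalg.norm(M, 2)   # operator norm
assert op(A - B) <= 0.5 * c_T
assert np.linalg.eigvalsh(A).min() >= 0.5 * c_T                              # first conclusion
assert op(np.linalg.inv(A) - np.linalg.inv(B)) <= 2 * c_T ** -2 * op(A - B)  # second conclusion
```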

Lemma A.2. Suppose that the random variables Z1 and Z2 both satisfy the exponential-type tail condition: there exist r1, r2 ∈ (0, 1) and b1, b2 > 0 such that, for all s > 0,

P(|Zi| > s) ≤ exp(1 − (s/bi)^{ri}),   i = 1, 2.

Then for some r3 and b3 > 0, and any s > 0,

P(|Z1Z2| > s) ≤ exp(1 − (s/b3)^{r3}). (A.1)

Proof. Let M = (sb2^{r2/r1}/b1)^{r1/(r1+r2)}, b = b1b2 and r = r1r2/(r1 + r2). For any s > 0,

P(|Z1Z2| > s) ≤ P(M|Z1| > s) + P(|Z2| > M) ≤ exp(1 − (s/(b1M))^{r1}) + exp(1 − (M/b2)^{r2}) = 2 exp(1 − (s/b)^r).

Pick r3 ∈ (0, r) and b3 > max{(r3/r)^{1/r}b, (1 + log 2)^{1/r}b}; then it can be shown that F(s) = (s/b)^r − (s/b3)^{r3} is increasing for s > b3. Hence F(s) > F(b3) > log 2 when s > b3, which implies that, when s > b3,

P(|Z1Z2| > s) ≤ 2 exp(1 − (s/b)^r) ≤ exp(1 − (s/b3)^{r3}).

When s ≤ b3,

P(|Z1Z2| > s) ≤ 1 ≤ exp(1 − (s/b3)^{r3}).

Q.E.D.

Lemma A.3. Under the assumptions of Theorem 2.1, there exists a constant Cr > 0 that does not depend on (p, T) such that, when C > Cr,

  1. P(max_{i,j≤p} |(1/T)Σ_{t=1}^T uitujt − σij| > C√(log p/T)) = O(1/p²),
  2. P(max_{i,j≤p} |(1/T)Σ_{t=1}^T (ûitûjt − uitujt)| > CaT) = O(1/p² + κ1(p, T)),
  3. P(max_{i,j≤p} |σ̂ij − σij| > C(√(log p/T) + aT)) = O(1/p² + κ1(p, T)).

Proof. (i) By Assumption 2.1 and Lemma A.2, uitujt satisfies the exponential-type tail condition with parameter r1/3, as shown in the proof of Lemma A.2. Therefore, by the Bernstein inequality for weakly dependent data (Theorem 1 of Merlevède et al. (2009)), there exist constants C1, C2, C3, C4 and C5 > 0 that depend only on b1, r1 and r2 such that, for any i, j ≤ p and γ^{-1} = 3r1^{-1} + r2^{-1},

P(|(1/T)Σ_{t=1}^T uitujt − σij| ≥ s) ≤ T exp(−(Ts)^γ/C1) + exp(−T²s²/(C2(1 + TC3))) + exp(−((Ts)²/(C4T)) exp((Ts)^{γ(1−γ)}/(C5(log Ts)^γ))).

Using Bonferroni’s method, we have

P(max_{i,j≤p} |(1/T)Σ_{t=1}^T uitujt − σij| > s) ≤ p² max_{i,j≤p} P(|(1/T)Σ_{t=1}^T uitujt − σij| > s).

Let s = C√(log p/T) for some C > 0. It is not hard to check that, when (log p)^{2/γ−1} = o(T) (by assumption), for large enough C,

p²T exp(−(Ts)^γ/C1) + p² exp(−((Ts)²/(C4T)) exp((Ts)^{γ(1−γ)}/(C5(log Ts)^γ))) = o(1/p²),

and

p² exp(−T²s²/(C2(1 + TC3))) = O(1/p²).

This proves (i).

(ii) There exists C1′ > 0 such that

P(max_{i≤p} (1/T)Σ_{t=1}^T (ûit − uit)² > C1′aT²) = O(κ1(p, T)). (A.2)

Under the event {max_{i≤p} |(1/T)Σ_t uit² − σii| ≤ max_{i≤p} σii/4} ∩ {max_{i≤p} (1/T)Σ_t (ûit − uit)² ≤ C1′aT²}, the Cauchy–Schwarz inequality gives

Z ≡ max_{i,j≤p} |(1/T)Σ_t (ûitûjt − uitujt)|
 ≤ max_{i,j≤p} |(1/T)Σ_t (ûit − uit)(ûjt − ujt)| + 2 max_{i,j≤p} |(1/T)Σ_t uit(ûjt − ujt)|
 ≤ max_{i≤p} (1/T)Σ_t (ûit − uit)² + 2√(max_{i≤p} (1/T)Σ_t uit² · max_{i≤p} (1/T)Σ_t (ûit − uit)²)
 ≤ C1′aT² + 2√((5/4) max_{i≤p} σii · C1′aT²).

Since aT = o(1), when C > 3√(C1′ max_{i≤p} σii), we have, for all large T,

CaT > C1′aT² + 2√((5/4) max_{i≤p} σii · C1′aT²),

and

P(Z ≤ CaT) ≥ 1 − P(max_{i≤p} |(1/T)Σ_t uit² − σii| > max_{i≤p} σii/4) − P(max_{i≤p} (1/T)Σ_t (ûit − uit)² > C1′aT²).

By part (i) and (A.2), P(Z ≤ CaT) ≥ 1 − O(p^{-2} + κ1(p, T)).

(iii) By (i) and (ii), there exists Cr > 0 such that, when C > Cr, the displayed inequalities in (i) and (ii) hold. Under the event {max_{i,j≤p} |(1/T)Σ_t uitujt − σij| ≤ C√(log p/T)} ∩ {max_{i,j≤p} |(1/T)Σ_t (ûitûjt − uitujt)| ≤ CaT}, the triangle inequality gives

max_{i,j≤p} |σ̂ij − σij| ≤ max_{i,j≤p} |(1/T)Σ_t uitujt − σij| + max_{i,j≤p} |(1/T)Σ_t (ûitûjt − uitujt)| ≤ C(√(log p/T) + aT).

Hence the desired result follows from part (i) and part (ii) of the lemma.

Q.E.D.

Lemma A.4. Under Assumptions 2.1 and 2.2,

P(CL ≤ min_{ij} θ̂ij ≤ max_{ij} θ̂ij ≤ CU) ≥ 1 − O(1/p² + κ1(p, T) + κ2(p, T)),

where

CL = (1/45) min_{ij} var(uitujt),   CU = 3 max_{i≤p} σii + 4 max_{ij} var(uitujt).

Proof. (i) Using the Bernstein inequality and the same argument as in the proof of Lemma A.3(i), there exists Cr″ > 0 such that, when C > Cr″ and (log p)^{6/γ−1} = o(T),

P(max_{i,j≤p} |(1/T)Σ_{t=1}^T (uitujt − σij)² − var(uitujt)| > C√(log p/T)) = O(1/p²).

For some C > 0, under the event ∩_{i=1}^4 Ai, where

A1 = {max_{i,j≤p} |σij − σ̂ij| ≤ C(√(log p/T) + aT)},
A2 = {max_{i≤p,t≤T} |ûit − uit| ≤ min{1/2, (20 max_i σii)^{-1} min_{ij} var(uitujt)}},
A3 = {max_{i≤p} |(1/T)Σ_t uit² − σii| ≤ C√(log p/T)},
A4 = {max_{i,j≤p} |(1/T)Σ_t (uitujt − σij)² − var(uitujt)| ≤ C√(log p/T)},

we have, for any i, j, by adding and subtracting terms,

θ̂ij = (1/T)Σ_t (ûitûjt − σ̂ij)²
 ≤ (2/T)Σ_t (ûitûjt − σij)² + 2 max_{i,j} (σij − σ̂ij)²
 ≤ (4/T)Σ_t (ûit − uit)²ûjt² + (4/T)Σ_t (ûjt − ujt)²uit² + (4/T)Σ_t (uitujt − σij)² + O(√(log p/T) + aT²)
 ≤ 4 max_{i,t} |ûit − uit|² (max_i σ̂ii + max_i (1/T)Σ_t uit²) + 4 var(uitujt) + O(√(log p/T) + aT²)
 ≤ (2C√(log p/T) + CaT + 2 max_i σii) + 4 var(uitujt) + o(1),

where the O(·) and o(·) terms are uniform in p and T. Hence under ∩_{i=1}^4 Ai, for all large enough T and p, uniformly in i, j, we have

θ̂ij ≤ 3 max_{i≤p} σii + 4 max_{ij} var(uitujt).

Still by adding and subtracting terms, we obtain

(1/T)Σ_t (uitujt − σij)²
 ≤ (4/T)Σ_t (uitujt − ûitûjt)² + (4/T)Σ_t (ûitûjt − σ̂ij)² + 4(σij − σ̂ij)²
 ≤ (8/T)Σ_t uit²(ujt − ûjt)² + (8/T)Σ_t ûjt²(uit − ûit)² + 4θ̂ij + O(√(log p/T) + aT²)
 ≤ 8 max_{i,t} |ûit − uit|² (max_i σ̂ii + max_j (1/T)Σ_t ujt²) + 4θ̂ij + o(1).

Under the event ∩_{i=1}^4 Ai, we have

4θ̂ij + o(1) ≥ min_{ij} var(uitujt) − C√(log p/T) − 8 max_{i,t} |ûit − uit|² (2C√(log p/T) + CaT + 2 max_i σii) ≥ (1/10) min_{ij} var(uitujt).

Hence for all large T and p, uniformly in i, j, we have θ̂ij ≥ (1/45) min_{ij} var(uitujt).

Finally, by Lemma A.3 and Assumption 2.2,

P(∩_{i=1}^4 Ai) ≥ 1 − O(1/p² + κ1(p, T) + κ2(p, T)),

which completes the proof.

Q.E.D.

A.2. Proof of Theorem 2.1.

Proof. (i) For the operator norm, we have

‖Σ̂u^𝒯 − Σu‖ ≤ max_{i≤p} Σ_{j=1}^p |σ̂ij I(|σ̂ij| ≥ ωT θ̂ij^{1/2}) − σij|.

By Lemma A.3(iii), there exists C1 > 0 such that the event

A1 = {max_{i,j≤p} |σ̂ij − σij| ≤ C1(√(log p/T) + aT)}

occurs with probability P(A1) ≥ 1 − O(1/p² + κ1(p, T)). Let C > 0 be such that C√CL > 2C1, where CL is defined in Lemma A.4. Let ωT = C(√(log p/T) + aT) and bT = C1(√(log p/T) + aT); then √CL ωT > 2bT, and by Lemma A.4,

P(min_{ij} θ̂ij^{1/2} ωT > 2bT) ≥ P(min_{ij} θ̂ij^{1/2} > √CL) ≥ 1 − O(1/p² + κ1(p, T) + κ2(p, T)).

Define the following events

A2 = {min_{ij} θ̂ij^{1/2} ωT > 2bT},   A3 = {max_{ij} θ̂ij^{1/2} ≤ CU^{1/2}},

where CU is defined in Lemma A.4. Under ∩_{i=1}^3 Ai, the event |σ̂ij| ≥ ωT θ̂ij^{1/2} implies |σij| ≥ bT, and the event |σ̂ij| < ωT θ̂ij^{1/2} implies |σij| < bT + CU^{1/2}ωT. We thus have, uniformly in i ≤ p, under ∩_{i=1}^3 Ai,

‖Σ̂u^𝒯 − Σu‖ ≤ Σ_{j=1}^p |σ̂ij I(|σ̂ij| ≥ ωT θ̂ij^{1/2}) − σij|
 ≤ Σ_{j=1}^p |σ̂ij − σij| I(|σ̂ij| ≥ ωT θ̂ij^{1/2}) + Σ_{j=1}^p |σij| I(|σ̂ij| < ωT θ̂ij^{1/2})
 ≤ Σ_{j=1}^p |σ̂ij − σij| I(|σij| ≥ bT) + Σ_{j=1}^p |σij| I(|σij| < bT + CU^{1/2}ωT)
 ≤ bT mT + (bT + CU^{1/2}ωT)mT ≤ (CL^{1/2} + CU^{1/2}) ωT mT.

By Lemmas A.3(iii) and A.4, P(∩_{i=1}^3 Ai) ≥ 1 − O(1/p² + κ1(p, T) + κ2(p, T)), which proves the result. Q.E.D.

(ii) By part (i) of the theorem, there exists some C > 0 such that

P(‖Σ̂u^𝒯 − Σu‖ > CωTmT) = O(1/p² + κ1(p, T) + κ2(p, T)).

By Lemma A.1,

P(λmin(Σ̂u^𝒯) ≥ 0.5λmin(Σu)) ≥ P(‖Σ̂u^𝒯 − Σu‖ ≤ 0.5λmin(Σu)) ≥ 1 − O(1/p² + κ1(p, T) + κ2(p, T)).

In addition, when ωTmT = o(1),

P(‖(Σ̂u^𝒯)^{-1} − Σu^{-1}‖ ≤ 2‖Σu^{-1}‖²CωTmT)
 ≥ P(‖(Σ̂u^𝒯)^{-1} − Σu^{-1}‖ ≤ 2‖Σu^{-1}‖²‖Σ̂u^𝒯 − Σu‖, ‖Σ̂u^𝒯 − Σu‖ ≤ CωTmT)
 ≥ P(‖(Σ̂u^𝒯)^{-1} − Σu^{-1}‖ ≤ 2‖Σu^{-1}‖²‖Σ̂u^𝒯 − Σu‖) − P(‖Σ̂u^𝒯 − Σu‖ > CωTmT)
 ≥ P(‖Σ̂u^𝒯 − Σu‖ ≤ 0.5λmin(Σu)) − O(1/p² + κ1(p, T) + κ2(p, T))
 ≥ 1 − O(1/p² + κ1(p, T) + κ2(p, T)),

where the third inequality follows from Lemma A.1 as well.

Q.E.D.

APPENDIX B: PROOFS FOR SECTION 3

B.1. Proof of Theorem 3.1.

Lemma B.1. There exists C1 > 0 such that

  1. P(max_{i,j≤K} |(1/T)Σ_{t=1}^T fitfjt − E fitfjt| > C1√(log T/T)) = O(1/T²),
  2. P(max_{k≤K,i≤p} |(1/T)Σ_{t=1}^T fktuit| > C1√(log p/T)) = O(1/p²).

Proof. (i) Let Zij = (1/T)Σ_{t=1}^T (fitfjt − E fitfjt). We bound max_{ij} |Zij| using a Bernstein-type inequality. Lemma A.2 implies that for any i, j ≤ K, fitfjt satisfies the exponential tail condition (3.3) with parameter r3/3. Let r4^{-1} = 3r3^{-1} + r2^{-1}, where r2 > 0 is the parameter in the strong mixing condition. By Assumption 3.3, r4 < 1, and by the Bernstein inequality for weakly dependent data in Merlevède et al. (2009, Theorem 1), there exist Ci > 0, i = 1, …, 5, such that for any s > 0,

max_{i,j} P(|Zij| > s) ≤ T exp(−(Ts)^{r4}/C1) + exp(−T²s²/(C2(1 + TC3))) + exp(−((Ts)²/(C4T)) exp((Ts)^{r4(1−r4)}/(C5(log Ts)^{r4}))). (B.1)

Using the Bonferroni inequality,

P(max_{i≤K,j≤K} |Zij| > s) ≤ K² max_{i,j} P(|Zij| > s).

Let s = C√(log T/T). For all large enough C, since K² = o(T),

TK² exp(−(Ts)^{r4}/C1) + K² exp(−((Ts)²/(C4T)) exp((Ts)^{r4(1−r4)}/(C5(log Ts)^{r4}))) = o(1/T²),
K² exp(−T²s²/(C2(1 + TC3))) = O(1/T²).

This proves part (i).

(ii) By Lemma A.2 and Assumptions 2.1(iii) and 3.3(ii), Zki,t ≡ fktuit satisfies the exponential tail condition (2.6) with tail parameter 2r1r3/(3r1 + 3r3), as well as the strong mixing condition with parameter r2. Hence we can again apply the Bernstein inequality for weakly dependent data in Merlevède et al. (2009, Theorem 1) and the Bonferroni method to Zki,t, similarly to (B.1), with the parameter γ2^{-1} = 1.5r1^{-1} + 1.5r3^{-1} + r2^{-1}. It follows from 3r1^{-1} + r2^{-1} > 1 and 3r3^{-1} + r2^{-1} > 1 that γ2 < 1. Thus when s = C√(log p/T) for large enough C, the term

pK exp(−T²s²/(C2(1 + TC3))) = O(p^{-2}),

and the remaining terms on the right-hand side of the inequality, multiplied by pK, are of order o(p^{-2}). Hence when (log p)^{2/γ2−1} = o(T) (which is implied by the Theorem's assumption) and K = o(p), there exists C′ > 0 such that

P(max_{k≤K,i≤p} |(1/T)Σ_{t=1}^T fktuit| > C′√(log p/T)) = O(1/p²). (B.2)

Q.E.D.

Proof of Lemma 3.1

  1. Since K²log T = o(T), and λmin(E ftft′) is bounded away from zero, for large enough T, by Lemma B.1(i),
    P(‖(1/T)XX′ − E ftft′‖ ≤ 0.5λmin(E ftft′)) ≥ P(K max_{i≤K,j≤K} |(1/T)Σ_{t=1}^T fitfjt − E fitfjt| ≤ 0.5λmin(E ftft′)) ≥ 1 − O(1/T²). (B.3)
    Hence by Lemma A.1,
    P(λmin(T^{-1}XX′) ≥ 0.5λmin(E ftft′)) ≥ 1 − O(1/T²). (B.4)
    Since b̂i − bi = (XX′)^{-1}Xui, we have ‖b̂i − bi‖² = ui′X′(XX′)^{-2}Xui. For C′ > 0 such that (B.2) holds, under the event
    A ≡ {max_{k≤K,i≤p} |(1/T)Σ_{t=1}^T fktuit| ≤ C′√(log p/T)} ∩ {λmin(T^{-1}XX′) ≥ 0.5λmin(E ftft′)},
    we have
    ‖b̂i − bi‖² ≤ 4λmin(E ftft′)^{-2} Σ_{k=1}^K ((1/T)Σ_{t=1}^T fktuit)² ≤ 4Kλmin(E ftft′)^{-2} max_{k≤K,i≤p} ((1/T)Σ_{t=1}^T fktuit)² ≤ 4KC′²(log p)λmin(E ftft′)^{-2}/T.

    The desired result then follows from P(A) ≥ 1 − O(1/T² + 1/p²).

  2. For C > max_{k≤K} E fkt², we have, by Lemma B.1(i),
    P((1/T)Σ_t ‖ft‖² > CK) ≤ P(K max_{k≤K} |(1/T)Σ_{t=1}^T fkt² − E fkt²| + K max_{k≤K} E fkt² > CK) = O(1/T²).
    The result then follows from
    max_{i≤p} (1/T)Σ_{t=1}^T |uit − ûit|² ≤ ((1/T)Σ_t ‖ft‖²) max_{i≤p} ‖b̂i − bi‖²
    and part (i).
  3. By Assumption 3.3, for any s > 0,
    P(max_{t≤T} ‖ft‖ > s) ≤ T P(‖ft‖ > s) ≤ TK max_{k≤K} P(fkt² > s²/K) ≤ TK exp(1 − (s/(b2√K))^{r3}).
    When s ≥ C√K(log T)^{1/r3} for large enough C, i.e., C^{r3} > 4b2^{r3},
    P(max_{t≤T} ‖ft‖ > C√K(log T)^{1/r3}) ≤ T^{-2}.
    The result then follows from
    max_{t≤T,i≤p} |uit − ûit| = max_{t≤T,i≤p} |(b̂i − bi)′ft| ≤ max_i ‖b̂i − bi‖ max_t ‖ft‖,
    and Lemma 3.1(i). Q.E.D.

Proof of Theorem 3.1 Theorem 3.1 follows immediately from Theorem 2.1 and Lemma 3.1. Q.E.D.

B.2. Proof of Theorem 3.2 Part (i). Define

DT = cov̂(ft) − cov(ft),   CT = B̂ − B,   E = (u1, …, uT).

We have,

‖Σ̂𝒯 − Σ‖Σ² ≤ 4‖BDTB′‖Σ² + 24‖B cov̂(ft)CT′‖Σ² + 16‖CT cov̂(ft)CT′‖Σ² + 2‖Σ̂u^𝒯 − Σu‖Σ². (B.5)

We bound the terms on the right hand side in the following lemmas.

Lemma B.2. There exists C > 0, such that

  1. P(‖DT‖F² > CK²(log T)/T) = O(T^{-2});
  2. P(‖CT‖F² > CKp(log p)/T) = O(T^{-2} + p^{-2}).

Proof. (i) Similarly to the proof of Lemma B.1(i), it can be shown that there exists C1 > 0 such that

P(max_{i≤K} |(1/T)Σ_{t=1}^T fit − E fit| > C1√(log T/T)) = O(T^{-2}).

Hence sup_K max_{i≤K} E|fit| < ∞ implies that there exists C > 0 such that

P(max_{i,j≤K} |((1/T)Σ_{t=1}^T fit)((1/T)Σ_{t=1}^T fjt) − E fit E fjt| > C√(log T/T)) = O(T^{-2}).

The result then follows from Lemma B.1(i) and

‖DT‖F² ≤ K²(max_{i,j≤K} |(1/T)Σ_{t=1}^T fitfjt − E fitfjt|² + max_{i,j≤K} |((1/T)Σ_{t=1}^T fit)((1/T)Σ_{t=1}^T fjt) − E fit E fjt|²).

(ii) We have CT = EX′(XX′)^{-1}. By Lemma B.1(ii), there exists C′ > 0 such that

P(max_{k,i} |(1/T)Σ_{t=1}^T fktuit| > C′√(log p/T)) = O(p^{-2}).

Under the event

A = {max_{k,i} |(1/T)Σ_{t=1}^T fktuit| ≤ C′√(log p/T)} ∩ {λmin(T^{-1}XX′) ≥ 0.5λmin(E ftft′)},

we have ‖CT‖F² ≤ 4λmin^{-2}(E ftft′)C′²pK(log p)/T, which proves the result since λmin(E ftft′) is bounded away from zero and P(A) ≥ 1 − O(T^{-2} + p^{-2}) due to (B.4).

Q.E.D.

Lemma B.3. There exists C > 0 such that

  1. P(‖BDTB′‖Σ² + ‖B cov̂(ft)CT′‖Σ² > CK(log p)/T + CK²(log T)/(Tp)) = O(T^{-2} + p^{-2});
  2. P(‖CT cov̂(ft)CT′‖Σ² > CpK²(log p)²/T²) = O(T^{-2} + p^{-2}).

Proof. (i) The same argument as in the proof of Theorem 2 in Fan, Fan and Lv (2008) implies that

‖B′Σ^{-1}B‖ ≤ 2‖cov(ft)^{-1}‖ = O(1).

Hence

‖BDTB′‖Σ² = p^{-1}tr(Σ^{-1/2}BDTB′Σ^{-1}BDTB′Σ^{-1/2}) = p^{-1}tr(DTB′Σ^{-1}BDTB′Σ^{-1}B) ≤ p^{-1}‖DTB′Σ^{-1}B‖F² ≤ O(p^{-1})‖DT‖F². (B.6)

On the other hand,

‖B cov̂(ft)CT′‖Σ² ≤ 8T^{-2}‖BXX′CT′‖Σ² + 8T^{-4}‖BX11′X′CT′‖Σ². (B.7)

Respectively,

‖BXX′CT′‖Σ² ≤ p^{-1}‖XX′CT′Σ^{-1}‖F ‖CTXX′B′Σ^{-1}B‖F,
‖BX11′X′CT′‖Σ² ≤ p^{-1}‖X11′X′CT′Σ^{-1}‖F ‖CTX11′X′B′Σ^{-1}B‖F. (B.8)

By Lemma B.1(i) and ‖E ftft′‖ < ∞, P(‖XX′‖ > TC) = O(T^{-2}) for some C > 0. Hence, Lemma B.2(ii) implies

P(‖BXX′CT′‖Σ² > C′TK log p) = O(T^{-2} + p^{-2}) (B.9)

for some C′ > 0. In addition, the eigenvalues of cov̂(ft) = T^{-1}XX′ − T^{-2}X11′X′ are all bounded away from both zero and infinity with probability at least 1 − O(T^{-2}) (implied by Lemmas B.1(i), A.1, and Assumption 3.4). Hence for some C1 > 0, with probability at least 1 − O(T^{-2}),

‖T^{-2}X11′X′‖ ≤ ‖T^{-1}XX′‖ ≤ C1,   ‖BX11′X′CT′‖Σ² ≤ O(p^{-1})‖X11′X′‖²‖CT‖F². (B.10)

The result then follows from the combination of (B.6)–(B.10), and Lemma B.2.

(ii) Straightforward calculation yields

p‖CT cov̂(ft)CT′‖Σ² = tr(CT cov̂(ft)CT′Σ^{-1}CT cov̂(ft)CT′Σ^{-1}) ≤ ‖CT cov̂(ft)CT′Σ^{-1}‖F² ≤ λmax²(Σ^{-1})λmax²(cov̂(ft))‖CT‖F⁴.

Since ‖cov(ft)‖ is bounded, by Lemma B.1(i), λmax²(cov̂(ft)) is bounded with probability at least 1 − O(T^{-2}). The result again follows from Lemma B.2(ii).

Proof of Theorem 3.2 Part (i)

  1. We have
    ‖Σ̂u^𝒯 − Σu‖Σ = p^{-1/2}‖Σ^{-1/2}(Σ̂u^𝒯 − Σu)Σ^{-1/2}‖F ≤ ‖Σ^{-1/2}(Σ̂u^𝒯 − Σu)Σ^{-1/2}‖ ≤ ‖Σ̂u^𝒯 − Σu‖·λmax(Σ^{-1}). (B.11)
    Therefore, (B.5), (B.11), Theorem 3.1 and Lemmas B.2, B.3 yield the result, with the fact that (assuming log T = o(p))
    K(log p)/T + K²(log T)/(Tp) + pK²(log p)²/T² + mT²K²(log p)/T = O(pK²(log p)²/T² + mT²K²(log p)/T).
  2. For the infinity norm, it is straightforward to find that
    ‖Σ̂𝒯 − Σ‖∞ ≤ 2‖CT cov(ft)B′‖∞ + ‖BDTB′‖∞ + ‖CT cov(ft)CT′‖∞ + 2‖BDTCT′‖∞ + ‖CTDTCT′‖∞ + ‖Σ̂u^𝒯 − Σu‖∞. (B.12)

By assumption, both ‖B‖∞ and ‖cov(ft)‖ are bounded uniformly in (p, K, T). In addition, let ei be the p-dimensional column vector whose ith component is one, with the remaining components being zero. Then under the events ‖DT‖ ≤ C√(log T/T), max_{i≤K,j≤p} |(1/T)Σ_{t=1}^T fitujt| ≤ C√(log p/T), ‖((1/T)XX′)^{-1}‖ ≤ C, and max_{i≤p} ‖b̂i − bi‖ ≤ C√(K(log p)/T), we have, for some C′ > 0,

2‖CT cov(ft)B′‖∞ ≤ 2 max_{i,j≤p} |ei′CT cov(ft)B′ej| ≤ 2 max_{i≤p} ‖b̂i − bi‖ ‖cov(ft)‖ max_{j≤p} ‖bj‖ ≤ C′√(K(log p)/T), (B.13)
‖CT‖∞ = max_{i,j} |ei′(1/T)EX′((1/T)XX′)^{-1}ej| ≤ max_{i≤p} ‖ei′(1/T)EX′‖ · ‖((1/T)XX′)^{-1}‖ ≤ √K max_{i≤K,j≤p} |(1/T)Σ_{t=1}^T fitujt| · ‖((1/T)XX′)^{-1}‖ ≤ C′√(K(log p)/T), (B.14)
‖BDTB′‖∞ ≤ K²‖B‖∞²‖DT‖F ≤ C′K²√(log T/T), (B.15)
‖CT cov(ft)CT′‖∞ ≤ max_{i,j≤p} |ei′CT cov(ft)CT′ej| ≤ max_{i≤p} ‖ei′CT‖²‖cov(ft)‖ ≤ C′K²(log p)/T, (B.16)
2‖BDTCT′‖∞ ≤ 2K²‖B‖∞‖DT‖∞‖CT‖∞ = o(K²√(log T/T)), (B.17)

and

‖CTDTCT′‖∞ ≤ K²‖DT‖∞‖CT‖∞² = o(K²√(log T/T)). (B.18)

Moreover, the (i, j)th entry of Σ̂u^𝒯 − Σu is given by

σ̂ij I(|σ̂ij| ≥ ωT θ̂ij^{1/2}) − σij = −σij if |σ̂ij| < ωT θ̂ij^{1/2}, and σ̂ij − σij otherwise.

Hence ‖Σ̂u^𝒯 − Σu‖∞ ≤ max_{i,j≤p} |σij − σ̂ij| + ωT max_{i,j≤p} θ̂ij^{1/2}, which implies that with probability at least 1 − O(p^{-2} + T^{-2}),

‖Σ̂u^𝒯 − Σu‖∞ ≤ CK√(log p/T). (B.19)

The result then follows from the combination of (B.12)–(B.19), (B.4), and Lemmas 3.1 and B.1.

Q.E.D.

B.3. Proof of Theorem 3.2 Part (ii). We first prove two technical lemmas to be used below.

Lemma B.4. (i) λmin(B′Σu^{-1}B) ≥ cp for some c > 0.

(ii) ‖[cov(ft)^{-1} + B′Σu^{-1}B]^{-1}‖ = O(p^{-1}).

Proof. (i) We have

λmin(B′Σu^{-1}B) ≥ λmin(Σu^{-1})λmin(B′B).

It then follows from Assumption 3.5 that λmin(B′B) > cp for some c > 0 and all large p. The result follows since ‖Σu‖ is bounded away from infinity.

(ii) It follows immediately from

λmin(cov(ft)^{-1} + B′Σu^{-1}B) ≥ λmin(B′Σu^{-1}B).

Q.E.D.

Lemma B.5. There exists C > 0 such that

  1. P(‖B̂′(Σ̂u^𝒯)^{-1}B̂ − B′Σu^{-1}B‖ > CpmTK√(log p/T)) = O(1/p² + 1/T²);
  2. P(‖[cov̂(ft)^{-1} + B̂′(Σ̂u^𝒯)^{-1}B̂]^{-1}‖ > C/p) = O(1/p² + 1/T²);
  3. for G = [cov̂(ft)^{-1} + B̂′(Σ̂u^𝒯)^{-1}B̂]^{-1},
    P(‖B̂GB̂′(Σ̂u^𝒯)^{-1}‖ > C) = O(1/p² + 1/T²).

Proof. (i) Let H = B̂′(Σ̂u^𝒯)^{-1}B̂ − B′Σu^{-1}B. Then

‖H‖ ≤ 2‖CT′Σu^{-1}B‖ + 2‖CT′((Σ̂u^𝒯)^{-1} − Σu^{-1})B‖ + ‖B′((Σ̂u^𝒯)^{-1} − Σu^{-1})B‖ + ‖CT′Σu^{-1}CT‖ + ‖CT′((Σ̂u^𝒯)^{-1} − Σu^{-1})CT‖.

The same argument as in Fan, Fan and Lv (2008) (eq. 14) implies that ‖B‖F = O(√p). Therefore, by Theorem 3.1 and Lemma B.2(ii), it is straightforward to verify the result.

(ii) Since ‖DTF ≥ ‖DT‖, according to Lemma B.2(i), there exists C′ > 0 such that with probability ast least 1 − O(T−2), DT<CK(log T)/T. Thus by Lemma A.1, for some C″ > 0,

P(cov^(ft)1cov(ft)1<CDT)P(DT<CKlog TT)1O(T2),

which implies

P(cov^(ft)1cov(ft)1<CCKlog TT)1O(T2). (B.20)

Now let  = cov̂(ft)^{-1} + B̂′(Σ̂u^𝒯)^{-1}B̂ and A = cov(ft)^{-1} + B′Σu^{-1}B. Then part (i) and (B.20) imply

P(‖Â − A‖ < C″C′K√(log T/T) + CpmTK√(log p/T)) ≥ 1 − O(1/p² + 1/T²). (B.21)

In addition, mTK√(log p/T) = o(1). Hence by Lemmas A.1 and B.4(ii), for some C > 0,

P(λmin(Â) ≥ Cp) ≥ P(‖Â − A‖ < Cp) ≥ 1 − O(1/p² + 1/T²),

which implies the desired result.

(iii) By the triangle inequality, ‖B̂‖F ≤ ‖CT‖F + O(√p). Hence Lemma B.2(ii) implies, for some C > 0,

P(‖B̂‖F ≤ C√p) ≥ 1 − O(T^{-2} + p^{-2}). (B.22)

In addition, since ‖Σu^{-1}‖ is bounded, it follows from Theorem 3.1 that ‖(Σ̂u^𝒯)^{-1}‖ is bounded with probability at least 1 − O(p^{-2} + T^{-2}). The result then follows from the fact that

P(‖G‖ > Cp^{-1}) = O(1/p² + 1/T²),

which is shown in part (ii).

Q.E.D.

To complete the proof of Theorem 3.2 Part (ii), we follow lines of proof similar to those in Fan, Fan and Lv (2008). Using the Sherman–Morrison–Woodbury formula, we have

(Σ̂𝒯)^{-1} − Σ^{-1} = (Σ̂u^𝒯)^{-1} − Σu^{-1}
 + ((Σ̂u^𝒯)^{-1} − Σu^{-1})B̂[cov̂(ft)^{-1} + B̂′(Σ̂u^𝒯)^{-1}B̂]^{-1}B̂′(Σ̂u^𝒯)^{-1}
 + ((Σ̂u^𝒯)^{-1} − Σu^{-1})B̂[cov̂(ft)^{-1} + B̂′(Σ̂u^𝒯)^{-1}B̂]^{-1}B̂′Σu^{-1}
 + Σu^{-1}(B̂ − B)[cov̂(ft)^{-1} + B̂′(Σ̂u^𝒯)^{-1}B̂]^{-1}B̂′Σu^{-1}
 + Σu^{-1}(B̂ − B)[cov̂(ft)^{-1} + B̂′(Σ̂u^𝒯)^{-1}B̂]^{-1}B′Σu^{-1}
 + Σu^{-1}B([cov̂(ft)^{-1} + B̂′(Σ̂u^𝒯)^{-1}B̂]^{-1} − [cov(ft)^{-1} + B′Σu^{-1}B]^{-1})B′Σu^{-1}
 = L1 + L2 + L3 + L4 + L5 + L6. (B.23)
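The Sherman–Morrison–Woodbury identity behind (B.23) can be verified numerically; a minimal sketch with arbitrary illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
p, K = 8, 3
B = rng.standard_normal((p, K))
cov_f = np.diag([2.0, 1.0, 0.5])
Sigma_u = np.diag(rng.uniform(0.5, 1.5, p))
Sigma = B @ cov_f @ B.T + Sigma_u          # factor-model covariance

# Woodbury: Sigma^{-1} = Su^{-1} - Su^{-1} B [cov_f^{-1} + B' Su^{-1} B]^{-1} B' Su^{-1}
Su_inv = np.linalg.inv(Sigma_u)
G = np.linalg.inv(np.linalg.inv(cov_f) + B.T @ Su_inv @ B)
Sigma_inv = Su_inv - Su_inv @ B @ G @ B.T @ Su_inv

assert np.allclose(Sigma_inv, np.linalg.inv(Sigma))
```

Only Σu and a K × K matrix need to be inverted on the right-hand side, which is why the estimated inverse remains available even when p > T.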

The bound of L1 is given in Theorem 3.1.

Let G = [cov̂(ft)^{-1} + B̂′(Σ̂u^𝒯)^{-1}B̂]^{-1}. Then

‖L2‖ ≤ ‖(Σ̂u^𝒯)^{-1} − Σu^{-1}‖ · ‖B̂GB̂′(Σ̂u^𝒯)^{-1}‖. (B.24)

It follows from Theorem 3.1 and Lemma B.5(iii) that

P(‖L2‖ ≤ CmTK√(log p/T)) ≥ 1 − O(1/p² + 1/T²).

The same bound can be obtained in the same way for L3. For L4, we have

‖L4‖ ≤ ‖Σu^{-1}‖² · ‖B̂ − B‖ · ‖B̂‖ · ‖G‖.

It follows from Lemmas B.2, B.5(ii), and inequality (B.22) that

P(‖L4‖ ≤ C√(K(log p)/T)) ≥ 1 − O(1/p² + 1/T²).

The same bound also applies to L5. Finally,

‖L6‖ ≤ ‖B‖²‖Σu^{-1}‖²‖Â^{-1} − A^{-1}‖ ≤ ‖B‖²‖Σu^{-1}‖²‖Â − A‖·‖Â^{-1}‖·‖A^{-1}‖,

where both  and A are defined after inequality (B.20). By Lemma B.4(ii), ‖A^{-1}‖ = O(p^{-1}). Lemma B.5(ii) implies P(‖Â^{-1}‖ > Cp^{-1}) = O(p^{-2} + T^{-2}). Combining these with (B.21), we obtain

P(‖L6‖ ≤ CmTK√(log p/T)) ≥ 1 − O(1/p² + 1/T²).

The proof is completed by combining the bounds on L1–L6. Q.E.D.

APPENDIX C: PROOFS FOR SECTION 4

The proof is similar to that of Lemma 3.1, so we only sketch it briefly. The OLS estimator is given by

b̂i = (XiXi′)^{-1}Xiyi,   i ≤ p.

The same arguments as in the proof of Lemma B.1 yield, for large enough C > 0,

P(max_{i≤p} ‖b̂i − bi‖ > C√(K(log p)/T)) = O(1/p² + 1/T²),

which then implies the rate of

max_{i≤p} (1/T)Σ_{t=1}^T (uit − ûit)² ≤ max_{i≤p} ‖b̂i − bi‖² (1/T)Σ_{t=1}^T ‖fit‖².

The result then follows from a straightforward application of Theorem 2.1. Q.E.D.

Contributor Information

Jianqing Fan, Email: jqfan@princeton.edu.

Yuan Liao, Email: yuanliao@princeton.edu.

Martina Mincheva, Email: mincheva@princeton.edu.

REFERENCES

  • 1.Antoniadis A, Fan J. Regularized wavelet approximations. J. Amer. Statist. Assoc. 2001;96:939–967. [Google Scholar]
  • 2.Bai J. Inferential theory for factor models of large dimensions. Econometrica. 2003;71:135–171. [Google Scholar]
  • 3.Bai J, Ng S. Determining the number of factors in approximate factor models. Econometrica. 2002;70:191–221. [Google Scholar]
  • 4.Bickel P, Levina E. Covariance regularization by thresholding. Ann. Statist. 2008a;36:2577–2604. [Google Scholar]
  • 5.Bickel P, Levina E. Regularized estimation of large covariance matrices. Ann. Statist. 2008b;36:199–227. [Google Scholar]
  • 6.Cai T, Liu W. Adaptive thresholding for sparse covariance matrix estimation. J. Amer. Statist. Assoc. 2011;106:672–684. [Google Scholar]
  • 7.Cai T, Zhou H. Manuscript. University of Pennsylvania; 2010. Optimal rates of convergence for sparse covariance matrix estimation. [Google Scholar]
  • 8.Chamberlain G, Rothschild M. Arbitrage, factor structure and mean-variance analysis in large asset markets. Econometrica. 1983;51:1305–1324. [Google Scholar]
  • 9.Connor G, Korajczyk R. A Test for the number of factors in an approximate factor model. Journal of Finance. 1993;48:1263–1291. [Google Scholar]
  • 10.Fama E, French K. The cross-section of expected stock returns. Journal of Finance. 1992;47:427–465. [Google Scholar]
  • 11.Lam C, Fan J. Sparsistency and rates of convergence in large covariance matrix estimation. Ann. Statist. 2009;37:4254–4278. doi: 10.1214/09-AOS720. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Fan J, Fan Y, Lv J. High dimensional covariance matrix estimation using a factor model. J. Econometrics. 2008;147:186–197. [Google Scholar]
  • 13.Fan J, Zhang J, Yu K. Manuscript. Princeton University; 2008. Asset allocation and risk assessment with gross exposure constraints for vast portfolios. [Google Scholar]
  • 14.Gorman M. Some Engel curves. In: Deaton A, editor. Essays in the Theory and Measurement of Consumer Behavior in Honor of Sir Richard Stone. New York: Cambridge University Press; 1981. [Google Scholar]
  • 15.Harding M. Manuscript. Stanford University; 2009. Structural estimation of high-dimensional factor models. [Google Scholar]
  • 16.James W, Stein C. Estimation with quadratic loss. Proc. Fourth Berkeley Symp. Math. Statist. Probab; Univ. California Press; Berkeley. 1961. pp. 361–379. [Google Scholar]
  • 17.Kmenta J, Gilbert R. Estimation of seemingly unrelated regressions with autoregressive disturbances. J. Amer. Statist. Assoc. 1970;65:186–196. [Google Scholar]
  • 18.Lewbel A. The rank of demand systems: theory and nonparametric estimation. Econometrica. 1991;59:711–730. [Google Scholar]
  • 19.Merlevède F, Peligrad M, Rio E. Manuscript. Université Paris Est.; 2009. A Bernstein type inequality and moderate deviations for weakly dependent sequences. [Google Scholar]
  • 20.Rothman A, Levina E, Zhu J. Generalized thresholding of large covariance matrices. J. Amer. Statist. Assoc. 2009;104:177–186. [Google Scholar]
  • 21.Zellner A. An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. J. Amer. Statist. Assoc. 1962;57:348–368. [Google Scholar]
