Entropy. 2020 Nov 5;22(11):1258. doi: 10.3390/e22111258

Simultaneous Inference for High-Dimensional Approximate Factor Model

Yong Wang 1, Xiao Guo 2,*
PMCID: PMC7712464  PMID: 33287026

Abstract

This paper studies simultaneous inference for factor loadings in the approximate factor model. We propose a test statistic based on the maximum discrepancy measure. Taking advantage of the fact that the test statistic can be approximated by the sum of the independent random variables, we develop a multiplier bootstrap procedure to calculate the critical value, and demonstrate the asymptotic size and power of the test. Finally, we apply our result to multiple testing problems by controlling the family-wise error rate (FWER). The conclusions are confirmed by simulations and real data analysis.

Keywords: high-dimensional factor model, multiple testing, multiplier bootstrap, simultaneous inference

1. Introduction

The high-dimensional factor model is becoming increasingly important in scientific areas including finance and macroeconomics. For example, World Bank data cover two hundred countries over forty years, and the number of stocks in a portfolio allocation problem can be in the thousands, larger than or of the same order as the sample size. Due to its broad applications, much effort has been devoted to analyzing the factor model from different angles. Examples include estimation of factors and loadings for the latent factor model [1,2], covariance matrix estimation [3,4,5,6], and simultaneous inference for factor loadings of the dynamic factor model [7,8], among others.

This work focuses on simultaneous inference for the loading matrix with observed factors, which is an important issue in the analysis of approximate factor models. In gene expression genomics, for instance, it is commonly assumed that each gene is associated with only a few factors: the authors of [9] showed that several oncogenes are related to the Rb/E2F pathway rather than to any other pathway, and the authors of [10] also considered a sparse loading matrix for gene expression data. It is therefore natural to test the sparsity of the factor loadings. In the literature, some inference procedures have been proposed for latent factor models. For example, in the low-dimensional setting, the authors of [11] considered testing the homogeneity assumption, i.e., that the loadings associated with a factor are identical across all variables. The same testing problem was considered by the authors of [12] in the high-dimensional setting. As for observed factors, to the best of our knowledge, very limited work has been conducted.

Inference for the factor loadings with observed factors is not trivial: the approaches for latent factors cannot be directly applied. The major difficulty is the high dimensionality, which poses significant challenges in deriving the asymptotic null distribution of the test statistic. We propose a test statistic based on the maximum discrepancy measure. Statistics of this type are attractive in high-dimensional statistical inference problems such as model selection, simultaneous inference, and multiple testing; examples include the works in [13,14,15,16,17], among others.

We use the multiplier bootstrap procedure to obtain the critical value of our test statistic. Based on the fact that the test statistic can be approximated by a sum of independent random variables, we show that the proposed multiplier bootstrap method consistently approximates the null limiting distribution of the test statistic, and thus the testing procedure achieves the prespecified significance level asymptotically. There are related works applying the multiplier bootstrap method to high-dimensional inference; see [16,18,19], among others. However, those procedures require sparsity assumptions on the parameters and cannot be directly applied to the factor model. Compared with the works on latent factors, we require neither homogeneity constraints nor sparsity on the model, and our procedure is adaptive to the high-dimensional regime.

Another application of our procedure is the multiple testing problem. Combining the multiplier bootstrap method with the step-down procedure proposed in [17], we show that our procedure achieves strong control of the family-wise error rate (FWER). Our method is asymptotically non-conservative compared with the Bonferroni–Holm procedure, since the correlation among the test statistics is taken into account. We also point out that any procedure controlling the FWER also controls the false discovery rate [20] when there exist some true discoveries.

The rest of the paper is organized as follows. In Section 2.1, we develop the multiplier bootstrap procedure for simultaneous test of parameters for a single factor and demonstrate its asymptotic level and power. In Section 2.2, we give the result of simultaneous test of parameters for multiple factors. Section 3 discusses the multiple testing problem by combining the multiplier bootstrap procedure with the step-down method proposed by [17]. Section 4 investigates the numerical performance of the proposed test by simulations. We also conduct real data analysis on portfolio risk of S&P stocks via Fama–French model in Section 5. The proofs of the main results are given in Appendix A.

Finally, we introduce some notation. For a set $S$, let $|S|$ denote its cardinality. Let $0_p \in \mathbb{R}^p$ be the vector of zeros. For a $p\times p$ matrix $A=(a_{ij})_{i,j=1}^{p}$, denote by $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ the minimum and maximum eigenvalues of $A$, respectively. The element-wise maximum norm and the $L_2$ norm are defined as $\|A\|_{\max}=\max_{1\le i,j\le p}|a_{ij}|$ and $\|A\|=\lambda_{\max}^{1/2}(A^\top A)$, respectively. For $a=(a_1,\ldots,a_p)^\top\in\mathbb{R}^p$ and $q>0$, denote $\|a\|_q=(\sum_{i=1}^{p}|a_i|^q)^{1/q}$ and $\|a\|_\infty=\max_{1\le j\le p}|a_j|$. Let $v_i\in\mathbb{R}^K$ be the $i$th column of the $K\times K$ identity matrix. We write $a_t\lesssim b_t$ if $a_t$ is smaller than or equal to $b_t$ up to a universal positive constant. For $a,b\in\mathbb{R}$, we write $a\vee b=\max\{a,b\}$. For two sets $A$ and $B$, $A\triangle B$ denotes their symmetric difference, that is, $A\triangle B=(A\setminus B)\cup(B\setminus A)$.

2. Methodology

2.1. Simultaneous Test for a Single Factor

We consider the factor model defined as follows,

$y_{it}=b_i^\top f_t+u_{it},\quad i=1,\ldots,p\ \text{and}\ t=1,\ldots,T,$ (1)

where $y_{it}$ is the observed response for the $i$th variable at time $t$, $b_i\in\mathbb{R}^K$ is the unknown vector of factor loadings, $f_t\in\mathbb{R}^K$ is the observed vector of common factors, and $u_{it}$ is the latent error. Here, $K$ is a fixed integer denoting the number of factors, $p$ is the number of variables, and $T$ denotes the sample size. Model (1) is commonly used in finance and macroeconomics; see, e.g., [3,4,21], among others.

Denote $B=(b_1,\ldots,b_p)^\top$, $y_t=(y_{1t},\ldots,y_{pt})^\top$, and $u_t=(u_{1t},\ldots,u_{pt})^\top$; then model (1) can be re-expressed as

$y_t=Bf_t+u_t.$ (2)

We first focus on testing the coefficients $b_{ik}=b_i^\top v_k$ corresponding to a single factor, i.e., the $k$th factor. Specifically, given $k=1,\ldots,K$, we consider the following simultaneous testing problem,

$H_{0,G}: b_{ik}=b_{ik}^{\mathrm{null}}\ \text{for all}\ i\in G\quad\text{versus}\quad H_{1,G}: b_{ik}\ne b_{ik}^{\mathrm{null}}\ \text{for some}\ i\in G,$ (3)

where $G$ is a subset of $\{1,\ldots,p\}$ and the $b_{ik}^{\mathrm{null}}$ are prespecified values. For example, if $b_{ik}^{\mathrm{null}}=0$, the hypotheses test whether the variables with indices in $G$ are significantly associated with the $k$th factor. Throughout the paper, $|G|$ is allowed to grow as fast as $p$, which may grow exponentially with $T$ as in Assumption 3.

The ordinary least squares (OLS) estimator $\hat B=Y^\top F(F^\top F)^{-1}$ is applied to estimate $B$, where $Y=(y_1,\ldots,y_T)^\top$ and $F=(f_1,\ldots,f_T)^\top$. Therefore,

$\hat B-B=\Big(\sum_{t=1}^T u_t f_t^\top\Big)\Big(\sum_{t=1}^T f_t f_t^\top\Big)^{-1}.$ (4)

We propose the following test statistic for H0,G,

$M_{T,k}=\max_{i\in G}\sqrt{T}\,\big|\hat b_{ik}-b_{ik}^{\mathrm{null}}\big|,$

where $(\hat b_{ik})_{i\le p,\,k\le K}=\hat B$. For each $i\in G$, the asymptotic normality of $\hat b_{ik}$ follows directly from the central limit theorem. However, when $|G|$ diverges with $p$, it is very challenging to establish the existence of a limiting distribution of $M_{T,k}$. To approximate the asymptotic distribution of $M_{T,k}$, we will use the multiplier bootstrap method. From (4), we know that

$\sqrt{T}(\hat b_{ik}-b_{ik})=\frac{1}{\sqrt{T}}\sum_{t=1}^T u_{it}f_t^\top\hat\Omega_f v_k=\frac{1}{\sqrt{T}}\sum_{t=1}^T\hat\xi_{it},$ (5)

where $\hat\xi_{it}=u_{it}f_t^\top\hat\Omega_f v_k$ and $\hat\Omega_f=\big(\sum_{t=1}^T f_t f_t^\top/T\big)^{-1}$.

In order to apply the multiplier bootstrap procedure, we need to approximate $\sum_{t=1}^T\hat\xi_{it}/\sqrt{T}$ by a sum of independent random variables. As $\hat\Omega_f$ is consistent for $\Omega_f=\{E(f_t f_t^\top)\}^{-1}$, we can replace the former with the latter in $\hat\xi_{it}$ and define $\xi_{it}=u_{it}f_t^\top\Omega_f v_k$. Then, for each $i\in G$, $\{\xi_{it}\}_{t\ge1}$ are i.i.d. and $\sum_{t=1}^T\xi_{it}/\sqrt{T}$ well approximates $\sum_{t=1}^T\hat\xi_{it}/\sqrt{T}$.

We then apply the multiplier bootstrap procedure to approximate the distribution of $\max_{i\in G}|\sum_{t=1}^T\xi_{it}|/\sqrt{T}$. Denote by $\Sigma_u=(\sigma_{ij})_{p\times p}$ the covariance matrix of $u_t$, so that $\mathrm{cov}(\xi_{it},\xi_{jt})=\Omega_f(k,k)\sigma_{ij}$, where $\Omega_f(k,k)=v_k^\top\Omega_f v_k$. We know that $\hat\Omega_f(k,k)=v_k^\top\hat\Omega_f v_k$ is $\sqrt{T}$-consistent for $\Omega_f(k,k)$. To estimate $\sigma_{ij}$, we first calculate the residuals

$\hat u_{it}=y_{it}-\hat b_i^\top f_t.$

Denote $\hat u_t=(\hat u_{1t},\ldots,\hat u_{pt})^\top$; then the error covariance matrix is estimated by

$\hat\Sigma_u=\frac{1}{T}\sum_{t=1}^T\hat u_t\hat u_t^\top=(\hat\sigma_{ij})_{i\le p,\,j\le p}.$
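As a concrete illustration, the estimation steps above (the OLS loadings, the residuals, and the error covariance estimate) can be sketched in a few lines of NumPy; the function and array names are our own, not from the paper:

```python
import numpy as np

def fit_factor_model(Y, F):
    """OLS estimation for the factor model y_t = B f_t + u_t.

    Y : (T, p) array of observed responses y_{it}.
    F : (T, K) array of observed factors f_t.
    Returns B_hat (p, K), the residuals U_hat (T, p), and the
    error covariance estimate Sigma_u_hat = (1/T) sum_t u_hat_t u_hat_t'.
    """
    T = Y.shape[0]
    # B_hat' = (F'F)^{-1} F'Y, equivalently B_hat = Y'F (F'F)^{-1}
    B_hat = np.linalg.solve(F.T @ F, F.T @ Y).T
    U_hat = Y - F @ B_hat.T          # u_hat_{it} = y_{it} - b_hat_i' f_t
    Sigma_u_hat = U_hat.T @ U_hat / T
    return B_hat, U_hat, Sigma_u_hat
```

Solving the normal equations with `np.linalg.solve` avoids forming $(F^\top F)^{-1}$ explicitly, which is both faster and numerically safer.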

Let $\{e_t\}_{t=1}^T$, a sequence of i.i.d. $N(0,1)$ variables independent of $\{y_t,f_t\}_{t=1}^T$, be the multiplier random variables. The multiplier bootstrap statistic is then defined as

$W_{T,k}=\max_{i\in G}T^{-1/2}\,\hat\Omega_f^{1/2}(k,k)\,\Big|\sum_{t=1}^T\hat u_{it}e_t\Big|.$

Conditional on $\{y_t,f_t\}_{t=1}^T$, the covariance of $T^{-1/2}\hat\Omega_f^{1/2}(k,k)\sum_{t=1}^T\hat u_{it}e_t$ and $T^{-1/2}\hat\Omega_f^{1/2}(k,k)\sum_{t=1}^T\hat u_{jt}e_t$ is $\hat\Omega_f(k,k)\hat\sigma_{ij}$, which approximates the covariance between $\xi_{it}$ and $\xi_{jt}$ sufficiently well. The bootstrap critical value can then be obtained via

$c_{W_{T,k}}(\alpha)=\inf\{t\in\mathbb{R}: P(W_{T,k}\le t\mid(Y,F))\ge 1-\alpha\}.$

In practice, $c_{W_{T,k}}(\alpha)$ is computed by generating $\{e_t\}_{t=1}^T$ repeatedly; in our simulations and real data analysis, we use 500 bootstrap replications. We now present some technical assumptions.
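The bootstrap step just described can be sketched as follows; the function name, the fixed seed, and the $\hat\Omega_f^{1/2}(k,k)$ scaling convention are our own choices for illustration:

```python
import numpy as np

def bootstrap_critical_value(U_hat, omega_f_kk, alpha=0.05, n_boot=500, seed=0):
    """Multiplier bootstrap critical value c_{W_{T,k}}(alpha).

    U_hat      : (T, |G|) residual matrix restricted to the indices in G.
    omega_f_kk : estimate of Omega_f(k,k) = v_k' Omega_f v_k.
    Returns the empirical (1 - alpha)-quantile of
        W = max_i  T^{-1/2} * omega_f_kk^{1/2} * |sum_t u_hat_{it} e_t|.
    """
    rng = np.random.default_rng(seed)
    T = U_hat.shape[0]
    W = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.standard_normal(T)                 # multipliers e_t ~ N(0, 1)
        W[b] = np.sqrt(omega_f_kk / T) * np.max(np.abs(e @ U_hat))
    return np.quantile(W, 1 - alpha)
```

The null is then rejected whenever the observed max statistic exceeds the returned quantile.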

Assumption 1.

  • (i)

$\{f_t,u_t\}_{t\ge1}$ are i.i.d. with $E(u_t)=0_p$ and $\Sigma_u=\mathrm{cov}(u_t)$.

  • (ii)

There exist constants $c_1,c_2$ such that $0<c_1<\lambda_{\min}(\Sigma_u)\le\lambda_{\max}(\Sigma_u)<c_2<\infty$.

  • (iii)

$\{u_t\}_{t\ge1}$ and $\{f_t\}_{t\ge1}$ are independent.

Assumption 2.

There exist positive constants $r_1,r_2,b_1,b_2$ such that for any $s>0$, $t\le T$, $i\le p$, and $j\le K$,

$P(|u_{it}|>s)\le\exp\{-(s/b_1)^{r_1}\},\qquad P(|f_{jt}|>s)\le\exp\{-(s/b_2)^{r_2}\}.$

The i.i.d. condition in Assumption 1 is commonly imposed in the literature on high-dimensional inference; see, e.g., [16]. Assumption 1 (ii) requires bounded eigenvalues of the error covariance matrix. As noted in [22], this assumption is satisfied in two typical situations: (1) $\mathrm{cov}(U_1,\ldots,U_p)$, where $\{U_i,i\ge1\}$ is a stationary ergodic process with spectral density $f$ satisfying $0<c_1<f<c_2<\infty$; and (2) $\mathrm{cov}(X_1,\ldots,X_p)$, where $X_i=U_i+V_i$, $i=1,\ldots,p$, $\{U_i\}$ is a stationary process as above, and $\{V_i\}$ is a noise process independent of $\{U_i\}$. In Example 1 of [22], it is demonstrated that an ARMA(r,q) process satisfies Assumption 1 (ii). Furthermore, this assumption is commonly imposed in the literature; see, e.g., [4,15].

Assumption 2 allows the application of large deviation theory to $(1/T)\sum_{t=1}^T u_{it}u_{jt}-\sigma_{ij}$ and $(1/T)\sum_{t=1}^T u_{it}f_{jt}$. In this paper, we assume that $f_t$ and $u_t$ have exponential-type tails. Let $\gamma_1^{-1}=3r_1^{-1}$ and $\gamma_2^{-1}=1.5r_1^{-1}+1.5r_2^{-1}$.

Assumption 3.

Suppose $\gamma_1<1$, $\gamma_2<1$, and there exists a constant $c_1>0$ such that $(\log p)^{\gamma}=o(T)$, where $\gamma=\max\{2/\gamma_1-1,\ 2/\gamma_2-1,\ 7+c_1\}$.

Assumption 4.

There exists a constant $C>0$ such that $\lambda_{\max}(\Omega_f)<C$.

Assumption 3 is needed for the Bernstein-type inequality [23] and is commonly assumed in the literature on Gaussian approximation theory. Assumption 4 is also reasonable, as it merely bounds the eigenvalues of $\Omega_f$.

Theorem 1.

Under Assumptions 1–4, we have

$\sup_{\alpha\in(0,1)}\Big|P\Big(\max_{i\in G}\sqrt{T}\,|\hat b_{ik}-b_{ik}|>c_{W_{T,k}}(\alpha)\Big)-\alpha\Big|=o(1).$

Theorem 1 demonstrates that the multiplier bootstrap critical value cWT,k(α) well approximates the quantile of the test statistic. It is worth mentioning that our method does not require any sparsity assumption on either Σu or B.

The proof of Theorem 1 rests on two results: (1) $\max_{i\in G}|\sum_{t=1}^T\hat\xi_{it}|/\sqrt{T}$ is sufficiently close to $\max_{i\in G}|\sum_{t=1}^T\xi_{it}|/\sqrt{T}$, and (2) the covariances of $\xi_{it}$ and $\xi_{jt}$ are well approximated by their bootstrap versions. The first result is demonstrated in Lemma A7: there exist $\zeta_1>0$ and $\zeta_2>0$ such that

$P\Big(\Big|\max_{i\in G}\Big|\sum_{t=1}^T\hat\xi_{it}\Big|\Big/\sqrt{T}-\max_{i\in G}\Big|\sum_{t=1}^T\xi_{it}\Big|\Big/\sqrt{T}\Big|>\zeta_1\Big)<\zeta_2,$

where $\zeta_1\sqrt{1\vee\log(p/\zeta_1)}=o(1)$ and $\zeta_2=o(1)$. The second result is shown in Lemma A6:

$\Delta=\max_{1\le i,j\le p}\big|\hat\Omega_f(k,k)\hat\sigma_{ij}-\Omega_f(k,k)\sigma_{ij}\big|=o_P\big((\log p)^{-2}\big),$

i.e., the maximum discrepancy between the empirical and population covariances converges to zero.

Based on Theorem 1, for a given significance level 0<α<1, we define the test Φα by

$\Phi_\alpha=I\big(M_{T,k}>c_{W_{T,k}}(\alpha)\big).$ (6)

The hypothesis H0,G is rejected whenever Φα=1.

The bootstrap is a commonly used resampling method, and a full treatment can be found in [24]. There are many variants of the bootstrap, for example, the wild bootstrap, the empirical bootstrap, and the multiplier bootstrap, among others. As discussed in [25], other exchangeable bootstrap methods are asymptotically equivalent to the multiplier bootstrap. As our test statistic can be approximated by the maximum of a sum of independent random vectors, we adopt the multiplier bootstrap method of [25], which is based on Gaussian approximation.

Alternatively, we propose the studentized statistic $M_{T,k}^*:=\max_{i\in G}\sqrt{T}\,|\hat b_{ik}-b_{ik}^{\mathrm{null}}|/\sqrt{\hat\omega_{ii}}$ for $H_{0,G}$, where $\hat\omega_{ii}=\hat\Omega_f(k,k)\hat\sigma_{ii}$. Analogously, we define the multiplier bootstrap statistic as

$W_{T,k}^*=\max_{i\in G}T^{-1/2}\,\hat\Omega_f^{1/2}(k,k)\Big|\sum_{t=1}^T\hat u_{it}e_t\Big|\Big/\sqrt{\hat\omega_{ii}}=\max_{i\in G}T^{-1/2}\,\hat\sigma_{ii}^{-1/2}\Big|\sum_{t=1}^T\hat u_{it}e_t\Big|,$

where $\{e_t\}_{t=1}^T\stackrel{\text{i.i.d.}}{\sim}N(0,1)$ are independent of $\{y_t,f_t\}_{t=1}^T$. The bootstrap critical value can then be obtained via

$c_{W_{T,k}^*}(\alpha)=\inf\{t\in\mathbb{R}: P(W_{T,k}^*\le t\mid(Y,F))\ge 1-\alpha\}.$

Theorem 2 below justifies the validity of the bootstrap procedure for the studentized statistic.

Theorem 2.

Under the assumptions in Theorem 1, we have

$\sup_{\alpha\in(0,1)}\Big|P\Big(\max_{i\in G}\sqrt{T}\,|\hat b_{ik}-b_{ik}|/\sqrt{\hat\omega_{ii}}>c_{W_{T,k}^*}(\alpha)\Big)-\alpha\Big|=o(1).$

Based on this result, for a given significance level 0<α<1, we define the test Φα* by

$\Phi_\alpha^*=I\big(M_{T,k}^*>c_{W_{T,k}^*}(\alpha)\big).$

The hypothesis H0,G is rejected whenever Φα*=1.

For the studentized statistic, we can derive its asymptotic distribution. By Lemma 6 in [15], for any $x\in\mathbb{R}$ and as $p\to\infty$, we have

$P\Big(\max_{1\le i\le p}T\,|\hat b_{ik}-b_{ik}|^2/\hat\omega_{ii}-2\log p+\log\log p\le x\Big)\to\exp\Big\{-\frac{1}{\sqrt{\pi}}\exp\Big(-\frac{x}{2}\Big)\Big\}.$

However, the above alternative testing procedure may not perform well in practice, because it requires diverging p, and the convergence rate is typically slow.
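For reference, the critical value implied by the extreme value limit above has a closed form; the following small helper (our own, assuming the stated Gumbel-type limit) makes the comparison with the bootstrap concrete:

```python
import numpy as np

def ex_critical_value(p, alpha=0.05):
    """Threshold for max_i T (b_hat_ik - b_ik)^2 / omega_hat_ii based on the
    limit P(max - 2 log p + log log p <= x) -> exp{-(1/sqrt(pi)) e^{-x/2}}.

    Setting exp{-(1/sqrt(pi)) e^{-x/2}} = 1 - alpha and solving for x gives
    x_alpha = -2 log(-sqrt(pi) log(1 - alpha)); the threshold then restores
    the centering 2 log p - log log p.
    """
    x_alpha = -2.0 * np.log(-np.sqrt(np.pi) * np.log(1.0 - alpha))
    return 2.0 * np.log(p) - np.log(np.log(p)) + x_alpha
```

The EX test rejects when the squared studentized maximum exceeds this threshold; as noted above, the approximation is only reliable when $p$ is large.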

In contrast to the extreme value approach, our testing procedure explicitly accounts for the effect of $|G|$, in the sense that the bootstrap critical value $c_{W_{T,k}^*}(\alpha)$ depends on $G$. Therefore, our approach is more robust to changes in $|G|$.

Next, we turn to the asymptotic power analysis of the above procedure. Denote by $B_k$ the $k$th column of $B$. Below we focus on the case where $|G|\to\infty$ as $T\to\infty$. Define the separation set

$\mathcal{U}_G(c)=\Big\{(b_{1k},\ldots,b_{pk})^\top:\max_{i\in G}|b_{ik}-b_{ik}^{\mathrm{null}}|/\sqrt{\omega_{ii}}>c\sqrt{\log(|G|)/T}\Big\},$ (7)

where $\omega_{ij}=\Omega_f(k,k)\sigma_{ij}$. Let $\Theta=(\theta_{ij})_{i,j=1}^p$ with $\theta_{ij}=\omega_{ij}/\sqrt{\omega_{ii}\omega_{jj}}=\sigma_{ij}/\sqrt{\sigma_{ii}\sigma_{jj}}$, which is the correlation matrix of $u_t$.

Assumption 5.

Suppose $\max_{1\le i\ne j\le p}|\theta_{ij}|\le c_0<1$ for some constant $c_0$.

Theorem 3.

Under Assumptions 1–5, for any $\varepsilon_0>0$, we have

$\inf_{B_k\in\,\mathcal{U}_G(\sqrt{2}+\varepsilon_0)}P\Big(\max_{i\in G}\sqrt{T}\,|\hat b_{ik}-b_{ik}^{\mathrm{null}}|/\sqrt{\hat\omega_{ii}}>c_{W_{T,k}^*}(\alpha)\Big)\to1.$

As long as one entry of $b_{ik}-b_{ik}^{\mathrm{null}}$ has magnitude larger than $(\sqrt{2}+\varepsilon_0)\sqrt{\log|G|/T}$, our bootstrap-assisted test rejects the null correctly. The procedure is therefore powerful even against sparse alternatives in which only a single loading deviates from the null. According to Section 3.2 of [26], the separation rate $(\sqrt{2}+\varepsilon_0)\sqrt{\log(|G|)/T}$ is minimax optimal under suitable assumptions.

2.2. Simultaneous Test for Multiple Factors

In this section, we test the elements of the loading matrix corresponding to different factors. The testing problem can be stated as follows,

$H_{0,G^*}: b_{ik}=b_{ik}^{\mathrm{null}}\ \text{for all}\ (i,k)\in G^*\quad\text{versus}\quad H_{1,G^*}: b_{ik}\ne b_{ik}^{\mathrm{null}}\ \text{for some}\ (i,k)\in G^*,$

where $G^*$ is a subset of $\mathcal{M}\equiv\{(i,j): i=1,\ldots,p\ \text{and}\ j=1,\ldots,K\}$. Define

$\omega^*_{(i,k),(j,\ell)}=\mathrm{cov}\big(u_{it}f_t^\top\Omega_f v_k,\ u_{jt}f_t^\top\Omega_f v_\ell\big)=\sigma_{ij}\,v_k^\top\Omega_f v_\ell,\qquad \hat\omega^*_{(i,k),(j,\ell)}=(\hat\Omega_f v_k)^\top\Big(\frac{1}{T}\sum_{t=1}^T\hat u_{it}\hat u_{jt}f_t f_t^\top\Big)(\hat\Omega_f v_\ell).$

We propose the studentized test statistic

$M_{T,G^*}=\max_{(i,k)\in G^*}\sqrt{T}\,|\hat b_{ik}-b_{ik}^{\mathrm{null}}|\Big/\sqrt{\hat\omega^*_{(i,k),(i,k)}}.$

From the linear expansion in (5), the multiplier bootstrap statistic is defined as

$W_{T,G^*}=\max_{(i,k)\in G^*}T^{-1/2}\Big|\sum_{t=1}^T\hat u_{it}f_t^\top\hat\Omega_f v_k\,e_t\Big|\Big/\sqrt{\hat\omega^*_{(i,k),(i,k)}},$

where $\{e_t\}_{t=1}^T\stackrel{\text{i.i.d.}}{\sim}N(0,1)$ are independent of $\{y_t,f_t\}_{t=1}^T$. The bootstrap critical value can then be obtained via

$c_{W_{T,G^*}}(\alpha)=\inf\{t\in\mathbb{R}: P(W_{T,G^*}\le t\mid(Y,F))\ge1-\alpha\}.$

Let $\gamma_3^{-1}=4r_1^{-1}+4r_2^{-1}$, $r_3^{-1}=3r_1^{-1}+9r_2^{-1}$, and $r=\max\{2/\gamma_3-1,\ 2/r_3-1,\ c_1+7\}$ for a constant $c_1>0$.

Theorem 4.

Suppose $(\log p)^{r}=o(T)$. Then, under Assumptions 1, 2, and 4, we have

$\sup_{\alpha\in(0,1)}\Big|P\Big(\max_{(i,k)\in G^*}\sqrt{T}\,|\hat b_{ik}-b_{ik}|\Big/\sqrt{\hat\omega^*_{(i,k),(i,k)}}>c_{W_{T,G^*}}(\alpha)\Big)-\alpha\Big|=o(1).$

Based on Theorem 4, for a given significance level 0<α<1, we define the test Φα(G*) by

$\Phi_\alpha(G^*)=I\big(M_{T,G^*}>c_{W_{T,G^*}}(\alpha)\big).$

The hypothesis H0,G* is rejected whenever Φα(G*)=1.

Now we turn to the power analysis of the test $\Phi_\alpha(G^*)$. As in Section 2.1, we focus on the case where $|G^*|\to\infty$ as $T\to\infty$ and define the separation set

$\mathcal{V}_{G^*}(c)=\Big\{(b_{ik})_{i\le p,k\le K}:\max_{(i,k)\in G^*}|b_{ik}-b_{ik}^{\mathrm{null}}|\Big/\sqrt{\omega^*_{(i,k),(i,k)}}>c\sqrt{\log(|G^*|)/T}\Big\}.$

Let $\theta^*_{(i,k),(j,\ell)}=\omega^*_{(i,k),(j,\ell)}\Big/\sqrt{\omega^*_{(i,k),(i,k)}\omega^*_{(j,\ell),(j,\ell)}}$. We consider the following condition.

Assumption 6.

Suppose $\max_{(i,k)\ne(j,\ell)}|\theta^*_{(i,k),(j,\ell)}|\le c_0^*<1$ for some constant $c_0^*$.

The asymptotic power of the testing procedure is given as follows.

Theorem 5.

Under the assumptions in Theorem 4 and Assumption 6, for any $\varepsilon_0>0$, we have

$\inf_{B\in\,\mathcal{V}_{G^*}(\sqrt{2}+\varepsilon_0)}P\Big(\max_{(i,k)\in G^*}\sqrt{T}\,|\hat b_{ik}-b_{ik}^{\mathrm{null}}|\Big/\sqrt{\hat\omega^*_{(i,k),(i,k)}}>c_{W_{T,G^*}}(\alpha)\Big)\to1.$

3. Multiple Testing with Strong FWER Control

In this section, we study the following multiple testing problem,

$H_{0,i}: b_{ij}\le b_{ij}^{\mathrm{null}}\quad\text{versus}\quad H_{1,i}: b_{ij}>b_{ij}^{\mathrm{null}},\qquad\text{for all}\ i\in G.$

For simplicity, we set $G=\{1,2,\ldots,p\}$ and let $j$ be fixed. We combine the bootstrap-assisted procedure with the step-down method proposed in [17]. Our method can be seen as a special case of Section 5 in [25]. Note that this framework also covers the case of testing equalities ($H_{0,i}: b_{ij}=b_{ij}^{\mathrm{null}}$), because an equality can be rewritten as a pair of inequalities.

We briefly illustrate the control of the FWER; full details and theory can be found in [25]. Let $\Omega$ be the space of all data-generating processes and $\omega$ the true process. Each null hypothesis $H_{0,i}$ is equivalent to $\omega\in\Omega_i$ for some $\Omega_i\subseteq\Omega$. For any $\eta\subseteq G$, denote $\Omega_\eta=\big(\cap_{i\in\eta}\Omega_i\big)\cap\big(\cap_{i\notin\eta}\Omega_i^c\big)$ with $\Omega_i^c=\Omega\setminus\Omega_i$. Strong control of the FWER means that

$\sup_{\eta\subseteq G}\ \sup_{\omega\in\Omega_\eta}P_\omega\big(\text{reject at least one hypothesis }H_{0,i},\ i\in\eta\big)\le\alpha+o(1),$ (8)

where Pω denotes the probability distribution under the data-generating process ω.

For $i=1,\ldots,p$, denote $t_{ij}=\sqrt{T}(\hat b_{ij}-b_{ij}^{\mathrm{null}})$. For a subset $\eta\subseteq G$, let $c_\eta(\alpha)$ be the bootstrap estimate of the $(1-\alpha)$-quantile of $\max_{i\in\eta}t_{ij}$. The step-down procedure of [17] is as follows. At step 1, set $\eta(1)=G$ and reject all $H_{0,i}$ satisfying $t_{ij}>c_{\eta(1)}(\alpha)$; if no $H_{0,i}$ is rejected, stop. At step $\ell$, let $\eta(\ell)\subseteq G$ be the set of indices of the hypotheses not rejected at earlier steps, and reject all $H_{0,i}$, $i\in\eta(\ell)$, satisfying $t_{ij}>c_{\eta(\ell)}(\alpha)$; if no hypothesis is rejected, stop. Proceed in this way until the algorithm stops.
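The loop just described can be sketched as follows; `crit_value` stands in for the multiplier bootstrap quantile routine, and the dictionary representation of the statistics $t_{ij}$ is our own convention:

```python
def step_down(t_stats, crit_value, alpha=0.05):
    """Romano-Wolf step-down procedure with a user-supplied critical value.

    t_stats    : dict {i: t_i} of test statistics, one per hypothesis H_{0,i}.
    crit_value : callable(active_set, alpha) returning the bootstrap estimate
                 of the (1 - alpha)-quantile of max_{i in active_set} t_i.
    Returns the set of rejected hypothesis indices.
    """
    active = set(t_stats)              # eta(1) = G
    rejected = set()
    while active:
        c = crit_value(frozenset(active), alpha)
        newly = {i for i in active if t_stats[i] > c}
        if not newly:                  # stop when nothing new is rejected
            break
        rejected |= newly
        active -= newly                # eta(l+1): hypotheses not yet rejected
    return rejected
```

Monotonicity of `crit_value` in the active set, property (9), is what guarantees the FWER control (10).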

Romano and Wolf [17] proved the following result:

$c_{\eta'}(\alpha)\le c_{\eta}(\alpha)\quad\text{for}\ \eta'\subseteq\eta,$ (9)

$\sup_{\eta\subseteq G}\ \sup_{\omega\in\Omega_\eta}P_\omega\Big(\max_{i\in\eta}t_{ij}>c_\eta(\alpha)\Big)\le\alpha+o(1).$ (10)

Therefore, the step-down method together with the multiplier bootstrap provides strong control of the FWER once (9) and (10) are verified. The theoretical result is given in the proposition below; the proofs are similar to those of Theorem 1 and are omitted here.

Proposition 1.

Under the assumptions in Theorem 1, the step-down procedure with the bootstrap critical value cη(α) satisfies (8).

Our multiple testing method has the following two important features: (i) It can be applied to models with an increasing dimension; (ii) It takes into account the correlation amongst statistics and hence is asymptotically non-conservative.

In the simulations, we also consider the Benjamini–Hochberg procedure [20] to control the false discovery rate (FDR), which is summarized as follows. For each of $H_{0,1},\ldots,H_{0,p}$, we calculate the p-values $P_1,\ldots,P_p$ based on the studentized test statistic. Let $P_{(1)}\le\cdots\le P_{(p)}$ be the ordered p-values, and denote by $H_{0,(i)}$ the null hypothesis corresponding to $P_{(i)}$. Let $k=\max\{i: P_{(i)}\le i\alpha/p\}$, and reject all $H_{0,(i)}$ for $i=1,\ldots,k$.
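The Benjamini–Hochberg step described above amounts to a few lines (a generic sketch of the standard procedure, not code from the paper):

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg procedure: reject H_{0,(i)} for i = 1, ..., k,
    where k = max{i : P_(i) <= i * alpha / p}.

    Returns a boolean array marking which hypotheses are rejected.
    """
    p_values = np.asarray(p_values, dtype=float)
    p = p_values.size
    order = np.argsort(p_values)
    below = p_values[order] <= np.arange(1, p + 1) * alpha / p
    reject = np.zeros(p, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest i with P_(i) <= i*alpha/p
        reject[order[: k + 1]] = True
    return reject
```

Note that all hypotheses up to the largest qualifying index are rejected, even if some intermediate ordered p-values exceed their thresholds.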

4. Simulation Study

This section examines the performance of the proposed testing procedure via a simulation study. We fix the number of factors $K=3$ and the sample size $T\in\{200,400\}$, and let the dimensionality $p$ increase from 50 to 600. Throughout the simulation, we consider testing the first column of $B$ and use 500 multiplier bootstrap replications.

Each row of $B$ is generated independently from $N(0,I_K)$, where $I_K$ is the $K\times K$ identity matrix. Let $\mathrm{cov}(f_t)=(\sigma^f_{ij})_{K\times K}$ with $\sigma^f_{ij}=0.6^{|i-j|}$. We consider two models for the covariance structure $\Sigma_u$.

  • (a)

Model 1 (sparse): $\Omega_u=(\omega_{ij})_{1\le i,j\le p}$, where $\omega_{ii}=1$, $\omega_{ij}=0.8$ for $2(k-1)+1\le i\ne j\le 2k$ with $k=1,\ldots,[p/2]$, and $\omega_{ij}=0$ otherwise. $\Sigma_u=\Omega_u^{-1}$.

  • (b)

Model 2 (non-sparse): $\Sigma_u=(\sigma_{ij})_{1\le i,j\le p}$, where $\sigma_{ii}=1$ and $\sigma_{ij}=0.5$ for $i\ne j$.

Under each model, $\{f_t\}_{t=1}^T$ and $\{u_t\}_{t=1}^T$ are generated independently from $N(0,\mathrm{cov}(f_t))$ and $N(0,\Sigma_u)$, respectively.
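The two designs above can be generated as follows (our own reconstruction of the stated covariance structures; function names are illustrative):

```python
import numpy as np

def sigma_u_model1(p, rho=0.8):
    """Model 1 (sparse): Omega_u is block-diagonal with 2x2 blocks
    [[1, rho], [rho, 1]]; Sigma_u = Omega_u^{-1}."""
    omega = np.eye(p)
    for k in range(p // 2):
        omega[2 * k, 2 * k + 1] = omega[2 * k + 1, 2 * k] = rho
    return np.linalg.inv(omega)

def sigma_u_model2(p, rho=0.5):
    """Model 2 (non-sparse): unit variances and constant correlation rho."""
    return (1.0 - rho) * np.eye(p) + rho * np.ones((p, p))

def factor_cov(K, rho=0.6):
    """cov(f_t) with sigma^f_{ij} = rho^{|i - j|}."""
    idx = np.arange(K)
    return rho ** np.abs(idx[:, None] - idx[None, :])
```

Drawing $\{f_t\}$ and $\{u_t\}$ then reduces to calls to `rng.multivariate_normal` with these matrices.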

We calculate the empirical sizes of the test for each column of $B$ under each model by considering hypothesis (3) with $G=\{1,2,\ldots,p\}$ and $b_{ik}^{\mathrm{null}}$ being the true value of $b_{ik}$. The results are summarized in Table 1, where “NST” and “ST” denote the non-studentized and studentized bootstrap-based tests, respectively, and “EX” denotes the test using the extreme value distribution. The estimated sizes of the three tests are reasonably close to the nominal level 0.05 for values of $p$ ranging from 50 to 600.

Table 1.

Empirical sizes of tests, α=0.05, T=400, and 500 replications.

p = 50 p = 100 p = 200 p = 400 p = 600
Model 1
NST 0.076 0.060 0.058 0.050 0.058
ST 0.074 0.064 0.064 0.058 0.078
EX 0.046 0.046 0.038 0.046 0.058
Model 2
NST 0.050 0.052 0.056 0.060 0.038
ST 0.070 0.058 0.064 0.068 0.048
EX 0.038 0.030 0.024 0.018 0.016

For all $i\in G$, by varying $b_{ik}=b_{ik}^{\mathrm{null}}+c\ell/40$ with $c=\pm0.8$ and $\ell=0,\ldots,10$, we plot the empirical powers of $M_{T,k}$ and $M_{T,k}^*$ in Figure 1. For ease of presentation, we only consider $p\in\{10,200,600\}$; the results for other dimensionalities are similar in spirit and are not presented here. For all tests, the significance level is fixed at $\alpha=0.05$. From Figure 1, we can see that the empirical rejection rate grows from the nominal level to one as the deviation $c\ell/40$ moves away from zero. The difference between the NST and ST tests is slight. For small $p$, the EX test does not perform well because that approach requires diverging $p$. Furthermore, for a non-sparse error covariance matrix, our method performs better than the EX method. These numerical results confirm our theoretical analysis.

Figure 1. Empirical powers of the NST, ST, and EX methods. The figures in the left panels are based on Model 1, while those in the right panels are for Model 2. The red solid line corresponds to the nominal level. (a) p=10, (b) p=10, (c) p=200, (d) p=200, (e) p=600, (f) p=600.

Next, we study the numerical performance of the step-down method in Section 3 and compare it with the Bonferroni–Holm procedure. Consider the following two-sided multiple testing problem: $H_{0,i}: b_{ij}=\tilde b_{ij}^{\mathrm{null}}$ for all $i=1,2,\ldots,p$ with $j=1$. For Models 1 and 2, the first $s_0$ entries of $\{\tilde b_{ij}^{\mathrm{null}}\}_{i=1}^p$ are $b_{ij}^{\mathrm{null}}+0.5$ and $b_{ij}^{\mathrm{null}}+0.35$, respectively, and the rest equal $b_{ij}^{\mathrm{null}}$. We set $T\in\{200,400\}$ and $p\in\{50,200,500,600\}$.

We employ both the step-down method based on the studentized/non-studentized test statistic, and the Bonferroni–Holm procedure (based on the studentized test statistic) to control the FWER. We denote these three procedures by NST-FWER, ST-FWER, and BH-FWER, respectively. For comparison, we also consider using Benjamini–Hochberg procedure to control FDR. We denote this procedure by BH-FDR. Based on 500 replications, we calculate the average empirical FWER

$\text{Average}\big\{I\{\text{at least one hypothesis }H_{0,i}\text{ is rejected},\ i\in\{s_0+1,\ldots,p\}\}\big\}$

for methods NST-FWER, ST-FWER, and BH-FWER, the average empirical FDR

$\text{Average}\Big\{\sum_{i\notin S_0}I\{H_{0,i}\text{ is rejected}\}\Big/\sum_{i\in G}I\{H_{0,i}\text{ is rejected}\}\Big\}$

for method BH-FDR, and the average empirical power

$\text{Average}\Big\{\sum_{i\in S_0}I\{H_{0,i}\text{ is rejected}\}\Big/s_0\Big\}$

for all four methods, where $S_0=\{1,2,\ldots,s_0\}$ and $G=\{1,\ldots,p\}$. Under each model, we consider the cases $s_0=3$ and $s_0=15$. Tables 2 and 3 report the empirical FWER, FDR, and average power. From Tables 2 and 3, the proposed and Bonferroni–Holm procedures provide similar control of the FWER, and the Benjamini–Hochberg procedure controls the FDR. The empirical powers of the step-down method and the Benjamini–Hochberg procedure are higher than that of the Bonferroni–Holm procedure. It is also seen that controlling the FDR is more powerful than controlling the FWER.

Table 2.

Empirical family-wise error rate (FWER) and false discovery rate (FDR) with power in the brackets of multiple testing based on Model 1, α=0.05, and 500 replications.

T s0 Method p = 50 p = 200 p = 500 p = 600
200 3 NST-FWER 0.058 (0.551) 0.062 (0.405) 0.048 (0.309) 0.056 (0.291)
ST-FWER 0.074 (0.554) 0.082 (0.431) 0.086 (0.337) 0.090 (0.324)
BH-FWER 0.054 (0.528) 0.070 (0.409) 0.074 (0.319) 0.068 (0.300)
BH-FDR 0.061 (0.635) 0.064 (0.470) 0.086 (0.380) 0.069 (0.353)
15 NST-FWER 0.056 (0.569) 0.050 (0.412) 0.040 (0.306) 0.046 (0.303)
ST-FWER 0.066 (0.583) 0.086 (0.430) 0.074 (0.334) 0.084 (0.327)
BH-FWER 0.060 (0.561) 0.066 (0.410) 0.056 (0.310) 0.068 (0.309)
BH-FDR 0.043 (0.810) 0.064 (0.655) 0.06 (0.518) 0.061 (0.509)
400 3 NST-FWER 0.050 (0.935) 0.058 (0.889) 0.062 (0.839) 0.058 (0.808)
ST-FWER 0.070 (0.937) 0.062 (0.885) 0.078 (0.842) 0.066 (0.813)
BH-FWER 0.052 (0.931) 0.054 (0.873) 0.062 (0.834) 0.052 (0.795)
BH-FDR 0.057 (0.957) 0.056 (0.924) 0.064 (0.889) 0.068 (0.863)
15 NST-FWER 0.058 (0.947) 0.054 (0.881) 0.040 (0.819) 0.070 (0.815)
ST-FWER 0.052 (0.946) 0.066 (0.881) 0.058 (0.825) 0.084 (0.882)
BH-FWER 0.050 (0.942) 0.056 (0.871) 0.050 (0.809) 0.060 (0.806)
BH-FDR 0.035 (0.989) 0.052 (0.968) 0.056 (0.946) 0.059 (0.941)

Table 3.

Empirical FWER and FDR with power in the brackets of multiple testing based on Model 2, α=0.05, and 500 replications.

T s0 Method p = 50 p = 200 p = 500 p = 600
200 3 NST-FWER 0.044 (0.805) 0.052 (0.692) 0.066 (0.622) 0.056 (0.609)
ST-FWER 0.058 (0.807) 0.066 (0.701) 0.084 (0.638) 0.066 (0.621)
BH-FWER 0.030 (0.759) 0.042 (0.620) 0.024 (0.517) 0.024 (0.505)
BH-FDR 0.039 (0.819) 0.046 (0.691) 0.038 (0.592) 0.030 (0.570)
15 NST-FWER 0.042 (0.805) 0.060 (0.697) 0.058 (0.626) 0.048 (0.618)
ST-FWER 0.050 (0.809) 0.080 (0.708) 0.080 (0.637) 0.072 (0.630)
BH-FWER 0.028 (0.757) 0.042 (0.621) 0.034 (0.530) 0.038 (0.519)
BH-FDR 0.035 (0.922) 0.044 (0.822) 0.046 (0.746) 0.040 (0.717)
400 3 NST-FWER 0.060 (0.989) 0.052 (0.985) 0.052 (0.971) 0.050 (0.970)
ST-FWER 0.064 (0.989) 0.054 (0.985) 0.068 (0.970) 0.072 (0.973)
BH-FWER 0.046 (0.983) 0.022 (0.975) 0.026 (0.951) 0.024 (0.945)
BH-FDR 0.045 (0.995) 0.034 (0.990) 0.040 (0.972) 0.035 (0.973)
15 NST-FWER 0.066 (0.992) 0.072 (0.986) 0.056 (0.975) 0.056 (0.975)
ST-FWER 0.072 (0.992) 0.076 (0.986) 0.056 (0.975) 0.064 (0.975)
BH-FWER 0.046 (0.988) 0.036 (0.973) 0.024 (0.952) 0.022 (0.950)
BH-FDR 0.043 (0.999) 0.042 (0.998) 0.031 (0.993) 0.044 (0.992)

5. Real Data Analysis

This section conducts hypothesis testing for financial data from 1 January 2017 to 14 March 2018. The dataset consists of daily returns of 491 stocks from the S&P 500 index. In addition, we collected the Fama–French three factors [21] over the same period. In summary, the panel matrix is a 300 by 491 matrix $Y$, together with a factor matrix $F$ of size 300 by 3, where 300 is the number of trading days and 491 is the number of stocks.

We first center and standardize the factor matrix $F$; $Y$ is centered as well. We consider testing the sparsity of each column of $B$ and repeat the multiplier bootstrap 500 times. A simultaneous test of parameters corresponding to multiple factors is also considered. The hypotheses are

$H_0: b_{ik}=0\ \text{for all}\ (i,k)\in s\quad\text{versus}\quad H_1: b_{ik}\ne0\ \text{for some}\ (i,k)\in s,$

where $s=\{(i,k): k\in s^*,\ |\hat b_{ik}|\ \text{is among the smallest}\ \beta\%\ \text{of}\ \{|\hat b_{ik}|\}_{i\in\{1,\ldots,p\}}\}$, with $s^*=\{1\},\{2\},\{3\}$ or $\{2,3\}$ and $\beta=10,30,50,70,90$. The results are reported in Table 4. The null is rejected for the first column of $B$ at every value of $\beta$, so it is not reasonable to assume $b_{i1}=0$. However, we can claim that the last two columns of $B$ are sparse; hence, a sufficiently large number of stocks are not influenced by the last two factors.

Table 4.

Results of sparse testing.

β = 10 β = 30 β = 50 β = 70 β = 90
1st loading R R R R R
2nd loading A A A A A
3rd loading A A A A R
2nd and 3rd loading A A A R R

Note: “A” means accepting the null hypothesis; “R” denotes rejecting the null hypothesis.

Acknowledgments

We thank the three referees for insightful comments and suggestions.

Appendix A. Technical Details

We prove the main results in this section. First, we introduce some notation. Throughout this section, $c,c',C,C',C_i$ denote constants that do not depend on $p$ or $T$ and may vary from place to place. Define $T_0=\max_{i\in G}|\sum_{t=1}^T\xi_{it}|/\sqrt{T}$ and $T_1=\max_{i\in G}|\sum_{t=1}^T\hat\xi_{it}|/\sqrt{T}$. Let $\{z_t\}_{t=1}^T$ be a sequence of i.i.d. $N(0,\Omega_f(k,k)\Sigma_u)$ vectors. Define $c_{W_0}(\alpha)=\inf\{t\in\mathbb{R}: P(W_0\le t\mid(Y,F))\ge1-\alpha\}$ with $W_0=\max_{i\in G}T^{-1/2}\hat\Omega_f^{1/2}(k,k)|\sum_{t=1}^T\hat u_{it}e_t|$, and $c_{Z_0}(\alpha)=\inf\{t\in\mathbb{R}: P(Z_0\le t\mid(Y,F))\ge1-\alpha\}$ with $Z_0=\max_{i\in G}|\sum_{t=1}^T z_{it}|/\sqrt{T}$. Denote $\Sigma_f(m,n)=v_m^\top\Sigma_f v_n$ with $\Sigma_f=\Omega_f^{-1}$. We begin by presenting some useful lemmas that will be used in the proofs of the main results.

Lemma A1.

Suppose that the random variables $Z_1,Z_2$ both satisfy the exponential-type tail condition: there exist $r_1,r_2\in(0,1)$ and $b_1,b_2>0$ such that for all $s>0$,

$P(|Z_i|>s)\le\exp\{1-(s/b_i)^{r_i}\},\qquad i=1,2.$

Then, for some r3 and b3>0, and any s>0,

  • (i)

$P(|Z_1Z_2|>s)\le\exp\{1-(s/b_3)^{r_3}\}$, where $r_3\in(0,r)$ with $r=r_1r_2/(r_1+r_2)$,

  • (ii)

$P(|Z|>s)\le\exp\{1-(s/b_3)^{r_3}\}$, where $Z=\max\{Z_1,Z_2\}$.

Proof of Lemma A1. 

The proof of the first claim can be found in the proof of Lemma A.2 in [4]; thus, we prove the second claim. For any $s>0$, we have

$P(|Z|>s)\le P(|Z_1|>s)+P(|Z_2|>s)=\exp\{1-(s/b_1)^{r_1}\}+\exp\{1-(s/b_2)^{r_2}\}\le2\exp\{1-(s/b)^{r}\},$

where $b^{r}=\max\{b_1^{r_1},b_2^{r_2}\}$. Pick an $r_3\in(0,r)$ and $b_3>\max\{(r_3/r)^{1/r}b,\ (1+\log2)^{1/r}b\}$; then, it can be shown that $F(s)=(s/b)^{r}-(s/b_3)^{r_3}$ is increasing when $s>b_3$. Therefore, $F(s)>F(b_3)>\log2$ when $s>b_3$, which implies that when $s>b_3$,

$P(|Z|>s)\le2\exp\{1-(s/b)^{r}\}\le\exp\{1-(s/b_3)^{r_3}\}.$

When $s\le b_3$,

$P(|Z|>s)\le1\le\exp\{1-(s/b_3)^{r_3}\}.$

Then the proof is complete. ☐

Lemma A2.

Under Assumptions 1–4, we have

  • (i)

$\max_{i,j\le p}|\hat\sigma_{ij}-\sigma_{ij}|=O_p(\sqrt{\log p/T}).$

  • (ii)

$\max_{i\le p,\,k\le K}\big|(1/T)\sum_{t=1}^T f_{kt}u_{it}\big|=O_p(\sqrt{\log p/T}).$

  • (iii)

$\max_{i,j\le K}\big|(1/T)\sum_{t=1}^T f_{it}f_{jt}-E(f_{it}f_{jt})\big|=O_p(\sqrt{\log T/T}).$

Proof of Lemma A2. 

For a proof, see the proof of Lemma A.3 and Lemma B.1 in [4]. ☐

Lemma A3.

Suppose a random variable $X$ satisfies the exponential-type tail condition: there exist $r>0$ and $b>0$ such that for all $s>0$, $P(|X|>s)\le\exp\{1-(s/b)^{r}\}$. Then $E(|X|)=O(1)$.

Proof of Lemma A3. 

Note that

$E(|X|)=\int_0^\infty P(|X|>x)\,dx\le\int_0^\infty\exp\{1-(x/b)^{r}\}\,dx\lesssim\int_0^\infty\exp(-x^{r})\,dx:=I.$

It is not hard to check that when $r\ge1$, $I\le1+e^{-1}$. When $r<1$, $I=\alpha\Gamma(\alpha)$, where $\alpha=1/r$ and $\Gamma(\alpha)=\int_0^\infty e^{-x}x^{\alpha-1}\,dx=O(1)$. Then, the proof is complete. ☐

Lemma A4.

Under the assumptions in Theorem 1, there exist constants c,C>0 such that

$\rho:=\sup_{t\in\mathbb{R}}\big|P(T_0\le t)-P(Z_0\le t)\big|\le CT^{-c}.$

Proof of Lemma A4. 

Assumption 3 implies that $(\log(pT))^7/T\le C_1T^{-c_1}$ for some constants $c_1,C_1>0$. We then apply Corollary 2.1 of [25] to the sequence $\xi_{it}$. What remains is to verify its Condition (E.2), namely, uniformly over $i$,

$c_0\le E(\xi_{it}^2)\le C_0,\qquad\max_{k=1,2}E\big(|\xi_{it}|^{k+2}/B^{k}\big)+E\big\{\big(\max_{1\le i\le p}|\xi_{it}|/B\big)^4\big\}\le4,$

where $c_0,C_0>0$ and $B$ is a large enough constant. By Lemmas A1 and A3, we have $E(f_{it}f_{jt})=O(1)$ uniformly for $i,j\le K$, which implies $\|\Omega_f\|_{\max}=O(1)$. Uniformly for $i\le p$, we have

$|\xi_{it}|=|u_{it}f_t^\top\Omega_f v_k|\le|u_{it}|\,\|f_t\|_\infty\,\|\Omega_f v_k\|_1:=\gamma_{it}.$

By Lemma A1, we know that $\gamma_{it}$ and $\max_{1\le i\le p}|\gamma_{it}|$ have exponential-type tails. Then, by Lemma A3, we have $E(|\xi_{it}|^4)\le E(|\gamma_{it}|^4)=O(1)$ and $E(\max_{i\le p}|\xi_{it}|)^4\le E(\max_{i\le p}|\gamma_{it}|)^4=O(1)$. Thus, we can find a large enough $B$ such that the above condition is satisfied. Then, the proof is complete. ☐

Lemma A5.

Under the assumptions in Theorem 1, there exists a sequence of positive numbers $\alpha_T\to\infty$ such that $P\big(\alpha_T(\log p)^2\,|\hat\Omega_f(k,k)-\Omega_f(k,k)|>1\big)\to0$.

Proof of Lemma A5. 

By Lemma A2 (iii), we have $\|\hat\Omega_f^{-1}-\Omega_f^{-1}\|=O_p(\sqrt{\log T/T})$. Since $\|\Omega_f\|=O(1)$ and $\|\hat\Omega_f\|=O_p(1)$, we have

$\|\hat\Omega_f-\Omega_f\|=\|\hat\Omega_f(\Omega_f^{-1}-\hat\Omega_f^{-1})\Omega_f\|\le\|\hat\Omega_f\|\,\|\hat\Omega_f^{-1}-\Omega_f^{-1}\|\,\|\Omega_f\|=O_p(\sqrt{\log T/T}).$

On the other hand,

$|\hat\Omega_f(k,k)-\Omega_f(k,k)|\le\|\hat\Omega_f-\Omega_f\|=O_p(\sqrt{\log T/T}).$

Choosing $\alpha_T=\log T$, the claim follows from Assumption 3, and the proof is complete. ☐

Lemma A6.

Under the assumptions in Theorem 1, we have for every $\alpha\in(0,1)$ and $\vartheta>0$,

$P\big(c_{W_0}(\alpha)\le c_{Z_0}(\alpha+\pi(\vartheta))\big)\ge1-P(\Delta>\vartheta),\qquad P\big(c_{Z_0}(\alpha)\le c_{W_0}(\alpha+\pi(\vartheta))\big)\ge1-P(\Delta>\vartheta).$

Proof of Lemma A6. 

For $\vartheta>0$, let $\pi(\vartheta)=C_2\vartheta^{1/3}\big(1\vee\log(p/\vartheta)\big)^{2/3}$ with $C_2>0$. Recall that $\Delta=\max_{1\le i,j\le p}|\hat\Omega_f(k,k)\hat\sigma_{ij}-\Omega_f(k,k)\sigma_{ij}|$. As $|\Omega_f(k,k)|=O(1)$ uniformly for $k\le K$, by Lemma A2 (i), we have

$\Delta=O_p\big(|\hat\Omega_f(k,k)-\Omega_f(k,k)|+\sqrt{(\log p)/T}\big).$

By Lemma A5 and Assumption 3, choosing $\vartheta=1/(\alpha_T(\log p)^2)$, we have $P(\Delta>\vartheta)=o(1)$. By Lemma 3.1 of [25], on the event $\{(Y,F):\Delta\le\vartheta\}$, we have $|P(Z_0\le t)-P(W_0\le t\mid(Y,F))|\le\pi(\vartheta)$ for all $t\in\mathbb{R}$, and so on this event

$P\big(W_0\le c_{Z_0}(\alpha+\pi(\vartheta))\mid(Y,F)\big)\ge P\big(Z_0\le c_{Z_0}(\alpha+\pi(\vartheta))\big)-\pi(\vartheta)\ge\alpha+\pi(\vartheta)-\pi(\vartheta)=\alpha,$

implying the first claim. The second claim follows similarly. ☐

Lemma A7.

Under the assumptions in Theorem 1, there exist $\zeta_1,\zeta_2>0$ such that

$$P\Big(\Big|\max_{1\le i\le p}\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T}-\max_{1\le i\le p}\sum_{t=1}^{T}\xi_{it}/\sqrt{T}\Big|>\zeta_1\Big)<\zeta_2,$$

where $\zeta_1\sqrt{1\vee\log(p/\zeta_1)}=o(1)$ and $\zeta_2=o(1)$.

Proof of Lemma A7. 

The arguments in the proof of Lemma A5 imply that

$$\|\hat\Omega_f v_k-\Omega_f v_k\|_{1}=O_p(\sqrt{\log T/T}).$$

By Lemma A2 (ii), uniformly for $i\le p$, we have

$$\Big|\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T}-\sum_{t=1}^{T}\xi_{it}/\sqrt{T}\Big|=\Big|\sum_{t=1}^{T}u_{it}f_t^{\top}(\hat\Omega_f v_k-\Omega_f v_k)/\sqrt{T}\Big|\le\|\hat\Omega_f v_k-\Omega_f v_k\|_{1}\max_{i\le p,k\le K}\Big|\frac{1}{\sqrt{T}}\sum_{t=1}^{T}f_{kt}u_{it}\Big|=O_p\big(\sqrt{\log p\log T/T}\big).$$

Choosing $\zeta_1^{2}=\sqrt{\log p\log T/T}$, we have

$$P\Big(\max_{1\le i\le p}\Big|\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T}-\sum_{t=1}^{T}\xi_{it}/\sqrt{T}\Big|>\zeta_1\Big)\le\zeta_2,\qquad\zeta_2=o(1).$$

Note that

$$\Big|\max_{1\le i\le p}\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T}-\max_{1\le i\le p}\sum_{t=1}^{T}\xi_{it}/\sqrt{T}\Big|\le\max_{1\le i\le p}\Big|\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T}-\sum_{t=1}^{T}\xi_{it}/\sqrt{T}\Big|,$$

then the proof is complete. ☐

Lemma A8.

Under the assumptions in Theorem 4, we have

  • (i)

    $\max_{i,j,m,n}\big|(1/T)\sum_{t=1}^{T}u_{it}u_{jt}f_{mt}f_{nt}-\sigma_{ij}\Sigma_f(m,n)\big|=O_p(\sqrt{\log p/T})$;

  • (ii)

    $\max_{i,j,m,n}\big|(1/T)\sum_{t=1}^{T}u_{it}f_{jt}f_{mt}f_{nt}\big|=O_p(\sqrt{\log p/T})$.

Proof of Lemma A8. 

(i) By Assumption 1 and Lemma A1, $u_{it}f_{mt}$ satisfies the exponential tail condition with parameter $2r_1r_2/(3r_1+3r_2)$, as shown in Lemma A1. Thus, $u_{it}u_{jt}f_{mt}f_{nt}$ satisfies the exponential tail condition with parameter $\gamma_3:=r_1r_2/(4r_1+4r_2)$. It follows from $1.5(r_1^{-1}+r_2^{-1})>1$ that $\gamma_3<1$. Therefore, by Bernstein's inequality [23], there exist constants $C_i$, $i=1,\dots,5$, such that for any $s>0$,

$$\max_{i,j,m,n}P\Big(\Big|\frac{1}{T}\sum_{t=1}^{T}u_{it}u_{jt}f_{mt}f_{nt}-\sigma_{ij}\Sigma_f(m,n)\Big|\ge s\Big)\le T\exp\Big(-\frac{(Ts)^{\gamma_3}}{C_1}\Big)+\exp\Big(-\frac{T^{2}s^{2}}{C_2(1+TC_3)}\Big)+\exp\Big(-\frac{(Ts)^{2}}{C_4T}\exp\Big(\frac{(Ts)^{\gamma_3(1-\gamma_3)}}{C_5(\log Ts)^{\gamma_3}}\Big)\Big). \quad (A1)$$

Using Bonferroni’s method, we have

$$P\Big(\max_{i,j,m,n}\Big|\frac{1}{T}\sum_{t=1}^{T}u_{it}u_{jt}f_{mt}f_{nt}-\sigma_{ij}\Sigma_f(m,n)\Big|>s\Big)\le (pK)^{2}\max_{i,j,m,n}P\Big(\Big|\frac{1}{T}\sum_{t=1}^{T}u_{it}u_{jt}f_{mt}f_{nt}-\sigma_{ij}\Sigma_f(m,n)\Big|>s\Big).$$

Let $s=C\sqrt{(\log p)/T}$ for some $C>0$. It is not hard to check that when $(\log p)^{2/\gamma_3-1}=o(T)$ (by assumption), for large enough $C$,

$$p^{2}T\exp\Big(-\frac{(Ts)^{\gamma_3}}{C_1}\Big)+p^{2}\exp\Big(-\frac{(Ts)^{2}}{C_4T}\exp\Big(\frac{(Ts)^{\gamma_3(1-\gamma_3)}}{C_5(\log Ts)^{\gamma_3}}\Big)\Big)=o\Big(\frac{1}{p^{2}}\Big)$$

and

$$p^{2}\exp\Big(-\frac{T^{2}s^{2}}{C_2(1+TC_3)}\Big)=O\Big(\frac{1}{p^{2}}\Big).$$

As K=O(1), this proves (i).

(ii) By Assumption 1 and Lemma A1, $u_{it}f_{jt}f_{mt}f_{nt}$ satisfies the exponential tail condition with tail parameter $r_3=r_1r_2/(9r_1+3r_2)$, i.e., $r_3^{-1}=3r_1^{-1}+9r_2^{-1}$, and it follows from $1.5(r_1^{-1}+r_2^{-1})>1$ that $r_3<1$. Therefore, applying Bernstein's inequality and the Bonferroni method to $u_{it}f_{jt}f_{mt}f_{nt}$ as in (A1), with $s=C\sqrt{\log p/T}$ for large enough $C$ and $K$ fixed, the term

$$pK^{3}\exp\Big(-\frac{T^{2}s^{2}}{C_2(1+TC_3)}\Big)\lesssim p^{-2},$$

and the remaining terms on the right-hand side of the inequality, multiplied by $pK^{3}$, are of order $o(p^{-2})$. Hence, when $(\log p)^{2/r_3-1}=o(T)$ (by assumption), we have

$$\max_{i,j,m,n}\Big|\frac{1}{T}\sum_{t=1}^{T}u_{it}f_{jt}f_{mt}f_{nt}\Big|=O_p(\sqrt{\log p/T}),$$

which completes the proof. ☐

which completes the proof. ☐
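The $\sqrt{\log p/T}$ rate delivered by the Bernstein-plus-Bonferroni argument can be seen in a toy simulation. The Python sketch below is an illustration under simplifying assumptions (i.i.d. standard Gaussian data, so that the population target $\sigma_{ij}\Sigma_f(m,n)$ reduces to the identity pattern $\delta_{ij}$, and the sizes are arbitrary): it computes the maximal deviation of $p^{2}$ sample averages and prints it next to $\sqrt{\log p/T}$:

```python
import math
import random

random.seed(2)
T = 400
results = {}
for p in (5, 10, 20):
    x = [[random.gauss(0.0, 1.0) for _ in range(T)] for _ in range(p)]
    # max over the p^2 pairs (i, j) of |(1/T) sum_t x_it x_jt - delta_ij|
    results[p] = max(
        abs(sum(x[i][t] * x[j][t] for t in range(T)) / T - (1.0 if i == j else 0.0))
        for i in range(p) for j in range(p)
    )
    print(p, results[p], math.sqrt(math.log(p) / T))
```

The maximal deviation stays within a small constant multiple of $\sqrt{\log p/T}$ even though the number of maximized terms grows quadratically in $p$, which is the content of the union-bound step.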

Lemma A9.

Under the assumptions in Theorem 4, we have

  • (i)

    $\max_{i,j,m,n}\big|(1/T)\sum_{t=1}^{T}(\hat u_{it}\hat u_{jt}f_{mt}f_{nt}-u_{it}u_{jt}f_{mt}f_{nt})\big|=O_p(\sqrt{\log p/T})$;

  • (ii)

    $\max_{i,j,m,n}\big|(1/T)\sum_{t=1}^{T}\hat u_{it}\hat u_{jt}f_{mt}f_{nt}-\sigma_{ij}\Sigma_f(m,n)\big|=O_p(\sqrt{\log p/T})$.

Proof of Lemma A9. 

(i) By the triangle inequality, we have

$$\Big|\frac{1}{T}\sum_{t=1}^{T}(\hat u_{it}\hat u_{jt}f_{mt}f_{nt}-u_{it}u_{jt}f_{mt}f_{nt})\Big|\le\underbrace{\Big|\frac{1}{T}\sum_{t=1}^{T}(\hat u_{it}\hat u_{jt}f_{mt}f_{nt}-\hat u_{it}u_{jt}f_{mt}f_{nt})\Big|}_{I}+\underbrace{\Big|\frac{1}{T}\sum_{t=1}^{T}(\hat u_{it}u_{jt}f_{mt}f_{nt}-u_{it}u_{jt}f_{mt}f_{nt})\Big|}_{II}.$$

For $I$, since $\hat u_{it}=u_{it}-(\hat b_i-b_i)^{\top}f_t$, we have

$$I=\Big|\frac{1}{T}\sum_{t=1}^{T}\hat u_{it}(\hat b_j-b_j)^{\top}f_tf_{mt}f_{nt}\Big|\le\underbrace{\Big|\frac{1}{T}\sum_{t=1}^{T}(\hat b_i-b_i)^{\top}f_t(\hat b_j-b_j)^{\top}f_tf_{mt}f_{nt}\Big|}_{i}+\underbrace{\Big|\frac{1}{T}\sum_{t=1}^{T}u_{it}(\hat b_j-b_j)^{\top}f_tf_{mt}f_{nt}\Big|}_{ii}.$$

By Lemma 3.1 of [4], we have $\max_{i\le p}\|\hat b_i-b_i\|=O_p(\sqrt{\log p/T})$. It is straightforward to see that

$$\max_{m,n\le K}\Big\|\frac{1}{T}\sum_{t=1}^{T}f_tf_t^{\top}f_{mt}f_{nt}\Big\|=O_p(1).$$

Then, we have

$$i\le\max_{i,m,n}\|\hat b_i-b_i\|^{2}\Big\|\frac{1}{T}\sum_{t=1}^{T}f_tf_t^{\top}f_{mt}f_{nt}\Big\|=O_p(\log p/T).$$

By Lemma A8 (ii), we have $\|(1/T)\sum_{t=1}^{T}u_{it}f_tf_{mt}f_{nt}\|=O_p(\sqrt{\log p/T})$, which implies that

$$ii\le\max_{j\le p}\|\hat b_j-b_j\|\Big\|\frac{1}{T}\sum_{t=1}^{T}u_{it}f_tf_{mt}f_{nt}\Big\|=O_p(\log p/T).$$

Part $II$ is similar to $ii$; thus, we have

$$II=\Big|\frac{1}{T}\sum_{t=1}^{T}(\hat b_i-b_i)^{\top}f_tu_{jt}f_{mt}f_{nt}\Big|=O_p(\log p/T).$$

Then the proof is complete.

(ii) By the triangle inequality, we have

$$\max_{i,j,m,n}\Big|\frac{1}{T}\sum_{t=1}^{T}\hat u_{it}\hat u_{jt}f_{mt}f_{nt}-\sigma_{ij}\Sigma_f(m,n)\Big|\le\max_{i,j,m,n}\Big|\frac{1}{T}\sum_{t=1}^{T}(\hat u_{it}\hat u_{jt}f_{mt}f_{nt}-u_{it}u_{jt}f_{mt}f_{nt})\Big|+\max_{i,j,m,n}\Big|\frac{1}{T}\sum_{t=1}^{T}u_{it}u_{jt}f_{mt}f_{nt}-\sigma_{ij}\Sigma_f(m,n)\Big|=O_p(\sqrt{\log p/T}),$$

which proves the result. ☐

Proof of Theorem 1. 

Without loss of generality, we set $G=\{1,2,\dots,p\}$. First, we prove the following fact:

$$\sup_{\alpha\in(0,1)}\big|P(T_1>c_{W_0}(\alpha))-\alpha\big|=o(1). \quad (A2)$$

For $\vartheta>0$, let $\pi(\vartheta):=C_2\vartheta^{1/3}\big(1\vee\log(p/\vartheta)\big)^{2/3}$ with $C_2>0$. In addition, let $\kappa_1(\vartheta):=c_{Z_0}(\alpha-\zeta_2-\pi(\vartheta))$ and $\kappa_2(\vartheta):=c_{Z_0}(\alpha+\zeta_2+\pi(\vartheta))$. For every $\alpha\in(0,1)$, note that

$$\begin{aligned}P\big(\{T_1\le c_{W_0}(\alpha)\}\,\triangle\,\{T_0\le c_{Z_0}(\alpha)\}\big)&\overset{(1)}{\le}P\big(\kappa_1(\vartheta)-2\zeta_1<T_0\le\kappa_2(\vartheta)+2\zeta_1\big)+P(\Delta>\vartheta)+\zeta_2\\&\overset{(2)}{\le}P\big(\kappa_1(\vartheta)-2\zeta_1<Z_0\le\kappa_2(\vartheta)+2\zeta_1\big)+P(\Delta>\vartheta)+\rho+\zeta_2\\&\overset{(3)}{\lesssim}\pi(\vartheta)+P(\Delta>\vartheta)+\rho+\zeta_1\sqrt{1\vee\log(p/\zeta_1)}+\zeta_2,\end{aligned}$$

where (1) follows from Lemmas A6 and A7, (2) follows from Lemma A4, and (3) follows from Lemma 2.1 in [25] and the fact that $Z_0$ has no point masses. Then, by the definition of $\rho$ in Lemma A4, we have

$$\sup_{\alpha\in(0,1)}\big|P(T_1>c_{W_0}(\alpha))-\alpha\big|\le\rho+\rho',$$

where $\rho'=\sup_{\alpha\in(0,1)}P\big(\{T_1\le c_{W_0}(\alpha)\}\,\triangle\,\{T_0\le c_{Z_0}(\alpha)\}\big)$. The right-hand side of the above inequality is $o(1)$, which proves (A2). Since $\max_{i\in G}\sqrt{T}|\hat b_{ik}-b_{ik}|=\sqrt{T}\max_{i\in G}\max\{\hat b_{ik}-b_{ik},\,b_{ik}-\hat b_{ik}\}$, similar arguments imply that

$$\sup_{\alpha\in(0,1)}\Big|P\Big(\max_{i\in G}\sqrt{T}|\hat b_{ik}-b_{ik}|>c_{W_{T,k}}(\alpha)\Big)-\alpha\Big|=o(1),$$

which completes the proof. ☐
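The multiplier bootstrap calibration whose validity Theorem 1 establishes can be illustrated numerically. The sketch below is illustrative only (the Gaussian data-generating process, dimensions, and constants are assumptions, not the paper's simulation design): it draws Gaussian-multiplier statistics $W_0=\max_i\sum_t e_t\xi_{it}/\sqrt{T}$, takes their empirical $(1-\alpha)$ quantile as the critical value, and checks that the rejection rate of $T_0$ under the null is close to $\alpha$:

```python
import math
import random

random.seed(1)
p, T, B, alpha, reps = 10, 80, 100, 0.10, 100

rejections = 0
for _ in range(reps):
    # null data: xi[i][t], here i.i.d. standard normal summands
    xi = [[random.gauss(0.0, 1.0) for _ in range(T)] for _ in range(p)]
    t0 = max(sum(row) / math.sqrt(T) for row in xi)
    # multiplier bootstrap: W0 = max_i sum_t e_t * xi[i][t] / sqrt(T), e_t iid N(0,1)
    w = []
    for _ in range(B):
        e = [random.gauss(0.0, 1.0) for _ in range(T)]
        w.append(max(sum(e[t] * row[t] for t in range(T)) / math.sqrt(T) for row in xi))
    w.sort()
    c_alpha = w[math.ceil((1 - alpha) * B) - 1]  # empirical (1 - alpha) quantile
    rejections += (t0 > c_alpha)

rate = rejections / reps
print(rate)
```

Conditioning on the data, the multiplier draws reproduce the covariance structure of $T_0$, which is why the empirical size tracks the nominal level $\alpha$.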

Proof of Theorem 2. 

From the arguments in the proof of Lemma A6, we have

$$\Delta=O_p\big(|\hat\Omega_f(k,k)-\Omega_f(k,k)|+\sqrt{\log p/T}\big),$$

which implies that $\max_{1\le i\le p}|\hat\omega_{ii}-\omega_{ii}|=O_p\big(|\hat\Omega_f(k,k)-\Omega_f(k,k)|+\sqrt{\log p/T}\big)$. We then have

$$P\big(\omega_{ii}/2<\hat\omega_{ii}<2\omega_{ii}\ \text{for all}\ 1\le i\le p\big)\to 1. \quad (A3)$$

Define $\bar T_1=\max_{i\in G}\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T\hat\omega_{ii}}$ and $\bar T_0=\max_{i\in G}\sum_{t=1}^{T}\xi_{it}/\sqrt{T\omega_{ii}}$. Note that

$$\begin{aligned}|\bar T_1-\bar T_0|&\le\max_{1\le i\le p}\Big|\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T\hat\omega_{ii}}-\sum_{t=1}^{T}\xi_{it}/\sqrt{T\omega_{ii}}\Big|\\&\le\max_{1\le i\le p}\Big|\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T\hat\omega_{ii}}-\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T\omega_{ii}}\Big|+\max_{1\le i\le p}\Big|\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T\omega_{ii}}-\sum_{t=1}^{T}\xi_{it}/\sqrt{T\omega_{ii}}\Big|\\&\le C\max_{1\le i\le p}\Big|\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T}\Big|\max_{1\le i\le p}\Big|\sqrt{\omega_{ii}/\hat\omega_{ii}}-1\Big|+C'\max_{1\le i\le p}\Big|\sum_{t=1}^{T}(\hat\xi_{it}-\xi_{it})/\sqrt{T}\Big|:=I_1+I_2,\end{aligned}$$

where $C,C'>0$.

On the event $\{\omega_{ii}/2<\hat\omega_{ii}<2\omega_{ii}\ \text{for all}\ 1\le i\le p\}$,

$$\max_{1\le i\le p}\Big|\sqrt{\omega_{ii}/\hat\omega_{ii}}-1\Big|\le\max_{1\le i\le p}\big|\sqrt{\omega_{ii}}-\sqrt{\hat\omega_{ii}}\big|\max_{1\le i\le p}\sqrt{2/\omega_{ii}}\le\max_{1\le i\le p}\frac{|\omega_{ii}-\hat\omega_{ii}|}{\sqrt{\omega_{ii}}+\sqrt{\hat\omega_{ii}}}\max_{1\le i\le p}\sqrt{2/\omega_{ii}}\lesssim\max_{1\le i\le p}|\omega_{ii}-\hat\omega_{ii}|\max_{1\le i\le p}\frac{1}{\omega_{ii}}=O_p\big(|\hat\Omega_f(k,k)-\Omega_f(k,k)|+\sqrt{(\log p)/T}\big).$$

On the other hand,

$$\max_{1\le i\le p}\Big|\sum_{t=1}^{T}\hat\xi_{it}/\sqrt{T}\Big|\le\max_{1\le i\le p}\Big|\sum_{t=1}^{T}(\hat\xi_{it}-\xi_{it})/\sqrt{T}\Big|+\max_{1\le i\le p}\Big|\sum_{t=1}^{T}\xi_{it}/\sqrt{T}\Big|=O_P\big(\sqrt{\log p\log T/T}+\sqrt{\log p}\big)=O_P(\sqrt{\log p}).$$

Therefore, on the above event, $I_1=O_p\big(\sqrt{\log p}\,\big(|\hat\Omega_f(k,k)-\Omega_f(k,k)|+\sqrt{\log p/T}\big)\big)$. By Lemma A5, we can find $\zeta_1$ such that $P(I_1>\zeta_1)=o(1)$ and $\zeta_1\sqrt{1\vee\log(p/\zeta_1)}=o(1)$. Thus, by Lemma A7 and (A3), we have

$$P(|\bar T_1-\bar T_0|>\zeta_1)\le P(I_1+I_2>\zeta_1)<\zeta_2,$$

for $\zeta_1\sqrt{1\vee\log(p/\zeta_1)}=o(1)$ and $\zeta_2=o(1)$.

Let $\bar\Delta=\max_{1\le j,k\le p}\big|\omega_{jk}/\sqrt{\omega_{jj}\omega_{kk}}-\hat\omega_{jk}/\sqrt{\hat\omega_{jj}\hat\omega_{kk}}\big|$. Note that

$$\big|\sqrt{\omega_{jj}\omega_{kk}}-\sqrt{\hat\omega_{jj}\hat\omega_{kk}}\big|=\frac{|\omega_{jj}\omega_{kk}-\hat\omega_{jj}\hat\omega_{kk}|}{\sqrt{\omega_{jj}\omega_{kk}}+\sqrt{\hat\omega_{jj}\hat\omega_{kk}}}.$$

On the event $\{\omega_{ii}/2<\hat\omega_{ii}<2\omega_{ii}\ \text{for all}\ 1\le i\le p\}$, we have

$$\frac{|\omega_{jj}\omega_{kk}-\hat\omega_{jj}\hat\omega_{kk}|}{\sqrt{\omega_{jj}\omega_{kk}}+\sqrt{\hat\omega_{jj}\hat\omega_{kk}}}\le\frac{|\omega_{jj}\omega_{kk}-\hat\omega_{jj}\hat\omega_{kk}|}{\sqrt{\omega_{jj}\omega_{kk}}+\sqrt{\omega_{jj}\omega_{kk}/4}}\le(2/3)\,|\omega_{jj}\omega_{kk}-\hat\omega_{jj}\hat\omega_{kk}|\max_{1\le j\le p}\frac{1}{\omega_{jj}},$$

which implies that

$$\max_{1\le j,k\le p}\Big|\sqrt{\omega_{jj}\omega_{kk}/(\hat\omega_{jj}\hat\omega_{kk})}-1\Big|\le\max_{1\le j,k\le p}\big|\sqrt{\omega_{jj}\omega_{kk}}-\sqrt{\hat\omega_{jj}\hat\omega_{kk}}\big|\max_{1\le j\le p}\frac{2}{\omega_{jj}}\le(4/3)\max_{1\le j,k\le p}|\omega_{jj}\omega_{kk}-\hat\omega_{jj}\hat\omega_{kk}|\Big(\max_{1\le j\le p}\frac{1}{\omega_{jj}}\Big)^{2}=O_p\big(|\hat\Omega_f(k,k)-\Omega_f(k,k)|+\sqrt{(\log p)/T}\big).$$

Choosing $\vartheta=1/(\alpha_T(\log p)^{2})$, we can show that $P(\bar\Delta>\vartheta)=o(1)$. The rest of the proof is similar to that of Theorem 1. We skip the details. ☐

Proof of Theorem 3. 

Let $Z=(Z_1,\dots,Z_p)^{\top}\sim N(0,\Theta)$. Following the arguments in the proof of Theorem 2, we can show that the distribution of $\max_{i\in G}\sqrt{T}|\hat b_{ik}-b_{ik}|/\sqrt{\hat\omega_{ii}}$ can be approximated by that of $\max_{i\in G}|Z_i|$. Under Assumption 5, by Lemma 6 of [15], we have, for any $x\in\mathbb{R}$ and as $|G|\to\infty$,

$$P\Big(\max_{i\in G}|Z_i|^{2}-2\log(|G|)+\log\log(|G|)\le x\Big)\to F(x):=\exp\Big(-\frac{1}{\sqrt{\pi}}\exp\Big(-\frac{x}{2}\Big)\Big).$$

It implies that

$$P\Big(\max_{i\in G}T|\hat b_{ik}-b_{ik}|^{2}/\hat\omega_{ii}\le 2\log(|G|)-\log\log(|G|)/2\Big)\to 1. \quad (A4)$$

The bootstrap consistency result implies that

$$\big|(c_{W_{T,k}}^{*}(\alpha))^{2}-2\log(|G|)+\log\log(|G|)-q_{\alpha}\big|=o_P(1), \quad (A5)$$

where $q_{\alpha}$ is the $100(1-\alpha)$th percentile of $F(x)$. Consider any $i\in G$ such that $|b_{ik}^{\mathrm{null}}-b_{ik}|/\sqrt{\omega_{ii}}\ge(2+\varepsilon_0)\sqrt{\log|G|/T}$. Using the inequality $2a_1a_2\le\delta^{-1}a_1^{2}+\delta a_2^{2}$ for any $\delta>0$, we have

$$T|b_{ik}^{\mathrm{null}}-b_{ik}|^{2}/\hat\omega_{ii}\le(1+\delta^{-1})\,T|\hat b_{ik}-b_{ik}|^{2}/\hat\omega_{ii}+(1+\delta)\,T|\hat b_{ik}-b_{ik}^{\mathrm{null}}|^{2}/\hat\omega_{ii}, \quad (A6)$$

where $T|\hat b_{ik}-b_{ik}|^{2}/\hat\omega_{ii}=o_p(\log|G|)$ as $i$ is fixed and $|G|$ grows. From the proof of Theorem 2, we know the difference between $T|b_{ik}^{\mathrm{null}}-b_{ik}|^{2}/\hat\omega_{ii}$ and $T|b_{ik}^{\mathrm{null}}-b_{ik}|^{2}/\omega_{ii}$ is asymptotically negligible. Thus, by (A6) and the fact that $B_k\in U_G(2+\varepsilon_0)$, we have

$$\max_{i\in G}T|\hat b_{ik}-b_{ik}^{\mathrm{null}}|^{2}/\hat\omega_{ii}\ge\frac{1}{1+\delta}(2+\varepsilon_0)^{2}\log|G|-o_p(\log|G|). \quad (A7)$$

The conclusion thus follows from (A7) and (A5) provided that δ is small enough. ☐
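The extreme value limit $F(x)=\exp\{-\pi^{-1/2}\exp(-x/2)\}$ used in this proof can be seen by simulation. The following Python sketch (illustrative only; $|G|$ and the number of replications are arbitrary) draws maxima of $|G|$ squared independent standard normals, recenters them by $2\log|G|-\log\log|G|$, and compares the empirical CDF with $F$ at a few points:

```python
import math
import random

random.seed(3)
G, reps = 2000, 1000
center = 2 * math.log(G) - math.log(math.log(G))
# recentered maxima of G squared standard normals
vals = [max(random.gauss(0.0, 1.0) ** 2 for _ in range(G)) - center for _ in range(reps)]

def F(x):
    # limiting CDF from Lemma 6 of [15]
    return math.exp(-math.exp(-x / 2) / math.sqrt(math.pi))

for x in (-1.0, 0.0, 1.0, 2.0):
    emp = sum(v <= x for v in vals) / reps
    print(x, round(emp, 3), round(F(x), 3))
```

The empirical frequencies are already close to $F(x)$ at moderate $|G|$, which is what makes the centering $2\log|G|-\log\log|G|/2$ in (A4) and the quantile expansion in (A5) usable.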

Proof of Theorem 4. 

Without loss of generality, we set $G^{*}=M$. Define

$$\hat\Gamma=\frac{1}{T}\sum_{t=1}^{T}\hat u_{it}\hat u_{jt}f_tf_t^{\top},\qquad\text{and}\qquad\Gamma=\sigma_{ij}\Sigma_f.$$

Let

$$\Delta^{*}:=\max_{i,j,m,n}\big|(\hat\Omega_fv_m)^{\top}\hat\Gamma(\hat\Omega_fv_n)-\sigma_{ij}\Omega_f(m,n)\big|$$

denote the maximum discrepancy between the empirical and population quantities. By the triangle inequality, we have

$$\begin{aligned}\big|(\hat\Omega_fv_m)^{\top}\hat\Gamma(\hat\Omega_fv_n)-\sigma_{ij}\Omega_f(m,n)\big|&=\big|(\hat\Omega_fv_m)^{\top}\hat\Gamma(\hat\Omega_fv_n)-(\Omega_fv_m)^{\top}\Gamma(\Omega_fv_n)\big|\\&\le\underbrace{\big|(\hat\Omega_fv_m)^{\top}(\hat\Gamma-\Gamma)(\hat\Omega_fv_n)\big|}_{I}+\underbrace{\big|(\hat\Omega_fv_m-\Omega_fv_m)^{\top}\Gamma(\hat\Omega_fv_n)\big|}_{II}+\underbrace{\big|(\Omega_fv_m)^{\top}\Gamma(\hat\Omega_f-\Omega_f)v_n\big|}_{III}.\end{aligned}$$

Note that $\|\hat\Omega_f\|=O_p(1)$; by Lemma A9 (ii), we have

$$I\le\max_{m}\|\hat\Omega_fv_m\|^{2}\,\|\hat\Gamma-\Gamma\|=O_p(\sqrt{\log p/T}).$$

By Lemma A2 (iii) and $\|\Gamma\|=O(1)$, we have

$$II\le\|\hat\Omega_fv_m-\Omega_fv_m\|\,\|\Gamma\|\,\|\hat\Omega_fv_n\|=O_p(\sqrt{\log T/T}).$$

Since $\|\Omega_f\|=O(1)$, we have

$$III\le\|\Omega_fv_m\|\,\|\Gamma\|\,\|(\hat\Omega_f-\Omega_f)v_n\|=O_p(\sqrt{\log T/T}).$$

The above results hold uniformly over $i,j,m,n$; thus, we have

$$\Delta^{*}=O_p\big(\sqrt{\log p/T}+\sqrt{\log T/T}\big).$$

The rest of the proof is similar to that of Theorem 2. We skip the details. ☐

Proof of Theorem 5. 

The proof is analogous to that of Theorem 3. ☐

Author Contributions

X.G. conceived and designed the experiments; Y.W. performed the experiments; Y.W. analyzed the data; X.G. contributed to analysis tools; X.G. and Y.W. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grants 12071452, 11601500, 11671374 and 11771418 and the Fundamental Research Funds for the Central Universities.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Bai J., Liao Y. Efficient estimation of approximate factor models via penalized maximum likelihood. J. Econ. 2016;191:1–18. [Google Scholar]
  • 2.Heinemann A. Efficient estimation of factor models with time and cross-sectional dependence. J. Appl. Econ. 2017;32:1107–1122. [Google Scholar]
  • 3.Fan J., Fan Y., Lv J. High dimensional covariance matrix estimation using a factor model. J. Econ. 2008;147:186–197. [Google Scholar]
  • 4.Fan J., Liao Y., Mincheva M. High-dimensional covariance matrix estimation in approximate factor models. Ann. Stat. 2011;39:3320–3356. doi: 10.1214/11-AOS944. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Fan J., Liao Y., Mincheva M. Large covariance estimation by thresholding principal orthogonal complements. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2013;75:603–680. doi: 10.1111/rssb.12016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Fan J., Wang W., Zhong Y. Robust covariance estimation for approximate factor models. J. Econom. 2019;208:5–22. doi: 10.1016/j.jeconom.2018.09.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Dickhaus T., Pauly M. Time Series Analysis and Forecasting. Springer; Berlin, Germany: 2016. Simultaneous statistical inference in dynamic factor models; pp. 27–45. [Google Scholar]
  • 8.Dickhaus T., Sirotko-Sibirskaya N. Simultaneous statistical inference in dynamic factor models: Chi-square approximation and model-based bootstrap. Comput. Stat. Data Anal. 2019;129:30–46. [Google Scholar]
  • 9.Lucas J., Carvalho C., Wang Q., Bild A., Nevins J.R., West M. Sparse statistical modelling in gene expression genomics. Bayesian Inference Gene Expr. Proteom. 2006;1:155–176. [Google Scholar]
  • 10.Carvalho C.M., Chang J., Lucas J.E., Nevins J.R., Wang Q., West M. High-dimensional sparse factor modeling: applications in gene expression genomics. J. Am. Stat. Assoc. 2008;103:1438–1456. doi: 10.1198/016214508000000869. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Reis R., Watson M.W. Relative goods’ prices, pure inflation, and the Phillips correlation. Am. Econ. J. Macroecon. 2010;2:128–157. [Google Scholar]
  • 12.Amengual D., Repetto L. Testing a Large Number of Hypotheses in Approximate Factor Models. CEMFI; Madrid, Spain: 2014. Technical Report. [Google Scholar]
  • 13.Candès E., Tao T. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat. 2007;35:2313–2351. [Google Scholar]
  • 14.Cai T., Liu W., Xia Y. Two-sample covariance matrix testing and support recovery in high-dimensional and sparse settings. J. Am. Stat. Assoc. 2013;108:265–277. doi: 10.1080/01621459.2012.758041. [DOI] [Google Scholar]
  • 15.Cai T.T., Liu W., Xia Y. Two-sample test of high dimensional means under dependence. J. R. Stat. Soc. Ser. (Stat. Methodol.) 2014;76:349–372. [Google Scholar]
  • 16.Zhang X., Cheng G. Simultaneous inference for high-dimensional linear models. J. Am. Stat. Assoc. 2017;112:757–768. doi: 10.1080/01621459.2016.1166114. [DOI] [Google Scholar]
  • 17.Romano J.P., Wolf M. Exact and approximate stepdown methods for multiple hypothesis testing. J. Am. Stat. Assoc. 2005;100:94–108. [Google Scholar]
  • 18.Zhu Y., Yu Z., Cheng G. High dimensional inference in partially linear models; Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS); Naha, Okinawa, Japan. 16–18 April 2019; pp. 2760–2769. [Google Scholar]
  • 19.Zhang X., Cheng G. Bootstrapping high dimensional time series. arXiv 2014, arXiv:1406.1037. [Google Scholar]
  • 20.Benjamini Y., Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. (Methodol.) 1995;57:289–300. doi: 10.1111/j.2517-6161.1995.tb02031.x. [DOI] [Google Scholar]
  • 21.Fama E.F., French K.R. The cross-section of expected stock returns. J. Financ. 1992;47:427–465. doi: 10.1111/j.1540-6261.1992.tb04398.x. [DOI] [Google Scholar]
  • 22.Bickel P., Levina E. Some theory for Fisher’s linear discriminant function, “naive Bayes”, and some alternatives when there are many more variables than observations. Bernoulli. 2004;10:989–1010. doi: 10.3150/bj/1106314847. [DOI] [Google Scholar]
  • 23.Merlevède F., Peligrad M., Rio E. A Bernstein type inequality and moderate deviations for weakly dependent sequences. Probab. Theory Relat. Fields. 2011;151:435–474. doi: 10.1007/s00440-010-0304-9. [DOI] [Google Scholar]
  • 24.Kosorok M.R. Introduction to Empirical Processes and Semiparametric Inference. Springer; New York, NY, USA: 2008. [Google Scholar]
  • 25.Chernozhukov V., Chetverikov D., Kato K. Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors. Ann. Stat. 2013;41:2786–2819. [Google Scholar]
  • 26.Ingster Y.I., Tsybakov A.B., Verzelen N. Detection boundary in sparse regression. Electron. J. Stat. 2010;4:1476–1526. doi: 10.1214/10-EJS589. [DOI] [Google Scholar]

Articles from Entropy are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
