Proceedings of the National Academy of Sciences of the United States of America
2001 Apr 24; 98(9): 4860–4865. doi: 10.1073/pnas.071060498

The asymptotic distribution of canonical correlations and vectors in higher-order cointegrated models

T W Anderson
PMCID: PMC33128  PMID: 11320235

Abstract

The study of the large-sample distribution of the canonical correlations and variates in cointegrated models is extended from the first-order autoregression model to autoregression of any (finite) order. The cointegrated process considered here is nonstationary in some dimensions and stationary in some other directions, but the first difference (the “error-correction form”) is stationary. The asymptotic distribution of the canonical correlations between the first differences and the predictor variables as well as the corresponding canonical variables is obtained under the assumption that the process is Gaussian. The method of analysis is similar to that used for the first-order process.


Cointegrated stochastic processes are used in econometrics for modeling macroeconomic time series that have both stationary and nonstationary properties. The term “cointegrated” means that in a multivariate process that appears nonstationary some linear functions are stationary. Many economic time series may show inflationary tendencies or increasing volatility, but certain relationships are not affected by these tendencies. Statistical inference is involved in identifying these relationships and estimating their importance.

The family of stochastic processes studied in this paper consists of vector autoregressive processes of finite order. A vector of contemporary measures is considered to depend linearly on earlier values of these measures plus random disturbances or errors. The dependence may be evaluated by the canonical correlations between the contemporary values and the earlier values.

The nonstationarity of a process may be eliminated by treating differences or higher-order differences (over time) of the vectors. This paper treats processes in which first-order differencing accomplishes stationarity. The first-order difference is represented as a linear combination of the first lagged variable and lags of the difference variable. The stationary linear combinations are the canonical variables corresponding to the nonzero process canonical correlations between the difference variable and the first lagged variable not accounted for by the lagged differences. The number of these is defined as the degree of cointegration.

Statistical inference of the model is based on a sample of observations; that is, a vector time series over some period of time. The estimator of the parameters of the original autoregressive model is a transformation of the estimator of the (stationary) error-correction form. In the latter, one coefficient matrix is of lower rank (the degree of cointegration). It is estimated efficiently by the reduced rank regression estimator introduced by me (1). It depends on the larger canonical correlations and corresponding canonical vectors. The smaller correlations are used to determine the rank of this matrix. Inference is based on the large-sample distribution of these correlations and variables.
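The reduced rank regression idea can be sketched numerically. The fragment below is an illustration only (the data, the rank-1 matrix M, and the helper names `canonical` and `reduced_rank_regression` are invented, not from the paper): the squared canonical correlations solve a generalized symmetric eigenproblem, and the rank-k coefficient estimate is built from the canonical vectors of the k largest correlations; with k equal to full rank it reproduces ordinary least squares.

```python
import numpy as np

def canonical(A, B):
    """Solve A g = r^2 B g with normalization g' B g = 1 (r^2 ascending)."""
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    M = Linv @ A @ Linv.T
    vals, vecs = np.linalg.eigh((M + M.T) / 2)   # symmetrize for stability
    return vals, Linv.T @ vecs                   # vectors satisfy G' B G = I

def reduced_rank_regression(dY, Ylag, k):
    """Rank-k estimate of Pi in dY_t = Pi Ylag_t + error (sketch)."""
    T = dY.shape[0]
    Sdd = dY.T @ dY / T
    Sxd = Ylag.T @ dY / T
    Sxx = Ylag.T @ Ylag / T
    # squared canonical correlations solve |Sxd Sdd^{-1} Sdx - r^2 Sxx| = 0
    r2, G = canonical(Sxd @ np.linalg.solve(Sdd, Sxd.T), Sxx)
    Gk = G[:, -k:]                     # vectors of the k largest correlations
    return Sxd.T @ Gk @ Gk.T, np.sqrt(np.clip(r2, 0.0, 1.0))

rng = np.random.default_rng(0)
T, p = 500, 3
Ylag = rng.standard_normal((T, p))
M = np.array([[0.5, 0.0, 0.0], [0.25, 0.0, 0.0], [0.0, 0.0, 0.0]])  # rank 1
dY = Ylag @ M + 0.1 * rng.standard_normal((T, p))
Pi_full, r = reduced_rank_regression(dY, Ylag, k=p)
Pi_ols = (np.linalg.lstsq(Ylag, dY, rcond=None)[0]).T  # OLS of dY on Ylag
```

When k equals the full rank p, the canonical vectors satisfy GG′ = Sxx⁻¹, so the estimator collapses to ordinary least squares; for k < p only the leading canonical directions are kept.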

The asymptotic distribution of the canonical correlations and coefficients of the variates for the first-order autoregressive process was derived by me (2). The distribution for the higher-order process (that is, several lags) is obtained in this paper, using similar algebra. Hansen and Johansen (3) have independently obtained the asymptotic distribution of the canonical correlations, but by a different method and expressed in a different form.

The likelihood ratio test for the degree of cointegration that I found (1) is given in Asymptotic Distribution of the Smaller Roots; its asymptotic distribution under the null hypothesis was found by Johansen (4). To evaluate the power of such a test, one needs to know the distribution or asymptotic distribution of the sample canonical correlations corresponding to process canonical correlations different from 0. See ref. 5, for example.

For further background, the reader is referred to Johansen (6) and Reinsel and Velu (7).

The Model

The general cointegrated model is an autoregressive process {Yt} of order m defined by

Yt = B1Yt−1 + ⋯ + BmYt−m + Zt,  [1]

where Zt is unobserved with ℰZt = 0, ℰZtZt′ = ΣZZ, and ℰYt−iZt′ = 0, i = 1, 2, … . Let B(λ) = λmI − λm−1B1 − ⋯ − Bm. If the roots λ1, … , λpm of |B(λ)| = 0 satisfy |λi| < 1, a stationary process {Yt} can be defined by 1. If some of the roots are 1, the process will be nonstationary. In this paper, we assume that n (0 < n < p) roots of |B(λ)| = 0 are 1 (λ1 = ⋯ = λn = 1), and the other pm − n roots satisfy |λi| < 1, i = n + 1, … , pm. The first difference of the process, the "error-correction" form, is
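As a numerical illustration (the matrices below are invented for the purpose), the roots of |B(λ)| = 0 are the eigenvalues of the VAR companion matrix. The example is constructed from a rank-1 Π so that exactly one root equals 1 and the remaining pm − 1 = 3 roots lie inside the unit circle:

```python
import numpy as np

# Illustrative p = 2, m = 2 cointegrated VAR with n = 1 unit root.
Pi  = np.array([[-0.5, 0.5], [0.0, 0.0]])   # rank 1: Pi = alpha beta'
Pi1 = 0.2 * np.eye(2)
B1 = np.eye(2) + Pi + Pi1                    # B1 = I + Pi + Pi_1
B2 = -Pi1                                    # B2 = -Pi_1
# eigenvalues of the companion matrix are the roots of |B(lambda)| = 0
companion = np.block([[B1, B2], [np.eye(2), np.zeros((2, 2))]])
roots = np.linalg.eigvals(companion)
n_unit = int(np.sum(np.isclose(np.abs(roots - 1), 0, atol=1e-8)))
```

Here n_unit counts the roots equal to 1; the other three roots have modulus below 1, so first differencing (plus the cointegrating combination) yields stationarity.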

ΔYt = ΠYt−1 + Π1ΔYt−1 + ⋯ + Πm−1ΔYt−m+1 + Zt
= ΠYt−1 + Π̄Δ̄Yt−1 + Zt.  [2]

Here Π = B1 + ⋯ + Bm − I = −B(1), Πj = −(Bj+1 + ⋯ + Bm), j = 1, … , m − 1, Π̄ = (Π1, … , Πm−1), and Δ̄Yt−1 = (ΔYt−1′, … , ΔYt−m+1′)′.
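A quick numerical check of these identities (invented matrices, m = 2, so Π = B1 + B2 − I and Π1 = −B2): iterating the VAR and iterating its error-correction form from the same shocks must produce the same path.

```python
import numpy as np

rng = np.random.default_rng(1)
B1 = np.array([[0.7, 0.5], [0.0, 1.2]])     # illustrative VAR(2) matrices
B2 = -0.2 * np.eye(2)
Pi, Pi1 = B1 + B2 - np.eye(2), -B2          # error-correction coefficients
T = 200
Z = rng.standard_normal((2, T))
Y = np.zeros((2, T))
Yecm = np.zeros((2, T))
for t in range(2, T):
    # original autoregression, Eq. 1
    Y[:, t] = B1 @ Y[:, t - 1] + B2 @ Y[:, t - 2] + Z[:, t]
    # error-correction form, Eq. 2, driven by the same shocks
    dY = Pi @ Yecm[:, t - 1] + Pi1 @ (Yecm[:, t - 1] - Yecm[:, t - 2]) + Z[:, t]
    Yecm[:, t] = Yecm[:, t - 1] + dY
```

The two recursions agree exactly (up to floating-point arithmetic), since Eq. 2 is an algebraic rearrangement of Eq. 1.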

A sample consists of T observations: Y1, … , YT. Because the rank of Π is k, it is to be estimated by the reduced rank regression estimator introduced by me (1) as the maximum likelihood estimator when Z1, … , ZT are normally distributed and Y0, Y−1, … , Y−m+1 are nonstochastic and known. The matrices Π1, … , Πm−1 are unrestricted except for the condition |λi| < 1, i = n + 1, … , pm. The estimator depends on the canonical correlations and vectors of ΔYt and Yt−1 conditioned on ΔYt−1, … , ΔYt−m+1.

Define

ΔŶt = ΔYt − SΔY,Δ̄YSΔ̄Y,Δ̄Y−1Δ̄Yt−1,  Ŷt−1 = Yt−1 − SȲ,Δ̄YSΔ̄Y,Δ̄Y−1Δ̄Yt−1,

where SΔ̄Y,Δ̄Y = T−1∑t Δ̄Yt−1Δ̄Yt−1′, SΔY,Δ̄Y = T−1∑t ΔYtΔ̄Yt−1′, and SȲ,Δ̄Y = T−1∑t Yt−1Δ̄Yt−1′. The vectors ΔŶt and Ŷt−1 are the sample residuals of ΔYt and Yt−1 regressed on Δ̄Yt−1. Define ŜΔŶ,ΔŶ = T−1∑t ΔŶtΔŶt′ = SΔY,ΔY − SΔY,Δ̄YSΔ̄Y,Δ̄Y−1SΔ̄Y,ΔY, ŜΔŶ,Ŷ = T−1∑t ΔŶtŶt−1′ = SΔY,Ȳ − SΔY,Δ̄YSΔ̄Y,Δ̄Y−1SΔ̄Y,Ȳ, and ŜŶ,Ŷ = T−1∑t Ŷt−1Ŷt−1′ = SȲ,Ȳ − SȲ,Δ̄YSΔ̄Y,Δ̄Y−1SΔ̄Y,Ȳ, where SΔY,ΔY = T−1∑t ΔYtΔYt′, SΔY,Ȳ = T−1∑t ΔYtYt−1′, and SȲ,Ȳ = T−1∑t Yt−1Yt−1′. The sample canonical correlations between ΔŶt and Ŷt−1 and the corresponding canonical variates are defined by

|ŜŶ,ΔŶŜΔŶ,ΔŶ−1ŜΔŶ,Ŷ − r2ŜŶ,Ŷ| = 0,  [3]
(ŜŶ,ΔŶŜΔŶ,ΔŶ−1ŜΔŶ,Ŷ − r2ŜŶ,Ŷ)γ̂ = 0,  γ̂′ŜŶ,Ŷγ̂ = 1.  [4]

More information on canonical analysis is given in chapter 12 of ref. 8. One form of the reduced rank regression estimator is Π̂(k) = ŜΔŶ,ŶΓ̂2Γ̂2′, where Γ̂2 = (γ̂n+1, … , γ̂p) consists of the vectors for the k largest correlations in the ordering r̂1 < ⋯ < r̂p.
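In sample terms, the construction amounts to partialling out the lagged differences and then computing ordinary canonical correlations of the residuals. A sketch with hypothetical data (`resid` and `partial_canonical_corr` are illustrative helper names, not the paper's notation):

```python
import numpy as np

def resid(A, B):
    """Columns of A minus their least-squares projection on columns of B."""
    return A - B @ np.linalg.lstsq(B, A, rcond=None)[0]

def partial_canonical_corr(dY, Ylag, dYbar):
    """Canonical correlations of dY and Ylag after partialling out dYbar."""
    dYh, Yh = resid(dY, dYbar), resid(Ylag, dYbar)
    T = dY.shape[0]
    Sdd = dYh.T @ dYh / T
    Syd = Yh.T @ dYh / T
    Syy = Yh.T @ Yh / T
    # |Syd Sdd^{-1} Sdy - r^2 Syy| = 0 via a Cholesky reduction
    A = Syd @ np.linalg.solve(Sdd, Syd.T)
    Linv = np.linalg.inv(np.linalg.cholesky(Syy))
    M = Linv @ A @ Linv.T
    r2 = np.linalg.eigvalsh((M + M.T) / 2)   # ascending squared correlations
    return np.sqrt(np.clip(r2, 0.0, 1.0))

rng = np.random.default_rng(1)
T = 300
dYbar = rng.standard_normal((T, 4))          # "lagged differences" (invented)
dY = 0.8 * dYbar[:, :3] + rng.standard_normal((T, 3))
Ylag = 0.5 * dYbar[:, 1:] + rng.standard_normal((T, 3))
r = partial_canonical_corr(dY, Ylag, dYbar)
```

Here dY and Ylag are related only through dYbar, so after partialling it out the canonical correlations should all be small.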

We shall assume that there are exactly n linearly independent solutions ω to ω′B(1) = 0; that is, ω′Π = 0. Then the rank of Π is p − n = k and there exists a p × n matrix Ω1 of rank n such that Ω1′Π = 0. See Anderson (9). There is also a p × k matrix Ω2 of rank k such that Ω2′Π = Υ2Ω2′, where Υ2 (k × k) is nonsingular, and Ω = (Ω1, Ω2) is nonsingular.

To distinguish between the stationary and nonstationary coordinates, we make a transformation of coordinates. Define

Xt = Ω′Yt,  Wt = Ω′Zt,  Ψj = Ω′Bj(Ω′)−1, j = 1, … , m.

Then the process 1 is transformed to

Xt = Ψ1Xt−1 + ⋯ + ΨmXt−m + Wt.  [5]

If we define Υ = Ψ1 + ⋯ + Ψm − I = Ω′Π(Ω′)−1, Υj = −(Ψj+1 + ⋯ + Ψm) = Ω′Πj(Ω′)−1, Ῡ = (Υ1, … , Υm−1), and Δ̄Xt−1 = (ΔXt−1′, … , ΔXt−m+1′)′, the form 2 is transformed to

ΔXt = ΥXt−1 + ῩΔ̄Xt−1 + Wt.  [6]

Note that Υ = diag(0, Υ22).

Define ΔX̂t, X̂t−1, SΔ̄X,Δ̄X, SΔX,Δ̄X, SX̄,Δ̄X, ŜΔX̂,ΔX̂, ŜΔX̂,X̂, and ŜX̂,X̂ in a manner analogous to the definitions in the Y-coordinates. The reduced rank regression estimator of Υ is based on the canonical correlations and canonical variates between ΔX̂t and X̂t−1 defined by

|ŜX̂,ΔX̂ŜΔX̂,ΔX̂−1ŜΔX̂,X̂ − r2ŜX̂,X̂| = 0,  [7]
(ŜX̂,ΔX̂ŜΔX̂,ΔX̂−1ŜΔX̂,X̂ − r2ŜX̂,X̂)g = 0,  g′ŜX̂,X̂g = 1.  [8]

The estimator of Υ of rank k is Υ̂(k) = ŜΔX̂,X̂G2G2′, where G2 = (gn+1, … , gp) and gi is the solution for g in 8 when r = ri, the ri being the solutions to 7 ordered r1 < ⋯ < rp. The rest of this paper is devoted to finding the asymptotic distribution of {gi, ri}. Note that Υ̂(k) = Ω′Π̂(k)(Ω′)−1.

The vectors ΔX̂t = ΔXt − SΔX,Δ̄XSΔ̄X,Δ̄X−1Δ̄Xt−1 and X̂t−1 = Xt−1 − SX̄,Δ̄XSΔ̄X,Δ̄X−1Δ̄Xt−1 are the residuals of ΔXt and Xt−1 regressed on Δ̄Xt−1, and rp is the maximum correlation between ΔX̂t and X̂t−1, that is, the correlation between ΔXt and Xt−1 after taking account of the dependence "explained" by Δ̄Xt−1. The canonical correlations are the canonical correlations between (ΔXt′, Δ̄Xt−1′) and (Xt−1′, Δ̄Xt−1′) other than ±1.

The Process

The process {Xt} defined by 5 can be put in the form of the Markov model

(Xt′, Xt−1′, … , Xt−m+1′)′ = [Ψ1 Ψ2 ⋯ Ψm; I 0 ⋯ 0; ⋯ ; 0 ⋯ I 0](Xt−1′, Xt−2′, … , Xt−m′)′ + (Wt′, 0, … , 0)′  [9]

(section 5.4, ref. 10). Multiplication of 9 on the left by

[matrix display]

yields a form that includes the error-correction form 6:

[Eq. 10]

The first n components of 10 constitute

ΔX1t = ∑j=1m−1 (Υj11ΔX1,t−j + Υj12ΔX2,t−j) + W1t.  [11]

Here Υj has been partitioned into n and k rows and columns. Assume X10 = X1,−1 = ⋯ = 0 and W10 = W1,−1 = ⋯ = 0. The sum of 11 for t = −∞ to t = s is X1s = ∑j=1m−1 (Υj11X1,s−j + Υj12X2,s−j) + ∑t=−∞s W1t, or

(I − ∑j=1m−1 Υj11)X1s = Γ−1HX̃s + ∑t=−∞s W1t.  [12]

Write 12 as

X1s = Γ∑t=−∞s W1t + HX̃s,  [13]

where Γ = (I − ∑j=1m−1 Υj11)−1, Γ−1H is a linear combination of Υ1, … , Υm−1, and X̃s = (X2s′, Δ̄Xs′)′. [The matrix on the left-hand side of 12 is nonsingular because otherwise a linear combination of the right-hand side would be identically 0.] The right-hand side of 13 is the sum of a stationary process and a random walk (Γ∑t=−∞s W1t).
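This decomposition can be seen in a small simulation (invented bivariate example with one unit root): the levels behave like a random walk, while the cointegrating combination Y1 − Y2 remains stationary.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000
Pi = np.array([[-0.5, 0.5], [0.0, 0.0]])   # Pi = alpha beta', beta' = (1, -1)
Pi1 = 0.2 * np.eye(2)
Y = np.zeros((T, 2))
dY_prev = np.zeros(2)
for t in range(1, T):
    # error-correction recursion: Delta Y_t = Pi Y_{t-1} + Pi_1 Delta Y_{t-1} + Z_t
    dY = Pi @ Y[t - 1] + Pi1 @ dY_prev + rng.standard_normal(2)
    Y[t] = Y[t - 1] + dY
    dY_prev = dY
z = Y[:, 0] - Y[:, 1]    # cointegrating combination: stationary
```

The sample variance of z stays bounded while the variance of the level Y2 grows with the sample, which is the random-walk component of 13 in miniature.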

The last pm − n = k + p(m − 1) components of 10 constitute a stationary process satisfying

X̃t = Υ̃X̃t−1 + W̃t,  [14]

where X̃t = (X2t′, Δ̄Xt′)′, W̃t = (W2t′, Wt′, 0)′, and Υ̃ consists of the last pm − n rows and columns of the coefficient matrix in 10. Note that the first n columns of the last pm − n rows of that matrix consist of 0s. Because the eigenvalues of Υ̃ are less than 1 in absolute value (9), X̃t = ∑s=0∞ Υ̃sW̃t−s, ℰX̃tX̃t′ = Σ̃ = ∑s=0∞ Υ̃sΣ̃WWΥ̃′s, and ℰX̃tX̃t−h′ = Υ̃hΣ̃. The covariance Σ̃ satisfies

Σ̃ = Υ̃Σ̃Υ̃′ + Σ̃WW.  [15]

Given Υ̃ and Σ̃WW, 15 can be solved for Σ̃ [Anderson (10), section 5.5]. Further we write 13 as X1t = Γ∑s=0t−1 W1,t−s + H∑s=0∞ Υ̃sW̃t−s. Then

[display equations]

since Υ̃t → 0. Here ℰW1tW̃t′ = Σ̃WW(1) is the second set of rows in Σ̃WW. Then ℰT−1SX̄1,X̄1 = ℰT−2∑t X1tX1t′ → 2−1ΓΣWW11Γ′ because ∑t=1T t = T(T + 1)/2. Further

[display equations]
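Eq. 15 is a discrete Lyapunov equation; given a stable Υ̃ and Σ̃WW it can be solved through the vec identity vec Σ̃ = (I − Υ̃ ⊗ Υ̃)−1 vec Σ̃WW. A NumPy sketch (the stable matrix and the noise covariance are invented; the truncated series Σ̃ = ∑ Υ̃sΣ̃WWΥ̃′s serves as a cross-check):

```python
import numpy as np

rng = np.random.default_rng(3)
k = 4
A = rng.standard_normal((k, k))
Ups = 0.5 * A / np.max(np.abs(np.linalg.eigvals(A)))  # spectral radius 0.5
Q = np.eye(k)                                         # Sigma_WW (illustrative)
# vec solution of Sigma = Ups Sigma Ups' + Q (column-major vec convention)
vecS = np.linalg.solve(np.eye(k * k) - np.kron(Ups, Ups),
                       Q.flatten(order="F"))
Sigma = vecS.reshape((k, k), order="F")
# cross-check by truncating the series Sigma = sum_s Ups^s Q (Ups')^s
S_series, P = np.zeros((k, k)), np.eye(k)
for _ in range(200):
    S_series += P @ Q @ P.T
    P = P @ Ups
```

With spectral radius 0.5, the truncation error after 200 terms is negligible, so both routes agree.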

Define

ΔXt∗ = ΔXt − ΣΔX,Δ̄XΣΔ̄X,Δ̄X−1Δ̄Xt−1,  [16]
Xt−1∗ = Xt−1 − ΣX̄,Δ̄XΣΔ̄X,Δ̄X−1Δ̄Xt−1,

where ΣΔX,Δ̄X = ℰΔXtΔ̄Xt−1′ and ΣX̄,Δ̄X = ℰXt−1Δ̄Xt−1′ (the latter depends on t), and ΣΔ̄X,Δ̄X = ℰΔ̄Xt−1Δ̄Xt−1′ does not depend on t. Note that ΔXt∗ and Xt−1∗ correspond to ΔX̂t and X̂t−1 with SΔX,Δ̄X, SX̄,Δ̄X, and SΔ̄X,Δ̄X replaced by ΣΔX,Δ̄X, ΣX̄,Δ̄X, and ΣΔ̄X,Δ̄X, respectively. Then 6 can be written as the regression model

ΔXt = ΥXt−1∗ + (Ῡ + ΥΣX̄,Δ̄XΣΔ̄X,Δ̄X−1)Δ̄Xt−1 + Wt,  [17]

with ℰXt−1∗Wt′ = 0. Note that this model has the form of 2.10 in Anderson (2).

From 16 and 17 we calculate

[display equations]

The process analogs of 7 and 8 are

|ΣX̄∗,ΔX∗ΣΔX∗,ΔX∗−1ΣΔX∗,X̄∗ − ρ2ΣX̄∗,X̄∗| = 0,  [18]
(ΣX̄∗,ΔX∗ΣΔX∗,ΔX∗−1ΣΔX∗,X̄∗ − ρ2ΣX̄∗,X̄∗)γ = 0.  [19]

These define the process canonical correlations and variates in the X-coordinates.

Sample Statistics

The canonical correlations and vectors depend on ŜΔX̂,ΔX̂, ŜΔX̂,X̂, and ŜX̂,X̂, which in turn depend on the submatrices of SX̄,X̄, SX̄,Δ̄X, and SΔ̄X,Δ̄X (equivalently SX̄1,X̄1, SX̄1,X̃, and SX̃,X̃). The vector X̃t satisfies the first-order stationary autoregressive model 14. The sample covariance matrices SX̃,X̃, SW̃,X̃, and SW,W are consistent estimators of Σ̃, 0, and ΣWW, and T1/2(SX̃,X̃ − Σ̃), T1/2SW̃,X̃, and T1/2(SW,W − ΣWW) have a limiting normal distribution with means 0 and covariances that have been given in refs. 2 and 11.

Let W(u) be the Brownian motion process defined by T−1/2∑t=1[Tu] Wt →w W(u). Define I11 by

I11 = ∫01 W1(u)W1(u)′ du.

See Anderson (2) and theorem B.12 of Johansen (6). Define Jj1 by

[display equation]

Then T−1SX̄1,X̄1 →d ΓI11Γ′ by 13, T−1SX̃,X̃ →p 0, and the Cauchy–Schwarz inequality.
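The limiting Brownian functional can be illustrated by Monte Carlo (a scalar sketch with invented sample sizes): for a random walk Xt = W1 + ⋯ + Wt, the statistic T−2∑t Xt2 converges in distribution to ∫01 W(u)2 du, whose mean is 1/2.

```python
import numpy as np

rng = np.random.default_rng(4)
T, reps = 400, 2000
W = rng.standard_normal((reps, T))     # i.i.d. standard normal increments
X = np.cumsum(W, axis=1)               # random walk paths
stat = (X ** 2).sum(axis=1) / T ** 2   # T^{-2} sum_t X_t^2 per replication
```

The replication mean should be close to 1/2 (the exact finite-T mean is (T + 1)/(2T)); the spread across replications reflects that the limit is a nondegenerate random variable, not a constant.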

We shall find the limit in distribution of SW̃,X̃ from the limit of SW̃,X̄ by using equation B.20 of theorem B.13 of Johansen (6). A specialization to the model here is

[display equation]

where W̃(u) = [W2(u)′, W(u)′, 0]′. [In theorem B.13, let θi = (I, 0), ψi = (0, Υ̃i), ɛt′ = (W1t′, W̃t′), Ω = ℰɛtɛt′, and Vt = X̃t.] Then

[display equation]

Because {X̃t} is stationary, T−1∑t=1T WtX̃t−1′ →p 0 and

[Eq. 20]

Now we wish to show that ΔXt∗ and Xt−1∗ lead to the same asymptotic results as ΔX̂t and X̂t−1. First note that T−1SX̄1∗,X̄1∗ →d ΓI11Γ′ and T−1 times any other sample covariance converges in probability to 0. Hence T−1SX̄∗,X̄∗ →d ΓI11Γ′ and T−1ŜX̂,X̂ →d ΓI11Γ′. Because {X̃t} is stationary, {X2,t−1∗} is stationary, and SX̄2∗,X̄2∗ →p ΣX̄2∗,X̄2∗, ŜX̂2,X̂2 →p ΣX̄2∗,X̄2∗. Moreover T1/2(SX̃,X̃ − Σ̃) has a limiting normal distribution. Expansion of SX̄2∗,X̄2∗ and ŜX̂2,X̂2 in terms of the submatrices of SX̃,X̃ shows that they have the same limiting normal distribution. (See Asymptotic Distribution of the Larger Roots.) Finally, T−1/2SX̄1∗,X̄2∗ →p 0 and T−1/2ŜX̂1,X̂2 →p 0 because SX̄1,X̄2, SX̄1,Δ̄X, SΔ̄X,Δ̄X, and SX̄2,Δ̄X, and hence SX̄1∗,X̄2∗, have finite limits in distribution.

From 17 we find that plimT→∞ SΔX∗,ΔX∗ = plimT→∞ ŜΔX̂,ΔX̂ = ΣΔX∗,ΔX∗ and

[display equation]

where SW,X̄∗ = SW,X̄ − SW,Δ̄XΣΔ̄X,Δ̄X−1ΣΔ̄X,X̄, which converges in distribution to the right-hand side of 20.

As noted above, SW2,X̄∗ converges in distribution to the corresponding k rows of the weak limit of SW,X̄∗. Then

[display equation]

Asymptotic Distribution of the Smaller Roots

Let Q+ = ŜX̂,ΔX̂ŜΔX̂,ΔX̂−1ŜΔX̂,X̂. The n smaller roots of |Q+ − r2ŜX̂,X̂| = 0 converge in probability to 0 and the k larger roots converge to the roots of

[display equation]

by the analysis of ref. 2. Let d = Tr2. The n smaller roots of |Q+ − T−1dŜX̂,X̂| = 0 converge in distribution to the roots of

[display equation]

by the algebra used in ref. 2, section 5, resulting in the same distribution as in ref. 2. The likelihood ratio criterion for testing that the rank of Υ is k (2) is −2 log λ = −T∑i=1n log(1 − ri2) = ∑i=1n di + op(1), the limiting distribution of which was found by Johansen (4, 6) and was given in equation 5.4 in ref. 2.
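The criterion is computed directly from the n smallest sample canonical correlations; a small sketch (the correlation values and sample size are invented):

```python
import numpy as np

def trace_statistic(r, k, T):
    """-2 log lambda = -T sum log(1 - r_i^2) over the p - k smallest
    sample canonical correlations; r must be in ascending order."""
    p = r.size
    n = p - k                      # number of unit roots under H0
    return -T * np.sum(np.log(1.0 - r[:n] ** 2))

r = np.array([0.05, 0.10, 0.90])  # illustrative correlations, p = 3
stat = trace_statistic(r, k=1, T=100)
```

Large values of the statistic argue against the hypothesized cointegrating rank k; the null asymptotic distribution is the nonstandard one found by Johansen (4, 6), not chi-squared.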

Asymptotic Distribution of the Larger Roots

We now turn to deriving the asymptotic distribution of the k larger roots of |Q+ − r2ŜX̂,X̂| = 0 and the associated vectors solving Q+g = r2ŜX̂,X̂g. First we show that the asymptotic distribution of rn+12, … , rp2 is the same as the asymptotic distribution of the zeros of |Q∗ − r2SX̄∗,X̄∗|. Then we transform from the X-coordinates to the coordinates of the process canonical correlations and vectors.

Let R222 = diag(rn+12, … , rp2) and G2 = (G12′, G22′)′ consist of the corresponding solutions to Q+G2 = ŜX̂,X̂G2R222. The normalization of the columns of G2 is G2′ŜX̂,X̂G2 = I, that is,

G12′ŜX̂1,X̂1G12 + G12′ŜX̂1,X̂2G22 + G22′ŜX̂2,X̂1G12 + G22′ŜX̂2,X̂2G22 = I.  [21]

The probability limit of 21 shows that T1/2G12 = Op(1) and G22 = Op(1). The submatrix equations in Q+G2 = ŜX̂,X̂G2R222 can be written as

[Eq. 22]
[Eq. 23]

Because T−1Q11+ →p 0, T−1/2Q12+ →p 0, T−1/2ŜX̂1,X̂2 →p 0, T−1ŜX̂1,X̂1 →d ΓI11Γ′, and R222 →p R2∗2 = diag(ρn+12, … , ρp2), the probability limit of the left-hand side of 22 is 0; this shows that T1/2G12 →p 0. Then the asymptotic distribution of G22 is the asymptotic distribution of the matrix G22∗ defined by

[Eq. 24]

where the elements of the corresponding diagonal matrix are defined by |Q22∗ − r2SX̄2∗,X̄2∗| = 0. Note that when T1/2G12 →p 0 is combined with 23, we obtain Q22∗G22 = SX̄2∗,X̄2∗G22R222 + op(T−1/2).

We proceed to find the asymptotic distribution of G22 and R222 defined by 24 in the manner of ref. 2. Let

[display equations]

where W2⋅1,t = W2t − ΣWW21(ΣWW11)−1W1t and ℰW2⋅1,tW2⋅1,t′ = ΣW2⋅1 = ΣWW22 − ΣWW21(ΣWW11)−1ΣWW12. We expand T1/2{Q22∗ − [ΣX̄∗,ΔX∗ΣΔX∗,ΔX∗−1ΣΔX∗,X̄∗]22} to obtain

[Eq. 25]

where Λ = Υ22ΣX̄2∗,X̄2∗Υ22′ + ΣW2⋅1. See equation 6.5 of ref. 2.

To express the covariances of the sample matrices, we use the "vec" notation. For A = (a1, … , an), we define vec A = (a1′, … , an′)′. The Kronecker product of two matrices A = (aij) and B is A ⊗ B = (aijB). A basic relation is vec ABC = (C′ ⊗ A) vec B, which implies vec xy′ = vec x·1·y′ = (y ⊗ x) vec 1 = y ⊗ x. Define the commutation matrix K as the (square) permutation matrix such that vec A′ = K vec A for every square matrix A of the same order as K.
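These identities are easy to verify numerically (column-major vec convention; the helpers `vec` and `commutation` are illustrative):

```python
import numpy as np

def vec(M):
    """Column-stacking (column-major) vec of a matrix."""
    return M.reshape(-1, 1, order="F")

def commutation(k):
    """Permutation K with K vec A = vec A' for every k x k matrix A."""
    K = np.zeros((k * k, k * k))
    for i in range(k):
        for j in range(k):
            K[i * k + j, j * k + i] = 1.0   # send entry A[i, j] to A'[j, i]
    return K

rng = np.random.default_rng(5)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
x, y = rng.standard_normal(3), rng.standard_normal(3)
K = commutation(3)
```

Since K is a symmetric permutation matrix, K² = I, which is the fact used repeatedly below in the form (I + K)K = I + K.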

Define C = (I, −Σ2,Δ̄XΣΔ̄X,Δ̄X−1) and D = [I, −ΣWW21(ΣWW11)−1, 0]. Then X2,t−1∗ = CX̃t−1, W2⋅1,t = DW̃t, DΣ̃WW = ΣW2⋅1J(k), CΣ̃ = ΣX̄2∗,X̄2∗I(k), J(k) = (I, 0, I, 0), I(k) = (I, 0), ΣX̄2∗,X̄2∗ = CΣ̃C′, and ΣW2⋅1 = DΣ̃WWD′.

Theorem 1. If the Wt are independently normally distributed, T1/2(SX̄2∗,X̄2∗ − ΣX̄2∗,X̄2∗), T1/2SW2⋅1,X̄2∗, and T1/2(SW2⋅1,W2⋅1 − ΣW2⋅1) have a limiting normal distribution with means 0, 0, and 0 and covariances

[Eqs. 26–31]

Lemma 1. If X is normally distributed with ℰX = 0 and ℰXX′ = Σ, then ℰ vec XX′(vec XX′)′ = (I + K)(Σ ⊗ Σ) + vec Σ(vec Σ)′. If X and Y are independent, ℰ vec XX′(vec YY′)′ = vec ℰXX′(vec ℰYY′)′, ℰ vec XY′(vec XY′)′ = ℰYY′ ⊗ ℰXX′, and ℰ vec XY′(vec YX′)′ = K(ℰXX′ ⊗ ℰYY′).
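The first identity of the lemma can be checked by Monte Carlo (Σ = I and k = 2 are assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
k, n = 2, 200_000
X = rng.standard_normal((n, k))                  # Sigma = I (illustrative)
# row t is vec(X_t X_t')' (symmetric outer products, so ordering is immaterial)
outer_rows = np.einsum("ni,nj->nij", X, X).reshape(n, k * k)
Mhat = outer_rows.T @ outer_rows / n             # MC estimate of the 4th moment
K = np.zeros((k * k, k * k))
for i in range(k):
    for j in range(k):
        K[i * k + j, j * k + i] = 1.0            # commutation matrix
Sigma = np.eye(k)
vecS = Sigma.reshape(-1, 1)
target = (np.eye(k * k) + K) @ np.kron(Sigma, Sigma) + vecS @ vecS.T
```

For Σ = I the target matrix has diagonal (3, 1, 1, 3), reflecting ℰX4 = 3 for a standard normal.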

Proof of Theorem 1: First 26 is equivalent to the first expression in Lemma 1. Next vec T1/2SW2⋅1,X̄2∗ = T−1/2∑t (X2,t−1∗ ⊗ W2⋅1,t) implies 27 because X2,t−1∗ and W2⋅1,s are independent for t ≤ s. Similarly 28 follows. To prove 29, 30, and 31, we use the following lemma.

Lemma 2.

[display equations]

Proof of Lemma 2: We have from X̃t = Υ̃X̃t−1 + W̃t

[Eq. 32]

Because SX̃,X̃ − SX̄,X̄ = (1/T)(X̃TX̃T′ − X̃0X̃0′) and {X̃t} is a stationary process, SX̄,X̄ in 32 can be replaced by SX̃,X̃ + op(1). Then Lemma 2 results from 32 and vec Υ̃SX̃,W̃ = K vec SW̃,X̃Υ̃′.▪

Lemma 3.

[display equations]

Proof of Lemma 3: Write W2t = W2⋅1,t + ΣWW21(ΣWW11)−1W1t. Then

[display equations]

from which the lemma follows.▪

Proof of Theorem 1 Continued: Then 29 follows from Lemma 2, 26, and 28, and 30 follows from Lemma 2, 27, and 28. To prove 31, use Lemma 2, 26, 27, and 30 to obtain

[Eq. 33]

Then substitution of Σ̃WW = Σ̃ − Υ̃Σ̃Υ̃′ in 33 yields 31.▪

Let Ξ be a k × k matrix such that Ξ′(Υ22ΣX̄2∗,X̄2∗Υ22′)Ξ = Θ and Ξ′ΣW2⋅1Ξ = I, where Θ = diag(θn+1, … , θp) = R2∗2(I − R2∗2)−1, R2∗2 = diag(ρn+12, … , ρp2), and ρi2 is a root of 18 with 0 < ρn+12 < ⋯ < ρp2. Let U2t = Ξ′X2t, V2t = Ξ′W2t, V1t = W1t, Δ2 = Ξ′(Υ22 + I)(Ξ′)−1, M2 = Ξ′Υ22(Ξ′)−1, Ξ̃ = diag[Ξ, Im−1 ⊗ diag(In, Ξ)], Δ̃ = Ξ̃′Υ̃(Ξ̃′)−1, Ũt = Ξ̃′X̃t, CU = Ξ′C(Ξ̃′)−1. Then {Ũt} is generated by Ũt = Δ̃Ũt−1 + Ṽt, where Ṽt = Ξ̃′W̃t, and U2t satisfies U2t = Δ2U2,t−1 + V2t, ΔU2t = M2U2,t−1 + V2t. Multiplication of 25 on the left by Ξ′ and on the right by Ξ yields

[Eq. 34]

Theorem 2. If the Vt are independently normally distributed, T1/2(SU2,U2 − ΣU2,U2), T1/2SV2⋅1,U2, and T1/2(SV2⋅1,V2⋅1 − I) have a limiting normal distribution with means 0, 0, and 0 and covariances

[display equations]
[Eq. 35]

Let L2,t−1 = M2U2,t−1(= Ξ′Υ22X2,t−1). Then 34 becomes

[display equations]

The covariances of the limiting normal distribution of vec SV2⋅1,V2⋅1, vec SV2⋅1,L2 = (M2 ⊗ I) vec SV2⋅1,U2, and vec SL2,L2 = (M2 ⊗ M2) vec SU2,U2 are found from Theorem 2. We write the transform of 35 as

[Eq. 36]

where

[Eq. 37]

Let H22 = (M2′)−1Ξ−1G22 [= Ξ−1(Υ22′)−1G22]. Then Q22∗G22 = SX̄2∗,X̄2∗G22R222 and G22′SX̄2∗,X̄2∗G22 = I transform to

[Eq. 38]

Because (SX̄∗,ΔX∗SΔX∗,ΔX∗−1SΔX∗,X̄∗)22 →p ΘR2∗2 and SL2,L2 →p Θ, the probability limits of 38 and hii > 0 imply H22 →p Θ−1/2.

Define H22∗ = T1/2(H22 − Θ−1/2) and R̄22 = T1/2(R222 − R2∗2). Then we can write 38 as

[Eq. 39]
[Eq. 40]

where

[display equations]

Lemma 4.

[Eq. 41]
[display equations]

Lemma 5.

[Eq. 42]
[display equations]

Proof of Lemma 5: We use the facts that M2 = Δ2 − I, M2J(k) = I(k)Δ̃ − I(k) = I(k)(Δ̃ − I), and (I + K)K = I + K. Then the left-hand side of 42 is

[display equations]

which is the right-hand side of 42.▪

Theorem 3. If the Zt are normally distributed and the roots of 18 are distinct,

[Eq. 43]

Proof: Theorem 3 follows from Theorem 2, 37, 41, 42, and the transpose of 42, together with a fact expressing K(R2∗2 ⊗ R2∗2) in terms of Θ.▪

Note that 43 is equation 6.14 of ref. 2 with Φ+ replacing Φ.

Let Ē = ∑i=1k ɛi(ɛi′ ⊗ ɛi′), where ɛi is the k-vector with 1 in the ith position and 0s elsewhere. The matrix Ē has 1 in the ith row and (i, i)th column and 0s elsewhere. Define r2∗ = (rn+12, … , rp2)′. Then

[display equation]

The matrix Ē has the effect of selecting the (i, i)th elements of a k × k matrix and placing them in the positions of r2∗.

Theorem 4. If the Zt vectors are independently normally distributed and the roots of 18 are distinct, the limiting distribution of r2∗ is normal with mean 0 and covariance matrix

[Eq. 44]

In terms of the components of r2∗, the asymptotic covariance of ri2 and rj2 is given by the corresponding element of 44; here φii,jj+ denotes the element in the ith row of the ith block of rows and the jth column of the jth block of columns in Φ+.

We now derive the limiting distribution of H22∗ = H22∗D + H22∗O, where H22∗D = diag(h11∗, … , hkk∗) is the diagonal part and H22∗O the off-diagonal part. From vec H22∗R2∗2 = (R2∗2 ⊗ I) vec H22∗ and vec R2∗2H22∗ = (I ⊗ R2∗2) vec H22∗ we obtain vec(H22∗R2∗2 − R2∗2H22∗) = N vec H22∗ = N vec H22∗O, where

[display equations]

The Moore–Penrose generalized inverse of N (denoted N+) has a 0 where N has a 0 and has (ρi2 − ρj2)−1 where N has (ρi2 − ρj2), i ≠ j. Note that NN+ = (I ⊗ I) − E, where E = ∑i=1k (ɛi ⊗ ɛi)(ɛi′ ⊗ ɛi′). The k2 × k2 matrix E is idempotent of rank k; the k2 × k2 matrix NN+ is idempotent of rank k2 − k; and E is orthogonal to N and N+.
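The structure of N, N+, and E can be verified directly (the ρi2 values below are invented; distinctness is all that matters, and the column-major vec convention is used):

```python
import numpy as np

rho2 = np.array([0.2, 0.5, 0.8])        # distinct rho_i^2 (assumed values)
k = rho2.size
R2 = np.diag(rho2)
I = np.eye(k)
N = np.kron(R2, I) - np.kron(I, R2)     # vec(H R^2 - R^2 H) = N vec H
N_plus = np.linalg.pinv(N)              # Moore-Penrose generalized inverse
# E selects the diagonal positions of a vec'd k x k matrix
E = np.zeros((k * k, k * k))
for i in range(k):
    E[i * k + i, i * k + i] = 1.0
```

N is diagonal with entries ρj2 − ρi2, which vanish exactly on the k diagonal positions; pinv therefore inverts the nonzero entries and leaves the rest at 0, giving NN+ = I − E.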

From 39 we obtain vec H22∗O = N+(Θ−1/2 ⊗ Θ−1) vec P. From 40 we find the diagonal part H22∗D and the corresponding vec H22∗D.

Theorem 5. If the Zt vectors are independently normally distributed and the roots of 18 are distinct, vec H22∗O and vec H22∗D have a limiting normal distribution with means 0 and 0 and covariances

[display equations]

and

[display equation]

respectively.

From G22 = Υ22′ΞH22 we can transform Theorem 5 into the asymptotic covariances of vec G22 = (I ⊗ Υ22′Ξ) vec H22.

References

1. Anderson T W. Ann Math Stat. 1951;22:327–351.
2. Anderson T W. Proc Natl Acad Sci USA. 2000;97:7068–7073. doi: 10.1073/pnas.97.13.7068.
3. Hansen H, Johansen S. Econometrics J. 1999;2:306–333.
4. Johansen S. J Econ Dyn Control. 1988;12:231–254.
5. Anderson T W. Sankhyā. 2001, in press.
6. Johansen S. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford Univ. Press; 1995.
7. Reinsel G C, Velu R P. Multivariate Reduced-Rank Regression. New York: Springer; 1998.
8. Anderson T W. An Introduction to Multivariate Statistical Analysis. 2nd Ed. New York: Wiley; 1984.
9. Anderson T W. J Econometrics. 2001, in press.
10. Anderson T W. The Statistical Analysis of Time Series. New York: Wiley; 1971.
11. Anderson T W. Ann Stat. 2001, in press.
