Abstract
The study of the large-sample distribution of the canonical
correlations and variates in cointegrated models is extended from the
first-order autoregression model to autoregression of any (finite)
order. The cointegrated process considered here is nonstationary in
some dimensions and stationary in some other directions, but the first
difference (the “error-correction form”) is stationary. The
asymptotic distribution of the canonical correlations between the first
differences and the predictor variables as well as the corresponding
canonical variables is obtained under the assumption that the process
is Gaussian. The method of analysis is similar to that used for the
first-order process.
Cointegrated stochastic
processes are used in econometrics for modeling macroeconomic time
series that have both stationary and nonstationary properties. The term
“cointegrated” means that in a multivariate process that appears
nonstationary some linear functions are stationary. Many economic time
series may show inflationary tendencies or increasing volatility, but
certain relationships are not affected by these tendencies. Statistical
inference is involved in identifying these relationships and estimating
their importance.
The family of stochastic processes studied in this paper consists of
vector autoregressive processes of finite order. A vector of
contemporary measures is considered to depend linearly on earlier
values of these measures plus random disturbances or errors. The
dependence may be evaluated by the canonical correlations between the
contemporary values and the earlier values.
The nonstationarity of a process may be eliminated by treating
differences or higher-order differences (over time) of the vectors.
This paper treats processes in which first-order differencing
accomplishes stationarity. The first-order difference is represented as
a linear combination of the first lagged variable and lags of the
difference variable. The stationary linear combinations are the canonical variables corresponding to the nonzero process canonical correlations between the difference variable and the first lagged variable, after accounting for the lagged differences. The number of these correlations is defined as the degree of cointegration.
Statistical inference of the model is based on a sample of
observations; that is, a vector time series over some period of time.
The estimator of the parameters of the original autoregressive model is
a transformation of the estimator of the (stationary) error-correction
form. In the latter, one coefficient matrix is of lower rank (the
degree of cointegration). It is estimated efficiently by the reduced
rank regression estimator introduced by me (1). It depends on the
larger canonical correlations and corresponding canonical vectors. The
smaller correlations are used to determine the rank of this matrix.
Inference is based on the large-sample distribution of these
correlations and variables.
The asymptotic distribution of the canonical correlations and
coefficients of the variates for the first-order autoregressive process
was derived by me (2). The distribution for the higher-order process
(that is, several lags) is obtained in this paper, using similar
algebra. Hansen and Johansen (3) have independently obtained the
asymptotic distribution of the canonical correlations, but by a
different method and expressed in a different form.
The likelihood ratio test for the degree of cointegration that I found
(1) is given in Asymptotic Distribution of the Smaller
Roots; its asymptotic distribution under the null hypothesis was
found by Johansen (4). To evaluate the power of such a test, one needs
to know the distribution or asymptotic distribution of the sample
canonical correlations corresponding to process canonical correlations
different from 0. See ref. 5, for example.
For further background, the reader is referred to Johansen (6) and
Reinsel and Velu (7).
The Model
The general cointegrated model is an autoregressive process
{Y_t} of order m defined by

Y_t = B_1Y_{t−1} + B_2Y_{t−2} + … + B_mY_{t−m} + Z_t, [1]

where Z_t is unobserved with ℰZ_t = 0, ℰZ_tZ′_t = Σ_{ZZ}, and ℰY_{t−i}Z′_t = 0, i = 1, … . Let B(λ) = λ^mI − λ^{m−1}B_1 − … − B_m. If the roots λ_1, … , λ_{pm} of |B(λ)| = 0 satisfy |λ_i| < 1, a stationary process {Y_t} can be defined by 1. If some of the roots are 1, the process will be nonstationary. In this paper, we assume that n (0 < n < p) roots of |B(λ)| = 0 are 1 (λ_1 = … = λ_n = 1), and the other pm − n roots satisfy |λ_i| < 1, i = n + 1, … , pm. The first difference of the process, the
“error-correction” form, is

ΔY_t = ΠY_{t−1} + Π̄Δ̄Y_{t−1} + Z_t. [2]

Here Π = B_1 + … + B_m − I = −B(1), Π_j = −(B_{j+1} + … + B_m), j = 1, … , m − 1, Π̄ = (Π_1, … , Π_{m−1}), and Δ̄Y_{t−1} = (ΔY′_{t−1}, … , ΔY′_{t−m+1})′.
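As a check of these definitions (a routine rearrangement): substituting Y_{t−j} = Y_{t−1} − (ΔY_{t−1} + … + ΔY_{t−j+1}) into 1 and collecting terms gives

ΔY_t = (B_1 + … + B_m − I)Y_{t−1} − ∑_{j=1}^{m−1}(B_{j+1} + … + B_m)ΔY_{t−j} + Z_t = ΠY_{t−1} + ∑_{j=1}^{m−1}Π_jΔY_{t−j} + Z_t,

which is 2, since Π̄Δ̄Y_{t−1} = ∑_{j=1}^{m−1}Π_jΔY_{t−j}.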
A sample consists of T observations:
Y1, … , YT.
Because the rank of Π is k, it is to be
estimated by the reduced rank regression estimator introduced by me (1)
as the maximum likelihood estimator when Z1,
… , ZT are normally distributed and
Y0, Y−1, … ,
Y−m+1 are nonstochastic and known. The
matrices Π1, … ,
Πm−1 are unrestricted except for the
condition |λi| < 1, i = n + 1,
… , pm. The estimator depends on the canonical correlations
and vectors of ΔYt and
Yt−1 conditioned on
ΔYt−1, … ,
ΔYt−m+1.
Define

ΔŶ_t = ΔY_t − S_{ΔY,Δ̄Y}S^{−1}_{Δ̄Y,Δ̄Y}Δ̄Y_{t−1},  Ŷ_{t−1} = Y_{t−1} − S_{Ȳ,Δ̄Y}S^{−1}_{Δ̄Y,Δ̄Y}Δ̄Y_{t−1},

where S_{Δ̄Y,Δ̄Y} = T^{−1} ∑ Δ̄Y_{t−1}Δ̄Y′_{t−1}, S_{ΔY,Δ̄Y} = T^{−1} ∑ ΔY_tΔ̄Y′_{t−1}, and S_{Ȳ,Δ̄Y} = T^{−1} ∑ Y_{t−1}Δ̄Y′_{t−1}. The vectors ΔŶ_t and Ŷ_{t−1} are the sample residuals of ΔY_t and Y_{t−1} regressed on Δ̄Y_{t−1}. Define
Ŝ_{ΔY,ΔY} = T^{−1} ∑ ΔŶ_tΔŶ′_t = S_{ΔY,ΔY} − S_{ΔY,Δ̄Y}S^{−1}_{Δ̄Y,Δ̄Y}S_{Δ̄Y,ΔY}, Ŝ_{ΔY,Ȳ} = T^{−1} ∑ ΔŶ_tŶ′_{t−1} = S_{ΔY,Ȳ} − S_{ΔY,Δ̄Y}S^{−1}_{Δ̄Y,Δ̄Y}S_{Δ̄Y,Ȳ}, and Ŝ_{ȲȲ} = T^{−1} ∑ Ŷ_{t−1}Ŷ′_{t−1} = S_{ȲȲ} − S_{Ȳ,Δ̄Y}S^{−1}_{Δ̄Y,Δ̄Y}S_{Δ̄Y,Ȳ}, where S_{ΔY,ΔY} = T^{−1} ∑ ΔY_tΔY′_t, S_{ΔY,Ȳ} = T^{−1} ∑ ΔY_tY′_{t−1}, and S_{ȲȲ} = T^{−1} ∑ Y_{t−1}Y′_{t−1}.
The sample canonical correlations between ΔŶ_t and Ŷ_{t−1} and the corresponding canonical variates are defined by the Y-coordinate analogs of 7 and 8 below, computed from Ŝ_{ΔY,ΔY}, Ŝ_{ΔY,Ȳ}, and Ŝ_{ȲȲ}. More information on canonical analysis is covered in chapter 12 of ref. 8. One form of the reduced rank regression estimator is Π̂(k) = Ŝ_{ΔY,Ȳ}Γ̂_2Γ̂′_2, where Γ̂_2 = (γ̂_{n+1}, … , γ̂_p) and r_1 < … < r_p.
We shall assume that there are exactly n linearly
independent solutions to ω′B(1) = 0; that is,
ω′Π = 0. Then the rank of Π is p
− n = k and there exists a p × n matrix
Ω1 of rank n such that
Ω′1Π = 0. See Anderson
(9). There is also a p × k matrix
Ω2 of rank k such that
Ω′2Π =
Υ2Ω′2,
where Υ2 (k × k) is nonsingular,
and Ω = (Ω1,
Ω2) is nonsingular.
To distinguish between the stationary and nonstationary coordinates, we make a transformation of coordinates. Define X_t = Ω′Y_t, W_t = Ω′Z_t, and Ψ_j = Ω′B_j(Ω′)^{−1}, j = 1, … , m. Then the process 1 is transformed to

X_t = Ψ_1X_{t−1} + … + Ψ_mX_{t−m} + W_t. [5]

If we define Υ = Ψ_1 + … + Ψ_m − I = Ω′Π(Ω′)^{−1}, Υ_j = −(Ψ_{j+1} + … + Ψ_m) = Ω′Π_j(Ω′)^{−1}, Ῡ = (Υ_1, … , Υ_{m−1}), and Δ̄X_{t−1} = (ΔX′_{t−1}, … , ΔX′_{t−m+1})′, the form 2 is transformed to

ΔX_t = ΥX_{t−1} + ῩΔ̄X_{t−1} + W_t. [6]

Note that Υ = diag(0, Υ_{22}).
Define ΔX̂_t, X̂_{t−1}, S_{Δ̄X,Δ̄X}, S_{ΔX,Δ̄X}, S_{X̄,Δ̄X}, Ŝ_{ΔX,ΔX}, Ŝ_{ΔX,X̄}, and Ŝ_{X̄X̄} in a manner analogous to the definitions in the Y-coordinates.
The reduced rank regression estimator of Υ is based on the canonical correlations and canonical variates between ΔX̂_t and X̂_{t−1} defined by

|Ŝ_{X̄,ΔX}Ŝ^{−1}_{ΔX,ΔX}Ŝ_{ΔX,X̄} − r²Ŝ_{X̄X̄}| = 0, [7]

[Ŝ_{X̄,ΔX}Ŝ^{−1}_{ΔX,ΔX}Ŝ_{ΔX,X̄} − r²Ŝ_{X̄X̄}]g = 0, g′Ŝ_{X̄X̄}g = 1. [8]

The estimator of Υ of rank k is Υ̂(k) = Ŝ_{ΔX,X̄}G_2G′_2, where G_2 = (g_{n+1}, … , g_p) and g_i is the solution for g in 8 when r = r_i, the solution to 7, and r_1 < … < r_p. The rest of this paper is devoted to finding the asymptotic distribution of {g_i, r_i}. Note that Υ̂(k) = Ω′Π̂(k)(Ω′)^{−1}.
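A sketch of why this relation holds, using 7 and 8 and their Y-coordinate analogs (the argument assumes those forms): since X_t = Ω′Y_t, each sample moment matrix transforms as, for example, Ŝ_{X̄X̄} = Ω′Ŝ_{ȲȲ}Ω, so 7 has the same roots as its Y-coordinate analog, and the vectors are related by γ̂_i = Ωg_i with g′_iŜ_{X̄X̄}g_i = γ̂′_iŜ_{ȲȲ}γ̂_i = 1. Hence

Υ̂(k) = Ŝ_{ΔX,X̄}G_2G′_2 = Ω′Ŝ_{ΔY,Ȳ}Ω · Ω^{−1}Γ̂_2Γ̂′_2(Ω′)^{−1} = Ω′Π̂(k)(Ω′)^{−1}.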
The vectors ΔX̂_t = ΔX_t − S_{ΔX,Δ̄X}S^{−1}_{Δ̄X,Δ̄X}Δ̄X_{t−1} and X̂_{t−1} = X_{t−1} − S_{X̄,Δ̄X}S^{−1}_{Δ̄X,Δ̄X}Δ̄X_{t−1} are the residuals of ΔX_t and X_{t−1} regressed on Δ̄X_{t−1}, and
r_p is the maximum correlation between ΔX̂_t and X̂_{t−1}, which is the correlation between ΔX_t and X_{t−1} after taking account of the dependence “explained” by Δ̄X_{t−1}. The canonical correlations are the canonical correlations between (ΔX′_t, Δ̄X′_{t−1})′ and (X′_{t−1}, Δ̄X′_{t−1})′ other than ±1.
The Process
The process {Xt} defined by
5 can be put in the form of the Markov model
(section 5.4, ref. 10). Multiplication of 9 on the left
by
yields a form that includes the error-correction form 6
The first n components of 10 constitute
Here Υ_j has been partitioned into n and k rows and columns. Assume X_{10} = X_{1,−1} = … = 0 and W_{10} = W_{1,−1} = … = 0.
The sum of 11 for t = −∞ to t = s is X_{1s} = ∑_{j=1}^{m−1}[Υ^{11}_jX_{1,s−j} + Υ^{12}_jX_{2,s−j}] + ∑_{t=−∞}^{s}W_{1t}, or
Write 12 as

X_{1s} = Γ ∑_{t=−∞}^{s} W_{1t} + HX̃_s, [13]

where Γ = (I − ∑_{j=1}^{m−1}Υ^{11}_j)^{−1}, Γ^{−1}H is a linear combination of Υ_1, … , Υ_{m−1}, and X̃_s = (X′_{2s}, Δ̄X′_s)′. [The matrix on the left-hand side of 12 is nonsingular because otherwise there would be a linear combination of the right-hand side identically 0.] The right-hand side of 13 is the sum of a stationary process and a random walk (∑ W_{1t}).
The last pm − n = k + p(m − 1) components of 10 constitute a stationary process satisfying

X̃_t = Υ̃X̃_{t−1} + W̃_t, [14]

where X̃′_t = (X′_{2t}, Δ̄X′_t), W̃′_t = (W′_{2t}, W′_t, 0), and Υ̃ consists of the last pm − n rows and columns of the coefficient matrix in 10. Note that the first n columns and last pm − n rows of that matrix consist of 0s. Because the eigenvalues of Υ̃ are less than 1 in absolute value (9), X̃_t = ∑_{s=0}^{∞} Υ̃^sW̃_{t−s}, ℰX̃_tX̃′_t = Σ̃ = ∑_{s=0}^{∞} Υ̃^sΣ̃_{WW}(Υ̃′)^s, and ℰX̃_tX̃′_{t−h} = Υ̃^hΣ̃. The covariance Σ̃ satisfies

Σ̃ = Υ̃Σ̃Υ̃′ + Σ̃_{WW}. [15]

Given Υ̃ and Σ̃_{WW}, 15 can be solved for Σ̃ [Anderson (10), section 5.5]. Further we write 13 as X_{1t} = Γ ∑_{s=0}^{∞} W_{1,t−s} + H ∑_{s=0}^{∞} Υ̃^sW̃_{t−s}.
Then ℰX_{1t}X′_{1t} grows like tΓΣ^{WW}_{11}Γ′ (16), since I − Υ̃^t → I. Here ℰW_{1t}W̃′_t is the corresponding part of the second set of rows in Σ̃_{WW}. Then T^{−1}ℰS^{11}_{X̄X̄} = T^{−2} ∑ ℰX_{1t}X′_{1t} → 2^{−1}ΓΣ^{WW}_{11}Γ′ because ∑_{t=1}^{T} t = T(T + 1)/2.
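The factor 2^{−1} comes from the elementary sum just quoted:

T^{−2} ∑_{t=1}^{T} t ΓΣ^{WW}_{11}Γ′ = [T(T + 1)/(2T²)]ΓΣ^{WW}_{11}Γ′ = [(T + 1)/(2T)]ΓΣ^{WW}_{11}Γ′ → 2^{−1}ΓΣ^{WW}_{11}Γ′.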
Further, define

ΔX°_t = ΔX_t − Σ_{ΔX,Δ̄X}Σ^{−1}_{Δ̄X,Δ̄X}Δ̄X_{t−1},  X°_{t−1} = X_{t−1} − Σ_{X̄,Δ̄X}Σ^{−1}_{Δ̄X,Δ̄X}Δ̄X_{t−1},

where Σ_{ΔX,Δ̄X} = ℰΔX_tΔ̄X′_{t−1}, Σ_{X̄,Δ̄X} = ℰX_{t−1}Δ̄X′_{t−1} depends on t, and Σ_{Δ̄X,Δ̄X} = ℰΔ̄X_{t−1}Δ̄X′_{t−1} does not depend on t. Note that ΔX°_t and X°_{t−1} correspond to ΔX̂_t and X̂_{t−1} with S_{ΔX,Δ̄X}, S_{X̄,Δ̄X}, and S_{Δ̄X,Δ̄X} replaced by Σ_{ΔX,Δ̄X}, Σ_{X̄,Δ̄X}, and Σ_{Δ̄X,Δ̄X}, respectively. Then 6 can be written as a regression model (17) in which X_{t−1} is replaced by X°_{t−1} and ℰX°_{t−1}W′_t = 0. Note that this model has the form of 2.10 in Anderson (2).
From 16 and 17 we calculate the corresponding process covariance matrices. The process analogs of 7 and 8 (18 and 19) define the process canonical correlations and variates in the X-coordinates.
Sample Statistics
The canonical correlations and vectors depend on Ŝ_{ΔX,ΔX}, Ŝ_{ΔX,X̄}, and Ŝ_{X̄X̄}, which in turn depend on the submatrices of S_{X̄X̄}, S_{X̄,Δ̄X}, and S_{Δ̄X,Δ̄X}. The vector X̃_t satisfies the first-order stationary autoregressive model 14. The sample covariance matrices S̃_{XX}, S̃_{WX}, and S_{WW} are consistent estimators of Σ̃, 0, and Σ_{WW}, and √T(S̃_{XX} − Σ̃), √T S̃_{WX}, and √T(S_{WW} − Σ_{WW}) have a limiting normal distribution with means 0 and covariances that have been given in refs. 2 and 11.
Let W(u) be the Brownian motion process defined by T^{−1/2} ∑_{t=1}^{[Tu]} W_t →_w W(u). Define I_{11} as in Anderson (2) and theorem B.12 of Johansen (6), and define J_{j1} correspondingly. Then T^{−1}S^{11}_{X̄X̄} →_d ΓI_{11}Γ′ by 13, T^{−1}S̃_{XX} →_p 0, and the Cauchy–Schwarz inequality.
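Presumably I_{11} and J_{j1} are the standard Brownian-motion functionals used in those sources (an assumption about their form, consistent with the limits stated here):

I_{11} = ∫₀¹ W_1(u)W_1(u)′ du,  J_{j1} = ∫₀¹ W_j(u) dW_1(u)′ (or its transpose, depending on the convention in ref. 2),

where W_1(u) denotes the first n coordinates of W(u).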
We shall find the limit in distribution of
S
from the limit of
S
by using
equation B.20 of theorem B.13 of Johansen (6). A specialization to the
model here is
where W̃(u) = [W′_2(u), W′(u), 0]′. [In theorem B.13, let θ_i = (I, 0), ψ_i = (0, Υ̃^i), ɛ′_t = (W′_{1t}, W̃′_t), Ω = ℰɛ_tɛ′_t, and V_t = X̃_t.] Then

Because {X̃_t} is stationary, T^{−1} ∑ W_tX̃′_{t−1} →_p 0 and
Now we wish to show that ΔX°_t and X°_{t−1} lead to the same asymptotic results as ΔX̂_t and X̂_{t−1}. First note that T^{−1}S^{11}_{X̄X̄} →_d ΓI_{11}Γ′ and T^{−1} times any other sample covariance converges in probability to 0. Hence T^{−1}S°^{11}_{X̄X̄} →_d ΓI_{11}Γ′ and T^{−1}Ŝ^{11}_{X̄X̄} →_d ΓI_{11}Γ′. Because {X̃_t} is stationary, {X°_{2,t−1}} is stationary, and S°^{22}_{X̄X̄} and Ŝ^{22}_{X̄X̄} converge in probability to the same limit. Moreover √T(S̃_{X̄X̄} − Σ̃) has a limiting normal distribution. Expansion of S°^{22}_{X̄X̄} and Ŝ^{22}_{X̄X̄} in terms of the submatrices of S̃_{X̄X̄} shows that S°^{22}_{X̄X̄} and Ŝ^{22}_{X̄X̄} have the same limiting normal distribution. (See Asymptotic Distribution of the Larger Roots.) Finally, T^{−1/2}S°^{12}_{X̄X̄} →_p 0 and T^{−1/2}Ŝ^{12}_{X̄X̄} →_p 0 because S^{12}_{X̄X̄}, S_{X̄,Δ̄X}, S_{Δ̄X,Δ̄X}, and S_{ΔX,Δ̄X}, and hence S°^{12}_{X̄X̄}, have finite limits in distribution.
From 17 we find that
plimT→∞S
=
plimT→∞Ŝ
= Σ
and
where S°_{W,X̄} = S_{W,X̄} − S_{W,Δ̄X}Σ^{−1}_{Δ̄X,Δ̄X}Σ_{Δ̄X,X̄}, which converges in distribution to the right-hand side of 20.
As noted above, S
→dS
, which consists of the first
k rows of the weak limit of
S
. Then
Asymptotic Distribution of the Larger Roots
We now turn to deriving the asymptotic distribution of the k larger roots of |Q⁺ − r²S| = 0 and the associated vectors solving Q⁺g = r²Sg. First we show that the asymptotic distribution of r²_{n+1}, … , r²_p is the same as the asymptotic distribution of the zeros of |Q − r²S|. Then we transform from the X-coordinates to the coordinates of the process canonical correlations and vectors.
Let R̂²_2 = diag(r²_{n+1}, … , r²_p) and G_2 = (G′_{12}, G′_{22})′ consist of the corresponding solutions to Q⁺G_2 = S G_2R̂²_2. The normalization of the columns of G_2 is G′_2S G_2 = I, that is,
The probability limit of 21 shows that G_{12} = O_p(1) and G_{22} = O_p(1). The submatrix equations in Q⁺G_2 = S G_2R̂²_2 can be written as

Because T^{−1}Q →_p 0, T^{−1/2}Q →_p 0, T^{−1/2}S →_p 0, T^{−1}S →_d ΓI_{11}Γ′, and R̂²_2 →_p R²_2 = diag(ρ²_{n+1}, … , ρ²_p), the probability limit of the left-hand side of 22 is 0; this shows that G_{12} →_p 0. Then the asymptotic distribution of G_{22} is the asymptotic distribution of G_{22} defined by

where the elements of R̂²_2 are defined by |Q − r²S| = 0. Note that when G_{12} →_p 0 is combined with 23, we obtain Q G_{22} = S G_{22}R̂²_2 + o_p(T^{−1/2}).
We proceed to find the asymptotic distribution of G_{22} and R̂²_2 defined by 24 in the manner of ref. 2. Let

where W_{2·1,t} = W_{2t} − Σ^{WW}_{21}(Σ^{WW}_{11})^{−1}W_{1t} and ℰW_{2·1,t}W′_{2·1,t} = Σ^{WW}_{22·1} = Σ^{WW}_{22} − Σ^{WW}_{21}(Σ^{WW}_{11})^{−1}Σ^{WW}_{12}. We expand
{Q
−
[Σ
(Σ
)−1Σ
]22}
to obtain
where Λ = Υ_{22}Σ Υ′_{22} + Σ^{WW}_{22·1}. See equation 6.5 of ref. 2.
To express the covariances of the sample matrices, we use the
“vec” notation. For A = (a1,
… an), we define vec A =
(a′1, … ,
a′n)′. The Kronecker product of
two matrices A = (aij) and
B is A ⊗ B =
(aijB). A basic relation is vec
ABC = (C′ ⊗ A) vec
B, which implies vec xy′ = vec x1y′ =
(y ⊗ x) vec 1 = y ⊗
x. Define the commutator matrix K as the (square)
permutation matrix such that vec A′ = K vec
A for every square matrix of the same order as K.
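As a small worked check of these identities in the 2 × 2 case: for x = (x_1, x_2)′ and y = (y_1, y_2)′,

vec xy′ = (x_1y_1, x_2y_1, x_1y_2, x_2y_2)′ = y ⊗ x.

For conformable A, B, C, the jth column of ABC is A B c_j = ∑_i c_{ij}Ab_i (b_i and c_j denoting columns of B and C), and stacking these columns gives vec ABC = (C′ ⊗ A) vec B. The commutation matrix of order 4 has rows e′_1, e′_3, e′_2, e′_4 (e_i the ith unit vector), so K vec A = (a_{11}, a_{12}, a_{21}, a_{22})′ = vec A′ for every 2 × 2 matrix A = (a_{ij}).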
Define C = (I, −Σ_{2,Δ̄X}Σ^{−1}_{Δ̄X,Δ̄X}) and D = [I, −Σ^{WW}_{21}(Σ^{WW}_{11})^{−1}, 0]. Then X°_{2t} = CX̃_t, W_{2·1,t} = DW̃_t, DΣ̃_{WW} = Σ J′(k), CΣ̃ = Σ I′(k), J′(k) = (I, 0, I, 0), I′(k) = (I, 0), Σ = CΣ̃C′, and Σ^{WW}_{22·1} = DΣ̃_{WW}D′.
Theorem 1. If the Wt
are independently normally distributed,
S
,
S
, and
S
have a limiting
normal distribution with means 0, 0, and 0
and covariances
Lemma 1. If X is normally distributed with ℰX = 0 and ℰXX′ = Σ, then ℰ vec XX′(vec XX′)′ = (I + K)(Σ ⊗ Σ) + vec Σ(vec Σ)′. If X and Y are independent, ℰ vec XX′(vec YY′)′ = vec ℰXX′ ⊗ (vec ℰYY′)′, ℰ vec XY′(vec XY′)′ = ℰYY′ ⊗ ℰXX′, and ℰ vec XY′(vec YX′)′ = KℰXX′ ⊗ ℰYY′.
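As a quick check of the third identity: for independent X and Y, vec XY′ = Y ⊗ X, so

ℰ vec XY′(vec XY′)′ = ℰ(Y ⊗ X)(Y′ ⊗ X′) = ℰ(YY′ ⊗ XX′) = ℰYY′ ⊗ ℰXX′,

the last step using the independence of X and Y.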
Proof of Theorem 1: First 26 is equivalent to the first expression in Lemma 1. Next vec S = T^{−1/2} ∑ (X°_{2,t−1} ⊗ W_{2·1,t}) implies 27 because X°_{2,t−1} and W_{2·1,s} are independent for t − 1 ≤ s. Similarly 28 follows. To prove 29, 30, and 31, we use the following lemma.
Lemma 2.
Proof of Lemma 2: We have from X̃_t = Υ̃X̃_{t−1} + W̃_t

Because S̃_{XX} − S̃_{X̄X̄} = (1/T)(X̃_TX̃′_T − X̃_0X̃′_0) and {X̃_t} is a stationary process, S̃_{X̄X̄} in 32 can be replaced by S̃_{XX} + o_p(1). Then Lemma 2 results from 32 and vec Υ̃S̃_{X̄W} = K vec S̃_{WX̄}Υ̃′. ▪
Lemma 3.
Proof of Lemma 3: Write W_{2t} = W_{2·1,t} + Σ^{WW}_{21}(Σ^{WW}_{11})^{−1}W_{1t}. Then

from which the lemma follows. ▪
Proof of Theorem 1 Continued: Then 29
follows from Lemma 2, 26, and 28, and
30 follows from Lemma 2, 27, and
28 for X̃t. To prove
31, use Lemma 2, 26, 27,
and 10 to obtain
Then substitution of Σ̃WW =
Σ̃ − Υ̃Σ̃Υ̃′ in
33 yields 31.▪
Let Ξ be a k × k matrix such that Ξ′(Υ_{22}Σ Υ′_{22})Ξ = Θ and Ξ′Σ^{WW}_{22·1}Ξ = I, where Θ = diag(θ_{n+1}, … , θ_p) = R²_2(I − R²_2)^{−1}, R²_2 = diag(ρ²_{n+1}, … , ρ²_p), and ρ_i is a root of 18 with 0 < ρ_{n+1} < … < ρ_p. Let U_{2t} = Ξ′X_{2t}, V_{2t} = Ξ′W_{2t}, V_{1t} = W_{1t}, Δ_2 = Ξ′(Υ_{22} + I)(Ξ′)^{−1}, M_2 = Ξ′Υ_{22}(Ξ′)^{−1}, Ξ̃ = diag[Ξ, I_{m−1} ⊗ diag(I_n, Ξ)], Δ̃ = Ξ̃′Υ̃(Ξ̃′)^{−1}, Ũ_t = Ξ̃′X̃_t, and C_U = Ξ′C(Ξ̃′)^{−1}. Then {Ũ_t} is generated by Ũ_t = Δ̃Ũ_{t−1} + Ṽ_t, where Ṽ_t = Ξ̃′W̃_t, and U_{2t} satisfies U_{2t} = Δ_2U_{2,t−1} + V_{2t}, ΔU_{2t} = M_2U_{2,t−1} + V_{2t}. Multiplication of 25 on
the left by Ξ′ and right by Ξ yields
Theorem 2. If the Vt
are independently normally distributed,
S
,
S
, and
S
have a limiting
normal distribution with means 0, 0, and
0 and covariances
Let L_{2,t−1} = M_2U_{2,t−1} (= Ξ′Υ_{22}X_{2,t−1}). Then 34 becomes
The covariances of the limiting normal distribution of vec
S
, vec
S
=
(M2 ⊗ I) vec
S
, and
vec S
=
(M2 ⊗ M2) vec
S
are found from
Theorem 2. We write the transform of 35 as
where
Let H_{22} = (M′_2)^{−1}Ξ^{−1}G_{22} [= Ξ^{−1}(Υ′_{22})^{−1}G_{22}].
Then Q G_{22} = S G_{22}R̂²_2 and G′_{22}S G_{22} = I transform to
Because (S S S)_{22} →_p ΘR²_2 and S →_p Θ, the probability limits of 38 and h_{ii} > 0 imply H_{22} →_p Θ^{−1/2}.
Define H^∗ = √T(H_{22} − Θ^{−1/2}) and R̂^{2∗} = √T(R̂²_2 − R²_2). Then we can write 38 as

where
Lemma 4.
Lemma 5.
Proof of Lemma 5: We use the facts that
M2 = Δ2 −
I,
J(k)M2 =
Δ̃I(k) −
I(k) = (Δ̃ −
I)I(k), and (I
+ K)K = I + K.
Then the left-hand side of 42 is
which is the right-hand side of 42.▪
Theorem 3. If the Z_t are independently normally distributed and the roots of 18 are distinct,
Proof: Theorem 3 follows from Theorem 2, 37, 41, 42, and the transpose of 42, and the fact that
K(R
⊗
R
) = [(I −
R
) ⊗
R
]K(Θ ⊗
Θ)[(I − R
)
⊗ R
].▪
Note that 43 is equation 6.14 of ref. 2 with
Φ+ replacing Φ.
Let Ẽ = ∑_i ɛ_i(ɛ′_i ⊗ ɛ′_i), where ɛ_i is the k-vector with 1 in the ith position and 0s elsewhere. The matrix Ẽ has 1 in the ith row and i,ith column and 0s elsewhere. Define r2∗ as the vector of diagonal elements of R̂^{2∗}, that is, r2∗ = √T(r²_{n+1} − ρ²_{n+1}, … , r²_p − ρ²_p)′.
Then
The matrix Ẽ has the effect of selecting the i,ith element of Θ^{−1/2}PΘ^{−1/2} and placing it in the ith position of r2∗.
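For instance, with k = 2 (a small illustrative case), ɛ_1 = (1, 0)′ and ɛ_2 = (0, 1)′, and with vec A = (a_{11}, a_{21}, a_{12}, a_{22})′ for a 2 × 2 matrix A,

Ẽ = ɛ_1(ɛ′_1 ⊗ ɛ′_1) + ɛ_2(ɛ′_2 ⊗ ɛ′_2) = [1 0 0 0; 0 0 0 1],  Ẽ vec A = (a_{11}, a_{22})′,

so Ẽ picks out the diagonal elements, as described.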
Theorem 4. If the Zt
vectors are independently normally distributed and the roots of
18 are distinct, the limiting distribution of
r2∗ is normal with mean 0
and covariance matrix
In terms of the components of r2∗, the asymptotic covariance of the components corresponding to ρ²_i and ρ²_j is 2[(1 − ρ²_i)²φ⁺_{ii,jj}ρ²_j + ρ²_i φ⁺_{jj,ii}(1 − ρ²_j)²]. Here φ⁺_{ii,jj} denotes the element in the ith row of the ith block of rows and the jth column of the jth block of columns in Φ⁺.
We now derive the limiting distribution of H^∗ = H^∗_1 + H^∗_2, where H^∗_2 = diag(h^∗_{11}, … , h^∗_{kk}) is the diagonal part of H^∗ and H^∗_1 is the remainder. From vec H^∗R²_2 = (R²_2 ⊗ I) vec H^∗ and vec R²_2H^∗ = (I ⊗ R²_2) vec H^∗ we obtain vec(H^∗R²_2 − R²_2H^∗) = N vec H^∗ = N vec H^∗_1, where
The Moore–Penrose generalized inverse of N (denoted N⁺) has a 0 where N has a 0 and has (ρ²_i − ρ²_j)^{−1} where N has (ρ²_i − ρ²_j), i ≠ j. Note that NN⁺ = (I ⊗ I) − E, where E = ∑_i (ɛ_i ⊗ ɛ_i)(ɛ′_i ⊗ ɛ′_i). The k² × k² matrix E is idempotent of rank k; the k² × k² matrix NN⁺ is idempotent of rank k² − k; and E is orthogonal to N and N⁺.
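A plausible explicit form of N, inferred from the two vec identities above (an assumption rather than a quotation), is

N = R²_2 ⊗ I − I ⊗ R²_2,

whose nonzero entries are differences of the form ρ²_i − ρ²_j, i ≠ j. With this form NN⁺ = (I ⊗ I) − E and EN = NE = 0, in agreement with the properties just listed.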
From 39 we obtain vec H^∗_1 = N⁺(Θ^{−1/2} ⊗ Θ^{−1})vec P. From 40 we find H^∗_2 = −½Θ^{−1/2} diag S and vec H^∗_2 = −½ EΘ^{−1/2} vec S.
Theorem 5. If the Z_t vectors are independently normally distributed and the roots of 18 are distinct, vec H^∗_1 and vec H^∗_2 have a limiting normal distribution with means 0 and 0 and covariances

and

respectively.
From G_{22} = Υ′_{22}ΞH_{22} we can transform Theorem 5 into the asymptotic covariances of vec G_{22} = (I ⊗ Υ′_{22}Ξ) vec H_{22}.
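Concretely (a routine covariance transformation, with V introduced here only for this remark): if vec H_{22} has asymptotic covariance matrix V, then

asymptotic covariance of vec G_{22} = (I ⊗ Υ′_{22}Ξ) V (I ⊗ Ξ′Υ_{22}),

by the usual rule for linear transformations of asymptotically normal vectors.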
References
1. Anderson, T. W. (1951) Ann. Math. Stat. 22, 327–351.
2. Anderson, T. W. (2000) Proc. Natl. Acad. Sci. USA 97, 7068–7073.
3. Hansen, H. & Johansen, S. (1999) Econometrics J. 2, 306–333.
4. Johansen, S. (1988) J. Econ. Dyn. Control 12, 231–254.
5. Anderson, T. W. (2001) Sankhya, in press.
6. Johansen, S. (1995) Likelihood-Based Inference in Cointegrated Vector Autoregressive Models (Oxford Univ. Press, Oxford).
7. Reinsel, G. C. & Velu, R. P. (1998) Multivariate Reduced-Rank Regression (Springer, New York).
8. Anderson, T. W. (1984) An Introduction to Multivariate Statistical Analysis (Wiley, New York), 2nd Ed.
9. Anderson, T. W. (2001) J. Econometrics, in press.
10. Anderson, T. W. (1971) The Statistical Analysis of Time Series (Wiley, New York).
11. Anderson, T. W. (2001) Ann. Stat., in press.