Author manuscript; available in PMC: 2017 Aug 28.
Published in final edited form as: Neural Comput. 2016 Mar 4;28(5):914–949. doi: 10.1162/NECO_a_00828

State-Space Analysis of Granger-Geweke Causality Measures with Application to fMRI

Victor Solo 1
PMCID: PMC5572774  NIHMSID: NIHMS873505  PMID: 26942749

Abstract

The recent interest in the dynamics of networks and the advent, across a range of applications, of measuring modalities that operate on different temporal scales have put the spotlight on some significant gaps in the theory of multivariate time series. Fundamental to the description of network dynamics is the direction of interaction between nodes, accompanied by a measure of the strength of such interactions. Granger causality and its associated frequency domain strength measures (GEMs) (due to Geweke) provide a framework for the formulation and analysis of these issues. In pursuing this setup, three significant unresolved issues emerge. First, computing GEMs involves computing submodels of vector time series models, for which reliable methods do not exist. Second, the impact of filtering on GEMs has never been definitively established. Third, the impact of downsampling on GEMs has never been established. In this work, using state-space methods, we resolve all these issues and illustrate the results with some simulations. Our analysis is motivated by some problems in fMRI brain imaging, to which we apply it, but it is of general applicability.

1 Introduction

Following the operational development of the notion of (temporal) causality by Granger (1969) and Sims (1972), Granger causality (GC) analysis has become an important part of time series and econometric testing and inference (Hamilton, 1994). It has also been applied in the biosciences (Kaminski & Blinowska, 1991; Bernasconi & Konig, 1999; Ding, Bressler, Yang, & Liang, 2000), climatology (global warming; Suna & Wang, 1996; Kaufmann & Stern, 1997; Triacca, 2005), and, most recently, functional magnetic resonance imaging (fMRI). This last application has stimulated this letter.

Since its introduction into fMRI (Goebel, 2003; Valdes-Sosa, 2004; Roebroeck, Formisano, & Goebel, 2005; Yamashita, Sadato, Okada, & Ozaki, 2005), GC has become the subject of an intense debate (see Valdes-Sosa, Roebroeck, Daunizeau, & Friston, 2011; Roebroeck, Formisano, & Goebel, 2011; and associated commentary). There are two main issues in that debate that occur more widely in dynamic networks. First is the impact of downsampling on GC. In the fMRI neuroimaging application, causal processes may operate on a timescale on the order of tens of milliseconds, whereas the recorded signals are typically available only on a 1 second timescale, although sampling up to a 100 ms rate is now becoming possible (Feinberg & Yacoub, 2012). So it is natural to wonder if GC analysis on a slower timescale can reveal GC structure (GCS) on a faster timescale. Second is the impact of filtering on GC due to the hemodynamic response function, which relates neural activity to the recorded fMRI signal. Since intuitively GC will be sensitive to time delay, the variability of the hemodynamic response function (HRF), particularly spatially varying delay (called time to onset in the fMRI literature) and time to peak (confusingly sometimes called delay in the fMRI literature), has been suggested as a potential source of problems (Deshpande, Sathian, & Hu, 2010; Handwerker, Gonzalez-Castillo, D'Esposito, & Bandettini, 2012). A referee has, however, pointed out that following David et al. (2008), deconvolution (Glover, 1999) has gained favor (Havlicek, Friston, Jan, Brazdil, & Calhoun, 2011) as a potential means of dealing with this.

An important advance in GC theory and tools was made by Geweke (1982), who provided measures of the strength of causality or influence of one time series on another (henceforth called GEM, for Geweke causality measure), including frequency domain decompositions of them. Subsequently it was pointed out that the GEMs are measures of directional mutual information (Rissanen & Wax, 1987). The GEMs were extended to conditional causality in Geweke (1984). However, GEMs have not found as wide an application as they should have, partly because of some technical difficulties in calculating them, which are discussed further below. But GEMs (and their frequency domain versions) are precisely the tool needed to pursue both the GC downsampling and filtering questions.

In the econometric literature, it was appreciated early that downsampling, especially in the presence of aggregation, could cause problems. This was implicit in the work of Sims (1971) and mentioned also in the work of Christiano and Eichenbaum (1987), who gave an example of contradictory causal analysis based on monthly versus quarterly data, and also discussed in Marcet (1991). But precise general conditions under which problems do and do not arise have never been given. We do so in this letter.

Some of the econometric discussion has been framed in terms of sampling of continuous time models (Sims, 1971; Marcet, 1991; Christiano & Eichenbaum, 1987). And authors such as Sims (1971) have suggested that models are best formulated initially in continuous time. While this is a view that I have long shared (and one related to the modeling approach advocated by Middleton and Goodwin, 1990), we deal with only discrete time models here. To cast our development in terms of continuous time models would require a considerable development of its own without changing our basic message.

The issue at stake, in its simplest form, is the following. Suppose that a pair of (possibly vector) processes exhibit a unidirectional GC relation (a influences b, but b does not influence a), but suppose measurement time series are available only at a slower timescale, or as filtered time series, or both. Then two questions arise. The first, which we call the forward question, is this: When is the unidirectional GC relation preserved? The second, which we call the reverse question, is harder. Suppose the measurement time series exhibit a unidirectional GC relation. Does that mean the underlying unfiltered faster timescale processes do? The latter question is the more important and so far has received no theoretical attention. The more general form of this issue is to what extent the GC structure (GCS) between two time series is distorted by downsampling, filtering, or both.

In order to resolve these issues, we need to develop some theory and some computational and modeling tools. First, to compute GEMs, one needs to be able to find submodels from a larger model (i.e., one having more time series); computing the GEMs between time series xt, yt requires the submodels for xt and for yt implied by the joint model. Geweke (1982, 1984) attempted to avoid this by fitting models separately to xt and to yt and then also fitting a joint model to xt, yt. Unfortunately this can generate negative values for some of the frequency domain GEMs (Chen, Bressler, & Ding, 2006). Properly computing submodels will resolve this problem, and previous work has not accomplished this (we discuss the attempts in Dufour & Taamouti, 2010, and Chen et al., 2006, below).

Second, one needs to be able to compute how models transform when they are downsampled. This has been done only in special cases (Pandit & Wu, 1983) or by methods that are not computationally realistic. We provide computationally reliable, state-space-based methods for doing this here.

Third, we need to study the effect of downsampling and filtering on GEMs. We will show how to use the newly developed tools to do this.

To sum up, we can say that previous discussions, including those above, as well as Geweke (1978), Telser (1967), Sims (1971), and Marcet (1991), fail to provide general algorithms for finding submodels or models induced by downsampling. Indeed both these problems have remained open problems in multivariate time series in their own right for several decades, and we resolve them here. Further, there does not seem to have been any theoretical discussion of the effect of filtering on GEMs, and we resolve that here also. To do that, it turns out that state-space models provide the proper framework.

Throughout this work, we deal with the dynamic interaction between two vector time series. It is well known in econometrics that if there is a third vector time series involved in the dynamics but not accounted for, then spurious causality can occur for reasons that have nothing to do with downsampling, a situation discussed by Hsiao (1982) and Geweke (1984). Other causes of spurious causality such as observation noise are also not discussed. Of course, the impact of downsampling in the presence of a third (vector) variable is also of interest but will be pursued elsewhere.

Finally, our whole discussion is carried out in the framework of linear time series models. It is of great interest to pursue nonlinear versions of these issues, but that would be a major separate task.

The remainder of the letter is organized as follows. In section 2, we review and modify some state-space results important for system identification or model fitting and needed in the following sections. In section 3, we develop state-space methods for computing submodels of innovations state-space models. In section 4, we develop methods for transforming state-space models under downsampling. In section 5, we review GC and GEMs and extend them to a state-space setting. In section 6, we study the effect of filtering on GC via frequency domain GEMs. In section 7, we offer a theory to explain when GC is preserved under downsampling. In section 8, we discuss the reverse GC problem, showing how spurious causality can be induced by downsampling. In section 9, we set up the framework for the application of these results to fMRI. This includes discussion of the phase properties of HRFs as well as the impact of deconvolution on the assessment of GC. Conclusions are in section 10. There are three appendixes.

1.1 Acronyms and Notation

GC is Granger causality or Granger causes; GCS is Granger causal structure; dn-gc is does not Granger cause; GEM is Geweke causality measure. HRF is hemodynamic response function; SS is state space or state-space model; ISS is innovations state-space model; UC is unit circle; VAR is vector autoregression; VARMA is vector autoregressive moving average process; wp1 is with probability 1. LHS denotes left-hand side. The lag or backshift operator is denoted $L$ or $z^{-1}$; thus, if $x_t$ is a signal, then $Lx_t = z^{-1}x_t = x_{t-1}$. $L$ or $z^{-1}$ are causal operators since they operate on the past. $L^{-1}$ or $z$ are noncausal operators since they operate on the future (so $L^{-1}x_t = zx_t = x_{t+1}$). The causal filter $h(L) = \sum_{r=0}^{\infty} h_r L^r$ is sometimes denoted $h$, and its noncausal reflection is $\bar h = \bar h(L) = h(L^{-1})$.

$X_a^b$ denotes the values $x_a, x_{a+1}, \ldots, x_b$, so $X_a^a \equiv x_a$. For stationary processes, we have $a = -\infty$ and write $X^b$. If $M, N$ are positive semidefinite matrices, then $M \ge N$ means $M - N$ is positive semidefinite. A stable square matrix is one whose eigenvalues all have modulus < 1.

2 State Space

The computational methods we develop rely on state-space techniques and spectral factorization, the latter being intimately related to the steady-state Kalman filter. In this section, we review and modify some basic results in state-space theory, Kalman filtering, and spectral factorization. Our discussion deals with two vector time series, collected as $z_t = (x_t^T, y_t^T)^T$.

2.1 State-Space Models

We consider a general constant parameter SS model,

$$\xi_{t+1} = A\xi_t + w_t, \qquad z_t = C\xi_t + v_t, \tag{2.1}$$

with positive semidefinite noise covariance, $\mathrm{var}\begin{pmatrix} w_t \\ v_t \end{pmatrix} = \begin{pmatrix} Q & S \\ S^T & R \end{pmatrix}$. We refer to this as an SS model with parameters $(A, C, [Q, R, S])$.

It is common with SS models to take S = 0, but for equivalence between the class of VARMA models and the class of state-space models, it is necessary to allow S ≠ 0.

Now by matrix partitioning, $\left|\begin{pmatrix} Q & S \\ S^T & R \end{pmatrix}\right| = |R|\,|Q_s|$, where $Q_s = Q - SR^{-1}S^T$. So we introduce:

  • N: Noise condition. $R$ is positive definite.

N implies that $Q_s$ is positive semidefinite.

2.2 Steady-State Kalman Filter, Innovations State-Space Models, and the Discrete Algebraic Riccati Equation

We now recall the Kalman filter for mean square estimation of the unobserved state sequence ξt from the observed time series zt. It is given by Kailath, Sayeed, and Hassibi (2000, theorem 9.2.1),

$$\hat\xi_{t+1} = A\hat\xi_t + K_te_t, \qquad e_t = z_t - C\hat\xi_t, \quad\text{or}\quad z_t = C\hat\xi_t + e_t,$$

where $e_t$ is the innovations sequence of variance $V_t = R + CP_tC^T$, $K_t = (AP_tC^T + S)V_t^{-1}$ is the Kalman gain sequence, and $P_t$ is the state error variance matrix generated from the Riccati equation, $P_{t+1} = AP_tA^T + Q - K_tV_tK_t^T$.

The Kalman filter is a time-varying filter, but we are interested in its steady state. If there is a steady state (i.e., $P_t \to P$ as $t \to \infty$), then the limiting state error variance matrix $P$ will obey the so-called discrete algebraic Riccati equation (DARE),

$$P = APA^T + Q - KVK^T, \tag{2.2}$$

where $V = R + CPC^T$ and $K = (APC^T + S)V^{-1}$ is the corresponding steady-state Kalman gain. With some clever algebra (Kailath et al., 2000), the DARE can be rewritten (the Riccati equation can be similarly rewritten) as

$$P = A_sPA_s^T + Q_s - K_sVK_s^T,$$

where $A_s = A - SR^{-1}C$ and $K_s = A_sPC^TV^{-1}$.

We now introduce two assumptions.

  • St: Stabilizability condition. $(A_s, Q_s^{1/2})$ is stabilizable (see appendix A).

  • De: Detectability condition. $(A_s, C)$ is detectable.

In appendix A, it is shown that De is equivalent to $(A, C)$ being detectable. It also holds automatically if $A$ is stable.

The resulting steady-state Kalman filter can be written as

$$\hat\xi_{t+1} = A\hat\xi_t + K\varepsilon_t, \qquad z_t = C\hat\xi_t + \varepsilon_t, \tag{2.3}$$

where $\varepsilon_t$ is the steady-state innovation process with variance matrix $V$, and $K$ is the steady-state Kalman gain. This steady-state filter provides a new state-space representation of the data sequence. We refer to it as an innovations state-space (ISS) model with parameters $(A, C, K, V)$. We summarize this:

Result 1

Given the SS model, equation 2.1, with parameters (A, C, [Q, R, S]), then provided N, St, De hold:

  1. The corresponding ISS model, equation 2.3, with parameters (A, C, K, V), can be found by solving the DARE, equation 2.2, which has a unique positive-definite solution P.

  2. $V$ is positive definite, $(A, C)$ is detectable, and $A - KC$ is stable, so that $(A, K)$ is controllable.

Proof

See appendix A.

Remarks

  1. Henceforth an ISS model with parameters $(A, C, K, V)$ will be required to have $V$ positive definite, $(A, C)$ detectable, and $(A, K)$ controllable, so that $A - KC$ is stable.

  2. It is well known that any VARMA model can be represented as an ISS model and vice versa (Solo, 1986; Hannan & Deistler, 1988).

  3. Note that the ISS model with parameters $(A, C, K, V)$ can also be written as the SS model with parameters $(A, C, [KVK^T, V, KV])$.

  4. The DARE is a quadratic matrix equation, but it can be solved using the (numerically reliable) DARE command in Matlab as follows. Compute $[P, L_0, G] = \mathrm{DARE}(A^T, C^T, Q, R, S, I)$ and then $V = R + CPC^T$, $K = G^T$ (a sketch follows these remarks).

  5. Note that stationarity is not required for this result.
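As a minimal Matlab sketch of remark 4 (function and variable names are ours; dare is from the Control System Toolbox):

```matlab
% file: ss2iss.m  (a sketch)
% Convert an SS model (A, C, [Q, R, S]) to its ISS form (A, C, K, V)
% via the DARE, per result 1 and remark 4.
function [K, V, P] = ss2iss(A, C, Q, R, S)
    n = size(A, 1);
    [P, ~, G] = dare(A', C', Q, R, S, eye(n));  % solves the DARE (2.2)
    V = R + C * P * C';                         % innovations variance
    K = G';                                     % steady-state Kalman gain
end
```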

2.3 Stationarity and Spectral Factorization

Given an ISS model with parameters (A, C, B, Σε), we now introduce

  • Assumption Ev: Eigenvalue stability condition: $A$ has all eigenvalues with modulus < 1; that is, $A$ is a stability matrix.

With this assumption we can obtain an infinite vector moving average (VMA) representation, an infinite vector autoregressive (VAR) representation, and a spectral factorization.

Before stating these results, we need a definition:

  • Definition. Minimum phase transfer function. A matrix H(L) of causal transfer functions is said to be minimum phase (MP) if its inverse exists and is causal and stable. If H(L) is not minimum phase, it is called nonminimum phase (NMP).

Examples

  1. Causal and stable. $H(L) = 1 - aL$, $|a| < 1$, has inverse $H^{-1}(L) = \frac{1}{1-aL} = \sum_{r=0}^{\infty}a^rL^r$, which follows from summing the geometric series. Then $H^{-1}(L)$ is clearly causal and is stable since its pole $z_p = L_p^{-1} = a$ has modulus < 1 (i.e., it lies inside the unit circle). Another way to state this is to say that $H(L)$ is MP because its zero $z_o = a$ has modulus < 1 (i.e., it lies inside the unit circle, UC).

  2. Causal but not stable. $H(L) = 1 - \frac{1}{\theta}L$, $0 < \theta < 1$, is not minimum phase since $H^{-1}(L) = \frac{1}{1 - \frac{1}{\theta}L}$ is not stable, its pole $z_p = L_p^{-1} = \frac{1}{\theta}$ having modulus > 1. If we rewrite $H(L) = -\frac{1}{\theta}L(1 - \theta L^{-1})$, then $H^{-1}(L) = \frac{-\theta L^{-1}}{1 - \theta L^{-1}} = -\theta L^{-1}\sum_{r=0}^{\infty}\theta^rL^{-r}$. This is stable but not causal since the expansion is in terms of the future. Again we can rephrase this example to say $H(L)$ is NMP because its zero $z_o = \frac{1}{\theta}$ is outside the unit circle.

  3. Not causal but stable. $H(L) = L(1 - aL)$, $|a| < 1$, is nonminimum phase since $H^{-1}(L) = \frac{L^{-1}}{1 - aL}$ is not causal. Note that $H^{-1}(L)$ is stable. We can rephrase this as follows. $H(L)$ has two zeros: one is inside the unit circle, and the other is at $\infty$ (i.e., $z = L^{-1} = \infty$ makes $H(L)$ vanish and so is outside the unit circle).

For further discussion of minimum phase filters, see Green (1988) and Solo (1986).

The following result is based on Kailath et al. (2000, theorem 8.3.2) and the surrounding discussion.

Result 2

For the ISS model $(A, C, B, \Sigma_\varepsilon)$ obeying condition Ev, we have:

  1. Infinite VMA or Wold decomposition:
    $$z_t = H(L)\varepsilon_t = (C(L^{-1}I - A)^{-1}B + I)\varepsilon_t = (C(I - AL)^{-1}BL + I)\varepsilon_t. \tag{2.4}$$
  2. Infinite VAR representation:
    $$\varepsilon_t = G(L)z_t = [I - C(L^{-1}I - A + KC)^{-1}K]z_t. \tag{2.5}$$
  3. Spectral factorization. Put $L = e^{-j\lambda}$; then $z_t$ has positive-definite spectrum with spectral factorization as follows:
    $$f_Z(\lambda) = [C(L^{-1}I - A)^{-1},\ I]\begin{bmatrix} Q & S \\ S^T & R \end{bmatrix}\begin{bmatrix}(LI - A^T)^{-1}C^T \\ I \end{bmatrix} = H(L)\Sigma_\varepsilon H^T(L^{-1}). \tag{2.6}$$
  4. H(L) is minimum phase.

Proof

  1. Write equation 2.3 in operator form. The series is convergent wp1 and in mean square since $A$ is stable.

  2. Rewrite equation 2.3 as $\hat\xi_{t+1} = (A - KC)\hat\xi_t + Kz_t$, $\varepsilon_t = z_t - C\hat\xi_t$. Then write this in operator form. The series is convergent wp1 and in mean square since $A - KC$ is stable and $z_t$ is stationary.

  3. This follows from standard formulas for spectra of filtered stationary time series applied to part 1.

  4. From parts 1 and 2, $G(L) = H^{-1}(L)$, and by part 2, $G(L)$ is causal and stable, and the result follows.

Remarks

  1. Result 2 is a special case of a general result that given a full-rank multivariate spectrum $f_Z(\lambda)$, there exists a unique causal stable minimum phase spectral factor $H(L)$ with $H(0) = I$ and positive-definite innovations variance matrix $\Sigma_\varepsilon$ such that equation 2.6 holds (Hannan & Deistler, 1988; Green, 1988). In general $\det H(L)$ may have some roots on the unit circle (Hannan & Poskitt, 1988; Green, 1988), but the assumptions in result 2 rule this case out. Such roots mean that some linear combinations of $z_t$ can be perfectly predicted from the past (Hannan & Poskitt, 1988; Green, 1988), something that is not realistic in the fMRI application.

  2. Result 2 is also crucial from a system identification or model-fitting point of view. From that point of view, all we can know (from second-order statistics) is the spectrum, and so if, naturally, we want a unique model, the only model we can obtain is the causal stable minimum phase model: the ISS model. The standard approach to SS model fitting is the so-called state-space subspace method (Deistler, Peternell, & Scherer, 1995; Bauer, 2005), and indeed it delivers an ISS model. The alternative approach of fitting a VARMA model (Hannan & Deistler, 1988; Lutkepohl, 1993) is equivalent to getting an ISS model.

  3. We need result 1, however, since when we form submodels, we do not immediately get an ISS model; rather, we must compute it.
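As a minimal numerical illustration of result 2, part 4 (all parameter values here are ours): by part 2, $G(L) = H^{-1}(L)$ is causal and stable exactly when $A - KC$ is stable, so minimum phase can be checked from eigenvalues:

```matlab
% Sketch: minimum phase check for a small ISS model (A, C, K, V).
A = [0.5, 0.2; 0, 0.3];  C = eye(2);  K = [0.4, 0; 0.1, 0.2];
max(abs(eig(A)))        % condition Ev: should be < 1
max(abs(eig(A - K*C)))  % < 1 here, so G(L) = H^{-1}(L) is stable: H(L) is MP
```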

3 Submodels

Our computation of causality measures requires that we compute induced submodels. In this section, we show how to obtain an ISS submodel from the ISS joint model.

We partition $z_t = (x_t^T, y_t^T)^T$ into subvector signals of interest and partition the state-space model correspondingly: $C = \begin{pmatrix} C_X \\ C_Y \end{pmatrix}$ and $B = (B_X, B_Y)$. We first read out an SS submodel for $x_t$ from the ISS model for $z_t$. We have simply $\xi_{t+1} = A\xi_t + w_t$, $x_t = C_X\xi_t + \varepsilon_{X,t}$, where $w_t = B\begin{pmatrix}\varepsilon_{X,t} \\ \varepsilon_{Y,t}\end{pmatrix} = B_X\varepsilon_{X,t} + B_Y\varepsilon_{Y,t}$. We need to calculate the covariance matrix $\mathrm{var}\begin{pmatrix} w_t \\ \varepsilon_{X,t} \end{pmatrix} = \begin{pmatrix} Q & \bar S \\ \bar S^T & \bar R \end{pmatrix}$. We find $\bar R = \Sigma_{X,\varepsilon}$, $Q = \mathrm{var}(w_t) = B\Sigma_\varepsilon B^T$, and $\bar S = E(w_t\varepsilon_{X,t}^T) = B\begin{pmatrix}\Sigma_{X,\varepsilon} \\ \Sigma_{YX,\varepsilon}\end{pmatrix} = B_o$. This leads to:

Theorem 1

Given the joint ISS model, equation 2.3 or 2.4, for $z_t$, then under condition Ev, the corresponding ISS submodel for $x_t$, $(A, C_X, K_{(X)}, \Omega_X)$ (the bracket notation $K_{(X)}$ is used to avoid confusion with, e.g., $C_X, \Sigma_{X,\varepsilon}$, which are submatrices), can be found by solving the DARE, equation 2.2, with $[Q, \bar R, \bar S] = [B\Sigma_\varepsilon B^T, \Sigma_{X,\varepsilon}, B_o]$.

Proof

First, we note by partitioning that $|\Sigma_\varepsilon| = |\Sigma_{X,\varepsilon}||\Sigma_{(Y|X),\varepsilon}|$, where $\Sigma_{(Y|X),\varepsilon} = \Sigma_{Y,\varepsilon} - \Sigma_{YX,\varepsilon}\Sigma_{X,\varepsilon}^{-1}\Sigma_{XY,\varepsilon}$, so that $\Sigma_{X,\varepsilon}$ and $\Sigma_{(Y|X),\varepsilon}$ are both positive definite. Now we need only check conditions N, St, De of result 1. We need to show that $\bar R = \Sigma_{X,\varepsilon}$ is positive definite, $(A, C_X)$ is detectable, and $(A - \bar S\bar R^{-1}C_X,\ (B\Sigma_\varepsilon B^T - \bar S\bar R^{-1}\bar S^T)^{1/2})$ is stabilizable; in fact, we show it is controllable.

The first is already established. The second follows trivially since A is stable. We use the PBH test (see appendix A) to check the third.

Suppose controllability fails. Then by the PBH test, there exists $q \ne 0$ with $\lambda q^T = q^T(A - \bar S\bar R^{-1}C_X)$ and

$$0 = q^T(B\Sigma_\varepsilon B^T - \bar S\bar R^{-1}\bar S^T)^{1/2} \Rightarrow 0 = q^T(B\Sigma_\varepsilon B^T - \bar S\bar R^{-1}\bar S^T) = q^T\left(B\Sigma_\varepsilon B^T - B\begin{pmatrix}\Sigma_{X,\varepsilon}\\ \Sigma_{YX,\varepsilon}\end{pmatrix}\Sigma_{X,\varepsilon}^{-1}[\Sigma_{X,\varepsilon},\ \Sigma_{XY,\varepsilon}]B^T\right) = q^T(B_X, B_Y)\begin{pmatrix}0 & 0\\ 0 & \Sigma_{(Y|X),\varepsilon}\end{pmatrix}\begin{pmatrix}B_X^T\\ B_Y^T\end{pmatrix} = q^TB_Y\Sigma_{(Y|X),\varepsilon}B_Y^T \Rightarrow 0 = q^TB_Y\Sigma_{(Y|X),\varepsilon}B_Y^Tq \Rightarrow 0 = B_Y^Tqq^TB_Y \Rightarrow q^TB_Y = 0,$$

since $\Sigma_{(Y|X),\varepsilon}$ is positive definite. But then,

$$\lambda q^T = q^T\left(A - (B_X, B_Y)\begin{pmatrix}\Sigma_{X,\varepsilon}\\ \Sigma_{YX,\varepsilon}\end{pmatrix}\Sigma_{X,\varepsilon}^{-1}C_X\right) = q^T\big(A - (B_XC_X + B_Y\Sigma_{YX,\varepsilon}\Sigma_{X,\varepsilon}^{-1}C_X)\big) = q^T(A - B_XC_X).$$

Thus, $(A - B_XC_X, B_Y)$ is not controllable. But this is a contradiction since we can find a matrix, namely $C_Y$, so that $A - B_XC_X - B_YC_Y = A - BC$ is stable.

Remarks

  1. For implementation in Matlab, positive definiteness in constructing $Q$ can be an issue. A simple resolution is to use a Cholesky factorization $\Sigma_\varepsilon = L_\varepsilon L_\varepsilon^T$, form $B_\varepsilon = BL_\varepsilon$, and then form $Q = B_\varepsilon B_\varepsilon^T$.

  2. The $P_{(X)}$ matrix from the DARE, $[P_{(X)}, L_0, G] = \mathrm{DARE}(A^T, C_X^T, B\Sigma_\varepsilon B^T, \Sigma_{X,\varepsilon}, B_o, I)$, obeys $P_{(X)} = AP_{(X)}A^T + B\Sigma_\varepsilon B^T - K_{(X)}\Omega_XK_{(X)}^T$, and then $K_{(X)} = (AP_{(X)}C_X^T + B_o)\Omega_X^{-1}$, $\Omega_X = \Sigma_{X,\varepsilon} + C_XP_{(X)}C_X^T$ (a Matlab sketch follows these remarks).

  3. Dufour and Taamouti (2010) discuss a method for obtaining submodels, but it is flawed. First, it requires computing the inverse of the VAR operator. While this might be feasible (analytically) on a toy example, there is no known numerically reliable way to do this in general (computation of determinants is notoriously ill conditioned). Second, it requires the solution of simultaneous quadratic autocovariance equations to determine VMA parameters, for which no algorithm is given. In fact, these are precisely the equations required for a spectral factorization of a VMA process. There are reliable algorithms for doing this, but given the flaw already revealed, we need not discuss this approach any further.
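As a Matlab sketch of theorem 1 and remarks 1 and 2 (names ours; assumes the Control System Toolbox):

```matlab
% file: iss_submodel.m  (a sketch)
% ISS submodel for the components idx of z_t, from the joint ISS model
% (A, C, B, Sigma), via the DARE of theorem 1 with the Cholesky trick.
function [Ki, Omegai, Pi] = iss_submodel(A, C, B, Sigma, idx)
    n  = size(A, 1);
    Ci = C(idx, :);
    Le = chol(Sigma, 'lower');     % Sigma = Le*Le' (remark 1)
    Be = B * Le;
    Q  = Be * Be';                 % Q = B*Sigma*B'
    R  = Sigma(idx, idx);          % R-bar = Sigma_X
    Bo = B * Sigma(:, idx);        % S-bar = B_o = B*[Sigma_X; Sigma_YX]
    [Pi, ~, G] = dare(A', Ci', Q, R, Bo, eye(n));
    Omegai = R + Ci * Pi * Ci';    % submodel innovations variance Omega_X
    Ki = G';                       % submodel gain K(X)
end
```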

Next we state an important corollary:

Corollary 1

Any submodel is in general a VARMA model, not a VAR. To put it another way, the class of VARMA models is closed under the forming of submodels, whereas the class of VAR models is not.

This means that VAR models are not generic, which is a strong argument against their use. Any vector time series can be regarded as a submodel of a larger-dimensional time series and thus must in general obey a VARMA model. This result (which is well known in time series folklore) is significant for econometrics, where VAR models are in widespread use.

For the next section we need:

Theorem 2

For the joint ISS model, equation 2.3 or 2.4, for $z_t$, with conditions St, De holding and with the induced submodel for $x_t$ given in theorem 1, we have

$$f_X(\lambda) = h_X(e^{-j\lambda})\,\Omega_X\,h_X^T(e^{j\lambda}), \qquad h_X(L) = I + C_X(L^{-1}I - A)^{-1}K_{(X)} = I + LC_X(I - AL)^{-1}K_{(X)},$$
$$\ln|\Omega_X| = \int_{-\pi}^{\pi}\ln|f_X(\lambda)|\,\frac{d\lambda}{2\pi}. \tag{3.1}$$
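A quick numerical check of the log-variance identity in equation 3.1 for a scalar submodel (all parameter values here are ours):

```matlab
% Sketch: verify ln|OmegaX| = integral of ln|f_X| for a scalar ISS submodel.
A = 0.8; CX = 1; KX = 0.5; OmegaX = 2;        % hypothetical submodel values
lam = linspace(-pi, pi, 20001);
hX  = 1 + CX*KX.*exp(-1j*lam)./(1 - A.*exp(-1j*lam));  % h_X(e^{-j*lam})
fX  = real(hX .* OmegaX .* conj(hX));          % f_X(lambda), eq. (3.1)
[log(OmegaX), trapz(lam, log(fX))/(2*pi)]      % the two sides should agree
```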

4 Downsampling

There are two approaches to the problem of finding the model obeyed by a downsampled process: frequency domain and time domain. While the general formula for the spectrum of a downsampled process has long been known, it is not straightforward to use and has not yielded any general computational approach to finding the models of downsampled processes from parameterized spectra.

The spectral formula, however, has made it very clear that downsampling leads to aliasing, so that higher-frequency information is lost. This strongly suggests that GCS will be affected by downsampling.

The most complete (time domain) work on downsampling seems to be that of Pandit and Wu (1983), although they treat only the first- and second-order scalar cases. There is work in the engineering literature for systems with observed inputs, but that is also limited and in any case not helpful here. We follow an SS route.

We begin with the ISS model, equation 2.3. Suppose we downsample the observed signal $z_t$ with sampling multiple $m$. Let $t$ denote the fine timescale and $k$ the coarse timescale, so $t = mk$. The downsampled signal is $\bar z_k = z_{mk}$. To develop the SS model for $\bar z_k$, we iterate the SS model above to obtain

$$\xi_{t+l} = A^l\xi_t + \sum_{i=1}^{l}A^{l-i}B\varepsilon_{t+i-1}.$$

Now set $t = mk$, $l = m$ and denote the sampled signals by $\bar\xi_k = \xi_{mk}$, $\bar z_k = z_{mk}$, $\bar\varepsilon_k = \varepsilon_{mk}$. Then we find

$$\bar\xi_{k+1} = A^m\bar\xi_k + w_k, \qquad \bar z_k = C\bar\xi_k + \bar\varepsilon_k,$$

where $w_k = \sum_{i=1}^{m}A^{m-i}B\varepsilon_{km+i-1}$. We now use result 1 to find the ISS model corresponding to this SS model.

We first have to calculate the model covariances:

$$E(\bar\varepsilon_k\bar\varepsilon_k^T) = \Sigma_\varepsilon = R, \qquad E(w_k\bar\varepsilon_k^T) = A^{m-1}B\Sigma_\varepsilon = S_m, \tag{4.1}$$
$$E(w_kw_k^T) = Q_m, \quad Q_m = \sum_{i=1}^{m}A^{m-i}B\Sigma_\varepsilon B^T(A^T)^{m-i} = \sum_{r=0}^{m-1}A^rB\Sigma_\varepsilon B^T(A^T)^r, \quad Q_m = AQ_{m-1}A^T + B\Sigma_\varepsilon B^T,\ m \ge 2, \quad Q_1 = B\Sigma_\varepsilon B^T. \tag{4.2}$$

We now obtain:

Theorem 3

Given the ISS model, equation 2.3, then under condition Ev, for $m > 1$, the ISS model for the downsampled process $\bar z_k = z_{mk}$ is $(A^m, C, K_m, V_m)$, obtained by solving the DARE with SS model $(A^m, C, [Q_m, R, S_m])$, where $Q_m$ is given in equation 4.2 and $S_m$ is given in equation 4.1.

Proof

Using result 1, we need to show the following: $R$ is positive definite, $(A^m, C)$ is detectable, and $(A^m - S_mR^{-1}C,\ (Q_m - S_mR^{-1}S_m^T)^{1/2})$ is stabilizable. In fact, we show controllability. The first holds trivially; the second does as well since $A$ is stable and thus so is $A^m$. For the third, we use the PBH test.

Suppose controllability fails. Then there is a left eigenvector $q$ (possibly complex) with $\lambda q^T = q^T(A^m - S_mR^{-1}C) = q^TA^{m-1}(A - BC)$ and $q^T(Q_m - S_mR^{-1}S_m^T) = 0 = q^T\sum_{0}^{m-2}A^rB\Sigma_\varepsilon B^T(A^T)^r \Rightarrow q^H\sum_{0}^{m-2}A^rB\Sigma_\varepsilon B^T(A^T)^rq = 0$. Since $\Sigma_\varepsilon$ is positive definite, this delivers $\sum_{0}^{m-2}\|B^T(A^T)^rq\|^2 = 0 \Rightarrow B^T(A^T)^rq = 0$ for $r = 0, \ldots, m-2$.

Using this, we now find $\lambda q^T = q^TA^{m-1}(A - BC) = q^TA^{m-2}(A - BC)^2 + q^TA^{m-2}BC(A - BC) = q^TA^{m-2}(A - BC)^2$, since $q^TA^{m-2}B = 0$. Iterating this yields $\lambda q^T = q^T(A - BC)^m$. Thus, if $\lambda_m$ is an $m$th root of $\lambda$, then $\lambda_mq^T = q^T(A - BC)$. Since $q^TB = 0$, we thus conclude that $(A - BC, B)$ is not controllable. But this is a contradiction since $(A - BC) + BC = A$ is stable.

Remark

  1. In Matlab, we would compute $[P_m, L_0, G_m] = \mathrm{DARE}((A^m)^T, C^T, Q_m, \Sigma_\varepsilon, S_m, I)$, yielding $V_m = \Sigma_\varepsilon + CP_mC^T$ and $K_m = G_m^T$.

  2. More specifically, $P_m$ $(m > 1)$ obeys $P_m = \bar A_mP_m\bar A_m^T + \bar Q_m - \bar K_mV_m\bar K_m^T$, where $\bar A_m = A^m - S_m\Sigma_\varepsilon^{-1}C = A^{m-1}(A - BC)$; $\bar Q_m = Q_m - S_m\Sigma_\varepsilon^{-1}S_m^T = Q_{m-1}$; $V_m = \Sigma_\varepsilon + CP_mC^T$; $\bar K_m = \bar A_mP_mC^TV_m^{-1}$.
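As a Matlab sketch of theorem 3 and remark 1 (names ours; assumes the Control System Toolbox):

```matlab
% file: iss_downsample.m  (a sketch)
% ISS model (A^m, C, Km, Vm) of the m-fold downsampled process, built
% from the joint ISS model (A, C, B, Sigma) via the DARE of theorem 3.
function [Am, Km, Vm, Pm] = iss_downsample(A, C, B, Sigma, m)
    n  = size(A, 1);
    Qm = B * Sigma * B';                    % Q_1
    for r = 2:m
        Qm = A * Qm * A' + B * Sigma * B';  % recursion (4.2)
    end
    Am = A^m;
    Sm = A^(m-1) * B * Sigma;               % S_m, eq. (4.1)
    [Pm, ~, Gm] = dare(Am', C', Qm, Sigma, Sm, eye(n));
    Vm = Sigma + C * Pm * C';
    Km = Gm';
end
```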

5 Granger Causality

In this section, we review and extend some basic results in Granger causality. In particular, we extend GEMs to the state-space setting and show how to compute them reliably.

Since the development of Granger causality, it has become clear (Dufour & Renault, 1998; Dufour & Taamouti, 2010) that in general, one cannot address the causality issue with only one-step-ahead measures as commonly used. One needs to look at causality over all forecast horizons. However, one-step measures are sufficient when one is considering only two vector time series, as we are (Dufour & Renault, 1998, proposition 2.3).

5.1 Granger Causality Definitions

Our definitions of one-step Granger causality naturally draw on Granger (1963, 1969), Sims (1972), and Solo (1986) but are also influenced by Caines (1976), who, drawing on the work of Pierce and Haugh (1977), distinguished between weak and strong GC, or what Caines calls weak and strong feedback free processes. We introduce:

  • Condition WSS: The vector time series xt, yt are jointly second-order stationary.

  • Definition: Weak Granger causality. Under WSS, we say $y_t$ does not weakly Granger-cause (dn-wgc) $x_t$ if, for all $t$,
    $$\mathrm{var}(x_{t+1} \mid X^t, Y^t) = \mathrm{var}(x_{t+1} \mid X^t).$$
    Otherwise, we say $y_t$ weakly Granger-causes (wgc) $x_t$. Because of the elementary identity
    $$\mathrm{var}(X \mid Z) = E[\mathrm{var}(X \mid Z, W) \mid Z] + \mathrm{var}[E(X \mid Z, W) \mid Z] = E[\mathrm{var}(X \mid Z, W) \mid Z] + E[(E(X \mid Z, W) - E(X \mid Z))(E(X \mid Z, W) - E(X \mid Z))^T \mid Z],$$
    the equality of variance matrices in the definition also ensures the equality of predictions: $E(x_{t+1} \mid X^t, Y^t) = E(x_{t+1} \mid X^t)$. This definition agrees with Granger (1969); Caines and Chan (1975), who do not use the designator weak; and Caines (1976) and Solo (1986), who do.
  • Definition: Strong Granger causality. Under WSS, we say $y_t$ does not strongly Granger-cause (dn-sgc) $x_t$ if, for all $t$,
    $$\mathrm{var}(x_{t+1} \mid X^t, Y^{t+1}) = \mathrm{var}(x_{t+1} \mid X^t).$$
    Otherwise we say $y_t$ strongly Granger-causes (sgc) $x_t$. Again, equality of the variance matrices ensures equality of predictions, $E(x_{t+1} \mid X^t, Y^{t+1}) = E(x_{t+1} \mid X^t)$. This definition agrees with Caines (1976) and Solo (1986).
  • Definition: FBI (feedback interconnected). If xt Granger-causes yt and yt Granger-causes xt, then we say xt, yt are feedback interconnected.

  • Definition: UGC (unidirectionally Granger causes). If xt Granger-causes yt but yt dn-gc xt we say xt unidirectionally Granger-causes yt.

5.2 Granger Causality for Stationary State-Space Models

Now we partition zt = (xt, yt)T into subvector signals of interest and partition the vector MA or state-space model, equation 2.4, correspondingly:

$$\begin{bmatrix} x_t \\ y_t \end{bmatrix} = \left[I + \begin{bmatrix} C_X \\ C_Y \end{bmatrix}(L^{-1}I - A)^{-1}(B_X,\ B_Y)\right]\begin{bmatrix}\varepsilon_{X,t} \\ \varepsilon_{Y,t}\end{bmatrix} \tag{5.1}$$
$$= \begin{bmatrix} H_{XX}(L) & H_{XY}(L) \\ H_{YX}(L) & H_{YY}(L)\end{bmatrix}\begin{bmatrix}\varepsilon_{X,t} \\ \varepsilon_{Y,t}\end{bmatrix}, \qquad \Sigma_\varepsilon = \mathrm{var}\begin{bmatrix}\varepsilon_{X,t} \\ \varepsilon_{Y,t}\end{bmatrix} = \begin{bmatrix}\Sigma_{X,\varepsilon} & \Sigma_{XY,\varepsilon} \\ \Sigma_{YX,\varepsilon} & \Sigma_{Y,\varepsilon}\end{bmatrix},$$
$$\begin{bmatrix} H_{XX}(L) & H_{XY}(L) \\ H_{YX}(L) & H_{YY}(L)\end{bmatrix} = \begin{bmatrix} C_X(L^{-1}I - A)^{-1}B_X + I & C_X(L^{-1}I - A)^{-1}B_Y \\ C_Y(L^{-1}I - A)^{-1}B_X & C_Y(L^{-1}I - A)^{-1}B_Y + I\end{bmatrix}. \tag{5.2}$$

Now we recall results of Caines (1976):

Result 3

If $z_t = (x_t^T, y_t^T)^T$ obeys a Wold model of the form $z_t = H_Z(L)\varepsilon_t$, where $H_Z(L)$ is a one-sided square summable moving average polynomial with $H_Z(0) = I$, which is partitioned as in equation 5.2, then:

  1. yt dn-wgc xt iff HXY(L) = 0.

  2. yt dn-sgc xt iff HXY(L) = 0 and ΣXY,ε = 0.

We can now state a new SS version of this result:

Theorem 4

For the stationary ISS model, equations 5.1 and 5.2:

  1. $y_t$ dn-wgc $x_t$ iff $C_XA^rB_Y = 0$, $r \ge 0$.

  2. $y_t$ dn-sgc $x_t$ iff $C_XA^rB_Y = 0$, $r \ge 0$, and $\Sigma_{XY,\varepsilon} = 0$.

Proof

The proof follows immediately from result 3 since $H_{XY}(L) = \sum_{r=0}^{\infty}C_XA^rB_YL^{r+1}$.

Remarks

  1. By the Cayley-Hamilton theorem, we can replace part a of theorem 4 with $C_XA^rB_Y = 0$, $0 \le r \le n - 1$, $n = \dim(\xi_t)$ (a numerical check follows these remarks).

  2. Collecting these equations together gives $C_X(B_Y, AB_Y, \ldots, A^{n-1}B_Y) = 0$, which says that the pair $(A, B_Y)$ is not controllable. Also, we have $B_Y^T(C_X^T, A^TC_X^T, \ldots, (A^T)^{n-1}C_X^T) = 0$, which says that the pair $(C_X, A)$ is not observable. Thus, the representation of $H_{XY}(L)$ is not minimal.
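A minimal numerical version of the check in remark 1 (names and tolerance ours):

```matlab
% Sketch: test the dn-wgc condition of theorem 4 via remark 1,
% i.e. CX * A^r * BY = 0 for r = 0..n-1 (up to a tolerance).
function flag = dn_wgc(A, CX, BY, tol)
    n = size(A, 1);
    flag = true;
    M = BY;
    for r = 0:n-1
        if norm(CX * M) > tol, flag = false; return; end
        M = A * M;               % advance to A^(r+1)*BY
    end
end
% usage, e.g.: dn_wgc(A, CX, BY, 1e-10)
```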

From a data analysis point of view, we need to embed this result in a well-behaved hypothesis test. The results of Geweke (1982), suitably modified, allow us to do this.

5.3 Geweke Causality Measures for SS Models

Although much of the discussion in Geweke (1982) is in terms of VARs, we can show it applies more generally. We begin as Geweke (1982) did with the following definitions. First, $F_{Y\to X} = \ln\frac{|\Omega_X|}{|\Sigma_{X,\varepsilon}|}$ is a measure of the gain in using the past of $y$ to predict $x$ beyond using just the past of $x$; similarly introduce $F_{X\to Y} = \ln\frac{|\Omega_Y|}{|\Sigma_{Y,\varepsilon}|}$. Thus, for example, $F_{Y\to X}$ is a directional measure of the influence of $y_t$ on $x_t$. Next, define the instantaneous influence measure $F_{Y.X} = \ln\frac{|\Sigma_{X,\varepsilon}||\Sigma_{Y,\varepsilon}|}{|\Sigma_\varepsilon|}$. These are then joined in the fundamental decomposition (Geweke, 1982),

$$F_{X\circ Y} = F_{Y\to X} + F_{X\to Y} + F_{Y.X}, \tag{5.3}$$

where $F_{X\circ Y} = \ln\frac{|\Omega_X||\Omega_Y|}{|\Sigma_\varepsilon|}$. Geweke (1982) then proceeds to decompose these measures in the frequency domain. Thus, the frequency domain GEM for the dynamic influence of $y_t$ on $x_t$ is given by Geweke (1982),

$$F_{Y\to X} = \int_{-\pi}^{\pi}f_{Y\to X}(\lambda)\frac{d\lambda}{2\pi}, \quad\text{where}\quad f_{Y\to X}(\lambda) = \ln\frac{|f_X(\lambda)|}{|f_e(\lambda)|}, \tag{5.4}$$

and fe(λ) is assembled (following Geweke, 1982) as follows.

Introduce $W = \Sigma_{YX,\varepsilon}\Sigma_{X,\varepsilon}^{-1}$, and note that $\varepsilon_{Y,t} - W\varepsilon_{X,t}$ is uncorrelated with $\varepsilon_{X,t}$ and has variance $\Sigma_{(Y|X),\varepsilon} = \Sigma_{Y,\varepsilon} - \Sigma_{YX,\varepsilon}\Sigma_{X,\varepsilon}^{-1}\Sigma_{XY,\varepsilon}$. Then rewrite equation 5.2 as

$$\begin{bmatrix} x_t \\ y_t\end{bmatrix} = \begin{bmatrix} H_{XX}(L) & H_{XY}(L) \\ H_{YX}(L) & H_{YY}(L)\end{bmatrix}\begin{bmatrix} I & 0 \\ W & I\end{bmatrix}\begin{bmatrix} I & 0 \\ -W & I\end{bmatrix}\begin{bmatrix}\varepsilon_{X,t} \\ \varepsilon_{Y,t}\end{bmatrix} = \begin{bmatrix} H_{XX}(L) + H_{XY}(L)W & H_{XY}(L) \\ H_{YX}(L) + H_{YY}(L)W & H_{YY}(L)\end{bmatrix}\begin{bmatrix}\varepsilon_{X,t} \\ \varepsilon_{Y,t} - W\varepsilon_{X,t}\end{bmatrix}.$$

This corresponds to equation 3.3 in Geweke (1982) and yields the following expressions corresponding to those in Geweke (1982):

$$f_X(L) = f_e(L) + H_{XY}(L)\Sigma_{(Y|X),\varepsilon}H_{XY}^T(L^{-1}), \tag{5.5}$$
$$f_e(L) = H_{eX}(L)\Sigma_{X,\varepsilon}H_{eX}^T(L^{-1}), \qquad H_{eX}(L) = H_{XX}(L) + H_{XY}(L)W. \tag{5.6}$$

Using the SS expressions above, we rewrite HeX(L) in a form more suited to computation as

$$H_{eX}(L) = C_X(L^{-1}I - A)^{-1}B_o + I, \qquad B_o = B_X + B_Y\Sigma_{XY,\varepsilon}^T\Sigma_{X,\varepsilon}^{-1} = B\begin{pmatrix}\Sigma_{X,\varepsilon}\\ \Sigma_{YX,\varepsilon}\end{pmatrix}\Sigma_{X,\varepsilon}^{-1}. \tag{5.7}$$

Note that then, using theorem 2,

$$F_{Y\to X} = \int_{-\pi}^{\pi}\ln|f_X(\lambda)|\frac{d\lambda}{2\pi} - \int_{-\pi}^{\pi}\ln|f_e(\lambda)|\frac{d\lambda}{2\pi} = \ln|\Omega_X| - \ln|\Sigma_{X,\varepsilon}| = \ln\frac{|\Omega_X|}{|\Sigma_{X,\varepsilon}|}. \tag{5.8}$$

Clearly, with $L = e^{-j\lambda}$, $f_X(L) \ge f_e(L) \Rightarrow F_{Y\to X} \ge 0$.

Also the instantaneous causality measure is

$$F_{Y.X} = \ln\frac{|\Sigma_{X,\varepsilon}||\Sigma_{Y,\varepsilon}|}{|\Sigma_\varepsilon|} = \ln\frac{|\Sigma_{X,\varepsilon}||\Sigma_{Y,\varepsilon}|}{|\Sigma_{(X|Y),\varepsilon}||\Sigma_{Y,\varepsilon}|} = \ln\frac{|\Sigma_{X,\varepsilon}|}{|\Sigma_{(X|Y),\varepsilon}|}, \qquad \Sigma_{(X|Y),\varepsilon} = \Sigma_{X,\varepsilon} - \Sigma_{XY,\varepsilon}\Sigma_{Y,\varepsilon}^{-1}\Sigma_{YX,\varepsilon}. \tag{5.9}$$

Clearly $\Sigma_{X,\varepsilon} \ge \Sigma_{(X|Y),\varepsilon}$, so that $F_{Y.X} \ge 0$.

Introduce the normalized cross-covariance-based matrix $\Gamma_{x,y} = \Sigma_{Y,\varepsilon}^{-1/2}\Sigma_{YX,\varepsilon}\Sigma_{X,\varepsilon}^{-1}\Sigma_{XY,\varepsilon}\Sigma_{Y,\varepsilon}^{-1/2}$. Then, using a well-known partitioned matrix determinant formula (Magnus & Neudecker, 1999), we find $F_{Y.X} = -\ln|I - \Gamma_{x,y}|$. This means that the instantaneous causality measure depends only on the canonical correlations (which are the eigenvalues of $\Gamma_{x,y}$) between $\varepsilon_{X,t}, \varepsilon_{Y,t}$ (Seber, 1984; Kshirsagar, 1972).

To implement these formulas, we need expressions for $\Omega_X$, $\Omega_Y$, $f_X(\lambda)$. To get them, Geweke (1982) fits separate models to each of $x_t$ and $y_t$. But this causes positivity problems with $f_{Y\to X}(\lambda)$ (Chen et al., 2006). Instead we obtain the required quantities from the correct submodel obtained in the previous section. We have:

Theorem 5a

The GEMs can be obtained from the joint ISS model, equation 5.1, and the submodel in theorem 2, as follows:

  1. $F_{Y\to X} = \ln\frac{|\Omega_X|}{|\Sigma_{X,\varepsilon}|}$, where $\Omega_X$ is obtained from the submodel in theorem 1.

  2. The frequency domain GEM $f_{Y\to X}(\lambda)$, equation 5.4, can be computed from equations 5.6, 3.1, and 5.7.

$\Omega_Y$, $F_{X\to Y}$, $f_{X\to Y}(\lambda)$ can be obtained similarly; a Matlab sketch follows.
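A sketch of the time-domain GEMs, combining theorem 1 (via the iss_submodel sketch above) with equations 5.8 and 5.9 (names ours):

```matlab
% file: gems.m  (a sketch)
% Time-domain GEMs from the joint ISS model (A, C, B, Sigma), where the
% first px components of z_t are x_t.
function [FYtoX, FXtoY, FYdotX] = gems(A, C, B, Sigma, px)
    p  = size(C, 1);
    ix = 1:px;  iy = px+1:p;
    [~, OmegaX] = iss_submodel(A, C, B, Sigma, ix);   % theorem 1
    [~, OmegaY] = iss_submodel(A, C, B, Sigma, iy);
    FYtoX  = log(det(OmegaX) / det(Sigma(ix, ix)));   % eq. (5.8)
    FXtoY  = log(det(OmegaY) / det(Sigma(iy, iy)));
    FYdotX = log(det(Sigma(ix, ix)) * det(Sigma(iy, iy)) / det(Sigma)); % (5.9)
end
```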

Now pulling all this together with the help of result 3, we have an extension of the results of Geweke (1982) to the state-space/VARMA case.

Theorem 5b

For the joint ISS model, equation 5.1:

  1. $F_{Y\to X} \ge 0$, $F_{Y.X} \ge 0$, and $F_{Y\to X} + F_{Y.X} = \ln\frac{|\Omega_X|}{|\Sigma_{(X|Y),\varepsilon}|}$.

  2. $y_t$ dn-wgc $x_t$ iff $f_X(L) = f_e(L)$, which holds iff $F_{Y\to X} = 0$, that is, iff $\Omega_X = \Sigma_{X,\varepsilon}$.

  3. $y_t$ dn-sgc $x_t$ iff $f_X(L) = f_e(L)$ and $\Sigma_{(X|Y),\varepsilon} = \Sigma_{X,\varepsilon}$, that is, iff $F_{Y\to X} = 0$ and $F_{Y.X} = 0$, that is, iff $F_{Y\to X} + F_{Y.X} = 0$, i.e., iff $\Omega_X = \Sigma_{(X|Y),\varepsilon}$.

Remarks

  1. A very nice nested hypothesis testing explanation of the decomposition 5.3 is given by Parzen in the discussion to Geweke (1982).

  2. It is straightforward to see that the GEMs are unaffected by scaling of the variables. This is a problem for other GC measures (Edin, 2010).

  3. For completeness, we state extensions of the inferential results in Geweke (1982) without proof. Suppose we fit an SS model to data $z_t$, $t = 1, \ldots, T$, using, for example, so-called state-space subspace methods (Deistler et al., 1995; Bauer, 2005) or VARMA methods in, for example, Lutkepohl (1993). Let $\hat F_{Y\to X}$, $\hat F_{X\to Y}$, $\hat F_{Y.X}$, $\hat F_{X\circ Y}$ be the corresponding GEM estimators. If we denote true values with a superscript 0, we find under some regularity conditions:
    $$H_0: F_{Y\to X}^0 = 0 \Rightarrow T\hat F_{Y\to X} \to \chi^2_{2np_Xp_Y}\ \text{as}\ T \to \infty \quad\text{and}\quad H_0: F_{Y.X}^0 = 0 \Rightarrow T\hat F_{Y.X} \to \chi^2_{p_Xp_Y}.$$
    So to test for strong GC, we put these together:
    $$H_0: F_{Y\to X}^0 = 0,\ F_{Y.X}^0 = 0 \Rightarrow T(\hat F_{Y\to X} + \hat F_{Y.X}) \to \chi^2_{(2n+1)p_Xp_Y}.$$
    Together with similar asymptotics for $\hat F_{X\to Y}$, $\hat F_{X\circ Y}$, we see that the fundamental decomposition, equation 5.3, has a sample version involving a decomposition of a chi-squared statistic into sums of smaller chi-squared statistics (a test sketch follows these remarks).
  4. Chen et al. (2006) also attempt to derive $F_{Y\to X}$ without fitting separate models to $x_t, y_t$. However, the proposed procedure to compute $f_X(\lambda)$ involves a two-sided filter and is thus in error. The only way to get $f_X(\lambda)$ is by spectral factorization (which produces one-sided or causal filters), as we have done.

  5. Other kinds of causality measures have emerged in the literature (Kaminski & Blinowska, 1991; Baccala & Sameshima, 2001; Takahashi, Baccala, & Sameshima, 2008; Pascual-Marqui et al., 2014), but it is not known whether they obey the properties in theorems 5a and 5b. However these properties are crucial to our subsequent analysis.
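As a minimal sketch of the tests in remark 3 (all numerical values hypothetical; chi2inv assumes the Statistics and Machine Learning Toolbox):

```matlab
% Sketch: nested GC tests from the chi-squared asymptotics of remark 3.
T = 1000; n = 4; px = 1; py = 1;       % sample size, state dim, series dims
FYtoX_hat = 0.08; FYdotX_hat = 0.01;   % hypothetical GEM estimates
alpha = 0.05;
rejWGC = T*FYtoX_hat > chi2inv(1-alpha, 2*n*px*py);                  % weak GC
rejSGC = T*(FYtoX_hat+FYdotX_hat) > chi2inv(1-alpha, (2*n+1)*px*py); % strong GC
```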

6 Effect of Filtering on Granger Causality Measures

Now the import of the frequency domain GEM becomes apparent since it allows us to determine the effect of one-sided (or causal) filtering on GC.

We need to be clear on the situation envisaged here. The unfiltered time series are the underlying series of interest, but we have access only to the filtered time series, so we can only find the GEMs from the spectrum of the filtered time series. What we need to know is when those filtered GEMs are the same as the underlying GEMs. We have:

Theorem 6

Suppose we filter $z_t$ with a stable, full-rank, one-sided filter $\Phi(L) = \begin{pmatrix}\Phi_{XX}(L) & 0\\ 0 & \Phi_{YY}(L)\end{pmatrix}$. Then:

  1. If Φ(L) is minimum phase, the GEMs (and so GC) are unaffected by filtering.

  2. If $\Phi(L)$ has the form $\Phi(L) = \psi(L)\bar\Phi(L)$, where $\psi(L)$ is a scalar all-pass filter and $\bar\Phi(L)$ is stable, minimum phase, then the GEMs (and so GC) are unaffected by filtering.

  3. If Φ(L) is nonminimum phase and case b does not hold, then the GEMs (and so GC) are changed by filtering.

Proof

Denote $\bar z_t = \Phi(L)z_t = \Phi(L)H(L)\varepsilon_t$ by result 2, part 1. Then for the frequency domain GEM, we need to find $f_{\bar Y\to\bar X}(\lambda) = \ln\frac{|f_{\bar X}(\lambda)|}{|H_{e\bar X}(L)\Sigma_{X,\varepsilon}H_{e\bar X}^T(L^{-1})|}$, where $L = e^{-j\lambda}$. We find trivially that $|f_{\bar X}(\lambda)| = |\Phi_X(L)f_X(\lambda)\Phi_X(L^{-1})| = |\Phi_X(L)||f_X(\lambda)||\Phi_X(L^{-1})|$. Finding $H_{e\bar X}(L)$ is much more complicated; we need the minimum phase vector moving average or state-space model corresponding to equation 5.2. Taking $\Phi(L)$ to be nonminimum phase, we carry out a spectral factorization, $f_{\bar z}(\lambda) = \bar H(L)\bar\Sigma\bar H^T(L^{-1})$, where $\bar H(L)$ is causal, stable, minimum phase with $\bar H(0) = I$, and then, from appendix C, $\bar H(L)$ can be written $\bar H(L) = \Phi(L)H(L)D(L^{-1})$, $D(L^{-1}) = JE^T(L^{-1})\bar J^{-1}$, where $E(L)$ is all pass and $J, \bar J$ are constant matrices (Cholesky factors). Writing this in partitioned form,

$$\bar H(L) = \begin{bmatrix}\Phi_X(L) & 0\\ 0 & \Phi_Y(L)\end{bmatrix}\begin{bmatrix}H_{XX}(L) & H_{XY}(L)\\ H_{YX}(L) & H_{YY}(L)\end{bmatrix}\begin{bmatrix}D_{XX}(L^{-1}) & D_{XY}(L^{-1})\\ D_{YX}(L^{-1}) & D_{YY}(L^{-1})\end{bmatrix}$$

yields $H_{e\bar X}(L) = \Phi_X(L)K_{e\bar X}(L)$, where

$$K_{e\bar X}(L) = H_{XX}(L)\big(D_{XX}(L^{-1}) + D_{XY}(L^{-1})\Sigma_{YX,\varepsilon}\Sigma_{X,\varepsilon}^{-1}\big) + H_{XY}(L)\big(D_{YY}(L^{-1})\Sigma_{YX,\varepsilon}\Sigma_{X,\varepsilon}^{-1} + D_{YX}(L^{-1})\big).$$

Thus, in $f_{\bar Y\to\bar X}(\lambda)$, the $|\Phi_X(L)|$ factors cancel, giving $f_{\bar Y\to\bar X}(\lambda) = \ln\frac{|f_X(\lambda)|}{|K_{e\bar X}(L)\Sigma_{X,\varepsilon}K_{e\bar X}^T(L^{-1})|}$. This will reduce to $f_{Y\to X}(\lambda)$ iff $K_{e\bar X}(L) = H_{eX}(L)K(L^{-1})$, where $K(L^{-1})$ is all pass, which occurs iff $D_{XY}(L^{-1}) = 0$, $D_{YX}(L^{-1}) = 0$, $D_{XX}(L^{-1}) = \psi(L^{-1})I$, $D_{YY}(L^{-1}) = \psi(L^{-1})I$, where $\psi(L^{-1})$ is a scalar all-pass filter. Results a, b, and c now follow.

Remark

Result a has also been obtained in the independent work of Seth, Chorley, and Barnett (2013) by different methods.

We give three examples:

Example 6A

Differential delay. Suppose $\begin{pmatrix}x_t\\ y_t\end{pmatrix} = \begin{pmatrix}1 & 0\\ \rho & 1\end{pmatrix}\begin{pmatrix}\varepsilon_t\\ \nu_t\end{pmatrix}$, where $\varepsilon_t, \nu_t$ are independent zero mean white noises with variances $1, \sigma^2$, respectively, while $\Phi(L) = \begin{pmatrix}1 & 0\\ 0 & L\end{pmatrix}$. So the two series are white noises that exhibit an instantaneous GC. The filtering delays one series relative to the other. Then, with $(a_t, b_t) = (\varepsilon_t, \nu_{t-1})$, we have $\bar z_t = \begin{pmatrix}1 & 0\\ 0 & L\end{pmatrix}\begin{pmatrix}1 & 0\\ \rho & 1\end{pmatrix}\begin{pmatrix}\varepsilon_t\\ \nu_t\end{pmatrix} = \begin{pmatrix}1 & 0\\ L\rho & 1\end{pmatrix}\begin{pmatrix}a_t\\ b_t\end{pmatrix} = \bar H(L)\begin{pmatrix}a_t\\ b_t\end{pmatrix}$, and we see that $\bar H(0) = I$ while $\bar H(L)$ is stable, causal, and invertible; indeed, $\bar H^{-1}(L) = \begin{pmatrix}1 & 0\\ -L\rho & 1\end{pmatrix}$. Thus, we see that the differential delay has introduced a spurious dynamic GC relation and the original purely instantaneous GC is lost.

Example 6B

Noncausal filter. We use the same setup as in example 6A except that now $\Phi(L) = \begin{pmatrix}1 - \theta_xL^{-1} & 0\\ 0 & 1 - \theta_yL^{-1}\end{pmatrix}$, where $|\theta_x| < 1$, $|\theta_y| < 1$, and $\theta_x \ne \theta_y$. We rewrite this as $-L^{-1}\Psi(L)$, where $\Psi(L) = \begin{pmatrix}\theta_x(1 - \frac{1}{\theta_x}L) & 0\\ 0 & \theta_y(1 - \frac{1}{\theta_y}L)\end{pmatrix}$. Although $-L^{-1}$ is noncausal and so NMP, it is the same for both filters and so by theorem 6b will not affect the GEMs. So we can replace $\Phi(L)$ with $\Psi(L)$. Both entries of $\Psi(L)$ are causal but, as in example 2, NMP, and so the GCS will be altered. In fact, we now show this explicitly.

Note, first, how the noncausal filter operates. We have

$$\bar z_t = \Phi(L)z_t = \begin{pmatrix}1 - \theta_xL^{-1} & 0\\ 0 & 1 - \theta_yL^{-1}\end{pmatrix}\begin{pmatrix}x_t\\ y_t\end{pmatrix} = \begin{pmatrix}x_t - \theta_xx_{t+1}\\ y_t - \theta_yy_{t+1}\end{pmatrix}.$$

Future values are needed to compute the filtered signal at the current time.

The process $\bar z_t = \Psi(L)z_t$ has spectrum $\begin{pmatrix}AA^* & \rho AB^*\\ \rho BA^* & (\rho^2 + \sigma^2)BB^*\end{pmatrix}$, where $A = A(L) = \theta_x(1 - \frac{1}{\theta_x}L)$, $A^* = A(L^{-1})$, and similarly for $B$. It is straightforward to check that this has spectral factorization

$$\begin{pmatrix}a & \frac{\rho}{R^2}a\\ 0 & b\end{pmatrix}\begin{pmatrix}\frac{\sigma^2}{R^2} & 0\\ 0 & R^2\end{pmatrix}\begin{pmatrix}a^* & 0\\ \frac{\rho}{R^2}a^* & b^*\end{pmatrix} = \begin{pmatrix}aa^* & \rho ab^*\\ \rho ba^* & R^2bb^*\end{pmatrix},$$

where $R = \sqrt{\rho^2 + \sigma^2}$, $a = 1 - \theta_xL$, and $b = 1 - \theta_yL$. Note that $AA^* = aa^*$ and $AB^* = ab^*$. This factorization corresponds to the model $\begin{pmatrix}\bar x_t\\ \bar y_t\end{pmatrix} = \begin{pmatrix}a & \frac{\rho}{R^2}a\\ 0 & b\end{pmatrix}\begin{pmatrix}\varepsilon_{a,t}\\ \nu_{b,t}\end{pmatrix}$, where $\varepsilon_{a,t}, \nu_{b,t}$ are independent white noises of variances $\sigma^2/R^2$, $R^2$, respectively. We see that the NMP filtering has introduced spurious dynamics and reversed the GCS.

Example 6C

Wiener deconvolution. When filtered versions of $x_t, y_t$ are observed in noise, one can estimate them by using the Wiener filter (Kailath et al., 2000). The optimal filter uses the joint spectrum, but fMRI practice (Glover, 1999) is to filter each signal separately. This is suboptimal but will still reduce the effect of the noise. Suppose we observe $w_{x,t} = h_x(L)x_t + \varepsilon_t$, where $h_x(L)$ is a stable filter and $\varepsilon_t$ is a white noise of variance $\sigma^2$ independent of $x_t$. The Wiener filter estimate of $x_t$ is $\hat x_t = W_x(L)w_{x,t}$, where (Kailath et al., 2000) $W_x(L) = \frac{h_x^*}{h_xh_x^* + \sigma^2/\gamma_x}$, with $\gamma_x = \gamma_x(L)$ the autocovariance generating function of $x_t$. In fMRI practice (Glover, 1999), $\gamma_x(L)$ is set to 1. This is suboptimal and may lead to performance worse than no filtering. But continuing, we carry out a spectral factorization giving $h_xh_x^* + \sigma^2 = \sigma_e^2k_xk_x^*$, where $k_x = k_x(L)$ is a causal minimum phase filter with $k_x(0) = 1$. The filter thus becomes $W_x(L) = \frac{h_x^*}{\sigma_e^2k_xk_x^*}$. Since $k_x$ is minimum phase and causal, $k_x^{-1}$ is stable and causal and will not affect the GEMs. We are left with the noncausal filter $\beta_x = \frac{h_x^*}{k_x^*}$—for example, in the simple case $h_x = 1 - \theta_xL$, $|\theta_x| < 1$, the spectral factorization reduces to solving a quadratic equation, and one obtains $k_x = 1 - \alpha_xL$, $|\alpha_x| < 1$. Thus, $\beta_x = \frac{1 - \theta_xL^{-1}}{1 - \alpha_xL^{-1}} = 1 + \sum_{r=1}^{\infty}(\alpha_x - \theta_x)\alpha_x^{r-1}L^{-r}$. With a similar deconvolution (where $\beta_x \ne \beta_y$) applied to a noisy filtered version of $y_t$, we conclude that deconvolution will distort the GCS since the $\beta_x, \beta_y$ filters are noncausal.
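Under the stated assumptions, the quadratic for $\alpha_x$ follows from matching coefficients of $L + L^{-1}$ and the constant term; a minimal sketch (parameter values ours):

```matlab
% Sketch: the example 6C spectral factorization for h_x(L) = 1 - theta*L
% in noise of variance sigma2: h_x h_x^* + sigma^2 = sigma_e^2 k_x k_x^*,
% with k_x = 1 - alpha*L, gives sigma_e^2*alpha = theta and
% sigma_e^2*(1 + alpha^2) = 1 + theta^2 + sigma^2.
theta = 0.6; sigma2 = 0.5;
c = 1 + theta^2 + sigma2;
r = roots([theta, -c, theta]);   % theta*alpha^2 - c*alpha + theta = 0
alpha = r(abs(r) < 1);           % the minimum phase root, |alpha| < 1
sigma_e2 = theta / alpha;        % innovations variance sigma_e^2
% check: sigma_e2*(1 + alpha^2) should equal c
```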

7 Downsampling and Forwards Granger Causality

We now consider to what extent GC is preserved under downsampling.

Using the sampled notation of our discussion above and defining $\bar z_k = \begin{pmatrix}\bar x_k\\ \bar y_k\end{pmatrix}$, we have the following result:

Theorem 7

Forwards causality:

  1. If yt dn-sgc xt, then y¯k dn-sgc x¯k.

  2. If yt dn-wgc xt, then in general y¯k wgc x¯k.

Remarks

  1. Part a is new, although technically a special case of a result of the author’s established in a non-SS framework.

  2. We might consider taking part b as a formalization of longstanding folklore in econometrics (Christiano & Eichenbaum, 1987; Marcet, 1991) that downsampling can destroy unidirectional Granger causality. However, that same folklore is flawed because it failed to recognize the possibility of part a. The folklore is further flawed because it failed to recognize the more serious reverse problem discussed below.

Proof of Theorem 7, Part a

We use the partitioned expressions in the discussion leading up to result 3. We also refer to the discussion leading up to theorem 6.

This allows us to write two decompositions. First, we write $w_k = w_{X,k} + w_{Y,k}$, where

$$w_{X,k} = \sum_{i=1}^{m}A^{m-i}B_X\varepsilon_{X,km+i-1}, \qquad w_{Y,k} = \sum_{i=1}^{m}A^{m-i}B_Y\varepsilon_{Y,km+i-1}.$$

From result 3 and the definition of dn-sgc,

$$w_{X,k}\ \text{is uncorrelated with}\ w_{Y,l}\ \text{for all}\ k, l. \tag{7.1}$$

The other decomposition is $\bar\varepsilon_k = \begin{pmatrix}\bar\varepsilon_{X,k}\\ \bar\varepsilon_{Y,k}\end{pmatrix}$, and

$$\bar\varepsilon_{X,k}\ \text{is uncorrelated with}\ \bar\varepsilon_{Y,l}\ \text{for all}\ k, l. \tag{7.2}$$

Next we note from theorem 4 that

$$C_X(L^{-1}I - A^m)^{-1}A^pB_Y = \sum_{r=1}^{\infty}C_XA^{m(r-1)+p}B_YL^r = 0$$

for all $p \ge 0$. Thus, we deduce

$$C_Xw_{Y,k} = 0\quad\text{for all}\ k. \tag{7.3}$$

We can now write

$$\bar x_k = C_X(L^{-1}I - A^m)^{-1}w_k + \bar\varepsilon_{X,k} = C_X(L^{-1}I - A^m)^{-1}w_{X,k} + \bar\varepsilon_{X,k}, \qquad \bar y_k = C_Y(L^{-1}I - A^m)^{-1}w_{X,k} + C_Y(L^{-1}I - A^m)^{-1}w_{Y,k} + \bar\varepsilon_{Y,k}. \tag{7.4}$$

Based on equation 7.4, we now introduce the ISS model for x¯k,

$$\bar x_k = C_X(L^{-1}I - A^m)^{-1}K_{(X)}\nu_{X,k} + \nu_{X,k},$$

where $\nu_{X,k}$ is the innovations sequence. Using this, we introduce the estimator $K_{(X)}\nu_{X,k}$ of $w_{X,k}$ and the estimation error $\tilde w_{X,k} = w_{X,k} - K_{(X)}\nu_{X,k}$. Below we show

$$\tilde w_{X,k}\ \text{is uncorrelated with}\ \nu_{X,l}\ \text{for all}\ k, l. \tag{7.5}$$

We thus rewrite the model for y¯k as

$$\bar y_k = C_Y(L^{-1}I - A^m)^{-1}K_{(X)}\nu_{X,k} + \zeta_k, \qquad \zeta_k = C_Y(L^{-1}I - A^m)^{-1}[w_{Y,k} + \tilde w_{X,k}] + \bar\varepsilon_{Y,k}.$$

Now we can construct an ISS model for $\zeta_k = (I + C_Y(L^{-1}I - A^m)^{-1}K_{(Y)})\nu_{Y,k}$, where $\nu_{Y,k}$ is the innovations sequence. In view of equations 7.1, 7.2, and 7.5, $\nu_{X,k}$ and $\nu_{Y,l}$ are uncorrelated for all $k, l$. Thus, we have constructed the joint ISS model:

$$\begin{bmatrix}\bar x_k\\ \bar y_k\end{bmatrix} = \bar H(z)\begin{bmatrix}\nu_{X,k}\\ \nu_{Y,k}\end{bmatrix}, \qquad \bar H(z) = \begin{bmatrix}I + C_X(L^{-1}I - A^m)^{-1}K_{(X)} & 0\\ C_Y(L^{-1}I - A^m)^{-1}K_{(X)} & I + C_Y(L^{-1}I - A^m)^{-1}K_{(Y)}\end{bmatrix}.$$

From this we deduce that y¯k dn-sgc x¯k as required.

Proof of Equation 7.5

Consider then $E(\tilde w_{X,k}\nu_{X,l}^T) = E((w_{X,k} - K_{(X)}\nu_{X,k})\nu_{X,l}^T) = E(w_{X,k}\nu_{X,l}^T) - K_{(X)}E(\nu_{X,k}\nu_{X,l}^T)$. The second term vanishes for $k \ne l$. The first term vanishes for $k > l$ since $w_{X,k}$ is uncorrelated with the past and hence $\nu_{X,l}$; for $l > k$, it vanishes since $\nu_{X,l}$ is uncorrelated with the past. For $k = l$, it vanishes by the definition of $K_{(X)}$ (Kailath et al., 2000).

Proof of Theorem 7, Part b

A perusal of the proof of theorem 7, part a, shows that we cannot construct the block lower triangular joint ISS model. In general, we obtain a full block ISS model.

8 Downsampling and Reverse Granger Causality

We now come to the more serious issue of how downsampling might distort GC. To establish distortion, we simply have to exhibit a numerical example.

8.1 Simulation Design

Designing a procedure to generate a wide class of examples of spurious causality is not as simple as one might hope. We develop such a procedure for a bivariate vector autoregression of order one: a bivariate VAR(1). On the one hand, this is about the simplest example one can consider; on the other hand, it is general enough to generate important behaviors.

The bivariate VAR(1) model is then

$$\begin{pmatrix}x_t\\ y_t\end{pmatrix} = A\begin{pmatrix}x_{t-1}\\ y_{t-1}\end{pmatrix} + \begin{pmatrix}\varepsilon_{X,t}\\ \varepsilon_{Y,t}\end{pmatrix}, \qquad A = \begin{pmatrix}\phi_x & \gamma_x\\ \gamma_y & \phi_y\end{pmatrix}, \qquad \Sigma = \begin{pmatrix}\sigma_a^2 & \rho\sigma_a\sigma_b\\ \rho\sigma_a\sigma_b & \sigma_b^2\end{pmatrix},$$

where $\Sigma$ is the variance matrix of the zero mean white noise $(\varepsilon_{X,t}, \varepsilon_{Y,t})^T$ and $\rho$ is a correlation.

We note that this model can be written as an ISS model with parameters $(A, I, A, \Sigma)$. Hence, all the computations described above are easily carried out.

But the real issue is how to select the parameters. By a straightforward scaling argument, it is easy to see that we may set σa = 1 = σb without loss of generality. Thus, we need to choose only A, ρ.

Some reflection shows that there are two issues. First, we must ensure the process is stationary; for the eigenvalues $\lambda_1, \lambda_2$ of $A$, we must have $|\lambda_1| < 1$, $|\lambda_2| < 1$. Second, to design a simulation, we need to choose $F_{Y\to X}, F_{X\to Y}$, but these quantities depend on the parameters $A, \rho$ in a highly nonlinear way, so it is not obvious how to do this. And five parameters are already too many to pursue this by trial and error.

For the first issue, we have $\mathrm{trace}(A) = \lambda_1 + \lambda_2 = \phi_x + \phi_y$ and $\det(A) = \lambda_1\lambda_2 = \phi_x\phi_y - \gamma_x\gamma_y$. Our approach is to select $\lambda_1, \lambda_2$ and then find $\phi_x, \phi_y$ to satisfy $\phi_x + \phi_y = \lambda_1 + \lambda_2$, $\phi_x\phi_y = \lambda_1\lambda_2 + \gamma_x\gamma_y$. This requires solving a quadratic equation. If we denote the solutions as $r_+, r_-$, then we get two cases: $(\phi_x, \phi_y) = (r_+, r_-)$ and $(\phi_y, \phi_x) = (r_+, r_-)$. This leaves us to select $\gamma_x, \gamma_y$.

In appendix B we show that $F_{Y\to X} = \ln\frac{\sigma_x^2}{\sigma_a^2} \ge \ln(1 + \xi_x)$, where $\xi_x = \gamma_x^2(1 - \rho^2)$. Similarly, $F_{X\to Y} \ge \ln(1 + \xi_y)$, where $\xi_y = \gamma_y^2(1 - \rho^2)$. But we also show that $\xi_x = 0 \Rightarrow F_{Y\to X} = 0$ and $\xi_y = 0 \Rightarrow F_{X\to Y} = 0$, so we select $\xi_x, \xi_y$, thereby setting lower bounds on $F_{Y\to X}, F_{X\to Y}$. This seems to be the best one can do, and as we see below, it works quite well. So given $\xi_x, \xi_y$, compute $\gamma_x = \pm\sqrt{\xi_x/(1 - \rho^2)}$ and $\gamma_y = \pm\sqrt{\xi_y/(1 - \rho^2)}$. This gives four cases, and multiplied by the two cases above yields eight cases.

This is not quite the end of the story since the $\gamma_x, \gamma_y$ values need to be consistent with the $\phi_x, \phi_y$ values. Specifically, the quadratic equation to be solved for $\phi_x, \phi_y$ must have real roots, so the discriminant must be $\ge 0$: $(\phi_x + \phi_y)^2 - 4\phi_x\phi_y = (\lambda_1 + \lambda_2)^2 - 4(\lambda_1\lambda_2 + \gamma_x\gamma_y) = (\lambda_1 - \lambda_2)^2 - 4\gamma_x\gamma_y \ge 0$. There are four cases: two with real roots and two with complex roots.

If $\lambda_1, \lambda_2$ are real, then we require $(\lambda_1 - \lambda_2)^2 \ge 4\gamma_x\gamma_y \Leftrightarrow (\lambda_1 - \lambda_2)^2 \ge 4\,\mathrm{sign}(\gamma_x\gamma_y)\frac{\sqrt{\xi_x\xi_y}}{1 - \rho^2}$. This always holds if $\mathrm{sign}(\gamma_x\gamma_y) \le 0$. If $\mathrm{sign}(\gamma_x\gamma_y) > 0$, we have a binding constraint that restricts the sizes of $\xi_x, \xi_y$.

If $\lambda_1, \lambda_2$ are complex conjugates, then $(\lambda_1 - \lambda_2)^2$ is negative. If $\mathrm{sign}(\gamma_x\gamma_y) \ge 0$, then the condition never holds. If $\mathrm{sign}(\gamma_x\gamma_y) < 0$, then there is a binding constraint that restricts the sizes of $\xi_x, \xi_y$. In particular, note that if $\mathrm{sign}(\gamma_x\gamma_y) = 0$, then one cannot have complex roots for $A$. We now use this design procedure to illustrate reverse causality; a sketch follows.
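A sketch of the design procedure for one of the eight cases (the numerical choices are ours):

```matlab
% Sketch of the section 8.1 design: construct A, Sigma from
% (lambda1, lambda2, xi_x, xi_y, rho). Complex eigenvalues need
% sign(gamma_x*gamma_y) < 0, so we take gamma_y < 0 here.
lam = [0.95*exp(1j*0.1), 0.95*exp(-1j*0.1)];
xix = 1.5; xiy = 0.2; rho = 0.2;
gx =  sqrt(xix/(1 - rho^2));          % gamma_x = +sqrt(xi_x/(1-rho^2))
gy = -sqrt(xiy/(1 - rho^2));          % gamma_y = -sqrt(xi_y/(1-rho^2))
% phi_x, phi_y solve s^2 - (lam1+lam2)*s + (lam1*lam2 + gx*gy) = 0
phi = roots([1, -real(sum(lam)), real(prod(lam)) + gx*gy]);
A = [phi(1), gx; gy, phi(2)];         % one of the two orderings
Sigma = [1, rho; rho, 1];             % sigma_a = sigma_b = 1
disp(abs(eig(A)).')                   % stationarity check: both < 1
```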

8.2 Computation

We describe the steps used to generate the results below. We assume the state-space model for zt=(xtyt) comes in ISS form. Since standard state-space subspace model-fitting algorithms (Larimore, 1983; van Overschee & de Moor, 1996; Bauer, 2005) generate ISS models, this is a reasonable assumption. Otherwise we use result 1 to generate the corresponding ISS model.

Given a sampling multiple $m$, we first use theorem 3 to generate the downsampled ISS model and, hence, $\Sigma_\varepsilon(m)$. To obtain the GEMs, we use theorem 1 to generate the marginal models for $x_t, y_t$, yielding $\Omega_X(m), \Omega_Y(m)$. And now $F_{Y\to X}(m), F_{X\to Y}(m)$ are obtained from formulas 5.8 and 5.9 and the comment following theorem 5a.
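A minimal end-to-end sketch of these steps, combining the iss_downsample and gems sketches above (the VAR(1) parameter values are ours, chosen to be stable in the spirit of Table 1):

```matlab
% Sketch of the section 8.2 pipeline: GEMs under downsampling.
A = [-0.196, 1.25; -0.456, -1.695];  % hypothetical stable VAR(1) coefficient
Sigma = [1, 0.2; 0.2, 1];            % sigma_a = sigma_b = 1, rho = 0.2
B = A; C = eye(2);                   % VAR(1) as the ISS model (A, I, A, Sigma)
for m = [1 2 3 4 5 6 10 20 30 40]
    [Am, Km, Vm] = iss_downsample(A, C, B, Sigma, m);  % theorem 3
    [FYtoX, FXtoY] = gems(Am, C, Km, Vm, 1);           % theorems 1 and 5a
    fprintf('m = %2d   FYX = %.2f   FXY = %.2f\n', m, FYtoX, FXtoY);
end
```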

8.3 Scenario Studies

We now illustrate the various results above with some bivariate simulations.

Example 1: GEMs Decline Gracefully

Here, for the underlying process, y pushes x much harder than x pushes y. This pattern is roughly preserved with slower sampling, but the relative strengths change as seen in Table 1.

Table 1.

GEMs for Various Sampling Intervals for $(\lambda_1, \lambda_2, \xi_x, \xi_y, \rho) = (-.95\,e^{j(.1)}, -.95\,e^{-j(.1)}, 1.5, .2, .2) \Rightarrow A = \begin{pmatrix} .204 & 1.24 \\ .452 & 1.69 \end{pmatrix}$.

m              1      2      3      4      5      6      10     20     30     40
$F_{Y\to X}$   1.38   1.66   1.41   1.17   0.99   0.86   0.551  0.15   0.00   0.01
$F_{X\to Y}$   0.20   0.25   0.29   0.31   0.32   0.32   0.29   0.11   0.00   0.01

Example 2: GEMs Reverse

In this case, the underlying processes push each other with roughly equal strength. But subsampling yields a false picture with x pushing y much harder than the reverse (see Table 2).

Table 2.

GEMs for Various Sampling Intervals for $(\lambda_1, \lambda_2, \xi_x, \xi_y, \rho) = (.95\,e^{j(.1)}, .95\,e^{-j(.1)}, 1.5, .2, .2) \Rightarrow A = \begin{pmatrix} 1.69 & 1.24 \\ .452 & .204 \end{pmatrix}$.

m              1      2      3      4      5      6      10     20     30     40
$F_{Y\to X}$   0.93   0.88   0.77   0.683  0.62   0.57   0.42   0.13   0.00   0.01
$F_{X\to Y}$   1.05   1.82   2.01   1.80   1.53   1.30   0.75   0.18   0.00   0.02

Example 3: Near Equal Strength Dynamics Becomes Nearly Unidirectional

In this case, the underlying relation is one of near equal strength feedback interconnection. But as Table 3 shows, almost immediately a very unequal relation appears under subsampling, which soon decays to a near unidirectional relation.

Table 3.

GEMs for Various Sampling Intervals for $(\lambda_1, \lambda_2, \xi_x, \xi_y, \rho) = (.995, -.7, 1, .5, .7) \Rightarrow A = \begin{pmatrix} 1.45 & 1.18 \\ .84 & 1.16 \end{pmatrix}$.

m              1      2      3      4      5      6      10     20     30     40
$F_{Y\to X}$   1.49   0.05   0.25   0.06   0.11   0.05   0.04   0.02   0.01   0.01
$F_{X\to Y}$   1.69   0.17   0.64   0.26   0.47   0.29   0.29   0.21   0.16   0.13

Example 4: Near Unidirectional Dynamics Becomes Near Equal Strength

In this case a near unidirectional dynamic relation immediately becomes one of significant but unequal strengths and then one of near equal strength (see Table 4).

Table 4.

GEMs for Various Sampling Intervals for $(\lambda_1, \lambda_2, \xi_x, \xi_y, \rho) = (.99\,e^{j(.25)}, .95\,e^{-j(.25)}, .1, 3, -.8) \Rightarrow A = \begin{pmatrix} 1.883 & 0.408 \\ 2.236 & 0.036 \end{pmatrix}$.

m              1      2      3      4      5      6      10     20     30     40
$F_{Y\to X}$   0.02   0.28   0.38   0.43   0.45   0.46   0.31   0.45   0.41   0.17
$F_{X\to Y}$   2.93   2.38   1.62   1.24   1.02   0.86   0.37   0.53   0.45   0.18

There is nothing pathological about these examples, and using the design procedure developed above, it is easy to generate other similar kinds of examples. They make it emphatically clear that GC cannot be reliably discerned from downsampled data.

In independent work, Seth et al. (2013) have shown simulations where downsampling distorts GC. But their work is purely empirical, and they do not have the theoretical framework developed here to design specific types of distortion as we have done.

9 Application to fMRI

In this section we apply the above results to fMRI, but to do so, we need to describe fMRI data generation schemes (dgs's); we give three such schemes. The following feature is common to all of them.

We suppose there is an underlying neural activity (NA) 2-node circuit operating on a fast timescale of order 50 ms and that it is described by a state-space model. Specifically, the (possibly vector) NA signals at each node are $x_t, y_t$, which we collect together into a joint NA vector signal $z_t = (x_t^T, y_t^T)^T$, which is described by a state-space model such as equation 2.1 or 2.3.

  • Filter (F). The BOLD signal recorded at a particular voxel is generated as follows. The NA $z_t$ is filtered by (a spatially varying) HRF to generate a fast BOLD signal $\bar z_{b,t}$—specifically,
    $$\bar z_{b,t} = \begin{pmatrix}\bar x_{b,t}\\ \bar y_{b,t}\end{pmatrix} = \begin{bmatrix}H_{xx}(L) & 0\\ 0 & H_{yy}(L)\end{bmatrix}z_t.$$
  • Filter then downsample (F-DS). The BOLD signal recorded at a particular voxel is generated in two stages. The first stage is the same as F. In the second stage, the scanner samples this signal, delivering a downsampled (or slow) BOLD signal $z_{b,k} = \bar z_{b,mk}$—for example, if the NA timescale is 50 ms and the downsampled timescale is 200 ms, then $m = 4$.

  • Downsample then filter (DS-F). Here we suppose the NA $z_t$ is downsampled to a slow timescale, giving a slow NA $\bar z_k = z_{mk}$. This is then filtered by (a slow spatially varying) HRF to generate the slow BOLD signal $z_{b,k}$.

Note that F is the limiting case of F-DS where fMRI is recorded on the same timescale as NA. Clearly F-DS makes physiological sense, whereas DS-F does not. Indeed F-DS is advocated by Deshpande et al. (2010), who provide further physiological references to back it up. However, below we will see that DS-F seems to be behind some of the processing methods used in practice.

The aim now is to try to solve the reverse GC problem: to determine the GCS of the fast NA circuit $z_t$ from the GCS of either the fast BOLD circuit $\bar z_{b,t}$ or the slow BOLD circuit $z_{b,k}$. To do this, we see from theorem 6 that we need to investigate the phase properties of the HRF.

9.1 HRFs Are Nonminimum Phase

We denote an HRF by $h_r$, $r = 1, 2, \ldots$, and its transfer function as $h(L) = \sum_{1}^{\infty}h_rL^r = \sum_{1}^{\infty}h_rz^{-r} = H(z)$. We assume $H(z)$ is bounded input, bounded output stable (Kailath et al., 2000), for which the necessary and sufficient condition is $B = \sum_{1}^{\infty}|h_r| < \infty$.

In Figure 1 we show plots of canonical HRFs (with default parameter choices) used in SPM and Glover (1999). They exhibit three main features found in practice:

  1. Refractory period: characterized by a time to onset $\tau_o$

  2. Overshoot: characterized by a peak time $\tau_p$

  3. Undershoot

Figure 1. Canonical HRFs.

Also there are persistent reports in the literature of HRFs with an initial small dip immediately following the refractory period (e.g., Miezin, Maccotta, Ollinger, Petersen, & Buckner, 2000).

A number of results in the control engineering literature show how nonminimum phase (NMP) zeros of the transfer function imply overshoot and undershoot of an impulse response (see Damm & Muhirwa, 2014). But none of these works provide converse results, showing what impulse response shape implies about transfer function zeros.

We give a preliminary general result and then two converse results.

Theorem 8a

Any HRF with a refractory period is NMP.

Proof

We can write the transfer function as $h(L) = L^{\tau_o}g(L)$, where $g(L)$ is a causal transfer function. Even if $g(L)$ is MP, $h(L)$ is NMP because the inverse operator $L^{-\tau_o}$ is not causal.

Certainly the time to onset can be estimated, and then the BOLD signal can be shifted in time to align the start time with the stimulus start time. This effectively corrects the HRF for the refractory period. But this makes sense only if it is known that the refractory period is due to information transmission delay. If it is due to the GCS, then such alignment, being noncausal, will destroy the GCS.

To continue, we now assume the HRF has been corrected for the refractory period, so that the corrected transfer function is $H_c(z) = zH(z) = \sum_{1}^{\infty}h_rz^{-(r-1)}$.

Theorem 8b

Dip: if h1 < 0 and H(1) > 0, then the HRF is NMP.

Note that $H_c(1) = H(1) = \sum_{1}^{\infty}h_r = -(\text{area above the dip}) + (\text{area under the overshoot}) - (\text{area above the undershoot})$. The fact that the overshoot area (far) exceeds the sum of the other two areas (so that $H(1) > 0$) seems to be a universal property of actual HRFs.

Proof

We prove that $H_c(z)$ has a real zero outside the UC. We can set $z = R$, which is real. Then consider that $H_c(R) = RH(R) = R\sum_{1}^{\infty}h_rR^{-r} = h_1 + \sum_{2}^{\infty}h_rR^{-(r-1)}$. For $R \ge 1$, the second term is bounded by $R^{-1}\sum_{2}^{\infty}|h_r| < R^{-1}B$ and $\to 0$ as $R \to \infty$. Since $h_1 < 0$, then for $R$ large enough, $RH(R) < 0$. Since $1\cdot H(1) > 0$, this means there must exist $R_o > 1$ with $R_oH(R_o) = 0$, as required.

Theorem 8c

No dip: if h1 > 0 and H(−1) > 0, then the HRF is NMP.

Note that $H(-1) = \sum_{1}^{\infty}(-1)^rh_r = \sum_{m=1}^{\infty}(h_{2m} - h_{2m-1})$, and $H(-1) > 0$ is satisfied by the default canonical HRFs. Further, because HRFs are very smooth, the successive differences $h_{2m} - h_{2m-1}$ will be very small. $H(-1)$ can be split into four pieces: a positive sum up to the peak time, a negative sum from there to the zero crossing, a positive sum to the trough of the undershoot, and a negative sum from there back up to the time axis. We thus expect $H(-1)$ to be small.

Proof

We prove $H_c(z)$ has a zero outside the UC on the negative real axis. Put $z = Re^{j\pi} = -R$. Then consider that $H_c(-R) = -RH(-R) = -R\sum_{1}^{\infty}h_r(-R)^{-r} = h_1 + \sum_{2}^{\infty}h_r(-1)^{r-1}R^{-(r-1)}$. For $R \ge 1$, the second term is bounded by $R^{-1}\sum_{2}^{\infty}|h_r| < R^{-1}B$, which $\to 0$ as $R \to \infty$, and so since $h_1 > 0$, for $R$ large enough, $-RH(-R) > 0$. Since $-1\cdot H(-1) < 0$, there must thus exist $R_o > 1$ for which $R_oH(-R_o) = 0$, and the result is established.

Since the condition $H(-1) > 0$, and more generally the possibility of complex zeros outside the UC, is so crucial, more flexible modeling of the HRF is called for. Recent approaches include Khalidov, Fadili, Lazeyras, Ville, and Unser (2011), Zafer, Blu, and Ville (2014), Wu et al. (2013), Sreenivasan, Havlicek, and Deshpande (2015), and our approach using Laguerre polynomials (Cassidy, Long, Rae, & Solo, 2012).
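As a numerical companion to theorems 8b and 8c, one can check a sampled FIR HRF for NMP zeros directly (the coefficient values below are hypothetical, shaped to have a dip, overshoot, and undershoot):

```matlab
% Sketch: check whether a sampled HRF h(1..N) is NMP by locating zeros of
% H(z) = sum_r h_r z^-r outside the unit circle. Apart from poles at z = 0,
% these are the roots of the polynomial h_1 z^(N-1) + ... + h_N.
h = [-0.05, 0.3, 1.0, 0.7, 0.1, -0.2, -0.15, -0.05];  % hypothetical HRF samples
zo = roots(h);                 % zeros of H(z)
isNMP = any(abs(zo) > 1);      % true if some zero lies outside the UC
disp(abs(zo).'); disp(isNMP)   % here h_1 < 0 and sum(h) > 0, so theorem 8b applies
```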

9.2 Deconvolution

As noted in section 1, the current practice is to first apply a deconvolution to the BOLD signals before carrying out a GC analysis. As discussed in section 9.3, this makes sense only for DS-F.

We now see from deconvolution example 6C that even if Wiener deconvolution is applied to noise-free data $z_{b,k}$, it will distort the GCS because it necessarily involves noncausal filtering. This is discussed further in section 9.3.
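The point can be illustrated numerically. The following sketch (ours; the gamma-shaped HRF and the noise-to-signal ratio q are arbitrary assumptions) forms the frequency-domain Wiener deconvolution filter $W = H^*/(|H|^2 + q)$ and shows that its impulse response has appreciable energy at negative lags, that is, the filter is noncausal.

```python
# Sketch: the Wiener deconvolution filter is noncausal. The HRF shape and
# the noise-to-signal ratio q below are arbitrary illustrative choices.
import numpy as np
from math import gamma

N = 256
t = np.arange(1, 61) * 0.5
h = t**5 * np.exp(-t) / gamma(6)           # simple gamma-shaped HRF (assumed)
H = np.fft.fft(h, N)                       # frequency response on an N-point grid

q = 0.01                                   # assumed noise-to-signal ratio
W = np.conj(H) / (np.abs(H) ** 2 + q)      # Wiener deconvolution filter
w = np.real(np.fft.ifft(W))                # its impulse response (circular)

# In the circular indexing, entries N//2..N-1 correspond to negative lags.
neg = np.sum(w[N // 2:] ** 2) / np.sum(w ** 2)
print(f"fraction of filter energy at negative lags: {neg:.3f}")
```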

9.3 Reverse GC Problem

We discuss approaches under F, F-DS, and DS-F.

F

There is only one approach possible here. If the HRF is minimum phase, then the fast filtering will not affect the GCS, and the first stage would consist of a GC analysis of the fast BOLD data. That is, fit an SS model to $\bar z_{b,t}$; find submodels for $\bar x_{b,t}$, $\bar y_{b,t}$ using the DARE (see theorem 1); and compute the GEMs $F_{Y\to X}$, $F_{Y.X}$, $F_{X\to Y}$ using the expressions in theorem 5a. This would deliver the GCS of the fast BOLD circuit and, hence, of the NA circuit. But in practice the BOLD data will be noisy, and an attempt to reduce the impact of the noise by deconvolution will distort the GCS. If the HRF is nonminimum phase, then this GCS would be distorted, and nothing can be done to overcome that.
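For concreteness, here is a minimal sketch of this first-stage recipe for the simplest case of a bivariate VAR(1) (our illustration, in the spirit of theorems 1 and 5a, not the letter's code; the helper name gem_y_to_x and the example coefficients are assumptions, and scipy.linalg.solve_discrete_are stands in for the DARE solver). The submodel for x alone is put in state-space form, its DARE is solved, and the time-domain GEM is the log ratio of the submodel innovations variance to the full-model innovations variance.

```python
# Sketch of the first-stage recipe for a bivariate VAR(1)
# z_t = Phi z_{t-1} + e_t, var(e_t) = Sigma (our illustration, in the
# spirit of theorems 1 and 5a; it is not the letter's code).
import numpy as np
from scipy.linalg import solve_discrete_are

def gem_y_to_x(Phi, Sigma):
    """Time-domain GEM F_{Y->X} via the DARE of the x-submodel."""
    Cx = np.array([[1.0, 0.0]])            # x_t = Cx z_t
    # Submodel for x alone, with state zeta_t = z_{t-1}:
    #   zeta_{t+1} = Phi zeta_t + e_t           (Q = Sigma)
    #   x_t = (Cx Phi) zeta_t + Cx e_t          (R = Cx Sigma Cx', S = Sigma Cx')
    A, Cbar = Phi, Cx @ Phi
    Q, R, S = Sigma, Cx @ Sigma @ Cx.T, Sigma @ Cx.T
    # Filtering DARE via duality with the control-form DARE solved by scipy.
    P = solve_discrete_are(A.T, Cbar.T, Q, R, s=S)
    V = Cbar @ P @ Cbar.T + R              # innovations variance, x's own past only
    return float(np.log(V[0, 0] / Sigma[0, 0]))

# Example (assumed values): y drives x through the coefficient 0.5.
Phi = np.array([[0.5, 0.5],
                [0.0, 0.7]])
Sigma = np.eye(2)
print("F_{Y->X} =", gem_y_to_x(Phi, Sigma))                      # > 0
print("F_{X->Y} =", gem_y_to_x(Phi[::-1, ::-1].copy(), Sigma))   # ~ 0
```

In this toy model, $F_{Y\to X} = \ln(V/\Sigma_{xx}) > 0$ while $F_{X\to Y} \approx 0$, as expected from the triangular coefficient matrix.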

F-DS

Again there is only one approach possible here. If the HRF is minimum phase, then the fast filtering will not affect the GCS, and the first stage would consist of a GC analysis of the slow BOLD data; that is, fit an SS model to $z_{b,k}$; find submodels for $x_{b,k}$, $y_{b,k}$ using the DARE (see theorem 1); and compute the GEMs $F_{Y\to X}$, $F_{Y.X}$, $F_{X\to Y}$ using the expressions in theorem 5a. This would deliver the GCS of the slow BOLD circuit. The second stage is to try to get the GCS of the fast BOLD circuit. But now we see that it must fail, since the results of the reverse GC study in section 8 demonstrate that we cannot use the GCS of the slow BOLD circuit to discern the GCS of the fast BOLD circuit. So we cannot get at the GCS of the fast NA circuit.

Notice that in F-DS there is no role for deconvolution.

If the HRF is NMP, then the GCS will be distorted, and there is nothing we can do to overcome this. If we apply deconvolution, then the analysis above shows it must fail, since the Wiener filter is NMP.

DS-F

There are two approaches here. The direct approach has two stages. If the HRF is minimum phase, then the slow filtering will not affect the GCS, and the first stage would consist of a GC analysis of the slow BOLD data; that is, fit an SS model to $z_{b,k}$; find submodels for $x_{b,k}$, $y_{b,k}$ using the DARE (see theorem 1); and compute the GEMs $F_{Y\to X}$, $F_{Y.X}$, $F_{X\to Y}$ using the expressions in theorem 5a. This would deliver the GCS of the slow NA circuit, since by theorem 6 the GEMs would be the same as for the slow BOLD data. The second stage is to try to get the GCS of the fast NA circuit. But now we see that it must fail, since the results of the reverse GC study in section 8 demonstrate that we cannot use the GCS of the slow NA circuit to discern the GCS of the fast NA circuit.

The indirect approach has three stages. If the slow HRF is minimum phase, then in the zeroth stage, we could inverse filter (deconvolve) the slow BOLD data to obtain the slow NA. In the first stage, we obtain its GCS using the procedure described in the first stage of the direct approach. But for the second stage, we are in the same situation as in the direct approach and must fail to discern the GCS of the fast NA.

In practice, slow timescale deconvolution has gained favor recently. It is done using a Wiener filter (Glover, 1999) since this deals with possible noise (which we have ignored in our discussion). But as discussed above, this must fail since the Wiener filter is NMP.

We see here that only in DS-F is there a role for slow timescale deconvolution.

In sum, we conclude that under any data-generating scheme, GC analysis must fail.

We note that there is an extensive discussion of GC in fMRI in Rodrigues and Andrade (2015). Unfortunately this work relies on the erroneous method of Chen et al. (2006) for computing GEMs (see the remarks following theorem 5b), and so its results are flawed.

10 Conclusion

This letter has given a theoretical and computational analysis of the use of Granger causality (GC) and applied the results to fMRI. There were two main issues: the effect of downsampling and the effect of hemodynamic filtering on Granger causal structure (GCS). To deal with these issues, a number of novel results in multivariate time series and Granger causality were developed via state-space methods as follows:

  1. Computation of submodels via the DARE (see theorems 1 and 4)

  2. Computation of downsampled models via the DARE (see theorem 3)

  3. Reliable computation of GEMs via the DARE (see theorems 5a and 5b)

  4. Effect of filtering on GEMs (see theorem 6)

Theorem 6 showed that nonminimum phase filters distort GCS. Using the first three of these results, we were able to develop, in section 8, a framework for generating downsampling-induced spurious Granger causality on demand, and we provided a number of illustrations.

In section 9 these results were applied to fMRI. To do this we described three data generating processes for fMRI:

  1. F where fMRI time series are filtered neural activity (NA)

  2. F-DS where NA is filtered, then downsampled to give fMRI time series

  3. DS-F where NA is downsampled, then filtered to give fMRI time series

While F-DS has strong physiological justification, DS-F seems to be behind a number of approaches in the literature. We showed that HRFs are nonminimum phase due to a refractory period. If that period can be corrected for, the phase character of the HRF has to be checked on a case-by-case basis. Certainly default canonical HRFs are nonminimum phase, and so more flexible HRF modeling will be needed.

Then we discussed the reverse GC problem under each data-generating process. In each case, we concluded that GC analysis could not work.

All this leads to the conclusion that linear Granger causality analysis of fMRI data cannot be used to discern neuronal-level driving relationships. Not only is the timescale too slow, but even with faster sampling, the nonminimum phase aspect of the HRF and the noise will still compromise the method.

Future work would naturally include an extension of the Granger causality results to handle the presence of a third vector time series and extensions to deal with time-varying Granger causality. Nongaussian versions could mitigate the nonminimum phase problem to some extent, but there does not seem to be any evidence for the nongaussianity of fMRI data. Extensions to nonlinear Granger causality are currently of great interest but need considerable development.

Acknowledgments


This work was partially supported by the NIH grant P41EB015896. The author would first like to thank Martinos Center colleagues Dr. Bruce Rosen and Dr. Robert Weisskoff, who raised crucial questions during a joint discussion of Granger causality and fMRI (held in 1998), which ultimately led to this paper. Thanks are also due to Dr. Graham Goodwin, who asked a critical question during an IEEE Control Engineering conference presentation that led to the work on the reverse problem. Special thanks are due to a referee who posed a series of critical questions that led to a major expansion of section 9 into its current form. Finally, we acknowledge discussions with Martinos Center colleagues Dr. Mark Vangel, Dr. Fa-Hsuan Lin, and Dr. Matti Hämäläinen.

Appendix A: Stabilizability, Detectability, and DARE

In this appendix we restate and modify for our purposes some standard state-space results. We rely mostly on Kailath et al. (2000, appendixes E and C).

We denote an eigenvalue of a matrix by λ and a corresponding eigenvector by q. We say λ is a stable eigenvalue if |λ| < 1; otherwise, λ is an unstable eigenvalue.

A.1 Stabilizability

The pair (A, B) is stabilizable if there exists a matrix G so that A − BG is stable (i.e., all eigenvalues of A − BG are stable). (A, B) is controllable iff any of the following conditions holds:

  • Controllability matrix: $\mathcal{C} = [B, AB, \ldots, A^{n-1}B]$ has rank n.

  • Rank test: rank[λI − A, B] = n for all eigenvalues λ of A.

  • PBH test: There is no left eigenvector of A that is orthogonal to B; that is, if $q^TA = \lambda q^T$, then $q^TB \ne 0$.

The pair (A, B) is stabilizable iff rank[λI − A, B] = n for all unstable eigenvalues of A. Three useful tests for stabilizability are (see the sketch after this list):

  • PBH test: (A, B) is stabilizable iff there is no left eigenvector of A corresponding to an unstable eigenvalue that is orthogonal to B; that is, if $q^TA = \lambda q^T$ and $|\lambda| \ge 1$, then $q^TB \ne 0$.

  • (A, B) is stabilizable if (A, B) is controllable.

  • (A, B) is stabilizable if A is stable.
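A minimal numerical sketch of the rank tests above (ours; the example matrices are arbitrary):

```python
# Sketch of the controllability and stabilizability tests above
# (our illustration; the example matrices are arbitrary).
import numpy as np

def is_controllable(A, B):
    n = A.shape[0]
    blocks = [B]                      # build C = [B, AB, ..., A^{n-1}B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

def is_stabilizable(A, B):
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):  # rank[lam I - A, B] = n for unstable lam
        if abs(lam) >= 1:
            if np.linalg.matrix_rank(np.hstack([lam * np.eye(n) - A, B])) < n:
                return False
    return True

A = np.array([[1.2, 0.0],             # unstable mode 1.2, stable mode 0.5
              [0.0, 0.5]])
B = np.array([[1.0],
              [0.0]])                  # input reaches only the unstable mode
print(is_controllable(A, B))           # False: the stable mode is unreachable
print(is_stabilizable(A, B))           # True: the unstable mode is reachable
```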

A.2 Theorem DARE

See Kailath et al. (2000, theorem E6.1, lemma 14.2.1, section 14.7).

Under conditions N, St, De, the DARE has a unique positive semidefinite solution P, which is stabilizing (i.e., $A_s - K_sC$ is a stable matrix). Further, if we initialize $P_0 = 0$, then $P_t$ is nondecreasing and $P_t \to P$ as $t \to \infty$.

Remarks

  1. "$A_s - K_sC$ stable" means that $A - KC$ is stable (see below), and this implies that (A, K) is stabilizable (see below).

  2. Since V ≥ R, condition N holds; that is, V is positive definite.

Proof of Remark i

We first note (taking limits in Kailath et al., 2000, equation 9.5.12) that $K_s = K - SR^{-1}$. We then have $A_s - K_sC = A - SR^{-1}C - (K - SR^{-1})C = A - KC$. So $A_s - K_sC$ is stable iff $A - KC$ is stable. But then (A, K) is stabilizable (take G = C in the definition).
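As a numerical sanity check of the monotone convergence claim in theorem DARE, the following sketch (ours; the example system is arbitrary, with S = 0 so that $A_s = A$ is stable and St, De hold automatically) iterates the Riccati recursion from $P_0 = 0$ and compares the limit with a direct DARE solution.

```python
# Sketch checking the monotone convergence claim of theorem DARE
# (our illustration; the example system is arbitrary but stable).
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.9, 0.3],
              [0.0, 0.6]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)                   # state noise covariance
R = np.array([[1.0]])                 # observation noise covariance
S = np.zeros((2, 1))                  # no cross-covariance, for simplicity

P = np.zeros((2, 2))                  # initialize P_0 = 0
for _ in range(200):                  # Riccati recursion
    K = (A @ P @ C.T + S) @ np.linalg.inv(C @ P @ C.T + R)
    P = A @ P @ A.T + Q - K @ (A @ P @ C.T + S).T

P_dare = solve_discrete_are(A.T, C.T, Q, R, s=S)
print(np.max(np.abs(P - P_dare)))     # ~ 0: the iterates converge to P
```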

A.3 Detectability

The pair (A, C) is detectable if $(A^T, C^T)$ is stabilizable.

Remarks

  1. If $A_s$ is stable (all eigenvalues have modulus < 1), then St and De automatically hold.

  2. Condition De can be replaced with the detectability of (A, C), which is the way Kailath et al. (2000) state the result. We show equivalence below (this is also noted in a footnote in Kailath et al., 2000, section 14.7).

Proof of Remark ii

Suppose $(A_s, C)$ is detectable but (A, C) is not. Then by the PBH test, there is a right eigenvector p of A corresponding to an unstable eigenvalue, with $Ap = \lambda p$ and $Cp = 0$. But then $A_sp = (A - SR^{-1}C)p = Ap = \lambda p$ while $Cp = 0$, which contradicts the detectability of $(A_s, C)$. The reverse argument is much the same.

Proof of Result 1

Part a follows from the discussion leading to theorem DARE. Part b follows from the remarks after theorem DARE.

Appendix B: GEMs for Bivariate VAR(1)

Applying formula 5.5 and reading off $H_{XX}$ from the VAR(1) model yields

$$f_X(\lambda) = \frac{(1+\phi_y^2 - 2\phi_y\cos\lambda)\,\sigma_a^2 + \gamma_x^2\sigma_b^2 - 2\rho\sigma_a\sigma_b(\gamma_x\phi_y - \gamma_x\cos\lambda)}{|D(e^{j\lambda})|^2}, \qquad D(L) = (1-\phi_x L)(1-\phi_y L) - \gamma_x\gamma_y L^2.$$

This can clearly be written as an ARMA(2,1) spectrum $\sigma_x^2\,|1-\theta_x e^{-j\lambda}|^2/|D(e^{j\lambda})|^2$. Equating coefficients gives

$$\gamma_0 = \sigma_x^2(1+\theta_x^2) = (1+\phi_y^2)\sigma_a^2 + \sigma_b^2\gamma_x^2 - 2\rho\sigma_a\sigma_b\gamma_x\phi_y = \sigma_a^2(1+\xi_x+d_x^2), \qquad \gamma_1 = \sigma_x^2\theta_x = \sigma_a^2 d_x,$$

where $\xi_x = (1-\rho^2)\gamma_x^2\sigma_b^2/\sigma_a^2$ and $d_x = \phi_y - \rho\gamma_x\sigma_b/\sigma_a$. We thus have $\theta_x^2 = \gamma_1^2/\sigma_x^4$, and using this in the first equation gives $\gamma_0 = \sigma_x^2 + \gamma_1^2/\sigma_x^2$, or $\sigma_x^4 - \gamma_0\sigma_x^2 + \gamma_1^2 = 0$. This has, of course, two solutions:

$$\sigma_x^2 = \tfrac{1}{2}\Big(\gamma_0 \pm \sqrt{\gamma_0^2 - 4\gamma_1^2}\Big), \qquad \frac{\sigma_x^2}{\sigma_a^2} = \tfrac{1}{2}\left(\frac{\gamma_0}{\sigma_a^2} \pm \sqrt{\frac{\gamma_0^2}{\sigma_a^4} - \frac{4\gamma_1^2}{\sigma_a^4}}\right) = \tfrac{1}{2}\Big(1+\xi_x+d_x^2 \pm \sqrt{(1+\xi_x+d_x^2)^2 - 4d_x^2}\Big).$$

Note that if $\xi_x = 0$, this delivers $\sigma_x^2/\sigma_a^2 = \tfrac{1}{2}\big(1+d_x^2+\sqrt{(1-d_x^2)^2}\big) = 1$.

We must choose the solution that ensures $|\theta_x| < 1$: $|\theta_x| < 1 \Leftrightarrow \gamma_1^2/\sigma_x^4 < 1 \Leftrightarrow \gamma_0/\sigma_x^2 < 2 \Leftrightarrow \gamma_0/\sigma_a^2 < 2\sigma_x^2/\sigma_a^2 \Leftrightarrow 1+\xi_x+d_x^2 < 1+\xi_x+d_x^2 \pm \sqrt{(1+\xi_x+d_x^2)^2 - 4d_x^2}$. And so we must choose the + solution. Continuing, we now claim

$$\frac{\sigma_x^2}{\sigma_a^2} \ge \tfrac{1}{2}\Big(1+\xi_x+d_x^2 + \sqrt{(1+\xi_x-d_x^2)^2}\Big) = \tfrac{1}{2}\big(1+\xi_x+d_x^2 + 1+\xi_x-d_x^2\big) = 1+\xi_x.$$

This follows if $(1+\xi_x+d_x^2)^2 - 4d_x^2 \ge (1+\xi_x-d_x^2)^2 \Leftrightarrow (1+\xi_x+d_x^2)^2 - (1+\xi_x-d_x^2)^2 \ge 4d_x^2 \Leftrightarrow 2(1+\xi_x)\cdot 2d_x^2 \ge 4d_x^2 \Leftrightarrow 1+\xi_x \ge 1$, which holds.
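A quick numerical spot-check of this algebra (ours; the parameter values are arbitrary) confirms that the + root gives an invertible MA parameter and satisfies the bound $\sigma_x^2/\sigma_a^2 \ge 1+\xi_x$:

```python
# Numerical spot-check of the appendix B algebra (parameter values assumed).
import numpy as np

phi_y, gamma_x, rho, sig_a, sig_b = 0.7, 0.5, 0.3, 1.0, 0.8

xi_x = (1 - rho**2) * gamma_x**2 * sig_b**2 / sig_a**2
d_x = phi_y - rho * gamma_x * sig_b / sig_a

g0 = sig_a**2 * (1 + xi_x + d_x**2)               # gamma_0
g1 = sig_a**2 * d_x                               # gamma_1

# '+' root of sigma_x^4 - gamma_0 sigma_x^2 + gamma_1^2 = 0
sig_x2 = 0.5 * (g0 + np.sqrt(g0**2 - 4 * g1**2))
theta_x = g1 / sig_x2

print("|theta_x| < 1:", abs(theta_x) < 1)                     # invertible MA
print("sigma_x^2/sigma_a^2 >= 1 + xi_x:", sig_x2 / sig_a**2 >= 1 + xi_x)
```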

Appendix C: Spectral Factorization

Suppose $\varepsilon_t$ is a white noise sequence with $E(\varepsilon_t) = 0$ and $\mathrm{var}(\varepsilon_t) = V$. Let $G(L)$ be a stable, causal, possibly nonminimum phase filter. Then $\bar z_t = G(L)\varepsilon_t$ has spectrum $f_{\bar Z}(\lambda) = G(L)VG^T(L^{-1})$, where $L = e^{-j\lambda}$. We can then find a unique causal, stable, minimum phase spectral factorization $f_{\bar Z}(\lambda) = G_o(L)V_oG_o^T(L^{-1})$. Let $V$, $V_o$ have Cholesky factorizations $V = JJ^T$, $V_o = J_oJ_o^T$, and set $G_c(L) = G(L)J$, $G_{o,c}(L) = G_o(L)J_o$. Then $f_{\bar Z}(\lambda) = G_c(L)G_c^T(L^{-1}) = G_{o,c}(L)G_{o,c}^T(L^{-1})$. Since $G_{o,c}(L)$ is minimum phase, we can introduce the causal filter $E(L) = G_{o,c}^{-1}(L)G_c(L)$, which obeys $E(L)E^T(L^{-1}) = I$. Such a filter is called an all-pass filter (Hannan & Deistler, 1988; Green, 1988). Now $G_c(L) = G_{o,c}(L)E(L)$, or $G(L) = G_o(L)J_oE(L)J^{-1}$ (i.e., a decomposition of a nonminimum phase (matrix) filter into the product of a minimum phase filter and an all-pass filter). We can also write this as $G_o(L) = G(L)JE^{-1}(L)J_o^{-1} = G(L)JE^T(L^{-1})J_o^{-1}$, showing how the nonminimum phase filter is transformed to yield a spectral factor.
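A scalar example may help fix ideas (ours; the MA(1) coefficient is an arbitrary nonminimum phase choice). With $G(L) = 1 - \theta L$ and $|\theta| > 1$, the minimum phase factor reflects the zero to $1/\theta$ and rescales the variance to $V_o = \theta^2 V$, and the connecting all-pass filter has unit modulus on the unit circle:

```python
# Scalar spectral factorization example (ours): an NMP MA(1) and its MP
# factor have identical spectra, and the connecting filter is all-pass.
import numpy as np

theta, V = 2.0, 1.0                        # z_t = (1 - theta L) eps_t, NMP: |theta| > 1
theta_o, V_o = 1.0 / theta, theta**2 * V   # MP factor: zero reflected to 1/theta

lam = np.linspace(0, np.pi, 512)
L = np.exp(-1j * lam)
f_nmp = V * np.abs(1 - theta * L) ** 2     # G(L) V G(L^{-1})
f_mp = V_o * np.abs(1 - theta_o * L) ** 2  # G_o(L) V_o G_o(L^{-1})
print(np.max(np.abs(f_nmp - f_mp)))        # ~ 0: same spectrum

E = (np.sqrt(V) * (1 - theta * L)) / (np.sqrt(V_o) * (1 - theta_o * L))
print(np.max(np.abs(np.abs(E) - 1.0)))     # ~ 0: |E| = 1, an all-pass filter
```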

Footnotes

1. The main parts of this theorem were announced in a research workshop at the 2011 Human Brain Mapping meeting.

References

  1. Baccala L, Sameshima K. Partial directed coherence: A new concept in neural structure determination. Biol Cybernetics. 2001;84:463–474. doi: 10.1007/PL00007990.
  2. Bauer D. Estimating linear dynamical systems using subspace methods. Econometric Theory. 2005;21:181–211.
  3. Bernasconi C, Konig P. On the directionality of cortical interactions studied by structural analysis of electrophysiological recordings. Biol Cyb. 1999;81:199–210. doi: 10.1007/s004220050556.
  4. Caines P. Weak and strong feedback free processes. IEEE Trans Autom Contr. 1976;21:737–739.
  5. Caines P, Chan W. Feedback between stationary stochastic processes. IEEE Trans Autom Contr. 1975;20:498–508.
  6. Cassidy B, Long C, Rae C, Solo V. Identifying fMRI model violations with Lagrange multiplier tests. IEEE Trans Medical Imaging. 2012;31:1481–1492. doi: 10.1109/TMI.2012.2195327.
  7. Chen Y, Bressler S, Ding M. Frequency domain decomposition of conditional Granger causality and application to multivariate neural field potential data. J Neuroscience Methods. 2006;150:228–237. doi: 10.1016/j.jneumeth.2005.06.011.
  8. Christiano L, Eichenbaum M. Temporal aggregation and structural inference in macroeconomics. Technical Report, Federal Reserve Bank of Minneapolis; 1987.
  9. Damm T, Muhirwa LN. Zero crossings, overshoot and initial undershoot in the step and impulse responses of linear systems. IEEE Trans Autom Contr. 2014;59:1925–1929.
  10. David O, Guillemain I, Saillet S, Reyt S, Deransart C, Segebarth C, Depaulis A. Identifying neural drivers with functional MRI: An electrophysiological validation. PLOS Biology. 2008;6:2683–2697. doi: 10.1371/journal.pbio.0060315.
  11. Deistler M, Peternell K, Scherer W. Consistency and relative efficiency of subspace methods. Automatica. 1995;31:1865–1875.
  12. Deshpande G, Sathian K, Hu X. Effect of hemodynamic variability on Granger causality analysis of fMRI. NeuroImage. 2010;52:884–896. doi: 10.1016/j.neuroimage.2009.11.060.
  13. Ding M, Bressler S, Yang W, Liang H. Short-window spectral analysis of cortical event-related potentials by adaptive multivariate autoregressive modeling: Data preprocessing, model validation, and variability assessment. Biol Cyb. 2000;83:35–45. doi: 10.1007/s004229900137.
  14. Dufour J, Renault E. Short and long run causality in time series: Theory. Econometrica. 1998;66:1099–1125.
  15. Dufour J, Taamouti A. Short and long run causality measures: Theory and inference. Journal of Econometrics. 2010;154:42–58.
  16. Edin F. Scaling errors in measures of brain activity cause erroneous estimates of effective connectivity. NeuroImage. 2010;49:621–630. doi: 10.1016/j.neuroimage.2009.07.007.
  17. Feinberg D, Yacoub E. The rapid development of high speed, resolution and precision in fMRI. NeuroImage. 2012;62:720–725. doi: 10.1016/j.neuroimage.2012.01.049.
  18. Geweke J. Temporal aggregation in the multiple regression model. Econometrica. 1978;46:643–661.
  19. Geweke J. The measurement of linear dependence and feedback between multiple time series. J Amer Stat Assoc. 1982;77:304–313.
  20. Geweke J. Measures of conditional linear dependence and feedback between time series. J Amer Stat Assoc. 1984;79:907–915.
  21. Glover G. Deconvolution of impulse response in event-related BOLD fMRI. NeuroImage. 1999;9:416–429. doi: 10.1006/nimg.1998.0419.
  22. Goebel R. Investigating directed cortical interactions in time-resolved fMRI data using vector autoregressive modeling and Granger causality mapping. Magn Reson Imaging. 2003;21:1251–1261. doi: 10.1016/j.mri.2003.08.026.
  23. Granger C. Economic processes involving feedback. Information and Control. 1963;6:28–48.
  24. Granger C. Investigating causal relations by econometric models and cross-spectral methods. Econometrica. 1969;37:424–438.
  25. Green M. All-pass matrices, the positive real lemma and unit canonical correlations between future and past. J Multiv Anal. 1988;24:143–154.
  26. Hamilton J. Time series analysis. Princeton, NJ: Princeton University Press; 1994.
  27. Handwerker D, Gonzalez-Castillo J, D'Esposito M, Bandettini P. The continuing challenge of understanding and modeling hemodynamic variation in fMRI. NeuroImage. 2012;62:1017–1023. doi: 10.1016/j.neuroimage.2012.02.015.
  28. Hannan E, Deistler M. Statistical theory of linear systems. New York: Wiley; 1988.
  29. Hannan E, Poskitt D. Unit canonical correlations between future and past. Ann Stat. 1988;16:784–790.
  30. Havlicek M, Friston K, Jan J, Brazdil M, Calhoun VD. Dynamic modeling of neuronal responses in fMRI using cubature Kalman filtering. NeuroImage. 2011;56(4):2109–2128. doi: 10.1016/j.neuroimage.2011.03.005.
  31. Hsiao C. Time series modeling and causal ordering of Canadian money, income and interest rates. In: Anderson OD, editor. Time series analysis: Theory and practice. Amsterdam: North-Holland; 1982. pp. 671–699.
  32. Kailath T, Sayed A, Hassibi B. Linear estimation. Upper Saddle River, NJ: Prentice Hall; 2000.
  33. Kaminski M, Blinowska K. A new method of the description of the information flow in the brain structures. Biol Cyb. 1991;65:203–210. doi: 10.1007/BF00198091.
  34. Kaufmann K, Stern D. Evidence for human influence on climate from hemispheric temperature relations. Nature. 1997;388:39–44.
  35. Khalidov I, Fadili J, Lazeyras F, Ville D, Unser M. Activelets: Wavelets for sparse representation of hemodynamic responses. Signal Processing. 2011;91:2810–2821.
  36. Kshirsagar A. Multivariate analysis. New York: Marcel Dekker; 1972.
  37. Larimore W. System identification, reduced order filters and modeling via canonical variate analysis. In: Proc American Control Conference. Piscataway, NJ: IEEE; 1983. pp. 445–451.
  38. Lutkepohl H. Introduction to multiple time series analysis. Berlin: Springer-Verlag; 1993.
  39. Magnus J, Neudecker H. Matrix differential calculus with applications in statistics and econometrics. New York: Wiley; 1999.
  40. Marcet A. Temporal aggregation of economic time series. In: Hansen LP, Sargent TJ, editors. Rational expectations econometrics. Boulder, CO: Westview Press; 1991. pp. 237–281.
  41. Middleton R, Goodwin G. Digital control and estimation: A unified approach. Upper Saddle River, NJ: Prentice Hall; 1990.
  42. Miezin FM, Maccotta L, Ollinger JM, Petersen SE, Buckner RL. Characterizing the hemodynamic response: Effects of presentation rate, sampling procedure, and the possibility of ordering brain activity based on relative timing. NeuroImage. 2000;11:735–759. doi: 10.1006/nimg.2000.0568.
  43. Pandit S, Wu S. Time series and systems analysis with applications. New York: Wiley; 1983.
  44. Pascual-Marqui R, Biscay R, Bosch-Bayard J, Lehmann D, Kochi K, Yamada N, Sadato N. Isolated effective coherence (ICOH): Causal information flow excluding indirect paths. Technical Report, KEY Institute for Brain-Mind Research, University of Zurich; 2014. arXiv:1402.4887.
  45. Pierce D, Haugh L. Causality in temporal systems: Characterizations and a survey. Journal of Econometrics. 1977;5:265–293.
  46. Rissanen J, Wax M. Measures of mutual and causal dependence between two time series. IEEE Trans Inf Thy. 1987;33:598–601.
  47. Rodrigues J, Andrade A. Lag-based effective connectivity applied to fMRI: A simulation study highlighting dependence on experimental parameters and formulation. NeuroImage. 2015;89:358–377. doi: 10.1016/j.neuroimage.2013.10.029.
  48. Roebroeck A, Formisano E, Goebel R. Mapping directed influence over the brain using Granger causality. NeuroImage. 2005;25:230–242. doi: 10.1016/j.neuroimage.2004.11.017.
  49. Roebroeck A, Formisano E, Goebel R. The identification of interacting networks in the brain using fMRI: Model selection, causality and deconvolution. NeuroImage. 2011;58:296–302. doi: 10.1016/j.neuroimage.2009.09.036.
  50. Seber G. Multivariate observations. New York: Wiley; 1984.
  51. Seth A, Chorley P, Barnett L. Granger causality analysis of fMRI BOLD signals is invariant to hemodynamic convolution but not downsampling. NeuroImage. 2013;65:540–555. doi: 10.1016/j.neuroimage.2012.09.049.
  52. Sims C. Discrete approximations to continuous time distributed lags in econometrics. Econometrica. 1971;39:545–563.
  53. Sims C. Money, income and causality. Am Econ Rev. 1972;62:540–552.
  54. Solo V. Topics in advanced time series analysis. In: del Pino G, Rebolledo R, editors. Lectures in probability and statistics. Berlin: Springer-Verlag; 1986. pp. 165–328.
  55. Sreenivasan K, Havlicek M, Deshpande G. Nonparametric hemodynamic deconvolution of fMRI using homomorphic filtering. IEEE Trans Medical Imaging. 2015;34:1155–1163. doi: 10.1109/TMI.2014.2379914.
  56. Suna L, Wang M. Global warming and global dioxide emission: An empirical study. J Environmental Management. 1996;46(4):327–343.
  57. Takahashi D, Baccala L, Sameshima K. Partial directed coherence asymptotics for VAR processes of infinite order. International Journal of Bioelectromagnetism. 2008;10:31–36.
  58. Telser L. Discrete samples and moving sums in stationary stochastic processes. J Amer Stat Assoc. 1967;62:484–499.
  59. Triacca U. Is Granger causality analysis appropriate to investigate the relationship between atmospheric concentration of carbon dioxide and global surface air temperature? Theoretical and Applied Climatology. 2005;81:133–135.
  60. Valdes-Sosa P. Spatio-temporal autoregressive models defined over brain manifolds. Neuroinformatics. 2004;2:239–250. doi: 10.1385/NI:2:2:239.
  61. Valdes-Sosa P, Roebroeck A, Daunizeau J, Friston K. Effective connectivity: Influence, causality and biophysical modeling. NeuroImage. 2011;58:339–361. doi: 10.1016/j.neuroimage.2011.03.058.
  62. van Overschee P, de Moor B. Subspace identification for linear systems. Dordrecht: Kluwer; 1996.
  63. Wu G, Liao W, Stramaglia S, Ding J, Chen H, Marinazzo D. A blind deconvolution approach to recover effective connectivity brain networks from resting state fMRI data. Medical Image Analysis. 2013;17:365–374. doi: 10.1016/j.media.2013.01.003.
  64. Yamashita O, Sadato N, Okada T, Ozaki T. Evaluating frequency-wise directed connectivity of BOLD signals applying relative power contribution with the linear multivariate time-series models. NeuroImage. 2005;25:478–490. doi: 10.1016/j.neuroimage.2004.11.042.
  65. Zafer D, Blu T, Ville D. Detecting spontaneous brain activity in functional magnetic resonance imaging using finite rate of innovation. In: Proc IEEE ISBI. Piscataway, NJ: IEEE; 2014. pp. 1047–1050.
