Author manuscript; available in PMC: 2009 Feb 18.
Published in final edited form as: Neuroimage. 2008 Oct 17;44(3):796–811. doi: 10.1016/j.neuroimage.2008.09.048

Dynamic causal models of steady-state responses

RJ Moran 1, KE Stephan 1,2, T Seidenbecher 3, H-C Pape 3, RJ Dolan 1, KJ Friston 1
PMCID: PMC2644453  EMSID: UKMS3830  PMID: 19000769

Abstract

In this paper, we describe a dynamic causal model (DCM) of steady-state responses in electrophysiological data that are summarised in terms of their cross-spectral density. These spectral data-features are generated by a biologically plausible, neural-mass model of coupled electromagnetic sources; where each source comprises three sub-populations. Under linearity and stationarity assumptions, the model's biophysical parameters (e.g., post-synaptic receptor density and time constants) prescribe the cross-spectral density of responses measured directly (e.g., local field potentials) or indirectly through some lead-field (e.g., electroencephalographic and magnetoencephalographic data). Inversion of the ensuing DCM provides conditional probabilities on the synaptic parameters of intrinsic and extrinsic connections in the underlying neuronal network. This means we can make inferences about synaptic physiology, as well as changes induced by pharmacological or behavioural manipulations, using the cross-spectral density of invasive or non-invasive electrophysiological recordings. In this paper, we focus on the form of the model, its inversion and validation using synthetic and real data. We conclude with an illustrative application to multi-channel local field potential data acquired during a learning experiment in mice.

Keywords: Frequency Domain Electrophysiology, Bayesian Inversion, Cross-spectral Densities, DCM, fear conditioning, hippocampus, CA1, amygdala

INTRODUCTION

This paper is concerned with modelling steady-state or stationary responses recorded electrophysiologically using invasive or non-invasive techniques. Critically, the models are parameterised in terms of neurophysiologically meaningful parameters, describing the physiology and connectivity of coupled neuronal populations subtending observed responses. The model generates or predicts the cross-spectral density of observed responses, which is a simple but comprehensive summary of steady-state dynamics under linearity and stationarity assumptions. Furthermore, these cross-spectral features can be extracted quickly and simply from empirical data. In this paper, we describe the model and its inversion, with a focus on system identifiability and the validity of the proposed approach. This method is demonstrated using local field potentials (LFP) recorded from Pavlovian fear-conditioned mice. In subsequent papers, we will apply the model to LFP data recorded during pharmacological experiments.

The approach described below represents the denouement of previous work on dynamic causal modelling of spectral responses. In Moran et al (2007), we described how neural-mass models, used originally to model evoked responses in the electroencephalogram (EEG) and magnetoencephalogram (MEG) (David et al 2003; 2005; Kiebel et al 2007), could also model spectral responses as recorded by LFPs. This work focussed on linear systems analysis and structural stability, in relation to model parameters. We then provided a face-validation of the basic idea, using single-channel local field potentials recorded from two groups of rats. These groups expressed different glutamatergic neurotransmitter function, as verified with microdialysis (Moran et al 2008). Using the model, we were able to recover the anticipated changes in synaptic function.

Here, we generalise this approach to provide a full dynamic causal model (DCM) of coupled neuronal sources, where the ensuing network generates electrophysiological responses that are observed directly or indirectly. This generalisation rests on two key advances. First, we model not just the spectral responses from each electromagnetic source but the cross-spectral density among sources. This enables us to predict the cross-spectral density in multi-channel data, even if it has been recorded non-invasively through, for example, scalp electrodes. Second, in our previous work we made the simplifying assumption that the neuronal innovations (i.e. the baseline cortical activity) driving spectral responses were white (i.e., had uniform spectral power). In this work, we relax this assumption and estimate, from the data, the spectral form of these innovations, using a more plausible mixture of white and pink (1/f) components.

This paper comprises three sections. In the first, we describe the DCM, the cross-spectral data-features generated by the model and model inversion given these features. In the second section, we address the face-validity of the model, using synthetic data to establish that both the form of the model and its key parameters can be recovered in terms of conditional probability densities. The parameters we look at are those that determine post-synaptic sensitivity to glutamate from extrinsic and intrinsic afferents. In the final section, we repeat the analysis of synthetic data using multi-channel LFP data from mice, acquired during cued recall of a conditioned fear memory. This section tries to establish the construct validity of DCM in relation to previous analyses of functional connectivity using cross-correlogram analysis. These analyses showed an increase in coupling between the hippocampus and amygdala in responses induced by conditioned fear-stimuli. We try to replicate this finding and, critically, extend it to establish the changes in directed connections that mediate this increased coupling.

THE DYNAMIC CAUSAL MODEL

In this section, we describe the model of cross-spectral density responses. Much of this material is based on linear systems theory and the differential equations that constitute our neural-mass model of underlying dynamics. We will use a tutorial style and refer interested readers to appendices and previous descriptions of the neural-mass model for details. We first consider the generative model for cross-spectral density and then describe how these cross-spectral features are evaluated. Finally, we review model inversion and inference.

A generative model for cross-spectral density

Under stationarity assumptions, one can summarize arbitrarily long electrophysiological recordings from multi-channel data in terms of cross-spectral density matrices, g(ω)c, at frequency ω (radians per second). Heuristically, these can be considered as covariance matrices at each frequency of interest. As such, these second-order data-features specify, completely, the second-order moments of the data under Gaussian assumptions. Cross-spectral density is useful because it represents the important information, in long time-series, compactly. Furthermore, it brings our data modelling into the domain of conventional spectral analysis and linear systems theory. The use of linear systems theory to derive the predicted spectral response from a non-linear dynamical system assumes that changes in the (neuronal) states of the system can be approximated with small perturbations around some fixed-point. One can motivate this assumption easily, given there are no profound perturbations to the subject's neuronal state during data acquisition.

The neural mass model

The underlying dynamic causal model is defined by the equations of motion ẋ(t) = f(x,u) at the neuronal level. In this context, they correspond to a neural-mass model that has been used extensively in the causal modelling of EEG and MEG data and has been described previously for modelling spectral responses (Moran et al 2007; 2008). This model ascribes three sub-populations to each neuronal source, corresponding roughly to spiny stellate input cells, deep pyramidal output cells and inhibitory interneurons. Following standard neuroanatomic rules (Felleman & Van Essen 1991), we distinguish between forward connections (targeting spiny stellate cells), backward connections (targeting pyramidal cells and inhibitory interneurons with slower kinetics) and lateral connections (targeting all subpopulations); see Figure 1 and Moran et al (2007). Each neuronal source could be regarded as a three-layer structure, in which spiny stellate cells occupy the granular layer, while infragranular and supragranular layers contain both pyramidal cells and inhibitory interneurons.

Fig. 1.

Schematic of the source model with intrinsic connections. This schematic includes the differential equations describing the motion of hidden electrophysiological states. Each source is modelled with three subpopulations (pyramidal, spiny-stellate and inhibitory interneurons) as described in (Jansen and Rit, 1995). In this figure these subpopulations have been assigned to granular and agranular cortical layers, which receive forward, backward and lateral connections from extrinsic sources in the network.

Each subpopulation is modelled with pairs of first-order differential equations of the following form:

\dot{x}_V = x_I
\dot{x}_I = \kappa H (E(x) + C(u)) - 2\kappa x_I - \kappa^2 x_V    (1)

The column vectors xV and xI correspond to the mean voltages and currents, where each element corresponds to the hidden state of a subpopulation at each source. These differential equations implement a convolution of a subpopulation's presynaptic input to produce a postsynaptic response. The output of each source is modelled as a mixture of the depolarisation of each subpopulation. Due to the orientation of deep pyramidal cell dendrites, tangential to the cortical surface, this population tends to dominate LFP recordings. We accommodate this by making the output of each source, g(x), a weighted mixture of xV, with a weight of 60% for the pyramidal subpopulation and 20% for each of the others. The presynaptic input to each subpopulation comprises endogenous, E(x), and exogenous, C(u), components, which we describe in turn below; a numerical sketch of these convolution dynamics follows.
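To make the dynamics of Eq. 1 concrete, the following sketch integrates the pair of first-order equations for a single subpopulation. This is a minimal illustration in Python, not the SPM (Matlab) implementation; the input pulse, step size and parameter values are illustrative choices.

```python
# Minimal sketch: forward-Euler integration of the subpopulation dynamics of
# Eq. 1. Parameter values (kappa, H) and the input pulse are illustrative only.
import numpy as np

def simulate_subpopulation(inp, kappa=0.25, H=8.0, dt=0.1):
    """Integrate  dxV/dt = xI,
                  dxI/dt = kappa*H*inp - 2*kappa*xI - kappa**2*xV
    with Euler steps; dt in ms, kappa in ms^-1, H in mV."""
    xV = xI = 0.0
    trace = np.empty(len(inp))
    for n, u in enumerate(inp):
        dxV = xI
        dxI = kappa * H * u - 2.0 * kappa * xI - kappa**2 * xV
        xV += dt * dxV
        xI += dt * dxI
        trace[n] = xV
    return trace

# A brief presynaptic pulse yields an alpha-like postsynaptic potential whose
# amplitude is governed by H and whose decay is governed by kappa.
u = np.zeros(1000)
u[10:20] = 1.0
v = simulate_subpopulation(u)
```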

Endogenous inputs

In a DCM comprising s sources, endogenous input E(x) is a weighted mixture of the mean firing rates in other subpopulations (see Figure 1). These firing rates are a sigmoid activation function of depolarisation, which we approximate with a linear gain function, S(x_i) = S x_i ∈ ℝ^{s×1}. Firing rates provide endogenous inputs from subpopulations that are intrinsic or extrinsic to the source. Subpopulations within each source are coupled by intrinsic connections, whose strengths are parameterised by γ = {γ1,…,γ5}. These endogenous intrinsic connections can arise from any subpopulation. Conversely, endogenous extrinsic connections arise only from the excitatory pyramidal cells of other sources. The strengths of these connections are parameterised by the forward, backward and lateral extrinsic connection matrices A^F ∈ ℝ^{s×s}, A^B ∈ ℝ^{s×s} and A^L ∈ ℝ^{s×s} respectively. The postsynaptic efficacy of connections is encoded by the maximum amplitude of postsynaptic potentials H_{e,i} = diag(H1,…,Hs) (note the subscripts in Figure 1) and by the rate-constants of postsynaptic potentials, κ = diag(κ1,…,κs) for each source. The rate-constants are lumped representations of passive membrane properties and other spatially distributed dynamics in the dendritic tree.

Exogenous inputs

Exogenous inputs C(u) = Cu are scaled by the exogenous input matrix C ∈ ℝ^{s×s} so that each source-specific innovation u(t) ∈ ℝ^{s×1} excites the spiny stellate subpopulation. We parameterise the spectral density of this exogenous input, g(ω)u, in terms of white (α) and pink (β) spectral components:

g_k(\omega)_u = \alpha_u + \beta_u/\omega    (2)
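A one-line rendering of this parameterisation follows; the amplitude values below are hypothetical stand-ins, since the quantities actually estimated are the log-scale parameters in Table 1.

```python
# Sketch of Eq. 2: innovation spectrum as a mixture of a white (alpha_u) and a
# pink (beta_u / omega) component; alpha_u and beta_u values are illustrative.
import numpy as np

def innovation_spectrum(omega, alpha_u=1.0, beta_u=0.5):
    return alpha_u + beta_u / omega       # g_u(omega), omega > 0 (rad/s)

omega = 2 * np.pi * np.linspace(4, 48, 128)   # 4-48 Hz, as used later
g_u = innovation_spectrum(omega)
```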

Neuronal responses

The cross-spectral density is a description of the dependencies among the observed outputs of these neuronal sources. We will consider a linear mapping from s sources to c channels. In EEG and MEG this mapping is a lead-field or gain-matrix function, L(θ) ∈ ℝ^{c×s}, of unknown spatial parameters, θ, such as source location and orientation. Generally, this function rests upon the solution of a well-posed electromagnetic forward model. For invasive LFP recordings that are obtained directly from the neuronal sources, this mapping is a leading diagonal gain-matrix, L = diag(θ1,…θs), where the parameters model electrode-specific gains. The observed output at channel i is thus si(t) = Li g(x), where g(x) is the source output (a mixture of depolarisations) and Li represents the i-th lead-field or row of the gain-matrix. In other words, Li = ∂si/∂g is the change in observed potential caused by changes in source activity. These observed outputs can now be used in a generative model of source cross-spectral measures.

Cross-spectral density

The neuronal model comprises a network of neuronal sources, each of which generates stationary time-series in a set of recording channels. These steady-state dynamics are expressed, in the frequency domain, as cross-spectral densities, gij(ω), at radial frequencies ω, between channels i and j. Under linear systems theory, the cross-spectral density induced by the k-th input or innovation uk(t), is simply the cross-transfer function Γijk(ω) times the spectral density of that innovation, gk(ω)u. This transfer function is the cross-product of the Fourier transforms of the corresponding first-order kernels, κik(t) and κjk(t) (and in the case of i = j may be regarded as the modulation or self-transfer function).

\Gamma_{ij}^{k}(\omega) = \left| \int \kappa_{ik}(t) e^{-j\omega t} dt \int \kappa_{jk}(t) e^{-j\omega t} dt \right|
g_{ij}(\omega) = \sum_k \Gamma_{ij}^{k}(\omega)\, g_k(\omega)_u    (3)

The convolution kernels mediate the effect of the k-th input, at time t in the past, on the current response recorded at each channel. In general, they can be regarded as impulse response functions and describe the output at the i-th channel, si(t), produced by a spike of the k-th exogenous input, uk(t). The kernel for each channel obtains analytically from the Jacobian J = ∂f/∂x describing how the system's hidden neuronal states, x(t), couple inputs to outputs. For channel i and input k the kernel is

\kappa_{ik}(\sigma) = \frac{\partial s_i(t)}{\partial u_k(t-\sigma)} = \frac{\partial s_i}{\partial x} \frac{\partial x(t)}{\partial u_k(t-\sigma)} = L_i \frac{\partial g}{\partial x}\, e^{J\sigma} \frac{\partial f}{\partial u_k}    (4)

This means the kernels are analytic functions of ẋ(t) = f(x,u) and s(t) = Lg(x); the network's equations of motion and output function respectively. The use of the chain rule follows from the fact that the only way past inputs can affect current channel outputs is through the hidden states. It is these states that confer memory on the system. In Appendix I, we present an alternative derivation of the cross-spectral density using the Laplace transform of the dynamics in state-space form. This gives a more compact, if less intuitive, series of expressions that are equivalent to the kernel expansion. In this form, the Jacobian is known as the state transition matrix.
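Numerically, these predictions are conveniently computed from the state-space form of Appendix I rather than by explicit kernel convolution. The sketch below is an illustration of Eq. 3, not the SPM code; J, B, L and Gx are stand-ins for the DCM's Jacobian ∂f/∂x, input matrix ∂f/∂u, lead-field and output gradient ∂g/∂x.

```python
# Sketch: predicted cross-spectral density from a linearised state-space model.
# H(omega) = L @ Gx @ (j*omega*I - J)^-1 @ B is the channel transfer function;
# the cross-spectrum sums outer products of transfer functions over the
# innovations, weighted by their spectra g_u (cf. Eq. 3 and Appendix I).
import numpy as np

def cross_spectral_density(omega, J, B, L, Gx, g_u):
    """omega: (nw,) rad/s; J: (nx,nx); B: (nx,nu); L: (nc,ns); Gx: (ns,nx);
    g_u: (nw,nu). Returns g: (nw,nc,nc) complex cross-spectra."""
    nx, nu = B.shape
    nc = L.shape[0]
    g = np.zeros((len(omega), nc, nc), dtype=complex)
    for w, om in enumerate(omega):
        H = L @ Gx @ np.linalg.solve(1j * om * np.eye(nx) - J, B)  # (nc, nu)
        for k in range(nu):                  # sum over innovations (Eq. 3)
            g[w] += g_u[w, k] * np.outer(H[:, k], H[:, k].conj())
    return g
```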

To furnish a likelihood model for observed data-features we include a cross-spectral density ψij induced by channel noise and add a random observation error to the predicted cross-spectral density. Finally, we apply a square root transform to the observed and predicted densities to render the observation error approximately Gaussian (Kiebel et al 2005).

g_{ij}(\omega)_c = g_{ij}(\omega) + \psi_{ij}(\omega) + \varepsilon(\omega)
\psi_{ij}(\omega) = \begin{cases} \psi_c + \psi_s & i = j \\ \psi_c & i \neq j \end{cases}
\psi_c = \alpha_c + \beta_c/\omega
\psi_s = \alpha_s + \beta_s/\omega    (5)

The spectral densities, ψc and ψs, model the contributions of common noise sources (e.g., a common reference channel) and channel-specific noise respectively. As with the neuronal innovations, we parameterise these spectral densities as unknown mixtures of white and pink components. The observation error ε ~ N(0, Σ(λ)) has a covariance function, Σ(λ) = exp(λ)V(ω), where λ are unknown hyperparameters and V(ω) encodes correlations over frequencies¹.
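A sketch of this likelihood mapping (Eq. 5) follows; the noise amplitudes below are hypothetical stand-ins for the estimated α and β parameters.

```python
# Sketch of Eq. 5: add common (psi_c) and channel-specific (psi_s)
# white-plus-pink channel noise to the neuronal cross-spectra, then take an
# element-wise square root to render observation error approximately Gaussian.
import numpy as np

def predicted_csd(g, omega, a_c=0.1, b_c=0.1, a_s=0.1, b_s=0.1):
    """g: (nw,nc,nc) neuronal cross-spectra (complex); omega: (nw,) rad/s."""
    nw, nc, _ = g.shape
    psi_c = a_c + b_c / omega                    # common to all channel pairs
    psi_s = a_s + b_s / omega                    # channel-specific (diagonal)
    pred = g + psi_c[:, None, None]
    pred[:, range(nc), range(nc)] += psi_s[:, None]
    return np.sqrt(pred)                         # principal root, element-wise
```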

Equations 1 to 5 specify the predicted cross-spectral density between any two channels given the parameters of the observation model {α,β,λ,θ} and the neuronal state equations {κ,H,γ,A,C}. This means that the cross-spectral density is an analytic function of the parameters ϑ = {α,β,κ,H,γ,A,C,λ,θ} and specifies the likelihood p(gc|ϑ) of observing any given pattern of cross-spectral densities at any frequency. When this likelihood function is supplemented with a prior density on the parameters, p(ϑ) (see Moran et al 2007 and Table 1), we have a full probabilistic generative model for cross-spectral density features, p(gc,ϑ) = p(gc|ϑ)p(ϑ), that is specified in terms of biophysical parameters. Next, we look at how to extract the data features this model predicts.

Table 1.

Priors for model parameters, including the observation model, neuronal sources and experimental effects. In practice, the non-negative parameters of this model are given log-normal priors, by assuming a Gaussian density on a scale parameter, Θ_i ~ N(0, σ_i²), where ϑ_i = π_i exp(Θ_i); π_i is the prior expectation and σ_i² is its log-normal dispersion.

| Parameter ϑ_i = π_i exp(Θ_i) | Interpretation | Prior mean π_i | Prior variance σ_i |
| --- | --- | --- | --- |
| *Observation model* |  |  |  |
| α_u | Exogenous white input | 0 | 1/16 |
| α_s | Channel-specific white noise | 0 | 1/16 |
| α_c | White noise common to all channels | 0 | 1/16 |
| β_u | Exogenous pink input | 0 | 1/16 |
| β_s | Channel-specific pink noise | 0 | 1/16 |
| β_c | Pink noise common to all channels | 0 | 1/16 |
| θ_1…s | Lead-field gain | 1 | exp(8) |
| λ | Noise hyperparameter | 0 | 1 |
| *Neuronal sources* |  |  |  |
| κ_e | Excitatory rate constant | 4 ms⁻¹ | 1/8 |
| κ_i | Inhibitory rate constant | 16 ms⁻¹ | 1/8 |
| H_e | Excitatory maximum post-synaptic potential | 8 mV | 1/16 |
| H_i | Inhibitory maximum post-synaptic potential | 32 mV | 1/16 |
| γ_1 | Intrinsic connection | 128 | 0 |
| γ_2 | Intrinsic connection | 128 | 0 |
| γ_3 | Intrinsic connection | 64 | 0 |
| γ_4 | Intrinsic connection | 64 | 0 |
| γ_5 | Intrinsic connection | 4 | 0 |
| A^F | Forward extrinsic connections | 32 | 1/2 |
| A^B | Backward extrinsic connections | 16 | 1/2 |
| A^L | Lateral extrinsic connections | 4 | 1/2 |
| C | Exogenous input | 1 | 1/32 |
| *Design* |  |  |  |
| β_ki | Trial-specific changes | 1 | 1/2 |

Evaluating the cross-spectral density

The assumptions above establish a generative model for cross-spectral features of observed data under linearity and local stationarity assumptions. To invert or fit this model we need to perform an initial feature selection on the raw LFP or M/EEG data. In this section, we describe this procedure, using a vector auto-regression (VAR) model of the multi-channel data, and comment briefly on its advantages over alternative schemes. We use a p-order VAR-model of the channel data y, to estimate the underlying auto-regression coefficients A(p) ∈ ℝ^{c×c} (where c is the number of channels²).

y_n = A(1) y_{n-1} + A(2) y_{n-2} + \cdots + A(p) y_{n-p} + e    (6)

Here the channel data at the n-th time point, yn, represents a signal vector over channels. The autoregressive coefficients A(k) are estimated using both auto- and cross-time-series components. These, along with an estimated channel noise covariance, E, provide a direct estimate of the cross-spectral density, g_{ij}(ω)_c = f(A(p)), using the following transform:

H_{ij}(\omega) = \frac{1}{1 - A_{ij}(1) e^{-i\omega} - A_{ij}(2) e^{-i2\omega} - \cdots - A_{ij}(p) e^{-ip\omega}}
g_{ij}(\omega)_c = H(\omega)_{ij}\, E_{ij}\, H(\omega)_{ij}^*    (7)

The estimation of the auto-regression coefficients, A(k) ∈ A(p), uses the spectral toolbox in SPM (http://www.fil.ion.ucl.ac.uk), which allows for Bayesian point estimators of A(p) under various priors on the coefficients. Details concerning the Bayesian estimation of the VAR-coefficients can be found in Roberts and Penny (2002). Briefly, this entails a variational approach that estimates the posterior densities of the coefficients. This posterior density is approximated in terms of its conditional mean and covariance, p(A|y,p) = N(μ_A,Σ_A). These moments are optimised through hyperparameters ν_E and ν_A (with Gamma hyperpriors Γ(10³,10⁻³)), encoding the precision of the innovations e and the prior precision, respectively³:

\mu_A = \Sigma_A \nu_E \tilde{y}^T y
\Sigma_A = (\nu_E \tilde{y}^T \tilde{y} + \nu_A I)^{-1}    (8)

Equation 7 uses the posterior mean of the coefficients to provide the cross-spectral density features.
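To fix ideas, the following ordinary-least-squares stand-in mirrors Eqs. 6 and 7. It is a sketch only: SPM's spectral toolbox uses the variational Bayesian estimator of Eq. 8, of which OLS is the limiting case under flat priors.

```python
# Sketch: least-squares VAR(p) fit (Eq. 6) and its spectral transform (Eq. 7).
# OLS stands in here for the variational Bayes estimator used by SPM.
import numpy as np

def fit_var(y, p):
    """y: (T, c) channel data. Returns A: (p, c, c) coefficients and the
    innovation covariance E: (c, c)."""
    T, c = y.shape
    Y = y[p:]                                                  # targets
    X = np.hstack([y[p - k:T - k] for k in range(1, p + 1)])   # lagged data
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)               # (p*c, c)
    A = beta.reshape(p, c, c).transpose(0, 2, 1)  # A[k][i, j]: lag-k, j -> i
    resid = Y - X @ beta
    E = resid.T @ resid / (T - p)
    return A, E

def var_csd(A, E, omega):
    """Cross-spectral density g(omega) = H E H* with
    H(omega) = (I - sum_k A_k exp(-j*k*omega))^-1; omega in rad/sample."""
    p, c, _ = A.shape
    g = np.zeros((len(omega), c, c), dtype=complex)
    for w, om in enumerate(omega):
        S = np.eye(c, dtype=complex)
        for k in range(1, p + 1):
            S -= A[k - 1] * np.exp(-1j * k * om)
        H = np.linalg.inv(S)
        g[w] = H @ E @ H.conj().T
    return g
```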

Alternatively, non-parametric methods could be used to quantify the cross-spectral density; e.g., a fast Fourier transform (FFT). The advantage of our parametric approach is its structural equivalence to the generative model itself: We use uninformative priors but place formal constraints on the estimation of cross-spectral density through the order p of the VAR-model. This has important regularising properties when estimating the spectral features. Principled constraints on the order are furnished by the DCM above and follow from the fact that the order of the underlying VAR process is prescribed by the number of hidden neuronal states in the DCM. Heuristically, if one considers a single source, the evolution of its hidden states can be expressed as a p-variate VAR(1) process

x_n = e^{J\Delta t}\, x_{n-1} + \eta_n    (9)

where η(t) corresponds to exogenous input convolved with the system's kernel. Alternatively, we can represent this process with a univariate AR(p) process on a single state. Because there is a bijective mapping between source activity and measurement space, the multivariate data can be represented as a VAR(p) process. We provide a formal argument in Appendix II for interested readers.

The number of hidden states per source is twelve (see Figure 1) and this places an upper bound on the order of the VAR model⁴. The relationship between the VAR model order and the number of hidden states can be illustrated in terms of the log-evidence ln p(y | p) for VAR models with different orders: We convolved a mixture of pink and white noise innovations with the DCM's first-order kernel (using the prior expectations) and used these synthetic data to invert a series of VAR models of increasing order. Figure 2 shows the ensuing model evidence jumps to a high value when the order reaches twelve, with smaller increases thereafter.

Fig. 2.

The log-evidence for different order VAR models. The variational Bayes approach described in the text provides the log model evidence for different VAR model orders. This analysis illustrates a large increase in model evidence up to order twelve (black) and small increases thereafter (grey). This increase in evidence occurs at an order that is equal to the number of poles of the DCM's transfer function (see Appendix II).
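This order search can be emulated cheaply. The sketch below (reusing fit_var from the VAR sketch above) scores orders with BIC as a crude surrogate for the variational log-evidence; it illustrates the logic of the search, not the scheme of Roberts and Penny (2002).

```python
# Sketch: score VAR orders with BIC as a surrogate for the variational
# log-evidence; one would expect a large jump near the number of hidden states.
import numpy as np

def var_bic(y, p):
    T, c = y.shape
    A, E = fit_var(y, p)                 # from the VAR sketch above
    n = T - p
    loglik = -0.5 * n * (np.linalg.slogdet(E)[1] + c * np.log(2 * np.pi) + c)
    return loglik - 0.5 * (p * c * c) * np.log(n)  # penalise p*c^2 coefficients

# scores = [var_bic(y, p) for p in range(1, 16)]
```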

Model inversion and inference

Model inversion means estimating the conditional density of the unknown model parameters p(Inline graphic| gc,m) given the VAR-based cross-spectral density features gc for any model m defined by the network architecture and priors on the parameters, p(Inline graphic|m). These unknown parameters include (i) the biophysical parameters of the neural-mass model, (ii) parameters controlling the spectral density of the neuronal innovations and channel noise, (iii) gain parameters and (iv) hyperparameters controlling the amplitude of the observation error in Eqn. 5. The model is inverted using standard variational approaches described in previous publications and summarised in Friston et al (2007). These procedures use a variational scheme in which the conditional density is optimized under a fixed-form (Laplace) assumption. This optimisation entails maximising a free-energy bound on the log-evidence, ln p(gc|m). Once optimised, this bound can be used as an approximate log-evidence for model comparison in the usual way. Comparing DCMs in a way that is independent of their parameters is useful when trying to identify the most plausible architectures subtending observed responses (Penny et al 2004; Stephan et al 2007) and is used extensively in subsequent sections. The focus of this paper is on the approximate log-evidence ln p(gc|m) and conditional densities p(Inline graphic| gc,m) and, in particular, whether they can support robust inferences on neural-mass models and their parameters.

IDENTIFIABILITY AND FACE VALIDITY

In this section, we try to establish the face-validity of the DCM and inversion scheme described in the previous section. Here, we use synthetic datasets generated by models with known parameters. We then try to recover the best model and its parameters, after adding noise to the data. We will address both inference on models and their parameters. This involves searching over a space or set of models to find the model with the greatest evidence. One then usually proceeds by characterising the parameters of the best model in terms of their conditional density. In both inference on models and parameters, we used the same model employed to analyse the empirical data of the next section. This enabled us to relate the empirical results to the simulations presented below.

Inference on model-space

For inference on models, we generated data from three two-source networks using extrinsic connections from the first to the second source, from the second to the first, and reciprocal connections. To assess inference on model-space, we used each of the three models as a forward model of the three model-specific data sets. We hoped to show that the inversion scheme could identify the correct model in all three cases. In all three models exogenous neuronal inputs entered both sources and the connections were all of the forward type. These three models are also evaluated in the empirical analysis. The parameter values for all three models were set to their prior expectations⁵, with the exception of the extrinsic connections, for which we used the conditional estimates of the empirical analysis. Data were generated over frequencies from 4 to 48 Hz and observation noise was added (after the square root transform). The variance of this noise corresponded to the conditional estimate of the error variance from the empirical analysis.

The resulting three data sets were then inverted using each of the three models. For each data set, this provided three log-evidences (one for each model used to fit the spectral data). We normalised these to the log-evidence of the weakest model to produce log-likelihood ratios or log-Bayes factors. The results for the three models are shown in Table 2a. These indicate that, under this level of noise, DCM was able to identify the model that actually generated the data. In terms of inference on model-space, we computed the posterior probability of each model by assuming flat or uniform priors on models; under this assumption p(m_i | y) ∝ p(y | m_i), which means we can normalise the evidences for each model, given one data set, and interpret the result as the conditional probability on models. These are expressed as percentages in Table 2b and show that we can be almost certain that the correct model will be selected. In summary, Bayesian model comparison with DCM seems able to identify these sorts of models with a high degree of confidence, with conditional probabilities close to one for correct models and close to zero for incorrect models.

Table 2a.

Inference on model space: Results of the Bayesian inversion on data simulated using three different network architectures (column-wise). Log-Bayes factors are presented relative to the worst model for each network. Best performing models are in bold. For all three simulations, the corresponding model-architecture was found to have the highest Log-Bayes factor.

| Modelled connections \ Simulated network connections | A^F_{2,1} | A^F_{1,2} | A^F_{2,1} and A^F_{1,2} |
| --- | --- | --- | --- |
| A^F_{2,1} | **416.6** | 0 | 0 |
| A^F_{1,2} | 0 | **399.2** | 0.5 |
| A^F_{2,1} and A^F_{1,2} | 398.4 | 381.6 | **561.2** |

Table 2b.

Inference on model space: Posterior probabilities of each model are computed by assuming flat or uniform priors on models; normalising these values gives the conditional probability of the models presented here as percentages.

| Modelled connections \ Simulated connections | A^F_{2,1} | A^F_{1,2} | A^F_{2,1} and A^F_{1,2} |
| --- | --- | --- | --- |
| A^F_{2,1} | 100% | 0% | 0% |
| A^F_{1,2} | 0% | 100% | 0% |
| A^F_{2,1} and A^F_{1,2} | 0% | 0% | 100% |
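The arithmetic behind Table 2b is worth making explicit: under flat model priors, posterior model probabilities are a softmax of the (approximate) log-evidences. A minimal sketch:

```python
# Sketch: posterior model probabilities from log-evidences under flat priors.
import numpy as np

def model_posteriors(log_evidence):
    F = np.asarray(log_evidence, dtype=float)
    q = np.exp(F - F.max())              # subtract max for numerical stability
    return q / q.sum()

# First column of Table 2a (relative log-evidences): the winning model takes
# essentially all the posterior mass, as reported in Table 2b.
print(model_posteriors([416.6, 0.0, 398.4]))   # ~[1, 0, 0]
```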

Inference on parameter-space

For inference on parameters, we looked at the effects of changing the maximum amplitudes of excitatory postsynaptic potentials (EPSP), which control the efficacy of intrinsic and extrinsic connections, and the effects of changing the extrinsic connections themselves. These effects are encoded in the parameters H_e and A^F, respectively. We addressed identifiability by inverting a single model using synthetic data with different levels of noise. By comparing the true parameter values to the conditional confidence intervals, under different levels of noise, we tried to establish the accuracy of model inversion and how this depends upon the quality of the data. As above, we chose different levels of noise based upon the error variance estimated using real data. Specifically, we varied the noise levels from 0.001 to 2 times the empirical noise variance, allowing a broad exploration of relative signal-to-noise ratios (SNR).

The model we used is the same model identified by the empirical analyses of the next section. This model comprised two sources and two LFP channels with no cross-talk between the channels. The parameter values were based on the estimates from the empirical analysis. Specifically, source 1 sent a strong extrinsic connection to source 2, whose excitatory cells had a relatively low postsynaptic response (Figure 3). All parameter values were set to their prior expectation, except for the parameters of interest He(2) and A21F.

Fig. 3.

Simulated two source model where excitatory responses are modulated via a scaling of an intrinsic maximum EPSP parameter in source 2: He(2) and an extrinsic connection from source 1 to source 2: A21F. The inversion scheme was tested by recovering the posterior estimates of these parameters, under different levels of observation noise (see Figure 4).

In our DCM, parameters are optimised by multiplying their prior expectation with an unknown scale parameter that is exponentiated to ensure positivity. Hence, a log-scale parameter of zero corresponds to a scale-parameter of one, which renders the parameter value equal to its prior expectation. By imposing Gaussian priors on the log-scale parameters, we place log-normal priors on the parameters per se. To model reduced postsynaptic amplitudes in source 2, He(2) had a log-scale parameter of −0.4, scaling it to exp(−0.4) ≈ 67% of its prior expectation. The log-scale parameter encoding the forward connection from source 1 to source 2, namely A2,1F, was set to 1.5, scaling it to exp(1.5) ≈ 448% of its prior expectation. Both sources received identical neuronal innovations, comprising white and pink spectral components (as specified in Equation 2 above). Data were generated over frequencies from 4 to 48 Hz.
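The scaling arithmetic is simple enough to spell out; a sketch, using the values from this simulation:

```python
# Sketch of the log-normal parameterisation: a parameter is its prior
# expectation scaled by the exponentiated log-scale parameter Theta.
import numpy as np

def scale_parameter(prior_mean, theta):
    return prior_mean * np.exp(theta)    # theta = 0 recovers the prior mean

print(scale_parameter(8.0, -0.4))    # He(2): 8 mV -> ~5.4 mV (67% of prior)
print(scale_parameter(32.0, 1.5))    # A21F: 32 -> ~143 (448% of prior)
```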

Posterior density estimates for all parameters, p(Inline graphic| gc,m) were obtained for 128 intermediate noise levels between one thousandth and twice the empirical noise variance. The conditional expectation or MAP (maximum a posteriori) estimates of He(2) and A2,1F are shown in Figure 4 (hashed red line). The (constant) true parameter values are indicated by the solid red line, and the prior value is in grey. The shaded areas correspond to the 90% confidence intervals based on the conditional or posterior density. The lower panels show the conditional probabilities p(He(2)<8) and p(A2,1F>32) that the parameters differed from their prior expectations.

Fig. 4.

Conditional densities of parameter estimates using the two-source simulations. The data were generated under known parameter values (red line) and mixed with noise (one thousandth to twice the empirical noise estimate). The EPSP parameter (Top left) was exp(−0.4) = 67% of its prior expectation. The MAP estimates for this log-scale parameter (plotted in hashed red) display a characteristic shrinkage toward the prior of zero at high levels of noise (90% confidence intervals are plotted in grey). The extrinsic connection parameter (Top right) A21F displays a similar behaviour, when simulated at exp(1.5) = 448% of its prior expectation. The grey lines show the prior value (of zero) used for the simulations. The bottom graphs show the conditional probabilities that the MAP estimates of the log-scale parameters differ from their prior expectation.

It can be seen that the conditional expectation remained close to the true values for both parameters, despite differences in their conditional precision, which decreased with increasing levels of observation noise. This can be seen in the shrinking Bayesian confidence intervals (grey area) and in the small increase in conditional probabilities with less noise. This effect is more marked for the estimates of He(2), where the confidence intervals splay at higher noise levels. The jagged variation in the confidence interval itself reflects the simulation protocol, in which each data set comprised a different noise realisation. In addition, the lowest conditional probability (that the posterior estimate differed from the prior) across all simulations occurred for this EPSP parameter, where p(He(2)<8) = .74 at a high noise level of 1.83. In contrast, the connection strength parameter remained within tight confidence bounds for all noise levels and produced a minimum conditional probability, p(A2,1F>32) = .99. This minimum occurred, again as expected, at a high noise level of 1.72 times the empirical noise variance. One can also see, for both parameters, a trend for conditional estimates to shrink towards the prior values at higher noise levels; this shrinkage is typical of Bayesian estimators: when data become noisy, the estimation relies more heavily upon priors and the prior expectation is given more weight (Friston et al 2003). Importantly, while the 90% confidence bounds generally encompass the true values, the prior values remain outside. In summary, under the realistic levels of noise considered, it appears possible to recover veridical parameter estimates and be fairly confident that these estimates differ from their prior expectations.

EMPIRICAL DEMONSTRATION

In this section, we present a similar analysis to that of the previous section but using real data. Furthermore, to pursue construct-validity, we invert the model using data acquired under different experimental conditions to see if the conditional estimates of various synaptic parameters change in a way that is consistent with previous analyses of functional connectivity using cross-correlograms. These analyses suggest an increase in coupling between the amygdala and hippocampus that is expressed predominantly in the theta range. This section considers the empirical data set-up, experimental design and inference on models and parameters. We interpret the conditional estimates of the parameters, in relation to the underlying physiology, in the Discussion.

Empirical LFP data

Local field potential data were acquired from mice (adult male C57BL/6J mice, 10 to 12 weeks old) during retrieval of a fear-memory, learned in a Pavlovian conditioning paradigm using acoustic tones (CS+ and CS−) and foot-shock (US). A previous analysis of these data (Seidenbecher et al 2003) points to the importance of theta rhythms (∼5Hz) during fear-memory retrieval (Pape and Stork, 2003; Buzsaki, 2002). Specifically, Seidenbecher et al (2003) demonstrated an increase in theta-band coupling between area CA1 of the hippocampus and the lateral nucleus of the amygdala (LA) during presentation of the CS+. Moreover, theta synchrony onset was correlated with freezing, a behavioural index of fear-memory (Maren et al 1997). For the purposes of demonstrating our DCM, we here revisit the data of a single animal and show that this 'on/off' theta synchrony can be explained with plausible neurobiological mechanisms at the synaptic level, using the methodology described in the previous sections.

LFP data were recorded from two electrodes in the LA and the CA1 of the dorsal hippocampus. The data comprised six minutes of recording, during which four consecutive CS− tones and four consecutive CS+ tones were presented, each lasting ten seconds. Freezing behaviour was seen prominently during the CS+. Preliminary analysis, using time-frequency spectrograms, revealed that the hippocampal region exhibited strong background theta rhythms, during CS+ and CS− epochs (Fig. 5a and b); whereas theta activity in lateral amygdala was prominent only during the CS+ stimulus. Figure 5 displays the first CS+ and CS− epochs of fear recall. Cross-spectra were computed for three-second epochs that followed the onset of freezing behaviour in the four CS+ epochs and order-time matched CS− epochs. Cross-spectral densities were computed from 4 to 48 Hz, using an eighth-order VAR model, for each epoch and averaged across conditions (Figure 6). This revealed spectral features that corroborated the analysis of Seidenbecher et al (2003); with pronounced fast theta activity in the hippocampus and a marked theta peak in the cross-spectral density. The amygdala showed a broader spectrum, with a preponderance of lower theta activity and a theta peak in, and only in, the CS+ trial.

Fig. 5.

CS+ (Left) and CS− (Right) spectrograms. Time-frequency data demonstrating theta activity at hippocampal (Top) and amygdala (Bottom) electrodes during the CS+ and CS−. These plots are scaled relative to the maximum theta peak in the CS+ hippocampal image. They are displayed with corresponding behavioural modes represented as colour-bars, where 'f' marks freezing periods (the behavioural correlate of fear recall), 'e' exploration, 'r' risk assessment and 's' stereotypical behaviour. During the CS+ condition, theta activity can be observed in both electrodes; in contrast, during the CS− condition, theta activity is evident in hippocampal data but much less so in the amygdala.

Fig. 6.

Average cross-spectral densities across all CS+ and CS− trials. Top left: hippocampal autospectrum, Top right: hippocampal-amygdala cross spectrum, Bottom right: amygdala autospectrum. These spectral data features were evaluated from three second epochs after the first freezing behaviour during CS+ and the time/order matched CS− trials. Peaks at theta frequency are evident in both CS+ and CS− conditions with reduced theta activity in the amygdala during CS−.

Dynamic causal modelling

These cross-spectral densities were then inverted using a series of generative models. These models were used to test the direction of information flow during heightened theta synchrony following CS+. Given key experimental differences between CS− and CS+ trials, we introduced log-scale parameters βki to model trial-specific variations in specified parameters:

\vartheta_i^j = \vartheta_i \exp\left( \sum_k X_{jk}\, \beta_{ki} \right), \qquad X = \begin{bmatrix} 0 \\ 1 \end{bmatrix}    (10)

β_{ki} is the k-th experimental effect on the i-th parameter and ϑ_i^j is the value of the i-th parameter ϑ_i in the j-th trial or condition. These effects are mediated by an experimental design matrix X, which encodes how experimental effects are expressed in each trial.

Equation 10 is a generic device that we use to specify fully parameterised experimental effects on specific parameters in multi-trial designs; a numerical sketch follows. In this example, β1i is simply a log-scale parameter (Table 1) specifying the increase (or decrease) in CS+ relative to CS− trials. The parameters showing trial-specific effects were the extrinsic connections and excitatory postsynaptic amplitudes; all other parameters were fixed over trials.
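The sketch below renders Eq. 10 for this two-condition design. The baseline values are the prior means for forward connections from Table 1; the effect sizes are illustrative numbers only.

```python
# Sketch of Eq. 10: condition-specific parameter values obtained by scaling
# baseline parameters with exponentiated experimental effects via a design
# matrix X (rows: CS- and CS+; one experimental effect).
import numpy as np

def trial_parameters(theta, beta, X):
    """theta: (n,) baseline parameters; beta: (k, n) effects; X: (j, k) design.
    Returns a (j, n) array of condition-specific parameter values."""
    return theta[None, :] * np.exp(X @ beta)

X = np.array([[0.0], [1.0]])             # CS- row: no effect; CS+ row: effect
theta = np.array([32.0, 16.0])           # e.g. two extrinsic connections
beta = np.array([[0.23, -1.27]])         # illustrative log-scale effects
print(trial_parameters(theta, beta, X))  # CS- row unchanged; CS+ row scaled
```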

Inference on models

The extrinsic connection types in our DCM are based on connections between isocortical areas (Felleman & Van Essen 1991); however, in this analysis we are dealing with allocortical (CA1) and subcortical (LA) brain regions that have no clearly defined hierarchical relationship. Therefore, our first step was to establish which connection type best explained the measured LFP data. We approached this with Bayesian model comparison, using DCMs with reciprocal connections between CA1 and LA. The connections in these models were (model 1) forward; (model 2) backward; (model 3) lateral; (model 4) a combination of forward and backward; and (model 5) a combination of all three. Bayesian model comparison based on the log-evidence indicated that the most likely type of inter-regional connections was of the 'forward' type (model 1), where connections originate from pyramidal cells and target excitatory interneurons. Figure 7a shows the relative model evidences for the five models (i.e., the log-Bayes factor with respect to the worst model).

Fig. 7.

Results of the Bayesian model comparison. Log Bayes factors are plotted relative to the worst model in each comparison. (a) Optimal connection type is found in Model 1, where the connections are of the ‘forward’ type. (b) Model evidence supports Model 1, where exogenous inputs enter both the hippocampus and amygdala. (c) Model evidences suggest reciprocal connections between the hippocampus and amygdala.

Next, employing the optimal connection type, three different input schemes were tested to find where driving inputs (i.e., from cortical regions) enter during CS+ and CS− epochs. These DCMs comprised (model 1) exogenous inputs to both CA1 and LA; (model 2) exogenous input to hippocampal region CA1 only; and (model 3) exogenous input to the lateral amygdala only. Figure 7b shows that the best model is model 1, where inputs enter both the lateral amygdala and hippocampal CA1.

Having established a causal architecture for the inputs, three further models were tested to examine whether connections were bidirectional or unidirectional. These results are displayed in Figure 7c, where model 1 had bidirectional connections, model 2 had unidirectional hippocampal to amygdala connections and model 3 had connections from amygdala to hippocampus. We see that the most plausible model contains bidirectional connections between hippocampus and amygdala.

This series of model searches can be regarded as a heuristic search over model space to identify the most likely model; clearly, the combinations of connection types and architectures entail a very large model space. Effectively, we finessed the search of this space using a top-down strategy, optimising various model attributes, starting with complex models and removing connections to identify the best. The accuracy of this model was impressive; the fits to the cross-spectral data are shown in Figure 8 and are almost indistinguishable from the observed spectra. Having identified this model, we now turn to inference on its parameters.

Fig. 8.

Model fits for all empirical data. Top left: hippocampal autospectrum, Top right: hippocampal-amygdala cross spectrum, Bottom right: amygdala autospectrum. The measured spectra are shown with a dashed line and the conditional model predictions with a full line.

Inference on parameters

We now look at the conditional probabilities of key parameters showing trial-specific or conditioning effects, under the most plausible model. These parameters were the extrinsic connection strengths and intrinsic postsynaptic efficacies. When comparing the CS− and CS+ trials, we observe decreased amygdala-hippocampal connectivity and increased hippocampal-amygdala connectivity. Figure 9 shows the MAP estimates of β1i, which scale the extrinsic connections relative to 100% connectivity in CS−. In addition, there were small increases in postsynaptic efficacy in the amygdala for the CS+ relative to CS−. Quantitatively, hippocampus-amygdala connectivity increased by 26%, with a conditional probability of 99.97% that this effect was greater than zero. In contrast, amygdala-hippocampus forward connections decreased by 72%, with a conditional probability of almost one. The relative change of intrinsic amygdala excitatory postsynaptic amplitude was 8%, with a high conditional probability (99.85%) that the increase was greater than zero. In contrast, changes in hippocampal excitatory postsynaptic amplitude were unremarkable (0.002%), with a conditional probability that was close to chance (69.70%).

Fig. 9.

Trial-specific effects encoding differences in the CS+, relative to CS−, trials. Top left: Hippocampal EPSP displays <1% change on CS+ trials. Top right: Amygdala to hippocampus forward connection strength decreases by 72% on CS+ trials. Bottom left: Hippocampus to amygdala forward connection strength increases by 26% on CS+ trials. Bottom right: Amygdala EPSP increases by 8% in CS+ relative to CS− trials.

In summary, these results suggest that the hippocampus and amygdala influence each other through bidirectional connections. Steady-state responses induced by CS+, relative to CS− stimuli, appear to increase the intrinsic sensitivity of postsynaptic responses in the amygdala, with an additional sensitisation to extrinsic afferents from the hippocampus. At the same time, the reciprocal influence of the amygdala on the hippocampus is suppressed. These conclusions are consistent with earlier hypotheses based on correlations (see below).

DISCUSSION

We have described a dynamic causal model (DCM) of steady-state responses that are summarised in terms of cross-spectral densities. These spectral data-features are generated by a biologically plausible, neural-mass model of coupled electromagnetic sources. Under linearity and stationarity assumptions, inversion of the DCM provides conditional probabilities on both the models and the synaptic parameters of any particular model. This scheme enables inference about synaptic physiology and changes induced by pharmacological or behavioural manipulations, using the cross-spectral density of invasive or non-invasive electrophysiological recordings.

Usually, in Dynamic Causal Modelling, data prediction involves the integration of a dynamical system to produce a time-series. In the current application, the prediction is over frequencies; however, the form of the inversion remains exactly the same. This is because in DCM for deterministic systems (i.e., models with no system or state noise) the time-series prediction is treated as a finite-length static observation, which is replaced here with a prediction over frequencies. The only difference between DCM for time-series and DCM for cross-spectral density is that the data-features are represented by a three dimensional array, covering c×c channels and b frequency-bins. In conventional time-series analysis the data-features correspond to a two-dimensional array covering c channels and b time-bins.

Our simulation studies provide some face-validity for DCM, in terms of internal consistency. DCM was able to identify the correct model and, under one model, parameter values were recovered reliably, even with high observation noise. Changes in postsynaptic responsiveness, encoded by the population maximum EPSP, were estimated veridically, with a conditional confidence of more than 74% that they lay below their prior expectation, even at the highest levels of noise. Similarly, inter-area connection strength estimates were reasonably accurate under high levels of noise. With noisy data, parameter estimates tend to shrink towards their prior expectation, reflecting the relative weighting of prior and data information in Bayesian schemes.

We have presented an analysis of empirical LFP data, obtained by invasive recordings in mouse CA1 and LA during a fear conditioning paradigm. A previous analysis of these data (Seidenbecher et al 2003) showed prominent theta band activity in CA1 during both CS+ and CS− conditions, whereas LA expresses significant theta activity during CS+ trials only. Using an analysis of functional connectivity⁶, based on cross-correlograms of LA/CA1 activity in the theta range, Seidenbecher et al (2003) demonstrated an increase in connectivity between these two brain regions during CS+ trials. This is consistent with a trial-specific enabling or gating of the CA1→LA connection during retrieval of conditioned fear in the CS+ condition, leading to a transient coupling of LA responses to the condition-independent theta activity in CA1. However, this analysis of functional connectivity was unable to provide direct evidence for directed or causal interactions. This sort of evidence requires a model of effective connectivity like DCM. The DCM analysis in the present study confirmed the hypothesis based on the cross-correlogram results of Seidenbecher et al (2003). The DCM analysis showed a selective increase in CA1→LA connectivity during CS+ trials, accompanied by a decrease in LA→CA1 connection strength. An additional finding was the increase in the amplitude of postsynaptic responses in LA during CS+ trials. This result may represent the correlate of long term potentiation of LA neurons following fear conditioning (Rodrigues et al 2004; LeDoux, 2000). In summary, one could consider these results as a demonstration of construct validity for DCM, in relation to the previous analyses of functional connectivity using cross-correlograms.

The analysis of parameter estimates was performed only after Bayesian model selection. In the search for an optimum model, we asked (i) which connection type was most plausible, (ii) whether neuronal inputs drive CA1, LA or both regions; and (iii) which extrinsic connectivity pattern was most likely to have generated the observed data (directed CA1→LA or LA→CA1 or reciprocal connections). The results of sequential model comparisons showed that there was very strong evidence for a model in which (i) extrinsic connections targeted excitatory neurons, (ii) neuronal inputs drove both CA1 and LA and (iii) the two regions were linked by reciprocal connections. While there is, to our knowledge, no decisive empirical data concerning the first two issues, the last conclusion from our model comparisons is supported strongly by neuroanatomic data from tract-tracing studies. These have demonstrated prominent and reciprocal connections between CA1 and LA (see Pitkänen et al 2000 for a review). This correspondence between neuroanatomic findings and our model structure, which was inferred from the LFP data, provides further construct validity, in relation to neuroanatomy.

In conclusion, this study has introduced a novel variant of DCM that provides mechanistic explanations, at the level of synaptic physiology, for the cross-spectral density of invasive (LFP) or non-invasive (EEG) electrophysiological recordings. We have demonstrated how this approach can be used to investigate hypotheses about directed interactions among brain regions that cannot be addressed by conventional analyses of functional connectivity. A previous (single-source) DCM study (Moran et al 2008) of invasive LFP recordings in rats demonstrated the consistency of model parameter estimates with concurrent microdialysis measurements. The current study is another step towards establishing the validity of models, which we hope will be useful for deciphering the neurophysiological mechanisms that underlie pharmacological effects and pathophysiological processes (Stephan et al 2006).

Acknowledgments

The Wellcome Trust funded this work. Rosalyn Moran was funded by an Award from the Max Planck Society to RJD. We would like to thank Marcia Bennett for invaluable help preparing this manuscript.

Appendix I: Laplace Description of Cross-spectral Density

Consider the state-space model for a particular neuronal source

\dot{x} = A x + B u
y = C x + D u

where A is the state transition matrix or Jacobian, x are the hidden states (cf. Equation 1) and y is the source output. The Laplace transform gives

sX(s) = AX(s) + BU(s)
Y(s) = CX(s) + DU(s)
\Rightarrow X(s) = (sI - A)^{-1} B\, U(s)
Y(s) = \left( C (sI - A)^{-1} B + D \right) U(s) = H(s)\, U(s)    (AI.1)

Evaluating at s = jω gives the frequency response of the system. Given that the cross-spectrum for two signals i and j is defined as S_{ij} = Y_i Y_j^* and that inputs to the system are seen by both sources, we can write the output cross-spectral density as

S_{ij} = H_i H_j^* |U|^2    (AI.2)

where Hi is computed from the transition matrices of each source directly. Furthermore, assuming white noise input we see from

y(t) = \mathcal{F}^{-1}(H(j\omega)) * \mathcal{F}^{-1}(U(j\omega)), \qquad \mathcal{F}^{-1}(U(j\omega)) = \delta(t)    (AI.3)

that H_i are the Fourier transforms of the impulse responses. In our model, we supplement the white input with a pink (1/f) component to render it biologically plausible. We can now see directly how the cross-spectral densities in Eqn. AI.2 and Equation 3 are equivalent, in terms of the system's response to a unit impulse.

Appendix II: VAR model order selection from the number of Hidden States

Consider the discrete-time signal described by the difference equation

y(t) = -a_1 y(t-1) - a_2 y(t-2) - \cdots - a_p y(t-p) + \varepsilon    (AII.1)

The Laplace transform of a sampled signal is known as the Z-transform

\mathcal{L}(y(t)) = \sum_{n=0}^{\infty} y[n] \int_0^{\infty} \delta(t - nT)\, e^{-st}\, dt \;\Rightarrow\; Y(z) = \sum_{n=0}^{\infty} y[n]\, e^{-nsT}    (AII.2)

For the AR model of AII.1 we obtain a Z domain representation

Y(z) = -a_1 z^{-1} Y(z) - a_2 z^{-2} Y(z) - \cdots - a_p z^{-p} Y(z) + \varepsilon(z)    (AII.3)

Now consider again the state-space form of each source in Equation AI.1. We see that the form of H(s) is a polynomial quotient, where the denominator is the characteristic polynomial of the Jacobian A. This contains powers of s up to the number of columns in A, indexed by the number of hidden states; i.e. the length of vector x. Hence, for q roots by partial fraction expansion we obtain

H(s) = \frac{A}{s - \lambda_1} + \frac{B}{s - \lambda_2} + \cdots + \frac{K}{s - \lambda_q}    (AII.4)

Using the s-z relation, which maps each continuous-time pole 1/(s + β) to a discrete-time pole 1/(1 − e^{−βT} z^{−1}), the order p of the AR model is determined by the number of roots q of the Jacobian, giving the delay z^{−p} in Equation AII.3.

Footnotes

1

In our work, we use an AR(1) autoregression model of errors over frequencies, with an AR coefficient of one half and ensure that the error covariance components associated with the cross-spectral density between channels i and j are the same as the corresponding component for the cross-spectral density between channels j and i.

2
For computational expediency, if there are more than eight channels, we project the data and predictions onto an eight-dimensional subspace defined by the principal components of the prior covariance matrix in channel space, \sum_i L_{\theta_i} \sigma_i^2 L_{\theta_i}^T, where L_{\theta_i} = \partial L/\partial \theta_i and \sigma_i^2 is the prior variance of the i-th spatial or gain parameter.
3

The matrices ỹ in Equation 8 comprise the time-lagged data.

4

In practice, we do not use the upper bound but use p = 8 for computational expediency; this seems to give robust and smooth spectral features.

5

These expectations are biologically plausible amplitudes and rate constants that have been used in previous instances of the model (Jansen et al 1993; David et al 2005) and are summarized in Moran et al 2007 and Table 1. In this study, prior variances on the intrinsic connectivity parameters were set to zero.

6

Functional connectivity is defined as the statistical dependence between two biophysical time-series, whereas effective connectivity refers to the directed and causal influence one biophysical system exerts over another (Friston et al 2003).

Software note

Matlab routines and demonstrations of the inversion described in this paper are available as academic freeware from the SPM website (http://www.fil.ion.ucl.ac.uk/spm) and will be found under the 'api_erp', 'spectral' and 'Neural_Models' toolboxes in SPM8.

References

1. Buzsaki G. Theta oscillations in the hippocampus. Neuron. 2002;33(3):325-340. doi:10.1016/s0896-6273(02)00586-x.
2. David O, Harrison L, Friston KJ. Modelling event-related responses in the brain. NeuroImage. 2005;25(3):756-770. doi:10.1016/j.neuroimage.2004.12.030.
3. David O, Friston KJ. A neural-mass model for MEG/EEG: coupling and neuronal dynamics. NeuroImage. 2003;20(3):1743-1755. doi:10.1016/j.neuroimage.2003.07.015.
4. Friston KJ, Mattout J, Trujillo-Barreto T, Ashburner J, Penny W. Variational free energy and the Laplace approximation. NeuroImage. 2007;34:220-234. doi:10.1016/j.neuroimage.2006.08.035.
5. Friston KJ, Harrison L, Penny W. Dynamic causal modelling. NeuroImage. 2003;19(4):1273-1302. doi:10.1016/s1053-8119(03)00202-7.
6. Felleman DJ, Van Essen DC. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex. 1991;1(1):1-47. doi:10.1093/cercor/1.1.1-a.
7. Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biological Cybernetics. 1995;73:357-366. doi:10.1007/BF00199471.
8. Jansen BH, Zouridakis G, Brandt ME. A neurophysiologically-based mathematical model of flash visual evoked potentials. Biological Cybernetics. 1993;68(3). doi:10.1007/BF00224863.
9. Kiebel SJ, Garrido ML, Friston KJ. Dynamic causal modelling of evoked responses: the role of intrinsic connections. NeuroImage. 2007;36:332-345. doi:10.1016/j.neuroimage.2007.02.046.
10. LeDoux JE. Emotion circuits in the brain. Annu Rev Neurosci. 2000;23:155-184. doi:10.1146/annurev.neuro.23.1.155.
11. McLennan H. The effect of decortication on the excitatory amino acid sensitivity of striatal neurones. Neurosci Lett. 1980;18:313-316. doi:10.1016/0304-3940(80)90303-1.
12. Maren S, Aharonov G, Fanselow MS. Neurotoxic lesions of the dorsal hippocampus and Pavlovian fear conditioning in rats. Behavioural Brain Research. 1997;88(2):261-274. doi:10.1016/s0166-4328(97)00088-0.
13. Mattout J, Phillips C, Penny WD, Rugg MD, Friston KJ. MEG source localization under multiple constraints: an extended Bayesian framework. NeuroImage. 2005;30(3):753-767. doi:10.1016/j.neuroimage.2005.10.037.
14. Moran RJ, Kiebel SJ, Stephan KE, Reilly RB, Daunizeau J, Friston KJ. A neural-mass model of spectral responses in electrophysiology. NeuroImage. 2007;37(3):706-720. doi:10.1016/j.neuroimage.2007.05.032.
15. Moran RJ, Stephan KE, Kiebel SJ, Rombach N, O'Connor WT, Murphy KJ, Reilly RB, Friston KJ. Bayesian estimation of synaptic physiology from the spectral responses of neural masses. NeuroImage. 2008. doi:10.1016/j.neuroimage.2008.01.025. In press.
16. Pape H-C, Stork O. Genes and mechanisms in the amygdala involved in the formation of fear memory. Annals of the New York Academy of Sciences. 2003;985:92-105. doi:10.1111/j.1749-6632.2003.tb07074.x.
17. Pitkänen A, Pikkarainen M, Nurminen N, Ylinen A. Reciprocal connections between the amygdala and the hippocampal formation, perirhinal cortex, and postrhinal cortex in rat: a review. Ann N Y Acad Sci. 2000;911:369-391. doi:10.1111/j.1749-6632.2000.tb06738.x.
18. Penny WD, Stephan KE, Mechelli A, Friston KJ. Comparing dynamic causal models. NeuroImage. 2004;22(3):1157-1172. doi:10.1016/j.neuroimage.2004.03.026.
19. Robinson PA. Propagator theory of brain dynamics. Phys Rev E. 2005;72(1):011904. doi:10.1103/PhysRevE.72.011904.
20. Roberts SJ, Penny WD. Variational Bayes for generalized autoregressive models. IEEE Transactions on Signal Processing. 2002;50:2245-2257.
21. Rodrigues SM, Schafe GE, LeDoux JE. Molecular mechanisms underlying emotional learning and memory in the lateral amygdala. Neuron. 2004;44(1):75-91. doi:10.1016/j.neuron.2004.09.014.
22. Seidenbecher T, Laxmi TR, Stork O, Pape HC. Amygdalar and hippocampal theta rhythm synchronization during fear memory retrieval. Science. 2003;301:846-850. doi:10.1126/science.1085818.
23. Stephan KE, Baldeweg T, Friston KJ. Synaptic plasticity and dysconnection in schizophrenia. Biological Psychiatry. 2006;59:929-939. doi:10.1016/j.biopsych.2005.10.005.
24. Stephan KE, Weiskopf N, Drysdale PM, Robinson PA, Friston KJ. Comparing hemodynamic models with DCM. NeuroImage. 2007;38:387-401. doi:10.1016/j.neuroimage.2007.07.040.
