PLOS Computational Biology
2012 Mar 22; 8(3): e1002408. doi: 10.1371/journal.pcbi.1002408

Impact of Network Structure and Cellular Response on Spike Time Correlations

James Trousdale 1,*, Yu Hu 2, Eric Shea-Brown 2, Krešimir Josić 1,3
Editor: Olaf Sporns
PMCID: PMC3310711  PMID: 22457608

Abstract

Novel experimental techniques reveal the simultaneous activity of larger and larger numbers of neurons. As a result there is increasing interest in the structure of cooperative – or correlated – activity in neural populations, and in the possible impact of such correlations on the neural code. A fundamental theoretical challenge is to understand how the architecture of network connectivity along with the dynamical properties of single cells shape the magnitude and timescale of correlations. We provide a general approach to this problem by extending prior techniques based on linear response theory. We consider networks of general integrate-and-fire cells with arbitrary architecture, and provide explicit expressions for the approximate cross-correlation between constituent cells. These correlations depend strongly on the operating point (input mean and variance) of the neurons, even when connectivity is fixed. Moreover, the approximations admit an expansion in powers of the matrices that describe the network architecture. This expansion can be readily interpreted in terms of paths between different cells. We apply our results to large excitatory-inhibitory networks, and demonstrate first how precise balance – or lack thereof – between the strengths and timescales of excitatory and inhibitory synapses is reflected in the overall correlation structure of the network. We then derive explicit expressions for the average correlation structure in randomly connected networks. These expressions help to identify the important factors that shape coordinated neural activity in such networks.

Author Summary

Is neural activity more than the sum of its individual parts? What is the impact of cooperative, or correlated, spiking among multiple cells? We can start addressing these questions, as rapid advances in experimental techniques allow simultaneous recordings from ever-increasing populations. However, we still lack a general understanding of the origin and consequences of the joint activity that is revealed. The challenge is compounded by the fact that both the intrinsic dynamics of single cells and the correlations among them vary depending on the overall state of the network. Here, we develop a toolbox that addresses this issue. Specifically, we show how linear response theory allows for the expression of correlations explicitly in terms of the underlying network connectivity and known single-cell properties – and that the predictions of this theory accurately match simulations of a touchstone, nonlinear model in computational neuroscience, the general integrate-and-fire cell. Thus, our theory should help unlock the relationship between network architecture, single-cell dynamics, and correlated activity in diverse neural circuits.

Introduction

New multielectrode and imaging techniques are revealing the simultaneous activity of neural ensembles and, in some cases, entire neural populations [1]–[4]. This has thrust upon the computational biology community the challenge of characterizing a potentially complex set of interactions – or correlations – among pairs and groups of neurons.

Beyond important and rich challenges for statistical modeling [5], the emerging data promises new perspectives on the neural encoding of information [6]. The structure of correlations in the activity of neuronal populations is of central importance in understanding the neural code [7]–[13]. However, theoretical [9]–[11], [14]–[16], and empirical studies [17]–[19] do not provide a consistent set of general principles about the impact of correlated activity. This is largely because the presence of correlations can either strongly increase or decrease the fidelity of encoded information depending on both the structure of correlations across a population and how their impact is assessed.

A basic mechanistic question underlies the investigation of the role of collective activity in coding and signal transmission: How do single-cell dynamics, connection architecture, and synaptic dynamics combine to determine patterns of network activity? Systematic answers to this question would allow us to predict how empirical data from one class of stimuli will generalize to other stimulus classes and recording sites. Moreover, a mechanistic understanding of the origin of correlations, and knowledge of the patterns we can expect to see under different assumptions about the underlying networks, will help resolve recent controversies about the strength and pattern of correlations in mammalian cortex [1], [20], [21]. Finally, understanding the origin of correlations will inform the more ambitious aim of inferring properties of network architecture from observed patterns of activity [22]–[24].

Here, we examine the link between network properties and correlated activity. We develop a theoretical framework that accurately predicts the structure of correlated spiking that emerges in a widely used model – recurrent networks of general integrate-and-fire cells. The theory naturally captures the role of single cell and synaptic dynamics in shaping the magnitude and timescale of spiking correlations. We focus on the exponential integrate-and-fire model, which has been shown to capture membrane and spike responses of cortical neurons [25]; however, the general approach we take can be applied to a much broader class of neurons, a point we return to in the Discussion.

Our approach is based on an extension of linear response theory to networks [24], [26]. We start with a linear approximation of a neuron's response to an input. This approximation can be obtained explicitly for many neuron models [27]–[29], and is directly related to the spike triggered average [30]. The correlation structure of the network is then estimated using an iterative approach. As in prior work [31]–[33], the resulting expressions admit an expansion in terms of paths through the network.

We apply this theory to networks with precisely balanced inhibition and excitation in the inputs to individual cells. In this state individual cells receive a combination of excitatory and inhibitory inputs with mean values that largely cancel. We show that, when timescales and strengths of excitatory and inhibitory connections are matched, only local interactions between cells contribute to correlations. Moreover, our theory allows us to explain how correlations are altered when this precise balance is broken. In particular, we show how strengthening inhibition may synchronize the spiking activity in the network. Finally, we derive results which allow us to gain an intuitive understanding of the factors shaping average correlation structure in randomly connected networks of neurons.

Results

Our goal is to understand how the architecture of a network shapes the statistics of its activity. We show how correlations between spike trains of cells can be approximated using response characteristics of individual cells along with information about synaptic dynamics, and the structure of the network. We start by briefly reviewing linear response theory of neuronal responses [28], [34], [35], and then use it to approximate the correlation structure of a network.

Network model

To illustrate the results we consider a network of N nonlinear integrate-and-fire (IF) neurons with membrane potentials modeled by

τ_i dV_i/dt = −(V_i − E_{L,i}) + ψ_i(V_i) + μ_i + σ_i ξ_i(t) + f_i(t) + εX_i(t)   (1)

Here E_{L,i} is the leak reversal potential, and μ_i represents the mean synaptic input current from parts of the system not explicitly modeled. A spike-generating current ψ_i(V_i) may be included to emulate the rapid onset of action potentials. Unless otherwise specified, we utilize the exponential IF model (EIF), so that ψ(V) = Δ_T exp((V − V_T)/Δ_T) [25]. Cells are subject to internally induced fluctuations due to channel noise [36], and externally induced fluctuations due to inputs not explicitly modeled [37]. We model both by independent, Gaussian, white noise processes, σ_i ξ_i(t) [38]. An external signal to cell i is represented by εX_i(t).

Upon reaching a threshold V_th, an action potential is generated, and the membrane potential is reset to V_re, where it is held constant for an absolute refractory period τ_ref. The output of cell i is characterized by the times, t_{i,k}, at which its membrane potential reaches threshold, resulting in an output spike train y_i(t) = Σ_k δ(t − t_{i,k}). Synaptic interactions are modeled by delayed α-functions

J_ij(t) = W_ij (t − τ_{D,j})/τ_{S,j}^2 exp(−(t − τ_{D,j})/τ_{S,j}) Θ(t − τ_{D,j}),   (2)

where Θ denotes the Heaviside step function.

The N×N matrix J contains the synaptic kernels, while the matrix W contains the synaptic weights, and hence defines the network architecture. In particular, if g_L is the membrane conductance, the weight W_ij is proportional to the area under a post-synaptic current evoked in cell i by a spike in the presynaptic cell j and, along with the membrane and synaptic time constants, determines the area under a post-synaptic potential. W_ij = 0 represents the absence of a synaptic connection from cell j to cell i.
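To make the setup concrete, the sketch below simulates a single EIF cell of the form of Eq. (1) with a simple Euler–Maruyama step and evaluates the delayed α-function filter of Eq. (2). It is a minimal illustration only: the parameter values, and the particular scaling chosen for the noise term, are assumptions made for this example rather than the values used in the paper.

import numpy as np

# Illustrative parameters (assumed values, not those used in the paper).
tau_m, E_L, V_T, Delta_T = 20.0, -60.0, -50.0, 1.4    # ms, mV, mV, mV
V_th, V_re, tau_ref = -30.0, -65.0, 2.0               # threshold, reset (mV), refractory (ms)
mu, sigma = 8.0, 4.0                                  # mean input and noise amplitude (mV)
dt, T = 0.05, 1000.0                                  # time step and duration (ms)

def psi(V):
    # Exponential spike-generating current of the EIF model.
    return Delta_T * np.exp((V - V_T) / Delta_T)

def alpha_kernel(t, tau_s=5.0, tau_d=1.0):
    # Delayed alpha-function synaptic filter with unit area, cf. Eq. (2).
    s = t - tau_d
    return np.where(s > 0, s / tau_s**2 * np.exp(-s / tau_s), 0.0)

rng = np.random.default_rng(0)
V, last_spike, spikes = E_L, -np.inf, []
for k in range(int(T / dt)):
    t = k * dt
    if t - last_spike < tau_ref:
        V = V_re                                      # hold at reset during refractory period
        continue
    dV = (-(V - E_L) + psi(V) + mu) / tau_m           # deterministic part of Eq. (1)
    V += dV * dt + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
    if V >= V_th:                                     # threshold crossing: record a spike and reset
        spikes.append(t)
        V, last_spike = V_re, t

print(f"{len(spikes)} spikes, rate ~ {1e3 * len(spikes) / T:.1f} Hz")
tg = np.arange(0.0, 100.0, dt)
print("alpha-kernel area ~", alpha_kernel(tg).sum() * dt)   # ~1, so W_ij alone sets the PSC area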

Table 1 provides an overview of all parameters and variables.

Table 1. Notation used in the text.

Symbol Description
Inline graphic Membrane potential, membrane time constant, leak reversal potential, and noise intensity of cell Inline graphic.
Inline graphic Mean and standard deviation of the background noise for cell Inline graphic.
Inline graphic Membrane potential threshold, reset, and absolute refractory period for cells.
Inline graphic Spike generating current, soft threshold and spike shape parameters for the IF model [25].
Inline graphic Synaptic input from other cells in the network, and external input to cell Inline graphic.
Inline graphic Synaptic time constant and delay for outputs of cell Inline graphic.
Inline graphic Spike train of cell Inline graphic.
Inline graphic The Inline graphic synaptic weight, proportional to the area under a single post-synaptic current for current-based synapses.
Inline graphic The Inline graphic synaptic kernel - equals the product of the synaptic weight Inline graphic and the synaptic filter for outputs of cell Inline graphic.
Inline graphic The cross-correlation function between cells Inline graphic defined by Inline graphic.
Inline graphic Spike count for cell Inline graphic, and spike count correlation coefficient for cells Inline graphic over windows of length Inline graphic.
r_j, A_j(t), C^0_jj(τ) Stationary rate, linear response kernel, and uncoupled auto-correlation function for cell j.
Inline graphic The Inline graphic interaction kernel - describes how the firing activity of cell Inline graphic is perturbed by an input spike from cell Inline graphic. It is defined by Inline graphic.
Inline graphic The Inline graphic order approximation of the activity of cell Inline graphic in a network which accounts for directed paths through the network graph up to length Inline graphic ending at cell Inline graphic, and the cross-correlation between the Inline graphic order approximations of the activity of cells Inline graphic.
f̃(ω) The Fourier transform of the function f(t), with the convention f̃(ω) = ∫ f(t) e^{−2πiωt} dt.

Linear response of individual cells

Neuronal network models are typically described by a complex system of coupled nonlinear stochastic differential equations. Their behavior is therefore difficult to analyze directly. We will use linear response theory [28], [34], [35], [39] to approximate the cross-correlations between the outputs of neurons in a network. We first review the linear approximation to the response of a single cell. We illustrate the approach using current-based IF neurons, and explain how it can be generalized to other models in the Discussion.

The membrane potential of an IF neuron receiving input Inline graphic, with vanishing temporal average, Inline graphic, evolves according to

τ dV/dt = −(V − E_L) + ψ(V) + μ + σξ(t) + εX(t)   (3)

The time-dependent firing rate, Inline graphic, is determined by averaging the resulting spike train, Inline graphic, across different realizations of noise, Inline graphic, for fixed Inline graphic. Using linear response theory, we can approximate the firing rate by

r(t) ≈ r_0 + (A ∗ εX)(t)   (4)

where r_0 is the (stationary) firing rate when ε = 0. The linear response kernel, A(t), characterizes the firing rate response to first order in ε. A rescaling of the function A gives the spike-triggered average of the cell, to first order in input strength, and is hence equivalent to the optimal Wiener kernel in the presence of the signal X(t) [39], [40]. In Figure 1, we compare the approximate firing rate obtained from Eq. (4) to that obtained numerically from Monte Carlo simulations.
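As a usage sketch, Eq. (4) amounts to a single causal convolution, which is straightforward to evaluate on a time grid. In the snippet below both the linear response kernel A(t) and the signal εX(t) are hypothetical stand-ins (in practice A would be obtained from a Fokker–Planck calculation or estimated numerically for the EIF model); only the mechanics of the approximation are shown.

import numpy as np

dt = 0.1                                    # ms
t = np.arange(0.0, 200.0, dt)

# Hypothetical stand-ins for the quantities entering Eq. (4).
r0 = 0.010                                  # baseline rate (spikes/ms)
A = 0.002 * (t / 10.0) * np.exp(-t / 10.0)  # assumed linear response kernel A(t)
eps_X = 5.0 * np.sin(2 * np.pi * t / 50.0)  # weak external signal eps*X(t)

# Eq. (4): r(t) ~ r0 + (A * eps_X)(t), evaluated as a causal discrete convolution.
r = r0 + np.convolve(A, eps_X)[: t.size] * dt

print("peak rate modulation (spikes/ms):", np.max(np.abs(r - r0)))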

Figure 1. Illustrating Eq. (4).


(A) The input to the post-synaptic cell is a fixed spike train which is convolved with a synaptic kernel. (B) A sample voltage path for the post-synaptic cell receiving the input shown in A) in the presence of background noise. (C) Raster plot of 100 realizations of output spike trains of the post-synaptic cell. (D) The output firing rate, Inline graphic, obtained by averaging over realizations of the output spike trains in C). The rate obtained using Monte Carlo simulations (shaded in gray) matches predictions of linear response theory obtained using Eq. (4) (black).

The linear response kernel Inline graphic depends implicitly on model parameters, but is independent of the input signal, Inline graphic, when Inline graphic is small relative to the noise Inline graphic. In particular, Inline graphic is sensitive to the value of the mean input current, Inline graphic. We emphasize that the presence of the background noise, Inline graphic, in Eq. (3) is essential to the theory, as noise linearizes the transfer function that maps input to output. In addition, when applying linear response methods, there is an implicit assumption that the fluctuations of the input Inline graphic do not have a significant effect on the response properties of the cell.

Linear response in recurrent networks

The linear response kernel can be used to approximate the response of a cell to an external input. However, the situation is more complicated in a network where a neuron can affect its own activity through recurrent connections. To extend the linear response approximation to networks we follow the approach introduced by Lindner et al. [26]. Instead of using the linear response kernel to approximate the firing rate of a cell, we use it to approximate a realization of its output

y(t) ≈ y_0(t) + (A ∗ εX)(t)   (5)

Here Inline graphic represents a realization of the spike train generated by an integrate-and-fire neuron obeying Eq. (3) with Inline graphic.

Our central assumption is that a cell acts approximately as a linear filter of its inputs. Note that Eq. (5) defines a mixed point and continuous process, but averaging Inline graphic in Eq. (5) over realizations of Inline graphic leads to the approximation in Eq. (4). Hence, Eq. (5) is a natural generalization of Eq. (4) with the unperturbed output of the cell represented by the point process, Inline graphic, instead of the firing rate, Inline graphic.

We first use Eq. (5) to describe spontaneously evolving networks where Inline graphic. Equation (1) can then be rewritten as

τ_i dV_i/dt = −(V_i − E_{L,i}) + ψ_i(V_i) + μ_i^eff + σ_i ξ_i(t) + (f_i(t) − ⟨f_i⟩)   (6)

where μ_i^eff = μ_i + ⟨f_i⟩, and ⟨·⟩ represents the temporal average.

Lindner et al. used Eq. (5) as an ansatz to study the response of an all–to–all inhibitory network. They postulated that the spiking output y_i of cell i in the network can be approximated in the frequency domain by

ỹ_i(ω) = ỹ_i^0(ω) + Ã_i(ω) ( Σ_j J̃_ij(ω) ỹ_j(ω) )

where ỹ_j are the zero-mean Fourier transforms of the processes y_j, and similarly for all other quantities (see Table 1 for the Fourier transform convention). The term in parentheses is the Fourier transform of the zero-mean synaptic input, f_i − ⟨f_i⟩, in Eq. (6), and ỹ_i^0 represents a realization of the spiking output of cell i in the absence of synaptic fluctuations from the recurrent network (i.e., assuming f_i ≡ ⟨f_i⟩). In matrix form this ansatz yields a simple self-consistent approximation for the firing activities ỹ which can be solved to give

ỹ(ω) = (I − K̃(ω))^{−1} ỹ^0(ω)

where the interaction matrix K̃(ω) has entries defined by K̃_ij(ω) = Ã_i(ω) J̃_ij(ω). When averaged against its conjugate transpose, this expression yields an approximation to the full array of cross-spectra in the recurrent network:

⟨ỹ ỹ^∗⟩(ω) = (I − K̃(ω))^{−1} ⟨ỹ^0 (ỹ^0)^∗⟩(ω) (I − K̃^∗(ω))^{−1}   (7)

We next present a distinct derivation of this approximation which allows for a different interpretation of the ansatz given by Eq. (5). We iteratively build to the approximation in Eq. (7), showing how this expression for the correlation structure in a recurrent network can be obtained by taking into account paths of increasing length through the network.

We start with realizations of spike trains, Inline graphic, generated by IF neurons obeying Eq. (6) with Inline graphic. This is equivalent to considering neurons isolated from the network, with adjusted DC inputs (due to mean network interactions). Following the approximation given by Eq. (5), we use a frozen realization of all Inline graphic to find a correction to the output of each cell, with Inline graphic set to the mean-adjusted synaptic input,

εX_i(t) = Σ_j (J_ij ∗ y_j^0)(t) − ⟨ Σ_j (J_ij ∗ y_j^0) ⟩

As noted previously, the linear response kernel is sensitive to changes in the mean input current. It is therefore important to include the average synaptic input Inline graphic in the definition of the effective mean input, Inline graphic.

The input from cell Inline graphic to cell Inline graphic is filtered by the synaptic kernel Inline graphic. The linear response of cell Inline graphic to a spike in cell Inline graphic is therefore captured by the interaction kernel Inline graphic, defined above as

K_ij(t) = (A_i ∗ J_ij)(t)

The output of cell Inline graphic in response to mean-adjusted input, Inline graphic, from cell Inline graphic can be approximated to first order in input strength using the linear response correction

y_i^1(t) = y_i^0(t) + Σ_j ( K_ij ∗ (y_j^0 − r_j) )(t)   (8)

We explain how to approximate the stationary rates, Inline graphic, in the Methods.

The cross-correlation between the processes Inline graphic in Eq. (8) gives a first approximation to the cross-correlation function between the cells,

[equation]

which can be simplified to give

C^1_ij(τ) = C^0_ij(τ) + (K_ij ∗ C^0_jj)(−τ) + (K_ji ∗ C^0_ii)(τ) + Σ_k ( K^−_ik ∗ K_jk ∗ C^0_kk )(τ),   where K^−(t) = K(−t),   (9)

where we used the fact that the unperturbed processes are independent, so that C^0_jk = 0 for j ≠ k. Ostojic et al. [24] obtained an approximation closely related to Eq. (9). They first obtained the cross-correlation between a pair of neurons which either receive a common input or share a monosynaptic connection. This can be done using Eq. (4), without the need to introduce the mixed process given in Eq. (5). Ostojic et al. then implicitly assumed that correlations not due to one of these two submotifs could be disregarded. The correlation between pairs of cells which were mutually coupled (or were unidirectionally coupled with common input) was approximated by the sum of the correlations introduced by each submotif individually.

Equation (9) provides a first approximation to the joint spiking statistics of cells in a recurrent network. However, it captures only the effects of direct synaptic connections, represented by the second and third terms, and common input, represented by the last term in Eq. (9). The impact of larger network structures, such as loops and chains, is not captured, although they may significantly affect cross-correlations [41]–[43]. Experimental studies have also shown that local cortical connectivity may not be fully random [44]–[46]. It is therefore important to understand the effects of network architecture on correlations.

We therefore propose an iterative approach which accounts for successively larger connectivity patterns in the network [32], [33]. We again start with Inline graphic, a realization of a single spike train in isolation. Successive approximations to the output of cells in a recurrent network are defined by

y_i^n(t) = y_i^0(t) + Σ_j ( K_ij ∗ (y_j^{n−1} − r_j) )(t)   (10)

To compute the correction to the output of a neuron, in the first iteration we assume that its inputs come from a collection of isolated cells: When n = 1, Eq. (10) takes into account only inputs from immediate neighbors, treating each as disconnected from the rest of the network. The corrections in the second iteration are computed using the approximate cell responses obtained from the first iteration. Thus, with n = 2, Eq. (10) also accounts for the impact of next nearest neighbors. Successive iterations include the impact of directed chains of increasing length: The isolated output from an independent collection of neurons is filtered through n stages to produce the corrected response (See Figure 2.)
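In the frequency domain, each pass of this iteration simply applies the interaction matrix once more, so the iterates are partial sums of a geometric series. The toy sketch below uses an arbitrary complex matrix as a stand-in for K̃(ω) at a single frequency (an assumption, not a kernel derived from the IF model) and checks that the iteration converges to the closed-form solution (I − K̃)^{−1} ỹ^0 when the spectral radius of K̃ is below one.

import numpy as np

rng = np.random.default_rng(1)
N = 4
# Arbitrary complex matrix standing in for K~(omega) at one frequency.
K = 0.3 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(N)
y0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # stand-in for y~0(omega)

y = y0.copy()
for n in range(30):
    y = y0 + K @ y                # next iterate: directed chains of one greater length included

closed_form = np.linalg.solve(np.eye(N) - K, y0)
print("spectral radius:", np.abs(np.linalg.eigvals(K)).max())
print("max |iterate - closed form|:", np.abs(y - closed_form).max())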

Figure 2. Iterative construction of the linear approximation to network activity.


(A) An example recurrent network. (B)–(D) A sequence of graphs determines the successive approximations to the output of neuron 1. Processes defined by the same iteration of Eq. (11) have equal color. (B) In the first iteration of Eq. (11), the output of neuron 1 is approximated using the unperturbed outputs of its neighbors. (C) In the second iteration the results of the first iteration are used to define the inputs to the neuron. For instance, the process Inline graphic depends on the base process Inline graphic which represents the unperturbed output of neuron 1. Neuron 4 receives no inputs from the rest of the network, and all approximations involve only its unperturbed output, Inline graphic. (D) Cells 3 and 4 are not part of recurrent paths, and their contributions to the approximation are fixed after the second iteration. However, the recurrent connection between cells 1 and 2 implies that subsequent approximations involve contributions of directed chains of increasing length.

Notation is simplified when this iterative construction is recast in matrix form to obtain

[Equation (11)]

where Inline graphic and Inline graphic are length Inline graphic column vectors, and Inline graphic represents a Inline graphic-fold matrix convolution of Inline graphic with itself. We define the convolution of matrices in the Methods.

The Inline graphic approximation to the matrix of cross-correlations can be written in terms of the interaction kernels, Inline graphic, and the autocorrelations of the base processes Inline graphic as

[Equation (12)]

where Inline graphic, Inline graphic and Inline graphic is the Inline graphic-fold matrix convolution of Inline graphic with itself.

Eq. (12) can be verified by a simple calculation. First, Eq. (11) directly implies that

[equation]

which we may use to find, for each Inline graphic,

[Equation (13)]

Since Inline graphic, Eq. (13) is equivalent to Eq. (12).

If we apply the Fourier transform to Eq. (12), we find that for each Inline graphic,

C^n(ω) = Σ_{j,k=0}^{n} K̃^j(ω) C^0(ω) ( K̃^∗(ω) )^k   (14)

where K̃^∗ denotes the conjugate transpose of the matrix K̃. As before, the zero-mean Fourier transforms ỹ_i^0 of the processes y_i^0 are obtained by transforming the mean-subtracted processes, and similarly for all other quantities.

Defining Inline graphic to be the spectral radius of the matrix Inline graphic, when Inline graphic, we can take the limit Inline graphic in Eq. (14) [47], [48], to obtain an approximation to the full array of cross-spectra

C(ω) = (I − K̃(ω))^{−1} C^0(ω) (I − K̃^∗(ω))^{−1}   (15)

As noted previously, this generalizes the approach of Lindner et al. [26] (also see [13]). In the limit Inline graphic, directed paths of arbitrary length contribute to the approximation. Equation (15) therefore takes into account the full recurrent structure of the network. Note that Eq. (15) may be valid even when Inline graphic. However, in this case the series in Eq. (14) do not converge, and hence the expansion of the correlations in terms of paths through the network is invalid. We confirmed numerically that Inline graphic for all of the networks and parameters we considered.
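Numerically, Eq. (15) is evaluated frequency by frequency once K̃(ω) and C^0(ω) have been tabulated. The helper below is a minimal sketch of that step; the two-cell kernels and unit spectra passed to it are arbitrary placeholders rather than quantities computed from the EIF model.

import numpy as np

def cross_spectra(K_til, C0):
    # Evaluate Eq. (15), C(w) = (I - K(w))^-1 C0(w) (I - K*(w))^-1, frequency by
    # frequency. K_til and C0 have shape (n_freq, N, N); C0 holds the unperturbed spectra.
    n_freq, N, _ = K_til.shape
    C = np.empty_like(K_til)
    for k in range(n_freq):
        M = np.linalg.inv(np.eye(N) - K_til[k])
        C[k] = M @ C0[k] @ M.conj().T
    return C

# Minimal usage with placeholder inputs: two cells, three frequencies.
freqs = np.array([0.0, 1.0, 2.0])
K_til = np.zeros((freqs.size, 2, 2), dtype=complex)
K_til[:, 0, 1] = 0.2 / (1 + 2j * np.pi * freqs)   # assumed kernel from cell 2 to cell 1
K_til[:, 1, 0] = 0.1 / (1 + 2j * np.pi * freqs)   # assumed kernel from cell 1 to cell 2
C0 = np.tile(np.eye(2, dtype=complex), (freqs.size, 1, 1))
print(cross_spectra(K_til, C0)[0])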

Finally, consider the network response to external signals, Inline graphic, with zero mean and finite variance. The response of the neurons in the recurrent network can be approximated iteratively by

[equation]

where Inline graphic and Inline graphic. External signals and recurrent synaptic inputs are both linearly filtered to approximate a cell's response, consistent with a generalization of Eq. (4). As in Eq. (12), the Inline graphic approximation to the matrix of correlations is

[equation]

where Inline graphic is the covariance matrix of the external signals. We can again take the Fourier transform and the limit Inline graphic, and solve for Inline graphic. If Inline graphic,

C(ω) = (I − K̃(ω))^{−1} ( C^0(ω) + ε^2 Ã(ω) C^X(ω) Ã^∗(ω) ) (I − K̃^∗(ω))^{−1}   (16)

When the signals comprising X are white (and possibly correlated), corrections must be made to account for the change in spectrum and response properties of the isolated cells [26], [49], [50] (See Methods).

We note that Eq. (11), which is the basis of our iterative approach, provides an approximation to the network's output which is of higher than first order in connection strength. This may seem at odds with a theory that provides a linear correction to a cell's response, cf. Eq. (4). However, Eq. (11) does not capture nonlinear corrections to the response of individual cells, as the output of each cell is determined linearly from its input. It is the input that can contain terms of any order in connection strength stemming from directed paths of different lengths through the network.

We use the theoretical framework developed above to analyze the statistical structure of the spiking activity in a network of IF neurons described by Eq. (1). We first show that the cross-correlation functions between cells in two small networks can be studied in terms of contributions from directed paths through the network. We use a similar approach to understand the structure of correlations in larger all–to–all and random networks. We show that in networks where inhibition and excitation are tuned to exactly balance, only local interactions contribute to correlations. When such balance is broken by a relative elevation of inhibition, the result may be increased synchrony in the network. The theory also allows us to obtain averages of cross-correlation functions conditioned on connectivity between pairs of cells in random networks. Such averages can provide a tractable yet accurate description of the joint statistics of spiking in these networks.

The correlation structure is determined by the response properties of cells together with synaptic dynamics and network architecture. Network interactions are described by the matrix of synaptic filters, Inline graphic, given in Eq. (2), while the response of cell Inline graphic to an input is approximated using its linear response kernel Inline graphic. Synaptic dynamics, architecture, and cell responses are all combined in the matrix Inline graphic, where Inline graphic describes the response of cell Inline graphic to an input from cell Inline graphic (See Eq. (1)). The correlation structure of network activity is approximated in Eq. (15) using the Fourier transforms of the interaction matrix, Inline graphic, and the matrix of unperturbed autocorrelations Inline graphic.

Statistics of the response of microcircuits

We first consider a pair of simple microcircuits to highlight some of the features of the theory. We start with the three-cell model of feed-forward inhibition (FFI) shown in Figure 3A [51]. The interaction matrix, K̃, has the form

[equation]

where cells are indexed in the order Inline graphic. To simplify notation, we omit the dependence of Inline graphic and other spectral quantities on Inline graphic.

Figure 3. The relation between correlation structure and response statistics in a feed-forward inhibitory microcircuit.


(A) The FFI circuit (left) can be decomposed into three submotifs. Equation (18) shows that each submotif provides a specific contribution to the cross-correlation between cells Inline graphic and Inline graphic. (B) Comparison of the theoretical prediction with the numerically computed cross-correlation between cells Inline graphic and Inline graphic. Results are shown for two different values of the inhibitory time constant, Inline graphic (Inline graphic ms, solid line, Inline graphic ms, dashed line). (C) The contributions of the different submotifs in panel A are shown for both Inline graphic ms (solid) and Inline graphic ms (dashed). Inset shows the corresponding change in the inhibitory synaptic filter. The present color scheme is used in subsequent figures. Connection strengths were Inline graphic for excitatory and inhibitory connections. In each case, the long window correlation coefficient Inline graphic between the two cells was Inline graphic.

Note that K̃ is nilpotent of degree 3 (that is, K̃^3 = 0), and the inverse of I − K̃ may be expressed as

(I − K̃)^{−1} = I + K̃ + K̃^2   (17)

Substituting Eq. (17) into Eq. (15) (and noting that an equation similar to Eq. (17) holds for (I − K̃^∗)^{−1}) yields an approximation to the matrix of cross-spectra. For instance,

[Equation (18)]

Here C^0_ii denotes the uncoupled power spectrum of cell i. Figure 3B shows that these approximations closely match numerically obtained cross-correlations.

Equation (18) gives insight into how the joint response of cells in this circuit is shaped by the features of the network. The three terms in Eq. (18) are directly related to the architecture of the microcircuit: Term I represents the correlating effect of the direct input to cell Inline graphic from cell Inline graphic. Term II captures the effect of the common input from cell Inline graphic. Finally, term III represents the interaction of the indirect input from Inline graphic to Inline graphic through Inline graphic with the input from Inline graphic to Inline graphic (See Figure 3C). A change in any single parameter may affect multiple terms. However, the individual contributions of all three terms are apparent.
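This decomposition can be checked numerically. In the sketch below the FFI wiring and kernel values are assumptions (cell 1 drives cells 2 and 3, and cell 2 inhibits cell 3, which need not match the indexing used above; the complex kernel values at a single frequency are arbitrary). Because the interaction matrix is nilpotent, the cross-spectrum splits exactly into a direct-connection term, a common-input term, and a third order term combining the indirect and direct paths, mirroring terms I, II and III.

import numpy as np

# Assumed FFI wiring: cell 1 excites cells 2 and 3, and cell 2 inhibits cell 3.
K = np.zeros((3, 3), dtype=complex)
K[1, 0] = 0.3              # 1 -> 2
K[2, 0] = 0.3              # 1 -> 3
K[2, 1] = -0.4 + 0.1j      # 2 -> 3 (inhibitory)
C0 = np.diag([1.0, 0.8, 0.9]).astype(complex)    # unperturbed power spectra (placeholders)

# K is nilpotent (K^3 = 0), so (I - K)^-1 = I + K + K^2 exactly, cf. Eq. (17).
M = np.eye(3) + K + K @ K
C = M @ C0 @ M.conj().T                          # Eq. (15) for this circuit

# Motif decomposition of the cross-spectrum between cells 2 and 3 (indices 1 and 2):
direct   = C0[1, 1] * np.conj(K[2, 1])                      # direct connection 2 -> 3
common   = K[1, 0] * C0[0, 0] * np.conj(K[2, 0])            # common input from cell 1
indirect = K[1, 0] * C0[0, 0] * np.conj(K[2, 1] * K[1, 0])  # chain 1 -> 2 -> 3 against 1 -> 2
assert np.isclose(C[1, 2], direct + common + indirect)
print(C[1, 2], direct, common, indirect)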

To illustrate the impact of synaptic properties on the cross-correlation between cells Inline graphic and Inline graphic, we varied the inhibitory time constant, Inline graphic (See Figure 3B and C). Such a change is primarily reflected in the shape of the first order term, I: Multiplication by Inline graphic is equivalent to convolution with the inhibitory synaptic filter, Inline graphic. The shape of this filter is determined by Inline graphic (See Eq. (2)), and a shorter time constant leads to a tighter timing dependency between the spikes of the two cells [24], [52]–[55]. In particular, Ostojic et al. made similar observations using a related approximation. In the FFI circuit, the first and second order terms, I and II, are dominant (red and dark orange, Figure 3B). The relative magnitude of the third order term, III (light orange, Figure 3B), is small. The next example shows that even in a simple recurrent circuit, terms of order higher than two may be significant.

More generally, the interaction matrices, K̃, of recurrent networks are not nilpotent. Consider two reciprocally coupled excitatory cells, labeled 1 and 2 (See Figure 4A, left). In this case,

K̃ = [ 0  K̃_12 ; K̃_21  0 ]

so that

(I − K̃)^{−1} = (1 / (1 − K̃_12 K̃_21)) [ 1  K̃_12 ; K̃_21  1 ]

Equation (15) gives the following approximation to the matrix of cross-spectra

C(ω) = (1 / |1 − K̃_12 K̃_21|^2) [ 1  K̃_12 ; K̃_21  1 ] C^0(ω) [ 1  K̃_21^∗ ; K̃_12^∗  1 ]   (19)

In contrast to the previous example, this approximation does not terminate at finite order in interaction strength. After expanding, the cross-spectrum between cells 1 and 2 is approximated by

C_12(ω) = ( K̃_21^∗ C^0_11 + K̃_12 C^0_22 ) Σ_{m,l=0}^{∞} (K̃_12 K̃_21)^m ( (K̃_12 K̃_21)^∗ )^l   (20)

Directed paths beginning at cell 1 and ending at cell 2 (or vice-versa) are of odd length. Hence, this approximation contains only odd powers of the kernels K̃_12 and K̃_21, each corresponding to a directed path from one cell to the other. Likewise, the approximate power spectra contain only even powers of the kernels, corresponding to directed paths that connect a cell to itself (See Figure 4A).
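The same bookkeeping can be carried out explicitly for the reciprocally coupled pair: summing the chain and common-input motifs of increasing order reproduces the closed-form result of Eq. (19). In the sketch below the kernel values at a single frequency are arbitrary assumptions chosen so that |K̃_12 K̃_21| < 1.

import numpy as np

# Assumed kernel values at a single frequency for the two coupled cells.
k12, k21 = 0.25 + 0.05j, 0.30 - 0.10j       # K~_12, K~_21
C0_11, C0_22 = 1.0, 1.2                     # unperturbed power spectra
z = k12 * k21                               # one round trip through the loop

# Closed form: the (1,2) entry of Eq. (19).
C12_closed = (np.conj(k21) * C0_11 + k12 * C0_22) / abs(1 - z) ** 2

# Path expansion: sum motif contributions of increasing order (only odd total
# powers of the kernels appear, cf. Eq. (20)); truncate the double series.
C12_series = sum(
    z**m * np.conj(z) ** l * (np.conj(k21) * C0_11 + k12 * C0_22)
    for m in range(50)
    for l in range(50)
)
print(abs(C12_closed - C12_series))         # negligible when |z| < 1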

Figure 4. The relation between correlation structure and response statistics for two bidirectionally coupled, excitatory cells.


(A) The cross-correlation between the two cells can be represented in terms of contributions from an infinite sequence of submotifs (See Eq. (20)). Though we show only a few “chain” motifs in one direction, one should note that there will also be contributions to the cross-correlation from chain motifs in the reverse direction in addition to indirect common input motifs (See the discussion of Figure 5). (B), (E) Linear response kernels in the excitable (B) and oscillatory (E) regimes. (C), (F) The cross-correlation function computed from simulations and theoretical predictions with first and third order contributions computed using Eq. (19) in the excitable (C) and oscillatory (F) regimes. (D), (G) The auto-correlation function computed from simulations and theoretical predictions with zeroth and second order contributions computed using Eq. (19) in the excitable (D) and oscillatory (G) regimes. In the oscillatory regime, higher order contributions were small relative to first order contributions and are therefore not shown. The network's symmetry implies that cross-correlations are symmetric, and we only show them for positive times. Connection strengths were Inline graphic. The long window correlation coefficient Inline graphic between the two cells was Inline graphic in the excitable regime and Inline graphic in the oscillatory regime. The ISI CV was approximately 0.98 for neurons in the excitable regime and 0.31 for neurons in the oscillatory regime.

The contributions of different sub-motifs to the cross- and auto-correlations are shown in Figures 4C, D when the isolated cells are in a near-threshold excitable state (Inline graphic). The auto-correlations are significantly affected by network interactions. We also note that chains of length two and three (the second and third submotifs in Figure 4A) provide significant contributions. Earlier approximations do not capture such corrections [24].

The operating point of a cell is set by its parameters (Inline graphic, etc.) and the statistics of its input (Inline graphic). A change in operating point can significantly change a cell's response to an input. Using linear response theory, these changes are reflected in the response functions Inline graphic, and the power spectra of the isolated cells, Inline graphic. To highlight the role that the operating point plays in the approximation of the correlation structure given by Eq. (15), we elevated the mean and decreased the variance of background noise by increasing Inline graphic and decreasing Inline graphic in Eq. (1). With the chosen parameters the isolated cells are in a super-threshold, low noise regime and fire nearly periodically (Inline graphic). After the cells are coupled, this oscillatory behavior is reflected in the cross- and auto-correlations where the dominant contributions are due to first and zeroth order terms, respectively (See Figures 4F,G).

Orders of coupling interactions

It is often useful to expand Eq. (15) in terms of powers of the interaction matrix K̃ [31]. The term K̃^k C^0 (K̃^∗)^l in the expansion is said to be of order k + l. Equivalently, in the expansion of C_ij(ω), the order of a term refers to the sum of the powers of all constituent interaction kernels. We can also associate a particular connectivity submotif with each term. In particular, nth order terms of the form

K̃_{i a_{n−1}} K̃_{a_{n−1} a_{n−2}} ⋯ K̃_{a_1 j} C^0_{jj}

are associated with a directed path j → a_1 → ⋯ → a_{n−1} → i from cell j to cell i. Similarly, the term (K̃^n)_{ij} C^0_{jj} corresponds to an n-step path from cell j to cell i. A (k + l)th order term of the form

( K̃_{i a_{k−1}} ⋯ K̃_{a_1 c} ) C^0_{cc} ( K̃_{j b_{l−1}} ⋯ K̃_{b_1 c} )^∗

represents the effects of an indirect common input from a cell c that is k steps removed from cell i and l steps removed from cell j. This corresponds to a submotif consisting of two branches originating at cell c. (See Figure 5, and also Figure 6A and the discussion around Eqs. (18,20).)
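At a fixed frequency, the order-n contribution to the matrix of cross-spectra is the sum of all products K̃^k C^0 (K̃^∗)^l with k + l = n, so the expansion by motif order can be automated. The sketch below uses a random complex matrix as a stand-in for K̃(ω) (an assumption, not a model-derived kernel) and verifies that the partial sums over orders approach Eq. (15).

import numpy as np

rng = np.random.default_rng(2)
N = 5
# Random complex matrix standing in for K~(omega); spectral radius below one.
K = 0.25 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(N)
C0 = np.diag(rng.uniform(0.5, 1.5, N)).astype(complex)

def order_n_contribution(n):
    # Sum of all motif terms K^k C0 (K*)^l with k + l = n: pure chains when
    # k = 0 or l = 0, direct/indirect common-input motifs otherwise.
    return sum(
        np.linalg.matrix_power(K, k) @ C0 @ np.linalg.matrix_power(K.conj().T, n - k)
        for k in range(n + 1)
    )

partial = sum(order_n_contribution(n) for n in range(40))
exact = np.linalg.inv(np.eye(N) - K) @ C0 @ np.linalg.inv(np.eye(N) - K.conj().T)
print("max deviation from Eq. (15):", np.abs(partial - exact).max())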

Figure 5. The motifs giving rise to terms in the expansion of Eq. (15).


(A) Terms containing only unconjugated (or only conjugated) interaction kernels Inline graphic correspond to directed chains. (B) Terms containing both unconjugated and conjugated interaction kernels Inline graphic correspond to direct or indirect common input motifs.

Figure 6. All–to–all networks and the importance of higher order motifs.


(A) Some of the submotifs contributing to correlations in the all–to–all network. (B) Cross-correlations between two excitatory cells in an all–to-all network (Inline graphic) obtained using Eq. (21) (Solid – precisely tuned network with Inline graphic [Inline graphic Inline graphic], dashed – non-precisely tuned network with Inline graphic [Inline graphic Inline graphic Inline graphic]). (C) Comparison of first and second order contributions to the cross-correlation function in panel A in the precisely tuned (left) and non-precisely tuned (right) network. In both cases, the long window correlation coefficient Inline graphic was 0.05.

Statistics of the response of large networks

The full power of the present approach becomes evident when analyzing the activity of larger networks. We again illustrate the theory using several examples. In networks where inhibition and excitation are tuned to be precisely balanced, the theory shows that only local interactions contribute to correlations. When this balance is broken, terms corresponding to longer paths through the network shape the cross-correlation functions. One consequence is that a relative increase in inhibition can lead to elevated network synchrony. We also show how to obtain tractable and accurate approximations of the average correlation structure in random networks.

A symmetric, all–to–all network of excitatory and inhibitory neurons

We begin with an all–to–all coupled network of N identical cells. Of these cells, N_E make excitatory, and N_I make inhibitory synaptic connections. The excitatory cells are assigned indices 1, …, N_E, and the inhibitory cells indices N_E + 1, …, N. All excitatory (inhibitory) synapses have weight W_E (W_I), and timescale τ_E (τ_I). The interaction matrix K̃ may then be written in block form,

[equation]

Here Inline graphic is the Inline graphic matrix of ones, Inline graphic is the weighted synaptic kernel for cells of class Inline graphic (assumed identical within each class), and Inline graphic is the susceptibility function for each cell in the network. Although the effect of autaptic connections (those from a cell to itself) is negligible (See Figure S2 in Text S1), their inclusion significantly simplifies the resulting expressions.

We define Inline graphic, and Inline graphic. Using induction, we can show that

[equation]

Direct matrix multiplication yields

[equation]

which allows us to calculate the powers Inline graphic when Inline graphic,

[equation]

An application of Eq. (15) then gives an approximation to the matrix of cross-spectra:

[Equation (21)]

The cross-spectrum between two cells in the network is therefore given by

[Equation (22)]

where Inline graphic. In Eq. (22) the first two terms represent the effects of all unidirectional chains originating at cell Inline graphic and terminating at cell Inline graphic, and vice versa. To see this, one should expand the denominators as power series in Inline graphic. The third term represents the effects of direct and indirect common inputs to the two neurons, which can be seen by expanding this denominator as a product of power series in Inline graphic and Inline graphic. In Figure 6A, we highlight a few of these contributing motifs.

Interestingly, the expression simplifies considerably when excitation and inhibition are tuned for precise balance, so that the mean excitatory and inhibitory synaptic currents cancel. Using this balance condition in Eq. (22) yields

[Equation (23)]

Effects of direct connections between the cells are captured by the first two terms, while those of direct common inputs to the pair are captured by the third term. Contributions from other paths do not appear at any order. In other words, in the precisely balanced case only local interactions contribute to correlations.

To understand this cancellation intuitively, consider the contribution of directed chains originating at a given excitatory neuron, i. For τ > 0, the cross-correlation function, C_ij(τ), is determined by the change in firing rate of cell j at time τ given a spike in cell i at time 0. By the symmetry of the all–to–all connectivity and stationarity, the firing of cell i has an equal probability of eliciting a spike in any excitatory or inhibitory cell in the network. Due to the precise synaptic balance, the postsynaptic current generated by the elicited spikes in the excitatory population will, on average, cancel the postsynaptic current due to elicited spikes in the inhibitory population. The contributions of other motifs cancel in a similar way.
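The cancellation can also be read off from the structure of the interaction matrix. With all–to–all coupling (autapses included), every entry of K̃ depends only on the class of the presynaptic cell, so K̃ has rank one; under precise balance the weighted kernels of the two classes sum to zero, hence K̃^2 = 0 and only the local terms of Eq. (15) survive. The sketch below illustrates this with assumed kernel values at a single frequency and identical unit spectra; the numbers are placeholders, not the parameters of the simulated networks.

import numpy as np

NE, NI = 8, 2
N = NE + NI
A = 0.5 + 0.2j                   # common susceptibility at one frequency (assumed)
kE = 0.05 * A                    # weighted excitatory kernel A*W_E*F~_E (assumed)
kI_balanced = -NE * kE / NI      # precise balance: NE*kE + NI*kI = 0

def build_K(kI):
    # All-to-all coupling with autapses: each entry depends only on the class
    # of the presynaptic cell, so K is a rank-one matrix.
    row = np.concatenate([np.full(NE, kE), np.full(NI, kI)])
    return np.tile(row, (N, 1))

C0 = np.eye(N, dtype=complex)    # identical unit spectra (placeholder)

for label, kI in (("balanced", kI_balanced), ("unbalanced", 1.5 * kI_balanced)):
    K = build_K(kI)
    M = np.linalg.inv(np.eye(N) - K)
    full = M @ C0 @ M.conj().T                                  # Eq. (15)
    local = (np.eye(N) + K) @ C0 @ (np.eye(N) + K).conj().T     # direct + common input only
    print(label, "||K^2|| =", round(np.linalg.norm(K @ K), 6),
          "max |full - local| =", round(float(np.abs(full - local).max()), 6))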

In Figure 6B, we show the impact of breaking this excitatory-inhibitory balance on cross-correlation functions. We increased the strength and speed of the inhibitory synapses relative to excitatory synapses, while holding constant, for the sake of comparison, the long window correlation coefficients Inline graphic between excitatory pairs (note that, by symmetry, all excitatory pairs should have the same correlation coefficient). Moreover, the degree of network synchrony, characterized by the short window correlation coefficients, is increased (See Figure 6B inset). Intuitively, a spike in one of the excitatory cells transiently increases the likelihood of spiking in all other cells in the network. Since inhibition in the network is stronger and faster than excitation, these additional spikes will transiently decrease the likelihood of spiking in twice removed cells.

Linear response theory allows us to confirm this heuristic observation, and quantify the impact of the imbalance on second order statistics. Expanding Eq. (22) for two excitatory cells to second order in coupling strength, we find

[Equation (24)]

Compared to the balanced case, there is no longer a complete cancellation between contributions of chains involving excitatory and inhibitory cells, and the two underlined terms appear as a result (compare with Eq. (23)). These terms capture the effects of all length-two chains between the two cells, starting at one and terminating at the other. The relative strengthening of inhibition implies that chains of length two provide a negative contribution to the cross-correlation function at short times (cf. [56], see the dashed orange lines in Figure 6C). Additionally, the impact of direct common input to the two cells on correlations is both larger in magnitude (because we increased the strength of both connection types) and sharper (the faster inhibitory time constant means common inhibitory inputs induce sharper correlations). These changes are reflected in the shape of the second order, common input term in Eq. (24) (see dotted orange lines in Figure 6C).

In sum, unbalancing excitatory and inhibitory connections via stronger, faster inhibitory synapses enhances synchrony, moving a greater proportion of the covariance mass closer to Inline graphic (See Figure 6B). To illustrate this effect in terms of underlying connectivity motifs, we show the contributions of length two chains and common input in both the precisely tuned and non-precisely tuned cases in Figure 6C. A similar approach would allow us to understand the impact of a wide range of changes in cellular or synaptic dynamics on the structure of correlations across networks.

Random, fixed in-degree networks of homogeneous excitatory and inhibitory neurons

Connectivity in cortical neuronal networks is typically sparse, and connection probabilities can follow distinct rules depending on area and layer [57]. The present theory allows us to consider arbitrary architectures, as we now illustrate.

We consider a randomly connected network of N_E excitatory and N_I inhibitory cells coupled with probability p. To simplify the analysis, every cell receives exactly pN_E excitatory and pN_I inhibitory inputs. Thus, having fixed in-degree (that is, the number of inputs is fixed and constant across cells), each cell receives an identical level of mean synaptic input. In addition, we continue to assume that cells are identical. Therefore, the response of each cell in the network is described by the same linear response kernel. The excitatory and inhibitory connection strengths are W_E and W_I, respectively. The timescales of excitation and inhibition may differ, but are again identical for cells within each class.
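For illustration, the sketch below draws one such random, fixed in-degree connectivity matrix: every cell receives exactly the prescribed numbers of excitatory and inhibitory inputs, so each row sum, and hence the mean synaptic input, is identical across cells. The weight values are placeholders, not those used in the simulations.

import numpy as np

def fixed_indegree_weights(NE, NI, p, wE, wI, rng):
    # Random weight matrix in which every cell receives exactly p*NE excitatory
    # and p*NI inhibitory inputs (columns 0..NE-1 are the excitatory cells).
    N, kE, kI = NE + NI, int(p * NE), int(p * NI)
    W = np.zeros((N, N))
    for i in range(N):
        W[i, rng.choice(NE, size=kE, replace=False)] = wE
        W[i, NE + rng.choice(NI, size=kI, replace=False)] = wI
    return W

rng = np.random.default_rng(3)
W = fixed_indegree_weights(NE=80, NI=20, p=0.2, wE=1.0, wI=-2.0, rng=rng)
# Every row sums to kE*wE + kI*wI, so each cell sees the same mean input.
print(np.unique(np.round(W.sum(axis=1), 6)))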

The approximation of network correlations (Eq. (15)) depends on the realization of the connectivity matrix. For a fixed realization, the underlying equations can be solved numerically to approximate the correlation structure (See Figure 7A). However, the cross-correlation between a pair of cells of given types has a form which is easy to analyze when only leading order terms in Inline graphic are retained.

Figure 7. Correlations in random, fixed in-degree networks.


(A) A comparison of numerically obtained excitatory-inhibitory cross-correlations to the approximation given by Eq. (26). (B) Mean and standard deviation for the distribution of correlation functions for excitatory-inhibitory pairs of cells. (Solid line – mean cross-correlation, shaded area – one standard deviation from the mean, calculated using bootstrapping in a single network realization). (C) Mean and standard deviation for the distribution of cross-correlation functions conditioned on cell type and first order connectivity for a reciprocally coupled excitatory-inhibitory pair of cells. (Solid line – mean cross-correlation function, shaded area – one standard deviation from the mean found by bootstrapping). (D) Average reduction in Inline graphic error between cross-correlation functions and their respective first-order conditioned averages, relative to the error between the cross-correlations and their cell-type averages. Blue circles give results for a precisely tuned network, and red squares for a network with stronger, faster inhibition. Error bars indicate two standard errors above and below the mean. Inline graphic for panels A-C are as in the precisely tuned network of Figure 6, and the two networks of panel D are as in the networks of the same figure.

Specifically, the average cross-spectrum for two cells of given types is (See Section 1 in Text S1)

[Equation (25)]

when Inline graphic. This shows that, to leading order in Inline graphic, the mean cross-spectrum between two cells in given classes equals that in the all–to–all network (see Eq. (22)). Therefore our previous discussion relating network architecture to the shape of cross-correlations in the all–to–all network extends to the average correlation structure in the random network for large Inline graphic.

Pernice et al. [31] derived similar expressions for the correlation functions in networks of interacting Hawkes processes [58], [59], which are linear, self-exciting point processes with history-dependent intensities. They assumed that either the network is regular (i.e., both in- and out-degrees are fixed) or has a sufficiently narrow degree distribution. Our analysis depends on having fixed in-degrees, and we do not assume that networks are fully regular. Both approaches lead to results that hold approximately (for large enough Inline graphic) when the in-degree is not fixed.

Average correlations between cells in the random network conditioned on first order connectivity

As Figure 7B shows, there is large variability around the mean excitatory-inhibitory cross-correlation function given by the leading order term of Eq. (25). Therefore, understanding the average cross-correlation between cells of given types does not necessarily provide much insight into the mechanisms that shape correlations on the level of individual cell pairs. Instead, we examine the average correlation between a pair of cells conditioned on their first order (direct) connectivity.

We derive expressions for first order conditional averages correct to Inline graphic (See Section 2 in Text S1). The average cross-spectrum for a pair of cells with indices Inline graphic, conditioned on the value of the direct connections between them is

[Equation (26)]

Here we set Inline graphic if we condition on the absence of a connection Inline graphic, and Inline graphic if we condition on its presence. The term Inline graphic is set similarly.

Although Eq. (26) appears significantly more complicated than the cell-type averages given in Eq. (25), the two expressions differ only in the underlined, first order terms. The magnitudes of the expected contributions from all higher order motifs are unchanged and coincide with those in the all–to–all network.

Figure 7C shows the mean cross-correlation function for mutually coupled excitatory-inhibitory pairs. Taking into account the mutual coupling significantly reduces variability (Compare with Figure 7B). To quantify this reduction, we calculate the mean reduction in variability when correlation functions are computed conditioned on the connectivity between the cells. For a single network, the relative decrease in variability can be quantified using

[equation]

where Inline graphic represents pairs of cells of a given type and connection (in the present example these are reciprocally coupled excitatory-inhibitory pairs), Inline graphic is the number of pairs of that type in the network, Inline graphic is the leading order approximation of average correlations given only the type of cells in Inline graphic (as in Eq. (25)), and Inline graphic the leading order approximation to average correlations conditioned on the first order connectivity of class Inline graphic (as in Eq. (26)). We make use of the norm Inline graphic defined by Inline graphic. Figure 7D shows Inline graphic averaged over twenty networks. In particular, compare the reduction in variability when conditioning on bidirectional coupling between excitatory-inhibitory pairs shown in Figures 7B,C, with the corresponding relative error in Figure 7D (circled in red).
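A hedged sketch of how a variability-reduction measure of this kind can be computed is given below. The choice of norm, the discretization, and the synthetic correlation functions in the usage example are assumptions for illustration; the function reports the mean relative reduction in L2 error obtained when each pair's cross-correlation is compared against a connectivity-conditioned average instead of a cell-type average.

import numpy as np

def error_reduction(C_pairs, C_type_avg, C_conditioned, dt):
    # For each pair's cross-correlation function (rows of C_pairs, sampled with
    # step dt), compare the L2 error relative to the cell-type average with the
    # error relative to the connectivity-conditioned average, and return the
    # mean relative reduction across pairs.
    err_type = np.sqrt(np.sum((C_pairs - C_type_avg) ** 2, axis=1) * dt)
    err_cond = np.sqrt(np.sum((C_pairs - C_conditioned) ** 2, axis=1) * dt)
    return np.mean((err_type - err_cond) / err_type)

# Toy usage with synthetic correlation functions.
rng = np.random.default_rng(4)
tau = np.linspace(-50.0, 50.0, 201)
base = np.exp(-np.abs(tau) / 10.0)
pairs = base + 0.3 * rng.standard_normal((30, tau.size)) * np.exp(-np.abs(tau) / 20.0)
print(error_reduction(pairs, 0.8 * base, base, dt=tau[1] - tau[0]))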

Discussion

We have extended and further developed a general theoretical framework that can be used to describe the correlation structure in a network of spiking cells. The application of linear response theory allows us to find tractable approximations of cross-correlation functions in terms of the network architecture and single cell response properties. The approach was originally used to derive analytical approximations to auto- and cross-spectra in an all–to–all inhibitory network in order to study the population response of the electrosensory lateral line lobe of weakly electric fish [26]. The key approximation relies on the assumption that the activity of cells in the network can be represented by a mixed point and continuous stochastic process, as given in Eq. (5). This approximation may be viewed as a generalization of classic Linear-Poisson models of neural spiking: the crucial difference is the replacement of the stationary firing rate by a realization of an integrate-and-fire spiking process. This allows for the retention of the underlying IF spiking activity while additionally positing that neurons act as perfect linear filters of their inputs. An iterative construction then leads to the expressions for approximate cross-correlations between pairs of cells given by Eq. (15).

The linear response framework of Lindner et al. [26] was extended by Marinazzo et al. [60] to somewhat more complex networks, and compared with other studies in which networks exhibit collective oscillations. In addition, other works [13], [61], [62] used linear response techniques to study information in the collective response of cells in a network. More recently, Ostojic et al. [24] obtained formulas for cross-correlations given in Eq. (9), which correspond to the first step in the iterative construction. Their approach captures corrections due to direct coupling (first order terms) and direct common input (second order terms involving second powers of interaction kernels; see also [49], [63]). Our approach can be viewed as a generalization that also accounts for length two directed chains, along with all higher order corrections. As Figure 4 illustrates, these additional terms can be significant. The present approach also allows us to calculate corrected auto-correlations, in contrast with that of Ostojic et al.

Our work is also closely related to that of Pernice et al. [31], who analyzed the correlation structure in networks of interacting Hawkes processes [58], [59]. Both studies represent correlations between cell pairs in terms of contributions of different connectivity motifs. However, our methods also differ: while their expressions are exact for Hawkes processes, Pernice et al. did not compare their results to those obtained using physiological models, and did not account for the response properties of individual cells (though it is possible that both can be achieved approximately by using appropriate kernels for the Hawkes processes). Moreover, for simplicity Pernice et al. examined only “total” spike count covariances, which are the integrals of the cross-correlation functions. However, as they note, their approach can be extended to obtain the temporal structure of cross-correlations. Similarly, Toyoizumi et al. [64] derive expressions for cross-correlations in networks of interacting point process models in the Generalized Linear Model (GLM) class. These are very similar to Hawkes processes, but feature a static nonlinearity that shapes the spike emission rate.

To illustrate the power of the present linear response theory in analyzing the factors that shape correlations, we considered a number of simple examples for which the approximation given by Eq. (15) is tractable. We showed how the theory can be used both to gain intuition about the network and cell properties that shape correlations, and to quantify their impact. In particular, we explained how only local connections affect correlations in a precisely tuned all–to–all network, and how strengthening inhibition may synchronize spiking activity. In each case, we use comparisons with integrate-and-fire simulations to show that linear response theory makes highly accurate predictions.

It may be surprising that linear response theory can be used to provide corrections to cross-correlations of arbitrary order in network connectivity. The key to why this works lies in the accuracy of the linearization. A more accurate approximation could be obtained by including second- and higher-order corrections to the approximate response of a single cell, as well as corrections to the joint response. While including such terms is formally necessary to capture all contributions of a given order in network connectivity [32], [33], the success of linear response theory suggests that they are small for the cases at hand. In short, the present approximation neglects higher-order corrections to the response of individual cells, along with all corrections involving joint responses, but accounts for paths through the network of arbitrary length.

As expected from the preceding discussion, simulations suggest that, for IF neurons, our approximations become less accurate as cells receive progressively stronger inputs. The physical reasons for this loss of accuracy could be related to interactions between the “hard threshold” and incoming synaptic inputs with short timescales. Additionally, while the theory works for short synaptic timescales, its accuracy improves for slower synaptic dynamics, becoming essentially exact in the limit of arbitrarily long synaptic time constants (note the improvement in the approximation for the FFI circuit at the slower timescale in Figure 3). Another important factor is background noise, which is known to improve the accuracy of the linear description of single-cell responses. We assume the presence of a white noise background, although it is possible to extend the present methods to colored background noise [25], [65].

We found that linear response theory remains applicable in a wide range of dynamical regimes, including relatively low noise, superthreshold regimes where cells exhibit strong oscillatory behavior. Moreover, the theory can yield accurate approximations of strong correlations due to coupling: for the bidirectionally coupled excitatory circuit of Figure 4, the approximate cross-correlations match numerically obtained results even when correlation coefficients are large (Inline graphic in the excitable regime, Inline graphic in the oscillatory regime). Additional discussion of the limits of applicability of linear response to the computation of correlations in networks can be found in the Supplementary Information. There, we show that the approximation is valid over a range of physiological values in the case of the all-to-all network, and that the theory gives accurate predictions in the presence of low firing rates (see Figures S3, S4 in Text S1).

The limits of linear response approximations of time-dependent firing activity and correlations have been tested in a number of other studies. Ostojic and Brunel [66] examined their accuracy in the relatively simple case of a neuron receiving filtered Gaussian noise in addition to a white background. Chacron et al. [61] noted that linear response approaches applied to networks of perfect integrators begin to display significant errors at larger connection strengths. Marinazzo et al. [60] remarked on the errors in linear response approximations to correlations in a delayed feedback loop, attributing them to network effects such as synchrony in the excitatory population, which cannot be correctly captured by a linear approach.

Although we have demonstrated the theory using networks of integrate-and-fire neurons, the approach is widely applicable. The linear response kernel and power spectrum for a general integrate-and-fire neuron model can be easily obtained [29]. It is also possible to obtain the rate, spectrum, and susceptibility for modulation of the mean conductance in the case of conductance-based (rather than current-based) synapses (see [67] and Section 3 in Text S1). As the linear response kernel is directly related to the spike-triggered average [24], [30], the proposed theoretical framework should be applicable even to actual neurons whose responses are characterized experimentally.
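As an illustration of that relation, the sketch below recovers an effective linear response kernel from a spike-triggered average, using the standard identity A(tau) ≈ r·STA(tau)/sigma^2 for a cell driven by zero-mean white noise of intensity sigma^2 [24], [30]. The linear-Poisson "neuron", its filter, and all parameter values are hypothetical stand-ins used only to validate the estimator; they are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.001, 1000.0                                # time step and duration (s)
n = int(T / dt)
sigma2 = 1.0                                         # white-noise intensity
x = rng.standard_normal(n) * np.sqrt(sigma2 / dt)    # discretized white noise

# Toy linear-Poisson cell with a known filter, used only to test the estimator.
t_k = np.arange(0.0, 0.2, dt)
true_kernel = np.exp(-t_k / 0.02) / 0.02             # exponential filter, unit area
gain, r0 = 0.5, 10.0                                 # gain and baseline rate (Hz)
drive = np.convolve(x, true_kernel)[:n] * dt
spikes = rng.random(n) < np.clip(r0 + gain * drive, 0.0, None) * dt

# Spike-triggered average of the input over a window of lags before each spike.
lags = len(t_k)
idx = np.nonzero(spikes)[0]
idx = idx[idx >= lags]
sta = np.mean([x[i - lags + 1:i + 1][::-1] for i in idx], axis=0)
r_hat = spikes.sum() / T

A_hat = r_hat * sta / sigma2                         # estimated response kernel
# Ratio of estimated to true kernel over the early lags; should be close to 1.
print(np.mean(A_hat[:50] / (gain * true_kernel[:50])))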

The possibilities for future applications are numerous. For example, one open question is how well the theory can predict correlations in the presence of adaptive currents [67]. In addition, the description of correlations in terms of architecture and response properties suggests the possibility of addressing the difficult inverse problem of inferring architectural properties from correlations [22][24], [64]. Ostojic et al. applied linear response methods to the latter problem. It is our hope that the present approach will prove a valuable tool in moving the computational neuroscience community towards a more complete understanding of the origin and impact of correlated activity in neuronal populations.

Methods

Measures of spike time correlation

We quantify dependencies between the responses of cells in the network using the spike train auto- and cross-correlation functions [39]. For a pair of spike trains, $y_i(t)$ and $y_j(t)$, with stationary rates $r_i$ and $r_j$, the cross-correlation function $C_{ij}(\tau)$ is defined as

$C_{ij}(\tau) = \mathrm{cov}\bigl(y_i(t+\tau),\, y_j(t)\bigr) = \langle y_i(t+\tau)\, y_j(t) \rangle - r_i r_j .$

The auto-correlation function $C_{ii}(\tau)$ is the cross-correlation between a spike train and itself, and $\mathbf{C}(\tau)$ is the matrix of cross-correlation functions. Denoting by $n_i(t,t+T)$ the number of spikes cell $i$ fires over the time window $(t, t+T]$, the spike count correlation, $\rho_{ij}(T)$, over windows of length $T$ is defined as

$\rho_{ij}(T) = \dfrac{\mathrm{cov}\bigl(n_i(t,t+T),\, n_j(t,t+T)\bigr)}{\sqrt{\mathrm{var}\bigl(n_i(t,t+T)\bigr)\,\mathrm{var}\bigl(n_j(t,t+T)\bigr)}} .$

We assume stationarity of the spiking processes (that is, the network has reached a steady state), so that $\rho_{ij}(T)$ does not depend on $t$. We also use the total correlation coefficient $\rho_{ij} = \lim_{T\to\infty} \rho_{ij}(T)$ to characterize dependencies between the processes $y_i$ and $y_j$ over arbitrarily long timescales.

The spike count covariance is related to the cross-correlation function by [7], [68]

$\mathrm{cov}\bigl(n_i(t,t+T),\, n_j(t,t+T)\bigr) = \displaystyle\int_{-T}^{T} C_{ij}(\tau)\,(T - |\tau|)\, d\tau .$
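For concreteness, the following sketch (Python) estimates these quantities from two binned spike trains, following the definitions above. The estimators and the surrogate Poisson data are illustrative; they are not the simulation code used in the paper.

import numpy as np

def cross_correlation(yi, yj, dt, max_lag):
    # Estimate C_ij(tau) = <y_i(t+tau) y_j(t)> - r_i r_j from binned spike counts.
    n = len(yi)
    ri, rj = yi.mean() / dt, yj.mean() / dt
    lags = np.arange(-max_lag, max_lag + 1)
    C = np.empty(len(lags))
    for k, L in enumerate(lags):
        if L >= 0:
            prod = yi[L:] * yj[:n - L]
        else:
            prod = yi[:n + L] * yj[-L:]
        C[k] = prod.mean() / dt**2 - ri * rj      # covariance density
    return lags * dt, C

def count_correlation(yi, yj, dt, T):
    # Spike count correlation rho_ij(T) over disjoint windows of length T.
    w = int(round(T / dt))
    m = (len(yi) // w) * w
    ni = yi[:m].reshape(-1, w).sum(axis=1)
    nj = yj[:m].reshape(-1, w).sum(axis=1)
    return np.corrcoef(ni, nj)[0, 1]

# Surrogate data: two Poisson trains sharing a common Poisson input (1 ms bins).
rng = np.random.default_rng(1)
dt, n = 0.001, 500_000
common = rng.poisson(5.0 * dt, n)
y1 = rng.poisson(10.0 * dt, n) + common
y2 = rng.poisson(10.0 * dt, n) + common
tau, C12 = cross_correlation(y1, y2, dt, max_lag=50)
print("C_12 at zero lag:", C12[50])
print("rho(T = 0.1 s):", count_correlation(y1, y2, dt, T=0.1))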

The cross-correlation can be interpreted in terms of the conditional probability that cell $i$ spikes at time $t+\tau$ given that cell $j$ spiked at time $t$. The conditional firing rate,

$h_{ij}(\tau) = r_i + \dfrac{C_{ij}(\tau)}{r_j} ,$

is the firing rate of cell $i$ conditioned on a spike in cell $j$ at $\tau$ units of time in the past, and $h_{ij}(\tau) \to r_i$ as $|\tau| \to \infty$.

Define the Fourier transform of a function $f(t)$ as $\tilde{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \omega t}\, dt$. We will often make use of the cross-spectrum between the outputs of cells $i$ and $j$, given by $\tilde{C}_{ij}(\omega)$, the Fourier transform of the cross-correlation function of cells $i$ and $j$. The power spectrum $\tilde{C}_{ii}(\omega)$ is the cross-spectrum between a cell and itself, and is the Fourier transform of the auto-correlation function.
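As a check of the transform convention written above (reconstructed here), the short sketch below compares a numerical transform of an exponential correlation function with its known Lorentzian form; the example function and its timescale are illustrative only.

import numpy as np

dt, tau_c = 0.001, 0.02
t = np.arange(-1.0, 1.0, dt)
C = np.exp(-np.abs(t) / tau_c)              # toy correlation function

freqs = np.fft.fftfreq(len(t), d=dt)
# Multiply by dt so the DFT approximates int C(t) exp(-2*pi*i*omega*t) dt.
Ctilde = np.fft.fft(np.fft.ifftshift(C)) * dt
analytic = 2 * tau_c / (1 + (2 * np.pi * freqs * tau_c) ** 2)
print(np.max(np.abs(Ctilde.real - analytic)))   # small discretization error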

Numerical methods

Simulations were run in C++, and the stochastic differential equations were integrated with a standard Euler method with a time step of 0.01 ms. General parameter values were as follows: Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic. Marginal statistics (firing rates, uncoupled power spectra, and response functions) were obtained using the threshold integration method of [29], implemented in MATLAB. A package of code containing examples of all the numerical methods used in this paper (both simulations and theory) is posted at http://www.math.uh.edu/~josic/myweb/software.html. Additional code is available upon request.
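A minimal sketch of the integration scheme just described (simple Euler with a 0.01 ms step), applied to a single leaky integrate-and-fire neuron driven by white noise, is given below. It is written in Python rather than C++ for brevity, the model is a generic leaky IF neuron in a common normalization rather than necessarily the paper's parameterization, and every numerical value is a placeholder rather than one of the values listed above.

import numpy as np

# Generic model: tau_m dV = (E - V) dt + sigma * sqrt(tau_m) dW, with hard
# threshold and reset.  Parameter values are illustrative placeholders.
dt = 0.01e-3            # 0.01 ms, in seconds
T = 1.0                 # total simulated time (s)
tau_m = 20e-3           # membrane time constant (s)
E, sigma = 1.1, 0.3     # mean drive and noise intensity (threshold units)
V_th, V_reset = 1.0, 0.0

rng = np.random.default_rng(0)
n_steps = int(T / dt)
V = 0.0
spike_times = []
for k in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    V += (dt * (E - V) + sigma * np.sqrt(tau_m) * dW) / tau_m
    if V >= V_th:
        spike_times.append(k * dt)
        V = V_reset

print("firing rate (Hz):", len(spike_times) / T)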

Calculation of stationary rates in a recurrent network

The stationary firing rate of an IF neuron can be computed as a function of the mean and intensity of its internal noise and of other cellular parameters (e.g., the membrane time constant) [69]. Denote the stationary firing rate of cell $i$ in the network by $r_i$, and by $\phi(\mu, \sigma^2)$ the stationary firing rate of an isolated cell driven by white noise with mean $\mu$ and variance $\sigma^2$; we keep the dependence on other parameters implicit. The stationary rates, $r_i$, in the recurrent network without external input are determined self-consistently by

$r_i = \phi\Bigl(\mu_i + \sum_j W_{ij}\, r_j,\ \sigma_i^2\Bigr),$

where $\mu_i$ and $\sigma_i^2$ are the mean and variance of the background noise to cell $i$, and we used $\langle (\mathbf{J}_{ij} * y_j)(t) \rangle = W_{ij} r_j$. This equality holds because the synaptic kernels, $\mathbf{J}_{ij}$, were normalized to have area $W_{ij}$. These equations can typically be solved by fixed-point iteration.

Note that this provides an effective mean input, $\mu_i + \sum_j W_{ij} r_j$, to each cell, but does not adjust the variance, $\sigma_i^2$. We assume that the major impact of recurrent input is reflected in the mean, and ignore corrections to the cell response involving higher-order statistics of the input. This approach is valid as long as fluctuations in the recurrent input to each cell are small compared to the background noise, and may break down otherwise [27].
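A minimal sketch of the fixed-point iteration is given below. The rate function phi is a smooth placeholder standing in for the exact integrate-and-fire rate formula of [69], and the weights and inputs are likewise illustrative; none of these numbers are taken from the paper.

import numpy as np

def phi(mu, sigma2):
    # Placeholder stationary rate (Hz) as a function of input mean and variance.
    return 40.0 / (1.0 + np.exp(-(mu - 1.0) / np.sqrt(sigma2)))

def self_consistent_rates(mu, sigma2, W, tol=1e-10, max_iter=1000):
    # Iterate r <- phi(mu + W r, sigma2), starting from the uncoupled rates.
    r = phi(mu, sigma2)
    for _ in range(max_iter):
        r_new = phi(mu + W @ r, sigma2)
        if np.max(np.abs(r_new - r)) < tol:
            return r_new
        r = r_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy network: 3 cells with weak mixed excitation and inhibition
# (weights in mean-input units per Hz of presynaptic rate).
mu = np.array([1.0, 1.1, 0.9])
sigma2 = np.array([0.25, 0.25, 0.25])
W = np.array([[0.0,   0.002, -0.004],
              [0.002, 0.0,   -0.004],
              [0.003, 0.003,  0.0]])
print(self_consistent_rates(mu, sigma2, W))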

Corrections to statistics in the presence of external white noise signals

Expression (16) can be used to compute the statistics of the network response to inputs $X_i(t)$ of finite variance. As noted by [26], when inputs have infinite variance additional corrections are necessary. As a particular example, consider the case where the processes are correlated white noise, i.e., $X_i(t) = \sqrt{1-c}\,\xi_i(t) + \sqrt{c}\,\xi_c(t)$, where the $\xi_i$ and $\xi_c$ are independent white noise processes with intensity $\sigma_X^2$. Then each $X_i$ is also a white noise process with intensity $\sigma_X^2$, but $\langle X_i(t) X_j(t+\tau)\rangle = c\,\sigma_X^2\,\delta(\tau)$ for $i \neq j$. The firing rate of cell $i$ in response to this input is $\phi(\mu_i, \sigma_i^2 + \sigma_X^2)$, and the point around which the response of the cell is linearized needs to be adjusted accordingly.

Finally, we may apply an additional correction to the linear response approximation of auto-correlations. For simplicity, we ignore coupling in Eq. (16) (so that all interaction kernels vanish). Linear response predicts that $C_{ii}(\omega) \approx C^0_{ii}(\omega; \sigma_i^2) + |A_i(\omega; \sigma_i^2)|^2\, \sigma_X^2$, where we have introduced explicit dependence on $\sigma_i^2$, the variance of the white noise received by an IF neuron with power spectrum $C^0_{ii}(\omega; \sigma_i^2)$ in the absence of the external signal. The approximation may be improved in this case by making the following substitution in Eq. (16) [26], [50]:

$C^0_{ii}(\omega; \sigma_i^2) \;\to\; C^0_{ii}(\omega; \sigma_i^2 + \sigma_X^2) - |A_i(\omega; \sigma_i^2 + \sigma_X^2)|^2\, \sigma_X^2 .$

The response function $A_i(\omega; \sigma_i^2)$ should be adjusted likewise.

Convolution of matrices

Let $\mathbf{G}(t)$ and $\mathbf{H}(t)$ be $m \times n$ and $n \times p$ matrices of functions, respectively. We define the convolution of matrices, $(\mathbf{G} * \mathbf{H})(t)$, to be the $m \times p$ matrix of functions with entries

$(\mathbf{G} * \mathbf{H})_{ij}(t) = \sum_{k=1}^{n} (G_{ik} * H_{kj})(t) = \sum_{k=1}^{n} \int_{-\infty}^{\infty} G_{ik}(t')\, H_{kj}(t - t')\, dt' .$

Expectations and matrix convolutions commute, since matrix expectations are taken entry-wise and each entry of a matrix convolution is a linear combination of scalar convolutions, which commute with expectations. Additionally, we adopt the convention that the zeroth convolution power of the interaction matrix is the diagonal matrix of functions whose diagonal entries equal $\delta(t)$; it therefore acts as the identity under matrix convolution.
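A minimal sketch of this operation on a discrete time grid follows (Python, with illustrative matrices of functions); the dt factor converts the discrete sum into an approximation of the convolution integral.

import numpy as np

def matrix_convolve(G, H, dt):
    # (G * H)_{ij}(t) = sum_k (G_{ik} * H_{kj})(t), on a grid with step dt.
    # G has shape (m, n, n_time), H has shape (n, p, n_time).
    m, n, _ = G.shape
    n2, p, _ = H.shape
    assert n == n2
    out = None
    for i in range(m):
        for j in range(p):
            acc = sum(np.convolve(G[i, k], H[k, j]) for k in range(n)) * dt
            if out is None:
                out = np.zeros((m, p, acc.size))
            out[i, j] = acc
    return out

# Example: a 2x2 matrix of exponentials convolved with a 2x1 matrix of functions.
dt = 0.001
t = np.arange(0, 0.5, dt)
G = np.stack([[np.exp(-t / 0.01), np.zeros_like(t)],
              [np.exp(-t / 0.02), np.exp(-t / 0.01)]])
H = np.exp(-t / 0.05)[None, None, :].repeat(2, axis=0)
print(matrix_convolve(G, H, dt).shape)   # (2, 1, 999)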

Supporting Information

Text S1

Supplementary information file containing derivations and additional content, such as an exploration of the error of the theory. Supporting information figures were included in this file (and not separately).

(PDF)

Acknowledgments

The authors thank Robert Rosenbaum, Brent Doiron and Srdjan Ostojic for many helpful discussions.

Footnotes

The authors have declared that no competing interests exist.

This work was supported by NSF grants DMS-0817649, DMS-1122094, and a Texas ARP/ATP award to KJ, as well as NSF Grants DMS-1122106, DMS-0818153, and a Burroughs Wellcome Career Award at the Scientific Interface to ESB. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Cohen M, Kohn A. Measuring and interpreting neuronal correlations. Nat Neurosci. 2011;14:811–819. doi: 10.1038/nn.2842. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Ohki K, Chung S, Ch'ng Y, Kara P, Reid R. Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Nature. 2005;433:597–603. doi: 10.1038/nature03274. [DOI] [PubMed] [Google Scholar]
  • 3.Field G, Gauthier J, Sher A, Greschner M, Machado T, et al. Functional connectivity in the retina at the resolution of photoreceptors. Nature. 2010;467:673–677. doi: 10.1038/nature09424. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Jia H, Rochefort N, Chen X, Konnerth A. In vivo two-photon imaging of sensory-evoked dendritic calcium signals in cortical neurons. Nat Protoc. 2010;6:28–35. doi: 10.1038/nprot.2010.169. [DOI] [PubMed] [Google Scholar]
  • 5.Brown EN, Kass RE, Mitra P. Multiple neural spike train data analysis: state-of-the-art and future challenges. Nat Neurosci. 2004;7:456–61. doi: 10.1038/nn1228. [DOI] [PubMed] [Google Scholar]
  • 6.Averbeck B, Latham P, Pouget A. Neural correlations, population coding and computation. Nat Rev Neurosci. 2006;7:358–366. doi: 10.1038/nrn1888. [DOI] [PubMed] [Google Scholar]
  • 7.Shadlen M, Newsome W. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci. 1998;18:3870–3896. doi: 10.1523/JNEUROSCI.18-10-03870.1998. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Panzeri S, Schultz S, Treves A, Rolls E. Correlations and encoding of information in the nervous system. P Roy Soc Lond B Bio. 1999;266:1001–1012. doi: 10.1098/rspb.1999.0736. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Zohary E, Shadlen MN, Newsome WT. Correlated neuronal discharge rate and its implication for psychophysical performance. Nature. 1994;370:140–143. doi: 10.1038/370140a0. [DOI] [PubMed] [Google Scholar]
  • 10.Abbott LF, Dayan P. The effect of correlated variability on the accuracy of a population code. Neural Comput. 1999;11:91–101. doi: 10.1162/089976699300016827. [DOI] [PubMed] [Google Scholar]
  • 11.Sompolinsky H, Yoon H, Kang K, Shamir M. Population coding in neuronal systems with correlated noise. Phys Rev E. 2001;64:051904. doi: 10.1103/PhysRevE.64.051904. [DOI] [PubMed] [Google Scholar]
  • 12.Panzeri S, Petersen R, Schultz S, Lebedev M, Diamond M. The role of spike timing in the coding of stimulus location in rat somatosensory cortex. Neuron. 2001;29:769–777. doi: 10.1016/s0896-6273(01)00251-3. [DOI] [PubMed] [Google Scholar]
  • 13.Beck J, Bejjanki V, Pouget A. Insights from a simple expression for linear fisher information in a recurrently connected population of spiking neurons. Neural Comput. 2011;23:1484–1502. doi: 10.1162/NECO_a_00125. [DOI] [PubMed] [Google Scholar]
  • 14.Josić K, Shea-Brown E, Doiron B, de La Rocha J. Stimulus-dependent correlations and population codes. Neural Comput. 2009;21:2774–2804. doi: 10.1162/neco.2009.10-08-879. [DOI] [PubMed] [Google Scholar]
  • 15.Schneidman E, Bialek W, Berry M. Synergy, Redundancy, and Independence in Population Codes. J Neurosci. 2003;23:11539–11553. doi: 10.1523/JNEUROSCI.23-37-11539.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Latham PE, Nirenberg S. Synergy, Redundancy, and Independence in Population Codes, Revisited. J Neurosci. 2005;25:5195–5206. doi: 10.1523/JNEUROSCI.5319-04.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Nirenberg S, Carcieri SM, Jacobs AL, Latham PE. Retinal ganglion cells act largely as independent encoders. Nature. 2001;411:698–701. doi: 10.1038/35079612. [DOI] [PubMed] [Google Scholar]
  • 18.Cohen M, Newsome W. Estimates of the contribution of single neurons to perception depend on timescale and noise correlation. J Neurosci. 2009;29:6635–6648. doi: 10.1523/JNEUROSCI.5179-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Romo R, Hernandez A, Zainos A, Salinas E. Correlated neuronal discharges that increase coding efficiency during perceptual discrimination. Neuron. 2003;38:649–657. doi: 10.1016/s0896-6273(03)00287-3. [DOI] [PubMed] [Google Scholar]
  • 20.Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, et al. The asynchronous state in cortical circuits. Science. 2010;327:587–590. doi: 10.1126/science.1179850. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Ecker A, Berens P, Keliris G, Bethge M, Logothetis N, et al. Decorrelated neuronal firing in cortical microcircuits. Science. 2010;327:584–587. doi: 10.1126/science.1179867. [DOI] [PubMed] [Google Scholar]
  • 22.Paninski L, Ahmadian Y, Ferreira D, Koyama S, Rahnama Rad K, et al. A new look at state-space models for neural data. J Comput Neurosci. 2010;29:107–126. doi: 10.1007/s10827-009-0179-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Nykamp D. A mathematical framework for inferring connectivity in probabilistic neuronal networks. Math Biosci. 2007;205:204–251. doi: 10.1016/j.mbs.2006.08.020. [DOI] [PubMed] [Google Scholar]
  • 24.Ostojic S, Brunel N, Hakim V. How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. J Neurosci. 2009;29:10234–10253. doi: 10.1523/JNEUROSCI.1275-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Fourcaud-Trocmé N, Hansel D, Van Vreeswijk C, Brunel N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. J Neurosci. 2003;23:11628–11640. doi: 10.1523/JNEUROSCI.23-37-11628.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Lindner B, Doiron B, Longtin A. Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Phys Rev E. 2005;72:061919. doi: 10.1103/PhysRevE.72.061919. [DOI] [PubMed] [Google Scholar]
  • 27.Brunel N, Hakim V. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 1999;11:1621–1671. doi: 10.1162/089976699300016179. [DOI] [PubMed] [Google Scholar]
  • 28.Lindner B, Schimansky-Geier L. Transmission of noise coded versus additive signals through a neuronal ensemble. Phys Rev Lett. 2001;86:2934–2937. doi: 10.1103/PhysRevLett.86.2934. [DOI] [PubMed] [Google Scholar]
  • 29.Richardson M. Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive. Phys Rev E. 2007;76:021919. doi: 10.1103/PhysRevE.76.021919. [DOI] [PubMed] [Google Scholar]
  • 30.Gabbiani F, Koch C. Principles of spike train analysis. In: Segev I, Koch C, editors. Methods in Neuronal Modeling. Cambridge: MIT Press; 1998. pp. 313–360. [Google Scholar]
  • 31.Pernice V, Staude B, Cardanobile S, Rotter S. How structure determines correlations in neuronal networks. PLoS Comput Biol. 2011;7:e1002059. doi: 10.1371/journal.pcbi.1002059. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Rangan A. Diagrammatic expansion of pulse-coupled network dynamics. Phys Rev Lett. 2009;102:158101. doi: 10.1103/PhysRevLett.102.158101. [DOI] [PubMed] [Google Scholar]
  • 33.Rangan A. Diagrammatic expansion of pulse-coupled network dynamics in terms of subnetworks. Phys Rev E. 2009;80:036101. doi: 10.1103/PhysRevE.80.036101. [DOI] [PubMed] [Google Scholar]
  • 34.Risken H. The Fokker-Planck equation: Methods of solution and applications. Berlin: Springer Verlag; 1996. 488 [Google Scholar]
  • 35.Brunel N, Chance F, Fourcaud N, Abbott L. Effects of synaptic noise and filtering on the frequency response of spiking neurons. Phys Rev Lett. 2001;86:2186–2189. doi: 10.1103/PhysRevLett.86.2186. [DOI] [PubMed] [Google Scholar]
  • 36.White JA, Rubinstein JT, Kay AR. Channel noise in neurons. Trends Neurosci. 2000;23:131–137. doi: 10.1016/s0166-2236(99)01521-0. [DOI] [PubMed] [Google Scholar]
  • 37.Renart A, Brunel N, Wang X. Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In: Feng J, editor. Computational Neuroscience: A Comprehensive Approach. Boca Raton: CRC Press; 2004. pp. 431–490. [Google Scholar]
  • 38.Burkitt A. A review of the integrate-and-fire neuron model: I. homogeneous synaptic input. Biol Cybern. 2006;95:1–19. doi: 10.1007/s00422-006-0068-6. [DOI] [PubMed] [Google Scholar]
  • 39.Gabbiani F, Cox S. Mathematics for Neuroscientists. London: Academic Press; 2010. 498 [Google Scholar]
  • 40.Barreiro A, Shea-Brown E, Thilo E. Time scales of spike-train correlation for neural oscillators with common drive. Phys Rev E. 2010;81:011916. doi: 10.1103/PhysRevE.81.011916. [DOI] [PubMed] [Google Scholar]
  • 41.Roxin A. The role of degree distribution in shaping the dynamics in networks of sparsely connected spiking neurons. Front Comput Neurosci. 2011;5:1–15. doi: 10.3389/fncom.2011.00008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Zhao L, Beverlin B, Netoff T, Nykamp DQ. Synchronization from second order network connectivity statistics. Front Comput Neurosci. 2011;5:1–16. doi: 10.3389/fncom.2011.00028. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Bullmore E, Sporns O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat Rev Neurosci. 2009;10:186–198. doi: 10.1038/nrn2575. [DOI] [PubMed] [Google Scholar]
  • 44.Song S, Sjöström P, Reigl M, Nelson S, Chklovskii D. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 2005;3:e68. doi: 10.1371/journal.pbio.0030068. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Oswald A, Doiron B, Rinzel J, Reyes A. Spatial profile and differential recruitment of gabab modulate oscillatory activity in auditory cortex. J Neurosci. 2009;29:10321. doi: 10.1523/JNEUROSCI.1703-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Perin R, Berger T, Markram H. A synaptic organizing principle for cortical neuronal groups. Proc Natl Acad Sci U S A. 2011;108:5419–5424. doi: 10.1073/pnas.1016051108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Katō T. Perturbation Theory for Linear Operators. Springer-Verlag; 1995. 640 [Google Scholar]
  • 48.Horn R, Johnson C. Matrix Analysis. Cambridge: Cambridge University Press; 1990. 575 [Google Scholar]
  • 49.de la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A. Correlation between neural spike trains increases with firing rate. Nature. 2007;448:802–806. doi: 10.1038/nature06028. [DOI] [PubMed] [Google Scholar]
  • 50.Vilela R, Lindner B. Comparative study of different integrate-and-fire neurons: Spontaneous activity, dynamical response, and stimulus-induced correlation. Phys Rev E. 2009;80:031909. doi: 10.1103/PhysRevE.80.031909. [DOI] [PubMed] [Google Scholar]
  • 51.Kremkow J, Perrinet L, Masson G, Aertsen A. Functional consequences of correlated excitatory and inhibitory conductances in cortical networks. J Comp Neurosci. 2010;28:579–594. doi: 10.1007/s10827-010-0240-9. [DOI] [PubMed] [Google Scholar]
  • 52.Veredas F, Vico F, Alonso J. Factors determining the precision of the correlated firing generated by a monosynaptic connection in the cat visual pathway. J Physiol. 2005;567:1057. doi: 10.1113/jphysiol.2005.092882. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Kirkwood P. On the use and interpretation of cross-correlations measurements in the mammalian central nervous system. J Neurosci Meth. 1979;1:107. doi: 10.1016/0165-0270(79)90009-8. [DOI] [PubMed] [Google Scholar]
  • 54.Fetz E, Gustafsson B. Relation between shapes of post-synaptic potentials and changes in firing probability of cat motoneurones. J Physiol. 1983;341:387. doi: 10.1113/jphysiol.1983.sp014812. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Herrmann A, Gerstner W. Noise and the PSTH response to current transients: I. general theory and application to the integrate-and-fire neuron. J Comput Neurosci. 2001;11:135–151. doi: 10.1023/a:1012841516004. [DOI] [PubMed] [Google Scholar]
  • 56.Vreeswijk C, Abbott L, Bard Ermentrout G. When inhibition not excitation synchronizes neural firing. J Comput Neurosci. 1994;1:313–321. doi: 10.1007/BF00961879. [DOI] [PubMed] [Google Scholar]
  • 57.Shepherd G. Foundations of the Neuron Doctrine. Oxford: Oxford University Press; 1991. 352 [Google Scholar]
  • 58.Hawkes A. Spectra of some self-exciting and mutually exciting point processes. Biometrika. 1971;58:83–90. [Google Scholar]
  • 59.Hawkes A. Point spectra of some mutually exciting point processes. J Roy Stat Soc B Met. 1971;33:438–443. [Google Scholar]
  • 60.Marinazzo D, Kappen H, Gielen S. Input-driven oscillations in networks with excitatory and inhibitory neurons with dynamic synapses. Neural Comput. 2007;19:1739–1765. doi: 10.1162/neco.2007.19.7.1739. [DOI] [PubMed] [Google Scholar]
  • 61.Chacron M, Longtin A, Maler L. Delayed excitatory and inhibitory feedback shape neural information transmission. Phys Rev E. 2005;72:051917. doi: 10.1103/PhysRevE.72.051917. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Åkerberg O, Chacron M. Noise shaping in neural populations. Phys Rev E. 2009;79:011914. doi: 10.1103/PhysRevE.79.011914. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Shea-Brown E, Josić K, de La Rocha J, Doiron B. Correlation and synchrony transfer in integrate-and-fire neurons: Basic properties and consequences for coding. Phys Rev Lett. 2008;100:108102. doi: 10.1103/PhysRevLett.100.108102. [DOI] [PubMed] [Google Scholar]
  • 64.Toyoizumi T, Rad K, Paninski L. Mean-field approximations for coupled populations of generalized linear model spiking neurons with markov refractoriness. Neural Comput. 2009;21:1203–1243. doi: 10.1162/neco.2008.04-08-757. [DOI] [PubMed] [Google Scholar]
  • 65.Alijani A, Richardson M. Rate response of neurons subject to fast or frozen noise: From stochastic and homogeneous to deterministic and heterogeneous populations. Phys Rev E. 2011;84:011919. doi: 10.1103/PhysRevE.84.011919. [DOI] [PubMed] [Google Scholar]
  • 66.Ostojic S, Brunel N. From spiking neuron models to linear-nonlinear models. PLoS Comput Biol. 2011;7:e1001056. doi: 10.1371/journal.pcbi.1001056. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Richardson M. Dynamics of populations and networks of neurons with voltage-activated and calcium-activated currents. Phys Rev E. 2009;80:021928. doi: 10.1103/PhysRevE.80.021928. [DOI] [PubMed] [Google Scholar]
  • 68.Bair W, Zohary E, Newsome W. Correlated firing in macaque visual area mt: time scales and relationship to behavior. J Neurosci. 2001;21:1676–1697. doi: 10.1523/JNEUROSCI.21-05-01676.2001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69.Ricciardi L, Sacerdote L. The ornstein-uhlenbeck process as a model for neuronal activity. Biol Cybern. 1979;35:1–9. doi: 10.1007/BF01845839. [DOI] [PubMed] [Google Scholar]
