PLOS Computational Biology
2014 Jan 16;10(1):e1003428. doi: 10.1371/journal.pcbi.1003428

The Correlation Structure of Local Neuronal Networks Intrinsically Results from Recurrent Dynamics

Moritz Helias 1,*, Tom Tetzlaff 1, Markus Diesmann 1,2
Editor: Olaf Sporns
PMCID: PMC3894226  PMID: 24453955

Abstract

Correlated neuronal activity is a natural consequence of network connectivity and shared inputs to pairs of neurons, but the task-dependent modulation of correlations in relation to behavior also hints at a functional role. Correlations influence the gain of postsynaptic neurons, the amount of information encoded in the population activity and decoded by readout neurons, and synaptic plasticity. Further, correlations affect the power and spatial reach of extracellular signals like the local-field potential. A theory of correlated neuronal activity accounting for recurrent connectivity as well as fluctuating external sources is currently lacking. In particular, it is unclear how the recently found mechanism of active decorrelation by negative feedback on the population level affects the network response to externally applied correlated stimuli. Here, we present such an extension of the theory of correlations in stochastic binary networks. We show that (1) for homogeneous external input, the structure of correlations is mainly determined by the local recurrent connectivity, (2) homogeneous external inputs provide an additive, unspecific contribution to the correlations, (3) inhibitory feedback effectively decorrelates neuronal activity, even if neurons receive identical external inputs, and (4) identical synaptic input statistics for excitatory and for inhibitory cells increase intrinsically generated fluctuations and pairwise correlations. We further demonstrate how the accuracy of mean-field predictions can be improved by self-consistently including correlations. As a byproduct, we show that the cancellation of correlations between the summed inputs to pairs of neurons does not originate from the fast tracking of external input, but from the suppression of fluctuations on the population level by the local network.
This suppression is a necessary constraint, but not sufficient to determine the structure of correlations; specifically, the structure observed at finite network size differs from the prediction based on perfect tracking, even though perfect tracking implies suppression of population fluctuations.

Author Summary

The co-occurrence of action potentials of pairs of neurons within short time intervals has been known for a long time. Such synchronous events can appear time-locked to the behavior of an animal, and theoretical considerations also argue for a functional role of synchrony. Early theoretical work tried to explain correlated activity by neurons transmitting common fluctuations due to shared inputs. This, however, overestimates correlations. Recently, the recurrent connectivity of cortical networks was shown to be responsible for the observed low baseline correlations. Two different explanations were given: One argues that excitatory and inhibitory population activities closely follow the external inputs to the network, so that their effects on a pair of cells mutually cancel. Another explanation relies on negative recurrent feedback to suppress fluctuations in the population activity, equivalent to small correlations. In a biological neuronal network one expects both external inputs and recurrence to affect correlated activity. The present work extends the theoretical framework of correlations to include both contributions and explains their qualitative differences. Moreover, the study shows that the arguments of fast tracking and recurrent feedback are not equivalent; only the latter correctly predicts the cell-type specific correlations.

Introduction

The spatio-temporal structure and magnitude of correlations in cortical neural activity have been a subject of research for a variety of reasons: the experimentally observed task-dependent modulation of correlations points at a potential functional role. In the motor cortex of behaving monkeys, for example, synchronous action potentials appear at behaviorally relevant time points [1]. The degree of synchrony is modulated by task performance, and the precise timing of synchronous events follows a change of the behavioral protocol after a phase of re-learning. In primary visual cortex, saccades (eye movements) are followed by brief periods of synchronized neural firing [2], [3]. Further, correlations and fluctuations depend on the attentive state of the animal [4], with higher correlations and slow fluctuations observed during quiet wakefulness, and faster, uncorrelated fluctuations in the active state [5]. It is still unclear whether the observed modulation of correlations is in fact employed by the brain, or whether it is merely an epiphenomenon. Theoretical studies have suggested a number of interpretations and mechanisms of how correlated firing could be exploited: Correlations in afferent spike-train ensembles may provide a gating mechanism by modulating the gain of postsynaptic cells (for a review, see [6]). Synchrony in afferent spikes (or, more generally, synchrony in spike arrival) can enhance the reliability of postsynaptic responses and, hence, may serve as a mechanism for a reliable activation and propagation of precise spatio-temporal spike patterns [7], [8], [9], [10]. Further, it has been argued that synchronous firing could be employed to combine elementary representations into larger percepts [11], [12], [7], [13], [14].
While correlated firing may constitute the substrate for some en- and decoding schemes, it can be highly disadvantageous for others: The number of response patterns which can be triggered by a given afferent spike-train ensemble becomes maximal if these spike trains are uncorrelated [15]. In addition, correlations in the ensemble impair the ability of readout neurons to decode information reliably in the presence of noise (see e.g. [16], [15], [17]). Recent studies have indeed shown that biological neural networks implement a number of mechanisms which can efficiently decorrelate neural activity, such as the nonlinearity of spike generation [18], synaptic-transmission variability and failure [19], [20], short-term synaptic depression [20], heterogeneity in network connectivity [21] and neuron properties [22], and the recurrent network dynamics [23], [24], [17]. To study the significance of experimentally observed task-dependent correlations, it is essential to provide adequate null hypotheses: Which level and structure of correlations are to be expected in the absence of any task-related stimulus or behavior? Even in the simplest network models without time varying input, correlations in the neural activity emerge as a consequence of shared input [25], [26], [27] and recurrent connectivity [24], [28], [17], [29], [30]. Irrespective of the functional aspect, the spatio-temporal structure and magnitude of correlations between spike trains or membrane potentials carry valuable information about the properties of the underlying network generating these signals [26], [28], [31], [29], [30] and could therefore help constrain models of cortical networks. Further, the quantification of spike-train correlations is a prerequisite to understand how correlation sensitive synaptic plasticity rules, such as spike-timing dependent plasticity [32], interact with the recurrent network dynamics [33].
Finally, knowledge of the expected level of correlations between synaptic inputs is crucial for the correct interpretation of extracellular signals like the local-field potential (LFP) [34].

Previous theoretical studies on correlations in local cortical networks provide analytical expressions for the magnitude [27], [24], [17] and the temporal shape [35], [36], [29], [30] of average pairwise correlations, capture the influence of the connectivity on correlations [37], [38], [28], [31], [29], [39], and connect oscillatory network states emerging from delayed negative feedback [40] to the shape of correlation functions [30]. In particular we have shown recently that negative feedback loops, abundant in cortical networks, constitute an efficient decorrelation mechanism and therefore allow neurons to fire nearly independently despite substantial shared presynaptic input [17] (see also [37], [24], [41]). We further pointed out that in networks of excitatory (E) and inhibitory (I) neurons, the correlations between neurons of different cell type (EE, EI, II) differ in both magnitude and temporal shape, even if excitatory and inhibitory neurons have identical properties and input statistics [17], [30]. It remains unclear, however, how this cell-type specificity of correlations is affected by the connectivity of the network.

The majority of previous theoretical studies on cortical circuits are restricted to local networks driven by external sources representing thalamo-cortical or cortico-cortical inputs (e.g. [42], [43], [44]). Most of these studies emphasize the role of the local network connectivity (e.g. [45]). Despite the fact that inputs from remote (external) areas constitute a substantial fraction of all excitatory inputs ([7]; see also [46], [47]), their spatio-temporal structure is often abstracted by assuming that neurons in the local network are independently driven by external sources. A priori, this assumption can hardly be justified: neurons belonging to the local cortical network receive, at least to some extent, inputs from identical or overlapping remote areas, for example due to patchy (clustered) horizontal connectivity [48], [49]. Hence, shared-input correlations are likely to play a role not only for local but also for external inputs. Coherent activation of neurons in remote presynaptic areas constitutes another source of correlated external input, in particular for sensory areas [50], [5], [51], [4]. So far, it is largely unknown how correlated external input affects the dynamics of local cortical networks and alters correlations in their neural activity.

In this article, we investigate how the magnitude and the cell-type specificity of correlations depend on i) the connectivity in local cortical networks of finite size and ii) the level of correlations in external inputs. Existing theories of correlations in cortical networks are not sufficient to address these questions as they either do not incorporate correlated external input [35], [17], [29], [28], [31] or assume infinitely large networks [24]. Lindner et al. [37] studied the responses of finite populations of spiking neurons receiving correlated external input, but described inhibitory feedback by a global compound process.

Our work builds on the existing theory of correlations in stochastic binary networks [35], a well-established model in the neuroscientific community [42], [24]. This model has the advantage of requiring only elementary mathematical methods for its analytical treatment. We employ the same network structure used in the work by Renart et al. [24], which relates the mechanism of recurrent decorrelation to the fast tracking of external signals (see [52] for a recent review). This choice enables us to reconsider the explanation of decorrelation by negative feedback [17], originally shown for networks of leaky integrate-and-fire neurons, and to compare it to the findings of Renart et al. In fact, the motivation for the choice of the model arose from the review process of [17], during which both the reviewers and the editors encouraged us to elucidate the relation of our work to that of Renart et al. in a separate subsequent manuscript. The present work delivers this comparison.

We show here that the results presented in [17] for the leaky integrate-and-fire model are in qualitative agreement with those in networks of binary neurons. The formal relationship between spiking models and the binary neuron model is established in [53]. In particular, for weak correlations it can be shown that both models map to the Ornstein-Uhlenbeck process, with one important difference: for spiking neurons the effective white noise is additive on the output side of the neuron, while for binary neurons the effective noise is low-pass filtered, or equivalently additive on the input side.

The remainder of the manuscript is organized as follows: In "Methods", we develop the theory of correlations in recurrent random networks of excitatory and inhibitory cells driven by fluctuating input from an external population of finite size. We account for the fluctuations in the synaptic input to each cell, which effectively linearize the hard threshold of the neurons [54], [24]. We further include the resulting finite-size correlations in the established mean-field description [42], [54] to increase the accuracy of the theory. In "Results", we first show in "Correlations are driven by intrinsic and external fluctuations" that correlations in recurrent networks are caused not only by the externally imposed correlated input, but also by intrinsically generated fluctuations of the local populations. We demonstrate that the external drive causes an overall shift of the correlations, but that their relative magnitude is mainly determined by the intrinsically generated fluctuations. In "Cancellation of input correlations", we revisit the earlier reported phenomenon of the suppression of correlations between input currents to pairs of cells [24] and show that it is a direct consequence of the suppression of fluctuations on the population level [17]. In "Limit of infinite network size" we consider the strong coupling limit of the theory, where the network size goes to infinity, to recover earlier results for inhomogeneous connectivity [24] and to extend these results to homogeneous connectivity. Subsequently, in "Influence of connectivity on the correlation structure", we investigate to what extent the reported structure of correlations is a generic feature of balanced networks and isolate the parameters of the connectivity determining this structure.
Finally, in “Discussion”, we summarize our results and their implications for the interpretation of experimental data, discuss the limitations of the theory, and provide an outlook of how the improved theory may serve as a further building block to understand processing of correlated activity.

Methods

Networks of binary neurons

We denote the activity of neuron i at time t as n_i(t). The state n_i of a binary neuron is either 0 or 1, where 1 indicates activity, 0 inactivity [35], [55], [24]. The state of the network of N such neurons is described by a binary vector n = (n_1, ..., n_N) ∈ {0, 1}^N. We denote the mean activity as m_i = ⟨n_i⟩; the (zero time lag) covariance of the activities of a pair (i, j) of neurons is defined as c_ij = ⟨δn_i δn_j⟩, where δn_i = n_i − m_i is the deviation of neuron i's activity from expectation and the average ⟨·⟩ is over time and realizations of the stochastic activity.

The neuron model shows stochastic transitions (at random points in time) between the two states 0 and 1 controlled by transition probabilities, as illustrated in Figure 1. Using asynchronous update [56], in each infinitesimal interval δt each neuron in the network has the probability δt/τ to be chosen for update [57], where τ is the time constant of the neuronal dynamics. An equivalent implementation draws the time points of update independently for all neurons. For a particular neuron, the sequence of update points has exponentially distributed intervals with mean duration τ, i.e. update times form a Poisson process with rate 1/τ. We employ the latter implementation in the globally time-driven [58] spiking simulator NEST [59], and use a discrete time resolution for the intervals. The stochastic update constitutes a source of noise in the system. Given the k-th neuron is selected for update, the probability to end in the up-state (n_k = 1) is determined by the gain function F_k(n), which possibly depends on the activity n of all other neurons. The probability to end in the down state (n_k = 0) is 1 − F_k(n). This model has been considered earlier [60], [35], [55], and here we follow the notation introduced in the latter work.
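A minimal sketch of this asynchronous update scheme is given below, for a single random network whose size, weights, and threshold are illustrative choices and not parameters of this study:

```python
import numpy as np

rng = np.random.default_rng(0)

N, tau, T = 200, 10.0, 2000.0     # network size, update time constant, duration
theta = 0.0                        # hard threshold (illustrative)
J = rng.choice([0.3, -0.3], size=(N, N)) / np.sqrt(N)   # illustrative weights
np.fill_diagonal(J, 0.0)           # no self-coupling

n = rng.integers(0, 2, size=N).astype(float)   # binary state vector

# Update times of each neuron form a Poisson process with rate 1/tau, so the
# next update in the whole network occurs after an Exp(tau/N) waiting time
# and hits a uniformly chosen neuron.
t = 0.0
while t < T:
    t += rng.exponential(tau / N)
    k = rng.integers(N)                      # neuron chosen for update
    h = J[k] @ n                             # summed synaptic input ("field")
    n[k] = 1.0 if h >= theta else 0.0        # gain function F_k = H(h - theta)

print("mean activity after the run:", n.mean())
```

The only source of randomness during the run is the update schedule itself, in line with the model description above.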

Figure 1. State transitions of a binary neuron.


Each neuron is updated at random time points; intervals are i.i.d. exponential with mean duration τ, so the rate of updates per neuron is 1/τ. The probability of neuron k to end in the up-state (n_k = 1) is determined by the gain function F_k(n), which potentially depends on the states n of all neurons in the network. The up-transitions are indicated by black arrows. The probability for the down state (n_k = 0) is given by the complementary probability 1 − F_k(n), indicated by gray arrows.

The stochastic system is completely characterized by the joint probability distribution p(n, t) in all N binary variables n_1, ..., n_N. An example is the recurrent random network considered here (Figure 2). Knowing the joint probability distribution, arbitrary moments can be calculated, among them pairwise correlations. Here we are only concerned with the stationary state of the network. A stationary solution p(n) implies that for each state a balance condition holds, so that the incoming and outgoing probability fluxes sum up to zero. The occupation probability of the state is then constant. We denote as n_{k+} the state where the k-th neuron is active (n_k = 1), and as n_{k−} the state where neuron k is inactive (n_k = 0). Since in each infinitesimal time interval at most one neuron can change state, for each given state n there are N possible transitions (each corresponding to one of the N neurons changing state). The sum of the probability fluxes into the state and out of the state must compensate to zero [61], so

0 = Σ_{k=1}^{N} (2n_k − 1) [ F_k(n) p(n_{k−}) − (1 − F_k(n)) p(n_{k+}) ]    (1)
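For a network small enough to enumerate all 2^N states, this flux balance can be verified directly by building the transition-rate matrix of the asynchronous dynamics and checking that its null vector balances all probability fluxes. The three-neuron inhibitory ring below is an arbitrary illustrative choice, not a network from this study:

```python
import itertools
import numpy as np

# Tiny illustrative network: an inhibitory ring of N = 3 binary neurons with a
# negative threshold (intrinsically active unless inhibited).
N, tau, theta = 3, 1.0, -0.5
J = np.array([[0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0],
              [-1.0, 0.0, 0.0]])

states = [np.array(s, dtype=float) for s in itertools.product([0, 1], repeat=N)]
idx = {tuple(s): i for i, s in enumerate(states)}

# Generator Q of the asynchronous dynamics: neuron k is updated at rate 1/tau
# and flips whenever the update outcome H(h_k - theta) differs from n_k.
Q = np.zeros((2 ** N, 2 ** N))
for i, s in enumerate(states):
    for k in range(N):
        F = 1.0 if J[k] @ s >= theta else 0.0   # probability of up-state n_k = 1
        flipped = s.copy()
        flipped[k] = 1.0 - s[k]
        rate = (F if flipped[k] == 1.0 else 1.0 - F) / tau
        Q[idx[tuple(flipped)], i] += rate       # flux out of state i into flipped
        Q[i, i] -= rate

# The stationary distribution is the null vector of Q; Q p = 0 is exactly the
# statement that incoming and outgoing fluxes cancel for every state.
w, v = np.linalg.eig(Q)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()
print("max residual flux:", np.max(np.abs(Q @ p)))
```

The printed residual is numerically zero, confirming that all per-state fluxes balance in the stationary state.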

Figure 2. Recurrent local network of two populations of excitatory (E) and inhibitory (I) neurons driven by a common external population (X).


The external population X delivers stochastic activity to the local network. The local network is a recurrent Erdős-Rényi random network with homogeneous synaptic weights J_αβ coupling neurons in population β to neurons in population α, for α ∈ {E, I} and β ∈ {E, I, X}, and the same parameters for all neurons. There are N neurons in the excitatory and N neurons in the inhibitory population. The connection probability is p, and each neuron in population α receives the same numbers K_αE and K_αI of excitatory and inhibitory synapses. The size N_X of the external population determines the amount of shared input received by each pair of cells in the local network. The neurons are modeled as binary units with a hard threshold θ.

From this equation we derive expressions for the first moments ⟨n_i⟩ and the second moments ⟨n_i n_j⟩ by multiplying with n_i n_j and summing over all possible states n, which leads to

0 = Σ_n n_i n_j Σ_{k=1}^{N} (2n_k − 1) [ F_k(n) p(n_{k−}) − (1 − F_k(n)) p(n_{k+}) ]

Note that the term denoted F_k(n) does not depend on the state of neuron k. We use the notation n_{\k} for the state of the network excluding neuron k, i.e. n_{\k} = (n_1, ..., n_{k−1}, n_{k+1}, ..., n_N). Separating the terms in the sum over k into those with k ∉ {i, j} and the two terms with k = i and k = j, we obtain

0 = Σ_{k ∉ {i,j}} Σ_{n_{\k}} n_i n_j Σ_{n_k ∈ {0,1}} (2n_k − 1) [ F_k(n_{\k}) p(n_{k−}) − (1 − F_k(n_{\k})) p(n_{k+}) ] + (k = i term) + (k = j term)

where we obtained the first term by explicitly summing over the state n_k of the updated neuron (i.e. using Σ_{n_k ∈ {0,1}} and evaluating the sum). This first sum obviously vanishes. The remaining terms are of identical form with the roles of i and j interchanged. We hence only consider the first of them and obtain the other by symmetry. The first term simplifies to

Σ_{n_{\i}} n_j [ F_i(n_{\i}) ( p(n_{i+}) + p(n_{i−}) ) − p(n_{i+}) ] = ⟨n_j F_i(n)⟩ − ⟨n_i n_j⟩

where we denote as ⟨f(n)⟩ = Σ_n f(n) p(n) the average of a function f with respect to the distribution p(n). Taken together with the mirror term ⟨n_i F_j(n)⟩ − ⟨n_i n_j⟩, we arrive at two conditions, one for the first (j = i) and one for the second (j ≠ i) moment

⟨n_i⟩ = ⟨F_i(n)⟩
2 ⟨n_i n_j⟩ = ⟨n_i F_j(n)⟩ + ⟨n_j F_i(n)⟩  for i ≠ j    (2)

Considering the covariance c_ij = ⟨δn_i δn_j⟩ with centralized variables δn_i = n_i − ⟨n_i⟩, for i ≠ j one arrives at

2 c_ij = ⟨δn_i F_j(n)⟩ + ⟨δn_j F_i(n)⟩    (3)

This equation is identical to eq. 3.9 in [35], to eqs. 3.12 and 3.13 in [55], and to eqs. (19)–(22) in [24, supplement].

Mean-field solution

Starting from (1) for the general time-dependent case p(n, t), a similar calculation as the one resulting in (2) for ⟨n_i⟩ leads to

τ (d/dt) ⟨n_i⟩ = −⟨n_i⟩ + ⟨F_i(n)⟩

where we used n_i² = n_i, valid for binary variables. As in [24] we now assume a particular form for the gain function and for the coupling between neurons by specifying

F_i(n) = H(h_i − θ)
h_i = Σ_{j=1}^{N} J_ij n_j
H(x) = 1 for x ≥ 0,  H(x) = 0 for x < 0

where J_ij is the incoming synaptic weight from neuron j to neuron i, H is the Heaviside function, and θ is the threshold of the activation function. For positive θ the neuron gets activated only if sufficient excitatory input is present, and for negative θ the neuron is intrinsically active even in the absence of excitatory input. We denote by h_i the summed synaptic input to the neuron, sometimes also called the "field". Because n_i² = n_i, the variance of a binary variable is a_i = m_i (1 − m_i). We now aim to solve (2) for the case j = i, i.e. the equation m_i = ⟨F_i(n)⟩. In general, the right hand side depends on the fluctuations of all neurons projecting to neuron i. An exact solution is therefore complicated. However, for sufficiently irregular activity in the network we assume the neurons to be approximately independent. Further assume that in a network of homogeneous populations α (same parameters θ and τ for all neurons, and same statistics of the incoming connections, i.e. same number K_αβ and strength J_αβ of incoming connections from neurons in a given population β) the mean activity of an individual neuron can be represented by the population mean m_α. The mean input to a neuron in population α then is

μ_α = ⟨h_i⟩ = Σ_β K_αβ J_αβ m_β    (4)

We assumed in the last step identical synaptic amplitudes J_αβ for a synapse from a neuron in population β to a neuron in population α. So the input to each neuron in population α has the same mean μ_α. As a first approximation, if the mean activity in the network is not saturated, i.e. neither m_α ≈ 0 nor m_α ≈ 1, mapping this activity back by the inverse gain function to the input, μ_α must be close to the threshold value, so

Σ_β K_αβ J_αβ m_β ≈ θ    (5)

This relation may be solved for m_E and m_I to obtain a coarse estimate of the activity in the network [42], [54]. In mean-field approximation we assume that the fluctuations δn_j of the activities of individual neurons around their mean are mutually independent, so that the fluctuations δh_i of h_i are, in turn, caused by a sum of independent random variables and hence the variances add up to the variance σ_α² of the field

σ_α² = Σ_β K_αβ J_αβ² a_β  with  a_β = m_β (1 − m_β)    (6)

As h_i is a sum of typically thousands of synaptic inputs, it approaches a Gaussian distribution N(μ_α, σ_α²) with mean μ_α and variance σ_α². In this approximation the mean activity in the network is the solution of

m_α = ∫ H(x − θ) N(x; μ_α, σ_α²) dx = ½ erfc( (θ − μ_α) / (√2 σ_α) )    (7)

This equation needs to be solved self-consistently with (4) and (6) by numerical or graphical methods in order to obtain the stationary activity, because μ_α and σ_α themselves depend on the mean activities m_β. We here employ the algorithms hybrd and hybrj from the MINPACK package, implemented in scipy (version 0.9.0) [62] as the function scipy.optimize.fsolve.
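A sketch of such a self-consistent solution with scipy.optimize.fsolve (which wraps MINPACK's hybrd/hybrj) is given below; the in-degrees, weights, threshold, and external drive are illustrative placeholders, not parameters taken from this study:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erfc

# Illustrative parameters for two local populations (E, I) plus a constant
# external drive; none of these numbers are taken from the study.
theta = 1.0
K = np.array([[80, 20], [80, 20]])        # in-degrees K_ab for b in {E, I}
J = np.array([[0.1, -0.4], [0.1, -0.4]])  # synaptic weights J_ab
K_X, J_X, m_X = 80, 0.1, 0.2              # external in-degree, weight, rate

def mu_sigma(m):
    m = np.clip(m, 0.0, 1.0)              # keep probing points in [0, 1]
    mu = (K * J) @ m + K_X * J_X * m_X    # mean input, cf. eq. (4)
    var = (K * J ** 2) @ (m * (1.0 - m)) + K_X * J_X ** 2 * m_X * (1.0 - m_X)  # eq. (6)
    return mu, np.sqrt(var)

def self_consistency(m):
    mu, sigma = mu_sigma(m)
    # eq. (7): m_a = 1/2 erfc((theta - mu_a) / (sqrt(2) sigma_a))
    return m - 0.5 * erfc((theta - mu) / (np.sqrt(2.0) * sigma))

m = fsolve(self_consistency, x0=np.array([0.1, 0.1]))
print("self-consistent mean activities (E, I):", m)
```

Since both populations here receive statistically identical input, the solver returns the same rate for E and I, as the homogeneity assumption in the text predicts.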

Linearized equation for correlations and susceptibility

In general, the term ⟨δn_i F_j(n)⟩ in (3) couples moments of arbitrary order, resulting in a moment hierarchy [55]. Here we only determine an approximate solution. Since the single synaptic amplitudes J_jk are small, we linearize the effect of a single synaptic input. We apply the linearization to the two terms of the form ⟨δn_i F_j(n)⟩ on the right hand side of (3). In the recurrent network, the activity of each neuron in the vector n may be correlated to the activity of any other neuron. Therefore, the input h_j sensed by neuron j not only depends on n_i directly, but also indirectly through the correlations of n_i with any of the other neurons l that project to neuron j. We need to take this dependence into account in the linearization. Considering the effect of one particular input n_l explicitly one gets

⟨δn_i F_j(n)⟩ = ⟨δn_i n_l ( F_j(n)|_{n_l=1} − F_j(n)|_{n_l=0} )⟩ + ⟨δn_i F_j(n)|_{n_l=0}⟩
             ≈ c_il ⟨F_j(n)|_{n_l=1} − F_j(n)|_{n_l=0}⟩ + ⟨δn_i F_j(n)|_{n_l=0}⟩

The first term c_il ⟨F_j(n)|_{n_l=1} − F_j(n)|_{n_l=0}⟩ already contains two factors δn_i and δn_l, so it takes into account second order moments. Performing the expansion for the next input would yield terms corresponding to correlations of higher order, which are neglected here. This amounts to the assumption that the remaining fluctuations in h_j are independent of n_i and n_l, and we again approximate them by a Gaussian random variable x with mean μ_j − J_jl m_l and variance σ_j², so that ⟨F_j(n)|_{n_l=1} − F_j(n)|_{n_l=0}⟩ ≈ J_jl S(μ_j, σ_j). Here we used the smallness of the synaptic weight J_jl and replaced the difference by the derivative S(μ, σ) = ∂/∂μ ∫ H(x − θ) N(x; μ, σ²) dx, which has the form of a susceptibility. Using the explicit expression for the Gaussian integral (7), the susceptibility is exactly

S(μ_α, σ_α) = ∂m_α / ∂μ_α = 1/(√(2π) σ_α) exp( −(θ − μ_α)² / (2 σ_α²) )    (8)
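The susceptibility is just the derivative of the Gaussian integral (7) with respect to μ, which can be confirmed numerically; the values of θ, μ, and σ below are arbitrary:

```python
import numpy as np
from scipy.special import erfc

theta = 1.0   # illustrative threshold

def mean_activity(mu, sigma):
    # eq. (7): m = 1/2 erfc((theta - mu) / (sqrt(2) sigma))
    return 0.5 * erfc((theta - mu) / (np.sqrt(2.0) * sigma))

def susceptibility(mu, sigma):
    # eq. (8): the Gaussian density of the input evaluated at the threshold
    return np.exp(-(theta - mu) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

mu, sigma, eps = 0.4, 0.7, 1e-6
numeric = (mean_activity(mu + eps, sigma) - mean_activity(mu - eps, sigma)) / (2.0 * eps)
print(numeric, susceptibility(mu, sigma))
```

The central difference and the closed form (8) agree to high precision.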

The same expansion holds for the remaining inputs to cell j and for the mirror term ⟨δn_j F_i(n)⟩. With S_i = S(μ_i, σ_i), the equation for the pairwise correlations (3) in linear approximation takes the form

2 c_ij = S_j Σ_l J_jl c_il + S_i Σ_l J_il c_jl  for i ≠ j, with c_ll = a_l    (9)

corresponding to eq. (6.8) in [35] and eqs. (31)–(33) in [24, supplement]. Note, however, that the linearization used in [35] relies on the smoothness of the gain function due to additional local noise, whereas here and in [24, supplement] a Heaviside gain function is used and only the existence of noise generated by the network itself justifies the linearization. If the input to each neuron is homogeneous, i.e. μ_i = μ_α and σ_i = σ_α for all neurons i in population α, a structurally similar equation connects the correlations c_αβ averaged over disjoint pairs of neurons belonging to two (possibly identical) populations α, β with the population averaged variances a_α

2 c_αβ = Σ_γ ( S_α K_αγ J_αγ c_γβ + S_β K_βγ J_βγ c_αγ ) + S_α (K_αβ/N_β) J_αβ a_β + S_β (K_βα/N_α) J_βα a_α    (10)

In deriving the last expression, we replaced variances of individual neurons and correlations between individual pairs by their respective population averages and counted the number of connections. This equation corresponds to eqs. (9.14)–(9.16) in [35] (which lack, however, the external population X; note also the typo in the first term in line 2 of eq. (9.16)) and eqs. (36) in [24, supplement]. Written in matrix form, (10) takes the form (24) stated in the results section of the present article, where we defined

W_αβ = S_α K_αβ J_αβ ,   B_αβ = S_α (K_αβ/N_β) J_αβ a_β    (11)

The explicit solution of the system of equations in the second line of (24) is

vec(C) = ( 2 I − I ⊗ W − W ⊗ I )^{−1} vec( B + Bᵀ ) ,  with C = (c_αβ)    (12)
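Because the system is linear in the unknown covariances, it can be written as the Sylvester equation 2C = WC + CWᵀ + B + Bᵀ and solved with standard linear algebra; the entries of W and B below are arbitrary illustrative values, not parameters derived from a concrete network of this study:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Illustrative 2x2 (E, I) example: effective couplings W_ab = S_a K_ab J_ab
# and source terms B_ab (placeholder numbers).
W = np.array([[0.8, -1.0],
              [0.8, -1.0]])
B = np.array([[0.02, -0.01],
              [0.02, -0.01]])

# 2C = WC + CW^T + B + B^T   <=>   (W - I) C + C (W^T - I) = -(B + B^T)
I2 = np.eye(2)
C = solve_sylvester(W - I2, W.T - I2, -(B + B.T))

residual = np.max(np.abs(2 * C - W @ C - C @ W.T - (B + B.T)))
print("covariances:\n", C, "\nresidual:", residual)
```

scipy.linalg.solve_sylvester solves AX + XB = Q, which is why the equation is first rearranged into that convention; the vectorized inverse in (12) is the equivalent Kronecker-product form of the same solution.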

Mean-field theory including finite-size correlations

The mean-field solution presented in "Mean-field solution" assumes that correlations among the neurons in the network are negligible. This assumption enters the expression (6) for the variance of the input to a neuron. Having determined the actual magnitude of the correlations in (24), we are now able to state a more accurate approximation in which we take these correlations into account, modifying the expression for the variance of the field h_i to

σ_α² = Σ_β K_αβ J_αβ² a_β + Σ_β Σ_γ K_αβ K_αγ J_αβ J_αγ c_βγ    (13)

This correction suggests an iterative scheme: Initially we solve the mean-field equation (7) assuming c_αβ = 0 (hence σ_α² given by (6)). In each step of the iteration we then calculate the correlations by (24), compute the mean-field solution of (7) and the susceptibility S_α (8), taking into account the correlations (13) determined in the previous step. These steps are iterated until the solution (m_α, c_αβ) converges. We use this approach to determine the correlation structure in Figure 3, where we iterated until the solution became invariant up to a small residual absolute difference. A comparison of the distribution of the total synaptic input h at the end of the iteration with a Gaussian distribution with parameters μ_α and σ_α² is shown in Figure 3D.
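A toy version of this iteration, reduced to a single population with scalar mean m and pairwise covariance c, can be sketched as follows; all parameters, including the constant external drive, are illustrative assumptions:

```python
import numpy as np
from scipy.special import erfc

# Toy single-population reduction (illustrative parameters): N inhibitory
# neurons, K recurrent inputs of weight J each, plus a constant external
# drive contributing mean mu_ext and variance var_ext.
theta, K, J, N = 1.0, 100, -0.05, 1000
mu_ext, var_ext = 1.5, 0.1

def gauss_mean(mu, sigma):
    # eq. (7)
    return 0.5 * erfc((theta - mu) / (np.sqrt(2.0) * sigma))

def susceptibility(mu, sigma):
    # eq. (8)
    return np.exp(-(theta - mu) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

m, c = 0.2, 0.0
for _ in range(300):
    a = m * (1.0 - m)                        # variance of a single binary neuron
    mu = K * J * m + mu_ext                  # mean field, cf. eq. (4)
    var = K * J ** 2 * a + K * (K - 1) * J ** 2 * c + var_ext   # cf. eq. (13)
    sigma = np.sqrt(var)
    m = 0.8 * m + 0.2 * gauss_mean(mu, sigma)   # damped rate update
    S = susceptibility(mu, sigma)
    # scalar analogue of the pairwise-correlation equation:
    # 2c = 2 S K J c + 2 S (K/N) J a  =>  c = S (K/N) J a / (1 - S K J)
    c = S * (K / N) * J * a / (1.0 - S * K * J)

print("self-consistent rate and pairwise covariance:", m, c)
```

In this inhibition-dominated toy setting the converged pairwise covariance comes out small and negative, in line with the decorrelating effect of negative feedback discussed above.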

Figure 3. Correlations in a network of three populations as illustrated in Figure 2 in dependence of the size N_X of the external population.


Each neuron in population α ∈ {E, I} receives K_E randomly drawn excitatory inputs with weight J_E, K_I randomly drawn inhibitory inputs of weight J_I and K_X external inputs of weight J_X (homogeneous random network with fixed in-degree, connection probability p). A Correlations averaged over pairs of neurons within the local network (22). Dots indicate results of direct simulation averaged over pairs of neurons. Curves show the analytical result (24). The point DC shows the correlation structure emerging if the drive from the external population is replaced by a constant value which provides the same mean input as the original external drive. B Correlations between neurons within the local network and the external population averaged over pairs of neurons (same labeling as in A). C Correlation between the inputs to a pair of cells in the network decomposed into the contributions due to shared inputs (gray, eq. 25) and due to correlations in the presynaptic activity (light gray, eq. 26). Dashed curves and St. Andrew's crosses show the contribution due to external inputs, solid curves and dots show the contribution from local inputs. The sum of all components is shown by the black dots and curve. Curves are theoretical results based on (24), (25), and (26); symbols are obtained from simulation. D Probability distribution of the fluctuating input h to a single neuron in the excitatory population. Dots show the histogram obtained from simulation. The gray curve is the prediction of a Gaussian distribution obtained from mean-field theory neglecting correlations, with mean and variance given by (4) and (6), respectively. The black curve takes correlations in the afferent signals into account and has a variance given by (13).
Other parameters: simulation resolution, synaptic delay, interval of activity measurement, threshold θ of the neurons, and time constant τ of the inter-update intervals.

Influence of inhomogeneity of in-degrees

In the previous sections we assumed the number of incoming connections to be the same for all neurons. Studying a random network in its original Erdős–Rényi [63] sense, the number of synaptic inputs Inline graphic to a neuron Inline graphic from population Inline graphic is a binomially distributed random number. As a consequence, the time-averaged activity differs among neurons. Since each neuron Inline graphic samples a random subset of inputs from a given population Inline graphic, we can assume that the realization of Inline graphic is independent of the realization of the time-averaged activity of the inputs from population Inline graphic. These two contributions to the variability of the mean input Inline graphic therefore add up. The number of incoming connections to a neuron in population Inline graphic follows a binomial distribution

$P(K = k) \;=\; \binom{N}{k}\, p^{k}\,(1-p)^{N-k},$

where Inline graphic is the connection probability and Inline graphic the size of the sending population. The mean value is as before Inline graphic, where we denote the expectation value with respect to the realization of the connectivity as Inline graphic. The variance of the in-degree is hence

$\operatorname{Var}(K) \;=\; N p\,(1-p) \;=\; \bar{K}\,(1-p).$

In the following we adapt the results from [54], [24] to the present notation. The contribution of the variability of the number of synapses to the variance of the mean input is Inline graphic. The contribution from the distribution of the mean activities can be expressed by the variance of the mean activity defined as

graphic file with name pcbi.1003428.e249.jpg

The Inline graphic independently drawn inputs hence contribute Inline graphic, as the variances of the Inline graphic terms add up. So together we have [54, eq. 5.5–5.6]

graphic file with name pcbi.1003428.e253.jpg

Using Inline graphic we obtain

graphic file with name pcbi.1003428.e255.jpg (14)

The latter expression differs from [54, eq. 5.7] only in the term Inline graphic that is absent in the work of van Vreeswijk and Sompolinsky, because they assumed the number of synapses to be Poisson distributed in the limit of sparse connectivity [54, Appendix, (A.6)] (also note that their Inline graphic corresponds to our Inline graphic). The expression (14) is identical to [24, supplement, eq. (25)].
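The law-of-total-variance argument above (a random in-degree combined with randomly drawn mean activities of the inputs) can be checked numerically. The following sketch uses illustrative values for the connection probability, synaptic weight, and activity statistics; none of them are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, J = 10000, 0.1, 0.2      # illustrative population size, connectivity, weight
m_bar, s_m = 0.15, 0.04        # mean and std of the time-averaged input activities

trials = 200000
K = rng.binomial(N, p, size=trials)          # binomially distributed in-degrees
# per trial, the summed input of K i.i.d. activities (Gaussian approximation)
S = rng.normal(K * m_bar, np.sqrt(np.maximum(K, 1)) * s_m)
mu = J * S                                   # mean input to a neuron

# prediction: Var(mu) = J^2 ( K_bar * Var(m) + m_bar^2 * K_bar * (1 - p) )
K_bar = N * p
var_pred = J**2 * (K_bar * s_m**2 + m_bar**2 * K_bar * (1 - p))
print(mu.var(), var_pred)
```

The agreement rests only on the law of total variance, not on the Gaussian approximation used for the sum.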

Since the variance of a binary signal with time-averaged activity Inline graphic is Inline graphic, the population-averaged variance is

graphic file with name pcbi.1003428.e261.jpg (15)

So the sum of Inline graphic such (uncorrelated) signals contributes to the fluctuation of the input as

graphic file with name pcbi.1003428.e263.jpg (16)

The contribution due to the variability of the number of synapses Inline graphic can be neglected in the limit of large networks [24]. With the time-averaged activity of a single cell with mean input Inline graphic and variance Inline graphic given by (7), Inline graphic, the distribution of activity in the population is

graphic file with name pcbi.1003428.e268.jpg (17)

The mean activity of the whole population is

graphic file with name pcbi.1003428.e269.jpg (18)

because the penultimate line is a convolution of two Gaussian distributions, whose means and variances add up. The second moment of the population activity is

graphic file with name pcbi.1003428.e270.jpg (19)

These expressions are identical to [24, supplement, eqs. (26), (27)]. The system of equations (4), (14), (16), (18), and (19) can be solved self-consistently. We use the algorithms Inline graphic and Inline graphic of the MINPACK package, implemented in scipy (version 0.9.0) [62] as the function Inline graphic. This yields the self-consistent solutions for Inline graphic and Inline graphic, from which the distribution of time-averaged activity (17), shown in Figure 4F, can be obtained.
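The self-consistent solution of a mean-field equation of this type can be sketched with scipy's MINPACK-based root finder. The toy below is a single population with constant excitatory drive and recurrent inhibition; all parameters are illustrative, not the paper's, and the error-function gain stands in for the gain function of eq. (7).

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erfc

# hypothetical one-population sketch: constant external drive plus
# K recurrent inhibitory synapses of weight J
K, J = 100, -0.1
mu_ext, var_ext, theta = 2.0, 0.5, 1.0

def gain(mu, sigma):
    # probability that a Gaussian input exceeds threshold, cf. eq. (7)
    return 0.5 * erfc((theta - mu) / (np.sqrt(2.0) * sigma))

def residual(m):
    mu = mu_ext + K * J * m                               # mean input
    sigma = np.sqrt(var_ext + K * J**2 * m * (1.0 - m))   # input fluctuations
    return m - gain(mu, sigma)

m_star, = fsolve(residual, x0=0.2)   # MINPACK hybrid method via scipy
print(m_star)
```

Because the gain is monotonically decreasing in the activity here, the fixed point is unique and the root finder converges from any reasonable starting value.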

Figure 4. Activity in a network of Inline graphic binary neurons as described in [24, their Fig. 2], with Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic, Inline graphic.

Figure 4

Number Inline graphic of synaptic inputs binomially distributed as Inline graphic, with connection probability Inline graphic. A Population averaged activity (black Inline graphic, gray Inline graphic, light gray Inline graphic). Analytical prediction (5) for the mean activities Inline graphic (dashed horizontal line) and numerical solution of mean field equation (7) (solid horizontal line). B Cross correlation between excitatory neurons (black curve), between inhibitory neurons (gray curve), and between excitatory and inhibitory neurons (light gray curve) obtained from simulation. St. Andrew's Crosses show the theoretical prediction from [24, supplement, eqs. 38,39] (prediction yields Inline graphic, so only one cross is visible). Dots show the theoretical prediction (24). The plus symbol shows the prediction for the correlation Inline graphic when terms proportional to Inline graphic and Inline graphic are set to zero. C Correlation between the input currents to a pair of excitatory neurons. Contribution due to pairwise correlations Inline graphic (black curve) and due to shared input Inline graphic (gray curve). Symbols show the theoretical predictions based on [24] (crosses) and based on (24) (dots). D Similar to B, but showing the correlations between external neurons and neurons in the excitatory and inhibitory population. E Fluctuating input Inline graphic averaged over the excitatory population (black), separated into contributions from excitatory synapses Inline graphic (gray) and from inhibitory synapses Inline graphic (light gray). F Distribution of time averaged activity obtained by direct simulation (symbols) and analytical prediction (17) using the numerically evaluated self-consistent solution for the first Inline graphic and second moments Inline graphic, Inline graphic (19). Duration of simulation Inline graphic, mean activity Inline graphic, other parameters as in Figure 3.

Results

Our aim is to investigate the effect of recurrence and external input on the magnitude and structure of cross-correlations between the activities in a recurrent random network, as defined in “Networks of binary neurons”. We employ the established recurrent neuronal network model of binary neurons in the balanced regime [42]. The binary dynamics has the advantage of being more amenable to analytical treatment than spiking dynamics, and a method to calculate the pairwise correlations exists [35]. The choice of binary dynamics moreover renders our results directly comparable to the recent findings on decorrelation in such networks [24]. Our model consists of three populations of neurons, one excitatory and one inhibitory population which together represent the local network, and an external population providing additional excitatory drive to the local network, as illustrated in Figure 2. The external population may either be conceived as representing input into the local circuit from remote areas or as representing sensory input. The external population contains Inline graphic neurons, which are pairwise uncorrelated and have a stochastic activity with mean Inline graphic. Each neuron in population Inline graphic within the local network draws Inline graphic connections randomly from the finite pool of Inline graphic external neurons. Inline graphic therefore determines the number of shared afferents received by each pair of cells from the external population, with on average Inline graphic common synapses. In the extreme case Inline graphic all neurons receive exactly the same input, whereas for large Inline graphic the fraction of shared external input approaches Inline graphic. The common fluctuating input received from the finite-sized external population hence provides a signal imposing pairwise correlations, the amount of which is controlled by the parameter Inline graphic.

Correlations are driven by intrinsic and external fluctuations

To explain the correlation structure observed in a network with external inputs (Figure 2), we extend the existing theory of pairwise correlations [35] to include the effect of externally imposed correlations. The global behavior of the network can be studied with the help of the mean-field equation (7) for the population-averaged mean activity Inline graphic

graphic file with name pcbi.1003428.e316.jpg (20)

where the fluctuations of the input Inline graphic to a neuron in population Inline graphic are to good approximation Gaussian with the moments

graphic file with name pcbi.1003428.e319.jpg (21)
graphic file with name pcbi.1003428.e320.jpg

To determine the average activities in the network, the mean-field equation (20) needs to be solved self-consistently, as the right-hand side depends on the mean activities Inline graphic through (21), as explained in “Mean-field theory including finite-size correlations”. Here Inline graphic denotes the number of connections from population Inline graphic to Inline graphic, and Inline graphic their average synaptic amplitude. Once the mean activity in the network has been found, we can determine the structure of correlations. For simplicity we focus on the zero-time-lag correlation, Inline graphic, where Inline graphic is the deflection of neuron Inline graphic's activity from baseline and Inline graphic is the variance of neuron Inline graphic's activity. Starting from the master equation for the network of binary neurons, in “Methods” we re-derive, for completeness and consistency of notation, the self-consistent equation that connects the cross covariances Inline graphic averaged over pairs of neurons from populations Inline graphic and Inline graphic and the variances Inline graphic averaged over neurons from population Inline graphic

graphic file with name pcbi.1003428.e336.jpg (22)
graphic file with name pcbi.1003428.e337.jpg

The resulting inhomogeneous system of linear equations reads [35]

graphic file with name pcbi.1003428.e338.jpg (23)

Here Inline graphic measures the effective linearized coupling strength from population Inline graphic to population Inline graphic. It depends on the number of connections Inline graphic from population Inline graphic to Inline graphic, their average synaptic amplitude Inline graphic, and the susceptibility Inline graphic of neurons in population Inline graphic. The susceptibility Inline graphic given by (8) quantifies the influence of fluctuations in the input to a neuron in population Inline graphic on its output. Inline graphic depends on the working point Inline graphic of the neurons in population Inline graphic. The autocorrelations Inline graphic, Inline graphic and Inline graphic are the inhomogeneity in the system of equations, so they drive the correlations, as pointed out earlier [35]. This is in line with the linear theories [17], [30] for leaky integrate-and-fire model neurons, where cross-correlations are proportional to the auto-correlations; the system of equations (23) is identical to [35, eqs. (9.14)–(9.16)]. Note that this description holds for finite-sized networks. With the symmetry Inline graphic, (23) can be written in matrix form as

graphic file with name pcbi.1003428.e357.jpg (24)

The explicit forms of the matrices Inline graphic are given in (11). This system of linear equations can be solved by elementary methods. From the structure of the equations it follows that the correlations between the external input and the activity in the network, Inline graphic and Inline graphic, are independent of the other correlations in the network. They are solely determined by the solution of the system of equations in the second line of (24), driven by the fluctuations of the external drive Inline graphic. The correlations among the neurons within the network are given by the solution of the first system in (24). They are hence driven by two terms: the fluctuations of the neurons within the network, proportional to Inline graphic and Inline graphic, and the correlations between the external population and the neurons in the network, Inline graphic and Inline graphic.
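Once the matrices in (11) are assembled, solving a small inhomogeneous linear system of this kind is a one-liner. The sketch below uses placeholder matrices; the real entries depend on the effective couplings and autocorrelations of the paper and are not reproduced here.

```python
import numpy as np

# placeholder effective-coupling matrix A and inhomogeneity b; in the paper
# these would be assembled from the linearized couplings and the
# autocorrelations entering (24)
A = np.array([[ 0.2, -0.3,  0.0],
              [ 0.1, -0.2,  0.1],
              [ 0.0,  0.0, -0.1]])
b = np.array([0.01, 0.005, 0.002])   # source terms driving the correlations

# the unknown correlations solve (I - A) c = b
c = np.linalg.solve(np.eye(3) - A, b)
print(c)
```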

The second line of (24) shows that all correlations depend on the size Inline graphic of the external population. Since the number Inline graphic of randomly drawn afferents per neuron from this population is constant, the mean number of shared inputs to a pair of neurons is Inline graphic. In the extreme case Inline graphic on the left of Figure 3 all neurons receive exactly identical input. If the recurrent connectivity were absent, we would hence have perfectly correlated activity within the local network; the covariance between two neurons would be equal to their variance Inline graphic, in this particular network Inline graphic. Figure 3A shows that the covariance in the recurrent network is much smaller, on the order of Inline graphic. The reason is the recently reported mechanism of decorrelation [24], explained by the negative feedback in inhibition-dominated networks [17]. Increasing the size of the external population decreases the amount of shared input, as shown in Figure 3C. In the limit where the external drive is replaced by a constant value (visualized as point “Inline graphic”), the external drive consequently does not contribute to correlations in the network. Figure 3A shows that the relative position of the three curves does not change with Inline graphic. The overall offset, however, changes. This can be understood by inspecting the analytical result (24): the solution of this system of linear equations is a superposition of two contributions. One is due to the externally imposed fluctuations, proportional to Inline graphic; the other is due to fluctuations generated within the local network, proportional to Inline graphic and Inline graphic. Varying the size of the external population only changes the external contribution, causing the variation in the offset, while the internal contribution, causing the splitting between the three curves, remains constant.
In the extreme case Inline graphic (Inline graphic), we still observe a similar structure. The slightly larger splitting is due to the reduced variance Inline graphic in the single neuron input, which consequently increases the susceptibility Inline graphic (8).

Figure 3D shows the probability distribution of the input Inline graphic to a neuron in population Inline graphic. The histogram is well approximated by a Gaussian. The first two moments of this Gaussian are Inline graphic and Inline graphic given by (21), if correlations among the afferents are neglected. This approximation deviates from the result of direct simulation. Taking the correlations among the afferents into account affects the variance in the input according to (13). The latter approximation is a better estimate of the input statistics, as shown in Figure 3D. This improved estimate can be accounted for in the solution of the mean-field equation (20), which in turn affects the correlations via the susceptibility Inline graphic. Iterating this procedure until convergence, as explained in “Mean-field theory including finite-size correlations”, yields the semi-analytical results presented in Figure 3.
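The iteration over the mean-field equations can be sketched as a damped fixed-point scheme for a small multi-population network. The connectivity numbers and weights below are hypothetical, and for brevity the sketch omits the correlation correction to the input variance discussed above (which would enter through (13)).

```python
import numpy as np
from scipy.special import erfc

# hypothetical connectivity: rows = receiving (E, I), columns = sending (E, I, X)
K = np.array([[400., 100., 400.],
              [400., 100., 400.]])
J = np.array([[0.1, -0.5, 0.1],
              [0.1, -0.5, 0.1]])
theta, m_ext = 1.0, 0.1            # threshold and fixed external activity

m = np.array([0.1, 0.1])           # initial guess for (m_E, m_I)
for _ in range(500):               # damped fixed-point iteration of eq. (20)
    m_all = np.append(m, m_ext)                     # external activity is fixed
    mu = K * J @ m_all                              # eq. (21): mean input
    var = K * J**2 @ (m_all * (1 - m_all))          # eq. (21): input variance
    m_new = 0.5 * erfc((theta - mu) / np.sqrt(2 * var))
    m = 0.9 * m + 0.1 * m_new                       # damping for stability
print(m)
```

The damping factor tames the oscillatory updates that a plain fixed-point iteration would produce when the effective feedback is strong.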

Cancellation of input correlations

For strongly coupled networks in the limit of large network size, previous work [24], [52] derived a balance equation for the correlations between pairs of neurons. The expressions for the correlations are approximate at finite network size and become exact for infinitely large networks. The authors show that the resulting structure of correlations amounts to a suppression of the correlations between the input currents to a pair of cells and that the population-averaged activity closely follows the fluctuations imposed by the external drive, known as fast tracking [42]. Here we revisit these three observations - the correlation structure, the input correlation, and fast tracking - from a different viewpoint, providing an explanation based on the suppression of population rate fluctuations by negative feedback [17].

Figure 4A shows the population activities in a network of three populations for fixed numbers of neurons Inline graphic and otherwise identical parameters as in [24, their Fig. 2]. Moreover, we distributed the number of incoming connections Inline graphic per neuron according to a binomial distribution as in the original publication. The deflections of the excitatory and the inhibitory population partly resemble those of the external drive to the network, but part of the fluctuations is independent of it. Our theoretical result for the correlation structure (24) is in line with this observation: the fluctuations in the network are not only driven by the external input (proportional to Inline graphic), but also by the fluctuations generated within the local populations (proportional to Inline graphic and Inline graphic), so the tracking cannot be perfect in finite-sized networks.

We now consider the fluctuations in the input averaged over all neurons Inline graphic belonging to a particular population Inline graphic, Inline graphic. We can decompose the input Inline graphic to the population Inline graphic into contributions from excitatory (local and external) and from inhibitory cells, Inline graphic and Inline graphic, respectively, where we used the shorthand Inline graphic. As shown in Figure 4E, the contributions of excitation and inhibition cancel each other so that the total input fluctuates close to the threshold (Inline graphic) of the neurons: the network is in the balanced state [42]. Moreover, this cancellation not only holds for the mean value, but also for fast fluctuations, which are consequently reduced in the sum Inline graphic compared to the individual components Inline graphic and Inline graphic (Figure 4E).

We next show that this suppression of fluctuations directly implies a relation for the correlation Inline graphic between the inputs to a pair Inline graphic of individual neurons. There are two distinct contributions to this correlation Inline graphic, one due to common inputs shared by the pair of neurons (both neurons Inline graphic assumed to belong to population Inline graphic)

graphic file with name pcbi.1003428.e409.jpg (25)

and one due to the correlations between afferents

graphic file with name pcbi.1003428.e410.jpg (26)

Figure 4C shows these two contributions to be of opposite sign but approximately the same magnitude, as already shown in [24, supplement] and in [17]. Figure 3C shows a further decomposition of the input correlation into contributions due to the external sources and due to connections from within the local network. The sum of all components is much smaller than each individual component. This cancellation is equivalent to small fluctuations in the population-averaged input Inline graphic, because

graphic file with name pcbi.1003428.e412.jpg (27)

where in the second step we used the general relation between the covariance Inline graphic among two population averaged signals Inline graphic and Inline graphic, the population-averaged variance Inline graphic, and the pairwise averaged covariances Inline graphic, which reads [17, cf. eq. (1)]

graphic file with name pcbi.1003428.e418.jpg (28)

We have therefore shown that the cancellation of the contribution of shared input Inline graphic with the contribution due to the correlations among cells Inline graphic is equivalent to a suppression of the fluctuations in the population-averaged input signal to the population Inline graphic.
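The relation (28) between the variance of a population-averaged signal, the population-averaged variance, and the pairwise-averaged covariance is a matrix identity and can be verified directly for any covariance matrix; the random matrix below is only an example.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
# random covariance matrix: C = B B^T guarantees positive semidefiniteness
B = rng.standard_normal((N, N)) / np.sqrt(N)
C = B @ B.T

a_bar = np.mean(np.diag(C))                 # population-averaged variance
c_bar = C[~np.eye(N, dtype=bool)].mean()    # pairwise-averaged covariance

# variance of the population-averaged signal, directly from the matrix
var_pop_mean = np.ones(N) @ C @ np.ones(N) / N**2
# the single-population form of eq. (28)
identity = a_bar / N + (N - 1) / N * c_bar
print(var_pop_mean, identity)
```

The check is exact (no sampling involved): summing all matrix entries separates into N diagonal and N(N-1) off-diagonal terms.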

This suppression of fluctuations in the population-averaged input is a consequence of the overall negative feedback in these networks [17]: a fluctuation Inline graphic of the population averaged input Inline graphic causes a response in network activity which is coupled back with a negative sign, counteracting its own cause and hence suppressing the fluctuation Inline graphic. Expression (27) is an algebraic identity showing that, as a consequence, correlations between the total inputs to a pair of cells must also be suppressed. Qualitatively this property can be understood by inspecting the mean-field equation (7) for the population-averaged activities, where we linearized the gain function Inline graphic around the stationary mean-field solution to obtain

$\tau\,\frac{d}{dt}\,\delta\mathbf{m} \;=\; -\,\delta\mathbf{m} \;+\; W\,\delta\mathbf{m} \;+\; \boldsymbol{\xi}(t)$ (29)

Here the noise term qualitatively describes the fluctuations caused by the stochastic update process and the external drive (see [53] for the appropriate treatment of the noise). After transformation into the coordinate system of eigenvectors Inline graphic (with eigenvalue Inline graphic) of the effective connectivity matrix Inline graphic, each component fulfills the differential equation

$\tau\,\frac{d}{dt}\,\delta z_k \;=\; \left(\lambda_k - 1\right)\,\delta z_k \;+\; \xi_k(t)$

For stability the eigenvalues must satisfy Inline graphic. In the example of the Inline graphic network shown in Figure 4 we have the two eigenvalues

graphic file with name pcbi.1003428.e433.jpg (30)

which in the case of identical susceptibility Inline graphic for all populations can be expressed in terms of the synaptic weights

graphic file with name pcbi.1003428.e435.jpg (31)

where in the second line we inserted the numerical values of Figure 4. The fluctuations Inline graphic are hence suppressed, so the contributions Inline graphic to the fluctuations on the input side are small. This explains why fluctuations of Inline graphic are small in networks stabilized by negative feedback. This argument also shows why the suppression of input correlations does not rely on a balance between excitation and inhibition; it is also observed in purely inhibitory networks of leaky integrate-and-fire neurons [17, cf. text following eq. (21) therein] and of binary neurons [52, eq. (30)], where the overall negative feedback suppresses population fluctuations Inline graphic in exactly the same manner, as the only appearing eigenvalue in this case is negative. Figure 5 shows the correlations in a purely inhibitory network without any external fluctuating drive. In this network the neurons are autonomously active due to a negative threshold Inline graphic, which, by the cancellation argument Inline graphic, was chosen to obtain a mean activity of about Inline graphic. Pairwise correlations in the finite-sized network follow from (23) to be negative,

graphic file with name pcbi.1003428.e443.jpg (32)

and approach Inline graphic in the limit of strong coupling, as also shown in [52, eq. 30]. The contributions to the input correlation follow from (25) and (26) as

graphic file with name pcbi.1003428.e445.jpg (33)

so that for strong negative feedback Inline graphic the contribution due to correlations approaches Inline graphic. In this limit the two contributions cancel each other as in the inhibition-dominated network with excitation and inhibition. Note, however, that the presence of externally imposed fluctuations is not required for the mechanism of cancellation by negative feedback: the negative feedback also suppresses purely network-generated fluctuations. For finite coupling we have Inline graphic, so the total currents are always positively correlated.
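The decorrelation by purely inhibitory feedback can be illustrated with a toy asynchronous simulation. This is not the paper's protocol: the network is far smaller than in Figure 5, a smooth logistic gain is assumed for the stochastic update, and all parameters are illustrative. The pairwise covariance is extracted from the population activity via the single-population form of relation (28).

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 200, 20                     # illustrative sizes, much smaller than Fig. 5
J = -1.0 / np.sqrt(K)              # inhibitory synaptic amplitude
a_target, beta = 0.3, 2.0          # target activity; logistic gain steepness
# cancellation argument: threshold chosen so the gain yields a_target on average
theta = K * J * a_target - np.log(a_target / (1 - a_target)) / beta

# fixed in-degree: each neuron draws K distinct presynaptic partners
pre = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
targets = [[] for _ in range(N)]   # outgoing adjacency for incremental updates
for i in range(N):
    for j in pre[i]:
        targets[j].append(i)
targets = [np.array(t, dtype=int) for t in targets]

n = rng.random(N) < a_target                            # initial binary state
h = np.array([J * n[pre[i]].sum() for i in range(N)])   # input fields

samples = []
for step in range(400 * N):        # asynchronous stochastic updates
    i = rng.integers(N)
    p_on = 1.0 / (1.0 + np.exp(-beta * (h[i] - theta)))
    new = rng.random() < p_on
    if new != n[i]:                # flip: update the fields of all targets
        h[targets[i]] += J * (1.0 if new else -1.0)
        n[i] = new
    if step % N == 0 and step > 100 * N:   # discard transient, sample per sweep
        samples.append(n.copy())

S = np.array(samples, dtype=float)
a_bar = S.mean()
var_i = S.var(axis=0).mean()                  # population-averaged variance
var_pop = S.mean(axis=1).var()                # variance of population activity
c_bar = (var_pop - var_i / N) * N / (N - 1)   # pairwise covariance via eq. (28)
print(a_bar, c_bar, var_i)
```

Despite each pair sharing on average a fraction K/N of its inputs, the measured pairwise covariance comes out far smaller than the single-neuron variance, in line with the suppression by negative feedback.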

Figure 5. Suppression of correlations by purely inhibitory feedback in absence of external fluctuations.

Figure 5

Activity in a network of Inline graphic binary inhibitory neurons with synaptic amplitudes Inline graphic. Each neuron receives Inline graphic randomly drawn inputs (fixed in-degree) with Inline graphic. A Population averaged activity. Numerical solution of mean field equation (7) (solid horizontal line). B Cross covariance between inhibitory neurons. Theoretical result (32) shown as dot. St. Andrew's Cross indicates the leading order term Inline graphic. C Correlation between the input currents to a pair of inhibitory neurons. The black curve is the contribution due to pairwise correlations Inline graphic, the gray curve is the contribution of shared input Inline graphic. Symbols show the theoretical expectations (33) based on the leading-order term (crosses) and on the full solution (32) (dot). Threshold of neurons Inline graphic.

An interesting special case is a network with homogeneous connectivity, as studied in “Correlations are driven by intrinsic and external fluctuations”, where Inline graphic and Inline graphic, shown in Figure 6. In this symmetric case there is only one negative eigenvalue Inline graphic. The other eigenvalue is Inline graphic, so fluctuations are only mildly suppressed in direction Inline graphic. However, on the input side of the neurons, these fluctuations are not seen, since their contribution to the input field is weighted by the vanishing eigenvalue Inline graphic. Another consequence of the vanishing eigenvalue is that the system can freely fluctuate along the eigendirection Inline graphic. Consequently the tracking of the external signal is much weaker in this case, as evidenced in Figure 6A.

Figure 6. Activity in a network of Inline graphic binary neurons with synaptic amplitudes Inline graphic, Inline graphic depending exclusively on the type of the sending neuron (Inline graphic or Inline graphic).

Figure 6

Each neuron receives Inline graphic randomly drawn inputs (fixed in-degree, Inline graphic). A Population averaged activity (black Inline graphic, gray Inline graphic, light gray Inline graphic). Analytical prediction (5) for the mean activities Inline graphic (dashed horizontal line) and numerical solution of mean field equation (7) (solid horizontal line). B Cross covariance between excitatory neurons (black), between inhibitory neurons (gray), and between excitatory and inhibitory neurons (light gray). Theoretical results (24) shown as dots. St. Andrew's Crosses indicate the theoretical prediction of leading order in Inline graphic (43). C Correlation between the input currents to a pair of excitatory neurons. The black curve is the contribution due to pairwise correlations Inline graphic, the gray curve is the contribution of shared input Inline graphic. The symbols show the theoretical expectation (25) and (26) based on (43) (crosses) and based on (24) (dots). D Similar to B, but showing the correlations between external neurons and neurons in the excitatory and inhibitory population. Note that both theories yield Inline graphic, so for each theory ((43) crosses, (24) dots) only the symbol for Inline graphic is visible. E Contributions Inline graphic (gray) due to excitatory synapses and Inline graphic (light gray) due to inhibitory synapses to the input Inline graphic averaged over all excitatory neurons. Duration of simulation Inline graphic, mean activity Inline graphic, Inline graphic, other parameters as in Figure 3.

It is easy to see that the cancellation condition (27) does not uniquely determine the structure of correlations in an Inline graphic network, i.e. the structure of correlations in a finite network is not uniquely determined by Inline graphic. This is shown in Figure 4B, illustrating as an example the correlation structure predicted in the limit of infinite network size and perfect tracking [24, supplement, eqs. 38–39], which fulfills Inline graphic exactly, because this correlation structure can alternatively be derived starting from the condition for perfect tracking Inline graphic. The predicted structure does not coincide with the results obtained by direct simulation of the finite network. By construction and by virtue of (27) this correlation structure, however, still fulfills the cancellation condition on the input side, as visualized in Figure 4C. We show in “Limit of infinite network size” below that the deviations from direct simulation are due to the theory being strictly valid only in the limit of infinite network size, neglecting the contribution of fluctuations of the local populations (Inline graphic, Inline graphic), as they appear in (24). Formally this is apparent from [24, eq. (2)] and [24, supplement eq. (40–41)], stating that the solution for the correlations is equivalent to the network fluctuations being predominantly caused by the external input, also reflected in the expression Inline graphic [24, supplement eq. (38–39)]. This can be demonstrated explicitly by setting Inline graphic and Inline graphic in (24), resulting in a similar prediction for Inline graphic, as shown in Figure 4B (plus symbol). The remaining deviation between the theories is due to the different susceptibilities Inline graphic used by the two approaches. The full theory (24) predicts the correct correlation structure independent of the connectivity matrix.
In summary, the cancellation condition imposes a constraint on the structure of correlations but is not sufficient as a unique determinant.

The distribution of the in-degree in Figure 4 is an additional source of variability compared to the case of fixed in-degree. It causes a distribution of the mean activity of the neurons in the network, as shown in Figure 4F. The shape of the distribution can be assessed analytically by self-consistently solving a system of equations for the first Inline graphic (18) and second moment Inline graphic (19) of the rate distribution [54], as described in “Influence of inhomogeneity of in-degrees”. The resulting second moments Inline graphic (Inline graphic by simulation) and Inline graphic (Inline graphic by simulation) are small compared to the mean activity Inline graphic. For the prediction of the covariances shown in Figure 4B–D we employed the semi-analytical self-consistent solution to determine the variances Inline graphic. The difference from the approximate value Inline graphic is, however, small for low mean activity.

Limit of infinite network size

To relate the finite-size correlations presented in the previous sections to earlier studies on the dominant contribution to correlations in the limit of infinitely large networks [24], we here take the limit Inline graphic. For non-homogeneous connectivity, we recover the earlier result [24] in “Inhomogeneous connectivity”. In “Homogeneous connectivity” we show that the correlations converge to a different limit than what would be expected from the idea of fast tracking.

Starting from (10) we follow [24, supplement] and introduce the covariances between population-averaged activities as Inline graphic, which leads to

graphic file with name pcbi.1003428.e508.jpg (34)

The general solution of the continuous Lyapunov equation stated in the last line can be obtained by projecting onto the set of left-sided eigenvectors of Inline graphic (see e.g. [35] eq. 6.14). Alternatively the system of linear equations (34) may be written explicitly as

graphic file with name pcbi.1003428.e510.jpg (35)

The solution of the latter equation is given by (12), so Inline graphic. We observe that the right-hand side of the first line in (35) again contains two source terms: those corresponding to fluctuations caused by the external drive (proportional to Inline graphic) and those due to fluctuations generated within the network (proportional to Inline graphic or Inline graphic). This motivates our definition of the two contributions Inline graphic and Inline graphic as

graphic file with name pcbi.1003428.e517.jpg (36)
graphic file with name pcbi.1003428.e518.jpg (37)

which allows us to write the full solution of (35) as Inline graphic. We use the superscripts Inline graphic and Inline graphic to distinguish the driving sources of the fluctuations coming from outside the network (Inline graphic driven by Inline graphic) and coming from within the network (Inline graphic driven by Inline graphic and Inline graphic).
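A continuous Lyapunov equation of the kind stated above can be solved numerically with a standard solver. The drift and source matrices below are placeholders (the real ones follow from (34)/(35)); the sketch only demonstrates the mechanics.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# hypothetical effective connectivity W and noise source D (illustrative values)
W = np.array([[0.5, -1.0],
              [0.5, -1.0]])
A = W - np.eye(2)           # drift matrix of the linearized population dynamics
D = np.diag([0.02, 0.02])   # source term from the autocovariances

# stationary covariance C solves the Lyapunov equation  A C + C A^T = -D
C = solve_continuous_lyapunov(A, -D)
print(C)
```

Stability requires all eigenvalues of A to have negative real part (equivalently, eigenvalues of W below one), which the placeholder matrix satisfies.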

Inhomogeneous connectivity

In the following we assume inhomogeneous connectivity, meaning that the synaptic amplitudes not only depend on the type of the sending neuron but also on the receiving neuron, such that the matrix Inline graphic is invertible. In the limit of large networks with Inline graphic the solution (12) can be approximated as

graphic file with name pcbi.1003428.e529.jpg

where the definitions of Inline graphic and Inline graphic correspond to the ones of [24] if the susceptibility Inline graphic is the same for all populations. Solving the first system of equations (36) leads to

graphic file with name pcbi.1003428.e533.jpg

where we again assumed that Inline graphic and therefore neglected the Inline graphic in the sums on the diagonal of the matrix Inline graphic (35). Hence the covariance due to Inline graphic is

graphic file with name pcbi.1003428.e538.jpg (38)

The latter equation is the solution given in [24, supplement, eqs. (38)–(39)]. The form of the equation shows that this contribution is due to fluctuations of the population activity driven by the external input, as exhibited by the factor Inline graphic driving Inline graphic, from which the intrinsic contribution of the single-cell autocorrelations is subtracted. The quantities Inline graphic and Inline graphic contain the effect of the recurrence on these externally applied fluctuations and are independent of network size, so Inline graphic decays with Inline graphic, as shown in Figure 7A (dashed curve).

Figure 7. Scaling the network size to infinity.

Figure 7

Comparison of the solution of (24) (solid) to the contribution of the leading order in Inline graphic (dashed). The different pairs of covariances are gray-coded: black (Inline graphic), mid gray (Inline graphic), light gray (Inline graphic). A Network as in [24] with non-homogeneous synaptic coupling as in Figure 4. The dashed curve is given by the leading-order term Inline graphic (38) and [24, eqs. (38)–(39)] driven by external fluctuations; the dotted curve is the next-order term Inline graphic (37), driven by intrinsic fluctuations generated by the excitatory and inhibitory populations. The dashed curve is not shown for networks smaller than Inline graphic neurons as it assumes negative values. The relative error of the theory with respect to simulation at Inline graphic neurons is Inline graphic percent. The solid curve is the full solution of (24) Inline graphic. The relative error at Inline graphic neurons is Inline graphic percent. Symbols show direct simulations. B Network with homogeneous connectivity, as in Figure 6. Same symbol code as in A. Both contributions Inline graphic (36) and Inline graphic (37) show the same scaling (44). Note that for the parameters used here Inline graphic, so the only dashed curve shown is Inline graphic. Symbols indicate the results of direct simulations; vertical lines are included to guide the eye.

The second contribution Inline graphic given by the solution of (37) is driven by the intrinsically generated fluctuations. As the network size tends to infinity, this contribution vanishes faster than Inline graphic, because the coupling matrix grows as Inline graphic. The term Inline graphic is thus a correction to (38) of the order Inline graphic. This faster decay can be observed at large network sizes in Figure 7A (dotted curve). For finite networks of biologically realistic size, however, this term determines the structure of the correlations. Specifically, for the parameters chosen in [24], the contribution Inline graphic dominates in networks of up to about Inline graphic neurons (Figure 7A).

Homogeneous connectivity

In the previous section we showed that, in agreement with [24], the leading-order term Inline graphic dominates in the limit of infinitely large networks and yields practically useful results for random networks of Inline graphic neurons. In the following we extend the theory to homogeneous connectivity, where the synaptic weights only depend on the type of the sending neuron, i.e. all Inline graphic and Inline graphic are the same for all Inline graphic. The matrix

graphic file with name pcbi.1003428.e573.jpg (39)

is hence not invertible, so the theory in “Inhomogeneous connectivity” is not directly applicable. Note that, if one assumes fast tracking in this situation (which for inhomogeneous connectivity is a consequence of the correlation structure in the Inline graphic limit [24, eq. (2)]), the degenerate rows of the connectivity here yield

graphic file with name pcbi.1003428.e575.jpg (40)

Here the assumption leads to a wrong result if Inline graphic is naively inserted into equation (38) or, equivalently, into [24, supplement, eqs. (38)–(39)]. In particular, for the given parameters Inline graphic and with homogeneous activity (and Inline graphic), the cross covariances Inline graphic are predicted to approximately vanish, Inline graphic. This failure could have been anticipated from the observation that fast tracking does not hold in this case, as seen in Figure 6A. We therefore need to extend the theory for the Inline graphic limit to networks with homogeneous connectivity.

To this end we write out (24) explicitly for the homogeneous network using Inline graphic. In (24) we observe that Inline graphic and Inline graphic and introduce Inline graphic, Inline graphic, Inline graphic to obtain

graphic file with name pcbi.1003428.e588.jpg (41)
graphic file with name pcbi.1003428.e589.jpg (42)

For sufficiently large networks, we can neglect the term Inline graphic on the left-hand side of (41) to obtain

graphic file with name pcbi.1003428.e591.jpg

and hence the second equation, again neglecting the term Inline graphic on the left-hand side, leads to

graphic file with name pcbi.1003428.e593.jpg (43)

This result shows explicitly the two contributions to the correlations due to external fluctuations (Inline graphic) and due to intrinsic fluctuations (Inline graphic), respectively. In contrast to the case of inhomogeneous connectivity, both contributions decay as Inline graphic, so the external drive does not provide the leading contribution even in the limit Inline graphic. Note also that we may write this result in a form similar to that for inhomogeneous connectivity, as

graphic file with name pcbi.1003428.e598.jpg (44)
graphic file with name pcbi.1003428.e599.jpg
graphic file with name pcbi.1003428.e600.jpg

with Inline graphic given by (40). Here, Inline graphic has the same form as the solution [24, eqs. (38)–(39)] originating from external fluctuations, but Inline graphic is still a contribution of the same order of magnitude. The susceptibility Inline graphic has been eliminated from these expressions, so only structural parameters remain, analogous to the solution [24, eqs. (38)–(39)]. The two contributions Inline graphic and Inline graphic given by the non-approximate solutions of (36) and (37), respectively, are shown together with their sum and with results of direct simulations in Figure 7B. For the given network parameters, the contribution of intrinsic correlations dominates across all network sizes, because Inline graphic, as Inline graphic, and all Inline graphic and Inline graphic are approximately identical for Inline graphic. The splitting between the covariances of different types scales in proportion to the absolute value Inline graphic, so even at infinite network size the relative differences between the covariances remain the same.

The underlying reason for the qualitatively different scaling of the intrinsically generated correlations Inline graphic for homogeneous connectivity compared to Inline graphic for inhomogeneous connectivity is related to the vanishing eigenvalue of the effective connectivity matrix (39). The zero eigenvalue belongs to the eigenvector Inline graphic, meaning excitation and inhibition may freely fluctuate in this eigendirection without sensing any negative feedback through the connectivity, as reflected in the last line in (44). These fluctuations are driven by the intrinsically generated noise of the stochastic update process and hence contribute notably to the correlations in the network.
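The singularity of the homogeneous effective connectivity and the feedback-free direction can be illustrated in a few lines. In this sketch the in-degrees are absorbed into two illustrative weights (in the paper the eigenvector additionally involves the in-degrees and the susceptibility); the values are placeholders:

```python
import numpy as np

# Sketch: with homogeneous connectivity, the weights depend only on the
# sending population, so the effective coupling matrix has identical rows
# and is singular.  Weights are illustrative, with in-degrees absorbed.
w_E, w_I = 0.2, -1.0
W = np.array([[w_E, w_I],
              [w_E, w_I]])              # identical rows -> det(W) = 0

assert abs(np.linalg.det(W)) < 1e-12    # zero eigenvalue exists

# A fluctuation delta_m in the null space of W produces no feedback at all:
# both components are positive (in-phase), with the excitatory component
# larger, consistent with the observations in the text.
delta_m = np.array([-w_I, w_E])
assert np.allclose(W @ delta_m, 0.0)
```

Along `delta_m` the population activities drift without any restoring force from the coupling, so only the single-neuron relaxation damps these fluctuations.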

In summary, the two examples “Inhomogeneous connectivity” and “Homogeneous connectivity” are both inhibition-dominated (Inline graphic) networks that exhibit small correlations on the order Inline graphic at finite size Inline graphic. Only in the limit of infinitely large networks with inhomogeneous connectivity is Inline graphic the dominant contribution that can be related to fast and perfect tracking of the external drive. At finite network sizes, the contribution Inline graphic is generally not negligible and may be dominant. Therefore fast tracking cannot be the explanation of small correlations in these networks. Note that there is a difference in the line of argument used in the main text of [24] and its mathematical supplement: While the main text advocates fast tracking as the underlying mechanism explaining small correlations, in the mathematical supplement fast tracking is found as a consequence of the theory of correlations in the limit of infinite network size and under the stated prerequisites, in line with the calculation presented above.

Influence of connectivity on the correlation structure

A comparison of Figure 6B and Figure 4B shows that the structure of correlations is obviously different: in Figure 6B the structure is Inline graphic, whereas in Figure 4B the relation is Inline graphic. The only difference between these two networks lies in the coupling strengths Inline graphic and Inline graphic. In the following we derive a more complete picture of the determinants of the correlation structure. In order to identify the parameters that influence the fluctuations in these networks, it is instructive to study the mean-field equation for the population-averaged activities. Linearizing (20) for small deviations Inline graphic of the population-averaged activity Inline graphic from the fixed point Inline graphic, for large networks with Inline graphic the dominant term is proportional to the change of the mean Inline graphic, because the standard deviation Inline graphic is only proportional to Inline graphic. To linear order we hence have a coupled set of two differential equations (29), whose dynamics is determined by the two eigenvalues of the effective connectivity (30). Due to the presence of the leak term on the left-hand side of (29), the fixed-point rate is stable only if the real parts of the eigenvalues Inline graphic are both smaller than Inline graphic. In the network with identical input statistics for all neurons, the fluctuating input is characterized by the same mean and variance Inline graphic for each neuron. For homogeneous neuron parameters the susceptibility Inline graphic is hence the same for both populations Inline graphic. If, furthermore, the number of synaptic afferents is the same Inline graphic for all populations, the eigenvalues can be expressed by those of the original connectivity matrix as (31)

graphic file with name pcbi.1003428.e638.jpg

where we defined the two parameters Inline graphic and Inline graphic which control the location of the eigenvalues. In the left column of Figure 8 we keep Inline graphic, Inline graphic, and Inline graphic constant and vary Inline graphic, choosing the maximum value by the condition Inline graphic and the minimum value by the conditions Inline graphic and Inline graphic, leading to Inline graphic and Inline graphic, both fulfilled if Inline graphic. Varying Inline graphic in the right column of Figure 8, the bounds are given by the same conditions Inline graphic and Inline graphic, so Inline graphic, and by the condition that the larger eigenvalue stays below or equal to Inline graphic, so Inline graphic. In order for the network to maintain a similar mean activity, we choose the threshold of the neurons such that the cancellation condition Inline graphic is fulfilled for Inline graphic. The resulting average activity is close to this desired value of Inline graphic and agrees well with the analytical prediction (20), as shown in Figure 8A,B.

Figure 8. Connectivity structure determines correlation structure.

Figure 8

In the left column (A,C,E) Inline graphic is the independent variable, in the right column (B,D,F) Inline graphic. A,B Mean activity in the network as a function of the structural parameters Inline graphic and Inline graphic, respectively. C,D Correlations averaged over pairs of neurons. Dots are obtained from direct simulation; solid curves are given by the theory (24). E,F Eigenvalues (30) of the population-averaged connectivity matrix; solid curves show the real part, dashed curves the imaginary part.

The right-most point in both columns of Figure 8, where one eigenvalue vanishes (Inline graphic), results in the same connectivity structure. This is the case for the connectivity with the symmetry Inline graphic and Inline graphic (cf. Figure 6), because here the population-averaged connectivity matrix has two linearly dependent rows, hence a vanishing determinant and thus an eigenvalue Inline graphic. As observed in Figure 8C,D, at this point the absolute magnitude of the correlations is largest. This is intuitively clear, as the network has a degree of freedom in the direction of the eigenvector Inline graphic belonging to the vanishing eigenvalue Inline graphic. In this direction the system effectively does not feel any negative feedback, so the evolution is as if the connectivity were absent. Fluctuations in this direction are large and are only damped by the exponential relaxation of the neuronal dynamics, given by the left-hand side of (29). The time constant of these fluctuations is then solely determined by the time constant of the single neurons, as seen in Figure 6B. From the coefficients of the eigenvector we can further conclude that the fluctuations of the excitatory population are stronger by a factor Inline graphic than those of the inhibitory population, explaining why Inline graphic, and that both populations fluctuate in phase, so Inline graphic (Figure 8C,D, right-most point). Moving away from this point, panels C,D in Figure 8 both show that the magnitude of the correlations decreases. Comparing the temporal structures of Figure 6B and Figure 4B shows that the time scale of the fluctuations decreases as well. The two structural parameters Inline graphic and Inline graphic affect the eigenvalues of the connectivity in distinct ways. Changing Inline graphic merely shifts the real part of both eigenvalues, but leaves their relative distance constant, as seen in Figure 8E.
For smaller values of Inline graphic the coupling among excitatory neurons becomes weaker, so their correlations are reduced. At the left-most point in Figure 8C the coupling within the excitatory population vanishes, Inline graphic. Changing the parameter Inline graphic has a qualitatively different effect on the eigenvalues, as seen in Figure 8F. At Inline graphic, the two real eigenvalues merge, and for smaller Inline graphic they turn into a complex conjugate pair. At the left-most point Inline graphic, both couplings within the populations vanish, Inline graphic. The system then only has couplings from Inline graphic to Inline graphic and vice versa. The complex conjugate eigenvalues show that the population activity of the system has oscillatory solutions. This is also known as the PING (pyramidal-interneuron network gamma) mechanism of oscillations in the gamma range [64]. Panels C,D in Figure 8 show that for most connectivity structures the correlation structure is Inline graphic, in contrast to our previous finding [17], where we studied only the symmetric case (the right-most point), at which the correlation structure is Inline graphic. The comparison of the direct simulation to the theoretical prediction (24) in Figure 8C,D shows that the theory yields an accurate prediction of the correlation structure for all connectivity structures considered here.
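The transition from real to complex eigenvalues when the within-population couplings are removed can be sketched with a toy 2×2 effective connectivity. The matrix and the scaling factor below are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

# Sketch of how the eigenvalue structure changes with the within-population
# couplings.  We scale the diagonal (within-population) entries of an
# illustrative coupling matrix by a factor s in [0, 1]; the cross couplings
# E->I and I->E stay fixed.  All values are placeholders.
W0 = np.array([[ 0.8, -1.0],
               [ 0.9, -1.1]])

def eigs(s):
    W = W0.copy()
    W[0, 0] *= s                        # weaken E->E coupling
    W[1, 1] *= s                        # weaken I->I coupling
    return np.linalg.eigvals(W)

# With the diagonal intact, the eigenvalues are real; with no
# within-population coupling, only the E<->I loop remains and the
# eigenvalues form a complex conjugate pair, signalling oscillatory
# population dynamics (the PING mechanism mentioned above).
assert np.allclose(np.imag(eigs(1.0)), 0.0)
assert np.all(np.abs(np.imag(eigs(0.0))) > 0.0)
```

For this toy matrix the pure E↔I loop has eigenvalues ±i√(0.9), i.e. purely imaginary, so the linearized population dynamics oscillates.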

Discussion

The present work explains the observed pairwise correlations in a homogeneous random network of excitatory and inhibitory binary model neurons driven by an external population of finite size.

On the methodological side the work is similar to the approach of Renart et al. [24], which starts from the microscopic Glauber dynamics of binary networks with dense and strong synaptic coupling Inline graphic and derives a set of self-consistent equations for the second moment of the fluctuations in the network. As in the earlier work [24], we take into account the fluctuations due to the balanced synaptic noise in the linearization of the neuronal response [24], [65] rather than relying on noise intrinsic to each neuron, as in the work by Ginzburg and Sompolinsky [35]. Although the theory by Ginzburg and Sompolinsky [35] was explicitly derived for binary networks that are densely but weakly coupled, i.e. the number of synapses per neuron is Inline graphic and synaptic amplitudes scale as Inline graphic, identical equations result for the case of strong coupling, where the synaptic amplitudes decay more slowly than Inline graphic [24]. The reason why both weakly and strongly coupled networks are described by the same equations lies in the self-regulating property of binary neurons: their susceptibility (called Inline graphic in the present work) scales inversely with the fluctuations in the input, Inline graphic, such that Inline graphic and hence the correlations are independent of the synaptic amplitude Inline graphic [65]. A difference between the work of Ginzburg and Sompolinsky [35] and the work of Renart et al. [24] is, however, that the former authors assume all correlations to be equally small Inline graphic, whereas the latter show that the distribution of correlations is wider than its mean due to the variability in the connectivity, in particular the varying number of common inputs. The theory yields the dominant contribution to the mean value of this distribution, scaling as Inline graphic in the limit of infinite network size.
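The self-regulating property invoked above can be checked numerically. A minimal sketch, assuming the mean input sits exactly at threshold (so the susceptibility reduces to the Gaussian density prefactor) and that the input standard deviation is proportional to the synaptic amplitude J at fixed in-degree K and rate r; all values are illustrative:

```python
import math

# Sketch: for a binary neuron with Gaussian input, the susceptibility S is
# the Gaussian density of the input at threshold, so S scales as 1/sigma.
# If sigma is itself proportional to J (fixed in-degree and rates), the
# product J * S is independent of J.  Parameters are illustrative; we assume
# the working point sits at threshold (distance theta_offset = 0).
K, r, theta_offset = 1000, 0.1, 0.0

def susceptibility(J):
    sigma = J * math.sqrt(K * r * (1.0 - r))   # input std, proportional to J
    return (math.exp(-theta_offset**2 / (2 * sigma**2))
            / (math.sqrt(2 * math.pi) * sigma))

# J * S is the same for all synaptic amplitudes:
products = [J * susceptibility(J) for J in (0.01, 0.1, 1.0)]
assert all(abs(p - products[0]) < 1e-12 for p in products)
```

This is why, under the stated assumptions, weakly and strongly coupled networks obey the same correlation equations: the amplitude J cancels from the product J·S.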
Although the asynchronous state of densely coupled networks has been described earlier [42], [54] by a mean-field theory neglecting correlations, the main achievement of the work by Renart et al. [24] must be seen in demonstrating that the formal structure of the theory of correlations indeed admits a solution with low correlations of order Inline graphic and that such a solution is accompanied by the cancellation of correlations between the inputs to pairs of neurons. In particular, this state of small correlations can be achieved even though the contribution of shared afferents to the input correlations is of order Inline graphic in the strong-coupling limit, in contrast to the work of [35], where this contribution is of order Inline graphic. The authors of [24] employ an elegant scaling argument, taking the network size and hence the coupling to infinity, to obtain their results. In contrast, here we study these networks at finite size and obtain a theoretical prediction in good agreement with direct simulations over a large range of biologically relevant network sizes. We further extend the framework of correlations in binary networks by an iterative procedure that takes into account the finite-size fluctuations in the mean-field solution to determine the working point (mean activity) of the network. We find that the iteration converges to predictions for the covariance with higher accuracy than the previous method.

Equipped with these methods we investigate a network driven by correlated input due to shared afferents supplied by an external population. The analytical expressions for the covariances averaged over pairs of neurons show that correlations have two components that linearly superimpose, one caused by intrinsic fluctuations generated within the local network and one caused by fluctuations due to the external population. The size Inline graphic of the external population controls the strength of the correlations in the external input. We find that this external input causes an offset of all pairwise correlations, which decreases with increasing external population size in proportion to the strength of the external correlations (Inline graphic). The structure of correlations within the local network, i.e. the differences between correlations for pairs of neurons of different types, is mostly determined by the intrinsically generated fluctuations. These are proportional to the population-averaged variances Inline graphic and Inline graphic of the activity of the neurons in the local network. As a result, the structure of correlations is mostly independent of the external drive, and hence similar to the limiting case of an infinitely large external population Inline graphic or the case where the external drive is replaced by a DC signal with the same mean. For the other extreme, when the size of the external population equals the number of external afferents, Inline graphic, all neurons receive an exactly identical external signal. We show that the mechanism of decorrelation [24], [17] still holds for these strongly correlated external signals. The resulting correlation within the network is much smaller than expected given the amount of common input.

We proceed to re-investigate three observations in balanced random networks: fast tracking of external input signals [42], [54], the suppression of common-input correlations, and small pairwise correlations, to provide a view that is complementary to previous reports [24], [17], [52]. The lines of argument on these matters provided in the main text of [24] and in its mathematical supplement (as well as in [52]) differ. The main text starts from the observation that in large networks in the inhibition-dominated regime with an invertible connectivity matrix the activity exhibits fast tracking [24, eq. (2)]. The authors then argue that, consequently, positive correlations between excitatory and inhibitory synaptic currents are responsible for the decorrelation of network activity. The mathematical supplement, however, first derives the leading term of order Inline graphic for the pairwise correlations in the network in the limit of infinite network size [24, supplement, eqs. 38,39] and then shows that fast tracking and the cancellation of input correlations are both consequences of this correlation structure. The relation of fast tracking to the structure of correlations is a novel finding in [24, supplement, section 1.4] and is not contained in the original report on fast tracking [42], [54]. We here additionally show that the cancellation of correlations between the inputs to pairs of neurons is equivalent to a suppression of fluctuations of the population-averaged input. We further demonstrate how negative feedback suppresses these fluctuations. This argument is in line with the earlier explanation that correlations are suppressed by negative feedback on the population level [17]. Dominant negative feedback is a fundamental requirement for the network to stabilize its activity in the balanced state [42].
We further show that the cancellation of input correlations does not uniquely determine the structure of correlations; different structures of correlations lead to the same cancellation of correlations between the summed inputs. The cancellation of input correlations therefore only constitutes a constraint for the pairwise correlations in the network. This constraint is identically fulfilled if the network shows perfect tracking of external input, which is equivalent to completely vanishing input fluctuations [24]. We show that the correlation structure compatible with perfect tracking [24, supplement, eqs. 38,39] is generally different from the structure in finite-sized networks, although both fulfill the constraint imposed by the cancellation of input correlations.

Performing the limit Inline graphic we distinguish two cases. (i) For an invertible connectivity matrix, we recover the result of [24] that in the limit of infinite network size the correlations are dominated by tracking of the external signal and intrinsically generated fluctuations can be neglected; the resulting expressions for the correlations within the network [24, supplement, eqs. 38,39] lack the locally generated fluctuations, which decay faster than Inline graphic for invertible connectivity. However, the intermediate result [24, supplement, eqs. 31,33] is identical to [35, eq. 6.8] and to (9) and contains both contributions. The convergence of the correlation structure to the limiting theory appears to be slow: for the parameters given in [24], quantitative agreement is achieved only at around Inline graphic neurons. For the range of network sizes up to which a random network is typically considered a good model (Inline graphic neurons), the correlation structure is dominated by intrinsic fluctuations. (ii) For a singular matrix, resulting for example from statistically identical inputs to excitatory and inhibitory neurons, the contributions of external and intrinsic fluctuations both scale as Inline graphic. Hence the intrinsic contribution cannot be neglected even in the limit Inline graphic. At finite network size the observed structure of correlations generally contains contributions from both intrinsic and external fluctuations, still present in the intermediate result [24, supplement, eqs. 31,33] and in [35, eq. 6.8] and (9). In particular, the external contribution dominating in infinite networks with invertible connectivity may be negligible at finite network size. We therefore conclude that the mechanism determining the correlation structure in finite networks cannot be deduced from the limit Inline graphic and is not given by fast tracking of the external signal. Fast tracking is rather a consequence of negative feedback.

For the common but special choice of network connectivity where the synaptic weights depend only on the type of the source but not of the target neuron, i.e. Inline graphic and Inline graphic [44], we show that the locally generated fluctuations and correlations are elevated and that the activity only loosely tracks the external input. The resulting correlation structure is Inline graphic. To systematically investigate the dependence of the correlation structure on the network connectivity, it proves useful to parameterize the structure of the network by two measures that differentially control the location of the eigenvalues of the connectivity matrix. We find that over a wide parameter regime the correlations change quantitatively, but the correlation structure Inline graphic remains invariant. The qualitative comparison with the experimental observations of [51] hence only constrains the connectivity to lie within one or the other parameter regime.

The networks we study here are balanced networks in the original sense introduced in [42]: they are inhibition-dominated, and the balance of excitatory and inhibitory currents on the input side of a neuron arises as a dynamic phenomenon due to the dominance of negative feedback, which stabilizes the mean activity. A network with a balance of excitation and inhibition built into its connectivity, on the other hand, would correspond in our notation to setting Inline graphic for both receiving populations Inline graphic, assuming identical sizes of the excitatory and the inhibitory population. The network activity is then no longer stabilized by negative feedback, because the mean activities Inline graphic and Inline graphic can freely co-fluctuate, Inline graphic and Inline graphic, without affecting the input to other cells: Inline graphic is independent of Inline graphic. Mathematically this amounts to a two-fold degenerate vanishing eigenvalue of the effective connectivity matrix. The resulting strong fluctuations would have to be treated with methods different from those presented here and would lead to strong correlations.

The current work assumes that fluctuations are sufficiently small, restricting the expressions to asynchronous and irregular network states. Technically, this assumption enters in the form of two approximations: first, the summed input to a cell is replaced by a Gaussian fluctuating variable, valid only if pairwise correlations are weak; second, the effect of a single synapse on the outgoing activity of a neuron is approximated to linear order, allowing us to close the hierarchy of moments, as described in [55]. Throughout this work we show, in addition to the obtained approximate solutions, the results of simulations of the full non-linear system. Deviations from direct simulations are stronger at lower mean activity, when the synaptic input fluctuates in the non-linear part of the effective transfer function. The best agreement between theory and simulation is hence obtained for a mean population activity close to Inline graphic, where Inline graphic means all neurons are active.
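The second approximation and its range of validity can be sketched directly. Below, the effective transfer function of a binary neuron with Gaussian input is taken as Φ(μ) = ½ erfc((θ−μ)/(√2 σ)) and replaced by its tangent at the working point; the parameters are illustrative. Because the erfc curve is locally linear at its inflection point (Φ = 0.5) and curved in its tails, the linearization error is smaller at mid activity than at low activity:

```python
import math

# Sketch of the linearization of the effective transfer function of a
# binary neuron with Gaussian input.  Parameters are illustrative.
theta, sigma = 1.0, 0.5

def phi(mu):
    """Probability of being active given mean input mu."""
    return 0.5 * math.erfc((theta - mu) / (math.sqrt(2.0) * sigma))

def phi_lin(mu, mu0):
    """Tangent to phi at the working point mu0 (slope = Gaussian density)."""
    slope = (math.exp(-(theta - mu0)**2 / (2 * sigma**2))
             / (math.sqrt(2 * math.pi) * sigma))
    return phi(mu0) + slope * (mu - mu0)

dmu = 0.2                                # perturbation of the mean input
# Linearization error around phi = 0.5 (mu0 = theta) vs. a low-activity point:
err_mid = abs(phi(theta + dmu) - phi_lin(theta + dmu, theta))
err_low = abs(phi(0.2 + dmu) - phi_lin(0.2 + dmu, 0.2))
assert err_mid < err_low                 # worse agreement at low activity
```

This mirrors the statement above that theory and simulation agree best for a mean population activity near one half.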

For simplicity, in the major part of this work we consider networks in which neurons have a fixed in-degree. In large homogeneous random networks this is often a good approximation, because the mean number of connections is Inline graphic and its standard deviation Inline graphic declines relative to the mean. Taking into account distributed synapse numbers and the resulting distribution of the mean activity in Figure 4 and Figure 7A shows that the results are only marginally affected at low mean activity. The impact of the activity distribution on the correlation structure is more pronounced at higher mean activity, where the second moment of the activity distribution has a notable effect on the population-averaged variance.

The presented work is closely related to our previous work on the correlation structure in spiking neuronal networks [17] and was indeed triggered by the review process of the latter. In [17] we exclusively studied the symmetric connectivity structure, where excitatory and inhibitory neurons receive the same input on average. The results are qualitatively the same as those shown in Figure 6. One difference, though, is that the external input in [17] is uncorrelated, whereas here it originates from a common finite population. The cancellation condition for input correlations, also observed in vivo [50], holds for spiking networks as well as for the binary networks studied here. For both models, negative feedback constitutes the essential mechanism underlying the suppression of fluctuations at the population level. This can be explained by a formal relationship between the two models (see [53]).

Our theory presents a step towards an understanding of how correlated neuronal activity in local cortical circuits is shaped by recurrence and by inputs from other cortical and thalamic areas. For example, the correlation between membrane potentials of pairs of neurons in the somatosensory cortex of behaving mice is dominated by low-frequency oscillations during quiet wakefulness. If the animal starts whisking, these correlations decrease significantly, even if the sensory nerve fibers are cut, suggesting an internal change of brain state [5]. Our work suggests that such a dynamic reduction of correlation could come about by modulating the effective negative feedback in the network. A possible neural implementation is an increase of the tonic drive to inhibitory interneurons. This hypothesis is in line with the observed faster fluctuations in the whisking state [5]. Further work is needed to verify whether such a mechanism yields a quantitative explanation of the experimental observations.

The network where the number of incoming external connections per neuron equals the size of the external population (cf. Figure 3, Inline graphic) can be regarded as a setting where all neurons receive an identical incoming stimulus. The correlations between this signal and the responses of neurons in the local network (Figure 3C) are smaller than in an unconnected population without local negative feedback. This can formally be seen from (29), because negative eigenvalues of the recurrent coupling dampen the population response of the system. This suppression of correlations between stimulus and local activity hence implies weaker responses of single neurons to the driving signal. Recent experiments have shown that only a sparse subset of around 10 percent of the neurons in S1 of behaving mice responds to a sensory stimulus evoked by the active touch of a whisker against an object [4]. The subset of responding cells is determined by those neurons in which the cell-specific combination of activated excitatory and inhibitory conductances drives the membrane potential above threshold. Our work suggests that negative feedback mediated among the layer 2/3 pyramidal cells, e.g. through local interneurons, should effectively reduce their correlated firing. In a biological network the negative feedback arrives with a synaptic delay and effectively reduces the low-frequency content [17]. The response of the local activity is therefore expected to depend on the spectral properties of the stimulus. Intuitively, one expects responses to lock better to the stimulus for fast and narrow transients with high-frequency content. Further work is required to investigate this issue in more detail.

A large number of previous studies on the dynamics of local cortical networks focus on the effect of the local connectivity but ignore the spatio-temporal structure of external inputs by assuming that neurons in the local network are driven independently by external (often Poissonian) sources. Our study shows that the input correlations of pairs of neurons in the local network are only weakly affected by additional correlations caused by shared external afferents: even in the extreme case where all neurons in the network receive exactly identical external input, the input correlations are small and only slightly larger than those obtained when neurons receive uncorrelated external input (black curve in Figure 8C). One may therefore conclude that the approximation of uncorrelated external input is justified. In general, however, this conclusion may be hasty: tiny changes in synaptic-input correlations have drastic effects, for example, on the power and reach of extracellular potentials [34]. For the modeling of extracellular potentials, knowledge of the spatio-temporal structure of inputs from remote areas is therefore crucial.
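The weak dependence of input correlations on shared external drive can be illustrated with a linear sketch of the same flavor (hypothetical parameters, not the binary network studied here): every neuron receives the identical slowly varying external signal plus private noise, and the population feeds back onto itself through its mean activity. The negative feedback term tracks the common signal and largely cancels it in the summed input, so pairwise input correlations remain small despite fully shared external input:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, dt, tau = 20000, 100, 0.01, 1.0


def smooth(x):
    # approximately unit-variance, slowly varying signal
    # (50-step moving average of white noise, rescaled)
    return np.convolve(x, np.ones(50) / 50, mode="same") * np.sqrt(50)


s = smooth(rng.normal(size=T))  # identical external input to all neurons
xi = np.column_stack([smooth(rng.normal(size=T)) for _ in range(n)])  # private


def mean_input_corr(lam):
    """Simulate linear neurons with population feedback lam and return
    the mean pairwise correlation between the neurons' summed inputs
        h_i(t) = lam * mean(a) + s(t) + xi_i(t)."""
    a = np.zeros((T, n))
    h = np.zeros((T, n))
    for t in range(1, T):
        h[t - 1] = lam * a[t - 1].mean() + s[t - 1] + xi[t - 1]
        a[t] = a[t - 1] + dt / tau * (-a[t - 1] + h[t - 1])
    C = np.corrcoef(h[:-1].T)
    return C[np.triu_indices(n, k=1)].mean()


res = {}
for lam in (0.0, -10.0):
    res[lam] = mean_input_corr(lam)
    print(f"lambda={lam:+.1f}: mean input correlation = {res[lam]:.3f}")
```

Without feedback the shared signal and the private noise contribute equally, so the input correlation is near 0.5; with strong negative feedback the population mean approximately follows (s + mean(xi))/(1 - lam), and the feedback term cancels most of the shared component of the input. The cancellation thus arises from the suppression of population-level fluctuations, not from any property of the external signal itself.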

The theory of correlations in the presence of externally impinging signals is a required building block for studying correlation-sensitive synaptic plasticity [66] in recurrent networks. Understanding the structure of correlations imposed by an external signal is the first step towards predicting the connectivity patterns that result from ongoing synaptic plasticity sensitive to these correlations.

Acknowledgments

All simulations were carried out with NEST (http://www.nest-initiative.org).

Funding Statement

This work is partially supported by the Helmholtz Association: HASB and portfolio theme SMHB, the Next-Generation Supercomputer Project of MEXT, and EU grant 269921 (BrainScaleS). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Kilavik BE, Roux S, Ponce-Alvarez A, Confais J, Gruen S, et al. (2009) Long-term modifications in motor cortical dynamics induced by intensive practice. J Neurosci 29: 12653–12663
  • 2. Maldonado P, Babul C, Singer W, Rodriguez E, Berger D, et al. (2008) Synchronization of neuronal responses in primary visual cortex of monkeys viewing natural images. J Neurophysiol 100: 1523–1532
  • 3. Ito J, Maldonado P, Singer W, Grün S (2011) Saccade-related modulations of neuronal excitability support synchrony of visually elicited spikes. Cereb Cortex 21: 2482–2497
  • 4. Crochet S, Poulet JF, Kremer Y, Petersen CC (2011) Synaptic mechanisms underlying sparse coding of active touch. Neuron 69: 1160–1175
  • 5. Poulet J, Petersen C (2008) Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature 454: 881–885
  • 6. Salinas E, Sejnowski TJ (2001) Correlated neuronal activity and the flow of neural information. Nat Rev Neurosci 2: 539–550
  • 7. Abeles M (1982) Local Cortical Circuits: An Electrophysiological Study. Studies of Brain Function. Berlin, Heidelberg, New York: Springer-Verlag.
  • 8. Diesmann M, Gewaltig MO, Aertsen A (1999) Stable propagation of synchronous spiking in cortical neural networks. Nature 402: 529–533
  • 9. Izhikevich EM (2006) Polychronization: Computation with spikes. Neural Comput 18: 245–282
  • 10. Sterne P (2012) Information recall using relative spike timing in a spiking neural network. Neural Comput 24: 2053–2077
  • 11. Hebb DO (1949) The Organization of Behavior: A Neuropsychological Theory. New York: John Wiley & Sons.
  • 12. von der Malsburg C (1981) The correlation theory of brain function. Internal report 81-2, Department of Neurobiology, Max-Planck-Institute for Biophysical Chemistry, Göttingen, Germany.
  • 13. Bienenstock E (1995) A model of neocortex. Network: Comput Neural Systems 6: 179–224
  • 14. Singer W, Gray C (1995) Visual feature integration and the temporal correlation hypothesis. Annu Rev Neurosci 18: 555–586
  • 15. Tripp B, Eliasmith C (2007) Neural populations can induce reliable postsynaptic currents without observable spike rate changes or precise spike timing. Cereb Cortex 17: 1830–1840
  • 16. Zohary E, Shadlen MN, Newsome WT (1994) Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 370: 140–143
  • 17. Tetzlaff T, Helias M, Einevoll G, Diesmann M (2012) Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol 8: e1002596
  • 18. De la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A (2007) Correlation between neural spike trains increases with firing rate. Nature 448: 802–807
  • 19. Rosenbaum R, Josić K (2011) Mechanisms that modulate the transfer of spiking correlations. Neural Comput 23: 1261–1305
  • 20. Rosenbaum R, Rubin JE, Doiron B (2013) Short-term synaptic depression and stochastic vesicle dynamics reduce and shape neuronal correlations. J Neurophysiol 109: 475–484
  • 21. Bernacchia A, Wang XJ (2013) Decorrelation by recurrent inhibition in heterogeneous neural circuits. Neural Comput 25: 1732–1767
  • 22. Padmanabhan K, Urban NN (2010) Intrinsic biophysical diversity decorrelates neuronal firing while increasing information content. Nat Neurosci 13: 1276–1282
  • 23. Hertz J (2010) Cross-correlations in high-conductance states of a model cortical network. Neural Comput 22: 427–447
  • 24. Renart A, De La Rocha J, Bartho P, Hollender L, Parga N, et al. (2010) The asynchronous state in cortical circuits. Science 327: 587–590
  • 25. Shadlen MN, Newsome WT (1998) The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. J Neurosci 18: 3870–3896
  • 26. Tetzlaff T, Rotter S, Stark E, Abeles M, Aertsen A, et al. (2008) Dependence of neuronal correlations on filter characteristics and marginal spike-train statistics. Neural Comput 20: 2133–2184
  • 27. Kriener B, Tetzlaff T, Aertsen A, Diesmann M, Rotter S (2008) Correlations and population dynamics in cortical networks. Neural Comput 20: 2185–2226
  • 28. Pernice V, Staude B, Cardanobile S, Rotter S (2011) How structure determines correlations in neuronal networks. PLoS Comput Biol 7: e1002059
  • 29. Trousdale J, Hu Y, Shea-Brown E, Josić K (2012) Impact of network structure and cellular response on spike time correlations. PLoS Comput Biol 8: e1002408
  • 30. Helias M, Tetzlaff T, Diesmann M (2013) Echoes in correlated neural systems. New J Phys 15: 023002
  • 31. Pernice V, Staude B, Cardanobile S, Rotter S (2012) Recurrent interactions in spiking networks with arbitrary topology. Phys Rev E 85: 031916
  • 32. Bi G, Poo M (1998) Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci 18: 10464–10472
  • 33. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL (2009) Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. I. Input selectivity - strengthening correlated input pathways. Biol Cybern 101: 81–102
  • 34. Lindén H, Tetzlaff T, Potjans TC, Pettersen KH, Grün S, et al. (2011) Modeling the spatial reach of the LFP. Neuron 72: 859–872
  • 35. Ginzburg I, Sompolinsky H (1994) Theory of correlations in stochastic neural networks. Phys Rev E 50: 3171–3191
  • 36. Meyer C, van Vreeswijk C (2002) Temporal correlations in stochastic networks of spiking neurons. Neural Comput 14: 369–404
  • 37. Lindner B, Doiron B, Longtin A (2005) Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Phys Rev E 72: 061919
  • 38. Ostojic S, Brunel N, Hakim V (2009) How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. J Neurosci 29: 10234–10253
  • 39. Hu Y, Trousdale J, Josić K, Shea-Brown E (2013) Motif statistics and spike correlations in neuronal networks. J Stat Mech: P03012
  • 40. Brunel N, Hakim V (1999) Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput 11: 1621–1671
  • 41. Litwin-Kumar A, Chacron MJ, Doiron B (2012) The spatial structure of stimuli shapes the timescale of correlations in population spiking activity. PLoS Comput Biol 8: e1002667
  • 42. van Vreeswijk C, Sompolinsky H (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274: 1724–1726
  • 43. Amit DJ, Brunel N (1997) Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb Cortex 7: 237–252
  • 44. Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 8: 183–208
  • 45. Potjans TC, Diesmann M (2012) The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model. Cereb Cortex
  • 46. Binzegger T, Douglas RJ, Martin KAC (2004) A quantitative map of the circuit of cat primary visual cortex. J Neurosci 24: 8441–8453
  • 47. Stepanyants A, Martinez LM, Ferecskó AS, Kisvárday ZF (2009) The fractions of short- and long-range connections in the visual cortex. Proc Natl Acad Sci USA 106: 3555–3560
  • 48. Gilbert CD, Wiesel TN (1983) Clustered intrinsic connections in cat visual cortex. J Neurosci 3: 1116–1133
  • 49. Voges N, Schüz A, Aertsen A, Rotter S (2010) A modeler's view on the spatial structure of intrinsic horizontal connectivity in the neocortex. Prog Neurobiol 92: 277–292
  • 50. Okun M, Lampl I (2008) Instantaneous correlation of excitation and inhibition during sensory-evoked activities. Nat Neurosci 11: 535–537
  • 51. Gentet L, Avermann M, Matyas F, Staiger JF, Petersen CC (2010) Membrane potential dynamics of GABAergic neurons in the barrel cortex of behaving mice. Neuron 65: 422–435
  • 52. Parga N (2013) Towards a self-consistent description of irregular and asynchronous cortical activity. J Stat Mech: P03010
  • 53. Grytskyy D, Tetzlaff T, Diesmann M, Helias M (2013) A unified view on weakly correlated recurrent networks. Front Comput Neurosci 7
  • 54. van Vreeswijk C, Sompolinsky H (1998) Chaotic balanced state in a model of cortical circuits. Neural Comput 10: 1321–1371
  • 55. Buice MA, Cowan JD, Chow CC (2009) Systematic fluctuation expansion for neural network activity equations. Neural Comput 22: 377–426
  • 56. Rumelhart DE, McClelland JL, and the PDP Research Group (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. Cambridge, Massachusetts: MIT Press.
  • 57. Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79: 2554–2558
  • 58. Hanuschkin A, Kunkel S, Helias M, Morrison A, Diesmann M (2010) A general and efficient method for incorporating precise spike times in globally time-driven simulations. Front Neuroinform 4: 113
  • 59. Gewaltig MO, Diesmann M (2007) NEST (NEural Simulation Tool). Scholarpedia 2: 1430
  • 60. Hertz J, Krogh A, Palmer RG (1991) Introduction to the Theory of Neural Computation. Perseus Books.
  • 61. Kelly FP (1979) Reversibility and Stochastic Networks. Wiley.
  • 62. Jones E, Oliphant T, Peterson P, et al. (2001) SciPy: Open source scientific tools for Python. http://www.scipy.org/
  • 63. Palmer EM (1985) Graphical Evolution. Wiley.
  • 64. Buzsáki G, Wang XJ (2012) Mechanisms of gamma oscillations. Annu Rev Neurosci 35: 203–225
  • 65. Grytskyy D, Tetzlaff T, Diesmann M, Helias M (2013) Invariance of covariances arises out of noise. AIP Conf Proc 1510: 258–262
  • 66. Morrison A, Diesmann M, Gerstner W (2008) Phenomenological models of synaptic plasticity based on spike-timing. Biol Cybern 98: 459–478
