PLOS Computational Biology. 2012 Aug 2; 8(8): e1002596. doi: 10.1371/journal.pcbi.1002596

Decorrelation of Neural-Network Activity by Inhibitory Feedback

Tom Tetzlaff 1,2,*,#, Moritz Helias 1,3,#, Gaute T Einevoll 2, Markus Diesmann 1,3,4
Editor: Nicolas Brunel

Abstract

Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).

Author Summary

The spatio-temporal activity pattern generated by a recurrent neuronal network can provide a rich dynamical basis which allows readout neurons to generate a variety of responses by tuning the synaptic weights of their inputs. The repertoire of possible responses and the response reliability become maximal if the spike trains of individual neurons are uncorrelated. Spike-train correlations in cortical networks can indeed be very small, even for neighboring neurons. This seems to be at odds with the finding that neighboring neurons receive a considerable fraction of inputs from identical presynaptic sources constituting an inevitable source of correlation. In this article, we show that inhibitory feedback, abundant in biological neuronal networks, actively suppresses correlations. The mechanism is generic: It does not depend on the details of the network nodes and decorrelates networks composed of excitatory and inhibitory neurons as well as purely inhibitory networks. For the case of the leaky integrate-and-fire model, we derive the correlation structure analytically. The new toolbox of formal linearization and a basis transformation exposing the feedback component is applicable to a range of biological systems. We confirm our analytical results by direct simulations.

Introduction

Neurons generate signals by weighting and combining input spike trains from presynaptic neuron populations. The number of possible signals which can be read out this way from a given spike-train ensemble is maximal if these spike trains span an orthogonal basis, i.e. if they are uncorrelated [1]. If they are correlated, the amount of information which can be encoded in the spatio-temporal structure of these spike trains is limited. In addition, correlations impair the ability of readout neurons to decode information reliably in the presence of noise. This is often discussed in the context of rate coding: for N uncorrelated spike trains, the signal-to-noise ratio of the compound spike-count signal can be enhanced by increasing the population size N. In the presence of correlations, however, the signal-to-noise ratio is bounded [2], [3]. The same reasoning holds for any other linear combination of spike trains, also for those where exact spike timing matters (for example for the coding scheme presented in [4]). Thus, the robustness of neuronal responses against noise critically depends on the level of correlated activity within the presynaptic neuron population.
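This bound can be made explicit by a standard calculation (our own illustrative addition, using generic symbols that are not used elsewhere in this article): for N spike counts n_i with identical variance \sigma^2 and pairwise correlation coefficient c, the variance of their average is

\mathrm{Var}\Big[\frac{1}{N}\sum_{i=1}^{N} n_i\Big] \;=\; \frac{\sigma^{2}}{N} + \frac{N-1}{N}\, c\, \sigma^{2} \;\xrightarrow{\;N\to\infty\;}\; c\,\sigma^{2}.

For c = 0 the noise contribution vanishes with growing population size, whereas for c > 0 it saturates at c\sigma^{2}, so the signal-to-noise ratio of the compound signal cannot be improved beyond a fixed limit, in line with [2], [3].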

Several studies have suggested that correlated neural activity could be beneficial for information processing: Spike-train correlations can modulate the gain of postsynaptic neurons and thereby constitute a gating mechanism (for a review, see [4]). Coherent spiking activity might serve as a means to bind elementary representations into more complex objects [5], [6]. Information represented by correlated firing can be reliably sustained and propagated through feedforward subnetworks ('synfire chains'; [7], [8]). Whether correlated firing has to be considered favorable or not largely depends on the underlying hypothesis, the type of correlation (e.g. the time scale or the affected frequency band) or which subpopulations of neurons are involved. Most ideas suggesting a functional benefit of correlated activity rely on the existence of an asynchronous 'ground state'. Spontaneously emerging correlations, i.e. correlations which are not triggered by internal or external events, would pose a serious challenge to many of these hypotheses. Functionally relevant synfire activity, for example, cannot be guaranteed in the presence of correlated background input from the embedding network [9]. It is therefore, from several perspectives, important to understand the origin of uncorrelated activity in neural networks.

It has recently been shown that spike trains of neighboring cortical neurons can indeed be uncorrelated [10]. Similar results have been obtained in several theoretical studies [11]–[17]. From an anatomical point of view, this observation is puzzling: in general, neurons in finite networks share a certain fraction of their presynaptic sources. In particular for neighboring neurons, the overlap between presynaptic neuron populations is expected to be substantial. This feedforward picture suggests that such presynaptic overlap gives rise to correlated synaptic input and, in turn, to correlated response spike trains.

A number of theoretical studies showed that shared-input correlations are only weakly transferred to the output side as a consequence of the nonlinearity of the spike-generation dynamics [15], [18]–[21]. Unreliable spike transmission due to synaptic failure can further suppress the correlation gain [22]. In [9], we demonstrated that spike-train correlations in finite-size recurrent networks are even smaller than predicted by the low correlation gain of pairs of neurons with nonlinear spike-generation dynamics. We concluded that this suppression of correlations must be a result of the recurrent network dynamics. In this article, we compare correlations observed in feedforward networks to correlations measured in systems with an intact feedback loop. We refer to the reduction of correlations in the presence of feedback as "decorrelation". Different mechanisms underlying such a dynamical decorrelation have been suggested in the recent past. Asynchronous states in recurrent neural networks are often attributed to chaotic dynamics [23], [24]. In fact, networks of nonlinear units with random connectivity and balanced excitation and inhibition typically exhibit chaos [11], [25]. The high sensitivity to noise may, however, call into question the functional relevance of such systems ([26], [27]; cf., however, [28]). The studies [29] and [27] demonstrated that asynchronous irregular firing can also emerge in networks with stable dynamics. Employing an analytical framework for correlations in recurrent networks of binary neurons [30], the study [17] recently proposed the balance of excitation and inhibition as another decorrelation mechanism: In large networks, fluctuations of excitation and inhibition are in phase. Positive correlations between excitatory and inhibitory input spike trains lead to a negative component in the net input correlation which can compensate positive correlations caused by shared input.

In the present study, we demonstrate that dynamical decorrelation is a fundamental phenomenon in recurrent systems with negative feedback. We show that negative feedback alone is sufficient to efficiently suppress correlations. Even in purely inhibitory networks, shared-input correlations are compensated by feedback. A balance of excitation and inhibition is thus not required. The underlying mechanism can be understood by means of a simple linear model. This simplifies the theory and helps to gain intuition, but it also confirms that low correlations can emerge in recurrent networks with stable, non-chaotic dynamics.

The suppression of pairwise spike-train correlations by inhibitory feedback is reflected in a reduction of population-rate fluctuations. The main effect described in this article can therefore be understood by studying the dynamics of the macroscopic population activity. This approach leads to a simple mathematical description and emphasizes that the described decorrelation mechanism is a general phenomenon which may occur not only in neural networks but also in other (biological) systems with inhibitory feedback. In Results: "Suppression of population-rate fluctuations in LIF networks", we first illustrate the decorrelation effect for random networks of leaky integrate-and-fire (LIF) neurons with inhibitory or excitatory-inhibitory coupling. By means of simulations, we show that low-frequency spike-train correlations and, hence, population-rate fluctuations are substantially smaller than expected given the amount of shared input. As shown in the subsequent section, the "Suppression of population-activity fluctuations by negative feedback" can readily be understood in the framework of a simple one-dimensional linear model with negative feedback. In Results: "Population-activity fluctuations in excitatory-inhibitory networks", we extend this to a two-population system with excitatory-inhibitory coupling. Here, a simple coordinate transform exposes the inherent negative feedback loop as the underlying cause of the fluctuation suppression in inhibition-dominated networks. The population-rate models of the inhibitory and the excitatory-inhibitory network are sufficient to understand the basic mechanism underlying the decorrelation. They do not, however, describe how feedback in cortical networks affects the detailed structure of pairwise correlations. In Results: "Population averaged correlations in cortical networks", we therefore compute self-consistent population averaged correlations for a random network of linear excitatory and inhibitory neurons. By determining the parameters of the linear network analytically from the LIF model, we show that the predictions of the linear model are, for a wide and realistic range of parameters, in excellent agreement with the results of the LIF network model. In Results: "Effect of feedback manipulations", we demonstrate that the active decorrelation in random LIF networks relies on the feedback of the (sub)population averaged activity but not on the precise microscopic structure of the feedback signal. In the Discussion, we put the consequences of this work into a broader context and point out limitations and possible extensions of the presented theory. The Methods contain details on the LIF network model, the derivation of the linear model from the LIF dynamics and the derivation of population-rate spectra and population averaged correlations in the framework of the linear model. This section is meant as a supplement; the basic ideas and the main results can be extracted from the Results.

Results

In a recurrent neural network of size N, each neuron i receives in general inputs from two different types of sources: external inputs x_i representing the sum of afferents from other brain areas, and local inputs resulting from the recurrent connectivity within the network. Depending on their origin, external inputs x_i and x_j to different neurons i and j can be correlated or not. Throughout this manuscript, we ignore correlations between these external sources, thereby ensuring that correlations within the network activity arise from the local connectivity alone and are not imposed by external inputs [17]. The local inputs feed the network's spiking activity s(t) back to the network (we refer to spike train s_i(t), the i-th component of the column vector s(t) = (s_1(t), ..., s_N(t))^T [the superscript "T" denotes the transpose], as a sum of delta functions centered at the spike times t_i^k: s_i(t) = Σ_k δ(t − t_i^k); the abstract quantity 'spike train' can be considered as being derived from the observable quantity 'spike count' n_i(t; Δ), the number of spikes occurring in the time interval [t, t+Δ), by taking the limit Δ → 0: s_i(t) = lim_{Δ→0} n_i(t; Δ)/Δ). The structure and weighting of this feedback can be described by the network's connectivity matrix W (see Fig. 1 A). In a finite network, the local connectivity typically gives rise to overlapping presynaptic populations: in a random (Erdős–Rényi) network with connection probability p, for example, each pair of postsynaptic neurons shares, on average, p²N presynaptic sources. For realistic network sizes and connection probabilities, this corresponds to a fairly large number of identical inputs. For other network structures, the amount of shared input may be smaller or larger. Due to this presynaptic overlap, each pair of neurons receives, to some extent, correlated input (even if the external inputs are uncorrelated). One might therefore expect that the network responses s_i(t) are correlated as well. In this article, we show that, in the presence of negative feedback, the effect of shared input caused by the structure of the network is compensated by its recurrent dynamics.
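The expected presynaptic overlap can be checked with a few lines of code (a minimal sketch with assumed example values, not the parameters of the simulations reported below):

import numpy as np

rng = np.random.default_rng(0)
N, p = 10_000, 0.1                      # assumed example values

# presynaptic source sets of two postsynaptic neurons in an Erdos-Renyi graph
pre_i = rng.random(N) < p               # row i of the adjacency matrix
pre_j = rng.random(N) < p               # row j of the adjacency matrix

shared = np.count_nonzero(pre_i & pre_j)
print(shared, p**2 * N)                 # empirical overlap vs. expected value p^2 * N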

Figure 1. Spiking activity in excitatory-inhibitory LIF networks with intact (left column; feedback scenario) and opened feedback loop (right column; feedforward scenario).


A,B: Network sketches for the feedback (A) and feedforward scenario (B). C,D: Spiking activity (top panels) and population averaged firing rate (bottom panels) of the local presynaptic populations. E,F: Response spiking activity (top panels) and population averaged response rate (bottom panels). In the top panels of C–F, each pixel depicts the number of spikes (gray coded) of a subpopulation of neurons in a short time interval. In both the feedback and the feedforward scenario, the neuron population is driven by the same realization x(t) of an uncorrelated white-noise ensemble; local input is fed to the population through the same connectivity matrix W. The in-degrees, the synaptic weights and the shared-input statistics are thus exactly identical in the two scenarios. In the feedback case (A), local presynaptic spike trains are provided by the network's response s(t), i.e. the pre- (C) and postsynaptic spike-train ensembles (E) are identical. In the feedforward scenario (B), the local presynaptic spike-train population is replaced by an ensemble of N independent realizations q(t) of a Poisson point process (D). Its rate is identical to the time- and population-averaged firing rate in the feedback case. See Table 1 and Table 2 for details on network models and parameters.

Suppression of population-rate fluctuations in LIF networks

To illustrate the effect of shared input and its suppression by the recurrent dynamics, we compare the spike response s(t) of a recurrent random network (feedback scenario; Fig. 1 A,C,E) of N LIF neurons to the case where the feedback is cut and replaced by a spike-train ensemble q(t), modeled by N independent realizations of a stationary Poisson point process (feedforward scenario; Fig. 1 B,D,F). The rate of this Poisson process is identical to the time- and population-averaged firing rate in the intact recurrent system. In both the feedback and the feedforward case, the (local) presynaptic spike trains are fed to the postsynaptic population according to the same connectivity matrix W. Therefore, not only the in-degrees and the synaptic weights but also the shared-input statistics are exactly identical.

For realistic network size N and connectivity p, asynchronous states of random neural networks [12], [31] exhibit spike-train correlations which are small but not zero (compare raster displays in Fig. 1 C and D; see also [15]). Although the presynaptic spike trains are, by construction, independent in the feedforward case (Fig. 1 D), the resulting response correlations, and, hence, the population-rate fluctuations, are substantially stronger than those observed in the feedback scenario (compare Fig. 1 F and E). In other words: a theory which is exclusively based on the amount of shared input but neglects the details of the presynaptic spike-train statistics can significantly overestimate correlations and population-rate fluctuations in recurrent neural networks.
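In a simulation, the feedforward surrogate can be constructed along the following lines (a minimal sketch assuming spike trains stored as arrays of spike times; the function and variable names are ours and do not refer to the simulation code used for the figures):

import numpy as np

def poisson_surrogate(spike_trains, t_sim, rng=np.random.default_rng(1)):
    # Replace each recurrent feedback channel by an independent Poisson spike train
    # whose rate equals the time- and population-averaged rate of the intact network.
    n_neurons = len(spike_trains)
    mean_rate = sum(len(st) for st in spike_trains) / (n_neurons * t_sim)
    surrogate = []
    for _ in range(n_neurons):
        n_spikes = rng.poisson(mean_rate * t_sim)
        surrogate.append(np.sort(rng.uniform(0.0, t_sim, n_spikes)))
    return surrogate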

The same effect can be observed in LIF networks with both purely inhibitory and mixed excitatory-inhibitory coupling (Fig. 2). To demonstrate this quantitatively, we focus on the fluctuations of the population averaged activity z(t) = N⁻¹ Σ_i s_i(t). Its power-spectrum (or auto-correlation, in the time domain)

Figure 2. Suppression of low-frequency fluctuations in recurrent LIF networks with purely inhibitory (A, C) and mixed excitatory-inhibitory coupling (B, D) for instantaneous synapses with short delay (A, B) and delayed low-pass synapses (C, D).


Power-spectra C(ω) of population rates z(t) for the feedback (black) and the feedforward case (gray; cf. Fig. 1). See Table 1 and Table 2 for details on network models and parameters. In C and D, local synaptic inputs are modeled as currents with α-function shaped kernels of finite synaptic time constant. (Excitatory) synaptic weights are set as specified in Table 1. Single-trial spectra smoothed by moving average.

C(\omega) \;=\; \frac{1}{N^{2}}\Big(\sum_{i=1}^{N} A_{i}(\omega) \;+\; \sum_{i \neq j} C_{ij}(\omega)\Big)    (1)

is determined both by the power-spectra (auto-correlations) A_i(ω) of the individual spike trains and the cross-spectra (cross-correlations) C_ij(ω) (i ≠ j) of pairs of spike trains (throughout the article, we use capital letters to represent quantities in frequency [Fourier] space; S_i(ω) represents the Fourier transform of the spike train s_i(t)). We observe that the spike-train power-spectra A_i(ω) (and auto-correlations) are barely distinguishable in the feedback and in the feedforward case (not shown here; the main features of the spike-train auto-correlation are determined by the average single-neuron firing rate and the refractory mechanism; both are identical in the feedback and the feedforward scenario). The differences in the population-rate spectra C(ω) are therefore essentially due to differences in the spike-train cross-spectra C_ij(ω). In other words, the fluctuations in the population activity serve as a measure of pairwise spike-train correlations [32]: small (large) population averaged spike-train correlations are accompanied by small (large) fluctuations in the population rate (see lower panels in Fig. 1 C–F). The power-spectra C(ω) of the population averaged activity reveal a feedback-induced suppression of the population-rate variance at low frequencies up to several tens of Hertz. For the examples shown in Fig. 2, this suppression spans more than three orders of magnitude for the inhibitory and more than one order of magnitude for the excitatory-inhibitory network.
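Relation (1) can be verified directly from binned spike counts (a minimal sketch assuming an array counts of shape (N, n_bins); normalization constants such as bin width and duration are omitted):

import numpy as np

def compound_spectrum_decomposition(counts):
    # counts: spike counts per neuron and time bin, shape (N, n_bins).
    # Returns the spectrum of the population average and its decomposition into
    # auto- and cross-spectral contributions, cf. Eq. (1).
    N, n_bins = counts.shape
    x = counts - counts.mean(axis=1, keepdims=True)   # centralized counts
    X = np.fft.rfft(x, axis=1)                        # Fourier transforms of all trains
    auto = np.mean(np.abs(X) ** 2, axis=0)            # population-averaged auto-spectra
    Z = X.mean(axis=0)                                # Fourier transform of the population average
    compound = np.abs(Z) ** 2                         # spectrum of the population average
    cross = compound - auto / N                       # (1/N^2) * sum over pairs i != j
    return compound, auto / N, cross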

The suppression of low-frequency fluctuations does not critically depend on the details of the network model. As shown in Fig. 2, it can, for example, be observed both for networks with zero rise-time synapses (δ-shaped synaptic currents) and short delays and for networks with delayed low-pass filtering synapses (α-shaped synaptic currents). In the latter case, the suppression of fluctuations is slightly more restricted to lower frequencies. Here, the fluctuation suppression is, however, similarly pronounced as in networks with instantaneous synapses.

In Fig. 2 C,D, the power-spectra of the population activity converge to the mean firing rate at high frequencies. This indicates that the spike trains are uncorrelated on short time scales. For instantaneous δ-synapses, neurons exhibit an immediate response to excitatory input spikes [33], [34]. This fast response causes spike-train correlations on short time scales. Hence, the compound power at high frequencies is increased. In a recurrent system, this effect is amplified by reverberating simultaneous excitatory spikes. Therefore, the high-frequency power of the compound activity is larger in the feedback case (Fig. 2 B). Note that this high-frequency effect is absent in networks with more realistic low-pass filtering synapses (Fig. 2 C,D) and in purely inhibitory networks (Fig. 2 A).

Synaptic delays and slow synapses can promote oscillatory modes in certain frequency bands [12], [31], thereby leading to peaks in the population-rate spectra in the feedback scenario which exceed the power in the feedforward case (see the peaks in Fig. 2 C,D). Note that, in the feedforward case, the local input was replaced by a stationary Poisson process, whereas in the recurrent network (feedback case) the presynaptic spike trains exhibit oscillatory modes. By replacing the feedback by an inhomogeneous Poisson process with a time dependent intensity which is identical to the population rate in the recurrent network, we found that these oscillatory modes are neither suppressed nor amplified by the recurrent dynamics, i.e. the peaks in the resulting power-spectra have the same amplitude in the feedback and in the feedforward case (data not shown here). At low frequencies, however, the results are identical to those obtained by replacing the feedback by a homogeneous Poisson process (i.e. to those shown in Fig. 2; see Results: "Effect of feedback manipulations"). In the present study, we mainly focus on these low-frequency effects.

The observation that the suppression of low-frequency fluctuations is particularly pronounced in networks with purely inhibitory coupling indicates that inhibitory feedback may play a key role for the underlying mechanism. In the following subsection, we demonstrate by means of a one-dimensional linear population model that, indeed, negative feedback alone leads to an efficient fluctuation suppression.

Suppression of population-activity fluctuations by negative feedback

Average pairwise correlations can be extracted from the spectrum (1) of the compound activity, provided the single spike-train statistics (auto-correlations) is known (see previous section). As the single spike-train statistics is identical in the feedback and in the feedforward scenario, the mechanism underlying the decorrelation in recurrent networks can be understood by studying the dynamics of the population averaged activity. In this and in the next subsection, we consider the linearized dynamics of random networks composed of homogeneous subpopulations of LIF neurons. The high-dimensional dynamics of such systems can be reduced to low-dimensional models describing the dynamics of the compound activity (for details, see Methods: "Linearized network model"). Note that this reduction is exact for networks with homogeneous out-degree (number of outgoing connections). For the networks studied here (random networks with homogeneous in-degree), it serves as a sufficient approximation (in a network of size N where each connection is randomly and independently realized with probability p [Erdős–Rényi graph], the [binomial] in- and out-degree distributions become very narrow for large N [relative to the mean in/out-degree]; both in- and out-degree are therefore approximately constant across the population of neurons). In this subsection, we first study networks with purely inhibitory coupling. In Results: "Population-activity fluctuations in excitatory-inhibitory networks", we investigate the effect of mixed excitatory-inhibitory connectivity.
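The narrowness of the degree distribution is easy to quantify (our own illustration with assumed example values): the relative spread of the binomial in-degree distribution is sqrt((1-p)/(pN)) and thus shrinks with the network size.

import numpy as np

p = 0.1                                   # assumed connection probability
for N in (1_000, 10_000, 100_000):
    mean_deg = p * N
    rel_spread = np.sqrt(N * p * (1 - p)) / mean_deg   # std / mean of the in-degree
    print(N, mean_deg, round(rel_spread, 4))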

Consider a random network of N identical neurons with connection probability p. Each neuron i receives K = pN randomly chosen inputs from the local network with synaptic weights J. In addition, the neurons are driven by external uncorrelated Gaussian white noise x_i(t) with amplitude σ_x, i.e. ⟨x_i(t)⟩ = 0 and ⟨x_i(t) x_j(t′)⟩ = σ_x² δ_ij δ(t − t′). For small input fluctuations, the network dynamics can be linearized. This linearization is based on the averaged response of a single neuron to an incoming spike and describes the activity of an individual neuron i by an abstract fluctuating quantity y_i(t) which is defined such that within the linear approximation its auto- and cross-correlations fulfill the same linearized equation as the spiking model in the low-frequency limit. Consequently, also the low-frequency fluctuations of the population spike rate are captured correctly by the reduced model up to linear order. This approach is equivalent to the treatment of finite-size fluctuations in spiking networks (see, e.g., [31]). For details, see Methods: "Linearized network model". For large N, the population averaged activity y(t) = N⁻¹ Σ_i y_i(t) can hence be described by a one-dimensional linear system

y(t) \;=\; \big(h \ast [\,w\,y + x\,]\big)(t)    (2)

with linear kernel h(t), effective coupling strength w and the population averaged noise x(t) (see Methods: "Linearized network model" and Fig. 3 B). The coupling strength w represents the integrated linear response of the neuron population to a small perturbation in the input rate of a single presynaptic neuron. For a population of LIF neurons, its relation to the synaptic weight J (PSP amplitude) is derived in Methods: "Linearized network model" and Methods: "Response kernel of the LIF model". The normalized kernel h(t) (with ∫dt h(t) = 1) captures the time course of the linear response. It is determined by the single-neuron properties (e.g. the spike-initiation dynamics [35], [36]), the properties of the synapses (e.g. synaptic weights and time constants [37], [38]) and the properties of the input (e.g. excitatory vs. inhibitory input [39]). For many real and model neurons, the linear population-rate response exhibits low-pass characteristics [13], [34]–[46]. For illustration (Fig. 3), we consider a 1st-order low-pass filter, i.e. an exponential impulse response h(t) = τ⁻¹ e^(−t/τ) Θ(t) (Θ denotes the Heaviside step function) with time constant τ (cutoff frequency f_c = 1/(2πτ); see Fig. 3 A, light gray curve in E). The results of our analysis are however independent of the choice of the kernel h(t). The auto-correlation ⟨x(t) x(t′)⟩ = σ² δ(t − t′) of the external noise is parametrized by the effective noise amplitude σ.
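For the exponential kernel, model (2) is equivalent to the differential equation τ dy/dt = −y + w y + x, which can be integrated directly (a minimal sketch with arbitrarily chosen parameter values, intended only to illustrate the fluctuation suppression of Fig. 3 A,B):

import numpy as np

rng = np.random.default_rng(3)
dt, tau, T = 0.1, 10.0, 20_000.0           # time step, kernel time constant, duration (a.u.)
n = int(T / dt)
x = rng.normal(0.0, 1.0, n) / np.sqrt(dt)  # discretized Gaussian white noise

def simulate(w):
    # Euler integration of tau*dy/dt = -y + w*y + x, i.e. y = h*(w*y + x)
    # with an exponential (1st-order low-pass) kernel h.
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = y[k - 1] + dt / tau * (-y[k - 1] + w * y[k - 1] + x[k - 1])
    return y

print(simulate(0.0).var(), simulate(-5.0).var())   # no feedback vs. inhibitory feedback (w = -5)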

Figure 3. Partial canceling of fluctuations in a linear system by inhibitory feedback.


Response y(t) of a linear system with impulse response h(t) (1st-order low-pass, cutoff frequency f_c) to Gaussian white noise input x(t) with amplitude σ for three local-input scenarios. A (light gray): No feedback (local input zero). B (black): Negative feedback (w < 0) with strength |w|. The fluctuations of the weighted local input w·y(t) (B2) are anticorrelated to the external drive x(t) (B1). C (dark gray): Feedback in B is replaced by uncorrelated feedforward input q(t) with the same auto-statistics as the response y(t) in B3. The local input w·q(t) is constructed by assigning a random phase to each Fourier component Y(ω) of the response in B3. Fluctuations in C1 and C2 are uncorrelated. A, B, C: Network sketches. A1, B1, C1: External input x(t). A2, B2, C2: Weighted local input. A3, B3, C3: Responses y(t). D, E: Response auto-correlation functions (D) and power-spectra (E) for the three cases shown in A,B,C (same gray coding as in A,B,C; inset in D: normalized auto-correlations).

Given the simplified description (2), the suppression of response fluctuations by negative feedback can be understood intuitively: Consider first the case where the neurons in the local network are unconnected (Fig. 3 A; no feedback, w = 0). Here, the response y(t) (Fig. 3 A3) is simply a low-pass filtered version of the external input x(t) (Fig. 3 A1), resulting in an exponentially decaying response auto-correlation (Fig. 3 D; light gray curve) and a drop in the response power-spectrum at the cutoff frequency f_c (Fig. 3 E). At low frequencies, x(t) and y(t) are in phase; they are correlated. In the presence of negative feedback (Fig. 3 B), the local input w·y(t) (Fig. 3 B2) and the low-frequency components of the external input x(t) (Fig. 3 B1) are anticorrelated. They partly cancel out, thereby reducing the response fluctuations y(t) (Fig. 3 B3). The auto-correlation function and the power-spectrum are suppressed (Fig. 3 D,E; black curves). Due to the low-pass characteristics of the system, mainly the low-frequency components of the external drive x(t) are transferred to the output side and, in turn, become available for the feedback signal. Therefore, the canceling of input fluctuations and the resulting suppression of response fluctuations are most efficient at low frequencies. Consequently, the auto-correlation function is sharpened (see inset in Fig. 3 D). The cutoff frequency of the system is increased (Fig. 3 E; black curve). This effect of negative feedback is very general and well known in the engineering literature. It is employed in the design of technical devices such as amplifiers [47]. As the zero-frequency power is identical to the integrated auto-correlation function, the suppression of low-frequency fluctuations is accompanied by a reduction in the auto-correlation area (Fig. 3 D; black curve). Note that the suppression of fluctuations in the feedback case is not merely a result of the additional inhibitory noise source provided by the local input, but follows from the precise temporal alignment of the local and the external input. To illustrate this, consider the case where the feedback channel is replaced by a feedforward input q(t) (Fig. 3 C) which has the same auto-statistics as the response y(t) in the feedback case (Fig. 3 B3) but is uncorrelated to the external drive x(t). In this case, external input fluctuations (Fig. 3 C1) are not canceled by the local input w·q(t) (Fig. 3 C2). Instead, the local feedforward input acts as an additional noise source which leads to an increase in the response fluctuations (Fig. 3 C3). The response auto-correlation and power-spectrum (Fig. 3 D,E; dark gray curves) are increased. Compared to the unconnected case (Fig. 3 E; light gray curve), the cutoff frequency remains unchanged.
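The phase-randomized feedforward input used in Fig. 3 C can be generated as follows (a minimal sketch; the construction preserves the power spectrum of the response but destroys its temporal alignment with the external drive):

import numpy as np

def phase_randomized(y, rng=np.random.default_rng(2)):
    # Surrogate with the same power spectrum (auto-statistics) as y,
    # but with random Fourier phases, hence uncorrelated with y.
    Y = np.fft.rfft(y)
    phases = rng.uniform(0.0, 2.0 * np.pi, Y.size)
    Z = np.abs(Y) * np.exp(1j * phases)
    Z[0] = Y[0]                            # keep the mean unchanged
    if y.size % 2 == 0:
        Z[-1] = Y[-1]                      # keep the Nyquist component real
    return np.fft.irfft(Z, n=y.size)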

The feedback-induced suppression of response fluctuations can be quantified by comparing the response power-spectra

C_{\rm fb}(\omega) \;=\; \langle |Y_{\rm fb}(\omega)|^{2} \rangle \;=\; \frac{|H(\omega)|^{2}}{|1 - w H(\omega)|^{2}}\,\sigma^{2}    (3)

and

C_{\rm ff}(\omega) \;=\; \langle |Y_{\rm ff}(\omega)|^{2} \rangle \;=\; |H(\omega)|^{2}\big(\sigma^{2} + w^{2}\, C_{\rm fb}(\omega)\big)    (4)

in the feedback (Fig. 3 B) and the feedforward case (Fig. 3 C), respectively (see Methods: "Population-activity spectrum of the linear inhibitory network"). Here, Y_fb(ω) and Y_ff(ω) denote the Fourier transforms of the response fluctuations in the feedback and the feedforward scenario, respectively, H(ω) the transfer function (Fourier transform of the filter kernel h(t)) of the neuron population, and ⟨·⟩ the average across noise realizations. We use the power ratio

R(\omega) \;=\; \frac{C_{\rm fb}(\omega)}{C_{\rm ff}(\omega)} \;=\; \frac{1}{|1 - w H(\omega)|^{2} + w^{2} |H(\omega)|^{2}}    (5)

as a measure of the relative fluctuation suppression caused by feedback. For low frequencies (ω → 0) and strong effective coupling |w| ≫ 1, the power ratio (5) decays as 1/(2w²) (see Fig. 4 A): the suppression of population-rate fluctuations is promoted by strong negative feedback. In line with the observations in Results: "Suppression of population-rate fluctuations in LIF networks", this suppression is restricted to low frequencies; for high frequencies (ω ≫ 2πf_c, i.e. H(ω) → 0), the power ratio R(ω) approaches unity. Note that the power ratio (5) is independent of the amplitude σ of the population averaged external input x(t). Therefore, even if we dropped the assumption of the external inputs x_i(t) being uncorrelated, i.e. if ⟨x_i(t) x_j(t′)⟩ ≠ 0 for i ≠ j, the power ratio (5) would remain the same. For correlated external input, the power ⟨|X(ω)|²⟩ of the population average x(t) differs from σ_x²/N. The suppression factor R(ω), however, is not affected by this. Moreover, it is straightforward to show that the power ratio (5) is, in fact, independent of the shape of the external-noise spectrum ⟨|X(ω)|²⟩. The same result (5) is obtained for any type of external input (e.g. colored noise or oscillating inputs).
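The closed-form spectra (3)-(5) can be evaluated numerically in a few lines (our own check, using the expressions as reconstructed above; w < 0 denotes inhibitory feedback):

import numpy as np

def power_ratio(w, H):
    # Power ratio R = C_fb / C_ff of the linear one-population model, cf. Eqs. (3)-(5).
    return 1.0 / (np.abs(1.0 - w * H) ** 2 + w**2 * np.abs(H) ** 2)

# zero-frequency limit (H -> 1) for a few inhibitory coupling strengths
for w in (-1.0, -3.0, -10.0, -30.0):
    print(w, power_ratio(w, 1.0), 1.0 / (2.0 * w**2))   # full value vs. strong-coupling limit 1/(2 w^2)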

Figure 4. Suppression of low-frequency (LF) population-rate fluctuations in linearized homogeneous random networks with purely inhibitory (A) and mixed excitatory-inhibitory coupling (B).


Dependence of the zero-frequency power ratio R(0) on the effective coupling strength |w| (solid curves: full solutions; dashed lines: strong-coupling approximations). The power ratio represents the ratio between the low-frequency population-rate power in the recurrent networks (A: Fig. 3 B; B: Fig. 5 A,B) and in networks where the feedback channels are replaced by uncorrelated feedforward input (A: Fig. 3 C; B, black: Fig. 5 C,D; B, gray: Fig. 5 D′). Dotted curves in B depict the power ratios of the sum mode y_+ (see text). B: fixed balance parameter g.

For low frequencies, the transfer function H(ω) approaches unity (H(0) = 1 for the normalized kernel); the exact shape of the kernel h(t) becomes irrelevant. In particular, the cutoff frequency (or time constant) of a low-pass kernel has no effect on the zero-frequency power (integral correlation) and the zero-frequency power ratio R(0) (Fig. 4). Therefore, the suppression of low-frequency fluctuations does not critically depend on the exact choice of the neuron, synapse or input model. The same reasoning applies to synaptic delays: Replacing the kernel h(t) by a delayed kernel h(t − d) leads to an additional phase factor e^(−iωd) in the transfer function H(ω). For sufficiently small frequencies (long time scales), this factor can be neglected (e^(−iωd) ≈ 1).

For networks with purely inhibitory feedback, the absolute power (3) of the population rate decreases monotonically with increasing coupling strength |w|. As we will demonstrate in Results: "Population-activity fluctuations in excitatory-inhibitory networks" and Results: "Population averaged correlations in cortical networks", this is qualitatively different in networks with mixed excitatory and inhibitory coupling: here, the fluctuations of the compound activity increase with w. The power ratio, however, still decreases with w.

Population-activity fluctuations in excitatory-inhibitory networks

In the foregoing subsection, we have shown that negative feedback alone can efficiently suppress population-rate fluctuations and, hence, spike-train correlations. So far, it is unclear whether the same reasoning applies to networks with mixed excitatory and inhibitory coupling. To clarify this, we now consider a random network composed of homogeneous excitatory and inhibitory subpopulations E and I of sizes N_E and N_I, respectively. Each neuron receives K_E = pN_E excitatory and K_I = pN_I inhibitory inputs from E and I with synaptic weights J and −gJ, respectively. In addition, the neurons are driven by external Gaussian white noise. As demonstrated in Methods: "Linearized network model", linearization and averaging across subpopulations leads to a two-dimensional system

y_{\alpha}(t) \;=\; \Big(h \ast \big[\textstyle\sum_{\beta \in \{E,I\}} M_{\alpha\beta}\, y_{\beta} + x_{\alpha}\big]\Big)(t), \qquad \alpha \in \{E, I\}    (6)

describing the linearized dynamics of the subpopulation averaged activities y_E(t) and y_I(t). Here, x_α(t) denotes the subpopulation averaged external uncorrelated white-noise input with correlation functions ⟨x_α(t) x_β(t′)⟩ = δ_αβ σ_α² δ(t − t′) (α, β ∈ {E, I}), and h(t) a normalized linear kernel with ∫dt h(t) = 1. The excitatory and inhibitory subpopulations are coupled through an effective connectivity matrix

\mathbf{M} \;=\; w \begin{pmatrix} 1 & -g \\ 1 & -g \end{pmatrix}    (7)

with effective weight w and balance parameter g.

The two-dimensional system (6)/(7) represents a recurrent system with both positive and negative feedback connections (Fig. 5 A). By introducing new coordinates

Figure 5. Sketch of the 2D (excitatory-inhibitory) model for the feedback (A,B) and the feedforward scenario (C,D) in normal (A,C) and Schur-basis representation (B,D).


A: Original 2D recurrent system. B: Schur-basis representation of the system shown in A. C: Feedforward scenario: Excitatory and inhibitory feedback connections of the original network (A) are replaced by feedforward input from populations with rates q_E(t) and q_I(t), respectively. D: Schur-basis representation of the system shown in C. D′: Alternative feedforward scenario: Here, the feedforward channel (weight w(1+g)) of the original system in Schur basis (B) remains intact. Only the negative self-feedback of the sum mode (weight w(1−g)) is replaced by feedforward input q_+(t).

y_{\pm}(t) \;=\; \tfrac{1}{2}\big(y_{E}(t) \pm y_{I}(t)\big)    (8)

and, correspondingly, x_+(t) = ½ (x_E(t) + x_I(t)), x_−(t) = ½ (x_E(t) − x_I(t)), we obtain an equivalent representation of (6)/(7),

y_{\pm}(t) \;=\; \Big(h \ast \big[\textstyle\sum_{\beta \in \{+,-\}} \tilde{M}_{\pm\beta}\, y_{\beta} + x_{\pm}\big]\Big)(t)    (9)

describing the dynamics of the sum and difference activities y_+(t) and y_−(t), respectively, i.e. the in- and anti-phase components of the excitatory and inhibitory subpopulations (see [48]–[50]). The new coupling matrix

\tilde{\mathbf{M}} \;=\; \begin{pmatrix} w(1-g) & w(1+g) \\ 0 & 0 \end{pmatrix}    (10)

reveals that the sum mode y_+ is subject to self-feedback (with weight w(1−g)) and receives feedforward input from the difference mode y_− (with weight w(1+g)). All remaining connections are absent (the second row of (10) vanishes) in the new representation (8) (see Fig. 5 B). The correlation functions of the external noise in the new coordinates are given by ⟨x_α(t) x_β(t′)⟩ = σ_αβ² δ(t − t′) with σ_αβ² = ¼(σ_E² + σ_I²) for α = β and σ_αβ² = ¼(σ_E² − σ_I²) for α ≠ β (α, β ∈ {+, −}).

The feedforward coupling is positive (w(1+g) > 0): an excitation surplus (y_− > 0) will excite all neurons in the network, an excitation deficit (y_− < 0) will lead to global inhibition. In inhibition dominated regimes with g > 1, the self-feedback of the sum activity y_+ is effectively negative (w(1−g) < 0). The dynamics of the sum rate in inhibition-dominated excitatory-inhibitory networks is therefore qualitatively similar to the dynamics in purely inhibitory networks (Results: "Suppression of population-activity fluctuations by negative feedback"). As shown below, the negative feedback loop exposed by the transform (8) leads to an efficient relative suppression of population-rate fluctuations (compared to the feedforward case).

Mathematically, the coordinate transform (8) corresponds to a Schur decomposition of the dynamics: Any recurrent system of type (6) (with arbitrary coupling matrix M) can be transformed to a system with a triangular coupling matrix (see, e.g., [50]). The resulting coupling between the different Schur modes can be ordered so that there are only connections from modes with lower index to modes with the same or larger index. In this sense, the resulting system has been termed 'feedforward' [50]. The original coupling matrix M is typically not normal, i.e. M Mᵀ ≠ Mᵀ M. Its eigenvectors do not form an orthogonal basis. By performing a Gram-Schmidt orthonormalization of the eigenvectors, however, one can obtain a (normalized) orthogonal basis, a Schur basis. Our new coordinates (8) correspond to the amplitudes (the time evolution) of two orthogonal Schur modes.
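The triangular structure can be verified numerically (a minimal sketch with arbitrarily chosen values of w and g; scipy's Schur routine is used for convenience):

import numpy as np
from scipy.linalg import schur

w, g = 1.0, 5.0                          # assumed example values
M = w * np.array([[1.0, -g],
                  [1.0, -g]])            # effective coupling matrix, cf. Eq. (7)

T, Q = schur(M)                          # M = Q T Q^T with orthogonal Q and triangular T
print(T)                                 # diagonal: w(1-g) and 0 (order may vary); off-diagonal: feedforward link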

The spectra C_E(ω), C_I(ω), C_EI(ω) and C_+(ω) of the subpopulation averaged rates y_E(t), y_I(t) and the sum mode y_+(t), respectively, are derived in Methods: "Population-activity spectra of the linear excitatory-inhibitory network". In contrast to the purely inhibitory network (see Results: "Suppression of population-activity fluctuations by negative feedback"), the population-rate fluctuations of the excitatory-inhibitory network increase monotonically with increasing coupling strength w. For strong coupling, C_+(ω) approaches

C_{+}^{\infty}(\omega) \;=\; \Big(\frac{1+g}{1-g}\Big)^{2}\, |H(\omega)|^{2}\, \sigma_{-}^{2}    (11)

from below as w → ∞. Close to the critical point (g → 1), the rate fluctuations become very large; (11) diverges. Increasing the amount of inhibition by increasing g, however, leads to a suppression of these fluctuations. In the limit g → ∞, C_+(ω) and (11) approach the spectrum |H(ω)|² σ_+² of the unconnected network. For strong coupling (w ≫ 1), the ratio C_E(ω)/C_I(ω) approaches g²: the fluctuations of the population averaged excitatory firing rate exceed those of the inhibitory population by a factor g² (independently of the frequency and of the noise amplitudes).

Similarly to the strategy we followed in the previous subsections, we will now compare the population-rate fluctuations of the feedback system (6), or equivalently (9), to the case where the feedback channels are replaced by feedforward input with identical auto-statistics. A straightforward implementation of this is illustrated in Fig. 5 C: Here, the excitatory and inhibitory feedback channels y_E and y_I are replaced by uncorrelated feedforward inputs q_E and q_I, respectively. The Schur representation of this scenario is depicted in Fig. 5 D. According to (6), the Fourier transforms of the response fluctuations of this system read

Y_{E,I}^{\rm ff}(\omega) \;=\; H(\omega)\big[w\,Q_{E}(\omega) - g\,w\,Q_{I}(\omega) + X_{E,I}(\omega)\big]    (12)

With Y_±^ff(ω) = ½ (Y_E^ff(ω) ± Y_I^ff(ω)), and using ⟨|Q_E(ω)|²⟩ = C_E(ω), ⟨|Q_I(ω)|²⟩ = C_I(ω) and ⟨Q_E(ω) Q_I*(ω)⟩ = 0, we can express the spectrum C_+^ff(ω) of the sum activity in the feedforward case in terms of the spectra C_E(ω) and C_I(ω) of the feedback system (see eq. (55)). For strong coupling (w ≫ 1), the zero-frequency component (ω = 0) becomes

C_{+}^{\rm ff}(0) \;\approx\; w^{2}\big[\,C_{E}(0) + g^{2}\, C_{I}(0)\,\big]    (13)

Thus, for strong coupling, the zero-frequency power ratio

R_{+}(0) \;=\; \frac{C_{+}(0)}{C_{+}^{\rm ff}(0)} \;\approx\; \frac{(1+g)^{2}}{8\, g^{2}\, w^{2}}    (14)

reveals a relative suppression of the population-rate fluctuations in the feedback system which is proportional to w⁻² (see Fig. 4 B; black dashed line). The power ratio for arbitrary weights w is depicted in Fig. 4 B (black dotted curve). For a network at the transition point g = 1, the power ratio is largest. Increasing the level of inhibition by increasing g leads to a decrease in the power ratio: in the limit g → ∞, (14) approaches its minimum value monotonically.

Above, we suggested that the negative self-feedback of the sum mode y_+, weighted by w(1−g) (Fig. 5 B), is responsible for the fluctuation suppression in the recurrent excitatory-inhibitory system. Here, we test this by considering the case where this feedback loop is opened and replaced by uncorrelated feedforward input q_+(t), weighted by w(1−g), while the feedforward input from the difference mode y_−, weighted by w(1+g), is left intact (see Fig. 5 D′). As before, we assume that the auto-statistics of q_+(t) is identical to the auto-statistics of y_+(t) as obtained in the feedback case, i.e. ⟨|Q_+(ω)|²⟩ = C_+(ω). According to the Schur representation of the population dynamics (9)/(10), the Fourier transform of the sum mode of this modified system is given by

Y_{+}'(\omega) \;=\; H(\omega)\big[\,w(1-g)\,Q_{+}(\omega) + w(1+g)\,Y_{-}(\omega) + X_{+}(\omega)\,\big]    (15)

With C_+(ω) given in (54) and ⟨|Q_+(ω)|²⟩ = C_+(ω), we obtain the power ratio

R_{+}'(\omega) \;=\; \frac{C_{+}(\omega)}{\langle |Y_{+}'(\omega)|^{2} \rangle} \;=\; \frac{1}{|1 - w(1-g) H(\omega)|^{2} + w^{2}(1-g)^{2} |H(\omega)|^{2}}    (16)

Its zero-frequency component R_+'(0) is shown in Fig. 4 B (gray dotted curve). For strong coupling, the power ratio decays as 1/(2w²(1−g)²) (gray dashed line in Fig. 4 B). Thus, the (relative) power in the recurrent system is reduced by strengthening the negative self-feedback loop, i.e. by increasing |w(1−g)|.

So far, we have presented results for the subpopulation averaged firing rates y_E and y_I and the sum mode y_+. The spectrum of the compound rate z(t) = [N_E y_E(t) + N_I y_I(t)]/N, i.e. the activity averaged across the entire population, reads

C_{z}(\omega) \;=\; \frac{1}{N^{2}}\Big(N_{E}^{2}\, C_{E}(\omega) + N_{I}^{2}\, C_{I}(\omega) + 2 N_{E} N_{I}\, \mathrm{Re}\, C_{EI}(\omega)\Big)    (17)

In the feedforward scenario depicted in Fig. 5 C, the spectrum of the compound rate C_z^ff(ω) (with N = N_E + N_I) is given by

C_{z}^{\rm ff}(\omega) \;=\; |H(\omega)|^{2}\Big(w^{2}\big[C_{E}(\omega) + g^{2}\, C_{I}(\omega)\big] + \frac{N_{E}^{2}\,\sigma_{E}^{2} + N_{I}^{2}\,\sigma_{I}^{2}}{N^{2}}\Big)    (18)

For strong coupling, the corresponding low-frequency power ratio C_z(0)/C_z^ff(0) (black solid curve in Fig. 4 B) exhibits qualitatively the same decrease ∝ w⁻² as the sum mode.

To summarize the results of this subsection: the population dynamics of a recurrent network with mixed excitatory and inhibitory coupling can be mapped to a two-dimensional system describing the dynamics of the sum and the difference of the excitatory and inhibitory subpopulation activities. This equivalent representation uncovers that, in inhibition dominated networks (g > 1), the sum activity is subject to negative self-feedback. Thus, the dynamics of the sum activity in excitatory-inhibitory networks is qualitatively similar to the population dynamics of purely inhibitory networks (see Results: "Suppression of population-activity fluctuations by negative feedback"). Indeed, the comparison of the compound power-spectra of the intact recurrent network and networks where the feedback channels are replaced by feedforward input reveals that the (effective) negative feedback in excitatory-inhibitory networks leads to an efficient suppression of population-rate fluctuations.

Population averaged correlations in cortical networks

The results presented in the previous subsections describe the fluctuations of the compound activity. Pairwise correlations c_ij between the (centralized) spike trains s_i(t) are outside the scope of such a description. In this subsection, we consider the same excitatory-inhibitory network as in Results: "Population-activity fluctuations in excitatory-inhibitory networks" and present a theory for the population averaged spike-train cross-correlations. In general, this is a hard problem. To understand the structure of cross-correlations, it is however sufficient to derive a relationship between the cross- and auto-covariances in the network, because the latter can, to good approximation, be understood in mean-field theory. The integral of the auto-covariance function of spiking LIF neurons can be calculated by Fokker-Planck formalism [12], [31], [51]. To determine the relation between the cross-covariance and the auto-covariance, we replace the spiking dynamics by a reduced linear model with covariances obeying, to linear order, the same relation. We present the full derivation in Methods: "Linearized network model". There, we first derive an approximate linear relation between the auto- and cross-covariance functions a_i(τ) and c_ij(τ), respectively, of the LIF network. A direct solution of this equation is difficult. In the second step, we therefore show that there exists a linear stochastic system with activity y_i(t) and correlations a_i and c_ij fulfilling the same equation as the original LIF model. This reduced model can be solved in the frequency domain by standard Fourier methods. Its solution allows us, by construction, to determine the relation between the integral cross-covariances c_ij and the integral auto-covariances a_i up to linear order.

As we are interested in the covariances averaged over many pairs of neurons, we average the resulting set of linear self-consistency equations (56) for the covariance matrix in the frequency domain over statistically identical pairs of neurons and many realizations of the random connectivity (see Methods: "Population averaged correlations in the linear EI network"). This yields a four-dimensional linear system (76) describing the population averaged variances a_E and a_I of the excitatory and inhibitory subpopulations, and the covariances c_EE and c_II for unconnected excitatory-excitatory and inhibitory-inhibitory neuron pairs, respectively (note that we use the terms "variance" and "covariance" to describe the integral of the auto- and cross-correlation function, respectively; in many other studies, they refer to the zero-lag correlation functions instead). The dependence of the variances and covariances on the coupling strength w, obtained by numerically solving (76), is shown in Fig. 6. We observe that the variances a_E and a_I of excitatory and inhibitory neurons are barely distinguishable (Fig. 6 A). With the approximation a_E = a_I = a, explicit expressions can be obtained for the covariances (thick dashed curves in Fig. 6 E):

Figure 6. Dependence of population averaged correlations and population-rate fluctuations on the effective coupling w in a linearized homogeneous network with excitatory-inhibitory coupling.


A: Spike-train variances a_E (black) and a_I (gray) of excitatory and inhibitory neurons. B: Spike-train covariances c_EE (black solid), c_EI (dark gray solid) and c_II (light gray solid) for excitatory-excitatory, excitatory-inhibitory and inhibitory-inhibitory neuron pairs in the recurrent network, respectively, and the shared-input contribution (black dotted curve; 'feedforward case'). C: Decomposition of the total input covariance (light gray) into the shared-input covariance (black) and the weighted spike-train covariance (dark gray). Covariances in A, B and C are given in units of the noise variance σ². D: Input-correlation coefficient in the recurrent network (black solid curve). In the feedforward case, the input-correlation coefficient is identical to the network connectivity p (horizontal dotted line). E: Spike-train correlation coefficients for excitatory-excitatory (black), excitatory-inhibitory (dark gray) and inhibitory-inhibitory (solid light gray curve) neuron pairs, respectively. Thick dashed curves represent approximate solutions assuming a_E = a_I. F: Low-frequency (LF) power ratios for the population rate z (black) and the excitatory (dark gray) and inhibitory (solid light gray) subpopulation rates y_E and y_I, respectively. The LF power ratio represents the ratio between the LF spectra in the recurrent network and for the case where the feedback channels are replaced by feedforward input with identical auto-statistics (cf. Fig. 5 C). Thick dashed curves in F show power ratios obtained by assuming that the auto-correlations are identical in the feedback and the feedforward scenario (see main text). Vertical dotted lines mark the stability limit of the linear model (see Methods: "Linearized network model").

[Equation (19)]

The deviations from the full solutions (thin solid curves in Fig. 6 E), i.e. for a_E ≠ a_I, are small. In the reduced model, both the external input and the spiking of individual neurons contribute to an effective noise. As the fluctuations in the reduced model depend linearly on the amplitude σ of this noise, the variances a and covariances c_αβ (α, β ∈ {E, I}) can be expressed in units of the noise variance σ². Consequently, the correlation coefficients c_αβ/a are independent of σ (see Fig. 6).

The analytical form (19) of the result shows that the correlations are smaller than expected given the amount of shared input a pair of neurons receives: The first line of (19) is the contribution of shared input to the covariance. For strong coupling w, its prefactor causes a suppression of this contribution. Its structure is typical for a feedback system, similar to the solution (3) of the one-population or the solution (52) of the two-population model. The term in the denominator of this prefactor represents the negative feedback of the compound rate. The prefactor in the second line of (19) is again due to the feedback and suppresses the contribution of a factor which represents the effect of direct connections between neurons.

Our results are consistent with a previous study of the decorrelation mechanism: In [17], the authors considered how correlations scale with the size N of the network when the synaptic weights are chosen as J ∝ 1/√N. As a result, the covariance in (19) caused by shared input is independent of the network size, while the feedback scales, to leading order, as √N (see (45)). Consequently, the first line in (19) scales as 1/N. The same scaling holds for the second line in (19), explaining the decay of correlations as 1/N found in [17].
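Condensing this argument into one line (our own summary, using the symbols introduced above):

c_{\rm shared} \sim p^{2} N J^{2} a \sim p^{2} a, \qquad w \sim p N J \sim p \sqrt{N}, \qquad c \sim \frac{c_{\rm shared}}{w^{2}} \sim \frac{a}{N},

i.e. the shared-input covariance is independent of the network size, the feedback grows as \sqrt{N}, and the resulting spike-train covariance decays as 1/N.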

The first line in (19) is identical for any pair of neurons. The second line is positive for a pair of excitatory neurons and negative for a pair of inhibitory neurons. In other words, excitatory neurons are more correlated than inhibitory ones. Together with the third line in (19), this reveals a characteristic correlation structure: c_EE > c_EI > c_II (Fig. 6 B,E). For strong coupling w, the difference between the excitatory and the inhibitory covariance decreases as the level g of inhibition is increased, i.e. the further the network is in the inhibition dominated regime, away from the critical point g = 1.

To understand the suppression of shared-input correlations in recurrent excitatory-inhibitory networks, consider the correlation between the local inputs Inline graphic of a pair of neurons Inline graphic, Inline graphic. The input-correlation coefficient Inline graphic can be expressed in terms of the averaged spike-train covariances:

[Equation (20)]

(see Methods: "Population averaged correlations in the linear EI network": the input covariance Inline graphic equals the average quantity Inline graphic given in (67), the input variance Inline graphic is given by (63) as Inline graphic). The term Inline graphic represents the contribution due to the spike-train variances of the shared presynaptic neurons (see (19)). This contribution is always positive (provided the network architecture is consistent with Dale's law; see [15]). In a purely feedforward scenario with uncorrelated presynaptic sources, Inline graphic is the only contribution to the input covariance of postsynaptic neurons. The resulting response correlation for this feedforward case is much larger than in the feedback system (Fig. 6 B, black dotted curve). The correlation coefficient between inputs to a pair of neurons in the feedforward case is identical to the network connectivity Inline graphic (horizontal dotted curve in Fig. 6 D; see [15]). In an inhibition dominated recurrent network, spike-train correlations between pairs of different source neurons contribute the additional term Inline graphic, which is negative and of similar absolute value to the shared-input contribution Inline graphic. Thus, the two terms Inline graphic and Inline graphic partly cancel each other (see Fig. 6 C). In consequence, the resulting input correlation coefficient Inline graphic is smaller than Inline graphic (see Fig. 6 D; here: Inline graphic).
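
The partial cancellation can be illustrated by a back-of-the-envelope calculation (all numbers are hypothetical and only mimic the decomposition in (20), not the network's actual values):

    # Hypothetical numbers mimicking the decomposition of the input correlation in
    # eq. (20): a positive shared-input term and a negative spike-train term of
    # similar magnitude nearly cancel in the recurrent (feedback) case.
    a_shared = 0.08    # shared-input contribution (always positive)
    a_spike  = -0.07   # weighted spike-train covariance contribution (recurrent network)
    a_var    = 0.80    # input variance used for normalization

    print(f"feedforward input correlation: {a_shared / a_var:.3f}")
    print(f"recurrent   input correlation: {(a_shared + a_spike) / a_var:.3f}")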

The correlations in a purely inhibitory network can be obtained from (19) by replacing Inline graphic, taking into account the negative sign of Inline graphic in Inline graphic and setting Inline graphic and Inline graphic:

[Equation (21)]

For finite coupling strength Inline graphic, this expression is negative. The contributions of shared input and spike-train correlations to the input correlation are given by Inline graphic and Inline graphic, respectively (see (19) and (20)). Using (21), we can directly verify that Inline graphic, because pairwise correlations Inline graphic are negative, leading to a partial cancellation Inline graphic: the right hand side is smaller in magnitude by a factor of Inline graphic compared to each individual contribution. Hence, as in the network with excitation and inhibition, shared-input correlations are partly canceled by the contribution due to presynaptic pairwise spike-train correlations. In the feedforward scenario with zero presynaptic spike-train correlations, in contrast, the response correlations are determined by shared input alone and are therefore increased. The suppression of shared-input correlations in the feedback case is what we call ‘decorrelation’ in the current work. In purely inhibitory networks, this decorrelation is caused by weakly negative pairwise correlations (21). For sufficiently strong negative feedback, correlations are smaller in absolute value as compared to the feedforward case. The absolute value of these anti-correlations is bounded by Inline graphic.

The similarity in the results obtained for purely inhibitory networks and excitatory-inhibitory networks demonstrates that the suppression of pairwise correlations and population-activity fluctuations is a generic phenomenon in systems with negative feedback. It does not rely on an internal balance between excitation and inhibition.

As discussed in Results: "Suppression of population-rate fluctuations in LIF networks", the suppression of correlations in the recurrent network is accompanied by a reduction of population-activity fluctuations. With the population averaged correlations (19), the power (1) of the population activity Inline graphic reads

[Equation (22)]

In Results: "Population-activity fluctuations in excitatory-inhibitory networks", we showed that the population-activity fluctuations are amplified if the local input in the recurrent system is replaced by feedforward input from independent excitatory and inhibitory populations (see Fig. 5 C). This manipulation corresponds to a neglect of correlations Inline graphic between excitatory and inhibitory neurons. All remaining correlations (Inline graphic, Inline graphic, Inline graphic, Inline graphic) are preserved. With the resulting response auto- and cross-correlations Inline graphic and Inline graphic given by (84), the power (1) of the population activity becomes

[Equation (23)]

For large effective coupling Inline graphic, the power ratio Inline graphic decays as Inline graphic (black curve in Fig. 6 F). Note that the power ratio Inline graphic derived here is indistinguishable from the one we obtained in the framework of the population model in Results: "Population-activity fluctuations in excitatory-inhibitory networks" (black solid curve in Fig. 4 B). Although the derivation of the macroscopic model in Results: "Population-activity fluctuations in excitatory-inhibitory networks" is different from the one leading to the population averaged correlations described here, the two models are consistent: They describe one and the same system and lead to identical power ratios.

The fluctuation suppression is not only observed at the level of the entire network, i.e. for the population activity Inline graphic, but also for each individual subpopulation Inline graphic and Inline graphic, i.e. for the subpopulation averaged activities Inline graphic and Inline graphic. The derivation of the corresponding power ratios Inline graphic and Inline graphic is analogous to the one described above. As a result of the correlation structure Inline graphic in the feedback system (see Fig. 6 B), the power of the inhibitory population activity is smaller than the power of the excitatory population activity. In consequence, Inline graphic (gray curves in Fig. 6 F).

In (22) and (23), the auto-correlations are scaled by Inline graphic, while the cross-correlations enter with a prefactor of order unity. For large Inline graphic, one may therefore expect that the suppression of population-activity fluctuations is essentially mediated by pairwise correlations. In the recurrent system, however, the cross-correlations Inline graphic (Inline graphic) are of order Inline graphic (see Fig. 6 and (19)). It is therefore a priori not clear whether the fluctuation suppression is indeed dominated by pairwise correlations. In our framework, one can explicitly show that the auto-correlation is irrelevant: Replacing the auto-correlation Inline graphic in (23) by the average auto-correlation Inline graphic of the intact feedback system has no visible effect on the resulting power ratio (dashed curves in Fig. 6 F). The difference in the spectra of the population activities Inline graphic and Inline graphic is therefore essentially caused by the cross-correlations.
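
The relative weight of the two contributions follows from the standard decomposition of the population power into auto- and cross-terms; the sketch below uses toy numbers (not the paper's values) in which the cross-covariance is of order 1/N, so that both terms are comparable.

    # Toy decomposition of population-rate power into auto- and cross-contributions,
    #   S_pop ~ (1/N) * <auto> + (1 - 1/N) * <cross>,
    # with a cross-covariance of order 1/N (all numbers hypothetical).
    N = 10_000
    auto = 1.0          # average single spike-train power, order one
    cross = 2e-4        # average pairwise cross-power, order 1/N
    print(f"auto  contribution: {auto / N:.2e}")
    print(f"cross contribution: {(1.0 - 1.0 / N) * cross:.2e}")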

The absolute population-activity fluctuations in purely inhibitory and in excitatory-inhibitory networks show a qualitatively different dependence on the synaptic coupling Inline graphic, in agreement with the previous sections. In networks with excitation and inhibition, the correlation coefficient increases with increasing synaptic coupling (see Fig. 6 E). Hence, the population-activity fluctuations grow with increasing coupling strength. In purely inhibitory networks, in contrast, the pairwise spike-train correlation decreases monotonically with increasing magnitude of the coupling strength Inline graphic, see (21). In consequence, the population-activity fluctuations decrease. The underlying reason is that, in the inhibitory network, the power of the population activity is directly proportional to the covariance of the input currents, which is actively suppressed, as shown above. For excitatory-inhibitory networks, these two quantities are not proportional (compare (20) and (1)) due to the different synaptic weights appearing in the input covariance.

To compare our theory to simulations of spiking LIF networks, we need to determine the effect of a synaptic input on the response activity of the neuron model. To this end, we employ the Fokker-Planck theory of the LIF model (see Methods: "Response kernel of the LIF model"). In this context, the steady state of the recurrent network is characterized by the mean Inline graphic and the standard deviation Inline graphic of the total synaptic input. Both Inline graphic and Inline graphic depend on the steady-state firing rate in the network. The steady-state firing rate can be determined in a self-consistent manner [12] as the fixed point of the firing rate approximation (42). The approximation predicts the firing rate with a sufficient accuracy of about Inline graphic (see Fig. 7 A). We then obtain an analytical expression for the low-frequency transfer which relates the fluctuation Inline graphic of a synaptic input to neuron Inline graphic to the fluctuation of neuron Inline graphic's response firing rate to linear order, so that Inline graphic. This relates the postsynaptic potential Inline graphic in the LIF model to the effective linear coupling Inline graphic in our linear theory. The functional relation Inline graphic can be derived in analytical form by linearization of (42) about the steady-state working point. Note that Inline graphic depends on Inline graphic and Inline graphic and, hence, on the steady-state firing rate in the network. The derivation outlined in Methods: "Response kernel of the LIF model" constitutes an extension of earlier work [21], [33] to quadratic order in Inline graphic. The results agree well with those obtained by direct simulation for a large range of synaptic amplitudes (see Fig. 8).
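
For orientation, the following Python sketch evaluates the standard diffusion-approximation (Siegert) formula for the stationary LIF rate and estimates the effective coupling by a finite difference; the working point, all parameter values and the finite-difference estimate are assumptions for illustration and do not reproduce the paper's analytical linearization of (42).

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import erf

    def lif_rate(mu, sigma, tau_m=0.02, tau_ref=0.002, V_reset=0.0, theta=0.015):
        """Stationary LIF rate under Gaussian white-noise input (diffusion approximation, SI units)."""
        lo, hi = (V_reset - mu) / sigma, (theta - mu) / sigma
        integral, _ = quad(lambda u: np.exp(u**2) * (1.0 + erf(u)), lo, hi)
        return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

    # Hypothetical working point (mean and standard deviation of the free membrane potential, in volts).
    mu0, sigma0, tau_m, J = 0.010, 0.005, 0.02, 0.0001   # J: 0.1 mV PSP amplitude

    # Effective coupling w = d(nu_out)/d(nu_in) for one extra synapse of amplitude J,
    # estimated here by a finite difference in the presynaptic rate (dnu in spikes/s).
    dnu = 100.0
    mu1, sigma1 = mu0 + J * tau_m * dnu, np.sqrt(sigma0**2 + J**2 * tau_m * dnu)
    w = (lif_rate(mu1, sigma1) - lif_rate(mu0, sigma0)) / dnu
    print(f"rate at working point: {lif_rate(mu0, sigma0):.2f} spikes/s, effective coupling w = {w:.2e}")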

Figure 7. Comparison between predictions of the linear theory (thick gray curves) and direct simulation of the LIF-network model (symbols and thin lines).


Dependence of the spike-train and population-rate statistics on the synaptic weight Inline graphic (PSP amplitude) in a recurrent excitatory-inhibitory network (‘feedback system’, ‘FB’) and in a population of unconnected neurons receiving randomized feedforward input (‘feedforward system’, ‘FF’) from neurons in the recurrent network. Average presynaptic firing rates and shared-input structure are identical in the two systems. In the FF case, the average correlations between presynaptic spike-trains are homogenized (i.e. Inline graphic) as a result of the random reassignment of presynaptic neuron types. The mapping of the LIF dynamics to the linear reduced dynamics ( Methods : Response kernel of the LIF model”) relates the PSP amplitude Inline graphic to the effective coupling strength Inline graphic by (45), as shown in Fig. 8 B. A: Average firing rates Inline graphic in the FB (black up-triangles: excitatory neurons; gray down-triangles: inhibitory neurons) and in the FF system (open circles). Analytical prediction (??) (gray curve). B: Spike-train correlation coefficients Inline graphic (black up-triangles), Inline graphic (gray squares) and Inline graphic (gray down-triangles) for excitatory-excitatory, excitatory-inhibitory, and inhibitory-inhibitory neuron pairs, respectively, in the FB system. Analytical prediction (19) (gray curves). Spike-train correlation coefficient Inline graphic (open circles) in the FF system with homogenized presynaptic spike-train correlations. Analytical prediction (86) (underlying gray curve). C: Shared-input (Inline graphic; black up-triangles) and spike-correlation contribution Inline graphic (FB: gray down-triangles; FF: open circles) to the input correlation Inline graphic (normalized by Inline graphic). Analytical predictions (20). D: Low-frequency (LF) power ratio of the compound activity. Vertical dotted lines in A–D mark the stability limit of the linear model (see Methods : Linearized network model”). Inline graphic, Inline graphic, Inline graphic, Inline graphic. Size of postsynaptic population in the FF case: Inline graphic. Simulation time: Inline graphic.

Figure 8. Linear response and relation between synaptic weight Inline graphic and effective coupling strength Inline graphic.


A: Firing-rate deflection Inline graphic of a LIF neuron caused by an incoming spike event of postsynaptic amplitude Inline graphic. B: Integral Inline graphic of the firing rate deflection shown in A as a function of the postsynaptic amplitude Inline graphic (simulation: black dots; analytical approximation (45) : gray curve). The neuron receives constant synaptic background input with Inline graphic, Inline graphic, and rates Inline graphic, Inline graphic resulting in a first and second moment (42) Inline graphic and Inline graphic. Simulation results are obtained by averaging over Inline graphic trials of Inline graphic duration each with Inline graphic input impulses on average. For further parameters of the neuron model, see Table 1 and Table 2.

Fig. 7 B compares the population averaged correlation coefficients Inline graphic obtained from the linear reduced model, see (19), and simulations of LIF networks. Note that the absolute value of the noise amplitude Inline graphic in the reduced model does not influence the correlation coefficient Inline graphic, as both quantities Inline graphic and Inline graphic depend linearly on Inline graphic. Theory and simulation agree well for synaptic weights up to Inline graphic. For larger synaptic amplitudes, the approximation of the effective linear transfer for a single neuron obtained from the Fokker-Planck theory deviates from its actual value (see Fig. 8 B). Fig. 7 C shows that the cancellation of the input covariance in the LIF network is well explained by the theory.

Previous work [17] suggested that positive correlations between excitatory and inhibitory inputs lead to a negative component in the input correlation which, in turn, suppresses shared-input correlations. The mere existence of positive correlations between excitatory and inhibitory inputs is, however, not sufficient. To explain the effect, it is necessary to take the particular correlation structure Inline graphic into account. To illustrate this, consider the case where the correlation structure is destroyed by replacing all pairwise correlations in the input spike-train ensemble by the overall population average Inline graphic (homogenization of correlations). The resulting response correlations (upper gray curve in Fig. 7 B) are derived in Methods: "Population averaged correlations in the linear EI network", eq. (86). In simulations of LIF networks, we study the effect of homogenized spike-train correlations by first recording the activity of the intact recurrent network, randomly reassigning the neuron type (Inline graphic or Inline graphic) to each recorded spike train, and feeding this activity into a second population of neurons. Compared to the intact recurrent network, the response correlations are significantly larger (Fig. 7 B). The contribution of homogenized spike-train correlations to the input covariance Inline graphic (see (20)) is given by Inline graphic. For positive spike-train correlations Inline graphic, this contribution is greater than or equal to zero (zero for Inline graphic). Hence, it cannot compensate for the (positive) shared-input contribution Inline graphic (see Fig. 7 C). In consequence, input correlations, output correlations and, in turn, population-rate fluctuations (Fig. 7 D) cannot be suppressed by homogeneous positive correlations in the input spike-train ensemble. Cancellation of shared-input correlations requires either negative spike-train correlations (as in purely inhibitory networks) or a heterogeneity in correlations across different pairs of neurons (e.g. Inline graphic).
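
A toy calculation makes the role of the correlation structure explicit (the quadratic-form expression and all numbers below are illustrative assumptions, not the paper's eq. (86)): with distinct EE, EI and II covariances the weighted sum can be negative, whereas homogenizing them to a common average yields a contribution that can never be negative.

    # Toy illustration of why the structure c_EE > c_EI > c_II matters: with distinct
    # values the weighted sum of presynaptic covariances can be negative, while
    # homogenizing them to their average gives a non-negative term of the form
    # (a_E + a_I)**2 * c_mean. All numbers are hypothetical.
    K_E, K_I = 800, 200                 # excitatory / inhibitory in-degrees
    J, g = 0.1, 6.0                     # PSP amplitude (mV) and relative strength of inhibition
    a_E, a_I = K_E * J, -K_I * g * J    # summed excitatory / inhibitory input weights

    c_EE, c_EI, c_II = 3e-4, 2e-4, 1e-4             # structured pairwise covariances
    structured  = a_E**2 * c_EE + 2 * a_E * a_I * c_EI + a_I**2 * c_II
    c_mean      = (c_EE + 2 * c_EI + c_II) / 4      # crude homogenized average
    homogenized = (a_E + a_I)**2 * c_mean

    print(f"structured contribution : {structured:+.2f}")
    print(f"homogenized contribution: {homogenized:+.2f} (never negative)")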

Effect of feedback manipulations

In the previous subsections, we quantified the suppression of population-rate fluctuations in recurrent networks by comparing the activity in the intact recurrent system (feedback scenario) to the case where the feedback is replaced by feedforward input with some predefined statistics (feedforward scenario). We particularly studied the effect of neglecting the auto-statistics of the compound feedback, (the structure of) correlations within the feedback ensemble and/or correlations between the feedback and the external input. In all cases, we observed a significant amplification of population-activity fluctuations in the feedforward scenario. In this subsection, we further investigate the role of different types of feedback manipulations by means of simulations of LIF networks with excitatory-inhibitory coupling. To this end, we record the spiking activity of the recurrent network (feedback case), apply different types of manipulations to this activity (described in detail below) and feed this modified activity into a second population of identical (unconnected) neurons (feedforward case). As before, the connectivity structure (in-degrees, shared-input structure, synaptic weights) is exactly identical in the feedback and the feedforward case.

In Methods : Linearized network model”, we show that the low-frequency fluctuations of the population rate Inline graphic of the spiking model are captured by the reduced model Inline graphic presented in the previous subsections. To verify that the theory based on excitatory and inhibitory population rates is indeed sufficient to explain the decorrelation mechanism, we first consider the case where the sender identities of the presynaptic spike train are randomly shuffled. Fig. 9 A shows the power-spectrum of the population activity recorded in the original network (FB) as well as the spectra obtained after shuffling spike-train identities within the excitatory and inhibitory subpopulations separately (Shuff2D), or across the entire network (Shuff1D). As shuffling of neuron identities does not change the population rates, all three compound spectra are identical. Fig. 9 B shows the response power-spectra of the neuron population receiving the shuffled spike trains. Shuffling within the subpopulations (Shuff2D) preserves the population-specific fluctuations and average correlations. The effect on the response fluctuations is negligible (compare black and light gray curves in Fig. 9 B). In particular, the power of low-frequency fluctuations remains unchanged (Fig. 9 C). This result confirms that population models which take excitatory and inhibitory activity separately into account are sufficient to explain the observations. Shuffling of spike-train identities across subpopulations (Shuff1D), in contrast, causes an increase in the population fluctuations by about one order of magnitude (Fig. 9 B,C; dark gray). This outcome is in agreement with the result obtained by homogenizing pairwise correlations (see Fig. 7) and demonstrates that the excitatory and inhibitory subpopulation rates have to be conserved to explain the observed fluctuation suppression.
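
A minimal sketch of the two shuffling controls (hypothetical toy spike trains, not the actual simulation code) shows the difference between within-subpopulation and network-wide reassignment of sender identities:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy set of recorded spike trains (arrays of spike times); indices 0..N_E-1 are
    # excitatory, the remaining N_I are inhibitory.
    N_E, N_I = 80, 20
    trains = [np.sort(rng.uniform(0.0, 10.0, rng.poisson(50))) for _ in range(N_E + N_I)]

    # Shuff2D: permute sender identities within each subpopulation separately,
    # preserving the excitatory and inhibitory compound rates.
    perm = np.concatenate([rng.permutation(N_E), N_E + rng.permutation(N_I)])
    shuff2d = [trains[i] for i in perm]

    # Shuff1D: permute sender identities across the entire network, mixing E and I
    # trains and thereby destroying the subpopulation-specific compound rates.
    shuff1d = [trains[i] for i in rng.permutation(N_E + N_I)]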

Figure 9. Amplification of population-rate fluctuations by different types of feedback manipulations in a random network of excitatory and inhibitory LIF neurons (simulation results).


Top row (A–C): Unperturbed feedback (FB; black), shuffling of spike-train senders across entire network (Shuff1D; dark gray) and within each subpopulation (E,I) separately (Shuff2D; light gray). Bottom row (D–F): Unperturbed feedback (FB; black), replacement of spike trains by realizations of inhomogeneous (PoissI; dark gray) and homogeneous Poisson processes (PoissH; light gray). In the PoissI (PoissH) case, the (time averaged) subpopulation rates are approximately preserved. A, D: Compound power-spectra Inline graphic of input spike-train ensembles. B, E: Power-spectra Inline graphic of population-response rates. C, F: Low-frequency (LF; Inline graphicInline graphic) power ratio Inline graphic (increase in LF power relative to the unperturbed case [FB]; logarithmic scaling). Note that in A, the compound-input spectra (FB, Shuff1D, Shuff2D) are identical. In D, the input spectra for the intact recurrent network (FB) and the inhomogeneous-Poisson case (PoissI) are barely distinguishable. See Table 1 and Table 2 for details on the network model and parameters. Simulation time Inline graphic. Single-trial spectra smoothed by moving average (frame size Inline graphic).

The shuffling experiments and the results of the linear model in the previous subsections suggest that the precise temporal structure of the population averaged activities within homogeneous subpopulations is essential for the suppression of population-rate fluctuations. Preserving the exact structure of individual spike trains is not required. This is confirmed by simulation experiments where new sender identities were randomly reassigned for each individual presynaptic spike (rather than for each spike train; data not shown). This operation destroys the structure of individual spike trains but preserves the compound activities. The results are similar to those reported here.

So far, it is unclear how sensitive the fluctuation-suppression mechanism is to perturbations of the temporal structure of the population rates. To address this question, we replaced the excitatory and inhibitory spike trains in the feedback ensemble by independent realizations of inhomogeneous Poisson processes (PoissI) with intensities given by the measured excitatory and inhibitory population rates Inline graphic and Inline graphic of the recurrent network, respectively. Note that the compound rates of a single realization of this new spike-train ensemble are similar but not identical to the original population rates Inline graphic, Inline graphic (in each time window Inline graphic, the resulting spike count is a random number drawn from a Poisson distribution with mean and variance proportional to Inline graphic and Inline graphic, respectively). Although the compound spectrum of the resulting local input is barely distinguishable from the compound spectrum of the intact recurrent system (Fig. 9 D; black and dark gray curves), the response spectra are very different: replacing the feedback ensemble by inhomogeneous Poisson processes leads to a substantial amplification of low-frequency fluctuations (Fig. 9 E; compare black and dark gray curves). The effect is as strong as if the temporal structure of the population rates was completely ignored, i.e. if the feedback channels were replaced by realizations of homogeneous Poisson processes with constant rates (PoissH; light gray curves in Fig. 9 D,E). This result indicates that the precise temporal structure of the population rates is essential and that even small deviations can significantly weaken the fluctuation-suppression mechanism. The results of the Poisson experiments can be understood by considering the effect of the additional noise caused by the stochastic realization of individual spikes. Considering the auto-correlation, a Poisson spike-train ensemble with rate profile Inline graphic is equivalent to a sum of the rate profile and a noise term resulting from the stochastic (Poissonian) realization of spikes, Inline graphic. Here, Inline graphic denotes a Gaussian white noise with auto-correlation Inline graphic and Inline graphic the mean firing rate. The response fluctuations of the population driven by the rate modulated Poisson activity are, to linear approximation, given by Inline graphic. Inserting Inline graphic, we obtain an additional noise term Inline graphic in the spectrum Inline graphic which explains the increase in power compared to the spectrum Inline graphic of the recurrent network. As a generalization of the Poisson model, one may replace the noise amplitude Inline graphic by some arbitrary prefactor Inline graphic. In simulation experiments, we observed a gradual amplification of the population-rate fluctuations with increasing noise amplitude Inline graphic (data not shown).
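
The 'PoissI' surrogate can be sketched as follows (toy rate trace and parameters, purely illustrative): each surrogate train draws independent Poisson spike counts per time bin from a common, time-varying intensity.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy 'PoissI' surrogate: independent inhomogeneous Poisson trains whose common
    # intensity is a (here invented) population-rate trace, binned with width dt.
    dt = 0.001                                              # bin width in s
    t = np.arange(0.0, 1.0, dt)
    rate = 8.0 + 2.0 * np.sin(2.0 * np.pi * 3.0 * t)        # hypothetical rate in spikes/s
    counts = rng.poisson(rate * dt, size=(100, t.size))     # 100 surrogate trains
    surrogate_trains = [np.repeat(t, c) for c in counts]    # spike times at bin resolution
    print(f"mean rate of surrogates: {counts.sum() / (100 * t.size * dt):.2f} spikes/s")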

Discussion

We have shown that negative feedback in recurrent neural networks actively suppresses low-frequency fluctuations of the population activity and pairwise correlations. This mechanism allows neurons to fire more independently than expected given the amount of shared presynaptic input. We demonstrated that manipulations of the feedback statistics, e.g. replacing feedback by uncorrelated feedforward input, can lead to a significant amplification of response correlations and population-rate fluctuations.

The suppression of correlations and population-rate fluctuations by feedback can be observed in networks with both purely inhibitory and mixed excitatory-inhibitory coupling. In purely inhibitory networks, the effect can be understood by studying the role of the effective negative feedback experienced by the compound activity. In networks of excitatory and inhibitory neurons, a change of coordinates, technically a Schur decomposition, exposes the underlying feedback structure: the sum of the excitatory and inhibitory activity couples negatively to itself if the network is in an inhibition dominated regime (which is required for its stability; see, e.g., [12]). This negative feedback suppresses fluctuations in a similar way as in purely inhibitory networks. The fluctuation suppression becomes more efficient the further the network is brought into the inhibition dominated regime, away from the critical point of equal recurrent excitation and inhibition (Inline graphic). Having identified negative feedback as the underlying cause of small fluctuations and correlations, we can rule out previous explanations based on a balance between (correlated) excitation and inhibition [17]. We presented a self-consistent theory for the average pairwise spike-train correlations which shows that the suppression of population-rate fluctuations and the suppression of pairwise correlations are two expressions of the same effect: as the single spike-train auto-covariance is the same in the feedforward and the feedback case, the suppression of population-rate fluctuations implies smaller correlations. Our theory enables us to identify the cancellation of input correlations as a hallmark of small spike-train correlations.

In previous studies, shared presynaptic input has often been considered a main source of correlation in recurrent networks (e.g. [15], [52]). Recently, [17] suspected that correlations between excitatory and inhibitory neurons and the fast tracking of external input by the excitatory and the inhibitory population are responsible for an active decorrelation. We have demonstrated here that the mere fact that excitatory and inhibitory neurons are correlated is not sufficient to suppress shared-input correlations. Rather, we find that the spike-train correlation structure in networks of excitatory and inhibitory neurons arranges itself such that the overall contribution of these correlations to the covariance between the summed inputs to a pair of neurons becomes negative, partly canceling the effect of shared inputs. This cancellation becomes more precise the stronger the negative compound feedback Inline graphic is. In homogeneous networks where excitatory and inhibitory neurons receive statistically identical input, the particular structure of correlations is Inline graphic. It can further be shown that this structure of correlations is preserved in the limit of large networks Inline graphic (Inline graphic). For non-homogeneous synaptic connectivity, if the synaptic amplitudes depend on the type of the target neuron (i.e. Inline graphic or Inline graphic), the structure of correlations may be different. Still, the correlation structure arranges itself such that shared-input correlations are effectively suppressed. Formally, this can be seen from a self-consistency equation similar to our equation (80).

The study by [17] has shown that correlations are suppressed in the limit of infinitely large networks of binary neurons receiving randomly drawn inputs from a common external population. Its argument rests on the insight that the population-activity fluctuations in a recurrent balanced network follow the fluctuations of the external common population. An elegant scaling consideration for infinitely large networks Inline graphic with vanishing synaptic efficacy Inline graphic shows that this fast tracking becomes perfect in the limit. This allows one to determine the zero-lag pairwise correlations caused by the external input. The analysis methods and the recurrent networks presented here differ in several respects from these previous results: We study networks of a finite number of spiking model neurons. The neurons receive uncorrelated external input, so that correlations are due to the local recurrent connectivity among neurons, not due to tracking of the common external input [17]. Moreover, we consider homogeneous connectivity where synaptic weights depend only on the type of the presynaptic neuron (as, e.g., in [12]), resulting in a correlation structure Inline graphic. For such connectivity, networks of binary neurons with uncorrelated external input exhibit qualitatively the same correlation structure as reported here (results not shown).

In purely inhibitory networks, the decorrelation occurs in a manner analogous to excitatory-inhibitory networks. As only a single population of neurons is available here, population averaged spike-train correlations Inline graphic are negative. This negative contribution compensates the positive contribution of shared input.

The structure of integrated spike-train covariances in networks constitutes an experimentally testable prediction. Note, however, that the prediction (19) obtained in the current work rests on two simplifying assumptions: identical internal dynamics of excitatory and inhibitory neurons and homogeneous connectivity (i.e. Inline graphic, Inline graphic; see Results: "Population-activity fluctuations in excitatory-inhibitory networks"). For such networks, the structure of correlations is given by Inline graphic. Further, the relation between subthreshold membrane-potential fluctuations and spike responses is the same for both neuron types. Consequently, the above correlation structure can be observed not only at the level of spike trains but also for membrane potentials, provided the assumptions hold true. A recent experimental study [53] reports neuron-type specific cross-correlation functions in the barrel cortex of behaving mice, both for spike trains and membrane potentials. It is, however, difficult to assess the integral correlations from the published data. A direct test of our predictions requires either a reanalysis of the data or a theory predicting the entire correlation functions. The raw (unnormalized) II and EI spike-train correlations in [53] are much more pronounced than the EE correlations (Fig. 6 in [53]). This seems to contradict our results. Note, however, that the firing rates of excitatory and inhibitory neurons are very different in [53]. In our study, in contrast, the average firing rates of excitatory and inhibitory neurons are identical as a consequence of the assumed network homogeneity. Future theoretical work is needed to generalize our model to networks with heterogeneous firing rates and non-homogeneous connectivity. Recent results on the dependence of the correlation structure on the connectivity may prove useful in this endeavor [54]–[56].

Correlations in spike-train ensembles play a crucial role in the encoding and decoding of information. A set of uncorrelated spike trains provides a rich dynamical basis which allows readout neurons to generate a variety of responses by tuning the strength and filter properties of their synapses [1].

In the presence of correlations, the number of possible readout signals is limited. Moreover, spike-train correlations impair the precision of such readout signals in the presence of noise. Consider, for example, a linear combination Inline graphic of Inline graphic presynaptic spike trains with arbitrary (linear) filter kernels Inline graphic (e.g. synaptic filters). In a realistic scenario, the individual spike trains Inline graphic typically vary across trials [3], [57]. To understand how robust the resulting readout signal Inline graphic is against this spike-train variability, we consider the variability of its Fourier transform Inline graphic. Assuming homogeneous spike-train statistics,

[Equation (24)]

the (squared) signal-to-noise ratio of the readout signal Inline graphic is given by

[Equation (25)]

Here, Inline graphic denotes the average across the ensemble of spike-train realizations, Inline graphic the spike-train coherence, and the coefficients Inline graphic and Inline graphic the 1st- and 2nd-order filter statistics. For uncorrelated spike trains, i.e. Inline graphic, and Inline graphic, the signal-to-noise ratio Inline graphic grows unbounded with the population size Inline graphic. Thus, even for noisy spike trains (Inline graphic), the compound signal Inline graphic can be highly reliable if the population size Inline graphic is sufficiently large. In the presence of correlations, Inline graphic, however, Inline graphic converges towards a constant value Inline graphic as Inline graphic grows. Even for large populations, the readout signal remains prone to noise. These findings constitute a generalization of the results reported for population-rate coding, i.e. sums of unweighted spike counts (see, e.g., [2], [3]). The above arguments illustrate that the same reasoning applies to coding schemes which are based on the spatio-temporal structure of spike patterns.
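
The saturation argument is the familiar result for sums of correlated variables; the short sketch below uses a generic form with identical signal, noise variance and pairwise correlation per input (an assumption for illustration, not the filter-resolved expression (25)).

    # Generic SNR of a sum of N inputs with identical signal m, noise variance v and
    # pairwise correlation coefficient c: SNR^2 = N*m**2 / (v*(1 + (N - 1)*c)).
    # For c = 0 it grows linearly with N; for c > 0 it saturates at m**2 / (v*c).
    m, v = 1.0, 4.0
    for c in (0.0, 0.01):
        for N in (10, 100, 10_000, 1_000_000):
            snr2 = N * m**2 / (v * (1.0 + (N - 1) * c))
            print(f"c = {c:4.2f}, N = {N:9d}: SNR^2 = {snr2:12.1f}")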

In a previous study [9], we demonstrated that active decorrelation in recurrent networks is a necessary prerequisite for a controlled propagation of synchronous volleys of spikes in embedded feedforward subnetworks (‘synfire chains’; Fig. 10): A synfire chain receiving background input from a finite population of independent Poisson sources amplifies the resulting shared-input correlations, thereby leading to spontaneous synchronization within the chain (Fig. 10 B). A distinction between these spurious synchronous events and those triggered by an external stimulus is impossible. The synfire chain loses its asynchronous ground state [58]. A synfire chain receiving background inputs from a recurrent network, in contrast, is much more robust. Here, shared-input correlations are actively suppressed by the recurrent-network dynamics. Synchronous events can be triggered by external stimuli in a controlled manner (Fig. 10 A). Apart from the spontaneous synchronization illustrated in Fig. 10, decorrelation by inhibition might solve another problem arising in embedded synfire structures: In the presence of feedback connections between the synfire chain and the embedding background network, synchronous spike volleys can excite (high-frequency) oscillatory modes in the background network which, in turn, interfere with the synfire dynamics and prevent a robust propagation of synchronous activity within the chain (‘synfire explosion’; see [59], [60]). The decorrelation mechanism we refer to in our work is efficient only at low frequencies. It cannot prevent the build-up of these oscillations. [61] demonstrated that the ‘synfire explosion’ can be suppressed by adding inhibitory neurons to each synfire layer (‘shadow inhibition’) which diffusely project to neurons in the embedding network, thereby weakening the impact of synfire activity on the embedding network.

Figure 10. Recurrent network dynamics stabilizes dynamics of embedded synfire chains.


Spiking activity in a synfire chain (Inline graphic layers, layer width Inline graphic) receiving background input from an excitatory-inhibitory network (A, cf. Fig. 1 C) or from a finite pool of excitatory and inhibitory Poisson processes (B, cf. Fig. 1 D). Average input firing rates, in-degrees and amount of shared input are identical in both cases. Neurons of the first synfire layer (neuron ids Inline graphic) are stimulated by current pulses at times Inline graphic and Inline graphic. Each neuron in layer Inline graphic receives inputs from all Inline graphic neurons in the preceding layer Inline graphic (synaptic weights Inline graphic, spike transmission delays Inline graphic), and Inline graphic and Inline graphic excitatory and inhibitory background inputs, respectively, randomly drawn from the presynaptic populations. Neurons in the first layer Inline graphic receive Inline graphic and Inline graphic excitatory and inhibitory background inputs, respectively. Note that there is no feedback from the synfire chain to the embedding network. See Table 2 for network parameters.

In the present work we focus on the integral of the correlation function, motivated by our interest in the low-frequency fluctuations. An analogous treatment can, however, easily be performed for the zero-lag correlations. In contrast to infinite networks with sparse connectivity (Inline graphic, Inline graphic), in the case of finite networks, pairs of neurons must be distinguished according to whether they are synaptically connected or not in order to arrive at a self-consistent theory for the averaged correlations. By providing explicit expressions for correlations between connected and unconnected neurons, the current work supplies the tools to relate experimentally observed spiking correlations to the underlying synaptic connectivity.

The quantification of pairwise correlations is a necessary prerequisite to understand how correlation-sensitive synaptic plasticity rules, like spike-timing-dependent plasticity [62], interact with the recurrent network dynamics [63]. Existing theories quantifying correlations employ stochastic neuron models and are limited to purely excitatory networks [63]–[65]. Here, we provide an analytical equivalence relation between a reduced linear model and spiking integrate-and-fire neurons describing fluctuations correctly up to linear order. A formally similar approach has been employed earlier to study delayed cumulative inhibition in spiking networks [66]. We show that the correlations observed in recurrent networks in the asynchronous irregular regime are quantitatively captured for realistic synaptic coupling with postsynaptic potentials of up to about Inline graphic. The success of this approach can be explained by the linearization of the neural threshold units by the afferent noise experienced in the asynchronous regime. For linear neural dynamics, the second-order description of fluctuations is closed [67]. We exploit this finding by applying perturbation theory to the Fokker-Planck description of the integrate-and-fire neuron to obtain the linear input-output transfer at low frequencies [33], thereby determining the effective coupling in our linear model.

The scope of the theory presented in the current work is limited mainly by three assumptions. The first is the use of a linear theory which exhibits an instability as soon as a single eigenvalue of the effective connectivity matrix assumes a positive real part. This ultimately happens when increasing the synaptic coupling strength, because the eigenvalues of the random connectivity matrix are located in a circle centered in the left half of the complex plane with a radius given by the square root of the variance of the matrix elements [68], [69]. Nonlinearities, like those imposed by strictly positive firing rates, prevent such unbounded growth (or decay) by saturation. For nonlinear rate models with sigmoidal transfer functions, it has been shown that the activity of recurrent random networks of such units makes a transition to chaos at the point where the linearized dynamics would lose stability [70]. However, this point of transition is sharp only in the limit of infinitely large networks. From the population averaged firing rate and the pairwise correlations averaged over pairs of neurons considered in Fig. 7 we cannot conclude whether or not a transition to chaos occurs in the spiking network. In simulations and in the linearized reduced model, we could, however, observe that the distribution of pairwise correlations broadens when approaching the point of instability. Future work needs to examine this question in detail, e.g. by considering measures related to the Lyapunov exponent. Recently developed semi-analytical theories accounting for nonlinear neural features [71] may be helpful to answer this question. The second limiting factor of the current theory is the use of a perturbative approach to quantify the response of the integrate-and-fire model. Although the steady-state firing rate of the network is found as the fixed point of the nonlinear self-consistency equation, the response to a synaptic fluctuation is determined up to linear order in the amplitude of the afferent rate fluctuation, which is only valid for sufficiently small fluctuations. For larger input fluctuations, nonlinear contributions to the neural response can become more important [33]. Also for strong synaptic coupling, deviations from our theory are to be expected. Thirdly, the employment of Fokker-Planck theory to determine the steady-state firing rate and the response to incoming fluctuations assumes uncorrelated presynaptic firing with Poisson statistics and synaptic amplitudes which are vanishingly small compared to the distance between reset and threshold. For larger synaptic amplitudes, the Fokker-Planck theory becomes approximate and deviations are expected [33], [34], [72], [73]. This can be observed in Fig. 7 A, showing a deviation between the self-consistent firing rate and the analytical prediction at about Inline graphic. In this work, we obtained a sufficiently precise self-consistent approximation of the correlation coefficient Inline graphic by relating the random recurrent network of spiking neurons in the asynchronous irregular state to a reduced linear model which obeys the same relation between Inline graphic and Inline graphic up to linear order. This reduced linear model, however, does not predict the absolute values of the variance Inline graphic and covariance Inline graphic. The variance Inline graphic of the LIF model, for example, is dominated by nonlinear effects, such as the reset mechanism after each action potential.

Previous work [12], [31] has shown that the single spike-train statistics can be approximated in the diffusion approximation if the recurrent firing rate in the network is determined by mean-field theory. One may therefore extend our approach and determine the integral auto-correlation function as Inline graphic with the Fano factor Inline graphic (see [51]). For a renewal process and long observation times, the Fano factor is given by Inline graphic [74], [75]. The coefficient of variation Inline graphic can be obtained from the diffusion approximation of the membrane-potential dynamics (App. A.1 in [12]). The covariance Inline graphic can then be determined by (19). Another possibility is the use of a refractory-density approach [76], [77].

The spike-train correlation as a function of the time lag is an experimentally accessible measure. Future theoretical work should therefore also focus on the temporal structure of correlations in recurrent networks, going beyond zero-lag correlations [15], [17] and the integral measures studied in the current work. This would make it possible to compare the theoretical predictions to direct experimental observations in more detail. Moreover, the relative spike timing between pairs of neurons is a decisive property for Hebbian learning [78] in recurrent networks, as implemented by spike-timing-dependent plasticity [62], and is suspected to play a role in synapse formation and elimination [79].

The simulation experiments performed in this work revealed that the suppression of correlations is vulnerable to certain types of manipulations of the feedback loop. One particular biological source of additional variability in the feedback loop is probabilistic vesicle release at synapses [80]. In feedforward networks, such unreliable synaptic transmission has been shown to decrease the transmission of correlations by pairs of neurons [22]. Stochastic synaptic release is very similar to the replacement of the population activity in the feedback branch by rate-modulated Poisson processes that conserve the population rate. In these simulations we observed an increase of correlations due to the additional noise caused by the stochastic Poisson realization. Future work should investigate more carefully which of the two opposing effects of probabilistic release on correlations dominates in recurrent networks.

The results of our study not only shed light on the decorrelation of spiking activity in recurrent neural networks. They also demonstrate that a standard modeling approach in theoretical neuroscience is problematic: When studying the dynamics of a local neural network (e.g. a "cortical column"), it is a common strategy to replace external inputs to this neural population Inline graphic by spike-train ensembles with some predefined statistics, e.g. by stationary Poisson processes. Most neural systems, however, exhibit a high degree of recurrence. Nonlocal input to the population Inline graphic, i.e. input from other brain areas, must therefore be expected to be shaped by the activity within Inline graphic. The omission of these feedback loops can lead to qualitatively wrong predictions of the population statistics. The analytical results for the correlation structure of recurrent networks presented in this study provide the means for a more realistic specification of such external activity.

Methods

LIF network model

In the present study, we consider two types of sparsely connected random networks: networks with purely inhibitory coupling ("I networks") and networks with both excitatory and inhibitory interactions ("EI networks"). To illustrate the main findings of this study and to test the predictions of the linear model described in Methods: "Linearized network model", both architectures were implemented as networks of leaky integrate-and-fire (LIF) neurons. The model details and parameters are reported in Table 1 and Table 2, respectively. All network simulations were carried out with NEST (www.nest-initiative.org, [81]).
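
For orientation, a minimal NEST sketch of a purely inhibitory random network might look as follows; this is not the authors' simulation script, and the parameter values, noise settings and delay are placeholder assumptions (syntax following NEST 3.x conventions).

    import nest

    nest.ResetKernel()
    N, K = 1000, 100                        # network size and in-degree (placeholder values)
    neurons = nest.Create("iaf_psc_delta", N)
    noise = nest.Create("noise_generator", params={"mean": 300.0, "std": 100.0})  # pA, hypothetical

    nest.Connect(noise, neurons)            # white-noise current input to every neuron
    nest.Connect(neurons, neurons,
                 conn_spec={"rule": "fixed_indegree", "indegree": K},
                 syn_spec={"weight": -1.0, "delay": 1.0})  # inhibitory coupling (placeholder weight), 1 ms delay

    nest.Simulate(1000.0)                   # simulate for 1000 ms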

Table 1. LIF network: Model overview.

A: Model summary
Populations: one (inhibitory network) or two (excitatory-inhibitory network)
Connectivity: random, fixed in-degrees
Neuron: leaky integrate-and-fire (LIF)
Synapse: current-based, delta-shaped postsynaptic currents with constant amplitudes
Input: uncorrelated Gaussian white noise currents

Table 2. LIF network: Parameters (default values).

A: Connectivity
Name | Value | Description
Inline graphic | Inline graphic (inhibitory network) | in-degree
 | Inline graphic (E-I network) | excitatory in-degree
Inline graphic | Inline graphic | network connectivity
Inline graphic | Inline graphic (E-I network) | relative size of inhibitory subpopulation

Linearized network model

In this section we show how the dynamics of the spiking network can be reduced to an effective linear model with fluctuations fulfilling, by construction, the same relationship as the original system up to linear order. We first outline the conceptual steps of this reduction, and then provide the formal derivation.

We make use of the observation that the effect of a single synaptic impulse on the output activity of a neuron is typically small. Writing the response spike train of a neuron as a functional of the history of all incoming impulses therefore allows us to perform a linearization with respect to each of the afferent spike trains. Formally, this corresponds to a Volterra expansion up to linear order, the generalization of a Taylor series to functionals. In Methods : Response kernel of the LIF model”, we perform this linearization explicitly for the example of the LIF model. This determines how the linear response kernel depends on the parameters of the LIF model. The linear dependence on the input leads to an approximate convolution equation (31) linearly connecting the auto- and the cross-correlation functions in the network. As this equation is complicated to solve directly, we introduce a reduced linear model (35) obeying the same convolution equation. The reduced linear model can be solved by standard Fourier methods and yields an explicit form for the covariance matrix in the frequency domain (37). The diagonal and off-diagonal elements of the Inline graphic dimensional covariance matrix Inline graphic in (56) correspond to the power-spectra of individual neurons and the cross-spectra of individual neuron pairs, respectively. As, in this linear approximation, both the auto- and the cross-covariances are proportional to the variance of the driving noise, the resulting correlation coefficients are independent of the noise amplitude (see Methods : Population averaged correlations in the linear EI network”). As shown in Results : Suppression of population-activity fluctuations by negative feedback” and Results : Population-activity fluctuations in excitatory-inhibitory networks”, the suppression of fluctuations in recurrent networks is most pronounced at low frequencies. It is therefore sufficient to restrict the discussion to the zero-frequency limit Inline graphic. Note that the zero-frequency variances and covariances correspond to the integrals of the auto- and cross-correlation functions in the time domain. In this limit, we may combine the two different sources of fluctuations caused by the spiking of the neurons and by external input to the network into a single source of white noise with variance Inline graphic (see (39)).
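
As a numerical illustration of this zero-frequency limit, the sketch below builds a toy connectivity matrix and evaluates the covariance matrix under the assumption that the response kernel is normalized to unit integral, so that (37) reduces to C(0) = (1 - W)^{-1} D (1 - W)^{-T}; all network parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy purely inhibitory network: N neurons, fixed in-degree K, coupling w.
    N, K, w = 200, 20, -0.05
    W = np.zeros((N, N))
    for i in range(N):
        W[i, rng.choice(N, size=K, replace=False)] = w

    D = np.eye(N)                               # unit noise variance per neuron
    A = np.linalg.inv(np.eye(N) - W)
    C = A @ D @ A.T                             # zero-frequency covariance matrix

    auto = np.mean(np.diag(C))
    cross = (C.sum() - np.trace(C)) / (N * (N - 1))
    print(f"average auto-covariance : {auto:.4f}")
    print(f"average cross-covariance: {cross:+.5f} (weakly negative for inhibitory coupling)")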

In general, the spiking activity Inline graphic of neuron Inline graphic at time Inline graphic is determined by the entire history Inline graphic of the activity of all neurons Inline graphic in the network up to time Inline graphic. Formally, this dependence can be expressed by a functional

[Equation (26)]

The subscript Inline graphic in Inline graphic indicates that Inline graphic (causality). In the following, we use the abbreviation Inline graphic. The effect of a single synaptic input on the state of a neuron is typically small. We therefore approximate the influence of an incoming spike train on the activity of the target neuron up to linear order. The sensitivity of neuron Inline graphic's activity to the input from neuron Inline graphic can be expressed by the functional derivative of Inline graphic with respect to input spike train Inline graphic:

[Equation (27)]

It represents the response of the functional to a single Inline graphic-shaped perturbation in input channel Inline graphic at time Inline graphic, normalized by the perturbation amplitude Inline graphic. In (27), Inline graphic denotes the unit vector with elements Inline graphic and Inline graphic for all Inline graphic. By introducing the vector Inline graphic of spike trains with the Inline graphic-th component set to zero, Inline graphic can be approximated by

[Equation (28)]

Eq. (28) is a Volterra expansion up to linear order, the formal extension of a Taylor expansion of a function of Inline graphic variables to a functional, truncated after the linear term. With the linearized dynamics (28), the pairwise spike-train cross-correlation function between two neurons Inline graphic and Inline graphic is given by

[Equation (29)]

Note that (29) is valid only for positive time lags Inline graphic, because for Inline graphic a possible causal influence of Inline graphic on Inline graphic is not expressed by the functional. Here, Inline graphic denotes the average across the ensemble of realizations of spike trains in the stationary state of the network (e.g. the ensemble resulting from different initial conditions), and Inline graphic the centralized (zero mean) spike train. In the last line in (29), the average Inline graphic is split into the average Inline graphic across all realizations of spike trains excluding Inline graphic and the average Inline graphic across all realizations of Inline graphic. Note that the latter does not affect the functional derivative because it is, by construction, independent of the actual realization of Inline graphic. A consistent approximation up to linear order is equivalent to the assumption that for all Inline graphic the linear dependence of the functional on Inline graphic is completely contained in the respective derivative with respect to Inline graphic (28). Dependencies beyond linear order include higher-order derivatives and are neglected in this approximation. This is equivalent to neglecting the dependence of Inline graphic on Inline graphic for any Inline graphic. Hence, we can average the inner term over Inline graphic and Inline graphic separately. In the stationary state, this correlation can only depend on Inline graphic and equals the auto- or the cross-correlation function:

[Unnumbered display equation]

The pairwise spike-train correlation function is therefore given by

[Unnumbered display equation]

where we used the fact that Inline graphic for any functional Inline graphic that does not depend on Inline graphic. The average of the functional derivative has the intuitive meaning of a response kernel with respect to a Inline graphic-shaped perturbation of input Inline graphic at time Inline graphic. Averaged over the realizations of the stationary network activity this response can only depend on the relative time Inline graphic. In a homogeneous random network, the input statistics (number of synaptic inputs and synaptic weights) and the parameters of the internal dynamics are identical for each cell, so that the temporal shape Inline graphic of the response kernel can be assumed to be the same for all neurons. The synaptic coupling strength from neuron Inline graphic to neuron Inline graphic determines the prefactor Inline graphic:

[Equation (30)]

In this notation, the linear equation connecting the auto-correlations Inline graphic and the cross-correlations Inline graphic takes the form

[Equation (31)]

Eq. (31) can be solved numerically or by means of Wiener-Hopf theory taking the symmetry Inline graphic into account [82].

Our aim is to find a simpler model which is equivalent to the LIF dynamics in the sense that it fulfills the same equation (31). Let Inline graphic denote the vector of dynamic variables of this reduced model. In analogy to the original model, we define the cross-correlation for Inline graphic and Inline graphic as

[Equation (32)]

The simplest functional Inline graphic consistent with equation (31) is linear in Inline graphic. Since we require equivalence only with respect to the ensemble averaged quantities, i.e. Inline graphic, the reduced activity and therefore Inline graphic can contain a stochastic element which would disappear after averaging. The linear functional

[Equation (33)]

with a pairwise uncorrelated, centralized white noise Inline graphic (Inline graphic) fulfills the requirement, since for Inline graphic and Inline graphic

[Unnumbered display equation]

This equation has the same form as (31), so both models, within the linear approximation, exhibit an identical relationship between the auto- and cross-covariances. The physical meaning of the noise Inline graphic is the variance caused by the spiking of the neurons. The auto-correlation function of a spike train of rate Inline graphic has a Inline graphic-peak of weight Inline graphic. The reduced model (33) exhibits such a Inline graphic-peak if we set Inline graphic. A related approach has been pursued before (see Sec. 3.5 in [31]) to determine the auto-correlation of the population averaged firing rate. This similarity will be discussed in detail below.

So far, we considered a network without external drive, i.e. all spike trains Inline graphic originated from within the network. If the network is driven by external input, each neuron receives, in addition, synaptic input Inline graphic from neurons outside the network. We assume uncorrelated external drive Inline graphic. In the reduced model, this input constitutes a separate source of noise:

$y_i(t) = \sum_j w_{ij}\,(h \ast y_j)(t) + \xi_i(t) + (h^{x}_i \ast x_i)(t) \qquad (34)$

Here, $\ast$ denotes the convolution and $h^{x}_i$ the response kernel with respect to an external input. For simplicity, let us assume that the shape of these kernels is identical for all pairs of pre- and postsynaptic sources, i.e. $h^{x}_i = h$. If we further absorb the synaptic amplitude of the external drive into the strength of the noise $x_i$, the linearized dynamics (34) can be written in matrix notation

$\mathbf{y}(t) = \big(h \ast [\mathsf{W}\mathbf{y} + \mathbf{x}]\big)(t) + \boldsymbol{\xi}(t) \qquad (35)$

with $\mathsf{W} = (w_{ij})$. The reduced model (35) can be solved directly by means of the Fourier transform:

$\mathbf{Y}(\omega) = \big[\mathbb{1} - H(\omega)\,\mathsf{W}\big]^{-1}\,\big[H(\omega)\,\mathbf{X}(\omega) + \boldsymbol{\Xi}(\omega)\big] \qquad (36)$

The full covariance matrix follows by averaging over the sources of noise $\mathbf{x}$ and $\boldsymbol{\xi}$ as

$\mathsf{C}(\omega) = \langle \mathbf{Y}(\omega)\,\mathbf{Y}^\dagger(\omega)\rangle = \big[\mathbb{1} - H\mathsf{W}\big]^{-1}\,\big(|H|^2\,\mathsf{D}_x + \mathsf{D}_\xi\big)\,\big[\mathbb{1} - H\mathsf{W}\big]^{-\dagger} \qquad (37)$

The diagonal elements of $\mathsf{C}(\omega)$ represent the auto-covariances, the off-diagonal elements the cross-covariances. Both are proportional to the spectra $\mathsf{D}_x = \langle\mathbf{X}\mathbf{X}^\dagger\rangle$ and $\mathsf{D}_\xi = \langle\boldsymbol{\Xi}\boldsymbol{\Xi}^\dagger\rangle$ of the driving noise. This is consistent with (31), which is a linear relationship between the cross- and auto-covariances.
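For illustration, the spectral matrix (37) can be evaluated numerically for a single realization of a random coupling matrix. The following Python sketch assumes an exponential response kernel and white external and spiking noise; network size, connection probability and weights are illustrative choices, not parameters of the original study:

    import numpy as np

    # Sketch: spectral matrix of the linearized network, eqs. (36)-(37),
    # C(w) = [1 - H(w) W]^{-1} (|H|^2 Dx + Dxi) [1 - H(w) W]^{-dagger}.
    # Network size, connectivity, kernel and noise spectra are illustrative.
    rng = np.random.default_rng(1)
    N, p, J, g = 200, 0.1, 0.02, 5.0        # size, connection prob., weight, rel. inhibition
    n_exc = int(0.8 * N)
    W = np.zeros((N, N))
    mask = rng.random((N, N)) < p
    W[:, :n_exc][mask[:, :n_exc]] = J       # excitatory columns
    W[:, n_exc:][mask[:, n_exc:]] = -g * J  # inhibitory columns
    np.fill_diagonal(W, 0.0)

    tau = 0.01                              # time constant of the exponential kernel (s)
    def H(omega):                           # Fourier transform of the kernel, H(0) = 1
        return 1.0 / (1.0 + 1j * omega * tau)

    Dx, Dxi = np.eye(N), np.eye(N)          # white external and spiking noise spectra

    def spectral_matrix(omega):
        M = np.linalg.inv(np.eye(N) - H(omega) * W)
        return M @ (abs(H(omega))**2 * Dx + Dxi) @ M.conj().T

    C0 = spectral_matrix(0.0)               # zero-frequency auto- and cross-covariances
    off = ~np.eye(N, dtype=bool)
    print("mean variance   :", np.real(np.diag(C0)).mean())
    print("mean covariance :", np.real(C0[off]).mean())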

For networks which can be decomposed into homogeneous subpopulations, the $N$-dimensional system (35) can be further simplified by population averaging. Consider, for example, a homogeneous random network with purely inhibitory coupling. Assume that the neurons are randomly connected with probability $p$ and coupling strength $-J$ ($J > 0$). The average number of inputs and outputs per neuron (in-/out-degree) is thus given by $K = pN$. By introducing the population-averaged external input $x = \frac{1}{N}\sum_i x_i$, the averaged spiking noise $\xi = \frac{1}{N}\sum_i \xi_i$, and the effective coupling strength $w = KJ = pNJ$, the dynamics of the population-averaged activity $y = \frac{1}{N}\sum_i y_i$ becomes

$y(t) = \big(h \ast [x - w\,y]\big)(t) + \xi(t) \qquad (38)$

Here we assumed that the summed outgoing coupling $\sum_i w_{ij}$ is independent of the presynaptic neuron $j$ and can be replaced by $-w$. Note that this replacement is exact for networks with homogeneous out-degree, i.e. if the number of outgoing connections is identical for each neuron $j$. For large random networks with binomially distributed out-degrees (e.g. Erdős-Rényi networks or random networks with constant in-degree), (38) serves as an approximation.

To relate our approach to the treatment of finite-size fluctuations in [31], consider the population-averaged dynamics (38) of a single population with mean firing rate $r$. We set $\rho_i = r$ for all single-neuron noises $\xi_i$ in order for the reduced model's auto-covariances to reproduce the $\delta$-peak of the spiking dynamics. In the population-averaged dynamics, this leads to the variance of the noise $\xi = \frac{1}{N}\sum_i \xi_i$ given by $\langle \xi(t)\,\xi(t')\rangle = \frac{r}{N}\,\delta(t-t')$. This agrees with the variance of the population rate in [31]. Therefore, the dynamics of the population-averaged quantity $y$ in (38) agrees with the earlier definition of a population-averaged firing rate for the spiking network [31].

In equation (38), two distinct sources of noise appear: the noise $x$ due to external uncorrelated activity and the noise $\xi$ which is required to obtain the $\delta$-peak of the auto-correlation functions of the reduced model. The qualitative results of Results: “Suppression of population-activity fluctuations by negative feedback” and Results: “Population-activity fluctuations in excitatory-inhibitory networks”, however, can be understood with an even simpler model. As we are mainly concerned with the low-frequency fluctuations, we only need a model with the same limit $\omega \to 0$. As we normalized the kernel so that $H(0) = \int h(t)\,dt = 1$, we can combine both sources of noise into a single white noise $\mathbf{z}$ and require $\mathsf{D} = \langle\mathbf{Z}\mathbf{Z}^\dagger\rangle = \mathsf{D}_x + \mathsf{D}_\xi$ in (36) in the zero-frequency limit. Hence, in Results: “Suppression of population-activity fluctuations by negative feedback” and Results: “Population-activity fluctuations in excitatory-inhibitory networks”, we consider the model

$\mathbf{y}(t) = \big(h \ast [\mathsf{W}\mathbf{y} + \mathbf{z}]\big)(t) \qquad (39)$

with a pairwise uncorrelated, centralized white noise $\mathbf{z}$ to explain the suppression of fluctuations at low frequencies.

As a second example, consider a random network composed of an excitatory and an inhibitory subpopulation $E$ and $I$ with population sizes $N_E$ and $N_I$, respectively. Assume that each neuron receives excitatory and inhibitory inputs from $E$ and $I$ with coupling strengths $J_E > 0$ and $J_I < 0$, respectively, and probability $p$, such that the average excitatory and inhibitory in-/out-degrees are given by $K_E = pN_E$ and $K_I = pN_I$, respectively. The dynamics of the subpopulation-averaged activities $(y_E, y_I)$ is given by (35) with subpopulation-averaged noise $(x_E, x_I)$ and $(\xi_E, \xi_I)$ and effective coupling

$\mathsf{W} = w \begin{pmatrix} 1 & -g \\ 1 & -g \end{pmatrix} \qquad (40)$

Here, $w = K_E J_E$ denotes the effective coupling strength, $g = -K_I J_I/(K_E J_E)$ the effective balance parameter, and $(x_E, x_I)$ and $(\xi_E, \xi_I)$ the (sub)population-averaged external and spiking sources of noise, respectively. Again, the reduction of the $N$-dimensional linear dynamics to the two-dimensional dynamics with coupling (40) is exact if the out-degrees are constant within each subpopulation. As before, both sources of noise can be combined into a single source of noise $(z_E, z_I)$ if we are only interested in the low-frequency behavior of the model, leading to the dynamics (39) with the effective coupling (40).

The linear theory is only valid in the domain of its stability, which is determined by the eigenvalue spectrum of the effective coupling matrix $\mathsf{W}$. For random coupling matrices, the eigenvalues are located within a circle whose radius $\rho$ is given by the square root of the summed variance of the matrix entries [69]. Writing the effective dynamics for the exponential kernel $h(t) = \tau^{-1} e^{-t/\tau}\,\Theta(t)$ as a differential equation, $\tau\,\dot{\mathbf{y}} = -\mathbf{y} + \mathsf{W}\mathbf{y} + \mathbf{z}$, the eigenvalues of the right-hand-side matrix $(\mathsf{W} - \mathbb{1})/\tau$ are confined to a circle centered at $-1/\tau$ in the complex plane with radius $\rho/\tau$. Given $\rho > 1$, eigenvalues might exist which have a positive real part, leading to unstable dynamics. This condition is indicated by the vertical dotted lines in Fig. 6 A–F and Fig. 7 B–D near the critical coupling strength. Beyond this line, the linear model predicts an explosive growth of fluctuations. In the LIF-network model, an unbounded growth is avoided by the nonlinearities of the single-neuron dynamics.
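A corresponding numerical check of the stability condition is sketched below; the coupling matrix, kernel time constant and the circular-law estimate of the bulk eigenvalue radius are illustrative assumptions in the spirit of [69]:

    import numpy as np

    # Sketch of the stability condition: for the exponential kernel, the linear
    # dynamics tau*dy/dt = -y + W*y + z is stable if all eigenvalues of (W - 1)/tau
    # have negative real parts. W is an illustrative random EI matrix as in the
    # sketch above; the circular law gives a rough estimate of its bulk radius rho.
    rng = np.random.default_rng(2)
    N, p, J, g, tau = 200, 0.1, 0.02, 5.0, 0.01
    n_exc = int(0.8 * N)
    W = np.zeros((N, N))
    mask = rng.random((N, N)) < p
    W[:, :n_exc][mask[:, :n_exc]] = J
    W[:, n_exc:][mask[:, n_exc:]] = -g * J
    np.fill_diagonal(W, 0.0)

    eig = np.linalg.eigvals((W - np.eye(N)) / tau)
    rho = np.sqrt(N * np.var(W))   # circular-law estimate of the bulk radius of W
    print("largest real part of the eigenvalues:", eig.real.max())
    print("bulk radius estimate rho:", rho, "(instability possible once rho > 1)")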

Response kernel of the LIF model

We now perform the formal linearization (30) for a network of $N$ LIF neurons $i = 1,\ldots,N$. A similar approach has been employed in previous studies to understand the population dynamics in these networks [12], [31]. We consider the input $\sum_j J_{ij}\,s_j(t)$ received by neuron $i$ from the local network, where $s_j(t)$ denotes the spike train of the neuron $j$ projecting to neuron $i$ with synaptic weight $J_{ij}$. Given the time-dependent firing rate $r_j(t)$ of each afferent, and assuming small correlations and small synaptic weights, the total input to neuron $i$ can be replaced by a Gaussian white noise with mean $\mu_i(t)$ and variance $\sigma_i^2(t)$,

$\mu_i(t) = \tau_{\mathrm{m}} \sum_j J_{ij}\,r_j(t), \qquad \sigma_i^2(t) = \tau_{\mathrm{m}} \sum_j J_{ij}^2\,r_j(t), \qquad (41)$

where $j$ runs over all synaptic inputs. $J_{ij}$ denotes the amplitude of the postsynaptic potential evoked by synapse $j \to i$, and $\tau_{\mathrm{m}}$ is the membrane time constant of the model. In the stationary state, the firing rate of each afferent is well described by the constant time average $r_j$. The working point at which we perform the linearization of the neural response (30) is then given by equations analogous to (41), resulting in a constant mean $\mu$ and variance $\sigma^2$. If the amplitude of each postsynaptic potential is small compared to the distance of the membrane potential to threshold, the dynamics of the LIF model can be approximated by a diffusion process, employing Fokker-Planck theory [83]. The stationary firing rate of the neuron is then given by [12], [31], [84]

$r = \left(\tau_{\mathrm{ref}} + \tau_{\mathrm{m}}\sqrt{\pi}\int_{y_r}^{y_\theta} e^{u^2}\,\big(1 + \operatorname{erf}(u)\big)\,du\right)^{-1}, \qquad y_\theta = \frac{V_\theta - \mu}{\sigma},\quad y_r = \frac{V_r - \mu}{\sigma}, \qquad (42)$

with the reset voltage $V_r$, the threshold voltage $V_\theta$ and the refractory time $\tau_{\mathrm{ref}}$. In homogeneous random networks, the stationary rate (Fig. 7 A) is the same for all neurons. It is determined in a self-consistent manner [12] as the fixed point of (42). The stationary mean $\mu$ and variance $\sigma^2$ are determined by the stationary rate. To determine the kernel (30) we need to consider how a $\delta$-shaped deflection in the input to this neuron at time point $t'$ affects its output, up to linear order in the amplitude of the fluctuation. In the stationary state, we may set $r_j(t) = r_j$. It is therefore sufficient to focus on the effect of a single fluctuation

$s_j(t) = r_j + \epsilon\,\delta(t - t') \qquad (43)$

We therefore ask how the density of spikes per time $r_i(t)$ of neuron $i$, averaged over different realizations of the remaining inputs to neuron $i$, changes in response to the fluctuation (43) of the presynaptic neuron $j$ in the limit of vanishing amplitude $\epsilon$. This kernel $w_{ij}\,h(t-t')$ (30) is identical to the impulse response of the neuron and can directly be measured in simulation by trial averaging over many responses to the given $\delta$-deflection (43) in the input (see Fig. 8 A). For the theory of low-frequency fluctuations, we only need the integral of the kernel, also known as the DC susceptibility,

$w_{ij}\,H(0) = w_{ij}\int_0^\infty h(t)\,dt = \frac{\partial r_i}{\partial r_j} = \frac{\partial r_i}{\partial \mu}\,\frac{\partial \mu}{\partial r_j} + \frac{\partial r_i}{\partial \sigma^2}\,\frac{\partial \sigma^2}{\partial r_j} = \tau_{\mathrm{m}}\left(J_{ij}\,\frac{\partial r_i}{\partial \mu} + J_{ij}^2\,\frac{\partial r_i}{\partial \sigma^2}\right) \qquad (44)$

The second equality follows from the equivalence of the integral of the impulse response and the step response in linear approximation [21], [33]. Following from (41), both mean and variance are perturbed as $\delta\mu = \tau_{\mathrm{m}} J_{ij}\,\delta r_j$ and $\delta\sigma^2 = \tau_{\mathrm{m}} J_{ij}^2\,\delta r_j$ in response to a step $\delta r_j$ in the afferent rate $r_j$. Moreover, we used the chain rule $\frac{\partial r_i}{\partial r_j} = \frac{\partial r_i}{\partial\mu}\frac{\partial\mu}{\partial r_j} + \frac{\partial r_i}{\partial\sigma^2}\frac{\partial\sigma^2}{\partial r_j}$. The variation of the afferent firing rate hence co-modulates the mean and the variance, and both modulations need to be taken into account to derive the neural response [31]. Although the finite amplitude of postsynaptic potentials has an effect on the response properties [33], [34], the integral response is rather insensitive to the granularity of the noise [33]. We therefore employ the diffusion approximation to linearize the dynamics of the LIF neuron around its working point characterized by the mean $\mu$ and the variance $\sigma^2$ of the total synaptic input. In (44), we evaluate the partial derivatives of $r$ with respect to $\mu$ and $\sigma^2$ using (42). First, observe that by the chain rule $\frac{\partial r}{\partial\mu} = \frac{\partial r}{\partial y_\theta}\frac{\partial y_\theta}{\partial\mu} + \frac{\partial r}{\partial y_r}\frac{\partial y_r}{\partial\mu}$. We then again make use of the chain rule, $\frac{\partial r}{\partial y_\theta} = -r^2\,\frac{\partial r^{-1}}{\partial y_\theta}$. Analogous expressions hold for the derivative with respect to $\sigma^2$. The first derivative yields $\frac{\partial r}{\partial y_\theta} = -r^2\,\tau_{\mathrm{m}}\sqrt{\pi}\,f(y_\theta)$ with $f(y) = e^{y^2}\big(1+\operatorname{erf}(y)\big)$; the one with respect to $y_r$ follows analogously, but with a negative sign. We further observe that $\frac{\partial y_\theta}{\partial\mu} = -\frac{1}{\sigma}$ and $\frac{\partial y_\theta}{\partial\sigma^2} = -\frac{y_\theta}{2\sigma^2}$ with $y_\theta = \frac{V_\theta - \mu}{\sigma}$. Taken together, we obtain the explicit result for (44)

$w_{ij}\,H(0) = \sqrt{\pi}\,(\tau_{\mathrm{m}}\, r)^2\left[\frac{J_{ij}}{\sigma}\,\big(f(y_\theta) - f(y_r)\big) + \frac{J_{ij}^2}{2\sigma^2}\,\big(y_\theta f(y_\theta) - y_r f(y_r)\big)\right] \qquad (45)$

Note that the modulation of $\mu$ results in a contribution to $w_{ij}$ that is linear in $J_{ij}$, whereas the modulation of $\sigma^2$ causes a quadratic dependence on $J_{ij}$. This expression therefore presents an extension of the integral response derived in [21], [85]. Fig. 8 B shows the comparison of the analytical expression (45) and direct simulation. The agreement is good over a large range of synaptic amplitudes $J_{ij}$ in the case of constant background noise caused by small synaptic amplitudes (here, weak excitatory and inhibitory background synapses). For background noise caused by stronger impulses, the deviations are expected to grow [33].
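The stationary rate (42) and the integrated response (44) can be evaluated numerically as sketched below. The derivatives with respect to $\mu$ and $\sigma^2$ are taken by finite differences rather than via the closed form (45); the membrane parameters, working point and synaptic amplitude are illustrative:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import erf

    # Sketch: stationary LIF rate (Siegert formula, eq. 42) and the integrated
    # response (DC susceptibility, eq. 44), with the derivatives with respect to
    # mu and sigma^2 taken numerically. All parameter values are illustrative.
    tau_m, tau_ref = 0.02, 0.002          # membrane and refractory time constants (s)
    V_th, V_r = 20.0, 0.0                 # threshold and reset (mV, relative to rest)

    def siegert_rate(mu, sigma):
        integrand = lambda u: np.exp(u**2) * (1.0 + erf(u))
        lower, upper = (V_r - mu) / sigma, (V_th - mu) / sigma
        integral, _ = quad(integrand, lower, upper)
        return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

    mu, sigma = 15.0, 5.0                 # working point of the input (mV)
    J = 0.1                               # PSP amplitude of the perturbed synapse (mV)
    r = siegert_rate(mu, sigma)

    eps = 1e-3                            # finite-difference step
    dr_dmu = (siegert_rate(mu + eps, sigma) - siegert_rate(mu - eps, sigma)) / (2 * eps)
    dr_dsig2 = (siegert_rate(mu, np.sqrt(sigma**2 + eps))
                - siegert_rate(mu, np.sqrt(sigma**2 - eps))) / (2 * eps)

    # eq. (44): integrated response to a rate step of a single afferent with weight J
    w_ij = tau_m * (J * dr_dmu + J**2 * dr_dsig2)
    print(f"stationary rate r = {r:.2f} spikes/s")
    print(f"integrated response w_ij = {w_ij:.5f}")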

Population-activity spectra in the linear model: feedback vs. feedforward scenario

The recurrent linear neural dynamics defined in the previous section is conveniently solved in the Fourier domain. The driving external Gaussian white noise $\mathbf{z}$ is mapped to the response $\mathbf{y}$ by means of the transfer matrix $\mathsf{T}(\omega)$. According to (39), it is given by $\mathsf{T}(\omega) = \big[\mathbb{1} - H(\omega)\,\mathsf{W}\big]^{-1} H(\omega)$. The covariance matrix in the frequency domain, the spectral matrix, thus reads

$\mathsf{C}(\omega) = \langle \mathbf{Y}(\omega)\,\mathbf{Y}^\dagger(\omega)\rangle = \mathsf{T}(\omega)\,\mathsf{D}\,\mathsf{T}^\dagger(\omega) \qquad (46)$

where we used $\langle \mathbf{Z}\mathbf{Z}^\dagger\rangle = \mathsf{D}$, and the expectation operator $\langle\cdot\rangle$ represents an average over noise realizations. To identify the effect of recurrence on the network dynamics, we replace the local feedback input by a feedforward input $\mathbf{q}$ with spectral matrix $\mathsf{A}(\omega) = \langle\mathbf{Q}\mathbf{Q}^\dagger\rangle = \mathsf{W}\,\mathsf{C}(\omega)\,\mathsf{W}^\dagger$. The resulting response firing rate is given by $\mathbf{Y}_{\mathrm{ff}}(\omega) = H(\omega)\,\big[\mathbf{Q}(\omega) + \mathbf{Z}(\omega)\big]$. Assuming that the feedforward input $\mathbf{q}$ is uncorrelated to the external noise source $\mathbf{z}$ ($\langle\mathbf{Q}\mathbf{Z}^\dagger\rangle = 0$) yields a response spectrum

$\mathsf{C}_{\mathrm{ff}}(\omega) = |H(\omega)|^2\,\big[\mathsf{A}(\omega) + \mathsf{D}\big] \qquad (47)$

Population-activity spectrum of the linear inhibitory network

In the Fourier domain, the solution of the mean-field dynamics (38) of the inhibitory network, with both noise sources combined into a single noise $Z$ as above, is $Y(\omega) = \frac{H(\omega)\,Z(\omega)}{1 + w\,H(\omega)}$. The power-spectrum $C(\omega) = \langle Y(\omega)\,Y^*(\omega)\rangle$ hence becomes

$C(\omega) = \frac{|H(\omega)|^2}{\big|1 + w\,H(\omega)\big|^2}\,D \qquad (48)$

using the spectrum of the noise, $\langle Z(\omega)\,Z^*(\omega)\rangle = D$.

We compare this power-spectrum to the case where the feedback loop is opened, i.e. where the recurrent input is replaced by feedforward input with unchanged auto-statistics $w^2\,C(\omega)$, but which is uncorrelated to the external input. The resulting power-spectrum is given by (47) as $C_{\mathrm{ff}}(\omega) = |H(\omega)|^2\,\big[w^2\,C(\omega) + D\big]$.
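The suppression of low-frequency fluctuations by the intact feedback loop can be illustrated with a few lines of code. The sketch below lumps the external and spiking noise into a single white source, as in (39); the effective coupling strength, noise level and kernel time constant are arbitrary choices:

    import numpy as np

    # Sketch: low-frequency power of the population rate with intact inhibitory
    # feedback versus the opened-loop (feedforward) scenario. External and spiking
    # noise are lumped into one white source of spectrum D, as in (39); the
    # effective coupling w, D and the kernel time constant are illustrative.
    w, D, tau = 20.0, 1.0, 0.01

    def Hf(omega):
        return 1.0 / (1.0 + 1j * omega * tau)   # Fourier transform of the exponential kernel

    def C_feedback(omega):
        return abs(Hf(omega))**2 * D / abs(1.0 + w * Hf(omega))**2

    def C_feedforward(omega):
        # recurrent input replaced by a surrogate with the same auto-statistics
        # (spectrum w^2 * C_feedback) but uncorrelated to the external drive
        return abs(Hf(omega))**2 * (w**2 * C_feedback(omega) + D)

    print("zero-frequency power, feedback   :", C_feedback(0.0))
    print("zero-frequency power, feedforward:", C_feedforward(0.0))
    print("ratio feedforward / feedback     :", C_feedforward(0.0) / C_feedback(0.0))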

Population-activity spectra of the linear excitatory-inhibitory network

In a homogeneous random network of excitatory and inhibitory neurons, the population-averaged activity with the effective coupling (40) can be solved in the Schur basis (9) introduced in Results: “Population-activity fluctuations in excitatory-inhibitory networks”,

$Y_\Delta(\omega) = H(\omega)\,Z_\Delta(\omega), \qquad Y_\Sigma(\omega) = \frac{H(\omega)}{1 - w(1-g)\,H(\omega)}\,\big[Z_\Sigma(\omega) + w(1+g)\,H(\omega)\,Z_\Delta(\omega)\big] \qquad (49)$

with $Y_{\Sigma,\Delta} = \tfrac{1}{2}\,(Y_E \pm Y_I)$ and $Z_{\Sigma,\Delta} = \tfrac{1}{2}\,(Z_E \pm Z_I)$, and noise spectra $D_{\Sigma,\Delta} = \langle|Z_{\Sigma,\Delta}|^2\rangle$ (for statistically identical, mutually uncorrelated noise to the two populations, $\langle Z_\Sigma Z_\Delta^*\rangle = 0$). The power of the population rate therefore is

$\langle|Y_\Sigma(\omega)|^2\rangle = \frac{|H(\omega)|^2}{\big|1 - w(1-g)\,H(\omega)\big|^2}\,\big[D_\Sigma + w^2(1+g)^2\,|H(\omega)|^2\,D_\Delta\big] \qquad (50)$

The fluctuations of the excitatory and the inhibitory population follow as

$Y_{E,I}(\omega) = Y_\Sigma(\omega) \pm Y_\Delta(\omega) \qquad (51)$

So the power-spectra are

$\langle|Y_{E,I}(\omega)|^2\rangle = \langle|Y_\Sigma(\omega)|^2\rangle + \langle|Y_\Delta(\omega)|^2\rangle \pm 2\,\operatorname{Re}\langle Y_\Sigma(\omega)\,Y_\Delta^*(\omega)\rangle \qquad (52)$

Replacing the recurrent input of the sum activity $y_\Sigma$ by an activity $q_\Sigma$ with the same auto-statistics, but which is uncorrelated to the remaining input into $y_\Sigma$ (Fig. 5 D′), results in the fluctuations

$Y^{\mathrm{ff}}_\Sigma(\omega) = H(\omega)\,\big[Q_\Sigma(\omega) + Z_\Sigma(\omega)\big], \qquad \langle|Q_\Sigma|^2\rangle = \big\langle\big|w(1-g)\,Y_\Sigma + w(1+g)\,Y_\Delta\big|^2\big\rangle, \quad \langle Q_\Sigma Z_\Sigma^*\rangle = \langle Q_\Sigma Z_\Delta^*\rangle = 0 \qquad (53)$

The power-spectrum of the sum activity therefore becomes

$\langle|Y^{\mathrm{ff}}_\Sigma(\omega)|^2\rangle = |H(\omega)|^2\,\Big[\big\langle\big|w(1-g)\,Y_\Sigma(\omega) + w(1+g)\,Y_\Delta(\omega)\big|^2\big\rangle + D_\Sigma\Big] \qquad (54)$

If, alternatively, the excitatory and the inhibitory feedback terms $w\,y_E$ and $-gw\,y_I$ are replaced by uncorrelated feedforward inputs $q_E$ and $q_I$ with power-spectra $w^2\,\langle|Y_E|^2\rangle$ and $g^2w^2\,\langle|Y_I|^2\rangle$ (Fig. 5 C,D), the spectrum of the sum activity reads

$\langle|Y^{\mathrm{ff}}_\Sigma(\omega)|^2\rangle = |H(\omega)|^2\,\big[w^2\,\langle|Y_E(\omega)|^2\rangle + g^2w^2\,\langle|Y_I(\omega)|^2\rangle + D_\Sigma\big] \qquad (55)$

The limit (14) for inhibition-dominated networks with $g > 1$ can be obtained from this and the former expressions by taking $\omega \to 0$ and assuming strong coupling $w(g-1) \gg 1$.
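The Schur structure underlying (49) can be made explicit numerically. The following sketch computes the Schur form of the effective coupling matrix (40) for illustrative values of $w$ and $g$; the sign of the off-diagonal entry depends on the orthonormal basis returned by the routine:

    import numpy as np
    from scipy.linalg import schur

    # Sketch: Schur form of the effective coupling matrix W = w * [[1, -g], [1, -g]]
    # of eq. (40). The triangular factor exposes the self-feedback w*(1-g) of the
    # sum mode and a purely feedforward coupling from the difference mode into the
    # sum mode of magnitude w*(1+g). The values of w and g are illustrative.
    w, g = 10.0, 5.0
    W = w * np.array([[1.0, -g],
                      [1.0, -g]])

    T, U = schur(W)                        # W = U @ T @ U.T with T upper triangular
    print("Schur factor T:\n", np.round(T, 3))
    print("eigenvalues (diagonal of T):", np.round(np.diag(T), 3))   # w*(1-g) and 0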

Population averaged correlations in the linear EI network

In this subsection, we derive a self-consistency equation for the covariances in a recurrent network. We start from (37) (we drop the frequency argument and the superscript of $\mathsf{C}$ for brevity), multiply by $\big[\mathbb{1} - H\mathsf{W}\big]$ from the left and its conjugate transpose from the right, and obtain

$\big[\mathbb{1} - H\mathsf{W}\big]\,\mathsf{C}\,\big[\mathbb{1} - H\mathsf{W}\big]^{\dagger} = |H|^2\,\mathsf{D}_x + \mathsf{D}_\xi \qquad (56)$

We assume a recurrent network of $N_E$ excitatory and $N_I$ inhibitory neurons, in which each neuron receives $K_E$ excitatory inputs of weight $w_E > 0$ and $K_I$ inhibitory inputs of weight $w_I < 0$, drawn randomly from the presynaptic pool of neurons. To obtain a theory for the variances and covariances at zero frequency (with $H(0) = 1$), we abbreviate $\mathsf{C}(0)$ by $\mathsf{C}$ and denote the zero-frequency spectrum of the combined noise driving neuron $i$ by $d_i$. For a population-averaged theory, we need to replace in (56) the variances $c_{ii}$ of an individual neuron by the population average and replace the covariance $c_{ij}$ for a given pair of neurons $(i,j)$ by the average over pairs that are statistically equivalent to $(i,j)$. For a pair $(i,j)$ of neurons we will show that the set of equivalent pairs depends on the current realization of the connectivity, since unconnected pairs are not equivalent to connected ones. Therefore it is necessary to first average the covariance matrix over statistically equivalent neuron pairs given a fixed connectivity and to subsequently average over all possible realizations of the connectivity; the latter average will be denoted by $\langle\cdot\rangle_{\mathsf{W}}$. For compactness of the notation, we first perform the averaging for the general case, where neuron $i$ belongs to population $\alpha$ and neuron $j$ to population $\beta$, with $\alpha,\beta \in \{E,I\}$. We denote by $\mathcal{A}$, $\mathcal{B}$ the sets of neuron indices belonging to populations $\alpha$ and $\beta$, respectively. Subsequently replacing $\alpha$ and $\beta$ by all possible combinations $EE$, $EI$, $IE$, $II$, we obtain the averaged self-consistency equations for the network. We denote the number of incoming connections to a neuron of type $\alpha$ from the population of neurons of type $\beta$ as $K_{\alpha\beta}$ and the strength of such a synaptic coupling as $w_{\alpha\beta}$. Rewriting the self-consistency equation (56) explicitly with indices yields

$c_{ij} = \sum_k w_{ik}\,c_{kj} + \sum_k w_{jk}\,c_{ik} - \sum_{k,l} w_{ik}\,w_{jl}\,c_{kl} + \delta_{ij}\,d_i \qquad (57)$

The last equation shows that for a connected pair $(i,j)$ of neurons ($w_{ij} \neq 0$ or $w_{ji} \neq 0$) either of the first two sums contains a contribution $w_{ij}\,a_j$ or $w_{ji}\,a_i$ proportional to the variance of the projecting neuron. We therefore need to perform the averaging separately for connected and for unconnected pairs of neurons. We use the notation

$c^{\rightarrow}_{\alpha\beta} = \frac{1}{N^{\rightarrow}_{\alpha\beta}} \sum_{i\in\mathcal{A}}\ \sum_{j\in\mathcal{B},\, j\rightarrow i} c_{ij} \qquad (58)$

for the average covariance over pairs of neurons of types $\alpha,\beta$ with a connection from neuron $j$ to neuron $i$, where $N^{\rightarrow}_{\alpha\beta}$ is the number of neuron pairs connected in this way. An arrow to the right, $j \rightarrow i$, denotes a connection from neuron $j$ to neuron $i$. Note that we use the same letter $c$ for the population-averaged covariances and for the covariances of individual pairs. The distinction can be made by the indices: $i,j,k,l$ throughout index single neurons, $\alpha,\beta,\gamma,\delta$ identify one of the populations $E,I$. We denote the covariance averaged over unconnected pairs as

$c^{\not\rightarrow}_{\alpha\beta} = \frac{1}{N^{\not\rightarrow}_{\alpha\beta}} \sum_{i\in\mathcal{A}}\ \sum_{j\in\mathcal{B},\, j\not\rightarrow i} c_{ij} \qquad (59)$

We further use

$a_\alpha = \frac{1}{N_\alpha} \sum_{i\in\mathcal{A}} c_{ii} \qquad (60)$

for the integrated variance averaged over all neurons of type $\alpha$. The connected and the unconnected averaged covariances differ by the term proportional to the variance of the projecting neuron, as mentioned above,

$c^{\rightarrow}_{\alpha\beta} = c^{\not\rightarrow}_{\alpha\beta} + w_{\alpha\beta}\,a_\beta \qquad (61)$

As a consequence, we can express all quantities in terms of the averaged variance (60) and the covariance averaged over unconnected pairs (59). We now proceed to average the integrated variance over population $\alpha$. Since there are no self-connections in the network, we do not need to distinguish two cases here. Setting $i = j$ in (57) and inserting into the right hand side of (60), the first term of (57) contributes

$\frac{1}{N_\alpha}\sum_{i\in\mathcal{A}}\sum_k w_{ik}\,c_{ki} = \frac{1}{N_\alpha}\sum_{i\in\mathcal{A}}\sum_{\beta}\ \sum_{k\in\mathcal{B}} w_{ik}\,c_{ki} = \sum_\beta K_{\alpha\beta}\,w_{\alpha\beta}\,c^{\rightarrow}_{\alpha\beta} = \sum_\beta K_{\alpha\beta}\,w_{\alpha\beta}\,\big(c^{\not\rightarrow}_{\alpha\beta} + w_{\alpha\beta}\,a_\beta\big) \qquad (62)$

From the second to the third step we used that the sum over $k$ yields non-zero contributions only if neuron $k$ connects to neuron $i$. This happens in $K_{\alpha\beta}$ cases with the coupling weight $w_{\alpha\beta}$. Therefore the covariance averaged over connected pairs appears on the right hand side. In the last line we used the relation (61) to express the connected covariance in terms of the variance and the covariance over unconnected pairs. The second term in (57) yields an identical contribution because of the symmetry $c_{ij} = c_{ji}$. Up to here, the structure of the network only entered in terms of the in-degree of the neurons. The contribution of the third term follows from a similar calculation,

$\frac{1}{N_\alpha}\sum_{i\in\mathcal{A}}\sum_{k,l} w_{ik}\,w_{il}\,c_{kl} = \sum_\beta K_{\alpha\beta}\,w_{\alpha\beta}^2\,a_\beta + \frac{1}{N_\alpha}\sum_{i\in\mathcal{A}}\sum_{\beta,\gamma}\ \sum_{\substack{k\to i,\ l\to i\\ k\neq l}} w_{\alpha\beta}\,w_{\alpha\gamma}\,c_{kl} = \sum_\beta K_{\alpha\beta}\,w_{\alpha\beta}^2\,a_\beta + \sum_{\beta,\gamma} K_{\alpha\beta}\,K_{\alpha\gamma}\,w_{\alpha\beta}\,w_{\alpha\gamma}\,\big[p\,c^{\rightarrow}_{\beta\gamma} + p\,c^{\rightarrow}_{\gamma\beta} + (1-2p)\,c^{\not\rightarrow}_{\beta\gamma}\big] \qquad (63)$

From the second to the third step we assumed that among the $K_{\alpha\beta}K_{\alpha\gamma}$ pairs of neurons $(k,l)$ projecting to neuron $i$, the fraction $p$ has a connection $l \rightarrow k$. These pairs contribute with the connected covariance. The connections in the opposite direction contribute the other term of similar structure. We ignore multiple and reciprocal connections here, assuming that the connection probability is low. We introduce the shorthand $\bar{c}_{\beta\gamma}$ for the covariance averaged over all neuron pairs, including connected and unconnected pairs,

$\bar{c}_{\beta\gamma} = p\,c^{\rightarrow}_{\beta\gamma} + p\,c^{\rightarrow}_{\gamma\beta} + (1-2p)\,c^{\not\rightarrow}_{\beta\gamma} = c^{\not\rightarrow}_{\beta\gamma} + p\,\big(w_{\beta\gamma}\,a_\gamma + w_{\gamma\beta}\,a_\beta\big) \qquad (64)$

This is the covariance which is observed on average when picking a pair of neurons of types $\beta$ and $\gamma$ randomly. In this step, beyond the in-degree, the structure of the network entered through the expected number of connections between two populations. Taking all three terms together, we arrive at

$a_\alpha = 2\sum_\beta K_{\alpha\beta}\,w_{\alpha\beta}\,\big(c^{\not\rightarrow}_{\alpha\beta} + w_{\alpha\beta}\,a_\beta\big) - \sum_\beta K_{\alpha\beta}\,w_{\alpha\beta}^2\,a_\beta - \sum_{\beta,\gamma} K_{\alpha\beta}\,K_{\alpha\gamma}\,w_{\alpha\beta}\,w_{\alpha\gamma}\,\bar{c}_{\beta\gamma} + d_\alpha \qquad (65)$

The averaged covariances follow by similar calculations. Here we only need to calculate the average over unconnected pairs $c^{\not\rightarrow}_{\alpha\beta}$ given by (59), because the connected covariance follows from (61). The first sum in (57) contributes

$\frac{1}{N^{\not\rightarrow}_{\alpha\beta}}\ \sum_{\substack{i\in\mathcal{A},\ j\in\mathcal{B}\\ j\not\rightarrow i}}\ \sum_k w_{ik}\,c_{kj} = \sum_\gamma K_{\alpha\gamma}\,w_{\alpha\gamma}\,\bar{c}_{\gamma\beta} \qquad (66)$

where, due to the absence of a direct connection between $i$ and $j$, the term linear in the coupling and proportional to the variance is absent. From the symmetry $c_{ij} = c_{ji}$ it follows that the second term corresponds to an exchange of $\alpha$ and $\beta$ in the last expression. The third sum in (57) follows from a calculation analogous to the one before,

$\frac{1}{N^{\not\rightarrow}_{\alpha\beta}}\ \sum_{\substack{i\in\mathcal{A},\ j\in\mathcal{B}\\ j\not\rightarrow i}}\ \sum_{k,l} w_{ik}\,w_{jl}\,c_{kl} = \sum_\gamma \frac{K_{\alpha\gamma}K_{\beta\gamma}}{N_\gamma}\,w_{\alpha\gamma}\,w_{\beta\gamma}\,a_\gamma + \sum_{\gamma,\delta} K_{\alpha\gamma}\,K_{\beta\delta}\,w_{\alpha\gamma}\,w_{\beta\delta}\,\bar{c}_{\gamma\delta} \qquad (67)$

In summary, the contributions from (66) and (67) together result in the self-consistency equation for the covariance

$c^{\not\rightarrow}_{\alpha\beta} = \sum_\gamma K_{\alpha\gamma}\,w_{\alpha\gamma}\,\bar{c}_{\gamma\beta} + \sum_\gamma K_{\beta\gamma}\,w_{\beta\gamma}\,\bar{c}_{\alpha\gamma} - \sum_\gamma \frac{K_{\alpha\gamma}K_{\beta\gamma}}{N_\gamma}\,w_{\alpha\gamma}\,w_{\beta\gamma}\,a_\gamma - \sum_{\gamma,\delta} K_{\alpha\gamma}\,K_{\beta\delta}\,w_{\alpha\gamma}\,w_{\beta\delta}\,\bar{c}_{\gamma\delta} \qquad (68)$

We now simplify the expressions by assuming that the in-degree of a neuron and the incoming synaptic amplitudes do not depend on the type of the neuron, i.e. that excitatory and inhibitory neurons receive statistically the same input. Formally this means that we replace $K_{\alpha\beta}$ by $K_\beta$, the number of incoming connections from population $\beta$, and $w_{\alpha\beta}$ by $w_\beta$, the coupling strength of a projection from a neuron of type $\beta$. The covariance $\bar{c}_{\alpha\beta}$ then has two distinct contributions, a part $c^{\mathrm{disj}}_{\alpha\beta}$ that depends on the types $\alpha,\beta$ of the neurons, and a part that does not. In particular, the unconnected covariance $c^{\not\rightarrow}_{\alpha\beta}$ and the noise spectrum $d_\alpha$ do not depend on $\alpha,\beta$, and we omit their subscripts in the following. The variances fulfill

$a_\alpha = 2\sum_\beta K_\beta\,w_\beta\,\big(c^{\not\rightarrow} + w_\beta\,a_\beta\big) - \sum_\beta K_\beta\,w_\beta^2\,a_\beta - \sum_{\beta,\gamma} K_\beta\,K_\gamma\,w_\beta\,w_\gamma\,\bar{c}_{\beta\gamma} + d \qquad (69)$

the covariances satisfy

$c^{\not\rightarrow} = \sum_\gamma K_\gamma\,w_\gamma\,\big(\bar{c}_{\gamma\beta} + \bar{c}_{\alpha\gamma}\big) - \sum_\gamma \frac{K_\gamma^2}{N_\gamma}\,w_\gamma^2\,a_\gamma - \sum_{\gamma,\delta} K_\gamma\,K_\delta\,w_\gamma\,w_\delta\,\bar{c}_{\gamma\delta} \qquad (70)$

The disjoint part $c^{\mathrm{disj}}_{\alpha\beta} = \bar{c}_{\alpha\beta} - c^{\not\rightarrow}$ determines the difference between the covariances for pairs of neurons of different type. Using the parameters $K = K_E$, $\gamma = K_I/K_E$, the connection probability $p$, and $g = -w_I/w_E$, the explicit form following from (64) is

$c^{\mathrm{disj}}_{EE} = 2p\,w_E\,a_E, \qquad c^{\mathrm{disj}}_{II} = -2p\,g\,w_E\,a_I, \qquad c^{\mathrm{disj}}_{EI} = c^{\mathrm{disj}}_{IE} = p\,w_E\,(a_E - g\,a_I) \qquad (71)$

Therefore, also the covariances in the network obey the relation

$\bar{c}_{EI} = \tfrac{1}{2}\,\big(\bar{c}_{EE} + \bar{c}_{II}\big) \qquad (72)$

i.e. the mixed covariance can be eliminated, as it is given by the arithmetic mean of the covariances between neurons of the same type. In matrix representation with the vector $\mathbf{u} = (a_E, a_I, \bar{c}_{EE}, \bar{c}_{II})^{T}$, the self-consistency relations (69)–(71), with $\bar{c}_{EI}$ eliminated by means of (72), take the form of a linear system $\mathbf{u} = \mathsf{M}\,\mathbf{u} + \mathbf{d}$ (73)–(75), where the matrix $\mathsf{M}$ collects the coefficients of the variances and covariances and $\mathbf{d}$ the noise contributions. The self-consistent variances and covariances are then obtained by solving the system of linear equations $(\mathbb{1} - \mathsf{M})\,\mathbf{u} = \mathbf{d}$ (76).

The numerical solution shows that the variances for excitatory and inhibitory neurons are approximately the same, as depicted in Fig. 6 A. In the following we therefore assume $a_E = a_I = a$ and solve (76) for the covariances. With this abbreviation, the third and fourth lines of (76) reduce to a pair of coupled linear equations for $\bar{c}_{EE}$ and $\bar{c}_{II}$ with an inhomogeneity proportional to $a$ (77). The structure of these equations suggests introducing a linear combination of $\bar{c}_{EE}$ and $\bar{c}_{II}$ which satisfies a closed scalar equation (78). Solving (77) for $\bar{c}_{EE}$ and $\bar{c}_{II}$ and inserting (78) for this combination yields the covariances as explicit functions of the variance $a$, the in-degrees and the coupling parameters (79).

The covariance $c^{\not\rightarrow}$ between unconnected neurons can be related to the covariance between the incoming currents this pair of neurons receives. Expressing the self-consistency (68) in terms of the covariances averaged over connected and unconnected pairs (64) uncovers this connection,

$c^{\not\rightarrow} = \sum_\gamma K_\gamma\,w_\gamma\,\big(\bar{c}_{\gamma\beta} + \bar{c}_{\alpha\gamma}\big) - \Big[\sum_\gamma \frac{K_\gamma^2}{N_\gamma}\,w_\gamma^2\,a_\gamma + \sum_{\gamma,\delta} K_\gamma\,K_\delta\,w_\gamma\,w_\delta\,\bar{c}_{\gamma\delta}\Big] \qquad (80)$

This self-consistency equation yields the argument why the shared-input correlation (19) cancels the contribution (20) of spike-train correlations to the covariance of the input currents (see Fig. 6 C,D). Collecting the terms proportional to the covariance on the left hand side and rewriting (80) in terms of these two quantities results in

$c^{\not\rightarrow} = P\,\big[\,c_{\mathrm{shared}} + c_{\mathrm{spike}}\,\big] \qquad (81)$

Here, $c_{\mathrm{shared}}$ is proportional to the shared-input contribution, $c_{\mathrm{spike}}$ to the correlations among pairs of distinct presynaptic neurons, and $P$ is the prefactor resulting from the rearrangement. If a self-consistent solution with small correlation $c^{\not\rightarrow}$ exists, the right hand side of (81) must be of the same order of magnitude. The first term in the bracket is proportional to the contribution of shared input, the second term is due to correlations among pairs of distinct neurons, and each of these terms is individually of the order of the shared-input correlation. Due to the prefactor, however, the sum of the two terms needs to be of the order of $c^{\not\rightarrow}/P$ to fulfill the equation, i.e. much smaller than each individual term. Hence, the terms must have different signs to cause the mutual cancellation.
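The population-averaged correlation structure and the cancellation argument can be probed by brute force for a single network realization, using the zero-frequency covariance matrix of the linear model. In the sketch below, network size, connection probability and weights are illustrative, and the shared-input and spike-correlation contributions are read off directly from the matrices rather than from the averaged equations:

    import numpy as np

    # Sketch: brute-force check of the population-averaged correlation structure.
    # The zero-frequency covariance matrix of the linear model,
    # C = (1-W)^{-1} D (1-W)^{-T}, is evaluated for one realization of a random EI
    # network; the averages over EE, EI and II pairs can then be inspected
    # (cf. relation (72)), as can the shared-input and spike-correlation
    # contributions to the input covariance (cf. the cancellation argument above).
    # All parameters are illustrative.
    rng = np.random.default_rng(3)
    NE, NI, p, J, g = 1600, 400, 0.1, 0.02, 6.0
    N = NE + NI
    W = np.zeros((N, N))
    mask = rng.random((N, N)) < p
    W[:, :NE][mask[:, :NE]] = J
    W[:, NE:][mask[:, NE:]] = -g * J
    np.fill_diagonal(W, 0.0)

    D = np.eye(N)                                    # combined white-noise spectra
    Minv = np.linalg.inv(np.eye(N) - W)
    C = Minv @ D @ Minv.T                            # zero-frequency covariance matrix

    off = ~np.eye(N, dtype=bool)
    cEE = C[:NE, :NE][off[:NE, :NE]].mean()
    cII = C[NE:, NE:][off[NE:, NE:]].mean()
    cEI = C[:NE, NE:].mean()
    print(f"c_EE={cEE:.3e}  c_II={cII:.3e}  c_EI={cEI:.3e}  (c_EE+c_II)/2={(cEE + cII) / 2:.3e}")

    # contributions of shared input and of correlations between distinct inputs
    # to the covariance of the summed inputs of a pair, averaged over pairs
    Cdiag = np.diag(np.diag(C))
    shared = (W @ Cdiag @ W.T)[off].mean()
    spike = (W @ (C - Cdiag) @ W.T)[off].mean()
    print(f"shared-input term={shared:.3e}  spike-correlation term={spike:.3e}  sum={shared + spike:.3e}")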

To illustrate how the correlation structure is affected by feedback, let us now consider the case where the feedback activity is perturbed (“feedforward scenario”). We start from (47) and, again, only consider the fluctuations at zero frequency,

$\mathsf{C}_{\mathrm{ff}} = \mathsf{W}\,\hat{\mathsf{C}}\,\mathsf{W}^{T} + \mathsf{D} \qquad (82)$

where $\hat{\mathsf{C}}$ denotes the correlation matrix of the (possibly manipulated) feedback activity. First, we consider a manipulation that preserves the single-neuron statistics $a_E$, $a_I$ and the pairwise correlations $\bar{c}_{EE}$, $\bar{c}_{II}$ within each subpopulation, but neglects the correlations $\bar{c}_{EI}$ between excitatory and inhibitory neurons. Formally, this corresponds to the block-diagonal correlation matrix

$\hat{\mathsf{C}} = \begin{pmatrix} \mathsf{C}^{EE} & \mathsf{0} \\ \mathsf{0} & \mathsf{C}^{II} \end{pmatrix}, \qquad \big(\mathsf{C}^{\alpha\alpha}\big)_{ij} = \delta_{ij}\,a_\alpha + (1 - \delta_{ij})\,\bar{c}_{\alpha\alpha} \qquad (83)$

Here, we have replaced the individual entries of the correlation matrix by the corresponding subpopulation-averaged correlations. The calculation of the response auto- and cross-correlations $a^{\mathrm{ff}}_\alpha$ and $c^{\mathrm{ff}}_{\alpha\beta}$ is similar to the one for the expressions (63) and (67), with the difference that terms containing $\bar{c}_{EI}$ are absent:

$a^{\mathrm{ff}}_\alpha = \sum_\gamma K_\gamma\,w_\gamma^2\,a_\gamma + \sum_\gamma K_\gamma^2\,w_\gamma^2\,\bar{c}_{\gamma\gamma} + d, \qquad c^{\mathrm{ff}}_{\alpha\beta} = \sum_\gamma \frac{K_\gamma^2}{N_\gamma}\,w_\gamma^2\,a_\gamma + \sum_\gamma K_\gamma^2\,w_\gamma^2\,\bar{c}_{\gamma\gamma} \qquad (84)$

As an alternative type of feedback manipulation, we assume that all correlations are equal, irrespective of the neuron type. To this end, we replace all spike correlations by the population average $\bar{c}$. Thus, the covariance matrix reads

$\hat{\mathsf{C}}_{ij} = \delta_{ij}\,a_i + (1 - \delta_{ij})\,\bar{c} \qquad (85)$

The calculation follows the one leading to the expressions (63) and (67) and results in

$a^{\mathrm{ff}}_\alpha = \sum_\gamma K_\gamma\,w_\gamma^2\,a_\gamma + \Big(\sum_\gamma K_\gamma\,w_\gamma\Big)^{2}\,\bar{c} + d, \qquad c^{\mathrm{ff}}_{\alpha\beta} = \sum_\gamma \frac{K_\gamma^2}{N_\gamma}\,w_\gamma^2\,a_\gamma + \Big(\sum_\gamma K_\gamma\,w_\gamma\Big)^{2}\,\bar{c} \qquad (86)$
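The effect of the two feedback manipulations (83) and (85) can likewise be illustrated numerically by inserting surrogate correlation matrices into (82). All parameters in the sketch below are illustrative; the population average used for (85) is taken as the plain mean over all pairs:

    import numpy as np

    # Sketch: response covariances when the correlation structure of the feedback
    # is manipulated (eqs. 82-86). A surrogate correlation matrix C_hat is built
    # either with the EI blocks removed (eq. 83) or with a single uniform pairwise
    # correlation (eq. 85), and the zero-frequency response W C_hat W^T + D
    # (eq. 82) is compared to the intact closed loop. All parameters are illustrative.
    rng = np.random.default_rng(4)
    NE, NI, p, J, g = 1600, 400, 0.1, 0.02, 6.0
    N = NE + NI
    W = np.zeros((N, N))
    mask = rng.random((N, N)) < p
    W[:, :NE][mask[:, :NE]] = J
    W[:, NE:][mask[:, NE:]] = -g * J
    np.fill_diagonal(W, 0.0)

    D = np.eye(N)
    Minv = np.linalg.inv(np.eye(N) - W)
    C = Minv @ D @ Minv.T                            # intact (closed-loop) covariances
    off = ~np.eye(N, dtype=bool)

    # (83): keep within-population correlations, discard EI correlations
    C_block = C.copy()
    C_block[:NE, NE:] = 0.0
    C_block[NE:, :NE] = 0.0

    # (85): replace all pairwise correlations by their plain average over pairs
    c_bar = C[off].mean()
    C_unif = np.full((N, N), c_bar)
    np.fill_diagonal(C_unif, np.diag(C))

    print(f"{'closed loop':>18}: mean pairwise covariance = {C[off].mean():.3e}")
    for label, Chat in (("no EI corr (83)", C_block), ("uniform corr (85)", C_unif)):
        C_ff = W @ Chat @ W.T + D                    # eq. (82) at zero frequency
        print(f"{label:>18}: mean pairwise covariance = {C_ff[off].mean():.3e}")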

Acknowledgments

We thank the three reviewers for their constructive comments.

Funding Statement

We acknowledge partial support by the Research Council of Norway (eVITA [eNEURO], Notur), the Helmholtz Alliance on Systems Biology, the Next-Generation Supercomputer Project of MEXT, Japan, EU Grant 15879 (FACETS), EU Grant 269921 (BrainScaleS), DIP F1.2, and BMBF Grant 01GQ0420 to BCCN Freiburg. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Tripp B, Eliasmith C (2007) Neural populations can induce reliable postsynaptic currents without observable spike rate changes or precise spike timing. Cereb Cortex 17: 1830–1840. [DOI] [PubMed] [Google Scholar]
  • 2. Zohary E, Shadlen MN, Newsome WT (1994) Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 370: 140–143. [DOI] [PubMed] [Google Scholar]
  • 3. Shadlen MN, Newsome WT (1998) The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. J Neurosci 18: 3870–3896. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Salinas E, Sejnowski TJ (2001) Correlated neuronal activity and the flow of neural information. Nat Rev Neurosci 2: 539–550. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.von der Malsburg C (1981) The correlation theory of brain function. Internal report 81-2, Department of Neurobiology, Max-Planck-Institute for Biophysical Chemistry, Göttingen, Germany.
  • 6. Bienenstock E (1995) A model of neocortex. Network: Comput Neural Sys 6: 179–224. [Google Scholar]
  • 7.Abeles M (1991) Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge: Cambridge University Press, 1st edition.
  • 8. Diesmann M, Gewaltig MO, Aertsen A (1999) Stable propagation of synchronous spiking in cortical neural networks. Nature 402: 529–533. [DOI] [PubMed] [Google Scholar]
  • 9. Tetzlaff T, Morrison A, Geisel T, Diesmann M (2004) Consequences of realistic network size on the stability of embedded synfire chains. Neurocomputing 58–60: 117–121. [Google Scholar]
  • 10. Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK (2010) Decorrelated neuronal firing in cortical microcircuits. Science 327: 584–587. [DOI] [PubMed] [Google Scholar]
  • 11. van Vreeswijk C, Sompolinsky H (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274: 1724–1726. [DOI] [PubMed] [Google Scholar]
  • 12. Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 8: 183–208. [DOI] [PubMed] [Google Scholar]
  • 13. Gerstner W (2000) Population dynamics of spiking neurons: fast transients, asynchronous states, and locking. Neural Comput 12: 43–89. [DOI] [PubMed] [Google Scholar]
  • 14. Tetzlaff T, Rotter S, Stark E, Abeles M, Aertsen A, et al. (2008) Dependence of neuronal correlations on filter characteristics and marginal spike-train statistics. Neural Comput 20: 2133–2184. [DOI] [PubMed] [Google Scholar]
  • 15. Kriener B, Tetzlaff T, Aertsen A, Diesmann M, Rotter S (2008) Correlations and population dynamics in cortical networks. Neural Comput 20: 2185–2226. [DOI] [PubMed] [Google Scholar]
  • 16. Hertz J (2010) Cross-correlations in high-conductance states of a model cortical network. Neural Comput 22: 427–447. [DOI] [PubMed] [Google Scholar]
  • 17. Renart A, De La Rocha J, Bartho P, Hollender L, Parga N, et al. (2010) The asynchronous state in cortical circuits. Science 327: 587–590. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Stroeve S, Gielen S (2001) Correlation between uncoupled conductance-based integrate-and-fire neurons due to common and synchronous presynaptic firing. Neural Comput 13: 2005–2029. [DOI] [PubMed] [Google Scholar]
  • 19. Tetzlaff T, Buschermöhle M, Geisel T, Diesmann M (2003) The spread of rate and correlation in stationary cortical networks. Neurocomputing 52–54: 949–954. [Google Scholar]
  • 20. Moreno-Bote R, Parga N (2006) Auto- and crosscorrelograms for the spike response of leaky integrate-and-fire neurons with slow synapses. Phys Rev Lett 96: 028101. [DOI] [PubMed] [Google Scholar]
  • 21. De la Rocha J, Doiron B, Shea-Brown E, Kresimir J, Reyes A (2007) Correlation between neural spike trains increases with firing rate. Nature 448: 802–807. [DOI] [PubMed] [Google Scholar]
  • 22. Rosenbaum R, Josic K (2011) Mechanisms that modulate the transfer of spiking correlations. Neural Comput 23: 1261–1305. [DOI] [PubMed] [Google Scholar]
  • 23. Battaglia D, Brunel N, Hansel D (2007) Temporal decorrelation of collective oscillations in neural networks with local inhibition and long-range excitation. Phys Rev Lett 99: 238106. [DOI] [PubMed] [Google Scholar]
  • 24. Monteforte M, Wolf F (2010) Dynamical entropy production in spiking neuron networks in the balanced state. Phys Rev Lett 105: 268104. [DOI] [PubMed] [Google Scholar]
  • 25. Jahnke S, Memmesheimer RM, Timme M (2009) How chaotic is the balanced state? Front Comput Neurosci 3: 1–15. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Legenstein R, Maass W (2007) Edge of chaos and prediction of computational performance for neural circuit models. Neural Netw 20: 323–334. [DOI] [PubMed] [Google Scholar]
  • 27. Jahnke S, Memmesheimer R, Timme M (2008) Stable irregular dynamics in complex neural networks. Phys Rev Lett 100: 048102. [DOI] [PubMed] [Google Scholar]
  • 28. Toyoizumi T, Abbott LF (2010) Beyond the edge: Amplification and temporal integration by recurrent networks in the chaotic regime. Front Neurosci doi: 10.3389/conf.fnins.2010.03.00155. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Zillmer R, Livi R, Politi A, Torcini A (2006) Desynchronization in diluted neural networks. Phys Rev E 74: 036203. [DOI] [PubMed] [Google Scholar]
  • 30. Ginzburg I, Sompolinsky H (1994) Theory of correlations in stochastic neural networks. Phys Rev E 50: 3171–3191. [DOI] [PubMed] [Google Scholar]
  • 31. Brunel N, Hakim V (1999) Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput 11: 1621–1671. [DOI] [PubMed] [Google Scholar]
  • 32. Harris KD, Thiele A (2011) Cortical state and attention. Nat Rev Neurosci 12: 509–523. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Helias M, Deger M, Rotter S, Diesmann M (2010) Instantaneous non-linear processing by pulsecoupled threshold units. PLoS Comput Biol 6: e1000929. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. Richardson MJE, Swarbrick R (2010) Firing-rate response of a neuron receiving excitatory and inhibitory synaptic shot noise. Phys Rev Lett 105: 178102. [DOI] [PubMed] [Google Scholar]
  • 35. Fourcaud-Trocmé N, Hansel D, van Vreeswijk C, Brunel N (2003) How spike generation mechanisms determine the neuronal response to fluctuating inputs. J Neurosci 23: 11628–11640. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Naundorf B, Geisel T, Wolf F (2005) Action potential onset dynamics and the response speed of neuronal populations. J Comput Neurosci 18: 297–309. [DOI] [PubMed] [Google Scholar]
  • 37. Brunel N, Chance FS, Fourcaud N, Abbott LF (2001) Effects of synaptic noise and filtering on the frequency response of spiking neurons. Phys Rev Lett 86: 2186–2189. [DOI] [PubMed] [Google Scholar]
  • 38. Nordlie E, Tetzlaff T, Einevoll GT (2010) Rate dynamics of leaky integrate-and-fire neurons with strong synapses. Front Comput Neurosci 4: 149. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39. Pressley J, Troyer TW (2009) Complementary responses to mean and variance modulations in the perfect integrate-and-fire model. Biol Cybern 101: 63–70. [DOI] [PubMed] [Google Scholar]
  • 40. Knight BW (1972) The relationship between the firing rate of a single neuron and the level of activity in a population of neurons. J Gen Physiol 59: 767–778. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41. Köndgen H, Geisler C, Fusi S, Wang XJ, Lüscher HR, et al. (2008) The dynamical response properties of neocortical neurons to temporally modulated noisy inputs in vitro. Cereb Cortex 18: 2086–2097. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42. Boucsein C, Tetzlaff T, Meier R, Aertsen A, Naundorf B (2009) Dynamical response properties of neocortical neuron ensembles: multiplicative versus additive noise. J Neurosci 29: 1006–1010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43. Blomquist P, Devor A, Indahl UG, Ulbert I, Einevoll GT, et al. (2009) Estimation of thalamocortical and intracortical network models from joint thalamic single-electrode and cortical laminarelectrode recordings in the rat barrel system. PLoS Comput Biol 5: e1000328. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44. Knight BW (1972) Dynamics of encoding in a population of neurons. J Gen Physiol 59: 734–766. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45. Lindner B, Schimansky-Geier L (2001) Transmission of noise coded versus additive signals through a neuronal ensemble. Phys Rev Lett 86: 2934–2937. [DOI] [PubMed] [Google Scholar]
  • 46. Fourcaud N, Brunel N (2002) Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Comput 14: 2057–2110. [DOI] [PubMed] [Google Scholar]
  • 47.Oppenheim A, Wilsky A (1996) Systems and signals. Upper Saddle River, NJ: Prentice Hall.
  • 48. Troyer TW, Krukowski AE, Miller KD (2002) LGN input to simple cells and contrast-invariant orientation tuning: An analysis. J Neurophysiol 87: 2741–2752. [DOI] [PubMed] [Google Scholar]
  • 49. Zhaoping L, Lewis A, Scarpetta S (2004) Mathematical analysis and simulations of the neural circuit for locomotion in lampreys. Phys Rev Lett 92: 198106. [DOI] [PubMed] [Google Scholar]
  • 50. Murphy BK, Miller KD (2009) Balanced amplification: A new mechanism of selective amplification of neural activity patterns. Neuron 61: 635–648. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51. Moreno-Bote R, Renart A, Parga N (2008) Theory of input spike auto- and cross-correlations and their effect on the response of spiking neurons. Neural Comput 20: 1651–1705. [DOI] [PubMed] [Google Scholar]
  • 52. Shadlen MN, Newsome WT (2001) Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. J Neurophysiol 86: 1916–1936. [DOI] [PubMed] [Google Scholar]
  • 53. Gentet L, Avermann M, Matyas F, Staiger JF, Petersen CC (2010) Membrane potential dynamics of GABAergic neurons in the barrel cortex of behaving mice. Neuron 65: 422–435. [DOI] [PubMed] [Google Scholar]
  • 54. Pernice V, Staude B, Cardanobile S, Rotter S (2011) How structure determines correlations in neuronal networks. PLoS Comput Biol 7: e1002059. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55. Pernice V, Staude B, Cardanobile S, Rotter S (2012) Recurrent interactions in spiking networks with arbitrary topology. Phys Rev E 85: 031916. [DOI] [PubMed] [Google Scholar]
  • 56. Trousdale J, Hu Y, Shea-Brown E, Josic K (2012) Impact of network structure and cellular response on spike time correlations. PLoS Comput Biol 8: e1002408. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57. Softky WR, Koch C (1993) The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J Neurosci 13: 334–350. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58. Tetzlaff T, Geisel T, Diesmann M (2002) The ground state of cortical feed-forward networks. Neurocomputing 44–46: 673–678. [Google Scholar]
  • 59. Mehring C, Hehl U, Kubo M, Diesmann M, Aertsen A (2003) Activity dynamics and propagation of synchronous spiking in locally connected random networks. Biol Cybern 88: 395–408. [DOI] [PubMed] [Google Scholar]
  • 60. Aviel Y, Mehring C, Abeles M, Horn D (2003) On embedding synfire chains in a balanced network. Neural Comput 15: 1321–1340. [DOI] [PubMed] [Google Scholar]
  • 61. Aviel Y, Horn D, Abeles M (2005) Memory capacity of balanced networks. Neural Comput 17: 691–713. [DOI] [PubMed] [Google Scholar]
  • 62. Bi G, Poo M (1998) Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci 18: 10464–10472. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL (2009) Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. I. Input selectivity - strengthening correlated input pathways. Biol Cybern 101: 81–102. [DOI] [PubMed] [Google Scholar]
  • 64. Burkitt AN, Gilson M, van Hemmen J (2007) Spike-timing-dependent plasticity for neurons with recurrent connections. Biol Cybern 96: 533–546. [DOI] [PubMed] [Google Scholar]
  • 65. Pfister JP, Tass PA (2010) STDP in oscillatory recurrent networks: theoretical conditions for desynchronization and applications to deep brain stimulation. Front Comput Neurosci 4: 22. doi: 10.3389/fncom.2010.00022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66. Lindner B, Doiron B, Longtin A (2005) Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Phys Rev E 72: 061919. [DOI] [PubMed] [Google Scholar]
  • 67. Buice MA, Cowan JD, Chow CC (2009) Systematic fluctuation expansion for neural network activity equations. Neural Comput 22: 377–426. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68. Sommers H, Crisanti A, Sompolinsky H, Stein Y (1988) Spectrum of large random asymmetric matrices. Phys Rev Lett 60: 1895–1898. [DOI] [PubMed] [Google Scholar]
  • 69. Rajan K, Abbott L (2006) Eigenvalue spectra of random matrices for neural networks. Phys Rev Lett 97: 188104. [DOI] [PubMed] [Google Scholar]
  • 70. Sompolinsky H, Crisanti A, Sommers HJ (1988) Chaos in random neural networks. Phys Rev Lett 61: 259–262. [DOI] [PubMed] [Google Scholar]
  • 71. Toyoizumi T, Rad KR, Paninski L (2009) Mean-field approximations for coupled populations of generalized linear model spiking neurons with markov refractoriness. Neural Comput 21: 1203–1243. [DOI] [PubMed] [Google Scholar]
  • 72. Sirovich L, Omurtag A, Knight BW (2000) Dynamics of neuronal populations: The equilibrium solution. SIAM J Appl Math 60: 2009–2028. [Google Scholar]
  • 73. Jacobsen M, Jensen AT (2007) Exit times for a class of piecewise exponential markov processes with two-sided jumps. Stoch Proc Appl 117: 1330–1356. [Google Scholar]
  • 74.Cox DR (1962) Renewal Theory. Science Paperbacks. London: Chapman and Hall.
  • 75. Nawrot MP, Boucsein C, Rodriguez Molina V, Riehle A, Aertsen A, et al. (2008) Measurement of variability dynamics in cortical spike trains. J Neurosci Methods 169: 374–390. [DOI] [PubMed] [Google Scholar]
  • 76. Chizhov AV, Graham LJ (2008) Efficient evaluation of neuron populations receiving colored-noise current based on a refractory density method. Phys Rev E 77: 011910. [DOI] [PubMed] [Google Scholar]
  • 77. Meyer C, van Vreeswijk C (2002) Temporal correlations in stochastic networks of spiking neurons. Neural Comput 14: 369–404. [DOI] [PubMed] [Google Scholar]
  • 78.Hebb DO (1949) The organization of behavior: A neuropsychological theory. New York: John Wiley & Sons.
  • 79. Helias M, Rotter S, Gewaltig M, Diesmann M (2008) Structural plasticity controlled by calcium based correlation detection. Front Comput Neurosci 2 doi:10.3389/neuro.10.007.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80. Loebel A, Silberberg G, Helbig D, Markram H, Tsodyks M, et al. (2009) Multiquantal release underlies the distribution of synaptic efficacies in the neocortex. Front Comput Neurosci 3: 27. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81. Gewaltig MO, Diesmann M (2007) NEST (NEural Simulation Tool). Scholarpedia 2: 1430. [Google Scholar]
  • 82. Hawkes A (1971) Point spectra of some mutually exciting point processes. J R Statist Soc Ser B 33: 438–443. [Google Scholar]
  • 83.Risken H (1996) The Fokker-Planck Equation. Berlin: Springer Verlag.
  • 84. Siegert AJ (1951) On the first passage time probability problem. Phys Rev 81: 617–623. [Google Scholar]
  • 85. Helias M, Deger M, Diesmann M, Rotter S (2010) Equilibrium and response properties of the integrate-and-fire neuron in discrete time. Front Comput Neurosci 3 doi:10.3389/neuro.10.029.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Tuckwell HC (1988) Introduction to Theoretical Neurobiology, volume 1. Cambridge: Cambridge University Press.
  • 87. Rotter S, Diesmann M (1999) Exact digital simulation of time-invariant linear systems with applications to neuronal modeling. Biol Cybern 81: 381–402. [DOI] [PubMed] [Google Scholar]
