2009 Jul 1;27(2):177–200. doi: 10.1007/s10827-008-0135-1

Correlations in spiking neuronal networks with distance dependent connections

Birgit Kriener 1,2,4,5,, Moritz Helias 1,2, Ad Aertsen 1,2, Stefan Rotter 1,3
PMCID: PMC2731936  PMID: 19568923

Abstract

Can the topology of a recurrent spiking network be inferred from observed activity dynamics? Which statistical parameters of network connectivity can be extracted from firing rates, correlations and related measurable quantities? To approach these questions, we analyze distance dependent correlations of the activity in small-world networks of neurons with current-based synapses, derived from a simple ring topology. We find that, in particular, the distribution of correlation coefficients of subthreshold activity can distinguish random networks from networks with distance dependent connectivity. Such distributions can be estimated by sampling from random pairs. We also demonstrate the crucial role of the weight distribution, most notably compliance with Dale's principle, for the activity dynamics in recurrent networks of different types.

Keywords: Spiking neural networks, Small-world networks, Pairwise correlations, Distribution of correlation coefficients

Introduction

The collective dynamics of balanced random networks has been studied extensively, assuming different neuron models as the constituent dynamical units (van Vreeswijk and Sompolinsky 1996, 1998; Brunel and Hakim 1999; Brunel 2000; Mattia and Del Giudice 2002; Timme et al. 2002; Mattia and Del Giudice 2004; Kumar et al. 2008b; Jahnke et al. 2008; Kriener et al. 2008).

Some of these models have in common that they assume random network topologies with a sparse connectivity ϵ ≈ 0.1 for a local, but large, neuronal network, embedded into an “external” population that supplies unspecific white-noise drive to the local network. These systems are considered minimal models for cortical networks of about 1 mm³ volume, because they can display activity states similar to those observed in vivo, such as asynchronous irregular spiking. Yet, as recently reported (Song et al. 2005; Yoshimura et al. 2005; Yoshimura and Callaway 2005), local cortical networks are characterized by a circuitry that is specific and hence non-random even on a small spatial scale. Since it is still impossible to experimentally uncover the whole coupling structure of a neuronal network, it is necessary to infer some of its features from its activity dynamics. Timme (2007), for example, studied networks of N coupled phase oscillators in a stationary phase-locked state. In these networks it is possible to reconstruct details of the network coupling matrix (i.e. topology and weights) by slightly perturbing the stationary state with different driving conditions and analyzing the network response. Here, we focus on both network structure and activity dynamics in spiking neuronal networks on a statistical level. We consider several abstract model networks that range from strict distance dependent connectivity to random topologies, and examine their activity dynamics by means of numerical simulation and quantitative analysis. We focus on integrate-and-fire neurons arranged on regular rings, random networks, and so-called small-world networks (Watts and Strogatz 1998). Small-world structures seem to be optimal brain architectures for fast and efficient inter-areal information transmission with potentially low metabolic consumption and wiring costs due to a low characteristic path length ℓ (Chklovskii et al. 2002), while at the same time they may provide redundancy and error tolerance by highly recurrent computation (high clustering coefficient C; for the general definitions of ℓ and C, cf. e.g. Watts and Strogatz (1998), Albert and Barabási (2002)). Cortical networks may also have pronounced small-world features on an intra-areal level, as was shown in simulations by Sporns and Zwi (2004), who assumed a local Gaussian connection probability and a uniform long-range connection probability for local cortical networks, assumptions that are in line with experimental observations (Hellwig 2000; Stepanyants et al. 2007). Network topology is just one aspect of neuronal network coupling, though. Here, we also demonstrate the crucial role of the weight distribution, especially with regard to the notion that all inhibitory neurons project only hyperpolarizing synapses onto their postsynaptic targets, and excitatory neurons only depolarizing ones. This assumption is sometimes referred to as Dale's principle (Li and Dayan 1999; Dayan and Abbott 2001; Hoppensteadt and Izhikevich 1997). Strikingly, this already has strong implications for the dynamical states of random networks (Kriener et al. 2008). Yet the main focus of the present study is on the distance dependence and the overall distribution of correlation coefficients. Especially the joint statistics of subthreshold activity, i.e. correlations and coherences between the incoming currents that neurons integrate, has been shown to contain otherwise elusive information about network parameters, e.g. the mean connectivity in random networks (Tetzlaff et al. 2007).

The paper is structured as follows: In Section 2 we give a short description of the neuron model and the simulation parameters used throughout the paper. In Section 3 we introduce the notion of small-world networks, and in Section 4 we discuss features of the activity dynamics in dependence on the topology. In ring and small-world networks groups of neighboring neurons tend to spike highly synchronously, while the population dynamics in random networks is asynchronous-irregular. To understand the source of these differences in the population dynamics, we analyze the correlations of the inputs of neurons in dependence on the network topology. Section 5 is devoted to the theoretical framework we apply to calculate the input correlations in dependence on the pairwise distance in sparse ring (Section 5.1) and small-world networks (Section 5.2). In Section 6 we finally derive the full distribution of correlation coefficients for ring and random networks. Random networks have rather narrow distributions centered around the mean correlation coefficient, while sparse ring and small-world networks have distributions with heavy tails. This is due to the high probability of sharing a common input neuron if the neurons are topological neighbors, and the very low probability if they are far apart, yielding distributions with a few high correlation coefficients and many small ones. This offers a way to potentially distinguish random topologies from topologies with small-world features by their subthreshold activity dynamics on a statistical level.

Neuronal dynamics and synaptic input

The neurons in the network of size N are modeled as leaky integrate-and-fire point neurons with current-based synapses. The membrane potential dynamics Vk(t), k ∈ {1,...,N} of the neurons is given by

τ_m dVk(t)/dt = −Vk(t) + R Ik(t)    (1)

with membrane resistance R and membrane time constant τ_m. Whenever Vk(t) reaches the threshold θ, a spike is emitted, Vk(t) is reset to the reset potential V_reset, and the neuron stays refractory for a period τ_ref. Synaptic inputs

I_k^loc(t) = (τ_m/R) ∑_{i=1}^{N} Wki ∑_l δ(t − t_i^l − Δ)    (2)

from the local network are modeled as δ-currents. Whenever a presynaptic neuron i fires an action potential at time t_i^l, it evokes an exponential postsynaptic potential (PSP) of amplitude

Wki = J if presynaptic neuron i is excitatory and connected to k,  Wki = −gJ if it is inhibitory and connected to k,  Wki = 0 otherwise    (3)

after a transmission delay Δ that is the same for all synapses. Note that multiple connections between two neurons and self-connections are excluded in this framework. In addition to the local input, each neuron receives an external Poisson current I_k^ext(t) mimicking inputs from other cortical areas or subcortical regions. The total input is thus given by

I_k(t) = I_k^loc(t) + I_k^ext(t)    (4)

Parameters

The neuron parameters are the membrane time constant τ_m, R = 80 MΩ, J = 0.1 mV, and Δ = 2 ms. The firing threshold θ is 20 mV and the reset potential is V_reset. After a spike event, the neurons stay refractory for the period τ_ref. If not stated otherwise, all simulations are performed for networks of size N = 12,500, with N_E = 10,000 excitatory and N_I = 2,500 inhibitory neurons. We set the fraction of excitatory neurons in the network to β = 0.8. The connectivity is set to ϵ = 0.1, such that each neuron receives exactly κ = ϵN inputs. For g = 4 inhibition hence balances excitation in the local network, while for g > 4 the local network is dominated by a net inhibition. Here, we choose g = 6. External inputs are modeled as κ_ext independent Poissonian sources of rate ν_ext each. All network simulations were performed using the NEST simulation tool (Gewaltig and Diesmann 2007) with a temporal resolution of h = 0.1 ms. For details of the simulation technique see Morrison et al. (2005).
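To make the model concrete, the following is a minimal, self-contained sketch of the single-neuron dynamics of Eqs. (1)-(4): a leaky integrate-and-fire neuron with δ-current synapses, driven by Poisson surrogates of its local and external inputs. The numerical values of τ_m, V_reset, τ_ref and of the drive are placeholders chosen only to put the neuron into a fluctuation-driven regime; they are not the original parameter values.

import numpy as np

# Minimal Euler-scheme sketch of the leaky integrate-and-fire dynamics of
# Eqs. (1)-(4): a single neuron with delta-current synapses, driven by Poisson
# surrogates of its local and external inputs.  All numerical values below
# (tau_m, V_reset, t_ref, in-degrees, drive rates) are assumptions, not the
# original parameter set.
rng = np.random.default_rng(0)

h       = 0.1e-3      # integration step (s), as in the simulations
tau_m   = 10e-3       # membrane time constant (s)            -- assumption
theta   = 20e-3       # firing threshold (V)
V_reset = 0.0         # reset potential (V)                   -- assumption
t_ref   = 2e-3        # absolute refractory period (s)        -- assumption
J       = 0.1e-3      # PSP jump per excitatory spike (V)
g       = 6.0         # relative strength of inhibitory synapses

kappa_E, kappa_I  = 1000, 250    # local in-degrees (4:1 ratio, toy numbers)
nu_loc            = 13.0         # assumed rate of local presynaptic neurons (Hz)
kappa_ext, nu_ext = 1000, 22.0   # external Poisson drive     -- assumption

T, V, last_spike, spikes = 2.0, 0.0, -np.inf, []
for step in range(int(T / h)):
    t = step * h
    if t - last_spike < t_ref:
        V = V_reset                         # clamp during refractoriness
        continue
    # Poisson spike counts arriving in this time bin
    n_ext = rng.poisson(kappa_ext * nu_ext * h)
    n_E   = rng.poisson(kappa_E * nu_loc * h)
    n_I   = rng.poisson(kappa_I * nu_loc * h)
    # leaky integration (Eq. (1)) plus PSP jumps of size J and -g*J
    V += -V / tau_m * h + J * (n_ext + n_E) - g * J * n_I
    if V >= theta:
        spikes.append(t)
        V, last_spike = V_reset, t

print(f"{len(spikes)} spikes in {T} s -> rate {len(spikes) / T:.1f} Hz")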

Structural properties of small-world networks

Many real world networks, including cortical networks (Watts and Strogatz 1998; Strogatz 2001; Sporns 2003; Sporns and Zwi 2004), possess so-called small-world features. In the framework originally studied by Watts and Strogatz (1998), small-world networks are constructed from a ring graph of size N, where all nodes are connected to their κ ≪ N nearest neighbors (“boxcar footprint”), by random rewiring of connections with probability ϕ (cf. Fig. 1(a), (b)). Watts and Strogatz (1998) characterized the small-world regime by two graph-theoretical measures, a high clustering coefficient C and a low characteristic path length ℓ (cf. Fig. 1(c)). The clustering coefficient C measures the transitivity of the connectivity, i.e. how likely it is that, given there is a connection between nodes i and j and between nodes j and k, there is also a connection between nodes i and k. The characteristic path length ℓ, on the other hand, quantifies how many steps on average suffice to get from any node in the network to any other node. In the following we will analyze small-world networks of spiking neurons. Networks can be represented by the adjacency matrix A with Aki = 1 if node i is connected to node k, and Aki = 0 otherwise. We neglect self-connections, i.e. Akk = 0 for all k ∈ {1,...,N}. In the original paper by Watts and Strogatz (1998) undirected networks were studied. Connections between neurons, i.e. synapses, are however generically directed. We define the clustering coefficient C for directed networks1 here as

C = (1/N) ∑_{k=1}^{N} Ck    (5)

with

Ck = ∑_{i≠j} Aki Akj Aij / (κ (κ − 1))    (6)

In this definition, C measures the likelihood of having a connection between two neurons, given they have a common input neuron. It is hence directly related to the amount of shared input between neighboring neurons l and k, where Wki are the weighted connections from neuron i to neuron k (cf. Section 2), i.e.

∑_{i=1}^{N} Wki Wli    (7)

The characteristic path length ℓ of the network graph is given by

ℓ = 1/(N(N − 1)) ∑_{i ≠ j} ℓij    (8)

where ℓij is the length of the shortest directed path between neurons i and j (Albert and Barabási 2002). The clustering coefficient is a local property of a graph, while the characteristic path length is a global quantity. This explains the relative stability of the clustering coefficient during gradual rewiring of connections, because the local properties are hardly affected, whereas the introduction of random shortcuts decreases the average shortest path length dramatically (cf. Fig. 1(c)).
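As an illustration of the construction and the two graph measures just defined, the following sketch builds a directed boxcar ring, rewires each incoming synapse with probability ϕ, and estimates a clustering coefficient and the characteristic path length. The clustering coefficient used here (fraction of realized connections within each neuron's presynaptic pool) is one common directed variant and may be normalized differently from Eqs. (5)-(6); toy network sizes are used for speed.

import numpy as np
from scipy.sparse.csgraph import shortest_path

# Directed ring with a boxcar footprint of kappa nearest neighbours, each
# incoming synapse rewired with probability phi (cf. Watts and Strogatz 1998).
def ring_adjacency(N, kappa, phi, rng):
    A = np.zeros((N, N), dtype=bool)            # A[k, i] = 1 : i projects to k
    offsets = np.r_[np.arange(1, kappa // 2 + 1), -np.arange(1, kappa // 2 + 1)]
    for k in range(N):
        sources = (k + offsets) % N
        rewire = rng.random(kappa) < phi
        A[k, sources[~rewire]] = True           # keep these boxcar synapses
        for _ in range(int(rewire.sum())):      # redraw the rest, avoiding
            while True:                         # self- and multiple connections
                s = int(rng.integers(N))
                if s != k and not A[k, s]:
                    break
            A[k, s] = True
    return A

def clustering(A):
    cc = []
    for k in range(A.shape[0]):
        pre = np.flatnonzero(A[k])              # presynaptic pool of k
        sub = A[np.ix_(pre, pre)]               # connections within the pool
        cc.append(sub.sum() / (pre.size * (pre.size - 1)))
    return float(np.mean(cc))

def path_length(A):
    d = shortest_path(A.T.astype(float), directed=True, unweighted=True)
    return d[np.isfinite(d) & (d > 0)].mean()

rng = np.random.default_rng(1)
N, kappa = 400, 40                              # small toy sizes for speed
for phi in (0.0, 0.01, 0.1, 1.0):
    A = ring_adjacency(N, kappa, phi, rng)
    print(f"phi={phi:5.2f}  C={clustering(A):.3f}  l={path_length(A):.2f}")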

Fig. 1.

A sketch of (a) the ring and (b) a small-world network with the neuron distribution we use throughout the paper for the Dale-conform networks (gray triangle: excitatory neuron, black square: inhibitory neuron, ratio of excitation to inhibition 4:1). The footprint κ of the ring in this particular example is 4, i.e. each neuron is connected to its κ = 4 nearest neighbors, irrespective of their identity. To derive the small-world network we rewire connections randomly with probability ϕ. Note that in the actually studied networks all connections are directed. (c) The small-world regime is characterized by a high relative clustering coefficient C(ϕ)/C(0) and a low relative characteristic path length ℓ(ϕ)/ℓ(0) (here N = 2,000, κ = 200, averaged over 10 network realizations)

Activity dynamics in spiking small-world networks

In a ring graph directly neighboring neurons receive basically the same input, as can be seen from the high clustering coefficient C(0) ≈ 3/4,2 which is the same as in undirected ring networks (Albert and Barabási 2002). This leads to high input correlations and synchronous spiking of groups of neighboring neurons (Fig. 2(a)). As more and more connections are rewired, the local synchrony is attenuated and we observe a transition to a rather asynchronous global activity (Fig. 2(b), (c)). The clustering coefficient of the corresponding random graph equals ϵ (here ϵ = 0.1), because the probability to be connected is always ϵ for any two neurons, independent of their adjacency (Albert and Barabási 2002). This corresponds to the strength of the input correlations observed in these networks (Kriener et al. 2008). However, the population activity still shows pronounced fluctuations at ∼1/(4Δ) (with the transmission delay Δ = 2 ms, cf. Section 2), even when the network is random (ϕ = 1, Fig. 2(c)). These fluctuations decrease dramatically if we violate Dale's principle, i.e. the constraint that any neuron can either only depolarize or only hyperpolarize all its postsynaptic targets, but not both at the same time. We refer to the latter as the hybrid scenario, in which neurons project both excitatory and inhibitory synapses (Kriener et al. 2008). Ren et al. (2007) suggest that about 30% of pyramidal cell pairs in layer 2/3 of mouse visual cortex have effectively inhibitory, strongly reliable, short-latency couplings via axo-axonic glutamate receptor mediated excitation of the nerve endings of inhibitory interneurons, thus bypassing dendrites, soma, and axonal trunk of the involved interneuron. These can be interpreted as hybrid-like couplings in real neural tissue.

Fig. 2.

Activity dynamics for (a) a ring network, (b) a small-world network (0 < ϕ < 1) and (c) a random network, all of which comply with Dale's principle. (d) shows the activity in a ring network with hybrid neurons. In the Dale-conform ring network (a) we observe synchronous spiking of large groups of neighboring neurons. This is due to the high amount of shared input: neurons next to each other have basically the same presynaptic input neurons. This local synchrony is slightly attenuated in small-world networks (b). In random networks the activity is close to asynchronous-irregular (AI), apart from network fluctuations due to the finite size of the network (c). Networks made of hybrid neurons show perfect AI activity, even if the underlying connectivity is a ring graph (d). The simulation parameters were N = 12,500, κ = 1,250, g = 6, J = 0.1 mV, with (1 − β)N equidistantly distributed inhibitory neurons and κ_ext independent Poisson inputs per neuron of rate ν_ext each (cf. Section 2)

The average rate in all four networks is hardly affected by the underlying topology or weight distribution of the networks (cf. Table 1), while the variances of the population activity are very different. This is reflected in the respective Fano factors FF[n(t;h)] of the population spike counts n(t;h) per time bin h = 0.1 ms, where ni(t;h) is the number of spikes emitted by neuron i at time points sl within the interval [t,t + h) (cf. Appendix A). If the population spike count n(t;h) is a compound process of independent stationary Poisson random variables ni(t;h) with parameter νi h, we have

FF[n(t;h)] = Var[n(t;h)] / E[n(t;h)] = ∑_i Var[ni(t;h)] / ∑_i E[ni(t;h)] = 1    (9)

because the covariances Cov[ni(t;h), nj(t;h)] are zero for all i ≠ j and the variance of the sum equals the sum of the variances. A Fano factor larger than one indicates positive correlations between the spiking activities of the individual neurons (cf. Appendix A) (Papoulis 1991; Nawrot et al. 2008; Kriener et al. 2008). We see (cf. Table 1) that it is indeed largest for the Dale-conform ring network, still manifestly larger than one for the Dale-conform random network, and about one for the hybrid networks in both the ring and the random case. The quantitative differences of the Fano factors in all four cases can be explained by the different amounts of pairwise spike train correlations (cf. Appendix A, Section 6). This demonstrates how a violation of Dale's principle stabilizes and actually enables asynchronous irregular activity, even in networks whose adjacency, i.e. the mere unweighted connectivity, suggests highly correlated activity, as is the case for Dale-conform ring (Fig. 2(a)) and small-world networks (Fig. 2(b)).

Table 1.

Mean population rates ν_o and Fano factors FF[n(t;h)] of the population spike count n(t;h) per time bin h (10 s of population activity, N = 12,500, bin size h = 0.1 ms) for the random Dale and hybrid networks and the corresponding ring networks

Network type Mean rate νo Fano factor FF
Random, Dale 12.9 Hz 9.27
Random, Hybrid 12.8 Hz 1.25
Ring, Dale 13.5 Hz 26.4
Ring, Hybrid 13.1 Hz 1.13

If all spike trains contributing to the population spike count were uncorrelated Poissonian, the FF would equal 1. A FF larger than 1 indicates correlated activity (cf. Appendix A)
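The Fano factors reported in Table 1 can be reproduced from recorded spike times with a few lines of analysis code; the sketch below bins the pooled spike train at h = 0.1 ms and computes variance over mean of the population count (cf. Appendix A). The surrogate data used here are independent Poisson trains, for which the result is close to one.

import numpy as np

# Population Fano factor: bin pooled spike times at h = 0.1 ms and take
# variance/mean of the population spike count per bin.  `spike_times` is a
# list with one array of spike times (s) per neuron; Poisson surrogates here.
def population_fano(spike_times, t_start, t_stop, h=0.1e-3):
    edges = np.arange(t_start, t_stop + h, h)
    counts = np.histogram(np.concatenate(spike_times), bins=edges)[0]
    return counts.var() / counts.mean()

rng = np.random.default_rng(2)
N, rate, T = 12500, 13.0, 10.0
surrogate = [np.sort(rng.uniform(0.0, T, rng.poisson(rate * T))) for _ in range(N)]
print("FF of independent Poisson surrogate:",
      round(population_fano(surrogate, 0.0, T), 3))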

To understand the origin of the different correlation strengths in the various network types, and hence the different spiking dynamics and population activities in dependence on both the weight distribution and the rewiring probability, we will extend our analysis introduced in Kriener et al. (2008) to ring and small-world networks in the following sections.

Distance dependent correlations in a shot-noise framework

We assume that all incoming spike trains Si(t) = ∑_l δ(t − t_i^l) are realizations of point processes corresponding to stationary correlated Poisson processes, such that

⟨Si(t + τ) Sj(t)⟩ − νi νj = cij √(νi νj) δ(τ)    (10)

with spike train correlations cij ∈ [−1,1] and mean rates νi, νj (cf. however Fig. 4(d)). The spike trains can either stem from the pool of local neurons i ∈ {1,...,N} or from external neurons, where we assume that each neuron receives external inputs from κ_ext neurons, which are different for all N local neurons. We describe the total synaptic input Ik(t) of a model neuron k as a sum of linearly filtered presynaptic spike trains (i.e. the spike trains are convolved with filter-kernels fki(t)), also called shot noise (Papoulis 1991; Kriener et al. 2008):

Ik(t) = ∑_i (fki ∗ Si)(t) = ∑_i ∫ fki(t − t′) Si(t′) dt′    (11)

Ik(t) could represent e.g. the weighted input current, the synaptic input current (fki(t) = unit postsynaptic current, PSC), or the free membrane potential (fki(t) = unit postsynaptic potential, PSP). All synapses are identical in their kinetics and differ only in strength Wki, hence we can write

fki(t) = Wki f(t)    (12)

With si(t): = (Si ∗ f)(t), Eq. (11) is then rewritten as

Ik(t) = ∑_i Wki si(t)    (13)

The covariance function of the inputs Ik, Il is given by

C[I_k, I_l](τ) = ∑_{i,j} Wki Wlj C[s_i, s_j](τ)    (14)

This sum can be split into

C[I_k, I_l](τ) = ∑_i Wki Wli a_{s_i}(τ)  (i)  +  ∑_{i ≠ j} Wki Wlj c_{s_i s_j}(τ)  (ii)    (15)

The first sum, Eq. (15) (i), contains the contributions of the auto-covariance functions a_{s_i}(τ) of the filtered input spike trains, i.e. of the spike trains that stem from common input neurons i ∈ {1,...,N} (WkiWli ≠ 0, including the external inputs in the case k = l). The second sum, Eq. (15) (ii), contains all contributions of the cross-covariance functions c_{s_i s_j}(τ) of filtered spike trains that stem from presynaptic neurons i ≠ j, i,j ∈ {1,...,N}, where we have already taken into account that the external spike sources are uncorrelated, and hence c_{s_i s_j}(τ) = 0 whenever i or j is an external source. It is apparent that the high degree of shared input, as present in ring and small-world topologies, should show up in the spatial structure of input correlations between neurons. The closer two neurons k,l are located on the ring, the more common presynaptic neurons i they share. This will lead to a dominance of the first sum, unless the general strength of the spike train covariances, accounted for in the second sum, is too high and the second sum dominates the structural contribution, because it contributes quadratically in the neuron number. If the input covariance due to the structural overlap of presynaptic pools is however dominant, a fraction of this input correlation should also be present at the output side of the neurons, i.e. the spike train covariances cij should be a function of the interneuronal distance as well. This is indeed the case, as we will see in the following.
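The decomposition in Eqs. (14)-(15) has a compact matrix form that is convenient for numerical checks: if Σ denotes the zero-lag covariance matrix of the filtered presynaptic spike trains, the zero-lag covariance matrix of the inputs is W Σ Wᵀ, and splitting Σ into its diagonal and off-diagonal parts reproduces the common-input term (i) and the cross-correlation term (ii). The sketch below uses arbitrary toy numbers, not the network parameters of Section 2.

import numpy as np

# Matrix form of Eqs. (14)-(15): Cov(I) = W Sigma W^T, with Sigma the zero-lag
# covariance matrix of the filtered presynaptic spike trains (variances a_s on
# the diagonal, cross-covariances c_{s_i s_j} off it).  Toy numbers only.
rng = np.random.default_rng(3)
N = 200
W = rng.choice([0.0, 0.1], size=(N, N), p=[0.9, 0.1])     # toy weight matrix

a_s = 1.0                                  # zero-lag variance of each s_i
c_s = 0.002 * a_s                          # weak homogeneous cross-covariance
Sigma = np.full((N, N), c_s)
np.fill_diagonal(Sigma, a_s)

cov_inputs   = W @ Sigma @ W.T                               # Eq. (14)
common_input = W @ np.diag(np.full(N, a_s)) @ W.T            # term (i)
cross_corr   = W @ (Sigma - np.diag(np.diag(Sigma))) @ W.T   # term (ii)
assert np.allclose(cov_inputs, common_input + cross_corr)

k, l = 0, 1
rho = cov_inputs[k, l] / np.sqrt(cov_inputs[k, k] * cov_inputs[l, l])
print("input correlation coefficient rho_01 =", round(rho, 4))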

Fig. 4.

Input current (a, b, e) and spike train (c, f) correlation coefficients as a function of the pairwise interneuronal distance D for a ring network of size N = 12,500, κ = 1,250, g = 6, J = 0.1 mV with (1 − β)N equidistantly distributed inhibitory neurons and κ_ext external Poisson inputs per neuron of rate ν_ext each. (a) depicts the input correlation coefficients, Eq. (18), derived with the assumption that the spike train correlation coefficients cij(D,0) decay linearly as γ(1 − D/κ)Θ[κ − D], cf. Eq. (30), and (b) with cij(D,0) fitted as a decaying exponential function γ e^{−ηD} Θ[κ − D] (red). The gray curves show the input correlations estimated from simulations. (c) shows the spike train correlation coefficients estimated from simulations (gray), and both the linear (black) and the exponential fit (red) used to obtain the theoretical predictions for the input correlation coefficients in (a) and (b). (d) shows the measured spike train cross-correlation functions ψij(τ,D,0) for four different distances D = {1, 325, 625, 1250}. (e) shows the average input correlation coefficients (averaged over 50 neuron pairs per distance) and (f) the average spike train correlation coefficients measured in a hybrid ring network (for the full distribution cf. Fig. 7(c)). Note that the average input correlations in (e) are even smaller than the spike train correlations in (c). For each network realization, we simulated the dynamics for 30 s. We then always averaged over 50 pairs for the input current correlations and 1,000 pairs for the spike train correlations with selected distances D ∈ {1, 10, 20,...,100, 200,..., 6,000}

We will hence assume that all incoming spike train correlations cij are in general dependent on the pairwise distance Dij = |i − j| of neurons i,j (neurons are labeled across the ring in clockwise manner) and on the rewiring probability ϕ. With Campbell's theorem for shot noise (Papoulis 1991; Kriener et al. 2008) we can write

a_{s_i}(τ) = νi Φ(τ),   c_{s_i s_j}(τ) = cij(Dij, ϕ) √(νi νj) Φ(τ)    (16)

with

Φ(τ) = ∫ f(t) f(t + τ) dt

Here, Φ(τ) represents the auto-correlation of the filter kernel f(t).

We now want to derive the zero time-lag input covariances, i.e. the auto-covariance C[I_k, I_k](0) and the cross-covariance C[I_k, I_l](0) of Ik, Il, defined as

C[I_k, I_l](0) = ⟨Ik(t) Il(t)⟩ − ⟨Ik⟩⟨Il⟩   (and analogously for k = l)    (17)

in dependence on the auto- and cross-covariances a_{s_i}(0), c_{s_i s_j}(0) of the individual filtered input spike trains, to obtain the input correlation coefficient

ρ_kl = C[I_k, I_l](0) / √( C[I_k, I_k](0) · C[I_l, I_l](0) )    (18)

With the definitions Eq. (16) the input auto-covariance function at zero time lag, C[I_k, I_k](0), i.e. the variance of the input Ik, explicitly equals

C[I_k, I_k](0) = ∑_{i=1}^{N} Wki² a_{s_i}(0) + ∑_{j ∈ ext(k)} Wkj² a_{s_j}(0) + ∑_{i ≠ j} Wki Wkj c_{s_i s_j}(0)    (19)

while the cross-covariance of the input currents, C[I_k, I_l](0), is given by

C[I_k, I_l](0) = ∑_{i=1}^{N} Wki Wli a_{s_i}(0) + ∑_{i ≠ j} Wki Wlj c_{s_i s_j}(0)    (20)

To assess the zero-lag shot noise covariances a_{s_i}(0) and c_{s_i s_j}(0) we derive with Eqs. (10), (16)

a_{s_i}(0) = νi Φ(0),   c_{s_i s_j}(0) = cij(Dij, ϕ) √(νi νj) Φ(0)    (21)

We assume νi = ν for all i ∈ {1,...,N}, with ν denoting the average stationary rate of the network neurons, and νj = ν_ext for all external neurons j, with ν_ext denoting the rate of the external neurons. Hence, the zero-lag auto-covariances a_s(0) = ν Φ(0) and a_s^ext(0) = ν_ext Φ(0) are the same for all neurons i ∈ {1,...,N} and for all external neurons, respectively. For the cross-covariance function we analogously have c_{s_i s_j}(0) = cij(Dij, ϕ) ν Φ(0). We define Hk as the contribution of the shot noise variances a_s to the variance C[I_k, I_k](0) of the inputs (cf. Fig. 3(a))

Hk = ∑_{i: Wki ≠ 0} Wki² a_{s_i}(0)    (22)

Gkl as the contribution of the shot noise variances a_s to the cross-covariance C[I_k, I_l](0) of the inputs (cf. Fig. 3(c))

Gkl = ∑_{i: Wki Wli ≠ 0} Wki Wli a_{s_i}(0)    (23)

Lk as the contribution of the shot noise cross-covariances c_s to the auto-covariance C[I_k, I_k](0) of the inputs (cf. Fig. 3(b))

Lk = ∑_{i ≠ j} Wki Wkj c_{s_i s_j}(0)    (24)

and Mkl as the contribution of the shot noise cross-covariances c_s to the cross-covariance C[I_k, I_l](0) of the inputs (cf. Fig. 3(d))

Mkl = ∑_{i ≠ j} Wki Wlj c_{s_i s_j}(0)    (25)

Finally, if we assume input structure homogeneity, i.e. that the expected values of these individual contributions do not depend on k and l, but only on the relative distance Dkl and the rewiring probability ϕ, we can rewrite Eq. (18) as

ρ_kl(Dkl, ϕ) = ( G(Dkl, ϕ) + M(Dkl, ϕ) ) / ( H + L(ϕ) )    (26)

The next two sections are devoted to the calculation of these expressions for ring and small-world networks.
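Before turning to the analytical evaluation, it is useful to note how the corresponding quantities are estimated from simulations (cf. the procedures described in the captions of Figs. 4 and 6): for each selected distance D one samples neuron pairs (k, k + D) on the ring, computes the zero-lag Pearson correlation of their recorded traces (input currents or binned spike counts), and averages over pairs. The sketch below implements this estimator on surrogate data; the function name and the surrogate traces are illustrative only.

import numpy as np

# Distance-resolved pairwise correlation estimator: for each distance D,
# average the Pearson correlation of traces of neuron pairs (k, k + D).
def distance_resolved_correlation(traces, distances, n_pairs=50, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    N = traces.shape[0]
    out = {}
    for D in distances:
        ks = rng.integers(0, N, size=n_pairs)
        cc = [np.corrcoef(traces[k], traces[(k + D) % N])[0, 1] for k in ks]
        out[D] = float(np.mean(cc))
    return out

rng = np.random.default_rng(4)
traces = rng.normal(size=(1000, 5000))     # surrogate "input current" traces
print(distance_resolved_correlation(traces, distances=[1, 10, 100, 500], rng=rng))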

Fig. 3.

Sketch of the different contributions to the input correlation coefficient ρ_kl, cf. Eq. (26), for the ring graph. The variance C[I_k, I_k](0), Eq. (17), of the input to a neuron k is given by the sum of the variances H (panel (a)), Eq. (22), and the sum of the covariances L(0) (panel (b)), Eq. (24), of the incoming filtered spike trains si from neurons i ≠ j with WkiWkj ≠ 0. The cross-covariance C[I_k, I_l](0), Eq. (17), is given by the sum of the variances of the commonly seen spike trains si with WkiWli ≠ 0, G(Dkl,0) (panel (c)), Eq. (23), and the sum of the covariances of the spike trains si from non-common input neurons i ≠ j with WkiWlj ≠ 0, M(Dkl,0), Eq. (25). We always assume that the only source of spike train correlations cij(Dij,0) stems from presynaptic neurons sharing a common presynaptic neuron m (green)

Ring graphs

First we consider the case of Dale-conform ring networks, i.e. ϕ = 0. A fraction βκ of the presynaptic neurons i within the local input pool of neuron k is excitatory and depolarizes the postsynaptic neuron by Wki = J with each spike, while (1 − β)κ presynaptic neurons are inhibitory and hyperpolarize the target neuron k by Wki = −gJ per spike (cf. Section 2). Moreover, each neuron receives κ_ext excitatory inputs from the external neuron pool with Wki = J. Hence, for all neurons k ∈ {1,...,N} we obtain for the input variance Hk = H, Eq. (22), Fig. 3(a)

H = κ J² ( β + (1 − β) g² ) a_s(0) + κ_ext J² a_s^ext(0)    (27)

Because of the boxcar footprint, the contribution of the auto-covariances a_s(0) of the individual filtered spike trains to the input cross-covariance Gkl(Dkl,0), Eq. (23), is basically the same as H, only scaled by the respective overlap of the two presynaptic neuron pools of neurons k and l. This overlap only depends on the distance Dkl between k and l, cf. Fig. 3(c). Hence, for all k,l ∈ {1,...,N}

G(Dkl, 0) = (κ − Dkl) Θ[κ − Dkl] J² ( β + (1 − β) g² ) a_s(0)    (28)

with minor modulations because of the exclusion of self-couplings and the relative position of the inhibitory neurons with respect to the boxcar footprint, but for large κ these corrections are negligible. Θ[x] is the Heaviside step function that equals 1 if x ≥ 0, and 0 if x < 0. If all incoming spike trains from local neurons are uncorrelated and Poissonian, and the external drive is a direct current, the complete input covariance stems from the structural (i.e. common input) correlation coefficient ρ_struct(Dkl, 0) alone, which can then be written as

ρ_struct(Dkl, 0) = G(Dkl, 0)/H = (κ − Dkl)/κ · Θ[κ − Dkl]    (29)
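Under the idealized assumptions leading to Eq. (29) (uncorrelated Poissonian local inputs, DC external drive), the structural correlation coefficient is just the normalized overlap of the two boxcar input pools. The sketch below compares this overlap expression with the common-input sum computed directly from a toy Dale-conform weight matrix; small deviations arise from the exclusion of self-connections and the discrete placement of the inhibitory neurons.

import numpy as np

# Structural correlation of a Dale-conform ring: overlap formula of Eq. (29)
# versus the normalized common-input sum W_k . W_l from a toy weight matrix.
def rho_struct_formula(D, kappa):
    return max(kappa - D, 0) / kappa

def rho_struct_from_weights(W, k, l):
    return W[k] @ W[l] / np.sqrt((W[k] @ W[k]) * (W[l] @ W[l]))

# toy ring: kappa/2 neighbours on each side, every 5th neuron inhibitory (20%)
N, kappa, J, g = 1000, 100, 0.1, 6.0
W = np.zeros((N, N))
inhibitory = (np.arange(N) % 5 == 0)
for k in range(N):
    for off in range(1, kappa // 2 + 1):
        for i in ((k + off) % N, (k - off) % N):
            W[k, i] = -g * J if inhibitory[i] else J

for D in (1, 25, 50, 75, 100, 200):
    print(D, round(rho_struct_formula(D, kappa), 3),
          round(rho_struct_from_weights(W, 0, D), 3))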

The spike train correlations cij(Dij,0) show however a pronounced distance dependent decay and reach non-negligible amplitudes up to cij(1,0) ≈ 0.04 (cf. Fig. 4(c)). In the following we will use two approximations of the distance dependence of cij(Dij,0), a linear relation and an exponential relation. We start by assuming a linear decay on the interval (0,κ] (cf. Fig. 4(c), black). This choice is motivated by two assumptions. First, we assume that the main source of spike correlations stems from the structural input correlations Inline graphic, Eq. (29), of the input neurons i,j alone, i.e. the strength of the correlations between two input spike trains Si and Sj depends on the overlap of their presynaptic input pools, determined by their interneuronal distance Dij. Analogous to the reasoning that lead to the common input correlations G(Dkl,0)/H, Eq. (29), before, the output spike train correlation between neurons i and j will hence be zero if Dij ≥ κ. Moreover, the neurons i and j will only be contributing to the input currents of k and l, if they are within a distance κ/2, that is Dki < κ/2 and Dlj < κ/2. Hence, for the correlations of two input spike trains Si, Sj, i ≠ j to contribute to the input covariance of neurons k and l, these must be within a range κ + 2 κ/2 = 2κ. Additionally, we assume that the common input correlations Inline graphic are transmitted linearly with the same transmission gain Inline graphic to the output side of i and j. We hence make the following ansatz for the distance dependent correlations between the filtered input spike trains from neuron i to k and from neuron j to l (we always indicate the dependence on k and l by |kl):

graphic file with name M109.gif 30

For the third sum in Eq. (19) this yields for all k ∈ {1,...,N} (cf. Appendix B for details of the derivation)

graphic file with name M110.gif 31

For the second term in Eq. (20) we have

graphic file with name M111.gif 32

which again only depends on the distance, so we dropped the subscript of M. It is derived in Appendix B and explicitly given by Eq. (67). After calculation of L(0) and M(Dkl,0) with the ansatz Eq. (30), we can plot ρ_kl(Dkl,0) as a function of distance and get a curve as shown in Fig. 4(a). It is obvious (Fig. 4(c)) that the linear fit overestimates the spike train correlations as a function of distance; the correlation transmission decreases non-linearly with interneuronal distance, i.e. with the strength of the input correlation ρ_struct(Dij, 0) (De la Rocha et al. 2007; Shea-Brown et al. 2008). This leads to an overestimation of the total input correlations for distances Dkl ≥ κ (Fig. 4(a)). If we instead fit the distance dependence of the spike train correlations of neurons i and j by a decaying exponential function with a cut-off at Dij = κ,

cij(Dij, 0) = γ e^{−η Dij} Θ[κ − Dij]    (33)

and fit the corresponding parameters γ and η to the values estimated from the simulations, the sums in Eqs. (31) and (32) can still be reduced to simple terms (cf. Eqs. (69), (70)), and the correspondence with the observed input correlations becomes very good over the whole range of distances (Fig. 4(b)). We conclude that the strong common input correlations ρ_struct of neighboring neurons due to the structural properties of Dale-conform ring networks predominantly cause the spatio-temporally correlated spiking of neuron groups of size ∼κ.
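The exponential ansatz of Eq. (33) amounts to a two-parameter fit of the measured distance dependent spike train correlation coefficients. A sketch of this fitting step, using synthetic stand-ins for the simulation estimates of Fig. 4(c), could look as follows; scipy's curve_fit is assumed, and γ and η are the fitted gain and inverse correlation length.

import numpy as np
from scipy.optimize import curve_fit

# Fit gamma * exp(-eta * D) with a hard cut-off at D = kappa to (synthetic)
# distance dependent spike train correlation coefficients.
kappa = 1250
D = np.array([1, 10, 20, 50, 100, 200, 400, 600, 800, 1000, 1200])
c_measured = (0.04 * np.exp(-D / 300.0)
              + np.random.default_rng(5).normal(0.0, 1e-3, D.size))

def model(D, gamma, eta):
    return gamma * np.exp(-eta * D) * (D <= kappa)

(gamma, eta), _ = curve_fit(model, D, c_measured, p0=(0.04, 1e-2))
print(f"gamma = {gamma:.3f}, correlation length 1/eta = {1/eta:.0f} neurons")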

However, we saw (cf. Fig. 2(d)) that the spiking activity in ring networks becomes highly asynchronous and irregular if we relax Dale's principle and consider hybrid neurons instead. Since the number of excitatory, inhibitory and external synapses is the same for all neurons, we obtain the same expressions for H, L(0) and M(Dkl,0) for hybrid neurons as well, but the common input correlations become (the expectation value ⟨·⟩ is taken with respect to network realizations)

⟨Gkl(Dkl, 0)⟩ = (κ − Dkl) Θ[κ − Dkl] J² ( β − (1 − β) g )² a_s(0)    (34)

The ratio of ⟨Gkl(Dkl, 0)⟩ and G(Dkl, 0) hence corresponds to the one reported for random networks (Kriener et al. 2008) and equals 0.02 for the parameters used here. This is in line with the average input correlations in the hybrid ring network (Fig. 4(e)). They are hence only about half the correlation of the spike trains in the Dale-conform ring network (Fig. 4(c)). If we assume the correlation transmission for the highest possible input correlation to be the same as in the Dale case (γ ≈ 0.04), we estimate spike train correlations of the order of γ · 0.02, i.e. about 10⁻³. The measured average values from simulations indeed give correlations of that range (Fig. 4(f)) and are, hence, of the same order as in hybrid random networks (Kriener et al. 2008). As we will show in Section 6, the distribution of input correlation coefficients ρ_kl is centered close to zero, with a high peak at zero and both negative and positive contributions. In the Dale-conform ring network, however, we only observe positive correlation coefficients with values up to nearly one (the value one could only be reached if we applied identical external input to all neurons).

This transfers to the spike generation process and hence explains the dramatically different global spiking behavior, as well as the different Fano factors of the population spike counts (cf. Table 1, Appendix A) in both network types due to the decorrelation of common inputs in hybrid networks.
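A quick numerical check of the ratio quoted above, using the weight-mixing factors that appear in G, Eq. (28), and ⟨G⟩, Eq. (34), with β = 0.8 and g = 6 (a sketch; it only verifies the arithmetic of that ratio):

# hybrid-to-Dale ratio of common-input covariances, cf. Eqs. (28) and (34)
beta, g = 0.8, 6.0
ratio = (beta - (1 - beta) * g) ** 2 / (beta + (1 - beta) * g ** 2)
print(ratio)   # -> 0.02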

Small-world networks

As we stated before, the clustering coefficient C (cf. Eq. (5)) is directly related to the amount of shared input ∑i WkiWli between two neurons l and k, and hence to the strength of distance dependent correlations. When it gets close to the random graph value, as is the case for ϕ→1, the input correlations also become similar to those of the corresponding balanced random network (cf. Fig. 5). If we randomize the network gradually by rewiring a fraction ϕ of the connections, the input variance H is not affected. However, the input covariances due to common input, G(Dkl, ϕ), do not only depend on the distance, but also on ϕ. The boxcar footprints of the ring network get diluted during rewiring, so a distance Dkl < κ no longer implies that all neurons within the overlap (κ − Dkl) of the two boxcars project to both or any of the neurons k,l (cf. Appendix B, Fig. 8(b)). At the same time the probability to receive inputs from neurons outside the boxcar increases during rewiring. These contributions are independent of the pairwise distance. Still, the probability for two neurons k,l with Dkl < κ to receive input from common neurons within the overlap of the (diluted) boxcars is always higher than the probability to get synapses from common neurons in the rest of the network, as long as ϕ < 1: those input synapses that were not chosen for rewiring adhere to the boxcar footprint, and at the same time the boxcar regains a fraction of its synapses during the random rewiring. So, if Dkl < κ there are three different sources of common input to two neurons k and l that we have to account for: neurons within the overlap of the input boxcars that kept or re-established their synapses to k and l (possible in region ‘a’ in Fig. 8(b)), those that did not have any synapse to either k or l, but project to both k and l after rewiring (possible in region ‘c’ in Fig. 8(b)), and those that lie in the boxcar footprint of only one of the two neurons (say k) and additionally project to the other neuron l via a randomly rewired synapse (possible in region ‘b’ in Fig. 8(b)). This implies that after rewiring neurons can be correlated due to common input in the regions ‘b’ and ‘c’, even if they are farther apart than κ. These correlations due to the random rewiring alone are then independent of the distance between k and l. The probabilities for all these contributions to the total common input covariance G(Dkl, ϕ) are derived in detail in Appendix B. Ignoring the minor corrections due to the exclusion of self-couplings, we obtain for all k,l ∈ {1,...,N}

graphic file with name M138.gif 35

with (cf. Appendix B)

p_1(ϕ) = 1 − ϕ (1 − κ/N)    (36)

and

p_2(ϕ) = ϕ κ / N    (37)
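The geometric argument of Fig. 8 can be checked by a small Monte Carlo experiment that only uses the rewiring procedure itself (not the closed-form p_1, p_2): build the rewired input pools of two neurons at distance D and count their common presynaptic neurons. For ϕ = 0 the count is the boxcar overlap, for ϕ = 1 it approaches κ²/N ≈ 125 independently of D. The network sizes match Section 2, but the trial count and the helper function are illustrative only.

import numpy as np

# Monte Carlo estimate of the expected number of common presynaptic neurons
# of two neurons at distance D under boxcar rewiring with probability phi.
def input_pool(k, N, kappa, phi, rng):
    box = np.array([(k + o) % N for o in range(1, kappa // 2 + 1)]
                   + [(k - o) % N for o in range(1, kappa // 2 + 1)])
    pool = set(box[rng.random(kappa) >= phi])     # synapses kept on the boxcar
    while len(pool) < kappa:                      # redraw the rewired synapses
        s = int(rng.integers(N))
        if s != k:
            pool.add(s)
    return pool

rng = np.random.default_rng(6)
N, kappa, trials = 12500, 1250, 100
for phi in (0.0, 0.1, 0.5, 1.0):
    for D in (1, 625, 1250, 2500):
        Q = np.mean([len(input_pool(0, N, kappa, phi, rng)
                         & input_pool(D, N, kappa, phi, rng))
                     for _ in range(trials)])
        print(f"phi={phi:3.1f}  D={D:5d}  <Q>={Q:7.1f}")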

Since we always assume that the spike train correlations cij(Dij, ϕ) are caused solely by common input correlations transmitted to the output, i.e. that they are some function of ρ_struct(Dij, ϕ), we also have to take this into account in the ansatz for the functional form of cij(Dij, ϕ). Again, these spike train correlations lead to contributions to the cross-covariances of the inputs Ik,Il if Dkl < 2κ. With the linear distance dependence assumption we obtain (cf. Appendix B)

graphic file with name M144.gif 38

and for all k,l ∈ {1,...,N}

graphic file with name M145.gif 39

where we assumed that the rates, and the auto- and cross-covariances of the spike trains, are the same for all neurons and neuron pairs, respectively. M(Dkl, ϕ) can be evaluated as before. The same procedure as in the case ϕ = 0 hence gives the respective distance dependent input correlations (cf. Appendix B for details) for 0 < ϕ ≤ 1. The correspondence with the observed curves is good (Fig. 6). If we on the other hand apply cut-off exponential fits

graphic file with name M149.gif 40

of the distance dependent part of the spike train covariance functions, the shot noise covariance becomes

graphic file with name M150.gif 41

With this ansatz the correspondence of the predicted and measured input correlations ρ_kl(Dkl, ϕ) is nearly perfect, as was the case for the ring graphs (Fig. 6). For the random network the spike correlations cij(Dij,1) are independent of distance and are proportional to the network connectivity ϵ (cf. also Kriener et al. (2008)), i.e. cij(Dij,1) = ϵγ(1). This is indeed the case with the linear ansatz, as one can easily check with p1(1) = p2(1) = κ/N, cf. Eqs. (36), (37).

Fig. 5.

Normalized clustering coefficient C(ϕ)/C(0) (black) versus the normalized input correlation coefficient ρ(1, ϕ)/ρ(1, 0) (gray), estimated from simulations and evaluated at its maximum at distance D = 1, in a semi-log plot. The input correlation decays slightly faster with ϕ than the clustering coefficient, but the overall shape is very similar. This shows how the topological properties translate to the joint second order statistics of neuronal inputs

Fig. 8.

Sketch of how to derive the different contributions a, b, and c to the covariances in Eqs. (35), (39), (68), omitting the correction due to the non-existence of self-couplings. The ring is flattened out and the left and right ends of each row are connected (neuron N + 1 is identified with neuron 1). (a) The ring network case ϕ = 0: the neurons are in the center of their respective boxcar-neighborhoods of size κ marked in black. Black indicates a connection probability of 1, white indicates connection probability 0. The two neurons k and l > k are within a distance Dkl = |k − l| < κ from each other, hence they share common input from (κ − Dkl) neurons. The neurons k and l′, however, are farther apart than κ and do not have any common input neurons. (b) After rewiring, ϕ > 0: the boxcar-neighborhood is diluted (dark-patterned) and the neurons have a certain probability to get input from outside the boxcar (light-patterned). Common input can now come in three different varieties: ‘a’, ‘b’ and ‘c’. If two neurons k, l are closer together than κ, the contribution of variety ‘a’ is proportional to the probability that a neuron within the overlap (κ − Dkl) still projects to both k and l after rewiring (cf. Eq. (61)). The contributions of variety ‘b’ are from neurons that projected to only one of k or l before rewiring, but do now project to both k and l (cf. Eq. (62)). The contributions of the third variety ‘c’ are proportional to the probability that a neuron projected to neither k nor l in the original ring network, but projects to both after rewiring (cf. Eq. (63)). If two neurons k and l′ are farther apart than κ, they can have common input neurons after rewiring of varieties ‘b’ and ‘c’ only

Fig. 6.

Structural (a), spike train (b), and input (c, d) correlation coefficients as a function of the rewiring probability ϕ and the pairwise interneuronal distance D for a ring network of size N = 12,500, κ = 1,250, g = 6, J = 0.1 mV with (1 − β)N equidistantly distributed inhibitory neurons and κ_ext external Poisson inputs per neuron of rate ν_ext each. (a) The structural correlation coefficients ρ_struct(D, ϕ). For ϕ = 0 they are close to one for D = 1 and tend to zero for D = 1,250. These would be the expected input correlation coefficients if the spike train correlations were zero and the external input was DC. (b) shows the spike train correlations cij(D, ϕ) as estimated from simulations (gray) and the exponential fits, Eq. (40) (red), we used to calculate the input correlation coefficients shown in panel (d). (c) shows the input correlation coefficients ρ_kl(D, ϕ) (gray) estimated from simulations and the theoretical prediction (red) using linear fits of the respective cij(D, ϕ). (d) shows the same as (c), but with cij(D, ϕ) fitted as decaying exponentials as shown in panel (b). For each network realization, we simulated the dynamics for 30 s. We then always averaged over 50 pairs for the input current correlations and 1,000 pairs for the spike train correlations with selected distances D ∈ {1, 10, 20,...,100, 200,..., 6000} and rewiring probabilities ϕ, always shown from top to bottom

With the spike train correlations fitted by an exponential, the correlation length 1/η(ϕ) actually diverges for ϕ→1, and Eq. (41) gives the wrong limit for Dkl < 2κ. As one can see in Fig. 6(b), the distance dependence of the spike correlations approaches a linear relation as the networks leave the small-world regime (ϕ→1), and the linear model Eq. (30) becomes more adequate.

Distribution of correlation coefficients in ring and random networks

After the derivation of the distance dependent correlation coefficients of the inputs in different neuronal network types, we can now ask for the distribution of correlation coefficients. In the following, we restrict the quantitative analysis to correlations of weighted input currents, but the qualitative results also hold for different linear synaptic filter kernels fki(t), cf. Section 5. Note that the mean structural input correlation coefficient ⟨ρ_struct⟩ is independent of ring or random topology, both in the Dale and in the hybrid case, while the distributions differ dramatically (Fig. 7).
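In practice, the distributions of Fig. 7 are estimated by sampling: draw a set of neurons at random, compute all pairwise correlation coefficients of their recorded input traces, and histogram them with a small bin width (0.005 in Fig. 7). The sketch below does this on surrogate data; with real recordings, a narrow bell-shaped histogram points towards random connectivity and a heavy right tail towards ring- or small-world-like shared-input structure.

import numpy as np

# Estimate the distribution of pairwise input correlation coefficients from a
# random sample of neurons.  `traces` holds one recorded trace per neuron
# (weighted input current or free membrane potential); surrogate data here.
def correlation_histogram(traces, n_sample=50, bin_width=0.005, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(traces.shape[0], size=n_sample, replace=False)
    C = np.corrcoef(traces[idx])
    coeffs = C[np.triu_indices(n_sample, k=1)]        # all pairs in the sample
    edges = np.arange(-1.0, 1.0 + bin_width, bin_width)
    hist, _ = np.histogram(coeffs, bins=edges, density=True)
    return edges, hist, coeffs

rng = np.random.default_rng(7)
traces = rng.normal(size=(2000, 4000))                # surrogate recordings
edges, hist, coeffs = correlation_histogram(traces, rng=rng)
print("mean:", coeffs.mean().round(4),
      " 99th percentile:", np.quantile(coeffs, 0.99).round(4))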

Fig. 7.

Estimated (gray) and predicted (red) input correlation coefficient probability density function (pdf) for a Dale-conform (a) ring and (b) random network, and for a hybrid (c) ring and (d) random network (N = 12,500, κ = ϵN = 1,250, g = 6, estimation window size 0.005). The estimated data (gray) stem from 10 s of simulated network activity and are compared with the structural input correlation probability mass functions P(ρ_struct) in (b), (c), and (d) as derived in Eq. (46), Eq. (73), and Appendix D (red, binned with the same window as the simulated data), and compared to the full theory including spike correlations in (a, red). In (b), (c), and (d) the spike correlations are very small and hence the estimates lie close to the distributions predicted from the structural correlation ρ_struct (red). For the ring network, however, the real distribution differs substantially, due to the pronounced distance dependent spike train correlations. To obtain the full distribution in (a) and (c), we recorded from 6,250 subsequent neurons. Both have their maxima close to zero; we clipped the peaks to emphasize the less trivial parts of the distributions (the maximum of the pdf in (a) is 142 in theory and 136 in the estimated distribution; the maximum of the pdf in (b) is 160 in theory and 156 in the estimated distribution). For the random networks (b, d), we computed all pairwise input correlations of a random sample of 50 neurons. The oscillations of the analytically derived pdf in (b, red) are due to the specific discrete nature of the problem, cf. Eq. (46)

Ring networks

graphic file with name M174.gif 42

with the distribution of pairwise distances P(D),3 and

graphic file with name M176.gif 43

Random networks (Kriener et al. 2008)

graphic file with name M177.gif 44

and

graphic file with name M178.gif 45

where (1 − β) and β are the fractions of inhibitory and excitatory inputs per neuron (each the same for all neurons).

The distribution of structural correlation coefficients in the random Dale network is given by

graphic file with name M179.gif 46

where n_inh is the number of common inhibitory inputs and n_exc that of common excitatory ones,

graphic file with name M183.gif 47

and

graphic file with name M184.gif 48

Note that the binomial coefficients vanish for non-integer arguments. The correlation coefficient distribution for the random hybrid network is derived in Appendix D.

For a ring graph the structural correlation coefficient distribution P(ρ_struct) has a large probability mass at the origin (contributed by all pairs with D ≥ κ), probability mass spread over the discrete open interval (0,1) for pairs with 0 < D < κ, and a point mass at 1, if we include the variance for distance D = 0. However, due to the non-negligible spike train correlations cij(D,0), the actually measured input correlations ρ_kl have a considerably different distribution with less mass at 0, due to the positive input correlations up to a distance ∼2κ. They are very well described by the full theory with an exponential ansatz for the spike train correlations as described in Section 5.1, Eq. (41). These two limiting cases emphasize that the distribution of input (subthreshold) correlations may give valuable information about whether there is a high degree of locally shared input (heavy-tailed probability distribution P(ρ)) or whether the correlation structure is rather what is to be expected from random connectivity in a Dale-conform network.

Discussion

We analyzed the activity dynamics in sparse neuronal networks with ring, small-world and random topologies. In networks with a high clustering coefficient C, such as ring and small-world networks, neighboring neurons tend to fire highly synchronously. With increasing randomness, governed by the rewiring probability ϕ, the activity becomes more asynchronous, but even in random networks we observe a high Fano factor FF of the population spike counts, indicating residual synchrony.4 As shown by Kriener et al. (2008), these fluctuations become strongly attenuated for hybrid neurons, which have both excitatory and inhibitory synaptic projections.

Here, we demonstrated that the introduction of hybrid neurons leads to highly asynchronous (FF ≈ 1) population activity even in networks with ring topology. Recent experimental data suggest that there are abundant fast and reliable couplings between pyramidal cells which are effectively inhibitory (Ren et al. 2007) and which might be interpreted as hybrid-like couplings. However, the hybrid concept contradicts the general paradigm that pyramidal cells depolarize all their postsynaptic targets while inhibitory interneurons hyperpolarize them, a paradigm known as Dale's principle (Li and Dayan 1999; Dayan and Abbott 2001; Hoppensteadt and Izhikevich 1997). As we showed here, a severe violation of Dale's principle renders the specifics of network topology meaningless, and might even impede functionally important processes, such as pattern formation or line attractors in ring networks (see e.g. Ben-Yishai et al. 1995; Ermentrout and Cowan 1979), or the propagation of synchronous activity in recurrent cortical networks (Kumar et al. 2008a).

We demonstrated that the difference in the amplitude of population activity fluctuations in Dale-conform and hybrid networks can be understood from the differences in the input correlation structure of the two network types. We extended the ansatz presented in Kriener et al. (2008) to networks with ring and small-world topology and derived the input correlations in dependence on the pairwise distance of neurons and the rewiring probability. Because of the strong overlap of the input pools of neighboring neurons in ring and small-world networks, the assumption that the spike trains of different neurons are uncorrelated, an assumption justified in sparse balanced random networks, is no longer valid. We fitted the distance dependent instantaneous spike train correlations and took them adequately into account. This led to a highly accurate prediction of the input correlations.

A fully self-consistent treatment of correlations is however beyond the scope of the analysis presented here. As we saw in Section 5.1, in Dale-conform ring graphs neuron pairs cover basically the whole spectrum of positive input correlation strengths between almost one (depending on the level of variance of the uncorrelated external input) and zero as a function of pairwise distance D. If we look at the ratio between input and output correlation strength, we see that it is not constant, but that stronger correlations have a higher gain. The exact mechanism of this non-linear correlation transmission needs further analysis. Recent analyses of correlation transfer in integrate-and-fire neurons by De la Rocha et al. (2007) and Shea-Brown et al. (2008) showed that the spike train correlations can be written as a linear function of the input correlations, given the latter are small. For larger input correlations, however, De la Rocha et al. (2007) and Shea-Brown et al. (2008) report supralinear correlation transmission. Such correlation transmission properties were also observed and analytically derived for arbitrary input correlation strength in an alternative approach that makes use of correlated Gauss processes (Tchumatchenko et al. 2008). These results are all in line with the non-linear dependence of spike train correlations on the strength of input correlations that we observed and fitted by an exponential decay with interneuronal distance.

We saw that correlations are weakened as they are transferred to the output side of the neurons but, as is to be expected, they are much higher for neighboring neurons in ring networks than in the homogeneously correlated random networks that receive more or less uncorrelated external inputs. The assumption that the spike train covariance functions are delta-shaped is certainly an over-simplification, especially in the Dale-conform ring networks (cf. the examples of spike train cross-correlation functions ψij(τ,D,0), Fig. 4(d)). The temporal width of the covariance functions leads to an increase in the estimated spike train correlations if the spike count bin size h is increased. In Dale-conform ring graphs we found cij(1,0) ≤ 0.041 for time bins h = 0.1 ms (cf. Fig. 4(c), (d)). For h = 10 ms, a time window of the order of the membrane time constant τ_m, we observed cij(1,0) ≤ 0.25 (not shown). This covers the spectrum of correlations reported in experimental studies, which range from 0.01 to approximately 0.3 (Zohary et al. 1994; Vaadia et al. 1995; Shadlen and Newsome 1998; Bair et al. 2001). For hybrid networks, however, the pairwise correlations have a narrow distribution around zero, irrespective of the topology. This explains the highly asynchronous dynamics in hybrid neuronal networks.

Finally, we suggest that the distribution of pairwise correlation coefficients of randomly chosen intracellularly recorded neurons may provide a means to distinguish different neuronal network topologies. Real neurons, however, have conductance-based synapses, and their filtering is strongly dependent on the membrane depolarization (Destexhe et al. 2003; Kuhn et al. 2004). Moreover, spikes are temporally extended events, usually with different synaptic time scales, and transmission delays are distributed and likely dependent on the distance between neurons. These effects, amongst others, might distort the results presented here. Still, though intracellular recordings are technically more involved than extracellular recordings, they yield essentially analog signals, and hence much shorter recording times are necessary to obtain sufficiently good statistics, as compared to the estimation of pairwise spike train correlations from low-rate spiking neurons (Lee et al. 2006). So, bell-shaped distributions of membrane potential correlations may hint towards an underlying random network structure, while heavy-tail distributions should be observed for networks with locally confined neighborhoods. Naturally, the distribution will depend on the relation between the sampled region and the footprint of the neuron type one is interested in. This is true for the model as well as for real neuronal tissue. Some idea about the potential input footprint, e.g. from reconstructions of the axonal and dendritic arbors (Hellwig 2000; Stepanyants et al. 2007), can help to estimate the spatial distance that must be covered. It is also a matter of the spatial scale one is interested in: if one is mostly interested in very small, local networks (< 200 μm), where the connection probability might be considered approximately homogeneous (Hellwig 2000; Stepanyants et al. 2007), the correlation coefficient distribution will be akin to that of a random topology. If one, however, samples several millimeters, the distribution may tend more towards a heavy-tailed shape, due to the increase in the relative number of weakly correlated neuron pairs. At this scale, radial inhomogeneities, for example due to axonal patches (Lund et al. 2003) in two dimensions, or different connection probabilities within and between cortical layers (Binzegger et al. 2004) in three dimensions, must be taken into account as well, as they will distort the over-simplified assumption of the connectivity made here. In conclusion, we think that a further extension of the line of research presented here might provide a way to access structural features of neuronal networks by the analysis of their input statistics. This could eventually prove helpful in separating correlations that arise due to the specifics of the network structure from those that arise due to correlated input from other areas, e.g. sensory inputs, and provide insight into the relation between structure and function.

Acknowledgements

We thank Benjamin Staude, Marc Timme, and two anonymous reviewers for their valuable comments on an earlier version of the manuscript. We gratefully acknowledge funding by the German Federal Ministry of Education and Research (BMBF grants 01GQ0420 and 01GQ0430) and the European Union (EU Grant 15879, FACETS). All network simulations were carried out with the NEST simulation tool (http://www.nest-initiative.org).

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Appendix A: Fano factor

In this appendix we quantitatively formulate the Fano factor FF[n(t;h)] of spike counts n(t;h) per time bin h. We assume that the compound spike train, i.e. the population activity S(t) = ∑_{i=1}^{N} Si(t), is an ensemble of Poisson point processes. The population spike count with regard to a certain time bin h is then a sum of random variables ni(t;h) (Papoulis 1991) defined by

n(t;h) = ∑_{i=1}^{N} ni(t;h),   with ni(t;h) = ∫_t^{t+h} Si(t′) dt′    (49)

The expectation value is given by (exploiting the linearity of the expectation value):

E[n(t;h)] = ∑_{i=1}^{N} E[ni(t;h)]    (50)

The variance Var[n(t;h)] is generally given by (Papoulis 1991; Nawrot et al. 2008)

Var[n(t;h)] = ∑_{i=1}^{N} Var[ni(t;h)] + ∑_{i ≠ j} Cov[ni(t;h), nj(t;h)]    (51)

For stationary Poisson processes with intensity ν, we have mean and variance

E[ni(t;h)] = Var[ni(t;h)] = ν h    (52)

Hence FF[n(t;h)] = 1, provided all processes are independent. If we have homogeneously correlated Poisson processes (cf. Eq. (10)), such that cij = c for all i ≠ j ∈ {1,...,N}, the variance of the population count is given by

Var[n(t;h)] = N ν h + N(N − 1) c ν h = N ν h (1 + (N − 1) c)    (53)

For the Fano factor we hence obtain FF[n(t;h)] = 1 + (N − 1) c. For homogeneously correlated networks like random networks this estimate is indeed very close to the actually measured FF (Kriener et al. 2008). For the ring and small-world networks the homogeneous coefficient c in Eq. (53) has to be replaced by the pairwise correlation coefficients averaged over all distances, since the spike count estimation is a linear filtering and hence Campbell's theorem, Eq. (16), can be applied. For the Dale-conform ring network this yields an FF of the order of the one estimated from simulations, cf. Table 1.
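Equation (53) can be checked numerically with homogeneously correlated Poisson trains generated by thinning a common mother process (every mother spike is copied independently into each train with probability c, so the pairwise count correlation equals c). The construction and the parameter values below are illustrative only.

import numpy as np

# Check of Eq. (53): thinned copies of a mother Poisson process give pairwise
# count correlation c, so the population Fano factor should be 1 + (N - 1) c.
rng = np.random.default_rng(8)
N, nu, c, T, h = 200, 10.0, 0.01, 100.0, 0.1e-3

mother = rng.uniform(0.0, T, rng.poisson(nu / c * T))    # mother rate nu / c
edges = np.arange(0.0, T + h, h)
pop_counts = np.zeros(edges.size - 1)
for _ in range(N):
    child = mother[rng.random(mother.size) < c]          # thinned copy, rate nu
    pop_counts += np.histogram(child, bins=edges)[0]

ff = pop_counts.var() / pop_counts.mean()
print("measured FF:", round(ff, 2), "  predicted 1+(N-1)c:", 1 + (N - 1) * c)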

Appendix B: Distance dependent correlations—linear fit

To estimate the contributions to the covariances in Eqs. (35), (39), (68) in detail for 0 < ϕ ≤ 1, we first calculate the distribution of non-zero entries in each matrix row in dependence on ϕ. If we remove exactly R_w (incoming) synapses from the boxcar neighborhood of a neuron and randomly redraw them (without establishing multiple or self-connections) from the admissible presynaptic neurons, the distribution for reestablishing q connections that were there before is given by

[Equation (54)]

and that of establishing r new connections is

[Equation (55)]

In expectation we hence have

[Equation (56)]

connections from within the ring boxcar neighborhood κ, and

[Equation (57)]

new connections from outside the boxcar neighborhood.5 We then define the probability that a neuron within the boxcar neighborhood, and the probability that a neuron outside the boxcar neighborhood, projects to a neuron k, k ∈ {1,...,N}, by

[Equation (58)]

and

[Equation (59)]

With the notation from Fig. 8 we get the expected number

[Equation (60)]

of common inputs Q to neuron k and neuron l in dependence of Dkl and the rewiring probability, with

[Equation (61)]
[Equation (62)]

and

[Equation (63)]

For two neurons k and l we assumed (cf. Eq. (38)) the spike train correlations between input neuron i of k and input neuron j of l to be given by

[Equation: linear ansatz for the input spike-train correlations, cf. Eq. (38)]

The double-sum over all pairwise distances Dij = |i − j| can be expressed by a simple summation formula:

[Equation (64)]

and hence, with Defs. (61), (62), (63), we can evaluate:

[Equation (65)]

We assume k ≠ l and, without loss of generality, k > l. We set Dkl = |k − l| = (k − l) =: d and always omit the explicit modulo-notation due to periodic boundary conditions. We then calculate

[Equation (66)]

We define the alias

[Equation: definition of the alias]

and obtain:

[Equation]

We can evaluate the Heaviside step-functions and rewrite the sums as

[Equation]

where the case distinctions occur because neurons k and l are excluded from the summation, and we need to subtract the over-counted terms i = j. A shift of the summation indices yields:

[Equation]

Now we perform a resorting similar to, but more involved than, that in Eq. (64) and split the sum into contributions that share the same boundary conditions:

[Equation]

We have to keep in mind that d ∈ {1,...,2κ}. Hence, e.g. the first term in the latter identity

[Equation: first term of the preceding identity]

has two distance regimes: one where κ ≥ d, so that the step function is always one, and a second where κ < d ≤ 2κ, so that the step function truncates all summands with q > κ. This has to be taken into account when calculating the summation formula:

[Equation]

where in the first identity we used the step function, keeping in mind that all summands in the inner sum whose lower summation index exceeds the upper one vanish. In the second identity we performed a simple index shift. After evaluating the step functions, it is straightforward to find the corresponding summation formulae (Bronstein and Semendjajew 1987):

[Equation (67)]

(*) The contribution of minus one arises from correcting for the omission of neurons k and l in the summation over the κ neighbors.

In general we numerically evaluated:

[Equation (68)]
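
As a numerical counterpart to Eqs. (54)–(68), the following Python sketch (the rewiring routine, parameter values and function names are illustrative assumptions, not the exact construction of the main text) builds the incoming connectivity of a boxcar ring with in-degree κ, rewires each incoming synapse with probability p_rewire, and measures the average number of common inputs Q as a function of the index distance d:

import numpy as np

def rewired_ring_common_inputs(N, kappa, p_rewire, seed=None):
    # A[k, i] == True means neuron i projects to neuron k.
    rng = np.random.default_rng(seed)
    A = np.zeros((N, N), dtype=bool)
    offsets = np.r_[-(kappa // 2):0, 1:kappa // 2 + 1]   # boxcar neighborhood
    for k in range(N):
        pre = (k + offsets) % N
        rewire = rng.random(kappa) < p_rewire
        A[k, pre[~rewire]] = True                        # keep these boxcar synapses
        for _ in range(int(rewire.sum())):               # redraw the rest at random,
            while True:                                  # avoiding self/multiple connections
                i_new = rng.integers(N)
                if i_new != k and not A[k, i_new]:
                    break
            A[k, i_new] = True
    # average number of common inputs as a function of the ring distance d
    d_max = 2 * kappa
    Q = np.zeros(d_max)
    for k in range(N):
        for d in range(1, d_max + 1):
            Q[d - 1] += np.count_nonzero(A[k] & A[(k + d) % N])
    return Q / N                                         # Q[d-1] approximates E[Q(d)]

q_of_d = rewired_ring_common_inputs(N=500, kappa=50, p_rewire=0.1, seed=0)

Comparing q_of_d with the expectation of Eq. (60) provides a direct check of the combinatorial argument above.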

Appendix C: Distance dependent correlations—exponential fit

With the ansatz Eq. (38)

[Equation: exponential ansatz, cf. Eq. (38)]

we obtain, analogously to Eq. (39) of the linear case in Appendix B,

[Equation (69)]

and furthermore:

[Equation (70)]

We once more omit the explicit modulo-notation due to periodic boundary conditions and obtain, analogously to the linear case in Appendix B:

[Equation (71)]

(*) The subtracted contribution arises from correcting for the omission of neurons k and l in the summation over the κ neighbors.

In general we numerically evaluated:

[Equation (72)]
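
A direct numerical evaluation in the spirit of Eq. (72) can be sketched as follows; purely for illustration we assume a boxcar ring without rewiring and an exponential ansatz c(D) = c0·exp(−D/λ) with hypothetical parameters c0 and λ:

import numpy as np

def ring_input_covariance(N, kappa, c0, lam, d):
    # Sum the assumed input correlations c(D_ij) over all pairs (i, j) of
    # presynaptic neurons of two ring neurons at index distance d.
    offsets = np.r_[-(kappa // 2):0, 1:kappa // 2 + 1]
    pre_k = offsets % N                    # inputs of neuron k = 0
    pre_l = (d + offsets) % N              # inputs of neuron l = d
    diff = np.abs(pre_k[:, None] - pre_l[None, :])
    D = np.minimum(diff, N - diff)         # ring distance between the inputs
    return c0 * np.sum(np.exp(-D / lam))

cov_of_d = [ring_input_covariance(N=500, kappa=50, c0=0.01, lam=10.0, d=d) for d in range(1, 101)]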

Appendix D: Distribution of correlation coefficients in the hybrid ring and random network

In the hybrid random network case, the number of total common inputs Q of a pair of neurons follows the distribution

[Equation (73)]

where κ is the total number of input synapses per neuron. Given a common input pool of size Q, we then ask for the probability of having ni+ incoming excitatory synapses to neuron Ni and nj+ incoming excitatory synapses to neuron Nj. We have

[Equation (74)]

If we, moreover, know n++, i.e. the number of common inputs in Q that are excitatory for both Ni and Nj, the remaining possible sign combinations are determined (cf. Fig. 9). n++ follows the probability distribution

[Equation (75)]

with max(0, ni+ + nj+ − Q) ≤ n++ ≤ min(ni+, nj+). We get

[Equation (76)]

Fig. 9  Sketch of how to derive n−+, n+− and n−−, given ni+, nj+ and n++

Hence, given ni+, nj+ and n++, the correlation coefficient is

[Equation (77)]

The probability distribution is then given by

[Equation (78)]

The hybrid ring case is obtained analogously by taking into account that all nontrivial overlaps Q ∈ {1,...,κ − 1} occur exactly twice, hence the probability P(Q) for all possible non-trivial Q is 2/(N − 1). Additionally, there are (N − 2κ − 1) possible neuron pairs, drawn with probability 1/(N − 1), that do not share common input and hence have zero correlation.
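
For intuition, the distribution of Eq. (78) can also be sampled by Monte Carlo. The sketch below assumes unit-amplitude synapses, an excitation probability γ per synapse, and uses the signed overlap of the common input pool, (n++ + n−− − n+− − n−+)/κ, as the pair correlation coefficient; this is a simplified stand-in for Eqs. (73)–(78), not their literal implementation:

import numpy as np

def hybrid_random_corr_coeffs(N, kappa, gamma, n_pairs, seed=None):
    # Hybrid random network: each neuron draws kappa presynaptic partners at
    # random, and every synapse is excitatory (+1) with probability gamma,
    # inhibitory (-1) otherwise, i.e. Dale's principle is deliberately violated.
    rng = np.random.default_rng(seed)
    coeffs = np.empty(n_pairs)
    for p in range(n_pairs):
        pre_i = rng.choice(N, size=kappa, replace=False)
        pre_j = rng.choice(N, size=kappa, replace=False)
        common = np.intersect1d(pre_i, pre_j)            # shared input pool Q
        sign_i = rng.choice([1, -1], size=len(common), p=[gamma, 1 - gamma])
        sign_j = rng.choice([1, -1], size=len(common), p=[gamma, 1 - gamma])
        coeffs[p] = np.sum(sign_i * sign_j) / kappa      # signed overlap / in-degree
    return coeffs

c = hybrid_random_corr_coeffs(N=1000, kappa=100, gamma=0.8, n_pairs=5000, seed=0)

The histogram of c then approximates the correlation coefficient distribution of the hybrid random network under these assumptions.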

Footnotes

1

The value of the clustering coefficient does not, in expectation, depend on the exact choice of triplet connectivity we ask for.

2

The reverse is not true: a network can have a large amount of shared input without a high clustering coefficient. An example is a star graph, in which a central neuron projects to N other neurons that are themselves not interconnected.

3

The density of neurons at distance D generally behaves like P(D) ∼ D^(dim−1), where dim is the dimensionality.

4

This is due to the finite size of the network. For increasing network size N→ ∞ the asynchronous-irregular state becomes stable.

5

We refer here to the distance in neuron indices, which are arbitrarily defined to run from 1 to N in a clockwise manner. Hence the boxcar neighborhood of a neuron i includes {i − κ/2,...,i + κ/2} (modulo network size). Note that for nonzero rewiring this does not generally correspond to the topological neighborhood defined by adjacency anymore.

References

  1. Albert, R., & Barabasi, A. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics, 74, 47–97.
  2. Bair, W., Zohary, E., & Newsome, W. (2001). Correlated firing in Macaque visual area MT: Time scales and relationship to behavior. Journal of Neuroscience, 21(5), 1676–1697.
  3. Ben-Yishai, R., Bar-Or, R., & Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 92, 3844.
  4. Binzegger, T., Douglas, R. J., & Martin, K. A. C. (2004). A quantitative map of the circuit of cat primary visual cortex. Journal of Neuroscience, 24(39), 8441–8453.
  5. Bronstein, I. N., & Semendjajew, K. A. (1987). Taschenbuch der Mathematik (23rd ed.). Thun und Frankfurt/Main: Verlag Harri Deutsch.
  6. Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8(3), 183–208.
  7. Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11(7), 1621–1671.
  8. Chklovskii, D. B., Schikorski, T., & Stevens, C. F. (2002). Wiring optimization in cortical circuits. Neuron, 34, 341–347.
  9. Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience. Cambridge: MIT.
  10. De la Rocha, J., Doiron, B., Shea-Brown, E., Josic, K., & Reyes, A. (2007). Correlation between neural spike trains increases with firing rate. Nature, 448(16), 802–807.
  11. Destexhe, A., Rudolph, M., & Pare, D. (2003). The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience, 4, 739–751.
  12. Ermentrout, G. B., & Cowan, J. D. (1979). A mathematical theory of visual hallucination patterns. Biological Cybernetics, 34, 137–150.
  13. Gewaltig, M.-O., & Diesmann, M. (2007). NEST (Neural simulation tool). Scholarpedia, 2(4), 1430.
  14. Hellwig, B. (2000). A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex. Biological Cybernetics, 82(2), 111–121.
  15. Hoppensteadt, F. C., & Izhikevich, E. M. (1997). Weakly connected neural networks. New York: Springer.
  16. Jahnke, S., Memmesheimer, R., & Timme, M. (2008). Stable irregular dynamics in complex neural networks. Physical Review Letters, 100, 048102.
  17. Kriener, B., Tetzlaff, T., Aertsen, A., Diesmann, M., & Rotter, S. (2008). Correlations and population dynamics in cortical networks. Neural Computation, 20, 2185–2226.
  18. Kuhn, A., Aertsen, A., & Rotter, S. (2004). Neuronal integration of synaptic input in the fluctuation-driven regime. Journal of Neuroscience, 24(10), 2345–2356.
  19. Kumar, A., Rotter, S., & Aertsen, A. (2008a). Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model. Journal of Neuroscience, 28(20), 5268–5280.
  20. Kumar, A., Schrader, S., Aertsen, A., & Rotter, S. (2008b). The high-conductance state of cortical networks. Neural Computation, 20(1), 1–43.
  21. Lee, A., Manns, I., Sakmann, B., & Brecht, M. (2006). Whole-cell recordings in freely moving rats. Neuron, 51, 399–407.
  22. Li, Z., & Dayan, P. (1999). Computational differences between asymmetrical and symmetrical networks. Network: Computation in Neural Systems, 10, 59–77.
  23. Lund, J. S., Angelucci, A., & Bressloff, P. C. (2003). Anatomical substrates for functional columns in macaque monkey primary visual cortex. Cerebral Cortex, 12, 15–24.
  24. Mattia, M., & Del Guidice, P. (2002). Population dynamics of interacting spiking neurons. Physical Review E, 66, 051917.
  25. Mattia, M., & Del Guidice, P. (2004). Finite-size dynamics of inhibitory and excitatory interacting spiking neurons. Physical Review E, 70, 052903.
  26. Morrison, A., Mehring, C., Geisel, T., Aertsen, A., & Diesmann, M. (2005). Advancing the boundaries of high connectivity network simulation with distributed computing. Neural Computation, 17(8), 1776–1801.
  27. Nawrot, M. P., Boucsein, C., Rodriguez Molina, V., Riehle, A., Aertsen, A., & Rotter, S. (2008). Measurement of variability dynamics in cortical spike trains. Journal of Neuroscience Methods, 169, 374–390.
  28. Papoulis, A. (1991). Probability, random variables, and stochastic processes (3rd ed.). Boston: McGraw-Hill.
  29. Ren, M., Yoshimura, Y., Takada, N., Horibe, S., & Komatsu, Y. (2007). Specialized inhibitory synaptic actions between nearby neocortical pyramidal neurons. Science, 316, 758–761.
  30. Shadlen, M. N., & Newsome, W. T. (1998). The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. Journal of Neuroscience, 18(10), 3870–3896.
  31. Shea-Brown, E., Josic, K., de la Rocha, J., & Doiron, B. (2008). Correlation and synchrony transfer in integrate-and-fire neurons: Basic properties and consequences for coding. Physical Review Letters, 100, 108102.
  32. Song, S., Sjöström, P. J., Reigl, M., Nelson, S., & Chklovskii, D. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology, 3(3), 0507–0519.
  33. Sporns, O. (2003). Network analysis, complexity and brain function. Complexity, 8(1), 56–60.
  34. Sporns, O., & Zwi, D. Z. (2004). The small world of the cerebral cortex. Neuroinformatics, 2, 145–162.
  35. Stepanyants, A., Hirsch, J., Martinez, L. M., Kisvarday, Z. F., Ferecsko, A. S., & Chklovskii, D. B. (2007). Local potential connectivity in cat primary visual cortex. Cerebral Cortex, 18(1), 13–28.
  36. Strogatz, S. H. (2001). Exploring complex networks. Nature, 410, 268–276.
  37. Tchumatchenko, T., Malyshev, A., Geisel, T., Volgushev, M., & Wolf, F. (2008). Correlations and synchrony in threshold neuron models. http://arxiv.org/pdf/0810.2901.
  38. Tetzlaff, T., Rotter, S., Stark, E., Abeles, M., Aertsen, A., & Diesmann, M. (2007). Dependence of neuronal correlations on filter characteristics and marginal spike-train statistics. Neural Computation, 20, 2133–2184.
  39. Timme, M. (2007). Revealing network connectivity from response dynamics. Physical Review Letters, 98, 224101.
  40. Timme, M., Wolf, F., & Geisel, T. (2002). Coexistence of regular and irregular dynamics in complex networks of pulse-coupled oscillators. Physical Review Letters, 89(25), 258701.
  41. Vaadia, E., Haalman, I., Abeles, M., Bergman, H., Prut, Y., Slovin, H., & Aertsen, A. (1995). Dynamics of neuronal interactions in monkey cortex in relation to behavioural events. Nature, 373(6514), 515–518.
  42. van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274, 1724–1726.
  43. van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10, 1321–1371.
  44. Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of small-world networks. Nature, 393, 440–442.
  45. Yoshimura, Y., & Callaway, E. (2005). Fine-scale specificity of cortical networks depends on inhibitory cell type and connectivity. Nature Neuroscience, 8(11), 1552–1559.
  46. Yoshimura, Y., Dantzker, J., & Callaway, E. (2005). Excitatory cortical neurons form fine-scale functional networks. Nature, 433(24), 868–873.
  47. Zohary, E., Shadlen, M. N., & Newsome, W. T. (1994). Correlated neuronal discharge rate and its implications for psychophysical performance. Nature, 370, 140–143.
