Author manuscript; available in PMC: 2020 May 29.
Published in final edited form as: Phys Rev E. 2018 Dec 27;98(6):062414. doi: 10.1103/physreve.98.062414

Finite-size effects for spiking neural networks with spatially dependent coupling

Si-Wei Qiu 1, Carson C Chow 1
PMCID: PMC7258138  NIHMSID: NIHMS1560330  PMID: 32478211

Abstract

We study finite-size fluctuations in a network of spiking deterministic neurons coupled with nonuniform synaptic coupling. We generalize a previously developed theory of finite-size effects for globally coupled neurons with a uniform coupling function. In the uniform coupling case, mean-field theory is well defined by averaging over the network as the number of neurons in the network goes to infinity. However, for nonuniform coupling it is no longer possible to average over the entire network if we are interested in fluctuations at a particular location within the network. We show that if the coupling function approaches a continuous function in the infinite system size limit, then an average over a local neighborhood can be defined such that mean-field theory is well defined for a spatially dependent field. We then use a path-integral formalism to derive a perturbation expansion in the inverse system size around the mean-field limit for the covariance of the input to a neuron (synaptic drive) and firing rate fluctuations due to dynamical deterministic finite-size effects.

I. INTRODUCTION

The dynamics of neural networks have traditionally been studied in the limit of very large numbers of neurons, where mean-field theory can be applied, e.g., Refs. [1–10], or for a small number of neurons, where traditional dynamical systems approaches can be used, e.g., Refs. [11–13]. The intermediate regime of large but finite numbers of neurons can have interesting properties that are absent in both the small and infinite system limits [14–21]. However, these previous works have not fully explored fluctuations due to finite-size effects at specific locations within the network when the neurons receive nonhomogeneous input from other neurons because of nonuniform coupling. Here we consider finite-size effects in a network of spiking neurons with nonuniform synaptic coupling. Previously [14–16], a perturbation expansion in the inverse network neuron number was developed for networks with global spatially uniform coupling; we generalize that theory to include nonuniform coupling. We first show that mean-field theory in the infinite nonuniform system limit can be realized in a single network if a spatial metric can be imposed on the network and the coupling function is a continuous function of this distance measure. We then analyze finite-size fluctuations around such mean-field solutions using a path-integral formalism to derive a perturbation expansion in the inverse network neuron number for the spatially dependent covariance functions of the synaptic drive and the neuron firing rate.

II. COUPLED NEURON MODEL

Consider a network of N theta neurons (the phase reduction of quadratic integrate-and-fire neurons [11]) on a one-dimensional periodic domain of size L, although the theory can be applied to any domain. The network obeys the following deterministic microscopic equations:

\[ \dot\theta_i = 1 - \cos\theta_i + [I_i + u_i(t)]\,(1 + \cos\theta_i), \quad (2.1) \]
\[ u_i = \frac{L}{N}\sum_{j=1}^N w_{ij}\, s_j, \quad (2.2) \]
\[ \dot s_j = -\beta s_j + \beta \sum_l \delta(t - t_j^l), \quad (2.3) \]

where θi is the phase of neuron i, ui is the synaptic drive to neuron i, Ii is the external input to neuron i, β is the decay constant of the synaptic drive, sj is the time-dependent synaptic input from neuron j, and tjl represents the spike times when the phase of neuron j crosses π. sj rises instantaneously when neuron j spikes and relaxes to zero with a time constant of 1/β. The synaptic drive represents the total time-dependent synaptic input where the contribution from each neuron is weighted by the synaptic coupling function wij (a real N × N matrix). When Ii + ui > 0, the neuron receives suprathreshold input and θi will progress in time. When it passes π, the neuron is said to spike. When Ii + ui < 0 the neuron receives subthreshold input and the phase will approach a fixed point. The theta neuron is the normal form of a Type I spiking neuron near the bifurcation point to firing [11]. By linearity, the synaptic drive obeys the more convenient form of

\[ \dot u_i = -\beta u_i + \beta\,\frac{L}{N}\sum_{j=1}^N w_{ij} \sum_l \delta(t - t_j^l). \quad (2.4) \]
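For concreteness, the microscopic system (2.1)-(2.4) can be integrated directly. The following is a minimal Euler-integration sketch, not the authors' code; the coupling and input functions are the illustrative ones used later in Fig. 1, and the values of N, T, and dt are arbitrary choices:

```python
import numpy as np

def simulate_theta_network(N=200, L=1.0, beta=1.0, T=10.0, dt=1e-3, seed=0):
    """Euler integration of the theta-neuron network using the synaptic-drive
    form (2.4): each spike of neuron j increments u_i by beta * (L/N) * w_ij."""
    rng = np.random.default_rng(seed)
    z = np.arange(N) * L / N
    # coupling w(z - z') = -J0 + J2 cos(2 pi (z - z')/L), as in Fig. 1
    W = -0.2 + 0.8 * np.cos(2 * np.pi * (z[:, None] - z[None, :]) / L)
    I = 1.0 + np.sin(2 * np.pi * (z - 0.25) / L)   # external input I(z)
    theta = rng.uniform(-np.pi, np.pi, N)          # random initial phases
    u = np.zeros(N)                                # synaptic drive
    total_spikes = 0
    for _ in range(int(T / dt)):
        # theta-neuron dynamics (2.1)
        theta = theta + dt * ((1 - np.cos(theta)) + (I + u) * (1 + np.cos(theta)))
        fired = theta >= np.pi                     # phase crossed pi: a spike
        total_spikes += int(fired.sum())
        # drive dynamics (2.4): exponential decay plus spike input
        u += -dt * beta * u + beta * (L / N) * W[:, fired].sum(axis=1)
        theta = np.where(fired, theta - 2 * np.pi, theta)
    return u, total_spikes

u, total_spikes = simulate_theta_network()
```

With suprathreshold input over most of the domain, the network fires persistently and the drive remains bounded.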

We define an empirical density

\[ \eta_j(\theta,t) = \delta(\theta - \theta_j(t)) \quad (2.5) \]

that assigns a point mass to the phase of each neuron in the network. Hence, we can write the spike train of neuron j as $\sum_l \delta(t - t_j^l) = \eta_j(\pi,t)\,\dot\theta_j|_{\theta_j=\pi}$. For the theta model, $\dot\theta_j|_{\theta_j=\pi} = 2$, and thus we can rewrite (2.4) as

\[ \dot u_i = -\beta u_i + 2\beta\,\frac{L}{N}\sum_{j=1}^N w_{ij}\,\eta_j(\pi,t). \quad (2.6) \]

Neuron number is conserved so the neuron density formally obeys a conservation (Klimontovich) equation [16]:

\[ \partial_t \eta_i(\theta,t) + \partial_\theta [F_i(\theta,u_i)\,\eta_i(\theta,t)] = 0, \quad (2.7) \]

where $F_i(\theta, u_i) = 1 - \cos\theta + (1 + \cos\theta)(I_i + u_i)$. The Klimontovich equation together with (2.6) fully describes the system. However, it is only a formal description since η is not in general differentiable. In the following, we develop a method to regularize the Klimontovich equation so that desired quantities can be calculated.

III. MEAN-FIELD THEORY

The Klimontovich equation (2.7) only exists in a weak sense. We can regularize it by taking a suitable average over an ensemble of initial conditions:

\[ \partial_t \langle \eta_i(\theta,t)\rangle + \partial_\theta \langle F_i(\theta,u_i)\,\eta_i(\theta,t)\rangle = 0. \quad (3.1) \]

This equation is not closed because it involves covariances such as 〈ηη〉, which in turn depend on higher-order cumulants in a Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy [1416]. This hierarchy can be rendered tractable if we can truncate it. Mean-field theory truncates the hierarchy at first order by assuming that all cumulants beyond the first are zero so we can write

\[ \partial_t \rho_i(\theta,t) + \partial_\theta [F_i(\theta,a_i)\,\rho_i(\theta,t)] = 0, \quad (3.2) \]

where a_i = 〈u_i〉 and ρ_i = 〈η_i〉. The full set of closed mean-field equations is given by

\[ \partial_t \rho_i(\theta,t) + \partial_\theta \{[1 - \cos\theta + (I_i + a_i)(1+\cos\theta)]\,\rho_i(\theta,t)\} = 0, \]
\[ \dot a_i = -\beta a_i + 2\beta\,\frac{L}{N}\sum_j w_{ij}\,\rho_j(\pi,t). \quad (3.3) \]

Although we can always write the mean-field equations (3.3), it is not clear that a given network would obey it in the infinite-N limit. In previous work [8,16,22], it was shown that mean-field theory applies to a network of coupled oscillators with uniform coupling in the infinite-N limit. However, it is not known when or if mean-field theory applies for nonuniform coupling.

To see this, consider first the stationary system

\[ \partial_\theta \{[1 - \cos\theta + (I_i + u_i)(1+\cos\theta)]\,\eta_i(\theta)\} = 0, \quad (3.4) \]
\[ u_i = 2\,\frac{L}{N}\sum_j w_{ij}\,\eta_j(\pi), \quad (3.5) \]

with uniform coupling, wij = w, and uniform external input, Ii = I. If the neurons are initialized with random phases and remain asynchronous, then we can suppose that in the limit of N → ∞ the quantity

\[ \rho(\pi) = \frac{L}{N}\sum_j \eta_j(\pi) \quad (3.6) \]

converges to an invariant quantity [8,16,22]. This then implies that $u_i = 2w\rho(\pi) \equiv a$ is also a constant. Thus each neuron has identical input, so if we apply the network averaging operator $\frac{L}{N}\sum_{i=1}^N$ to (3.4) we obtain

\[ \partial_\theta \{[1 - \cos\theta + (I + a)(1+\cos\theta)]\,\rho(\theta)\} = 0, \quad (3.7) \]
\[ a = 2 w \rho(\pi). \quad (3.8) \]

Covariances vanish and mean-field theory is realized in the infinite network limit. Given that the drive equation (2.6) is linear, the time-dependent mean-field theory will similarly hold in the large-N limit.

In the case where w_ij is not uniform, covariances are not guaranteed to vanish and an infinite network need not obey mean-field theory. Our goal is to find conditions such that mean-field theory applies. Again, consider the stationary equations (3.4) and (3.5). Now, instead of averaging over the entire domain, take a local interval around $j$, $[j - cN/2,\, j + cN/2]$, where $c < 1$ is a constant that can depend on $N$, and we map $j - cN/2 < 1$ to $N + j - cN/2$ and $j + cN/2 > N$ to $j + cN/2 - N$. We want to express our mean-field equation in terms of the locally averaged empirical density

\[ \rho_j = \frac{1}{cN}\sum_{k=j-cN/2}^{j+cN/2} \eta_k. \quad (3.9) \]

If cN → ∞ for N → ∞, then it is feasible that the local empirical density can be invariant (to random initial conditions) and correlations can vanish; we seek conditions on the coupling for which this is true.

Inserting (3.5) into (3.4) and taking the local average yields

\[ \frac{1}{cN}\sum_{i=k-cN/2}^{k+cN/2} \partial_\theta \Bigg(\bigg\{ 1 - \cos\theta + \bigg[I_i + 2\,\frac{L}{N}\sum_{j=1}^N w_{ij}\,\eta_j(\pi)\bigg](1+\cos\theta) \bigg\}\,\eta_i(\theta)\Bigg) = 0. \quad (3.10) \]

We immediately see that correlations can arise from the sums over the products $\eta_j(\pi)\,\eta_i(\theta)$. Consider the identity $\sum_{j=1}^N w_{ij}\,\eta_j(\pi) = \sum_{j=1}^N (cN)^{-1}\sum_{l=j-cN/2}^{j+cN/2} w_{il}\,\eta_l(\pi)$, which is exact for periodic boundary conditions. For nonperiodic boundary conditions there will be an edge contribution, but this should be negligible in the large network limit. Using this summation identity, we can rewrite the sum as

\[ \sum_{j=1}^N \frac{1}{cN}\sum_{i=k-cN/2}^{k+cN/2} w_{ij}\,\eta_j(\pi)\,\eta_i(\theta) = \sum_{j=1}^N \big[w_{kj}\,\rho_j(\pi)\,\rho_k(\theta) + R_{jk}\big], \quad (3.11) \]

where the remainder

\[ R_{jk} = \frac{1}{(cN)^2}\sum_{l=j-cN/2}^{j+cN/2}\;\sum_{i=k-cN/2}^{k+cN/2} (w_{ij} - w_{kj})\,\rho_j(\pi)\,\eta_i(\theta) + \frac{1}{(cN)^2}\sum_{l=j-cN/2}^{j+cN/2}\;\sum_{i=k-cN/2}^{k+cN/2} (w_{il} - w_{ij})\,\eta_l(\pi)\,\eta_i(\theta) \quad (3.12) \]

carries the correlations. Mean-field theory is valid in the N → ∞ limit if Rjk vanishes. Its magnitude obeys

\[ |R_{jk}| \le \Bigg|\frac{1}{(cN)^2}\sum_{l=j-cN/2}^{j+cN/2}\;\sum_{i=k-cN/2}^{k+cN/2} (w_{ij}-w_{kj})\,\rho_j(\pi)\,\eta_i(\theta)\Bigg| + \Bigg|\frac{1}{(cN)^2}\sum_{l=j-cN/2}^{j+cN/2}\;\sum_{i=k-cN/2}^{k+cN/2} (w_{il}-w_{ij})\,\eta_l(\pi)\,\eta_i(\theta)\Bigg| \]
\[ \le \rho_j(\pi)\Bigg[\frac{1}{cN}\sum_{i=k-cN/2}^{k+cN/2}\eta_i(\theta)\Bigg] \sup_{i\in(k-cN/2,\,k+cN/2)} |w_{ij}-w_{kj}| + \Bigg[\frac{1}{(cN)^2}\sum_{l=j-cN/2}^{j+cN/2}\;\sum_{i=k-cN/2}^{k+cN/2}\eta_l(\pi)\,\eta_i(\theta)\Bigg] \sup_{i\in(k-cN/2,\,k+cN/2),\; l\in(j-cN/2,\,j+cN/2)} |w_{il}-w_{ij}| \quad (3.13) \]

since the density is non-negative.

Applying (3.9) then leads to

\[ |R| \le \rho_j(\pi)\,\rho_k(\theta)\Big[ \sup_{i\in(k-cN/2,\,k+cN/2)} |w_{ij}-w_{kj}| + \sup_{i\in(k-cN/2,\,k+cN/2),\; l\in(j-cN/2,\,j+cN/2)} |w_{il}-w_{ij}| \Big]. \quad (3.14) \]

We introduce a distance measure $z = iL/N$, $z' = jL/N$, $z'' = kL/N$, $z''' = lL/N$ and write $\rho_i(\theta)|_{i=zN/L} = \rho(z,\theta)$ and $w_{ij}|_{i=zN/L,\,j=z'N/L} = w(z,z')$. Then

\[ |R| \le \rho(z',\pi)\,\rho(z'',\theta)\Big[ \sup_{z\in[z''-cL/2,\,z''+cL/2]} |w(z,z') - w(z'',z')| \quad (3.15) \]
\[ + \sup_{z\in[z''-cL/2,\,z''+cL/2],\; z'''\in[z'-cL/2,\,z'+cL/2]} |w(z,z''') - w(z,z')| \Big]. \quad (3.16) \]

Hence, if we set $c = N^{-\alpha}$, $0 < \alpha < 1$, then as $N \to \infty$ the number of neurons in the local neighborhood, $cN$, approaches infinity as $N^{1-\alpha}$ while $c \to 0$. Then $|R| \to 0$ as $N \to \infty$ if $\lim_{z\to z''} [w(z,z') - w(z'',z')] = 0$ and $\lim_{z'''\to z'} [w(z,z''') - w(z,z')] = 0$, i.e., if $w_{ij}$ approaches a continuous function in both indices. A similar argument shows that

\[ \frac{1}{cN}\sum_{i=k-cN/2}^{k+cN/2} (I_i - I_k)\,\eta_i(\theta) \to 0 \quad (3.17) \]

if Ii approaches a continuous function in index i in the infinite-N limit. Then (3.4) and (3.5) can be written as

\[ \partial_\theta \{[1-\cos\theta + (I_k + a_k)(1+\cos\theta)]\,\rho_k(\theta)\} = 0, \quad (3.18) \]
\[ a_k = 2\,\frac{L}{N}\sum_j w_{kj}\,\rho_j(\pi). \quad (3.19) \]

Equations (3.18) and (3.19) form a mean-field theory that is realized in a nonuniform coupled network in the infinite size limit as long as the input and coupling function are continuous functions. By linearity, the time-dependent mean-field theory should equally apply if the external input and the coupling are continuous functions of the indices.
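The scaling argument can be checked numerically: with $c = N^{-\alpha}$ the local neighborhood contains $cN = N^{1-\alpha}$ neurons, which grows, while its spatial width shrinks, so for a continuous coupling the variation of $w$ across the window, the factor bounding $|R|$ in (3.14), vanishes. A small sketch with an assumed smooth coupling $w(z,z') = \cos[2\pi(z-z')/L]$ (the specific coupling and $\alpha$ are illustrative choices, not from the paper):

```python
import numpy as np

def coupling_variation(N, alpha=0.5, L=1.0):
    """sup_i |w_ij - w_kj| over a local window of ~cN neurons around k,
    with c = N**(-alpha) and continuous coupling w(z, z') = cos(2 pi (z-z')/L).
    Returns (variation, window size in neurons)."""
    c = N ** (-alpha)
    half = int(c * N / 2)
    z = np.arange(N) * L / N
    k, j = N // 2, 0                     # window center and an arbitrary column
    i = np.arange(k - half, k + half + 1)
    w = np.cos(2 * np.pi * (z[i] - z[j]) / L)
    w_center = np.cos(2 * np.pi * (z[k] - z[j]) / L)
    return np.max(np.abs(w - w_center)), 2 * half + 1

var_small, win_small = coupling_variation(10**3)
var_large, win_large = coupling_variation(10**5)
# the window grows with N while the coupling variation across it shrinks
```

This is exactly the mechanism that lets the local average behave like a mean field: ever more neurons share essentially the same coupling.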

In the $N \to \infty$ limit, setting $i \to zN/L$, $a_i(t) \to a(z,t)$, $\rho_i(\theta,t) \to \rho(z,\theta,t)$, $I_i \to I(z)$ continuous, $\sum_i \to (N/L)\int_\Omega dz$, and $w_{ij} \to w(z,z')$ continuous, we can write mean-field theory in continuum form as

\[ \partial_t \rho(z,\theta,t) + \partial_\theta \big\{\big[1-\cos\theta + [I(z)+a(z,t)](1+\cos\theta)\big]\,\rho(z,\theta,t)\big\} = 0, \]
\[ \partial_t a(z,t) = -\beta a(z,t) + 2\beta \int w(z,z')\,\rho(z',\pi,t)\,dz'. \quad (3.20) \]

The stationary solutions obey

\[ \partial_\theta \big\{\big[1-\cos\theta + [I(z)+a(z)](1+\cos\theta)\big]\,\rho(z,\theta)\big\} = 0, \quad (3.21) \]
\[ a(z) = 2\int w(z,z')\,\rho(z',\pi)\,dz'. \quad (3.22) \]

The stationary solutions will be qualitatively different depending on the sign of I + a. Consider first the suprathreshold regime where I + a > 0. We can then solve (3.21) to obtain

\[ \rho(z,\theta) = \frac{\sqrt{I(z)+a(z)}}{\pi\big\{1-\cos\theta + [I(z)+a(z)](1+\cos\theta)\big\}}, \quad (3.23) \]

which has been normalized such that $\int_{-\pi}^{\pi}\rho(z,\theta)\,d\theta = 1$. Inserting this back into (3.22) gives

\[ a(z) = \frac{1}{\pi}\int w(z,z')\,\sqrt{I(z')+a(z')}\,dz'. \quad (3.24) \]

In the subthreshold regime, $I + a < 0$, (3.23) has a singularity where $1-\cos\theta + [I(z)+a(z)](1+\cos\theta) = 0$, for which there are two solutions $\theta_\pm$ that coalesce in a saddle-node bifurcation at $I + a = 0$. Although ρ is no longer differentiable at equilibrium in the subthreshold regime, there is still a weak solution. It has been shown previously [11] that $\theta_-$ is stable and $\theta_+$ is unstable for a single theta neuron. This implies that the density is given by $\rho(z,\theta) = \delta(\theta - \theta_-)$ and that $\rho(z,\pi) = 0$ (i.e., no firing), as expected in the subthreshold regime. Figure 1 shows an example of a stationary “bump” solution for the periodic coupling function $w(z) = -J_0 + J_2\cos(2\pi z/L)$, which has been used in models of orientation tuning in visual cortex [23] and the rodent head direction system [24].
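The self-consistency condition (3.24) can be solved numerically by damped fixed-point iteration on a spatial grid; where $I + a \le 0$, the firing-rate integrand is set to zero, consistent with $\rho(z,\pi) = 0$ in the subthreshold regime. A sketch using the Fig. 1 parameters (the grid size, damping factor, and tolerance are arbitrary choices, not the authors' scheme):

```python
import numpy as np

def stationary_drive(M=256, L=1.0, J0=0.2, J2=0.8, I0=1.0, z0=0.25,
                     max_iter=20_000, tol=1e-10):
    """Damped fixed-point iteration for (3.24):
    a(z) = (1/pi) * int w(z - z') sqrt(I(z') + a(z')) dz',
    with subthreshold points (I + a <= 0) contributing zero firing."""
    z = np.arange(M) * L / M
    I = I0 + np.sin(2 * np.pi * (z - z0) / L)
    # kernel matrix including the quadrature weight dz' = L/M
    W = (-J0 + J2 * np.cos(2 * np.pi * (z[:, None] - z[None, :]) / L)) * (L / M)
    a = np.zeros(M)
    for _ in range(max_iter):
        a_new = (W @ np.sqrt(np.clip(I + a, 0.0, None))) / np.pi
        if np.max(np.abs(a_new - a)) < tol:
            break
        a = 0.5 * a + 0.5 * a_new          # damping for stability
    residual = np.max(np.abs(a - (W @ np.sqrt(np.clip(I + a, 0.0, None))) / np.pi))
    return z, I, a, residual

z, I, a, residual = stationary_drive()
```

The damping guards against the non-Lipschitz square root near threshold crossings; the final residual directly measures how well (3.24) is satisfied.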

FIG. 1. (a) Mean-field synaptic drive for (b) connectivity weight $w(z) = -J_0 + J_2\cos(2\pi z/L)$, $J_0 = 0.2$, $J_2 = 0.8$, and (c) external input $I(z) = I_0 + \sin[2\pi(z-z_0)/L]$, $I_0 = 1$, $z_0 = 0.25$.

IV. BEYOND MEAN-FIELD THEORY

In the infinite-N limit when mean-field theory applies, the fields η and u are completely described by their means. The time trajectories of these fields are independent of the initial conditions of the individual neurons. For finite N, the trajectories can differ for different initial conditions and going beyond mean-field theory involves understanding these fluctuations. Implicit in going beyond mean-field theory is that these fields are themselves random variables that are drawn from a distribution functional. In this section, we will derive this distribution functional formally and then use it to compute perturbative expressions for the covariances of η and u.

Recall that the microscopic system is fully described by

\[ \partial_t \eta_i(\theta,t) + \partial_\theta [F_i(\theta,u_i)\,\eta_i(\theta,t)] - \delta(t-t_0)\,\eta_i^0(\theta) = 0, \quad (4.1) \]
\[ \dot u_i(t) + \beta u_i(t) - 2\beta\,\frac{L}{N}\sum_{j=1}^N w_{ij}\,\eta_j(\pi,t) - \delta(t-t_0)\,u_i^0 = 0, \quad (4.2) \]

where we have expressed the initial conditions as forcing terms. The probability density functional for the fields is then composed of point masses constrained to the dynamical system marginalized over the distribution of the initial data densities:

\[ P[\eta,u] = \int \mathcal{D}\eta^0 \prod_i \delta\Big[\partial_t\eta_i + \partial_\theta F_i(\theta,u_i)\,\eta_i - \delta(t-t_0)\,\eta_i^0(\theta)\Big]\; \delta\Big[\dot u_i + \beta u_i - 2\beta\,\frac{L}{N}\sum_j w_{ij}\,\eta_j(\pi,t) - \delta(t-t_0)\,u_i^0\Big]\, P[\eta^0], \quad (4.3) \]

where P[η0] is the probability density functional of the initial neuron densities for all neurons and Dη0 is the functional integration measure. We consider the initial condition of u to be fixed to u0. Using the functional Fourier transform for the Dirac delta functionals, we then obtain

\[ P[\eta,u] = \int \mathcal{D}\tilde\eta\,\mathcal{D}\eta^0\,\mathcal{D}\tilde u\; e^{-\sum_i \int dt\,d\theta\; \tilde\eta_i \left[\partial_t\eta_i + \partial_\theta F_i(\theta,u_i)\,\eta_i - \delta(t-t_0)\,\eta_i^0(\theta)\right]}\; e^{-\sum_i \int dt\; \tilde u_i \left[\dot u_i + \beta u_i - 2\beta\frac{L}{N}\sum_j w_{ij}\eta_j(\pi,t) - \delta(t-t_0)\,u_i^0\right]}\, P[\eta^0], \quad (4.4) \]

where $\tilde\eta_i$ and $\tilde u_i$ are response fields for neuron $i$ with functional integration measures $\mathcal{D}\tilde\eta$ and $\mathcal{D}\tilde u$ over all neurons. If we set $\eta_i^0(\theta) = \delta[\theta - \theta_i(t=0)]$, then the distribution over initial densities is given by the distribution over the initial phases, $\rho_i^0(\theta)$. Thus we can write $\int \mathcal{D}\eta^0\, P[\eta^0] = \prod_i \int d\theta\, \rho_i^0(\theta)$. The initial condition contribution is given by the integral

\[ e^{W_0[\tilde\eta]} = \prod_i \int d\theta_i\, \rho_i^0(\theta_i)\, e^{\tilde\eta_i(\theta_i,t_0)} \quad (4.5) \]
\[ = \prod_i \int d\theta\, \rho_i^0(\theta)\, e^{\tilde\eta_i(\theta,t_0)} \quad (4.6) \]
\[ = e^{\sum_i \ln\left\{1 + \int d\theta\, \rho_i^0(\theta)\,\left[e^{\tilde\eta_i(\theta,t_0)} - 1\right]\right\}}. \quad (4.7) \]

Hence, the system given by (2.6) and (2.7) can be mapped to the distribution functional $P[\eta,u] = \int \mathcal{D}\tilde\eta\,\mathcal{D}\tilde u\; e^{-S}$ with action $S = S_\eta + S_u$ given by

\[ S_\eta = \sum_i \int_{t_0}^{t_1} dt \int_{-\pi}^{\pi} d\theta\; \tilde\eta_i(\theta,t)\,\big[\partial_t\eta_i(\theta,t) + \partial_\theta F(\theta,u_i)\,\eta_i(\theta,t)\big] - \sum_i \ln\left\{1 + \int d\theta\, \rho_i^0(\theta)\,\left[e^{\tilde\eta_i(\theta,t_0)} - 1\right]\right\}, \quad (4.8) \]
\[ S_u = \sum_i \int_{t_0}^{t_1} dt\; \tilde u_i(t)\Big[\dot u_i + \beta u_i - 2\beta\,\frac{L}{N}\sum_j w_{ij}\,\eta_j(\pi,t) - \delta(t-t_0)\,u_i^0\Big]. \quad (4.9) \]

The exponential in the initial data contribution to the action (which corresponds to a generating function for a Poisson distribution) can be bilinearized via the Doi-Peliti-Janssen transformation [14–16,27–29]: $\psi_i = \eta_i e^{-\tilde\eta_i}$, $\tilde\psi_i = e^{\tilde\eta_i} - 1$, resulting in

\[ S_\psi = \sum_i \int d\theta\,dt\; \tilde\psi_i(\theta,t)\,\big[\partial_t\psi_i(\theta,t) + \partial_\theta F(\theta,u_i)\,\psi_i(\theta,t)\big] - \sum_i \ln\left[1 + \int d\theta\,\rho_i^0(\theta)\,\tilde\psi_i(\theta,t_0)\right], \quad (4.10) \]
\[ S_u = \sum_i \int dt\; \tilde u_i(t)\Big\{\dot u_i(t) + \beta u_i - \delta(t-t_0)\,u_i^0 - 2\beta\,\frac{L}{N}\sum_j w_{ij}\,[\tilde\psi_j(\pi,t)+1]\,\psi_j(\pi,t)\Big\}, \quad (4.11) \]

where we have not included the noncontributing terms that arise after integration by parts.

We now make the coarse-graining transformation $i \to zN/L$, $u_i(t) \to u(z,t)$, $\psi_i(\theta,t) \to \psi(z,\theta,t)$, $\rho_i(\theta,t) \to \rho(z,\theta,t)$, $I_i \to I(z)$, $\sum_i \to (N/L)\int_\Omega dz$, and $w_{ij} \to w(z-z')$, which yields

\[ S_\psi = \frac{N}{L}\int dz\,d\theta\,dt\; \tilde\psi(z,\theta,t)\,\big[\partial_t\psi(z,\theta,t) + \partial_\theta F(\theta,u)\,\psi(z,\theta,t)\big] - \frac{N}{L}\int dz\, \ln\left[1 + \int d\theta\,\rho^0(z,\theta)\,\tilde\psi(z,\theta,t_0)\right], \quad (4.12) \]
\[ S_u = \frac{N}{L}\int dz\,dt\; \tilde u(z,t)\Big\{\dot u + \beta u - \delta(t-t_0)\,u^0(z) - 2\beta\int dz'\,w(z-z')\,[\tilde\psi(z',\pi,t)+1]\,\psi(z',\pi,t)\Big\}. \quad (4.13) \]

We examine perturbations around the mean-field solutions $a(z,t)$ and $\rho(z,\theta,t)$ of (3.20) with $u \to a(z,t)H(t-t_0) + v(z,t)$, $\tilde u \to \tilde v$, $\psi \to \rho(z,\theta,t)H(t-t_0) + \varphi(z,\theta,t)$, and $\tilde\psi \to \tilde\varphi$, where $\rho(z,\theta,t=t_0) = \rho^0(z,\theta)$ and $H(t-t_0)$ is the Heaviside function. We then obtain

\[ S_\varphi = \frac{N}{L}\int d\theta\,dt\,dz\; \tilde\varphi\,\big\{\partial_t\varphi + \partial_\theta[1-\cos\theta + (I+a+v)(1+\cos\theta)]\,\varphi + \partial_\theta\, v\,(1+\cos\theta)\,\rho\big\} - \frac{N}{L}\int dz\,\ln\left[1 + \int d\theta\,\rho^0(z,\theta)\,\tilde\varphi(z,\theta,t_0)\right] + \frac{N}{L}\int dz\int d\theta\;\tilde\varphi(z,\theta,t_0)\,\rho^0(z,\theta) \]
\[ = \frac{N}{L}\int d\theta\,dt\,dz\; \tilde\varphi\,\big\{\partial_t\varphi + \partial_\theta[1-\cos\theta + (I+a+v)(1+\cos\theta)]\,\varphi + \partial_\theta\, v\,(1+\cos\theta)\,\rho\big\} + \frac{N}{2L}\int dz\int d\theta\;\tilde\varphi(z,\theta,t_0)\,\rho^0(z,\theta)\int d\theta'\;\tilde\varphi(z,\theta',t_0)\,\rho^0(z,\theta'), \quad (4.14) \]
\[ S_v = \frac{N}{L}\int dt\,dz\; \tilde v\,\Big[\Big(\frac{d}{dt}+\beta\Big)v - 2\beta\int_\Omega dz'\,w(z-z')\,\big(\tilde\varphi(z',\pi,t)+1\big)\,\varphi(z',\pi,t) - 2\beta\int_\Omega dz'\,w(z-z')\,\tilde\varphi(z',\pi,t)\,\rho(z',\pi,t) - \delta(t-t_0)\,\big(u^0(z) - a(z,t_0)\big)\Big]. \quad (4.15) \]

We have only included the quadratic term of the initial condition since it is the only one that plays a role at first-order perturbation theory (tree level). Finally, if we set the mean-field solutions to the stationary solutions ρ(z, θ) and a(z), then we obtain

\[ S_\varphi = \frac{N}{L}\int d\theta\,dt\,dz\; \tilde\varphi\,\big\{\partial_t\varphi + \partial_\theta[1-\cos\theta + (I+a+v)(1+\cos\theta)]\,\varphi + \partial_\theta\, v\,(1+\cos\theta)\,\rho\big\} + \frac{N}{2L}\int dz\int d\theta\;\tilde\varphi(z,\theta,t_0)\,\rho^0(z,\theta)\int d\theta'\;\tilde\varphi(z,\theta',t_0)\,\rho^0(z,\theta'), \quad (4.16) \]
\[ S_v = \frac{N}{L}\int dt\,dz\;\tilde v\,\Big\{\Big(\frac{d}{dt}+\beta\Big)v - 2\beta\int_\Omega dz'\,w(z-z')\,[\tilde\varphi(z',\pi,t)+1]\,\varphi(z',\pi,t) - 2\beta\int_\Omega dz'\,w(z-z')\,\tilde\varphi(z',\pi,t)\,\rho(z',\pi)\Big\}. \quad (4.17) \]

Without loss of generality, we set L = 1. In the limit of N → ∞, the dominant term in the probability density functional for the fields will be the extremum of the action, which defines mean-field theory. Moments of the fields can be computed perturbatively as an expansion in 1/N by using Laplace’s method around mean field (i.e., a loop expansion). The bilinear terms in the action (comprising a product of a field and a response field) are the linear response functions or propagators. All the other terms are vertices. Each vertex contributes a factor of N while each propagator contributes 1/N. To make the scaling more transparent, we make the rescaling transformation $\tilde v \to \tilde v/N$ and $\tilde\varphi \to \tilde\varphi/N$. This change rescales the propagators to order unity and the vertices to order one or higher powers of 1/N, depending on how many response fields they possess. The resulting action is

\[ S_\varphi = \int d\theta\,dt\,dz\; \tilde\varphi\,\big(\partial_t\varphi + \partial_\theta[1-\cos\theta + \{I(z)+[a(z)+v]\}(1+\cos\theta)]\,\varphi + \partial_\theta\, v\,(1+\cos\theta)\,\rho\big) + \frac{1}{2N}\int dz\int d\theta\;\tilde\varphi(z,\theta,t_0)\,\rho^0(z,\theta)\int d\theta'\;\tilde\varphi(z,\theta',t_0)\,\rho^0(z,\theta'), \]
\[ S_v = \int dt\,dz\;\tilde v\,\Big[\Big(\frac{d}{dt}+\beta\Big)v - 2\beta\int_\Omega dz'\,w(z-z')\,\Big(\frac{\tilde\varphi(z',\pi,t)}{N}+1\Big)\,\varphi(z',\pi,t) - \frac{2\beta}{N}\int_\Omega dz'\,w(z-z')\,\tilde\varphi(z',\pi,t)\,\rho(z',\pi)\Big]. \quad (4.18) \]

The propagators and vertices can be represented by Feynman graphs or diagrams (see Fig. 2). Each response field corresponds to an outgoing branch (branch on the left) and each field corresponds to an incoming branch (branch on the right). Time flows from right to left and causality is respected by the propagators. To each branch is attached a corresponding propagator.

FIG. 2. (a) Propagators $\Delta^{vv}(z,t;z',t')$ (upper left), $\Delta^{v\varphi}(z,t;z',\theta',t')$ (lower left), $\Delta^{\varphi v}(z,\theta,t;z',t')$ (upper right), and $\Delta^{\varphi\varphi}(z,\theta,t;z',\theta',t')$ (lower right). (b) Vertices for the action in (4.18). From left to right, they are $\partial_\theta(1+\cos\theta)$, $\rho^0(z,\theta)\,\rho^0(z,\theta')$, $\frac{2\beta}{N}w(z-z')\,\rho(z',\pi)$, and $\frac{2\beta}{N}w(z-z')$.

The propagators are defined by

\[ G^{-1} \equiv \begin{bmatrix} \Delta^{vv}(x;x') & \Delta^{v\varphi}(x;y') \\ \Delta^{\varphi v}(y;x') & \Delta^{\varphi\varphi}(y;y') \end{bmatrix}^{-1} = \begin{bmatrix} \dfrac{\delta^2 S}{\delta\tilde v(x)\,\delta v(x')} & \dfrac{\delta^2 S}{\delta\tilde v(x)\,\delta\varphi(y')} \\[1.5ex] \dfrac{\delta^2 S}{\delta\tilde\varphi(y)\,\delta v(x')} & \dfrac{\delta^2 S}{\delta\tilde\varphi(y)\,\delta\varphi(y')} \end{bmatrix}\Bigg|_{v,\varphi,\tilde v,\tilde\varphi = 0} \quad (4.19) \]
\[ = \begin{pmatrix} (d/dt+\beta)\,\delta(x-x') & -2\beta\,w(z-z')\,\delta(\pi-\theta')\,\delta(t-t') \\ \partial_\theta(1+\cos\theta)\,\rho(z,\theta)\,\delta(x-x') & \big\{\partial_t + \partial_\theta[1-\cos\theta + (I+a)(1+\cos\theta)]\big\}\,\delta(y-y') \end{pmatrix}, \quad (4.20) \]

where $x = (z,t)$ and $y = (z,\theta,t)$. The propagator $\Delta^{ab}(x;x')$ is the response of field $a$ at the unprimed location to field $b$ at the primed location. The propagator satisfies the condition

\[ \int dq''\; G^{-1}(q,q'')\,G(q'',q') = \begin{bmatrix} \delta(x-x') & 0 \\ 0 & \delta(y-y') \end{bmatrix}, \quad (4.21) \]

where q is x or y as appropriate. Inserting (4.20) into (4.21) yields

\[ (d/dt+\beta)\,\Delta^{vv}(x;x') - 2\beta\int dz''\,w(z-z'')\,\Delta^{\varphi v}(z'',\pi,t;x') = \delta(x-x'), \quad (4.22) \]
\[ (d/dt+\beta)\,\Delta^{v\varphi}(x;y') - 2\beta\int dz''\,w(z-z'')\,\Delta^{\varphi\varphi}(z'',\pi,t;y') = 0, \quad (4.23) \]
\[ \big(\partial_t + \partial_\theta\{1-\cos\theta + [I(z)+a(z)](1+\cos\theta)\}\big)\,\Delta^{\varphi v}(y;x') + \partial_\theta(1+\cos\theta)\,\rho(z,\theta)\,\Delta^{vv}(x;x') = 0, \quad (4.24) \]
\[ \big(\partial_t + \partial_\theta\{1-\cos\theta + [I(z)+a(z)](1+\cos\theta)\}\big)\,\Delta^{\varphi\varphi}(y;y') + \partial_\theta(1+\cos\theta)\,\rho(z,\theta)\,\Delta^{v\varphi}(x;y') = \delta(y-y'). \quad (4.25) \]

A. Computation of propagators

In order to perform perturbation theory we must compute the Green’s functions, or propagators. There are four types of propagators at each spatial location. The propagator equations comprise two sets of 2N coupled integro-partial-differential equations. They can be simplified to ordinary differential equations, which greatly reduces the computational complexity. The solutions of the equations change qualitatively depending on whether $I + a > 0$ (suprathreshold regime) or $I + a \le 0$ (subthreshold regime). Given that the propagators depend on two coordinates, there are four separate cases. However, the subthreshold neurons are by definition silent, so propagators with the second variable in the subthreshold regime are zero, which leaves two cases for the first variable being supra- or subthreshold.

1. Suprathreshold regime

In the suprathreshold regime, $z \in \{\zeta : I + a(\zeta) > 0\}$, we make the transformation $\vartheta_> : \theta \to \phi$, where

\[ \phi = \vartheta_>(\theta) = 2\tan^{-1}\frac{\tan(\theta/2)}{\sqrt{I(z)+a(z)}}, \quad (4.26) \]

which obeys

\[ \frac{d\phi}{d\theta} = \frac{d\vartheta_>(\theta)}{d\theta} = \frac{2\sqrt{I+a}}{(1-\cos\theta) + (I+a)(1+\cos\theta)} = 2\pi\,\rho(z,\theta), \quad (4.27) \]

where the last equality comes from (3.23). This transformation has the nice property that ϑ>(π) = π.
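The properties of the transformation (4.26) and (4.27) are easy to verify numerically for an assumed suprathreshold value of $I + a$ (1.5 here, an arbitrary illustrative choice):

```python
import numpy as np

def phase_map(theta, Ia):
    """phi = theta_>(theta) = 2 atan( tan(theta/2) / sqrt(I + a) ), Eq. (4.26)."""
    return 2 * np.arctan(np.tan(theta / 2) / np.sqrt(Ia))

Ia = 1.5                                      # illustrative suprathreshold I + a
theta = np.linspace(-3.0, 3.0, 2001)          # avoid the tan singularity at +/- pi
phi = phase_map(theta, Ia)
dphi = np.gradient(phi, theta, edge_order=2)  # numerical d(phi)/d(theta)
# stationary density (3.23)
rho = np.sqrt(Ia) / (np.pi * ((1 - np.cos(theta)) + Ia * (1 + np.cos(theta))))
deriv_err = np.max(np.abs(dphi - 2 * np.pi * rho))   # checks Eq. (4.27)
endpoint = phase_map(np.pi - 1e-9, Ia)               # checks theta_>(pi) -> pi
```

The numerical derivative matches $2\pi\rho(z,\theta)$ and the map sends $\theta = \pi$ to $\phi = \pi$, which is what makes the firing phase invariant under the transformation.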

Equations (4.22) and (4.23) transform to

\[ (d/dt+\beta)\,\Delta^{vv}(z,t;z',t') - 2\beta\int_> dz''\,w(z-z'')\,\sqrt{I(z'')+a(z'')}\;\hat\Delta^{\varphi v}(z'',\pi,t;z',t') = \delta(z-z')\,\delta(t-t'), \quad (4.28) \]
\[ (d/dt+\beta)\,\Delta^{v\varphi}(z,t;z',\theta',t') - 2\beta\int_> dz''\,w(z-z'')\,\sqrt{I(z'')+a(z'')}\;\hat\Delta^{\varphi\varphi}(z'',\pi,t;z',\theta',t') = 0, \quad (4.29) \]

where we set $\hat\Delta^{\varphi\cdot}(z,\phi,t;\cdot) = \Delta^{\varphi\cdot}(z,\vartheta_>^{-1}(\phi),t;\cdot)\,(d\theta/d\phi)$, where $d\theta/d\phi$ is the Jacobian of the transformation; the hat is dropped below.

Equation (4.25) transforms to

\[ \big\{\partial_t + \partial_\phi\,(d\phi/d\theta)\,[1-\cos\theta + (I+a)(1+\cos\theta)]\big\}\,\Delta^{\varphi\varphi}(z,\phi,t;z',\theta',t') + \partial_\phi\,(d\phi/d\theta)\,(1+\cos\theta)\,\rho(z,\theta)\,\Delta^{v\varphi}(z,t;z',\theta',t') = \delta(z-z')\,\delta(t-t')\,\delta(\phi-\vartheta_>(\theta')). \quad (4.30) \]

Now consider

\[ (1+\cos\theta)\,\rho(z,\theta) = \frac{1}{\pi}\,\frac{\sqrt{I+a(z)}\,(1+\cos\theta)}{(1-\cos\theta) + [I+a(z)](1+\cos\theta)} = \frac{1}{\pi}\,\frac{\sqrt{I+a}}{\tan^2(\theta/2) + (I+a)} = \frac{1}{\pi}\,\frac{\sqrt{I+a}}{(I+a)\tan^2(\phi/2) + (I+a)} = \frac{1}{2\pi}\,\frac{1+\cos\phi}{\sqrt{I+a(z)}}, \quad (4.31) \]

where we have used (4.26) and the tangent half-angle formula

\[ \tan^2\frac{\theta}{2} = \frac{1-\cos\theta}{1+\cos\theta}. \quad (4.32) \]

Inserting (4.31) back into (4.30) gives

\[ \big(\partial_t + 2\sqrt{I+a}\,\partial_\phi\big)\,\Delta^{\varphi\varphi}(z,\phi,t;z',\theta',t') - \frac{\sin\phi}{2\pi\sqrt{I+a}}\,\Delta^{v\varphi}(z,t;z',\theta',t') = \delta(z-z')\,\delta(\phi-\vartheta_>(\theta'))\,\delta(t-t'). \quad (4.33) \]

Similarly, we obtain

\[ \big(\partial_t + 2\sqrt{I+a}\,\partial_\phi\big)\,\Delta^{\varphi v}(z,\phi,t;z',t') - \frac{\sin\phi}{2\pi\sqrt{I+a}}\,\Delta^{vv}(z,t;z',t') = 0. \quad (4.34) \]

The transformed propagator equations are given by Eqs. (4.28), (4.29), (4.33), and (4.34). Equations (4.33) and (4.34) are advection equations in ϕ, which can be integrated to

\[ \Delta^{\varphi v}(z,\phi,t;z',t') = C(z)\int_{t'}^{t} d\tau\, \sin[\phi - v_>(z)(t-\tau)]\,\Delta^{vv}(z,\tau;z',t'), \]
\[ \Delta^{\varphi\varphi}(z,\phi,t;z',\theta',t') = C(z)\int_{t'}^{t} d\tau\, \sin[\phi - v_>(z)(t-\tau)]\,\Delta^{v\varphi}(z,\tau;z',\theta',t') + \delta\big(\phi - \vartheta_>(\theta') - v_>(z)(t-t')\big)\,\delta(z-z'), \quad (4.35) \]

where

\[ C(z) \equiv \frac{1}{2\pi\sqrt{I(z)+a(z)}}, \qquad v_>(z) \equiv 2\sqrt{I(z)+a(z)}. \quad (4.36) \]

We then define the following variables:

\[ r^v(z,t;z',t') = \Delta^{\varphi v}(z,\pi,t;z',t') = C(z)\int_{t'}^{t} d\tau\, \sin[v_>(z)(t-\tau)]\,\Delta^{vv}(z,\tau;z',t'), \quad (4.37) \]
\[ r^\varphi(z,t;z',\theta',t') = \Delta^{\varphi\varphi}(z,\pi,t;z',\theta',t') - \delta[\pi - \vartheta_>(\theta') - v_>(z)(t-t')]\,\delta(z-z') = C(z)\int_{t'}^{t} d\tau\, \sin[v_>(z)(t-\tau)]\,\Delta^{v\varphi}(z,\tau;z',\theta',t'). \quad (4.38) \]

Repeated differentiation, together with the propagator equations (4.28), (4.29), (4.33), and (4.34), then yields:

\[ \frac{d^2}{dt^2}\, r^v(z,t;z',t') = \frac{1}{\pi}\,\Delta^{vv}(z,t;z',t') - v_>^2(z)\,r^v(z,t;z',t'), \quad (4.39) \]
\[ \Big(\frac{d}{dt}+\beta\Big)\Delta^{vv}(z,t;z',t') - \beta\int dz''\,w(z-z'')\,v_>(z'')\,r^v(z'',t;z',t') = \delta(z-z')\,\delta(t-t'), \quad (4.40) \]
\[ \frac{d^2}{dt^2}\, r^\varphi(z,t;z',\theta',t') = \frac{1}{\pi}\,\Delta^{v\varphi}(z,t;z',\theta',t') - v_>^2(z)\,r^\varphi(z,t;z',\theta',t'), \quad (4.41) \]
\[ \Big(\frac{d}{dt}+\beta\Big)\Delta^{v\varphi}(z,t;z',\theta',t') - \beta\int dz''\,w(z-z'')\,v_>(z'')\,r^\varphi(z'',t;z',\theta',t') = \beta\,w(z-z')\,v_>(z')\,\delta[\pi - \vartheta_>(\theta') - v_>(z')(t-t')]. \quad (4.42) \]

The covariance function (4.72) involves the integral quantity

\[ U(z,t;z',t_0) = \int d\theta'\; \Delta^{v\varphi}(z,t;z',\theta',t_0)\,\rho^0(z',\theta') \quad (4.43) \]

by our choice of transformation convention. However, instead of computing the propagator at all values of θ′, we create another pair of ordinary differential equations (ODEs) for U. Applying the integral operator $\int d\theta'\,\rho^0(z',\theta')$ to (4.41) and (4.42) gives

\[ \frac{d^2}{dt^2}\, r(z,t;z',t') = \frac{1}{\pi}\,U(z,t;z',t') - v_>^2(z)\,r(z,t;z',t'), \quad (4.44) \]
\[ \Big(\frac{d}{dt}+\beta\Big)U(z,t;z',t') - \beta\int_> dz''\,w(z-z'')\,v_>(z'')\,r(z'',t;z',t') = \beta\,w(z-z')\,v_>(z')\int d\theta'\,\rho^0(z',\theta')\,\delta[\pi-\vartheta_>(\theta')-v_>(z')(t-t')] = \beta\,w(z-z')\,v_>(z')\,\rho^0(z',\theta_0)\,\frac{d\theta}{d\phi}\Big|_{\theta=\theta_0} = \frac{\beta}{2\pi}\,w(z-z')\,v_>(z')\,\frac{\rho^0(z',\theta_0)}{\rho(z',\theta_0)}, \quad (4.45) \]

where $r(z,t;z',t') = \int r^\varphi(z,t;z',\theta',t')\,\rho^0(z',\theta')\,d\theta'$ and $\theta_0 = \vartheta_>^{-1}[\pi - v_>(z')(t-t')]$ (mod $2\pi$). Hence, we need to numerically integrate the following equations:

\[ \frac{d^2}{dt^2}\, r(z,t;z',t') = \frac{1}{\pi}\,U(z,t;z',t') - v_>^2(z)\,r(z,t;z',t'), \quad (4.46) \]
\[ \Big(\frac{d}{dt}+\beta\Big)U(z,t;z',t') - \beta\int_> dz''\,w(z-z'')\,v_>(z'')\,r(z'',t;z',t') = \frac{\beta}{2\pi}\,w(z-z')\,v_>(z'), \quad (4.47) \]
\[ \frac{d^2}{dt^2}\, r^v(z,t;z',t') = \frac{1}{\pi}\,\Delta^{vv}(z,t;z',t') - v_>^2(z)\,r^v(z,t;z',t'), \quad (4.48) \]
\[ \Big(\frac{d}{dt}+\beta\Big)\Delta^{vv}(z,t;z',t') - \beta\int_> dz''\,w(z-z'')\,v_>(z'')\,r^v(z'',t;z',t') = \delta(z-z')\,\delta(t-t'), \quad (4.49) \]
\[ \frac{d^2}{dt^2}\, r^\varphi(z,t;z',\pi,t') = \frac{1}{\pi}\,\Delta^{v\varphi}(z,t;z',\pi,t') - v_>^2(z)\,r^\varphi(z,t;z',\pi,t'), \quad (4.50) \]
\[ \Big(\frac{d}{dt}+\beta\Big)\Delta^{v\varphi}(z,t;z',\pi,t') - \beta\int_> dz''\,w(z-z'')\,v_>(z'')\,r^\varphi(z'',t;z',\pi,t') = \frac{\beta}{2}\,w(z-z')\,\delta(t-t') + \beta\sum_{l=1}^{\infty} w(z-z')\,\delta[t-t'-T_l(z')], \quad (4.51) \]

where $T_l(z') = 2\pi l/v_>(z')$ marks the time intervals from $t'$ such that $2\pi l - v_>(z')\,T_l(z') = 0$. The source at $t = t'$ in (4.51) has a factor of one half because it comes from the θ delta function, which is symmetric about $\theta = \theta'$ (since the propagator is symmetric at $\theta = \theta'$), unlike the contribution from the time delta function, which is one sided due to causality.
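Equations (4.46)-(4.49) can be integrated with standard ODE methods once space is discretized. The following is a simplified illustration, not the authors' scheme from the Appendix: the mean-field drive is assumed spatially uniform ($I + a = 1.5$, so $v_>$ is constant), the source sits at one grid point, delta functions are approximated on the grid, and a semi-implicit Euler step keeps the oscillatory $r$ equations stable:

```python
import numpy as np

def integrate_propagators(M=64, L=1.0, beta=1.0, Ia=1.5, T=5.0, dt=1e-3,
                          J0=0.2, J2=0.8):
    """Integrate (4.46)-(4.49) forward in t for a source at grid point k0.
    delta(z - z') -> 1/dz at k0; delta(t - t') -> initial kick in Delta_vv."""
    dz = L / M
    z = np.arange(M) * dz
    W = -J0 + J2 * np.cos(2 * np.pi * (z[:, None] - z[None, :]) / L)
    v = 2.0 * np.sqrt(Ia)                  # v_>(z), uniform by assumption
    k0 = M // 2
    Dvv = np.zeros(M); Dvv[k0] = 1.0 / dz  # delta-function source in (4.49)
    rv = np.zeros(M); rv_dot = np.zeros(M)
    U = np.zeros(M); r = np.zeros(M); r_dot = np.zeros(M)
    src = (beta / (2 * np.pi)) * W[:, k0] * v   # constant source in (4.47)
    for _ in range(int(T / dt)):
        coup_rv = beta * dz * v * (W @ rv)      # beta int dz'' w v_> r^v
        coup_r = beta * dz * v * (W @ r)
        Dvv += dt * (-beta * Dvv + coup_rv)     # (4.49)
        U += dt * (-beta * U + coup_r + src)    # (4.47)
        rv_dot += dt * (Dvv / np.pi - v**2 * rv)   # (4.48), semi-implicit step
        rv += dt * rv_dot
        r_dot += dt * (U / np.pi - v**2 * r)       # (4.46)
        r += dt * r_dot
    return z, Dvv, rv, U, r

z, Dvv, rv, U, r = integrate_propagators()
```

The $r$ equations are harmonic oscillators at frequency $v_>$ driven by the drive propagators, so the symplectic (velocity-first) update avoids the energy growth of explicit Euler.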

2. Subthreshold regime

In the subthreshold regime, namely I + a ⩽ 0, the mean-field solution for the density ρ is a point mass, and this will change the form of the propagators. The propagator equations are

\[ (d/dt+\beta)\,\Delta^{vv}(x;x') - 2\beta\int dz''\,w(z-z'')\,\Delta^{\varphi v}(z'',\pi,t;x') = \delta(x-x'), \quad (4.52) \]
\[ (d/dt+\beta)\,\Delta^{v\varphi}(x;y') - 2\beta\int dz''\,w(z-z'')\,\Delta^{\varphi\varphi}(z'',\pi,t;y') = 0, \quad (4.53) \]
\[ \big(\partial_t + \partial_\theta\{1-\cos\theta+[I(z)+a(z)](1+\cos\theta)\}\big)\,\Delta^{\varphi v}(y;x') + \partial_\theta(1+\cos\theta)\,\delta[\theta-\theta_-(z)]\,\Delta^{vv}(x;x') = 0, \quad (4.54) \]
\[ \big(\partial_t + \partial_\theta\{1-\cos\theta+[I(z)+a(z)](1+\cos\theta)\}\big)\,\Delta^{\varphi\varphi}(y;y') + \partial_\theta(1+\cos\theta)\,\delta[\theta-\theta_-(z)]\,\Delta^{v\varphi}(x;y') = \delta(y-y'), \quad (4.55) \]

where the equations are defined on $I(z) + a(z) < 0$, and $\theta_\pm$ are the mean-field fixed points, where $\sin\theta_\pm = \pm 2\sqrt{|I+a|}/(1+|I+a|)$. However, note that the primed variables are defined over the entire $z$ domain, since subthreshold neurons can receive input from suprathreshold neurons.

We simplify these equations by breaking the domain of θ into two pieces: $D_1 = (\theta_+, \theta_-)$, the arc containing π, and $D_2 = (\theta_-, \theta_+)$. In the two advection equations, there is a clockwise advection of the propagators toward $\theta_-$ in $D_1$ and a counterclockwise advection toward $\theta_-$ in $D_2$. π is in $D_1$ but not $D_2$, so neurons starting in $D_2$ will never fire. In $D_1$, we make the transformation $\vartheta_< : \theta \to \chi$:

\[ \chi = \vartheta_<(\theta) = \ln\left[\frac{\sin\theta - \sqrt{|I+a|}\,(1+\cos\theta)}{\sin\theta + \sqrt{|I+a|}\,(1+\cos\theta)}\right] \quad (4.56) \]
\[ = -2\coth^{-1}\frac{\tan(\theta/2)}{\sqrt{|I(z)+a(z)|}}, \quad (4.57) \]
\[ \frac{d\chi}{d\theta} = \frac{2\sqrt{|I+a|}}{(1-\cos\theta) - |I+a|(1+\cos\theta)}, \quad (4.58) \]
\[ \frac{d\theta}{d\chi} = \frac{\sqrt{|I+a|}}{(|I+a|+1)\cosh^2(\chi/2) - 1} \quad (4.59) \]
\[ = \frac{2\sqrt{|I+a|}}{(|I+a|+1)[\cosh(\chi)+1] - 2}, \quad (4.60) \]
\[ 1+\cos\theta = \frac{2}{1+|I+a|\coth^2(\chi/2)}, \quad (4.61) \]

which maps $D_1$ to the real line, where $-\infty$ corresponds to $\theta_+$ and $+\infty$ corresponds to $\theta_-$.

We then have the following propagator equations in the χ representation:

\[ (d/dt+\beta)\,\Delta^{vv}(z,t;z',t') - \beta\int_> dz''\,w(z-z'')\,v_>(z'')\,\Delta^{\varphi v}(z'',\pi,t;z',t') = \delta(z-z')\,\delta(t-t'), \quad (4.62) \]
\[ (\partial_t + v_<\partial_\chi)\,\Delta^{\varphi v}(z,\chi,t;z',t') = Q(z,\chi)\,\Delta^{vv}(z,t;z',t'), \quad (4.63) \]
\[ (d/dt+\beta)\,\Delta^{v\varphi}(z,t;z',\theta',t') - \beta\int_> dz''\,w(z-z'')\,v_>(z'')\,\Delta^{\varphi\varphi}(z'',\pi,t;z',\theta',t') = 0, \quad (4.64) \]
\[ (\partial_t + v_<\partial_\chi)\,\Delta^{\varphi\varphi}(z,\chi,t;z',\theta',t') = Q(z,\chi)\,\Delta^{v\varphi}(z,t;z',\theta',t') + \delta(z-z')\,\delta(\chi-\vartheta_<(\theta'))\,\delta(t-t'), \quad (4.65) \]

where

\[ Q(z,\chi) = \partial_\chi\, \frac{2}{1+|I+a|\coth^2(\chi/2)}\;\delta\big[\vartheta_<^{-1}(\chi) - \theta_-(z)\big] \]

and $v_<(z) = 2\sqrt{|I(z)+a(z)|}$. Integrating yields

\[ \Delta^{\varphi v}(z,\chi,t;z',t') = \int_{t'}^{t} Q[z,\chi - v_<(z)(t-\tau)]\,\Delta^{vv}(z,\tau;z',t')\,d\tau, \quad (4.66) \]
\[ \Delta^{\varphi\varphi}(z,\chi,t;z',\theta',t') = \int_{t'}^{t} Q[z,\chi - v_<(z)(t-\tau)]\,\Delta^{v\varphi}(z,\tau;z',\theta',t')\,d\tau + \delta(z-z')\,\delta[\chi - \vartheta_<(\theta') - v_<(z)(t-t')]. \quad (4.67) \]

Hence, the only contribution from the subthreshold neurons is from any neuron that is initially in $D_1$; for uniformly distributed phases this occurs with probability $[2\pi - (\theta_+ - \theta_-)]/2\pi$. The subthreshold propagators are thus passively driven by the suprathreshold propagators. Hence, for $z$ in the subthreshold regime, the relevant propagator equations are

\[ (d/dt+\beta)\,\Delta^{vv}(z,t;z',t') - \beta\int_> dz''\,w(z-z'')\,v_>(z'')\,r^v(z'',t;z',t') = \delta(z-z')\,\delta(t-t'), \quad (4.68\text{--}4.69) \]
\[ (d/dt+\beta)\,\Delta^{v\varphi}(z,t;z',\pi,t') - \beta\int_> dz''\,w(z-z'')\,v_>(z'')\,r^\varphi(z'',t;z',\pi,t') = \frac{\beta}{2}\,w(z-z')\,\delta(t-t') + \beta\sum_{l=1}^{\infty} w(z-z')\,\delta[t-t'-T_l(z')], \quad (4.70) \]
\[ (d/dt+\beta)\,U(z,t;z',t') - \beta\int_> dz''\,w(z-z'')\,v_>(z'')\,r(z'',t;z',t') = \frac{\beta}{2\pi}\,w(z-z')\,v_>(z'). \quad (4.71) \]

B. Covariance functions

1. Drive covariance

As described previously [16], the covariances between the fields to order 1/N are composed of tree diagrams whose vertices have two outgoing branches. Using the diagrams in Figs. 2 and 3, we obtain

\[ N\langle \delta v(z,t)\,\delta v(z',t')\rangle = 2\beta\int dz_1\,dz_2\,d\tau\; \Delta^{vv}(z,t;z_1,\tau)\,\Delta^{v\varphi}(z',t';z_2,\pi,\tau)\,w(z_1-z_2)\,\rho(z_2,\pi) + (x \leftrightarrow x') + \int dz_1 \left\{\int d\theta\,\Delta^{v\varphi}(z,t;z_1,\theta,t_0)\,\rho(z_1,\theta,t_0) \int d\theta'\,\Delta^{v\varphi}(z',t';z_1,\theta',t_0)\,\rho(z_1,\theta',t_0)\right\}. \quad (4.72) \]
FIG. 3. Tree-level diagrams for (a) drive covariance $\langle vv\rangle$ and (b) rate covariance $\langle\varphi\varphi\rangle$. The lower two diagrams are zero for (a) and (b). For the upper three diagrams in (a) and (b), the first diagram corresponds to the third term, while the second and third diagrams correspond to the first and second terms of Eq. (4.72) and Eq. (4.80), respectively.

Evaluating the covariance function in (4.72) requires computing the propagators using the equations derived in the previous section. Our numerical methods for integrating these equations are in the Appendix. We compared the theory to microscopic simulations of (2.4) with the initial condition of $u(z)$ fixed to the mean-field solution $a(z)$ and the initial condition of $\theta(z)$ sampled from the probability distribution obeying the mean-field solution $\rho(z,\theta)$. For the suprathreshold region, the cumulative distribution function for $\rho(z,\theta)$ is

\[ P(z,\theta) = \frac{1}{\pi}\tan^{-1}\left[\frac{\sin\theta}{\sqrt{I(z)+a(z)}\,(1+\cos\theta)}\right] + \frac{1}{2}, \quad (4.73) \]

from which we can sample θ by applying the inverse of (4.73) to a uniform random number. For the subthreshold region, all the samples are taken to be at the stable fixed point $\theta_-(z) = -2\tan^{-1}\sqrt{|I(z)+a(z)|}$.
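Inverting (4.73) gives an explicit sampler for the suprathreshold phases: with $P$ uniform on $(0,1)$, $\theta = 2\tan^{-1}[\sqrt{I+a}\,\tan(\pi(P - 1/2))]$, since $\sin\theta/(1+\cos\theta) = \tan(\theta/2)$. A sketch for a single assumed value of $I + a$:

```python
import numpy as np

def sample_phases(Ia, n, rng):
    """Inverse-CDF sampling of the stationary density (3.23):
    theta = 2 atan( sqrt(I+a) tan(pi (P - 1/2)) ), P ~ Uniform(0, 1)."""
    P = rng.uniform(0.0, 1.0, n)
    return 2 * np.arctan(np.sqrt(Ia) * np.tan(np.pi * (P - 0.5)))

rng = np.random.default_rng(1)
Ia = 1.5                                          # illustrative value of I + a
theta = sample_phases(Ia, 200_000, rng)
# empirical density should match rho(z, theta) of Eq. (3.23)
hist, edges = np.histogram(theta, bins=100, range=(-np.pi, np.pi), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
rho = np.sqrt(Ia) / (np.pi * ((1 - np.cos(mid)) + Ia * (1 + np.cos(mid))))
density_err = np.max(np.abs(hist - rho))
```

A histogram of the samples reproduces (3.23) to within sampling noise, confirming the inversion.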

A comparison between the variance of synaptic drive fluctuations for the microscopic simulation as a function of space at a fixed time for two values of $N$ and the theory is shown in Fig. 4(a) for external input and synaptic coupling weight as in Fig. 1. This is a case where all neurons are in the suprathreshold region. We see that the theory starts to break down for smaller system sizes at the local maxima and minima of the variance. This is expected since the theory is valid to order $N^{-1}$ in perturbation theory and the maxima and minima are where the effective local population is smallest. Figure 4(b) shows the variance near a maximum as a function of $N$, showing an accurate prediction beyond $N = 800$. The sample size for these microscopic simulations is $5\times 10^5$, and we estimate the error of the variance using the bootstrap; the error is of order $10^{-2}$. A segment of the spatiotemporal dynamics is shown in Fig. 5. The theory matches the simulation quite well, with the greatest deviation near the maxima and minima.
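The bootstrap error bars quoted above can be reproduced with a standard resampling estimate of the standard error of the sample variance; here synthetic Gaussian data stand in for the ensemble of simulated drives:

```python
import numpy as np

def bootstrap_se_of_variance(samples, n_boot=1000, seed=0):
    """Bootstrap standard error of the sample variance: resample with
    replacement, recompute the variance, take the spread across resamples."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    idx = rng.integers(0, n, size=(n_boot, n))
    return samples[idx].var(axis=1, ddof=1).std(ddof=1)

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 5000)   # stand-in for u(z, t) across an ensemble
se = bootstrap_se_of_variance(x)
# for unit-variance Gaussian data, Var(s^2) ~= 2/n, so se ~= sqrt(2/5000)
```

The same routine applied per spatial grid point yields the error bars of Figs. 4, 7, and 8.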

FIG. 4. Variance times $N$ at time $t = 10$ for the parameters in Fig. 1. (a) Comparison between microscopic simulation and theory for $N = 200$ and $N = 800$. (b) The $N$ dependence of $N\langle\delta u(z)\,\delta u(z)\rangle$ at $z = 0.2$. Standard errors for the microscopic simulation are estimated by bootstrap.

FIG. 5. Spatiotemporal dynamics of the synaptic drive variance for the microscopic simulation for $N = 800$ in (a) and (c) and the theory in (b) and (d). Parameters are as in Fig. 1.

Figure 6 shows the two-time and two-space covariances of the synaptic drive for the same network parameters. The spatial covariance mirrors the coupling function as expected.

FIG. 6. Spatiotemporal plot of the covariance $\langle\delta u(z,20)\,\delta u(z',20-\tau)\rangle$ for (a) theory and (b) microscopic simulation using parameters from Fig. 1. (c) Covariance at a single spatial location, $\langle\delta u(0.5,20)\,\delta u(0.5,20-\tau)\rangle$. (d) Covariance at a single time, $\langle\delta u(z=0.005,20)\,\delta u(z',20)\rangle$. Standard errors are estimated by jackknife.

Figure 7 shows a comparison between the theory and the microscopic simulation when subthreshold neurons are included. There is a good match when $N$ is large. As $N$ decreases, the theory starts to fail first at the edges of the bump. This is likely because the location of the edge can move, which is not captured by the theory, since it assumes fluctuations around a stationary mean-field solution. However, the spontaneous firing of subthreshold neurons, due either to the initial conditions or to the fluctuating inputs of suprathreshold neurons, can cause the edge of the bump to move, and this is a nonperturbative effect.

FIG. 7. (a) Variance multiplied by $N$ and (b) mean of the synaptic drive with subthreshold neurons for constant stimulus $I = -1$ and (c) coupling weight $w(z) = A[\exp(-az) - \exp(-bz)] + A[\exp(-a(L-z)) - \exp(-b(L-z))]$ with $A = 150$, $a = 30$, and $b = 20$. The suprathreshold edge of the bump is at $u = 1$. (a) Evaluated at time 10. Standard errors are estimated by bootstrap.

2. Rate covariance

The firing rate is defined as ν = 2η(z, π, t) with mean

\[ \langle \nu(z,t)\rangle = 2\rho(z,\pi,t) \quad (4.74) \]

and covariance

\[ \langle\delta\nu(z,t)\,\delta\nu(z',t')\rangle = \big\langle [\nu(z,t) - \langle\nu(z,t)\rangle]\,[\nu(z',t') - \langle\nu(z',t')\rangle] \big\rangle \quad (4.75) \]
\[ = \langle \nu(z,t)\,\nu(z',t')\rangle - \langle\nu(z,t)\rangle\,\langle\nu(z',t')\rangle \quad (4.76) \]
\[ = 4\langle \eta(z,\pi,t)\,\eta(z',\pi,t')\rangle - 4\rho(z,\pi,t)\,\rho(z',\pi,t') = 4\big\langle [\tilde\varphi(z,\pi,t)\,\varphi(z,\pi,t) + \varphi(z,\pi,t)]\,[\tilde\varphi(z',\pi,t')\,\varphi(z',\pi,t') + \varphi(z',\pi,t')] \big\rangle \quad (4.77) \]
\[ = 4\langle \varphi(z,\pi,t)\,\varphi(z',\pi,t')\rangle + 4\langle \varphi(z,\pi,t)\,\tilde\varphi(z',\pi,t')\,\varphi(z',\pi,t')\rangle \quad (4.78) \]
\[ = 4\langle \varphi(z,\pi,t)\,\varphi(z',\pi,t')\rangle + \frac{4}{N}\,\Delta^{\varphi\varphi}(z,\pi,t;z',\pi,t')\,\rho(z',\pi,t'). \quad (4.79) \]

At tree level, from the diagrams in Fig. 3(b),

\[ N\langle\varphi(z,t)\,\varphi(z',t')\rangle = 2\beta\int dz_1\,dz_2\,d\tau\;\Delta^{\varphi v}(z,\pi,t;z_1,\tau)\,\Delta^{\varphi\varphi}(z',\pi,t';z_2,\pi,\tau)\,w(z_1-z_2)\,\rho(z_2,\pi,\tau) + (x\leftrightarrow x') + \int dz_1\left\{\int d\theta\,\Delta^{\varphi\varphi}(z,\pi,t;z_1,\theta,t_0)\,\rho(z_1,\theta,t_0)\int d\theta'\,\Delta^{\varphi\varphi}(z',\pi,t';z_1,\theta',t_0)\,\rho(z_1,\theta',t_0)\right\}. \quad (4.80) \]

We rewrite as

\[ N\langle\varphi(z,t)\,\varphi(z',t')\rangle = 2\beta\int d\tau\,dz_1\,dz_2\; r^v(z,t;z_1,\tau)\,\big\{r^\varphi(z',t';z_2,\pi,\tau) + \delta[\pi-\vartheta_>(\pi)-v_>(z')(t'-\tau)]\,\delta(z'-z_2)\big\}\,w(z_1-z_2)\,\rho(z_2,\pi,\tau) + (x\leftrightarrow x') \]
\[ + \int dz_1\Big(\int d\theta\,\big\{r^\varphi(z,t;z_1,\theta,t_0) + \delta[\pi-\vartheta_>(\theta)-v_>(z)(t-t_0)]\,\delta(z-z_1)\big\}\,\rho(z_1,\theta,t_0)\, \int d\theta'\,\big\{r^\varphi(z',t';z_1,\theta',t_0) + \delta[\pi-\vartheta_>(\theta')-v_>(z')(t'-t_0)]\,\delta(z'-z_1)\big\}\,\rho(z_1,\theta',t_0)\Big) \]
\[ = 2\beta\int d\tau\,dz_1\,dz_2\; r^v(z,t;z_1,\tau)\,r^\varphi(z',t';z_2,\pi,\tau)\,w(z_1-z_2)\,\rho(z_2,\pi,\tau) + \frac{2\beta}{|v_>(z')|}\sum_l \int dz_1\; r^v[z,t;z_1,t'-2\pi l/v_>(z')]\,w(z_1-z')\,\rho[z',\pi,t'-2\pi l/v_>(z')] + (x\leftrightarrow x') \]
\[ + \int dz_1\int d\theta\; r^\varphi(z,t;z_1,\theta,t_0)\,\rho(z_1,\theta,t_0)\int d\theta'\; r^\varphi(z',t';z_1,\theta',t_0)\,\rho(z_1,\theta',t_0) + \frac{1}{2\pi}\int d\theta\; r^\varphi(z,t;z',\theta,t_0)\,\rho(z',\theta,t_0) + \frac{1}{2\pi}\int d\theta'\; r^\varphi(z',t';z,\theta',t_0)\,\rho(z,\theta',t_0) + \frac{1}{4\pi^2}\int dz_1\,\delta(z-z_1)\,\delta(z'-z_1), \]

where

\[ \Delta^{\varphi v}(z,\pi,t;z',t') = r^v(z,t;z',t'), \]
\[ \Delta^{\varphi\varphi}(z,\pi,t;z',\theta',t') = r^\varphi(z,t;z',\theta',t') + \delta[\pi-\vartheta_>(\theta')-v_>(z)(t-t')]\,\delta(z-z'), \]
\[ r(z,t;z',t') = \int r^\varphi(z,t;z',\theta',t')\,\rho^0(z',\theta')\,d\theta'. \]

Hence

\[ \langle\delta\nu(z,t)\,\delta\nu(z',t')\rangle = v_>(z)\,v_>(z')\Big\{\frac{8\beta}{N}\int d\tau\,dz_1\,dz_2\; r^v(z,t;z_1,\tau)\,r^\varphi(z',t';z_2,\pi,\tau)\,w(z_1-z_2)\,\rho(z_2,\pi,\tau) + \frac{8\beta}{|v_>(z')|N}\sum_l \int dz_1\; r^v[z,t;z_1,t'-2\pi l/v_>(z')]\,w(z_1-z')\,\rho[z',\pi,t'-2\pi l/v_>(z')] + (x\leftrightarrow x') \]
\[ + \frac{4}{N}\int dz_1\; r(z,t;z_1,t_0)\,r(z',t';z_1,t_0) + \frac{2}{\pi N}\,r(z,t;z',t_0) + \frac{2}{\pi N}\,r(z',t';z,t_0) + \frac{1}{\pi^2 N}\,\delta(z-z')\Big\} + \frac{4}{N}\,\Delta^{\varphi\varphi}(z,\pi,t;z',\pi,t')\,\rho(z',\pi,t'). \]

This quantity is well behaved for $t \neq t'$ and $z \neq z'$. However, in the limit of $t' \to t$, the rate covariance is singular since

$$
\begin{aligned}
\lim_{t'\to t}\Delta_{\varphi\tilde\varphi}(z,\pi,t;z',\pi,t')\,\rho(z',\pi,t')
&= \delta(z-z')\,\delta[v_>(z)(t-t')]\,\frac{d\phi}{d\theta}\bigg|_{\theta=\pi}\rho(z,\pi,t), &(4.81)\\
&= \frac{v_>(z)}{2|v_>(z)|}\,\delta(z-z')\,\delta(0)\,\rho(z,\pi,t). &(4.82)
\end{aligned}
$$

We regularize the singularity at t = t′ by considering the time integral over a small interval:

$$\Delta\nu(z,t) = \int_{t-\Delta t/2}^{t+\Delta t/2}\delta\nu(z,s)\,ds,$$

giving

$$
\begin{aligned}
\frac{\langle\Delta\nu(z,t)\,\Delta\nu(z',t)\rangle}{\Delta t^2}
= v_>(z)\,v_>(z')\bigg\{& \frac{8\beta}{N}\int d\tau\,dz_1\,dz_2\, r_\nu(z,t;z_1,\tau)\, r_\varphi(z',t;z_2,\pi,\tau)\,w(z_1-z_2)\,\rho(z_2,\pi,\tau)\\
&+ \frac{8\beta}{|v_>(z')|N}\sum_l\int dz_1\, r_\nu[z,t;z_1,t-2\pi l/v_>(z')]\,w(z_1-z')\,\rho[z',\pi,t-2\pi l/v_>(z')] + (x\leftrightarrow x')\\
&- \frac{4}{N}\int dz_1\, r(z,t;z_1,t_0)\,r(z',t;z_1,t_0) - \frac{2}{\pi N}\,r(z,t;z',t_0) - \frac{2}{\pi N}\,r(z',t;z,t_0)\\
&- \frac{1}{\pi^2 N}\int dz_1\,\delta(z-z_1)\,\delta(z'-z_1)\bigg\} + \frac{2}{N\Delta t}\,\rho(z,\pi,t)\,\delta(z-z'). \quad(4.83)
\end{aligned}
$$

We regularize the singularity at z = z′ by taking a local spatial average over [z − c_N/2, z + c_N/2]. We make the approximation that within this local region the propagator is constant in space, which is valid in the large-N limit. This results in

$$
\begin{aligned}
\frac{\langle\Delta\bar\nu(z,t)\,\Delta\bar\nu(z',t)\rangle}{\Delta t^2}
= v_>(z)\,v_>(z')\bigg\{& \frac{8\beta}{N}\int d\tau\,dz_1\,dz_2\, r_\nu(z,t;z_1,\tau)\, r_\varphi(z',t;z_2,\pi,\tau)\,w(z_1-z_2)\,\rho(z_2,\pi,\tau)\\
&+ \frac{8\beta}{|v_>(z')|N}\sum_l\int dz_1\, r_\nu[z,t;z_1,t-2\pi l/v_>(z')]\,w(z_1-z')\,\rho[z',\pi,t-2\pi l/v_>(z')] + (x\leftrightarrow x')\\
&- \frac{4}{N}\int dz_1\, r(z,t;z_1,t_0)\,r(z',t;z_1,t_0) - \frac{2}{\pi N}\,r(z,t;z',t_0) - \frac{2}{\pi N}\,r(z',t;z,t_0)\\
&- \frac{1}{\pi^2 N c_N}\bigg\} + \frac{2}{N\Delta t\, c_N}\,\rho(z,\pi,t). \quad(4.84)
\end{aligned}
$$

Figure 8 shows a comparison of the theory in (4.84) to the microscopic simulations. As shown in Fig. 8(a), at N = 1200, the theory predicts the mean firing rate well. In Fig. 8(c), we show the variance of the firing rate at a fixed location. In Fig. 8(d), we show the spatial structure of the variance. Again, the theory captures the simulations.

FIG. 8.

(a) Comparison between theory and microscopic simulations of time dependence of mean firing rate at one spatial location. (b) Spatial dependence of mean firing rate at time 3. [(c) and (d)] The same comparisons for the variance given in Eq. (4.84). Parameters are from Fig. 1 and N = 1200. Standard errors are estimated by bootstrap.
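The figure captions state that standard errors are estimated by bootstrap. A minimal sketch of such an estimate for the variance of a locally averaged firing rate follows; the data here are synthetic stand-ins (normally distributed "trials"), not the paper's actual simulation output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial, locally averaged firing rates at one location,
# standing in for repeated microscopic simulations.
rates = rng.normal(loc=2.0, scale=0.3, size=200)

def bootstrap_se_of_variance(x, n_boot=2000, rng=rng):
    """Standard error of the sample variance of x via resampling with replacement."""
    n = len(x)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample trial indices
        boot[i] = x[idx].var(ddof=1)           # variance of the resample
    return x.var(ddof=1), boot.std(ddof=1)

var_hat, se = bootstrap_se_of_variance(rates)
```

The point estimate `var_hat` would be compared against the theoretical prediction of Eq. (4.84), with `se` supplying the error bar.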

V. DISCUSSION

Our goal was to understand the dynamics of a large but finite network of deterministic synaptically coupled neurons with nonuniform coupling. In particular, we wanted to quantify the dynamics of individual neurons within the network. We first showed that a self-consistent local mean-field theory can describe the dynamics of a single network if the external input and coupling weight are continuous functions. This imposes a spatial metric on the network in which neurons within a local neighborhood experience similar inputs and can thus be averaged over locally. This local continuity does not impose any conditions on long-range interactions, which can still be random. We thus propose a new kind of network to study: continuous randomly coupled spiking networks, where the coupling is continuous locally but irregular at longer scales.
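One way to construct such a coupling profile, offered here as an illustrative sketch rather than the paper's construction, is to smooth white noise on a ring with a narrow periodic kernel, so that nearby neurons receive similar weights while the long-range structure remains random:

```python
import numpy as np

# Sketch of a "continuous randomly coupled" weight profile: white noise
# on a ring of N sites smoothed with a periodic Gaussian kernel. All
# parameter values are illustrative assumptions.
rng = np.random.default_rng(1)
N = 1024
sigma = 8.0                                   # smoothing scale (lattice units)

noise = rng.standard_normal(N)
z = np.arange(N)
d = np.minimum(z, N - z)                      # distance on the ring
kernel = np.exp(-0.5 * (d / sigma) ** 2)
kernel /= kernel.sum()                        # normalize the kernel

# Periodic convolution via FFT yields the smoothed coupling profile
w = np.fft.irfft(np.fft.rfft(noise) * np.fft.rfft(kernel), n=N)

# Continuity check: neighboring weights differ far less than the
# overall spread of the profile.
mean_step = np.abs(np.diff(w)).mean()
```

Neurons a few lattice sites apart then see nearly identical weights, so a local average is well defined, while weights at separations much larger than `sigma` are effectively independent.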

We showed that corrections to mean-field theory can be computed as an expansion in the inverse of the number of neurons in a local neighborhood. In this paper, we chose to scale the local neighborhood with the total number of neurons, but this is not necessary. We did this by first writing down a formal and complete statistical description of the theory, mirroring the Klimontovich approach used in the kinetic theory of plasmas [14,25,26]. This formal theory is regularized by averaging, which leads to a BBGKY moment hierarchy. As in previous works [14-16,27-29], we showed that the Klimontovich description can be mapped to an equivalent Doi-Peliti-Janssen path-integral description from which a perturbation expansion in terms of Feynman diagrams can be derived. The path-integral formalism is a convenient tool for calculations. Although we only computed covariances to first order (tree level), it is straightforward (although computationally intensive) to continue to higher order as well as to compute higher-order moments. We only considered a deterministic network for clarity, but our method can easily incorporate stochastic effects, which would simply add a new vertex to the action.

We showed that the theory works quite well for large enough network size, which can be quite small if all neurons receive suprathreshold input. However, the expansion works less well for neurons with near-critical input, such as neurons at the edge of a bump, where infinitesimally small perturbations can produce qualitatively different behavior. Quantitatively capturing the dynamics at the edge may require renormalization. The formalism could provide a systematic means of understanding randomly connected networks [30] and so-called balanced networks [31,32], where the mean inputs from excitatory and inhibitory synapses are attracted to a fixed point near zero and the neuron dynamics are dominated by fluctuations.

ACKNOWLEDGMENTS

This research was supported by the Intramural Research Program of the NIH, NIDDK.

APPENDIX: NUMERICAL METHODS

Discretization schemes

We use a fully implicit backward Euler scheme to compute the Green's functions for the propagators. The continuous-time system is

$$\frac{dr_{ij}}{dt} = s_{ij}, \tag{A1}$$

$$\frac{ds_{ij}}{dt} = \frac{1}{\pi}U_{ij} - v_i^2\, r_{ij}, \tag{A2}$$

$$\partial_t U_{ij} = -\beta U_{ij} + \frac{\beta}{N}\sum_k w_{ik}\, v_k\, r_{kj} + \frac{\beta}{2\pi}\, w_{ij}\, v_j, \tag{A3}$$

which discretizes with time step $h$ as

$$r_{ij}^t = r_{ij}^{t-1} + h\, s_{ij}^t, \tag{A4}$$

$$s_{ij}^t = s_{ij}^{t-1} + h\left(\frac{1}{\pi}U_{ij}^t - v_i^2\, r_{ij}^t\right), \tag{A5}$$

$$U_{ij}^t = U_{ij}^{t-1} + h\left(-\beta U_{ij}^t + \frac{\beta}{N}\sum_k w_{ik}\, v_k\, r_{kj}^t + \frac{\beta}{2\pi}\, w_{ij}\, v_j\right). \tag{A6}$$

Collecting the unknowns at time $t$ on the left gives

$$r_{ij}^t - h\, s_{ij}^t = r_{ij}^{t-1}, \tag{A7}$$

$$s_{ij}^t + h\, v_i^2\, r_{ij}^t - \frac{h}{\pi}\, U_{ij}^t = s_{ij}^{t-1}, \tag{A8}$$

$$U_{ij}^t - \frac{h\beta}{N}\sum_k w_{ik}\, v_k\, r_{kj}^t + h\beta\, U_{ij}^t = U_{ij}^{t-1} + \frac{h\beta}{2\pi}\, w_{ij}\, v_j, \tag{A9}$$

or, in block-matrix form for each column $j$,

$$
\begin{pmatrix} I & -hI & 0 \\ h\,\mathrm{diag}(v^2) & I & -\frac{h}{\pi} I \\ -\frac{h\beta}{N}\, w\,\mathrm{diag}(v) & 0 & (1+h\beta) I \end{pmatrix}
\begin{pmatrix} r_{j}^t \\ s_{j}^t \\ U_{j}^t \end{pmatrix}
=
\begin{pmatrix} r_{j}^{t-1} \\ s_{j}^{t-1} \\ U_{j}^{t-1} + \frac{h\beta}{2\pi}\, w_{\cdot j}\, v_j \end{pmatrix}. \tag{A10}
$$

We add the spike terms in Eq. (4.51) directly to the propagator Δ_{νφ̃}(z,t;z′,π,t′) for all l such that t − T_l(z′) = t′. These spike terms make the differential equations stiff, and explicit solvers such as Runge-Kutta have poor stability properties for them.
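A single implicit step of the block system in Eq. (A10) can be sketched as follows. This is a minimal illustration with hypothetical network size, weights, and velocities; in practice the constant block matrix would be factorized once and reused at every time step:

```python
import numpy as np

# One backward Euler step for the propagator system (A1)-(A3) in the
# block form of Eq. (A10). All parameter values here are illustrative.
rng = np.random.default_rng(2)
N, h, beta = 40, 1e-3, 1.0
w = rng.standard_normal((N, N)) / N   # hypothetical coupling matrix
v = np.full(N, 2.0)                   # hypothetical phase velocities

I = np.eye(N)
Z = np.zeros((N, N))
# Left-hand block matrix of Eq. (A10): (w * v) has entries w[i,k]*v[k],
# matching the sum over k of w_ik v_k r_kj.
M = np.block([
    [I,                          -h * I, Z],
    [h * np.diag(v ** 2),         I,     -(h / np.pi) * I],
    [-(h * beta / N) * (w * v),   Z,     (1.0 + h * beta) * I],
])

def step(r_j, s_j, U_j, j):
    """Advance column j of (r, s, U) by one implicit Euler step."""
    rhs = np.concatenate([
        r_j,
        s_j,
        U_j + (h * beta / (2.0 * np.pi)) * w[:, j] * v[j],  # forcing in (A10)
    ])
    x = np.linalg.solve(M, rhs)
    return x[:N], x[N:2 * N], x[2 * N:]

r0 = s0 = U0 = np.zeros(N)
r1, s1, U1 = step(r0, s0, U0, j=0)
```

Because the update is fully implicit, the step remains stable even when the spike terms described above make the system stiff, which is the motivation for avoiding explicit solvers here.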

References

[1] Abbott LF and van Vreeswijk C, Asynchronous states in networks of pulse-coupled oscillators, Phys. Rev. E 48, 1483 (1993).
[2] Amari S-I, Dynamics of pattern formation in lateral-inhibition type neural fields, Biol. Cybern. 27, 77 (1977).
[3] Brunel N and Hakim V, Fast global oscillations in networks of integrate-and-fire neurons with low firing rates, Neural Comput. 11, 1621 (1999).
[4] Cohen MA and Grossberg S, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Syst. Man Cybern. SMC-13, 815 (1983).
[5] Fourcaud N and Brunel N, Dynamics of the firing probability of noisy integrate-and-fire neurons, Neural Comput. 14, 2057 (2002).
[6] Hopfield JJ, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. U.S.A. 79, 2554 (1982).
[7] Wilson HR and Cowan JD, Excitatory and inhibitory interactions in localized populations of model neurons, Biophys. J. 12, 1 (1972).
[8] Mirollo RE and Strogatz SH, Synchronization of pulse-coupled biological oscillators, SIAM J. Appl. Math. 50, 1645 (1990).
[9] Nykamp DQ and Tranchina D, A population density approach that facilitates large-scale modeling of neural networks: Extension to slow inhibitory synapses, Neural Comput. 13, 511 (2001).
[10] Treves A, Mean-field analysis of neuronal spike dynamics, Netw. Comput. Neural Syst. 4, 259 (1993).
[11] Ermentrout B, Type I membranes, phase resetting curves, and synchrony, Neural Comput. 8, 979 (1996).
[12] Jones SR and Kopell N, Local network parameters can affect inter-network phase lags in central pattern generators, J. Math. Biol. 52, 115 (2006).
[13] Maran SK and Canavier CC, Using phase resetting to predict 1:1 and 2:2 locking in two neuron networks in which firing order is not always preserved, J. Comput. Neurosci. 24, 37 (2008).
[14] Hildebrand EJ, Buice MA, and Chow CC, Kinetic theory of coupled oscillators, Phys. Rev. Lett. 98, 054101 (2007).
[15] Buice MA and Chow CC, Correlations, fluctuations, and stability of a finite-size network of coupled oscillators, Phys. Rev. E 76, 031118 (2007).
[16] Buice MA and Chow CC, Dynamic finite size effects in spiking neural networks, PLoS Comput. Biol. 9, e1002872 (2013).
[17] Dahmen D, Bos H, and Helias M, Correlated fluctuations in strongly coupled binary networks beyond equilibrium, Phys. Rev. X 6, 031024 (2016).
[18] Dumont G, Payeur A, and Longtin A, A stochastic-field description of finite-size spiking neural networks, PLoS Comput. Biol. 13, e1005691 (2017).
[19] Helias M, Tetzlaff T, and Diesmann M, Echoes in correlated neural systems, New J. Phys. 15, 023002 (2013).
[20] Lang E and Stannat W, Finite-size effects on traveling wave solutions to neural field equations, J. Math. Neurosci. 7, 5 (2017).
[21] Touboul JD and Ermentrout GB, Finite-size and correlation-induced effects in mean-field dynamics, J. Comput. Neurosci. 31, 453 (2011).
[22] Desai RC and Zwanzig R, Statistical mechanics of a nonlinear stochastic model, J. Stat. Phys. 19, 1 (1978).
[23] Ben-Yishai R, Bar-Or RL, and Sompolinsky H, Theory of orientation tuning in visual cortex, Proc. Natl. Acad. Sci. U.S.A. 92, 3844 (1995).
[24] Zhang K, Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory, J. Neurosci. 16, 2112 (1996).
[25] Ichimaru S, Basic Principles of Plasma Physics: A Statistical Approach (W. A. Benjamin, Amsterdam, 1973).
[26] Nicholson DR, Introduction to Plasma Theory, Vol. 2 (Cambridge University Press, Cambridge, 1984).
[27] Buice MA, Cowan JD, and Chow CC, Systematic fluctuation expansion for neural network activity equations, Neural Comput. 22, 377 (2010).
[28] Buice M and Chow C, Generalized activity equations for spiking neural network dynamics, Front. Comput. Neurosci. 7, 162 (2013).
[29] Buice MA and Chow CC, Beyond mean field theory: Statistical field theory for neural networks, J. Stat. Mech.: Theory Exp. (2013) P03003.
[30] Sompolinsky H, Crisanti A, and Sommers HJ, Chaos in random neural networks, Phys. Rev. Lett. 61, 259 (1988).
[31] Ostojic S, Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons, Nat. Neurosci. 17, 594 (2014).
[32] van Vreeswijk C and Sompolinsky H, Chaos in neuronal networks with balanced excitatory and inhibitory activity, Science 274, 1724 (1996).
