Published in final edited form as: Phys. Rev. E 98, 062414 (2018). doi: 10.1103/PhysRevE.98.062414
Finite-size effects for spiking neural networks with spatially dependent coupling
1Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), National Institutes of Health (NIH), Bethesda, Maryland 20892, USA
Abstract
We study finite-size fluctuations in a network of deterministic spiking neurons with nonuniform synaptic coupling. We generalize a previously developed theory of finite-size effects for globally coupled neurons with a uniform coupling function. In the uniform coupling case, mean-field theory is well defined by averaging over the network as the number of neurons goes to infinity. However, for nonuniform coupling it is no longer possible to average over the entire network if we are interested in fluctuations at a particular location within the network. We show that if the coupling function approaches a continuous function in the infinite system size limit, then an average over a local neighborhood can be defined such that mean-field theory is well defined for a spatially dependent field. We then use a path-integral formalism to derive a perturbation expansion in the inverse system size around the mean-field limit for the covariance of the input to a neuron (synaptic drive) and the firing rate fluctuations due to dynamical deterministic finite-size effects.
I. INTRODUCTION
The dynamics of neural networks have traditionally been studied in the limit of very large numbers of neurons, where mean-field theory can be applied, e.g., Refs. [1–10], or for a small number of neurons, where traditional dynamical systems approaches can be used, e.g., Refs. [11–13]. The intermediate regime of large but finite numbers of neurons can have interesting properties that are independent of the small and infinite system limits [14–21]. However, these previous works have not fully explored fluctuations due to finite-size effects at specific locations within the network when all the neurons receive nonhomogeneous input from other neurons because of nonuniform coupling. Here we consider finite-size effects in a network of spiking neurons with nonuniform synaptic coupling. Previously [14–16], a perturbation expansion in the inverse network neuron number had been developed for networks with global spatially uniform coupling and we generalize that theory to include nonuniform coupling. We first show that mean-field theory in the infinite nonuniform system limit can be realized in a single network if a spatial metric can be imposed on the network and the coupling function is a continuous function of this distance measure. We then analyze finite-size fluctuations around such mean-field solutions using a path-integral formalism to derive a perturbation expansion in the inverse network neuron number for the spatially dependent covariance function for the synaptic drive and spatially dependent neuron firing rate.
II. COUPLED NEURON MODEL
Consider a network of N theta neurons (the phase reduction of quadratic integrate-and-fire neurons [11]) on a one-dimensional periodic domain of size L, although the theory can be applied to any domain. The network obeys the following deterministic microscopic equations:
dθi/dt = 1 − cos θi + (1 + cos θi)(Ii + ui),  (2.1)
ui(t) = (1/N) Σj wij sj(t),  (2.2)
dsj/dt + β sj = β Σs δ(t − tjs),  (2.3)
where θi is the phase of neuron i, ui is the synaptic drive to neuron i, Ii is the external input to neuron i, β is the decay constant of the synaptic drive, sj is the time-dependent synaptic input from neuron j, and tjs represents the spike times when the phase of neuron j crosses π. sj rises instantaneously when neuron j spikes and relaxes to zero with a time constant of 1/β. The synaptic drive represents the total time-dependent synaptic input, where the contribution from each neuron is weighted by the synaptic coupling function wij (a real N × N matrix). When Ii + ui > 0, the neuron receives suprathreshold input and θi will progress in time. When it passes π, the neuron is said to spike. When Ii + ui < 0, the neuron receives subthreshold input and the phase will approach a fixed point. The theta neuron is the normal form of a Type I spiking neuron near the bifurcation point to firing [11]. By linearity, the synaptic drive obeys the more convenient form of
dui/dt + β ui = (β/N) Σj wij Σs δ(t − tjs).  (2.4)
We define an empirical density
ηi(θ, t) = δ(θ − θi(t))  (2.5)
that assigns a point mass to the phase of each neuron in the network. Hence, we can write the sum of a spike train as Σs δ(t − tjs) = [1 − cos θ + (1 + cos θ)(Ij + uj)] ηj(θ, t) evaluated at θ = π. For the theta model, 1 − cos π + (1 + cos π)(Ij + uj) = 2, and thus we can rewrite (2.4) as
dui/dt + β ui = (2β/N) Σj wij ηj(π, t).  (2.6)
Neuron number is conserved so the neuron density formally obeys a conservation (Klimontovich) equation [16]:
∂t ηi(θ, t) + ∂θ [Fi(θ, ui) ηi(θ, t)] = 0,  (2.7)
where Fi(θ, ui) = 1 – cos θ + (1 + cos θ )(Ii + ui ). The Klimontovich equation together with (2.6) fully describes the system. However, it is only a formal definition since η is not in general differentiable. In the following, we develop a method to regularize the Klimontovich equation so that desired quantities can be calculated.
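To make the model concrete, the following is a minimal simulation sketch of the microscopic system (2.1)–(2.4) as reconstructed above, using forward Euler with the coupling and input of Fig. 1; the time step, duration, and random seed are illustrative choices, not values from the paper.

```python
import numpy as np

def simulate_theta_network(N=200, T=20.0, dt=1e-3, beta=1.0, seed=0):
    """Forward-Euler integration of the theta-neuron network (2.1) with
    synaptic drive u obeying (2.4), on the unit ring (L = 1)."""
    rng = np.random.default_rng(seed)
    z = np.arange(N) / N                                  # neuron positions
    J0, J2 = 0.2, 0.8
    w = -J0 + J2 * np.cos(2 * np.pi * (z[:, None] - z[None, :]))
    I = 1.0 + np.sin(2 * np.pi * (z - 0.25))              # external input I(z)
    theta = rng.uniform(-np.pi, np.pi, N)                 # random initial phases
    u = np.zeros(N)
    spikes = []                                           # (time, neuron) pairs
    for step in range(int(T / dt)):
        dtheta = 1 - np.cos(theta) + (1 + np.cos(theta)) * (I + u)
        theta = theta + dt * dtheta
        fired = theta > np.pi                             # phase crosses pi
        theta[fired] -= 2 * np.pi
        # (2.4): each spike of neuron j kicks u_i by beta*w_ij/N,
        # after which u decays at rate beta
        u += -dt * beta * u + (beta / N) * w[:, fired].sum(axis=1)
        spikes += [(step * dt, i) for i in np.flatnonzero(fired)]
    return z, u, spikes
```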
III. MEAN-FIELD THEORY
The Klimontovich equation (2.7) only exists in a weak sense. We can regularize it by taking a suitable average over an ensemble of initial conditions:
∂t 〈ηi(θ, t)〉 + ∂θ 〈Fi(θ, ui) ηi(θ, t)〉 = 0.  (3.1)
This equation is not closed because it involves covariances such as 〈ηη〉, which in turn depend on higher-order cumulants in a Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy [14–16]. This hierarchy can be rendered tractable if we can truncate it. Mean-field theory truncates the hierarchy at first order by assuming that all cumulants beyond the first are zero so we can write
〈Fi(θ, ui) ηi〉 = Fi(θ, ai) ρi,  (3.2)
where ai = 〈ui〉 and ρi = 〈ηi〉. The full set of closed mean-field equations are given by
∂t ρi(θ, t) + ∂θ [Fi(θ, ai) ρi(θ, t)] = 0,   dai/dt + β ai = (2β/N) Σj wij ρj(π, t).  (3.3)
Although we can always write the mean-field equations (3.3), it is not clear that a given network would obey them in the infinite-N limit. In previous work [8,16,22], it was shown that mean-field theory applies to a network of coupled oscillators with uniform coupling in the infinite-N limit. However, it is not known when or if mean-field theory applies for nonuniform coupling.
To see this, consider first the stationary system
(3.4)
(3.5)
with uniform coupling, wij = w, and uniform external input, Ii = I. If the neurons are initialized with random phases and remain asynchronous, then we can suppose that in the limit of N → ∞ the quantity
ρ(π) ≡ (1/N) Σj ηj(π)  (3.6)
converges to an invariant quantity [8,16,22]. This then implies that ui = 2wρ ≡ a is also a constant. Thus each neuron will have identical inputs so if we apply the network averaging operator to (3.4) we obtain
(3.7)
(3.8)
Covariances vanish and mean-field theory is realized in the infinite network limit. Given that the drive equation (2.6) is linear, the time-dependent mean-field theory will similarly hold in the large-N limit.
In the case where wij is not uniform, covariances are not guaranteed to vanish and an infinite network need not obey mean-field theory. Our goal is to find conditions such that mean-field theory applies. Again, consider the stationary equations (3.4) and (3.5). Now, instead of averaging over the entire domain, take a local interval around j, [j − cN/2, j + cN/2], where c < 1 is a constant that may depend on N; indices are wrapped periodically, so j − cN/2 < 1 maps to N + j − cN/2 and j + cN/2 > N maps to j + cN/2 − N. We want to express our mean-field equation in terms of the locally averaged empirical density
ρj(θ) = (1/cN) Σi∈[j−cN/2, j+cN/2] ηi(θ).  (3.9)
If cN → ∞ for N → ∞, then it is feasible that the local empirical density can be invariant (to random initial conditions) and correlations can vanish; we seek conditions on the coupling for which this is true.
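To illustrate, here is a short sketch of the local average (3.9) on the periodic ring, with window cN and c = N−α; the stand-in data are arbitrary and only demonstrate the windowing.

```python
import numpy as np

def local_average(x, c):
    """Locally averaged field on a periodic ring, Eq. (3.9): for each j,
    average x over the window [j - cN/2, j + cN/2], with indices wrapped
    modulo N as described in the text."""
    N = len(x)
    half = max(1, int(c * N / 2))
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    padded = np.concatenate([x[-half:], x, x[:half]])     # periodic wrap
    return np.convolve(padded, kernel, mode='valid')

# c = N**(-alpha), 0 < alpha < 1, so the window cN grows as N**(1 - alpha)
N, alpha = 4000, 0.5
x = np.random.default_rng(1).normal(size=N)               # stand-in samples
rho_local = local_average(x, c=N ** (-alpha))
```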
Inserting (3.5) into (3.4) and taking the local average yields
(3.10)
We immediately see that correlations can arise from the sums over the product of ηj(π)ηi(θ). Consider the identity , which is exact for periodic boundary conditions. For nonperiodic boundary conditions there will be an edge contribution but this should be negligible in the large network limit. Using this summation identity, we can rewrite the sum as
(3.11)
where the remainder
(3.12)
carries the correlations. Mean-field theory is valid in the N → ∞ limit if Rjk vanishes. Its magnitude obeys
We introduce a distance measure z = iL/N, z′ = jL/N, z″ = kL/N, z‴ = lL/N and write ρi(θ)|i=zN/L = ρ(z, θ) and wij|i=zN/L, j=z′N/L = w(z, z′). Then
(3.15)
(3.16)
Hence, if we set c = N−α, 0 < α < 1, then as N → ∞, the number of neurons in the local neighborhood, cN, approaches infinity as N1−α while c → 0. Then |R| → 0 as N → ∞ if w(z, z′) is continuous in both arguments, i.e., wij approaches a continuous function in both indices. A similar argument shows that
(3.17)
if Ii approaches a continuous function in index i in the infinite-N limit. Then (3.4) and (3.5) can be written as
(3.18)
(3.19)
Equations (3.18) and (3.19) form a mean-field theory that is realized in a nonuniformly coupled network in the infinite size limit as long as the input and coupling function are continuous functions. By linearity, the time-dependent mean-field theory should equally apply if the external input and the coupling are continuous functions of the indices.
In the N → ∞ limit, setting i → zN/L, ai(t) → a(z, t), ρi(θ, t) → ρ(z, θ, t), Ii → I(z) (continuous), (1/N) Σj → (1/L) ∫ dz′, and wij → w(z, z′) (continuous), we can write mean-field theory in continuum form as
∂t ρ(z, θ, t) + ∂θ {[1 − cos θ + (1 + cos θ)(I(z) + a(z, t))] ρ(z, θ, t)} = 0,   ∂t a(z, t) + β a(z, t) = (2β/L) ∫0L w(z, z′) ρ(z′, π, t) dz′.  (3.20)
The stationary solutions obey
∂θ {[1 − cos θ + (1 + cos θ)(I(z) + a(z))] ρ(z, θ)} = 0,  (3.21)
a(z) = (2/L) ∫0L w(z, z′) ρ(z′, π) dz′.  (3.22)
The stationary solutions will be qualitatively different depending on the sign of I + a. Consider first the suprathreshold regime where I + a > 0. We can then solve (3.21) to obtain
ρ(z, θ) = √(I(z) + a(z)) / (π {1 − cos θ + [I(z) + a(z)](1 + cos θ)}),  (3.23)
which has been normalized such that ∫−ππ ρ(z, θ) dθ = 1. Inserting this back into (3.22) gives
a(z) = (1/πL) ∫0L w(z, z′) √(I(z′) + a(z′)) dz′.  (3.24)
In the subthreshold regime, I + a < 0, (3.23) has a singularity at 1 − cos θ + [I(z) + a(z)](1 + cos θ) = 0, for which there are two solutions θ± that coalesce in a saddle-node bifurcation at I + a = 0. Although ρ is no longer differentiable at equilibrium in the subthreshold regime, there is still a weak solution. It has been shown previously [11] that θ− is stable and θ+ is unstable for a single theta neuron. This implies that the density is given by ρ(z, θ) = δ(θ − θ−) and that ρ(z, π) = 0 (i.e., no firing), as expected in the subthreshold regime. Figure 1 shows an example of a stationary "bump" solution for the periodic coupling function w(z) = −J0 + J2 cos(2πz/L), which has been used in models of orientation tuning in visual cortex [23] and the rodent head direction system [24].
FIG. 1. (a) Mean-field theory synaptic drive for (b) connectivity weight w(z) = −J0 + J2 cos(2πz/L), J0 = 0.2, and J2 = 0.8, and (c) external input I(z) = I0 + sin[2π(z − z0)/L], I0 = 1, z0 = 0.25.
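The bump in Fig. 1 can be computed directly from the self-consistency condition (3.24). Below is a minimal fixed-point iteration sketch, assuming the reconstructed form of (3.24) with L = 1 and setting the square root to zero where I + a ⩽ 0 (subthreshold, no firing); the grid size and tolerance are illustrative, and damping of the update may be needed for other couplings.

```python
import numpy as np

def stationary_drive(I, w, n_iter=500, tol=1e-10):
    """Fixed-point iteration for (3.24) on an M-point grid of [0, 1):
    a(z) = (1/pi) * Int dz' w(z, z') sqrt(I(z') + a(z')),
    where sqrt(I + a) = 2*pi*rho(z', pi) is clipped to zero in the
    subthreshold region, where there is no firing."""
    M = len(I)
    a = np.zeros(M)
    for _ in range(n_iter):
        rate = np.sqrt(np.clip(I + a, 0.0, None))
        a_new = (w @ rate) / (np.pi * M)       # dz' = 1/M
        if np.max(np.abs(a_new - a)) < tol:
            break
        a = a_new
    return a

# Parameters of Fig. 1
M = 256
z = np.arange(M) / M
w = -0.2 + 0.8 * np.cos(2 * np.pi * (z[:, None] - z[None, :]))
I = 1.0 + np.sin(2 * np.pi * (z - 0.25))
a = stationary_drive(I, w)   # a(z) should show the bump of Fig. 1(a)
```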
IV. BEYOND MEAN-FIELD THEORY
In the infinite-N limit when mean-field theory applies, the fields η and u are completely described by their means. The time trajectories of these fields are independent of the initial conditions of the individual neurons. For finite N, the trajectories can differ for different initial conditions and going beyond mean-field theory involves understanding these fluctuations. Implicit in going beyond mean-field theory is that these fields are themselves random variables that are drawn from a distribution functional. In this section, we will derive this distribution functional formally and then use it to compute perturbative expressions for the covariances of η and u.
Recall that the microscopic system is fully described by
∂t ηi(θ, t) + ∂θ [Fi(θ, ui) ηi(θ, t)] = δ(t − t0) ηi0(θ),  (4.1)
dui/dt + β ui − (2β/N) Σj wij ηj(π, t) = u0 δ(t − t0),  (4.2)
where we have expressed the initial conditions as forcing terms. The probability density functional for the fields is then composed of point masses constrained to the dynamical system marginalized over the distribution of the initial data densities:
(4.3)
where P[η0] is the probability density functional of the initial neuron densities for all neurons and Dη0 is the functional integration measure. We consider the initial condition of u to be fixed to u0. Using the functional Fourier transform for the Dirac delta functionals, we then obtain
(4.4)
where and are response fields for neuron i with functional integration measures and over all neurons. If we set , then the distribution over initial densities is given by the distribution over the initial phase, . Thus we can write . The initial condition contribution is given by the integral
(4.5)
(4.6)
(4.7)
Hence, the system given by (2.6) and (2.7) can be mapped to the distribution functional with action S = Sη + Su given by
(4.8)
(4.9)
The exponential in the initial data contribution to the action (which corresponds to a generating function for a Poisson distribution) can be bilinearized via the Doi-Peliti-Janssen transformation [14–16,27–29], resulting in
(4.10)
(4.11)
where we have not included the noncontributing terms that arise after integration by parts.
We now make the coarse-graining transformation i → zN/L, ui(t) → u(z, t), ψi(θ, t) → ψ(z, θ, t), ρi(θ, t) → ρ(z, θ, t), Ii → I(z), (1/N) Σj → (1/L) ∫ dz′, and wij → w(z − z′), which yields
(4.12)
(4.13)
We examine perturbations around the mean-field solutions a(z, t) and ρ(z, θ, t) of (3.20) with u → a(z, t)H(t − t0) + v(z, t) and ψ → ρ(z, θ, t)H(t − t0) + φ(z, θ, t), with corresponding shifts of the response fields, where ρ(z, θ, t = t0) = ρ0(z, θ) and H(t − t0) is the Heaviside function. We then obtain
(4.14)
(4.15)
We have only included the quadratic term of the initial condition since it is the only one that plays a role at first-order perturbation theory (tree level). Finally, if we set the mean-field solutions to the stationary solutions ρ(z, θ) and a(z), then we obtain
(4.16)
(4.17)
Without loss of generality, we set L = 1. In the limit of N → ∞, the dominant contribution to the probability density functional for the fields is the extremum of the action, which defines mean-field theory. Moments of the fields can be computed perturbatively as an expansion in 1/N by using Laplace's method around mean-field theory (i.e., a loop expansion). The bilinear terms in the action (comprising a product of a field and a response field) are the linear response functions or propagators. All the other terms are vertices. Each vertex contributes a factor of N while each propagator contributes 1/N. To make the scaling more transparent, we rescale the fields and response fields; this rescales the propagators to order unity and the vertices to order one or higher, depending on how many response fields they possess. The resulting action is
(4.18)
The propagators and vertices can be represented by Feynman graphs or diagrams (see Fig. 2). Each response field corresponds to an outgoing branch (branch on the left) and each field corresponds to an incoming branch (branch on the right). Time flows from right to left and causality is respected by the propagators. To each branch is attached a corresponding propagator.
FIG. 2. (a) Propagators (upper left), (lower left), (upper right), and (lower right). (b) Vertices for the action in (4.18). From left to right, they are ∂θ (1 + cos θ), 0, ρ0(z, θ)ρ0(z, θ), , .
The propagators are defined by
(4.19)
(4.20)
where x = (z, t) and y = (z, θ, t). The propagator is the response of field a at the unprimed location to field b at the primed location. The propagator satisfies the condition
(4.21)
where q is x or y as appropriate. Inserting (4.20) into (4.21) yields
(4.22)
(4.23)
(4.24)
(4.25)
A. Computation of propagators
In order to perform perturbation theory, we must compute the Green's functions or propagators. There are four types of propagators at each spatial location. The propagator equations comprise two sets of 2N coupled integro-partial-differential equations. They can be simplified to ordinary differential equations, which greatly reduces the computational complexity. The solutions of the equations change qualitatively depending on whether I + a > 0 (the suprathreshold regime) or I + a ⩽ 0 (the subthreshold regime). Given that the propagators depend on two coordinates, there are four separate cases. However, subthreshold neurons are by definition silent, so propagators with the second variable in the subthreshold regime are zero, which leaves two cases for the first variable being supra- or subthreshold.
1. Suprathreshold regime
In the suprathreshold regime, z ∈ {ζ : I + a(ζ) > 0}, we make the following transformation ϑ> : θ → ϕ, where:
(4.26)
which obeys
(4.27)
where the last equality comes from (3.23). This transformation has the nice property that ϑ>(π) = π.
The covariance function (4.72) involves the integral quantity
(4.43)
by our choice of transformation convention. However, instead of computing the propagator at all values of θ′, we create another pair of ordinary differential equations (ODEs) for U. Applying the integral operator to (4.41) and (4.42) gives
(4.44)
(4.45)
where v>(z) = 2√(I(z) + a(z)) and θ0 = −ϑ>−1[v>(z′)(t − t′)]. Hence, we need to numerically integrate the following equations:
(4.46)
(4.47)
(4.48)
(4.49)
(4.50)
(4.51)
where Tl(z′) satisfies 2πl − v>(z′)Tl(z′) = 0, i.e., Tl(z′) = 2πl/v>(z′), marking the firing times measured from t′. The source at t = t′ in (4.51) has a factor of one half because it comes from the θ delta function, which is symmetric about θ = θ′, unlike the contribution from the time delta function, which is one sided due to causality.
2. Subthreshold regime
In the subthreshold regime, namely I + a ⩽ 0, the mean-field solution for the density ρ is a point mass, and this will change the form of the propagators. The propagator equations are
(4.52)
(4.53)
(4.54)
(4.55)
where the equations are defined on I(z) + a(z) < 0, and θ± are the mean-field fixed points satisfying 1 − cos θ± + [I(z) + a(z)](1 + cos θ±) = 0. However, note that the primed variables are defined over the entire z domain since subthreshold neurons can receive input from suprathreshold neurons.
We simplify these equations by breaking the domain of θ into two pieces: D1 = (θ+, θ−) and D2 = (θ−, θ+). In the two advection equations, there will be a clockwise advection of the propagators towards θ− in D1 and a counterclockwise advection towards θ− in D2. π is in D1 but not in D2, so neurons starting in D2 will never fire. In D1, we make the transformation ϑ< : θ → χ:
(4.56)
(4.57)
(4.58)
(4.59)
(4.60)
(4.61)
which maps D1 to the real line where −∞ corresponds to θ+ and ∞ corresponds to θ−.
We then have the following propagator equations in the χ representation:
(4.62)
(4.63)
(4.64)
(4.65)
where
and . Integrating yields
(4.66)
(4.67)
Hence, the only contribution from the subthreshold neurons comes from neurons that start in D1, which for uniformly distributed phases occurs with probability 1 − (θ+ − θ−)/(2π). The subthreshold propagators are thus passively driven by the suprathreshold propagators. Hence, for z in the subthreshold regime, the relevant propagator equations are
(4.68)
(4.69)
(4.70)
(4.71)
B. Covariance functions
1. Drive covariance
As described previously [16], the covariances between the fields to order 1/N are built from vertices with two outgoing branches. Using the diagrams in Figs. 2 and 3, we obtain
FIG. 3. Tree-level diagrams for (a) drive covariance 〈vv〉 and (b) rate covariance 〈φφ〉. The lower two diagrams are zero for (a) and (b). For the upper three diagrams in (a) and (b), the first diagram corresponds to the third term, while the second and third diagrams correspond to the first and second terms of Eq. (4.72) and Eq. (4.80), respectively.
Evaluating the covariance function in (4.72) requires computing the propagators using the equations derived in the previous section. Our numerical methods for integrating these equations are in the Appendix. We compared the theory to microscopic simulations of (2.4) with the initial condition of u(z) fixed to the mean-field solution a(z) and the initial condition of θ(z) sampled from the probability distribution obeying the mean-field solution ρ(z, θ). For the suprathreshold region, the cumulative distribution function for ρ(z, θ) is
C(z, θ) = 1/2 + (1/π) arctan[tan(θ/2)/√(I(z) + a(z))],  (4.73)
from which we can sample θ by applying the inverse of (4.73) to a uniform random number. For the subthreshold region, all the samples are taken to be at the stable fixed point θ−.
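A sketch of this sampling step, assuming the reconstructed CDF (4.73): invert it analytically for suprathreshold locations and place subthreshold samples at the stable fixed point θ−.

```python
import numpy as np

def sample_phases(I_plus_a, rng):
    """Inverse-transform sampling of theta from the stationary density (3.23).
    Suprathreshold (I + a > 0): inverting the CDF (4.73),
        U = 1/2 + (1/pi) arctan[tan(theta/2) / sqrt(I + a)],
    gives theta = 2 arctan[sqrt(I + a) tan(pi (U - 1/2))].
    Subthreshold (I + a <= 0): all the mass sits at the stable fixed point,
    taken here as the negative root theta_- of
    1 - cos(theta) + (I + a)(1 + cos(theta)) = 0, i.e.,
    cos(theta_-) = (1 + I + a) / (1 - I - a)."""
    J = np.asarray(I_plus_a, dtype=float)
    theta = np.empty_like(J)
    supra = J > 0
    U = rng.uniform(0.0, 1.0, supra.sum())
    theta[supra] = 2 * np.arctan(np.sqrt(J[supra]) * np.tan(np.pi * (U - 0.5)))
    theta[~supra] = -np.arccos((1 + J[~supra]) / (1 - J[~supra]))
    return theta

# Example: initial phases for the bump solution a(z) of Fig. 1
# theta0 = sample_phases(I + a, np.random.default_rng(0))
```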
A comparison between the variance of synaptic drive fluctuations for the microscopic simulation as a function of space at a fixed time for two values of N and the theory is shown in Fig. 4(a) for the external input and synaptic coupling weight of Fig. 1. This is a case where all neurons are in the suprathreshold region. We see that the theory starts to break down for smaller system sizes at the local maxima and minima of the variance. This is expected since the theory is valid to order N−1 in perturbation theory and the maxima and minima are where the effective local population is smallest. Figure 4(b) shows the variance near a maximum as a function of N, showing an accurate prediction beyond N = 800. The sample size for these microscopic simulations is 5 × 105, and we estimate the error of the variance using a bootstrap. The error is of order 10−2. A segment of the spatiotemporal dynamics is shown in Fig. 5. The theory matches the simulation quite well, with the greatest deviation near the maxima and minima.
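The bootstrap error bars quoted above can be obtained as in the following sketch; the resampling count is an illustrative choice.

```python
import numpy as np

def bootstrap_se_of_variance(samples, n_boot=1000, seed=0):
    """Bootstrap standard error of the sample variance. `samples` holds one
    scalar observable per microscopic run, e.g., u(z, t) at fixed z and t
    across an ensemble of random initial phases."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        boot[b] = samples[rng.integers(0, n, n)].var(ddof=1)  # resample with replacement
    return boot.std(ddof=1)
```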
FIG. 4. Variance times N at time t = 10 for the parameters of Fig. 1. (a) Comparison between the microscopic simulation and the theory for N = 200 and N = 800. (b) The N dependence of N〈δu(z)δu(z)〉 at z = 0.2. Standard errors for the microscopic simulation are estimated by bootstrap.
FIG. 5. Spatiotemporal dynamics of the synaptic drive variance for the microscopic simulation with N = 800 [(a) and (c)] and the theory [(b) and (d)]. Parameters are as in Fig. 1.
Figure 6 shows the two-time and two-space covariances of the synaptic drive for the same network parameters. The spatial covariance mirrors the coupling function as expected.
FIG. 6. Spatiotemporal plot of the covariance 〈δu(z, 20)δu(z, 20 − τ)〉 for (a) theory and (b) microscopic simulation using the parameters of Fig. 1. (c) Covariance at a single spatial location, 〈δu(0.5, 20)δu(0.5, 20 − τ)〉. (d) Covariance at a single time, 〈δu(z = 0.005, 20)δu(z′, 20)〉. Standard errors are estimated by jackknife.
Figure 7 shows a comparison between the theory and the microscopic simulation when subthreshold neurons are included. There is a good match when N is large. As N decreases, the theory starts to fail at the edges of the bump first. This is likely because the location of the edge can move, which is not captured by the theory since it assumes fluctuations around a stationary mean-field solution. The spontaneous firing of subthreshold neurons, due to either the initial conditions or the fluctuating inputs of suprathreshold neurons, can cause the edge of the bump to move, and this is a nonperturbative effect.
FIG. 7. (a) Variance multiplied by N and (b) mean of the synaptic drive with subthreshold neurons for constant stimulus I = −1 and (c) coupling weight w(z) = A exp(−az) − exp(−bz) + A exp[−a(L − z)] − exp[−b(L − z)] with A = 150, a = 30, and b = 20. The suprathreshold edge of the bump is at u = 1. Panel (a) is evaluated at time 10. Standard errors are estimated by bootstrap.
2. Rate covariance
The firing rate is defined as ν = 2η(z, π, t) with mean
This quantity is well behaved for t ≠ t′ and z ≠ z′. However, in the limit of t′ → t−, the rate covariance is singular since
(4.81)
(4.82)
We regularize the singularity at t = t′ by considering the time integral over a small interval:
giving
(4.83)
We regularize the singularity at z = z′ by taking a local spatial average over [z − cN/2, z + cN/2]. We make the approximation that within this local region the propagator is constant in space, which is valid in the large-N limit. This results in
(4.84)
Figure 8 shows a comparison of the theory in (4.84) to the microscopic simulations. As shown in Fig. 8(a), at N = 1200 the theory predicts the mean firing rate well. In Fig. 8(c), we show the variance of the firing rate at a fixed location. In Fig. 8(d), we show the spatial structure of the variance. Again, the theory captures the simulations.
FIG. 8. (a) Comparison between theory and microscopic simulations of the time dependence of the mean firing rate at one spatial location. (b) Spatial dependence of the mean firing rate at time 3. [(c) and (d)] The same comparisons for the variance given in Eq. (4.84). Parameters are from Fig. 1 with N = 1200. Standard errors are estimated by bootstrap.
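On the simulation side, the same regularization can be applied directly to spike data: integrate over a small time window and average over a local spatial window. A hypothetical sketch (the function and its normalization are illustrative, not taken from the paper):

```python
import numpy as np

def windowed_rate(spikes, N, t0, dt_win, z0, dz_win):
    """Empirical firing rate regularized as in the text: count spikes in the
    time window [t0, t0 + dt_win) and in a spatial window of width dz_win
    around z0 (periodic), normalized per neuron per unit time.
    `spikes` is a list of (time, neuron index) pairs with positions z_i = i/N."""
    count = 0
    for t, i in spikes:
        if not (t0 <= t < t0 + dt_win):
            continue
        dz = abs((i / N - z0 + 0.5) % 1.0 - 0.5)          # periodic distance
        count += dz <= dz_win / 2
    return count / (dz_win * N * dt_win)

# The variance of the regularized rate follows from an ensemble of runs:
# rates = [windowed_rate(s, N, 3.0, 0.05, 0.5, 0.05) for s in runs]
# np.var(rates, ddof=1)
```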
V. DISCUSSION
Our goal was to understand the dynamics of a large but finite network of deterministic synaptically coupled neurons with nonuniform coupling. In particular, we wanted to quantify the dynamics of individual neurons within the network. We first showed that a self-consistent local mean-field theory can describe the dynamics of a single network if the external input and coupling weight are continuous functions. This imposes a spatial metric on the network where neurons within a local neighborhood experience similar inputs and can thus be averaged over locally. This local continuity does not impose any conditions on long range interactions, which can still be random. We thus propose a new kind of network to study, continuous randomly coupled spiking networks, where the coupling is continuous but irregular at longer scales.
We show that corrections to mean-field theory can be computed as an expansion in the number of neurons in a local neighborhood. In this paper, we have chosen to scale the local neighborhood to the total number of neurons, but this is not necessary. We do this by first writing down a formal and complete statistical description of the theory, mirroring the Klimontovich approach used in the kinetic theory of plasmas [14,25,26]. This formal theory is regularized by averaging, which leads to a BBGKY moment hierarchy. As in previous works [14–16,27–29], we showed that the Klimontovich description can be mapped to an equivalent Doi-Peliti-Janssen path-integral description from which a perturbation expansion in terms of Feynman diagrams can be derived. The path-integral formalism is a convenient tool for calculations. Although we only computed covariances to first order (tree level), it is straightforward (although computationally intensive) to continue to higher order as well as compute higher-order moments. We only considered a deterministic network for clarity, but our method can easily incorporate stochastic effects, which would just add a new vertex to the action.
We showed that the theory works quite well for large enough network size, which can be quite small if all neurons receive suprathreshold input. However, the expansion works less well for neurons with critical input, such as neurons at the edge of a bump, where infinitesimally small perturbations can produce qualitatively different behavior. Quantitatively capturing the dynamics at the edge may require renormalization. The formalism could be a systematic means to understanding randomly connected networks [30] and the so-called balanced network [31,32], where the mean inputs from excitatory and inhibitory synapses are attracted to a fixed point near zero and the neuron dynamics is dominated by the fluctuations.
ACKNOWLEDGMENTS
This research was supported by the Intramural Research Program of the NIH, NIDDK.
APPENDIX: NUMERICAL METHODS
Discretization schemes
We use a fully implicit backward Euler scheme to compute the Green's functions (propagators):
(A1)
(A2)
(A3)
(A4)
(A5)
(A6)
(A7)
(A8)
(A9)
(A10)
We add the spike terms in Eq. (4.51) directly to the propagator for all possible l when t − Tl(z′) = t′. These spike terms add stiffness to the differential equations, and explicit solvers such as Runge-Kutta methods have poor stability properties for this system.
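A generic sketch of the scheme, with a linear system du/dt = A u standing in for the discretized propagator equations (the actual right-hand sides of (A1)–(A10) are not reproduced here); delta-function spike sources are added directly to the solution as jumps, as described above.

```python
import numpy as np

def backward_euler(A, u0, T, dt, kicks=None):
    """Fully implicit (backward) Euler for the stiff linear system
    du/dt = A u: solve (I - dt A) u_{n+1} = u_n at each step. Impulsive
    sources, like the spike terms in Eq. (4.51), are applied as direct
    jumps at their firing steps. `kicks` maps step index -> jump vector."""
    A = np.asarray(A, dtype=float)
    M = np.eye(len(u0)) - dt * A           # formed once, reused every step
    u = np.asarray(u0, dtype=float).copy()
    out = [u.copy()]
    for step in range(int(T / dt)):
        if kicks and step in kicks:
            u = u + kicks[step]            # delta-function source as a jump
        u = np.linalg.solve(M, u)          # implicit update: stable for stiff A
        out.append(u.copy())
    return np.array(out)
```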
References
[1] Abbott LF and van Vreeswijk C, Asynchronous states in networks of pulse-coupled oscillators, Phys. Rev. E 48, 1483 (1993).
[2] Amari S-I, Dynamics of pattern formation in lateral-inhibition type neural fields, Biol. Cybern. 27, 77 (1977).
[3] Brunel N and Hakim V, Fast global oscillations in networks of integrate-and-fire neurons with low firing rates, Neural Comput. 11, 1621 (1999).
[4] Cohen MA and Grossberg S, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Syst. Man Cybern. SMC-13, 815 (1983).
[5] Fourcaud N and Brunel N, Dynamics of the firing probability of noisy integrate-and-fire neurons, Neural Comput. 14, 2057 (2002).
[6] Hopfield JJ, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA 79, 2554 (1982).
[7] Wilson HR and Cowan JD, Excitatory and inhibitory interactions in localized populations of model neurons, Biophys. J. 12, 1 (1972).
[8] Mirollo RE and Strogatz SH, Synchronization of pulse-coupled biological oscillators, SIAM J. Appl. Math. 50, 1645 (1990).
[9] Nykamp DQ and Tranchina D, A population density approach that facilitates large-scale modeling of neural networks: Extension to slow inhibitory synapses, Neural Comput. 13, 511 (2001).
[10] Treves A, Mean-field analysis of neuronal spike dynamics, Netw. Comput. Neural Syst. 4, 259 (1993).
[11] Ermentrout B, Type I membranes, phase resetting curves, and synchrony, Neural Comput. 8, 979 (1996).
[12] Jones SR and Kopell N, Local network parameters can affect inter-network phase lags in central pattern generators, J. Math. Biol. 52, 115 (2006).
[13] Maran SK and Canavier CC, Using phase resetting to predict 1:1 and 2:2 locking in two neuron networks in which firing order is not always preserved, J. Comput. Neurosci. 24, 37 (2008).
[14] Hildebrand EJ, Buice MA, and Chow CC, Kinetic Theory of Coupled Oscillators, Phys. Rev. Lett. 98, 054101 (2007).
[15] Buice MA and Chow CC, Correlations, fluctuations, and stability of a finite-size network of coupled oscillators, Phys. Rev. E 76, 031118 (2007).
[16] Buice MA and Chow CC, Dynamic finite size effects in spiking neural networks, PLoS Comput. Biol. 9, e1002872 (2013).
[17] Dahmen D, Bos H, and Helias M, Correlated Fluctuations in Strongly Coupled Binary Networks Beyond Equilibrium, Phys. Rev. X 6, 031024 (2016).
[18] Dumont G, Payeur A, and Longtin A, A stochastic-field description of finite-size spiking neural networks, PLoS Comput. Biol. 13, e1005691 (2017).
[19] Helias M, Tetzlaff T, and Diesmann M, Echoes in correlated neural systems, New J. Phys. 15, 023002 (2013).
[20] Lang E and Stannat W, Finite-size effects on traveling wave solutions to neural field equations, J. Math. Neurosci. 7, 5 (2017).
[21] Touboul JD and Ermentrout GB, Finite-size and correlation-induced effects in mean-field dynamics, J. Comput. Neurosci. 31, 453 (2011).
[22] Desai RC and Zwanzig R, Statistical mechanics of a nonlinear stochastic model, J. Stat. Phys. 19, 1 (1978).
[23] Ben-Yishai R, Bar-Or RL, and Sompolinsky H, Theory of orientation tuning in visual cortex, Proc. Natl. Acad. Sci. USA 92, 3844 (1995).
[24] Zhang K, Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory, J. Neurosci. 16, 2112 (1996).
[25] Ichimaru S, Basic Principles of Plasma Physics: A Statistical Approach (W. A. Benjamin, Amsterdam, 1973).
[26] Nicholson DR, Introduction to Plasma Theory (Cambridge University Press, Cambridge, 1984).
[28] Buice M and Chow C, Generalized activity equations for spiking neural network dynamics, Front. Comput. Neurosci. 7, 162 (2013).
[29] Buice MA and Chow CC, Beyond mean field theory: Statistical field theory for neural networks, J. Stat. Mech.: Theory Exp. (2013) P03003.
[30] Sompolinsky H, Crisanti A, and Sommers HJ, Chaos in Random Neural Networks, Phys. Rev. Lett. 61, 259 (1988).
[31] Ostojic S, Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons, Nat. Neurosci. 17, 594 (2014).
[32] van Vreeswijk C and Sompolinsky H, Chaos in neuronal networks with balanced excitatory and inhibitory activity, Science 274, 1724 (1996).