Abstract
The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. The cascade-based models are overall the most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, however, when input fluctuations are not too strong and fast, the best performing model is based on the spectral decomposition. The low-dimensional models also accurately reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the model variants. We have therefore made available, as open source software, implementations that allow the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation to be integrated numerically in efficient ways for arbitrary model parametrizations. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.
Author summary
Characterizing the dynamics of biophysically modeled, large neuronal networks usually involves extensive numerical simulations. As an alternative to this expensive procedure, we propose efficient models that describe the network activity in terms of a few ordinary differential equations. These systems are simple to solve and allow for convenient investigations of asynchronous, oscillatory or chaotic network states because linear stability analyses and powerful related methods are readily applicable. We build upon two lines of research on which substantial efforts have been exerted in the last two decades: (i) the development of single neuron models of reduced complexity that can accurately reproduce a large repertoire of observed neuronal behavior, and (ii) different approaches to approximate the Fokker-Planck equation that represents the collective dynamics of large neuronal networks. We combine these advances and extend recent approximation methods of the latter kind to obtain spike rate models that reproduce the macroscopic dynamics of the underlying neuronal network surprisingly well. At the same time, the microscopic properties are retained through the single neuron model parameters. To enable a fast adoption we have released an efficient Python implementation as open source software under a free license.
Introduction
There is prominent evidence that information in the brain, about a particular stimulus for example, is contained in the collective neuronal spiking activity averaged over populations of neurons with similar properties (population spike rate code) [1, 2]. Although these populations can comprise a large number of neurons [3], they often exhibit low-dimensional collective spiking dynamics [4] that can be measured using neural mass signals such as the local field potential or electroencephalography.
The behavior of cortical networks at that level is often studied computationally by employing simulations of multiple (realistically large or subsampled) populations of synaptically coupled individual spiking model neurons. A popular choice of single cell description for this purpose is the class of two-variable integrate-and-fire models [5, 6], which describe the evolution of the fast (somatic) membrane voltage and an adaptation variable that represents a slowly-decaying potassium current. These models are computationally efficient and can be successfully calibrated using electrophysiological recordings of real cortical neurons and standard stimulation protocols [5, 7–10] to accurately reproduce their subthreshold and spiking activity. The choice of such (simple) neuron models, however, does not guarantee reasonable (short enough) simulation durations for a recurrent network, especially when large numbers of neurons and synaptic connections between them are considered.
A fast and mathematically tractable alternative to simulations of large networks is provided by population activity models in terms of low-dimensional ordinary differential equations (i.e., models that consist of only a few variables), which typically describe the evolution of the spike rate. These reduced models can be rapidly solved and allow for convenient analyses of the dynamical network states using well-known methods that are simple to implement. A popular example is given by the Wilson-Cowan equations [11], which have also been extended to account for (slow) neuronal adaptation [12] and short-term synaptic depression [13]. Models of this type have been successfully applied to qualitatively characterize the possible dynamical states of coupled neuronal populations using phase space analyses [11–13], yet a direct link to more biophysically described networks of (calibrated) spiking neurons in terms of model parameters is missing.
Recently, derived population activity models have been proposed that bridge the gap between single neuron properties and mesoscopic network dynamics. These models are described by integral equations [14, 15] or partial differential equations [16, 17].
Here we derive simple models in terms of low-dimensional ordinary differential equations (ODEs) for the spike rate dynamics of sparsely coupled adaptive nonlinear integrate-and-fire neurons that are exposed to noisy synaptic input. The derivations are based on a Fokker-Planck equation that describes the neuronal population activity in the mean-field limit of large networks. We develop reduced models using recent methodological advances on two different approaches: the first is based on a spectral decomposition of the Fokker-Planck operator under two different slowness assumptions [18–20]. In the second approach we consider a cascade of linear temporal filters and a nonlinear function which are determined from the Fokker-Planck equation and semi-analytically approximated, building upon [21]. Both approaches are extended for an adaptation current, a nonlinear spike generating current and recurrent coupling with distributed synaptic delays.
We evaluate the developed low-dimensional spike rate models quantitatively in terms of reproduction accuracy in a systematic manner over a wide range of biologically plausible parameter values. In addition, we provide numerical implementations for the different reduction methods as well as the Fokker-Planck equation under a free license as an open source project.
For the derived models in this contribution we use the adaptive exponential integrate-and-fire (aEIF) model [5] to describe individual neurons, which is similar to the model proposed by Izhikevich [6] but includes biophysically meaningful parameters and a refined description of spike initiation. However, the presented derivations are equally applicable when using the Izhikevich model instead (requiring only a small number of simple substitutions in the code).
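To illustrate how small that substitution is, the following minimal sketch contrasts the subthreshold membrane-voltage right-hand sides of the two models; the aEIF parameter values follow Table 1, whereas the Izhikevich coefficient k is a placeholder that would have to be fitted to the neuron type of interest.

```python
import numpy as np

# Sketch: subthreshold membrane-voltage derivatives of the two neuron models.
# Units: mV, ms, pA, nS, pF (aEIF values as in Table 1; k is a placeholder).
def dV_aeif(V, w, I_syn, C=200.0, gL=10.0, EL=-65.0, DeltaT=1.5, VT=-50.0):
    """aEIF: leak + exponential spike-generating current - adaptation + input."""
    return (-gL*(V - EL) + gL*DeltaT*np.exp((V - VT)/DeltaT) - w + I_syn) / C

def dV_izhikevich(V, w, I_syn, k, C=200.0, v_r=-65.0, v_t=-50.0):
    """Izhikevich (dimensional form): quadratic spike-generating nonlinearity."""
    return (k*(V - v_r)*(V - v_t) - w + I_syn) / C
```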
Through their parameters the derived models retain a direct, quantitative link to the underlying spiking model neurons, and they are described in a well-established, convenient form (ODEs) that can be rapidly solved and analyzed. Therefore, these models are well suited (i) for mathematical analyses of dynamical states at the population level, e.g., linear stability analyses of attractors, and (ii) for application in multi-population brain network models. Apart from a specific network setting, the derived models are also appropriate as a spike rate description of individual neurons under noisy input conditions.
The structure of this article involves mildly redundant model specifications, which allows readers who are not interested in the methodological foundation to directly read the self-contained Sect. Results.
Results
Model reduction
The quantity of our interest is the population-averaged number of spikes emitted by a large homogeneous network of N sparsely coupled aEIF model neurons per small time interval, i.e., the spike rate rN(t). The state of neuron i at time t is described by the membrane voltage Vi(t) and adaptation current wi(t), which evolve piecewise continuously in response to the overall synaptic current Isyn,i = Iext,i(t) + Irec,i(t). This input current consists of a fluctuating network-external drive Iext,i = C[μext(t) + σext(t)ξext,i(t)] with membrane capacitance C, time-varying moments μext(t) and σ²ext(t), and unit Gaussian white noise process ξext,i, as well as recurrent input Irec,i. The latter causes delayed postsynaptic potentials (i.e., deflections of Vi) of small amplitude J triggered by the spikes of K presynaptic neurons (see Sect. Methods for details).
Here we present two approaches of how the spike rate dynamics of the large, stochastic delay-differential equation system for the 2N states (Vi, wi) can be described by simple models in terms of low-dimensional ODEs. Both approaches (i) take into account adaptation current dynamics that are sufficiently slow, which allows the individual adaptation current wi to be replaced by its population average 〈w〉, governed by
d〈w〉/dt = [a (〈V〉∞ − Ew) − 〈w〉] / τw + b r(t),    (1)
where a, Ew, b, τw are the adaptation current model parameters (subthreshold conductance, reversal potential, spike-triggered increment, time constant, respectively), 〈V〉∞ is the steady-state membrane voltage averaged across the population (which can vary over time, see below), and r is the spike rate of the respective low-dimensional model. Furthermore, both approaches (ii) are based on the observation that the collective dynamics of a large, sparsely coupled (and noise driven) network of integrate-and-fire type neurons can be well described by a Fokker-Planck equation. In this intermediate Fokker-Planck (FP) model the overall synaptic input is approximated by a mean part with additive white Gaussian fluctuations, Isyn,i/C ≈ μsyn(t, rd) + σsyn(t, rd)ξi(t), that are uncorrelated between neurons. The moments of the overall synaptic input,
μsyn(t, rd) = μext(t) + J K rd(t),   σ²syn(t, rd) = σ²ext(t) + J² K rd(t),    (2)
depend on time via the moments of the external input and, due to recurrent coupling, on the delayed spike rate rd. The latter is governed by
drd/dt = (r − rd) / τd,    (3)
which corresponds to individual propagation delays drawn from an exponentially distributed random variable with mean τd. The FP model involves solving a partial differential equation (PDE) to obtain the time-varying membrane voltage distribution p(V, t) and the spike rate r(t).
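As a minimal illustration of this intermediate layer, the sketch below advances the delayed spike rate of Eq (3) with a forward-Euler step and evaluates the synaptic moments of Eq (2); it assumes consistent units throughout and is not the paper's actual implementation.

```python
def synaptic_moments(mu_ext, sigma2_ext, r_d, J, K):
    """Overall synaptic input moments, cf. Eq (2)."""
    mu_syn = mu_ext + J * K * r_d
    sigma2_syn = sigma2_ext + J**2 * K * r_d
    return mu_syn, sigma2_syn

def step_delayed_rate(r_d, r, tau_d, dt):
    """Forward-Euler step of the delayed spike rate, cf. Eq (3)."""
    return r_d + dt * (r - r_d) / tau_d
```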
The first reduction approach is based on the spectral decomposition of the Fokker-Planck operator and leads to the following two low-dimensional models: the “basic” model variant (spec1) is given by a complex-valued differential equation describing the spike rate evolution in its real part,
(4)
where λ1(μtot, σtot) is the dominant eigenvalue of the Fokker-Planck operator and r∞(μtot, σtot) is the steady-state spike rate. Its parameters λ1, r∞, and 〈V〉∞ (cf. Eq (1)) depend on the total input moments given by μtot(t) = μsyn − 〈w〉/C and σ²tot(t) = σ²syn, which closes the model (Eqs (1)–(4)). The other, “advanced” spectral model variant (spec2) is given by a real-valued second order differential equation for the spike rate,
(5)
where the dots denote time derivatives. Its parameters β2, β1, β0, βc, r∞ and 〈V〉∞ depend on the total input moments (μtot, ) as follows: the latter two parameters explicitly as in the basic model above, the former four indirectly via the first two dominant eigenvalues λ1, λ2 and via additional quantities obtained from the (stationary and the first two nonstationary) eigenfunctions of and its adjoint . Furthermore, the parameter βc depends explicitly on the population-averaged adaptation current 〈w〉, the delayed spike rate rd, and on the first and second order time derivatives of the external moments μext and .
The second approach is based on a Linear-Nonlinear (LN) cascade, in which the population spike rate is generated by applying a linear temporal filter separately to the time-varying mean and standard deviation of the overall synaptic input, μsyn and σsyn, followed by a common nonlinear function. These three components (two linear filters and a nonlinearity) are extracted from the Fokker-Planck equation. Approximating the linear filters using exponentials and damped oscillating functions yields two model variants: in the basic “exponential” (LNexp) model the filtered mean μf and standard deviation σf of the overall synaptic input are given by
dμf/dt = (μsyn − μf) / τμ,   dσf/dt = (σsyn − σf) / τσ,    (6)
where the time constants τμ(μeff, σeff), τσ(μeff, σeff) depend on the effective (filtered) input mean μeff(t) = μf − 〈w〉/C and standard deviation σeff(t) = σf. The “damped oscillator” (LNdos) model variant, on the other hand, describes the filtered input moments by
(7)
(8)
where the time constants τ(μtot, σtot), τσ(μtot, σtot) and the angular frequency ω(μtot, σtot) depend on the total input moments defined above. In both LN model variants the spike rate is obtained by the nonlinear transformation of the effective input moments through the steady-state spike rate,
r(t) = r∞(μeff(t), σeff(t)),    (9)
and the steady-state mean membrane voltage 〈V〉∞ (cf. Eq (1)) is also evaluated at (μeff, σeff).
These four models (spec1, spec2, LNexp, LNdos) from both reduction approaches involve a number of parameters that depend on the strengths of synaptic input and adaptation current only via the total or effective input moments. We refer to these parameters as quantities below to distinguish them from fixed (independent) parameters. The computational complexity when numerically solving the models forward in time (for different parametrizations) can be greatly reduced by precomputing those quantities for a range of values for the total/effective input moments and using look-up tables during time integration. Changing any parameter value of the external input, the recurrent coupling or the adaptation current does not require renewed precomputations, enabling rapid explorations of parameter space and efficient (linear) stability analyses of network states.
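The following sketch illustrates the look-up idea for the LNexp variant, assuming the exponential-filter form of Eq (6) and forward-Euler time stepping; the grids and table values are random placeholders that would in practice be produced by the precomputation step, and the σ filter is omitted for brevity.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder grids and tables (would come from the precomputation step).
mu_grid = np.linspace(-1.0, 4.0, 201)        # mean input [mV/ms]
sigma_grid = np.linspace(0.5, 5.0, 46)       # input standard deviation [mV/sqrt(ms)]
r_inf_tab = np.random.rand(201, 46)          # steady-state spike rate r_inf
tau_mu_tab = 1.0 + np.random.rand(201, 46)   # filter time constant tau_mu [ms]
V_inf_tab = -60.0 + np.random.rand(201, 46)  # steady-state mean voltage <V>_inf [mV]

# Build interpolating look-ups once; evaluate them cheaply during time integration.
r_inf = RegularGridInterpolator((mu_grid, sigma_grid), r_inf_tab)
tau_mu = RegularGridInterpolator((mu_grid, sigma_grid), tau_mu_tab)
V_inf = RegularGridInterpolator((mu_grid, sigma_grid), V_inf_tab)

def lnexp_euler_step(mu_f, w_mean, mu_syn, sigma_eff, dt,
                     C=200.0, a=4.0, E_w=-80.0, b=40.0, tau_w=200.0):
    """One forward-Euler step of the LNexp model (sketch of Eqs (1), (6), (9))."""
    mu_eff = mu_f - w_mean / C                        # effective mean input
    pt = np.array([[np.clip(mu_eff, mu_grid[0], mu_grid[-1]),
                    np.clip(sigma_eff, sigma_grid[0], sigma_grid[-1])]])
    r = r_inf(pt).item()                              # Eq (9): steady-state nonlinearity
    mu_f += dt * (mu_syn - mu_f) / tau_mu(pt).item()  # Eq (6): exponential filter
    w_mean += dt * ((a * (V_inf(pt).item() - E_w) - w_mean) / tau_w + b * r)  # Eq (1)
    return mu_f, w_mean, r
```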
The full specification of the “ground truth” system (network of aEIF neurons), the derivations of the intermediate description (FP model) and of the low-dimensional spike rate models, together with concrete numerical implementations, are provided in Sect. Methods (which is complemented by the supporting material S1 Text). In Fig 1 we visualize the outputs of the different models using an example excitatory aEIF network exposed to external input with varying mean μext(t) and standard deviation σext(t).
Performance for variations of the mean input
Here, and in the subsequent two sections, we assess the accuracy of the four low-dimensional models in reproducing the spike rate dynamics of the underlying aEIF population. The intermediate FP model is included for reference. The derived models generate population activity in response to the overall synaptic input moments μsyn and σ²syn. These depend on time via the external moments μext(t) and σ²ext(t), and the delayed spike rate rd(t). Therefore, it is useful to first consider an uncoupled population and suitable variations of external input moments that effectively mimic a range of biologically plausible presynaptic spike rate dynamics. This allows us to systematically compare the reproduction performance of the different models over a manageable parameter space (without K, J, τd), yet it provides useful information on the accuracy for recurrent networks.
For many network settings the dominant effect of synaptic coupling is on the mean input (cf. Eq (2)). Therefore, we consider first in detail time-varying mean but constant variance of the input. Specifically, to account for a wide range of oscillation frequencies for presynaptic spike rates, μext is described by an Ornstein-Uhlenbeck (OU) process
dμext/dt = (μ̄ − μext) / τou,μ + √(2 ϑμ² / τou,μ) ξ(t),    (10)
where τou,μ denotes the correlation time, μ̄ and ϑμ are the mean and standard deviation of the stationary normal distribution, i.e., μext ∼ 𝒩(μ̄, ϑμ²) for large times, and ξ is a unit Gaussian white noise process. Sample time series generated from the OU process are filtered using a Gaussian kernel with a small standard deviation σt to obtain sufficiently differentiable time series (due to the requirements of the spec2 model and the LNdos model). The filtered realization is then used for all models to allow for a quantitative comparison of the different spike rate responses to the same input. The value of σt we use in this study effectively removes very large oscillation frequencies which are rarely observed, while lower frequencies [22] are passed.
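A compact sketch of this input generation is given below, assuming arbitrary example parameter values; it uses the exact one-step update rule for the OU process and scipy's Gaussian filter for the smoothing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def filtered_ou(T=5000.0, dt=0.05, tau=50.0, mean=1.5, theta=0.5, sigma_t=1.0, seed=0):
    """One OU realization (Eq (10)) smoothed with a Gaussian kernel of width sigma_t.
    All times in ms; tau is the correlation time, mean/theta the stationary moments."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0] = mean
    rho = np.exp(-dt / tau)                  # exact one-step OU update factor
    noise_scale = theta * np.sqrt(1.0 - rho**2)
    for k in range(1, n):
        x[k] = mean + rho * (x[k-1] - mean) + noise_scale * rng.standard_normal()
    return gaussian_filter1d(x, sigma=sigma_t / dt)  # kernel width given in time steps
```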
The parameter space we explore covers large and small correlation times τou,μ, strong and weak input mean μ̄ and standard deviation σext, and for each of these combinations we consider an interval from small to large variation magnitudes ϑμ. The values of τou,μ and ϑμ determine how rapidly and intensely μext(t) fluctuates.
We apply two performance measures, as in [21]. One is given by Pearson’s correlation coefficient,
ρ(rN, r) = Σk (rN(tk) − r̄N)(r(tk) − r̄) / √( Σk (rN(tk) − r̄N)² Σk (r(tk) − r̄)² ),    (11)
between the (discretely given) spike rates of the aEIF population and each derived model, evaluated at times tk, k = 1, …, M, with time averages r̄N and r̄ over a time interval of length tM − t1. For comparison, we also include the correlation coefficient between the aEIF population spike rate and the time-varying mean input, ρ(rN, μext). In addition, to assess absolute differences we calculate the root mean square (RMS) distance,
dRMS(rN, r) = √( (1/M) Σk (rN(tk) − r(tk))² ),    (12)
where M denotes the number of elements of the respective time series (rN, r).
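For reference, a direct NumPy transcription of the two measures (Eqs (11) and (12)) could look as follows.

```python
import numpy as np

def pearson_rho(r_N, r):
    """Pearson correlation coefficient between two spike rate time series, Eq (11)."""
    r_N, r = np.asarray(r_N, float), np.asarray(r, float)
    dN, d = r_N - r_N.mean(), r - r.mean()
    return np.sum(dN * d) / np.sqrt(np.sum(dN**2) * np.sum(d**2))

def rms_distance(r_N, r):
    """Root mean square distance between two spike rate time series, Eq (12)."""
    return np.sqrt(np.mean((np.asarray(r_N, float) - np.asarray(r, float))**2))
```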
We find that three of the four low-dimensional spike rate models (spec2, LNexp, LNdos) very well reproduce the spike rate rN of the aEIF neurons: for the LNexp model ρ > 0.95 and for the spec2 and LNdos models ρ ≳ 0.8 (each) over the explored parameter space, see Fig 2. Only the basic spectral model (spec1) is substantially less accurate. Among the best models, the simplest (LNexp) overall outperforms spec2 and LNdos, in particular for fast and strong mean input variations. However, in the strongly mean-driven regime the best performing model is spec2.
We observe that the performance of any of the spike rate models decreases (with model-specific slope) with (i) increasing variation strength ϑμ larger than a certain (small) value, and with (ii) smaller τou,μ, i.e., faster changes of μext. For small values of ϑμ fluctuations of rN, which are caused by the finite aEIF population size N and do not depend on the fluctuations of μext, deteriorate the performance measured by ρ (see also [21], p.13 right). This explains why ρ does not increase as ϑμ decreases (towards zero) for any of the models. Naturally, the FP model is by far the most accurate spike rate description in terms of both measures, correlation coefficient ρ and RMS distance. This is not surprising because the four low-dimensional models are derived from that (infinite-dimensional) representation. Thus, the performance of the FP system defines an upper bound on the correlation coefficient ρ and a lower bound on the RMS distance for the low-dimensional models.
In detail: for moderately fast changing mean input (large τou,μ) the three models spec2, LNexp and LNdos exhibit excellent reproduction performance with ρ > 0.95, and spec1 shows correlation coefficients of at least ρ = 0.9 (Fig 2A), which is substantially better than ρ(rN, μext). The small differences between the three top models can be better assessed from the RMS distance measure. For large input variance the two LN models perform best (cf. Fig 2A, top, and for an example, 2C). For weak input variance and large mean (small σext, large μ̄) the spec2 model outperforms the LN models, unless the variation magnitude ϑμ is very large. For small mean μ̄, where transient activity is interleaved with periods of quiescence, the LNexp model performs best, except for weak variations ϑμ, where LNdos is slightly better (see Fig 2A, bottom).
Stronger differences in performance emerge when considering faster changes of the mean input μext(t) (i.e., for small τou,μ), see Fig 2B, and for examples, Fig 2C. The spec1 model again performs worst with ρ values even below the input/output correlation baseline ρ(rN, μext) for large mean input (cf. Fig 2B, left). The spec1 spike rate typically decays too slowly (cf. Fig 2C). The three better performing models differ as follows: for large input variance and mean (large σext and μ̄), where the spike rate response to the input is rather fast (cf. increased ρ(rN, μext)), the performance of all three models in terms of ρ is very high, but the RMS distance measure indicates that LNexp is the most accurate model (cf. Fig 2B, top). For weak mean input LNexp is once again the top model, while LNdos and, especially noticeably, spec2 show a performance decline (see example in Fig 2C). For weak input variance (Fig 2B, bottom), where significant (oscillatory) excursions of the spike rates in response to changes in the mean input can be observed (see also Fig 1), we obtain the following benchmark contrast: for large mean drive the spec2 model performs best, except for large variation amplitudes ϑμ, at which LNexp is more accurate. Smaller mean input on the other hand corresponds to the most sensitive regime where periods of quiescence alternate with rapidly increasing and decaying spike rates. The LNexp model shows the most robust and accurate spike rate reproduction in this setting, while LNdos and spec2 each exhibit decreased correlation and larger RMS distances, spec2 even for moderate input variation intensities ϑμ. The slowness approximation underlying the spec2 model likely induces an error due to the fast external input changes in comparison with the rather slow intrinsic time scale set by the dominant eigenvalue, τou,μ vs. 1/|Re{λ1}| ≈ 15 ms (cf. visualization of the spectrum in Sect. Spectral models). Note that for these weak inputs the distribution of the spike rate is rather asymmetric (cf. Fig 2B). Interestingly, the LNdos model performs worse than LNexp for large mean input variations (i.e., large ϑμ) in general, and only slightly better for small input variance and mean input variations that are not too large and fast.
We would like to note that decreasing the Gaussian filter width σt to smaller values, e.g., fractions of a millisecond, can lead to a strong performance decline for the spec2 model because of its explicit dependence on first and second order time derivatives of the mean input.
Furthermore, we show how the adaptation parameters affect the reproduction performance of the different models in Fig 3. The adaptation time constant τw and spike-triggered adaptation increment b are varied simultaneously (keeping their product constant) such that the average spike rate and adaptation current, and thus the spiking regime, remain comparable for all parametrizations. As expected, the accuracy of the derived models decreases for faster adaptation current dynamics, due to the adiabatic approximation that relies on sufficiently slow adaptation (cf. Sect. Methods). Interestingly however, the performance of all reduced models (except spec1) declines only slightly as the adaptation time constant decreases to the value of the membrane time constant (which means the assumption of separated time scales underlying the adiabatic approximation is clearly violated). This kind of robustness is particularly pronounced for input with large baseline mean and small noise amplitude σext, cf. Fig 3B.
Performance for variations of the input variance
For perfectly balanced excitatory and inhibitory synaptic coupling the contribution of presynaptic activity to the mean input μsyn is zero by definition, but the input variance is always positively (linearly) affected by a presynaptic spike rate, even for a negative synaptic efficacy J (cf. Eq (2)). To assess the performance of the derived models in this scenario, but within the reference setting of an uncoupled population, we consider constant external mean drive μext and let the variance σ²ext evolve according to a filtered OU process (such as that used for the mean input μext in the previous section) with mean σ̄² and standard deviation ϑσ2 of the stationary normal distribution, correlation time τou,σ, and Gaussian filter standard deviation σt as before.
The results of two input parametrizations are shown in Fig 4. For large input mean μext and rapidly varying variance the spike rate response of the aEIF population is very well reproduced by the FP model and, to a large extent, by the spec2 model (cf. Fig 4A). This may be attributed to the fact that the latter model depends on the first two time derivatives of the input variance σ²ext. The LN models cannot well reproduce the rapid spike rate excursions in this setting, and the spec1 model performs worst, exhibiting time-lagged spike rate dynamics compared to rN(t), which leads to a very small value of the correlation coefficient ρ (below the input/output correlation baseline ρ(rN, σ²ext)). For smaller mean input μext and moderately fast varying variance (larger correlation time τou,σ) the fluctuating aEIF population spike rate is again nicely reproduced by the FP model, while the rate response of the spec2 model exhibits over-sensitive behavior to changes in the input variance, as indicated by the large RMS distance (see Fig 4B). This effect is even stronger for faster variations, i.e., smaller τou,σ (cf. supplementary visualization S1 Fig). The LN models perform better in this setting, and the spec1 model (again) performs worst in terms of correlation coefficient ρ due to its time-lagged spike rate response.
It should be noted that the lowest possible value of the input standard deviation, i.e., σext (plus a nonnegative contribution in case of recurrent input), cannot be chosen completely freely but must be sufficiently large for our parametrization. This is due to theoretical reasons (Fokker-Planck formalism) and practical reasons (numerics for the Fokker-Planck solution and for the calculation of the derived quantities, such as r∞).
Oscillations in a recurrent network
To demonstrate the applicability of the low-dimensional models for network analyses we consider a recurrently coupled population of aEIF neurons that produces self-sustained network oscillations through the interplay of strong excitatory feedback and spike-triggered adaptation or, alternatively, through delayed recurrent synaptic inhibition [16, 23]. For the current-based type of synaptic coupling considered here, and due to the lack of (synaptic) inhibition and refractoriness, the former oscillation type is quite sensitive to changes in input, adaptation and especially coupling parameters. For example, a small increase in coupling strength can lead to a dramatic (unphysiological) increase in oscillation amplitude because of strong recurrent excitation. Hence we consider a difficult setting here to evaluate the reduced spike rate models, in particular when the network operates close to a bifurcation.
In Fig 5A we present two example parametrizations from a region (in parameter space) that is characterized by stable oscillations. This means the network exhibits oscillatory spike rate dynamics for constant external input moments μext and σ²ext. The derived models reproduce the limit cycle behavior of the aEIF network surprisingly well, except for small frequency and amplitude deviations (FP, spec2, LNdos, LNexp) and a larger frequency mismatch (spec1), see Fig 5A, top. For weaker input moments and increased spike-triggered adaptation strength the network is closer to a Hopf bifurcation [16, 23]. It is, therefore, not surprising that the differences in oscillation period and amplitude are more prominent (cf. Fig 5A, bottom). The bifurcation point of the LNexp model is slightly shifted, as shown by the slowly damped oscillatory convergence to a fixed point. This suggests that the bifurcation parameter value of each of the derived models is not far from the true critical parameter value of the aEIF network but can quantitatively differ (slightly) in a model-dependent way.
The second type of oscillation is generated by delayed synaptic inhibition [22] and does not depend on the (neuronal) inhibition that is provided by an adaptation current. To demonstrate this independence the adaptation current was disabled (by setting the parameters a = b = 0) for the two respective examples shown in Fig 5B. Similarly to the previous oscillation type, the low-dimensional models (except spec1) reproduce the spike rate limit cycle of the aEIF network surprisingly well, in particular for weak external input (see Fig 5B, top). For larger external input and stronger inhibition with shorter delay the network operates close to a Hopf bifurcation, leading to larger differences in oscillation amplitude and frequency in a model-dependent way (Fig 5B, bottom). Note that the intermediate (Fokker-Planck) model reproduces the inhibition-based type of oscillation very well, which demonstrates the applicability of the underlying mean-field approximation. We would also like to note that enabling the adaptation current dynamics (only) leads to decreased average spike rates but does not affect the reproduction accuracy.
We would like to emphasize that the previous comprehensive evaluations for an uncoupled population provide deeper insight into the reproduction performance, also for a recurrent network, than the four examples shown here, as explained in Sect. Performance for variations of the mean input. For example, the (improved) reproduction performance for increased input variance in the uncoupled setting (cf. Fig 2) informs about the reproduction performance for networks of excitatory and inhibitory neurons that are roughly balanced, i.e., where the overall input mean is rather small compared to the input standard deviation.
Implementation and computational complexity
We have developed efficient implementations of the derived models using the Python programming language and by employing the library Numba for low-level machine acceleration [24]. These include: (i) the numerical integration of the Fokker-Planck model using an accurate finite volume scheme with implicit time discretization (cf. Sect. Methods), (ii) the parallelized precalculation of the quantities required by the low-dimensional spike rate models and (iii) the time integration of the latter models, as well as example scripts demonstrating (i)–(iii). The code is available as open source software under a free license at GitHub: https://github.com/neuromethods/fokker-planck-based-spike-rate-models
With regard to computational cost, summarizing the results of several aEIF network parametrizations, the time required to generate population activity time series with the low-dimensional spike rate models is usually several orders of magnitude smaller than for a numerical simulation of the original aEIF network and a few orders of magnitude smaller than for the numerical solution of the FP model. For example, considering a population of 50,000 coupled neurons with 2% connection probability, a single simulation run of 5 s and the same integration time step across the models, the computation times amounted to 1.1–3.6 s for the low-dimensional models (ordered from fast to slow: LNexp, spec1, LNdos, spec2), about 100 s for the FP model and roughly 1500 s for the aEIF network simulation on a dual-core laptop computer. The time difference to the network simulation substantially increases with the numbers of neurons and connections, and with the spiking activity within the network due to the extensive propagation of synaptic events. Note that the speedup becomes even more pronounced with increasing number of populations, where the runtimes of the FP model and the aEIF network simulation scale linearly, whereas the low-dimensional models show a sublinear runtime increase due to vectorization of the state variables representing the different populations.
The derived low-dimensional (ODE) spike rate models are very efficient to integrate given that the required input-dependent parameters are available as precalculated look-up quantities. For the grids used in this contribution, the precomputation time was 40 min. for the cascade (LNexp, LNdos) models and 120 min. for the spectral (spec1, spec2) models, both on a hexa-core desktop computer. The longer calculation time for the spectral models was due to the finer internal grid for the mean input (see S1 Text).
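The precomputation itself parallelizes naturally over the grid of input moments; a schematic is shown below (not the repository's actual code), with steady_state_quantities standing in for the Fokker-Planck based calculation at one grid point.

```python
import numpy as np
from multiprocessing import Pool

def steady_state_quantities(point):
    """Placeholder for the per-grid-point calculation of r_inf, <V>_inf, etc."""
    mu, sigma = point
    return mu, sigma, 0.0  # dummy values

if __name__ == "__main__":
    mu_grid = np.arange(-1.0, 4.0, 0.025)   # spacing of 0.025 mV/ms as in Table 1
    sigma_grid = np.arange(0.5, 5.0, 0.1)   # example spacing
    points = [(m, s) for m in mu_grid for s in sigma_grid]
    with Pool() as pool:                    # one worker per available core
        results = pool.map(steady_state_quantities, points)
```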
Note that while the time integration of the spec2 model is on the same order as for the other low-dimensional models its implementation complexity is larger because of the many quantities it depends on, cf. Eqs (63)–(66).
Discussion
In this contribution we have developed four low-dimensional models that approximate the spike rate dynamics of coupled aEIF neurons and retain all parameters of the underlying model neurons. These simple spike rate models were derived in two different ways from a Fokker-Planck PDE that describes the evolving membrane voltage distribution in the mean-field limit of large networks, and is complemented by an ODE for the population-averaged slow adaptation current. Two of the reduced spike rate models (spec1 and spec2) were obtained by a truncated spectral decomposition of the Fokker-Planck operator assuming vanishingly slow (for spec1) or moderately slow (for spec2) changes of the input moments. The other two reduced models (LNexp and LNdos) are described by a cascade of linear filters (one for the input mean and another for its standard deviation) and a nonlinearity which were derived from the Fokker-Planck equation, and subsequently the filters were semi-analytically approximated. Our approaches build upon [18–20] as well as [21], and extend those methods for adaptive nonlinear integrate-and-fire neurons that are sparsely coupled with distributed delays (cf. Sect. Methods).
We have compared the different spike rate representations for a range of biologically plausible input statistics and found that three of the reduced models (spec2, LNexp and LNdos) accurately reproduce the spiking activity of the underlying aEIF population while one model (spec1) shows the least accuracy. Among the best models, the simplest (LNexp) was the most robust and (somewhat surprisingly) overall outperformed spec2 and LNdos–especially in the sensitive regime of rapidly changing sub- and suprathreshold mean drive and in general for rapid and strong input variations. The LNexp model did not exhibit exaggerated deflections in that regime as compared to the other two models. This result is likely due to the importance of the quantitatively correct decay time of the filter for the mean input in the LNexp model, while the violations of the slowness assumptions for the spec2 and LNdos models seem more harmful in this regime. In the strongly mean-driven regime, however, the best performing model was spec2 for variations both in the mean drive (as long as those variations are not too strong and fast) and for variations of the input variance.
We have also demonstrated that the low-dimensional models well reproduce the dynamics of recurrently coupled aEIF populations in terms of asynchronous states (see Fig 1) and spike rate oscillations (cf. Fig 5), where mild deviations at critical (bifurcation) parameter values are expected due to the approximative nature of the model reduction.
The computational demands of the low-dimensional models are very modest in comparison to the aEIF network and also to the integration of the Fokker-Planck PDE, for which we have developed a novel finite volume discretization scheme. We would like to emphasize that any change of a parameter value for input, coupling or adaptation current does not require renewed precomputations. To facilitate the application of the presented models we have made available implementations that precompute all required quantities and numerically integrate the derived low-dimensional spike rate models as well as the Fokker-Planck equation, together with example (Python) scripts, as open source software.
Since the derived models are formulated in terms of simple ODEs, they allow us to conveniently perform linear stability analyses, e.g., based on the eigenvalues of the Jacobian matrix of the respective vector field. In this way network states can be rapidly characterized by quantifying the bifurcation structure of the population dynamics, including regions of the parameter space where multiple fixed points and/or limit cycle attractors co-exist. For a characterization of stable network states by numerical continuation and an assessment of their controllability through neuromodulators using the LNexp model see [23] ch. 4.2 and [25]. Furthermore, the low-dimensional models are well suited to be employed in large neuronal networks of multiple populations for efficient simulations of population-averaged activity time series. Overall, the LNexp model seems a good candidate for that purpose considering its accuracy and robustness, as well as its computational and implementational simplicity.
Extensions
Heterogeneity
We considered a homogeneous population of neurons in the sense that the parameter values across model neurons are identical except those for synaptic input. Thereby we assume that neurons with similar dynamical properties can be grouped into populations [3]. Heterogeneity is incorporated by distributed synaptic delays, by sparse random coupling, and by fluctuating external inputs for each neuron. The (reduced) population models further allow for heterogeneous synaptic strengths that are sampled from a Gaussian distribution and can be included in a straightforward way [16, 26] (see also Sect. Methods). Distributed values for other parameters (of the isolated model neurons within the same population) are currently not supported.
Multiple populations
The presented mean-field network model can be easily adjusted for multiple populations. In this case we obtain a low-dimensional ODE for each population and the overall synaptic moments for population k become
μsyn,k(t) = μext,k(t) + Σl Jkl Kkl rd,kl(t),   σ²syn,k(t) = σ²ext,k(t) + Σl J²kl Kkl rd,kl(t),    (13)
where Jkl is the synaptic strength for the Kkl neurons from population l targeting neurons from population k and rd,kl is the delayed spike rate of population l affecting population k (cf. Eq (2)). For each pair of coupled populations we may consider identical or distributed delays (using distributions from the exponential family) as well as identical or distributed synaptic strengths (sampled from a Gaussian distribution).
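A vectorized evaluation of Eq (13), assuming the coupling strengths, in-degrees and delayed rates are arranged as n_pop × n_pop arrays, could look like this.

```python
import numpy as np

def multi_pop_moments(mu_ext, sigma2_ext, J, K, r_d):
    """Overall synaptic moments per population, Eq (13).
    mu_ext, sigma2_ext: external moments, shape (n_pop,).
    J, K, r_d: (n_pop, n_pop) arrays; element [k, l] refers to input from l to k."""
    mu_syn = mu_ext + np.sum(J * K * r_d, axis=1)
    sigma2_syn = sigma2_ext + np.sum(J**2 * K * r_d, axis=1)
    return mu_syn, sigma2_syn
```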
Synaptic coupling
Here we described synaptic interaction by delayed (delta) current pulses with delays sampled from an exponential distribution. This description leads to a fluctuating overall synaptic input current with white noise characteristics. Interestingly, for the mean-field dynamics this setting is very similar to considering exponentially decaying synaptic currents with a decay constant that matches that of the delay distribution, although the overall synaptic input current is a colored noise process in that case, see [27] and, for an intuitive explanation [28].
A conductance-based model of synaptic coupling can also be considered in principle [16, 29], which results in a multiplicative noise process for the overall synaptic input. This, however, would in general impede the beneficial concept of precalculated “look-up” quantities that are unaffected by the input and coupling parameters.
It should be noted that most current- or conductance-based models of synaptic coupling (including the one considered here) can produce unphysiologically large amounts of synaptic current in case of high presynaptic activity, unless the coupling parameters are carefully tuned. This problem can be solved, for example, by considering a (more realistic) model of synaptic coupling based on [30], from which activity-dependent coupling terms can be derived for the mean-field and reduced population models [23] ch. 4.2. Using that description ensures robust simulation of population activity time series without having to fine-tune the coupling parameter values, which is particularly useful for multi-population network models. In this contribution though we used for simplicity a basic synaptic coupling model that has frequently been applied in the mean-field literature.
Input noise process
The Gaussian stochastic process driving the individual neurons could also be substituted by colored noise, which would lead to a Fokker-Planck model with increased dimensionality [31]. However, this would require more complex and computationally expensive numerical schemes not only to solve that model but also for the different dimension reduction approaches.
Slow adaptation
To derive low-dimensional models of population activity we approximated the adaptation current by its population average, justified by its slow dynamics compared to the other time scales of the system. This approximation is equivalent to a first order moment closure method [17]. In case of a faster adaptation time scale the approximation may be improved by considering second and higher order moments [17, 32].
Population size
The mean-field models presented here can well reproduce the dynamics of population-averaged state variables (that is, spike rate, mean membrane voltage, and mean adaptation current) for large populations (N → ∞ in the derivation). Fluctuations of those average variables due to the finite size of neuronal populations, however, are not captured. Hence, it would be interesting to extend the mean-field models so as to reproduce these (so-called) finite size effects, for example, by incorporating an appropriate stochastic process [18] or using concepts from [15].
Cascade approach
For uncoupled EIF populations (without an adaptation current) and constant input standard deviation it has been shown that the LN cascade approximation performs well for physiological ranges of amplitude and time scale for mean input variations [21]. Our results for the cascade models are consistent with [21], but the performance is substantially improved for the sensitive low (baseline) input regime (LNexp and LNdos, also in absence of adaptation), and damped oscillatory behavior (including over- and undershoots) is accounted for by the LNdos model.
To achieve these improvements we semi-analytically fit the linear filters derived from the Fokker-Planck equation using exponential and damped oscillator functions considering a range of input frequencies. The approximation can be further improved by using more complex functions, such as a damped oscillator with two time scales. That, however, can lead to less robustness (i.e., undesired model behavior) for rapid and strong changes of the input moments (cf. Sect. Methods).
LN cascade models are frequently applied in neuroscience to describe population activity, and the model components are often determined from electrophysiological recordings using established techniques. The methodology presented here contributes to establishing quantitative links between networks of spiking neurons, a mesoscopic description of population activity and recordings at the population level.
Spectral approach
Here we provide a new numerical solver for the eigenvalue problem of the Fokker-Planck operator and its adjoint. It allows the full spectrum together with the associated eigenfunctions to be computed and is applicable to nonlinear integrate-and-fire models, extending [18, 19, 33].
Using that solver the spec2 model, which is based on two eigenvalues, can be further improved by interpolating its coefficients, Eqs (63)–(66), around the double eigenvalues at the spectrum’s real-to-complex transition. This interpolation would effectively smooth the quantities—e.g., preventing the jumps and kinks that are present in the visualization of Sect. Spectral models—and is expected to increase the spike rate reproduction accuracy (particularly for weak mean input) beyond what was reported in this contribution.
The spec2 model can also be extended to yield a third order ODE with everywhere smooth coefficients by considering an additional eigenvalue (cf. Sect. Remarks on the spectrum).
Moreover, the spec2 model, and more generally the whole spectral decomposition approach, can be extended to account for a refractory period in the presence of time-varying total input moments, e.g., by building upon previous attempts [18, 34, 35].
Furthermore, it could be beneficial to explicitly quantify the approximation error due to the slowness assumption that underlies the spec2 model by integration of the (original) spectral representation of the Fokker-Planck model.
Both reduced spectral models allow for a refined description of the mean adaptation current dynamics, cf. Eq (1), by replacing the steady-state mean membrane voltage 〈V〉∞ with the time-dependent mean 〈V〉, using that the membrane voltage distribution is available through the eigenfunctions of the Fokker-Planck operator.
The numerical eigenvalue solver can be extended in a straightforward way to yield quantities that are required by the original spectral representation of the Fokker-Planck model and by the corresponding stochastic equation for finite population size [18].
Alternative derived models
In addition to the work we build upon [18–21] (cf. Sect. Methods) there are a few other approaches to derive spike rate models from populations of spiking neurons. Some methods also result in an ODE system, taking into account (slow) neuronal adaptation [17, 26, 36–38] or disregarding it [39]. The settings differ from the work presented here in that (i) the intrinsic neuronal dynamics are adiabatically neglected [17, 26, 36, 37], (ii) only uncoupled populations [38] or all-to-all connected networks [17, 36, 39] are assumed in contrast to sparse connectivity, and (iii) (fixed) heterogeneous instead of fluctuating input is considered [39]. Notably, these previous methods yield rather qualitative agreements with the underlying spiking neuron population activity except for [39] where an excellent quantitative reproduction for (non-adaptive) quadratic integrate-and-fire oscillators with quenched input randomness is reported.
Other approaches yield mesoscopic representations of population activity in terms of model classes that are substantially less efficient to simulate and more complicated to analyze than low-dimensional ODEs [14–17, 40–42]. The spike rate dynamics in these models has been described (i) by a rather complex ODE system that depends on a stochastic jump process derived for integrate-and-fire neurons without adaptation [40], (ii) by PDEs for recurrently connected aEIF [16] or Izhikevich [17] neurons, (iii) by an integro-PDE with displacement for non-adaptive neurons [42] or (iv) by integral equations that represent the (mean) activity of coupled phenomenological spiking neurons without [41] and with adaptation [14, 15].
Furthermore, the stationary condition of a noise-driven population of adaptive EIF neurons [32, 43, 44] and the first order spike rate response to weak input modulations [43, 44] have been analyzed using the Fokker-Planck equation. Ref. [32] also considered a refined approximation of the (purely spike-triggered) adaptation current including higher order moments.
It may be interesting for future studies to explore ways to extend the presented methods and relax some of the underlying assumptions, in particular, considering (i) the diffusion approximation (via shot noise input, e.g., [45, 46]), (ii) the Poisson assumption (e.g., using the concept from [47] in combination with results from [48]) and (iii) (noise) correlations (see, e.g., [49]).
Methods
Here we present all models in detail—the aEIF network (ground truth), the mean-field FP system (intermediate model) and the low-dimensional models: spec1, spec2, LNexp, LNdos—including step-by-step derivations and essential information on the respective numerical solution methods. An implementation of these models using Python is made available at GitHub: https://github.com/neuromethods/fokker-planck-based-spike-rate-models
Network model
We consider a large (homogeneous) population of N synaptically coupled aEIF model neurons [5]. Specifically, for each neuron (i = 1, …, N), the dynamics of the membrane voltage Vi is described by
C dVi/dt = IL(Vi) + Iexp(Vi) − wi + Isyn,i(t),    (14)
where the capacitive current through the membrane with capacitance C equals the sum of three ionic currents and the synaptic current Isyn,i. The ionic currents consist of a linear leak current IL(Vi) = −gL(Vi − EL) with conductance gL and reversal potential EL, a nonlinear term Iexp(Vi) = gL ΔT exp((Vi − VT)/ΔT) that approximates the rapidly increasing Na+ current at spike initiation with threshold slope factor ΔT and effective threshold voltage VT, and the adaptation current wi which reflects a slowly deactivating K+ current. The adaptation current evolves according to
τw dwi/dt = a (Vi − Ew) − wi,    (15)
with adaptation time constant τw. Its strength depends on the subthreshold membrane voltage via conductance a. Ew denotes its reversal potential. When Vi increases beyond VT, it diverges to infinity in finite time due to the exponentially increasing current Iexp(Vi), which defines a spike. In practice, however, the spike is said to occur when Vi reaches a given value Vs—the spike voltage. The downswing of the spike is not explicitly modeled; instead, when Vi ≥ Vs, the membrane voltage Vi is instantaneously reset to a lower value Vr. At the same time, the adaptation current wi is incremented by a value of parameter b, which implements suprathreshold (spike-dependent) activation of the adaptation current.
Immediately after the reset, Vi and wi are clamped (i.e., remain constant) for a short refractory period Tref, and subsequently governed again by Eqs (14) and (15). At the end of the Methods section we describe how (optionally) a spike shape can be included in the aEIF model, together with the associated small changes for the models derived from it.
To complete the network model the synaptic current in Eq (14) needs to be specified: for each cell it is given by the sum of recurrent and external input, Isyn,i = Irec,i(t) + Iext,i(t). Recurrent synaptic input is received from K other neurons of the network, which are connected in a sparse (K ≪ N) and uniformly random way, and is modeled by
Irec,i(t) = C Σj Jij Σtj δ(t − tj − dij),    (16)
where δ denotes the Dirac delta function. Every spike by one of the K presynaptic neurons with indices j and spike times tj causes a postsynaptic membrane voltage jump of size Jij. The coupling strength is positive (negative) for excitation (inhibition) and of small magnitude. Here it is chosen to be constant, i.e., Jij = J. Each of these membrane voltage deflections occur after a time delay dij that takes into account (axonal and dendritic) spike propagation times and is sampled (independently) from a probability distribution pd. In this work we use exponentially distributed delays, i.e., pd(τ) = exp(−τ/τd)/τd (for τ ≥ 0) with mean delay τd.
The second type of synaptic input is a fluctuating current generated from network-external neurons,
Iext,i(t) = C [μext(t) + σext(t) ξext,i(t)],    (17)
with time-varying moments μext and σ²ext, and unit Gaussian white noise process ξext,i. The latter is uncorrelated with that of other neurons j ≠ i, i.e., 〈ξext,i(t)ξext,j(t + τ)〉 = δ(τ)δij, where 〈·〉 denotes expectation (w.r.t. the joint ensemble of noise realizations at times t and t + τ) and δij is the Kronecker delta. This external current, for example, accurately approximates the input generated from a large number of independent Poisson neurons that produce instantaneous postsynaptic potentials of small magnitude, cf. [48].
The spike rate rN of the network is defined as the population-averaged number of emitted spikes per time interval [t, t + ΔT],
rN(t) = (1 / (N ΔT)) ∫[t, t+ΔT] Σi Σti δ(s − ti) ds,    (18)
where, in practice, the interval size ΔT is chosen small enough to capture the dynamical structure and large enough to yield a reasonably smooth time evolution for a finite network, i.e., N < ∞.
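In a simulation, Eq (18) amounts to a normalized histogram of the pooled spike times; a minimal sketch (times in ms, so rates come out in kHz) is given below.

```python
import numpy as np

def population_rate(spike_times, N, t_end, delta_T=1.0):
    """Population spike rate estimate, Eq (18): spike count per bin / (N * Delta_T).
    spike_times: 1d array of all spike times (ms) pooled across the N neurons."""
    bins = np.arange(0.0, t_end + delta_T, delta_T)
    counts, _ = np.histogram(spike_times, bins=bins)
    return bins[:-1], counts / (N * delta_T)
```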
We chose values for the neuron model parameters to describe cortical pyramidal cells, which exhibit “regular spiking” behavior and spike frequency adaptation [7, 50, 51]. For the complete parameter specification see Table 1.
Table 1. Parameter values used throughout the study.
Name | Symbol | Value |
---|---|---|
Network model | ||
Number of neurons | N | 50,000 |
Membrane capacitance | C | 200 pF |
Leak conductance | gL | 10 nS |
Leak reversal potential | EL | −65 mV |
Threshold slope factor | ΔT | 1.5 mV |
Threshold voltage | VT | −50 mV |
Spike voltage | Vs | −40 mV |
Reset voltage | Vr | −70 mV |
Subthreshold adaptation conductance1 | a | 4 nS |
Spike-triggered adaptation increment1 | b | 40 pA |
Adaptation reversal potential | Ew | −80 mV |
Adaptation time constant | τw | 200 ms |
Refractory period2 | Tref | 0 ms |
Gaussian filter width for external input | σt | 1 ms |
Discretization time step | Δt | 0.05 ms |
Spike rate estimation bin width | ΔT | 1 ms |
Fokker-Planck model | ||
Membrane voltage lower bound | Vlb | −200 mV |
Finite-volume membrane voltage spacing | ΔV | 0.028 mV |
Discretization time step | Δt | 0.05 ms |
Low-dimensional models | ||
Discretization time step | Δt | 0.01 ms |
Membrane voltage spacing3 | ΔV | 0.01 mV |
Spacing of mean input3 | Δμ | 0.025 mV/ms |
Spacing of input standard deviation3 | Δσ |
1If not specified otherwise.
2A nonzero refractory period is not supported by the spec2 model.
3Parameters for precalculation of model quantities (before simulation).
All network simulations were performed using the Python software BRIAN2 [52, 53] with C++ code generation enabled for efficiency. The aEIF model Eqs (14) and (15) were discretized using the Euler-Maruyama method with equidistant time step Δt and initialized with wi(0) = 0 and Vi(0) that is (independently) sampled from a Gaussian initial distribution p0(V) with mean Vr − δV and standard deviation δV/2 where δV = VT − Vr. Note that the models derived in the following Sects. do not depend on this particular initial density shape but allow for an arbitrary (density) function p0.
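For orientation, a stripped-down NumPy version of this integration scheme is sketched below (not the BRIAN2 code used in the paper): an uncoupled population driven by external white noise only, with the reset and spike-triggered adaptation of Eqs (14) and (15). The input moments mu_ext and sigma_ext are arbitrary example values, and the refractory period is zero, as in Table 1.

```python
import numpy as np

def simulate_aeif_uncoupled(N=1000, T=1000.0, dt=0.05, mu_ext=2.0, sigma_ext=1.5,
                            C=200.0, gL=10.0, EL=-65.0, DeltaT=1.5, VT=-50.0,
                            Vs=-40.0, Vr=-70.0, a=4.0, b=40.0, Ew=-80.0,
                            tau_w=200.0, seed=0):
    """Euler-Maruyama integration of N uncoupled aEIF neurons (mV, ms, pA, nS, pF)."""
    rng = np.random.default_rng(seed)
    dV0 = VT - Vr                                   # width used for the initial density
    V = rng.normal(Vr - dV0, dV0 / 2.0, N)          # V_i(0) as described in the text
    w = np.zeros(N)                                 # w_i(0) = 0
    sqrt_dt, spike_times = np.sqrt(dt), []
    for k in range(int(T / dt)):
        exp_term = gL * DeltaT * np.exp((V - VT) / DeltaT)
        dV = ((-gL*(V - EL) + exp_term - w) / C + mu_ext) * dt \
             + sigma_ext * sqrt_dt * rng.standard_normal(N)
        dw = dt * (a*(V - Ew) - w) / tau_w
        V, w = V + dV, w + dw
        spiked = V >= Vs                            # spike condition
        if spiked.any():
            spike_times.extend([k * dt] * int(spiked.sum()))
            V[spiked] = Vr                          # reset
            w[spiked] += b                          # spike-triggered adaptation
    return np.array(spike_times)
```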
Fokker-Planck system
Adiabatic approximation
The time scales of the (slow) K+ channel kinetics which are effectively described by the adaptation current wi, cf. Eq (15), are typically much larger than those of the faster membrane voltage dynamics modeled by Eq (14), i.e., τw ≫ C/gL [54–57]. This observation justifies replacing the individual adaptation current wi in Eq (14) by its population average 〈w〉N in order to reduce computational demands and enable further analysis. The mean adaptation current is then governed by [16, 26, 48, 58]
d〈w〉N/dt = [a (〈V〉N − Ew) − 〈w〉N] / τw + b rN(t),    (19)
where 〈V〉N denotes the time-varying population average of the membrane voltage of non-refractory neurons.
The dynamics of the population-averaged adaptation current reflecting the non-refractory proportion of neurons are well captured by Eq (19) as long as Tref is small compared to τw. In this (physiologically plausible) case 〈w〉N from Eq (19) can be considered equal to the average adaptation current over the refractory proportion of neurons [16, 48].
Mean field limit
For large networks (N → ∞) the recurrent input can be approximated by a mean part with additive fluctuations, Irec,i(t)/C ≈ J K rd(t) + J √(K rd(t)) ξrec,i(t), with delayed spike rate
rd(t) = (rN ∗ pd)(t) = ∫0^∞ pd(τ) rN(t − τ) dτ,    (20)
i.e., the spike rate convolved with the delay distribution, and unit white Gaussian noise process ξrec,i that is uncorrelated to that of any other neuron [16, 18, 26, 34].
This step is valid under the assumptions (i) of sufficiently many incoming synaptic connections (K ≫ 1) with small enough weights |Jij| in comparison with VT − Vr and sufficient presynaptic activity (diffusion approximation), (ii) that neuronal spike trains can be approximated by independent Poisson processes (Poisson assumption) and (iii) that the correlations between the fluctuations of synaptic inputs for different neurons vanish (mean-field limit). The latter assumption is fulfilled by sparse and uniformly random synaptic connectivity, but also when synaptic strengths Jij and delays dij are independently distributed (in case of less sparse or random connections) [18].
This approximation of the recurrent input allows us to replace the overall synaptic current in Eq (14) by Isyn,i = C[μsyn(t, rd) + σsyn(t, rd)ξi(t)] with overall synaptic moments
μsyn(t, rd) = μext(t) + J K rd(t),   σ²syn(t, rd) = σ²ext(t) + J² K rd(t),    (21)
and (overall) unit Gaussian white noise ξi that is uncorrelated to that of any other neuron. Here we have used that external Iext,i and recurrent synaptic current Irec,i are independent from each other.
The resulting mean-field dynamics of the membrane voltage is given by
dVi/dt = [IL(Vi) + Iexp(Vi) − 〈w〉]/C + μsyn(t, rd) + σsyn(t, rd) ξi(t), (22)
and corresponds to a McKean-Vlasov type of equation with distributed delays [59] and discontinuity due to the reset mechanism [60] that complements the dynamics of Vi as before. The population-averaged adaptation current 〈w〉 = limN → ∞〈w〉N is governed by
d〈w〉/dt = [a(〈V〉 − Ew) − 〈w〉]/τw + b r(t), (23)
with mean membrane voltage (of non-refractory neurons), 〈V〉 = limN→∞〈V〉N, and spike rate r = limN→∞,Δt→0 rN(t).
Remarks: Instead of exponentially distributed synaptic delays we may also consider other continuous delay densities pd, identical delays, pd(τ) = δ(τ − d) with d > 0, or no delays at all, pd(τ) = δ(τ). Instead of identical synaptic strengths one may also consider strengths Jij that are drawn independently from a normal distribution with mean Jm and variance Jv, in which case the overall synaptic moments become μsyn = μext(t) + JmKrd(t) and σsyn² = σext²(t) + (Jm² + Jv)Krd(t), cf. [16, 26].
Continuity equation
In the membrane voltage evolution, Eq (22), individual neurons are exchangeable as they are described by the same stochastic equations and are coupled to each other exclusively through the (delayed) spike rate via the overall synaptic moments μsyn and σsyn². Therefore, the adiabatic and mean-field approximations allow us to represent the collective dynamics of a large network by a (1+1-dimensional) Fokker-Planck equation [16, 18, 26, 34],
∂p(V, t)/∂t + ∂qp(V, t)/∂V = 0, (24)
which describes the evolution of the probability density p(V, t) to find a neuron in state V at time t (in continuity form). The probability flux is given by
qp(V, t) = { [IL(V) + Iexp(V)]/C + μtot(t) } p(V, t) − [σtot²(t)/2] ∂p(V, t)/∂V, (25)
with total input mean and standard deviation,
μtot(t, rd, 〈w〉) = μsyn(t, rd) − 〈w〉(t)/C, (26)
σtot(t, rd) = σsyn(t, rd). (27)
Note that the mean adaptation current (simply) subtracts from the synaptic mean in the drift term, cf. Eq (22).
The mean adaptation current evolves according to Eq (23) with time-dependent mean membrane voltage (of the non-refractory neurons)
(28) |
The spike rate r is obtained by the probability flux through Vs,
r(t) = qp(Vs, t). (29)
To account for the reset condition of the aEIF neuron dynamics, and to ensure that probability mass is conserved, Eq (24) is complemented by the reinjection condition,
qp(Vr⁺, t) − qp(Vr⁻, t) = r(t − Tref), (30)
where Vr⁺ and Vr⁻ denote the right- and left-sided limits at the reset voltage Vr, an absorbing boundary condition at Vs,
p(Vs, t) = 0, (31)
and a natural (reflecting) boundary condition,
limV→−∞ qp(V, t) = 0. (32)
Together with the initial membrane voltage distribution p(V, 0) = p0(V) and mean adaptation current 〈w〉(0) = 0 the Fokker-Planck mean-field model is now completely specified.
Note that p(V, t) only reflects the proportion of neurons which are not refractory at time t, given by P(t) = ∫−∞Vs p(V, t) dV (< 1 for Tref > 0 and r(t) > 0). The total probability density that the membrane voltage is V at time t is given by p(V, t) + pref(V, t) with refractory density pref(V, t) = (1 − P(t)) δ(V − Vr). At the end of the Methods section we describe how an (optional) spike shape extension for the aEIF model changes the calculation of pref and 〈V〉.
In practice we consider a finite reflecting lower barrier Vlb instead of negative infinite for the numerical solution (next section) and for the low-dimensional approximations of the Fokker-Planck PDE (cf. sections below). Vlb is chosen sufficiently small in order to not distort the free diffusion of the membrane voltage for values below the reset, i.e., Vlb ≪ Vr. The density p(V, t) is then supported on [Vlb, Vs] for each time t, and in all expressions above V → −∞ is replaced by Vlb.
Finite volume discretization
In this work we focus on low-dimensional approximations of the FP model. To obtain a reference for the reduced models it is, however, valuable to solve the (full) FP system, Eqs (24)–(32). Here we outline an accurate and robust method of solution that exploits the linear form of the FP model in contrast to previously described numerical schemes [61, 62] which both require rather small time steps due to the steeply increasing exponential current Iexp in the flux qp close to the spike voltage Vs.
We first discretize the (finite) domain [Vlb, Vs] into NV equidistant grid cells with centers Vm (m = 1, …, NV) that satisfy V1 < V2 < ⋯ < VNV, where V1/2 = Vlb and VNV+1/2 = Vs are the outermost cell borders (cell m extends from Vm−1/2 to Vm+1/2). Within each cell the numerical approximation of p(V, t) is assumed to be constant and corresponds to the average value denoted by p(Vm, t). Integrating Eq (24) combined with Eq (30) over the volume of cell m, and applying the divergence theorem, yields
dp(Vm, t)/dt = −[qp(Vm+1/2, t) − qp(Vm−1/2, t)]/ΔV + δm,mr r(t − Tref)/ΔV, (33)
where ΔV is the grid spacing and mr corresponds to the index of the cell that contains the reset voltage Vr. To solve Eq (33) forward in time the fluxes at the borders of each cell need to be approximated. Since the Fokker-Planck PDE belongs to the class of drift-diffusion equations this can be accurately achieved by the first order Scharfetter-Gummel flux [63, 64],
qp(Vm+1/2, t) = vm+1/2 [ p(Vm, t) − p(Vm+1, t) exp(−vm+1/2 ΔV/D) ] / [ 1 − exp(−vm+1/2 ΔV/D) ], (34)
where vm+1/2 = [IL(Vm+1/2) + Iexp(Vm+1/2)]/C + μtot(t) and D = σtot²(t)/2 denote the drift and diffusion coefficients, respectively (cf. Eq (25)). This exponentially fitted scheme [64] is globally first order convergent [65] and yields for large drifts, |vm+1/2| ΔV ≫ D, the upwind flux, sharing its stability properties. For vanishing drifts, on the other hand, the centered difference method is recovered [64], leading to more accurate solutions than the upwind scheme in regimes of strong diffusion.
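For illustration, here is a minimal NumPy sketch of the Scharfetter-Gummel flux, Eq (34), at the inner cell borders; function and variable names are our own, and the small-drift branch implements the centered-difference limit mentioned above.

```python
import numpy as np

def scharfetter_gummel_flux(p, v_border, D, dV):
    """Scharfetter-Gummel flux q_{m+1/2}, Eq (34), at the N_V - 1 inner cell borders.

    p        : cell-averaged density values p(V_m, t), length N_V
    v_border : drift coefficient evaluated at the inner cell borders, length N_V - 1
    D        : diffusion coefficient sigma_tot^2 / 2 (scalar)
    dV       : grid spacing
    """
    z = np.clip(v_border * dV / D, -500.0, 500.0)      # Peclet-like ratio, clipped to avoid overflow
    with np.errstate(divide='ignore', invalid='ignore'):
        denom = -np.expm1(-z)                          # 1 - exp(-z)
        w_left = v_border / denom                      # weight of p_m (tends to v for large z: upwind)
        w_right = v_border * np.exp(-z) / denom        # weight of p_{m+1}
    # centered-difference limit for (nearly) vanishing drift
    small = np.abs(z) < 1e-10
    w_left = np.where(small, D / dV + v_border / 2.0, w_left)
    w_right = np.where(small, D / dV - v_border / 2.0, w_right)
    return w_left * p[:-1] - w_right * p[1:]
```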
For the time discretization we rewrite Eq (33) (with Eq (34)) in vectorized form and approximate the involved time derivative as first order backward difference to ensure numerical stability. This yields in each time step of length Δt a linear system for the values pn+1 of the (discretized) probability density at tn+1, given the values pn at the previous time step tn, and the spike rate at the time tn+1−nref for which the refractory period has just passed,
(35) |
with vector elements pn,m = p(Vm, tn), m = 1, …, NV, and the spike rate value rn+1−nref = r(tn+1−nref). The refractory period in time steps is given by nref = ⌈Tref/Δt⌉, where the brackets denote the ceiling function, and I is the identity matrix. This linear equation can be efficiently solved with runtime complexity O(NV) due to the tridiagonal structure of the matrix Gn, which contains the discretization of the membrane voltage (cf. Eqs (33) and (34)), including the absorbing and reflecting boundary conditions (Eqs (31) and (32)). For details we refer to S1 Text.
The spike rate, Eq (29), in this representation is obtained by evaluating the Scharfetter-Gummel flux, Eq (34), at the spike voltage Vs, taking into account the absorbing boundary condition, Eq (31), and introducing an auxiliary ghost cell [66], with center VNV+1, which yields
(36) |
where the drift coefficient vNV+1/2 and the diffusion coefficient D are evaluated at tn. The mean membrane voltage (of non-refractory neurons), Eq (28), used for the dynamics of the mean adaptation current, Eq (23), is calculated by evaluating the expectation in Eq (28) as a discrete sum over the grid cells.
Practically, we use the initialization p0,m = p0(Vm) and solve in each time step the linear system, Eq (35), using the function solve_banded from the Python library SciPy [67]. Note that (for a recurrent network or time-varying external input) the tridiagonal matrix Gn has to be constructed in each time step tn, which can be time consuming, especially for small ΔV and/or small Δt. Therefore, we employ low-level virtual machine acceleration for this task through the Python package Numba [24], which yields an efficient implementation.
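The time stepping then amounts to solving one tridiagonal system per step. A minimal sketch, assuming the matrix Gn has already been assembled in the banded storage layout expected by scipy.linalg.solve_banded (the assembly from Eqs (33), (34) and the boundary conditions is omitted; names are our own):

```python
import numpy as np
from scipy.linalg import solve_banded

def backward_euler_step(p_n, G_banded, r_reinject, m_r, dt, dV):
    """One implicit (backward Euler) step of the discretized Fokker-Planck system.

    p_n        : density values at time t_n (length N_V)
    G_banded   : tridiagonal matrix G^n in banded storage of shape (3, N_V),
                 as expected by scipy.linalg.solve_banded with (l, u) = (1, 1)
    r_reinject : spike rate at t_{n+1-n_ref} (reinjected probability flux)
    m_r        : index of the grid cell containing the reset voltage V_r
    """
    # left-hand side matrix I - dt * G^n, still tridiagonal in banded storage
    A = -dt * G_banded
    A[1, :] += 1.0                       # add the identity to the main diagonal (row 1)
    # right-hand side: previous density plus reinjected flux at the reset cell
    rhs = p_n.copy()
    rhs[m_r] += dt * r_reinject / dV
    return solve_banded((1, 1), A, rhs)
```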
Remark: for a vanishing refractory period Tref = 0 the matrix Gn would lose its tridiagonal structure due to the instantaneous reinjection, cf. Eq (36). In this case we enforce a minimal refractory period of one time step, Tref = Δt, which is an excellent approximation if the time step is chosen sufficiently small and the spike rate does not exceed biologically plausible values.
Low-dimensional approximations
In the following sections we present two approaches of how simple spike rate models can be derived from the Fokker-Planck mean-field model described in the previous section, cf. Eqs (20), (21) and (23)–(32).
The derived models are described by low-dimensional ordinary differential equations (ODEs) which depend on a number of quantities defined in the plane of (generic) input mean and standard deviation (μ, σ). To explain this concept more clearly we consider, as an example, the steady-state spike rate, which is a quantity required by all reduced models. The steady-state spike rate as a function of μ and σ,
(37) |
denotes the stationary value of Eq (29) under replacement of the (time-varying) total moments μtot and in the probability flux qp, Eq (25), by (constants) μ and σ2, respectively. Thus the steady-state spike rate r∞ effectively corresponds to that of an uncoupled EIF population whose membrane voltage is governed by dVi/dt = [IL(Vi) + Iexp(Vi)]/C + μ + σξi(t) plus reset condition, i.e., adaptation and synaptic current dynamics are detached. For a visualization of r∞(μ, σ) see Fig 6.
When simulating the reduced models these quantities need to be evaluated for each discrete time point t at a certain value of (μ, σ) which depends on the overall synaptic moments μsyn(t), σsyn(t) and on the mean adaptation current 〈w〉(t) in a model-specific way (as described in the following Sects.). An example trajectory of r∞ in the (μ, σ) space for a network showing stable spike rate oscillations is shown in Fig 5.
Importantly, these quantities depend on the parameters of synaptic input (J, K, τd, μext, σext) and adaptation current (a, b, τw, Ew) only through their arguments (μ, σ). Therefore, for given parameter values of the EIF model (C, gL, EL, ΔT, VT, Vr, Tref) we precalculate those quantities on a (reasonably large and sufficiently dense) grid of μ and σ values, and access them during time integration by interpolating the quantity values stored in a table. This greatly reduces the computational complexity and enables rapid numerical simulations.
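The following sketch illustrates this precalculate-and-interpolate scheme with SciPy; the grid ranges are placeholders and steady_state_rate merely stands in for the actual Fokker-Planck based computation of r∞.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# (mu, sigma) grid on which the quantities are tabulated (ranges are placeholders)
mu_grid = np.arange(-1.0, 5.0 + 1e-9, 0.025)      # mV/ms
sigma_grid = np.arange(0.5, 5.0 + 1e-9, 0.1)      # mV/sqrt(ms)

def steady_state_rate(mu, sigma):
    # placeholder: any smooth function of (mu, sigma); in practice this value
    # comes from solving the stationary Fokker-Planck problem
    return max(mu, 0.0) / (1.0 + np.exp(-2.0 * mu / sigma))

# precompute the table of r_inf(mu, sigma) values
R_inf = np.array([[steady_state_rate(m, s) for s in sigma_grid] for m in mu_grid])

# bilinear interpolation used during time integration
r_inf_lookup = RegularGridInterpolator((mu_grid, sigma_grid), R_inf,
                                       bounds_error=False, fill_value=None)

# usage inside a simulation loop: evaluate at the current total input moments
mu_tot, sigma_tot = 1.2, 2.3
r_now = r_inf_lookup((mu_tot, sigma_tot)).item()
```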
The derived low-dimensional models describe the spike rate dynamics and generally do not express the evolution of the entire membrane voltage distribution. Therefore, the mean adaptation dynamics, which depends on the density p(V, t) (via 〈V〉, cf. Eq (23)) is adjusted through approximating the mean membrane voltage 〈V〉 by the expectation over the steady-state distribution,
(38) |
which is valid for sufficiently slow adaptation current dynamics [48, 58]. The steady-state distribution is defined as p∞(V) = limt → ∞ p(V, t; μtot = μ, σtot = σ), representing the stationary membrane voltages of an uncoupled EIF population for generic input mean μ and standard deviation σ. The mean adaptation current in all reduced models is thus governed by
d〈w〉/dt = [a(〈V〉∞ − Ew) − 〈w〉]/τw + b r(t), (39)
where the evaluation of quantity 〈V〉∞ in terms of particular values for μ and σ at a given time t is model-specific (cf. following Sects.). Note again that the calculation of 〈V〉∞ slightly changes when considering an (optional) spike shape extension for the aEIF model, as described at the end of the Methods section.
The Fokker-Planck model does not restrict the form of the delay distribution pd, except that the convolution with the spike rate r, Eq (20), has to be well defined. Here, however, we aim at specifying the complete network dynamics in terms of a low-dimensional ODE system. Exploiting the exponential form of the delay distribution pd we obtain a simple ordinary differential equation for the delayed spike rate,
τd drd/dt = r(t) − rd(t), (40)
which is equivalent to the convolution rd = r * pd.
Note that more generally any delay distribution from the exponential family allows to represent the delayed spike rate rd by an equivalent ODE instead of a convolution integral [68]. Identical delays, rd(t) = r(t − d), are also possible but lead to delay differential equations. Naturally, in case of no delays, we simply have rd(t) = r(t).
To simulate the reduced models standard explicit time discretization schemes can be applied: directly to the first order equations of the LNexp model and, for the other models (LNdos, spec1, spec2), to the respective equivalent (real) first order systems. We would like to note that when using the explicit Euler method to integrate any of the latter three low-dimensional models a sufficiently small integration time step Δt is required to prevent oscillatory artifacts. Although the explicit Euler method works well for the parameter values used in this contribution, we have additionally implemented the method of Heun, i.e., the explicit trapezoidal rule, which is second order accurate.
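As an example of the Heun scheme, here is a sketch for the delayed spike rate ODE, Eq (40), with the undelayed rate provided as a given function of time (names and values are placeholders):

```python
import numpy as np

def heun_delayed_rate(r_of_t, tau_d, dt, T, rd0=0.0):
    """Integrate tau_d * drd/dt = r(t) - rd(t), Eq (40), with Heun's method.

    r_of_t : callable returning the undelayed spike rate r at time t
    tau_d  : mean synaptic delay
    """
    steps = int(T / dt)
    rd = np.empty(steps + 1)
    rd[0] = rd0
    for n in range(steps):
        t = n * dt
        k1 = (r_of_t(t) - rd[n]) / tau_d
        r_pred = rd[n] + dt * k1                     # explicit Euler predictor
        k2 = (r_of_t(t + dt) - r_pred) / tau_d
        rd[n + 1] = rd[n] + 0.5 * dt * (k1 + k2)     # trapezoidal corrector
    return rd

# example: response of the delayed rate to a rate step at t = 50 ms
rd = heun_delayed_rate(lambda t: 10.0 if t >= 50.0 else 0.0, tau_d=5.0, dt=0.01, T=200.0)
```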
Spectral models
Eigendecomposition of the Fokker-Planck operator
Following and extending [18] we can specify the Fokker-Planck operator
ℒ(μ, σ)[ϕ] = ∂/∂V { −[ (IL(V) + Iexp(V))/C + μ ] ϕ + (σ²/2) ∂ϕ/∂V } (41)
for an uncoupled EIF population receiving (constant) input (μ, σ), cf. Sect. Low-dimensional approximations. This operator allows us to rewrite the FP dynamics of the recurrent aEIF network, Eq (24), as
∂p(V, t)/∂t = ℒ(μtot(t), σtot(t))[p(V, t)], (42)
which depends on the (time-varying) total input moments μtot(t, rd, 〈w〉) and , cf. Eqs (26) and (27), in the drift and diffusion coefficients, respectively.
For each value of (μ, σ) the operator ℒ possesses an infinite, discrete set of eigenvalues λn in the left complex half-plane including zero [18], i.e., Re{λn} ≤ 0, and associated eigenfunctions ϕn(V) (n = 0, 1, 2, …) satisfying
ℒϕn = λn ϕn. (43)
Furthermore, the boundary conditions, Eqs (30)–(32) have to be fulfilled for each eigenfunction ϕn separately, i.e., the absorbing boundary at the spike voltage,
ϕn(Vs) = 0, (44)
and the reflecting barrier at the (finite) lower bound voltage,
qϕn(Vlb) = 0, (45)
must hold. The eigenflux is given by
qϕn(V) = { [IL(V) + Iexp(V)]/C + μ } ϕn(V) − (σ²/2) ∂ϕn(V)/∂V, (46)
i.e., the flux qp of Eq (25) with eigenfunction ϕn and (constant) generic input moments (μ, σ2) instead of density p and (time-varying) total input moments, respectively. Moreover, the eigenflux qϕn has to be reinjected into the reset voltage, cf. Eq (30),
qϕn(Vr⁺) − qϕn(Vr⁻) = qϕn(Vs), (47)
where we have neglected the refractory period, i.e., Tref = 0. Note that incorporating a refractory period Tref > 0 is straightforward only for the simplified case of vanishing total input moment variations, , which is described in the following section and is not captured here in general.
The spectrum of ℒ is shown in Fig 7A and further discussed in Sect. Remarks on the spectrum.
Defining the non-conjugated [69] inner product 〈ψ, ϕ〉 = ∫VlbVs ψ(V) ϕ(V) dV yields the corresponding adjoint operator ℒ* given by [18]
ℒ*(μ, σ)[ψ] = { [IL(V) + Iexp(V)]/C + μ } ∂ψ/∂V + (σ²/2) ∂²ψ/∂V², (48)
which satisfies 〈ψ, ℒϕ〉 = 〈ℒ*ψ, ϕ〉 for any complex-valued functions ψ and ϕ that are sufficiently smooth on [Vlb, Vs]. ℒ* has the same set of eigenvalues λn as ℒ but distinct associated eigenfunctions ψn(V), i.e.,
ℒ*ψn = λn ψn, (49)
which have to satisfy three boundary conditions,
ψn(Vs) = ψn(Vr), (50)
∂ψn/∂V(Vlb) = 0, (51)
∂ψn/∂V(Vr⁻) = ∂ψn/∂V(Vr⁺), (52)
that are determined by integrating 〈ψn, ℒϕn〉 by parts and equating it with 〈ℒ*ψn, ϕn〉 using the conditions on ϕn, Eqs (44)–(47). Note that the last condition, Eq (52), ensures a continuous derivative of ψn at Vr, in contrast to the eigenfunctions ϕn of ℒ, which have a kink at the reset due to the reinjection condition, Eq (47), as shown in Fig 7B.
The eigenfunctions of ℒ and ℒ* are pairwise orthogonal and in the following (without loss of generality) assumed to be scaled according to the biorthonormality condition,
〈ψn, ϕm〉 = δn,m. (53)
The membrane voltage probability density can now be expanded onto the (moving) eigenbasis of ℒ [18, 70],
p(V, t) = ∑n≥0 αn(t) ϕn(V), (54)
where each eigenfunction ϕn depends on time via the total input moments μ = μtot(t, rd, 〈w〉), σ = σtot(t, rd), and the projection coefficients are given by αn = 〈ψn, p〉, with α0 = 1 in particular [18]. Differentiating αn with respect to time (for n = 1, 2, …) and using the expansion, Eq (54), the Fokker-Planck Eq (42) as well as the definition of the adjoint operator yields an infinite-dimensional equation for the complex-valued projection coefficients α(t) = (α1(t), α2(t), …)T,
dα/dt = [Λ + Cμ (dμtot/dt) + Cσ2 (dσtot²/dt)] α + cμ (dμtot/dt) + cσ2 (dσtot²/dt). (55)
This dynamics is initialized by αn(0) = 〈ψn, p0〉, and is complemented by (i) an expression for the spike rate,
r(t) = r∞ + f ⋅ α, (56)
that is obtained from Eq (29) (using Eq (54)), and (ii) by the mean adaptation and delayed spike rate dynamics, Eqs (39) and (40). The dot in Eq (56) denotes a non-conjugated scalar product of complex vectors, i.e., f ⋅ α = ∑n fn αn. The matrix Λ = diag(λ1, λ2, …) contains the eigenvalues of ℒ, the matrices Cμ and Cσ2 have elements (Cx)n,m = 〈∂xψn, ϕm〉 for x ∈ {μ, σ2} and n, m = 1, 2, …, with partial derivative ∂x = ∂/∂x, and the vectors cμ and cσ2 consist of components (cx)n = 〈∂xψn, ϕ0〉. The steady-state spike rate is given by r∞ = qϕ0(Vs), i.e., the flux of the eigenfunction ϕ0 = p∞ that represents the stationary membrane voltage distribution p∞ and corresponds to the (stationary) eigenvalue λ0 = 0 [18]. The vector f contains the (nonstationary) eigenfluxes evaluated at the spike voltage, fn = qϕn(Vs).
Note that the quantities (Λ, Cμ, Cσ2, cμ, cσ2, r∞, f, 〈V〉∞) all depend on time in Eqs (55), (56) and (39) via the total input moments (μtot, σtot²). In particular, at time t the (biorthonormal) solution of the eigenvalue problems for ℒ and its adjoint ℒ*, Eqs (43)–(47) and (49)–(52) together with Eq (53), is required for μ = μtot(t, rd, 〈w〉), σ = σtot(t, rd). Because ℒ is a real operator its spectrum contains only real eigenvalues and/or complex conjugated pairs, depending on (μ, σ). This property carries over to the eigenfunctions and therefore also to the components of all quantities above, implying for example that the scalar product f · α is always real-valued.
Although the spectral representation, Eqs (55) and (56), is fully equivalent to the original (partial differential) Fokker-Planck equation without refractory period, and although it contains only time derivatives, it is still infinite-dimensional and, in general, an implicit model. Therefore, to derive an explicit low-dimensional ordinary differential model for the spike rate r(t), it is not sufficient to truncate the expansion in Eq (54) after, e.g., two terms; additional assumptions have to be made.
Basic model: One eigenvalue, negligible input variations
The first and simpler of the two derived spectral models is based on [19] and requires the strong assumption of vanishing changes of the total input moments, dμtot/dt = 0 and dσtot²/dt = 0. Under this approximation the projection coefficient dynamics, Eq (55), simplifies to
dαn/dt = λn αn (57)
for n = 1, 2, …. Considering only the dominant (nonzero) eigenvalue,
λ1 = arg max {Re{λn} : n ≥ 1}, (58)
i.e., the “slowest mode”, we obtain from Eqs (56) and (57) that r(t) = r∞ + m1 Re{f1α1} with m1 = 1 if Im{λ1} = 0 and m1 = 2 if Im{λ1} ≠ 0. Here we have included for a complex eigenvalue λ1 also its complex conjugate, whose projection coefficient and eigenflux are the complex conjugates of α1 and f1, respectively. They jointly yield a zero imaginary part in the scalar product of Eq (56). Defining the complex-valued rate variable β = r∞ + m1 f1 α1 yields the complex first order equation for the spike rate [19],
dβ/dt = λ1 (β − r∞),  r(t) = Re{β(t)}. (59)
While this derivation is based on neglecting changes of the total input moments, their time-variation is effectively reintroduced in the two quantities (dominant eigenvalue and steady-state rate), i.e., λ1(μtot, σtot) and r∞(μtot, σtot) with μtot(t) and σtot(t) according to Eqs (26) and (27). Therefore, the spike rate evolution, Eq (59), is complemented with the dynamics of mean adaptation current 〈w〉 and delayed spike rate rd, Eqs (39) and (40), where the former involves the third (μtot, σtot)-dependent quantity 〈V〉∞. See Figs 6 and 7C for the involved quantities depending on (generic) input μ, σ.
We call this first derived low-dimensional spike rate model, i.e., Eqs (59), (39) and (40), spec1. It is very simple in comparison with the full Fokker-Planck system in the spectral representation, Eqs (55)–(56), (39) and (40), in the sense that it does not depend on “nonstationary” quantities of ℒ or its adjoint ℒ* except for the dominant eigenvalue λ1.
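To illustrate how little machinery spec1 needs at simulation time, here is a sketch of an explicit Euler integration, assuming the complex first order form dβ/dt = λ1(β − r∞) with r = Re{β} as derived above; lambda1 and r_inf stand for interpolants of the precalculated quantities, and the total input moments are taken as given (in a recurrent network they are updated from rd and 〈w〉 within the same loop).

```python
import numpy as np

def integrate_spec1(mu_tot, sigma_tot, lambda1, r_inf, dt):
    """Explicit Euler integration of the assumed spec1 form
    dbeta/dt = lambda1(mu, sigma) * (beta - r_inf(mu, sigma)), with r = Re{beta}.

    mu_tot, sigma_tot : arrays with the total input moments at each time step
    lambda1, r_inf    : callables interpolating the precalculated quantities
    """
    steps = len(mu_tot)
    r = np.zeros(steps)
    beta = complex(r_inf(mu_tot[0], sigma_tot[0]))      # start at the steady state
    for n in range(steps):
        lam = lambda1(mu_tot[n], sigma_tot[n])
        beta += dt * lam * (beta - r_inf(mu_tot[n], sigma_tot[n]))
        r[n] = beta.real                                # spike rate output
    return r
```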
Note that under the assumption of vanishing input moment variations the dynamics of the expansion coefficients αn, Eq (55), is simply exponentially decaying in time, cf. Eq (57). This allows us to incorporate a refractory period Tref > 0 into the spectral decomposition framework by inserting the eigenbasis expansion of the membrane voltage distribution, Eq (54), into the reinjection condition of the Fokker-Planck model, Eq (30), using αn(t) = αn(0) exp(λnt) and the absorbing boundary, Eq (44). This generalizes the reinjection condition for the eigenfunctions ϕn of ℒ from Eq (47) to
qϕn(Vr⁺) − qϕn(Vr⁻) = exp(−λn Tref) qϕn(Vs), (60)
which was applied in [19], and the corresponding boundary condition for the adjoint operator’s eigenfunctions ψn from Eq (50) to
ψn(Vs) = exp(−λn Tref) ψn(Vr). (61)
Advanced model: Two eigenvalues, slow input moment variations
Another possibility to derive a low-dimensional spike rate model is based on ongoing work of Maurizio Mattia. He has recognized that under the weaker (compared to the basic spec1 model above) assumption of small but not vanishing input moment changes a real-valued second order ordinary differential equation for the spike rate r(t) can be consistently derived [20]. Here we extend this approach to account for neuronal adaptation, time-varying external input moments and delay distributions.
The steps in a nutshell (for the detailed derivation see S1 Text) are (i) taking the derivative of Eq (55) (once) and of Eq (56) (twice) w.r.t. time, (ii) considering only the first two dominant eigenvalues λ1 and λ2, i.e., neglecting all (faster) eigenmodes that correspond to eigenvalues with larger absolute real part (“modal approximation”), and (iii) assuming slowly changing input moments, i.e., small dμtot/dt and dσtot²/dt, which allows to treat the projection coefficients α1 and α2 as quantities of that same order and therefore to keep only linear occurrences of dμtot/dt, dσtot²/dt, α1, α2 and to neglect terms of second and higher order. The slowness approximation implies that neither the external moments μext(t), σext²(t), nor the (delayed) spike rate rd(t), nor the population-averaged adaptation current 〈w〉(t) should change very fast (cf. Eqs (26), (27) and (21)). Note that the dynamics of 〈w〉 is assumed to be slow already, cf. Sect. Fokker-Planck system.
Under these approximations we obtain for the spike rate dynamics the following real second order ODE,
(62) |
that is complemented with the mean adaptation current and delayed spike rate dynamics, Eqs (39) and (40), and we call the model spec2. The coefficients
(63) |
(64) |
(65) |
(66) |
depend on the (lumped) quantities D = 1/λ1 ⋅ 1/λ2, T = 1/λ1 + 1/λ2, M = ∂μr∞ + f ⋅ cμ, S = ∂σ2r∞ + f ⋅ cσ2, R = DMKJ + DSKJ2, Hμ = TM + DMa∂μ〈V〉∞/(τwC) − DFμ, Hσ2 = TS + DMa∂σ2〈V〉∞/(τwC) − DFσ2, Fμ = f ⋅ Λ cμ and Fσ2 = f ⋅ Λ cσ2. Here the diagonal eigenvalue matrix Λ = diag(λ1, λ2) and the vectors f = (f1, f2)T, cμ = ((cμ)1, (cμ)2)T and cσ2 = ((cσ2)1, (cσ2)2)T are two-dimensional, in contrast to infinite-dimensional as in the original dynamics, Eqs (55) and (56). These individual quantities, which also include 〈V〉∞ (cf. Eq (39)), r∞ and derivatives of both w.r.t. generic mean μ and variance σ2, are evaluated at the total input moments μ = μtot(t, rd, 〈w〉), σ = σtot(t, rd). A relevant subset of individual and lumped quantities is shown in Figs 6 and 7C.
The four coefficients, Eqs (63)–(66), are real-valued because we define λ2 as the second dominant eigenvalue conditioned that (λ1, λ2) compose either a real or a complex conjugated eigenvalue pair, i.e.,
(67) |
where λ1 is obtained as for the (basic) spec1 model, cf. Eq (58). This condition ensures that all related complex quantities occur in vectors of two complex conjugate components (for example, f2 is the complex conjugate of f1), implying that the scalar products above (e.g., f ⋅ cμ) are real-valued and therefore all (nine) lumped quantities, too. Note that this specific definition of λ2 is required only for integrate-and-fire neuron models that have a lower bound different from the reset, i.e., Vlb < Vr, as discussed in the following section.
The coefficients of the spec2 model require, in addition to the eigenvalues λn and the steady-state rate r∞ (cf. basic spec1 model, Eq (59)), quantities that involve the first and second eigenfunctions of the Fokker-Planck operator ℒ and its adjoint ℒ*. Additionally, β1, β0 and βc contain the parameters of the membrane voltage (C) and mean adaptation current dynamics (a, b, τw, Ew) as well as of the recurrent coupling (K, J, τd) and, importantly, explicit dependencies on the population-averaged adaptation current 〈w〉 and the delayed spike rate rd (that is, in addition to the implicit dependence via μtot and σtot). Furthermore βc depends on the first two time derivatives of the external input moments μext(t) and σext²(t). This explicit occurrence of neuronal and coupling parameters, state variables and input moment derivatives in the coefficients is not present in the basic spectral model (spec1), Eq (59). A consequence is that for the (advanced) model spec2 the external moments have to be provided twice differentiable or, in case of non-smooth time series, filtered (e.g., see Sect. Performance for variations of the mean input).
The particular coefficients, Eqs (63)–(66), are specific for the choice of an exponential delay distribution, as indicated by the occurrence of the mean delay τd in β1, β0 and βc. Other choices such as identical delays or no delays are described in the supplementary material S1 Text.
Note that the original (infinite-dimensional) spectral dynamics, Eqs (55) and (56), assumes for the refractory period a value of Tref = 0 which carries over to the same choice for the spec2 model, whereas Tref > 0 is not supported (yet).
The spike rate is by definition nonnegative; however, the model spec2, Eq (62), can yield negative rates r(t), especially for small total mean input μtot and fast (external) input changes, e.g., when |dμext/dt| is large. We explicitly avoid that behaviour by setting both the spike rate of this model, r, and its derivative, dr/dt, to zero whenever r(t) < 0, and continue the integration of the differential equation afterwards.
Remarks on the spectrum
In the previous two sections we have developed two spike rate models based on approximations of the Fokker-Planck system’s spectral representation, Eqs (55) and (56), under different slowness assumptions. In the derivation of the (simple) model spec1, cf. Eq (59), temporal variations of the total input moments are completely neglected, while the (advanced) model spec2, cf. Eq (62), incorporates (slow) changes of the total input moments through linear terms (proportional to dμtot/dt or dσtot²/dt) and neglects (faster) quadratic and higher order ones.
Both slowness approximations imply that the eigenvalue matrix Λ is the dominant term in the homogeneous part of Eq (55). Therefore the eigenvalues approximately correspond to decay time constants 1/|Re{λn}| in their real part and in case of complex eigenvalues they contribute damped oscillatory components with frequency |Im{λn}|/(2π) through their imaginary part to the dynamics. How the spectrum of the Fokker-Planck operator depends on the input moments therefore gives insights into the behavior of the derived spike rate models spec1 and spec2, and of the FP model with slow adaptation in general.
Here we summarize the main properties of the eigenvalues λn(μ, σ) for the (uncoupled, nonadaptive) EIF neuron model as a function of generic input mean μ and standard deviation σ (cf. Sect. Low-dimensional approximations) that are shown in Fig 7A. Note that at time t the total input determines the effective eigenvalues, λn(μtot(t), σtot(t)), in dependence of the external input moments, the (delayed) spike rate and the adaptation current, cf. Eqs (26), (27) and (21). We thereby extend the findings of [18] concerning the perfect integrate-and-fire neuron with reflecting lower barrier at the reset voltage (PIFb), i.e., dVi/dt = μ + σξi(t) with Vlb = Vr.
The eigenvalue λ0 = 0 exists for all (generic) input moments μ, σ2 with stationary membrane voltage distribution as corresponding eigenfunction, ϕ0 = p∞ (see Fig 7A and 7B), the respective adjoint eigenfunction is constant, ψ0 = 1, cf. [18]. The other, nonstationary eigenvalues λn with n ≥ 1 have negative real parts for all (μ, σ) which yields stability of the stationary distribution for constant total input moments. For sufficiently small mean input μmin the eigenvalues λn(μmin, σ) are real-valued for all n and σ, i.e., no (damped) oscillatory spike rate transient in that regime are present (consistent with [18]), cf. Fig 7A.
Two classes of modes can be distinguished: eigenvalues of the first kind occur in couples which are real-valued at μmin but merge for increasing mean input μ to become a complex conjugated pair of decreasing absolute real part and almost linearly increasing imaginary part (see Fig 7A). Thus they correspond in this situation to damped oscillatory dynamics with increased frequency |Im{λn}|/(2π) and decay time constant 1/|Re{λn}| for stronger mean input. Note that there is no single critical parameter for the real-to-complex transition, instead that depends on the input mean μ and standard deviation σ as well as on the eigenvalue index n in contrast to the PIFb neuron where μ = 0 (alone) induces the transition [18]. We call this type of eigenvalue regular because it is observed also for very simple integrate-and-fire models (such as PIFb).
The second type of eigenvalue is real for the whole input parameter space of (μ, σ). The corresponding decay time constant is large if noise dominates while for stronger mean input the respective dynamics is negligibly fast. Note that by setting the lower bound Vlb equal to the reset voltage, this eigenvalue class is completely removed for the EIFb model, which is the EIF membrane voltage description with Vlb = Vr (not used here). This explains why this new type of eigenvalue has not been described in [18], and suggests a relationship to the diffusion of the membrane voltage for hyperpolarized neuronal states. The latter correspondence is further supported from the significant values of the respective eigenfunction ϕn below the reset voltage Vr in contrast to the eigenfunctions of regular eigenvalues (cf. Fig 7B). Therefore we label this second type of eigenvalue diffusive.
The input noise intensity σ controls the spectrum’s mixture of the two eigenvalue classes as follows: weak noise favors regular modes, i.e., the dominant two eigenvalues are pairs of real (for smaller mean input μ) or complex conjugated eigenvalues (for larger μ) while the diffusive modes are irrelevantly fast in this regime (cf. Fig 7A, left). Increased input fluctuations, i.e., a larger σ, on the other hand leads to a spectrum with dominant (“slowest” eigenvalue λ1) of the diffusive kind for small mean input μ while for larger μ the dominant mode is from the regular class (being real or complex depending on μ, σ, n), see Fig 7A, right. Furthermore an increased noise strength σ leads to a smaller decay time constant 1/|Re{λn}| for regular modes, i.e., their respective contribution to the spike rate dynamics is faster in the fluctuation-dominated regime than in the drift-dominated one, whereas for diffusive modes the opposite holds true.
The specific definition of the second dominant eigenvalue λ2, Eq (67), is necessary to ensure real coefficients in the spec2 model, Eqs (63)–(66), for regions in (μ, σ)-space where the first dominant eigenvalue λ1 is of the (real) diffusive type and the second is part of a complex conjugate couple (for an example see Fig 7A, right).
Extracting the required dominant eigenvalues λ1 (for both spectral models: spec1, spec2) and λ2 (for spec2 model only) according to Eqs (58) and (67) in the (μ, σ)-plane leads to points of instantaneous changes in the imaginary part (λ1 and λ2) and the real part (only λ2) due to transitions from a dominant diffusive mode for small mean input μ to a regular eigenvalue (or pair) becoming dominant for larger μ (see Fig 7A and 7C and the last property above). These discontinuities (which lie on a one-dimensional curve μ(σ)) could be avoided by either restricting to Vlb = Vr (no diffusive modes) or by deriving a third spectral model based on three eigenvalues: the dominant regular pair together with the dominant diffusive eigenvalue. Making the latter extension is straightforward by using the same steps and slowness approximation as for the model spec2 and would yield a 3rd order ODE for the spike rate r(t) with smooth coefficients and is expected to show increased reproduction accuracy (especially for small mean input) compared to the model spec2.
The properties above imply for the low-dimensional models spec1 and spec2, that both enable damped oscillatory spike rates for mean-dominated input (for large μ) since then the first two dominant eigenvalues (λ1 and λ2) have nonzero imaginary parts and are complex conjugates of each other (see Fig 7C). Furthermore in this case the effective time constant, 1/|Re{λ1}| = 1/|Re{λ2}|, is large (especially for small input standard deviation σ). For noise-dominated input, i.e., when σ (and not μ) is large, on the other hand, the corresponding spike rate dynamics is fast and does not contain an oscillatory component.
Numerical solver
Here we present a numerical solution method for the Fokker-Planck boundary eigenvalue problem (BEVP) of the operator ℒ, Eqs (43)–(47), and its adjoint ℒ*, Eqs (49)–(52). The solution of the two BEVPs in terms of eigenvalues λn and associated (biorthonormal) eigenfunctions (ϕn(V), ψn(V)), as well as quantities that are derived from those and are required by the models spec1 and spec2, is obtained on a rectangle of input mean μ and standard deviation σ values (see Sect. Low-dimensional Approximations). Note that the numerical method is not restricted to the EIF neuron model and supports other integrate-and-fire models as well (e.g., perfect, leaky or quadratic).
The eigenequation ℒϕ = λϕ, i.e., Eq (43) (omitting the eigenvalue index n), represents a second order ODE (cf. Eq (41)) that is equivalent to the following first order system for the eigenflux qϕ(V) and the eigenfunction ϕ(V),
d/dV (qϕ, ϕ)T = A(V) (qϕ, ϕ)T  with  A(V) = [ 0, −λ ; −2/σ², 2(g(V) + μ)/σ² ], (68)
where the coefficient matrix A(V) has a nonlinear component through g(V) = [IL(V) + Iexp(V)]/C, which contains the leak and exponential (membrane) currents. Here we have used the form of the eigenflux qϕ (cf. Eq (46)), and that ℒϕ = −dqϕ/dV for generic input moments μ and σ² (cf. Eq (41)).
Basically a direct discretization, e.g., by a finite difference approximation, of the membrane voltage derivatives in the System (68) can be applied. In combination with the boundary conds., Eqs (44)–(47), this leads to a (sparse) matrix eigenvalue problem that allows for application of standard (Arnoldi iteration based) numerical solvers. However, the convergence properties of this approach are very poor in the sense that extremely small voltage steps ΔV have to be chosen for the finite differences. Thus, the technique is inefficient as huge systems appear or even inaccurate due to amplified round-off errors by ill-conditioned matrices.
Here we propose an alternative solution procedure which is based on a reformulation of the System (68) with corresponding boundary conditions, Eqs (44)–(47), as a complex-valued algebraic root finding problem
qϕ(Vlb; λ) = 0, (69)
whose solutions are the eigenvalues λn. To evaluate the nonlinear function qϕ(Vlb; λ) (the left hand side of this equation) for an arbitrary λ (not necessarily an eigenvalue), Eq (68) is integrated backward starting from the spike voltage Vs (initializing one component to satisfy the absorbing boundary condition, Eq (44), i.e., ϕ(Vs) = 0, and another component that can be chosen arbitrarily, e.g., qϕ(Vs) = 1, due to the linearity of the problem), via the reset voltage Vr (where the reinjection condition, Eq (47), is enforced, i.e., qϕ(Vr⁻) = qϕ(Vr⁺) − qϕ(Vs), which induces a discontinuity in qϕ at the reset voltage), finally to the lower bound voltage Vlb. There, a nonzero value of qϕ(Vlb; λ) indicates that λ is not an eigenvalue, since in this case ϕ(V) violates the reflecting boundary condition, Eq (45). qϕ(Vlb) = 0, on the other hand, shows that λ is an eigenvalue with corresponding eigenfunction ϕ(V) and eigenflux qϕ(V). Note that when considering a nonzero refractory period the generalized version of the reinjection condition, Eq (60), i.e., qϕ(Vr⁻) = qϕ(Vr⁺) − exp(−λTref) qϕ(Vs), is enforced instead of the expression above. The latter case is only relevant for the spec1 model, Eq (59) (see Sect. Basic model: one eigenvalue, negligible input variations) and makes the Fokker-Planck eigenvalue problem nonlinear due to the exponentiation of λ in Eq (60).
The (complex-valued) root finding problem, Eq (69) can be solved numerically to yield a target eigenvalue λn using an iterative procedure (for example a variant of Newton’s method) given that a sufficiently close initial approximation is available. In our Python implementation we apply Powell’s hybrid method as implemented in MINPACK wrapped through SciPy [67] to the equivalent real system of the two variables Re{λ} and Im{λ}.
Appropriate initial approximations can be achieved (i) by exploiting that for sufficiently small generic mean input μmin all eigenvalues have zero imaginary part (see Sect. Remarks on the spectrum). In that case the eigenvalues are given by the roots {λ0, λ1, …} of q(Vlb; λ)–the one-dimensional function of the real-valued eigenvalue candidate λ ∈ (−∞, 0]–which are obtained, for example, by (dense) evaluation of that function in a sufficiently large (sub)interval below zero. Furthermore (ii) all eigenvalues depend continuously on the (input) parameters μ and σ (cf. Fig 7A), i.e., for a small step in the respective parameter space the solution λn of Eq (69) for the last parametrization is a very good initial value for the current parametrization. With this initialization the solution of Eq (69) is typically found in a few steps of the chosen Newton-like method (see S1 Text for details).
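A minimal sketch of this step using SciPy's wrapper of MINPACK's Powell hybrid method; qphi_at_lb stands in for the backward integration of the System (68) described below, and the toy usage example at the end is for illustration only.

```python
import numpy as np
from scipy.optimize import root

def find_eigenvalue(qphi_at_lb, lam_init):
    """Solve q_phi(V_lb; lambda) = 0, Eq (69), with Powell's hybrid method.

    qphi_at_lb : callable, complex lambda -> complex flux q_phi(V_lb; lambda)
                 (stands in for the backward integration of the System (68))
    lam_init   : complex initial approximation of the eigenvalue
    """
    def residual(x):
        q = qphi_at_lb(complex(x[0], x[1]))
        return [q.real, q.imag]                 # equivalent real system in Re, Im

    sol = root(residual, x0=[lam_init.real, lam_init.imag], method='hybr')
    return complex(sol.x[0], sol.x[1]), sol.success

# toy usage: a residual whose root is at lambda = -0.2 + 0.5j (illustration only)
lam, ok = find_eigenvalue(lambda lam: lam - (-0.2 + 0.5j), lam_init=-0.3 + 0.4j)
```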
To efficiently and accurately evaluate qϕ(Vlb; λ) in each iteration (of the root finding algorithm), we perform an exponential (backward) integration of the System (68). The resulting scheme is based on truncating the Magnus expansion of the exact solution after one term [71]. Here matrix exponential function evaluations of the form exp[A(V)ΔV] occur that are calculated using an analytic expression [72]. Note that ODE solvers that either do not exploit the linear structure of Eq (68) at all (such as the explicit Euler method but also higher order Runge-Kutta methods) or utilize linearity only in one variable (e.g., [73]) have poor convergence behaviour and thus require very small step sizes ΔV due to the strong nonlinearity g(V) as a consequence of the large value of the current Iexp close to Vs and significantly large absolute eigenvalues |λn|, respectively.
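A sketch of one backward step of this exponential integration, using the general matrix exponential from SciPy instead of the analytic 2×2 expression [72]; the midpoint evaluation of A is our own choice and A_of_V is a placeholder for the coefficient matrix of the System (68).

```python
from scipy.linalg import expm

def backward_exponential_step(y, V, dV, A_of_V):
    """One exponential-integrator step of the linear system dy/dV = A(V) y,
    integrating backward from V to V - dV (first-order Magnus truncation).

    y      : state vector (q_phi, phi) at voltage V
    A_of_V : callable returning the 2x2 coefficient matrix A(V), cf. Eq (68)
    """
    A_mid = A_of_V(V - 0.5 * dV)      # freeze A on the step (midpoint evaluation)
    return expm(-dV * A_mid) @ y
```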
For more information on the numerical solver we refer to S1 Text, where details regarding the adjoint operator, the exponential integration, the initial eigenvalue determination and the (parallelized) treatment of the input parameters are included.
Cascade models
Linear-Nonlinear (LN) cascade models of neuronal activity are often applied in neuroscience, because they are simple and efficient, and the model components can be estimated using established experimental procedures [21, 74, 75]. Here we use the LN cascade as an ansatz to develop a low-dimensional model and we determine its components from the underlying Fokker-Planck model. This section builds upon [21] and extends that approach for recurrently coupled aEIF neurons; specifically, by taking into account an adaptation current and variations of the input variance. Furthermore, we consider an improved approximation of the derived linear filters and include an (optional) explicit description of the spike shape, cf. [23] (ch 4.2).
The cascade models considered here produce spike rate output by applying to the time-varying mean μsyn and standard deviation σsyn of the (overall) synaptic input, cf. Eq (21), separately a linear temporal filter, Dμ and Dσ, followed by a common nonlinear function F. That is,
r(t) = F(μf(t), σf(t)), (70)
μf(t) = (Dμ ∗ μsyn)(t), (71)
σf(t) = (Dσ ∗ σsyn)(t), (72)
where μf and σf denote the filtered mean and filtered standard deviation of the input, respectively, and Dμ ∗ μsyn denotes the convolution between Dμ and μsyn. The filters Dμ, Dσ are adaptive in the sense that they depend on the mean adaptation current 〈w〉 and on the (arbitrary) baseline input in terms of the baseline mean μ0 and standard deviation σ0. For improved readability these dependencies are not explicitly indicated in Eqs (71) and (72). Note that the nonlinearity F also depends on 〈w〉, which is governed by Eq (23). Since the mean adaptation current depends on the mean membrane voltage 〈V〉 we also consider a nonlinear mapping H for that population output quantity,
〈V〉(t) = H(μf(t), σf(t)). (73)
For the derivation below it is instructive to first consider an uncoupled population, i.e., the input moments do not depend on rd for now. In particular, the input statistics are described by μsyn(t) = μ0 + μ1(t) and σsyn(t) = σ0 + σ1(t). In the following, we derive the components F, Dμ and Dσ from the Fokker-Planck model for small amplitude variations μ1, σ1 and for a slowly varying adaptation current (as already assumed). We then approximate the derived linear filter components using suitable functions such that the convolutions can be expressed in terms of simple ODEs. Finally, we account for time-varying baseline input (μ0(t), σ0(t)) and for recurrent coupling in the resulting low-dimensional spike rate models.
Deriving the components of the cascade
We first expand F in Eq (70) around the baseline μsyn = μ0, σsyn = σ0, 〈w〉 = 〈w〉0 to linear order, assuming that the amplitudes of μ1 and σ1 are small and that the mean adaptation current varies slowly compared to the input moments, to obtain the approximation for Eq (70),
(74) |
Due to slow adaptation (〈w〉(t) = 〈w〉0 + 〈w〉1(t) with vanishing 〈w〉1) we have neglected the expansion term in the direction of 〈w〉 and set 〈w〉0 = 〈w〉 in the approximation above. Note also that, without loss of generality, we have assumed normalized filters, ∫0∞ Dμ(t) dt = ∫0∞ Dσ(t) dt = 1. Under the same assumptions the output from the Fokker-Planck model (Eqs (23)–(32)) can be approximated as
(75) |
(76) |
(77) |
where r∞ and 〈V〉∞ are the steady-state spike rate and mean membrane voltage of a population of EIF neurons in response to an input of total mean μ0 − 〈w〉/C plus Gaussian white noise with standard deviation σ0. In particular, 〈V〉∞ reflects the mean over all nonrefractory neurons, cf. Eq (38). Rμ and Rσ are the so-called linear rate response functions of the population for weak modulations of the input mean and standard deviation around μ0 and σ0, respectively [21, 73, 76]. Comparing Eqs (74) and (75) we obtain for the nonlinearity F and the linear filters Dμ, Dσ,
F(μf, σf) = r∞(μf − 〈w〉/C, σf), (78)
(79) |
(80) |
Furthermore, the function H in Eq (73) is given by
H(μf, σf) = 〈V〉∞(μf − 〈w〉/C, σf), (81)
ensuring that the mean membrane voltage corresponds with the instantaneous spike rate estimate of the model (in every time step). Note that Rμ and Rσ depend on and (which is again not explicitly indicated for improved readability).
Fortunately, the quantities r∞, 〈V〉∞, Rμ, and Rσ can be calculated from the Fokker-Planck equation using an efficient numerical method [73]. In particular, for r∞ and 〈V〉∞ we need to solve a linear boundary value problem (BVP), and Rμ(t), Rσ(t) are calculated in the Fourier domain, where we need to solve two linear BVPs to obtain their Fourier transforms for each frequency f. It is worth noting that the refractory period is included in a straightforward way and does not increase the complexity of the BVPs to be solved, see [23, 73] (and our provided code).
Approximating the filter components
To express the LN model with adaptation, Eqs (70)–(73), (23) and (78)–(81), in terms of a low-dimensional ODE system we next approximate the linear filters Dμ and Dσ using suitable functions. The shapes of the true filters (proportional to Rμ and Rσ, cf. Eqs (79) and (80)) for different input parameter values (, ) are shown in Fig 8A and 8B.
We first consider the linear filter Dμ (Eq (79)) and apply the approximation
Dμ(t) ≈ Aμ exp(−t/τμ) for t ≥ 0, (82)
motivated by the exponential decay exhibited by Rμ, particularly for large input variance compared to its mean. Note that Dμ depends on μ0, σ0 and therefore the scaling parameter Aμ and the time constant τμ both depend on μ0, σ0, which is not explicitly indicated. Aμ and τμ may be determined analytically using asymptotic results for the Fourier transform of the linear rate response for vanishing and very large frequencies, respectively [21, 76],
(83) |
To guarantee that these asymptotics are matched by the Fourier transform Aμτμ/(1 + i2πfτμ) of the exponential, taking into account the scaling factor in Eq (79), we obtain
(84) |
Note that matching the zero frequency limit in the Fourier domain is equivalent to the natural requirement that the time integral of the linear filter is reproduced exactly, that is, the approximation is normalized appropriately. An advantage of this approximation is that it is no longer required to calculate the linear rate response function Rμ explicitly. On the other hand, as only the limit f → ∞ is used for fitting in addition to the normalization constraint, the approximation of the linear filter can be poor for a range of intermediate frequencies (in the Fourier domain), in particular for small input mean and standard deviation (see Fig 8A here, and Fig. 4B,C in [21]). To improve the approximation for intermediate frequencies we use the same normalization condition, which fixes the parameter Aμ = 1/τμ, and we determine τμ by a least-squares fit of the Fourier transform of the approximation to that of the true filter over the range of frequencies f ∈ [0, 1] kHz. In both cases, using the approximation Eq (82) the filtered mean input μf(t) = Dμ * μsyn(t) can be equivalently obtained by solving the simple scalar ODE,
dμf/dt = [μsyn(t) − μf]/τμ. (85)
Recall that τμ depends on μ0, σ0. This exponentially decaying filter is part of the LNexp cascade model variant.
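A sketch of the least-squares determination of τμ described above, assuming the fitted quantity is the normalized response R̂μ(f)/R̂μ(0) and the exponential filter's Fourier transform 1/(1 + i2πfτμ); the optimization bounds are placeholders, and R_mu_hat would come from solving the BVPs in the Fourier domain [73].

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_tau_mu(freqs, R_mu_hat):
    """Least-squares fit of tau_mu: the normalized exponential filter has Fourier
    transform 1 / (1 + 2j*pi*f*tau_mu), which is fitted to the normalized linear
    rate response R_mu_hat(f) / R_mu_hat(0) for f in [0, 1] kHz.

    freqs    : frequencies in kHz (when time is measured in ms)
    R_mu_hat : complex linear rate response evaluated at these frequencies
    """
    target = R_mu_hat / R_mu_hat[0]     # normalization fixes A_mu = 1 / tau_mu

    def sq_error(tau_mu):
        model = 1.0 / (1.0 + 2j * np.pi * freqs * tau_mu)
        return np.sum(np.abs(model - target) ** 2)

    res = minimize_scalar(sq_error, bounds=(1e-3, 100.0), method='bounded')
    return res.x                        # tau_mu in ms
```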
A shortcoming of the approximation Eqs (82) and (85) above is that it cannot reproduce damped oscillations exhibited by the true linear filter, in particular, for large input mean and small variance (see Fig 8A). Therefore, we introduce an alternative approximation using a damped oscillator function,
Dμ(t) ≈ Bμ exp(−t/τ) cos(ωt) for t ≥ 0. (86)
Note that here Bμ, τ and ω depend on μ0, σ0, which is not explicitly indicated. We fix the scaling parameter Bμ by (again) requiring that the approximation is normalized to reproduce the time integral of the true linear filter Dμ, which yields Bμ = (1 + τ2ω2)/τ. The remaining two parameters τ and ω are determined such that the dominant oscillation frequency is reproduced. Specifically, the approximation should match the true filter at two characteristic frequencies in the Fourier domain as closely as possible. We would like to note that using the method of least-squares over a range of frequencies instead can generate approximated filters which decay to zero instantly, particularly for large input mean and small variance (not shown). For such inputs a damped oscillator with a single decay time constant is too simple to fit the complete, rather complex linear filter shape. With Eq (86) the filtered mean input can be obtained by solving the second order ODE
d²μf/dt² + (2/τ) dμf/dt + (1/τ² + ω²) μf = Bμ [dμsyn/dt + μsyn(t)/τ]. (87)
Using this damped oscillator filter gives rise to the LNdos cascade model variant.
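A sketch of one explicit Euler step of the damped-oscillator filter, written as a first order system and assuming the written-out form of Eq (87) given above; dmu_syn denotes the derivative of the synaptic mean, which is replaced according to Eq (91) (or Eq (92)) in the recurrent case.

```python
def lndos_filter_step(mu_f, dmu_f, mu_syn, dmu_syn, tau, omega, B_mu, dt):
    """One explicit Euler step of the damped-oscillator filter, Eq (87),
    written as a first order system in (mu_f, dmu_f/dt)."""
    acc = (B_mu * (dmu_syn + mu_syn / tau)
           - (2.0 / tau) * dmu_f
           - (1.0 / tau**2 + omega**2) * mu_f)
    return mu_f + dt * dmu_f, dmu_f + dt * acc
```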
Considering the linear filter Dσ (Eq (80)) we use the approximation Dσ(t) ≈ Aσ exp(−t/τσ) for t ≥ 0, with Aσ = 1/τσ and τσ determined by a least-squares fit in the Fourier domain for frequencies f ∈ [0, 1] kHz, as long as such a fit with a nonnegative time constant is possible. When this condition is not fulfilled, which occurs for large input mean and small variance, the full linear filter cannot be properly fit by an exponential function. This may be seen from the asymptotic behavior [77]
(88) |
which implies negative filter values (cf. Eq (80)) that cannot be approximated for nonnegative τσ. In this case we use τσ → 0, which effectively yields Dσ(t) ≈ δ(t), justified by the observation that the full filter rapidly relaxes to zero (see Fig 8B). The linear filter application can be implemented by solving
dσf/dt = [σsyn(t) − σf]/τσ, (89)
where σf(t) = Dσ * σsyn(t) is the filtered input standard deviation. This filter is used in both model variants (LNexp and LNdos). Note (again) that τσ depends on μ0, σ0.
Extension for changing input baseline and recurrent coupling
In the derivation above we considered that the synaptic input mean and standard deviation, μsyn(t) and σsyn(t), vary around μ0 and σ0 with small magnitudes. To extend the LN cascade model(s) to inputs that show large deviations from their baseline values we let the linear filters adjust to a changing input baseline in the following way: using the exponentially decaying mean input filter (LNexp model) the filter parameters τμ and τσ are evaluated at the effective input moments
μeff(t) = μf(t) − 〈w〉(t)/C,  σeff(t) = σf(t) (90)
in every time step, i.e., these parameters adapt to the effective input moments. Using the damped oscillating mean input filter (LNdos model), on the other hand, the filter parameters τ, ω and τσ are evaluated at μtot(t), σtot(t) given by Eqs (26) and (27), i.e., these parameters adapt directly to the total input moments, assuming that these moments do not fluctuate too vigorously. Note that because of these adjustments we do not need to consider a (particular) input baseline. Two remarks are in place: (i) the parameters of the damped oscillator cannot be adapted to a changing input baseline using the effective input mean (with μf given by Eq (87)), because this can lead to stable oscillations (for an uncoupled population) and thus decreased reproduction performance (not shown); (ii) for input moments that change very rapidly the reproduction performance of the LNdos model variant may be improved by alternatively evaluating the parameters τ, ω and τσ at μeff(t), σeff(t), with μf governed by Eq (85) (combining LNexp and LNdos), cf. [23] (ch. 4.2).
Finally, recurrent coupling within the population is included (in both model variants) by the dependence of the synaptic input moments on the delayed spike rate, μsyn(t, rd) and σsyn(t, rd), with rd given by Eq (40). For the LNdos model we can then replace the derivative dμsyn/dt in Eq (87) by
dμext/dt + J K [r(t) − rd(t)]/τd. (91)
In case of identical (constant) propagation delays within the population this term would be and in case of recurrent coupling without delays we would have
(92) |
To summarize both LN cascade models, the population spike rate and mean membrane voltage are described by Eqs (70) and (73), using Eqs (78), (81) and (23), respectively. For the LNexp model input filtering is governed by Eqs (85) and (89), where the filter parameters are evaluated at μeff(t), σeff(t) (Eq (90)). For the LNdos model input filtering is described by Eqs (87) and (89), where the filter parameters are evaluated at μtot(t), σtot(t) given by Eqs (26) and (27). The system for the recurrent network under consideration is closed by Eqs (21) and (40) which relate population spike rate output and overall synaptic input moments. Note that both LN systems here are fully equivalent to the respective ones specified in the Sect. Model reduction.
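To illustrate how these pieces fit together, here is a compact sketch of an LNexp simulation loop with explicit Euler stepping, assuming the relations summarized above; the interpolants bundled in `tables` are the precalculated quantities (r∞, 〈V〉∞, τμ, τσ) on the (μ, σ) grid, and all names are our own.

```python
import numpy as np

def simulate_lnexp(mu_ext, sigma_ext, dt, params, tables):
    """Sketch of an LNexp network simulation (explicit Euler), assuming the
    relations above: r = r_inf(mu_f - <w>/C, sigma_f), Eqs (85), (89), (23)/(39),
    (40) and (21). `tables` maps names to callables f(mu, sigma)."""
    C, a, b, tau_w, Ew = params['C'], params['a'], params['b'], params['tau_w'], params['Ew']
    J, K, tau_d = params['J'], params['K'], params['tau_d']

    steps = len(mu_ext)
    r = np.zeros(steps)
    mu_f, sigma_f, w, rd = mu_ext[0], sigma_ext[0], 0.0, 0.0

    for n in range(steps):
        # overall synaptic moments, Eq (21)
        mu_syn = mu_ext[n] + J * K * rd
        sigma_syn = np.sqrt(sigma_ext[n]**2 + J**2 * K * rd)
        # effective moments at which the quantities are evaluated, Eq (90)
        mu_eff, sigma_eff = mu_f - w / C, sigma_f
        # cascade output: spike rate and mean membrane voltage, Eqs (70), (73)
        r[n] = tables['r_inf'](mu_eff, sigma_eff)
        V_mean = tables['V_inf'](mu_eff, sigma_eff)
        # input filtering, Eqs (85) and (89)
        mu_f += dt * (mu_syn - mu_f) / tables['tau_mu'](mu_eff, sigma_eff)
        sigma_f += dt * (sigma_syn - sigma_f) / tables['tau_sigma'](mu_eff, sigma_eff)
        # mean adaptation current, Eq (23)/(39), and delayed spike rate, Eq (40)
        w += dt * ((a * (V_mean - Ew) - w) / tau_w + b * r[n])
        rd += dt * (r[n] - rd) / tau_d
    return r
```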
Further remarks
In order to efficiently simulate the derived cascade rate models it is highly recommended to precalculate the quantities which are needed in each time step on a (μ, σ)-rectangle (see Sect. Low-Dimensional Approximations above). For the LNexp model these quantities are the filter time constants τμ, τσ, and for the LNdos model we need the quantities τ, ω and τσ (all displayed in Fig 8C). Both models additionally require the steady-state quantities r∞ and 〈V〉∞ shown in Fig 6. An efficient implementation to obtain these quantities using Python with the package Numba for low-level virtual machine acceleration is available. Recall, that changing any parameter value of the external input, the recurrent synaptic input or the adaptation current does not require renewed precomputations.
If desired, it is also possible to obtain initial values for the variables of the cascade models (LNexp and LNdos variants) that correspond to a given initial distribution of membrane voltage and adaptation current values {Vi(0)}, {wi(0)} of a population of N aEIF neurons. We can calculate 〈w〉(0) = 1/N∑i wi(0) and determine μf(0), σf(0) by requiring that the initial membrane voltage distribution of the respective LN model, p∞(V; μf(0) − 〈w〉(0)/C, σf(0)), matches the initial voltage distribution from the aEIF population as closely as possible (e.g., using the Kolmogorov–Smirnov statistic). For the LNdos model we additionally set dμf/dt(0) = 0. This means we assume vanishing changes in the input history which underlies the initial membrane voltage distribution and filter parameters in the LN models (i.e., dμsyn/dt = 0 and dσsyn/dt = 0 for a sufficiently long time interval prior to t = 0).
The components of the LN model are derived in the limit of small amplitude variations of μsyn and σsyn. However, the approximation also provides an exact description of the population dynamics for very slow variations of μsyn and σsyn, where the spike rate, mean membrane voltage and adaptation current are well approximated by their steady-state values in each time step.
Here we approximated the derived linear filters using exponential and damped oscillator functions. We would like to note that, for a given baseline input (, ) the filter application using the latter function (Eq (86)) can be equivalently described by a complex-valued ODE [23] (ch. 4.2). Furthermore, the true linear filters Dμ and Dσ can be substantially better approximated by a damped oscillator function with two time scales (i.e., two exponentials) each. In these three cases, however, the ODE representation for the filter application can lead to decreased reproduction performance when the baseline input changes very rapidly (due to increased sensitivity to variations of the filter parameters).
Spike shape extension (optional)
In this contribution the membrane voltage spike shape has been neglected (as is typical for IF type neuron models) by clamping Vi and wi during the refractory period, justified by the observation that the spike shape is rather stereotyped and its duration is very brief. Furthermore, the spike shape is believed to contain little information compared to the time at which the spike occurs. Nevertheless, it can be incorporated in the aEIF model in a straightforward way using the following reset condition, as suggested in [43]: When Vi reaches the spike voltage Vs from below we let Vi decrease linearly from Vs to Vr during the refractory period and increment the adaptation current wi ← wi + b at the onset of that period. That is, Vi and wi are not clamped during the refractory period; instead, Vi has a fixed time course and wi is incremented by b and then governed again by Eq (15). This modification implies that the average membrane voltage in Eq (23) needs to be calculated over all neurons (and not only the nonrefractory ones), that is, 〈V〉 is calculated with respect to p + pref, where pref(V, t) = ∫0Tref r(t − s) δ(V − Vsp(s)) ds with spike trajectory Vsp(t) = Vs + (Vr − Vs)t/Tref, cf. [43]. The same applies to the steady-state mean membrane potential in Eqs (1), (39) and (76), i.e., 〈V〉∞ is then given by
(93) |
instead of Eq (38). Notably, the accuracy of the adiabatic approximation (Eq (15)) does not depend on the refractory period Tref in this case. That type of spike shape can therefore be considered in the FP model and the low-dimensional models in a straightforward way without significant additional computational demand. Note, however, that for the spec2 model a nonzero refractory period is not supported (see above). For an evaluation of the spike shape extension in terms of reproduction accuracy of the LN models see [23] (Fig. 4.15 in [23]).
Supporting information
Acknowledgments
We thank Maurizio Mattia for several fruitful discussions, sharing his unpublished manuscript on the advanced spectral model and the permission to apply it to our system. Additionally we thank Srdjan Ostojic for providing his insights and code related to the eigenvalue problem of the Fokker-Planck operator. We would finally like to acknowledge that Douglas J. Sterling wrote the initial version of the software framework which was used to integrate and compare the different models.
Data Availability
Numerical implementations of the considered models written in the Python programming language are available under a free license at: https://github.com/neuromethods/fokker-planck-based-spike-rate-models.
Funding Statement
The author JL was funded via German Research Foundation (Collaborative Research Center no. 910), http://www.itp.tu-berlin.de/collaborative_research_center_910/. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
References
- 1. Shadlen MN, Newsome WT. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci. 1998; 18(10):3870–3896.
- 2. Gerstner W, Kistler WM, Naud R, Paninski L. Neuronal dynamics: from single neurons to networks and models of cognition. Cambridge University Press; 2014.
- 3. Harris KD, Shepherd GMG. The neocortical circuit: themes and variations. Nat Neurosci. 2015; 18(2):170–181. doi:10.1038/nn.3917
- 4. Okun M, Steinmetz NA, Cossell L, Iacaruso MF, Ko H, Barthó P, et al. Diverse coupling of neurons to populations in sensory cortex. Nature. 2015; 521(7553):511–515. doi:10.1038/nature14273
- 5. Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J Neurophysiol. 2005; 94(5):3637–3642. doi:10.1152/jn.00686.2005
- 6. Izhikevich EM. Simple model of spiking neurons. IEEE Trans Neural Netw. 2003; 14(6):1569–1572. doi:10.1109/TNN.2003.820440
- 7. Badel L, Lefort S, Brette R, Petersen CCH, Gerstner W, Richardson MJE. Dynamic I-V curves are reliable predictors of naturalistic pyramidal-neuron voltage traces. J Neurophysiol. 2008; 99(2):656–666. doi:10.1152/jn.01107.2007
- 8. Naud R. The dynamics of adapting neurons. PhD Thesis, École Polytechnique Fédérale de Lausanne; 2011.
- 9. Harrison PM, Badel L, Wall MJ, Richardson MJE. Experimentally verified parameter sets for modelling heterogeneous neocortical pyramidal-cell populations. PLOS Comput Biol. 2015; 11(8):e1004165. doi:10.1371/journal.pcbi.1004165
- 10. Rauch A, La Camera G, Lüscher HR, Senn W, Fusi S. Neocortical pyramidal cells respond as integrate-and-fire neurons to in vivo-like input currents. J Neurophysiol. 2003; 90(3):1598–1612. doi:10.1152/jn.00293.2003
- 11. Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972; 12(1):1–24. doi:10.1016/S0006-3495(72)86068-5
- 12. Latham PE, Richmond BJ, Nirenberg S, Nelson PG. Intrinsic dynamics in neuronal networks. I. Theory. J Neurophysiol. 2000; 83(2):828–835.
- 13. Gigante G, Deco G, Marom S, Del Giudice P. Network events on multiple space and time scales in cultured neural networks and in a stochastic rate model. PLOS Comput Biol. 2015; 11(11):e1004547. doi:10.1371/journal.pcbi.1004547
- 14. Naud R, Gerstner W. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram. PLOS Comput Biol. 2012; 8(10):e1002711. doi:10.1371/journal.pcbi.1002711
- 15. Schwalger T, Deger M, Gerstner W. Towards a theory of cortical columns: from spiking neurons to interacting neural populations of finite size. PLOS Comput Biol. 2017; 13(4):e1005507. doi:10.1371/journal.pcbi.1005507
- 16. Augustin M, Ladenbauer J, Obermayer K. How adaptation shapes spike rate oscillations in recurrent neuronal networks. Front Comput Neurosci. 2013; 7(9):1–11.
- 17. Nicola W, Ly C, Campbell SA. One-dimensional population density approaches to recurrently coupled networks of neurons with noise. SIAM J Appl Math. 2015; 75(5):2333–2360. doi:10.1137/140995738
- 18. Mattia M, Del Giudice P. Population dynamics of interacting spiking neurons. Phys Rev E. 2002; 66(5):051917. doi:10.1103/PhysRevE.66.051917
- 19. Schaffer ES, Ostojic S, Abbott LF. A complex-valued firing-rate model that approximates the dynamics of spiking networks. PLOS Comput Biol. 2013; 9(10):e1003301. doi:10.1371/journal.pcbi.1003301
- 20. Mattia M. Low-dimensional firing rate dynamics of spiking neuron networks. arXiv preprint arXiv:1609.08855. 2016.
- 21. Ostojic S, Brunel N. From spiking neuron models to linear-nonlinear models. PLOS Comput Biol. 2011; 7(1):e1001056. doi:10.1371/journal.pcbi.1001056
- 22. Brunel N, Wang XJ. What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. J Neurophysiol. 2003; 90(1):415–430. doi:10.1152/jn.01095.2002
- 23. Ladenbauer J. The collective dynamics of adaptive neurons: insights from single cell and network models. PhD Thesis, Technische Universität Berlin; 2015.
- 24. Lam SK, Pitrou A, Seibert S. Numba: a LLVM-based Python JIT compiler. In: Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC (LLVM '15); 2015. p. 1–6.
- 25. Ladenbauer J, Augustin M, Obermayer K. Intrinsic control mechanisms of neuronal network dynamics. In: Schöll E, Klapp SHL, Hövel P, editors. Control of self-organizing nonlinear systems. Springer; 2016. p. 441–460.
- 26. Gigante G, Mattia M, Del Giudice P. Diverse population-bursting modes of adapting spiking neurons. Phys Rev Lett. 2007; 98(14):148101. doi:10.1103/PhysRevLett.98.148101
- 27. Biggio M, Storace M, Mattia M. Equivalence between synaptic current dynamics and heterogeneous propagation delays in spiking neuron networks. arXiv preprint arXiv:1704.02780. 2017.
- 28. Roxin A, Montbrió E. How effective delays shape oscillatory dynamics in neuronal networks. Physica D. 2011; 240(3):323–345. doi:10.1016/j.physd.2010.09.009
- 29. Richardson M. Effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons. Phys Rev E. 2004; 69(5):051918. doi:10.1103/PhysRevE.69.051918
- 30. Destexhe A, Mainen ZF, Sejnowski TJ. An efficient method for computing synaptic conductances based on a kinetic model of receptor binding. Neural Comput. 1994; 6(1):14–18. doi:10.1162/neco.1994.6.1.14
- 31. Schwalger T, Droste F, Lindner B. Statistical structure of neural spiking under non-Poissonian or other non-white stimulation. J Comput Neurosci. 2015; 39(1):29–51. doi:10.1007/s10827-015-0560-x
- 32. Hertäg L, Durstewitz D, Brunel N. Analytical approximations of the firing rate of an adaptive exponential integrate-and-fire neuron in the presence of synaptic noise. Front Comput Neurosci. 2014; 8(116):1–22.
- 33. Ostojic S. Inter-spike interval distributions of spiking neurons driven by fluctuating inputs. J Neurophysiol. 2011; 106(1):361–373. doi:10.1152/jn.00830.2010
- 34. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci. 2000; 8(3):183–208. doi:10.1023/A:1008925309027
- 35. Apfaltrer F, Ly C, Tranchina D. Population density methods for stochastic neurons with realistic synaptic kinetics: firing rate dynamics and fast computational methods. Network. 2006; 17(4):373–418. doi:10.1080/09548980601069787
- 36. Nesse WH, Borisyuk A, Bressloff PC. Fluctuation-driven rhythmogenesis in an excitatory neuronal network with slow adaptation. J Comput Neurosci. 2008; 25(2):317–333. doi:10.1007/s10827-008-0081-y
- 37. Zerlaut Y, Destexhe A. A mean-field model for conductance-based networks of adaptive exponential integrate-and-fire neurons. arXiv preprint arXiv:1703.00698. 2017.
- 38. Buchin A, Chizhov A. Firing-rate model of a population of adaptive neurons. Biophysics. 2010; 55(4):664–673. doi:10.1134/S0006350910040135
- 39. Montbrió E, Pazó D, Roxin A. Macroscopic description for networks of spiking neurons. Phys Rev X. 2015; 5(2):021028.
- 40. Zhang JW, Rangan AV. A reduction for spiking integrate-and-fire network dynamics ranging from homogeneity to synchrony. J Comput Neurosci. 2015; 38(2):355–404. doi:10.1007/s10827-014-0543-3
- 41. Gerstner W. Population dynamics of spiking neurons: fast transients, asynchronous states, and locking. Neural Comput. 2000; 12(1):43–89. doi:10.1162/089976600300015899
- 42. Iyer R, Menon V, Buice M, Koch C, Mihalas S. The influence of synaptic weight distribution on neuronal population dynamics. PLOS Comput Biol. 2013; 9(10):e1003248. doi:10.1371/journal.pcbi.1003248
- 43. Richardson MJE. Dynamics of populations and networks of neurons with voltage-activated and calcium-activated currents. Phys Rev E. 2009; 80(2):021928. doi:10.1103/PhysRevE.80.021928
- 44. Rosenbaum R. A diffusion approximation and numerical methods for adaptive neuron models with stochastic inputs. Front Comput Neurosci. 2016; 10(39):1–20.
- 45. Nykamp DQ, Tranchina D. A population density approach that facilitates large-scale modeling of neural networks: extension to slow inhibitory synapses. Neural Comput. 2001; 13:511–546. doi:10.1162/089976601300014448
- 46. Richardson MJE, Swarbrick R. Firing-rate response of a neuron receiving excitatory and inhibitory synaptic shot noise. Phys Rev Lett. 2010; 105(17):178102. doi:10.1103/PhysRevLett.105.178102
- 47. Renart A, Moreno-Bote R, Wang XJ, Parga N. Mean-driven and fluctuation-driven persistent activity in recurrent networks. Neural Comput. 2007; 19(1):1–46. doi:10.1162/neco.2007.19.1.1
- 48. Ladenbauer J, Augustin M, Obermayer K. How adaptation currents change threshold, gain and variability of neuronal spiking. J Neurophysiol. 2014; 111(5):939–953. doi:10.1152/jn.00586.2013
- 49. Rosenbaum R, Smith MA, Kohn A, Rubin JE, Doiron B. The spatial structure of correlated neuronal variability. Nat Neurosci. 2016; 20(1):1–10. doi:10.1038/nn.4433
- 50. Destexhe A. Self-sustained asynchronous irregular states and up-down states in thalamic, cortical and thalamocortical networks of nonlinear integrate-and-fire neurons. J Comput Neurosci. 2009; 27(3):493–506. doi:10.1007/s10827-009-0164-4
- 51. Wang XJ, Liu YH, Sanchez-Vives MV, McCormick DA. Adaptation and temporal decorrelation by single neurons in the primary visual cortex. J Neurophysiol. 2003; 89(6):3279–3293. doi:10.1152/jn.00242.2003
- 52. Stimberg M, Goodman DFM, Benichoux V, Brette R. Equation-oriented specification of neural models for simulations. Front Neuroinform. 2014; 8(6):1–14.
- 53. Goodman DFM, Brette R. The Brian simulator. Front Neurosci. 2009; 3(2):192–197. doi:10.3389/neuro.01.026.2009
- 54. Brown DA, Adams PR. Muscarinic suppression of a novel voltage-sensitive K+ current in a vertebrate neurone. Nature. 1980; 283(5748):673–676. doi:10.1038/283673a0
- 55. Sanchez-Vives MV, McCormick DA. Cellular and network mechanisms of rhythmic recurrent activity in neocortex. Nat Neurosci. 2000; 3(10):1027–1034. doi:10.1038/79848
- 56. Sanchez-Vives MV, Nowak LG, McCormick DA. Cellular mechanisms of long-lasting adaptation in visual cortical neurons in vitro. J Neurosci. 2000; 20(11):4286–4299.
- 57. Stocker M. Ca(2+)-activated K+ channels: molecular determinants and function of the SK family. Nat Rev Neurosci. 2004; 5(10):758–770. doi:10.1038/nrn1516
- 58. Brunel N, Hakim V, Richardson MJE. Firing-rate resonance in a generalized integrate-and-fire neuron with subthreshold resonance. Phys Rev E. 2003; 67(5):051916. doi:10.1103/PhysRevE.67.051916
- 59. Touboul J. Limits and dynamics of stochastic neuronal networks with random heterogeneous delays. J Stat Phys. 2012; 149(4):569–597. doi:10.1007/s10955-012-0607-6
- 60. Delarue F, Inglis J, Rubenthaler S, Tanré E. Global solvability of a networked integrate-and-fire model of McKean-Vlasov type. Ann Appl Probab. 2015; 25(4):2096–2133. doi:10.1214/14-AAP1044
- 61. Marpeau F, Barua A, Josić K. A finite volume method for stochastic integrate-and-fire models. J Comput Neurosci. 2009; 26(3):445–457. doi:10.1007/s10827-008-0121-7
- 62. Cáceres MJ, Carrillo JA, Tao L. A numerical solver for a nonlinear Fokker-Planck equation representation of neuronal network dynamics. J Comput Phys. 2011; 230(4):1084–1099. doi:10.1016/j.jcp.2010.10.027
- 63. Scharfetter DL, Gummel HK. Large-signal analysis of a silicon Read diode oscillator. IEEE Trans Electron Dev. 1969; 16(1):64–77. doi:10.1109/T-ED.1969.16566
- 64. Gosse L. Computing qualitatively correct approximations of balance laws. Vol. 2. Springer; 2013.
- 65. Farrell PA, Gartland EC Jr. On the Scharfetter-Gummel discretization for drift-diffusion continuity equations. In: Miller JJH, editor. Computational methods for boundary and interior layers in several dimensions. Boole Press; 1991. p. 51–79.
- 66. LeVeque RJ. Finite volume methods for hyperbolic problems. Cambridge University Press; 2002.
- 67. Oliphant TE. Python for scientific computing. Comput Sci Eng. 2007; 9(3):10–20. doi:10.1109/MCSE.2007.58
- 68. MacDonald N. Time lags in biological models. In: Levin SA, editor. Lecture Notes in Biomathematics. Vol. 27. Springer; 1978. p. 1–114.
- 69. Risken H. The Fokker-Planck equation: methods of solution and applications. Springer; 1996.
- 70. Knight BW, Manin D, Sirovich L. Dynamical models of interacting neuron populations. In: Proc Symp Robot Cybern, Lille, France; 1996. p. 4–8.
- 71. Hochbruck M, Ostermann A. Exponential integrators. Acta Numer. 2010; 19(5):209–286. doi:10.1017/S0962492910000048
- 72. Bernstein DS, So W. Some explicit formulas for the matrix exponential. IEEE Trans Autom Control. 1993; 38(8):1228–1232. doi:10.1109/9.233156
- 73. Richardson MJE. Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive. Phys Rev E. 2007; 76(2):021919. doi:10.1103/PhysRevE.76.021919
- 74. Chichilnisky EJ. A simple white noise analysis of neuronal light responses. Network. 2001; 12(2):199–213. doi:10.1080/713663221
- 75. Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky EJ, et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature. 2008; 454(7207):995–999. doi:10.1038/nature07140
- 76. Fourcaud-Trocmé N, Hansel D, van Vreeswijk C, Brunel N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. J Neurosci. 2003; 23(37):11628–11640.
- 77. Fourcaud-Trocmé N, Brunel N. Dynamics of the instantaneous firing rate in response to changes in input statistics. J Comput Neurosci. 2005; 18(3):311–321. doi:10.1007/s10827-005-0337-8