2022 Oct 22;51(1):107–128. doi: 10.1007/s10827-022-00836-6

The steady state and response to a periodic stimulation of the firing rate for a theta neuron with correlated noise

Jannik Franzen 1, Lukas Ramlow 1,2, Benjamin Lindner 1,2
PMCID: PMC9840600  PMID: 36273087

Abstract

The stochastic activity of neurons is caused by various sources of correlated fluctuations and can be described in terms of simplified, yet biophysically grounded, integrate-and-fire models. One paradigmatic model is the quadratic integrate-and-fire model and its equivalent phase description by the theta neuron. Here we study the theta neuron model driven by a correlated Ornstein-Uhlenbeck noise and by periodic stimuli. We apply the matrix-continued-fraction method to the associated Fokker-Planck equation to develop an efficient numerical scheme to determine the stationary firing rate as well as the stimulus-induced modulation of the instantaneous firing rate. For the stationary case, we identify the conditions under which the firing rate decreases or increases by the effect of the colored noise and compare our results to existing analytical approximations for limit cases. For an additional periodic signal we demonstrate how the linear and nonlinear response terms can be computed and report resonant behavior for some of them. We extend the method to the case of two periodic signals, generally with incommensurable frequencies, and present a particular case for which a strong mixed response to both signals is observed, i.e. where the response to the sum of signals differs significantly from the sum of responses to the single signals. We provide Python code for our computational method: https://github.com/jannikfranzen/theta_neuron.

Keywords: Neuron model, Spike train variability, Neural signal transmission, Stochastic neuron model

Introduction

Neural spiking is a random process due to the presence of multiple sources of noise. This includes the quasi-random input received by a neuron which is embedded in a recurrent network (network noise), the unreliability of the synapses (synaptic noise), and the stochastic opening and closing of ion channels (channel noise) (Gabbiani & Cox, 2017; Koch, 1999). This stochasticity and the resulting response variability is a central feature of neural spiking (Holden, 1976; Tuckwell, 1989). Therefore, studies in computational neuroscience have to account for this stochasticity, as it has important implications for the signal transmission properties.

Computational studies of stochastic neuron models often assume that the driving fluctuations are temporally uncorrelated. This white-noise assumption implies that the correlation time τs of the input fluctuations is much smaller than the time scale of the membrane potential τm. Put differently, the input noise is regarded as fast compared to every other process present in the neural system. This assumption grants a far-reaching mathematical tractability of the problem (Abbott & van Vreeswijk, 1993; Brunel, 2000; Burkitt, 2006; Holden, 1976; Lindner & Schimansky-Geier, 2001; Ricciardi, 1977; Richardson, 2004; Tuckwell, 1989) but is violated in a number of interesting cases. First, fluctuations that arise in a recurrent network often exhibit reduced power at low frequencies (green noise) (Bair et al., 1994; Câteau & Reyes, 2006; Pena et al., 2018; Vellmer & Lindner, 2019). Second, fluctuations in oscillatory systems, e.g. caused by the electroreceptor of the paddlefish, can be band-pass filtered (Bauermeister et al., 2013; Neiman & Russell, 2001). Finally and most prominently, fluctuations that emerge due to synaptic filtering of postsynaptic potentials (Brunel & Sergi, 1998; Lindner, 2004; Lindner & Longtin, 2006; Moreno-Bote & Parga, 2010; Rudolph & Destexhe, 2005) or due to slow ion channel kinetics (Fisch et al., 2012; Schwalger et al., 2010) have reduced power at high frequencies (red noise).

There are two important types of neuron models with distinct response characteristics: integrators (type I neurons) and resonators (type II neurons) (Izhikevich, 2007). The canonical model for a type I neuron is the quadratic integrate-and-fire model or, expressed equivalently in terms of a phase variable, the theta neuron. Here we study the response characteristics of the theta neuron, driven by a low-pass filtered noise, the Ornstein-Uhlenbeck (OU) process. This model has been studied analytically by Brunel and Latham (2003) for the limits of very short and very long correlation times. Furthermore, Naundorf et al. (2005a, b) solved the associated Fokker-Planck equation for the voltage and the noise variable for selected parameter sets in order to obtain the stationary firing rate and the firing rate’s linear response to a weak periodic stimulus.

Here, we put forward semi-analytical results for the stationary firing rate by means of the matrix-continued-fraction (MCF) method for arbitrary ratios of the two relevant time scales τ=τs/τm. We present exhaustive parameter scans of the stationary firing rate with respect to variations of the bifurcation parameter and the correlation time. Furthermore, our method also allows us to calculate how a periodic signal, not necessarily weak, is encoded in the firing rate of the model neuron in the presence of a correlated background noise. Because non-weak signals, for which the linear response does not provide a good approximation of the firing rate, have recently attracted attention (Novikov & Gutkin, 2020; Ostojic & Brunel, 2011; Voronenko & Lindner, 2017, 2018), we also develop semi-analytical tools for the linear as well as the non-linear response of the firing rate to one or two periodic signals. To the best of our knowledge, this is the first application of the MCF method in computational neuroscience.

This paper is organized as follows. In Sect. 2 we introduce the model system and the associated Fokker-Planck equation. In Sect. 3 we compute the stationary firing rate of a theta neuron subject to correlated noise by means of the MCF method. Section 4 generalizes the ideas of the MCF method to the case where the model is driven by the OU noise and an additional periodic signal. Finally, in Sect. 4.3 we compute the firing rate response to two periodic signals. We conclude with a short summary of our results.

Model

The quadratic integrate-and-fire (QIF) model uses the normal form of a saddle-node on invariant circle (SNIC) bifurcation (Izhikevich, 2007) with a time-dependent input I(\hat{t}):

\tau_m \frac{dx}{d\hat{t}} = x^2 + I(\hat{t}).  (1)

In order to make the connection to physical time units transparent, we have kept on the l.h.s. a time constant, which is of the order of the membrane time constant τm, typically 10 ms. In the following, however, for ease of notation we use a nondimensional time t = \hat{t}/τm, i.e. we measure time as well as any other time constants, e.g. the correlation time below, in multiples of the membrane time constant. Similarly, all frequencies and firing rates are given in multiples of the inverse membrane time constant (additional rescalings are considered below, see e.g. Eqs. (7) and (10)).

In the new nondimensional time the QIF model takes the usual form:

\frac{dx}{dt} = x^2 + I(t).  (2)

If the variable x(t) reaches the threshold x_th = ∞, a spike is created at time t_i = t and x(t) is immediately reset to x_re = −∞. If the input is assumed to be constant, it can serve as a bifurcation parameter and allows the model to switch between the excitable (I<0) and the mean-driven regime (I>0). The model for I<0 is illustrated in Fig. 1A, including the stable and unstable fixed points at x = \pm\sqrt{-I} as well as the reset. The QIF model can be transformed into the theta neuron by the transformation x = tan(θ/2) (cf., Fig. 1A):

\frac{d\theta}{dt} = (1-\cos\theta) + (1+\cos\theta)\, I(t).  (3)

Fig. 1.

Fig. 1

Type-I neuron model. A Representation of the deterministic QIF and the equivalent theta neuron model. The blue line shows the QIF model's potential U(x) = −(x³/3 + I x) in the excitable regime (I<0). Upon reaching the threshold x_th = ∞ a spike is created and x is reset to x_re = −∞. For the equivalent theta neuron model, obtained by the transformation x = tan(θ/2), a spike is created whenever θ passes θ_th = π; no additional reset rule is needed. B Illustration of a theta neuron subject to a temporally correlated OU noise (blue) as well as a periodic signal (red) and the resulting spike train with stochastic spike times (orange)

The advantage of such a phase description is that the threshold θ_th = π and the reset θ_re = −π lie at finite values. We will use this phase description of a canonical type I neuron in the remainder of this paper.

We assume that the input I(t) consists of three parts:

I(t) = \mu + \eta(t) + s(t),  (4)
\tau \frac{d\eta}{dt} = -\eta + \sqrt{2\tau\sigma^2}\,\xi(t),  (5)

a constant mean input μ, a temporally correlated noise η(t) and a periodic signal s(t) (see Fig. 1B). Note that the temporal average of the input, \bar{I} = \lim_{T\to\infty} \frac{1}{T}\int_0^T I(t)\,dt, is determined by μ alone, because the temporal averages \bar{\eta} = 0 and \bar{s} = 0 can be assumed without loss of generality. The correlated noise η is given by an Ornstein-Uhlenbeck process with auto-correlation function ⟨η(t)η(t+Δt)⟩ = σ² exp(−|Δt|/τ) and correlation time τ; it can be generated by an extra stochastic differential equation, a trick from statistical physics known as Markovian embedding of a colored noise (see e.g. Dygas et al., 1986; Guardia et al., 1984; Langer, 1969; Mori, 1965; Siegle et al., 2010, and the review by Hänggi & Jung, 1995). We remind the reader that the correlation time τ is given in terms of the membrane time constant, i.e. τ = τs/τm is the ratio between the true correlation time τs (given for instance in ms) and the membrane time τm. In the limit τ → 0 the noise η(t) becomes uncorrelated, i.e. white. However, if the variance σ² is held constant, as in Eq. (5), the effect of the noise on the neuron vanishes together with the correlation time. A non-trivial white-noise limit can be more properly described in terms of the noise intensity D = τσ²; if D is held constant, the noise still affects the dynamics for vanishing correlation times. For such a constant-intensity scaling the effect of the noise vanishes as τ → ∞ instead.
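The Markovian embedding above translates directly into a simulation scheme. The following minimal Python sketch (our own illustration with hypothetical function names; the authors' linked repository contains the full code) integrates the theta neuron, Eq. (3), together with the OU noise, Eq. (5), using an Euler-Maruyama step:

```python
import numpy as np

def simulate_theta_neuron(mu, sigma, tau, t_max=500.0, dt=5e-3, seed=0):
    """Euler-Maruyama integration of the theta neuron (Eq. 3) driven by
    an OU noise (Eq. 5) with s(t) = 0; returns the spike times of one trial."""
    rng = np.random.default_rng(seed)
    theta, eta = -np.pi, 0.0
    spikes = []
    noise_amp = np.sqrt(2.0 * sigma**2 * dt / tau)  # OU increment prefactor
    for i in range(int(t_max / dt)):
        I = mu + eta  # total input current
        theta += ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I) * dt
        eta += -eta / tau * dt + noise_amp * rng.standard_normal()
        if theta > np.pi:          # passing theta_th = pi counts as a spike ...
            spikes.append(i * dt)
            theta -= 2.0 * np.pi   # ... and wraps the phase around to -pi
    return np.array(spikes)

# crude firing-rate estimate: spike count divided by simulation time
rate = len(simulate_theta_neuron(mu=1.0, sigma=1.0, tau=1.0)) / 500.0
```

Because the phase stays finite, no artificial cutoff at large |x| is needed, which is precisely why the phase description is convenient for simulations.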

For s(t)=0 the system shows spontaneous spiking (not related to any signal). In this case the parameter space is three-dimensional, i.e. all statistics depend only on (μ, σ, τ). This dependence, however, can be reduced to just two independent parameters (\hat{\mu}, \hat{\tau}) defined by

\hat{\mu} = \mu/\sigma, \quad \hat{\tau} = \sqrt{\sigma}\,\tau.  (6)

This transformation also affects the phase, \tan(\hat{\theta}/2) = \tan(\theta/2)/\sqrt{\sigma}, and the time, \hat{t} = \sqrt{\sigma}\, t, in Eq. (3) and consequently rescales the firing rate

\hat{r}(\hat{t}) = r(\sqrt{\sigma}\, t)/\sqrt{\sigma}.  (7)

Under an additional periodic driving s(t) = ε cos(ωt) the signal will be rescaled as well: \hat{s}(\hat{t}) = \hat{\varepsilon}\cos(\hat{\omega}\hat{t}) with

\hat{\varepsilon} = \varepsilon/\sigma \quad \text{and} \quad \hat{\omega} = \omega/\sqrt{\sigma}.  (8)

For several periodic signals the respective amplitudes and frequencies will be rescaled in the same manner.

For the constant intensity scaling we use a similar transformation and set D=1:

\tilde{\mu} = \mu/D^{2/3}, \quad \tilde{\tau} = D^{1/3}\tau, \quad \tilde{\varepsilon} = \varepsilon/D^{2/3}, \quad \tilde{\omega} = \omega/D^{1/3};  (9)

again, the state variables are affected by this scaling as well: \tan(\tilde{\theta}/2) = \tan(\theta/2)/D^{1/3} and \tilde{t} = D^{1/3} t. The firing rates in the scaled and unscaled parameter space are related by

\tilde{r}(\tilde{t}) = r(D^{1/3} t)/D^{1/3}.  (10)

We make use of these scalings in the discussion of the results. For ease of notation, we omit the hat and tilde over the parameters.
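For concreteness, the parameter maps of the constant-intensity scaling, Eqs. (9) and (10), can be written as small helper functions (the names are ours, chosen for illustration):

```python
def to_constant_intensity(mu, tau, eps, omega, D):
    """Map parameters onto the constant-intensity scaling of Eq. (9),
    in which the rescaled noise intensity equals one."""
    return {"mu": mu / D ** (2 / 3),
            "tau": D ** (1 / 3) * tau,
            "eps": eps / D ** (2 / 3),
            "omega": omega / D ** (1 / 3)}

def rescale_rate(r_value, D):
    """Relate the firing rates in the scaled and unscaled space, Eq. (10)."""
    return r_value / D ** (1 / 3)
```

For D = 1 the map reduces to the identity, as it should.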

The Fokker-Planck equation

The stochastic system of interest can be written as two Langevin equations

\frac{d\theta}{dt} = f(\theta,\eta,s(t)),  (11)
\tau \frac{d\eta}{dt} = -\eta + \sqrt{2\tau\sigma^2}\,\xi(t),  (12)

where f(θ,η,s(t)) = (1−cos θ) + (1+cos θ)(μ + η(t) + s(t)). The governing equation for the probability density function (PDF) is the well-known Fokker-Planck equation (FPE) (Risken, 1984). The PDF describes the probability of finding the phase θ and the noise η around certain values at time t. In the neural context the PDF can be related to the instantaneous firing rate r(t) (see for instance Brunel & Sergi, 1998; Naundorf et al., 2005a, b, and Moreno-Bote & Parga, 2010) as we recall in the following. The FPE is given by:

\partial_t P(\theta,\eta,t) = \hat{L}(\theta,\eta,s(t))\, P(\theta,\eta,t)  (13)
\hat{L}(\theta,\eta,s(t)) = -\partial_\theta f(\theta,\eta,s(t)) + \frac{1}{\tau}\,\partial_\eta\!\left(\eta + \sigma^2\,\partial_\eta\right).  (14)

The two-dimensional partial differential equation is completed by two natural boundary conditions

P(\theta, \eta=\infty, t) = P(\theta, \eta=-\infty, t) = 0,  (15)

a periodic boundary condition

P(\theta=\pi, \eta, t) = P(\theta=-\pi, \eta, t),  (16)

and the normalization condition

\int_{-\infty}^{\infty} d\eta \int_{-\pi}^{\pi} d\theta\, P(\theta,\eta,t) = 1.  (17)

There is a corresponding continuity equation that relates the temporal derivative of the PDF to the spatial derivative of the probability current:

\partial_t P(\theta,\eta,t) = -\partial_\theta J_\theta(\theta,\eta,t) - \partial_\eta J_\eta(\theta,\eta,t),  (18)

where Jθ and Jη are the probability currents in the θ and η direction, respectively:

J_\theta = f(\theta,\eta,s(t))\, P(\theta,\eta,t),  (19)
J_\eta = -\frac{1}{\tau}\left(\eta + \sigma^2\,\partial_\eta\right) P(\theta,\eta,t).  (20)

An important insight is that the probability current in the phase direction Jθ at the threshold θ=π is directly related to the instantaneous firing rate r(t):

r(t) = \int_{-\infty}^{\infty} d\eta\, J_\theta(\pi,\eta,t) = 2\int_{-\infty}^{\infty} d\eta\, P(\pi,\eta,t).  (21)

In the last equality we have used that the dynamics of the theta neuron becomes independent of the input at the threshold; specifically, we have f(π,η,s(t))=2.

The solution of the two-dimensional Fokker-Planck equation subject to the boundary conditions listed above is a difficult problem, even in the simplest case of the (time-independent) stationary solution in the absence of a periodic stimulus. Different authors have proposed approximate solutions in limit cases, e.g. for the case of very slow or very fast Ornstein-Uhlenbeck noise (Brunel & Latham, 2003), for weak noise in the mean-driven regime (Galán, 2009; Zhou et al., 2013), or, in the case of a periodic modulation of the firing rate, for very low or very high stimulus frequencies (Fourcaud-Trocmé et al., 2003). A numerical method to solve the two-dimensional Fokker-Planck equation in terms of an eigenfunction expansion was presented by Naundorf et al. (2005a, b); similar approaches have been pursued to describe two one-dimensional white-noise driven neuron models either coupled directly (Ly & Ermentrout, 2009) or subject to a shared input noise (Deniz & Rotter, 2017). Eigenfunction expansions have also been used to describe the activity in neural populations and neural networks, see e.g. Knight (2000) and Doiron et al. (2006). Turning back to the problem of single-neuron models, beyond the theta neuron, different approximations to the multi-dimensional Fokker-Planck equation for neuron models with Ornstein-Uhlenbeck noise have been suggested for the perfect integrate-and-fire model (Fourcaud & Brunel, 2002; Lindner, 2004; Schwalger et al., 2010, 2015) and for the leaky integrate-and-fire model (Alijani & Richardson, 2011; Brunel & Sergi, 1998; Brunel et al., 2001; Moreno et al., 2002; Moreno-Bote & Parga, 2004, 2006, 2010; Schuecker et al., 2015; Schwalger & Schimansky-Geier, 2008).
We note that with respect to the driving noise, the related simpler case of an exponentially correlated two-state (dichotomous) noise permits the exact analytical solution for a few statistical measures such as the firing rate and stationary voltage distribution (Droste & Lindner, 2014; Müller-Hansen et al., 2015), the power spectrum and linear response function (Droste & Lindner, 2017), and the serial correlation coefficient of the interspike intervals (Lindner, 2004; Müller-Hansen et al., 2015).

Stationary firing rate

If we consider a system that is subject to a temporally correlated noise but no external signal (s(t)=0), then the probability density asymptotically approaches a stationary distribution P_0(θ,η), which we consider now. The FPE for this stationary distribution reads

0 = \hat{L}_0(\theta,\eta)\, P_0(\theta,\eta),  (22)

with the stationary Fokker-Planck operator \hat{L}_0(\theta,\eta) = \hat{L}(\theta,\eta,0). Once the stationary probability density is known, it can be used to obtain the stationary firing rate r_0. As an alternative to Eq. (21), one can calculate the firing rate by

2\pi r_0 = \int_{-\pi}^{\pi} d\theta \int_{-\infty}^{\infty} d\eta\, J_{\theta,0}(\theta,\eta),  (23)

where J_{\theta,0}(\theta,\eta) denotes the component of the stationary probability current in the direction of the phase for s(t) ≡ 0. To see how to arrive at this equation, we take the stationary case of Eq. (18) and integrate it over all values of η. The term \int d\eta\, \partial_\eta J_\eta = J_\eta(\theta,\eta=\infty) - J_\eta(\theta,\eta=-\infty) vanishes because of the natural boundary conditions, and it follows that \partial_\theta \int d\eta\, J_\theta = 0. Consequently the integrated θ current does not depend on θ and is everywhere equal to the firing rate. An additional integration over θ, yielding the factor 2π, leads to Eq. (23).

The MCF method

In the previous section it was shown that the stationary probability density is interesting on its own because it is directly related to the stationary firing rate. Here we outline the core ideas and assumptions that are necessary to compute the stationary PDF P0(θ,η) by means of the matrix-continued-fraction method, which has been put forward by Risken (1984).

As a first step, the stationary probability density is expanded with respect to the phase θ and noise η in two sets of eigenfunctions, namely the complex exponential functions e^{in\theta}/(2\pi) and the Hermite functions \phi_p(\eta) (see Bartussek, 1997 for a similar choice):

P_0(\theta,\eta) = \frac{\phi_0(\eta)}{2\pi} \sum_{p=0}^{\infty} \sum_{n=-\infty}^{\infty} c_{n,p}\, e^{in\theta}\, \phi_p(\eta).  (24)

Note that c_{n,p} = c^*_{-n,p} because P_0(\theta,\eta) is real. Thus, we only need to determine the expansion coefficients for n ≥ 0. Both sets satisfy the periodic and natural boundary conditions in θ and η, respectively. A first application of this result is the determination of the marginal probability density by

P_0(\theta) := \int_{-\infty}^{\infty} d\eta\, P_0(\theta,\eta) = \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} c_{n,0}\, e^{in\theta},  (25)

which is illustrated for different values of μ and τ in Fig. 2. The stationary firing rate is conveniently expressed by only two of the coefficients,

r_0 = \frac{(1+\mu) - (1-\mu)\,\mathrm{Re}(c_{1,0}) + \sigma\,\mathrm{Re}(c_{1,1})}{2\pi}.  (26)

Fig. 2.

Fig. 2

Stationary phase distribution of the theta neuron in the excitable regime (μ=−1, A), at the bifurcation point (μ=0, B) and in the mean-driven regime (μ=1, C). Dynamics of the corresponding deterministic systems are shown at the right. For the phase distributions the variance of the OU noise is held constant at σ²=1 while the correlation time varies as shown in A. For τ → 0 the effect of the noise vanishes, i.e. the model becomes deterministic. The distributions have been calculated using the MCF method. Parameters MCF method: nmax=pmax=200

This expression can be derived by inserting the expansion into Eq. (23) and using the properties of the coefficients and eigenfunctions, in particular Eqs. (80) and (81) for the Hermite functions.

The coefficients can be determined by a substitution of the expansion Eq. (24) into the stationary FPE (22) which yields the tridiagonal recurrence relation, see Appendix A:

\hat{K}_n c_n = c_{n-1} + c_{n+1},  (27)

with the coefficient vectors c_n = \left(c_{n,0}, c_{n,1}, \ldots\right)^T and c_0 = \left(1, 0, 0, \ldots\right)^T. The matrix \hat{K}_n is given by

\hat{K}_n = 2\left(\hat{B}^{-1} - \mathbb{1}\right) - \frac{1}{n}\,\hat{B}^{-1}\hat{A},  (28)

where \mathbb{1} is the identity matrix and \hat{A}, \hat{B} are defined by

\hat{A}_{p,q} = \frac{iq}{\tau}\,\delta_{p,q},  (29)
\hat{B}_{p,q} = \frac{1-\mu}{2}\,\delta_{p,q} - \frac{\sigma}{2}\left(\sqrt{q}\,\delta_{p+1,q} + \sqrt{q+1}\,\delta_{p-1,q}\right).  (30)

Solving Eq. (27) for c_{n,p} is difficult because the matrices are infinite and the equation constitutes a relation between three unknowns. As a first step to find the coefficients, one can truncate the expansion in Eq. (24) to obtain finite matrices. In practice, we assume that all Hermite functions and Fourier modes become negligible for large p or n, so that the corresponding coefficients vanish: c_{n,p} = 0 for p > p_max or |n| > n_max. To solve the second problem (of having three unknowns), we define transition matrices \hat{S}_n by

c_{n+1} = \hat{S}_n c_n,  (31)

which upon insertion into Eq. (27) yield:

0 = \left[\left(\hat{K}_n - \hat{S}_n\right)\hat{S}_{n-1} - \mathbb{1}\right] c_{n-1}.  (32)

For arbitrary coefficient vectors c_{n-1} this equation is satisfied provided the term in square brackets vanishes. The relation between the two unknown transition matrices can then be expressed as:

\hat{S}_{n-1} = \left[\hat{K}_n - \hat{S}_n\right]^{-1},  (33)

and leads by recursive insertion to an infinite matrix continued fraction

\hat{S}_{n-1} = \cfrac{1}{\hat{K}_n - \cfrac{1}{\hat{K}_{n+1} - \cdots}},  (34)

where 1/(·) denotes the inverse of a matrix. The fraction is truncated for n > n_max, i.e. \hat{S}_{n_{max}} is set to zero. The matrix \hat{S}_0 determines the following coefficients via Eq. (31):

c_{1,0} = \left(\hat{S}_0\right)_{0,0}; \quad c_{1,1} = \left(\hat{S}_0\right)_{1,0},  (35)

which are needed for the computation of the firing rate according to Eq. (26).

Constant variance scaling

The MCF method provides a fast computational method to determine the stationary firing rate r_0 in a large part of the parameter space. Together with different analytical approximations it is possible to cover the complete dependence of r_0 on the parameters μ, τ and σ. In the following figures, we additionally verify the MCF results by comparison to numerical simulations of Eq. (11) using an Euler-Maruyama scheme with time step Δt = 5·10^{-3} for N_trials = 5·10^5 trials of length T_max = 500. For more details see the repository. In Fig. 3 we use the constant variance scaling (see Sect. 2) with σ=1. A different choice for σ would result in a rescaling of the axes according to Eq. (6). As depicted in Fig. 3B, for small as well as large correlation times, the firing rate approaches limit values indicated by the horizontal lines. For τ → 0, the effect of the correlated noise vanishes, so that the short-correlation-time limit is equal to the deterministic firing rate

r_{\mathrm{det}} = r(\mu, \tau=0) = \frac{\sqrt{\mu}}{\pi}\,\Theta(\mu),  (36)

where Θ(μ) is the Heaviside function. In the case τ → ∞, the noise causes a slow modulation of the firing rate; computing the long-correlation-time limit then corresponds to averaging the deterministic firing rate over the distribution of the noise (quasi-static noise approximation, see Moreno-Bote & Parga, 2010)

r_{\infty}(\mu) = \int_{-\infty}^{\infty} dI\, P_\eta(I-\mu)\, r_{\mathrm{det}}(I).  (37)
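Both limits are easy to evaluate numerically. The sketch below (helper names are ours) implements Eq. (36) and approximates the quasi-static average of Eq. (37) by Gauss-Hermite quadrature over the Gaussian density P_η:

```python
import numpy as np

def r_det(I):
    """Deterministic firing rate of the theta neuron, Eq. (36)."""
    return np.sqrt(np.maximum(I, 0.0)) / np.pi

def r_quasistatic(mu, sigma, n_quad=100):
    """Long-correlation-time limit, Eq. (37): average of the deterministic
    rate over the stationary Gaussian density of the OU noise."""
    # probabilists' Gauss-Hermite nodes/weights for the weight exp(-x^2/2)
    x, w = np.polynomial.hermite_e.hermegauss(n_quad)
    return np.sum(w * r_det(mu + sigma * x)) / np.sqrt(2.0 * np.pi)
```

In the strongly mean-driven regime the average lies below r_det(μ), which anticipates the concavity argument of Fig. 4; in the excitable regime it is positive although r_det vanishes there.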

Fig. 3.

Fig. 3

Stationary firing rate in the constant variance scaling (σ²=1) for different values of μ and τ. Contour lines from A are shown again in B and C. Interestingly, the firing rate of the theta neuron can increase, decrease and even exhibit non-monotonic behavior with respect to the correlation time τ of the OU noise, as shown in B. Calculations by the MCF method are confirmed by stochastic simulations (gray dots). Parameters MCF method: nmax=pmax=150

We recall that for a QIF model driven by white noise the firing rate is always larger than the deterministic rate (Lindner et al., 2003). In contrast, a colored noise may decrease the firing rate (Brunel & Latham, 2003; Galán, 2009) as shown in Fig. 3. For large correlation times, the decrease in the firing rate is a direct consequence of the concave curvature of the deterministic firing rate rdet(I) at large μ as illustrated in Fig. 4. This can be understood as follows. If we take the linear approximation of the deterministic rate around the operation point μ then, not surprisingly, with a symmetric input distribution of the noise, the averaging yields the deterministic firing rate at the operation point:

\int dI\, P_\eta(I-\mu)\,\underline{\left[\frac{dr_{\mathrm{det}}}{dI}\bigg|_{I=\mu}(I-\mu) + r_{\mathrm{det}}(\mu)\right]} = r_{\mathrm{det}}(\mu).  (38)

Fig. 4.

Fig. 4

Mechanism for the firing rate reduction. The decrease of the firing rate due to strongly correlated noise in the mean-driven regime is a consequence of the concave curvature of the deterministic firing rate rdet(I). For large τ the firing rate can be approximated by averaging the deterministic firing rate over the noise distribution according to Eq. (37); this yields the blue point on the dashed line

In the relevant range the underlined term is larger than the function rdet(I) in Eq. (37), as can be seen from Fig. 4. Consequently, the resulting integral in Eq. (38) (i.e. the deterministic firing rate) is larger than the actual firing rate in the long-correlation-time limit, Eq. (37). This is the mechanism by which a colored noise can reduce the firing rate in the mean-driven regime.

For weak noise in the mean-driven regime (σμ) this drop in the firing rate can be calculated analytically as done by Galán (2009). The formula requires the phase response curve (PRC) of the theta neuron, which is well known (Ermentrout, 1996), resulting in the following compact expression for the firing rate:

r_0 \approx r_{\mathrm{det}} - \frac{\sigma^2}{2\pi}\,\frac{\tau^2/\sqrt{\mu}}{4\mu\tau^2 + 1}  (39)

(please note the transition from cyclic frequencies used in Galán, 2009 to firing rates). The formula clearly predicts a reduction of the firing rate by colored noise; specifically, r0 decreases monotonically with increasing correlation time. It should be noted, however, that in the strongly mean-driven regime, in which this theory is valid, the changes in the firing rate are very small (see Fig. 5A, B). If the driving is less strong and deviations of the firing rate from rdet are more pronounced, the theory according to Eq. (39) no longer provides a good approximation (see Fig. 5C).
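Eq. (39) is cheap to evaluate; the helper below (our own, assuming the reading r_0 ≈ r_det − (σ²/2π)(τ²/√μ)/(4μτ²+1) given above) reproduces both limits, i.e. a vanishing correction for τ → 0 and the quasi-static value −σ²/(8πμ^{3/2}) for τ → ∞:

```python
import numpy as np

def r0_weak_noise(mu, sigma, tau):
    """Weak-noise approximation of the stationary rate, Eq. (39);
    valid in the strongly mean-driven regime (sigma << mu)."""
    correction = (sigma**2 / (2.0 * np.pi)) \
        * (tau**2 / np.sqrt(mu)) / (4.0 * mu * tau**2 + 1.0)
    return np.sqrt(mu) / np.pi - correction
```

The correction term grows monotonically with τ, in line with the monotonic rate decrease stated above.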

Fig. 5.

Fig. 5

Decrease of the firing rate with respect to the correlation time τ at fixed variance σ2=1. Analytical approximations according to Eq. (39) (blue line) are compared to the firing rate obtained by the MCF method (orange line) and again verified by stochastic simulations (gray dots). Parameters MCF method: nmax=pmax=200

Is at least the qualitative prediction of an overall rate reduction due to correlated noise correct? To answer this question, we plot in Fig. 6 the difference between the firing rate and the deterministic limit, r_0 − r_det, for a broad range of correlation times τ and inputs μ. This difference can be both positive and negative. Trivially, in the excitable regime (μ<0) the firing rate in the presence of noise can only be larger than the vanishing deterministic rate (here rdet=0). In the mean-driven regime the changes can be both positive (for sufficiently small μ) and negative (for larger μ); the exact line of separation is displayed as a solid line in Fig. 6.

Fig. 6.

Fig. 6

Comparison between the firing rate and the deterministic rate. Difference between r(μ,τ) and the deterministic firing rate rdet(μ) for σ²=1. As expected, in the excitable regime (μ<0) the firing rate of the stochastic system is increased compared to the deterministic rate. In the mean-driven regime (μ>0) the firing rate can be either increased or decreased, depending on the particular values of both μ and τ. Parameters MCF method: nmax=pmax=150

Constant intensity scaling

Instead of a constant variance, we can also keep the noise intensity fixed (D = σ²τ). The corresponding stationary firing rate as a function of μ and τ is shown in Fig. 7A. One advantage of the constant-intensity scaling is that it permits a non-trivial white-noise limit (τ → 0), displayed in Fig. 7B, C by the dashed lines (Brunel & Latham, 2003). In the opposite limit of a long correlation time the noise variance vanishes, which implies that r_0 approaches the deterministic rate.

Fig. 7.

Fig. 7

Stationary firing rate in the constant intensity scaling (D=1) for different values of μ and τ. Contour lines from A are shown again in B and C. Interestingly, the firing rate is always smaller than the corresponding white-noise limit τ → 0 (dashed line) and can show non-monotonic behavior with a minimum depending on τ and μ, see B. Here, known analytical approximations by Fourcaud-Trocmé et al. (2003) (solid purple lines) and Moreno-Bote and Parga (2010) (dashed purple lines) are compared to calculations by the MCF method (orange lines). Parameters MCF method: nmax=pmax=150

Remarkably, for a sufficiently strong mean input current μ, the rate attains a minimum at intermediate correlation times. Considering the long as well as the short correlation-time approximation by Moreno-Bote and Parga (2010) (see our Eq. (37)) and Brunel and Latham (2003) (see Eq. (3.19) therein), respectively, this behavior can be expected. Generally, we find that the firing rate for any τ is smaller than the white-noise limit.

Response to periodic stimulus

In the previous section we have considered a theta neuron with an input current I(t) that consisted of a constant input μ and a colored noise η(t). We now turn to a more general case that involves an additional periodic signal

s(t) = \varepsilon \cos(\omega t),  (40)

as illustrated in Fig. 8A and demonstrate how the MCF method can be used to compute the response of the firing rate.

Fig. 8.

Fig. 8

Cyclo-stationary firing rate. A Illustration of a theta neuron model subject to a temporally correlated OU noise and a periodic signal. B The firing rate (orange line; simulation) approaches a cyclo-stationary state (black line; MCF method) due to the periodicity of the signal (green line). In the linear regime the firing rate is well approximated by r(t) ≈ r_0 + |χ(ω)| s(t − φ_{1,1}/ω). Parameters: μ=0.5, σ²=1, τ=1, ε=0.1, and ω=2. The cyclo-stationary firing rate was calculated by the MCF method with nmax=pmax=100. Simulation parameters: in this figure, the number of realizations was up-scaled to Ntrials=1·10^6 for visual purposes. For all realizations, the initial values are η(t=0)=0 and θ(t=0)=−π

We consider the time-dependent signal s(t) as a perturbation with amplitude ε. The respective FPE can be expressed by the stationary Fokker-Planck operator L^0 as defined in the last section and an additional term that represents the effect of the periodic signal:

\partial_t P(\theta,\eta,t) = \left[\hat{L}_0(\theta,\eta) - s(t)\,\hat{L}_{\mathrm{per}}\right] P(\theta,\eta,t),  (41)

with \hat{L}_{\mathrm{per}} = \partial_\theta (1+\cos\theta). As a result of the periodic forcing, we can no longer expect that the probability density converges to a stationary distribution; instead the probability density approaches a so-called cyclo-stationary state with period T = 2π/ω:

P(θ,η,t+T)=P(θ,η,t). 42

Since this distribution fully determines the asymptotic firing rate, this implies for the latter r(t+T)=r(t).

To determine the cyclo-stationary PDF we again use a twofold expansion, first a Fourier expansion that reflects the periodic nature of the signal and second a Taylor expansion with respect to the small amplitude of the periodic signal ε:

P(\theta,\eta,t) = \sum_{\ell=0}^{\infty} \sum_{k=-\infty}^{\infty} \varepsilon^{\ell}\, e^{-ik\omega t}\, P_{\ell,k}(\theta,\eta).  (43)

Note that P_{\ell,k}(\theta,\eta) = P^*_{\ell,-k}(\theta,\eta) because P(\theta,\eta,t) is real. The expansion Eq. (43) can be substituted into Eq. (41) to obtain a system of coupled differential equations that are no longer time-dependent and can be solved iteratively with respect to ℓ:

\hat{L}_k P_{\ell,k} = \begin{cases} 0, & \ell = 0, \\ \frac{1}{2}\hat{L}_{\mathrm{per}}\left(P_{\ell-1,k-1} + P_{\ell-1,k+1}\right), & \ell > 0, \end{cases}  (44)

with \hat{L}_k = \hat{L}_0 + ik\omega. The normalization of the probability density provides additional conditions for these functions:

\int_{-\pi}^{\pi} d\theta \int_{-\infty}^{\infty} d\eta\, P_{\ell,k}(\theta,\eta) = \delta_{k,0}\,\delta_{\ell,0}.  (45)

Here δ_{i,j} is the Kronecker delta. Clearly, P_{0,0}(θ,η) = P_0(θ,η) is the stationary probability density. This system of coupled differential equations, Eq. (44), can be solved iteratively (ℓ → ℓ+1). Notice that whenever P_{ℓ,k}(θ,η) is governed by a homogeneous differential equation, i.e. \hat{L}_k P_{\ell,k}(\theta,\eta) = 0, the trivial solution P_{\ell,k}(\theta,\eta) = 0 satisfies Eq. (45) and is thus a solution (except for k = ℓ = 0). Therefore, for ℓ = 0 we find that all coefficients except P_{0,0}(θ,η) vanish. For ℓ = 1 we find two non-vanishing coefficients, namely P_{1,-1}(θ,η) and P_{1,1}(θ,η). Generally, all coefficients P_{\ell,k}(θ,η) with |k| > ℓ or with k + ℓ odd vanish (see Fig. 16). The remaining inhomogeneous differential equations can be solved by means of the MCF method (see Appendix B).
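The vanishing pattern of the hierarchy is easy to enumerate; the small helper below (our own, mirroring the rule that P_{ℓ,k} vanishes for |k| > ℓ or k + ℓ odd) lists the surviving indices up to a given order:

```python
def nonvanishing_modes(l_max):
    """Indices (l, k) of the expansion functions P_{l,k} that survive:
    P_{l,k} vanishes whenever |k| > l or k + l is odd (cf. Fig. 16)."""
    return [(l, k) for l in range(l_max + 1)
            for k in range(-l, l + 1) if (k + l) % 2 == 0]
```

Up to first order this yields (0,0), (1,−1) and (1,1), in line with the discussion above.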

Fig. 16.

Fig. 16

Expansion of the PDF and coupling hierarchy. If the theta neuron is subject to a periodic signal the cyclo-stationary PDF can be expanded according to Eq. (94). Inserting this ansatz into the FPE leads to a system of time-independent recursively coupled differential equations (93). The coupling hierarchy is shown here. The stationary PDF P0,0 determines P1,1 which in turn determines P2,0 and P2,2 and so on. Gray dots represent terms that vanish as explained in Sect. 4

The cyclo-stationary firing rate can now be expressed in terms of the functions P_{\ell,k}(\theta,\eta) using Eq. (21), exploiting the symmetry P_{\ell,k} = P^*_{\ell,-k} and the fact that P_{\ell,k} = 0 for |k| > \ell:

r(t) = \sum_{\ell=0}^{\infty} \sum_{k=0}^{\infty} \varepsilon^{\ell}\, |r_{\ell,k}(\omega)| \cos\!\left(k\omega t - \varphi_{\ell,k}(\omega)\right),  (46)

with:

r_{\ell,k} = 2\,(2-\delta_{k,0}) \int_{-\infty}^{\infty} d\eta\, P_{\ell,k}(\pi,\eta), \quad \varphi_{\ell,k} = \arg(r_{\ell,k}),  (47)

where arg(·) is the complex argument. We recover the well-known stationary firing rate for ℓ = k = 0, i.e. r_{0,0} = r_0. Note that some of the terms r_{\ell,k} in Eq. (46) vanish because of the underlying symmetry of the governing equations, Eq. (44).

Linear response

For small ε the linear term in the expansion, i.e. the linear response r1,1, already provides a good approximation of the asymptotic firing rate r(t):

r(t) \approx r_0 + \varepsilon\, |r_{1,1}(\omega)| \cos(\omega t - \varphi_{1,1}).  (48)

Note that all other terms r_{1,k≠1} vanish. The function |r_{1,1}(ω)| is also commonly known as the absolute value of the susceptibility, |χ(ω)|, which quantifies the amplitude response of the firing rate. The phase shift with respect to the signal is described by φ_{1,1}. An exemplary signal s(t) together with the linear response, given in terms of the amplitude and phase shift, is shown in Fig. 8. For the chosen small signal amplitude ε, the linear theory indeed captures the cyclo-stationary part of the firing rate very well. There is also a transient response due to the chosen initial condition of the ensemble; here, however, we focus solely on the cyclo-stationary response.
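In simulations, the linear response can be estimated directly from the spike trains by projecting the trial-averaged rate onto the stimulus frequency. The following sketch (our own estimator, not taken from the paper; it assumes the convention r(t) ≈ r_0 + ε|χ(ω)| cos(ωt − φ_{1,1}) of Eq. (48)) averages e^{iωt_j} over all spike times t_j within an integer number of stimulus periods:

```python
import numpy as np

def estimate_susceptibility(spike_time_trials, eps, omega, t_max):
    """Estimate |chi(omega)| and the phase shift phi from an ensemble of
    spike trains driven by s(t) = eps*cos(omega*t)."""
    period = 2.0 * np.pi / omega
    t_end = int(t_max / period) * period   # keep an integer number of periods
    z = 0.0 + 0.0j
    for spikes in spike_time_trials:
        s = np.asarray(spikes)
        z += np.sum(np.exp(1j * omega * s[s < t_end]))
    # over full periods: sum_j e^{i omega t_j} -> N*eps*|chi|*(t_end/2)*e^{i phi}
    chi = 2.0 * z / (eps * t_end * len(spike_time_trials))
    return np.abs(chi), np.angle(chi)
```

Restricting the projection to full periods removes the bias from the stationary part r_0, so that only the modulated component survives the average.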

Before we discuss the rate modulation with respect to different parameters, we compare our numerical results against known approximations (Fourcaud-Trocmé et al., 2003) (see Fig. 9). First we verify the low-frequency limit ω → 0. In this case the signal s(t) is slow and can be considered a quasi-constant input. Expanding the firing rate with respect to the signal amplitude ε yields:

r(t) = r_0(μ + s(t)) ≈ r_0 + (∂r_0/∂μ) s(t).  49

Fig. 9.

Susceptibility and phase shift. The absolute value of the susceptibility |χ(ω)| and the phase shift φ_{1,1} are computed by the MCF method for two different correlation times τ. The results are confirmed by stochastic simulations and compared to the known limit cases for ω → 0 and ω → ∞ according to Eqs. (50) and (51), respectively. Parameters: μ=0.1, σ²=1. Parameters MCF method: n_max=p_max=200. Simulation parameters: T=5·10^3, dt=10^{−2}, and N_trials=1.6·10^4

A comparison with Eq. (48) allows us to identify the low-frequency limit of the susceptibility and phase shift:

|χ(ω → 0)| = ∂r_0/∂μ,   φ_{1,1}(ω → 0) = 0.  50

As we can compute the firing rate r0 for different values of μ (see Sect. 3), the derivative above can be calculated numerically.
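This numerical derivative reduces to a single central difference. A minimal sketch (the helper name `rate_of_mu` is ours; it stands for any routine returning the stationary rate r_0(μ), e.g. the MCF solver from the repository):

```python
def chi_zero_freq(rate_of_mu, mu, dmu=1e-4):
    """Low-frequency susceptibility |chi(0)| = dr0/dmu, Eq. (50),
    approximated by a central difference of the stationary rate."""
    return (rate_of_mu(mu + dmu) - rate_of_mu(mu - dmu)) / (2 * dmu)
```

The truncation error is O(dmu²); dmu should remain large enough that the rate at μ ± dmu is still resolved accurately by the MCF computation.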

Second, in the opposite limit of large frequencies, ω → ∞, the theta neuron acts as a low-pass filter (Fourcaud-Trocmé et al., 2003):

|χ(ω → ∞)| = 2r_0/ω²,   φ(ω → ∞) = π.  51

Hence, the susceptibility becomes very small in the high-frequency limit, which is also noticeable in the pronounced random deviations of our simulation results in this limit. Both limit cases are well captured by our method for the two values of the correlation time (τ = 0.1, 1) in the mean-driven regime. We see that here the main effect of increasing the correlation time is to diminish the resonance of the response: for τ = 0.1 the susceptibility peaks around ω ≈ 2πr_0 (note that for small τ we have r_0 ≈ r_det); this peak is gone for τ = 1 because the effect of the noise, at constant variance, increases with τ. All these features are confirmed in detail by the results of stochastic simulations (symbols in Fig. 9).

The general dependence of the susceptibility, focusing on its magnitude only, is inspected in Fig. 10 for the constant-variance scaling and in Fig. 11 for the constant-intensity scaling. Qualitatively different behavior of |χ(ω)| can be observed between the mean-driven (μ > 0) and the excitable regime (μ < 0). In the mean-driven regime the theta neuron exhibits a strong resonance near ω_det = 2πr_det that grows as the effect of the noise decreases, i.e. in the constant-variance scaling the resonance becomes stronger as τ → 0 (see Fig. 10 top), while in the constant-intensity scaling it increases as τ → ∞ (see Fig. 11 top).

Fig. 10.

Amplitude modulation |χ(ω)| in the constant-variance scaling with σ²=1, computed by the MCF method with n_max=p_max=150. For a discussion see the main text

Fig. 11.

Amplitude modulation |χ(ω)| in the constant-intensity scaling with D=1, computed by the MCF method with n_max=p_max=150. For a discussion see the main text

In the excitable regime resonances are weak or absent. First, the baseline firing rate of the neuron vanishes as the effect of the noise decreases (cf. Figs. 3A and 7A), and so does the susceptibility (see Figs. 10 and 11, bottom). Second, the theta neuron becomes a low-pass filter: |χ(ω)| decreases with increasing ω regardless of the correlation time τ.

Right at the bifurcation point, μ = 0, there are still no pronounced resonances with respect to ω. However, the dependence of the linear response on the correlation time differs somewhat from the excitable regime: the susceptibility increases as the effect of the noise becomes very weak, i.e. for τ → 0 in the constant-variance scaling (see Fig. 10 middle) and for τ → ∞ in the constant-intensity scaling (see Fig. 11 middle).

Nonlinear response

For larger signal amplitudes nonlinear response functions have to be considered:

r(t) = r_0 + ε |r_{1,1}| cos(ωt − φ_{1,1}) + ε² [r_{2,0} + |r_{2,2}| cos(2ωt − φ_{2,2})] + ε³ [|r_{3,1}| cos(ωt − φ_{3,1}) + |r_{3,3}| cos(3ωt − φ_{3,3})] + …  52

Here we have included all terms up to third order in ε (cf. Eq. (46)). The nonlinear response features higher Fourier modes and a correction r_{2,0} of the time-averaged firing rate. The response functions r_{ℓ,k} and their respective arguments φ_{ℓ,k} of course depend on the model parameters μ, τ and σ as well as on the signal frequency ω.

For a neuron in the mean-driven regime the frequency dependence of three selected response functions is shown in Fig. 12B. In contrast to the linear response |r_{1,1}|, the functions |r_{2,2}| and |r_{3,3}| display additional resonances, for instance at ω = πr_det. This behavior is not specific to the theta neuron; such resonances can be observed for the LIF neuron as well (Voronenko & Lindner, 2017). These additional resonances give rise to strong nonlinear effects even if the signal is weak, see Fig. 12A. In the particular case shown in Fig. 12 the signal frequency was chosen to match the resonance frequency of the second-order response |r_{2,2}|, so that the linear response alone no longer provides a good approximation to the firing rate r(t). Instead the second-order response must be included, illustrating the importance of the nonlinear theory even for comparatively weak signals.

Fig. 12.

Nonlinear response. A Periodic signal and firing rate response of the theta neuron model. Here, the linear theory (dotted line) fails to accurately describe the firing rate (solid black line). This is mainly because the signal frequency is chosen to match half the deterministic firing frequency, ω_det/2 = 1, where the nonlinear response functions |r_{2,2}| and |r_{3,3}| are close to their local maximum, see B. However, already the second-order response (dashed orange line) provides a good approximation to the actual firing rate, which is improved further if higher-order terms are considered (cyan line). All responses are calculated by the MCF method with n_max=p_max=150. Parameters: σ=1, μ=1, τ=0.1, ε=0.5 and ω=1

By means of the MCF method it is possible to achieve a near-perfect fit of the actual firing rate by including many correction terms; see Fig. 12A, where we have included all terms up to the 10th order. However, note that the computational cost of each additional correction term increases roughly linearly with the order of the signal amplitude.
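Once the complex coefficients r_{ℓ,k} = |r_{ℓ,k}| e^{iφ_{ℓ,k}} have been computed, evaluating the truncated series Eq. (46) at any order is cheap. A sketch (the dict keyed by (ℓ, k) is our own convention for this illustration, not the repository's interface):

```python
import numpy as np

def rate_series(t, coeffs, eps, omega):
    """Truncated Eq. (46): r(t) = sum over (l, k) of
    eps**l * |r_{l,k}| * cos(k*omega*t - phi_{l,k})."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for (l, k), z in coeffs.items():
        out += eps**l * np.abs(z) * np.cos(k * omega * t - np.angle(z))
    return out
```

Constant terms such as r_{2,0} are covered by the same formula, since |z| cos(−arg z) = Re z.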

We now discuss the amplitude response functions |r_{ℓ,k}| up to third order in ε for varying values of the mean input and correlation time (cf. Fig. 13). The linear response |r_{1,1}|, already discussed in the preceding section and shown here for completeness (Fig. 13A I, B I, C I), displays in the mean-driven regime (μ=1), and to a lesser degree also at the bifurcation point (μ=0), the well-known resonance peak near the firing frequency ω_0 = 2πr_0; in the excitable regime (μ=−0.5) it acts as a low-pass filter. Increasing the correlation time, and thereby the effect of the noise, diminishes this resonance.

Fig. 13.

Firing rate response functions in the excitable regime (A), at the bifurcation point (B) and in the mean-driven regime (C) for a fixed variance σ2=1 and various values of τ. For a discussion see the main text. Parameters MCF Method: pmax=nmax=200

The first nonlinear term, r_{2,0}, describes the effect of the periodic signal on the time-averaged firing rate; we discuss this term first for the mean-driven regime (Fig. 13C II). Similar to the findings for a stochastic LIF model (Voronenko & Lindner, 2017, Fig. 3B), at low noise we find that resonant driving at a frequency corresponding to the firing rate ω_0 does not evoke any change of the time-averaged firing rate, while a frequency slightly below or above evokes a reduction or increase of the rate, respectively. If we deviate too strongly from ω_0, however, the effect of the signal on the time-averaged rate becomes very small. Increasing the correlation time increases the effect of the noise and smears out these nonlinear resonances.

The effect of the periodic signal on the time-averaged firing rate in the excitable regime and at the bifurcation point is quite different (Fig. 13A II, B II). Here the rate is always increased by the periodic signal, similar to what was found already for an excitable LIF model (Voronenko & Lindner, 2017, Fig. 3A). Furthermore, at the bifurcation point and at low noise intensities (green curve in B II) there is a pronounced maximum as a function of frequency ω attained at a frequency higher than ω0.

Generally, in the higher-order response functions we observe a number of peaks versus frequency (see e.g. Fig. 13A–C V). The resonances in the mean-driven regime (C V) and at low noise (green curve) are found near ω_det, ω_det/2 and ω_det/3. Note again that in this regime the deterministic frequency ω_det = 2πr_det of the oscillator and the stationary firing frequency ω_0 are close. In the excitable regime both the linear and nonlinear response functions also exhibit, for most driving frequencies, a nonmonotonic behavior with respect to the correlation time, i.e. with respect to the strength of the noise.

Response to two periodic signals

So far we have discussed the theta neuron's linear and nonlinear firing rate response to a single periodic signal. In this section we derive a scheme that allows us to calculate the response if the model neuron receives two periodic signals:

s(t) = s_1(t) + s_2(t) = ε_1 cos(ω_1 t) + ε_2 cos(ω_2 t).  53

Calculating the firing rate in this case will not only help to understand how a theta neuron responds to two periodic signals but can also be used to calculate the 2nd order response to arbitrary signals (Voronenko & Lindner, 2017).

As a starting point we formulate the corresponding FPE:

∂_t P(θ,η,t) = [L̂_0(θ,η) − s(t) L̂_per(θ)] P(θ,η,t).  54

This equation agrees with Eq. (41) except that s(t) now contains two periodic signals. Again we are interested in the PDF for which all initial conditions have been forgotten and the time dependence of P(θ,η,t) is only due to the time dependence of the signal s(t). Note that since the sum of two periodic signals is not necessarily periodic, the functions P(θ,η,t) and r(t) are not periodic either; in fact, s(t) is periodic only if the ratio of the two frequencies is a rational number, i.e. ω_1/ω_2 ∈ ℚ. We choose a Fourier representation with respect to ω_1 t and ω_2 t and expand with respect to the small amplitudes ε_1, ε_2:

P(θ,η,t) = Σ_{ℓ_1=0}^{∞} Σ_{ℓ_2=0}^{∞} Σ_{k_1=−∞}^{∞} Σ_{k_2=−∞}^{∞} ε_1^{ℓ_1} ε_2^{ℓ_2} e^{−i(k_1ω_1 + k_2ω_2)t} P^{ℓ_1,ℓ_2}_{k_1,k_2}.  55

For notational convenience we have omitted the arguments of the coefficients P^{ℓ_1,ℓ_2}_{k_1,k_2}(θ,η).

Because P(θ,η,t) is a real valued function, the coefficients obey

P^{ℓ_1,ℓ_2}_{k_1,k_2} = (P^{ℓ_1,ℓ_2}_{−k_1,−k_2})*.  56

As for the case of a single periodic signal, inserting Eq. (55) into Eq. (54) gives a system of time-independent coupled differential equations:

L̂_{k_1,k_2} P^{ℓ_1,ℓ_2}_{k_1,k_2} = (L̂_per/2) [P^{ℓ_1−1,ℓ_2}_{k_1+1,k_2} + P^{ℓ_1−1,ℓ_2}_{k_1−1,k_2} + P^{ℓ_1,ℓ_2−1}_{k_1,k_2+1} + P^{ℓ_1,ℓ_2−1}_{k_1,k_2−1}]  57

with L̂_{k_1,k_2} = L̂_0 + i(k_1ω_1 + k_2ω_2) and P^{ℓ_1,ℓ_2}_{k_1,k_2} = 0 for ℓ_1 < 0 or ℓ_2 < 0. The normalization of the probability density again provides the additional conditions:

∫_{−π}^{π} dθ ∫_{−∞}^{∞} dη P^{ℓ_1,ℓ_2}_{k_1,k_2} = δ_{k_1,0} δ_{k_2,0} δ_{ℓ_1,0} δ_{ℓ_2,0}.  58

The differential equations (57) are analogous to Eq. (44) and can be solved by means of the MCF method (see Appendix C). In the following we explicitly provide the hierarchy of coupled differential equations up to second order in ε_1, ε_2, i.e. for ℓ_1 + ℓ_2 ≤ 2. The zeroth-order term, ℓ_1 + ℓ_2 = 0, describes the unperturbed system. As we have already argued for the case of a single periodic signal, the function P^{0,0}_{0,0}, governed by

L̂_{0,0} P^{0,0}_{0,0} = 0,  59

is the only non-vanishing zeroth-order term, because for every other value of k_1, k_2 the trivial solution satisfies Eq. (58). Therefore P^{0,0}_{0,0} = P_0 is the stationary probability density from Sect. 3. The stationary PDF in turn determines the two non-vanishing linear (ℓ_1 + ℓ_2 = 1) correction terms:

L̂_{1,0} P^{1,0}_{1,0} = (L̂_per/2) P^{0,0}_{0,0},  60
L̂_{0,1} P^{0,1}_{0,1} = (L̂_per/2) P^{0,0}_{0,0}.  61

Finally, the linear terms determine the second-order terms (ℓ_1 + ℓ_2 = 2):

L̂_{2,0} P^{2,0}_{2,0} = (L̂_per/2) P^{1,0}_{1,0},  62
L̂_{0,2} P^{0,2}_{0,2} = (L̂_per/2) P^{0,1}_{0,1},  63
L̂_{0,0} P^{0,2}_{0,0} = (L̂_per/2) [P^{0,1}_{0,1} + (P^{0,1}_{0,1})*],  64
L̂_{0,0} P^{2,0}_{0,0} = (L̂_per/2) [P^{1,0}_{1,0} + (P^{1,0}_{1,0})*],  65
L̂_{1,−1} P^{1,1}_{1,−1} = (L̂_per/2) [P^{1,0}_{1,0} + (P^{0,1}_{0,1})*],  66
L̂_{1,1} P^{1,1}_{1,1} = (L̂_per/2) [P^{1,0}_{1,0} + P^{0,1}_{0,1}].  67
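The bookkeeping of which terms survive can be automated. A small helper of our own (a sketch, assuming the single-signal selection rule of Sect. 4 — |k| ≤ ℓ with k + ℓ even — applies per signal component, and using the conjugation symmetry Eq. (56) to keep one member of each conjugate pair):

```python
from itertools import product

def hierarchy(order):
    """Non-vanishing representatives (l1, l2, k1, k2) with l1 + l2 == order.
    Per signal: |k_i| <= l_i with k_i + l_i even; of each conjugate pair
    only the lexicographically non-negative member is kept."""
    terms = []
    for l1 in range(order + 1):
        l2 = order - l1
        for k1, k2 in product(range(-l1, l1 + 1, 2), range(-l2, l2 + 1, 2)):
            if k1 > 0 or (k1 == 0 and k2 >= 0):
                terms.append((l1, l2, k1, k2))
    return terms
```

For order 2 this enumeration reproduces exactly the six left-hand sides of Eqs. (62)–(67).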

As for the case of a single periodic signal, the rate response r(t) can be expressed in terms of the functions P^{ℓ_1,ℓ_2}_{k_1,k_2} using Eqs. (21) and (56):

r(t) = Σ_{ℓ_1,ℓ_2} Σ_{k_1,k_2} ε_1^{ℓ_1} ε_2^{ℓ_2} |r^{ℓ_1,ℓ_2}_{k_1,k_2}| cos[(k_1ω_1 + k_2ω_2)t − φ^{ℓ_1,ℓ_2}_{k_1,k_2}],  68

with

r^{ℓ_1,ℓ_2}_{k_1,k_2} = 2(2 − δ_{k_1,0} δ_{k_2,0}) ∫_{−∞}^{∞} dη P^{ℓ_1,ℓ_2}_{k_1,k_2}(π,η),  69
φ^{ℓ_1,ℓ_2}_{k_1,k_2} = arg(r^{ℓ_1,ℓ_2}_{k_1,k_2}).  70

The response of the firing rate up to the second order in the amplitudes reads:

r(t) ≈ r^{0,0}_{0,0} + ε_1 |r^{1,0}_{1,0}| cos(ω_1 t − φ^{1,0}_{1,0}) + ε_2 |r^{0,1}_{0,1}| cos(ω_2 t − φ^{0,1}_{0,1}) + ε_1² [r^{2,0}_{0,0} + |r^{2,0}_{2,0}| cos(2ω_1 t − φ^{2,0}_{2,0})] + ε_2² [r^{0,2}_{0,0} + |r^{0,2}_{0,2}| cos(2ω_2 t − φ^{0,2}_{0,2})] + ε_1 ε_2 [|r^{1,1}_{1,1}| cos((ω_1 + ω_2)t − φ^{1,1}_{1,1}) + |r^{1,1}_{1,−1}| cos((ω_1 − ω_2)t − φ^{1,1}_{1,−1})].  71

The first five terms represent the first- and second-order responses of the firing rate for a theta neuron that receives a single periodic signal, either s_1(t) or s_2(t). For instance, |r^{1,0}_{1,0}(ω_1,ω_2)| = |r_{1,1}(ω_1)| (the linear response amplitude to s_1) and |r^{2,0}_{2,0}(ω_1,ω_2)| = |r_{2,2}(ω_1)| (the response amplitude at the second harmonic of s_1) do not depend on the frequency ω_2 of the second signal, as can be seen in Fig. 14AI and AII. The response functions r_{ℓ,k}(ω) for a single periodic signal have already been discussed in the previous sections. The last two terms, proportional to ε_1ε_2, are of particular interest here, because they arise only due to the interaction of the two periodic signals. The corresponding response amplitudes |r^{1,1}_{1,1}| and |r^{1,1}_{1,−1}| are shown in Fig. 14AIII and AIV. In accordance with previous observations for the leaky integrate-and-fire model with white noise and periodic driving (Voronenko & Lindner, 2017), we find two distinct cases in the mean-driven regime. First, if neither the sum nor the difference ω_1 ± ω_2 is close to the firing frequency 2πr_det, the response to the sum of the two signals is well described by the sum of the responses to the separate signals. A particular set of frequencies ω_1 and ω_2 for which this is the case is shown in Fig. 14C, where the second-order response to the sum of two signals (black solid line) agrees very well with the sum of the second-order responses to one signal at a time (dashed line). Second, if ω_1 + ω_2 ≈ 2πr_det or |ω_1 − ω_2| ≈ 2πr_det, the firing rate is significantly affected by the interaction of both signals (see Fig. 14AIII and AIV). An example of the firing rate as a function of time where these interaction terms are crucial is shown in Fig. 14B. Here the response to the sum of the two signals and the sum of the responses to one signal at a time disagree significantly.
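Eq. (71) is straightforward to evaluate once the complex coefficients r^{ℓ1,ℓ2}_{k1,k2} are known. The sketch below (again with a hypothetical dict-based interface of our own) makes the comparison explicit: dropping the two mixed keys (1, 1, 1, ±1) reduces the full response exactly to the sum of the single-signal responses, minus the doubly counted baseline.

```python
import numpy as np

def rate_two_signals(t, coeffs, eps1, eps2, w1, w2):
    """Evaluate Eq. (71); coeffs maps (l1, l2, k1, k2) to the complex
    coefficient r = |r| * exp(i * phi)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for (l1, l2, k1, k2), z in coeffs.items():
        out += (eps1**l1 * eps2**l2 * np.abs(z)
                * np.cos((k1 * w1 + k2 * w2) * t - np.angle(z)))
    return out
```

Near the resonance conditions ω_1 + ω_2 ≈ 2πr_det or |ω_1 − ω_2| ≈ 2πr_det, the mixed contributions are no longer negligible and this reduction fails.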

Fig. 14.

Nonlinear response to two periodic signals. AI–AIV Amplitudes of the response functions r^{ℓ_1,ℓ_2}_{k_1,k_2} (cf. Eq. (71)). Note that the response functions |r^{0,1}_{0,1}| and |r^{0,2}_{0,2}|, which are not shown here, are identical to those shown in AI and AII with the frequencies ω_1 and ω_2 interchanged (both account for a single signal). The response functions |r^{1,1}_{1,1}| and |r^{1,1}_{1,−1}|, which describe the interaction effect of both signals on the firing rate, exhibit additional resonances near ω_1 + ω_2 = 2πr_det and |ω_1 − ω_2| = 2πr_det. B and C show the firing rate in response to two periodic signals where the sum of the frequencies does and does not match the aforementioned condition ω_1 + ω_2 = 2πr_det, respectively. If the condition is matched, the sum of the responses to each individual signal does not provide a good approximation to the actual firing rate; instead the full response to the sum of signals has to be calculated (see B). Parameters: μ=1, σ²=1, τ=0.05, ε_1=0.3, ε_2=0.1 with frequencies ω_1=0.5, ω_2=1.5 in B and ω_1=1.0, ω_2=1.5 in C. Parameters MCF method: p_max=n_max=100

Summary and outlook

In this paper we have studied the firing rate of the canonical type-I neuron model, the theta neuron, subject to a temporally correlated Ornstein-Uhlenbeck noise and additional periodic signals. We have solved the associated multi-dimensional Fokker-Planck equation numerically by means of the matrix-continued-fraction (MCF) method, put forward by Risken (1984). For our problem the MCF method provided reliable solutions for a wide range of parameters; the main restrictions are that the correlation time cannot be too large and that, in the excitable regime, the noise intensity cannot be too small (as also known from other applications of the method, see Lindner & Sokolov, 2016 for a recent example). To the best of our knowledge this is the first application of this method in computational neuroscience, advancing the results by Naundorf et al. (2005a, b) on the same model.

When the neuron receives no additional periodic signal, i.e. when the model is driven solely by the correlated noise, our method allows a quick and accurate computation of the stationary firing rate. We investigated the rate for a large part of the parameter space, confirmed the MCF results by comparison with stochastic simulations and discussed the agreement with known analytical approximations (Fourcaud-Trocmé et al., 2003; Galán, 2009; Moreno-Bote & Parga, 2010). We found that, in contrast to the white noise case (Lindner et al., 2003), correlated noise can both increase and decrease the stationary firing rate of a type-I neuron and we identified the conditions under which one or the other behavior can be observed.

In the presence of a single additional periodic signal both the probability density function and the firing rate approach a cyclo-stationary solution, which can be found by extending the MCF method to the time-dependent Fokker-Planck equation. For a weak signal, the corresponding rate modulation is given by the linear response function, the well-known susceptibility, which has been addressed before numerically (Naundorf et al., 2005a, b) and analytically in limit cases (Fourcaud-Trocmé et al., 2003). Here we went beyond the linear response and also computed the higher-order response to a single periodic stimulus. Similar to what was found for a periodically driven leaky integrate-and-fire model with white background noise (Voronenko & Lindner, 2017), we identified driving frequencies at which the higher harmonics can be stronger than the firing rate modulation at the fundamental frequency. For a variety of nonlinear response functions, we observed resonant behavior.

Finally, we generalized the numerical approach to the case of two periodic signals and studied the nonlinear response up to second order. We found that for certain frequency combinations the mixed response to the two signals can lead to a drastically different rate modulation than predicted by pure linear response theory; this is similar to what was observed in a leaky integrate-and-fire neuron with white background noise (Voronenko & Lindner, 2017).

Our method could be extended to neuron models that include more complicated correlated noise, for instance, a harmonic noise (Schimansky-Geier & Zülicke, 1990) that can mimic special sources of intrinsic fluctuations (Engel et al., 2009). Another problem that could be addressed by this method is the computation of the spike-train power spectrum in the stationary state. Furthermore the linear and nonlinear response to the modulation of other parameters, e.g. the noise intensity (Boucsein et al., 2009; Lindner & Schimansky-Geier, 2001; Silberberg et al., 2004; Tchumatchenko et al., 2011), could be of interest and be computed with the methods outlined in this paper.

A. Stationary case - derivation of the tridiagonal recurrence relation

Here we demonstrate how the problem of solving the FPE (22) for the stationary PDF P_0(θ,η) can be translated into an equivalent problem of solving a tridiagonal recurrence relation for the expansion coefficients c_{n,p}. These coefficients can then be found by means of the matrix-continued-fraction method as demonstrated in Sect. 3.1.

First, we recall the stationary FPE

0 = L̂_0 P_0(θ,η) = (L̂_θ + L̂_η) P_0(θ,η),  72
L̂_θ = −∂_θ f_0(θ,η),  73
L̂_η = (1/τ) ∂_η (η + σ² ∂_η),  74

and the expansion of the PDF in two sets of orthonormal eigenfunctions, e^{inθ}/√(2π) and ϕ_p(η):

P_0(θ,η) = (ϕ_0(η)/√(2π)) Σ_{p=0}^{∞} Σ_{n=−∞}^{∞} c_{n,p} e^{inθ} ϕ_p(η).  75

The Fourier modes and Hermite functions satisfy the periodic boundary condition in θ and natural boundary conditions in η, respectively. Using the orthonormality

(1/2π) ∫_{−π}^{π} dθ e^{inθ} e^{−imθ} = δ_{n,m},   ∫_{−∞}^{∞} dη ϕ_p ϕ_q = δ_{p,q}  76

of these functions one can show that the normalization condition of the PDF determines c0,0:

c_{0,0} = ∫_{−π}^{π} dθ ∫_{−∞}^{∞} dη P(θ,η,t) = 1.  77

Before addressing the full problem of finding the recursive relation for the coefficients cn,p, we first calculate L^ηP0(θ,η) using the expansion (75):

L̂_η P_0 = (1/√(2π)) Σ_{n,p} c_{n,p} e^{inθ} L̂_η [ϕ_0(η) ϕ_p(η)].  78

Remember that the Hermite functions can be expressed in terms of the Hermite polynomials H_p(x) as follows:

ϕ_p(η) = H_p(η/α) / √(2^p p! √π α) · e^{−η²/(2α²)},  79

Here α is an arbitrary scaling factor. Making use of the two properties

η ϕ_p = (α/√2) (√p ϕ_{p−1} + √(p+1) ϕ_{p+1}),  80
∂_η(ϕ_0 ϕ_p) = −(√2/α) √(p+1) ϕ_0 ϕ_{p+1}  81

of the Hermite functions, we can derive a handy expression for L̂_η ϕ_0(η) ϕ_p(η) by choosing α = √2 σ:

L̂_η ϕ_0 ϕ_p = (1/τ) ∂_η (η + σ² ∂_η) ϕ_0 ϕ_p = (1/τ) ∂_η (α/√2) [√p ϕ_0 ϕ_{p−1} + (1 − 2σ²/α²) √(p+1) ϕ_0 ϕ_{p+1}] = −(p/τ) ϕ_0 ϕ_p.  82

Hence, ϕ_0 ϕ_p is an eigenfunction of the operator L̂_η. Combining Eq. (82), the expansion (75) and the FPE (72) yields
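The recurrence Eq. (80) can be checked directly against NumPy's physicists' Hermite polynomials, which match the convention of Eq. (79). A quick numerical sanity check (the helpers below are our own illustration):

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def phi(p, eta, alpha):
    """Hermite function of Eq. (79)."""
    c = np.zeros(p + 1)
    c[p] = 1.0
    norm = sqrt(2.0**p * factorial(p) * sqrt(pi) * alpha)
    return hermval(eta / alpha, c) * np.exp(-eta**2 / (2 * alpha**2)) / norm

def recurrence_error(p, alpha=1.3):
    """Max deviation in Eq. (80):
    eta*phi_p = (alpha/sqrt(2)) * (sqrt(p)*phi_{p-1} + sqrt(p+1)*phi_{p+1})."""
    eta = np.linspace(-4.0, 4.0, 81)
    lhs = eta * phi(p, eta, alpha)
    rhs = alpha / sqrt(2) * (sqrt(p) * phi(p - 1, eta, alpha)
                             + sqrt(p + 1) * phi(p + 1, eta, alpha))
    return float(np.max(np.abs(lhs - rhs)))
```

The deviation is at machine-precision level for moderate p, independent of the choice of α.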

0 = Σ_{n,p} c_{n,p} ϕ_p { n [ip/(nτ) − 1 − μ − η] e^{inθ} + ½ (1−μ−η)(n+1) e^{i(n+1)θ} + ½ (1−μ−η)(n−1) e^{i(n−1)θ} }.  83

We split the sum into three parts, perform an index shift and use Eq. (80) to obtain

0 = Σ_{n,p} n e^{inθ} { [(ip/(nτ) − 1 − μ) ϕ_p − σ (√(p+1) ϕ_{p+1} + √p ϕ_{p−1})] c_{n,p} + ½ [(1−μ) ϕ_p − σ (√(p+1) ϕ_{p+1} + √p ϕ_{p−1})] c_{n+1,p} + ½ [(1−μ) ϕ_p − σ (√(p+1) ϕ_{p+1} + √p ϕ_{p−1})] c_{n−1,p} }.  84

Furthermore, we introduce the projection operators

Ô_θ = (1/2π) ∫_0^{2π} dθ e^{−inθ},   Ô_η = ∫_{−∞}^{∞} dη ϕ_q(η)  85

so that Ô_θ e^{imθ} = δ_{n,m} and Ô_η ϕ_p = δ_{p,q}. Multiplying Eq. (84) from the left by Ô_η Ô_θ allows us to get rid of the sum over n and to find the following recursive relation:

0 = n Σ_p { [(ip/(nτ) − 1 − μ) δ_{p,q} − σ (√q δ_{p+1,q} + √(q+1) δ_{p−1,q})] c_{n,p} + ½ [(1−μ) δ_{p,q} − σ (√q δ_{p+1,q} + √(q+1) δ_{p−1,q})] c_{n+1,p} + ½ [(1−μ) δ_{p,q} − σ (√q δ_{p+1,q} + √(q+1) δ_{p−1,q})] c_{n−1,p} }  86

The sum can be interpreted as a product of matrices and vectors. We introduce the coefficient vector

c_n = (c_{n,0}, c_{n,1}, …)^T  87

and the symmetric matrices

Â_{p,q} = (iq/τ) δ_{p,q},  88
B̂_{p,q} = ((1−μ)/2) δ_{p,q} − (σ/2) (√q δ_{p+1,q} + √(q+1) δ_{p−1,q}),  89

which allow for an elegant reformulation of Eq. (86) as the tridiagonal recurrence relation

0 = [Â + 2n(B̂ − 𝟙)] c_n + n B̂ (c_{n−1} + c_{n+1}).  90

Note that for n = 0 we can readily infer the remaining elements of c_0 (recall that c_{0,0} = 1):

c_0 = (1, 0, 0, …)^T.  91

Equation (90) can be simplified by multiplication with B̂^{−1}/n from the left to obtain the expression used in Sect. 3.1:

K̂_n c_n = c_{n−1} + c_{n+1}.  92

This is the tridiagonal recurrence relation which we have solved in the main part by the MCF method (illustrated in Fig. 15).
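The full recursion condenses into a short NumPy sketch. The truncations and parameter values below are ours for illustration; the repository code is the reference implementation. Only n ≥ 0 is iterated, using c_{−n} = c_n* for the real PDF, and the firing rate then follows from Eq. (95) with ℓ = k = 0.

```python
import numpy as np

def stationary_rate_mcf(mu, sigma, tau, nmax=60, pmax=60):
    """Sketch of the stationary MCF solve, Eqs. (88)-(92)."""
    q = np.arange(pmax)
    A = np.diag(1j * q / tau)                                   # Eq. (88)
    B = np.diag(np.full(pmax, (1 - mu) / 2)).astype(complex)    # Eq. (89)
    off = -sigma / 2 * np.sqrt(np.arange(1, pmax))
    B += np.diag(off, 1) + np.diag(off, -1)
    Binv = np.linalg.inv(B)
    I = np.eye(pmax)
    K = lambda n: 2 * (Binv - I) - Binv @ A / n                 # cf. Eq. (99) with k = 0

    # backward continued-fraction sweep for the transition matrices S_n
    S = [None] * (nmax + 1)
    S[nmax] = np.zeros((pmax, pmax), dtype=complex)             # truncation
    for n in range(nmax, 0, -1):
        S[n - 1] = np.linalg.inv(K(n) - S[n])

    # forward sweep: c_0 = (1, 0, ...)^T (Eq. (91)), c_{n+1} = S_n c_n
    c = [np.zeros(pmax, dtype=complex) for _ in range(nmax + 1)]
    c[0][0] = 1.0
    for n in range(nmax):
        c[n + 1] = S[n] @ c[n]

    # firing rate, Eq. (95) with l = k = 0, using c_{-n,0} = conj(c_{n,0})
    r0 = np.sqrt(2 / np.pi) * (1 + 2 * sum((-1.0)**n * c[n][0].real
                                           for n in range(1, nmax + 1)))
    return r0, c, K
```

By construction the computed coefficients satisfy the recurrence Eq. (92) to machine precision, which serves as a useful consistency check of any implementation.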

Fig. 15.

Illustration of the MCF method. Expanding the probability density using eigenfunctions according to Eq. (75) and insertion into the FPE (72) leads to a relation for the expansion coefficients cn,p where only nearest neighbors interact (including diagonals). The coefficients can then be computed by truncating the system and introducing transition matrices S^n that can be obtained from a matrix continued fraction, see Sect. 3.1

B. Cyclo-stationary case - MCF method

In this section we extend the MCF method to the case of an additional periodic signal s(t) = ε cos(ωt) and calculate the cyclo-stationary firing rate r(t). In Sect. 4 we have already shown that the time-dependent FPE for this problem, Eq. (41), can be transformed into a set of recursively coupled time-independent differential equations (cf. Eq. (44)):

(L̂_0 + ikω) P_{ℓ,k} = (L̂_per/2) (P_{ℓ−1,k−1} + P_{ℓ−1,k+1}),  93

with P_{ℓ,k} = 0 for ℓ < 0. This hierarchy of coupled differential equations can be solved iteratively starting from the stationary PDF P_{0,0}. The dependence is illustrated in Fig. 16, where many terms P_{ℓ,k} vanish (gray circles) due to the normalization condition of the PDF as explained in Sect. 4. In order to solve the corresponding differential equation for each P_{ℓ,k}(θ,η) we choose the same ansatz as in the previous section:

P_{ℓ,k}(θ,η) = (ϕ_0(η)/√(2π)) Σ_{p=0}^{∞} Σ_{n=−∞}^{∞} c^{(ℓ,k)}_{n,p} e^{inθ} ϕ_p(η).  94

By substituting this ansatz into Eq. (47), a relation between the expansion coefficients c^{(ℓ,k)}_{n,p} and the response functions r_{ℓ,k} (which determine the full firing rate r(t) via Eq. (46)) can be derived:

r_{ℓ,k} = (2 − δ_{k,0}) √(2/π) Σ_{n=−∞}^{∞} (−1)^n c^{(ℓ,k)}_{n,0}.  95

To find c^{(ℓ,k)}_{n,p}, we again transform the differential equations (93) into coefficient equations by means of the expansion (94) (see the derivation of the tridiagonal recurrence relation in the previous section) and obtain

0 = [Â + kω𝟙 + 2n(B̂ − 𝟙)] c_n + n B̂ (c_{n−1} + c_{n+1}) − (n/4) (2c̄_n + c̄_{n−1} + c̄_{n+1}).  96

For notational convenience we introduced the coefficient vectors

c_n^{(ℓ,k)} = (c^{(ℓ,k)}_{n,0}, c^{(ℓ,k)}_{n,1}, …)^T,  97

and dropped the superscripts, c_n := c_n^{(ℓ,k)}. Further, we denote the sum of the previously computed coefficient vectors by c̄_n := c_n^{(ℓ−1,k+1)} + c_n^{(ℓ−1,k−1)}.

As in the previous section, Eq. (96) is multiplied by B̂^{−1}/n from the left to obtain the more compact expression

K̂_n c_n = c_{n+1} + c_{n−1} + c̃_n  98

with

K̂_n = 2(B̂^{−1} − 𝟙) − B̂^{−1}(Â + kω𝟙)/n,  99
c̃_n = −(B̂^{−1}/4) (2c̄_n + c̄_{n+1} + c̄_{n−1}).  100

In order to solve the two-dimensional coefficient equation (98) we must assume that all Hermite functions and Fourier modes become negligible for large p or |n|, so that the corresponding coefficients vanish: c_{n,p} := c^{(ℓ,k)}_{n,p} = 0 for p > p_max or |n| > n_max. Specifically, we have checked how key statistics, for instance the firing rate, depend on p and n and observed saturation for sufficiently large p and n; we then take these as maximal values.

The normalization condition of the PDF determines the coefficient c_{0,0} := c^{(ℓ,k)}_{0,0}:

∫_{−π}^{π} dθ ∫_{−∞}^{∞} dη P_{ℓ,k}(θ,η) = c_{0,0} = δ_{k,0} δ_{ℓ,0}.  101

The remaining elements of c_0 vanish. This can be seen from Eq. (96), which simplifies considerably for n = 0:

(Â + kω𝟙) c_0 = 0.  102

The involved matrix A_k = Â + kω𝟙 is diagonal with non-vanishing elements (A_k)_{p,p} ≠ 0 for p ≠ 0. This implies that Eq. (102) can only be fulfilled if c_{0,p≠0} = 0. The resulting coefficient vector

c_0 = (1, 0, …, 0)^T if k = ℓ = 0, and c_0 = (0, 0, …, 0)^T else,  103

serves as the initial condition in the following. All other coefficient vectors can be derived iteratively:

c_{n+1} = Ŝ_n c_n + d_n,   for n = 0, …, n_max − 1,  104
c_{n−1} = Ŝ^R_n c_n + d^R_n,   for n = 0, …, −n_max + 1.  105

Here we have introduced the transition matrices Ŝ_n and Ŝ^R_n as in the case without a periodic signal, and the additional vectors d_n and d^R_n, which take the inhomogeneity c̃_n in Eq. (98) into account (Risken, 1984). Substituting the ansatz Eq. (104) into Eq. (98) yields

[(K̂_n − Ŝ_n) Ŝ_{n−1} − 𝟙] c_{n−1} = [d_n − (K̂_n − Ŝ_n) d_{n−1} + c̃_n].  106

This equation is satisfied when both expressions in square brackets vanish, which allows us to derive two recursive relations. First, from the left-hand side,

Ŝ_{n−1} = [K̂_n − Ŝ_n]^{−1},  107

and second, from the right-hand side,

d_{n−1} = [K̂_n − Ŝ_n]^{−1} (d_n + c̃_n) = Ŝ_{n−1} (d_n + c̃_n).  108

Analogous expressions can be derived for Ŝ^R_n and d^R_n. Because all coefficient vectors c_n are assumed to vanish for |n| > n_max, it follows that

Ŝ_{n_max} = 0,   Ŝ^R_{−n_max} = 0,  109
d_{n_max} = 0,   d^R_{−n_max} = 0.  110

This defines the initial condition that is needed to determine the remaining transition matrices and vectors for 0 < n < n_max:

Ŝ_{n−1} = [K̂_n − Ŝ_n]^{−1},  111
d_{n−1} = Ŝ_{n−1} (d_n + c̃_n),  112

and for −n_max < n < 0

Ŝ^R_{n+1} = [K̂_n − Ŝ^R_n]^{−1},  113
d^R_{n+1} = Ŝ^R_{n+1} (d^R_n + c̃_n).  114

To summarize, the rate response functions r_{ℓ,k}, and hence the full firing rate, can be calculated following the iterative scheme illustrated in Fig. 16. The starting point is the zeroth-order term in the signal amplitude (ℓ = 0), for which c̄_n = 0 is known. For each iteration step, i.e. ℓ → ℓ + 1, the following series of steps is executed.

  1. Compute the sum c̄_n := c_n^{(ℓ−1,k+1)} + c_n^{(ℓ−1,k−1)} of the previously computed coefficient vectors and the matrices involved in the computation of K̂_n according to Eq. (99).

  2. Compute all transition matrices Ŝ_n and vectors d_n (and Ŝ^R_n, d^R_n) iteratively, starting at n = n_max (and n = −n_max), using Eqs. (111)–(114).

  3. Find all coefficient vectors c_n := c_n^{(ℓ,k)} iteratively according to Eqs. (104) and (105), using the initial condition Eq. (103) for n = 0 and the transition matrices and vectors from the previous step.

  4. Substitute the coefficients c^{(ℓ,k)}_{n,p} into Eq. (95) to determine the response functions r_{ℓ,k}.
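For one pair (ℓ, k), steps 2 and 3 condense into a backward sweep for (Ŝ_n, d_n) followed by a forward sweep for the c_n. A one-sided sketch of our own (n ≥ 0 only; the Ŝ^R, d^R branch is analogous), taking the matrices K̂_n and the inhomogeneities c̃_n as given:

```python
import numpy as np

def mcf_inhomogeneous(K, ctil, c0, nmax):
    """Solve K(n) c_n = c_{n+1} + c_{n-1} + ctil[n] for 1 <= n < nmax,
    cf. Eqs. (98) and (104)-(112), with coefficients beyond nmax truncated."""
    dim = len(c0)
    S = np.zeros((dim, dim), dtype=complex)      # S_{nmax} = 0, Eq. (109)
    d = np.zeros(dim, dtype=complex)             # d_{nmax} = 0, Eq. (110)
    Ss, ds = [None] * nmax, [None] * nmax
    for n in range(nmax, 0, -1):                 # Eqs. (111)-(112)
        S = np.linalg.inv(K(n) - S)
        d = S @ (d + ctil[n])
        Ss[n - 1], ds[n - 1] = S, d
    c = [np.asarray(c0, dtype=complex)]
    for n in range(nmax):                        # Eq. (104)
        c.append(Ss[n] @ c[n] + ds[n])
    return c
```

As in the stationary case, the returned coefficients satisfy the inhomogeneous recurrence exactly (up to round-off), for any inhomogeneity c̃_n.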

C. MCF method: response to two periodic signals

In Sect. 4.3 we were interested in the nonlinear response r(t) of the noisy theta neuron subject to two periodic signals. To this end, we need to compute the response functions r^{ℓ_1,ℓ_2}_{k_1,k_2}, which are related to the expansion functions P^{ℓ_1,ℓ_2}_{k_1,k_2} by Eq. (69). The latter in turn obey the following differential equation:

[L̂_0 + i(k_1ω_1 + k_2ω_2)] P^{ℓ_1,ℓ_2}_{k_1,k_2} = (L̂_per/2) (P^{ℓ_1−1,ℓ_2}_{k_1+1,k_2} + P^{ℓ_1−1,ℓ_2}_{k_1−1,k_2} + P^{ℓ_1,ℓ_2−1}_{k_1,k_2+1} + P^{ℓ_1,ℓ_2−1}_{k_1,k_2−1})  115

This system of coupled differential equations can be solved iteratively as described in Sect. 4.3. We wish to find the function P^{ℓ_1,ℓ_2}_{k_1,k_2} given that the functions on the right-hand side of Eq. (115) have already been computed. For each P^{ℓ_1,ℓ_2}_{k_1,k_2} the differential equation has in principle the same structure as Eq. (93) from the previous section. Hence, it can be solved using the same numerical scheme based on the MCF method, using Eqs. (94)–(114), except for a change in notation that reflects the expansion with respect to two signals:

P_{ℓ,k} → P^{ℓ_1,ℓ_2}_{k_1,k_2},   r_{ℓ,k} → r^{ℓ_1,ℓ_2}_{k_1,k_2},   c_n^{(ℓ,k)} → c_n^{(ℓ_1,ℓ_2,k_1,k_2)}.

Two further differences are:

  1. Replace kω by k_1ω_1 + k_2ω_2, which affects the computation of K̂_n according to Eq. (99).

  2. The function P^{ℓ_1,ℓ_2}_{k_1,k_2} is determined by four previously computed expansion functions, P^{ℓ_1−1,ℓ_2}_{k_1+1,k_2}, P^{ℓ_1−1,ℓ_2}_{k_1−1,k_2}, P^{ℓ_1,ℓ_2−1}_{k_1,k_2+1} and P^{ℓ_1,ℓ_2−1}_{k_1,k_2−1}. This affects the computation of c̄_n as follows:
    c̄_n = c_n^{(ℓ_1−1,ℓ_2,k_1+1,k_2)} + c_n^{(ℓ_1−1,ℓ_2,k_1−1,k_2)} + c_n^{(ℓ_1,ℓ_2−1,k_1,k_2+1)} + c_n^{(ℓ_1,ℓ_2−1,k_1,k_2−1)}  116

Note that the known initial coefficient vector in Eq. (103) is still c_0 = (1, 0, …, 0)^T when computing the stationary firing rate r^{0,0}_{0,0}, and c_0 = (0, 0, …, 0)^T otherwise.

The hierarchy of coupled differential equations does, however, differ from the previous section; it is provided up to second order in the signal amplitudes in Sect. 4.3.

Funding

Open Access funding enabled and organized by Projekt DEAL. This work was supported by Deutsche Forschungsgemeinschaft: LI-1046/4-1 and LI-1046/6-1.

Declarations

Competing interests

The authors have no competing interests to declare that are relevant to the content of this article.

Conflict of interest

The authors declare no conflict of interest.

Footnotes

1

One way to derive the QIF model is to consider the limit of a large slope factor Δ_v in the exponential integrate-and-fire model C v̇ = I_0 − g_L v + g_L Δ_v exp((v − v_t)/Δ_v), which itself results from a simplification of a conductance-based model (Fourcaud-Trocmé et al., 2003). By choosing the new variable x = (v − v_t)/(2Δ_v) and expanding the exponential function up to second order for v − v_t ≪ Δ_v, one finds 2τ_m ẋ = μ + x², where τ_m = C/g_L is the membrane time constant. For simplicity we neglected the prefactor 2 in Eq. (1).

2

How fast the MCF method converges with the number of Hermite functions and Fourier modes depends on the system parameters, as demonstrated in the repository. More precisely, for a fixed p_max or n_max we observed that the MCF method fails for large correlation times and, additionally, in the excitable regime for small noise intensities. However, for these particular limit cases analytical approximations already exist (see Sects. 3.2 and 3.3). Choosing p_max = n_max ≈ 150, we can even capture these limit cases sufficiently well (see Fig. 3).

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Jannik Franzen, Email: franzen@physik.hu-berlin.de.

Lukas Ramlow, Email: lukas.ramlow@bccn-berlin.de.

Benjamin Lindner, Email: benjamin.lindner@physik.hu-berlin.de.

References

  1. Abbott L, van Vreeswijk C. Asynchronous states in networks of pulse-coupled oscillators. Physical Review E. 1993;48:1483. doi: 10.1103/PhysRevE.48.1483. [DOI] [PubMed] [Google Scholar]
  2. Alijani AK, Richardson MJE. Rate response of neurons subject to fast or frozen noise: From stochastic and homogeneous to deterministic and heterogeneous populations. Physical Review E. 2011;84:011919. doi: 10.1103/PhysRevE.84.011919. [DOI] [PubMed] [Google Scholar]
  3. Bair W, Koch C, Newsome W, Britten K. Power spectrum analysis of bursting cells in area MT in the behaving monkey. The Journal of Neuroscience. 1994;14:2870. doi: 10.1523/JNEUROSCI.14-05-02870.1994. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Bartussek R. Ratchets driven by colored Gaussian noise. In: Schimansky-Geier L, Pöschel T, editors. Stochastic Dynamics, page 69. Berlin, London, New York: Springer; 1997. [Google Scholar]
  5. Bauermeister C, Schwalger T, Russell D, Neiman AB, Lindner B. Characteristic effects of stochastic oscillatory forcing on neural firing: Analytical theory and comparison to paddlefish electroreceptor data. PLoS Computational Biology. 2013;9:e1003170. doi: 10.1371/journal.pcbi.1003170. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Boucsein C, Tetzlaff T, Meier R, Aertsen A, Naundorf B. Dynamical response properties of neocortical neuron ensembles: Multiplicative versus additive noise. The Journal of Neuroscience. 2009;29:1006. doi: 10.1523/JNEUROSCI.3424-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience. 2000;8:183. doi: 10.1023/A:1008925309027. [DOI] [PubMed] [Google Scholar]
  8. Brunel N, Latham PE. Firing rate of the noisy quadratic integrate-and-fire neuron. Neural Computation. 2003;15:2281. doi: 10.1162/089976603322362365. [DOI] [PubMed] [Google Scholar]
  9. Brunel N, Sergi S. Firing frequency of leaky integrate-and-fire neurons with synaptic current dynamics. Journal of Theoretical Biology. 1998;195:87. doi: 10.1006/jtbi.1998.0782. [DOI] [PubMed] [Google Scholar]
  10. Brunel N, Chance FS, Fourcaud N, Abbott LF. Effects of synaptic noise and filtering on the frequency response of spiking neurons. Physical Review Letters. 2001;86:2186. doi: 10.1103/PhysRevLett.86.2186. [DOI] [PubMed] [Google Scholar]
  11. Burkitt AN. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological Cybernetics. 2006;95:1. [DOI] [PubMed]
  12. Câteau H, Reyes AD. Relation between single neuron and population spiking statistics and effects on network activity. Physical Review Letters. 2006;96:058101. doi: 10.1103/PhysRevLett.96.058101. [DOI] [PubMed] [Google Scholar]
  13. Deniz T, Rotter S. Solving the two-dimensional Fokker-Planck equation for strongly correlated neurons. Physical Review E. 2017;95:012412. doi: 10.1103/PhysRevE.95.012412. [DOI] [PubMed] [Google Scholar]
  14. Doiron B, Rinzel J, Reyes A. Stochastic synchronization in finite size spiking networks. Physical Review E. 2006;74:030903. doi: 10.1103/PhysRevE.74.030903. [DOI] [PubMed] [Google Scholar]
  15. Droste F, Lindner B. Integrate-and-fire neurons driven by asymmetric dichotomous noise. Biological Cybernetics. 2014;108:825. doi: 10.1007/s00422-014-0621-7. [DOI] [PubMed] [Google Scholar]
  16. Droste F, Lindner B. Exact results for power spectrum and susceptibility of a leaky integrate-and-fire neuron with two-state noise. Physical Review E. 2017;95:012411. doi: 10.1103/PhysRevE.95.012411. [DOI] [PubMed] [Google Scholar]
  17. Dygas MM, Matkowsky BJ, Schuss Z. A singular perturbation approach to non-markovian escape rate problems. SIAM Journal on Applied Mathematics. 1986;46:265. doi: 10.1137/0146019. [DOI] [Google Scholar]
  18. Engel TA, Helbig B, Russell DF, Schimansky-Geier L, Neiman AB. Coherent stochastic oscillations enhance signal detection in spiking neurons. Physical Review E. 2009;80:021919. doi: 10.1103/PhysRevE.80.021919. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Ermentrout B. Type I membranes, phase resetting curves, and synchrony. Neural Computation. 1996;8:979. doi: 10.1162/neco.1996.8.5.979. [DOI] [PubMed] [Google Scholar]
  20. Fisch K, Schwalger T, Lindner B, Herz A, Benda J. Channel noise from both slow adaptation currents and fast currents is required to explain spike-response variability in a sensory neuron. The Journal of Neuroscience. 2012;32:17332. doi: 10.1523/JNEUROSCI.6231-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Fourcaud N, Brunel N. Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Computation. 2002;14:2057. doi: 10.1162/089976602320264015. [DOI] [PubMed] [Google Scholar]
  22. Fourcaud-Trocmé N, Hansel D, van Vreeswijk C, Brunel N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. The Journal of Neuroscience. 2003;23:11628. doi: 10.1523/JNEUROSCI.23-37-11628.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Gabbiani F, Cox SJ. Mathematics for neuroscientists. Academic Press; 2017.
  24. Galán RF. Analytical calculation of the frequency shift in phase oscillators driven by colored noise: Implications for electrical engineering and neuroscience. Physical Review E. 2009;80(3):036113. doi: 10.1103/PhysRevE.80.036113. [DOI] [PubMed] [Google Scholar]
  25. Guardia E, Marchesoni F, San Miguel M. Escape times in systems with memory effects. Physics Letters A. 1984;100:15.
  26. Hänggi P, Jung P. Colored noise in dynamical-systems. Advances in Chemical Physics. 1995;89:239. [Google Scholar]
  27. Holden AV. Models of the stochastic activity of neurones. Berlin: Springer-Verlag; 1976. [Google Scholar]
  28. Izhikevich EM. Dynamical systems in neuroscience: the geometry of excitability and bursting. Cambridge, London: The MIT Press; 2007. [Google Scholar]
  29. Knight BW. Dynamics of encoding in neuron populations: Some general mathematical features. Neural Computation. 2000;12:473. doi: 10.1162/089976600300015673. [DOI] [PubMed] [Google Scholar]
  30. Koch C. Biophysics of computation - information processing in single neurons. New York, Oxford: Oxford University Press; 1999. [Google Scholar]
  31. Langer JS. Statistical theory of the decay of metastable states. Annals of Physics. 1969;54:258. doi: 10.1016/0003-4916(69)90153-5. [DOI] [Google Scholar]
  32. Lindner B. Interspike interval statistics of neurons driven by colored noise. Physical Review E. 2004;69:022901. doi: 10.1103/PhysRevE.69.022901. [DOI] [PubMed] [Google Scholar]
  33. Lindner B, Longtin A. Comment on characterization of subthreshold voltage fluctuations in neuronal membranes by M. Rudolph and A. Destexhe. Neural Computation. 2006;18:1896. [DOI] [PubMed]
  34. Lindner B, Schimansky-Geier L. Transmission of noise coded versus additive signals through a neuronal ensemble. Physical Review Letters. 2001;86:2934. doi: 10.1103/PhysRevLett.86.2934. [DOI] [PubMed] [Google Scholar]
  35. Lindner B, Sokolov IM. Giant diffusion of underdamped particles in a biased periodic potential. Physical Review E. 2016;93:042106. doi: 10.1103/PhysRevE.93.042106. [DOI] [PubMed] [Google Scholar]
  36. Lindner B, Longtin A, Bulsara A. Analytic expressions for rate and CV of a type I neuron driven by white Gaussian noise. Neural Computation. 2003;15:1761. doi: 10.1162/08997660360675035. [DOI] [PubMed] [Google Scholar]
  37. Ly C, Ermentrout B. Synchronization dynamics of two coupled neural oscillators receiving shared and unshared noisy stimuli. Journal of Computational Neuroscience. 2009;26(3):425–443. doi: 10.1007/s10827-008-0120-8. [DOI] [PubMed] [Google Scholar]
  38. Moreno R, de la Rocha J, Renart A, Parga N. Response of spiking neurons to correlated inputs. Physical Review Letters. 2002;89:288101. doi: 10.1103/PhysRevLett.89.288101. [DOI] [PubMed] [Google Scholar]
  39. Moreno-Bote R, Parga N. Role of synaptic filtering on the firing response of simple model neurons. Physical Review Letters. 2004;92:028102. doi: 10.1103/PhysRevLett.92.028102. [DOI] [PubMed] [Google Scholar]
  40. Moreno-Bote R, Parga N. Auto- and crosscorrelograms for the spike response of leaky integrate-and-fire neurons with slow synapses. Physical Review Letters. 2006;96:028101. doi: 10.1103/PhysRevLett.96.028101. [DOI] [PubMed] [Google Scholar]
  41. Moreno-Bote R, Parga N. Response of integrate-and-fire neurons to noisy inputs filtered by synapses with arbitrary timescales: Firing rate and correlations. Neural Computation. 2010;22:1528. doi: 10.1162/neco.2010.06-09-1036. [DOI] [PubMed] [Google Scholar]
  42. Mori H. A continued-fraction representation of time-correlation functions. Progress in Theoretical Physics. 1965;34:399. doi: 10.1143/PTP.34.399. [DOI] [Google Scholar]
  43. Müller-Hansen F, Droste F, Lindner B. Statistics of a neuron model driven by asymmetric colored noise. Physical Review E. 2015;91:022718. doi: 10.1103/PhysRevE.91.022718. [DOI] [PubMed] [Google Scholar]
  44. Naundorf B, Geisel T, Wolf F. Action potential onset dynamics and the response speed of neuronal populations. Journal of Computational Neuroscience. 2005;18:297. doi: 10.1007/s10827-005-0329-8. [DOI] [PubMed] [Google Scholar]
  45. Naundorf B, Geisel T, Wolf F. Dynamical response properties of a canonical model for type-I membranes. Neurocomputing. 2005;65:421. doi: 10.1016/j.neucom.2004.10.040. [DOI] [Google Scholar]
  46. Neiman A, Russell DF. Stochastic biperiodic oscillations in the electroreceptors of paddlefish. Physical Review Letters. 2001;86:3443. doi: 10.1103/PhysRevLett.86.3443. [DOI] [PubMed] [Google Scholar]
  47. Novikov N, Gutkin B. Role of synaptic nonlinearity in persistent firing rate shifts caused by external periodic forcing. Physical Review E. 2020;101(5):052408. doi: 10.1103/PhysRevE.101.052408. [DOI] [PubMed] [Google Scholar]
  48. Ostojic S, Brunel N. From spiking neuron models to linear-nonlinear models. PLoS Computation Biology. 2011;7:e1001056. doi: 10.1371/journal.pcbi.1001056. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Pena RF, Vellmer S, Bernardi D, Roque AC, Lindner B. Self-consistent scheme for spike-train power spectra in heterogeneous sparse networks. Frontiers in Computational Neuroscience. 2018;12:9. [DOI] [PMC free article] [PubMed]
  50. Ricciardi LM. Diffusion processes and related topics on biology. Berlin: Springer-Verlag; 1977. [Google Scholar]
  51. Richardson MJE. Effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons. Physical Review E. 2004;69:051918. doi: 10.1103/PhysRevE.69.051918. [DOI] [PubMed] [Google Scholar]
  52. Risken H. The Fokker-Planck Equation. Berlin: Springer; 1984. [Google Scholar]
  53. Rudolph M, Destexhe A. An extended analytical expression for the membrane potential distribution of conductance-based synaptic noise (Note on characterization of subthreshold voltage fluctuations in neuronal membranes) Neural Computation. 2005;18:2917. doi: 10.1162/neco.2006.18.12.2917. [DOI] [Google Scholar]
  54. Schimansky-Geier L, Zülicke C. Harmonic noise: Effect on bistable systems. Zeitschrift für Physik B Condensed Matter. 1990;79:451. doi: 10.1007/BF01437657. [DOI] [Google Scholar]
  55. Schuecker J, Diesmann M, Helias M. Modulated escape from a metastable state driven by colored noise. Physical Review E. 2015;92:052119. doi: 10.1103/PhysRevE.92.052119. [DOI] [PubMed] [Google Scholar]
  56. Schwalger T, Schimansky-Geier L. Interspike interval statistics of a leaky integrate-and-fire neuron driven by Gaussian noise with large correlation times. Physical Review E. 2008;77:031914. doi: 10.1103/PhysRevE.77.031914. [DOI] [PubMed] [Google Scholar]
  57. Schwalger T, Fisch K, Benda J, Lindner B. How noisy adaptation of neurons shapes interspike interval histograms and correlations. PLoS Computational Biology. 2010;6:e1001026. doi: 10.1371/journal.pcbi.1001026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Schwalger T, Droste F, Lindner B. Statistical structure of neural spiking under non-poissonian or other non-white stimulation. Journal of Computational Neuroscience. 2015;39:29. doi: 10.1007/s10827-015-0560-x. [DOI] [PubMed] [Google Scholar]
  59. Siegle P, Goychuk I, Talkner P, Hänggi P. Markovian embedding of non-markovian superdiffusion. Physical Review E. 2010;81:011136. doi: 10.1103/PhysRevE.81.011136. [DOI] [PubMed] [Google Scholar]
  60. Silberberg G, Bethge M, Markram H, Pawelzik K, Tsodyks M. Dynamics of population rate codes in ensembles of neocortical neurons. Journal of Neurophysiology. 2004;91:704. doi: 10.1152/jn.00415.2003. [DOI] [PubMed] [Google Scholar]
  61. Tchumatchenko T, Malyshev A, Wolf F, Volgushev M. Ultrafast population encoding by cortical neurons. The Journal of Neuroscience. 2011;31:12171. doi: 10.1523/JNEUROSCI.2182-11.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Tuckwell HC. Stochastic processes in the neuroscience. Philadelphia, Pennsylvania: SIAM; 1989. [Google Scholar]
  63. Vellmer S, Lindner B. Theory of spike-train power spectra for multidimensional integrate-and-fire neurons. Physical Review Research. 2019;1(2):023024. doi: 10.1103/PhysRevResearch.1.023024. [DOI] [Google Scholar]
  64. Voronenko S, Lindner B. Nonlinear response of noisy neurons. New Journal of Physics. 2017;19:033038. doi: 10.1088/1367-2630/aa5b81. [DOI] [Google Scholar]
  65. Voronenko S, Lindner B. Improved lower bound for the mutual information between signal and neural spike count. Biological Cybernetics. 2018;112:523. doi: 10.1007/s00422-018-0779-5. [DOI] [PubMed] [Google Scholar]
  66. Zhou P, Burton SD, Urban N, Ermentrout GB. Impact of neuronal heterogeneity on correlated colored-noise-induced synchronization. Frontiers in Computational Neuroscience. 2013;7:113. doi: 10.3389/fncom.2013.00113. [DOI] [PMC free article] [PubMed] [Google Scholar]

Articles from Journal of Computational Neuroscience are provided here courtesy of Springer
