Cognitive Neurodynamics. 2020 Sep 18;15(3):517–532. doi: 10.1007/s11571-020-09632-3

Aperiodic stochastic resonance in neural information processing with Gaussian colored noise

Yanmei Kang¹, Ruonan Liu¹, Xuerong Mao²

Abstract

The aim of this paper is to explore the phenomenon of aperiodic stochastic resonance in neural systems with colored noise. For nonlinear dynamical systems driven by Gaussian colored noise, we prove that the stochastic sample trajectory converges in mean square to the corresponding deterministic trajectory as the noise intensity tends to zero, under global and local Lipschitz conditions, respectively. Then, following the forbidden interval theorem, we predict the phenomenon of aperiodic stochastic resonance in bistable and excitable neural systems. Two neuron models are further used to verify the theoretical prediction. Moreover, we disclose the phenomenon of aperiodic stochastic resonance induced by the correlation time, a finding which suggests that adjusting the noise correlation might be a biologically more plausible mechanism in neural signal processing.

Keywords: Ornstein–Uhlenbeck process, Local Lipschitz condition, Aperiodic stochastic resonance, Mutual information

Introduction

Dynamical models from the cellular level to the network and cortical levels play an essential role in cognitive neuroscience (Levin and Miller 1996; Wang et al. 2014; Déli et al. 2017; Mizraji and Lin 2017; Song et al. 2019). Due to the random release of neurotransmitters, the stochastic bombardment of synaptic inputs and the random opening and closing of ion channels, noise is ubiquitous in neural systems. Various noise-induced non-equilibrium phenomena disclosed in experiments or dynamical models, such as stochastic synchronization (Kim and Lim 2018), noise-induced phase transition (Lee et al. 2014) and stochastic integer multiple discharge (Gu and Pan 2015), are helpful in explaining the biophysical mechanisms underlying neural information processing and coding.

Stochastic resonance, initially proposed in exploring the periodicity of the continental ice volume in the Quaternary era (Benzi et al. 1981), is a counterintuitive phenomenon (Gammaitoni et al. 1998; Nakamura and Tateno 2019; Xu et al. 2020; Zhao et al. 2020) in which a weak coherent signal can be amplified by noise through certain nonlinearities. In general, a suitable weak external signal is a prerequisite for stochastic resonance. When the external weak signal is absent or replaced by an intrinsic periodicity, the effect is referred to as coherence resonance (Guan et al. 2020), which often appears in systems close to a Hopf bifurcation. When the external weak signal is aperiodic, the effect is called aperiodic stochastic resonance (Collins et al. 1995, 1996a, b; Tiwari et al. 2016).

Owing to the aperiodicity of the weak signal, the spectral amplification factor or the output signal-to-noise ratio, typical for periodic signals (Liu and Kang 2018; Yan et al. 2013), is no longer a suitable quantifying index. In fact, for aperiodic stochastic resonance, shape matching rather than frequency matching should be emphasized; thus the cross-correlation measure (Collins et al. 1995, 1996a, b) and the input–output mutual information (Patel and Kosko 2005, 2008) are the commonly used indexes. Although the quantification is seemingly complex, the principle of aperiodic stochastic resonance has found much significance in neural processing and coding, since the spike trains of action potentials observed in hearing enhancement (Zeng et al. 2000) and visual perception experiments (Dylov and Fleischer 2010; Liu and Li 2015; Yang 1998) tend to be nonharmonic. Very recently, the principle of aperiodic stochastic resonance has been effectively applied to design visual perception algorithms using spiking networks (Fu et al. 2020).

Noise correlation is common in cortical firing activities. Nevertheless, most of the literature has taken Gaussian white noise for granted; only a few studies (Averbeck et al. 2006; Guo 2011; Sakai et al. 1999) have paid attention to the "color" of noise, and far from sufficiently. Therefore, in this paper we investigate the effect of (Ornstein–Uhlenbeck type) Gaussian colored noise (Floris 2015; Wang and Wu 2016) with nonzero correlation time on aperiodic stochastic resonance. As a starting point, we generalize the existing zeroth order perturbation results (Freidlin et al. 2012) for nonlinear dynamical systems from Gaussian white noise to Gaussian colored noise. We then follow the "forbidden interval" theorem (Kosko et al. 2009) and direct simulation to explore aperiodic stochastic resonance in bistable and excitable neural systems.

The paper is structured as follows. In the "General results" section, we introduce some preliminaries and the main results. In "Proof of general results", we prove the perturbation property under global and local Lipschitz conditions, respectively, and then predict the phenomenon of aperiodic stochastic resonance based on an information-theoretic measure through Theorem 3 in the same section. In the "Numerical verification" section, numerical results based on two types of neuron models are shown to disclose the functional role of the noise correlation time. Finally, conclusions are drawn in the "Conclusion and discussion" section.

General results

Suppose that X(t) satisfies the general d-dimensional stochastic differential equation driven by an m-dimensional Ornstein–Uhlenbeck process

$$\frac{d}{dt}X_t=f(X_t,t)+g(X_t,t)U(t),\qquad X(0)=X_0 \tag{1}$$

where $X_t=(X_1(t),X_2(t),\dots,X_d(t))^\top$ and $f(X_t,t)=(f_1(X_t,t),f_2(X_t,t),\dots,f_d(X_t,t))^\top$ are the state vector and the field vector, the function matrix $g(X_t,t)=(g_{ji}(X_t,t))_{d\times m}$ describes the noise intensity, and $U(t)=(u_1(t),u_2(t),\dots,u_m(t))^\top$ is the m-dimensional Ornstein–Uhlenbeck process. Equation (1) is essentially shorthand for the equations

$$\frac{d}{dt}X_t^i=f_i(X_t,t)+\sum_{j=1}^m g_{ji}(X_t,t)u_j(t) \tag{1a}$$
$$du_j(t)=-\frac{1}{\tau}u_j(t)\,dt+\sigma\,dW_j(t) \tag{1b}$$

for $i=1,2,\dots,d$ and $j=1,2,\dots,m$. Here, each scalar Ornstein–Uhlenbeck process $u_j(t)$, also referred to as Gaussian colored noise, is defined on a complete probability space $(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},P)$ with a filtration $\{\mathcal F_t\}_{t\ge0}$ satisfying the usual conditions (Øksendal 2005; Mao 2007): it is increasing and right continuous, and $\mathcal F_0$ contains all P-null sets. In Eq. (1b), $W_j(t)$ $(1\le j\le m)$ are statistically independent Wiener processes satisfying

$$\langle W_i(t)\rangle=0,\qquad \langle W_i(t)W_j(s)\rangle=\delta_{ij}\min(t,s).$$

In this paper, we assume that the Ornstein–Uhlenbeck process $u_j(t)$ is stationary; that is, $u_j(t)\sim N(0,\,0.5\tau\sigma^2)$ for all $t\ge0$. It is also known from the Itô formula that

$$E|u_j(t)|^{2k}=(2k-1)!!\,(0.5\tau\sigma^2)^k,\qquad k=1,2,\dots$$
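
For intuition, the stationary law of Eq. (1b) is easy to check numerically. The following minimal sketch (ours, not part of the original analysis; the step size and sample count are arbitrary choices) integrates the OU process with the Euler–Maruyama scheme and compares the empirical variance with the stationary value $0.5\tau\sigma^2$:

```python
import numpy as np

def simulate_ou(tau, sigma, dt=1e-3, n_steps=100_000, rng=None):
    """Euler-Maruyama integration of Eq. (1b):
    du = -(1/tau) u dt + sigma dW, started from the stationary law."""
    rng = np.random.default_rng() if rng is None else rng
    u = np.empty(n_steps)
    u[0] = rng.normal(0.0, np.sqrt(0.5 * tau * sigma**2))
    for k in range(n_steps - 1):
        u[k + 1] = u[k] - (u[k] / tau) * dt + sigma * np.sqrt(dt) * rng.normal()
    return u

tau, sigma = 0.4, 1.0
u = simulate_ou(tau, sigma)
print("empirical variance :", u.var())          # ~0.2 for these values
print("stationary variance:", 0.5 * tau * sigma**2)
```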

Suppose that $\hat X(t)$ satisfies

$$\frac{d}{dt}\hat X_t=f(\hat X_t,t),\qquad \hat X(0)=X_0 \tag{2}$$

Then, the following main results in Theorems 1 and 2 state that

$$E\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\to0 \tag{3}$$

as $\sigma\to0$, under the global and local Lipschitz conditions, respectively.

Theorem 1

Let $f_i:\mathbb R^d\to\mathbb R$ and $g_{ji}:\mathbb R^d\to\mathbb R$ in system (1) be Borel measurable functions. Assume that there is a positive constant L such that $f_i$ and $g_{ji}$ satisfy

$$|f_i(x_1,\cdot)-f_i(x_2,\cdot)|^2\le L\|x_1-x_2\|^2,\qquad |g_{ji}(x_1,\cdot)-g_{ji}(x_2,\cdot)|^2\le L\|x_1-x_2\|^2 \tag{4}$$

for $x_1,x_2\in\mathbb R^d$; namely, $f_i$ and $g_{ji}$ are globally Lipschitz continuous. Also assume that there is a pair of positive constants K and $\gamma\in(0,1)$ such that $f_i$ and $g_{ji}$ satisfy the global growth conditions

$$|f_i(x,t)|\le K(1+\|x\|),\qquad |g_{ji}(x,t)|\le K(1+\|x\|^\gamma) \tag{5}$$

for $(x,t)\in\mathbb R^d\times[0,T]$. Here $i=1,2,\dots,d$ and $j=1,2,\dots,m$. Then, for every $T>0$, there exist positive constants $a_4$ and $b_4$ (see Eqs. (11) and (12) in the "Proof of general results" section, respectively) such that

$$E\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\le\sigma^2A_1\exp(B_1T)<\infty, \tag{6}$$

where $A_1=2dm(m+1)T^2K^2\bigl(4\xi_1+\sqrt{\xi_2\,a_4\exp(b_4T)}\bigr)$, $B_1=(m+1)dTL$, and

$$\xi_k=(2k-1)!!\,(0.5\tau)^{k-1}(0.5\tau+kT)+2k\sqrt{3T(4k-3)!!\,(0.5\tau)^{2k-1}} \tag{7}$$

for $k=1,2,\dots$

Theorem 2

Let $f_i:\mathbb R^d\to\mathbb R$ and $g_{ji}:\mathbb R^d\to\mathbb R$ in system (1), for all $i=1,2,\dots,d$ and $j=1,2,\dots,m$, be Borel measurable functions satisfying the local Lipschitz condition

$$|f_i(x_1,\cdot)-f_i(x_2,\cdot)|^2\le L_N\|x_1-x_2\|^2,\qquad |g_{ji}(x_1,\cdot)-g_{ji}(x_2,\cdot)|^2\le L_N\|x_1-x_2\|^2 \tag{8}$$

for all $x_1,x_2\in\mathbb R^d$ with $\|x_1\|\le N$ and $\|x_2\|\le N$, together with the growth conditions (5). Here $L_N$ is a positive constant for any $N>0$. Then, for every $T>0$, $E\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\to0$ as $\sigma\to0$.

Throughout the paper, we use $\|\cdot\|$ to denote the Euclidean norm in $\mathbb R^d$ or the trace norm of matrices; that is, for a vector X, $\|X\|^2=\sum_i X_i^2$, and for a matrix A, $\|A\|^2=\sum_{i,j}A_{ij}^2$. We remark that Theorem 1 states that the solution of the perturbed system (1), under the global Lipschitz condition and the growth condition, can be approximated by that of the unperturbed system as the noise intensity of the Gaussian colored noise tends to zero, while Theorem 2 draws the same conclusion but relaxes the global Lipschitz condition to a local Lipschitz condition. Both can be regarded as generalizations of the perturbation results associated with the zeroth order approximation (Freidlin et al. 2012). More precisely, the corresponding perturbation result in the book of Freidlin and Wentzell is recovered from Theorem 1 when $\tau\to0$ (i.e., in the Gaussian white noise limit). By utilizing the two theorems, we provide an assertion, in Theorem 3 below, on the existence of aperiodic stochastic resonance in certain nonlinear systems with Gaussian colored noise.

The aperiodic stochastic resonance phenomenon is a special kind of stochastic resonance in which the weak drive signal is aperiodic. As pointed out in the Introduction, the mutual information is better qualified than the signal-to-noise ratio as an index to quantify aperiodic stochastic resonance. To this end, we suppose that the nonlinear system receives a binary random signal, denoted by $S(t)\in\{s_1,s_2\}$, and that its output $Y(t)\in\{0,1\}$ is a quantized signal as well, depending on whether the output response x(t) is below or above a certain threshold. We emphasize that this kind of quantized treatment is very common in the context of stochastic resonance and neural dynamics.

Let I(S,Y) be the Shannon mutual information of the discrete input signal S and the discrete output signal Y; it can be defined as the difference between the output's unconditional entropy and its conditional entropy (Cover and Thomas 1991), namely $I(S,Y)=H(Y)-H(Y|S)$. Denote by $P_S(s)$ the probability law of the input signal, $P_Y(y)$ the probability law of the output signal, $P_{Y|S}(y|s)$ the conditional law of the output given the input, and $P_{S,Y}(s,y)$ the joint law of the input and the output. Then,

$$I(S,Y)=H(Y)-H(Y|S)=-\sum_yP_Y(y)\log P_Y(y)+\sum_s\sum_yP_{S,Y}(s,y)\log P_{Y|S}(y|s)=-\sum_s\sum_yP_{S,Y}(s,y)\log P_Y(y)+\sum_s\sum_yP_{S,Y}(s,y)\log\frac{P_{S,Y}(s,y)}{P_S(s)}=\sum_s\sum_yP_{S,Y}(s,y)\log\frac{P_{S,Y}(s,y)}{P_S(s)P_Y(y)} \tag{9}$$

From the final equality above, it is clear that $I(S,Y)=0$ if and only if $P_{S,Y}(s,y)=P_S(s)P_Y(y)$. Moreover, by means of Jensen's inequality one finds that $I(S,Y)\ge0$, where equality holds if and only if the input and output signals are mutually independent. Hence Shannon mutual information, capable of measuring the statistical correlation between the input and output signals, is suitable for detecting how much of the subthreshold aperiodic signal is contained in the output spike train. Noise ordinarily deteriorates the transmission performance of dynamical systems; however, when aperiodic stochastic resonance occurs, the transmission capacity is optimally enhanced at an intermediate noise level.
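
As a concrete illustration (our sketch, not part of the original paper), Eq. (9) can be estimated by the plug-in method, replacing every probability law with an empirical frequency computed from paired samples of the input and output symbols:

```python
import numpy as np

def mutual_information(s, y):
    """Plug-in estimate of Eq. (9) from two equal-length sequences of
    discrete symbols; returns I(S,Y) in bits (use np.log for nats)."""
    s, y = np.asarray(s), np.asarray(y)
    info = 0.0
    for sv in np.unique(s):
        for yv in np.unique(y):
            p_sy = np.mean((s == sv) & (y == yv))   # joint frequency
            if p_sy > 0.0:
                p_s = np.mean(s == sv)              # marginal of S
                p_y = np.mean(y == yv)              # marginal of Y
                info += p_sy * np.log2(p_sy / (p_s * p_y))
    return info
```

The estimate vanishes exactly when the empirical joint law factorizes, mirroring the independence criterion above.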

Note that a nonmonotonic dependence of the input–output mutual information on the noise intensity signifies the occurrence of aperiodic stochastic resonance; thus a direct proof of the existence of aperiodic stochastic resonance should contain a deduction of the extreme point of the mutual information. However, explicit formulas for the mutual information are often hard to acquire, so such a direct proof is almost impossible. In order to make our results generally applicable, we adopt an indirect proof based on the "forbidden interval" theorem (Patel and Kosko 2008), as stated in Theorem 3.

Theorem 3

Consider stochastic resonant systems of the form of Eq. (1) with $f(X_t,t)=\tilde f(X_t)+\tilde S(t)$ and $\tilde S(t)=[S(t)\ 0\ \cdots\ 0]^\top$. Suppose that $\tilde f(x)$ and $g(x)$ satisfy the local Lipschitz condition and the growth condition (5). Suppose that the input signal $S(t)\in\{s_1,s_2\}$ is subthreshold, that is, $S(t)<\theta$ with $\theta$ being some crossing threshold. Suppose that for some sufficiently large noise intensity there is some statistical dependence between the binary input and the impulsive output, that is, $I(S,Y)>0$ holds for some $\sigma_0>0$. Then the stochastic resonant systems exhibit the aperiodic stochastic resonance effect in the sense that $I(S,Y)\to0$ as $\sigma\to0$.

Theorem 3 gives a sufficient condition for aperiodic stochastic resonance in system (1) with subthreshold signals. As is known from Jensen's inequality, $I(S,Y)\ge0$, and $I(S,Y)=0$ if and only if S and Y are statistically independent. We can therefore reasonably suppose that there exists some $\sigma_0>0$ such that $I(S,Y)>0$. The "forbidden interval" theorem states that what goes down must go up (Patel and Kosko 2005, 2008; Kosko et al. 2009); thus the assertion in Theorem 3 is proven once one verifies that $I(S,Y)\to0$ as $\sigma\to0$. Therefore, increasing the noise intensity from zero leads to an increase of the mutual information and hence enhances the ability to discriminate subthreshold signals.

Proof of general results

In this section we give the proofs of the above theorems. To avoid an overly lengthy and tedious deduction, we only state the required lemmas here and move their proofs to the Appendix.

Lemma 1

Let $k\ge1$ be an integer. The stationary OU process (1b) has the property that, for $T\ge0$,

$$E\sup_{0\le t\le T}|u_j(t)|^{2k}\le\sigma^{2k}\Bigl((2k-1)!!\,(0.5\tau)^{k-1}(0.5\tau+kT)+2k\sqrt{3T(4k-3)!!\,(0.5\tau)^{2k-1}}\Bigr)=\sigma^{2k}\xi_k$$

Lemma 2

Let $f_i:\mathbb R^d\to\mathbb R$ and $g_{ji}:\mathbb R^d\to\mathbb R$ in Eq. (1) be Borel measurable functions that satisfy the global Lipschitz condition (4) or the local Lipschitz condition (8), together with the growth conditions (5). Then for any initial value $X_0\in\mathbb R^d$, Eq. (1) has a unique global solution $X_t$ on $t\ge0$. Moreover, for any integer $p\ge2$, the solution has the property that

$$E\sup_{0\le t\le T}\|X_t\|^p\le a_p\exp(b_pT)<\infty \tag{10}$$

with

$$a_p=d^{\frac p2+1}(m+2)^{p-1}\Bigl[\|X_0\|^p+T^p2^{p-1}K^p\Bigl(1+m\sigma^p\xi_p^{1/2}+m\sigma^{\frac p{1-\gamma}}\xi_{\bar k}^{\frac p{2(1-\gamma)\bar k}}\Bigr)\Bigr], \tag{11}$$
$$b_p=(m+1)d^{\frac p2+1}(m+2)^{p-1}T^{p-1}2^{p-1}K^p, \tag{12}$$

and $\bar k$ is an integer satisfying

$$\bar k\ge\frac p{2(1-\gamma)}. \tag{13}$$

Proof of Theorem 1

Fix $T>0$ arbitrarily. Using the elementary inequality $u^\gamma\le1+u$ for any $u\ge0$, we see from (5) that

$$|g_{ji}(x,t)|\le K(2+\|x\|),\qquad (x,t)\in\mathbb R^d\times[0,\infty). \tag{14}$$

To show the assertion (6), let us start with the scalar equation

$$X_t^i-\hat X_t^i=\int_0^t\bigl(f_i(X_s,s)-f_i(\hat X_s,s)\bigr)ds+\sum_{j=1}^m\int_0^tg_{ji}(X_s,s)u_j(s)\,ds.$$

Using the inequality $(u_1+\dots+u_n)^2\le n(u_1^2+\dots+u_n^2)$, we get

$$\begin{aligned}|X_t^i-\hat X_t^i|^2&\le(m+1)\Bigl(\Bigl|\int_0^t\bigl(f_i(X_s,s)-f_i(\hat X_s,s)\bigr)ds\Bigr|^2+\sum_{j=1}^m\Bigl|\int_0^tg_{ji}(X_s,s)u_j(s)ds\Bigr|^2\Bigr)\\&\le(m+1)\Bigl(t\int_0^t|f_i(X_s,s)-f_i(\hat X_s,s)|^2ds+\sum_{j=1}^mt\int_0^t|g_{ji}(X_s,s)u_j(s)|^2ds\Bigr)\\&\le(m+1)\Bigl(tL\int_0^t\|X_s-\hat X_s\|^2ds+2tK^2\sum_{j=1}^m\int_0^t\bigl(4+\|X_s\|^2\bigr)|u_j(s)|^2ds\Bigr)\end{aligned}$$

for $0<t\le T$. We emphasize that the inequality (14) has been used here. As the right-hand-side terms are increasing in t, we derive

$$\begin{aligned}E\sup_{0\le s\le t}|X_s^i-\hat X_s^i|^2&\le(m+1)TL\int_0^tE\|X_s-\hat X_s\|^2ds+2m(m+1)T^2K^2\Bigl(4E\sup_{0\le s\le t}|u_j(s)|^2+E\Bigl[\sup_{0\le s\le t}\|X_s\|^2\sup_{0\le s\le t}|u_j(s)|^2\Bigr]\Bigr)\\&\le(m+1)TL\int_0^tE\sup_{0\le r\le s}\|X_r-\hat X_r\|^2ds+2m(m+1)T^2K^2\Bigl(4E\sup_{0\le s\le T}|u_j(s)|^2+\sqrt{E\sup_{0\le s\le T}\|X_s\|^4}\sqrt{E\sup_{0\le s\le T}|u_j(s)|^4}\Bigr)\end{aligned}$$

and then by Lemmas 1 and 2,

$$E\sup_{0\le s\le t}\|X_s-\hat X_s\|^2\le(m+1)dTL\int_0^tE\sup_{0\le r\le s}\|X_r-\hat X_r\|^2ds+2\sigma^2dm(m+1)T^2K^2\Bigl(4\xi_1+\sqrt{\xi_2\,a_4\exp(b_4T)}\Bigr).$$

An application of the Gronwall inequality implies the required assertion (6). □

Lemma 3

Let $f_i:\mathbb R^d\to\mathbb R$ in Eq. (2) be Borel measurable functions that satisfy the local Lipschitz condition (8) and the growth condition (5). Then for any initial value $X_0\in\mathbb R^d$, Eq. (2) has a unique global solution $\hat X_t$ on $t\ge0$. Moreover, for any $T>0$, the solution has the property that

$$\sup_{0\le t\le T}\|\hat X_t\|^p\le c_p<\infty \tag{15}$$

with $c_p=d^{p/2}\bigl(2^{p-1}\|X_0\|^p+T^p2^{2(p-1)}K^p\bigr)\exp\bigl(2^{2(p-1)}d^{p/2}K^pT^p\bigr)$.

Proof of Theorem 2

The local Lipschitz condition and the growth condition ensure the existence of the unique solution of system (1). We adapt the technique from the work of Mao and Sababis (2003) to show the required assertion (3). For each $N>d(\|X_0\|+TK)\exp(dKT)$, Lemma 3 gives $\sup_{0\le t\le T}\|\hat X_t\|<N$. Define the stopping time $\tau_N=\inf\{t\ge0:\|X_t\|\ge N\}$. Clearly,

$$E\sup_{0\le t\le T}\|X_t-\hat X_t\|^2=E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\mathbf1_{\{\tau_N\le T\}}\Bigr]+E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\mathbf1_{\{\tau_N>T\}}\Bigr] \tag{16}$$

where $\mathbf1_A$ is the indicator function of the set A.

Let us estimate the first term on the right-hand side of Eq. (16). Noting that the Young inequality (Prato and Zabczyk 1992) $\alpha\beta\le\frac\eta\mu\alpha^\mu+\frac1{\nu\,\eta^{\nu/\mu}}\beta^\nu$ holds for all $\alpha,\beta\ge0$, $\eta>0$ and $\mu,\nu>1$ with $\mu^{-1}+\nu^{-1}=1$, we have

$$E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\mathbf1_{\{\tau_N\le T\}}\Bigr]\le\frac{2\eta}pE\Bigl(\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\Bigr)^{p/2}+\frac{p-2}{p\,\eta^{2/(p-2)}}E\bigl(\mathbf1_{\{\tau_N\le T\}}\bigr)^{p/(p-2)}$$

where $p>2$ is an integer and $\eta$ is a positive number, from which it can be deduced that

$$E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\mathbf1_{\{\tau_N\le T\}}\Bigr]\le\frac{2\eta}pE\sup_{0\le t\le T}\|X_t-\hat X_t\|^p+\frac{p-2}{p\,\eta^{2/(p-2)}}P(\tau_N\le T). \tag{17}$$

We know

$$E\sup_{0\le t\le T}\|X_t\|^p\le a_p\exp(b_pT)$$

from Lemma 2 and

$$\sup_{0\le t\le T}\|\hat X_t\|^p\le c_p$$

from Lemma 3, then,

$$P(\tau_N\le T)=E\Bigl[\mathbf1_{\{\tau_N\le T\}}\frac{\|X_{\tau_N}\|^p}{N^p}\Bigr]\le\frac1{N^p}E\|X_{\tau_N}\|^p\le\frac1{N^p}a_p\exp(b_pT), \tag{18}$$
$$E\sup_{0\le t\le T}\|X_t-\hat X_t\|^p\le2^{p-1}\Bigl(E\sup_{0\le t\le T}\|X_t\|^p+\sup_{0\le t\le T}\|\hat X_t\|^p\Bigr)\le2^{p-1}\bigl(a_p\exp(b_pT)+c_p\bigr). \tag{19}$$

Substitution of Eqs. (18) and (19) into Eq. (17) yields

$$E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\mathbf1_{\{\tau_N\le T\}}\Bigr]\le\frac{2^p\eta}p\bigl(a_p\exp(b_pT)+c_p\bigr)+\frac{p-2}{p\,\eta^{2/(p-2)}}\frac{a_p\exp(b_pT)}{N^p}. \tag{20}$$

Next, we estimate the second term on the right-hand side of Eq. (16). The procedure closely parallels the proof of Theorem 1; we list the details here for the reader's convenience. Clearly,

$$E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\mathbf1_{\{\tau_N>T\}}\Bigr]=E\Bigl[\sup_{0\le t\le T}\|X_{t\wedge\tau_N}-\hat X_{t\wedge\tau_N}\|^2\,\mathbf1_{\{\tau_N>T\}}\Bigr]\le E\sup_{0\le t\le T}\|X_{t\wedge\tau_N}-\hat X_{t\wedge\tau_N}\|^2. \tag{21}$$

Noting that $|X^i_{t\wedge\tau_N}-\hat X^i_{t\wedge\tau_N}|^2=\bigl|\int_0^{t\wedge\tau_N}(f_i(X_s,s)-f_i(\hat X_s,s))ds+\sum_{j=1}^m\int_0^{t\wedge\tau_N}g_{ji}(X_s,s)u_j(s)ds\bigr|^2$, using the Hölder inequality, the local Lipschitz condition (8) and the growth condition (5) in turn, we arrive at

$$\begin{aligned}|X^i_{t\wedge\tau_N}-\hat X^i_{t\wedge\tau_N}|^2&\le(m+1)\Bigl(\Bigl|\int_0^{t\wedge\tau_N}\bigl(f_i(X_s,s)-f_i(\hat X_s,s)\bigr)ds\Bigr|^2+\sum_{j=1}^m\Bigl|\int_0^{t\wedge\tau_N}g_{ji}(X_s,s)u_j(s)ds\Bigr|^2\Bigr)\\&\le(m+1)\Bigl(t\int_0^{t\wedge\tau_N}|f_i(X_s,s)-f_i(\hat X_s,s)|^2ds+t\sum_{j=1}^m\int_0^{t\wedge\tau_N}|g_{ji}(X_s,s)u_j(s)|^2ds\Bigr)\\&\le(m+1)\Bigl(TL_N\int_0^{t\wedge\tau_N}\|X_s-\hat X_s\|^2ds+2TK^2\sum_{j=1}^m\int_0^{t\wedge\tau_N}\bigl(4+\|X_s\|^2\bigr)|u_j(s)|^2ds\Bigr)\end{aligned}$$

As the right-hand-side terms are increasing in t, we derive

$$\begin{aligned}E|X^i_{t\wedge\tau_N}-\hat X^i_{t\wedge\tau_N}|^2&\le(m+1)TL_NE\int_0^{t\wedge\tau_N}\|X_s-\hat X_s\|^2ds+2(m+1)TK^2\sum_{j=1}^mE\int_0^{t\wedge\tau_N}\bigl(4+\|X_s\|^2\bigr)|u_j(s)|^2ds\\&\le(m+1)TL_N\int_0^tE\sup_{0\le r\le s}\|X_{r\wedge\tau_N}-\hat X_{r\wedge\tau_N}\|^2ds+2(m+1)mT^2K^2\Bigl(4E\sup_{0\le t\le T}|u_j(t)|^2+\sqrt{E\sup_{0\le t\le T}\|X_t\|^4}\sqrt{E\sup_{0\le t\le T}|u_j(t)|^4}\Bigr)\\&\le(m+1)TL_N\int_0^tE\sup_{0\le r\le s}\|X_{r\wedge\tau_N}-\hat X_{r\wedge\tau_N}\|^2ds+2\sigma^2(m+1)mT^2K^2\Bigl(4\xi_1+\sqrt{\xi_2\,a_4\exp(b_4T)}\Bigr)\end{aligned}$$

and then $E\sup_{0\le s\le t}\|X_{s\wedge\tau_N}-\hat X_{s\wedge\tau_N}\|^2\le d(m+1)TL_N\int_0^tE\sup_{0\le r\le s}\|X_{r\wedge\tau_N}-\hat X_{r\wedge\tau_N}\|^2ds+2\sigma^2dm(m+1)T^2K^2\bigl(4\xi_1+\sqrt{\xi_2\,a_4\exp(b_4T)}\bigr)$. By the Gronwall inequality we obtain

$$E\sup_{0\le t\le T}\|X_{t\wedge\tau_N}-\hat X_{t\wedge\tau_N}\|^2\le2d\sigma^2m(m+1)T^2K^2\bigl(4\xi_1+\sqrt{\xi_2\,a_4\exp(b_4T)}\bigr)e^{d(m+1)TL_N} \tag{22}$$

Combination of Eqs. (21) and (22) yields

$$E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\mathbf1_{\{\tau_N>T\}}\Bigr]\le2d\sigma^2m(m+1)T^2K^2\bigl(4\xi_1+\sqrt{\xi_2\,a_4\exp(b_4T)}\bigr)e^{d(m+1)TL_N} \tag{23}$$

With Eqs. (20) and (23) substituted into Eq. (16), we obtain

$$E\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\le\frac{2^p\eta}p\bigl(a_p\exp(b_pT)+c_p\bigr)+\frac{p-2}{p\,\eta^{2/(p-2)}}\frac{a_p\exp(b_pT)}{N^p}+2d\sigma^2m(m+1)T^2K^2\bigl(4\xi_1+\sqrt{\xi_2\,a_4\exp(b_4T)}\bigr)e^{d(m+1)TL_N} \tag{24}$$

For any $\varepsilon>0$, we first choose $\eta$ sufficiently small that $\frac{2^p\eta}p(a_p\exp(b_pT)+c_p)<\frac\varepsilon3$, and then N sufficiently large that $\frac{p-2}{p\,\eta^{2/(p-2)}}\frac{a_p\exp(b_pT)}{N^p}<\frac\varepsilon3$. Finally, we choose $\sigma$ small enough to ensure $2d\sigma^2m(m+1)T^2K^2\bigl(4\xi_1+\sqrt{\xi_2\,a_4\exp(b_4T)}\bigr)e^{d(m+1)TL_N}<\frac\varepsilon3$. Hence there exists a critical value $\sigma_c$ such that $E\sup_{0\le t\le T}\|X_t-\hat X_t\|^2<\varepsilon$ whenever $\sigma<\sigma_c$. □

Lemma 4

Consider a nonlinear system with $f(X_t,t)=\tilde f(X_t)+\tilde S(t)$. Assume $\tilde f(x)$ and $g(x)$ satisfy the local Lipschitz condition and $g(x)$ obeys the growth condition. Suppose that the system receives a binary input $S(t)\in\{s_1,s_2\}$. Then for every $T>0$ and $\varepsilon>0$, as $\sigma\to0$ there hold

$$E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\Big|\,S=s_i\Bigr]\to0, \tag{25}$$

and

$$\lim_{k\to\infty}P\Bigl(\sup_{0\le t\le T}\|X_k(t)-\hat X_k(t)\|>\varepsilon\,\Big|\,S=s_i\Bigr)=0. \tag{26}$$

Proof of Theorem 3

Let $\{\sigma_k\}_{k=1}^\infty$ be an arbitrary decreasing sequence of intensity parameters of the Gaussian colored noise such that $\sigma_k\to0$ as $k\to\infty$. Denote by $X_k(t)$ and $Y_k(t)$ the corresponding solution process and the discrete output process of "0" and "1" with noise parameter $\sigma_k$ in place of $\sigma$. Recalling that $I(S,Y)=0$ if and only if S and Y are statistically independent, one only needs to show that $F_{S,Y}(s,y)=F_S(s)F_Y(y)$, or $F_{Y|S}(y|s)=F_Y(y)$, as $\sigma\to0$ for signal symbols $s\in\{s_1,s_2\}$ and for all $y\ge0$. Here $F_{S,Y}$ stands for the joint distribution function and $F_{Y|S}$ for the conditional distribution function.

Note that the event $\{Y_k>y\}$ with $y\ge0$ requires that $X_k^1(t)$ cross the firing threshold from below; then

$$P(Y_k(t)>y\mid S=s_i)\le P\Bigl(\sup_{t_1\le t\le t_2}X_k^1(t)>\theta\,\Big|\,S=s_i\Bigr),$$

and by Lemma 4,

$$\begin{aligned}\lim_{k\to\infty}P(Y_k>y\mid S=s_i)&\le\lim_{k\to\infty}P\Bigl(\sup_{t_1\le t\le t_2}X_k^1(t)>\theta\,\Big|\,S=s_i\Bigr)\\&=\lim_{k\to\infty}\lim_{n\to\infty}P\Bigl(\sup_{t_1\le t\le t_2}X_k^1(t)>\theta,\ \hat X_k^1(t)<\theta-\tfrac1n\,\Big|\,S=s_i\Bigr)\\&\le\lim_{n\to\infty}\lim_{k\to\infty}P\Bigl(\sup_{t_1\le t\le t_2}\bigl|X_k^1(t)-\hat X_k^1(t)\bigr|>\tfrac1n\,\Big|\,S=s_i\Bigr)=0,\end{aligned}$$

where the first equality is owing to the fact that the input signal is subthreshold, and the last limit vanishes by Lemma 4. Thus,

$$\lim_{k\to\infty}P(Y_k>y\mid S=s_1)=\lim_{k\to\infty}P(Y_k>y\mid S=s_2)=0,$$

or equivalently

$$\lim_{k\to\infty}P(Y_k\le y\mid S=s_1)-\lim_{k\to\infty}P(Y_k\le y\mid S=s_2)=0.$$

Then, using the total probability formula,

$$F_{Y_k}(y)=F_{Y_k|S}(y|s_1)P_S(s_1)+F_{Y_k|S}(y|s_2)P_S(s_2)=F_{Y_k|S}(y|s_1)P_S(s_1)+F_{Y_k|S}(y|s_2)\bigl(1-P_S(s_1)\bigr)=\bigl(F_{Y_k|S}(y|s_1)-F_{Y_k|S}(y|s_2)\bigr)P_S(s_1)+F_{Y_k|S}(y|s_2).$$

Taking the limit $k\to\infty$ on both sides of this equation, we arrive at

$$F_Y(y)=F_{Y|S}(y|s_2).$$

This demonstrates that S and Y become statistically independent, and hence $I(S,Y)\to0$ as $\sigma\to0$. □

Numerical verification

Theorem 3 builds a bridge between the perturbation theorems and the existence of aperiodic stochastic resonance. For an intuitive verification of Theorem 3, let us consider two examples. The first example is a noisy feedback neuron model with quantized output (Patel and Kosko 2008; Gao et al. 2018). Let x denote the membrane voltage; then

$$\frac{dx}{dt}=-x+h(x)+S(t)+u(t),\qquad du(t)=-\frac1\tau u(t)\,dt+\sigma\,dW(t) \tag{27}$$

where the logistic function $h(x)=(1+e^{-ax})^{-1}$ with a=8 gives a bistable artificial neuron model, and the signal $S(t)\in\{A,B\}$ represents the net excitatory or inhibitory input. Here the value of S(t) is drawn from the binary distribution $P(S(t)=A)=p$, $P(S(t)=B)=1-p$, and the duration time of each value of S(t) is considerably larger than the decay time constant $\tau$. More details can be found in the subsequent figures and in the numerical steps for the mutual information. The neuron feeds its activation back to itself through $-x(t)+h(x(t))$, and an action potential (spike) is generated when the membrane potential exceeds zero. Note that the vector field here is $f(x)=-x+h(x)$. By the graphic method, it can be seen from Fig. 1 that if the input signal $S(t)\in\{A,B\}$ takes values between the two dotted lines, that is, $-0.63<A<B<-0.37$, then by linear stability analysis the neuron has three equilibrium points, two stable and one unstable. Since neural information is mainly transmitted by the spike train, the quantized output y(t) can be defined as

$$y(t)=\begin{cases}0,&x(t)\le0,\\1,&x(t)>0.\end{cases}$$

Fig. 1

Schematic of the vector field function f(x) (blue solid line). The upper dotted line is at 0.63 and the lower dotted line at 0.37. The intersections of the dashed line with the S-shaped curve stand for the equilibrium points, and the first-order derivative of the vector field gives the slope of the tangent line at each of them. Since two of the three slopes are negative and one is positive, two of the three equilibrium points are stable and one is unstable. (Color figure online)

Note that

$$\bigl|x_2-(1+e^{-8x_2})^{-1}-x_1+(1+e^{-8x_1})^{-1}\bigr|^2\le2|x_2-x_1|^2+2\Bigl|\frac{e^{-8x_2}-e^{-8x_1}}{(1+e^{-8x_1})(1+e^{-8x_2})}\Bigr|^2\le18|x_2-x_1|^2$$

and

$$|f(x)|=|-x+h(x)|\le1+|x|.$$

The two inequalities together imply that the vector field f(x) of the bistable neuron model satisfies the global Lipschitz condition (4) and the growth condition (5). Since the global Lipschitz condition implies the local Lipschitz condition, which assures the existence and uniqueness of the solution of the model, by Theorem 3 the phenomenon of aperiodic stochastic resonance should exist for a subthreshold input signal. Here, by "subthreshold" we mean that the weak signal cannot spontaneously emit an action potential without the help of noise. The input signal is guaranteed to be subthreshold whenever its constant values keep the model bistable, which can be checked numerically as in the sketch below.
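
The following small sketch of ours (the bracketing interval and grid size are arbitrary choices) locates the equilibria of the noise-free model $\dot x=-x+h(x)+S$ by root finding; for both signal levels used later it returns three roots, confirming the bistable, and hence subthreshold, regime:

```python
import numpy as np
from scipy.optimize import brentq

a = 8.0
h = lambda x: 1.0 / (1.0 + np.exp(-a * x))

def f(x, S):
    """Noise-free right-hand side of Eq. (27)."""
    return -x + h(x) + S

def equilibria(S, lo=-2.0, hi=3.0, n=2000):
    """Bracket sign changes of f on a grid and refine each with brentq."""
    xs = np.linspace(lo, hi, n)
    vals = f(xs, S)
    return [brentq(f, xs[i], xs[i + 1], args=(S,))
            for i in range(n - 1) if vals[i] * vals[i + 1] < 0]

for S in (-0.6, -0.4):        # the two signal levels used in Fig. 2
    print(S, equilibria(S))   # three equilibria => bistable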

Before exhibiting the numerical results of aperiodic stochastic resonance, let us list the numerical steps of the mutual information calculation for the reader's reference.

  • (I)

    Initialize the parameters A, B, p and x(0).

  • (II)

    Given the time step-length Δt=0.01 and a series of the duration time Ti(i1).

  • (III)

    For each time span of duration time Ti, generate a uniformly distributed number r on (0,1), and then let S(t)=A if r≤p and S(t)=B otherwise, so that P(S(t)=A)=p.

  • (IV)

    Apply the Euler difference scheme and the Box–Muller algorithm to Eq. (27) (or Eq. (28)) to generate the output spike train y(t).

  • (V)

    Calculate the marginal probability laws P(S(t)) and P(y(t)) and the joint probability law P(S(t),y(t)).

  • (VI)

    Substitute the above probability laws into Eq. (9) for the mutual information.
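
A minimal sketch of Steps (I)–(IV) for the bistable model (27) might read as follows (our illustration; the defaults echo the Fig. 2 setting, and NumPy's Gaussian generator stands in for the Box–Muller algorithm):

```python
import numpy as np

def simulate_bistable(A=-0.6, B=-0.4, p=0.7, sigma=0.4, tau=0.4,
                      T_dur=40.0, n_dur=50, dt=0.01, rng=None):
    """Euler scheme for Eq. (27) with a piecewise-constant binary input;
    returns the per-step input symbols S and the quantized output y."""
    rng = np.random.default_rng() if rng is None else rng
    steps = int(T_dur / dt)                            # steps per duration window
    symbols = np.where(rng.random(n_dur) <= p, A, B)   # Step (III): P(S=A)=p
    S = np.repeat(symbols, steps)
    h = lambda z: 1.0 / (1.0 + np.exp(-8.0 * z))
    x = 0.0
    u = rng.normal(0.0, np.sqrt(0.5 * tau * sigma**2))
    y = np.empty(len(S), dtype=int)
    for k in range(len(S)):
        x += (-x + h(x) + S[k] + u) * dt               # membrane voltage, Eq. (27)
        u += -(u / tau) * dt + sigma * np.sqrt(dt) * rng.normal()
        y[k] = 1 if x > 0.0 else 0                     # quantizer defined above
    return S, y
```

Steps (V)–(VI) then amount to feeding the pair (S, y) to the mutual_information sketch given earlier.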

We remark that in Step (V) above, the involved probability laws (see also Table 1) are approximated by statistical frequencies. In all numerical experiments except Fig. 4, the dimensionless duration time parameter of the input signal S(t) is fixed at T=40, and the simulated time span consists of 50 such constant duration windows. Over one time span, one membrane trajectory or output spike train is tracked, and the mutual information is then acquired from that single trial. Note that the definition in Eq. (9) can be rewritten as

$$I(S,Y)=E\Bigl[\log\frac{P_{S,Y}(s,y)}{P_S(s)P_Y(y)}\Bigr],$$

thus the mutual information is actually the mathematical expectation of the random variable $\log\frac{P_{S,Y}(s,y)}{P_S(s)P_Y(y)}$ (Patel and Kosko 2008). Hence, to improve the accuracy of the calculation, for each set of given parameters we employ 100 trials and report the averaged mutual information in all of the figures; a sketch of this averaging follows.
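
Continuing the sketches above, the 100-trial averaging can be written as:

```python
import numpy as np

def averaged_mi(n_trials=100, **model_kwargs):
    """Average the plug-in MI estimate over independent trials, mirroring
    the 100-trial averaging used for the figures."""
    return float(np.mean([mutual_information(*simulate_bistable(**model_kwargs))
                          for _ in range(n_trials)]))
```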

Table 1.

Marginal and joint probability laws


Fig. 4

Mutual information between the input signal S and the quantized output signal Y as a function of (a) the noise intensity σ and (b) the correlation time constant τ, under different duration time parameters of the input signal. Here A=-0.6, B=-0.4 and p=0.7

The non-monotonic dependence of the mutual information on the noise intensity signifies the occurrence of stochastic resonance, as shown in Fig. 2. Since the binary input is subthreshold, there is no spike in the absence of Gaussian colored noise (Fig. 2b). As a small amount of noise is added, the neuron starts to spike (Fig. 2c), but the output signal differs greatly from the binary input (Fig. 2a). When the noise is at an appropriate level, the output signal closely resembles the input signal in shape (Fig. 2d), but the resemblance is gradually broken as an excessive amount of noise causes too frequent spikes (Fig. 2e). The non-monotonic dependence of the input–output mutual information on the noise intensity exactly reflects the change in this resemblance (Fig. 2f); thus the phenomenon of stochastic resonance is confirmed.

Fig. 2

Stochastic resonance in the bistable neuron model with quantized output. The binary signal is shown in panel (a). Here A=-0.6, B=-0.4 and p=0.7. Since the input signal is subthreshold, there is no "1" in the quantized output when the Gaussian colored noise is absent (σ=0, τ=0.4), as shown in panel (b). As the noise intensity of the Gaussian colored noise increases, more and more "1s" occur in the quantized output, as shown in panels (c) (σ=0.1, τ=0.4), (d) (σ=0.4, τ=0.4) and (e) (σ=1, τ=0.4); but obviously too much Gaussian colored noise reduces the input–output coherence, so there is a mono-peak structure in the curves of mutual information versus noise intensity shown in panel (f): τ=0.2 (blue dotted curve), τ=0.4 (red dashed curve) and τ=0.6 (green solid curve). (Color figure online)

From Fig. 2f one further sees that the correlation time has a definite effect on the bell-shaped curve of the input–output mutual information. That is, the peak height of the mutual information is a decreasing function of the correlation time, while at the same time the optimal noise intensity at which the resonant peak is located shifts to a weaker noise level. To disclose the influence of the colored noise more systematically, we plot the mutual information as a function of the correlation time in Fig. 3. Surprisingly, correlation time induced aperiodic stochastic resonance is observed for a given noise intensity, and there exists an optimal correlation time at which the shape matching between the input and output signals is best. Moreover, as the noise intensity increases, the optimal correlation time of the maximal mutual information decreases. The similarity between Figs. 2 and 3 suggests that the noise intensity and the correlation time play a similar role. In fact, this conjecture can be confirmed by checking the steady-state fluctuation of the Gaussian colored noise: a simple calculation shows that the steady noise variance is proportional to the correlation time and to the square of the noise intensity, namely $D(u)=\tau\sigma^2/2$. Although this finding differs somewhat from observations of Gaussian colored noise induced conventional (periodic) stochastic resonance (Gammaitoni et al. 1998), where the resonant peak tends to shift to a larger noise level as the correlation time increases, it is meaningful: in neural circuit design the noise intensity is usually hard to change, so one may instead tune the correlation time to realize the enhancement of information capacity. Additionally, the influence of different duration time parameters of the input signal on the aperiodic stochastic resonance is checked in Fig. 4. It is observed that as the duration time decreases, the resonance effects induced by both the Gaussian colored noise and the correlation time become weak. This is consistent with conventional stochastic resonance, where only a slowly varying periodic signal, rather than a high-frequency one, can be amplified by noise (Kang et al. 2005).

Fig. 3

Stochastic resonance in the bistable neuron model with quantized output. The binary signal is shown in panel (a). Here A=-0.6, B=-0.4 and p=0.7. There is no "1" in the quantized output when the correlation time constant of the Gaussian colored noise is close to zero (τ=0.001, σ=0.3), as shown in panel (b). As the correlation time constant increases, more and more "1s" occur in the quantized output, as shown in panels (c) (τ=0.2, σ=0.3), (d) (τ=0.5, σ=0.3) and (e) (τ=1.5, σ=0.3); but obviously too large a correlation time constant reduces the input–output coherence, so there is a mono-peak structure in the curves of mutual information versus correlation time constant shown in panel (f): σ=0.3 (blue dotted curve), σ=0.5 (red dashed curve) and σ=0.7 (green solid curve). (Color figure online)

The second example is the FitzHugh–Nagumo neuron model (Capurro et al. 1998), governed by

$$\varepsilon\frac{dv}{dt}=v(v-a)(1-v)-w+A_0+S(t)+g(v)u(t),\qquad\frac{dw}{dt}=v-w-b,\qquad du(t)=-\frac1\tau u(t)\,dt+\sigma\,dW(t) \tag{28}$$

where v stands for the transmembrane voltage, w denotes a slow recovery variable, and the input signal $S(t)\in\{A,B\}$ is again a subthreshold binary signal. Whenever the membrane voltage crosses the threshold value θ=0.5 from below, the neuron emits a spike, and the output spike train can be formulated as

$$Y(t)=\sum_i\delta(t-t_i) \tag{29}$$

with $t_i$ being the occurrence time of the ith spike; a simulation sketch is given below.
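
For reference, a sketch of the excitable model (28) with additive noise g(v)=1 and the spike-time extraction of Eq. (29) could look like this (ours; the zero initial state is an assumption, and the parameter defaults follow the values quoted below):

```python
import numpy as np

def simulate_fhn(S, dt=1e-3, eps=0.005, a=0.5, b=0.2466, A0=0.04,
                 sigma=0.01, tau=0.4, theta=0.5, rng=None):
    """Euler scheme for Eq. (28) with g(v)=1; S holds the piecewise-constant
    binary input on the simulation grid. Returns the spike times t_i of
    Eq. (29), i.e. the upward crossings of theta by the voltage v."""
    rng = np.random.default_rng() if rng is None else rng
    v = np.empty(len(S))
    vt, wt = 0.0, 0.0                             # assumed initial state
    u = rng.normal(0.0, np.sqrt(0.5 * tau * sigma**2))
    for k, s in enumerate(S):
        vt += (vt * (vt - a) * (1.0 - vt) - wt + A0 + s + u) / eps * dt
        wt += (vt - wt - b) * dt
        u += -(u / tau) * dt + sigma * np.sqrt(dt) * rng.normal()
        v[k] = vt
    up = np.flatnonzero((v[:-1] <= theta) & (v[1:] > theta)) + 1
    return up * dt                                # spike times t_i

# example input: 50 windows of 40 time units on a dt = 0.001 grid
rng = np.random.default_rng()
S = np.repeat(np.where(rng.random(50) <= 0.7, -0.035, -0.125), 40_000)
print(simulate_fhn(S, rng=rng)[:5])
```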

Note that

$$\begin{aligned}&\bigl|v_1(v_1-a)(1-v_1)-w_1-v_2(v_2-a)(1-v_2)+w_2\bigr|^2\\&\quad=\bigl|(v_2-v_1)(v_1^2+v_1v_2+v_2^2)+(v_1-v_2)(v_1+v_2)+a(v_2-v_1)+w_2-w_1\bigr|^2\\&\quad\le4\bigl|(v_2-v_1)(v_1^2+v_1v_2+v_2^2)\bigr|^2+4\bigl|(v_1-v_2)(v_1+v_2)\bigr|^2+4\bigl|a(v_2-v_1)\bigr|^2+4|w_2-w_1|^2\\&\quad\le4\bigl(9N^4+4N^2+a^2+1\bigr)\bigl(|v_2-v_1|^2+|w_2-w_1|^2\bigr)\end{aligned}$$

and

$$|v_1-w_1-v_2+w_2|^2\le2\bigl(|v_2-v_1|^2+|w_2-w_1|^2\bigr)$$

for all $v_1,v_2\in\mathbb R$ with $|v_1|\le N$ and $|v_2|\le N$. Here the region-dependent Lipschitz constant is $L_N=4(9N^4+4N^2+a^2+1)$. Thus the vector field of the FitzHugh–Nagumo model is locally Lipschitz. In fact, the local but not global Lipschitz property of the vector field has been proven via the mean value theorem (Patel and Kosko 2008). On the other hand, since the transmembrane voltage and the slow recovery variable are always bounded, one can assume that there exists a constant C such that $\max(|v|,|w|)\le C$ for any $t>0$; then

$$|v(v-a)(1-v)-w+A_0|\le A_0+|v(v-a)(1-v)-w|\le A_0+\sqrt{|v(v-a)(1-v)|^2+|w|^2}\le\bigl(A_0+(1+a)C^2+C^4+a^2\bigr)\bigl(1+\sqrt{v^2+w^2}\bigr)$$

and

$$|v-w-b|\le b+|v|+|w|\le(2+b)\bigl(1+\sqrt{v^2+w^2}\bigr),$$

that is, the growth condition is satisfied. Again, we can choose g(v)=1 (Figs. 5 and 6) to denote an additive noise intensity, or $g(v)=\frac{v^2}{1+v^4}$ (Fig. 7), which stands for a multiplicative noise intensity yet makes the Lipschitz and growth conditions easy to verify. Then, according to Theorem 3, Gaussian colored noise induced aperiodic stochastic resonance can be anticipated in this neuron model.

Fig. 5

Stochastic resonance in the FitzHugh–Nagumo neuron model. Here g(v)=1, A=-0.035, B=-0.125 and p=0.7. (a) The subthreshold binary signal. (b) Output spikes when the Gaussian colored noise is absent (σ=0, τ=0.4). (c) Output spikes when the noise intensity of the Gaussian colored noise is small (σ=0.003, τ=0.4). (d) Stochastic resonance effect: output spikes when the noise intensity of the Gaussian colored noise is moderate (σ=0.01, τ=0.4). (e) Output spikes when the noise intensity of the Gaussian colored noise is large (σ=0.04, τ=0.4). Obviously too much Gaussian colored noise reduces the input–output coherence, so there is a mono-peak structure in the curves of mutual information versus noise intensity shown in panel (f): τ=0.2 (blue dotted curve), τ=0.4 (red dashed curve) and τ=0.6 (green solid curve). (Color figure online)

Fig. 6

Stochastic resonance in the FitzHugh–Nagumo neuron model. Here g(v)=1, A=-0.035, B=-0.125 and p=0.7. (a) The subthreshold binary signal. (b) Output spikes when the correlation time constant of the Gaussian colored noise is close to zero (τ=0.001, σ=0.03). (c) Output spikes when the correlation time constant is small (τ=0.01, σ=0.03). (d) Stochastic resonance effect: output spikes when the correlation time constant is moderate (τ=0.05, σ=0.03). (e) Output spikes when the correlation time constant is large (τ=0.2, σ=0.03). Obviously too large a correlation time constant reduces the input–output coherence, so there is a mono-peak structure in the curves of mutual information versus correlation time constant shown in panel (f): σ=0.03 (blue dotted curve), σ=0.05 (red dashed curve) and σ=0.07 (green solid curve). (Color figure online)

Fig. 7

Mutual information between the input signal S and the output spike train Y as a function of (a) the noise intensity σ and (b) the correlation time constant τ. Here $g(v)=\frac{v^2}{1+v^4}$, A=-0.035, B=-0.125 and p=0.7. (Color figure online)

In the numerical simulation of the second example, we take a=0.5, $A_0=0.04$, ε=0.005, b=0.2466, Δt=0.001, and the duration time of S(t) is again taken as 40 time units. We point out that the input binary signals in Figs. 5a and 6a are still subthreshold, even though a spike is generated at the moment the signal switches from one value to the other in Figs. 5b and 6b in the absence of noise (Patel and Kosko 2005). Figures 5d and 6d demonstrate again that the best shape matching occurs at a suitable noise intensity or correlation time, at which the input–output mutual information in Figs. 5f and 6f attains its maximum. Thus the aperiodic stochastic resonance induced by Gaussian colored noise is confirmed. Moreover, Fig. 7 shows that the phenomenon can also be induced by multiplicative Gaussian colored noise, and a similar effect of the correlation time on the resonant peak is observed: increasing the correlation time inhibits the aperiodic stochastic resonance effect but reduces the optimal noise intensity, reflecting that the noise intensity and the correlation time play the same role here. Note that the "color" of Gaussian noise always restrains the effect of conventional periodic stochastic resonance and shifts the resonant peak to larger noise intensity (Gammaitoni et al. 1998); thus the properties of aperiodic stochastic resonance do not seem to generalize directly from those of conventional stochastic resonance. In fact, we infer that the properties of aperiodic stochastic resonance should resemble those of stochastic synchronization, since both can be measured by the same quantifying index.

The above neuron models have verified the assertion of Theorem 3. In fact, Theorem 3 gives sufficient conditions for the aperiodic stochastic resonance effect of Gaussian colored noise in neuron models with subthreshold input signals. By utilizing Theorem 3, the investigation of aperiodic stochastic resonance under Gaussian colored noise is reduced to the simple task of showing that the input–output mutual information has a zero limit. Then, just as with the theorems in the work of Patel and Kosko (2005, 2008) and Kosko et al. (2009), Theorem 3 acts as a type of screening device for deciding whether noise benefits the detection of subthreshold signals as measured by the mutual information.

Conclusion and discussion

After proving that, under certain conditions, the solution of a nonlinear dynamical system perturbed by Gaussian colored noise converges to the solution of its deterministic counterpart as the noise intensity tends to zero, we theoretically predicted the occurrence of aperiodic stochastic resonance induced by Gaussian colored noise in bistable and excitable neural systems, based on the "forbidden interval" theorem. The theoretical prediction presents a technical tool that screens for whether mutual-information-measured stochastic resonance occurs in the detection of subthreshold signals against a background of Gaussian colored noise. The simulation results with two typical neuron models further verified the occurrence of aperiodic stochastic resonance for weak input signals. In particular, we disclosed a novel inhibitory effect of the correlation time of Gaussian colored noise on aperiodic stochastic resonance, and found that the "color" of the noise plays the same role as the noise intensity. Since in the design of neural circuits the noise intensity is not always easy to tune so as to exploit the benefit of noise, our finding provides an alternative way to implement the effect of aperiodic stochastic resonance by adjusting the correlation time.

Finally, let us stress the main difference from existing theoretical proofs and offer some prospects. As is known, Gaussian white noise, as the formal derivative of the Wiener process with stationary independent increments, cannot describe the correlation of environmental fluctuations; fractional Gaussian noise, as the formal derivative of fractional Brownian motion, has a power-law power spectral density and can model fluctuations with long-range temporal correlation; while Gaussian colored noise, generated by the Ornstein–Uhlenbeck process, is applicable for modeling short-time correlation. Thus, the present work shrinks the gap between the aperiodic stochastic resonance induced by Gaussian white noise (Patel and Kosko 2005) and that induced by fractional Gaussian noise (Gao et al. 2018). Moreover, Lévy noises are the formal derivatives of jump-diffusion Lévy processes with stationary independent increments, so the study of aperiodic stochastic resonance with Lévy noise (Patel and Kosko 2008) did not consider the effect of "color". Note that Gaussian colored noise is only a special member of the family of Lévy colored noise (Lü and Lu 2019), which is capable of describing the subquantal release of neurotransmitter; it will therefore be meaningful to explore the beneficial role of the more general Lévy colored noise in neural information processing in the future.

Acknowledgements

This work was financially supported by the National Natural Science Foundation of China (Grant No. 11772241).

Appendix

Proof of Lemma 1

Proof

Fix $T\ge0$ arbitrarily. The Itô formula (Øksendal 2005; Mao 2007) shows that

$$|u_j(t)|^{2k}=|u_j(0)|^{2k}+\int_0^t\Bigl(-\frac{2k}\tau|u_j(s)|^{2k}+k(2k-1)\sigma^2|u_j(s)|^{2(k-1)}\Bigr)ds+2k\sigma\int_0^t\bigl(u_j(s)\bigr)^{2k-1}dW_j(s)$$

for $0\le t\le T$. By the moment property of the stationary OU process, we get

$$E\sup_{0\le t\le T}|u_j(t)|^{2k}\le\sigma^{2k}(2k-1)!!\,(0.5\tau)^{k-1}(0.5\tau+kT)+2k\sigma E\sup_{0\le t\le T}\Bigl|\int_0^t\bigl(u_j(s)\bigr)^{2k-1}dW_j(s)\Bigr|$$

By the Burkholder–Davis–Gundy inequality (Prato and Zabczyk 1992),

$$E\sup_{0\le t\le T}|u_j(t)|^{2k}\le\sigma^{2k}(2k-1)!!\,(0.5\tau)^{k-1}(0.5\tau+kT)+2\sqrt3\,k\sigma E\Bigl(\int_0^T|u_j(s)|^{4k-2}ds\Bigr)^{1/2}.$$

Using the Hölder inequality we then derive

$$\begin{aligned}E\sup_{0\le t\le T}|u_j(t)|^{2k}&\le\sigma^{2k}(2k-1)!!\,(0.5\tau)^{k-1}(0.5\tau+kT)+2\sqrt3\,k\sigma\Bigl(\int_0^TE|u_j(s)|^{4k-2}ds\Bigr)^{1/2}\\&\le\sigma^{2k}(2k-1)!!\,(0.5\tau)^{k-1}(0.5\tau+kT)+2\sqrt3\,k\sigma\bigl(T(4k-3)!!\,(0.5\tau\sigma^2)^{2k-1}\bigr)^{1/2}\\&\le\sigma^{2k}\Bigl((2k-1)!!\,(0.5\tau)^{k-1}(0.5\tau+kT)+2k\sqrt{3T(4k-3)!!\,(0.5\tau)^{2k-1}}\Bigr).\end{aligned}$$

Proof of Lemma 2

Proof

It is well known that almost all sample paths of the Ornstein–Uhlenbeck process are continuous. It is therefore easy to see from the classical theory of ordinary differential equations that for any initial value $X_0\in\mathbb R^d$, Eq. (1) has a unique global solution $X_t$ on $t\ge0$. Fix $T\ge0$ arbitrarily. According to Lemma 1,

$$E\sup_{0\le t\le T}|u_j(t)|^{2k}\le\sigma^{2k}\xi_k \tag{30}$$

with ξk given by Eq. (7).

Define the stopping times $\tau_h=\inf\{t\ge0:\|X_t\|\ge h\}$ for all integers $h>\|X_0\|$, where throughout this paper we set $\inf\varnothing=\infty$, with $\varnothing$ the empty set. Clearly, $\tau_h\to\infty$ almost surely as $h\to\infty$. For $t\in[0,T]$, it follows from Eq. (1a) that

$$\begin{aligned}|X^i_{t\wedge\tau_h}|^p&\le(m+2)^{p-1}\Bigl(|X_0^i|^p+\Bigl|\int_0^{t\wedge\tau_h}f_i(X_s,s)ds\Bigr|^p+\sum_{j=1}^m\Bigl|\int_0^{t\wedge\tau_h}g_{ji}(X_s,s)u_j(s)ds\Bigr|^p\Bigr)\\&\le(m+2)^{p-1}\Bigl(|X_0^i|^p+t^{p-1}\int_0^{t\wedge\tau_h}|f_i(X_s,s)|^pds+t^{p-1}\sum_{j=1}^m\int_0^{t\wedge\tau_h}|g_{ji}(X_s,s)u_j(s)|^pds\Bigr)\\&\le(m+2)^{p-1}\Bigl(|X_0^i|^p+t^{p-1}\int_0^t|f_i(X_{s\wedge\tau_h},s\wedge\tau_h)|^pds+t^{p-1}\sum_{j=1}^m\int_0^t|g_{ji}(X_{s\wedge\tau_h},s\wedge\tau_h)u_j(s\wedge\tau_h)|^pds\Bigr)\\&\le(m+2)^{p-1}\Bigl(|X_0^i|^p+t^{p-1}K^p\int_0^t\bigl(1+\|X_{s\wedge\tau_h}\|\bigr)^pds+t^{p-1}K^p\sum_{j=1}^m\int_0^t\bigl(1+\|X_{s\wedge\tau_h}\|^\gamma\bigr)^p|u_j(s\wedge\tau_h)|^pds\Bigr)\\&\le(m+2)^{p-1}\Bigl(|X_0^i|^p+t^{p-1}2^{p-1}K^p\int_0^t\bigl(1+\|X_{s\wedge\tau_h}\|^p\bigr)ds+t^{p-1}2^{p-1}K^p\sum_{j=1}^m\int_0^t\bigl(1+\|X_{s\wedge\tau_h}\|^{p\gamma}\bigr)|u_j(s\wedge\tau_h)|^pds\Bigr)\end{aligned}$$

Here, the first inequality is due to $(a_1+\dots+a_m)^p\le m^{p-1}(a_1^p+\dots+a_m^p)$; the second is owing to the Hölder inequality; the growth conditions are adopted for the second-to-last inequality; and the inequality $(a+b)^p\le2^{p-1}(a^p+b^p)$ is used in the last. As the right-hand-side terms are increasing in t, we easily see that

$$E\sup_{0\le s\le t}|X^i_{s\wedge\tau_h}|^p\le(m+2)^{p-1}\Bigl(|X_0^i|^p+T^{p-1}2^{p-1}K^p\int_0^t\bigl(1+E\|X_{s\wedge\tau_h}\|^p\bigr)ds\Bigr)+(m+2)^{p-1}T^{p-1}2^{p-1}K^p\sum_{j=1}^m\int_0^tE\Bigl[\bigl(1+\|X_{s\wedge\tau_h}\|^{p\gamma}\bigr)|u_j(s\wedge\tau_h)|^p\Bigr]ds$$

and then, since $|X_0^i|^p=\bigl(|X_0^i|^2\bigr)^{p/2}\le\bigl(\sum_{i=1}^d|X_0^i|^2\bigr)^{p/2}=\|X_0\|^p$,

$$E\sup_{0\le s\le t}|X^i_{s\wedge\tau_h}|^p\le(m+2)^{p-1}\Bigl(\|X_0\|^p+T^{p-1}2^{p-1}K^p\int_0^t\bigl(1+E\sup_{0\le r\le s}\|X_{r\wedge\tau_h}\|^p\bigr)ds\Bigr)+(m+2)^{p-1}T^{p-1}2^{p-1}K^p\sum_{j=1}^m\int_0^tE\Bigl[\Bigl(1+\sup_{0\le r\le s}\|X_{r\wedge\tau_h}\|^{p\gamma}\Bigr)\sup_{0\le r\le s}|u_j(r\wedge\tau_h)|^p\Bigr]ds$$

By the well-known Young inequality $xy\le\frac{x^p}p+\frac{y^q}q$ for $x,y\ge0$ and $p,q>1$ with $\frac1p+\frac1q=1$,

$$E\Bigl[\sup_{0\le r\le s}\|X_{r\wedge\tau_h}\|^{p\gamma}\sup_{0\le r\le s}|u_j(r\wedge\tau_h)|^p\Bigr]\le\gamma E\sup_{0\le r\le s}\|X_{r\wedge\tau_h}\|^p+(1-\gamma)E\sup_{0\le r\le s}|u_j(r\wedge\tau_h)|^{p/(1-\gamma)}\le E\sup_{0\le r\le s}\|X_{r\wedge\tau_h}\|^p+E\sup_{0\le r\le s}|u_j(r\wedge\tau_h)|^{p/(1-\gamma)}$$

while, recalling that $\bar k\ge\frac p{2(1-\gamma)}$ from Eq. (13), the Hölder inequality gives

$$E\sup_{0\le r\le s}|u_j(r\wedge\tau_h)|^{p/(1-\gamma)}\le\Bigl(E\sup_{0\le r\le s}|u_j(r\wedge\tau_h)|^{2\bar k}\Bigr)^{\frac p{2(1-\gamma)\bar k}}\le\Bigl(E\sup_{0\le r\le T}|u_j(r\wedge\tau_h)|^{2\bar k}\Bigr)^{\frac p{2(1-\gamma)\bar k}}$$

Hence, by Eq. (30),

$$E\sup_{0\le s\le t}|X^i_{s\wedge\tau_h}|^p\le(m+2)^{p-1}\Bigl[\|X_0\|^p+T^p2^{p-1}K^p\Bigl(1+m\sigma^p\xi_p^{1/2}+m\sigma^{\frac p{1-\gamma}}\xi_{\bar k}^{\frac p{2(1-\gamma)\bar k}}\Bigr)\Bigr]+(m+2)^{p-1}T^{p-1}2^{p-1}K^p(m+1)\int_0^tE\sup_{0\le r\le s}\|X_{r\wedge\tau_h}\|^pds$$

Considering

$$\sup_{0\le s\le t}\|X_{s\wedge\tau_h}\|^p=\sup_{0\le s\le t}\Bigl(\sum_{i=1}^d|X^i_{s\wedge\tau_h}|^2\Bigr)^{p/2}\le\Bigl(d\max_{1\le i\le d}\sup_{0\le s\le t}|X^i_{s\wedge\tau_h}|^2\Bigr)^{p/2}=d^{p/2}\max_{1\le i\le d}\sup_{0\le s\le t}|X^i_{s\wedge\tau_h}|^p,$$

then

$$E\sup_{0\le s\le t}\|X_{s\wedge\tau_h}\|^p\le d^{p/2}E\max_{1\le i\le d}\sup_{0\le s\le t}|X^i_{s\wedge\tau_h}|^p\le d^{\frac p2+1}\max_{1\le i\le d}E\sup_{0\le s\le t}|X^i_{s\wedge\tau_h}|^p.$$

Here, the bound $E\max_{1\le i\le d}Z_i\le\sum_{i=1}^dEZ_i\le d\max_{1\le i\le d}EZ_i$ for nonnegative random variables $Z_i$ is adopted. Then, for any $0\le t\le T$,

$$E\sup_{0\le s\le t}\|X_{s\wedge\tau_h}\|^p\le a_p+b_p\int_0^tE\sup_{0\le r\le s}\|X_{r\wedge\tau_h}\|^pds \tag{31}$$

with $a_p$ and $b_p$ given in Eqs. (11) and (12). Application of the Gronwall inequality to Eq. (31) then yields

$$E\sup_{0\le s\le T}\|X_{s\wedge\tau_h}\|^p\le a_p\exp(b_pT)<\infty.$$

Letting $h\to\infty$ implies the required assertion (10). □

Proof of Lemma 3

Proof

Note that the inequality (15) can be proven with a technique somewhat parallel to that of Lemma 2. It is well known that under the given conditions Eq. (2) has a unique global solution $\hat X_t$ on $t\ge0$. Define a sequence $v_h=\inf\{t\ge0:\|\hat X_t\|\ge h\}$ for all integers $h\ge\|X_0\|$, with $\inf\varnothing=\infty$ for the empty set $\varnothing$. Clearly, $v_h\to\infty$ almost surely as $h\to\infty$. It can be deduced from Eq. (2) that, for $0<t\le T$,

$$\sup_{0\le s\le t}\|\hat X_{s\wedge v_h}\|^p\le d^{p/2}\bigl(2^{p-1}\|X_0\|^p+T^p2^{2(p-1)}K^p\bigr)+T^{p-1}2^{2(p-1)}d^{p/2}K^p\int_0^t\sup_{0\le r\le s}\|\hat X_{r\wedge v_h}\|^pds$$

Then, the Gronwall inequality implies

$$\sup_{0\le s\le t}\|\hat X_{s\wedge v_h}\|^p\le d^{p/2}\bigl(2^{p-1}\|X_0\|^p+T^p2^{2(p-1)}K^p\bigr)\exp\bigl(2^{2(p-1)}d^{p/2}K^pT^p\bigr)$$

Letting $h\to\infty$ implies the assertion (15) immediately. □

Proof of Lemma 4

Proof

Recalling the tower property of conditional expectation,

$$E\Bigl[E\Bigl(\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\Big|\,S\Bigr)\Bigr]=E\sup_{0\le t\le T}\|X_t-\hat X_t\|^2,$$

we obtain

$$E\sup_{0\le t\le T}\|X_t-\hat X_t\|^2=P(S=s_1)\,E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\Big|\,S=s_1\Bigr]+P(S=s_2)\,E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\Big|\,S=s_2\Bigr],$$

from which it can be deduced that

$$E\Bigl[\sup_{0\le t\le T}\|X_t-\hat X_t\|^2\,\Big|\,S=s_i\Bigr]\le\frac1{P(S=s_i)}E\sup_{0\le t\le T}\|X_t-\hat X_t\|^2,$$

and thus, by Theorem 2, Eq. (25) holds. Application of Markov's inequality then immediately gives Eq. (26). □


References

  1. Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nat Rev Neurosci. 2006;7(5):358–366. doi: 10.1038/nrn1888
  2. Benzi R, Sutera A, Vulpiani A. The mechanism of stochastic resonance. J Phys A. 1981;14(11):L453–L457.
  3. Capurro A, Pakdaman K, Nomura T, Sato S. Aperiodic stochastic resonance with correlated noise. Phys Rev E. 1998;58(4):4820–4827.
  4. Collins JJ, Chow CC, Imhoff TT. Aperiodic stochastic resonance in excitable systems. Phys Rev E. 1995;52(4):R3321–R3324. doi: 10.1103/physreve.52.r3321
  5. Collins JJ, Chow CC, Capela AC, Imhoff TT. Aperiodic stochastic resonance. Phys Rev E. 1996;54(5):5575–5584. doi: 10.1103/physreve.54.5575
  6. Collins JJ, Imhoff TT, Grigg P. Noise-enhanced information transmission in rat SA1 cutaneous mechanoreceptors via aperiodic stochastic resonance. J Neurophysiol. 1996;76(1):642–645. doi: 10.1152/jn.1996.76.1.642
  7. Cover TM, Thomas JA. Elements of information theory. New York: Wiley; 1991.
  8. Déli E, Tozzi A, Peters JF. Relationships between short and fast brain timescales. Cogn Neurodyn. 2017;11(6):539–552. doi: 10.1007/s11571-017-9450-4
  9. Dylov DV, Fleischer JW. Nonlinear self-filtering of noisy images via dynamical stochastic resonance. Nat Photon. 2010;4(5):323–328.
  10. Floris C. Mean square stability of a second-order parametric linear system excited by a colored Gaussian noise. J Sound Vib. 2015;336:82–95.
  11. Freidlin MI, Wentzell AD. Random perturbations of dynamical systems (trans: Szuecs J). Berlin: Springer; 2012.
  12. Fu YX, Kang YM, Chen GR. Stochastic resonance based visual perception using spiking neural networks. Front Comput Neurosci. 2020;14:24. doi: 10.3389/fncom.2020.00024
  13. Gammaitoni L, Hänggi P, Jung P, Marchesoni F. Stochastic resonance. Rev Mod Phys. 1998;70(1):223–287.
  14. Gao FY, Kang YM, Chen X, Chen GR. Fractional Gaussian noise enhanced information capacity of a nonlinear neuron model with binary input. Phys Rev E. 2018;97(5):052142. doi: 10.1103/PhysRevE.97.052142
  15. Gu HG, Pan BB. Identification of neural firing patterns, frequency and temporal coding mechanisms in individual aortic baroreceptors. Front Comput Neurosci. 2015;9:108. doi: 10.3389/fncom.2015.00108
  16. Guan LN, Gu HG, Jia YB. Multiple coherence resonances evoked from bursting and the underlying bifurcation mechanism. Nonlinear Dyn. 2020;100:3645–3666.
  17. Guo DQ. Inhibition of rhythmic spiking by colored noise in neural systems. Cogn Neurodyn. 2011;5(3):293–300. doi: 10.1007/s11571-011-9160-2
  18. Kang YM, Xu JX, Xie Y. Signal-to-noise ratio gain of a noisy neuron that transmits subthreshold periodic spike trains. Phys Rev E. 2005;72(2):021902. doi: 10.1103/PhysRevE.72.021902
  19. Kim SY, Lim W. Effect of spike-timing-dependent plasticity on stochastic burst synchronization in a scale-free neuronal network. Cogn Neurodyn. 2018;12(3):315–342. doi: 10.1007/s11571-017-9470-0
  20. Kosko B, Lee I, Mitaim S, Patel A, Wilde MM. Applications of forbidden interval theorems in stochastic resonance. In: Applications of nonlinear dynamics. New York: Springer; 2009.
  21. Lee KE, Lopes MA, Mendes JFF, Goltsev AV. Critical phenomena and noise-induced phase transitions in neuronal networks. Phys Rev E. 2014;89(1):012701. doi: 10.1103/PhysRevE.89.012701
  22. Levin JE, Miller JP. Broadband neural encoding in the cricket cereal sensory system enhanced by stochastic resonance. Nature. 1996;380(6570):165–168. doi: 10.1038/380165a0
  23. Liu RN, Kang YM. Stochastic resonance in underdamped periodic potential systems with alpha stable Lévy noise. Phys Lett A. 2018;382(25):1656–1664.
  24. Liu J, Li Z. Binary image enhancement based on aperiodic stochastic resonance. IET Image Process. 2015;9(12):1033–1038.
  25. Lü Y, Lu H. Anomalous dynamics of inertial systems driven by colored Lévy noise. J Stat Phys. 2019;176(4):1046–1056.
  26. Mao XR. Stochastic differential equations and applications. 2nd ed. London: Woodhead Publishing Limited; 2007.
  27. Mao XR, Sababis S. Numerical solutions of stochastic differential delay equations under local Lipschitz condition. J Comput Appl Math. 2003;151(1):215–227.
  28. Mizraji E, Lin J. The feeling of understanding: an exploration with neural models. Cogn Neurodyn. 2017;11(2):135–146. doi: 10.1007/s11571-016-9414-0
  29. Nakamura O, Tateno K. Random pulse induced synchronization and resonance in uncoupled non-identical neuron models. Cogn Neurodyn. 2019;13(3):303–312. doi: 10.1007/s11571-018-09518-5
  30. Øksendal B. Stochastic differential equations: an introduction with applications. 6th ed. Berlin: Springer; 2005.
  31. Patel A, Kosko B. Stochastic resonance in noisy spiking retinal and sensory neuron models. Neural Netw. 2005;18(5–6):467–478. doi: 10.1016/j.neunet.2005.06.031
  32. Patel A, Kosko B. Stochastic resonance in continuous and spiking neuron models with Levy noise. IEEE Trans Neural Netw. 2008;19(12):1993–2008. doi: 10.1109/TNN.2008.2005610
  33. Prato GD, Zabczyk J. Stochastic equations in infinite dimensions. Cambridge: Cambridge University Press; 1992.
  34. Sakai Y, Funahashi S, Shinomoto S. Temporally correlated inputs to leaky integrate-and-fire models can reproduce spiking statistics of cortical neurons. Neural Netw. 1999;12(7–8):1181–1190. doi: 10.1016/s0893-6080(99)00053-2
  35. Song JL, Paixao L, Li Q, Li SH, Zhang R, Westover MB. A novel neural computational model of generalized periodic discharges in acute hepatic encephalopathy. J Comput Neurosci. 2019;47(2–3):109–124. doi: 10.1007/s10827-019-00727-3
  36. Tiwari I, Phogat R, Parmananda P, Ocampo-Espindola JL, Rivera M. Intrinsic periodic and aperiodic stochastic resonance in an electrochemical cell. Phys Rev E. 2016;94(2):022210. doi: 10.1103/PhysRevE.94.022210
  37. Wang HY, Wu YJ. First-passage problem of a class of internally resonant quasi-integrable Hamiltonian system under wide-band stochastic excitations. Int J Nonlin Mech. 2016;85:143–151.
  38. Wang RB, Wang GZ, Zheng JC. An exploration of the range of noise intensity that affects the membrane potential of neurons. Abstr Appl Anal. 2014;2014:801642.
  39. Xu Y, Guo YY, Ren GD, Ma J. Dynamics and stochastic resonance in a thermosensitive neuron. Appl Math Comput. 2020;385(15):125427.
  40. Yan CK, Wang RB, Pan XC. A model of hippocampal memory based on an adaptive learning rule of synapses. J Biol Syst. 2013;21(03):1350016.
  41. Yang T. Adaptively optimizing stochastic resonance in visual system. Phys Lett A. 1998;245:79–86.
  42. Zeng FG, Fu QJ, Morse R. Human hearing enhanced by noise. Brain Res. 2000;869:251–255. doi: 10.1016/s0006-8993(00)02475-6
  43. Zhao J, Qin YM, Che YQ, Ran HYQ, Li JW. Effects of network topologies on stochastic resonance in feedforward neural network. Cogn Neurodyn. 2020;14:399–409. doi: 10.1007/s11571-020-09576-8
