Advances in Difference Equations 2018(1):113 (27 March 2018). doi: 10.1186/s13662-018-1553-7

Global exponential stability of Markovian jumping stochastic impulsive uncertain BAM neural networks with leakage, mixed time delays, and α-inverse Hölder activation functions

C Maharajan 1, R Raja 2, Jinde Cao 3,, G Ravi 4, G Rajchakit 5

Abstract

This paper investigates the global exponential stability of Markovian jumping stochastic impulsive uncertain bidirectional associative memory (BAM) neural networks with leakage delays, mixed (discrete time-varying and distributed) delays, and α-inverse Hölder activation functions. By constructing a suitable Lyapunov–Krasovskii functional candidate and employing the linear matrix inequality (LMI) technique together with stochastic analysis, several delay-dependent sufficient conditions are derived which guarantee the global exponential stability in the mean square of the addressed networks, for both the deterministic and the parameter-uncertain cases. Finally, numerical examples with simulations are given to illustrate the applicability and usefulness of the proposed method; a comparison of maximum allowable delay bounds shows that the obtained criteria are less conservative than some existing results.

Keywords: LMIs, Markovian jumping systems, Leakage delay, Bidirectional associative memory, Stochastic impulsive neural networks, Mixed time delays, α-inverse Hölder activation functions, Exponential stability

Introduction and problem statement with preliminaries

There has been growing research interest in recurrent neural networks (RNNs) in recent years. This family includes various architectures such as bidirectional associative memory (BAM) neural networks, Hopfield neural networks, cellular neural networks, Cohen–Grossberg neural networks, and neural and social networks, which have received great attention due to their wide applications in classification, signal and image processing, parallel computing, associative memories, optimization, cryptography, and so on. The bidirectional associative memory (BAM) neural network model was initially proposed by Kosko, see [1, 2]. It is an extraordinary class of RNNs with the ability to store bipolar vector pairs. It is composed of neurons arranged in two layers, the X-layer and the Y-layer, and the neurons in one layer are fully interconnected to the neurons in the other layer. BAM neural networks are designed in such a way that, for a given external input, they exhibit only one globally asymptotically or exponentially stable equilibrium point. Hence, considerable effort has been devoted to the stability analysis of neural networks, and, as a result, a large number of sufficient conditions have been proposed to guarantee the global asymptotic or exponential stability of the addressed networks.

Furthermore, the existence of time delays in a network may result in poor performance, instability, or chaos. Time delays can be classified into two types: discrete and distributed delays. Here we take both types of delay into account when modeling our network system, because signal propagation is not instantaneous and axons vary considerably in size and length. It is therefore worthwhile to inspect the dynamical behavior of neural systems with both kinds of time delays, see, for instance, [3–11].

In [12], Shu et al. considered BAM neural networks with discrete and distributed time delays and obtained some sufficient conditions ensuring global asymptotic stability. Time delays in the leakage term also have a great impact on the dynamic behavior of neural networks. However, so far there have been only a few works on neural networks with time delay in the leakage term, see, for instance, [13–17].

Further, the stability performance of the state variables in the presence of leakage time delays was discussed by Lakshmanan et al. in [18]. When modeling a real nervous system, stochastic noises and parameter uncertainties are inevitable and should be taken into account: synaptic transmission is a noisy process caused by random fluctuations in neurotransmitter release, and the connection weights of the neurons depend on resistance and capacitance values that are subject to uncertainty. Therefore, it is of practical significance to investigate the effect of stochastic disturbances on the stability of time-delayed neural networks with parameter uncertainties, see [19–22] and the references cited therein. Moreover, impulsive effects are likely to exist in a wide variety of evolutionary processes, abruptly changing the states at certain moments of time [23–28].

Abrupt jumps of the system parameters can be described by a finite-state Markov process. Recently, the researchers in [29, 30] investigated BAM neural networks with Markovian jumps and, exploiting the stochastic Lyapunov–Krasovskii functional (LKF) approach, derived new sufficient conditions for global exponential stability in the mean square.

BAM-type neural networks with Markovian jumping parameters and leakage terms were studied by Wang et al. in [31]. In [32], a robust stability problem was considered and some delay-dependent conditions were derived for neutral-type neural networks with time-varying delays. The authors in [33–35] developed conditions for the stability analysis of neural networks via an integral inequality approach. Criteria for the stability of neural networks with time-varying delays were established in [36–38]. It should be noted that all the results reported in the literature above are concerned only with Markovian jumping stochastic neural networks with Lipschitz-type neuron activation functions. Up to now, very little attention has been paid to the global exponential stability of Markovian jumping stochastic BAM neural networks with non-Lipschitz activation functions, which frequently appear in realistic neural networks. This situation motivates our present work on α-inverse Hölder activation functions.

The main objective of this paper is to study the delay-dependent exponential stability problem for a class of Markovian jumping uncertain BAM neural networks with mixed time delays, leakage delays, and α-inverse Hölder activation functions under stochastic noise perturbation.

To the best of the authors' knowledge, no result on the global exponential stability of Markovian jumping stochastic impulsive uncertain BAM neural networks with leakage, mixed time delays, and α-inverse Hölder activation functions is available in the existing literature, which motivates us to study the following BAM neural network model:

$$
\begin{aligned}
dx(t)&=\Big[-Cx(t-\nu_1)+W_0f(y(t))+W_1g\big(y(t-\tau_1(t))\big)+W_2\int_{t-\sigma_1}^{t}h(y(s))\,ds+I\Big]dt\\
&\quad+\rho_1\big(x(t-\nu_1),y(t),y(t-\tau_1(t)),t\big)\,d\omega(t),\quad t>0,\ t\neq t_k,\\
\Delta x(t_k)&=M_k\big(x(t_k^-),x_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+,\\
dy(t)&=\Big[-Dy(t-\nu_2)+V_0\tilde f(x(t))+V_1\tilde g\big(x(t-\tau_2(t))\big)+V_2\int_{t-\sigma_2}^{t}\tilde h(x(s))\,ds+J\Big]dt\\
&\quad+\rho_2\big(y(t-\nu_2),x(t),x(t-\tau_2(t)),t\big)\,d\tilde\omega(t),\quad t>0,\ t\neq t_k,\\
\Delta y(t_k)&=N_k\big(y(t_k^-),y_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+,
\end{aligned}
\tag{1}
$$

where x(t)=(x1(t),x2(t),…,xn(t))ᵀ∈Rⁿ and y(t)=(y1(t),y2(t),…,yn(t))ᵀ∈Rⁿ denote the states at time t; f(·), g(·), h(·) and f̃(·), g̃(·), h̃(·) denote the neuron activation functions; C=diag{ci} and D=diag{dj} are positive diagonal matrices, where ci>0, dj>0 (i,j=1,2,…,n) are the neural self-inhibitions; W0=(W0ji)n×n, V0=(V0ij)n×n are the connection weight matrices; W1=(W1ji)n×n, V1=(V1ij)n×n are the discretely delayed connection weight matrices; and W2=(W2ji)n×n, V2=(V2ij)n×n are the distributively delayed connection weight matrices; I=(I1,I2,…,In)ᵀ and J=(J1,J2,…,Jn)ᵀ are the external inputs. τ1(t) and τ2(t) are the discrete time-varying delays, bounded by 0<τ1(t)<τ̄1 with τ̇1(t)≤τ1<1 and 0<τ2(t)<τ̄2 with τ̇2(t)≤τ2<1, respectively; σ1 and σ2 are constant distributed delays; the leakage delays ν1≥0, ν2≥0 are constants. ρ1:Rⁿ×Rⁿ×Rⁿ×R₊→Rⁿ and ρ2:Rⁿ×Rⁿ×Rⁿ×R₊→Rⁿ denote the noise intensity functions; ω(t)=(ω1(t),…,ωn(t))ᵀ and ω̃(t)=(ω̃1(t),…,ω̃n(t))ᵀ are n-dimensional Brownian motions defined on a complete probability space (A,F,{Ft}t≥0,P) with a filtration {Ft}t≥0 satisfying the usual conditions (i.e., it is right-continuous and F0 contains all P-null sets), with E{dω(t)}=E{dω̃(t)}=0 and E{dω²(t)}=E{dω̃²(t)}=dt. Mk(·):Rⁿ×Rⁿ→Rⁿ and Nk(·):Rⁿ×Rⁿ→Rⁿ, k∈Z₊, are continuous functions. The impulse times tk satisfy 0=t0<t1<⋯<tk<⋯ with limk→∞ tk=+∞ and infk∈Z₊{tk−tk−1}>0.
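The drift–diffusion structure of (1) can be made concrete with a short simulation. The following is a minimal Euler–Maruyama sketch of a two-neuron version of system (1); all parameter values are hypothetical, the impulses are omitted for brevity, and the distributed-delay integral is approximated by a Riemann sum over the stored state history.

```python
import numpy as np

# Euler-Maruyama simulation of a simplified two-neuron version of system (1).
# All parameters below are hypothetical illustration values; impulses omitted.
rng = np.random.default_rng(0)
n, dt, T = 2, 1e-3, 5.0
nu, tau, sigma = 0.10, 0.20, 0.10              # leakage, discrete, distributed delays
C = np.diag([1.0, 3.0]); D = np.diag([2.0, 3.0])
W0 = np.array([[0.02, -0.01], [0.02, 0.01]])   # hypothetical weight matrices
W1, W2 = 0.03 * np.eye(n), 0.02 * np.eye(n)
V0, V1, V2 = 0.02 * np.eye(n), 0.01 * np.eye(n), 0.02 * np.eye(n)
f, g, h = np.tanh, np.tanh, np.sin             # smooth bounded activations
steps = int(T / dt); dmax = int(max(nu, tau, sigma) / dt)
x = np.ones((steps + dmax + 1, n)); y = -np.ones((steps + dmax + 1, n))
knu, ktau, ksig = int(nu / dt), int(tau / dt), int(sigma / dt)
for k in range(dmax, steps + dmax):
    # distributed-delay terms: Riemann sum over the history buffer
    hx = h(y[k - ksig:k]).sum(axis=0) * dt
    hy = h(x[k - ksig:k]).sum(axis=0) * dt
    drift_x = -C @ x[k - knu] + W0 @ f(y[k]) + W1 @ g(y[k - ktau]) + W2 @ hx
    drift_y = -D @ y[k - knu] + V0 @ f(x[k]) + V1 @ g(x[k - ktau]) + V2 @ hy
    noise_x = 0.05 * (x[k - knu] + y[k] + y[k - ktau])   # linear noise intensity
    noise_y = 0.05 * (y[k - knu] + x[k] + x[k - ktau])
    dW  = rng.standard_normal(n) * np.sqrt(dt)
    dWt = rng.standard_normal(n) * np.sqrt(dt)
    x[k + 1] = x[k] + drift_x * dt + noise_x * dW
    y[k + 1] = y[k] + drift_y * dt + noise_y * dWt
print("final states:", x[-1], y[-1])   # both layers should settle near the origin
```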

The main contributions of this research work are highlighted as follows:

Uncertain parameters, Markovian jumping, stochastic noises, and leakage delays are taken into account in the stability analysis of designing BAM neural networks with mixed time delays.

By constructing a suitable LKF, the global exponential stability of the addressed neural networks is established via some less conservative stability conditions.

As a novelty, uncertain parameters are handled directly in the Lyapunov–Krasovskii functional for the first time, which yields sufficient conditions for the global exponential stability of the designed neural networks.

In our proposed BAM neural networks, considering both time delay terms, the allowable upper bounds of the discrete time-varying delay are larger than in some existing literature, see Table 1 of Example 4.1. This shows that the approach developed in this paper is new and less conservative than some available results.

Table 1.

Maximum allowable upper bounds of discrete time delays

Method         τ̄1 = τ̄2 > 0   System status
Ref. [41]      0.5784         feasible
Ref. [42]      2.1            feasible
Ref. [43]      4.822          feasible
Ref. [44]      5              feasible
Ref. [34]      5.912          feasible
Ref. [45]      6.884          feasible
Theorem 2.1    7.46           feasible

Suppose that the initial condition of the stochastic BAM neural network (1) has the form x(t)=ϕ(t) for t∈[−ω̄,0] and y(t)=ψ(t) for t∈[−ω̃̄,0], where ϕ(t) and ψ(t) are continuous functions, ω̄=max{τ̄1,ν1,σ1} and ω̃̄=max{τ̄2,ν2,σ2}. Throughout this section, we assume that the activation functions fi, f̃j, gi, g̃j, hi, h̃j (i,j=1,2,…,n) satisfy the following assumptions.

Assumption 1

  1. fi, f̃j are monotonically increasing continuous functions.

  2. For any ρ1,ρ2,θ1,θ2∈R, there exist scalars q_{iρ1}>0, r_{iρ1}>0 and q̃_{jρ2}>0, r̃_{jρ2}>0 (depending on ρ1 and ρ2, respectively) and α>0, β>0 such that
$$
|f_i(\theta_1)-f_i(\rho_1)|\ge q_{i\rho_1}|\theta_1-\rho_1|^{\alpha}\quad\text{for } |\theta_1-\rho_1|\le r_{i\rho_1},
\qquad
|\tilde f_j(\theta_2)-\tilde f_j(\rho_2)|\ge \tilde q_{j\rho_2}|\theta_2-\rho_2|^{\beta}\quad\text{for } |\theta_2-\rho_2|\le \tilde r_{j\rho_2}.
$$

Assumption 2

gi, hi and g̃j, h̃j are continuous and satisfy

$$
\begin{aligned}
|g_i(s_1)-g_i(s_2)|&\le e_i|f_i(s_1)-f_i(s_2)|,\qquad
|h_i(s_1)-h_i(s_2)|\le k_i|f_i(s_1)-f_i(s_2)|,\\
|\tilde g_j(\tilde s_1)-\tilde g_j(\tilde s_2)|&\le \tilde e_j|\tilde f_j(\tilde s_1)-\tilde f_j(\tilde s_2)|,\qquad
|\tilde h_j(\tilde s_1)-\tilde h_j(\tilde s_2)|\le \tilde k_j|\tilde f_j(\tilde s_1)-\tilde f_j(\tilde s_2)|,
\end{aligned}
$$

for all s1,s2,s̃1,s̃2∈R with s1≠s2 and s̃1≠s̃2, i,j=1,2,3,…,n. Denote E=diag{ei}, K=diag{ki} and Ẽ=diag{ẽj}, K̃=diag{k̃j}, respectively.

Remark 1.1

In [39], a function fi satisfying Assumption 1 is called an α-inverse Hölder activation function; it is a non-Lipschitz function. This class of activation functions plays an important role in the stability analysis of neural networks, and many such functions arise in engineering mathematics: for example, f(θ)=arctan θ and f(θ)=θ³+θ are 1-inverse Hölder functions, and f(θ)=θ³ is a 3-inverse Hölder function.
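The lower bound of Assumption 1 can be checked numerically for such functions. A minimal sketch for f(θ)=arctan θ around a sample point ρ0; the constant q below is one admissible, hypothetical choice obtained from the minimum of f′(θ)=1/(1+θ²) over the interval, as the mean value theorem suggests.

```python
import numpy as np

# Illustrative check of the 1-inverse Hoelder lower bound of Assumption 1 for
# f(theta) = arctan(theta) around rho0 = 0.5 with radius r = 1 (both hypothetical):
# |f(theta) - f(rho0)| >= q |theta - rho0| whenever |theta - rho0| <= r.
rho0, r = 0.5, 1.0
q = 1.0 / (1.0 + (abs(rho0) + r) ** 2)          # lower bound on f' over the interval
theta = np.linspace(rho0 - r, rho0 + r, 10001)
lhs = np.abs(np.arctan(theta) - np.arctan(rho0))
rhs = q * np.abs(theta - rho0)
print("bound holds on the grid:", bool(np.all(lhs >= rhs - 1e-12)))  # True
```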

Remark 1.2

From Assumption 2 we see that ei, ẽj and ki, k̃j are positive scalars, so E, Ẽ and K, K̃ are all positive definite diagonal matrices. The relations among the different activation functions fi, f̃j (which are α-inverse Hölder activation functions), gi, g̃j and hi, h̃j are implicitly exploited in Theorem 3.2. Such relations have not been utilized in the reported literature.

In order to guarantee the global exponential stability of system (1), we assume that the stochastic noise contribution vanishes at the equilibrium point, i.e.:

Assumption 3

ρ1(x*, y*, y*, t) ≡ 0 and ρ2(y*, x*, x*, t) ≡ 0 for all t ≥ 0.

For such deterministic BAM neural networks, we have the following system of equations:

$$
\begin{aligned}
dx(t)&=\Big[-Cx(t-\nu_1)+W_0f(y(t))+W_1g\big(y(t-\tau_1(t))\big)+W_2\int_{t-\sigma_1}^{t}h(y(s))\,ds+I\Big]dt,\quad t>0,\ t\neq t_k,\\
\Delta x(t_k)&=M_k\big(x(t_k^-),x_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+,\\
dy(t)&=\Big[-Dy(t-\nu_2)+V_0\tilde f(x(t))+V_1\tilde g\big(x(t-\tau_2(t))\big)+V_2\int_{t-\sigma_2}^{t}\tilde h(x(s))\,ds+J\Big]dt,\quad t>0,\ t\neq t_k,\\
\Delta y(t_k)&=N_k\big(y(t_k^-),y_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+.
\end{aligned}
\tag{2}
$$

Thus, under Assumption 3, system (1) admits an equilibrium point (x*,y*)=(x1*,x2*,…,xn*,y1*,y2*,…,yn*)ᵀ. Let u(t)=x(t)−x* and v(t)=y(t)−y*; then system (1) can be rewritten in the following form:

$$
\begin{aligned}
du(t)&=\Big[-Cu(t-\nu_1)+W_0\bar f(v(t))+W_1\bar g\big(v(t-\tau_1(t))\big)+W_2\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\Big]dt\\
&\quad+\bar\rho_1\big(u(t-\nu_1),v(t),v(t-\tau_1(t)),t\big)\,d\bar\omega(t),\quad t>0,\ t\neq t_k,\\
\Delta u(t_k)&=\bar M_k\big(u(t_k^-),u_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+,\\
dv(t)&=\Big[-Dv(t-\nu_2)+V_0\bar{\tilde f}(u(t))+V_1\bar{\tilde g}\big(u(t-\tau_2(t))\big)+V_2\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds\Big]dt\\
&\quad+\bar\rho_2\big(v(t-\nu_2),u(t),u(t-\tau_2(t)),t\big)\,d\bar{\tilde\omega}(t),\quad t>0,\ t\neq t_k,\\
\Delta v(t_k)&=\bar N_k\big(v(t_k^-),v_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+,
\end{aligned}
\tag{3}
$$

where

$$
\begin{aligned}
&u(t)=(u_1(t),u_2(t),\dots,u_n(t))^T,\qquad v(t)=(v_1(t),v_2(t),\dots,v_n(t))^T,\\
&u(t-\nu_1)=(u_1(t-\nu_1),\dots,u_n(t-\nu_1))^T,\qquad v(t-\nu_2)=(v_1(t-\nu_2),\dots,v_n(t-\nu_2))^T,\\
&\bar f(v(t))=(\bar f_1(v(t)),\dots,\bar f_n(v(t)))^T,\qquad \bar{\tilde f}(u(t))=(\bar{\tilde f}_1(u(t)),\dots,\bar{\tilde f}_n(u(t)))^T,\\
&\bar g(v(t-\tau_1(t)))=(\bar g_1(v(t-\tau_1(t))),\dots,\bar g_n(v(t-\tau_1(t))))^T,\qquad
\bar{\tilde g}(u(t-\tau_2(t)))=(\bar{\tilde g}_1(u(t-\tau_2(t))),\dots,\bar{\tilde g}_n(u(t-\tau_2(t))))^T,\\
&\bar h(v(t))=(\bar h_1(v(t)),\dots,\bar h_n(v(t)))^T,\qquad \bar{\tilde h}(u(t))=(\bar{\tilde h}_1(u(t)),\dots,\bar{\tilde h}_n(u(t)))^T,\\
&\bar f_i(v(t))=f_i(v(t)+y^\ast)-f_i(y^\ast),\qquad \bar{\tilde f}_j(u(t))=\tilde f_j(u(t)+x^\ast)-\tilde f_j(x^\ast),\\
&\bar g_i(v(t-\tau_1(t)))=g_i(v(t-\tau_1(t))+y^\ast)-g_i(y^\ast),\qquad
\bar{\tilde g}_j(u(t-\tau_2(t)))=\tilde g_j(u(t-\tau_2(t))+x^\ast)-\tilde g_j(x^\ast),\\
&\bar h_i(v(t))=h_i(v(t)+y^\ast)-h_i(y^\ast),\qquad \bar{\tilde h}_j(u(t))=\tilde h_j(u(t)+x^\ast)-\tilde h_j(x^\ast),\\
&\bar\rho_1\big(u(t-\nu_1),v(t),v(t-\tau_1(t)),t\big)=\rho_1\big(u(t-\nu_1)+x^\ast,v(t)+y^\ast,v(t-\tau_1(t))+y^\ast,t\big)-\rho_1(x^\ast,y^\ast,y^\ast,t),\\
&\bar\rho_2\big(v(t-\nu_2),u(t),u(t-\tau_2(t)),t\big)=\rho_2\big(v(t-\nu_2)+y^\ast,u(t)+x^\ast,u(t-\tau_2(t))+x^\ast,t\big)-\rho_2(y^\ast,x^\ast,x^\ast,t),\\
&\bar\rho_1(\cdot)=\big(\bar\rho_{11}(\cdot),\dots,\bar\rho_{1n}(\cdot)\big)^T,\qquad
\bar\rho_2(\cdot)=\big(\bar\rho_{21}(\cdot),\dots,\bar\rho_{2n}(\cdot)\big)^T.
\end{aligned}
$$

Apparently, f̄i(s) and f̃̄j(s) are also inverse Hölder functions (of orders α and β, respectively), and f̄i(0)=ḡi(0)=h̄i(0)=f̃̄j(0)=g̃̄j(0)=h̃̄j(0)=0, i,j=1,2,…,n.

Let {r(t), t≥0} be a right-continuous Markov chain on a complete probability space (Ω,F,{Ft}t≥0,P) taking values in a finite state space M={1,2,…,N} with generator Γ=(γij)N×N given by

$$
P\{r(t+\Delta t)=j \mid r(t)=i\}=
\begin{cases}
\gamma_{ij}\Delta t+o(\Delta t), & i\neq j,\\
1+\gamma_{ii}\Delta t+o(\Delta t), & i=j,
\end{cases}
$$

where Δt>0 and lim_{Δt→0} o(Δt)/Δt = 0. Here γij≥0 is the transition probability rate from i to j if i≠j, while γii = −Σ_{j≠i} γij.
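For intuition, such a chain can be sampled directly from its generator: in state i it waits an exponential holding time with rate −γii and then jumps to state j≠i with probability γij/(−γii). A minimal sketch with a hypothetical two-state generator:

```python
import numpy as np

# Sampling a right-continuous Markov chain r(t) from its generator Gamma.
# The generator below is hypothetical; rows sum to zero.
rng = np.random.default_rng(1)
Gamma = np.array([[-0.2,  0.2],
                  [ 0.4, -0.4]])

def sample_path(Gamma, t_end, i0=0):
    t, i, path = 0.0, i0, [(0.0, i0)]
    while True:
        rate = -Gamma[i, i]
        t += rng.exponential(1.0 / rate)            # holding time in state i
        if t >= t_end:
            return path
        probs = np.clip(Gamma[i], 0.0, None) / rate # jump law over states j != i
        i = int(rng.choice(len(Gamma), p=probs))
        path.append((t, i))

print(sample_path(Gamma, 5.0))   # list of (jump time, new state) pairs
```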

In this paper, we consider the following BAM neural networks with stochastic noise disturbance, leakage, mixed time delays, and Markovian jump parameters, which is actually a modification of system (3):

$$
\begin{aligned}
du(t)&=\Big[-C(r(t))u(t-\nu_1)+W_0(r(t))\bar f(v(t))+W_1(r(t))\bar g\big(v(t-\tau_1(t))\big)
+W_2(r(t))\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\Big]dt\\
&\quad+\bar\rho_1\big(u(t-\nu_1),v(t),v(t-\tau_1(t)),t,r(t)\big)\,d\bar\omega(t),\quad t>0,\ t\neq t_k,\\
\Delta u(t_k)&=\bar M_k(r(t))\big(u(t_k^-),u_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+,\\
dv(t)&=\Big[-D(\tilde r(t))v(t-\nu_2)+V_0(\tilde r(t))\bar{\tilde f}(u(t))+V_1(\tilde r(t))\bar{\tilde g}\big(u(t-\tau_2(t))\big)
+V_2(\tilde r(t))\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds\Big]dt\\
&\quad+\bar\rho_2\big(v(t-\nu_2),u(t),u(t-\tau_2(t)),t,\tilde r(t)\big)\,d\bar{\tilde\omega}(t),\quad t>0,\ t\neq t_k,\\
\Delta v(t_k)&=\bar N_k(\tilde r(t))\big(v(t_k^-),v_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+,
\end{aligned}
\tag{4}
$$

where u(t−ν1), τ1(t), τ2(t), v(t), u(t), v(t−ν2), f̄(v(t)), f̃̄(u(t)), ḡ(v(t−τ1(t))), g̃̄(u(t−τ2(t))), h̄(v(t)), h̃̄(u(t)) have the same meanings as in (3); ρ̄1(u(t−ν1),v(t),v(t−τ1(t)),t,r(t)) and ρ̄2(v(t−ν2),u(t),u(t−τ2(t)),t,r̃(t)) are the noise intensity function vectors; and, for a fixed system mode, C(r(t)), D(r̃(t)), W0(r(t)), V0(r̃(t)), W1(r(t)), V1(r̃(t)), W2(r(t)), V2(r̃(t)), M̄k(r(t)), and N̄k(r̃(t)) are known constant matrices with appropriate dimensions.

For convenience, each possible value of r(t) and r̃(t) is denoted by i and j, respectively, with i,j∈M in the sequel. Then we have Ci=C(r(t)), Dj=D(r̃(t)), W0i=W0(r(t)), V0j=V0(r̃(t)), W1i=W1(r(t)), V1j=V1(r̃(t)), W2i=W2(r(t)), V2j=V2(r̃(t)), M̄ki=M̄k(r(t)), and N̄kj=N̄k(r̃(t)), where Ci, Dj, W0i, V0j, W1i, V1j, W2i, V2j, M̄ki, N̄kj are known constant matrices for any i,j∈M.

Assume that ρ1¯:Rn×Rn×Rn×R+×MRn and ρ2¯:Rn×Rn×Rn×R+×MRn are locally Lipschitz continuous and satisfy the following assumption.

Assumption 4

$$
\begin{aligned}
\operatorname{trace}\big[\bar\rho_1^{T}(u_1,v_1,v_2,t,i)\,\bar\rho_1(u_1,v_1,v_2,t,i)\big]&\le u_1^TR_{1i}u_1+v_1^TR_{2i}v_1+v_2^TR_{3i}v_2,\\
\operatorname{trace}\big[\bar\rho_2^{T}(v_1,u_1,u_2,t,j)\,\bar\rho_2(v_1,u_1,u_2,t,j)\big]&\le v_1^T\tilde R_{1j}v_1+u_1^T\tilde R_{2j}u_1+u_2^T\tilde R_{3j}u_2,
\end{aligned}
$$

for all u1,u2,v1,v2∈Rⁿ and r(t)=i, r̃(t)=j, i,j∈M, where R1i, R̃1j, R2i, R̃2j, R3i, and R̃3j are known positive definite matrices with appropriate dimensions.

Consider a general stochastic system dx(t)=f(x(t),t,r(t))dt+g(x(t),t,r(t))dω(t), t≥0, with initial value x(0)=x0∈Rⁿ, where f:Rⁿ×R₊×M→Rⁿ and r(t) is the Markov chain defined above. Let C^{2,1}(Rⁿ×R₊×M;R₊) denote the family of all nonnegative functions V on Rⁿ×R₊×M which are twice continuously differentiable in x and once differentiable in t. For any V∈C^{2,1}(Rⁿ×R₊×M;R₊), define LV:Rⁿ×R₊×M→R by

$$
\begin{aligned}
\mathcal{L}V(x(t),t,i)&=V_t(x(t),t,i)+V_x(x(t),t,i)f(x(t),t,i)\\
&\quad+\tfrac12\operatorname{trace}\big(g^{T}(x(t),t,i)\,V_{xx}(x(t),t,i)\,g(x(t),t,i)\big)+\sum_{j=1}^{N}\gamma_{ij}V(x(t),t,j),
\end{aligned}
$$

where

$$
V_t(x(t),t,i)=\frac{\partial V(x(t),t,i)}{\partial t},\qquad
V_x(x(t),t,i)=\Big(\frac{\partial V(x(t),t,i)}{\partial x_1},\dots,\frac{\partial V(x(t),t,i)}{\partial x_n}\Big),\qquad
V_{xx}(x(t),t,i)=\Big(\frac{\partial^2 V(x(t),t,i)}{\partial x_j\,\partial x_k}\Big)_{n\times n}.
$$

By the generalized Itô formula, one can see that

$$
\mathbb{E}V\big(x(t),y(t),t,r(t)\big)=\mathbb{E}V\big(x(0),y(0),0,r(0)\big)+\mathbb{E}\int_0^t\mathcal{L}V\big(x(s),y(s),s,r(s)\big)\,ds.
$$

Let u(t;ξ) and v(t;ξ̃) denote the state trajectories with initial data u(θ)=ξ(θ) for −ω̄≤θ≤0, ξ∈L²_{F0}([−ω̄,0];Rⁿ), and v(θ)=ξ̃(θ) for −ω̃̄≤θ≤0, ξ̃∈L²_{F0}([−ω̃̄,0];Rⁿ). Clearly, system (4) admits the trivial solutions u(t;0)≡0 and v(t;0)≡0 corresponding to the initial data ξ=0 and ξ̃=0, respectively. For simplicity, we write u(t;ξ)=u(t) and v(t;ξ̃)=v(t).

Definition 1.3

The equilibrium point of neural networks (4) is said to be globally exponentially stable in the mean square if, for any ξ∈L²_{F0}([−ω̄,0];Rⁿ) and ξ̃∈L²_{F0}([−ω̃̄,0];Rⁿ), there exist positive constants η, T, Π_ξ, and Θ_ξ̃ (the latter two depending on ξ and ξ̃) such that, for all t>T,

$$
\mathbb{E}\{\|u(t;\xi)\|^2\}+\mathbb{E}\{\|v(t;\tilde\xi)\|^2\}\le(\Pi_{\xi}+\Theta_{\tilde\xi})e^{-\eta t}.
$$

Definition 1.4

For the stochastic Lyapunov–Krasovskii functional V∈C^{2,1}(R₊×Rⁿ×Rⁿ×M;R₊) of system (4), the weak infinitesimal generator LV of the random process, mapping R₊×Rⁿ×Rⁿ×M to R, is defined by

$$
\begin{aligned}
\mathcal{L}V(t,u(t),v(t),i)=\lim_{\Delta t\to0^+}\frac{1}{\Delta t}\Big[
&\mathbb{E}\big\{V\big(t+\Delta t,u(t+\Delta t),v(t+\Delta t),r(t+\Delta t)\big)\mid u(t),v(t),r(t)=i\big\}\\
&-V\big(t,u(t),v(t),r(t)=i\big)\Big].
\end{aligned}
$$

Lemma 1.5

([39])

If fi is an α-inverse Hölder function, then for any ρ0∈R one has

$$
\int_{\rho_0}^{+\infty}\big[f_i(\theta)-f_i(\rho_0)\big]\,d\theta=-\int_{-\infty}^{\rho_0}\big[f_i(\theta)-f_i(\rho_0)\big]\,d\theta=+\infty.
$$

Lemma 1.6

([39])

If fi is an α-inverse Hölder function and fi(0)=0, then there exist constants q_{i0}>0 and r_{i0}>0 such that |fi(θ)|≥q_{i0}|θ|^α for |θ|≤r_{i0}. Moreover, |fi(θ)|≥q_{i0}r_{i0}^α for |θ|≥r_{i0}.

Lemma 1.7

([21])

For any real matrix M>0, scalars a and b with 0≤a<b, and a vector function x(α) such that the integrals below are well defined, we have

$$
\int_a^b\int_{t-\beta}^{t}x(\alpha)^{T}Mx(\alpha)\,d\alpha\,d\beta\le(b-a)\int_{t-b}^{t}x(\alpha)^{T}Mx(\alpha)\,d\alpha.
$$

Lemma 1.8

([39])

Let x,y∈Rⁿ and let G be a positive definite matrix; then

$$
2x^{T}y\le x^{T}Gx+y^{T}G^{-1}y.
$$

Lemma 1.9

([21])

Given constant symmetric matrices ϒ1, ϒ2, and ϒ3 with appropriate dimensions, where ϒ1ᵀ=ϒ1 and ϒ2ᵀ=ϒ2>0, one has ϒ1+ϒ3ᵀϒ2⁻¹ϒ3<0 if and only if

$$
\begin{bmatrix}\Upsilon_1&\Upsilon_3^{T}\\ \Upsilon_3&-\Upsilon_2\end{bmatrix}<0
\qquad\text{or}\qquad
\begin{bmatrix}-\Upsilon_2&\Upsilon_3\\ \Upsilon_3^{T}&\Upsilon_1\end{bmatrix}<0.
$$

Lemma 1.10

([21])

For any constant matrix Ω∈R^{n×n} with Ω=Ωᵀ>0, scalar γ>0, and vector function ω:[0,γ]→Rⁿ such that the integrals concerned are well defined,

$$
\frac{1}{\gamma}\Big(\int_0^{\gamma}\omega(s)\,ds\Big)^{T}\Omega\Big(\int_0^{\gamma}\omega(s)\,ds\Big)\le\int_0^{\gamma}\omega^{T}(s)\Omega\,\omega(s)\,ds.
$$

Lemma 1.11

([33])

For given matrices D, E, and F with FᵀF≤I and a scalar ϵ>0, the following inequality holds:

$$
DFE+E^{T}F^{T}D^{T}\le\epsilon DD^{T}+\epsilon^{-1}E^{T}E.
$$
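Lemmas 1.8–1.11 are simple matrix inequalities and can be sanity-checked numerically. The sketch below verifies them on random data (illustrative only; the tolerances absorb floating-point and discretization error):

```python
import numpy as np

# Numerical sanity checks of Lemmas 1.8-1.11 on random data (illustrative).
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)); G = A @ A.T + n * np.eye(n)   # G > 0
x, y = rng.standard_normal(n), rng.standard_normal(n)

# Lemma 1.8: 2 x'y <= x'Gx + y'G^{-1}y
assert 2 * x @ y <= x @ G @ x + y @ np.linalg.solve(G, y) + 1e-9

# Lemma 1.9 (Schur complement): U1 + U3' U2^{-1} U3 < 0  iff  [[U1, U3'], [U3, -U2]] < 0
U2 = G
U3 = rng.standard_normal((n, n))
U1 = -(U3.T @ np.linalg.solve(U2, U3)) - np.eye(n)   # makes the complement equal -I < 0
Mblk = np.block([[U1, U3.T], [U3, -U2]])
assert np.all(np.linalg.eigvalsh(Mblk) < 0)

# Lemma 1.10 (Jensen): (1/g)(int w)' Om (int w) <= int w'Om w, via a Riemann sum
g, m = 2.0, 2000
s = np.linspace(0.0, g, m); ds = g / (m - 1)
w = np.stack([np.sin(s), np.cos(2 * s)], axis=1)     # w: [0,g] -> R^2
Om = np.array([[2.0, 0.3], [0.3, 1.0]])              # Om > 0
iw = w.sum(axis=0) * ds
assert iw @ Om @ iw / g <= np.einsum('ti,ij,tj->', w, Om, w) * ds + 1e-6

# Lemma 1.11: DFE + E'F'D' <= eps DD' + (1/eps) E'E,  for F'F <= I
Dm = rng.standard_normal((n, n)); Em = rng.standard_normal((n, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, n))); F = 0.9 * Q   # F'F = 0.81 I <= I
eps = 0.7
lhs = Dm @ F @ Em + Em.T @ F.T @ Dm.T
rhs = eps * Dm @ Dm.T + Em.T @ Em / eps
assert np.all(np.linalg.eigvalsh(rhs - lhs) >= -1e-9)
print("all lemma checks passed")
```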

Remark 1.12

Lakshmanan et al. [18] analyzed time-delayed BAM neural networks and established stability criteria in the presence of leakage delay. In [12], the authors discussed the asymptotic stability behavior of BAM neural networks with mixed time delays and uncertain parameters; moreover, comparisons of the maximum allowable upper bounds of discrete time-varying delays were listed there. Lou and Cui [29] derived exponential stability conditions for time-delayed BAM neural networks with Markovian jump parameters. Further, stochastic effects on neural networks and exponential stability criteria were studied by Huang and Li [40] with the aid of Lyapunov–Krasovskii functionals. In all the references mentioned above, the stability problem for BAM neural networks was considered with only leakage delays, or mixed time delays, or stochastic effects, or Markovian jump parameters, or parameter uncertainties; all these factors have not been taken into account simultaneously, and no one has investigated exponential stability under all these delays at once. Considering all the above factors together is the challenging and novel aspect of this research work.

Global exponential stability for deterministic systems

Theorem 2.1

Under Assumptions 1 and 2, the neural network system (4) is globally exponentially stable in the mean square if, for given ηi,η̃j>0 (i,j∈M), there exist positive definite matrices S, S̃, T, T̃, R2, R̃2, N1, N2, N3, N4, N5, N6 and Hi, H̃j (i,j∈M), positive definite diagonal matrices P and Q, and positive scalars λi and μj (i,j∈M) such that the following LMIs are satisfied:

$$H_i<\lambda_iI,\tag{5}$$
$$\tilde H_j<\mu_jI,\tag{6}$$
$$\bar M_k^{T}H_i\bar M_k-H_j\le0,\tag{7}$$
$$\bar N_k^{T}\tilde H_i\bar N_k-\tilde H_j\le0\quad[\text{here }r(t_k)=i\text{ and }\tilde r(t_k)=j],\tag{8}$$

$$
\Xi_i=\begin{bmatrix}
\Xi_{11}&\Xi_{12}&\Xi_{13}&0&0&0&0&0&\Xi_{19}&0&0&0&0&0\\
\ast&\Xi_{22}&0&0&\Xi_{25}&\Xi_{26}&\Xi_{27}&0&\Xi_{29}&0&0&0&0&0\\
\ast&\ast&\Xi_{33}&0&\Xi_{35}&\Xi_{36}&\Xi_{37}&0&0&0&0&0&0&0\\
\ast&\ast&\ast&\Xi_{44}&0&0&0&0&0&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\Xi_{55}&0&0&0&\Xi_{59}&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\Xi_{66}&0&0&\Xi_{69}&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\Xi_{77}&0&\Xi_{79}&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Xi_{88}&0&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Xi_{99}&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Xi_{10\,10}&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Xi_{11\,11}&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Xi_{12\,12}&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Xi_{13\,13}&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Xi_{14\,14}
\end{bmatrix}<0,\tag{9}
$$

$$
\Omega_j=\begin{bmatrix}
\Omega_{11}&\Omega_{12}&\Omega_{13}&0&0&0&0&0&\Omega_{19}&0&0&0&0&0\\
\ast&\Omega_{22}&0&0&\Omega_{25}&\Omega_{26}&\Omega_{27}&0&\Omega_{29}&0&0&0&0&0\\
\ast&\ast&\Omega_{33}&0&\Omega_{35}&\Omega_{36}&\Omega_{37}&0&0&0&0&0&0&0\\
\ast&\ast&\ast&\Omega_{44}&0&0&0&0&0&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\Omega_{55}&0&0&0&\Omega_{59}&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\Omega_{66}&0&0&\Omega_{69}&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\Omega_{77}&0&\Omega_{79}&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Omega_{88}&0&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Omega_{99}&0&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Omega_{10\,10}&0&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Omega_{11\,11}&0&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Omega_{12\,12}&0&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Omega_{13\,13}&0\\
\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\ast&\Omega_{14\,14}
\end{bmatrix}<0,\tag{10}
$$

where

$$
\begin{aligned}
&\Xi_{11}=\lambda_iR_{1i},\qquad \Omega_{11}=\mu_j\tilde R_{1j},\qquad \Xi_{12}=0,\qquad \Omega_{12}=0,\qquad \Xi_{19}=0,\qquad \Omega_{19}=0,\\
&\Xi_{22}=-2C_iH_i+\sum_{l=1}^{N}\gamma_{il}H_l+\frac{\lambda_i}{1-\tau_1}e^{\eta_i\bar\tau_1}R_2+\eta_iH_i,\qquad
\Omega_{22}=-2D_j\tilde H_j+\sum_{l=1}^{N}\tilde\gamma_{jl}\tilde H_l+\frac{\mu_j}{1-\tau_2}e^{\tilde\eta_j\bar\tau_2}\tilde R_2+\tilde\eta_j\tilde H_j,\\
&\Xi_{13}=\eta_iP-PC_i,\qquad \Omega_{13}=\tilde\eta_jQ-QD_j,\qquad
\Xi_{33}=\frac{1}{1-\tau_1}E^2S+\sigma_1K^2T,\qquad \Omega_{33}=\frac{1}{1-\tau_2}\tilde E^2\tilde S+\sigma_2\tilde K^2\tilde T,\\
&\Xi_{44}=-e^{-\eta_i\bar\tau_1}S,\qquad \Omega_{44}=-e^{-\tilde\eta_j\bar\tau_2}\tilde S,\qquad \Xi_{55}=N_1,\qquad \Omega_{55}=N_2,\\
&\Xi_{66}=-(1-\tau_1)e^{-\eta_i\bar\tau_1}N_3,\qquad \Omega_{66}=-(1-\tau_2)e^{-\tilde\eta_j\bar\tau_2}N_4,\qquad
\Xi_{77}=-\frac{1}{\sigma_1}N_5,\qquad \Omega_{77}=-\frac{1}{\sigma_2}N_6,\\
&\Xi_{88}=-\frac{1}{\sigma_1}T,\qquad \Omega_{88}=-\frac{1}{\sigma_2}\tilde T,\qquad
\Xi_{99}=\sum_{l=1}^{N}\gamma_{il}C_i^{T}H_lC_i+\eta_iC_i^{T}H_iC_i,\qquad
\Omega_{99}=\sum_{l=1}^{N}\tilde\gamma_{jl}D_j^{T}\tilde H_lD_j+\tilde\eta_jD_j^{T}\tilde H_jD_j,\\
&\Xi_{25}=H_iW_{0i},\qquad \Xi_{26}=H_iW_{1i},\qquad \Xi_{27}=H_iW_{2i},\qquad
\Omega_{25}=\tilde H_jV_{0j},\qquad \Omega_{26}=\tilde H_jV_{1j},\qquad \Omega_{27}=\tilde H_jV_{2j},\\
&\Xi_{29}=C_i^{T}H_iC_i-\sum_{l=1}^{N}\gamma_{il}C_iH_l-\eta_iC_i^{T}H_i,\qquad
\Omega_{29}=D_j^{T}\tilde H_jD_j-\sum_{l=1}^{N}\tilde\gamma_{jl}D_j\tilde H_l-\tilde\eta_jD_j^{T}\tilde H_j,\\
&\Xi_{35}=PW_{0i},\qquad \Xi_{36}=PW_{1i},\qquad \Xi_{37}=PW_{2i},\qquad
\Omega_{35}=QV_{0j},\qquad \Omega_{36}=QV_{1j},\qquad \Omega_{37}=QV_{2j},\\
&\Xi_{59}=-C_i^{T}H_iW_{0i},\qquad \Xi_{69}=-C_i^{T}H_iW_{1i},\qquad \Xi_{79}=-C_i^{T}H_iW_{2i},\\
&\Omega_{59}=-D_j^{T}\tilde H_jV_{0j},\qquad \Omega_{69}=-D_j^{T}\tilde H_jV_{1j},\qquad \Omega_{79}=-D_j^{T}\tilde H_jV_{2j},\\
&\Xi_{10\,10}=\lambda_iR_{3i},\qquad \Omega_{10\,10}=\mu_j\tilde R_{3j},\qquad
\Xi_{11\,11}=\lambda_iR_{2i},\qquad \Omega_{11\,11}=\mu_j\tilde R_{2j},\\
&\Xi_{12\,12}=-\lambda_iR_2,\qquad \Omega_{12\,12}=-\mu_j\tilde R_2,\qquad
\Xi_{13\,13}=N_3,\qquad \Omega_{13\,13}=N_4,\qquad
\Xi_{14\,14}=\sigma_1N_5,\qquad \Omega_{14\,14}=\sigma_2N_6.
\end{aligned}
$$
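Conditions (5)–(10) are linear matrix inequalities in the unknowns and can be checked by any SDP solver. The following is a minimal sketch of posing LMIs of this type with cvxpy; the 2×2 data and the simplified stability block are a toy surrogate, not the paper's full 14-block matrix Ξi, and only illustrate the mechanics of conditions like (5) and (7)–(8).

```python
import numpy as np
import cvxpy as cp

# Toy surrogate for LMIs of the type (5)-(10): H_i > 0, H_i < lambda_i I, a
# simplified stability block, and the impulse coupling M' H_i M - H_j <= 0.
n, modes = 2, 2
C = [np.diag([1.0, 3.0]), np.diag([2.0, 1.0])]   # mode-dependent C_i (hypothetical)
Mbar = 0.05 * np.eye(n)                          # impulse gain (hypothetical)
H = [cp.Variable((n, n), symmetric=True) for _ in range(modes)]
lam = cp.Variable(modes)
eps = 1e-6
cons = []
for i in range(modes):
    cons += [H[i] >> eps * np.eye(n)]            # H_i > 0
    cons += [H[i] << lam[i] * np.eye(n)]         # H_i < lambda_i I, cf. (5)
    Si = -C[i] @ H[i] - H[i] @ C[i]              # toy stability block
    Si = (Si + Si.T) / 2                         # enforce symmetry for cvxpy
    cons += [Si + 0.1 * np.eye(n) << -eps * np.eye(n)]
    for j in range(modes):                       # impulse conditions, cf. (7)-(8)
        Pij = Mbar.T @ H[i] @ Mbar - H[j]
        cons += [(Pij + Pij.T) / 2 << 0]
prob = cp.Problem(cp.Minimize(cp.sum(lam)), cons)
prob.solve(solver=cp.SCS)
print(prob.status, [np.round(Hi.value, 4) for Hi in H])
```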

Proof

Let us construct the following Lyapunov–Krasovskii functional candidate:

$$
V(t,u(t),v(t),i,j)=\sum_{l=1}^{8}V_l(t,u(t),v(t),i,j),
$$

where

$$
\begin{aligned}
V_1&=e^{\eta_it}\Big[u(t)-C_i\int_{t-\nu_1}^{t}u(s)\,ds\Big]^{T}H_i\Big[u(t)-C_i\int_{t-\nu_1}^{t}u(s)\,ds\Big]
+e^{\tilde\eta_jt}\Big[v(t)-D_j\int_{t-\nu_2}^{t}v(s)\,ds\Big]^{T}\tilde H_j\Big[v(t)-D_j\int_{t-\nu_2}^{t}v(s)\,ds\Big],\\
V_2&=\frac{\lambda_i}{1-\tau_1}\int_{t-\tau_1(t)}^{t}e^{\eta_i(s+\tau_1(s))}u^{T}(s)R_2u(s)\,ds
+\frac{\mu_j}{1-\tau_2}\int_{t-\tau_2(t)}^{t}e^{\tilde\eta_j(s+\tau_2(s))}v^{T}(s)\tilde R_2v(s)\,ds,\\
V_3&=\frac{1}{1-\tau_1}\int_{t-\tau_1(t)}^{t}e^{\eta_is}\bar g^{T}(u(s))S\bar g(u(s))\,ds
+\frac{1}{1-\tau_2}\int_{t-\tau_2(t)}^{t}e^{\tilde\eta_js}\bar{\tilde g}^{T}(v(s))\tilde S\bar{\tilde g}(v(s))\,ds,\\
V_4&=\sigma_1\int_{-\sigma_1}^{0}\int_{t+s}^{t}e^{\eta_i\theta}\bar h^{T}(u(\theta))T\bar h(u(\theta))\,d\theta\,ds
+\sigma_2\int_{-\sigma_2}^{0}\int_{t+s}^{t}e^{\tilde\eta_j\theta}\bar{\tilde h}^{T}(v(\theta))\tilde T\bar{\tilde h}(v(\theta))\,d\theta\,ds,\\
V_5&=2e^{\eta_it}\sum_{l=1}^{n}p_l\int_{0}^{u_l(t)}\bar f_l(\theta)\,d\theta
+2e^{\tilde\eta_jt}\sum_{l=1}^{n}q_l\int_{0}^{v_l(t)}\bar{\tilde f}_l(\theta)\,d\theta,\\
V_6&=\int_{0}^{t}e^{\eta_is}\bar f^{T}(v(s))N_1\bar f(v(s))\,ds
+\int_{0}^{t}e^{\tilde\eta_js}\bar{\tilde f}^{T}(u(s))N_2\bar{\tilde f}(u(s))\,ds,\\
V_7&=\int_{t-\tau_1(t)}^{t}e^{\eta_is}\bar g^{T}(v(s))N_3\bar g(v(s))\,ds
+\int_{t-\tau_2(t)}^{t}e^{\tilde\eta_js}\bar{\tilde g}^{T}(u(s))N_4\bar{\tilde g}(u(s))\,ds,\\
V_8&=\sigma_1\int_{-\sigma_1}^{0}\int_{t+s}^{t}e^{\eta_i\theta}\bar h^{T}(v(\theta))N_5\bar h(v(\theta))\,d\theta\,ds
+\sigma_2\int_{-\sigma_2}^{0}\int_{t+s}^{t}e^{\tilde\eta_j\theta}\bar{\tilde h}^{T}(u(\theta))N_6\bar{\tilde h}(u(\theta))\,d\theta\,ds,
\end{aligned}
$$

where the arguments (t,u(t),v(t),i,j) of each V_l are suppressed.

By Assumption 4 together with (5) and (6), we obtain

$$
\begin{aligned}
\operatorname{trace}\big[\bar\rho_1^{T}(u_1,v_1,v_2,t,i)H_i\bar\rho_1(u_1,v_1,v_2,t,i)\big]&\le\lambda_i\big[u_1^{T}R_{1i}u_1+v_1^{T}R_{2i}v_1+v_2^{T}R_{3i}v_2\big],\\
\operatorname{trace}\big[\bar\rho_2^{T}(v_1,u_1,u_2,t,j)\tilde H_j\bar\rho_2(v_1,u_1,u_2,t,j)\big]&\le\mu_j\big[v_1^{T}\tilde R_{1j}v_1+u_1^{T}\tilde R_{2j}u_1+u_2^{T}\tilde R_{3j}u_2\big].
\end{aligned}
$$

It is easy to verify that system (4) is equivalent to the following form:

$$
\begin{aligned}
d\Big[u(t)-C_i\int_{t-\nu_1}^{t}u(s)\,ds\Big]&=\Big[-C_iu(t)+W_{0i}\bar f(v(t))+W_{1i}\bar g\big(v(t-\tau_1(t))\big)+W_{2i}\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\Big]dt\\
&\quad+\bar\rho_1\big(u(t-\nu_1),v(t),v(t-\tau_1(t)),t,i\big)\,d\bar\omega(t),\\
d\Big[v(t)-D_j\int_{t-\nu_2}^{t}v(s)\,ds\Big]&=\Big[-D_jv(t)+V_{0j}\bar{\tilde f}(u(t))+V_{1j}\bar{\tilde g}\big(u(t-\tau_2(t))\big)+V_{2j}\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds\Big]dt\\
&\quad+\bar\rho_2\big(v(t-\nu_2),u(t),u(t-\tau_2(t)),t,j\big)\,d\bar{\tilde\omega}(t).
\end{aligned}
$$

By utilizing Lemmas 1.6 and 1.10, from (4) and Definition 1.4, one has

$$
\begin{aligned}
\mathcal{L}V_1\le{}&e^{\eta_it}\Big\{u^{T}(t)\big(-2C_iH_i+\eta_iH_i\big)u(t)+2u^{T}(t)H_iW_{0i}\bar f(v(t))+2u^{T}(t)H_iW_{1i}\bar g\big(v(t-\tau_1(t))\big)\\
&+2u^{T}(t)H_iW_{2i}\Big(\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\Big)
+2\Big(\int_{t-\nu_1}^{t}u(s)\,ds\Big)^{T}C_i^{T}H_iC_iu(t-\nu_1)\\
&-2\Big(\int_{t-\nu_1}^{t}u(s)\,ds\Big)^{T}C_i^{T}H_iW_{0i}\bar f(v(t))
-2\Big(\int_{t-\nu_1}^{t}u(s)\,ds\Big)^{T}C_i^{T}H_iW_{1i}\bar g\big(v(t-\tau_1(t))\big)\\
&-2\Big(\int_{t-\nu_1}^{t}u(s)\,ds\Big)^{T}C_i^{T}H_iW_{2i}\Big(\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\Big)
+u^{T}(t)\sum_{l=1}^{N}\gamma_{il}H_lu(t)\\
&-2u^{T}(t)\Big(\sum_{l=1}^{N}\gamma_{il}C_iH_l+\eta_iC_i^{T}H_i\Big)\Big(\int_{t-\nu_1}^{t}u(s)\,ds\Big)
+\Big(\int_{t-\nu_1}^{t}u(s)\,ds\Big)^{T}\Big(\sum_{l=1}^{N}\gamma_{il}C_i^{T}H_lC_i+\eta_iC_i^{T}H_iC_i\Big)\Big(\int_{t-\nu_1}^{t}u(s)\,ds\Big)\\
&+u^{T}(t-\nu_1)\lambda_iR_{1i}u(t-\nu_1)+v^{T}(t)\lambda_iR_{2i}v(t)+v^{T}(t-\tau_1(t))\lambda_iR_{3i}v(t-\tau_1(t))\Big\}\\
&+e^{\tilde\eta_jt}\Big\{v^{T}(t)\big(-2D_j\tilde H_j+\tilde\eta_j\tilde H_j\big)v(t)+2v^{T}(t)\tilde H_jV_{0j}\bar{\tilde f}(u(t))+2v^{T}(t)\tilde H_jV_{1j}\bar{\tilde g}\big(u(t-\tau_2(t))\big)\\
&+2v^{T}(t)\tilde H_jV_{2j}\Big(\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds\Big)
+2\Big(\int_{t-\nu_2}^{t}v(s)\,ds\Big)^{T}D_j^{T}\tilde H_jD_jv(t-\nu_2)\\
&-2\Big(\int_{t-\nu_2}^{t}v(s)\,ds\Big)^{T}D_j^{T}\tilde H_jV_{0j}\bar{\tilde f}(u(t))
-2\Big(\int_{t-\nu_2}^{t}v(s)\,ds\Big)^{T}D_j^{T}\tilde H_jV_{1j}\bar{\tilde g}\big(u(t-\tau_2(t))\big)\\
&-2\Big(\int_{t-\nu_2}^{t}v(s)\,ds\Big)^{T}D_j^{T}\tilde H_jV_{2j}\Big(\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds\Big)
+v^{T}(t)\sum_{l=1}^{N}\tilde\gamma_{jl}\tilde H_lv(t)\\
&-2v^{T}(t)\Big(\sum_{l=1}^{N}\tilde\gamma_{jl}D_j\tilde H_l+\tilde\eta_jD_j^{T}\tilde H_j\Big)\Big(\int_{t-\nu_2}^{t}v(s)\,ds\Big)
+\Big(\int_{t-\nu_2}^{t}v(s)\,ds\Big)^{T}\Big(\sum_{l=1}^{N}\tilde\gamma_{jl}D_j^{T}\tilde H_lD_j+\tilde\eta_jD_j^{T}\tilde H_jD_j\Big)\Big(\int_{t-\nu_2}^{t}v(s)\,ds\Big)\\
&+v^{T}(t-\nu_2)\mu_j\tilde R_{1j}v(t-\nu_2)+u^{T}(t)\mu_j\tilde R_{2j}u(t)+u^{T}(t-\tau_2(t))\mu_j\tilde R_{3j}u(t-\tau_2(t))\Big\},
\end{aligned}\tag{11}
$$
$$
\begin{aligned}
\mathcal{L}V_2\le{}&\frac{\lambda_i}{1-\tau_1}e^{\eta_i(t+\bar\tau_1)}u^{T}(t)R_2u(t)-\lambda_ie^{\eta_it}u^{T}(t-\tau_1(t))R_2u(t-\tau_1(t))\\
&+\frac{\mu_j}{1-\tau_2}e^{\tilde\eta_j(t+\bar\tau_2)}v^{T}(t)\tilde R_2v(t)-\mu_je^{\tilde\eta_jt}v^{T}(t-\tau_2(t))\tilde R_2v(t-\tau_2(t)),
\end{aligned}\tag{12}
$$
$$
\begin{aligned}
\mathcal{L}V_3\le{}&\frac{1}{1-\tau_1}e^{\eta_it}\bar f^{T}(u(t))E^{2}S\bar f(u(t))-e^{\eta_i(t-\bar\tau_1)}\bar g^{T}\big(u(t-\tau_1(t))\big)S\bar g\big(u(t-\tau_1(t))\big)\\
&+\frac{1}{1-\tau_2}e^{\tilde\eta_jt}\bar{\tilde f}^{T}(v(t))\tilde E^{2}\tilde S\bar{\tilde f}(v(t))-e^{\tilde\eta_j(t-\bar\tau_2)}\bar{\tilde g}^{T}\big(v(t-\tau_2(t))\big)\tilde S\bar{\tilde g}\big(v(t-\tau_2(t))\big),
\end{aligned}\tag{13}
$$
$$
\begin{aligned}
\mathcal{L}V_4\le{}&\sigma_1e^{\eta_it}\bar f^{T}(u(t))K^{2}T\bar f(u(t))-\frac{1}{\sigma_1}e^{\eta_it}\Big(\int_{t-\sigma_1}^{t}\bar h(u(s))\,ds\Big)^{T}T\Big(\int_{t-\sigma_1}^{t}\bar h(u(s))\,ds\Big)\\
&+\sigma_2e^{\tilde\eta_jt}\bar{\tilde f}^{T}(v(t))\tilde K^{2}\tilde T\bar{\tilde f}(v(t))-\frac{1}{\sigma_2}e^{\tilde\eta_jt}\Big(\int_{t-\sigma_2}^{t}\bar{\tilde h}(v(s))\,ds\Big)^{T}\tilde T\Big(\int_{t-\sigma_2}^{t}\bar{\tilde h}(v(s))\,ds\Big),
\end{aligned}\tag{14}
$$
$$
\begin{aligned}
\mathcal{L}V_5\le{}&2e^{\eta_it}\bar f^{T}(u(t))(\eta_iP-PC_i)u(t-\nu_1)+2e^{\eta_it}\bar f^{T}(u(t))PW_{0i}\bar f(v(t))\\
&+2e^{\eta_it}\bar f^{T}(u(t))PW_{1i}\bar g\big(v(t-\tau_1(t))\big)+2e^{\eta_it}\bar f^{T}(u(t))PW_{2i}\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\\
&+2e^{\tilde\eta_jt}\bar{\tilde f}^{T}(v(t))(\tilde\eta_jQ-QD_j)v(t-\nu_2)+2e^{\tilde\eta_jt}\bar{\tilde f}^{T}(v(t))QV_{0j}\bar{\tilde f}(u(t))\\
&+2e^{\tilde\eta_jt}\bar{\tilde f}^{T}(v(t))QV_{1j}\bar{\tilde g}\big(u(t-\tau_2(t))\big)+2e^{\tilde\eta_jt}\bar{\tilde f}^{T}(v(t))QV_{2j}\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds,
\end{aligned}\tag{15}
$$
$$
\mathcal{L}V_6=e^{\eta_it}\bar f^{T}(v(t))N_1\bar f(v(t))+e^{\tilde\eta_jt}\bar{\tilde f}^{T}(u(t))N_2\bar{\tilde f}(u(t)),\tag{16}
$$
$$
\begin{aligned}
\mathcal{L}V_7\le{}&e^{\eta_it}\bar g^{T}(v(t))N_3\bar g(v(t))-(1-\tau_1)e^{\eta_i(t-\bar\tau_1)}\bar g^{T}\big(v(t-\tau_1(t))\big)N_3\bar g\big(v(t-\tau_1(t))\big)\\
&+e^{\tilde\eta_jt}\bar{\tilde g}^{T}(u(t))N_4\bar{\tilde g}(u(t))-(1-\tau_2)e^{\tilde\eta_j(t-\bar\tau_2)}\bar{\tilde g}^{T}\big(u(t-\tau_2(t))\big)N_4\bar{\tilde g}\big(u(t-\tau_2(t))\big),
\end{aligned}\tag{17}
$$
$$
\begin{aligned}
\mathcal{L}V_8\le{}&\sigma_1e^{\eta_it}\bar h^{T}(v(t))N_5\bar h(v(t))-\frac{1}{\sigma_1}e^{\eta_it}\Big(\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\Big)^{T}N_5\Big(\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\Big)\\
&+\sigma_2e^{\tilde\eta_jt}\bar{\tilde h}^{T}(u(t))N_6\bar{\tilde h}(u(t))-\frac{1}{\sigma_2}e^{\tilde\eta_jt}\Big(\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds\Big)^{T}N_6\Big(\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds\Big).
\end{aligned}\tag{18}
$$

By combining Eqs. (11)–(18), we can obtain that

$$
\mathcal{L}V(t,u(t),v(t),i,j)\le e^{\eta_it}\Psi^{T}(t)\Xi_i\Psi(t)+e^{\tilde\eta_jt}\Phi^{T}(t)\Omega_j\Phi(t),\tag{19}
$$

where

$$
\begin{aligned}
\Psi(t)=\Big[&u^{T}(t-\nu_1),\,u^{T}(t),\,\bar f^{T}(u(t)),\,\bar g^{T}\big(u(t-\tau_1(t))\big),\,\bar f^{T}(v(t)),\,\bar g^{T}\big(v(t-\tau_1(t))\big),\\
&\Big(\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\Big)^{T},\,\Big(\int_{t-\sigma_1}^{t}\bar h(u(s))\,ds\Big)^{T},\,\Big(\int_{t-\nu_1}^{t}u(s)\,ds\Big)^{T},\\
&v^{T}(t-\tau_1(t)),\,v^{T}(t),\,u^{T}(t-\tau_1(t)),\,\bar g^{T}(v(t)),\,\bar h^{T}(v(t))\Big]^{T}
\end{aligned}
$$

and

$$
\begin{aligned}
\Phi(t)=\Big[&v^{T}(t-\nu_2),\,v^{T}(t),\,\bar{\tilde f}^{T}(v(t)),\,\bar{\tilde g}^{T}\big(v(t-\tau_2(t))\big),\,\bar{\tilde f}^{T}(u(t)),\,\bar{\tilde g}^{T}\big(u(t-\tau_2(t))\big),\\
&\Big(\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds\Big)^{T},\,\Big(\int_{t-\sigma_2}^{t}\bar{\tilde h}(v(s))\,ds\Big)^{T},\,\Big(\int_{t-\nu_2}^{t}v(s)\,ds\Big)^{T},\\
&u^{T}(t-\tau_2(t)),\,u^{T}(t),\,v^{T}(t-\tau_2(t)),\,\bar{\tilde g}^{T}(u(t)),\,\bar{\tilde h}^{T}(u(t))\Big]^{T}.
\end{aligned}
$$

Let α₀=min_{i∈M}λ_min(−Ξi) and β₀=min_{j∈M}λ_min(−Ωj). From conditions (9) and (10), it is easy to see that α₀>0 and β₀>0. This fact together with (19) gives

$$
\mathcal{L}V(t,u(t),v(t),i,j)\le-\alpha_0e^{\eta_it}\big(\|u(t)\|^2+\|v(t)\|^2\big)-\beta_0e^{\tilde\eta_jt}\big(\|u(t)\|^2+\|v(t)\|^2\big).\tag{20}
$$

Then, for t=tk, by conditions (7) and (8) and some simple calculations, one gets

$$
V_1\big(t_k,u(t_k),v(t_k),i,j\big)-V_1\big(t_k^-,u(t_k^-),v(t_k^-),i,j\big)\le0.
$$

Therefore V1(tk,u(tk),v(tk),i,j)≤V1(tk⁻,u(tk⁻),v(tk⁻),i,j), k∈Z₊, which implies that V(tk,u(tk),v(tk),i,j)≤V(tk⁻,u(tk⁻),v(tk⁻),i,j), k∈Z₊. Using mathematical induction, we have, for all i,j∈M and k≥1,

$$
\begin{aligned}
\mathbb{E}V\big(t_k,u(t_k),v(t_k),i,j\big)&\le\mathbb{E}V\big(t_k^-,u(t_k^-),v(t_k^-),i,j\big)
\le\mathbb{E}V\big(t_{k-1},u(t_{k-1}),v(t_{k-1}),r(t_{k-1}),\tilde r(t_{k-1})\big)\\
&\le\mathbb{E}V\big(t_{k-1}^-,u(t_{k-1}^-),v(t_{k-1}^-),r(t_{k-1}^-),\tilde r(t_{k-1}^-)\big)
\le\cdots\le\mathbb{E}V\big(t_0,u(t_0),v(t_0),r(t_0),\tilde r(t_0)\big).
\end{aligned}
$$

Hence, for any t>0, it follows from Dynkin's formula and the inequalities above that

$$
\mathbb{E}V\big(t,u(t),v(t),i,j\big)\le\mathbb{E}V\big(0,u(0),v(0),r(0),\tilde r(0)\big).
$$

Hence it follows from the definition of V(t,u(t),v(t),i,j), the generalized Itô formula, and (20) that

$$
\begin{aligned}
\lambda_{\min}(H_i)\mathbb{E}\big(\|u(t)\|^2\big)+\mu_{\min}(\tilde H_j)\mathbb{E}\big(\|v(t)\|^2\big)
\le{}&e^{-\eta_it}\mathbb{E}V\big(0,u(0),v(0),r(0),\tilde r(0)\big)
-e^{-\eta_it}\mathbb{E}\Big[\int_0^t\alpha_0e^{\eta_is}\big(\|u(s)\|^2+\|v(s)\|^2\big)\,ds\Big]\\
&+e^{-\tilde\eta_jt}\mathbb{E}V\big(0,u(0),v(0),r(0),\tilde r(0)\big)
-e^{-\tilde\eta_jt}\mathbb{E}\Big[\int_0^t\beta_0e^{\tilde\eta_js}\big(\|u(s)\|^2+\|v(s)\|^2\big)\,ds\Big]\\
\le{}&e^{-\eta_it}\mathbb{E}V\big(0,u(0),v(0),r(0),\tilde r(0)\big)+e^{-\tilde\eta_jt}\mathbb{E}V\big(0,u(0),v(0),r(0),\tilde r(0)\big).
\end{aligned}\tag{21}
$$

By (21), we get lim_{t→+∞}E(‖u(t)‖²+‖v(t)‖²)=0, and consequently

$$
\lim_{t\to+\infty}\mathbb{E}\big(|u_l(t)|^2+|v_m(t)|^2\big)=0,\qquad l,m=1,2,\dots,n.\tag{22}
$$

For f̄l(θ) and f̃̄m(θ), by Lemma 1.6 there exist constants q_{l0}>0, q̃_{m0}>0 and r_{l0}>0, r̃_{m0}>0 such that |f̄l(θ)|≥q_{l0}|θ|^α for |θ|≤r_{l0}, l=1,2,…,n, and |f̃̄m(θ)|≥q̃_{m0}|θ|^β for |θ|≤r̃_{m0}, m=1,2,…,n.

By (22), there exists a scalar T>0 such that, for t≥T, E{u_l(t)}∈[−r̄0,r̄0], l=1,2,…,n, where r̄0=min_{1≤l≤n}r_{l0}, and E{v_m(t)}∈[−r̃̄0,r̃̄0], m=1,2,…,n, where r̃̄0=min_{1≤m≤n}r̃_{m0}. Hence, for t≥T, one gets

$$
e^{-\eta_it}\mathbb{E}V\big(0,u(0),v(0),r(0)\big)+e^{-\tilde\eta_jt}\mathbb{E}V\big(0,u(0),v(0),\tilde r(0)\big)
\ge\frac{2pq_0}{\alpha+1}\Big\{\max_{1\le l\le n}\mathbb{E}\{u_l(t)^2\}\Big\}^{\frac{\alpha+1}{2}}
+\frac{2\tilde p\tilde q_0}{\alpha+1}\Big\{\max_{1\le m\le n}\mathbb{E}\{v_m(t)^2\}\Big\}^{\frac{\alpha+1}{2}},\tag{23}
$$

where p=min_{1≤l≤n}p_l, p̃=min_{1≤m≤n}p̃_m and q0=min_{1≤l≤n}q_{l0}, q̃0=min_{1≤m≤n}q̃_{m0}. By (23), we get

$$
\Big\{\max_{1\le l\le n}\mathbb{E}\{u_l(t)^2\}\Big\}^{\frac{\alpha+1}{2}}+\Big\{\max_{1\le m\le n}\mathbb{E}\{v_m(t)^2\}\Big\}^{\frac{\alpha+1}{2}}
\le\Big(\frac{\alpha+1}{2pq_0}\Big)^{\!\ast}\,\mathbb{E}V\big(0,u(0),v(0),r(0),\tilde r(0)\big)\,e^{-\eta t},
$$

where we put $\big(\frac{\alpha+1}{2pq_0}\big)^{\ast}=\max\big\{\frac{\alpha+1}{2pq_0},\frac{\alpha+1}{2\tilde p\tilde q_0}\big\}$ and η=min{min_{i∈M}ηi, min_{j∈M}η̃j},

where

$$
\begin{aligned}
\mathbb{E}V\big(0,u(0),v(0),r(0),\tilde r(0)\big)\le{}&\lambda_{\max}(H_i)\mathbb{E}\|\xi\|^2+\mu_{\max}(\tilde H_j)\mathbb{E}\|\tilde\xi\|^2
+\lambda_{\max}(R_2)\frac{\lambda_i}{1-\tau_1}\cdot\frac{e^{\eta_i\bar\tau_1}-1}{\eta_i}(1+\tau_1)\mathbb{E}\|\xi\|^2\\
&+\mu_{\max}(\tilde R_2)\frac{\mu_j}{1-\tau_2}\cdot\frac{e^{\tilde\eta_j\bar\tau_2}-1}{\tilde\eta_j}(1+\tau_2)\mathbb{E}\|\tilde\xi\|^2
+\lambda_{\max}(S)\frac{1}{1-\tau_1}\cdot\frac{1-e^{-\eta_i\bar\tau_1}}{\eta_i}\mathbb{E}\|\bar g(\xi)\|^2\\
&+\mu_{\max}(\tilde S)\frac{1}{1-\tau_2}\cdot\frac{1-e^{-\tilde\eta_j\bar\tau_2}}{\tilde\eta_j}\mathbb{E}\|\bar{\tilde g}(\tilde\xi)\|^2
+\lambda_{\max}(T)\sigma_1\frac{1-e^{-\eta_i\sigma_1}}{\eta_i}\mathbb{E}\|\bar h(\xi)\|^2\\
&+\mu_{\max}(\tilde T)\sigma_2\frac{1-e^{-\tilde\eta_j\sigma_2}}{\tilde\eta_j}\mathbb{E}\|\bar{\tilde h}(\tilde\xi)\|^2
+2\|P\|\mathbb{E}\big|\xi^{T}\bar f(\xi)\big|+2\|Q\|\mathbb{E}\big|\tilde\xi^{T}\bar{\tilde f}(\tilde\xi)\big|.
\end{aligned}\tag{24}
$$

Let

$$
\Pi_{\xi i+\tilde\xi j}=\big(\pi_{\xi i}+\pi_{\tilde\xi j}\big)^{\frac{2}{\alpha+1}},\tag{25}
$$

where

$$
\begin{aligned}
\pi_{\xi i}&=n\Big\{\frac{\alpha+1}{2pq_0}\Big(\lambda_{\max}(H_i)\mathbb{E}\|\xi\|^2
+\lambda_{\max}(R_2)\frac{\lambda_i}{1-\tau_1}\cdot\frac{e^{\eta_i\bar\tau_1}-1}{\eta_i}(1+\tau_1)\mathbb{E}\|\xi\|^2\\
&\qquad+\lambda_{\max}(S)\frac{1}{1-\tau_1}\cdot\frac{1-e^{-\eta_i\bar\tau_1}}{\eta_i}\mathbb{E}\|\bar g(\xi)\|^2
+\lambda_{\max}(T)\sigma_1\frac{1-e^{-\eta_i\sigma_1}}{\eta_i}\mathbb{E}\|\bar h(\xi)\|^2
+2\|P\|\mathbb{E}\big|\xi^{T}\bar f(\xi)\big|\Big)\Big\},\\
\pi_{\tilde\xi j}&=n\Big\{\frac{\alpha+1}{2\tilde p\tilde q_0}\Big(\mu_{\max}(\tilde H_j)\mathbb{E}\|\tilde\xi\|^2
+\mu_{\max}(\tilde R_2)\frac{\mu_j}{1-\tau_2}\cdot\frac{e^{\tilde\eta_j\bar\tau_2}-1}{\tilde\eta_j}(1+\tau_2)\mathbb{E}\|\tilde\xi\|^2\\
&\qquad+\mu_{\max}(\tilde S)\frac{1}{1-\tau_2}\cdot\frac{1-e^{-\tilde\eta_j\bar\tau_2}}{\tilde\eta_j}\mathbb{E}\|\bar{\tilde g}(\tilde\xi)\|^2
+\mu_{\max}(\tilde T)\sigma_2\frac{1-e^{-\tilde\eta_j\sigma_2}}{\tilde\eta_j}\mathbb{E}\|\bar{\tilde h}(\tilde\xi)\|^2
+2\|Q\|\mathbb{E}\big|\tilde\xi^{T}\bar{\tilde f}(\tilde\xi)\big|\Big)\Big\}.
\end{aligned}
$$

Let χ=max_{i,j∈M}{Π_{ξi+ξ̃j}}. It follows from (23), (24), and (25) that

$$
\mathbb{E}\{\|u(t)\|^2\}+\mathbb{E}\{\|v(t)\|^2\}\le\chi e^{-\frac{2\eta}{\alpha+1}t}\quad\text{for all }t>T.\tag{26}
$$

By Definition 1.3 and (26), we see that the equilibrium point of neural networks (4) is globally exponentially stable in the mean square sense. □

Remark 2.2

To the best of our knowledge, global exponential stability criteria for impulsive stochastic BAM neural networks with Markovian jump parameters, mixed time delays, leakage delays, and α-inverse Hölder activation functions have not been discussed in the existing literature. Hence this paper reports a new idea and some sufficient conditions for the global exponential stability of such neural networks, which generalize and improve the results in [9, 11, 21, 37, 38].

Remark 2.3

The criteria given in Theorem 2.1 are dependent on the time delay. It is well known that the delay-dependent criteria are less conservative than the delay-independent criteria, particularly when the delay is small. Based on Theorem 2.1, the following result can be obtained easily.

Remark 2.4

If there are no stochastic disturbances in system (4), then the neural networks are simplified to

$$
\begin{aligned}
du(t)&=\Big[-C(r(t))u(t-\nu_1)+W_0(r(t))\bar f(v(t))+W_1(r(t))\bar g\big(v(t-\tau_1(t))\big)
+W_2(r(t))\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\Big]dt,\quad t>0,\ t\neq t_k,\\
\Delta u(t_k)&=\bar M_k\big(u(t_k^-),u_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+,\\
dv(t)&=\Big[-D(\tilde r(t))v(t-\nu_2)+V_0(\tilde r(t))\bar{\tilde f}(u(t))+V_1(\tilde r(t))\bar{\tilde g}\big(u(t-\tau_2(t))\big)
+V_2(\tilde r(t))\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds\Big]dt,\quad t>0,\ t\neq t_k,\\
\Delta v(t_k)&=\bar N_k\big(v(t_k^-),v_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+.
\end{aligned}
\tag{27}
$$

Global exponential stability of the uncertain system

Now consider the following BAM neural networks with stochastic noise disturbance, Markovian jump parameters, leakage and mixed time delays in the presence of parameter uncertainties, i.e., the uncertain counterpart of system (4):

$$
\begin{aligned}
du(t)&=\Big[-(C+\Delta C(t))(r(t))u(t-\nu_1)+(W_0+\Delta W_0(t))(r(t))\bar f(v(t))\\
&\qquad+(W_1+\Delta W_1(t))(r(t))\bar g\big(v(t-\tau_1(t))\big)+(W_2+\Delta W_2(t))(r(t))\int_{t-\sigma_1}^{t}\bar h(v(s))\,ds\Big]dt\\
&\quad+\bar\rho_1\big(u(t-\nu_1),v(t),v(t-\tau_1(t)),t,r(t)\big)\,d\bar\omega(t),\quad t>0,\ t\neq t_k,\\
\Delta u(t_k)&=(\bar M_k+\Delta\bar M_k(t))(r(t))\big(u(t_k^-),u_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+,\\
dv(t)&=\Big[-(D+\Delta D(t))(\tilde r(t))v(t-\nu_2)+(V_0+\Delta V_0(t))(\tilde r(t))\bar{\tilde f}(u(t))\\
&\qquad+(V_1+\Delta V_1(t))(\tilde r(t))\bar{\tilde g}\big(u(t-\tau_2(t))\big)+(V_2+\Delta V_2(t))(\tilde r(t))\int_{t-\sigma_2}^{t}\bar{\tilde h}(u(s))\,ds\Big]dt\\
&\quad+\bar\rho_2\big(v(t-\nu_2),u(t),u(t-\tau_2(t)),t,\tilde r(t)\big)\,d\bar{\tilde\omega}(t),\quad t>0,\ t\neq t_k,\\
\Delta v(t_k)&=(\bar N_k+\Delta\bar N_k(t))(\tilde r(t))\big(v(t_k^-),v_{t_k^-}\big),\quad t=t_k,\ k\in\mathbb{Z}_+.
\end{aligned}
\tag{28}
$$

Assumption 5

The perturbed uncertain matrices ΔC(t), ΔD(t), ΔW0i(t), ΔW1i(t), ΔW2i(t), ΔV0j(t), ΔV1j(t), and ΔV2j(t) are time-varying functions satisfying ΔW0i(t)=MF_l(t)N_{W0i}, ΔW1i(t)=MF_l(t)N_{W1i}, ΔW2i(t)=MF_l(t)N_{W2i}, ΔV0j(t)=MF_l(t)N_{V0j}, ΔV1j(t)=MF_l(t)N_{V1j}, ΔV2j(t)=MF_l(t)N_{V2j}, ΔC(t)=MF_l(t)N_{Ci}, and ΔD(t)=MF_l(t)N_{Dj}, where M, N_{W0i}, N_{W1i}, N_{W2i}, N_{V0j}, N_{V1j}, N_{V2j}, N_{Ci}, and N_{Dj} are given constant matrices. The F_{lz}(t) (l=0,1,2,3; z=i or j) are unknown real time-varying matrices with the structure F_{lz}(t)=blockdiag{δ_{l1}(t)I_{z_{l1}},…,δ_{lk}(t)I_{z_{lk}},F_{l1}(t),…,F_{ls}(t)}, where δ_{lz}∈R, |δ_{lz}|≤1 for 1≤z≤k̃ and F_{lp}ᵀF_{lp}≤I for 1≤p≤s. We define the set Δ_l={F_{lz}(t): F_{lz}ᵀ(t)F_{lz}(t)≤I, F_{lz}N_{lz}=N_{lz}F_{lz}, N_{lz}∈Γ_{lz}}, where Γ_{lz}={N_{lz}=blockdiag[N_{l1},…,N_{lk},n_{l1}I_{f_{l1}},…,n_{ls}I_{f_{ls}}]}, with N_{lz} invertible for 1≤z≤k̃ and n_{lp}∈R, n_{lp}≠0 for 1≤p≤s, p,s∈M.

Also ΔHi, ΔH̃j, ΔR1i, ΔR2i, ΔR3i, ΔR̃1j, ΔR̃2j, ΔR̃3j, ΔR2, ΔR̃2, ΔS, ΔT, ΔS̃, ΔT̃, ΔN1, ΔN2, ΔN3, ΔN4, ΔN5, and ΔN6 are positive definite diagonal matrices defined as ΔHi=ĚΣF_{Hi}, ΔH̃j=ĚΣF_{H̃j}, ΔR1i=ĚΣF_{R1i}, and so on, where Ě, F_{Hi}, F_{H̃j}, F_{R1i}, F_{R2i}, F_{R3i}, F_{R̃1j}, F_{R̃2j}, F_{R̃3j}, F_{R2}, F_{R̃2}, F_S, F_{S̃}, F_T, F_{T̃}, F_{N1}, F_{N2}, F_{N3}, F_{N4}, F_{N5}, and F_{N6} are positive diagonal matrices (e.g., F_{Hi}F_{Hi}ᵀ=diag(h1,h2,…,hn) and F_{H̃j}F_{H̃j}ᵀ=diag(h̃1,h̃2,…,h̃n) with hi,h̃j>0, i,j=1,2,…,n), and the remaining terms are defined in a similar way. This characterizes how the deterministic uncertain parameter in Σ enters the nominal matrices Hi, H̃j, R_{bi} (b=1,2,3), R̃_{cj} (c=1,2,3), S, S̃, T, T̃, N1, N2, N3, N4, N5, and N6. The matrix Σ, with real (possibly time-varying) entries, is unknown and satisfies ΣᵀΣ≤I.
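The admissible perturbations above are the familiar norm-bounded uncertainties ΔW(t)=MF(t)N_W with Fᵀ(t)F(t)≤I. A small numerical illustration, reusing the M and N_{W01} data of Example 4.2 below and a hypothetical F sampled at one time instant:

```python
import numpy as np

# Constructing one admissible norm-bounded perturbation dW = M F N_W with
# F'F <= I, as in Assumption 5.  M and N_W01 are the Example 4.2 data; the
# particular F below (a scaled rotation) is a hypothetical sample.
rng = np.random.default_rng(3)
M  = np.array([[0.5, 0.6], [0.2, 0.5]])
NW = np.array([[0.05, 0.06], [0.02, 0.02]])
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))
F = np.sin(1.7) * Q                        # ||F|| = |sin(1.7)| <= 1, so F'F <= I
assert np.all(np.linalg.eigvalsh(np.eye(2) - F.T @ F) >= -1e-12)
dW = M @ F @ NW                            # admissible perturbation of W_01
print("Delta W =\n", dW)
```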

Remark 3.1

Overall, the stability analysis of time-delayed neural networks rests on the Lyapunov–Krasovskii functional and LMI machinery. In particular, depending on the network under study, different types of LKF are chosen to establish stability. Up to now, no one has considered uncertain parameters within the Lyapunov–Krasovskii functional terms. Without loss of generality, this gap is filled for the first time in this work, and this kind of approach yields more general and less conservative stability results.

Theorem 3.2

Under Assumptions 1, 2, and 5, the neural network system (28) is globally robustly exponentially stable in the mean square if, for given ηi,η̃j>0 (i,j∈M), there exist positive definite matrices S, S̃, T, T̃, R2, R̃2, N1, N2, N3, N4, N5, N6 and Hi, H̃j (i,j∈M), positive definite diagonal matrices ΔS, ΔT, ΔR2, ΔS̃, ΔT̃, ΔR̃2, ΔHi, ΔH̃j, ΔN1, ΔN2, ΔN3, ΔN4, ΔN5, ΔN6, P, Q, and positive scalars λi and μj (i,j∈M) such that the following LMIs are satisfied:

$$(H_i+\Delta H_i)<\lambda_iI,\tag{29}$$
$$(\tilde H_j+\Delta\tilde H_j)<\mu_jI,\tag{30}$$
$$(\bar M_k+\Delta\bar M_k)^{T}(H_i+\Delta H_i)(\bar M_k+\Delta\bar M_k)-(H_j+\Delta H_j)\le0,\tag{31}$$
$$(\bar N_k+\Delta\bar N_k)^{T}(\tilde H_i+\Delta\tilde H_i)(\bar N_k+\Delta\bar N_k)-(\tilde H_j+\Delta\tilde H_j)\le0\quad[\text{here }r(t_k)=i\text{ and }\tilde r(t_k)=j],\tag{32}$$

$$
\Omega^{\ast}=\begin{bmatrix}
\Xi_1&\Xi_2&\Xi_3&\Xi_4&\Xi_5&\Xi_{10}\\
\ast&\Xi_6&0&0&0&0\\
\ast&\ast&\Xi_7&0&0&0\\
\ast&\ast&\ast&\Xi_8&0&0\\
\ast&\ast&\ast&\ast&\Xi_9&0\\
\ast&\ast&\ast&\ast&\ast&\Xi_{11}
\end{bmatrix}<0,\tag{33}
$$

$$
\Pi^{\ast}=\begin{bmatrix}
\Delta_1&\Delta_2&\Delta_3&\Delta_4&\Delta_5&\Delta_{10}\\
\ast&\Delta_6&0&0&0&0\\
\ast&\ast&\Delta_7&0&0&0\\
\ast&\ast&\ast&\Delta_8&0&0\\
\ast&\ast&\ast&\ast&\Delta_9&0\\
\ast&\ast&\ast&\ast&\ast&\Delta_{11}
\end{bmatrix}<0,\tag{34}
$$

where

Ξ6=diag{Θi}, Ξ7=diag{Θ_{l1}}, Δ6=diag{Φj}, Δ7=diag{Φ_{l2}}, where i,j=1,2,3,…,12 and l1,l2=13,14,15,…,24; Ξ8=diag{Θ̃s}, Ξ9=diag{Θ̃_{l3}}, Δ8=diag{Φ̃s}, Δ9=diag{Φ̃_{l4}}, where s=1,2,3,…,12 and l3,l4=13,14,15,16; Ξ11=diag{Θ*_{l5}}, Δ11=diag{Φ*_{l6}}, where l5,l6=1,2,3,…,10;
Ξ1 has the same 14×14 block structure as Ξi in (9), and Δ1 the same structure as Ωj in (10), with the modified entries listed below; Ξ2–Ξ5, Ξ10 and Δ2–Δ5, Δ10 are the uncertainty coupling blocks built from the quantities ϑ, ϑ̂, Γ, Γ̃, F_{(·)}, N_{(·)}, Ě, M, and the scalars ε, ε̃ defined below.
Ξ2=[000000000000NCiTΓ1ϑ2Tϵ2EˇΓ2ϵ3MEˇ0000ϑˆ1T0000000000000000000000000NW0iT0Γ30Γ40000000NW1iT0Γ50Γ60Γ70ϑˆ5T0ϑˆ2T0NW2iT0Γ80Γ90Γ100ϑˆ6T0ϑˆ3T0000000000000ϑ1T0ϑ3T0ϑ4T0ϑ5Tϵ4CTEˇϑ6Tϵ5MTHiϑ7Tϵ6MTEˇ000000000000000000000000000000000000000000000000000000000000],
Ξ3=[000000NCiT000000000000000000000ϑˆ4Tϵ9Eˇ0ϵ10PM00000000000012FSTϵ11Eˇ00000000NW0iT00000Γ1100000NW1iT00000Γ1200000NW2iT000000000000000Γ13ϵ12Eˇϑ8Tϵ7MEˇ000000000000000000000000000000000000Γ14ϵ8Eˇ00000000000000000000000000000000],
Δ2=[000000000000NDjTΓ˜1ϑ˜2Tϵ˜2EˇΓ˜2ϵ˜3MEˇ0000ϑ˜ˆ1T0000000000000000000000000NV0jT0Γ˜30Γ˜40000000NV1jT0Γ˜50Γ˜60Γ˜70ϑ˜ˆ5T0ϑ˜ˆ2T0NV2jT0Γ˜80Γ˜90Γ˜100ϑ˜ˆ6T0ϑ˜ˆ3T0000000000000ϑ˜1T0ϑ˜3T0ϑ˜4T0ϑ˜5Tϵ˜4DTEˇϑ˜6Tϵ˜5MTH˜jϑ˜7Tϵ˜6MTEˇ000000000000000000000000000000000000000000000000000000000000],
Δ3=[000000NDjT000000000000000000000ϑ˜ˆ4Tϵ˜9Eˇ0ϵ˜10QM00000000000012FS˜Tϵ˜11Eˇ00000000NV0jT00000Γ˜1100000NV1jT00000Γ˜1200000NV2jT000000000000000Γ˜13ϵ˜12Eˇϑ˜8Tϵ˜7MEˇ000000000000000000000000000000000000Γ˜14ϵ˜8Eˇ00000000000000000000000000000000],
Ξ4=[000000000000FHiTϵ13Eˇ0ϵ14M0Γ4000000000000000000000000000000000000000000000000000000000000000000000000000000Γ10Γ20Γ30ϑ1Tϵ16EˇΓ5Γ6ϑ2TΓ7000000000000000000000000000000000000000000000000000000000000],
Δ4=[000000000000FH˜jTϵ˜13Eˇ0ϵ˜14M0Γ˜4000000000000000000000000000000000000000000000000000000000000000000000000000000Γ˜10Γ˜20Γ˜30ϑ˜1Tϵ˜16EˇΓ˜5Γ˜6ϑ˜2TΓ˜7000000000000000000000000000000000000000000000000000000000000],
Ξ5=[00000000000000000000000000000000HiTNCiTϵ19CiTMΓ8CiTEˇ00000000000000000000],Δ5=[00000000000000000000000000000000H˜jTNDjTϵ˜19DjTMΓ˜8DjTEˇ00000000000000000000],
Ξ10=[0000000000000000000000000000000000000000ϵ21EˇFN1T0000000000ϵ22EˇαFN3T0000000000ϵ23EˇΓ15000000000000000000000000000000000000000000000000000000000000ϵ24EˇFN3T0000000000ϵ25Eˇσ1FN5T],
Δ10=[0000000000000000000000000000000000000000ϵ˜21EˇFN2T0000000000ϵ˜22EˇβFN4T0000000000ϵ˜23EˇΓ˜15000000000000000000000000000000000000000000000000000000000000ϵ˜24EˇFN4T0000000000ϵ˜25Eˇσ2FN6T],

where

Θ1=Θ2=ε1I, Θ3=Θ4=ε2I, Θ5=Θ6=ε3I, Θ7=Θ8=ε4I, Θ9=Θ10=ε5I, Θ11=Θ12=ε6I, Θ13=Θ14=ε7I, Θ15=Θ16=ε8I, Θ17=Θ18=ε9I, Θ19=Θ20=ε10I, Θ21=Θ22=ε11I, Θ23=Θ24=ε12I;
Φ1=Φ2=ε̃1I, Φ3=Φ4=ε̃2I, Φ5=Φ6=ε̃3I, Φ7=Φ8=ε̃4I, Φ9=Φ10=ε̃5I, Φ11=Φ12=ε̃6I, Φ13=Φ14=ε̃7I, Φ15=Φ16=ε̃8I, Φ17=Φ18=ε̃9I, Φ19=Φ20=ε̃10I, Φ21=Φ22=ε̃11I, Φ23=Φ24=ε̃12I;
Θ*1=Θ*2=ε21I, Θ*3=Θ*4=ε22I, Θ*5=Θ*6=ε23I, Θ*7=Θ*8=ε24I, Θ*9=Θ*10=ε25I;
Φ*1=Φ*2=ε̃21I, Φ*3=Φ*4=ε̃22I, Φ*5=Φ*6=ε̃23I, Φ*7=Φ*8=ε̃24I, Φ*9=Φ*10=ε̃25I;
Θ̃1=Θ̃2=ε13I, Θ̃3=Θ̃4=ε14I, Θ̃5=Θ̃6=ε15I, Θ̃7=Θ̃8=ε16I, Θ̃9=Θ̃10=ε17I, Θ̃11=Θ̃12=ε18I, Θ̃13=Θ̃14=ε19I, Θ̃15=Θ̃16=ε20I;
Φ̃1=Φ̃2=ε̃13I, Φ̃3=Φ̃4=ε̃14I, Φ̃5=Φ̃6=ε̃15I, Φ̃7=Φ̃8=ε̃16I, Φ̃9=Φ̃10=ε̃17I, Φ̃11=Φ̃12=ε̃18I, Φ̃13=Φ̃14=ε̃19I, Φ̃15=Φ̃16=ε̃20I;
Γ1=ε1MH_i, Γ2=N_{Ci}ᵀF_{Hi}ᵀ, Γ3=F_{Hi}ᵀW_{0i}ᵀ, Γ4=F_{Hi}ᵀN_{W0i}ᵀ, Γ5=F_{Hi}ᵀW_{1i}ᵀ, Γ6=F_{Hi}ᵀN_{W1i}ᵀ, Γ7=W1ᵀF_{Hi}ᵀ, Γ8=F_{Hi}ᵀW_{2i}ᵀ, Γ9=F_{Hi}ᵀN_{W2i}ᵀ, Γ10=W2ᵀF_{Hi}ᵀ, Γ11=CF_{Hi}ᵀN_{W1i}ᵀ, Γ12=CF_{Hi}ᵀN_{W2i}ᵀ, Γ13=(1/(2σ1))F_Tᵀ, Γ14=(λi/2)F_{R2}ᵀ, Γ15=(1/σ1)F_{N5}ᵀ;
Γ̃1=ε̃1MH̃_j, Γ̃2=N_{Dj}ᵀF_{H̃j}ᵀ, Γ̃3=F_{H̃j}ᵀV_{0j}ᵀ, Γ̃4=F_{H̃j}ᵀN_{V0j}ᵀ, Γ̃5=F_{H̃j}ᵀV_{1j}ᵀ, Γ̃6=F_{H̃j}ᵀN_{V1j}ᵀ, Γ̃7=V1ᵀF_{H̃j}ᵀ, Γ̃8=F_{H̃j}ᵀV_{2j}ᵀ, Γ̃9=F_{H̃j}ᵀN_{V2j}ᵀ, Γ̃10=V2ᵀF_{H̃j}ᵀ, Γ̃11=DF_{H̃j}ᵀN_{V1j}ᵀ, Γ̃12=DF_{H̃j}ᵀN_{V2j}ᵀ, Γ̃13=(1/(2σ2))F_{T̃}ᵀ, Γ̃14=(μj/2)F_{R̃2}ᵀ, Γ̃15=(1/σ2)F_{N6}ᵀ;
Γ′1=CiᵀF_{Hi}ᵀ, Γ′2=N_{Ci}ᵀHiᵀ, Γ′3=N_{Ci}ᵀF_{Hi}, Γ′4=ε15MĚ, Γ′5=N_{Ci}Hiᵀ, Γ′6=ε17CiMᵀ, Γ′7=ε18MᵀĚ, Γ′8=ε20F_{Hi}ᵀMN_{Ci}ᵀ;
Γ̃′1=DjᵀF_{H̃j}ᵀ, Γ̃′2=N_{Dj}ᵀH̃jᵀ, Γ̃′3=N_{Dj}ᵀF_{H̃j}, Γ̃′4=ε̃15MĚ, Γ̃′5=N_{Dj}H̃jᵀ, Γ̃′6=ε̃17DjMᵀ, Γ̃′7=ε̃18MᵀĚ, Γ̃′8=ε̃20F_{H̃j}ᵀMN_{Dj}ᵀ;
ϑ1=CiᵀF_{Hi}Ci+MᵀN_{Ci}ᵀF_{Hi}MN_{Ci}, ϑ2=CiᵀN_{Ci}ᵀF_{Hi}ᵀ, ϑ̃1=DjᵀF_{H̃j}Dj+MᵀN_{Dj}ᵀF_{H̃j}MN_{Dj}, ϑ̃2=DjᵀN_{Dj}ᵀF_{H̃j}ᵀ;
Ξ29=CiᵀHiCi−Σ_{l=1}^{N}γ_{il}CiH_l+MᵀN_{Ci}ᵀHiMN_{Ci}−ηiCiᵀHi,
Ξ69=−CiᵀHiW1i−MᵀN_{Ci}ᵀHiMN_{W1i}, Ξ79=−CiᵀHiW2i−MᵀN_{Ci}ᵀHiMN_{W2i},
Ξ99=Σ_{l=1}^{N}γ_{il}CiᵀH_lCi+Σ_{l=1}^{N}γ_{il}MᵀN_{Ci}ᵀHiMN_{Ci}+ηiCiᵀHiCi+MᵀN_{Ci}ᵀHiMN_{Ci};
Ω29=DjᵀH̃jDj−Σ_{l=1}^{N}γ̃_{jl}DjH̃_l+MᵀN_{Dj}ᵀH̃jMN_{Dj}−η̃jDjᵀH̃j,
Ω69=−DjᵀH̃jV1j−MᵀN_{Dj}ᵀH̃jMN_{V1j}, Ω79=−DjᵀH̃jV2j−MᵀN_{Dj}ᵀH̃jMN_{V2j},
Ω99=Σ_{l=1}^{N}γ̃_{jl}DjᵀH̃_lDj+Σ_{l=1}^{N}γ̃_{jl}MᵀN_{Dj}ᵀH̃jMN_{Dj}+η̃jDjᵀH̃jDj+MᵀN_{Dj}ᵀH̃jMN_{Dj};
α*=(1−τ1)e^{−ηiτ̄1}, β*=(1−τ2)e^{−η̃jτ̄2}.

The remaining entries of Ξ1 and Δ1 are the same as in Theorem 2.1, and ∗ denotes the symmetric terms.

Proof

The matrices Ci, Hi, Dj, H˜j, R2, R˜2, S, , T, , N1, N2, N3, N4, N5, and N6 in the Lyapunov–Krasovskii functional of Theorem 2.1 are replaced by Ci+ΔCi(t), Hi+ΔHi, Dj+ΔDj(t), H˜j+ΔH˜j, R2+ΔR2, R˜2+ΔR˜2, S+ΔS, S˜+ΔS˜, T+ΔT, T˜+ΔT˜, N1+ΔN1, N2+ΔN2, N3+ΔN3, N4+ΔN4, N5+ΔN5, and N6+ΔN6, respectively.

Hence, by applying the same procedure as in Theorem 2.1, using Assumption 5 and Lemmas 1.8, 1.9, 1.10, and 1.11, and putting η=max_{i,j∈M}{max_{i∈M}ηi, max_{j∈M}η̃j}, we obtain from (28) and Definition 1.4 (the weak infinitesimal operator LV) that

$$
\mathcal{L}V\le e^{\eta t}\big\{\Psi^{T}(t)\,\Omega^{\ast}\,\Psi(t)+\Phi^{T}(t)\,\Pi^{\ast}\,\Phi(t)\big\},
$$

where Ψ(t) and Φ(t) are given in Theorem 2.1. The remainder of the proof is similar to that of Theorem 2.1, and we conclude that the uncertain neural network (28) is globally robustly exponentially stable in the mean square. □

Numerical examples

In this section, we provide two numerical examples with their simulations to demonstrate the effectiveness of our results.

Example 4.1

Consider the second-order stochastic impulsive BAM neural network (4) with u(t)=(u1(t),u2(t))ᵀ, v(t)=(v1(t),v2(t))ᵀ, where ω̄(t), ω̃̄(t) are two-dimensional Brownian motions and r(t), r̃(t) are right-continuous Markov chains taking values in M={1,2} with generators

$$
\Gamma=\begin{bmatrix}-0.2&0.1\\0.4&-0.3\end{bmatrix},\qquad
\tilde\Gamma=\begin{bmatrix}-0.5&0.2\\0.4&-0.3\end{bmatrix}.
$$

The associated parameters of neural networks (4) take the values as follows:

$$
\begin{aligned}
&C_1=\begin{bmatrix}1&0\\0&3\end{bmatrix},\quad C_2=\begin{bmatrix}2&0\\0&1\end{bmatrix},\quad D_1=\begin{bmatrix}2&0\\0&3\end{bmatrix},\quad D_2=\begin{bmatrix}5&0\\0&2\end{bmatrix},\\
&W_{01}=\begin{bmatrix}0.02&0.01\\0.02&0.01\end{bmatrix},\quad W_{02}=\begin{bmatrix}0.02&0.01\\0.02&0.01\end{bmatrix},\quad W_{11}=\begin{bmatrix}0.03&0.02\\0.03&0.02\end{bmatrix},\quad W_{12}=\begin{bmatrix}0.02&0.02\\0.01&0.01\end{bmatrix},\\
&W_{21}=\begin{bmatrix}0.03&0.04\\0.03&0.02\end{bmatrix},\quad W_{22}=\begin{bmatrix}0.02&0.02\\0.03&0.01\end{bmatrix},\quad V_{01}=\begin{bmatrix}0.02&0.02\\0.01&0.03\end{bmatrix},\quad V_{02}=\begin{bmatrix}0.02&0.01\\0.01&0.02\end{bmatrix},\\
&V_{11}=\begin{bmatrix}0.01&0.02\\0.02&0.01\end{bmatrix},\quad V_{12}=\begin{bmatrix}0.01&0.02\\0.02&0.03\end{bmatrix},\quad V_{21}=\begin{bmatrix}0.02&0.01\\0.02&0.01\end{bmatrix},\quad V_{22}=\begin{bmatrix}0.02&0.02\\0.03&0.02\end{bmatrix},\\
&R_2=\begin{bmatrix}0.02&0\\0&0.03\end{bmatrix},\quad \tilde R_2=\begin{bmatrix}0.05&0\\0&0.02\end{bmatrix},\quad R_{11}=\begin{bmatrix}0.06&0\\0&0.04\end{bmatrix},\quad R_{12}=\begin{bmatrix}0.03&0\\0&0.02\end{bmatrix},\\
&R_{21}=\begin{bmatrix}0.02&0\\0&0.03\end{bmatrix},\quad R_{22}=\begin{bmatrix}0.05&0\\0&0.02\end{bmatrix},\quad R_{31}=\begin{bmatrix}0.03&0\\0&0.08\end{bmatrix},\quad R_{32}=\begin{bmatrix}0.07&0\\0&0.04\end{bmatrix},\\
&\tilde R_{11}=\begin{bmatrix}0.2341&0\\0&0.3421\end{bmatrix},\quad \tilde R_{12}=\begin{bmatrix}0.2451&0\\0&0.0251\end{bmatrix},\quad \tilde R_{21}=\begin{bmatrix}0.1802&0\\0&0.0102\end{bmatrix},\quad \tilde R_{22}=\begin{bmatrix}0.1212&0\\0&0.0140\end{bmatrix},\\
&\tilde R_{31}=\begin{bmatrix}0.02&0\\0&0.05\end{bmatrix},\quad \tilde R_{32}=\begin{bmatrix}0.03&0\\0&0.04\end{bmatrix},\quad \bar M_k=\begin{bmatrix}0.04&0\\0&0.05\end{bmatrix},\quad \bar N_k=\begin{bmatrix}0.05&0\\0&0.03\end{bmatrix}.
\end{aligned}
$$

Taking

$$
\begin{aligned}
\bar\rho_1\big(u(t-\nu_1),v(t),v(t-\tau_1(t)),t,1\big)&=\begin{bmatrix}0.04u_1(t-\nu_1)+0.04v_1(t)+0.03v_1(t-\bar\tau_1)&0\\0&0.05u_2(t-\nu_1)+0.02v_2(t)+0.03v_2(t-\bar\tau_1)\end{bmatrix},\\
\bar\rho_1\big(\cdot,t,2\big)&=\begin{bmatrix}0.04u_1(t-\nu_1)+0.03v_1(t)+0.05v_1(t-\bar\tau_1)&0\\0&0.02u_2(t-\nu_1)+0.04v_2(t)+0.02v_2(t-\bar\tau_1)\end{bmatrix},\\
\bar\rho_2\big(v(t-\nu_2),u(t),u(t-\tau_2(t)),t,1\big)&=\begin{bmatrix}0.02v_1(t-\nu_2)+0.02u_1(t)+0.02u_1(t-\bar\tau_2)&0\\0&0.03v_2(t-\nu_2)+0.02u_2(t)+0.05u_2(t-\bar\tau_2)\end{bmatrix},\\
\bar\rho_2\big(\cdot,t,2\big)&=\begin{bmatrix}0.01v_1(t-\nu_2)+0.03u_1(t)+0.03u_1(t-\bar\tau_2)&0\\0&0.04v_2(t-\nu_2)+0.03u_2(t)+0.02u_2(t-\bar\tau_2)\end{bmatrix},
\end{aligned}
$$

and ν1=ν2=1, τ̄1=τ̄2=7.46, σ1=0.6, σ2=0.8.

The following activation functions are employed in neural network system (4):

$$
\bar f(v)=\sinh(v),\quad \bar{\tilde f}(u)=\sinh(u),\quad \bar g(v)=v,\quad \bar{\tilde g}(u)=u,\quad \bar h(v)=\sin(v),\quad \bar{\tilde h}(u)=\sin(u).
$$

It is easy to see that, for any a,b∈R with a<b, there exists a scalar c∈(a,b) such that

$$
\frac{f_i(b)-f_i(a)}{b-a}=\frac{\sinh(b)-\sinh(a)}{b-a}=\cosh(c)\ge1.
$$

Therefore f̄i(·) and f̃̄j(·), i,j=1,2, are 1-inverse Hölder functions. In addition, for any a,b∈R, it is easy to check that

$$
|h_i(b)-h_i(a)|=|\sin(b)-\sin(a)|\le|b-a|=|g_i(b)-g_i(a)|\le|\sinh(b)-\sinh(a)|=|f_i(b)-f_i(a)|.
$$

In a similar way, we get the same inequalities for g̃̄j(·) and h̃̄j(·) (j=1,2). Hence the activation functions f̄i(·), f̃̄j(·), ḡi(·), g̃̄j(·), h̄i(·), h̃̄j(·) (i,j=1,2) satisfy Assumptions 1 and 2.

Then, by Theorem 2.1, solving the LMIs (5)–(10) with the Matlab LMI Control Toolbox, one obtains the following feasible solutions:

$$
\begin{aligned}
&S=10^{4}\times\begin{bmatrix}0.2968&0.0000\\0.0000&0.2957\end{bmatrix},\quad
T=10^{4}\times\begin{bmatrix}0.8435&0.0000\\0.0000&0.8461\end{bmatrix},\quad
\tilde S=\begin{bmatrix}0.0373&0.0000\\0.0000&0.0373\end{bmatrix},\quad
\tilde T=10^{3}\times\begin{bmatrix}0.2156&0.0001\\0.0001&0.2155\end{bmatrix},\\
&N_1=\begin{bmatrix}0.4948&0.0001\\0.0001&0.4948\end{bmatrix},\quad
N_2=\begin{bmatrix}0.4979&0.0215\\0.0215&0.9157\end{bmatrix},\quad
N_3=\begin{bmatrix}0.7465&0.0147\\0.0147&0.7263\end{bmatrix},\quad
N_4=\begin{bmatrix}0.8333&0.0046\\0.0046&0.8248\end{bmatrix},\\
&N_5=\begin{bmatrix}0.4424&0.0001\\0.0001&0.4423\end{bmatrix},\quad
N_6=\begin{bmatrix}0.4841&0.0000\\0.0000&0.4840\end{bmatrix},\quad
H_1=\begin{bmatrix}0.3533&0.0069\\0.0069&0.0936\end{bmatrix},\quad
H_2=\begin{bmatrix}0.0016&0.0000\\0.0000&0.0017\end{bmatrix},\\
&\tilde H_1=\begin{bmatrix}0.0728&0\\0&0.0076\end{bmatrix},\quad
\tilde H_2=\begin{bmatrix}0.0355&0.0030\\0.0030&0.0350\end{bmatrix},\quad
P=10^{4}\times\begin{bmatrix}0.4368&0\\0&0.4487\end{bmatrix},\quad
Q=10^{3}\times\begin{bmatrix}0.5549&0\\0&0.4465\end{bmatrix},\\
&P_1=10^{4}\times\begin{bmatrix}0.4947&0\\0&0.4947\end{bmatrix},\quad
Q_1=10^{3}\times\begin{bmatrix}0.4947&0\\0&0.4947\end{bmatrix},\quad
\lambda_1=0.6805,\quad \lambda_2=0.0023,\quad \mu_1=2.0060,\quad \mu_2=0.8234.
\end{aligned}
$$

Figure 1 shows the time responses of the state variables u1(t), u2(t), v1(t), v2(t) with and without stochastic noise, and Fig. 2 depicts the time responses of the Markovian jumps r(t)=i, r̃(t)=j. Solving LMIs (5)–(10) yields the feasible solutions above. The discrete-time-delay upper bounds τ̄1 and τ̄2 obtained for neural networks (4), listed in Table 1, are the largest among the compared results, which shows that the contribution of this research work is more effective and less conservative than some existing results. Therefore, by Theorem 2.1, we conclude that neural networks (4) are globally exponentially stable in the mean square for the maximum allowable upper bounds τ̄1=τ̄2=7.46.
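As a quick independent check, the positive definiteness that Theorem 2.1 requires of these solution matrices can be confirmed by inspecting eigenvalues; a minimal numpy sketch for three of the (diagonal) matrices reported above:

```python
import numpy as np

# Eigenvalue check of a few reported feasible-solution matrices from Example 4.1.
S   = 1e4 * np.diag([0.2968, 0.2957])
T   = 1e4 * np.diag([0.8435, 0.8461])
H1t = np.diag([0.0728, 0.0076])        # \tilde H_1
for name, Mx in [("S", S), ("T", T), ("H~1", H1t)]:
    eig = np.linalg.eigvalsh(Mx)
    print(name, "eigenvalues:", eig, "-> positive definite:", bool(np.all(eig > 0)))
```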

Figure 1. The state responses u1(t), u2(t), v1(t), v2(t) of system (1) with and without stochastic disturbances.

Figure 2. The state responses r(t) and r̃(t), i.e., the Markovian jumps in system (4).

Example 4.2

Consider the second-order uncertain stochastic impulsive BAM neural network (28) with u(t)=(u1(t),u2(t))ᵀ, v(t)=(v1(t),v2(t))ᵀ, where ω̄(t), ω̃̄(t) are two-dimensional Brownian motions and r(t), r̃(t) are right-continuous Markov chains taking values in M={1,2} with generator

$$
\Gamma=\tilde\Gamma=\begin{bmatrix}-3&3\\4&-2\end{bmatrix}.
$$

The associated parameters of neural networks (28) are as follows:

$$
\begin{aligned}
&C_1=\begin{bmatrix}1&0\\0&3\end{bmatrix},\ C_2=\begin{bmatrix}2&0\\0&1\end{bmatrix},\ D_1=\begin{bmatrix}2&0\\0&1\end{bmatrix},\ D_2=\begin{bmatrix}2&0\\0&2\end{bmatrix},\\
&W_{01}=\begin{bmatrix}0.02&0.01\\0.02&0.01\end{bmatrix},\ W_{02}=\begin{bmatrix}0.02&0.01\\0.02&0.01\end{bmatrix},\ W_{11}=\begin{bmatrix}0.03&0.02\\0.03&0.02\end{bmatrix},\ W_{12}=\begin{bmatrix}0.02&0.02\\0.01&0.01\end{bmatrix},\\
&W_{21}=\begin{bmatrix}0.05&0.01\\0.03&0.02\end{bmatrix},\ W_{22}=\begin{bmatrix}0.02&0.01\\0.03&0.01\end{bmatrix},\ V_{01}=\begin{bmatrix}0.02&0.02\\0.01&0.03\end{bmatrix},\ V_{02}=\begin{bmatrix}0.02&0.01\\0.01&0.02\end{bmatrix},\\
&V_{11}=\begin{bmatrix}0.01&0.02\\0.02&0.01\end{bmatrix},\ V_{12}=\begin{bmatrix}0.01&0.02\\0.02&0.03\end{bmatrix},\ V_{21}=\begin{bmatrix}0.02&0.01\\0.02&0.01\end{bmatrix},\ V_{22}=\begin{bmatrix}0.02&0.02\\0.03&0.02\end{bmatrix},\\
&M=\begin{bmatrix}0.5&0.6\\0.2&0.5\end{bmatrix},\ N_{C1}=\begin{bmatrix}0.1&0\\0&0.3\end{bmatrix},\ N_{C2}=\begin{bmatrix}0.2&0\\0&0.2\end{bmatrix},\ N_{D1}=\begin{bmatrix}0.1&0\\0&0.3\end{bmatrix},\ N_{D2}=\begin{bmatrix}0.2&0\\0&0.2\end{bmatrix},\\
&N_{W01}=\begin{bmatrix}0.05&0.06\\0.02&0.02\end{bmatrix},\ N_{W02}=\begin{bmatrix}0.02&0.06\\0.02&0.02\end{bmatrix},\ N_{W11}=\begin{bmatrix}0.03&0.04\\0.02&0.01\end{bmatrix},\ N_{W12}=\begin{bmatrix}0.01&0.03\\0.03&0.01\end{bmatrix},\\
&N_{W21}=\begin{bmatrix}0.04&0.03\\0.03&0.02\end{bmatrix},\ N_{W22}=\begin{bmatrix}0.02&0.03\\0.02&0.01\end{bmatrix},\ N_{V01}=\begin{bmatrix}0.03&0.06\\0.02&0.02\end{bmatrix},\ N_{V02}=\begin{bmatrix}0.02&0.04\\0.01&0.03\end{bmatrix},\\
&N_{V11}=\begin{bmatrix}0.02&0.04\\0.02&0.03\end{bmatrix},\ N_{V12}=\begin{bmatrix}0.06&0.03\\0.01&0.04\end{bmatrix},\ N_{V21}=\begin{bmatrix}0.05&0.05\\0.03&0.01\end{bmatrix},\ N_{V22}=\begin{bmatrix}0.03&0.03\\0.02&0.03\end{bmatrix}.
\end{aligned}
$$

Taking

$$
\begin{aligned}
\bar\rho_1\big(u(t-\nu_1),v(t),v(t-\tau_1(t)),t,1\big)&=\begin{bmatrix}0.3u_1(t-\nu_1)+0.4v_1(t)+0.4v_1(t-\bar\tau_1)&0\\0&0.3u_2(t-\nu_1)+0.2v_2(t)+0.3v_2(t-\bar\tau_1)\end{bmatrix},\\
\bar\rho_1\big(\cdot,t,2\big)&=\begin{bmatrix}0.4u_1(t-\nu_1)+0.3v_1(t)+0.4v_1(t-\bar\tau_1)&0\\0&0.2u_2(t-\nu_1)+0.5v_2(t)+0.2v_2(t-\bar\tau_1)\end{bmatrix},\\
\bar\rho_2\big(v(t-\nu_2),u(t),u(t-\tau_2(t)),t,1\big)&=\begin{bmatrix}0.3v_1(t-\nu_2)+0.2u_1(t)+0.2u_1(t-\bar\tau_2)&0\\0&0.4v_2(t-\nu_2)+0.5u_2(t)+0.3u_2(t-\bar\tau_2)\end{bmatrix},\\
\bar\rho_2\big(\cdot,t,2\big)&=\begin{bmatrix}0.2v_1(t-\nu_2)+0.3u_1(t)+0.4u_1(t-\bar\tau_2)&0\\0&0.2v_2(t-\nu_2)+0.3u_2(t)+0.2u_2(t-\bar\tau_2)\end{bmatrix},
\end{aligned}
$$

and ν1=ν2=1, τ̄1=τ̄2=0.4, σ1=σ2=0.3. The following activation functions are employed in neural network system (28):

$$
\bar f(v)=\sinh(v),\quad \bar{\tilde f}(u)=\sinh(u),\quad \bar g(v)=v,\quad \bar{\tilde g}(u)=u,\quad \bar h(v)=\sin(v),\quad \bar{\tilde h}(u)=\sin(u).
$$

Therefore, by Theorem 3.2 of this paper, the uncertain delayed stochastic impulsive BAM neural network (28) under consideration is globally robustly exponentially stable in the mean square.

Conclusions

In this paper, we have treated the problem of global exponential stability analysis for BAM neural networks with leakage delay terms. By employing Lyapunov stability theory and the LMI framework, we have obtained new sufficient conditions that guarantee the global exponential stability of stochastic impulsive uncertain BAM neural networks with two kinds of time-varying delays as well as leakage delays. An advantage of this paper is that different types of uncertain parameters were introduced into the Lyapunov–Krasovskii functionals and the exponential stability behavior was studied. Additionally, two numerical examples have been provided to demonstrate the usefulness of the obtained deterministic and uncertain results. To the best of our knowledge, there are no results on the exponential stability analysis of inertial-type BAM neural networks with both kinds of time-varying delays using the Wirtinger-based inequality; this might be our future research work.

Acknowledgements

This work was jointly supported by the National Natural Science Foundation of China under Grant No. 61573096, the Jiangsu Provincial Key Laboratory of Networked Collective Intelligence under Grant No. BM2017002, the Rajiv Gandhi National Fellowship of the University Grants Commission, New Delhi, under Grant No. F1-17.1/2016-17/RGNF-2015-17-SC-TAM-21509, and the Thailand Research Fund under Grant No. RSA5980019.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Footnotes

Notations

R is the set of real numbers; Rⁿ is the n-dimensional Euclidean space; R^{n×n} denotes the set of all n×n real matrices; Z₊ is the set of all positive integers. For any matrix A, Aᵀ is the transpose of A and A⁻¹ is the inverse of A; ∗ denotes the symmetric terms in a symmetric matrix. A positive definite matrix A is denoted by A>0 and a negative definite A by A<0; λ_min(·) and λ_max(·) denote the minimum and maximum eigenvalues of a real symmetric matrix; Iₙ denotes the n×n identity matrix. x=(x1,x2,…,xn)ᵀ and y=(y1,y2,…,yn)ᵀ are column vectors, with xᵀy=Σ_{i=1}^{n}x_iy_i and ‖x‖=(Σ_{i=1}^{n}x_i²)^{1/2}; ẋ(t) and ẏ(t) denote the derivatives of x(t) and y(t), respectively. C^{2,1}(R₊×Rⁿ×M;R₊) is the family of all nonnegative functions V(t,u(t),i) on R₊×Rⁿ×M which are continuously twice differentiable in u and once differentiable in t. (A,F,{Ft}t≥0,P) is a complete probability space, where A is the sample space, F is the σ-algebra of subsets of the sample space, P is the probability measure on F, and {Ft}t≥0 denotes the filtration. L²_{F0}([−ω̃,0];Rⁿ) denotes the family of all F0-measurable C([−ω̃,0];Rⁿ)-valued random variables ξ̃={ξ̃(θ):−ω̃≤θ≤0} such that sup_{−ω̃≤θ≤0}E|ξ̃(θ)|<∞, where E{·} stands for the mathematical expectation operator with respect to the given probability measure P.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Kosko B. Neural Networks and Fuzzy Systems—A Dynamical System Approach to Machine Intelligence. Englewood Cliffs: Prentice Hall; 1992.
2. Kosko B. Adaptive bidirectional associative memories. Appl. Opt. 1987;26(23):4947–4960. doi: 10.1364/AO.26.004947.
3. Feng Z., Zheng W. Improved stability condition for Takagi–Sugeno fuzzy systems with time-varying delay. IEEE Trans. Cybern. 2017;47(3):661–670. doi: 10.1109/TCYB.2016.2523544.
4. Joya G., Atencia M.A., Sandoval F. Hopfield neural networks for optimization: study of the different dynamics. Neurocomputing. 2002;43:219–237. doi: 10.1016/S0925-2312(01)00337-X.
5. Li R., Cao J., Alsaedi A., Alsaadi F. Exponential and fixed-time synchronization of Cohen–Grossberg neural networks with time-varying delays and reaction-diffusion terms. Appl. Math. Comput. 2017;313:37–51. doi: 10.1016/j.cam.2016.10.002.
6. Li R., Cao J., Alsaedi A., Alsaadi F. Stability analysis of fractional-order delayed neural networks. Nonlinear Anal., Model. Control. 2017;22(4):505–520. doi: 10.15388/NA.2017.4.6.
7. Nie X., Cao J. Stability analysis for the generalized Cohen–Grossberg neural networks with inverse Lipschitz neuron activations. Comput. Math. Appl. 2009;57:1522–1538. doi: 10.1016/j.camwa.2009.01.003.
8. Tu Z., Cao J., Alsaedi A., Alsaadi F.E., Hayat T. Global Lagrange stability of complex-valued neural networks of neutral type with time-varying delays. Complexity. 2016;21:438–450. doi: 10.1002/cplx.21823.
9. Zhang H., Wang Z., Lin D. Global asymptotic stability and robust stability of a class of Cohen–Grossberg neural networks with mixed delays. IEEE Trans. Circuits Syst. I. 2009;56:616–629. doi: 10.1109/TCSI.2008.2002556.
10. Zhu Q., Cao J. Robust exponential stability of Markovian jump impulsive stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. 2010;21:1314–1325. doi: 10.1109/TNN.2010.2054108.
11. Zhang X.M., Han Q.L., Seuret A., Gouaisbaut F. An improved reciprocally convex inequality and an augmented Lyapunov–Krasovskii functional for stability of linear systems with time-varying delay. Automatica. 2017;84:221–226. doi: 10.1016/j.automatica.2017.04.048.
12. Shu H., Wang Z., Lu Z. Global asymptotic stability of uncertain stochastic bi-directional associative memory networks with discrete and distributed delays. Math. Comput. Simul. 2009;80:490–505. doi: 10.1016/j.matcom.2008.07.007.
13. Balasundaram K., Raja R., Zhu Q., Chandrasekaran S., Zhou H. New global asymptotic stability of discrete-time recurrent neural networks with multiple time-varying delays in the leakage term and impulsive effects. Neurocomputing. 2016;214:420–429. doi: 10.1016/j.neucom.2016.06.040.
14. Li R., Cao J. Stability analysis of reaction-diffusion uncertain memristive neural networks with time-varying delays and leakage term. Appl. Math. Comput. 2016;278:54–69.
15. Senthilraj S., Raja R., Zhu Q., Samidurai R., Yao Z. Exponential passivity analysis of stochastic neural networks with leakage, distributed delays and Markovian jumping parameters. Neurocomputing. 2016;175:401–410. doi: 10.1016/j.neucom.2015.10.072.
16. Senthilraj S., Raja R., Zhu Q., Samidurai R., Yao Z. Delay-interval-dependent passivity analysis of stochastic neural networks with Markovian jumping parameters and time delay in the leakage term. Nonlinear Anal. Hybrid Syst. 2016;22:262–275. doi: 10.1016/j.nahs.2016.05.002.
17. Li X., Fu X. Effect of leakage time-varying delay on stability of nonlinear differential systems. J. Franklin Inst. 2013;350:1335–1344. doi: 10.1016/j.jfranklin.2012.04.007.
18. Lakshmanan S., Park J.H., Lee T.H., Jung H.Y., Rakkiyappan R. Stability criteria for BAM neural networks with leakage delays and probabilistic time-varying delays. Appl. Math. Comput. 2013;219:9408–9423.
19. Liao X., Mao X. Exponential stability and instability of stochastic neural networks. Stoch. Anal. Appl. 1996;14:165–185. doi: 10.1080/07362999608809432.
20. Su W., Chen Y. Global robust exponential stability analysis for stochastic interval neural networks with time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 2009;14:2293–2300. doi: 10.1016/j.cnsns.2008.05.001.
21. Zhang H., Wang Y. Stability analysis of Markovian jumping stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. 2008;19:366–370. doi: 10.1109/TNN.2007.910738.
22. Zhu Q., Cao J. Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays. IEEE Trans. Syst. Man Cybern. 2011;41:341–353. doi: 10.1109/TSMCB.2010.2053354.
23. Bao H., Cao J. Stochastic global exponential stability for neutral-type impulsive neural networks with mixed time-delays and Markovian jumping parameters. Commun. Nonlinear Sci. Numer. Simul. 2011;16:3786–3791. doi: 10.1016/j.cnsns.2010.12.027.
24. Li X., Song S. Stabilization of delay systems: delay-dependent impulsive control. IEEE Trans. Autom. Control. 2017;62(1):406–411. doi: 10.1109/TAC.2016.2530041.
25. Li X., Wu J. Stability of nonlinear differential systems with state-dependent delayed impulses. Automatica. 2016;64:63–69. doi: 10.1016/j.automatica.2015.10.002.
26. Li X., Bohner M., Wang C. Impulsive differential equations: periodic solutions and applications. Automatica. 2015;52:173–178. doi: 10.1016/j.automatica.2014.11.009.
27. Stamova I., Stamov T., Li X. Global exponential stability of a class of impulsive cellular neural networks with supremums. Int. J. Adapt. Control Signal Process. 2014;28:1227–1239. doi: 10.1002/acs.2440.
28. Pan L., Cao J. Exponential stability of stochastic functional differential equations with Markovian switching and delayed impulses via Razumikhin method. Adv. Differ. Equ. 2012;2012:61. doi: 10.1186/1687-1847-2012-61.
29. Lou X., Cui B. Stochastic exponential stability for Markovian jumping BAM neural networks with time-varying delays. IEEE Trans. Syst. Man Cybern. 2007;37:713–719. doi: 10.1109/TSMCB.2006.887426.
30. Wang Z., Liu Y., Liu X. State estimation for jumping recurrent neural networks with discrete and distributed delays. Neural Netw. 2009;22:41–48. doi: 10.1016/j.neunet.2008.09.015.
31. Wang Q., Chen B., Zhong S. Stability criteria for uncertainty Markovian jumping parameters of BAM neural networks with leakage and discrete delays. Int. J. Math. Comput. Phys. Electr. Comput. Eng. 2014;8(2):391–398.
32. Balasubramaniam P., Krishnasamy R., Rakkiyappan R. Delay-interval-dependent robust stability results for uncertain stochastic systems with Markovian jumping parameters. Nonlinear Anal. Hybrid Syst. 2011;5:681–691. doi: 10.1016/j.nahs.2011.06.001.
33. Gu K. An integral inequality in the stability problem of time-delay systems. In: Proceedings of the 39th IEEE Conference on Decision and Control; 2000.
34. Guo S., Huang L., Dai B., Zhang Z. Global existence of periodic solutions of BAM neural networks with variable coefficients. Phys. Lett. A. 2003;317:97–106. doi: 10.1016/j.physleta.2003.08.019.
35. Haykin S. Neural Networks. New York: Prentice Hall; 1994.
36. Shi Y., Cao J., Chen G. Exponential stability of complex-valued memristor-based neural networks with time-varying delays. Appl. Math. Comput. 2017;313:222–234.
37. Wu H. Global exponential stability of Hopfield neural networks with delays and inverse Lipschitz neuron activations. Nonlinear Anal., Real World Appl. 2009;10:2297–2306. doi: 10.1016/j.nonrwa.2008.04.016.
38. Yang X., Cao J. Synchronization of Markovian coupled neural networks with nonidentical node-delays and random coupling strengths. IEEE Trans. Neural Netw. 2012;23:60–71. doi: 10.1109/TNNLS.2011.2177671.
39. Li Y., Wu H. Global stability analysis in Cohen–Grossberg neural networks with delays and inverse Hölder neuron activation functions. Inf. Sci. 2010;180:4022–4030. doi: 10.1016/j.ins.2010.06.033.
40. Huang T., Li C. Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects. IEEE Trans. Neural Netw. Learn. Syst. 2012;23:867–875. doi: 10.1109/TNNLS.2011.2178037.
41. Balasubramaniam P., Vembarasan V. Robust stability of uncertain fuzzy BAM neural networks of neutral-type with Markovian jumping parameters and impulses. Comput. Math. Appl. 2011;62:1838–1861. doi: 10.1016/j.camwa.2011.06.027.
42. Rakkiyappan R., Chandrasekar A., Lakshmana S., Park J.H. Exponential stability for Markovian jumping stochastic BAM neural networks with mode-dependent probabilistic time-varying delays and impulse control. Complexity. 2015;20(3):39–65. doi: 10.1002/cplx.21503.
43. Park J.H., Park C.H., Kwon O.M., Lee S.M. A new stability criterion for bidirectional associative memory neural networks. Appl. Math. Comput. 2008;199:716–722.
44. Mohamad S. Lyapunov exponents of convergent Cohen–Grossberg-type BAM networks with delays and large impulses. Appl. Math. Sci. 2008;2(34):1679–1704.
45. Park J.H. A novel criterion for global asymptotic stability of BAM neural networks with time-delays. Chaos Solitons Fractals. 2006;29:446–453. doi: 10.1016/j.chaos.2005.08.018.
