Cognitive Neurodynamics. 2017 Apr 20;11(4):369–381. doi: 10.1007/s11571-017-9438-0

Synchronization of generalized reaction-diffusion neural networks with time-varying delays based on general integral inequalities and sampled-data control approach

S Dharani 1, R Rakkiyappan 1, Jinde Cao 2,3,, Ahmed Alsaedi 3
PMCID: PMC5509615  PMID: 28761556

Abstract

This paper explores the problem of synchronization for a class of generalized reaction-diffusion neural networks with mixed time-varying delays, comprising both discrete and distributed delays. Owing to the development and merits of digital controllers, sampled-data control is a natural choice for establishing synchronization in continuous-time systems. Using a newly introduced integral inequality, less conservative criteria that assure the global asymptotic synchronization of the considered generalized reaction-diffusion neural networks with mixed delays are established in terms of linear matrix inequalities (LMIs). The obtained easy-to-test LMI-based synchronization criteria depend on the delay bounds as well as on the reaction-diffusion terms, which is more practicable. Upon solving these LMIs with the Matlab LMI control toolbox, a desired sampled-data controller gain can be acquired without difficulty. Finally, numerical examples are given to demonstrate the validity of the derived LMI-based synchronization criteria.

Keywords: Generalized neural networks, Reaction-diffusion, Integral inequality, Sampled-data control, Linear matrix inequality

Introduction

Over the last few decades, neural networks have attracted substantial interest owing to their applications in many practical areas, including automatic control, image and signal processing, fault diagnosis, combinatorial optimization and associative memory (Young et al. 1997; Atencia et al. 2005; Hopfield 1984; Diressche and Zou 1998). All such applications depend heavily on the qualitative properties of neural networks. Synchronization, a collective dynamical behavior and one of the most fascinating phenomena in neural networks, has been widely explored in the literature; see, for example, He et al. (2017), Shen et al. (2017), Bao et al. (2015), Tong et al. (2015) and Prakash et al. (2016). Besides, based on the choice of neuron states (internal or external), neural networks can be classified as local field neural networks (LFNNs) and static neural networks (SNNs). These two classes can be transformed into each other only under certain assumptions that are not always satisfied in practice, so their dynamical behaviors must in general be analyzed independently. To avoid this complication, a unified model that includes both LFNNs and SNNs was constructed in Zhang and Han (2011), where its stability was also investigated. This generalized neural network model can be applied wherever the different types of neural networks are used, which has prompted researchers to analyze its dynamics; see, for example, Zheng et al. (2015), Manivannan et al. (2016), Liu et al. (2015a), Rakkiyappan et al. (2016a, b) and Li and Cao (2016), although such analysis is still far from complete. The stability analysis of switched Hopfield neutral neural networks was conducted in Manivannan et al. (2016), where some delay-dependent stability criteria were established by constructing a new Lyapunov–Krasovskii functional.
The problem of stability and pinning synchronization of delayed inertial memristive neural networks was investigated by employing the matrix measure and the Halanay inequality (Rakkiyappan et al. 2016a). Some LMI-based criteria were presented to guarantee the stability of reaction-diffusion delayed memristor-based neural networks (Li and Cao 2016). Based on Lyapunov theory and analytical techniques, the fixed-time synchronization of delayed memristive recurrent neural networks was studied in Cao and Li (2017).

Besides, owing to finite switching speeds and traffic congestion in signal transmission processes, time delays occur in various dynamical systems and may be unfavorable to successful applications of neural networks. Most research on delayed neural networks has been confined to the simple case of discrete delays. In reality, however, neural networks contain a multitude of parallel pathways with a variety of axon sizes and lengths, so there is a distribution of conduction velocities along these pathways and hence a distribution of propagation delays. To construct a realistic neural network model, it is therefore indispensable to include both discrete and distributed delays. The qualitative analysis of various neural networks with discrete and distributed delays has accordingly been carried out in the literature; see, for instance, Wang et al. (2006), Zheng and Cao (2014), Rakkiyappan and Dharani (2017), Lee et al. (2015), Zhang et al. (2009) and Yang et al. (2014). Robust stability of generalized neural networks with discrete and distributed delays was analyzed in Wang et al. (2006), and robust synchronization of a class of coupled delayed neural networks was examined in Zheng and Cao (2014). In most stability and synchronization analyses of delayed systems, integral inequalities are extensively employed, since they produce explicit bounds for integral terms of quadratic functions. Jensen's inequality is the most commonly used and has served as a dominant tool in the stability analysis of time-delay systems. Recently, an auxiliary function-based integral inequality was introduced in Park et al. (2015) and shown to produce tighter bounds than Jensen's inequality. Moreover, it was proved in Chen et al. (2016a) that there is a general inequality, based on orthogonal polynomials, that includes all existing inequalities, such as the Bessel-Legendre inequality, Jensen's inequality, the Wirtinger-based inequality and the auxiliary function-based inequality, as special cases. Motivated by this fact, this paper employs the general integral inequality based on orthogonal polynomials to derive new, less conservative criteria for the synchronization of generalized reaction-diffusion neural networks with mixed time-varying delays.
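To illustrate numerically why tighter integral inequalities matter, the following small Python sketch compares the classical Jensen lower bound with the Wirtinger-based lower bound on the same integral of a quadratic function. The scalar test function is an arbitrary illustrative choice, not an example from the paper.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

# Lower bounds on \int_a^b w'(s)^2 ds for the test function w(s) = s^3.
a, b = 0.0, 1.0
s = np.linspace(a, b, 100_001)
w, dw = s**3, 3 * s**2

true_val = trapezoid(dw**2, s)              # exact value 9/5 = 1.8
gamma0 = trapezoid(w, s)                    # \int_a^b w(s) ds = 1/4
jensen = (w[-1] - w[0])**2 / (b - a)        # Jensen bound = 1.0
wirtinger = jensen + (3 / (b - a)) * (
    w[-1] + w[0] - (2 / (b - a)) * gamma0)**2   # Wirtinger-based bound = 1.75

# jensen <= wirtinger <= true integral: the extra term tightens the bound.
print(jensen, wirtinger, true_val)
```

The extra correction term, built from the integral of the state itself, closes part of the gap left by Jensen's inequality; the orthogonal-polynomial inequality of Chen et al. (2016a) continues this pattern with further terms.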

As one of the main factors that degrade system performance, diffusion phenomena are often encountered in neural networks and electric circuits when electrons move in a nonuniform electromagnetic field. The whole structure and dynamics of such neural networks then depend not only on the evolution time of each variable but also intensively on its spatial position, and reaction-diffusion systems arise in response to this phenomenon. Recently, neural networks with diffusion terms and Markovian jumping have been investigated extensively in the literature; see Wang et al. (2012a), Gan (2012), Yang et al. (2013), Zhou et al. (2007), Li et al. (2015) and references therein. The exponential synchronization of reaction-diffusion Cohen-Grossberg neural networks and of Hopfield neural networks with time-varying delays was investigated in Gan (2012) and Wang et al. (2012a), respectively. Phenomena such as time delays and diffusion effects may lead to undesirable behaviors like oscillation and instability. Consequently, in order to make full use of their advantages and restrain their disadvantages, suitable control techniques should be adopted to realize synchronization in generalized reaction-diffusion neural networks.

As far as we know, most contributions to nonlinear control theory are based on continuous-time control schemes such as adaptive control, non-fragile control and feedback control. The finite-time passivity problem was addressed by constructing a non-fragile state feedback controller (Rajavel et al. 2017), and the exponential H∞ filtering problem for discrete-time switched neural networks was investigated via the average dwell time approach and the piecewise Lyapunov function technique (Cao et al. 2016). The key requirement in implementing such continuous-time controllers is that the input signal be continuous, which cannot always be ensured in real-time situations. Moreover, owing to rapid advances in intelligent instrumentation and digital measurement, continuous-time controllers are nowadays generally replaced by discrete-time controllers, which offer improved stability, precision and performance. In the digital control method considered here, namely the sampled-data control approach, the response network receives signals from the drive system only at discrete instants, so the amount of information transferred decreases. This is advantageous in controller implementation because of the reduced control cost. For these reasons, sampled-data control theory has attracted considerable attention from researchers, and fruitful work has been reported, for example, in Liu et al. (2015b), Liu and Zhou (2015), Lee and Park (2017), Rakkiyappan et al. (2015a, b), Su and Shen (2015) and Lee et al. (2014). In all of the above-mentioned works, only Jensen's inequality was utilized to handle the integral terms that occur when deriving the synchronization criteria. Different from these works, in this paper we derive synchronization criteria for generalized neural networks with mixed delays using a general integral inequality that includes Jensen's inequality and many other existing inequalities as special cases, which has not yet been done in the literature.

In response to the above discussion, it is worth noticing that both mixed delays and reaction-diffusion effects cannot be neglected when modeling neural networks, since they have a potential influence on the dynamics of such networks; however, it is not easy to handle all of them together under a unified framework of a generalized neural network model. Hence, it is of great interest to study this problem, and there is still space for improvement. The main goal of this paper is to derive global asymptotic synchronization criteria for generalized reaction-diffusion neural networks with mixed delays based on a newly introduced general integral inequality, which is derived from orthogonal polynomials and includes many existing inequalities, such as the Wirtinger-based inequality, Jensen's inequality and the Bessel-Legendre inequality, as special cases. By constructing a proper Lyapunov–Krasovskii functional with triple and quadruple integral terms, new solvability criteria are derived in terms of LMIs that depend on the size of the delays, the sampling period and the diffusion terms. Finally, two numerical examples with simulation results are presented to exhibit the effectiveness of the derived theoretical results.

The structure of this paper is summarized as follows. In "Synchronization analysis problem" section, the problem under investigation is described; "Main results" section is devoted to establishing some new synchronization criteria; "Numerical examples" section gives two numerical examples to demonstrate the effectiveness of our theoretical results. Finally, conclusions are presented in "Conclusions" section.

Notations and preliminaries Throughout this paper, $\mathbb{R}^n$ represents the $n$-dimensional Euclidean space. For a matrix $X$, $X>0\ (<0)$ means that $X$ is a symmetric positive (negative) definite matrix, and $X^T$, $X^{-1}$ denote the transpose and the inverse of a square matrix $X$, respectively. The symbol $*$ in a symmetric matrix indicates the elements induced by symmetry. The shorthand $\mathrm{diag}\{\cdot\}$ denotes a diagonal or block-diagonal matrix. $\mathbb{S}^n$ and $\mathbb{S}^n_+$ denote the sets of symmetric and symmetric positive definite matrices in $\mathbb{R}^{n\times n}$, respectively.

Before proceeding, we present some necessary lemmas which will be employed to derive the main results.

Lemma 1

(Lu 2008) Let $\Gamma$ be a cube $|x_k|<q_k\ (k=1,2,\dots,m)$ and let $w(x)$ be a real-valued function belonging to $C^1(\Gamma)$ which satisfies $w(x)|_{\partial\Gamma}=0$. Then

$$\int_\Gamma w^2(x)\,dx\le q_k^2\int_\Gamma\left(\frac{\partial w(x)}{\partial x_k}\right)^2dx.$$
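Lemma 1 can be checked numerically in one dimension. The sketch below is illustrative only (the test function is not from the paper): it uses $w(x)=\cos(\pi x/2q)$, which vanishes on the boundary $x=\pm q$, and verifies the inequality by quadrature.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

# Check  \int w^2 dx <= q^2 \int (w')^2 dx  on |x| < q for
# w(x) = cos(pi x / (2 q)), which satisfies w(+/-q) = 0.
q = 1.5
x = np.linspace(-q, q, 100_001)
w = np.cos(np.pi * x / (2 * q))
dw = -np.pi / (2 * q) * np.sin(np.pi * x / (2 * q))

lhs = trapezoid(w**2, x)            # equals q
rhs = q**2 * trapezoid(dw**2, x)    # equals pi^2 q / 4, about 2.47 q
print(lhs, rhs)                     # lhs is below rhs, as the lemma asserts
```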

Lemma 2

(Chen et al. 2016a) Let $\omega(s)$ be a differentiable function whose multiple integrals below are well defined, and let $R\in\mathbb{S}^n_+$. The following integral inequalities hold:

$$\int_a^b\dot\omega^T(s)R\dot\omega(s)\,ds\ge\frac{1}{b-a}\dot\gamma_0^TR\dot\gamma_0+\frac{3}{b-a}\dot\gamma_1^TR\dot\gamma_1+\frac{5}{b-a}\dot\gamma_2^TR\dot\gamma_2+\frac{7}{b-a}\dot\gamma_3^TR\dot\gamma_3+\frac{9}{b-a}\dot\gamma_4^TR\dot\gamma_4,\tag{1}$$
$$\int_a^b\!\int_\delta^b\dot\omega^T(s)R\dot\omega(s)\,ds\,d\delta\ge\frac{2}{(b-a)^2}\dot\gamma_{02}^TR\dot\gamma_{02}+\frac{16}{(b-a)^2}\dot\gamma_{0,1}^TR\dot\gamma_{0,1}+\frac{54}{(b-a)^2}\dot\gamma_{0,2}^TR\dot\gamma_{0,2}+\frac{128}{(b-a)^2}\dot\gamma_{0,3}^TR\dot\gamma_{0,3},\tag{2}$$
$$\int_a^b\!\int_\delta^b\!\int_\lambda^b\dot\omega^T(s)R\dot\omega(s)\,ds\,d\lambda\,d\delta\ge\frac{6}{(b-a)^3}\dot\gamma_{03}^TR\dot\gamma_{03}+\frac{90}{(b-a)^3}\dot\gamma_{02,1}^TR\dot\gamma_{02,1}+\frac{504}{(b-a)^3}\dot\gamma_{02,2}^TR\dot\gamma_{02,2},\tag{3}$$

where

$$\gamma_0=\int_a^b\omega(s)\,ds,\quad\gamma_{02}=\int_a^b\!\int_\delta^b\omega(s)\,ds\,d\delta,\quad\gamma_{03}=\int_a^b\!\int_\delta^b\!\int_\lambda^b\omega(s)\,ds\,d\lambda\,d\delta,\quad\gamma_{04}=\int_a^b\!\int_\delta^b\!\int_\lambda^b\!\int_\kappa^b\omega(s)\,ds\,d\kappa\,d\lambda\,d\delta,$$
$$\begin{aligned}
\dot\gamma_0&=\omega(b)-\omega(a),\qquad\dot\gamma_{02}=(b-a)\omega(b)-\gamma_0,\qquad\dot\gamma_{03}=\frac{(b-a)^2}{2}\omega(b)-\gamma_{02},\\
\dot\gamma_1&=\omega(b)+\omega(a)-\frac{2}{b-a}\gamma_0,\\
\dot\gamma_2&=\omega(b)-\omega(a)+\frac{6}{b-a}\gamma_0-\frac{12}{(b-a)^2}\gamma_{02},\\
\dot\gamma_3&=\omega(b)+\omega(a)-\frac{12}{b-a}\gamma_0+\frac{60}{(b-a)^2}\gamma_{02}-\frac{120}{(b-a)^3}\gamma_{03},\\
\dot\gamma_4&=\omega(b)-\omega(a)+\frac{20}{b-a}\gamma_0-\frac{180}{(b-a)^2}\gamma_{02}+\frac{840}{(b-a)^3}\gamma_{03}-\frac{1680}{(b-a)^4}\gamma_{04},\\
\dot\gamma_{0,1}&=\frac{b-a}{2}\omega(b)+\gamma_0-\frac{3}{b-a}\gamma_{02},\\
\dot\gamma_{0,2}&=\frac{b-a}{3}\omega(b)-\gamma_0+\frac{8}{b-a}\gamma_{02}-\frac{20}{(b-a)^2}\gamma_{03},\\
\dot\gamma_{0,3}&=\frac{b-a}{4}\omega(b)+\gamma_0-\frac{15}{b-a}\gamma_{02}+\frac{90}{(b-a)^2}\gamma_{03}-\frac{210}{(b-a)^3}\gamma_{04},\\
\dot\gamma_{02,1}&=\frac{(b-a)^2}{6}\omega(b)+\gamma_{02}-\frac{4}{b-a}\gamma_{03},\\
\dot\gamma_{02,2}&=\frac{(b-a)^2}{12}\omega(b)-\gamma_{02}+\frac{10}{b-a}\gamma_{03}-\frac{30}{(b-a)^2}\gamma_{04}.
\end{aligned}$$

Lemma 3

(Gu et al. 2003) For any matrix $\Phi=\Phi^T>0\in\mathbb{R}^{n\times n}$, a scalar $\rho>0$ and a vector function $r:[0,\rho]\to\mathbb{R}^n$ such that the following integrations are well defined,

$$\left(\int_0^\rho r(s)\,ds\right)^T\Phi\left(\int_0^\rho r(s)\,ds\right)\le\rho\int_0^\rho r^T(s)\Phi r(s)\,ds.$$

Lemma 4

(Park et al. 2011) For any vector $\xi\in\mathbb{R}^m$, matrices $R_1,R_2\in\mathbb{S}^n_+$, $S\in\mathbb{R}^{n\times n}$, $\Theta_1,\Theta_2\in\mathbb{R}^{n\times m}$, and real scalars $a\ge0$, $b\ge0$ satisfying $a+b=1$, the following inequality holds:

$$\frac{1}{a}\xi^T\Theta_1^TR_1\Theta_1\xi+\frac{1}{b}\xi^T\Theta_2^TR_2\Theta_2\xi\ge\xi^T\begin{bmatrix}\Theta_1\\\Theta_2\end{bmatrix}^T\begin{bmatrix}R_1&S\\S^T&R_2\end{bmatrix}\begin{bmatrix}\Theta_1\\\Theta_2\end{bmatrix}\xi$$

subject to $\begin{bmatrix}R_1&S\\S^T&R_2\end{bmatrix}\ge0$.

Synchronization analysis problem

Consider the generalized reaction-diffusion neural network model with mixed time-varying delay components as

$$\frac{\partial y_l(t,x)}{\partial t}=\sum_{k=1}^m\frac{\partial}{\partial x_k}\left(D_{lk}\frac{\partial y_l(t,x)}{\partial x_k}\right)-c_ly_l(t,x)+\sum_{j=1}^na_{lj}f_j(w_{lj}y_j(t,x))+\sum_{j=1}^nb_{lj}f_j(w_{lj}y_j(t-\tau(t),x))+\sum_{j=1}^nd_{lj}\int_{t-d(t)}^tf_j(w_{lj}y_j(s,x))\,ds,\tag{4}$$

where $1\le l\le n$ and $x=[x_1,x_2,\dots,x_m]^T\in\Gamma\subset\mathbb{R}^m$, with $\Gamma=\{x\,|\,|x_k|\le\delta_k,\ k=1,2,\dots,m\}$, where $\delta_k$ is a positive constant. $D_{lk}\ge0$ is the transmission diffusion operator along the $l$th neuron, $y_l(t,x)$ is the state of the $l$th neuron, $c_l>0$ denotes the decay rate of the $l$th neuron, and $a_{lj}$, $b_{lj}$ and $d_{lj}$ are the connection strength, the time-varying delay connection weight, and the distributed time-varying delay connection strength of the $j$th neuron on the $l$th neuron, respectively. $w_{lj}$ is the value of the synaptic connectivity from neuron $j$ to neuron $l$, and $f_j(\cdot)$ represents the neuron activation function. $\tau_1\le\tau(t)\le\tau_2$ and $0\le d(t)\le d$ are the discrete and distributed time-varying delays, where $\tau_1$, $\tau_2$ and $d$ are positive constants and $\dot\tau(t)\le\mu$. Also, $\sigma=\max\{\tau_2,d\}$. The initial and Dirichlet boundary conditions of system (4) are given by

$$y_l(s,x)=\varphi_l(s,x),\quad(s,x)\in[-\sigma,0]\times\Gamma,\tag{5}$$

and

$$y_l(t,x)=0,\quad(t,x)\in[-\sigma,+\infty)\times\partial\Gamma,\tag{6}$$

respectively, where $\varphi(s,x)=(\varphi_1(s,x),\varphi_2(s,x),\dots,\varphi_n(s,x))^T\in C([-\sigma,0]\times\Gamma,\mathbb{R}^n)$, the Banach space of continuous functions from $[-\sigma,0]\times\Gamma$ to $\mathbb{R}^n$ with norm $\|\varphi\|=\left(\int_\Gamma\varphi^T(s,x)\varphi(s,x)\,dx\right)^{1/2}$.

Remark 1

It can be clearly seen that model (4) is a generalized neural network model that incorporates some familiar neural networks as particular cases: (i) letting $a_{lj}=b_{lj}=d_{lj}=1$, model (4) reduces to a reaction-diffusion static neural network model, and (ii) letting $w_{lj}=1$, it reduces to a classic reaction-diffusion local field neural network model.

For simplicity, we represent system (4) in a compact form as

$$\frac{\partial y(t,x)}{\partial t}=\sum_{k=1}^m\frac{\partial}{\partial x_k}\left(D_k\frac{\partial y(t,x)}{\partial x_k}\right)-Cy(t,x)+Af(Wy(t,x))+Bf(Wy(t-\tau(t),x))+D\int_{t-d(t)}^tf(Wy(s,x))\,ds,\tag{7}$$

where $y(t,x)=(y_1(t,x),y_2(t,x),\dots,y_n(t,x))^T$, $D_k=\mathrm{diag}\{D_{1k},D_{2k},\dots,D_{nk}\}$, $f(y(t,x))=(f_1(y_1(t,x)),\dots,f_n(y_n(t,x)))^T$, $C=\mathrm{diag}\{c_1,c_2,\dots,c_n\}$, $A=(a_{lj})_{n\times n}$, $B=(b_{lj})_{n\times n}$ and $D=(d_{lj})_{n\times n}$.

Assumption 1

For any $u,v\in\mathbb{R}$ with $u\ne v$, the neuron activation function $f_j(\cdot)$ is continuous, bounded and satisfies

$$l_j^-\le\frac{f_j(u)-f_j(v)}{u-v}\le l_j^+,\tag{8}$$

where $l_j^-$ and $l_j^+$ are real constants, which may be positive, zero or negative.

Remark 2

As mentioned in Liu et al. (2013), the constants $l_j^-$ and $l_j^+$ can be positive, negative or zero. Therefore, the resulting activation functions can be nonmonotonic and are more general than the usual sigmoid functions. In addition, when the Lyapunov stability theory is used to analyze the stability of dynamic systems, such a description is particularly suitable, since it quantifies the lower and upper bounds of the activation functions and thereby offers the possibility of reducing the induced conservatism.
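As a concrete illustration of Assumption 1 (not taken from the paper's examples), the sketch below checks numerically that the common choice $f(u)=\tanh(u)$ satisfies the sector condition (8) with $l^-=0$ and $l^+=1$, since $f'(u)=\mathrm{sech}^2(u)\in(0,1]$.

```python
import numpy as np

# Sample random pairs (u, v) and verify that the difference quotient
# (f(u) - f(v)) / (u - v) stays in the sector [l^-, l^+] = [0, 1]
# for f = tanh, as required by Assumption 1 / inequality (8).
rng = np.random.default_rng(0)
u = rng.uniform(-5.0, 5.0, 10_000)
v = rng.uniform(-5.0, 5.0, 10_000)
mask = np.abs(u - v) > 1e-3                      # avoid near-coincident pairs
quot = (np.tanh(u[mask]) - np.tanh(v[mask])) / (u[mask] - v[mask])
print(quot.min(), quot.max())                    # stays inside [0, 1]
```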

To observe the synchronization of system (4), the slave system is designed as

$$\frac{\partial v_i(t,x)}{\partial t}=\sum_{k=1}^m\frac{\partial}{\partial x_k}\left(D_k\frac{\partial v_i(t,x)}{\partial x_k}\right)-Cv_i(t,x)+Af(Wv_i(t,x))+Bf(Wv_i(t-\tau(t),x))+D\int_{t-d(t)}^tf(Wv_i(s,x))\,ds+w_i(t,x),\tag{9}$$

where $v_i(t,x)=(v_{i1}(t,x),v_{i2}(t,x),\dots,v_{in}(t,x))^T\in\mathbb{R}^n$ is the state vector and $w_i(t,x)$ is the sampled-data control input to be designed. The boundary and initial conditions for (9) are given by

$$v_i(t,x)=0,\quad(t,x)\in[-\sigma,+\infty)\times\partial\Gamma,\tag{10}$$
$$v_i(s,x)=\psi(s,x),\quad\psi(s,x)\in C([-\sigma,0]\times\Gamma,\mathbb{R}^n).\tag{11}$$

Define the error vector as ei(t,x)=vi(t,x)-y(t,x). Subtracting (4) from (9), we arrive at the error system as follows:

$$\frac{\partial e_i(t,x)}{\partial t}=\sum_{k=1}^m\frac{\partial}{\partial x_k}\left(D_k\frac{\partial e_i(t,x)}{\partial x_k}\right)-Ce_i(t,x)+Ag(We_i(t,x))+Bg(We_i(t-\tau(t),x))+D\int_{t-d(t)}^tg(We_i(s,x))\,ds+w_i(t,x),\tag{12}$$

where g(Wei(t,x))=f(Wvi(t,x))-f(Wy(t,x)).

This paper adopts the following sampled-data controller:

$$w_i(t,x)=Ke_i(t_k,x),\quad t_k\le t<t_{k+1},\tag{13}$$

where $K\in\mathbb{R}^{n\times n}$ is the gain matrix to be determined and $t_k$ denotes the $k$th sampling instant, with $0=t_0<t_1<\dots<t_k<\dots$ and $\lim_{k\to+\infty}t_k=+\infty$.

Using the input delay approach, we set $h(t)=t-t_k$ for $t_k\le t<t_{k+1}$, so that $0\le h(t)<h$. Then the controller (13) becomes

$$w_i(t,x)=Ke_i(t-h(t),x).\tag{14}$$

Thus, (12) can be written as

$$\frac{\partial e_i(t,x)}{\partial t}=\sum_{k=1}^m\frac{\partial}{\partial x_k}\left(D_k\frac{\partial e_i(t,x)}{\partial x_k}\right)-Ce_i(t,x)+Ag(We_i(t,x))+Bg(We_i(t-\tau(t),x))+D\int_{t-d(t)}^tg(We_i(s,x))\,ds+Ke_i(t-h(t),x).\tag{15}$$
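The zero-order-hold mechanism behind (13)-(15) can be sketched with a minimal scalar simulation (illustrative only; the scalar dynamics, decay rate, gain and sampling period below are stand-ins, not the paper's system): between sampling instants the control input is frozen at the last sampled value, so the closed loop effectively sees the delayed state $e(t-h(t))$.

```python
# Scalar stand-in for sampled-data error dynamics:
#   de/dt = -c e(t) + K e(t_k),   t_k <= t < t_{k+1},
# integrated with forward Euler; e_held implements the zero-order hold.
c, K = 1.0, -2.0          # illustrative decay rate and sampled-data gain
h = 0.3                   # sampling period
dt = 1e-3                 # Euler step
e, e_held = 1.0, 1.0      # state and last sampled value
t, t_next = 0.0, 0.0
history = []
while t < 3.0:
    if t >= t_next:       # sampling instant t_k: refresh the held value
        e_held = e
        t_next += h
    e += dt * (-c * e + K * e_held)
    t += dt
    history.append(e)
print(abs(history[-1]))   # the error has decayed toward zero
```

With these values the per-interval contraction factor is roughly $3e^{-0.3}-2\approx0.22$, so the held (piecewise-constant) feedback still synchronizes the scalar loop despite the sampling gap.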

Main results

In this section, by constructing a Lyapunov–Krasovskii functional and utilizing advanced techniques, new criteria ensuring the global asymptotic synchronization of the considered generalized neural networks with reaction-diffusion terms and mixed delays are furnished. A design method for the sampled-data controller of the considered generalized reaction-diffusion neural networks is then proposed.

Further, let us define

$$\begin{aligned}
\zeta^T(t,x)=\Big[\,&e_i^T(t,x),\ \dot e_i^T(t,x),\ e_i^T(t-\tau_1,x),\ e_i^T(t-\tau(t),x),\ e_i^T(t-\tau_2,x),\ \dot e_i^T(t-\tau_1,x),\\
&\frac{1}{\tau_1}\int_{t-\tau_1}^te_i^T(s,x)\,ds,\ \frac{1}{\tau(t)-\tau_1}\int_{t-\tau(t)}^{t-\tau_1}e_i^T(s,x)\,ds,\ \frac{1}{\tau_2-\tau(t)}\int_{t-\tau_2}^{t-\tau(t)}e_i^T(s,x)\,ds,\\
&\frac{2}{\tau_1^2}\int_{-\tau_1}^0\!\int_{t+\theta}^te_i^T(s,x)\,ds\,d\theta,\ \frac{2}{\tau_2^2}\int_{-\tau_2}^0\!\int_{t+\theta}^te_i^T(s,x)\,ds\,d\theta,\\
&\frac{2}{(\tau(t)-\tau_1)^2}\int_{-\tau(t)}^{-\tau_1}\!\int_{t+\theta}^{t-\tau_1}e_i^T(s,x)\,ds\,d\theta,\ \frac{2}{(\tau_2-\tau(t))^2}\int_{-\tau_2}^{-\tau(t)}\!\int_{t+\theta}^{t-\tau(t)}e_i^T(s,x)\,ds\,d\theta,\\
&\frac{6}{\tau_1^3}\int_{-\tau_1}^0\!\int_\theta^0\!\int_{t+\lambda}^te_i^T(s,x)\,ds\,d\lambda\,d\theta,\ \frac{6}{\tau_2^3}\int_{-\tau_2}^0\!\int_\theta^0\!\int_{t+\lambda}^te_i^T(s,x)\,ds\,d\lambda\,d\theta,\\
&\frac{6}{(\tau(t)-\tau_1)^3}\int_{-\tau(t)}^{-\tau_1}\!\int_\theta^{-\tau_1}\!\int_{t+\lambda}^{t-\tau_1}e_i^T(s,x)\,ds\,d\lambda\,d\theta,\ \frac{6}{(\tau_2-\tau(t))^3}\int_{-\tau_2}^{-\tau(t)}\!\int_\theta^{-\tau(t)}\!\int_{t+\lambda}^{t-\tau(t)}e_i^T(s,x)\,ds\,d\lambda\,d\theta,\\
&\frac{24}{\tau_1^4}\int_{-\tau_1}^0\!\int_\theta^0\!\int_\lambda^0\!\int_{t+\kappa}^te_i^T(s,x)\,ds\,d\kappa\,d\lambda\,d\theta,\ \frac{24}{\tau_2^4}\int_{-\tau_2}^0\!\int_\theta^0\!\int_\lambda^0\!\int_{t+\kappa}^te_i^T(s,x)\,ds\,d\kappa\,d\lambda\,d\theta,\\
&\frac{24}{(\tau(t)-\tau_1)^4}\int_{-\tau(t)}^{-\tau_1}\!\int_\theta^{-\tau_1}\!\int_\lambda^{-\tau_1}\!\int_{t+\kappa}^{t-\tau_1}e_i^T(s,x)\,ds\,d\kappa\,d\lambda\,d\theta,\ \frac{24}{(\tau_2-\tau(t))^4}\int_{-\tau_2}^{-\tau(t)}\!\int_\theta^{-\tau(t)}\!\int_\lambda^{-\tau(t)}\!\int_{t+\kappa}^{t-\tau(t)}e_i^T(s,x)\,ds\,d\kappa\,d\lambda\,d\theta,\\
&e_i^T(t-h(t),x),\ e_i^T(t-h,x),\ g^T(We_i(t,x)),\ g^T(We_i(t-\tau_1,x)),\ g^T(We_i(t-\tau(t),x)),\ g^T(We_i(t-\tau_2,x)),\\
&\int_{t-d(t)}^tg^T(We_i(s,x))\,ds\,\Big],
\end{aligned}$$

$$r_1=[I\ 0\ \cdots\ 0],\quad r_2=[0\ I\ 0\ \cdots\ 0],\ \dots,\ r_{28}=[0\ \cdots\ 0\ I],\qquad\alpha=\frac{\tau(t)-\tau_1}{\tau_{12}},\quad\beta=\frac{\tau_2-\tau(t)}{\tau_{12}},\quad\tau_{12}=\tau_2-\tau_1,$$

$$\Lambda=\begin{bmatrix}\Sigma_2\\\Sigma_3\end{bmatrix}^T\begin{bmatrix}\tilde U_2&L\\L^T&\tilde U_2\end{bmatrix}\begin{bmatrix}\Sigma_2\\\Sigma_3\end{bmatrix},\qquad H_1=\mathrm{diag}\{l_1^-l_1^+,l_2^-l_2^+,\dots,l_n^-l_n^+\},\qquad H_2=\mathrm{diag}\Big\{\frac{l_1^-+l_1^+}{2},\frac{l_2^-+l_2^+}{2},\dots,\frac{l_n^-+l_n^+}{2}\Big\},$$

$$\begin{aligned}
\Sigma_1&=\big[(r_1-r_3)^T\ \ (r_1+r_3-2r_7)^T\ \ (r_1-r_3+6r_7-6r_{10})^T\ \ (r_1+r_3-12r_7+30r_{10}-20r_{14})^T\ \ (r_1-r_3+20r_7-90r_{10}+140r_{14}-70r_{18})^T\big]^T,\\
\Sigma_2&=\big[(r_3-r_4)^T\ \ (r_3+r_4-2r_8)^T\ \ (r_3-r_4+6r_8-6r_{12})^T\ \ (r_3+r_4-12r_8+30r_{12}-20r_{16})^T\ \ (r_3-r_4+20r_8-90r_{12}+140r_{16}-70r_{20})^T\big]^T,\\
\Sigma_3&=\big[(r_4-r_5)^T\ \ (r_4+r_5-2r_9)^T\ \ (r_4-r_5+6r_9-6r_{13})^T\ \ (r_4+r_5-12r_9+30r_{13}-20r_{17})^T\ \ (r_4-r_5+20r_9-90r_{13}+140r_{17}-70r_{21})^T\big]^T,\\
\Sigma_4&=\big[(r_1-r_7)^T\ \ (r_1+r_7-2r_{10})^T\ \ (r_1-r_7+6r_{10}-6r_{14})^T\ \ (r_1+r_7-12r_{10}+30r_{14}-20r_{18})^T\big]^T,\\
\Sigma_5&=\big[(r_4-r_9)^T\ \ (r_4+2r_9-3r_{13})^T\ \ (r_4-3r_9+12r_{13}-10r_{17})^T\ \ (r_4+4r_9-30r_{13}+45r_{17}-140r_{21})^T\big]^T,\\
\Sigma_6&=\big[(r_3-r_8)^T\ \ (r_3+2r_8-3r_{12})^T\ \ (r_3-3r_8+12r_{12}-10r_{16})^T\ \ (r_3+4r_8-30r_{12}+45r_{16}-35r_{20})^T\big]^T,\\
\Sigma_7&=\big[(r_1-r_{10})^T\ \ (r_1+3r_{10}-4r_{14})^T\ \ (r_1-6r_{10}+20r_{14}-15r_{18})^T\big]^T,\\
\Sigma_8&=\big[(r_1-r_{11})^T\ \ (r_1+3r_{11}-4r_{15})^T\ \ (r_1-6r_{11}+20r_{15}-15r_{19})^T\big]^T,\\
\Sigma_9&=\big[(r_1-r_{19})^T\ \ (r_1+r_{19}-2r_{20})^T\ \ (r_1-r_{19}+6r_{20}-6r_{21})^T\big]^T,
\end{aligned}$$

$$\begin{aligned}
&\tilde U_1=\mathrm{diag}\{U_1,3U_1,5U_1,7U_1,9U_1\},\quad\tilde U_2=\mathrm{diag}\{U_2,3U_2,5U_2,7U_2,9U_2\},\quad\tilde S_2=\mathrm{diag}\{S_2,3S_2,5S_2,7S_2,9S_2\},\\
&\bar S_1=\mathrm{diag}\{2S_1,4S_1,6S_1,8S_1\},\quad\bar S_2=\mathrm{diag}\{2S_2,4S_2,6S_2,8S_2\},\quad\bar S_3=\mathrm{diag}\{2S_3,4S_3,6S_3\},\\
&\bar T_1=\mathrm{diag}\{3T_1,5T_1,7T_1\},\quad\bar T_2=\mathrm{diag}\{3T_2,5T_2,7T_2\}.
\end{aligned}$$

Theorem 1

The generalized neural networks with reaction-diffusion terms and mixed time-varying delays can be globally asymptotically synchronized under a sampled-data controller if there exist matrices $P>0$, $G>0$, $Q_\nu>0\ (\nu=1,2,3)$, $U_{\tilde\nu}>0$, $S_{\tilde\nu}>0$, $T_{\tilde\nu}>0$, $X_{\tilde\nu}>0\ (\tilde\nu=1,2)$, $W_1>0$, a matrix $L$, a matrix $F$ and positive diagonal matrices $\Omega_1,\Omega_2,\Omega_3,\Omega_4$ such that the following LMIs hold:

$$\begin{bmatrix}\tilde U_2+\tilde S_2&L\\L^T&\tilde U_2\end{bmatrix}\ge0,\tag{16}$$
$$\Phi<0,\tag{17}$$

where

$$\begin{aligned}
\Phi={}&r_2^TPr_1-r_1^TGr_2-r_1^TG\tilde Dr_1-r_1^TGCr_1+r_1^TGAr_{24}+r_1^TGBr_{26}+r_1^TGDr_{28}+r_1^TFr_{22}\\
&-r_2^TGr_2+r_2^TGCr_1+r_2^TGAr_{24}+r_2^TGBr_{26}+r_2^TGDr_{28}+r_2^TFr_{22}\\
&+r_1^T(Q_1+Q_2)r_1+r_3^T(Q_3-Q_1)r_3-(1-\mu)r_4^TQ_2r_4-r_5^TQ_3r_5\\
&+\tau_1^2r_2^TU_1r_2-\tau_{12}^2r_6^TU_2r_6+(\tau_1^2/2)r_2^TS_1r_2+r_6^TS_2r_6+(\tau_1^3/6)r_2^TT_1r_2+(\tau_2^3/6)r_2^TT_2r_2\\
&+d^2r_{24}^TW_1r_{24}+r_1^TX_1r_1-r_{23}^TX_1r_{23}+h^2r_2^TX_2r_2-r_{28}^TW_1r_{28}\\
&-(r_{22}-r_{23})^TX_2(r_{22}-r_{23})-(r_1-r_{22})^TX_2(r_1-r_{22})\\
&-\Sigma_1^T\tilde U_1\Sigma_1-\Sigma_4^T\bar S_1\Sigma_4-\Sigma_5^T\bar S_2\Sigma_5-\Sigma_6^T\bar S_2\Sigma_6-\Sigma_7^T\bar T_1\Sigma_7-\Sigma_8^T\bar T_2\Sigma_8\\
&-r_1^TH_1\Omega_1r_1+2r_1^TH_2\Omega_1r_{24}-r_{24}^T\Omega_1r_{24}-r_4^TH_1\Omega_2r_4+2r_4^TH_2\Omega_2r_{26}-r_{26}^T\Omega_2r_{26}\\
&-r_3^TH_1\Omega_3r_3+2r_3^TH_2\Omega_3r_{25}-r_{25}^T\Omega_3r_{25}-r_5^TH_1\Omega_4r_5+2r_5^TH_2\Omega_4r_{27}-r_{27}^T\Omega_4r_{27}-\Lambda,
\end{aligned}$$

with $F=GK$, in which case the controller gain is recovered as $K=G^{-1}F$.
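Since the gain enters the LMI only through the product $F=GK$, the design step after solving the LMIs is a single linear solve. The sketch below illustrates this recovery in Python with illustrative values of $G$ and $F$ (not matrices obtained from the paper's LMIs):

```python
import numpy as np

# Recover the controller gain K from F = G K, i.e. K = G^{-1} F.
# G and F are illustrative stand-ins for LMI solver output.
G = np.array([[2.0, 0.5],
              [0.5, 1.5]])
F = np.array([[-1.0, 0.3],
              [0.2, -0.8]])
K = np.linalg.solve(G, F)      # solves G K = F without forming G^{-1}
print(np.allclose(G @ K, F))   # the factorization F = G K holds
```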

Proof

Consider the Lyapunov–Krasovskii functional candidate

$$V(t,x)=\sum_{\kappa=1}^9V_\kappa(t,x),\tag{18}$$

where

$$\begin{aligned}
V_1(t,x)&=\int_\Gamma\sum_{i=1}^n\Big[e_i^T(t,x)Pe_i(t,x)+\sum_{k=1}^m\Big(\frac{\partial e_i(t,x)}{\partial x_k}\Big)^TD_kG\frac{\partial e_i(t,x)}{\partial x_k}\Big]dx,\\
V_2(t,x)&=\int_\Gamma\sum_{i=1}^n\Big[\int_{t-\tau_1}^te_i^T(s,x)Q_1e_i(s,x)\,ds+\int_{t-\tau(t)}^te_i^T(s,x)Q_2e_i(s,x)\,ds+\int_{t-\tau_2}^{t-\tau_1}e_i^T(s,x)Q_3e_i(s,x)\,ds\Big]dx,\\
V_3(t,x)&=\tau_1\int_\Gamma\sum_{i=1}^n\int_{-\tau_1}^0\!\int_{t+\theta}^t\dot e_i^T(s,x)U_1\dot e_i(s,x)\,ds\,d\theta\,dx+\tau_{12}\int_\Gamma\sum_{i=1}^n\int_{-\tau_2}^{-\tau_1}\!\int_{t+\theta}^{t-\tau_1}\dot e_i^T(s,x)U_2\dot e_i(s,x)\,ds\,d\theta\,dx,\\
V_4(t,x)&=\int_\Gamma\sum_{i=1}^n\int_{-\tau_1}^0\!\int_\theta^0\!\int_{t+\lambda}^t\dot e_i^T(s,x)S_1\dot e_i(s,x)\,ds\,d\lambda\,d\theta\,dx,\\
V_5(t,x)&=\int_\Gamma\sum_{i=1}^n\int_{-\tau_2}^{-\tau_1}\!\int_\theta^{-\tau_1}\!\int_{t+\lambda}^{t-\tau_1}\dot e_i^T(s,x)S_2\dot e_i(s,x)\,ds\,d\lambda\,d\theta\,dx,\\
V_6(t,x)&=\int_\Gamma\sum_{i=1}^n\int_{-\tau_1}^0\!\int_\theta^0\!\int_\lambda^0\!\int_{t+\kappa}^t\dot e_i^T(s,x)T_1\dot e_i(s,x)\,ds\,d\kappa\,d\lambda\,d\theta\,dx,\\
V_7(t,x)&=\int_\Gamma\sum_{i=1}^n\int_{-\tau_2}^0\!\int_\theta^0\!\int_\lambda^0\!\int_{t+\kappa}^t\dot e_i^T(s,x)T_2\dot e_i(s,x)\,ds\,d\kappa\,d\lambda\,d\theta\,dx,\\
V_8(t,x)&=d\int_\Gamma\sum_{i=1}^n\int_{-d}^0\!\int_{t+\theta}^tg^T(We_i(s,x))W_1g(We_i(s,x))\,ds\,d\theta\,dx,\\
V_9(t,x)&=\int_\Gamma\sum_{i=1}^n\int_{t-h}^te_i^T(s,x)X_1e_i(s,x)\,ds\,dx+h\int_\Gamma\sum_{i=1}^n\int_{-h}^0\!\int_{t+\theta}^t\dot e_i^T(s,x)X_2\dot e_i(s,x)\,ds\,d\theta\,dx.
\end{aligned}$$

The time derivative of $V(t,x)$ is computed as follows:

$$\dot V_1(t,x)=2\int_\Gamma\sum_{i=1}^n\Big[\dot e_i^T(t,x)Pe_i(t,x)+\sum_{k=1}^m\Big(\frac{\partial\dot e_i(t,x)}{\partial x_k}\Big)^TD_kG\frac{\partial e_i(t,x)}{\partial x_k}\Big]dx,\tag{19}$$
$$\dot V_2(t,x)=\int_\Gamma\sum_{i=1}^n\zeta^T(t,x)\{r_1^T(Q_1+Q_2)r_1+r_3^T(Q_3-Q_1)r_3-(1-\mu)r_4^TQ_2r_4-r_5^TQ_3r_5\}\zeta(t,x)\,dx,\tag{20}$$
$$\dot V_3(t,x)=\int_\Gamma\sum_{i=1}^n\Big[\zeta^T(t,x)\{\tau_1^2r_2^TU_1r_2-\tau_{12}^2r_6^TU_2r_6\}\zeta(t,x)-\tau_1\int_{t-\tau_1}^t\dot e_i^T(s,x)U_1\dot e_i(s,x)\,ds-\tau_{12}\int_{t-\tau_2}^{t-\tau_1}\dot e_i^T(s,x)U_2\dot e_i(s,x)\,ds\Big]dx,\tag{21}$$
$$\dot V_4(t,x)=\int_\Gamma\sum_{i=1}^n\Big[\zeta^T(t,x)\{(\tau_1^2/2)r_2^TS_1r_2\}\zeta(t,x)-\int_{-\tau_1}^0\!\int_{t+\theta}^t\dot e_i^T(s,x)S_1\dot e_i(s,x)\,ds\,d\theta\Big]dx,\tag{22}$$
$$\dot V_5(t,x)=\int_\Gamma\sum_{i=1}^n\Big[\zeta^T(t,x)r_6^TS_2r_6\zeta(t,x)-\int_{-\tau_2}^{-\tau(t)}\!\int_{t+\theta}^{t-\tau(t)}\dot e_i^T(s,x)S_2\dot e_i(s,x)\,ds\,d\theta-(\tau_2-\tau(t))\int_{t-\tau(t)}^{t-\tau_1}\dot e_i^T(s,x)S_2\dot e_i(s,x)\,ds-\int_{-\tau(t)}^{-\tau_1}\!\int_{t+\theta}^{t-\tau_1}\dot e_i^T(s,x)S_2\dot e_i(s,x)\,ds\,d\theta\Big]dx,\tag{23}$$
$$\dot V_6(t,x)=\int_\Gamma\sum_{i=1}^n\Big[\zeta^T(t,x)(\tau_1^3/6)r_2^TT_1r_2\zeta(t,x)-\int_{-\tau_1}^0\!\int_\theta^0\!\int_{t+\lambda}^t\dot e_i^T(s,x)T_1\dot e_i(s,x)\,ds\,d\lambda\,d\theta\Big]dx,\tag{24}$$
$$\dot V_7(t,x)=\int_\Gamma\sum_{i=1}^n\Big[\zeta^T(t,x)(\tau_2^3/6)r_2^TT_2r_2\zeta(t,x)-\int_{-\tau_2}^0\!\int_\theta^0\!\int_{t+\lambda}^t\dot e_i^T(s,x)T_2\dot e_i(s,x)\,ds\,d\lambda\,d\theta\Big]dx,\tag{25}$$
$$\dot V_8(t,x)=\int_\Gamma\sum_{i=1}^n\Big[\zeta^T(t,x)d^2r_{24}^TW_1r_{24}\zeta(t,x)-d\int_{t-d}^tg^T(We_i(s,x))W_1g(We_i(s,x))\,ds\Big]dx,\tag{26}$$
$$\dot V_9(t,x)=\int_\Gamma\sum_{i=1}^n\Big[\zeta^T(t,x)\{r_1^TX_1r_1-r_{23}^TX_1r_{23}+h^2r_2^TX_2r_2\}\zeta(t,x)-h\int_{t-h}^t\dot e_i^T(s,x)X_2\dot e_i(s,x)\,ds\Big]dx.\tag{27}$$

Now by employing Lemma 2 to the integral terms in (21)–(25), we arrive at

$$-\tau_1\int_{t-\tau_1}^t\dot e_i^T(s,x)U_1\dot e_i(s,x)\,ds\le-\zeta^T(t,x)\Sigma_1^T\tilde U_1\Sigma_1\zeta(t,x),\tag{28}$$
$$-\tau_{12}\int_{t-\tau(t)}^{t-\tau_1}\dot e_i^T(s,x)U_2\dot e_i(s,x)\,ds\le-\frac{1}{\alpha}\zeta^T(t,x)\Sigma_2^T\tilde U_2\Sigma_2\zeta(t,x),\tag{29}$$
$$-\tau_{12}\int_{t-\tau_2}^{t-\tau(t)}\dot e_i^T(s,x)U_2\dot e_i(s,x)\,ds\le-\frac{1}{\beta}\zeta^T(t,x)\Sigma_3^T\tilde U_2\Sigma_3\zeta(t,x),\tag{30}$$
$$-\int_{-\tau_1}^0\!\int_{t+\theta}^t\dot e_i^T(s,x)S_1\dot e_i(s,x)\,ds\,d\theta\le-\zeta^T(t,x)\Sigma_4^T\bar S_1\Sigma_4\zeta(t,x),\tag{31}$$
$$-\int_{-\tau_2}^{-\tau(t)}\!\int_{t+\theta}^{t-\tau(t)}\dot e_i^T(s,x)S_2\dot e_i(s,x)\,ds\,d\theta\le-\zeta^T(t,x)\Sigma_5^T\bar S_2\Sigma_5\zeta(t,x),\tag{32}$$
$$-(\tau_2-\tau(t))\int_{t-\tau(t)}^{t-\tau_1}\dot e_i^T(s,x)S_2\dot e_i(s,x)\,ds\le-\Big(\frac{1}{\alpha}-1\Big)\zeta^T(t,x)\Sigma_2^T\tilde S_2\Sigma_2\zeta(t,x),\tag{33}$$
$$-\int_{-\tau(t)}^{-\tau_1}\!\int_{t+\theta}^{t-\tau_1}\dot e_i^T(s,x)S_2\dot e_i(s,x)\,ds\,d\theta\le-\zeta^T(t,x)\Sigma_6^T\bar S_2\Sigma_6\zeta(t,x),\tag{34}$$
$$-\int_{-\tau_1}^0\!\int_\theta^0\!\int_{t+\lambda}^t\dot e_i^T(s,x)T_1\dot e_i(s,x)\,ds\,d\lambda\,d\theta\le-\zeta^T(t,x)\Sigma_7^T\bar T_1\Sigma_7\zeta(t,x),\tag{35}$$
$$-\int_{-\tau_2}^0\!\int_\theta^0\!\int_{t+\lambda}^t\dot e_i^T(s,x)T_2\dot e_i(s,x)\,ds\,d\lambda\,d\theta\le-\zeta^T(t,x)\Sigma_8^T\bar T_2\Sigma_8\zeta(t,x).\tag{36}$$

Further,

$$-d\int_{t-d}^tg^T(We_i(s,x))W_1g(We_i(s,x))\,ds\le-d(t)\int_{t-d(t)}^tg^T(We_i(s,x))W_1g(We_i(s,x))\,ds\le-\zeta^T(t,x)r_{28}^TW_1r_{28}\zeta(t,x),\tag{37}$$
$$-h\int_{t-h}^t\dot e_i^T(s,x)X_2\dot e_i(s,x)\,ds=-h\int_{t-h}^{t-h(t)}\dot e_i^T(s,x)X_2\dot e_i(s,x)\,ds-h\int_{t-h(t)}^t\dot e_i^T(s,x)X_2\dot e_i(s,x)\,ds\le-\zeta^T(t,x)\{(r_{22}-r_{23})^TX_2(r_{22}-r_{23})+(r_1-r_{22})^TX_2(r_1-r_{22})\}\zeta(t,x).\tag{38}$$

According to the error system (15), we have

$$0=\int_\Gamma\sum_{i=1}^N2\big[e_i^T(t,x)G+\dot e_i^T(t,x)G\big]\Big[-\dot e_i(t,x)+\sum_{k=1}^m\frac{\partial}{\partial x_k}\Big(D_k\frac{\partial e_i(t,x)}{\partial x_k}\Big)-Ce_i(t,x)+Ag(We_i(t,x))+Bg(We_i(t-\tau(t),x))+D\int_{t-d(t)}^tg(We_i(s,x))\,ds+Ke_i(t-h(t),x)\Big]dx.\tag{39}$$

Then, by using Green's formula, the Dirichlet boundary condition and Lemma 1 on (39), and adding the result to $\dot V_1(t,x)$, we obtain

$$\dot V_1(t,x)\le2\int_\Gamma\zeta^T(t,x)\{r_2^TPr_1-r_1^TGr_2-r_1^TG\tilde Dr_1-r_1^TGCr_1+r_1^TGAr_{24}+r_1^TGBr_{26}+r_1^TGDr_{28}+r_1^TGKr_{22}-r_2^TGr_2+r_2^TGCr_1+r_2^TGAr_{24}+r_2^TGBr_{26}+r_2^TGDr_{28}+r_2^TGKr_{22}\}\zeta(t,x)\,dx.\tag{40}$$

Moreover, for positive diagonal matrices $\Omega_1$, $\Omega_2$, $\Omega_3$ and $\Omega_4$, it follows from Assumption 1 that

$$\begin{bmatrix}e_i(t,x)\\g(We_i(t,x))\end{bmatrix}^T\begin{bmatrix}H_1\Omega_1&-H_2\Omega_1\\ *&\Omega_1\end{bmatrix}\begin{bmatrix}e_i(t,x)\\g(We_i(t,x))\end{bmatrix}\le0,\tag{41}$$
$$\begin{bmatrix}e_i(t-\tau(t),x)\\g(We_i(t-\tau(t),x))\end{bmatrix}^T\begin{bmatrix}H_1\Omega_2&-H_2\Omega_2\\ *&\Omega_2\end{bmatrix}\begin{bmatrix}e_i(t-\tau(t),x)\\g(We_i(t-\tau(t),x))\end{bmatrix}\le0,\tag{42}$$
$$\begin{bmatrix}e_i(t-\tau_1,x)\\g(We_i(t-\tau_1,x))\end{bmatrix}^T\begin{bmatrix}H_1\Omega_3&-H_2\Omega_3\\ *&\Omega_3\end{bmatrix}\begin{bmatrix}e_i(t-\tau_1,x)\\g(We_i(t-\tau_1,x))\end{bmatrix}\le0,\tag{43}$$
$$\begin{bmatrix}e_i(t-\tau_2,x)\\g(We_i(t-\tau_2,x))\end{bmatrix}^T\begin{bmatrix}H_1\Omega_4&-H_2\Omega_4\\ *&\Omega_4\end{bmatrix}\begin{bmatrix}e_i(t-\tau_2,x)\\g(We_i(t-\tau_2,x))\end{bmatrix}\le0.\tag{44}$$

Now, by employing Lemma 4 to (29), (30) and (33), we obtain

$$\frac{1}{\alpha}\zeta^T(t,x)\Sigma_2^T\tilde U_2\Sigma_2\zeta(t,x)+\frac{1}{\beta}\zeta^T(t,x)\Sigma_3^T\tilde U_2\Sigma_3\zeta(t,x)+\Big(\frac{1}{\alpha}-1\Big)\zeta^T(t,x)\Sigma_2^T\tilde S_2\Sigma_2\zeta(t,x)=\frac{1}{\alpha}\zeta^T(t,x)\Sigma_2^T\big(\tilde U_2+\tilde S_2\big)\Sigma_2\zeta(t,x)+\frac{1}{\beta}\zeta^T(t,x)\Sigma_3^T\tilde U_2\Sigma_3\zeta(t,x)-\zeta^T(t,x)\Sigma_2^T\tilde S_2\Sigma_2\zeta(t,x)\ge\zeta^T(t,x)\Lambda\zeta(t,x).\tag{45}$$

Upon adding all $\dot V_\kappa(t,x)$, $\kappa=1,2,\dots,9$, along with (41)–(44), we have

$$\dot V(t,x)\le\zeta^T(t,x)\Phi\zeta(t,x)<0,\tag{46}$$

where $\Phi$ is given in (17). This proves that the error system (15) is globally asymptotically stable; that is, the considered generalized reaction-diffusion neural networks with mixed time-varying delays are globally asymptotically synchronized under the sampled-data controller (14).

Remark 3

Lemma 1 is utilized to deal with the reaction-diffusion terms. It can be seen in the literature on the qualitative analysis of reaction-diffusion neural networks, for example Wang et al. (2012b), Liu (2010), Wang and Zhang (2010) and Lv et al. (2008), that the impact of the diffusion terms has often been neglected. The results derived in this paper, in contrast, include the effect of the diffusion terms, which is worth mentioning. Moreover, the results presented here are generic, and no restriction is imposed on the time-varying delay.

Remark 4

It is to be noticed that Theorem 1 furnishes a synchronization scheme for generalized delayed reaction-diffusion neural networks in the framework of the input delay approach. The results are revealed in the form of LMIs. An advantage of the LMI approach is that the LMI conditions can be verified easily and effectively by employing available software for solving LMIs.

If the effects of diffusion are ignored, the following result is obtained.

Theorem 2

The generalized neural networks with mixed time-varying delays can be globally asymptotically synchronized under a sampled-data controller if there exist matrices $P>0$, $G>0$, $Q_\nu>0\ (\nu=1,2,3)$, $U_{\tilde\nu}>0$, $S_{\tilde\nu}>0$, $T_{\tilde\nu}>0$, $X_{\tilde\nu}>0\ (\tilde\nu=1,2)$, $W_1>0$, a matrix $L$, a matrix $F$ and positive diagonal matrices $\Omega_1,\Omega_2,\Omega_3,\Omega_4$ such that (16) and the following LMI hold:

$$\Psi<0,\tag{47}$$

with

$$\begin{aligned}
\Psi={}&r_2^TPr_1-r_1^TGr_2-r_1^TGCr_1+r_1^TGAr_{24}+r_1^TGBr_{26}+r_1^TGDr_{28}+r_1^TFr_{22}\\
&-r_2^TGr_2+r_2^TGCr_1+r_2^TGAr_{24}+r_2^TGBr_{26}+r_2^TGDr_{28}+r_2^TFr_{22}\\
&+r_1^T(Q_1+Q_2)r_1+r_3^T(Q_3-Q_1)r_3-(1-\mu)r_4^TQ_2r_4-r_5^TQ_3r_5\\
&+\tau_1^2r_2^TU_1r_2-\tau_{12}^2r_6^TU_2r_6+(\tau_1^2/2)r_2^TS_1r_2+r_6^TS_2r_6+(\tau_1^3/6)r_2^TT_1r_2+(\tau_2^3/6)r_2^TT_2r_2\\
&+d^2r_{24}^TW_1r_{24}+r_1^TX_1r_1-r_{23}^TX_1r_{23}+h^2r_2^TX_2r_2-r_{28}^TW_1r_{28}\\
&-(r_{22}-r_{23})^TX_2(r_{22}-r_{23})-(r_1-r_{22})^TX_2(r_1-r_{22})\\
&-\Sigma_1^T\tilde U_1\Sigma_1-\Sigma_4^T\bar S_1\Sigma_4-\Sigma_5^T\bar S_2\Sigma_5-\Sigma_6^T\bar S_2\Sigma_6-\Sigma_7^T\bar T_1\Sigma_7-\Sigma_8^T\bar T_2\Sigma_8\\
&-r_1^TH_1\Omega_1r_1+2r_1^TH_2\Omega_1r_{24}-r_{24}^T\Omega_1r_{24}-r_4^TH_1\Omega_2r_4+2r_4^TH_2\Omega_2r_{26}-r_{26}^T\Omega_2r_{26}\\
&-r_3^TH_1\Omega_3r_3+2r_3^TH_2\Omega_3r_{25}-r_{25}^T\Omega_3r_{25}-r_5^TH_1\Omega_4r_5+2r_5^TH_2\Omega_4r_{27}-r_{27}^T\Omega_4r_{27}-\Lambda,
\end{aligned}$$

where F=GK.

Proof

The proof follows that of Theorem 1 with the second term of $V_1(t,x)$ in (18) omitted.

Remark 5

In general, the computational complexity depends on the size of the LMIs and the number of decision variables. The results in Theorem 1 are derived by constructing a proper Lyapunov–Krasovskii functional with triple and quadruple integral terms and by using a newly introduced integral inequality that produces tighter bounds than existing ones such as the Wirtinger-based inequality, Jensen's inequality and the auxiliary function-based integral inequalities. It should be mentioned that the derived synchronization criteria for the considered neural networks with reaction-diffusion terms and time-varying delays are less conservative than other conditions in the literature, which will be demonstrated in the next section. Meanwhile, this reduced conservatism is acquired at the cost of a larger number of decision variables. Although larger maximum allowable upper bounds indicate more effective results, in order to reduce the computational burden and time consumption, our future work will focus on reducing the number of decision variables.

Remark 6

It is worth mentioning that only a few works concerning the qualitative analysis of generalized neural networks have been published (Zheng et al. 2015; Liu et al. 2015a; Rakkiyappan et al. 2016b). In Zheng et al. (2015) and Liu et al. (2015a), the stability analysis of a class of generalized neural networks with time-varying delays was carried out based on a free-matrix-based inequality and an improved inequality, respectively. By constructing an augmented Lyapunov–Krasovskii functional with more information on the activation functions, improved delay-dependent stability criteria for generalized neural networks with additive time-varying delays were presented in Rakkiyappan et al. (2016b). However, none of these works includes diffusion phenomena in the neural network model, which cannot be ignored when electrons move in a nonuniform electromagnetic field. Therefore, to manufacture high-quality neural networks, one must allow the activations to vary in space as well as in time, in which case the model should be expressed by partial differential equations. In the existing literature there are only very few works that investigate the synchronization of reaction-diffusion neural networks with time-varying delays, namely Gan (2012) and Gan et al. (2016). Different from these, the present paper focuses on the synchronization behavior of generalized reaction-diffusion neural networks with mixed time-varying delays, where synchronization is realized by a discrete controller, namely the sampled-data controller. Moreover, by using a new class of integral inequalities for quadratic functions, from which almost all existing integral inequalities can be obtained, such as Jensen's inequality, the Wirtinger-based inequality, the Bessel-Legendre inequality, the Wirtinger-based double integral inequality and the auxiliary function-based integral inequalities, less conservative synchronization criteria that depend on the information of the delays as well as on the reaction-diffusion terms have been derived. The obtained results are formulated as LMIs, which can be solved efficiently via the Matlab LMI control toolbox.

Numerical examples

This section offers two numerical examples with simulations to verify the validity of the theoretical results derived in this paper.

Example 1

Consider the drive-response generalized reaction-diffusion neural network models of the form (7) and (9):

$$\frac{\partial y(t,x)}{\partial t}=\sum_{k=1}^m\frac{\partial}{\partial x_k}\left(D_k\frac{\partial y(t,x)}{\partial x_k}\right)-Cy(t,x)+Af(Wy(t,x))+Bf(Wy(t-\tau(t),x))+D\int_{t-d(t)}^tf(Wy(s,x))\,ds,$$
$$y(t,x)=0,\ (t,x)\in[-\sigma,+\infty)\times\partial\Gamma,\qquad\phi(s,x)=0.7\sin(\pi x),\tag{48}$$

and

$$\frac{\partial v_i(t,x)}{\partial t}=\sum_{k=1}^m\frac{\partial}{\partial x_k}\left(D_k\frac{\partial v_i(t,x)}{\partial x_k}\right)-Cv_i(t,x)+Af(Wv_i(t,x))+Bf(Wv_i(t-\tau(t),x))+D\int_{t-d(t)}^tf(Wv_i(s,x))\,ds+w_i(t,x),$$
$$v_i(t,x)=0,\ (t,x)\in[-\sigma,+\infty)\times\partial\Gamma,\qquad\psi(s,x)=0.3\sin(\pi x),\tag{49}$$

with the parameters

$$C=\begin{bmatrix}1&0\\0&1\end{bmatrix},\quad A=\begin{bmatrix}0.2&2\\-0.5&-0.2\end{bmatrix},\quad B=\begin{bmatrix}1&0.8\\-0.1&1.8\end{bmatrix},\quad D=\begin{bmatrix}0.8&0.3\\-0.5&-0.1\end{bmatrix},$$

$D_k=\mathrm{diag}\{0.01,0.01\}$ and $W=\mathrm{diag}\{1,1\}$. We let $\tau(t)=0.1+0.4\sin(t)$ and $d(t)=0.9+0.1\cos(t)$. Moreover, the activation function is taken as $f(y(s,x))=0.25\tanh(y(s,x))-0.25$. Through simple calculation, we obtain $\tau_1=0$, $\tau_2=0.5$, $\mu=0.4$, $\sigma=1$, $l_j^-=-0.5$, $l_j^+=0$. The sampling period is taken as $h=0.3$.

Next, by using the Matlab LMI toolbox, the sufficient conditions in Theorem 1 are verified under the above chosen parameters and found to be feasible with the control gain matrix

$$
K=\begin{bmatrix}-2.5830&-3.0801\\-2.9871&1.5342\end{bmatrix}.
$$

Thus, we conclude that the generalized neural network model with mixed delays under a sampled-data controller is globally asymptotically synchronized.
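The feasibility test itself is carried out in MATLAB. As a rough open-source analogue of such a check, the sketch below tests the simplest Lyapunov LMI pair P > 0, AᵀP + PA < 0 by solving the corresponding Lyapunov equation with SciPy; the stable test matrix is a hypothetical illustration and is not one of the system matrices above, and the paper's actual LMIs in Theorem 1 are far larger.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable test matrix (NOT taken from the paper's examples).
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])

# Solve A^T P + P A = -Q with Q = I.  If the solution P is positive
# definite, the LMI pair  P > 0,  A^T P + P A < 0  is feasible.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

eig_min = np.linalg.eigvalsh((P + P.T) / 2.0).min()
assert eig_min > 0.0   # feasible: P is positive definite
```

For the full theorem one would instead pass the assembled block LMIs to a semidefinite-programming solver, which is exactly the role the Matlab LMI control toolbox plays here.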

From Fig. 1 one can see that, under the designed controller, the error dynamics of the generalized reaction-diffusion neural networks converge to zero. The simulation results clearly illustrate the effectiveness of the controller in achieving asymptotic synchronization of generalized reaction-diffusion neural networks with mixed delays.

Fig. 1 Dynamical behavior of the error states $e_{i1}(t,x)$ and $e_{i2}(t,x)$

Moreover, this paper utilizes novel integral inequalities to bound certain cross terms that arise when computing the time derivative of the Lyapunov–Krasovskii functional. To show the effectiveness of these inequalities, the maximum allowable upper bounds are calculated and presented in Table 1.

Table 1.

The maximum allowable upper bound τ2 for different τ1

τ1 0.2 0.4 0.6 0.8
τ2 0.9024 0.8842 0.8005 0.6522

Example 2

Consider the generalized neural network model of Example 1 without the diffusion term:

$$
\frac{\partial y(t,x)}{\partial t} = -C\,y(t,x) + A f(W y(t,x)) + B f(W y(t-\tau(t),x)) + D\int_{t-d(t)}^{t} f(W y(s,x))\,\mathrm{d}s,
\tag{50}
$$

where we take

$$
C=\begin{bmatrix}1.2&0\\0&0.1\end{bmatrix},\quad
A=\begin{bmatrix}0.2&0.5\\0.3&0.1\end{bmatrix},\quad
B=\begin{bmatrix}0.2&0.1\\-0.2&-0.1\end{bmatrix},\quad
D=\begin{bmatrix}0.4&1\\-0.5&0.1\end{bmatrix}
$$

and the activation function is chosen to be $f(y(s,x))=\frac{1}{2}\left(|y(s,x)+1|-|y(s,x)-1|\right)$, with $\tau(t)=0.6+0.5\sin(t)$ and $d(t)=0.8+0.4\sin(t)$. A straightforward calculation yields $l_i^-=0$, $l_i^+=1$, $\tau_2=1.1$ and $\mu=0.5$. The sampling period is chosen as $h=0.5$. Verifying the conditions of Theorem 2 using the Matlab LMI toolbox, the LMIs are found to be feasible and the controller gain is calculated as

$$
K=\begin{bmatrix}-1.7841&-3.6279\\-2.2487&-1.7690\end{bmatrix}.
$$
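As in Example 1, the quoted sector and delay bounds can be checked numerically. The sketch below estimates the sector bounds of the piecewise-linear activation from difference quotients and bounds the delay signals over one period; the grid sizes are illustrative choices.

```python
import numpy as np

# Activation function of Example 2.
f = lambda y: 0.5 * (np.abs(y + 1.0) - np.abs(y - 1.0))

# Sector bounds l^- and l^+ from difference quotients (f(a)-f(b))/(a-b).
y = np.linspace(-3.0, 3.0, 601)
a, b = np.meshgrid(y, y)
mask = np.abs(a - b) > 1e-9          # exclude a == b pairs
q = (f(a[mask]) - f(b[mask])) / (a[mask] - b[mask])
l_minus, l_plus = q.min(), q.max()   # approx 0 and 1

# Delay bounds over one period of tau(t) = 0.6 + 0.5*sin(t).
t = np.linspace(0.0, 2.0 * np.pi, 100001)
tau2 = (0.6 + 0.5 * np.sin(t)).max()   # approx 1.1
mu = (0.5 * np.cos(t)).max()           # approx 0.5
```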

Table 2 lists the maximum allowable upper bound τ2 for different τ1.

Table 2.

The maximum allowable upper bound τ2 for different τ1

τ1 0.2 0.4 0.6 0.8
Theorem 1 (Chen et al. 2016b) 1.1139 0.9137 0.7135 0.5096
Theorem 2 1.1314 0.9421 0.7540 0.5201

It can be seen from Table 2 that Theorem 2 achieves larger allowable upper bounds than those obtained in Chen et al. (2016b), implying that Theorem 2 is less conservative; this improvement stems from the newly introduced general integral inequality lemma.

Conclusions

Over the past few years, the synchronization of reaction-diffusion neural networks has become a hot area of research. However, only a few works on the synchronization problem for generalized reaction-diffusion neural networks have been established. This paper focuses on the synchronization of generalized reaction-diffusion neural networks with mixed time-varying delay components under a sampled-data controller. By virtue of a general integral inequality based on orthogonal polynomials and Lyapunov–Krasovskii functionals with triple- and quadruple-integral terms, new, less conservative synchronization criteria have been derived in terms of LMIs. The acquired LMIs are solved numerically through the MATLAB LMI toolbox. Finally, numerical simulations are given to illustrate the validity of the proposed theoretical results.

In practical sampled-data systems, the control packet can be lost because of several factors, and this may induce undesirable behavior of the system under control. Therefore, it is worthwhile to consider the problem of synchronization of generalized reaction-diffusion neural networks using sampled-data control with control packet loss, which will be one of our future topics.

Contributor Information

S. Dharani, Email: sdharanimails@gmail.com

R. Rakkiyappan, Email: rakkigru@gmail.com

Jinde Cao, Email: jdcao@seu.edu.cn.

Ahmed Alsaedi, Email: aalsaedi@hotmail.com.

References

  1. Atencia M, Joya G, Sandoval F. Dynamical analysis of continuous higher order Hopfield neural networks for combinatorial optimization. Neural Comput. 2005;17:1802–1819. doi: 10.1162/0899766054026620. [DOI] [PubMed] [Google Scholar]
  2. Bao H, Park JH, Cao J. Matrix measure strategies for exponential synchronization and anti-synchronization of memristor-based neural networks with time-varying delays. Appl Math Comput. 2015;270:543–556. [Google Scholar]
  3. Cao J, Li R. Fixed-time synchronization of delayed memristor-based recurrent neural networks. Sci China Inf Sci. 2017;60(3):032201. doi: 10.1007/s11432-016-0555-2. [DOI] [Google Scholar]
  4. Cao J, Rakkiyappan R, Maheswari K, Chandrasekar A. Exponential H∞ filtering analysis for discrete-time switched neural networks with random delays using sojourn probabilities. Sci China Technol Sci. 2016;59(3):387–402. doi: 10.1007/s11431-016-6006-5. [DOI] [Google Scholar]
  5. Chen J, Xu S, Chen W, Zhang B, Ma Q, Zou Y. Two general integral inequalities and their applications to stability analysis for systems with time-varying delay. Int J Robust Nonlinear Control. 2016;26:4088–4103. doi: 10.1002/rnc.3551. [DOI] [Google Scholar]
  6. Chen G, Xia J, Zhuang G. Delay-dependent stability and dissipativity analysis of generalized neural networks with Markovian jump parameters and two delay components. J Frankl Inst. 2016;353:2137–2158. doi: 10.1016/j.jfranklin.2016.02.020. [DOI] [Google Scholar]
  7. Diressche P, Zou X. Global attractivity in delayed Hopfield neural network models. SIAM J Appl Math. 1998;58:1878–1890. doi: 10.1137/S0036139997321219. [DOI] [Google Scholar]
  8. Gan Q. Global exponential synchronization of generalized stochastic neural networks with mixed time-varying delays and reaction-diffusion terms. Neurocomputing. 2012;89:96–105. doi: 10.1016/j.neucom.2012.02.030. [DOI] [Google Scholar]
  9. Gan Q, Lv T, Fu Z. Synchronization criteria for generalized reaction-diffusion neural networks via periodically intermittent control. Chaos. 2016;26:043113. doi: 10.1063/1.4947288. [DOI] [PubMed] [Google Scholar]
  10. Gu K, Kharitonov V, Chen J. Stability of time-delay systems. Boston: Birkhauser; 2003. [Google Scholar]
  11. He W, Qian F, Cao J. Pinning-controlled synchronization of delayed neural networks with distributed-delay coupling via impulsive control. Neural Netw. 2017;85:1–9. doi: 10.1016/j.neunet.2016.09.002. [DOI] [PubMed] [Google Scholar]
  12. Hopfield J (1984) Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of National Academy of Sciences, USA 81:3088–3092 [DOI] [PMC free article] [PubMed]
  13. Lee TH, Park JH. Improved criteria for sampled-data synchronization of chaotic Lur'e systems using two new approaches. Nonlinear Anal Hybrid Syst. 2017;24:132–145. doi: 10.1016/j.nahs.2016.11.006. [DOI] [Google Scholar]
  14. Lee T, Park J, Lee S, Kwon O. Robust sampled-data control with random missing data scenario. Int J Control. 2014;87:1957–1969. doi: 10.1080/00207179.2014.896476. [DOI] [Google Scholar]
  15. Lee T, Park JH, Park M, Kwon O, Jung H. On stability criteria for neural networks with time-varying delay using Wirtinger-based multiple integral inequality. J Franklin Inst. 2015;352:5627–5645. doi: 10.1016/j.jfranklin.2015.08.024. [DOI] [Google Scholar]
  16. Li R, Cao J. Stability analysis of reaction-diffusion uncertain memristive neural networks with time-varying delays and leakage term. Appl Math Comput. 2016;278:54–69. [Google Scholar]
  17. Li X, Rakkiyappan R, Sakthivel R. Non-fragile synchronization control for Markovian jumping complex dynamical networks with probabilistic time-varying coupling delay. Asian J Control. 2016;17:1678–1695. doi: 10.1002/asjc.984. [DOI] [Google Scholar]
  18. Liu X. Synchronization of linearly coupled neural networks with reaction-diffusion terms and unbounded time delays. Neurocomputing. 2010;73:2681–2688. doi: 10.1016/j.neucom.2010.05.003. [DOI] [Google Scholar]
  19. Liu H, Zhou G. Finite-time sampled-data control for switching T-S fuzzy systems. Neurocomputing. 2015;156:294–300. doi: 10.1016/j.neucom.2015.04.008. [DOI] [Google Scholar]
  20. Liu Y, Wang Z, Liang J, Liu X. Synchronization of coupled neutral type neural networks with jumping-mode-dependent discrete and unbounded distributed delays. IEEE Trans Cybern. 2013;43:102–114. doi: 10.1109/TSMCB.2012.2199751. [DOI] [PubMed] [Google Scholar]
  21. Liu Y, Lee S, Kwon O, Park JH. New approach to stability criteria for generalized neural networks with interval time-varying delays. Neurocomputing. 2015;149:1544–1551. doi: 10.1016/j.neucom.2014.08.038. [DOI] [Google Scholar]
  22. Liu X, Yu W, Cao J, Chen S. Discontinuous Lyapunov approach to state estimation and filtering of jumped systems with sampled-data. Neural Netw. 2015;68:12–22. doi: 10.1016/j.neunet.2015.04.001. [DOI] [PubMed] [Google Scholar]
  23. Lu J. Global exponential stability and periodicity of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions. Chaos Solitons Fractals. 2008;35:116–125. doi: 10.1016/j.chaos.2007.05.002. [DOI] [Google Scholar]
  24. Lv Y, Lv W, Sun J. Convergence dynamics of stochastic reaction-diffusion recurrent neural networks with continuously distributed delays. Nonlinear Anal Real World Appl. 2008;9:1590–1606. doi: 10.1016/j.nonrwa.2007.04.003. [DOI] [Google Scholar]
  25. Manivannan R, Samidurai R, Cao J, Alsaedi A. New delay-interval-dependent stability criteria for switched Hopfield neural networks of neutral type with successive time-varying delay components. Cogn Neurodyn. 2016;10(6):543–562. doi: 10.1007/s11571-016-9396-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Park P, Ko J, Jeong C. Reciprocally convex approach to stability of systems with time-varying delays. Automatica. 2011;47:235–238. doi: 10.1016/j.automatica.2010.10.014. [DOI] [Google Scholar]
  27. Park P, Lee W, Lee S. Auxiliary function-based integral inequalities for quadratic functions and their applications to time-delay systems. J Franklin Inst. 2015;352:1378–1396. doi: 10.1016/j.jfranklin.2015.01.004. [DOI] [Google Scholar]
  28. Prakash M, Balasubramaniam P, Lakshmanan S. Synchronization of Markovian jumping inertial neural networks and its applications in image encryption. Neural Netw. 2016;83:86–93. doi: 10.1016/j.neunet.2016.07.001. [DOI] [PubMed] [Google Scholar]
  29. Rajavel S, Samidurai R, Cao J, Alsaedi A, Ahmad B. Finite-time non-fragile passivity control for neural networks with time-varying delay. Appl Math Comput. 2017;297:145–158. [Google Scholar]
  30. Rakkiyappan R, Dharani S. Sampled-data synchronization of randomly coupled reaction-diffusion neural networks with Markovian jumping and mixed delays using multiple integral approach. Neural Comput Appl. 2017;28:449–462. doi: 10.1007/s00521-015-2079-5. [DOI] [Google Scholar]
  31. Rakkiyappan R, Dharani S, Cao J. Synchronization of neural networks with control packet loss and time-varying delay via stochastic sampled-data controller. IEEE Trans Neural Netw Learn Syst. 2015;26:3215–3226. doi: 10.1109/TNNLS.2015.2425881. [DOI] [PubMed] [Google Scholar]
  32. Rakkiyappan R, Dharani S, Zhu Q. Stochastic sampled-data H∞ synchronization of coupled neutral-type delay partial differential systems. J Frankl Inst. 2015;352:4480–4502. doi: 10.1016/j.jfranklin.2015.06.019. [DOI] [Google Scholar]
  33. Rakkiyappan R, Premalatha S, Chandrasekar A, Cao J. Stability and synchronization analysis of inertial memristive neural networks with time delays. Cogn Neurodyn. 2016;10(5):437–451. doi: 10.1007/s11571-016-9392-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Rakkiyappan R, Sivasamy R, Park JH, Lee T. An improved stability criterion for generalized neural networks with additive time-varying delays. Neurocomputing. 2016;171:615–624. doi: 10.1016/j.neucom.2015.07.004. [DOI] [Google Scholar]
  35. Shen H, Zhu Y, Zhang L, Park JH. Extended dissipative state estimation for Markov jump neural networks with unreliable links. IEEE Trans Neural Netw Learn Syst. 2017;28:346–358. doi: 10.1109/TNNLS.2015.2511196. [DOI] [PubMed] [Google Scholar]
  36. Su L, Shen H. Mixed H∞/passive synchronization for complex dynamical networks with sampled-data control. Appl Math Comput. 2015;259:931–942. [Google Scholar]
  37. Tong D, Zhou W, Zhou X, Yang J, Zhang L, Xu X. Exponential synchronization for stochastic neural networks with multi-delayed and Markovian switching via adaptive feedback control. Commun Nonlinear Sci Numer Simul. 2015;29:359–371. doi: 10.1016/j.cnsns.2015.05.011. [DOI] [Google Scholar]
  38. Wang Z, Zhang H. Global asymptotic stability of reaction-diffusion Cohen-Grossberg neural networks with continuously distributed delays. IEEE Trans Neural Netw. 2010;21:39–49. doi: 10.1109/TNN.2009.2033910. [DOI] [PubMed] [Google Scholar]
  39. Wang Z, Shu H, Liu Y, Ho DW, Liu X. Robust stability analysis of generalized neural networks with discrete and distributed time delays. Chaos Solitons Fractals. 2006;30:886–896. doi: 10.1016/j.chaos.2005.08.166. [DOI] [Google Scholar]
  40. Wang Y, Lin P, Wang L. Exponential stability of reaction-diffusion high-order Markovian jump Hopfield neural networks with time-varying delays. Nonlinear Anal Real World Appl. 2012;13:1353–1361. doi: 10.1016/j.nonrwa.2011.10.013. [DOI] [Google Scholar]
  41. Wang K, Teng Z, Jiang H. Adaptive synchronization in an array of linearly coupled neural networks with reaction-diffusion terms and time delays. Commun Nonlinear Sci Numer Simul. 2012;17:3866–3875. doi: 10.1016/j.cnsns.2012.02.020. [DOI] [Google Scholar]
  42. Yang X, Cao J, Yang Z. Synchronization of coupled reaction-diffusion neural networks with time-varying delays via pinning-impulsive controller. SIAM J Control Optim. 2013;51:3486–3510. doi: 10.1137/120897341. [DOI] [Google Scholar]
  43. Yang X, Cao J, Yu W. Exponential synchronization of memristive Cohen-Grossberg neural networks with mixed delays. Cogn Neurodyn. 2014;8:239–249. doi: 10.1007/s11571-013-9277-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Young S, Scott P, Nasrabadi N. Object recognition using multilayer Hopfield neural network. IEEE Trans Image Process. 1997;6:357–372. doi: 10.1109/83.557336. [DOI] [PubMed] [Google Scholar]
  45. Zhang X, Han Q. Global asymptotic stability for a class of generalized neural networks with interval time-varying delays. IEEE Trans Neural Netw. 2011;22:1180–1192. doi: 10.1109/TNN.2011.2147331. [DOI] [PubMed] [Google Scholar]
  46. Zhang H, Wang Z, Liu D. Global asymptotic stability and robust stability of a class of Cohen-Grossberg neural networks with mixed delays. IEEE Trans Circuit Syst. 2009;I(56):616–629. doi: 10.1109/TCSI.2008.2002556. [DOI] [Google Scholar]
  47. Zheng C, Cao J. Robust synchronization of coupled neural networks with mixed delays and uncertain parameters by intermittent pinning control. Neurocomputing. 2014;141:153–159. doi: 10.1016/j.neucom.2014.03.042. [DOI] [Google Scholar]
  48. Zheng H, He Y, Wu M, Xiao P. Stability analysis of generalized neural networks with time-varying delays via a new integral inequality. Neurocomputing. 2015;161:148–154. doi: 10.1016/j.neucom.2015.02.055. [DOI] [Google Scholar]
  49. Zhou Q, Wan L, Sun J. Exponential stability of reaction-diffusion generalized Cohen-Grossberg neural networks with time-varying delays. Chaos Solitons Fractals. 2007;32:1713–1719. doi: 10.1016/j.chaos.2005.12.003. [DOI] [Google Scholar]

Articles from Cognitive Neurodynamics are provided here courtesy of Springer Science+Business Media B.V.
