Cognitive Neurodynamics. 2017 Mar 18;11(3):293–306. doi: 10.1007/s11571-017-9429-1

Global asymptotic stability of complex-valued neural networks with additive time-varying delays

K Subramanian 1, P Muthukumar 1,
PMCID: PMC5430246  PMID: 28559957

Abstract

In this paper, we extensively study the global asymptotic stability problem of complex-valued neural networks with leakage delay and additive time-varying delays. By constructing a suitable Lyapunov–Krasovskii functional and applying newly developed complex-valued integral inequalities, sufficient conditions for the global asymptotic stability of the proposed neural networks are established in the form of complex-valued linear matrix inequalities. These linear matrix inequalities can be solved efficiently using standard available numerical packages. Finally, three numerical examples are given to demonstrate the effectiveness of the theoretical results.

Keywords: Additive time-varying delays, Complex-valued neural networks, Global asymptotic stability, Leakage delay, Lyapunov–Krasovskii functional

Introduction

Over the past decades, there has been increasing research interest in the dynamic behaviors of neural networks due to their widespread applications (Guo and Li 2012; Manivannan and Samidurai 2016; Mattia and Sanchez-Vives 2012; Yu et al. 2013, and the references therein). Many applications involve complex signals, for which complex-valued neural networks are preferable. In recent years, the complex-valued neural network has become an emerging field of research from both theoretical and practical points of view. The major advantage of complex-valued neural networks is that they offer new capabilities and higher performance in the designed network. Accordingly, increasing attention has been paid to the dynamical behavior of complex-valued neural networks, which have found applications in areas such as pattern classification (Nait-Charif 2010), associative memory (Tanaka and Aihara 2009) and optimization (Jiang 2008). These applications require the networks to remain stable at an equilibrium point. Therefore, stability is the most important dynamical property of complex-valued neural networks.

In real-life situations, a time delay often occurs because of the finite switching speed of amplifiers, and it also appears in electronic implementations of neural networks during signal transmission; such delays may cause instability, bifurcation and oscillation in the dynamical behavior of neural networks (Alofi et al. 2015; Cao and Li 2017; Hu et al. 2014; Huang et al. 2017). Thus, many authors have taken into account constant time delays (Hu and Wang 2012; Subramanian and Muthukumar 2016) and time-varying delays (Chen et al. 2017; Gong et al. 2015) in the stability analysis of complex-valued neural networks.

In addition, a time delay in the leakage term of a system, called the leakage delay, is a considerable factor that can degrade the dynamics of the system, and it has been incorporated in the stability analysis of neural networks. The aforementioned results (Bao et al. 2016; Chen et al. 2017; Gong et al. 2015; Hu and Wang 2012; Subramanian and Muthukumar 2016) concern the dynamical behavior of complex-valued neural networks with constant or time-varying delays and do not consider the leakage effect. Even though the leakage delay has been extensively studied for real-valued neural networks (Lakshmanan et al. 2013; Li and Cao 2016; Sakthivel et al. 2015; Xie et al. 2016), complex-valued neural networks with leakage delay have rarely been considered in the literature (Chen and Song 2013; Chen et al. 2016; Gong et al. 2015). For example, by constructing appropriate Lyapunov–Krasovskii functionals, Gong et al. (2015) and Chen et al. (2016), respectively, studied the global μ-stability problem for continuous-time and discrete-time complex-valued neural networks with leakage time delay and unbounded time-varying delays. By employing a combination of fixed point theory, a Lyapunov–Krasovskii functional and the free weighting matrix method, the existence, uniqueness and global stability of the equilibrium point of complex-valued neural networks with both leakage delay and time delay on time scales were established in Chen and Song (2013).

Furthermore, a neural network model with two successive time-varying delays was introduced in Zhao et al. (2008), where the authors correctly pointed out that since a signal transmission may traverse several network segments whose transmission conditions differ from each other, successive delays with different properties can arise, and it is not rational to lump the two time delays into one. Thus, it is more reasonable to model neural networks with additive time-varying delays. In Shao and Han (2012), the authors discussed stability and stabilization for continuous-time systems with two additive time-varying input delays arising from networked control systems. Stability criteria for neural networks with two additive time-varying delay components were addressed in Tian and Zhong (2012) using both the reciprocally convex and the convex polyhedron approaches. Rakkiyappan et al. (2015) studied the passivity and passification problem for a class of memristor-based recurrent neural networks with additive time-varying delays. As the above shows, additive time-varying delays have previously been considered only for real-valued neural networks.

However, it is worth noting that in the existing results, the time-varying delay considered in complex-valued neural networks is usually a single one. Stability of complex-valued neural networks with additive time-varying delays has not been studied in the literature, which motivates our research interest. To the best of the authors' knowledge, the global asymptotic stability analysis of complex-valued neural networks with leakage delays and additive time-varying delays has not been considered in the literature and remains a topic for further investigation.

In this paper, the main contributions are given as follows:

  • For the first time, the global asymptotic stability of complex-valued neural networks with leakage delays and additive time-varying delay components is established.

  • A suitable Lyapunov–Krasovskii functional is constructed with the full information of additive time-varying delays and leakage delays.

  • A new type of complex-valued triple integral inequality is introduced to estimate the upper bound of the derivative of Lyapunov–Krasovskii functional.

  • Based on the model transformation technique, sufficient conditions for the global asymptotic stability of proposed neural networks are obtained in the linear matrix inequality form, which can be checked numerically by using the effective YALMIP toolbox in MATLAB.

  • Finally, three illustrative examples are provided to show the effectiveness of the proposed criteria.

The rest of this paper is organized as follows: In “Problem formulation and preliminaries” section, the model of the complex-valued neural networks with leakage delay and additive time-varying delays is presented, and some preliminaries are briefly outlined. In “Main result” section, the sufficient conditions are derived to ascertain the global asymptotic stability of the complex-valued neural networks with leakage delay and additive time-varying delays by Lyapunov–Krasovskii functional method. Three numerical examples are given to show the effectiveness of the acquired conditions in “Numerical example” section. Finally, conclusions are drawn in “Conclusion” section.

Notations

The notation used throughout this paper is fairly standard. $\mathbb{C}^n$ and $\mathbb{C}^{m\times n}$ denote the set of $n$-dimensional complex vectors and of $m\times n$ complex matrices, respectively. The superscripts $T$ and $*$ denote matrix transposition and complex conjugate transposition, respectively; $i$ denotes the imaginary unit, that is, $i=\sqrt{-1}$. For any matrix $P$, $P>0$ ($P<0$) means $P$ is a positive definite (negative definite) matrix. For a complex number $z=x+iy$, $|z|=\sqrt{x^2+y^2}$ stands for the modulus of $z$, $\bar z$ for its conjugate, so that $|z|=\sqrt{z\bar z}$; $\mathrm{diag}\{\cdot\}$ stands for a block-diagonal matrix. For $A\in\mathbb{C}^{n\times n}$, $\|A\|$ denotes its operator norm, i.e., $\|A\|=\sup\{\|Ax\|:\|x\|=1\}=\sqrt{\lambda_{\max}(A^{*}A)}$. The notation $\star$ always denotes the conjugate transpose of the corresponding block in a Hermitian matrix.

Problem formulation and preliminaries

In this paper, we consider a model of complex-valued neural networks with leakage delay and two additive time-varying delay components, which can be described by

$$\dot u(t)=-Au(t-\sigma)+Bg(u(t))+Cg\bigl(u(t-\tau_1(t)-\tau_2(t))\bigr)+J,\qquad(1)$$

where $u(t)=(u_1(t),u_2(t),\ldots,u_n(t))^T\in\mathbb{C}^n$ is the state vector of the neural network with $n$ neurons at time $t$; $A=\mathrm{diag}\{a_1,a_2,\ldots,a_n\}\in\mathbb{R}^{n\times n}$ with $a_j>0\ (j=1,2,\ldots,n)$ is the self-feedback connection weight matrix; $B=(b_{jk})_{n\times n}\in\mathbb{C}^{n\times n}$ and $C=(c_{jk})_{n\times n}\in\mathbb{C}^{n\times n}$ are the connection weight matrix and the delayed connection weight matrix, respectively; $g(u(t))=(g_1(u_1(\cdot)),g_2(u_2(\cdot)),\ldots,g_n(u_n(\cdot)))^T\in\mathbb{C}^n$ is the complex-valued neuron activation function; $J=(J_1,J_2,\ldots,J_n)^T$ is the external input vector; $\sigma$ denotes the leakage delay; $\tau_1(t)$ and $\tau_2(t)$ are two time-varying delays satisfying $0\le\tau_1(t)\le\tau_1$, $\dot\tau_1(t)\le\mu_1$, $0\le\tau_2(t)\le\tau_2$, $\dot\tau_2(t)\le\mu_2$, with $\tau(t)=\tau_1(t)+\tau_2(t)$, $\tau=\tau_1+\tau_2$, $\mu=\mu_1+\mu_2$, where $\mu_1,\mu_2,\mu$ are less than one. The initial condition associated with the complex-valued neural network (1) is given by

$$u(s)=\phi(s),\quad s\in[-\rho,0],\quad\text{where }\rho=\max\{\sigma,\tau\},\ \phi\in C([-\rho,0],D).$$

Here $C([-\rho,0],D)$ is the space of continuous functions mapping $[-\rho,0]$ into $D\subseteq\mathbb{C}^n$; in particular, $u(s)$ is continuous and satisfies (1).

Assumption 2.1

Let $g_j(\cdot)$ satisfy the Lipschitz continuity condition in the complex domain; that is, for each $j=1,2,\ldots,n$, there exists a positive constant $\hat F_j$ such that, for all $u_1,u_2\in\mathbb{C}$, we have

$$|g_j(u_1)-g_j(u_2)|\le\hat F_j|u_1-u_2|.$$

Definition 2.2

A vector $\hat u\in\mathbb{C}^n$ is said to be an equilibrium point of the complex-valued neural network (1) if it satisfies the condition

$$-A\hat u+(B+C)g(\hat u)+J=0.$$

Theorem 2.3

(Existence of an equilibrium point) Under Assumption 2.1, there exists an equilibrium point $\hat u\in\mathbb{C}^n$ for system (1), i.e., a vector satisfying

$$-A\hat u+(B+C)g(\hat u)+J=0.$$

Proof

Since the activation function of system (1) is bounded, there exist constants $M_i$ such that

$$|g_i(u_i)|\le M_i\quad\text{for any } u_i\in\mathbb{C},\ i=1,2,\ldots,n.$$

Let $M=\bigl(\sum_{i=1}^{n}M_i^2\bigr)^{1/2}$. Then $\|g(u)\|\le M$ for all $u=(u_1,u_2,\ldots,u_n)\in\mathbb{C}^n$. We denote $\mathcal{A}=\{u\in\mathbb{C}^n:\|u\|\le\|A^{-1}\|\,((\|B\|+\|C\|)M+\|J\|)\}$ and define the map $H:\mathbb{C}^n\to\mathbb{C}^n$ by

$$H(u)=A^{-1}\bigl(Bg(u)+Cg(u)+J\bigr).$$

Since $H$ is a continuous map and $\|g(u)\|\le M$, we obtain

$$\|H(u)\|\le\|A^{-1}\|\bigl((\|B\|+\|C\|)M+\|J\|\bigr).$$

Therefore, $H$ maps this set into itself. By Brouwer's fixed point theorem, it can be inferred that there exists a fixed point $\hat u$ of $H$, which satisfies

$$A^{-1}\bigl(Bg(\hat u)+Cg(\hat u)+J\bigr)=\hat u.$$

Pre-multiplying both sides by the matrix $A$ gives

$$-A\hat u+Bg(\hat u)+Cg(\hat u)+J=0.$$

That is, by Definition 2.2, u^ is an equilibrium point of (1). Hence, the proof is completed.
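The Brouwer fixed-point argument above also suggests a simple numerical procedure: iterate the map $H$ until it settles. The sketch below uses hypothetical network data (the matrices, input $J$ and activation are our own illustrative choices, not taken from the paper's examples), scaled so that $H$ is a contraction and the Picard iteration provably converges.

```python
import numpy as np

# Hypothetical 2-neuron data (NOT from the paper's examples), scaled so
# that H below is a contraction and the iteration converges.
A = np.diag([2.0, 2.0])
B = np.array([[0.5 - 0.5j, -0.3 + 0.0j], [0.2 + 0.0j, 0.4 - 0.1j]])
C = np.array([[0.3 + 0.0j, 0.0 + 0.1j], [0.0 - 0.2j, 0.25 + 0.0j]])
J = np.array([0.5 + 0.2j, -0.3 + 0.1j])

def g(u):
    # A bounded Lipschitz complex activation, as required by the proof.
    return np.tanh(u.real) + 1j / (1.0 + np.exp(-u.imag))

def H(u):
    # The self-map of the proof: H(u) = A^{-1}(B g(u) + C g(u) + J).
    return np.linalg.solve(A, (B + C) @ g(u) + J)

u = np.zeros(2, dtype=complex)
for _ in range(200):
    u = H(u)          # Picard iteration toward the fixed point

# At the fixed point, -A u + (B + C) g(u) + J = 0 (Definition 2.2).
residual = np.linalg.norm(-A @ u + (B + C) @ g(u) + J)
print(residual)       # tiny, i.e. u approximates an equilibrium
```

Note that the theorem only guarantees existence of a fixed point; plain iteration finds it here because the chosen parameters make $\|A^{-1}\|(\|B\|+\|C\|)<1$.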

For convenience, we shift the equilibrium point u^ to the origin by letting z(t)=u(t)-u^. Then, the system (1) can be written as

$$\dot z(t)=-Az(t-\sigma)+Bf(z(t))+Cf\bigl(z(t-\tau_1(t)-\tau_2(t))\bigr),\qquad(2)$$

where $f(z(t))=g(z(t)+\hat u)-g(\hat u)$. By using this transformation, system (2) has the following equivalent form:

$$\frac{d}{dt}\Bigl[z(t)-A\int_{t-\sigma}^{t}z(s)\,ds\Bigr]=-Az(t)+Bf(z(t))+Cf\bigl(z(t-\tau_1(t)-\tau_2(t))\bigr).\qquad(3)$$
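The equivalence of (2) and (3) rests on the Leibniz rule, $\frac{d}{dt}\int_{t-\sigma}^{t}z(s)\,ds=z(t)-z(t-\sigma)$, so that the added terms $-Az(t)+Az(t-\sigma)$ exactly absorb the leakage term $-Az(t-\sigma)$. This identity can be spot-checked numerically on any smooth signal; the test signal, delay and matrix below are arbitrary illustrative choices.

```python
import numpy as np

# Check d/dt [ z(t) - A ∫_{t-σ}^t z(s) ds ] = ż(t) - A z(t) + A z(t-σ)
# on an arbitrary smooth C^2-valued signal (not a solution of (2)).
A = np.diag([2.0, 3.0])
sigma = 0.3

def z(t):
    return np.array([np.exp((-1 + 2j) * t), np.sin(t) + 1j * np.cos(2 * t)])

def zdot(t):
    return np.array([(-1 + 2j) * np.exp((-1 + 2j) * t),
                     np.cos(t) - 2j * np.sin(2 * t)])

def F(t):
    # z(t) - A ∫_{t-σ}^t z(s) ds, integral by the trapezoidal rule.
    s = np.linspace(t - sigma, t, 2001)
    vals = np.stack([z(si) for si in s])
    ds = s[1] - s[0]
    integral = ds * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
    return z(t) - A @ integral

t, h = 1.0, 1e-5
lhs = (F(t + h) - F(t - h)) / (2 * h)        # numerical d/dt of F
rhs = zdot(t) - A @ z(t) + A @ z(t - sigma)  # claimed closed form
print(np.max(np.abs(lhs - rhs)))             # small discretization error
```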

In the following, we introduce relevant assumption and lemmas to facilitate the presentation of main results in the ensuing sections.

Assumption 2.4

Let $f_j(\cdot)$ satisfy the Lipschitz continuity condition in the complex domain; that is, for each $j=1,2,\ldots,n$, there exists a positive constant $F_j$ such that, for all $z_1,z_2\in\mathbb{C}$, we have

$$|f_j(z_1)-f_j(z_2)|\le F_j|z_1-z_2|,$$

where $F_j$ is called the Lipschitz constant. Moreover, define $\Gamma=\mathrm{diag}\{F_1^2,F_2^2,\ldots,F_n^2\}$.

Lemma 2.5

(Velmurugan et al. 2015) (Schur complement) A given Hermitian matrix

$$\Omega=\begin{pmatrix}\Omega_{11}&\Omega_{12}\\ \Omega_{12}^{*}&\Omega_{22}\end{pmatrix}>0,$$

where $\Omega_{11}=\Omega_{11}^{*}$ and $\Omega_{22}=\Omega_{22}^{*}$, is equivalent to either of the following conditions:

$$\Omega_{22}>0,\ \Omega_{11}-\Omega_{12}\Omega_{22}^{-1}\Omega_{12}^{*}>0;\qquad \Omega_{11}>0,\ \Omega_{22}-\Omega_{12}^{*}\Omega_{11}^{-1}\Omega_{12}>0.$$

Lemma 2.6

For any constant Hermitian matrix $M\in\mathbb{C}^{n\times n}$ with $M>0$ and a vector function $z(s):[a,b]\to\mathbb{C}^n$ with scalars $a<b$, the following inequalities are satisfied:

  • (i)

    $$\Bigl(\int_a^b z(s)\,ds\Bigr)^{*}M\Bigl(\int_a^b z(s)\,ds\Bigr)\le(b-a)\int_a^b z^{*}(s)Mz(s)\,ds$$

  • (ii)

    $$\Bigl(\int_a^b\!\int_s^b z(\theta)\,d\theta\,ds\Bigr)^{*}M\Bigl(\int_a^b\!\int_s^b z(\theta)\,d\theta\,ds\Bigr)\le\frac{(b-a)^2}{2}\int_a^b\!\int_s^b z^{*}(\theta)Mz(\theta)\,d\theta\,ds$$

  • (iii)

    $$\Bigl(\int_a^b\!\int_s^b\!\int_\theta^b z(\gamma)\,d\gamma\,d\theta\,ds\Bigr)^{*}M\Bigl(\int_a^b\!\int_s^b\!\int_\theta^b z(\gamma)\,d\gamma\,d\theta\,ds\Bigr)\le\frac{(b-a)^3}{6}\int_a^b\!\int_s^b\!\int_\theta^b z^{*}(\gamma)Mz(\gamma)\,d\gamma\,d\theta\,ds.$$

Proof

The proof of the complex-valued Jensen inequality (i) is given in Chen and Song (2013); it remains to prove (ii) and (iii).

From (i), the following inequality holds:

$$\Bigl(\int_s^b z(\theta)\,d\theta\Bigr)^{*}M\Bigl(\int_s^b z(\theta)\,d\theta\Bigr)\le(b-s)\int_s^b z^{*}(\theta)Mz(\theta)\,d\theta.$$

By the Schur complement Lemma (Velmurugan et al. 2015), the above inequality becomes,

$$\begin{pmatrix}\int_s^b z^{*}(\theta)Mz(\theta)\,d\theta & \bigl(\int_s^b z(\theta)\,d\theta\bigr)^{*}\\[2pt] \int_s^b z(\theta)\,d\theta & (b-s)M^{-1}\end{pmatrix}\ge 0.\qquad(4)$$

Integrating (4) from a to b, we have

$$\begin{pmatrix}\int_a^b\!\int_s^b z^{*}(\theta)Mz(\theta)\,d\theta\,ds & \bigl(\int_a^b\!\int_s^b z(\theta)\,d\theta\,ds\bigr)^{*}\\[2pt] \int_a^b\!\int_s^b z(\theta)\,d\theta\,ds & \int_a^b(b-s)M^{-1}\,ds\end{pmatrix}\ge 0,\quad\text{i.e.,}\quad
\begin{pmatrix}\int_a^b\!\int_s^b z^{*}(\theta)Mz(\theta)\,d\theta\,ds & \bigl(\int_a^b\!\int_s^b z(\theta)\,d\theta\,ds\bigr)^{*}\\[2pt] \int_a^b\!\int_s^b z(\theta)\,d\theta\,ds & \frac{(b-a)^2}{2}M^{-1}\end{pmatrix}\ge 0.\qquad(5)$$

By using Schur complement Lemma, the inequality (5) is equivalent to

$$\Bigl(\int_a^b\!\int_s^b z(\theta)\,d\theta\,ds\Bigr)^{*}M\Bigl(\int_a^b\!\int_s^b z(\theta)\,d\theta\,ds\Bigr)\le\frac{(b-a)^2}{2}\int_a^b\!\int_s^b z^{*}(\theta)Mz(\theta)\,d\theta\,ds.$$

This completes the proof of (ii). By applying the same procedure presented in the proof of (ii), the inequality (iii) can be easily derived. Thus, it is omitted.
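As an aside (not from the paper), the inequalities (i) and (ii) of Lemma 2.6 can be spot-checked numerically by replacing the integrals with Riemann sums over a random complex-valued sample path; $M$ below is a randomly generated Hermitian positive definite matrix, and all other choices (dimension, grid, seed) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n, N = 3, 300                    # state dimension, grid points
a, b = 0.0, 1.0
h = (b - a) / N                  # Riemann-sum step

# Random Hermitian positive definite M.
Q = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
M = Q.conj().T @ Q + np.eye(n)

# Random complex "function" z(s) sampled on the grid.
z = rng.normal(size=(N, n)) + 1j * rng.normal(size=(N, n))

# (i): (∫z)* M (∫z) <= (b - a) ∫ z* M z.
I1 = h * z.sum(axis=0)
lhs1 = (I1.conj() @ M @ I1).real
rhs1 = (b - a) * h * sum((zk.conj() @ M @ zk).real for zk in z)

# (ii): with w(s) = ∫_s^b z(θ) dθ,
# (∫w)* M (∫w) <= ((b - a)^2 / 2) ∫_a^b ∫_s^b z*(θ) M z(θ) dθ ds.
w = h * np.cumsum(z[::-1], axis=0)[::-1]      # w[k] ≈ ∫_{s_k}^b z
I2 = h * w.sum(axis=0)
lhs2 = (I2.conj() @ M @ I2).real
rhs2 = 0.5 * (b - a) ** 2 * h * h * sum(
    (z[j].conj() @ M @ z[j]).real for k in range(N) for j in range(k, N)
)

print(lhs1 <= rhs1, lhs2 <= rhs2)
```

For sign-varying random data the left-hand sides are far below the right-hand sides, which is exactly the conservatism the Lyapunov analysis exploits.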

Main result

In this section, by utilizing a Lyapunov–Krasovskii functional and integral inequalities, we will present a delay-dependent stability criterion for the complex-valued neural networks with leakage delays and additive time-varying delays (3) via linear matrix inequality.

Theorem 3.1

Under Assumption 2.4, the complex-valued neural network (3) is globally asymptotically stable if there exist positive definite Hermitian matrices J, M, N, O, P, Q, R, S, T, U, V, W, X, Y and a positive diagonal matrix G such that the following linear matrix inequality holds:

$$\Theta=\begin{pmatrix}
\Theta_{1,1}&0&0&W&X&0&PB&PC&\Theta_{1,9}&0&\tau_1M&\tau_2N\\
\star&\Theta_{2,2}&0&0&0&0&0&0&0&0&0&0\\
\star&\star&\Theta_{3,3}&0&0&0&0&0&0&0&0&0\\
\star&\star&\star&\Theta_{4,4}&0&0&0&0&0&0&0&0\\
\star&\star&\star&\star&\Theta_{5,5}&0&0&0&0&0&0&0\\
\star&\star&\star&\star&\star&\Theta_{6,6}&\Theta_{6,7}&\Theta_{6,8}&0&0&0&0\\
\star&\star&\star&\star&\star&\star&\Theta_{7,7}&\Theta_{7,8}&\Theta_{7,9}&0&0&0\\
\star&\star&\star&\star&\star&\star&\star&\Theta_{8,8}&\Theta_{8,9}&0&0&0\\
\star&\star&\star&\star&\star&\star&\star&\star&\Theta_{9,9}&0&0&0\\
\star&\star&\star&\star&\star&\star&\star&\star&\star&\Theta_{10,10}&0&0\\
\star&\star&\star&\star&\star&\star&\star&\star&\star&\star&\Theta_{11,11}&0\\
\star&\star&\star&\star&\star&\star&\star&\star&\star&\star&\star&\Theta_{12,12}
\end{pmatrix}<0,\qquad(6)$$

where $\Theta_{1,1}=-PA-A^{*}P+Q+R+S+T+U+\sigma^{2}Y-W-X-\sigma^{2}O+\frac{\sigma^{6}}{36}J+G\Gamma-\tau_1^{2}M-\tau_2^{2}N$, $\Theta_{1,9}=A^{*}PA+\sigma O$, $\Theta_{2,2}=-(1-\mu_1)Q$, $\Theta_{3,3}=-(1-\mu_2)R$, $\Theta_{4,4}=-W-S$, $\Theta_{5,5}=-X-T$, $\Theta_{6,6}=-U+\tau_1^{2}A^{*}WA+\tau_2^{2}A^{*}XA+\frac{\tau_1^{4}}{4}A^{*}MA+\frac{\tau_2^{4}}{4}A^{*}NA+\frac{\sigma^{4}}{4}A^{*}OA$, $\Theta_{6,7}=-\tau_1^{2}A^{*}WB-\tau_2^{2}A^{*}XB-\frac{\tau_1^{4}}{4}A^{*}MB-\frac{\tau_2^{4}}{4}A^{*}NB-\frac{\sigma^{4}}{4}A^{*}OB$, $\Theta_{6,8}=-\tau_1^{2}A^{*}WC-\tau_2^{2}A^{*}XC-\frac{\tau_1^{4}}{4}A^{*}MC-\frac{\tau_2^{4}}{4}A^{*}NC-\frac{\sigma^{4}}{4}A^{*}OC$, $\Theta_{7,7}=\tau_1^{2}B^{*}WB+\tau_2^{2}B^{*}XB+\frac{\tau_1^{4}}{4}B^{*}MB+\frac{\tau_2^{4}}{4}B^{*}NB+\frac{\sigma^{4}}{4}B^{*}OB+V-G$, $\Theta_{7,8}=\tau_1^{2}B^{*}WC+\tau_2^{2}B^{*}XC+\frac{\tau_1^{4}}{4}B^{*}MC+\frac{\tau_2^{4}}{4}B^{*}NC+\frac{\sigma^{4}}{4}B^{*}OC$, $\Theta_{7,9}=-B^{*}PA$, $\Theta_{8,8}=\tau_1^{2}C^{*}WC+\tau_2^{2}C^{*}XC+\frac{\tau_1^{4}}{4}C^{*}MC+\frac{\tau_2^{4}}{4}C^{*}NC+\frac{\sigma^{4}}{4}C^{*}OC-(1-\mu)V$, $\Theta_{8,9}=-C^{*}PA$, $\Theta_{9,9}=-Y-O$, $\Theta_{10,10}=-J$, $\Theta_{11,11}=-M$, $\Theta_{12,12}=-N$.

Proof

Consider the following Lyapunov–Krasovskii functional

$$V(t)=\sum_{i=1}^{5}V_i(t),\qquad(7)$$

where

$$\begin{aligned}
V_1(t)&=\Bigl[z(t)-A\int_{t-\sigma}^{t}z(s)\,ds\Bigr]^{*}P\Bigl[z(t)-A\int_{t-\sigma}^{t}z(s)\,ds\Bigr],\\
V_2(t)&=\int_{t-\tau_1(t)}^{t}z^{*}(s)Qz(s)\,ds+\int_{t-\tau_2(t)}^{t}z^{*}(s)Rz(s)\,ds+\int_{t-\tau_1}^{t}z^{*}(s)Sz(s)\,ds+\int_{t-\tau_2}^{t}z^{*}(s)Tz(s)\,ds\\
&\quad+\int_{t-\sigma}^{t}z^{*}(s)Uz(s)\,ds+\int_{t-\tau(t)}^{t}f^{*}(z(s))Vf(z(s))\,ds,\\
V_3(t)&=\tau_1\int_{-\tau_1}^{0}\!\int_{t+\theta}^{t}\dot z^{*}(s)W\dot z(s)\,ds\,d\theta+\tau_2\int_{-\tau_2}^{0}\!\int_{t+\theta}^{t}\dot z^{*}(s)X\dot z(s)\,ds\,d\theta+\sigma\int_{-\sigma}^{0}\!\int_{t+\theta}^{t}z^{*}(s)Yz(s)\,ds\,d\theta,\\
V_4(t)&=\frac{\tau_1^{2}}{2}\int_{t-\tau_1}^{t}\!\int_{\theta}^{t}\!\int_{\gamma}^{t}\dot z^{*}(s)M\dot z(s)\,ds\,d\gamma\,d\theta+\frac{\tau_2^{2}}{2}\int_{t-\tau_2}^{t}\!\int_{\theta}^{t}\!\int_{\gamma}^{t}\dot z^{*}(s)N\dot z(s)\,ds\,d\gamma\,d\theta,\\
V_5(t)&=\frac{\sigma^{2}}{2}\int_{t-\sigma}^{t}\!\int_{\theta}^{t}\!\int_{\gamma}^{t}\dot z^{*}(s)O\dot z(s)\,ds\,d\gamma\,d\theta+\frac{\sigma^{3}}{6}\int_{t-\sigma}^{t}\!\int_{\theta}^{t}\!\int_{\gamma}^{t}\!\int_{\delta}^{t}z^{*}(s)Jz(s)\,ds\,d\delta\,d\gamma\,d\theta.
\end{aligned}$$

Taking the time derivative of V(t) along the trajectories of system (3), it follows that

$$\begin{aligned}
\dot V_1(t)&=2\Bigl[z(t)-A\int_{t-\sigma}^{t}z(s)\,ds\Bigr]^{*}P\bigl[-Az(t)+Bf(z(t))+Cf(z(t-\tau_1(t)-\tau_2(t)))\bigr]\\
&=-2z^{*}(t)PAz(t)+2z^{*}(t)PBf(z(t))+2z^{*}(t)PCf(z(t-\tau_1(t)-\tau_2(t)))\\
&\quad+2\Bigl(\int_{t-\sigma}^{t}z(s)\,ds\Bigr)^{*}A^{*}PAz(t)-2\Bigl(\int_{t-\sigma}^{t}z(s)\,ds\Bigr)^{*}A^{*}PBf(z(t))\\
&\quad-2\Bigl(\int_{t-\sigma}^{t}z(s)\,ds\Bigr)^{*}A^{*}PCf(z(t-\tau_1(t)-\tau_2(t))),\qquad(8)
\end{aligned}$$
$$\begin{aligned}
\dot V_2(t)&\le z^{*}(t)[Q+R+S+T+U]z(t)-(1-\mu_1)z^{*}(t-\tau_1(t))Qz(t-\tau_1(t))\\
&\quad-(1-\mu_2)z^{*}(t-\tau_2(t))Rz(t-\tau_2(t))-z^{*}(t-\tau_1)Sz(t-\tau_1)-z^{*}(t-\tau_2)Tz(t-\tau_2)\\
&\quad-z^{*}(t-\sigma)Uz(t-\sigma)+f^{*}(z(t))Vf(z(t))-(1-\mu)f^{*}(z(t-\tau(t)))Vf(z(t-\tau(t))),\qquad(9)
\end{aligned}$$
$$\dot V_3(t)=\dot z^{*}(t)(\tau_1^{2}W+\tau_2^{2}X)\dot z(t)+\sigma^{2}z^{*}(t)Yz(t)-\tau_1\int_{t-\tau_1}^{t}\dot z^{*}(s)W\dot z(s)\,ds-\tau_2\int_{t-\tau_2}^{t}\dot z^{*}(s)X\dot z(s)\,ds-\sigma\int_{t-\sigma}^{t}z^{*}(s)Yz(s)\,ds,\qquad(10)$$
$$\dot V_4(t)=\dot z^{*}(t)\Bigl(\frac{\tau_1^{4}}{4}M+\frac{\tau_2^{4}}{4}N\Bigr)\dot z(t)-\frac{\tau_1^{2}}{2}\int_{t-\tau_1}^{t}\!\int_{\gamma}^{t}\dot z^{*}(s)M\dot z(s)\,ds\,d\gamma-\frac{\tau_2^{2}}{2}\int_{t-\tau_2}^{t}\!\int_{\gamma}^{t}\dot z^{*}(s)N\dot z(s)\,ds\,d\gamma,\qquad(11)$$
$$\dot V_5(t)=\frac{\sigma^{4}}{4}\dot z^{*}(t)O\dot z(t)+\frac{\sigma^{6}}{36}z^{*}(t)Jz(t)-\frac{\sigma^{2}}{2}\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\dot z^{*}(s)O\dot z(s)\,ds\,d\gamma-\frac{\sigma^{3}}{6}\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\!\int_{\delta}^{t}z^{*}(s)Jz(s)\,ds\,d\delta\,d\gamma.\qquad(12)$$

Applying Lemma 2.6 (i) to the integral terms of (10) produces

$$-\tau_1\int_{t-\tau_1}^{t}\dot z^{*}(s)W\dot z(s)\,ds\le-\Bigl(\int_{t-\tau_1}^{t}\dot z(s)\,ds\Bigr)^{*}W\Bigl(\int_{t-\tau_1}^{t}\dot z(s)\,ds\Bigr)=-[z(t)-z(t-\tau_1)]^{*}W[z(t)-z(t-\tau_1)],\qquad(13)$$
$$-\tau_2\int_{t-\tau_2}^{t}\dot z^{*}(s)X\dot z(s)\,ds\le-\Bigl(\int_{t-\tau_2}^{t}\dot z(s)\,ds\Bigr)^{*}X\Bigl(\int_{t-\tau_2}^{t}\dot z(s)\,ds\Bigr)=-[z(t)-z(t-\tau_2)]^{*}X[z(t)-z(t-\tau_2)],\qquad(14)$$
$$-\sigma\int_{t-\sigma}^{t}z^{*}(s)Yz(s)\,ds\le-\Bigl(\int_{t-\sigma}^{t}z(s)\,ds\Bigr)^{*}Y\Bigl(\int_{t-\sigma}^{t}z(s)\,ds\Bigr).\qquad(15)$$

From (3) and (13)–(15), it follows that

$$\begin{aligned}
\dot V_3(t)&\le z^{*}(t-\sigma)[\tau_1^{2}A^{*}WA+\tau_2^{2}A^{*}XA]z(t-\sigma)-z^{*}(t-\sigma)[\tau_1^{2}A^{*}WB+\tau_2^{2}A^{*}XB]f(z(t))\\
&\quad-z^{*}(t-\sigma)[\tau_1^{2}A^{*}WC+\tau_2^{2}A^{*}XC]f(z(t-\tau(t)))-f^{*}(z(t))[\tau_1^{2}B^{*}WA+\tau_2^{2}B^{*}XA]z(t-\sigma)\\
&\quad+f^{*}(z(t))[\tau_1^{2}B^{*}WB+\tau_2^{2}B^{*}XB]f(z(t))+f^{*}(z(t))[\tau_1^{2}B^{*}WC+\tau_2^{2}B^{*}XC]f(z(t-\tau(t)))\\
&\quad-f^{*}(z(t-\tau(t)))[\tau_1^{2}C^{*}WA+\tau_2^{2}C^{*}XA]z(t-\sigma)+f^{*}(z(t-\tau(t)))[\tau_1^{2}C^{*}WB+\tau_2^{2}C^{*}XB]f(z(t))\\
&\quad+f^{*}(z(t-\tau(t)))[\tau_1^{2}C^{*}WC+\tau_2^{2}C^{*}XC]f(z(t-\tau(t)))+z^{*}(t)[\sigma^{2}Y-W-X]z(t)\\
&\quad-z^{*}(t-\tau_1)Wz(t-\tau_1)+z^{*}(t)Wz(t-\tau_1)+z^{*}(t-\tau_1)Wz(t)-z^{*}(t-\tau_2)Xz(t-\tau_2)\\
&\quad+z^{*}(t)Xz(t-\tau_2)+z^{*}(t-\tau_2)Xz(t)-\Bigl(\int_{t-\sigma}^{t}z(s)\,ds\Bigr)^{*}Y\Bigl(\int_{t-\sigma}^{t}z(s)\,ds\Bigr).\qquad(16)
\end{aligned}$$

By using Lemma 2.6 (ii), the integral terms in (11) can be estimated as follows:

$$-\frac{\tau_1^{2}}{2}\int_{t-\tau_1}^{t}\!\int_{\gamma}^{t}\dot z^{*}(s)M\dot z(s)\,ds\,d\gamma\le-\Bigl(\int_{t-\tau_1}^{t}\!\int_{\gamma}^{t}\dot z(s)\,ds\,d\gamma\Bigr)^{*}M\Bigl(\int_{t-\tau_1}^{t}\!\int_{\gamma}^{t}\dot z(s)\,ds\,d\gamma\Bigr)=-\Bigl[\tau_1z(t)-\int_{t-\tau_1}^{t}z(s)\,ds\Bigr]^{*}M\Bigl[\tau_1z(t)-\int_{t-\tau_1}^{t}z(s)\,ds\Bigr],\qquad(17)$$
$$-\frac{\tau_2^{2}}{2}\int_{t-\tau_2}^{t}\!\int_{\gamma}^{t}\dot z^{*}(s)N\dot z(s)\,ds\,d\gamma\le-\Bigl(\int_{t-\tau_2}^{t}\!\int_{\gamma}^{t}\dot z(s)\,ds\,d\gamma\Bigr)^{*}N\Bigl(\int_{t-\tau_2}^{t}\!\int_{\gamma}^{t}\dot z(s)\,ds\,d\gamma\Bigr)=-\Bigl[\tau_2z(t)-\int_{t-\tau_2}^{t}z(s)\,ds\Bigr]^{*}N\Bigl[\tau_2z(t)-\int_{t-\tau_2}^{t}z(s)\,ds\Bigr].\qquad(18)$$

Then, substituting (17) and (18) into (11), yields

$$\begin{aligned}
\dot V_4(t)&\le z^{*}(t-\sigma)\Bigl[\tfrac{\tau_1^{4}}{4}A^{*}MA+\tfrac{\tau_2^{4}}{4}A^{*}NA\Bigr]z(t-\sigma)-z^{*}(t-\sigma)\Bigl[\tfrac{\tau_1^{4}}{4}A^{*}MB+\tfrac{\tau_2^{4}}{4}A^{*}NB\Bigr]f(z(t))\\
&\quad-z^{*}(t-\sigma)\Bigl[\tfrac{\tau_1^{4}}{4}A^{*}MC+\tfrac{\tau_2^{4}}{4}A^{*}NC\Bigr]f(z(t-\tau(t)))-f^{*}(z(t))\Bigl[\tfrac{\tau_1^{4}}{4}B^{*}MA+\tfrac{\tau_2^{4}}{4}B^{*}NA\Bigr]z(t-\sigma)\\
&\quad+f^{*}(z(t))\Bigl[\tfrac{\tau_1^{4}}{4}B^{*}MB+\tfrac{\tau_2^{4}}{4}B^{*}NB\Bigr]f(z(t))+f^{*}(z(t))\Bigl[\tfrac{\tau_1^{4}}{4}B^{*}MC+\tfrac{\tau_2^{4}}{4}B^{*}NC\Bigr]f(z(t-\tau(t)))\\
&\quad-f^{*}(z(t-\tau(t)))\Bigl[\tfrac{\tau_1^{4}}{4}C^{*}MA+\tfrac{\tau_2^{4}}{4}C^{*}NA\Bigr]z(t-\sigma)+f^{*}(z(t-\tau(t)))\Bigl[\tfrac{\tau_1^{4}}{4}C^{*}MB+\tfrac{\tau_2^{4}}{4}C^{*}NB\Bigr]f(z(t))\\
&\quad+f^{*}(z(t-\tau(t)))\Bigl[\tfrac{\tau_1^{4}}{4}C^{*}MC+\tfrac{\tau_2^{4}}{4}C^{*}NC\Bigr]f(z(t-\tau(t)))-z^{*}(t)[\tau_1^{2}M+\tau_2^{2}N]z(t)\\
&\quad-\Bigl(\int_{t-\tau_1}^{t}z(s)\,ds\Bigr)^{*}M\Bigl(\int_{t-\tau_1}^{t}z(s)\,ds\Bigr)+\tau_1z^{*}(t)M\int_{t-\tau_1}^{t}z(s)\,ds+\tau_1\Bigl(\int_{t-\tau_1}^{t}z(s)\,ds\Bigr)^{*}Mz(t)\\
&\quad-\Bigl(\int_{t-\tau_2}^{t}z(s)\,ds\Bigr)^{*}N\Bigl(\int_{t-\tau_2}^{t}z(s)\,ds\Bigr)+\tau_2z^{*}(t)N\int_{t-\tau_2}^{t}z(s)\,ds+\tau_2\Bigl(\int_{t-\tau_2}^{t}z(s)\,ds\Bigr)^{*}Nz(t).\qquad(19)
\end{aligned}$$

Similarly, by using Lemma 2.6 (ii) and (iii), the integral terms in $\dot V_5(t)$ can be bounded as follows:

$$-\frac{\sigma^{2}}{2}\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\dot z^{*}(s)O\dot z(s)\,ds\,d\gamma\le-\Bigl(\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\dot z(s)\,ds\,d\gamma\Bigr)^{*}O\Bigl(\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\dot z(s)\,ds\,d\gamma\Bigr)=-\Bigl[\sigma z(t)-\int_{t-\sigma}^{t}z(s)\,ds\Bigr]^{*}O\Bigl[\sigma z(t)-\int_{t-\sigma}^{t}z(s)\,ds\Bigr],\qquad(20)$$
$$-\frac{\sigma^{3}}{6}\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\!\int_{\delta}^{t}z^{*}(s)Jz(s)\,ds\,d\delta\,d\gamma\le-\Bigl(\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\!\int_{\delta}^{t}z(s)\,ds\,d\delta\,d\gamma\Bigr)^{*}J\Bigl(\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\!\int_{\delta}^{t}z(s)\,ds\,d\delta\,d\gamma\Bigr).\qquad(21)$$

Therefore, combining (12) with (20) and (21), we get

$$\begin{aligned}
\dot V_5(t)&\le z^{*}(t-\sigma)\tfrac{\sigma^{4}}{4}A^{*}OAz(t-\sigma)-z^{*}(t-\sigma)\tfrac{\sigma^{4}}{4}A^{*}OBf(z(t))-z^{*}(t-\sigma)\tfrac{\sigma^{4}}{4}A^{*}OCf(z(t-\tau(t)))\\
&\quad-f^{*}(z(t))\tfrac{\sigma^{4}}{4}B^{*}OAz(t-\sigma)+f^{*}(z(t))\tfrac{\sigma^{4}}{4}B^{*}OBf(z(t))+f^{*}(z(t))\tfrac{\sigma^{4}}{4}B^{*}OCf(z(t-\tau(t)))\\
&\quad-f^{*}(z(t-\tau(t)))\tfrac{\sigma^{4}}{4}C^{*}OAz(t-\sigma)+f^{*}(z(t-\tau(t)))\tfrac{\sigma^{4}}{4}C^{*}OBf(z(t))\\
&\quad+f^{*}(z(t-\tau(t)))\tfrac{\sigma^{4}}{4}C^{*}OCf(z(t-\tau(t)))-\sigma^{2}z^{*}(t)Oz(t)+\tfrac{\sigma^{6}}{36}z^{*}(t)Jz(t)\\
&\quad-\Bigl(\int_{t-\sigma}^{t}z(s)\,ds\Bigr)^{*}O\Bigl(\int_{t-\sigma}^{t}z(s)\,ds\Bigr)+\sigma z^{*}(t)O\int_{t-\sigma}^{t}z(s)\,ds+\sigma\Bigl(\int_{t-\sigma}^{t}z(s)\,ds\Bigr)^{*}Oz(t)\\
&\quad-\Bigl(\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\!\int_{\delta}^{t}z(s)\,ds\,d\delta\,d\gamma\Bigr)^{*}J\Bigl(\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\!\int_{\delta}^{t}z(s)\,ds\,d\delta\,d\gamma\Bigr).\qquad(22)
\end{aligned}$$

Moreover, based on Assumption 2.4, for any p=1,2,,n, we have

$$|f_p(z_p(t))|\le F_p|z_p(t)|.\qquad(23)$$

Let $G=\mathrm{diag}\{s_1,s_2,\ldots,s_n\}>0$. From (23), it can be seen that

$$s_pf_p^{*}(z_p(t))f_p(z_p(t))-s_pF_p^{2}z_p^{*}(t)z_p(t)\le 0,\quad p=1,2,\ldots,n.$$

Thus,

$$f^{*}(z(t))Gf(z(t))-z^{*}(t)G\Gamma z(t)\le 0.\qquad(24)$$

Combining (8), (9), (16), (19), (22) and (24), one can deduce that

$$\dot V(t)\le\zeta^{*}(t)\Theta\zeta(t),$$

where

$$\zeta(t)=\mathrm{col}\Bigl[z(t),\,z(t-\tau_1(t)),\,z(t-\tau_2(t)),\,z(t-\tau_1),\,z(t-\tau_2),\,z(t-\sigma),\,f(z(t)),\,f(z(t-\tau(t))),\,\int_{t-\sigma}^{t}z(s)\,ds,\,\int_{t-\sigma}^{t}\!\int_{\gamma}^{t}\!\int_{\delta}^{t}z(s)\,ds\,d\delta\,d\gamma,\,\int_{t-\tau_1}^{t}z(s)\,ds,\,\int_{t-\tau_2}^{t}z(s)\,ds\Bigr].$$

If (6) holds, then we get

V˙(t)<0.

Hence, the complex-valued neural network (3) is globally asymptotically stable. This completes the proof.

Remark 3.2

In Dey et al. (2010), the asymptotic stability of continuous-time systems with additive time-varying delays was investigated by utilizing free matrix variables. Wu et al. (2009) addressed the stability problem for a class of uncertain systems with two successive delay components. Using a convex polyhedron method, delay-dependent stability criteria were established in Shao and Han (2011) for neural networks with two additive time-varying delay components. However, when constructing the Lyapunov–Krasovskii functional, those results do not adequately use the full information about the additive time-varying delays $\tau_1(t)$, $\tau_2(t)$ and $\tau(t)$, which inevitably introduces some conservatism. By contrast, Cheng et al. (2014) utilized the full information about the additive time-varying delays in the constructed Lyapunov–Krasovskii functional and studied the delay-dependent stability of real-valued continuous-time systems, which gives less conservative results. Inspired by the above, in the present paper we also make full use of the information on the additive time-varying delays $\tau_1(t)$, $\tau_2(t)$ and $\tau(t)$ in the stability analysis of complex-valued neural networks.

Remark 3.3

In the existing literature, many researchers have studied the stability problem of neural networks and proposed good results; see, for example, Lakshmanan et al. (2013); Sakthivel et al. (2015); Xie et al. (2016) and the references therein. Most of these results are founded on the real-valued inequality $\bigl(\int_a^b z(s)\,ds\bigr)^{T}M\bigl(\int_a^b z(s)\,ds\bigr)\le(b-a)\int_a^b z^{T}(s)Mz(s)\,ds$. Chen and Song (2013) and Velmurugan et al. (2015) studied the stability and passivity of complex-valued neural networks with the help of the complex-valued Jensen inequality $\bigl(\int_a^b z(s)\,ds\bigr)^{*}M\bigl(\int_a^b z(s)\,ds\bigr)\le(b-a)\int_a^b z^{*}(s)Mz(s)\,ds$. Based on the above analysis, in this paper the single-integral terms are handled with the complex-valued Jensen inequality. Moreover, we introduce the double-integral inequality $\bigl(\int_a^b\int_s^b z(\theta)\,d\theta\,ds\bigr)^{*}M\bigl(\int_a^b\int_s^b z(\theta)\,d\theta\,ds\bigr)\le\frac{(b-a)^2}{2}\int_a^b\int_s^b z^{*}(\theta)Mz(\theta)\,d\theta\,ds$ as well as the triple-integral inequality $\bigl(\int_a^b\int_s^b\int_\theta^b z(\gamma)\,d\gamma\,d\theta\,ds\bigr)^{*}M\bigl(\int_a^b\int_s^b\int_\theta^b z(\gamma)\,d\gamma\,d\theta\,ds\bigr)\le\frac{(b-a)^3}{6}\int_a^b\int_s^b\int_\theta^b z^{*}(\gamma)Mz(\gamma)\,d\gamma\,d\theta\,ds$ for estimating the derivative of the Lyapunov–Krasovskii functional of complex-valued neural networks.

Remark 3.4

In the following, we discuss the global asymptotic stability criteria for complex-valued neural networks with additive time-varying delays but without leakage delay (i.e., $\sigma=0$). In this case, system (3) becomes

$$\dot z(t)=-Az(t)+Bf(z(t))+Cf\bigl(z(t-\tau_1(t)-\tau_2(t))\bigr),\quad z(s)=\phi(s),\ s\in[-\tau,0].\qquad(25)$$

Then, according to Theorem 3.1, we have the following corollary for the delay-dependent global asymptotic stability of system (25).

Corollary 3.5

Given scalars $\tau_1$, $\tau_2$, $\mu_1$ and $\mu_2$, the equilibrium point of the complex-valued neural network (25) with additive time-varying delays is globally asymptotically stable if there exist positive definite Hermitian matrices M, N, P, Q, R, S, T, V, W, X and a positive diagonal matrix G such that the following linear matrix inequality holds:

$$\tilde\Theta=\begin{pmatrix}
\tilde\Theta_{1,1}&0&0&W&X&\tilde\Theta_{1,6}&\tilde\Theta_{1,7}&\tau_1M&\tau_2N\\
\star&\tilde\Theta_{2,2}&0&0&0&0&0&0&0\\
\star&\star&\tilde\Theta_{3,3}&0&0&0&0&0&0\\
\star&\star&\star&\tilde\Theta_{4,4}&0&0&0&0&0\\
\star&\star&\star&\star&\tilde\Theta_{5,5}&0&0&0&0\\
\star&\star&\star&\star&\star&\tilde\Theta_{6,6}&\tilde\Theta_{6,7}&0&0\\
\star&\star&\star&\star&\star&\star&\tilde\Theta_{7,7}&0&0\\
\star&\star&\star&\star&\star&\star&\star&\tilde\Theta_{8,8}&0\\
\star&\star&\star&\star&\star&\star&\star&\star&\tilde\Theta_{9,9}
\end{pmatrix}<0,\qquad(26)$$

where $\tilde\Theta_{1,1}=-PA-A^{*}P+Q+R+S+T-W-X+G\Gamma-\tau_1^{2}M-\tau_2^{2}N+\tau_1^{2}A^{*}WA+\tau_2^{2}A^{*}XA+\frac{\tau_1^{4}}{4}A^{*}MA+\frac{\tau_2^{4}}{4}A^{*}NA$, $\tilde\Theta_{1,6}=PB-\tau_1^{2}A^{*}WB-\tau_2^{2}A^{*}XB-\frac{\tau_1^{4}}{4}A^{*}MB-\frac{\tau_2^{4}}{4}A^{*}NB$, $\tilde\Theta_{1,7}=PC-\tau_1^{2}A^{*}WC-\tau_2^{2}A^{*}XC-\frac{\tau_1^{4}}{4}A^{*}MC-\frac{\tau_2^{4}}{4}A^{*}NC$, $\tilde\Theta_{2,2}=-(1-\mu_1)Q$, $\tilde\Theta_{3,3}=-(1-\mu_2)R$, $\tilde\Theta_{4,4}=-W-S$, $\tilde\Theta_{5,5}=-X-T$, $\tilde\Theta_{6,6}=\tau_1^{2}B^{*}WB+\tau_2^{2}B^{*}XB+\frac{\tau_1^{4}}{4}B^{*}MB+\frac{\tau_2^{4}}{4}B^{*}NB+V-G$, $\tilde\Theta_{6,7}=\tau_1^{2}B^{*}WC+\tau_2^{2}B^{*}XC+\frac{\tau_1^{4}}{4}B^{*}MC+\frac{\tau_2^{4}}{4}B^{*}NC$, $\tilde\Theta_{7,7}=\tau_1^{2}C^{*}WC+\tau_2^{2}C^{*}XC+\frac{\tau_1^{4}}{4}C^{*}MC+\frac{\tau_2^{4}}{4}C^{*}NC-(1-\mu)V$, $\tilde\Theta_{8,8}=-M$, $\tilde\Theta_{9,9}=-N$.

Proof

The proof follows immediately from that of Theorem 3.1 by setting $\sigma=0$, $U=0$, $Y=0$, $O=0$ and $J=0$; hence it is omitted.

Remark 3.6

When $\tau_1(t)=0$ or $\tau_2(t)=0$, the complex-valued neural network with additive time-varying delays (25) reduces to one with a single time-varying delay. Without loss of generality, assume that $\tau_2(t)=0$; then (25) becomes

$$\dot z(t)=-Az(t)+Bf(z(t))+Cf\bigl(z(t-\tau_1(t))\bigr),\quad z(s)=\phi(s),\ s\in[-\tau_1,0].\qquad(27)$$

By letting $\tau_2=0$ and $R=T=X=N=0$ in Corollary 3.5, we can easily obtain a sufficient condition for the global asymptotic stability of the complex-valued neural network with a time-varying delay (27), which is summarized in the following corollary.

Corollary 3.7

Given scalars $\tau_1$ and $\mu_1$, the equilibrium point of the complex-valued neural network (27) is globally asymptotically stable if there exist positive definite Hermitian matrices M, P, Q, S, V, W and a positive diagonal matrix G such that the following linear matrix inequality holds:

$$\hat\Theta=\begin{pmatrix}
\hat\Theta_{1,1}&0&W&\hat\Theta_{1,4}&\hat\Theta_{1,5}&\tau_1M\\
\star&-(1-\mu_1)Q&0&0&0&0\\
\star&\star&-W-S&0&0&0\\
\star&\star&\star&\hat\Theta_{4,4}&\hat\Theta_{4,5}&0\\
\star&\star&\star&\star&\hat\Theta_{5,5}&0\\
\star&\star&\star&\star&\star&-M
\end{pmatrix}<0,\qquad(28)$$

where $\hat\Theta_{1,1}=-PA-A^{*}P+Q+S-W+G\Gamma-\tau_1^{2}M+\tau_1^{2}A^{*}WA+\frac{\tau_1^{4}}{4}A^{*}MA$, $\hat\Theta_{1,4}=PB-\tau_1^{2}A^{*}WB-\frac{\tau_1^{4}}{4}A^{*}MB$, $\hat\Theta_{1,5}=PC-\tau_1^{2}A^{*}WC-\frac{\tau_1^{4}}{4}A^{*}MC$, $\hat\Theta_{4,4}=\tau_1^{2}B^{*}WB+\frac{\tau_1^{4}}{4}B^{*}MB+V-G$, $\hat\Theta_{4,5}=\tau_1^{2}B^{*}WC+\frac{\tau_1^{4}}{4}B^{*}MC$, $\hat\Theta_{5,5}=\tau_1^{2}C^{*}WC+\frac{\tau_1^{4}}{4}C^{*}MC-(1-\mu_1)V$, $\hat\Theta_{6,6}=-M$.

Remark 3.8

In Liu and Chen (2016), the global exponential stability of complex-valued neural networks with asynchronous time delays is established by decomposing the complex-valued network into its real and imaginary parts and constructing an equivalent real-valued system. Xu et al. (2014) derived exponential stability conditions for a class of complex-valued neural networks with time-varying delays and unbounded delays by utilizing the vector Lyapunov–Krasovskii functional method, a homeomorphism mapping lemma and matrix theory. In Liu and Chen (2016) and Xu et al. (2014), the stability results for complex-valued neural networks with constant or time-varying delays are obtained by separating the activation function into its real and imaginary parts. However, when the activation functions cannot be expressed by separating their real and imaginary parts, the stability results proposed in Liu and Chen (2016) and Xu et al. (2014) cannot be applied. It should be mentioned that the stability criterion proposed in this paper is valid regardless of whether the activation functions can be expressed by separating their real and imaginary parts. Thus, the derived delay-dependent stability condition is more general than those in the existing literature (Liu and Chen 2016; Xu et al. 2014).

Numerical example

In this section, we give three numerical examples to demonstrate the derived main results.

Example 4.1

Consider a two-dimensional complex-valued neural network (3) with the following parameters:

$$A=\begin{pmatrix}9&0\\0&9\end{pmatrix},\quad B=\begin{pmatrix}1-i&-1-i\\2-i&2-5i\end{pmatrix},\quad C=\begin{pmatrix}1-i&1-i\\1+i&-1-i\end{pmatrix}.$$

The activation function is chosen as $f(z(t))=\frac{1-e^{-x(t)}}{1+e^{-x(t)}}+i\frac{1}{1+e^{-y(t)}}$. The time-varying delays are taken as $\tau_1(t)=0.2\sin t+0.2$ and $\tau_2(t)=0.5\cos t+0.8$, which satisfy $\tau_1=0.4$, $\tau_2=1.3$, $\mu_1=0.2$ and $\mu_2=0.5$. Taking $\Gamma=\mathrm{diag}\{0.5,0.5\}$ and $\sigma=0.08$, and using the YALMIP toolbox in MATLAB, we can find the following feasible solutions to the linear matrix inequality (6):

$$\begin{aligned}
P&=\begin{pmatrix}1.5815&-0.1675+0.0342i\\-0.1675-0.0342i&1.0867\end{pmatrix},&
Q&=\begin{pmatrix}0.2537&-0.1143+0.0229i\\-0.1143-0.0229i&0.0772\end{pmatrix},\\
R&=\begin{pmatrix}0.2538&-0.1144+0.0229i\\-0.1144-0.0229i&0.0773\end{pmatrix},&
S&=\begin{pmatrix}0.2507&-0.1129+0.0226i\\-0.1129-0.0226i&0.0764\end{pmatrix},\\
T&=\begin{pmatrix}0.2533&-0.1141+0.0229i\\-0.1141-0.0229i&0.0771\end{pmatrix},&
U&=\begin{pmatrix}0.8848&-0.2728+0.0567i\\-0.2728-0.0567i&0.4261\end{pmatrix},\\
V&=\begin{pmatrix}7.0635&-1.2995+0.8830i\\-1.2995-0.8830i&3.1009\end{pmatrix},&
W&=\begin{pmatrix}0.0061&-0.0028+0.0006i\\-0.0028-0.0006i&0.0017\end{pmatrix},\\
J&=\begin{pmatrix}87.0921&-0.0011+0.0002i\\-0.0011-0.0002i&87.0905\end{pmatrix},&
X&=10^{-3}\times\begin{pmatrix}0.5620&-0.2517+0.0505i\\-0.2517-0.0505i&0.1735\end{pmatrix},\\
M&=\begin{pmatrix}0.2831&-0.1315+0.0264i\\-0.1315-0.0264i&0.0802\end{pmatrix},&
Y&=10^{-3}\times\begin{pmatrix}1.1409&-0.1424+0.0287i\\-0.1424-0.0287i&0.7563\end{pmatrix},\\
N&=\begin{pmatrix}0.0026&-0.0012+0.0002i\\-0.0012-0.0002i&0.0007\end{pmatrix},&
O&=10^{-2}\times\begin{pmatrix}0.8791&0.5352-0.0968i\\0.5352+0.0968i&1.4391\end{pmatrix},
\end{aligned}$$

and $G=\mathrm{diag}\{9.6363,7.7007\}$. According to Theorem 3.1, the complex-valued neural network with leakage delay and additive time-varying delays (3) is globally asymptotically stable. Figures 1 and 2 show the time responses of the real and imaginary parts of system (3) with 21 initial conditions, respectively. The phase trajectories of the real parts of system (3) are given in Fig. 3, and those of the imaginary parts in Fig. 4. Also, Figs. 5 and 6 depict the real and imaginary parts of the states of the considered complex-valued neural network (3) with $\sigma=1$ under the same 21 initial conditions. It is easy to check that in this case the unique equilibrium point of system (3) is unstable, which implies that the delay in the leakage term cannot be ignored when analyzing the stability of complex-valued neural networks.

Fig. 1. State trajectories of real parts of the system (3) in Example 4.1

Fig. 2. State trajectories of imaginary parts of the system (3) in Example 4.1

Fig. 3. State trajectories of neural network (3) in the real subspace [Re(z1), Re(z2)]

Fig. 4. State trajectories of neural network (3) in the imaginary subspace [Im(z1(t)), Im(z2(t))]

Fig. 5. Time response of real parts of the system (3) when σ=1

Fig. 6. Time response of imaginary parts of the system (3) when σ=1

Example 4.2

Consider the following two-dimensional complex-valued neural network (25) with additive time-varying delays:

$$\dot z(t)=-Az(t)+Bf(z(t))+Cf\bigl(z(t-\tau_1(t)-\tau_2(t))\bigr),$$

where

$$A=\begin{pmatrix}10&0\\0&10\end{pmatrix},\quad B=\begin{pmatrix}2-i&-1-3i\\2-2i&2-i\end{pmatrix},\quad C=\begin{pmatrix}2-i&1-i\\1+i&-3-i\end{pmatrix}.$$

The additive time-varying delays are taken as $\tau_1(t)=0.5\sin(0.2t)$ and $\tau_2(t)=0.4\cos(0.6t)$. Choose the nonlinear activation function as $f(z)=\tanh z$ with $\Gamma=\mathrm{diag}\{0.5,0.5\}$, $\tau_1=0.5$, $\tau_2=0.4$, $\mu_1=0.1$, $\mu_2=0.24$. Using the YALMIP toolbox in MATLAB with the above parameters, the linear matrix inequality (26) is feasible. From Figs. 7 and 8, we find that the real and imaginary parts of the state trajectories converge to the zero equilibrium point from different initial conditions. The phase trajectories of the real and imaginary parts of system (25) are depicted in Figs. 9 and 10, respectively. By Corollary 3.5, we conclude that the proposed neural network (25) is globally asymptotically stable.
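As an independent cross-check (not part of the original analysis), this delayed system can be integrated by the explicit Euler method with a history buffer. The initial condition, step size and horizon below are our own choices; since the stated delay formulas can dip below zero for some $t$, the sketch clamps the total delay at zero to keep the simulation causal, and uses the initial state as constant pre-history.

```python
import numpy as np

# Parameters of system (25) in this example; f(z) = tanh(z).
A = np.diag([10.0, 10.0])
B = np.array([[2 - 1j, -1 - 3j], [2 - 2j, 2 - 1j]])
C = np.array([[2 - 1j, 1 - 1j], [1 + 1j, -3 - 1j]])

dt, T = 0.005, 40.0
steps = int(T / dt)
z = np.zeros((steps + 1, 2), dtype=complex)
z[0] = [0.5 + 0.5j, -0.5 - 0.3j]   # an arbitrary initial condition

for k in range(steps):
    t = k * dt
    # Total delay tau(t) = tau1(t) + tau2(t), clamped at 0 so the
    # simulation stays causal; times before 0 reuse the initial state.
    tau = max(0.0, 0.5 * np.sin(0.2 * t) + 0.4 * np.cos(0.6 * t))
    kd = max(0, k - int(round(tau / dt)))  # index of z(t - tau)
    dz = -A @ z[k] + B @ np.tanh(z[k]) + C @ np.tanh(z[kd])
    z[k + 1] = z[k] + dt * dz              # explicit Euler step

print(np.linalg.norm(z[-1]))  # state norm after the transient
```

The decaying norm is consistent with the feasibility of (26) and with Figs. 7–10; it is of course only one trajectory, not a stability proof.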

Fig. 7. Trajectories of the real parts x(t) of the states z(t) for the neural network (25)

Fig. 8. Trajectories of the imaginary parts y(t) of the states z(t) for the neural network (25)

Fig. 9. State trajectories of neural network (25) in the real subspace [x1(t), x2(t)]

Fig. 10. State trajectories of neural network (25) in the imaginary subspace [y1(t), y2(t)]

Example 4.3

Consider the following two-dimensional complex-valued neural network (27) with a time-varying delay:

$$\dot z(t)=-Az(t)+Bf(z(t))+Cf\bigl(z(t-\tau_1(t))\bigr),$$

where

$$A=\begin{pmatrix}8&0\\0&8\end{pmatrix},\quad B=\begin{pmatrix}-1-2i&1-3i\\2-3i&4-i\end{pmatrix},\quad C=\begin{pmatrix}2-2i&1-i\\3+i&1-i\end{pmatrix}.$$

Choose the nonlinear activation function as $f(z)=\tanh z$ with $\Gamma=\mathrm{diag}\{0.5,0.5\}$. The time-varying delay is chosen as $\tau_1(t)=0.1\sin t+0.6$, which satisfies $\tau_1=0.7$ and $\mu_1=0.1$. By employing the MATLAB YALMIP toolbox, we can find the following feasible solutions to the linear matrix inequality (28), which guarantee the global asymptotic stability of the equilibrium point:

$$\begin{aligned}
P&=\begin{pmatrix}49.3881&-14.0945+18.3994i\\-14.0945-18.3994i&37.0857\end{pmatrix},&
Q&=\begin{pmatrix}51.1062&-19.9936+26.6048i\\-19.9936-26.6048i&39.9656\end{pmatrix},\\
S&=\begin{pmatrix}44.6015&-17.1853+23.1041i\\-17.1853-23.1041i&34.9374\end{pmatrix},&
V&=10^{2}\times\begin{pmatrix}1.7782&-0.0575-0.4004i\\-0.0575+0.4004i&0.7124\end{pmatrix},\\
W&=\begin{pmatrix}10.7229&-4.4289+5.6768i\\-4.4289-5.6768i&7.4946\end{pmatrix},&
M&=\begin{pmatrix}38.5826&-8.7800+11.8910i\\-8.7800-11.8910i&31.9817\end{pmatrix},
\end{aligned}$$

and $G=\mathrm{diag}\{311.1505,230.5390\}$. Figures 11 and 12, respectively, display the real and imaginary parts of the state trajectories of the complex-valued neural network (27), which converge to the origin from 21 randomly selected initial conditions. The phase trajectories of the real and imaginary parts of system (27) are drawn in Figs. 13 and 14, respectively.
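Reported feasible solutions like those above can be sanity-checked without YALMIP: a valid solution matrix must be Hermitian with strictly positive eigenvalues. A minimal numpy check on the matrix $P$ reported for this example (values as rounded in the text):

```python
import numpy as np

# P as reported for Example 4.3 (rounded to four decimals in the text).
P = np.array([
    [49.3881 + 0.0j,      -14.0945 + 18.3994j],
    [-14.0945 - 18.3994j,  37.0857 + 0.0j],
])

# A positive definite Hermitian matrix must equal its conjugate
# transpose and have strictly positive (real) eigenvalues.
assert np.allclose(P, P.conj().T)
eigs = np.linalg.eigvalsh(P)   # real eigenvalues of a Hermitian matrix
print(eigs.min() > 0)          # prints True
```

The same check applies to Q, S, V, W and M; it verifies only positive definiteness of each block, not feasibility of the full LMI (28), which still requires an SDP solver.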

Fig. 11. Time responses of real parts of the system (27) with 21 initial conditions

Fig. 12. Time responses of imaginary parts of the system (27) with 21 initial conditions

Fig. 13. Phase trajectories of real parts of the proposed system (27)

Fig. 14. Phase trajectories of imaginary parts of the proposed system (27)

Conclusion

In this paper, the global asymptotic stability of complex-valued neural networks with leakage delay and additive time-varying delays has been studied. Sufficient conditions have been proposed to ascertain the global asymptotic stability of the addressed neural networks, based on an appropriate Lyapunov–Krasovskii functional involving triple integral terms. The main results are expressed as complex-valued linear matrix inequalities, which can be easily solved by the YALMIP toolbox in MATLAB. Three numerical examples have been presented to illustrate the effectiveness of the theoretical results.

Acknowledgements

The authors wish to thank the editor and reviewers for a number of constructive comments and suggestions that have improved the quality of this manuscript. This work was supported by Science Engineering Research Board (SERB), DST, Govt. of India under YSS Project F.No: YSS/2014/000447 dated 20.11.2015.


Contributor Information

K. Subramanian, Email: subramaniangri@gmail.com

P. Muthukumar, Phone: 91-451-2452371, Email: pmuthukumargri@gmail.com

References

1. Alofi A, Ren F, Al-Mazrooei A, Elaiw A, Cao J. Power-rate synchronization of coupled genetic oscillators with unbounded time-varying delay. Cogn Neurodyn. 2015;9(5):549–559. doi: 10.1007/s11571-015-9344-2.
2. Bao H, Park JH, Cao J. Synchronization of fractional-order complex-valued neural networks with time delay. Neural Netw. 2016;81:16–28. doi: 10.1016/j.neunet.2016.05.003.
3. Cao J, Li R. Fixed-time synchronization of delayed memristor-based recurrent neural networks. Sci China Inf Sci. 2017;60(3):032201. doi: 10.1007/s11432-016-0555-2.
4. Cheng J, Zhu H, Zhong S, Zhang Y, Zeng Y. Improved delay-dependent stability criteria for continuous system with two additive time-varying delay components. Commun Nonlinear Sci Numer Simul. 2014;19(1):210–215. doi: 10.1016/j.cnsns.2013.05.026.
5. Chen X, Song Q. Global stability of complex-valued neural networks with both leakage time delay and discrete time delay on time scales. Neurocomputing. 2013;121:254–264. doi: 10.1016/j.neucom.2013.04.040.
6. Chen X, Song Q, Zhao Z, Liu Y. Global μ-stability analysis of discrete-time complex-valued neural networks with leakage delay and mixed delays. Neurocomputing. 2016;175:723–735. doi: 10.1016/j.neucom.2015.10.120.
7. Chen X, Zhao Z, Song Q, Hu J. Multistability of complex-valued neural networks with time-varying delays. Appl Math Comput. 2017;294:18–35.
8. Dey R, Ray G, Ghosh S, Rakshit A. Stability analysis for continuous system with additive time-varying delays: a less conservative result. Appl Math Comput. 2010;215(10):3740–3745.
9. Gong W, Liang J, Cao J. Global μ-stability of complex-valued delayed neural networks with leakage delay. Neurocomputing. 2015;168:135–144. doi: 10.1016/j.neucom.2015.06.006.
10. Gong W, Liang J, Cao J. Matrix measure method for global exponential stability of complex-valued recurrent neural networks with time-varying delays. Neural Netw. 2015;70:81–89. doi: 10.1016/j.neunet.2015.07.003.
11. Guo D, Li C. Population rate coding in recurrent neuronal networks with unreliable synapses. Cogn Neurodyn. 2012;6(1):75–87. doi: 10.1007/s11571-011-9181-x.
12. Hu M, Cao J, Hu A. Exponential stability of discrete-time recurrent neural networks with time-varying delays in the leakage terms and linear fractional uncertainties. IMA J Math Control Inf. 2014;31(3):345–362. doi: 10.1093/imamci/dnt014.
13. Hu J, Wang J. Global stability of complex-valued recurrent neural networks with time-delays. IEEE Trans Neural Netw Learn Syst. 2012;23(6):853–865. doi: 10.1109/TNNLS.2012.2195028.
14. Huang C, Cao J, Xiao M, Alsaedi A, Hayat T. Bifurcations in a delayed fractional complex-valued neural network. Appl Math Comput. 2017;292:210–227.
15. Jiang D (2008) Complex-valued recurrent neural networks for global optimization of beamforming in multi-symbol MIMO communication systems. In: Proceedings of international conference on conceptual structuration, Shanghai. pp 1–8
16. Lakshmanan S, Park JH, Lee TH, Jung HY, Rakkiyappan R. Stability criteria for BAM neural networks with leakage delays and probabilistic time-varying delays. Appl Math Comput. 2013;219(17):9408–9423.
17. Li R, Cao J. Stability analysis of reaction-diffusion uncertain memristive neural networks with time-varying delays and leakage term. Appl Math Comput. 2016;278:54–69.
18. Liu X, Chen T. Global exponential stability for complex-valued recurrent neural networks with asynchronous time delays. IEEE Trans Neural Netw Learn Syst. 2016;27(3):593–606. doi: 10.1109/TNNLS.2015.2415496.
19. Manivannan R, Samidurai R, Cao J, Alsaedi A. New delay-interval-dependent stability criteria for switched Hopfield neural networks of neutral type with successive time-varying delay components. Cogn Neurodyn. 2016;10(6):543–562. doi: 10.1007/s11571-016-9396-y.
20. Mattia M, Sanchez-Vives MV. Exploring the spectrum of dynamical regimes and timescales in spontaneous cortical activity. Cogn Neurodyn. 2012;6(3):239–250. doi: 10.1007/s11571-011-9179-4.
21. Nait-Charif H. Complex-valued neural networks fault tolerance in pattern classification applications. IEEE Second WRI Glob Congr Intell Syst. 2010;3:154–157.
22. Rakkiyappan R, Chandrasekar A, Cao J. Passivity and passification of memristor-based recurrent neural networks with additive time-varying delays. IEEE Trans Neural Netw Learn Syst. 2015;26(9):2043–2057. doi: 10.1109/TNNLS.2014.2365059.
23. Sakthivel R, Vadivel P, Mathiyalagan K, Arunkumar A, Sivachitra M. Design of state estimator for bidirectional associative memory neural networks with leakage delays. Inf Sci. 2015;296:263–274. doi: 10.1016/j.ins.2014.10.063.
24. Shao H, Han QL. New delay-dependent stability criteria for neural networks with two additive time-varying delay components. IEEE Trans Neural Netw. 2011;22(5):812–818. doi: 10.1109/TNN.2011.2114366.
25. Shao H, Han QL. On stabilization for systems with two additive time-varying input delays arising from networked control systems. J Franklin Inst. 2012;349(6):2033–2046. doi: 10.1016/j.jfranklin.2012.03.011.
26. Subramanian K, Muthukumar P (2016) Existence, uniqueness, and global asymptotic stability analysis for delayed complex-valued Cohen–Grossberg BAM neural networks. Neural Comput Appl. doi: 10.1007/s00521-016-2539-6.
27. Tanaka G, Aihara K. Complex-valued multistate associative memory with nonlinear multilevel functions for gray-level image reconstruction. IEEE Trans Neural Netw. 2009;20(9):1463–1473. doi: 10.1109/TNN.2009.2025500.
28. Tian J, Zhong S. Improved delay-dependent stability criteria for neural networks with two additive time-varying delay components. Neurocomputing. 2012;77(1):114–119. doi: 10.1016/j.neucom.2011.08.027.
29. Velmurugan G, Rakkiyappan R, Lakshmanan S. Passivity analysis of memristor-based complex-valued neural networks with time-varying delays. Neural Process Lett. 2015;42(3):517–540. doi: 10.1007/s11063-014-9371-8.
30. Wu H, Liao X, Feng W, Guo S, Zhang W. Robust stability analysis of uncertain systems with two additive time-varying delay components. Appl Math Model. 2009;33(12):4345–4353. doi: 10.1016/j.apm.2009.03.008.
31. Xie W, Zhu Q, Jiang F. Exponential stability of stochastic neural networks with leakage delays and expectations in the coefficients. Neurocomputing. 2016;173:1268–1275. doi: 10.1016/j.neucom.2015.08.086.
32. Xu X, Zhang J, Shi J. Exponential stability of complex-valued neural networks with mixed delays. Neurocomputing. 2014;128:483–490. doi: 10.1016/j.neucom.2013.08.014.
33. Yu K, Wang J, Deng B, Wei X. Synchronization of neuron population subject to steady DC electric field induced by magnetic stimulation. Cogn Neurodyn. 2013;7(3):237–252. doi: 10.1007/s11571-012-9233-x.
34. Zhao Y, Gao H, Mou S. Asymptotic stability analysis of neural networks with successive time delay components. Neurocomputing. 2008;71:2848–2856. doi: 10.1016/j.neucom.2007.08.015.
