Cognitive Neurodynamics. 2014 Aug 26;9(2):113–128. doi: 10.1007/s11571-014-9307-z

Exponential synchronization of discontinuous neural networks with time-varying mixed delays via state feedback and impulsive control

Xinsong Yang 1, Jinde Cao 2,3, Daniel W C Ho 4,5
PMCID: PMC4378667  PMID: 25834647

Abstract

This paper investigates drive-response synchronization for a class of neural networks with time-varying discrete and distributed delays (mixed delays) as well as discontinuous activations. A rigorous mathematical proof establishes the global existence of Filippov solutions to neural networks with discontinuous activation functions and mixed delays. A state feedback controller and an impulsive controller are designed, respectively, to guarantee global exponential synchronization of the neural networks. By using a Lyapunov function and new analysis techniques, several new synchronization criteria are obtained. Moreover, a lower bound on the convergence rate is explicitly estimated when the state feedback controller is utilized. The results of this paper are new, and several existing results are extended and improved. Finally, numerical simulations are given to verify the effectiveness of the theoretical results.

Keywords: Neural networks, Discontinuous activations, Exponential synchronization, Filippov solutions, State feedback control, Impulsive control

Introduction

Although neural networks with piecewise-linear neuron activations or continuously differentiable, strictly increasing sigmoid activations have important applications in signal and image processing (Di Marco et al. 2010, 2005; Kamel and Xia 2009; Cao and Wan 2014; Zhang et al. 2014), it has been reported that neural networks with discontinuous (or non-Lipschitz) neuron activations are an ideal model for neuron amplifiers with very high gain (Forti et al. 2005). For example, the sigmoidal neuron activations of the classical Hopfield network with high-gain amplifiers approach a discontinuous hard-comparator function (Li et al. 1989). The high-gain hypothesis is crucial to make negligible the contribution to the network energy function of the term depending on the neuron self-inhibitions, and to favor binary output formation (Li et al. 1989; Forti and Nistri 2003). When dealing with neural networks possessing high-slope nonlinear activations, it is often advantageous to model them with a system of differential equations with discontinuous neuron activations, rather than to study the case where the slope is high but of finite value. Since global convergence criteria for this class of neural networks were first investigated in Forti and Nistri (2003), many results have been published on the convergence of periodic (almost periodic) solutions or equilibrium points of neural networks with discontinuous activations (Forti et al. 2005, 2006; Liu et al. 2011, 2012, 2014; Lu and Chen 2008, 2006; Wang et al. 2009; Cai and Huang 2011; Haddad 1981); nevertheless, synchronization results for this class of neural networks remain scarce.

Since the pioneering work of Pecora and Carroll (1990), synchronization has attracted much attention due to its wide applications in engineering such as secure communication, biological systems, and information processing (Yan and Wang 2014; Yang et al. 2011b, 2014; Liao and Huang 1999; Cao et al. 2013; Li et al. 2013; Rigatos 2014; Xu et al. 2014). In Pecora and Carroll (1990), complete synchronization (synchronization for short) was proposed; its mechanism is as follows: a chaotic system, called the driver (or master), generates a signal sent over a channel to a responder (or slave) that is identical to the driver, and the responder uses this signal to control itself so that it oscillates in a manner synchronized with the driver. So far, many effective control methods have been proposed to realize synchronization, such as state feedback control, intermittent control, impulsive control, etc. Recently, by using the state feedback control technique, several quasi-synchronization criteria were obtained in Liu et al. (2011) for neural networks with discontinuous activations and parameter mismatches. The results in Liu et al. (2011) show that, under the usual linear state feedback controller, complete synchronization cannot be realized even without parameter mismatches between the drive and response systems, owing to the discontinuity of the activation function. Afterwards, by using a class of discontinuous state feedback controllers, the authors of Yang and Cao (2013) considered exponential synchronization of neural networks with time-varying delays and discontinuous activations. However, the authors of Yang and Cao (2013) did not consider distributed delay.

As a matter of fact, a realistic neural network should involve both discrete and distributed delays (Liao and Lu 2011). It is well known that discrete time delays are introduced in neural networks due to the finite switching speed of the amplifiers and the transmission of signals in the network, while distributed delays correspond to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. The distributed delay is usually bounded since the signal propagation is distributed over a certain time period. An application of distributed delay can be found in Tank and Hopfield (1987), where a neural circuit with bounded distributed delays was designed to solve a general problem of recognizing patterns in a time-dependent signal. Although there are many results on synchronization of continuous neural networks with discrete and distributed delays (mixed delays) (Cao et al. 2007; Yang et al. 2013; Wang et al. 2010; Yang et al. 2011a), as far as we know, results on synchronization of neural networks with discontinuous activations and mixed delays are few.

On the other hand, the impulsive control technique has won the favor of many researchers because it needs small control gains and acts only at discrete time instants. These special characteristics of impulsive control can reduce the control cost drastically (Yang et al. 2011b). However, results on synchronization of neural networks with discontinuous activations under impulsive control have not yet been reported in the literature.

Motivated by the above analysis, this paper aims to investigate global exponential synchronization of neural networks with discontinuous activation functions and time-varying mixed delays via two kinds of control techniques: state feedback control and impulsive control. The bounded distributed delay involves a delay kernel, which makes the considered model more general. Unlike Lu and Chen (2008), Liu et al. (2012), Lu and Chen (2006), and Haddad (1981), in which a constructive method was utilized to obtain approximate Filippov solutions of neural networks with discontinuous activations and mixed delays, we show the global existence of Filippov solutions through a rigorous mathematical proof. Then, by using the sign function, a novel state feedback controller and an impulsive controller are designed to guarantee the synchronization goal. Moreover, a lower bound on the convergence rate is explicitly estimated when the state feedback controller is utilized. Numerical simulations show the effectiveness of the theoretical results.

Notations

In the sequel, if not explicitly stated, matrices are assumed to have compatible dimensions. $\mathbb{N}^+$ is the set of positive integers, $I_m$ denotes the $m$-dimensional identity matrix, and $\mathbb{R}$ is the set of real numbers. The Euclidean norm in $\mathbb{R}^m$ is denoted by $\|\cdot\|$; accordingly, for a vector $x\in\mathbb{R}^m$, $\|x\|=\sqrt{x^Tx}$, where $T$ denotes transposition. $x=0$ means that each component of $x$ is zero. For a matrix $A=(a_{ij})_{m\times m}$, $\|A\|=\sqrt{\lambda_{\max}(A^TA)}$. $\overline{\mathrm{co}}[E]$ stands for the closure of the convex hull of the set $E$.

The rest of this paper is organized as follows. In section “Model description and some preliminaries”, the model of discontinuous neural networks with mixed delays is described; some necessary assumptions, definitions and lemmas are also given in this section. Exponential synchronization of the considered model under state feedback control and impulsive control is studied in sections “Synchronization under state feedback control” and “Synchronization under impulsive control”, respectively. In section “Examples and simulations”, two examples with their numerical simulations verify the effectiveness of our results. Conclusions and future research interests are given in section “Conclusion”. At last, acknowledgments are presented.

Model description and some preliminaries

In this paper, we consider the neural network with time-varying discrete and distributed delays which is described as follows:

$$\dot{x}(t)=-Cx(t)+Af(x(t))+Bf(x(t-\tau(t)))+D\int_{t-\theta(t)}^{t}K(t-s)f(x(s))\,ds+I, \qquad (1)$$

where $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T\in\mathbb{R}^n$ is the state vector; $C=\mathrm{diag}(c_1,c_2,\ldots,c_n)$, in which $c_i>0$, $i=1,2,\ldots,n$, are the neuron self-inhibitions; $A=(a_{ij})_{n\times n}$, $B=(b_{ij})_{n\times n}$ and $D=(d_{ij})_{n\times n}$ are the connection weight matrices; the activation function $f(x(t))=(f_1(x_1(t)),f_2(x_2(t)),\ldots,f_n(x_n(t)))^T$ represents the output of the network; $I=(I_1,I_2,\ldots,I_n)^T$ is the external input vector; the bounded functions $\tau(t)>0$ and $\theta(t)>0$ (which may be different) represent unknown time-varying discrete and distributed delays, respectively; $K(u)$ is a non-negative bounded scalar function defined for $u\ge 0$ describing the delay kernel.

The trajectory of the solution $x(t)$ of neural network (1) can be any desired state: an equilibrium point, a nontrivial periodic or almost periodic orbit, or even a chaotic orbit. In this paper, we suppose that the activation function $f(x(t))$ is not continuous on $\mathbb{R}^n$. Hence, system (1) becomes a differential equation with discontinuous right-hand side. In this case, the uniqueness of the solution of (1) might be lost, and in the worst case one cannot define a solution in the conventional sense.

In order to study the dynamics of a system of differential equations with discontinuous right-hand side, we first transform it into a differential inclusion (Filippov 1960) by using the Filippov regularization; then, by the measurable selection theorem in Aubin and Cellina (1984), we obtain an uncertain differential equation. Thus, studying the dynamics of the system of differential equations with discontinuous right-hand side is transformed into considering the same problem for the uncertain differential equation. The Filippov regularization is defined as follows.

Definition 1

Filippov (1960) (Filippov regularization). The Filippov set-valued map of $f(x)$ at $x\in\mathbb{R}^n$ is defined as follows:

$$F(x)=\bigcap_{\delta>0}\bigcap_{\mu(\Omega)=0}\overline{\mathrm{co}}\,[f(B(x,\delta)\setminus\Omega)],$$

where $B(x,\delta)=\{y:\|y-x\|\le\delta\}$, and $\mu(\Omega)$ is the Lebesgue measure of the set $\Omega$.

By Definition 1, the Filippov set-valued map gives the convex hull of $f(\cdot)$ at the discontinuity points (ignoring sets of measure zero), and coincides with $f(\cdot)$ at points of continuity.

For example, consider the following initial value problem (IVP):

$$\dot{x}(t)=f(x(t)),\quad x(0)=x_0,\quad t\in[0,T], \qquad (2)$$

where

$$f(x)=\begin{cases} x, & x<1,\\ x+1, & x>1. \end{cases} \qquad (3)$$

According to Definition 1, the differential inclusion of the system (2) is as follows:

$$\dot{x}(t)\in F(x)=\begin{cases} \{x\}, & x<1,\\ [1,2], & x=1,\\ \{x+1\}, & x>1. \end{cases} \qquad (4)$$

The function $f(x)$ in (3) and its convex hull $F(x)$ are shown in Fig. 1. Note that the convex hull $F(x)$ does not depend on the value of $f(x)$ at the discontinuity point.

Fig. 1 Function $f(x)$ in (3) (left) and the convex hull $F(x)$ of $f(x)$ (right)
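To make the regularization concrete, the following minimal Python sketch (not from the paper; the helper name and the one-sided sampling are illustrative assumptions) approximates the set-valued map (4) for the scalar example (3) by evaluating $f$ just to the left and right of each point:

```python
def filippov_interval(x, f, delta=1e-6):
    """Approximate the Filippov set-valued map F(x) of a scalar f as an
    interval, by sampling f just to the left and right of x (the value of
    f at x itself is ignored, mimicking the measure-zero exclusion)."""
    left, right = f(x - delta), f(x + delta)
    return min(left, right), max(left, right)

f = lambda s: s if s < 1 else s + 1     # the example function (3)

print(filippov_interval(0.5, f))        # ~ (0.5, 0.5): F(x) = {x} for x < 1
print(filippov_interval(1.0, f))        # ~ (1.0, 2.0): the interval [1, 2] at the jump
print(filippov_interval(2.0, f))        # ~ (3.0, 3.0): F(x) = {x + 1} for x > 1
```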

A vector-valued function $x(t)$ defined on the interval $[0,T]$ is called a Filippov solution of (2) if it is absolutely continuous on $[0,T]$ and satisfies the differential inclusion $\dot{x}(t)\in F(x(t))$ for $t\in[0,T]$. By the measurable selection theorem in Aubin and Cellina (1984), we can find a measurable function $\gamma:[0,T]\to\mathbb{R}^n$ such that $\gamma(t)\in F(x(t))$ for almost all (a.a.) $t\in[0,T]$ and $\dot{x}(t)=\gamma(t)$ for a.a. $t\in[0,T]$.

The following properties hold (Forti and Nistri 2003): $\overline{\mathrm{co}}[\alpha E]=\alpha\,\overline{\mathrm{co}}[E]$ for any $\alpha\in\mathbb{R}$ and $E\subset\mathbb{R}^n$; $\overline{\mathrm{co}}[E_1+E_2]=\overline{\mathrm{co}}[E_1]+\overline{\mathrm{co}}[E_2]$ for any $E_1,E_2\subset\mathbb{R}^n$. A set-valued map $F$ with nonempty values is said to be upper semicontinuous at $x_0\in E$ if, for any open set $N$ containing $F(x_0)$, there exists a neighborhood $M$ of $x_0$ such that $F(M)\subset N$. If $E$ is closed, $F$ has nonempty closed values, and $F$ is bounded in a neighborhood of each point $x\in E$, then $F$ is upper semicontinuous on $E$ if and only if its graph $\{(x,y)\in E\times\mathbb{R}^n: y\in F(x)\}$ is closed (Forti and Nistri 2003).

From the above discussion, we give the following definition, which specifies what a Filippov solution of the system (1) is.

Definition 2

Cai and Huang (2011). A function $x:[-\vartheta,T]\to\mathbb{R}^n$, $T\in(0,+\infty]$, is a solution (in the sense of Filippov) of the discontinuous system (1) on $[-\vartheta,T]$ if:

  • (i)

    $x$ is continuous on $[-\vartheta,T]$ and absolutely continuous on $[0,T]$;

  • (ii)
    $x(t)$ satisfies
    $$\dot{x}(t)\in -Cx(t)+AF(x(t))+BF(x(t-\tau(t)))+D\int_{t-\theta(t)}^{t}K(t-s)F(x(s))\,ds+I,\quad t\in[0,T]. \qquad (5)$$

or equivalently, by the measurable selection theorem in Aubin and Cellina (1984),

  • (ii’)
    There exists a measurable function $\gamma(t)=(\gamma_1(t),\gamma_2(t),\ldots,\gamma_n(t))^T:[-\vartheta,T]\to\mathbb{R}^n$ such that $\gamma(t)\in F(x(t))$ for a.a. $t\in[-\vartheta,T]$ and
    $$\dot{x}(t)=-Cx(t)+A\gamma(t)+B\gamma(t-\tau(t))+D\int_{t-\theta(t)}^{t}K(t-s)\gamma(s)\,ds+I,\quad\text{for a.a. } t\in[0,T]. \qquad (6)$$

The next definition is the initial value problem (IVP) associated to system (1).

Definition 3

(IVP) Forti et al. (2005). For any continuous function $\phi:[-\vartheta,0]\to\mathbb{R}^n$ and any measurable selection $\psi:[-\vartheta,0]\to\mathbb{R}^n$ such that $\psi(s)\in F(\phi(s))$ for a.a. $s\in[-\vartheta,0]$, by an initial value problem associated to (6) with initial condition $(\phi,\psi)$ we mean the following problem: find a couple of functions $[x(t),\gamma(t)]:[-\vartheta,T]\to\mathbb{R}^n\times\mathbb{R}^n$ such that $x(t)$ is a solution of (6) on $[-\vartheta,T]$ for some $T>0$, $\gamma(t)$ is an output associated to $x(t)$, and

$$\begin{cases}\dot{x}(t)=-Cx(t)+A\gamma(t)+B\gamma(t-\tau(t))+D\int_{t-\theta(t)}^{t}K(t-s)\gamma(s)\,ds+I, & \text{for a.a. } t\in[0,T],\\ \gamma(t)\in F(x(t)), & \text{for a.a. } t\in[0,T],\\ x(s)=\phi(s), & s\in[-\vartheta,0],\\ \gamma(s)=\psi(s), & \text{for a.a. } s\in[-\vartheta,0].\end{cases} \qquad (7)$$

Throughout this paper, we assume that

(H1)

For every $i=1,2,\ldots,n$, $f_i:\mathbb{R}\to\mathbb{R}$ is continuous except on a countable set of isolated points $\{\rho_k^i\}$, where there exist finite right and left limits $f_i^+(\rho_k^i)$ and $f_i^-(\rho_k^i)$, respectively. Moreover, $f_i$ has at most a finite number of jump discontinuities in every compact interval of $\mathbb{R}$;

(H2)
There exist nonnegative constants $\tilde{L}$ and $\tilde{P}$ such that
$$\|\xi\|\le\tilde{L}\|u\|+\tilde{P}, \qquad (8)$$
$\forall u\in\mathbb{R}^n$, where $\xi\in F(u)$, $F(u)=(F_1(u_1),F_2(u_2),\ldots,F_n(u_n))^T$, and $F_i(u_i)=[\min\{f_i^-(u_i),f_i^+(u_i)\},\max\{f_i^-(u_i),f_i^+(u_i)\}]$ for $i=1,2,\ldots,n$;
(H3)
There exist nonnegative constants $L$ and $P$ such that
$$\|\xi-\eta\|\le L\|u-v\|+P, \qquad (9)$$
$\forall u,v\in\mathbb{R}^n$, where $\xi\in F(u)$, $\eta\in F(v)$;
(H4)

There exist a constant $h$ and positive constants $\tau$ and $\theta$ such that $0<\tau(t)\le\tau$, $0\le\theta(t)\le\theta$, and $\dot{\tau}(t)\le h<1$. Let $\vartheta=\max\{\tau,\theta\}$.

(H5)

There exist a positive function $q(t)$ and a positive constant $q$ such that $q(t)\le q$ and $\int_{0}^{\theta(t)}K(s)\,ds=q(t)$.

Remark 1

If there exists even one discontinuity point as described in (H1), the constant $P$ in (9) is positive. When the function $f_i(u_i)$ is continuous on $\mathbb{R}$, then $P=0$. Hence, the above assumptions include continuous activation functions $f_i(u_i)$ as a special case. This implies that the results of this paper are also applicable to the corresponding models with continuous activation functions.

Remark 2

Generally, it is difficult to determine the exact value of $\gamma(t)$ at the discontinuity points of $f(x(t))$. All we know from condition (H1) is that $\gamma(t)\in F(x(t))$ is a bounded measurable function. In order to realize complete synchronization of neural networks with discontinuous activations, the effects of the uncertainty of this measurable function must be overcome. Obviously, the paper Liu et al. (2011) did not solve this problem.

In the literature, there are many results on the existence of solutions of differential inclusions with and without delays (Benchohra and Ntouyas 2000; Balasubramaniam et al. 2005). The main tool for proving the existence of solutions of differential inclusions is a fixed point theorem for condensing maps due to Martelli (1975). Inspired by Benchohra and Ntouyas (2000) and Balasubramaniam et al. (2005), we shall prove that, under the above conditions, solutions of the neural network (1) exist globally in the sense of Filippov. Before starting our proof, we first introduce the fixed point theorem for condensing maps developed in Martelli (1975).

Lemma 1

Martelli (1975). Let $X$ be a Banach space and $G:X\to BCC(X)$ a condensing map. If the set $\Gamma=\{x\in X:\lambda x\in G(x),\ \lambda>1\}$ is bounded, then $G$ has a fixed point, where $BCC(X)$ denotes the set of all nonempty bounded, closed and convex subsets of $X$.

By using Lemma 1, we obtain the following Lemma 2. The analysis technique is similar to those used in Benchohra and Ntouyas (2000) and Balasubramaniam et al. (2005). In order to deal with the distributed delay, the method of exchanging the order of integration is utilized in (14).

Lemma 2

Suppose that (H1)–(H5) are satisfied. Then there exists at least one solution $x(t)$ of the discontinuous neural network (1) on $[0,+\infty)$ in the sense of Eq. (7).

Proof

Transform the problem (7) into a fixed point problem. Consider the multi-valued map $G:C([-\vartheta,T],\mathbb{R}^n)\to C([-\vartheta,T],\mathbb{R}^n)$ defined by

$$G(x)(t)=\begin{cases}x(0)e^{-Ct}+\int_0^t e^{-C(t-s)}\Big\{AF(x(s))+BF(x(s-\tau(s)))+D\int_{s-\theta(s)}^{s}K(s-u)F(x(u))\,du+I\Big\}ds, & t\in[0,T],\\ \phi(t), & t\in[-\vartheta,0],\end{cases} \qquad (10)$$

where $C([-\vartheta,T],\mathbb{R}^n)$ is the Banach space of continuous functions from $[-\vartheta,T]$ into $\mathbb{R}^n$ normed by $\|\xi\|=\sup\{\|\xi(t)\|,\ t\in[-\vartheta,T]\}$.

It is clear that the fixed points of G are solutions to the IVP of (7).

It should be remarked that a completely continuous multi-valued map is the simplest example of a condensing map. By a process similar to Steps 1–4 in the proof of Theorem 3.2 in Balasubramaniam et al. (2005) and Steps 1–4 in the proof of Theorem 3.1 in Benchohra and Ntouyas (2000), one can show that, under assumptions (H1)–(H5), $G$ is a completely continuous multi-valued map that is upper semi-continuous with convex closed values.

Now we prove that the set $\Gamma=\{x\in C([-\vartheta,T],\mathbb{R}^n):\lambda x\in G(x),\ \lambda>1\}$ is bounded.

Let $x\in\Gamma$; then $\lambda x\in G(x)$ for some $\lambda>1$. Thus there exists $\gamma(t)\in F(x(t))$ such that

$$x(t)=\lambda^{-1}\Big\{x(0)e^{-Ct}+\int_0^t e^{-C(t-s)}\Big[A\gamma(s)+B\gamma(s-\tau(s))+D\int_{s-\theta(s)}^{s}K(s-u)\gamma(u)\,du+I\Big]ds\Big\}\quad\text{for a.a. } t\in[0,T]. \qquad (11)$$

Denote $c=\min\{c_i,\ i=1,2,\ldots,n\}$. For any $t\in[0,T]$, one has from (H2) and (11) that

$$\begin{aligned}\|x(t)\|&\le\|x(0)\|e^{-ct}+\int_0^t e^{-c(t-s)}\Big\|A\gamma(s)+B\gamma(s-\tau(s))+D\int_{s-\theta(s)}^{s}K(s-u)\gamma(u)\,du+I\Big\|\,ds\\&\le e^{-ct}\Big[\|x(0)\|+\int_0^t e^{cs}\|A\|\,\|\gamma(s)\|\,ds+\int_0^t e^{cs}\|B\|\,\|\gamma(s-\tau(s))\|\,ds+\int_0^t e^{cs}\|D\|\int_{s-\theta(s)}^{s}K(s-u)\|\gamma(u)\|\,du\,ds+\int_0^t e^{cs}\|I\|\,ds\Big]\\&\le e^{-ct}\Big[\|x(0)\|+\tilde{L}\|A\|\int_0^t e^{cs}\|x(s)\|\,ds+\tilde{L}\|B\|\int_0^t e^{cs}\|x(s-\tau(s))\|\,ds+\|D\|\int_0^t e^{cs}\int_{s-\theta(s)}^{s}K(s-u)\|\gamma(u)\|\,du\,ds\Big]\\&\quad+\frac{1}{c}\big(\|I\|+\tilde{P}(\|A\|+\|B\|)\big)\big(1-e^{-ct}\big). \end{aligned}\qquad (12)$$

It is easy to get that

$$e^{-ct}\tilde{L}\|B\|\int_0^t e^{cs}\|x(s-\tau(s))\|\,ds\le e^{-ct}\tilde{L}\|B\|\int_{-\tau(0)}^{t-\tau(t)}e^{c(s+\tau)}\frac{\|x(s)\|}{1-\dot{\tau}(\varphi^{-1}(s))}\,ds\le e^{c(\tau-t)}\tilde{L}\|B\|\int_{-\tau(0)}^{t-\tau(t)}e^{cs}\frac{\|x(s)\|}{1-h}\,ds, \qquad (13)$$

where $\varphi^{-1}$ is the inverse function of $\varphi(t)=t-\tau(t)$.

Since $K(\cdot)$ is a non-negative bounded scalar function, there exists a positive constant $\bar{k}$ such that $K(t)\le\bar{k}$ for all $t\ge 0$. It can be derived from (H2) and (H5) that

$$\begin{aligned}&e^{-ct}\int_0^t\|D\|e^{cs}\int_{s-\theta(s)}^{s}K(s-u)\|\gamma(u)\|\,du\,ds\\&\quad\le e^{-ct}\|D\|\tilde{L}\int_0^t e^{cs}\int_{s-\theta}^{s}K(s-u)\|x(u)\|\,du\,ds+\frac{1}{c}\|D\|\tilde{P}q\big(1-e^{-ct}\big)\\&\quad\le\bar{k}\|D\|\tilde{L}e^{-ct}\int_0^t\int_{s-\theta}^{s}e^{cs}\|x(u)\|\,du\,ds+\frac{1}{c}\|D\|\tilde{P}q\big(1-e^{-ct}\big)\\&\quad=\bar{k}\|D\|\tilde{L}e^{-ct}\Big(\int_{-\theta}^{0}\int_{0}^{u+\theta}e^{cs}\|x(u)\|\,ds\,du+\int_{0}^{t-\theta}\int_{u}^{u+\theta}e^{cs}\|x(u)\|\,ds\,du+\int_{t-\theta}^{t}\int_{u}^{t}e^{cs}\|x(u)\|\,ds\,du\Big)+\frac{1}{c}\|D\|\tilde{P}q\big(1-e^{-ct}\big)\\&\quad=\frac{\|D\|\tilde{L}\bar{k}}{c}e^{-ct}\Big(\int_{-\theta}^{0}\big(e^{c(u+\theta)}-1\big)\|x(u)\|\,du+\int_{0}^{t-\theta}\big(e^{c(u+\theta)}-e^{cu}\big)\|x(u)\|\,du+\int_{t-\theta}^{t}\big(e^{ct}-e^{cu}\big)\|x(u)\|\,du\Big)+\frac{1}{c}\|D\|\tilde{P}q\big(1-e^{-ct}\big)\\&\quad\le\frac{\|D\|\tilde{L}\bar{k}}{c}e^{-ct}\Big(\int_{-\theta}^{0}\big(e^{c(s+\theta)}-1\big)\|x(s)\|\,ds+\int_{0}^{t}\big(e^{ct}-e^{cs}\big)\|x(s)\|\,ds\Big)+\frac{1}{c}\|D\|\tilde{P}q\big(1-e^{-ct}\big). \end{aligned}\qquad (14)$$

Therefore, it can be obtained from (H4), (12), (13), and (14) that

$$\begin{aligned}\|x(t)\|&\le\|x(0)\|e^{-ct}+e^{-ct}\tilde{L}\|A\|\int_0^t e^{cs}\|x(s)\|\,ds+e^{c(\tau-t)}\tilde{L}\|B\|\int_{-\tau(0)}^{t-\tau(t)}e^{cs}\frac{\|x(s)\|}{1-h}\,ds\\&\quad+e^{-ct}\frac{\|D\|\tilde{L}\bar{k}}{c}\Big(\int_{-\theta}^{0}\big(e^{c(s+\theta)}-1\big)\|x(s)\|\,ds+\int_{0}^{t}\big(e^{ct}-e^{cs}\big)\|x(s)\|\,ds\Big)\\&\quad+\frac{1}{c}\|D\|\tilde{P}q\big(1-e^{-ct}\big)+\frac{1}{c}\big(\|I\|+\tilde{P}(\|A\|+\|B\|)\big)\big(1-e^{-ct}\big)\\&\le\|x(0)\|+\frac{1}{c}\|D\|\tilde{P}q+\frac{1}{c}\big(\|I\|+\tilde{P}(\|A\|+\|B\|)\big)+e^{c\tau}\tilde{L}\|B\|\int_{-\tau(0)}^{0}e^{cs}\frac{\|x(s)\|}{1-h}\,ds+\frac{\|D\|\tilde{L}\bar{k}}{c}\int_{-\theta}^{0}\big(e^{c(s+\theta)}-1\big)\|x(s)\|\,ds\\&\quad+\tilde{L}\|A\|\int_0^t\|x(s)\|\,ds+\frac{e^{c\tau}\tilde{L}\|B\|}{1-h}\int_0^t\|x(s)\|\,ds+\frac{\|D\|\tilde{L}\bar{k}}{c}\int_0^t\|x(s)\|\,ds\\&\le\tilde{a}+\int_0^t m\|x(s)\|\,ds,\end{aligned}$$

where $\tilde{a}=\|x(0)\|+\frac{1}{c}\|D\|\tilde{P}q+\frac{1}{c}\big(\|I\|+\tilde{P}(\|A\|+\|B\|)\big)+e^{c\tau}\tilde{L}\|B\|\int_{-\tau(0)}^{0}e^{cs}\frac{\|x(s)\|}{1-h}\,ds+\frac{\|D\|\tilde{L}\bar{k}}{c}\int_{-\theta}^{0}\big(e^{c(s+\theta)}-1\big)\|x(s)\|\,ds$ and $m=\tilde{L}\|A\|+\frac{e^{c\tau}\tilde{L}\|B\|}{1-h}+\frac{\|D\|\tilde{L}\bar{k}}{c}$.

Utilizing the Gronwall inequality yields:

$$\|x(t)\|\le\tilde{a}e^{mt}\le\tilde{a}e^{mT},\quad\text{for a.a. } t\in[0,T]. \qquad (15)$$

It is obvious that $\|x(t)\|=\|\phi(t)\|\le\tilde{a}e^{mT}$ for $t\in[-\vartheta,0]$. Combining this with inequality (15), we have $\|x\|\le\tilde{a}e^{mT}$, which implies that $\Gamma$ is bounded. As a consequence of Lemma 1, we deduce that $G$ has a fixed point, which is a solution of (7).

From the above derivation process we also know that $x(t)$ is bounded on every finite time interval, and hence it is defined on $[0,+\infty)$. This completes the proof.

Remark 3

By exchanging the order of integration and using integration by substitution, Lemma 2 gives sufficient conditions for the existence of Filippov solutions $x(t)$ of the neural network with time-varying mixed delays. It is easy to prove the existence of Filippov solutions $x(t)$ of the neural network with time-varying discrete delay and unbounded distributed delay by using the same analysis technique as in the proof of Lemma 2. As far as we know, no published paper gives a rigorous mathematical proof of the existence of Filippov solutions to neural networks with discontinuous activation functions and mixed delays. Although many attempts have been made by researchers, to the best of our knowledge, this problem has not been solved in the literature so far. Recently, the authors of Cai and Huang (2011) had to treat the distributed delay as a discrete delay [see the proof of Lemma 3.1 in Cai and Huang (2011)], and the authors of Lu and Chen (2008), Liu et al. (2012), Lu and Chen (2006), and Haddad (1981) only obtained approximate Filippov solutions to such discontinuous neural networks with mixed delays by constructing a sequence of continuous delay differential equations with high-slope right-hand sides. Moreover, the conditions on the discontinuous activation functions in Lemma 2 are very general, whereas the discontinuous activation functions in Lu and Chen (2008), Liu et al. (2012), Lu and Chen (2006), and Haddad (1981) were required to be monotonically nondecreasing or uniformly locally bounded. This is another reason for giving Lemma 2.

Based on the drive-response concept for synchronization proposed by Pecora and Carroll (1990), we consider the neural network model (1) as the drive system; the controlled response system is given in the following form:

$$\dot{y}(t)=-Cy(t)+Af(y(t))+Bf(y(t-\tau(t)))+D\int_{t-\theta(t)}^{t}K(t-s)f(y(s))\,ds+I+u(t), \qquad (16)$$

where $y(t)=(y_1(t),y_2(t),\ldots,y_n(t))^T$ is the state of the response system, $u(t)=(u_1(t),u_2(t),\ldots,u_n(t))^T$ is the controller to be designed, and the other parameters are the same as those defined in system (1).

In view of Definitions 2, 3 and Lemma 2, the initial value problem of system (16) is

$$\begin{cases}\dot{y}(t)=-Cy(t)+A\delta(t)+B\delta(t-\tau(t))+D\int_{t-\theta(t)}^{t}K(t-s)\delta(s)\,ds+I+u(t), & \text{for a.a. } t\in[0,+\infty),\\ \delta(t)\in F(y(t)), & \text{for a.a. } t\in[0,+\infty),\\ y(s)=\upsilon(s), & s\in[-\vartheta,0],\\ \delta(s)=\omega(s), & \text{for a.a. } s\in[-\vartheta,0].\end{cases} \qquad (17)$$

According to Pecora and Carroll (1990), if $\lim_{t\to+\infty}\|y(t)-x(t)\|=0$, then (17) is said to be synchronized with (1) under the controller $u(t)$.

Remark 4

Since the measurable functions $\gamma(t)$ and $\delta(t)$ are uncertain at the discontinuity points of the activation function $f(x)$, the usual state feedback controllers and impulsive controllers such as those in Liu et al. (2011) and Yang et al. (2011b) cannot realize synchronization between systems (1) and (17). The results of Liu et al. (2011) show that only quasi-synchronization can be achieved when the usual linear state feedback controllers are applied to systems with discontinuous right-hand side and discrete time delays. As for synchronization of neural networks with discontinuous activations or other systems with discontinuous right-hand side, we do not find any similar result in the literature, let alone results on synchronization of neural networks with time-varying mixed delays via impulsive control. This fact implies that realizing synchronization of neural networks with discontinuous activations and time-varying mixed delays is not an easy task.

Remark 5

Although achieving synchronization of neural networks with discontinuous activations under control is difficult, the stability of neural networks with discontinuous activations such as (1) without control can be achieved by designing special connection weight matrices; for instance, the matrices $-A,-B,-D$ should satisfy the Lyapunov Diagonal Stability (LDS) condition (Forti et al. 2006) or other linear matrix inequalities (LMIs) (Lu and Chen 2008; Wang et al. 2009; Liu et al. 2012; Cai and Huang 2011). In this paper, we shall study the exponential synchronization of (16) and (1) without using these special and strict conditions.

In this paper, we study exponential synchronization between the neural networks (16) and (1), i.e., by adding a suitable controller $u(t)$ to (16), the state of (16) can be exponentially synchronized onto the state of (1). Based on the discussion above, if the state $y(t)$ of (17) is exponentially synchronized onto the state $x(t)$ of (7), then our synchronization goal is realized. Let $e(t)=y(t)-x(t)$. Subtracting (7) from (17) yields the following error system:

$$\begin{cases}\dot{e}(t)=-Ce(t)+A\beta(t)+B\beta(t-\tau(t))+D\int_{t-\theta(t)}^{t}K(t-s)\beta(s)\,ds+u(t),\\ e(s)=\upsilon(s)-\phi(s),\quad s\in[-\vartheta,0],\end{cases} \qquad (18)$$

where β(t)=δ(t)-γ(t).

Now we introduce the definition of exponential synchronization, which is used in this paper.

Definition 4

The controlled neural network (16) with discontinuous activations is said to be exponentially synchronized with system (1) if there exist positive constants $M>1$ and $\alpha>0$ such that

$$\|e(t)\|\le M\sup_{-\vartheta\le s\le 0}\|\upsilon(s)-\phi(s)\|\exp(-\alpha t)$$

holds for all $t\ge 0$.

Obviously, e(t)=0 is the equilibrium point of the error system (18) when u(t)=0. If system (18) realizes global exponential stability at the origin for any given initial condition, then the global exponential synchronization between (16) and (1) [or (17) and (7)] is achieved.

Let $V:\mathbb{R}^n\to\mathbb{R}$ be a locally Lipschitz continuous function. The Clarke generalized gradient of $V$ at $x\in\mathbb{R}^n$ (Clarke 1983) is defined by $\partial V(x)=\overline{\mathrm{co}}\,\big[\lim\nabla V(x_i):x_i\to x,\ x_i\notin\Omega\cup N\big]$, where $\Omega\subset\mathbb{R}^n$ is the set of Lebesgue measure zero on which $\nabla V$ does not exist, and $N\subset\mathbb{R}^n$ is an arbitrary set with measure zero.

The next lemma will be useful for computing the time derivative, along the solutions of (18), of the Lyapunov functions designed in the later sections.

Lemma 3

(Chain rule) Clarke (1983). If $V(x):\mathbb{R}^n\to\mathbb{R}$ is C-regular (Clarke 1983) and $x(t)$ is absolutely continuous on any compact subinterval of $[0,+\infty)$, then $x(t)$ and $V(x(t)):[0,+\infty)\to\mathbb{R}$ are differentiable for a.a. $t\in[0,+\infty)$ and

$$\frac{d}{dt}V(x(t))=\langle\gamma(t),\dot{x}(t)\rangle,\quad\forall\gamma(t)\in\partial V(x(t)), \qquad (19)$$

where $\partial V(x(t))$ is the Clarke generalized gradient of $V$ at $x(t)$.

The next two lemmas will be utilized in this paper.

Lemma 4

Wang et al. (1992). If $X,Y$ are real matrices with appropriate dimensions, then for any number $\varepsilon>0$,

$$X^TY+Y^TX\le\varepsilon X^TX+\frac{1}{\varepsilon}Y^TY.$$

Lemma 5

Yang et al. (2011a). Suppose $K(t)$ is a non-negative bounded scalar function defined on $[0,+\infty)$ and $\int_0^{+\infty}K(u)\,du=k$. For any constant matrix $D\in\mathbb{R}^{n\times n}$, $D>0$, and vector function $x:(-\infty,t]\to\mathbb{R}^n$ with $t\ge 0$, one has

$$k\int_{-\infty}^{t}K(t-s)x^T(s)Dx(s)\,ds\ge\Big(\int_{-\infty}^{t}K(t-s)x(s)\,ds\Big)^TD\int_{-\infty}^{t}K(t-s)x(s)\,ds,$$

provided the integrals are all well defined.
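A quick numerical illustration of the Jensen-type bound in Lemma 5 (not from the paper; the kernel, the trajectory, and the matrix below are arbitrary choices) can be obtained by discretizing both sides on a finite window:

```python
import numpy as np

ts = np.linspace(0.0, 1.0, 2001); dt = ts[1] - ts[0]
K = np.exp(-ts)                                   # sample non-negative kernel
k = K.sum() * dt                                  # k ~ int_0^1 K(u) du
Dm = np.array([[2.0, 0.3], [0.3, 1.0]])           # a positive definite matrix D
x = np.stack([np.sin(3 * ts), np.cos(2 * ts)], axis=1)

lhs = k * (K * np.einsum('ti,ij,tj->t', x, Dm, x)).sum() * dt
v = (K[:, None] * x).sum(axis=0) * dt             # int K(t-s) x(s) ds
rhs = v @ Dm @ v
print(lhs >= rhs, round(lhs, 4), round(rhs, 4))   # the bound of Lemma 5 holds
```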

Synchronization under state feedback control

In the literature, there are many results concerning exponential synchronization of continuous neural networks with discrete and/or distributed delays. However, there are few published results on exponential synchronization of discontinuous neural networks with discrete and/or distributed delays. The main difficulty in studying exponential synchronization comes from the discontinuous activations, which result in non-zero uncertain measurable selections $\gamma(t)$ and $\delta(t)$ in the drive and response systems. In order to overcome this difficulty, in this section we shall design a novel state feedback controller $u(t)$ such that the uncertainty can be handled and the controlled neural network (16) achieves global exponential synchronization with system (1).

The following Theorem 1 is our first result.

Theorem 1

Suppose that assumptions (H1)–(H5) are satisfied. Then the neural networks (1) and (16) can achieve global exponential synchronization under the following controller:

$$u(t)=-Re(t)-\eta\,\mathrm{sign}(e(t)), \qquad (20)$$

where $R=\mathrm{diag}(R_1,R_2,\ldots,R_n)$ with $R_i\ge-c_i+L\|A\|+\frac{2-h}{2(1-h)}L\|B\|+\frac{1}{2}L\|D\|(q^2+1)+\alpha$, $\alpha$ is a positive constant, and $\eta$ is a constant satisfying

$$\eta\ge P\big(\|A\|+\|B\|+q\|D\|\big), \qquad (21)$$

$\mathrm{sign}(e(t))=(\mathrm{sign}(e_1(t)),\mathrm{sign}(e_2(t)),\ldots,\mathrm{sign}(e_n(t)))^T$, and $\mathrm{sign}(\cdot)$ is the sign function. Moreover, the lower bound on the convergence rate is $\alpha$.

Proof

Define the following Lyapunov functional candidate:

$$\bar{V}(t)=\frac{1}{2}e^T(t)e(t)+\frac{L\|B\|}{2(1-h)}\int_{t-\tau(t)}^{t}e^T(s)e(s)\,ds+\frac{1}{2}qL\|D\|\int_{-\theta}^{0}\int_{t+u}^{t}K(-u)e^T(s)e(s)\,ds\,du.$$

In view of (H3), one has $\|\beta(t)\|\le L\|e(t)\|+P$. Computing the derivative of $\bar{V}(t)$ along the trajectories of the error system (18), we get from (H4), Lemma 3, and the calculus for differential inclusions in Paden and Sastry (1987) that:

$$\begin{aligned}\dot{\bar{V}}(t)&=e^T(t)\Big[-Ce(t)+A\beta(t)+B\beta(t-\tau(t))+D\int_{t-\theta(t)}^{t}K(t-s)\beta(s)\,ds-Re(t)-\eta\,\mathrm{sign}(e(t))\Big]\\&\quad+\frac{L\|B\|}{2(1-h)}e^T(t)e(t)-\frac{L\|B\|}{2(1-h)}\big(1-\dot{\tau}(t)\big)e^T(t-\tau(t))e(t-\tau(t))\\&\quad+\frac{1}{2}qL\|D\|\int_{-\theta}^{0}K(-u)\,du\;e^T(t)e(t)-\frac{1}{2}qL\|D\|\int_{t-\theta}^{t}K(t-u)e^T(u)e(u)\,du\\&\le-e^T(t)(C+R)e(t)+L\|A\|\,\|e(t)\|^2+L\|B\|\,\|e(t)\|\,\|e(t-\tau(t))\|+L\|e(t)\|\,\|D\|\int_{t-\theta(t)}^{t}K(t-s)\|e(s)\|\,ds\\&\quad-\eta\,e^T(t)\,\mathrm{sign}(e(t))+\frac{1}{2}q^2L\|D\|\,e^T(t)e(t)+\frac{L\|B\|}{2(1-h)}e^T(t)e(t)-\frac{1}{2}L\|B\|\,e^T(t-\tau(t))e(t-\tau(t))\\&\quad+P\|e(t)\|\big(\|A\|+\|B\|+q\|D\|\big)-\frac{1}{2}qL\|D\|\int_{t-\theta(t)}^{t}K(t-s)e^T(s)e(s)\,ds. \end{aligned}\qquad (22)$$

It can be obtained from Lemma 4 that

$$L\|B\|\,\|e(t)\|\,\|e(t-\tau(t))\|\le\frac{1}{2}L\|B\|\,\|e(t)\|^2+\frac{L\|B\|}{2}\|e(t-\tau(t))\|^2, \qquad (23)$$

and

$$L\|e(t)\|\,\|D\|\int_{t-\theta(t)}^{t}K(t-s)\|e(s)\|\,ds\le\frac{1}{2}L\|D\|\,\|e(t)\|^2+\frac{1}{2}L\|D\|\Big(\int_{t-\theta(t)}^{t}K(t-s)\|e(s)\|\,ds\Big)^2. \qquad (24)$$

By virtue of Lemma 5, it can be obtained from (H5) that

$$\Big(\int_{t-\theta(t)}^{t}K(t-s)\|e(s)\|\,ds\Big)^2\le q\int_{t-\theta(t)}^{t}K(t-s)\|e(s)\|^2\,ds. \qquad (25)$$

On the other hand, it is easy to get that

$$P\|e(t)\|\big(\|A\|+\|B\|+q\|D\|\big)\le P\big(\|A\|+\|B\|+q\|D\|\big)\sum_{i=1}^{n}|e_i(t)|. \qquad (26)$$

Therefore, it follows from (21) and (26) that

$$-\eta\,e^T(t)\,\mathrm{sign}(e(t))+P\|e(t)\|\big(\|A\|+\|B\|+q\|D\|\big)\le 0. \qquad (27)$$

Substituting (23)–(27) into (22) produces

$$\dot{\bar{V}}(t)+\alpha e^T(t)e(t)\le e^T(t)\Xi e(t), \qquad (28)$$

where $\Xi=-C+\Big(L\|A\|+\frac{2-h}{2(1-h)}L\|B\|+\frac{1}{2}L\|D\|(q^2+1)+\alpha\Big)I_n-R$.

Since $R_i\ge-c_i+L\|A\|+\frac{2-h}{2(1-h)}L\|B\|+\frac{1}{2}L\|D\|(q^2+1)+\alpha$, one gets

$$\dot{\bar{V}}(t)+\alpha e^T(t)e(t)\le 0. \qquad (29)$$

Hence, it follows from (22) and (29) that

$$e^T(t)e(t)\le 2\bar{V}(t)=2\bar{V}(0)+2\int_0^t\dot{\bar{V}}(s)\,ds\le 2\bar{V}(0)-2\alpha\int_0^t e^T(s)e(s)\,ds. \qquad (30)$$

According to the Gronwall inequality, one derives from (30) that

$$e^T(t)e(t)\le 2\bar{V}(0)\exp(-2\alpha t),$$

which implies the following inequality:

$$\|e(t)\|\le\rho\sup_{-\vartheta\le s\le 0}\|\phi(s)-\upsilon(s)\|\exp(-\alpha t),$$

where $\rho=\sqrt{1+\frac{\tau L\|B\|}{1-h}+\theta q^2L\|D\|}$.

According to Definition 4, the neural networks (1) and (16) achieve global exponential synchronization. This completes the proof.

Remark 6

According to Theorem 1, the designed state feedback controller (20) can realize global exponential synchronization between the discontinuous neural networks (1) and (16). According to the definition of the sign function ($\mathrm{sign}(e_i)=1$ if $e_i>0$, $\mathrm{sign}(e_i)=-1$ if $e_i<0$, and $\mathrm{sign}(e_i)=0$ if $e_i=0$), the controller (20) becomes $u_i(t)=-R_ie_i(t)-\eta<0$ if $e_i(t)>0$, $u_i(t)=-R_ie_i(t)+\eta>0$ if $e_i(t)<0$, and $u_i(t)=0$ if $e_i(t)=0$, $i=1,2,\ldots,n$. Hence, the role of the controller is as follows: it decreases $\dot{e}_i(t)$ when $e_i(t)>0$, it increases $\dot{e}_i(t)$ when $e_i(t)<0$, and no control is needed when $e_i(t)=0$. From inequality (27) one can see that the term $-\eta\,\mathrm{sign}(e(t))$ in the controller (20) plays the role of coping with the uncertainty of the measurable selections. To the best of our knowledge, no result concerning complete synchronization of discontinuous neural networks with time-varying mixed delays has been published in the literature, let alone exponential synchronization of discontinuous neural networks with time-varying mixed delays. Recently, in Liu et al. (2011), the synchronization of coupled discontinuous neural networks and other chaotic systems was investigated by using the classical state feedback controller, i.e., controller (20) without the term $-\eta\,\mathrm{sign}(e(t))$; however, only quasi-synchronization criteria were derived, owing to its inability to cope with the uncertain measurable selections in the drive and response systems. Moreover, the authors of Liu et al. (2011) did not consider distributed delay. Therefore, Theorem 1 in this paper is new and improves the corresponding results in Liu et al. (2011).
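For concreteness, a minimal sketch of how controller (20) could be evaluated componentwise is given below (not the authors' code; the gain values are the ones computed in the example section later, and the function name is illustrative):

```python
import numpy as np

def feedback_control(e, R_diag, eta):
    """State feedback controller (20): u = -R e - eta * sign(e),
    where R_diag holds the diagonal of R and eta satisfies (21)."""
    return -R_diag * e - eta * np.sign(e)

e = np.array([0.3, -0.2])                       # a sample synchronization error
u = feedback_control(e, R_diag=np.array([18.2444, 18.4444]), eta=1.9662)
print(u)   # first component negative (e1 > 0), second positive (e2 < 0)
```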

Remark 7

In the proof of Theorem 1, we do not use the well-known Halanay inequality. One advantage of the method used in the proof is that the relationship between the control gain $R$ and the convergence rate $\alpha$ is explicit, whereas it is not so obvious when the Halanay inequality is used. Moreover, the Lyapunov functional $\bar{V}(t)$ used in the proof is simple and does not include the exponential function $e^{\alpha t}$. In summary, by using the simple Lyapunov functional and the new proving method, we simplify the proof.

Synchronization under impulsive control

In the above section, the designed state feedback controller leads to global exponential synchronization between (1) and (16). However, the corresponding control cost may be high. It is clearly desirable that the designed controller not only realize the synchronization goal but also reduce the control cost. Impulsive control, as an effective and ideal control technique, is activated only at some isolated instants. Obviously, the control cost can be drastically reduced if the response system can be synchronized with the drive system under impulsive control. Until now, impulsive control has been extensively utilized to study synchronization (Yang et al. 2011b), but results on synchronization of neural networks with discontinuous activations via impulsive control are scarce, let alone for discontinuous neural networks with mixed delays. Hence, in the present section, a novel impulsive controller is designed such that system (16) globally exponentially synchronizes with system (1). Furthermore, a useful corollary is given such that the obtained synchronization criterion is as little conservative as possible.

The impulsive controller u(t) is designed as

$$u_i(t)=\sum_{k=1}^{\infty}E_ie_i(t)\varrho(t-t_k)-\eta\,\mathrm{sign}(e_i(t)), \qquad (31)$$

where $k\in\mathbb{N}^+$, $E_i$, $i=1,2,\ldots,n$, are constants to be determined, $\eta$ and $\mathrm{sign}(\cdot)$ are defined in Theorem 1, the time sequence $\{t_k,\ k\in\mathbb{N}^+\}$ satisfies $0=t_0<t_1<t_2<\cdots<t_{k-1}<t_k<\cdots$ and $\lim_{k\to+\infty}t_k=+\infty$, and $\varrho(\cdot)$ is the Dirac impulsive function.

With the impulsive controller (31), the synchronization error system (18) becomes the following hybrid impulsive system:

$$\begin{cases}\dot{e}(t)=-Ce(t)+A\beta(e(t))+B\beta(e(t-\tau(t)))+D\int_{t-\theta(t)}^{t}K(t-s)\beta(e(s))\,ds-\eta\,\mathrm{sign}(e(t)), & t\ne t_k,\\ \Delta e(t_k)=e(t_k)-e(t_k^-)=Ee(t_k^-), & t=t_k,\end{cases} \qquad (32)$$

where $E=\mathrm{diag}(E_1,E_2,\ldots,E_n)$, $e(t_k)=e(t_k^+)=\lim_{t\to t_k^+}e(t)$, and $e(t_k^-)=\lim_{t\to t_k^-}e(t)$.
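In a simulation, the Dirac impulses in (31) act as instantaneous state resets: between impulse instants the error follows the continuous part of (32), and at each $t_k$ the error is multiplied by $I+E$. A minimal sketch of this jump map (an illustration under the example's choice $E=-0.5I_2$, not the authors' code):

```python
import numpy as np

def apply_impulse(e_minus, E_diag):
    """Impulse map of (32): e(t_k) = (I + E) e(t_k^-) for diagonal E."""
    return (1.0 + E_diag) * e_minus

# With E = -0.5 I_2 each impulse halves the error, so
# d = ||I + E||^2 = 0.25 < 1, which is exactly condition (34).
e_minus = np.array([0.4, -0.6])
print(apply_impulse(e_minus, np.array([-0.5, -0.5])))   # [ 0.2 -0.3]
```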

The following Theorem 2 states sufficient conditions guaranteeing global exponential synchronization between (1) and (16) under the impulsive controller (31).

Theorem 2

Suppose that assumptions (H1)–(H5) are satisfied. Then the neural networks (1) and (16) can achieve global exponential synchronization under the impulsive controller (31) if there exist positive constants $\alpha,\beta$ such that the following inequalities hold:

$$P\big(\|A\|+\|B\|+q\|D\|\big)<\eta, \qquad (33)$$
$$\|I_n+E\|^2\le d<1,\quad k\in\mathbb{N}^+, \qquad (34)$$
$$-2c+2L\|A\|+L\|B\|\alpha+L\|D\|\beta+\frac{\ln d}{T_m}+d^{-1}\Big(\frac{L\|B\|}{\alpha}+\frac{qL\|D\|}{\beta}\Big)<0, \qquad (35)$$

where $c=\min\{c_i,\ i=1,2,\ldots,n\}$ and $T_m=\max\{t_{k+1}-t_k,\ k\in\mathbb{N}^+\}$.

Proof

Define the following Lyapunov functional candidate:

$$V(t)=\frac{1}{2}e^T(t)e(t). \qquad (36)$$

Computing the derivative of $V(t)$ along the trajectories of the error system (32) for $t\in[t_{k-1},t_k)$ yields:

$$\begin{aligned}\dot{V}(t)&=e^T(t)\Big[-Ce(t)+A\beta(e(t))+B\beta(e(t-\tau(t)))+D\int_{t-\theta(t)}^{t}K(t-s)\beta(e(s))\,ds-\eta\,\mathrm{sign}(e(t))\Big]\\&=-e^T(t)Ce(t)+e^T(t)A\beta(e(t))+e^T(t)B\beta(e(t-\tau(t)))+e^T(t)D\int_{t-\theta(t)}^{t}K(t-s)\beta(e(s))\,ds-\eta\,e^T(t)\,\mathrm{sign}(e(t))\\&\le-e^T(t)Ce(t)+L\|A\|\,\|e(t)\|^2+L\|B\|\,\|e(t)\|\,\|e(t-\tau(t))\|+L\|e(t)\|\,\|D\|\int_{t-\theta(t)}^{t}K(t-s)\|e(s)\|\,ds\\&\quad-\eta\,e^T(t)\,\mathrm{sign}(e(t))+P\|e(t)\|\big(\|A\|+\|B\|+q\|D\|\big). \end{aligned}\qquad (37)$$

For constants $\alpha>0$, $\beta>0$, one has from Lemma 4 that

$$L\|B\|\,\|e(t)\|\,\|e(t-\tau(t))\|\le\frac{1}{2}L\|B\|\alpha\|e(t)\|^2+\frac{L\|B\|}{2\alpha}\|e(t-\tau(t))\|^2, \qquad (38)$$

and

$$L\|e(t)\|\,\|D\|\int_{t-\theta(t)}^{t}K(t-s)\|e(s)\|\,ds\le\frac{1}{2}L\beta\|D\|\,\|e(t)\|^2+\frac{1}{2\beta}L\|D\|\Big(\int_{t-\theta(t)}^{t}K(t-s)\|e(s)\|\,ds\Big)^2. \qquad (39)$$

Substituting (25), (27), (38) and (39) into (37) produces that

$$\dot{V}(t)\le\big(-2c+2L\|A\|+L\|B\|\alpha+L\|D\|\beta\big)V(t)+\frac{L\|B\|}{\alpha}V(e(t-\tau(t)))+\frac{qL\|D\|}{\beta}\int_{t-\theta(t)}^{t}K(t-s)V(e(s))\,ds. \qquad (40)$$

On the other hand, when $t=t_k$, $k\in\mathbb{N}^+$, it is obtained from inequality (34) and the second equation of (32) that

$$V(t_k)=\frac{1}{2}e^T(t_k)e(t_k)\le\|I_n+E\|^2\,\frac{1}{2}e^T(t_k^-)e(t_k^-)\le dV(t_k^-). \qquad (41)$$

For any ϵ>0, let v(t) be the unique solution of the following impulsive delay system:

$$\begin{cases}\dot{v}(t)=av(t)+bv(t-\tau(t))+\bar{c}\int_{t-\theta(t)}^{t}K(t-s)v(s)\,ds+\epsilon, & t\ne t_k,\\ v(t_k)=dv(t_k^-), & t=t_k,\ k\in\mathbb{N}^+,\\ v(s)=\frac{1}{2}\|e(s)\|^2, & s\in[-\vartheta,0],\end{cases} \qquad (42)$$

where $a=-2c+2L\|A\|+L\|B\|\alpha+L\|D\|\beta$, $b=\frac{L\|B\|}{\alpha}$, and $\bar{c}=\frac{qL\|D\|}{\beta}$.

For $t\in[0,t_1)$, by the theory of ordinary differential equations, one gets

$$v(t)=e^{at}v(0)+\int_0^te^{a(t-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds. \qquad (43)$$

For $t\in[t_1,t_2)$, one obtains from (43) and the second equation of (42) that

$$v(t_1^+)=de^{at_1}v(0)+d\int_0^{t_1}e^{a(t_1-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds, \qquad (44)$$

and

$$v(t)=e^{a(t-t_1)}v(t_1^+)+\int_{t_1}^{t}e^{a(t-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds. \qquad (45)$$

It follows that

$$\begin{aligned}v(t_2^+)&=de^{a(t_2-t_1)}v(t_1^+)+d\int_{t_1}^{t_2}e^{a(t_2-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds\\&=d^2e^{at_2}v(0)+d^2e^{a(t_2-t_1)}\int_0^{t_1}e^{a(t_1-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds\\&\quad+d\int_{t_1}^{t_2}e^{a(t_2-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds.\end{aligned}$$

By induction, we can derive that, for $t\in[t_k,t_{k+1})$,

$$\begin{aligned}v(t)&=d^ke^{at}v(0)+d^k\int_0^{t_1}e^{a(t-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds\\&\quad+d^{k-1}\int_{t_1}^{t_2}e^{a(t-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds+\cdots\\&\quad+d\int_{t_{k-1}}^{t_k}e^{a(t-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds+\int_{t_k}^{t}e^{a(t-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds\\&=\prod_{0\le t_k\le t}d\;e^{at}v(0)+\int_0^{t}\prod_{s\le t_k\le t}d\;e^{a(t-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds. \end{aligned}\qquad (46)$$

It is derived from d<1 that

$$e^{a(t-s)}\prod_{s\le t_k\le t}d\le e^{a(t-s)}d^{\frac{t-s}{T_m}-1}=d^{-1}e^{-\sigma(t-s)},\quad t\ge s\ge 0, \qquad (47)$$

where $\sigma=-\Big(a+\frac{\ln d}{T_m}\Big)>0$.

It follows from (46) and (47) that

$$v(t)\le\Delta e^{-\sigma t}+\int_0^td^{-1}e^{-\sigma(t-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds, \qquad (48)$$

where $\Delta=\frac{1}{2}d^{-1}\sup_{-\vartheta\le s\le 0}\|e(s)\|^2$. Now define

$$h(\nu)=\nu-\sigma+d^{-1}be^{\nu\tau}+d^{-1}\bar{c}\int_0^{\theta(t)}K(s)e^{\nu s}\,ds.$$

It can be derived from (H4) and (35) that

$$h(0)=-\sigma+d^{-1}b+d^{-1}\bar{c}\int_0^{\theta(t)}K(s)\,ds\le-\sigma+d^{-1}b+d^{-1}\bar{c}q<0.$$

Since $h(+\infty)=+\infty$ and $h(\nu)$ is continuous on $[0,+\infty)$, there exists a unique constant $\bar{\nu}>0$ such that

$$\bar{\nu}-\sigma+d^{-1}be^{\bar{\nu}\tau}+d^{-1}\bar{c}\int_0^{\theta(t)}K(s)e^{\bar{\nu}s}\,ds=0. \qquad (49)$$

It is obvious that

$$v(t)=\frac{1}{2}\|e(t)\|^2\le\Delta<\Delta e^{-\bar{\nu}t}+\frac{\epsilon}{\sigma d-b-\bar{c}q},\quad-\vartheta\le t\le 0.$$

We claim that the inequality

$$v(t)<\Delta e^{-\bar{\nu}t}+\frac{\epsilon}{\sigma d-b-\bar{c}q} \qquad (50)$$

holds for all $t\ge 0$. If this is not true, there exists a point $t^*>0$ such that

$$v(t^*)\ge\Delta e^{-\bar{\nu}t^*}+\frac{\epsilon}{\sigma d-b-\bar{c}q}, \qquad (51)$$

and

$$v(t)<\Delta e^{-\bar{\nu}t}+\frac{\epsilon}{\sigma d-b-\bar{c}q},\quad\forall t<t^*. \qquad (52)$$

Then one has from (48) and (52) that

$$\begin{aligned}v(t^*)&\le\Delta e^{-\sigma t^*}+\int_0^{t^*}d^{-1}e^{-\sigma(t^*-s)}\Big[bv(s-\tau(s))+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)v(u)\,du+\epsilon\Big]ds\\&<e^{-\sigma t^*}\Big\{\Delta+\frac{\epsilon}{\sigma d-b-\bar{c}q}+\int_0^{t^*}d^{-1}e^{\sigma s}\Big[b\Big(\Delta e^{-\bar{\nu}(s-\tau(s))}+\frac{\epsilon}{\sigma d-b-\bar{c}q}\Big)\\&\qquad+\bar{c}\int_{s-\theta(s)}^{s}K(s-u)\Big(\Delta e^{-\bar{\nu}u}+\frac{\epsilon}{\sigma d-b-\bar{c}q}\Big)du+\epsilon\Big]ds\Big\}. \end{aligned}\qquad (53)$$

By simple computation, one can get

$$\frac{b\epsilon}{d(\sigma d-b-\bar{c}q)}+\frac{\bar{c}q\epsilon}{d(\sigma d-b-\bar{c}q)}+d^{-1}\epsilon=\frac{\epsilon\sigma}{\sigma d-b-\bar{c}q}. \qquad (54)$$

Noticing $0<\tau(t)\le\tau$, one derives from (49), (53), and (54) that

$$\begin{aligned}v(t^*)&<e^{-\sigma t^*}\Big\{\Delta+\frac{\epsilon}{\sigma d-b-\bar{c}q}+d^{-1}\Delta\Big(be^{\bar{\nu}\tau}+\bar{c}\int_0^{\theta(t)}K(s)e^{\bar{\nu}s}\,ds\Big)\int_0^{t^*}e^{(\sigma-\bar{\nu})s}\,ds+\frac{\epsilon\sigma}{\sigma d-b-\bar{c}q}\int_0^{t^*}e^{\sigma s}\,ds\Big\}\\&=\Delta e^{-\bar{\nu}t^*}+\frac{\epsilon}{\sigma d-b-\bar{c}q},\end{aligned}$$

which contradicts (51). So (50) holds. Letting $\epsilon\to 0$, one gets from Lemma 3 in Yang et al. (2011a) that

$$V(t)\le v(t)\le\Delta e^{-\bar{\nu}t},\quad t\ge 0,$$

which implies $\|e(t)\|\le\frac{1}{\sqrt{d}}\sup_{-\vartheta\le s\le 0}\|\upsilon(s)-\phi(s)\|\exp\big(-\frac{\bar{\nu}}{2}t\big)$ for $t\ge 0$. According to Definition 4, the neural networks (1) and (16) realize global exponential synchronization under the impulsive controller (31). This completes the proof.

In Theorem 2, for given $\eta$ and $d$, different values of $\alpha,\beta$ lead to different values of $T_m$. It is well known that a larger $T_m$ can further reduce the control cost. In order to enlarge the value of $T_m$, appropriate values of $\alpha$ and $\beta$ in (35) should be taken. The following Corollary 1 solves this problem.

Corollary 1

Suppose that the assumptions (H1)-(H5) are satisfied. Then the neural networks (1) and (16) can achieve global exponential synchronization under the impulsive controller (31), if the inequalities (33), (34) and the following inequality hold:

$$-c+L\|A\|+\frac{L\|B\|}{\sqrt{d}}+\sqrt{\frac{q}{d}}\,L\|D\|+\frac{\ln d}{2T_m}<0, \qquad (55)$$

where c and Tm are defined in Theorem 2.

Proof

Define the function g(α,β) with variables α>0,β>0 as

$$g(\alpha,\beta)=-2c+2L\|A\|+L\|B\|\alpha+L\|D\|\beta+\frac{\ln d}{T_m}+d^{-1}\Big(\frac{L\|B\|}{\alpha}+\frac{qL\|D\|}{\beta}\Big).$$

In order to make (35) as little conservative as possible, we only need to find a point $(\bar{\alpha},\bar{\beta})$ at which $g$ attains its minimum and require $g(\bar{\alpha},\bar{\beta})<0$. By simple computation, $\frac{\partial g}{\partial\alpha}=L\|B\|-\frac{L\|B\|}{d\alpha^2}$ and $\frac{\partial g}{\partial\beta}=L\|D\|-\frac{qL\|D\|}{d\beta^2}$. Setting $\frac{\partial g}{\partial\alpha}=\frac{\partial g}{\partial\beta}=0$ gives $(\bar{\alpha},\bar{\beta})=\big(\frac{1}{\sqrt{d}},\sqrt{\frac{q}{d}}\big)$. Since $\frac{\partial^2g}{\partial\alpha^2}\big|_{(\bar{\alpha},\bar{\beta})}\frac{\partial^2g}{\partial\beta^2}\big|_{(\bar{\alpha},\bar{\beta})}-\big(\frac{\partial^2g}{\partial\alpha\partial\beta}\big|_{(\bar{\alpha},\bar{\beta})}\big)^2=\frac{4d}{\sqrt{q}}L^2\|B\|\,\|D\|>0$ and $\frac{\partial^2g}{\partial\alpha^2}\big|_{(\bar{\alpha},\bar{\beta})}=2\sqrt{d}L\|B\|>0$, $g(\alpha,\beta)$ attains its minimum at $(\bar{\alpha},\bar{\beta})=\big(\frac{1}{\sqrt{d}},\sqrt{\frac{q}{d}}\big)$. Requiring $g\big(\frac{1}{\sqrt{d}},\sqrt{\frac{q}{d}}\big)<0$ yields (55). This completes the proof.

Remark 8

When the activation function $f(x)$ is continuous, the constant $P$ becomes zero. In this case, the state feedback controller (20) and the impulsive controller (31) are still effective, and the constant $\eta$ can be chosen as $\eta\ge 0$. When $\eta=0$, the controllers (20) and (31) reduce to the classical state feedback controller (Liu et al. 2011) and the classical impulsive controller (Yang et al. 2011b), respectively. Moreover, the bounded distributed delay becomes the usual one when the delay kernel $K(t)\equiv 1$. It is obvious from Lemma 5 and the proofs of Theorems 1 and 2 that the distributed delay can be extended to the unbounded case. Hence, the results of this paper are general and applicable to neural networks with both discontinuous and continuous activations.

Remark 9

In this paper, the existence of Filippov solutions to a class of discontinuous neural networks with time-varying mixed delays is established, and the analysis technique is different from those in Liu et al. (2012) and Cai and Huang (2011). One may wonder whether the existence of the solution of (17) is still guaranteed when the controller is added to the system. This problem can also be solved in this paper. In the case that $u(t)$ is the state feedback controller (20), the existence of the solution of (17) is guaranteed by using the same analysis technique as in the proof of Lemma 2. In the case of the impulsive controller (31), one can first translate (17) into the form of (46) by using a method similar to that in (43)–(46), and then apply the same analysis technique as in the proof of Lemma 2. In order to avoid tedious repetition, we do not give all the proofs in this paper.

Examples and simulations

In this section, numerical examples and figures are given to show the theoretical results obtained above. Consider the following delayed neural networks:

$$\dot{x}(t)=-Cx(t)+Af(x(t))+Bf(x(t-\tau(t)))+D\int_{t-\theta(t)}^{t}K(t-s)f(x(s))\,ds+I, \qquad (56)$$

where $x(t)=(x_1(t),x_2(t))^T$, $\tau(t)=0.8-0.2\sin t$, $\theta(t)=0.3$, $K(t)=4^{-0.2t}$, $C=\mathrm{diag}(1.2,1)$, $I=(1.5,0.5)^T$, and

$$A=\begin{pmatrix}3 & -0.3\\ 8 & 5\end{pmatrix},\quad B=\begin{pmatrix}-1.4 & 0.1\\ 0.3 & -7\end{pmatrix},\quad D=\begin{pmatrix}-1.2 & 0.1\\ -2.8 & -1\end{pmatrix},$$

the activation function is $f(x)=(f_1(x_1),f_2(x_2))^T$ with

$$f_i(x_i)=\begin{cases}\tanh(x_i)+0.05, & x_i>0,\\ \tanh(x_i)-0.05, & x_i<0,\end{cases}\qquad i=1,2.$$

Figure 2 shows the trajectory of (56) with initial condition $x(t)=(0.4,0.6)^T$, $t\in[-1,0]$.

Fig. 2 Trajectory of system (56) with initial value $x(t)=(0.4,0.6)^T$, $t\in[-1,0]$

Obviously, the neural network (56) satisfies (H1)–(H5) with $\tau=1$, $\theta=0.3$, $h=0.2$, $L=1$, $P=0.1$, and $q=0.2879$. Now consider the following controlled response system of system (56):

$$\dot{y}(t)=-Cy(t)+Af(y(t))+Bf(y(t-\tau(t)))+D\int_{t-\theta(t)}^{t}K(t-s)f(y(s))\,ds+I+u(t). \qquad (57)$$

When no controller is added to the response system (57), systems (56) and (57) cannot realize synchronization; see Fig. 3.

Fig. 3 Time response of the synchronization error $e(t)=y(t)-x(t)$ between systems (56) and (57) without control

In the first case, we use the state feedback controller (20). Take $\alpha=0.1$. By simple computation, we get $R_1\ge 18.2444$, $R_2\ge 18.4444$, and $P(\|A\|+\|B\|+q\|D\|)=1.8664$. Take $R_1=18.2444$, $R_2=18.4444$, and $\eta=1.9662$. According to Theorem 1, system (57) can be exponentially synchronized with (56) under the state feedback controller (20), and the convergence rate is $\alpha=0.1$.
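These constants can be checked numerically; the following sketch (our own verification script, not part of the paper) computes $q$, the spectral norms, and the gain bounds of Theorem 1 for the parameters of system (56):

```python
import numpy as np

A = np.array([[3.0, -0.3], [8.0, 5.0]])
B = np.array([[-1.4, 0.1], [0.3, -7.0]])
D = np.array([[-1.2, 0.1], [-2.8, -1.0]])
c = np.array([1.2, 1.0])                 # diagonal of C
L, P, h, alpha, theta = 1.0, 0.1, 0.2, 0.1, 0.3

# q = int_0^theta 4**(-0.2 s) ds, evaluated in closed form
q = (1.0 - 4.0 ** (-0.2 * theta)) / (0.2 * np.log(4.0))
print(round(q, 4))                       # 0.2879

nA, nB, nD = (np.linalg.norm(M, 2) for M in (A, B, D))   # spectral norms

# lower bounds on R_1, R_2 from Theorem 1
R_bound = -c + L * nA + (2 - h) / (2 * (1 - h)) * L * nB \
          + 0.5 * L * nD * (q ** 2 + 1) + alpha
print(np.round(R_bound, 4))              # about [18.2444 18.4444]

# bound (21) on eta; the chosen eta = 1.9662 exceeds it
print(round(P * (nA + nB + q * nD), 4))
```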

In the numerical simulations, we use the forward Euler method, which was used in Danca (2004) to obtain numerical solutions of differential inclusions. The parameters in the simulations are taken as follows: the step length is 0.0005 and $y(t)=(-0.1,1.1)^T$ for $t\in[-1,0]$. The trajectory of $\log\|e(t)\|=\log\|y(t)-x(t)\|$ is presented in Fig. 4, from which one can see that $\log\|e(t)\|\le\omega(t)=-4.9t+0.5$, so the convergence rate of $\|e(t)\|$ is larger than 4.9, which is larger than $\alpha=0.1$. Therefore, the results in Theorem 1 are verified.
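A compact, self-contained sketch of such an Euler scheme is given below (it is not the authors' code; the treatment of the discrete delay by history indexing, the Riemann sum for the distributed-delay integral, and the shortened horizon are our own assumptions):

```python
import numpy as np

# Parameters of the example (56)-(57)
C = np.diag([1.2, 1.0])
A = np.array([[3.0, -0.3], [8.0, 5.0]])
B = np.array([[-1.4, 0.1], [0.3, -7.0]])
D = np.array([[-1.2, 0.1], [-2.8, -1.0]])
I_ext = np.array([1.5, 0.5])
R = np.diag([18.2444, 18.4444]); eta = 1.9662
dt, T, theta = 0.0005, 5.0, 0.3

def f(x):                        # discontinuous activation of the example
    return np.tanh(x) + 0.05 * np.sign(x)

def dist_term(hist, k):          # Riemann sum for int_{t-theta}^t K(t-s) f(.(s)) ds
    m = int(theta / dt)
    idx = np.arange(k - m, k + 1)
    w = 4.0 ** (-0.2 * (k - idx) * dt) * dt
    return (w[:, None] * f(hist[idx])).sum(axis=0)

n_hist = int(1.0 / dt)           # history buffer covers the maximal delay tau <= 1
steps = int(T / dt)
x = np.zeros((n_hist + steps + 1, 2)); x[:n_hist + 1] = [0.4, 0.6]
y = np.zeros_like(x);              y[:n_hist + 1] = [-0.1, 1.1]

for k in range(n_hist, n_hist + steps):
    t = (k - n_hist) * dt
    kd = k - int((0.8 - 0.2 * np.sin(t)) / dt)            # index of t - tau(t)
    e = y[k] - x[k]
    u = -R @ e - eta * np.sign(e)                          # controller (20)
    x[k + 1] = x[k] + dt * (-C @ x[k] + A @ f(x[k]) + B @ f(x[kd])
                            + D @ dist_term(x, k) + I_ext)
    y[k + 1] = y[k] + dt * (-C @ y[k] + A @ f(y[k]) + B @ f(y[kd])
                            + D @ dist_term(y, k) + I_ext + u)

print(np.linalg.norm(y[-1] - x[-1]))    # synchronization error at t = T
```

If the sketch is run, the printed error should be close to zero, in line with the exponential decay shown in Fig. 4.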

Fig. 4 Time response of $\log\|e(t)\|$ between systems (56) and (57) via the state feedback controller (20)

Now we construct the impulsive controller. Taking $E=-0.5I_2$ and $t_k-t_{k-1}=T_m=0.025$ for $k\in\mathbb{N}^+$, we obtain $d=0.25<1$ and

$$-c+L\|A\|+\frac{L\|B\|}{\sqrt{d}}+\sqrt{\frac{q}{d}}\,L\|D\|+\frac{\ln d}{2T_m}=-1.562<0.$$

According to the above computation in the first case, we still take η=1.9662. Then the inequalities (33), (34) and (55) are satisfied. According to Corollary 1, system (57) can be exponentially synchronized with (56) under the impulsive controller (31).
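The left-hand side of (55) reported above can be reproduced with a few lines (again a verification sketch of ours, not from the paper):

```python
import numpy as np

nA, nB, nD = (np.linalg.norm(M, 2) for M in
              (np.array([[3.0, -0.3], [8.0, 5.0]]),
               np.array([[-1.4, 0.1], [0.3, -7.0]]),
               np.array([[-1.2, 0.1], [-2.8, -1.0]])))
c, L, q, d, Tm = 1.0, 1.0, 0.2879, 0.25, 0.025

lhs = -c + L * nA + L * nB / np.sqrt(d) + np.sqrt(q / d) * L * nD \
      + np.log(d) / (2 * Tm)
print(round(lhs, 3))    # about -1.562 < 0, so condition (55) of Corollary 1 holds
```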

In the numerical simulations, the other parameters and the initial values of $y(t)$ are the same as those in the first case. The trajectory of $\log\|e(t)\|=\log\|y(t)-x(t)\|$ is presented in Fig. 5, which verifies the effectiveness of Corollary 1, and hence Theorem 2 is also effective.

Fig. 5 Time response of $\log\|e(t)\|$ between systems (56) and (57) via the impulsive controller (31)

Remark 10

Numerical examples and simulations illustrate the effectiveness of the designed controllers and the obtained results. In contrast, the results and numerical simulations in Liu et al. (2011) only achieve quasi-synchronization for systems with discontinuous right-hand side under the linear state feedback controller $u_i(t)=-k_ie_i(t)$, $i=1,2,\ldots,n$ (see, for instance, Remark 6, Corollary 2, and Figs. 8–10 in Example 3 of Liu et al. (2011)). From the proofs of Theorems 1–2 and Corollary 1 and the numerical simulations, one can see that the term $-\eta\,\mathrm{sign}(e_i(t))$ in the controllers plays a key role in coping with the uncertain measurable functions $\gamma(t)\in F(x(t))$ and $\delta(t)\in F(y(t))$ in the drive and response systems. Therefore, the theoretical results and numerical simulations in this paper improve those in Liu et al. (2011).

Conclusion

The issue of controlled synchronization of neural networks with discontinuous activations has attracted increasing interest from researchers in different fields. However, few published papers have considered complete synchronization control of such systems with mixed delays. Hence, this paper has studied the exponential synchronization of neural networks with discontinuous activations and mixed delays. A rigorous mathematical proof guaranteeing the global existence of Filippov solutions to neural networks with discontinuous activation functions and mixed delays has been given. Both state feedback control and impulsive control techniques have been considered. The designed controllers are simple and can be applied to neural networks with discontinuous and continuous activations. Compared with existing results, in which only quasi-synchronization can be realized, the results of this paper are stronger. Numerical simulations show that the theoretical results are effective.

It is well known that finite-time synchronization implies optimality in convergence time (Yang and Cao 2010). The authors of Forti et al. (2005) investigated global convergence in finite time for a subclass of neural networks with discontinuous activations and constant delay. However, finite-time synchronization in an array of coupled general neural networks with discontinuous activations and time-varying mixed delays has not been considered in the literature; this is an interesting and challenging problem to be considered in our future research.

Acknowledgments

This work was jointly supported by the National Natural Science Foundation of China (NSFC) under Grants Nos. 61263020, 11101053, 61272530, and 11072059, and CityU Grants 7008188 and 7002868, and the Program of Chongqing Innovation Team Project in University under Grant No. KJTD201308, the Natural Science Foundation of Jiangsu Province of China under Grant BK2012741.

Contributor Information

Xinsong Yang, Email: xinsongyang@163.com.

Jinde Cao, Email: jdcao@seu.edu.cn.

Daniel W. C. Ho, Email: madaniel@cityu.edu.hk

References

  1. Aubin J, Cellina A. Differential inclusions: set-valued maps and viability theory. New York: Springer; 1984.
  2. Balasubramaniam P, Ntouyas SK, Vinayagam D. Existence of solutions of semilinear stochastic delay evolution inclusions in a Hilbert space. J Math Anal Appl. 2005;305(2):438–451. doi: 10.1016/j.jmaa.2004.10.063.
  3. Benchohra M, Ntouyas SK. Existence of mild solutions of semilinear evolution inclusions with nonlocal conditions. Georgian Math J. 2000;7(2):221–230.
  4. Cai Z, Huang L. Existence and global asymptotic stability of periodic solution for discrete and distributed time-varying delayed neural networks with discontinuous activations. Neurocomputing. 2011;74:3170–3179. doi: 10.1016/j.neucom.2011.04.027.
  5. Cao J, Alofi A, Al-Mazrooei A, Elaiw A (2013) Synchronization of switched interval networks and applications to chaotic neural networks. Abstr Appl Anal. Article ID 940573, 11 p.
  6. Cao J, Wan Y. Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays. Neural Netw. 2014;53:165–172. doi: 10.1016/j.neunet.2014.02.003.
  7. Cao J, Wang Z, Sun Y. Synchronization in an array of linearly stochastically coupled networks with time delays. Physica A. 2007;385(2):718–728. doi: 10.1016/j.physa.2007.06.043.
  8. Clarke F. Optimization and nonsmooth analysis. New York: Wiley; 1983.
  9. Danca M. Controlling chaos in discontinuous dynamical systems. Chaos Solitons Fractals. 2004;22:605–612. doi: 10.1016/j.chaos.2004.02.032.
  10. Di Marco M, Forti M, Grazzini M, Pancioni L. Fourth-order nearly-symmetric CNNs exhibiting complex dynamics. Int J Bifurc Chaos. 2005;15(5):1579–1587. doi: 10.1142/S0218127405012867.
  11. Di Marco M, Forti M, Grazzini M, Pancioni L. Limit set dichotomy and convergence of semiflows defined by cooperative standard CNNs. Int J Bifurc Chaos. 2010;20(11):3549–3563. doi: 10.1142/S0218127410027891.
  12. Filippov A. Differential equations with discontinuous right-hand side. Matematicheskii Sb. 1960;93(1):99–128.
  13. Forti M, Nistri P. Global convergence of neural networks with discontinuous neuron activations. IEEE Trans Circuits Syst I. 2003;50(11):1421–1435. doi: 10.1109/TCSI.2003.818614.
  14. Forti M, Grazzini M, Nistri P, Pancioni L. Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations. Physica D. 2006;214(1):88–99. doi: 10.1016/j.physd.2005.12.006.
  15. Forti M, Nistri P, Papini D. Global exponential stability and global convergence in finite time of delayed neural networks with infinite gain. IEEE Trans Neural Netw. 2005;16(6):1449–1463. doi: 10.1109/TNN.2005.852862.
  16. Haddad G. Monotone viable trajectories for functional differential inclusions. J Differ Equ. 1981;42(1):1–24. doi: 10.1016/0022-0396(81)90031-0.
  17. Kamel MS, Xia Y. Cooperative recurrent modular neural networks for constrained optimization: a survey of models and applications. Cogn Neurodyn. 2009;3(1):47–81. doi: 10.1007/s11571-008-9036-2.
  18. Li JH, Michel AN, Porod W. Analysis and synthesis of a class of neural networks: variable structure systems with infinite gain. IEEE Trans Circuits Syst. 1989;36(5):713–731. doi: 10.1109/31.31320.
  19. Li Y, Liu Z, Luo J, Wu H. Coupling-induced synchronization in multicellular circadian oscillators of mammals. Cogn Neurodyn. 2013;7(1):59–65. doi: 10.1007/s11571-012-9218-9.
  20. Liao CW, Lu CY. Design of delay-dependent state estimator for discrete-time recurrent neural networks with interval discrete and infinite-distributed time-varying delays. Cogn Neurodyn. 2011;5(2):133–143. doi: 10.1007/s11571-010-9135-8.
  21. Liao T, Huang NS. An observer-based approach for chaotic synchronization with applications to secure communications. IEEE Trans Circuits Syst I. 1999;46(9):1144–1150. doi: 10.1109/81.788817.
  22. Liu J, Liu X, Xie W. Global convergence of neural networks with mixed time-varying delays and discontinuous neuron activations. Inf Sci. 2012;183:92–105. doi: 10.1016/j.ins.2011.08.021.
  23. Liu X, Park JH, Jiang N, Cao J. Nonsmooth finite-time stabilization of neural networks with discontinuous activations. Neural Netw. 2014;52:25–32. doi: 10.1016/j.neunet.2014.01.004.
  24. Liu X, Chen T, Cao J, Lu W. Dissipativity and quasi-synchronization for neural networks with discontinuous activations and parameter mismatches. Neural Netw. 2011;24(10):1013–1021. doi: 10.1016/j.neunet.2011.06.005.
  25. Lu W, Chen T. Almost periodic dynamics of a class of delayed neural networks with discontinuous activations. Neural Comput. 2008;20(4):1065–1090. doi: 10.1162/neco.2008.10-06-364.
  26. Lu W, Chen T. Dynamical behaviors of delayed neural network systems with discontinuous activation functions. Neural Comput. 2006;18(3):683–708. doi: 10.1162/neco.2006.18.3.683.
  27. Martelli M. A Rothe's type theorem for non compact acyclic-valued maps. Boll Unione Mat Ital. 1975;4(3):70–76.
  28. Paden B, Sastry S. A calculus for computing Filippov's differential inclusion with application to the variable structure control of robot manipulators. IEEE Trans Circuits Syst. 1987;34(1):73–82. doi: 10.1109/TCS.1987.1086038.
  29. Pecora L, Carroll TL. Synchronization in chaotic systems. Phys Rev Lett. 1990;64(8):821–824. doi: 10.1103/PhysRevLett.64.821.
  30. Rigatos G (2014) Robust synchronization of coupled neural oscillators using the derivative-free nonlinear Kalman filter. Cogn Neurodyn. doi: 10.1007/s11571-014-9299-8.
  31. Tank D, Hopfield JJ. Neural computation by concentrating information in time. Proc Natl Acad Sci. 1987;84(7):1896–1900. doi: 10.1073/pnas.84.7.1896.
  32. Wang J, Huang L, Guo Z. Global asymptotic stability of neural networks with discontinuous activations. Neural Netw. 2009;22(7):931–937. doi: 10.1016/j.neunet.2009.04.004.
  33. Wang T, Xie L, de Souza CE. Robust control of a class of uncertain nonlinear systems. Syst Control Lett. 1992;19(2):139–149. doi: 10.1016/0167-6911(92)90097-C.
  34. Wang Y, Wang Z, Liang J, Li Y, Du M. Synchronization of stochastic genetic oscillator networks with time delays and Markovian jumping parameters. Neurocomputing. 2010;73(13–15):2532–2539. doi: 10.1016/j.neucom.2010.06.006.
  35. Xu A, Du Y, Wang R (2014) Interaction between different cells in olfactory bulb and synchronous kinematic analysis. Discrete Dyn Nat Soc. Article ID 808792.
  36. Yan C, Wang R. Asymmetric neural network synchronization and dynamics based on an adaptive learning rule of synapses. Neurocomputing. 2014;125:41–45. doi: 10.1016/j.neucom.2012.07.045.
  37. Yang X, Cao J. Finite-time stochastic synchronization of complex networks. Appl Math Model. 2010;34(11):3631–3641. doi: 10.1016/j.apm.2010.03.012.
  38. Yang X, Cao J. Exponential synchronization of delayed neural networks with discontinuous activations. IEEE Trans Circuits Syst I. 2013;60(9):2431–2439.
  39. Yang X, Huang C, Zhu Q. Synchronization of switched neural networks with mixed delays via impulsive control. Chaos Solitons Fractals. 2011;44(10):817–826. doi: 10.1016/j.chaos.2011.06.006.
  40. Yang X, Cao J, Lu J. Synchronization of delayed complex dynamical networks with impulsive and stochastic effects. Nonlinear Anal Real World Appl. 2011;12:2252–2266. doi: 10.1016/j.nonrwa.2011.01.007.
  41. Yang X, Cao J, Lu J. Synchronization of coupled neural networks with random coupling strengths and mixed probabilistic time-varying delays. Int J Robust Nonlinear Control. 2013;23(18):2060–2081. doi: 10.1002/rnc.2868.
  42. Yang X, Cao J, Yu W. Exponential synchronization of memristive Cohen–Grossberg neural networks with mixed delays. Cogn Neurodyn. 2014;8(3):239–249. doi: 10.1007/s11571-013-9277-6.
  43. Zhang Z, Cao J, Zhou D. Novel LMI-based condition on global asymptotic stability for a class of Cohen–Grossberg BAM networks with extended activation functions. IEEE Trans Neural Netw Learn Syst. 2014;25(6):1161–1172. doi: 10.1109/TNNLS.2013.2289855.
