Journal of Mathematical Neuroscience. 2011 May 3;1:1. doi: 10.1186/2190-8567-1-1

Stability of the stationary solutions of neural field equations with propagation delays

Romain Veltz 1,2, Olivier Faugeras 2

Abstract

In this paper, we consider neural field equations with space-dependent delays. Neural fields are continuous assemblies of mesoscopic models arising when modeling macroscopic parts of the brain. They are modeled by nonlinear integro-differential equations. We rigorously prove, for the first time to our knowledge, sufficient conditions for the stability of their stationary solutions. We use two methods: 1) the computation of the eigenvalues of the linear operator defined by the linearized equations and 2) the formulation of the problem as a fixed point problem. The first method involves tools of functional analysis and yields a new estimate of the semigroup of the previous linear operator using the eigenvalues of its infinitesimal generator. It yields a sufficient condition for stability which is independent of the characteristics of the delays. The second method allows us to find new sufficient conditions for the stability of stationary solutions which depend upon the values of the delays. These conditions are very easy to evaluate numerically. We illustrate the conservativeness of the bounds by comparing them with numerical simulations.

1 Introduction

Neural field equations first appeared as a spatially continuous extension of Hopfield networks in the seminal works of Wilson and Cowan, and Amari [1,2]. These networks describe the mean activity of neural populations by nonlinear integral equations and play an important role in the modeling of various cortical areas including the visual cortex. They have been modified to take into account several relevant biological mechanisms like spike-frequency adaptation [3,4], the tuning properties of some populations [5] or the spatial organization of the populations of neurons [6]. In this work we focus on the role of the delays coming from the finite velocity of signals in axons or dendrites and from the time of synaptic transmission [7,8]. It turns out that delayed neural field equations feature some interesting mathematical difficulties. The main question we address in the sequel is the following: once the stationary states of a non-delayed neural field equation are well understood, what changes, if any, are caused by the introduction of propagation delays? We think this question is important since non-delayed neural field equations are by now pretty well understood, at least in terms of their stationary solutions, but the same is not true for their delayed versions, which in many cases are better models, closer to experimental findings. A lot of work has been done concerning the role of delays in wave propagation or in the linear stability of stationary states, but except in [9] the method used reduces to the computation of the eigenvalues (which we call characteristic values) of the linearized equation in some analytically convenient cases (see [10]). Some results are known in the case of a finite number of neurons [11,12] and in the case of a small number of distinct delays [13,14]: the dynamical portrait is highly intricate even in the case of two neurons with delayed connections.

The purpose of this article is to propose a solid mathematical framework to characterize the dynamical properties of neural field systems with propagation delays and to show that it allows us to find sufficient delay-dependent bounds for the linear stability of the stationary states. This is a step toward answering the question of how much delay can be introduced in a neural field model without destabilizing it. As a consequence one can infer in some cases, without much extra work, from the analysis of a neural field model without propagation delays, the changes caused by the finite propagation times of signals. This framework also allows us to prove a linear stability principle to study the bifurcations of the solutions when varying the nonlinear gain and the propagation times.

The paper is organized as follows: in section 2 we describe our model of a delayed neural field, state our assumptions and prove that the resulting equations are well-posed and admit a unique bounded solution for all times. In section 3 we give two different methods for establishing the linear stability of stationary cortical states, i.e. of the time-independent solutions of these equations. The first one, section 3.1, is computationally intensive but accurate. The second one, section 3.2, is much lighter in terms of computation but unfortunately leads to somewhat coarse approximations. Readers not interested in the theoretical and analytical developments can go directly to the summary of this section. We illustrate these abstract results in section 4 by applying them to a detailed study of a simple but illuminating example.

2 The model

We consider the following neural field equations defined over an open bounded piece of cortex and/or feature space Ω ⊂ Rd. They describe the dynamics of the mean membrane potential of each of p neural populations.

(d/dt + li) Vi(r, t) = Σj=1,⋯,p ∫Ω Jij(r, r̄) S[σj (Vj(r̄, t - τij(r, r̄)) - hj)] dr̄ + Iiext(r, t),  t ≥ 0,  i = 1, ⋯, p    (1)
Vi(r, t) = ϕi(r, t),  t ∈ [-T, 0],  i = 1, ⋯, p

We give an interpretation of the various parameters and functions that appear in (1).

Ω is a finite piece of cortex and/or feature space and is represented as an open bounded set of ℝd. The vectors r and r̄ represent points in Ω.

The function S : R → (0, 1) is the normalized sigmoid function:

S(x) = 1 / (1 + exp(-x))    (2)

It describes the relation between the firing rate νi of population i and its membrane potential Vi: νi = S[σi (Vi - hi)]. We note V the p-dimensional vector (V1, ⋯, Vp).

The p functions ϕi, i = 1, ⋯, p represent the initial conditions, see below. We note ϕ the p-dimensional vector (ϕ1, ⋯, ϕp).

The p functions Inline graphic, i = 1, ⋯, p represent external currents from other cortical areas. We note Iext the p-dimensional vector Inline graphic.

The p × p matrix of functions J = {Jij}i,j = 1, ⋯, p represents the connectivity between populations i and j, see below.

The p real values hi, i = 1, ⋯, p, determine the threshold of activity for each population, i.e. the value of the membrane potential corresponding to 50% of the maximal activity.

The p real positive values σi, i = 1, ⋯, p determine the slopes of the sigmoids at the origin.

Finally the p real positive values li, i = 1, ⋯, p, determine the speed at which each membrane potential decreases exponentially toward its rest value.

We also introduce the function

S : ℝp → ℝp, defined by S(x) = [S(σ1(x1 - h1)), ⋯, S(σp(xp - hp))], and the diagonal p × p matrix L0 = diag(l1, ⋯, lp).
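
As a small numerical illustration of these definitions (a minimal sketch; the number of populations and the values of the slopes, thresholds and decay rates below are illustrative choices, not taken from the article), the maps S and L0 can be written as follows.

import numpy as np

def S_scalar(x):
    # Normalized sigmoid of equation (2): values in (0, 1), S(0) = 1/2.
    return 1.0 / (1.0 + np.exp(-x))

def S_vec(V, sigma, h):
    # Componentwise map S(V) = (S(sigma_i (V_i - h_i)))_{i=1..p}.
    return S_scalar(sigma * (V - h))

# Illustrative example with p = 2 populations.
sigma = np.array([1.0, 2.0])   # slopes of the sigmoids at the origin
h     = np.array([0.0, 0.5])   # thresholds (50% of maximal activity at V_i = h_i)
l     = np.array([1.0, 0.5])   # decay rates of the membrane potentials
L0    = np.diag(l)             # L0 = diag(l_1, ..., l_p)

V = np.array([0.0, 0.5])
print(S_vec(V, sigma, h))      # [0.5, 0.5] since V_i = h_i here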

A difference with other studies is the intrinsic dynamics of the population given by the linear response of chemical synapses. In [9,15], Inline graphic is replaced by Inline graphic to use the alpha function synaptic response.

We use Inline graphic for simplicity although our analysis applies to more general intrinsic dynamics, see proposition 3.10 in section 3.1.3.

For the sake of generality, the propagation delays are not assumed to be identical for all populations, hence they are described by a matrix Inline graphic whose element Inline graphic is the propagation delay between population j at Inline graphic and population i at r. The reason for this assumption is that it is still unclear from physiology if propagation delays are independent of the populations. We assume for technical reasons that τ is continuous, i.e. Inline graphic. Moreover biological data indicate that τ is not a symmetric function (i.e. Inline graphic), thus no assumption is made about this symmetry unless otherwise stated.

In order to compute the right-hand side of (1), we need to know the voltage V on some interval [-T, 0].

The value of T is obtained by considering the maximal delay:

τm = maxi,j = 1,⋯,p  max(r, r̄) ∈ Ω̄ × Ω̄  τij(r, r̄)

Hence we choose T = τm.

2.1 The propagation-delay function

What are the possible choices for the propagation-delay function Inline graphic? There are few papers dealing with this subject. Our analysis is built upon [16]. The authors of this paper study, inter alia, the relationship between the path length along axons from soma to synaptic boutons and the Euclidean distance to the soma. They observe a linear relationship with a slope close to one. If we neglect the dendritic arbor, this means that if a neuron located at r is connected to another neuron located at Inline graphic, the path length of this connection is very close to Inline graphic; in other words, axons are straight lines. According to this, we will choose in the following:

τ(r, r̄) = c ‖r - r̄‖2

where c is the inverse of the propagation speed.
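
For a sampled piece of cortex this choice leads to a simple delay matrix; a minimal sketch is given below (the grid, the value of c and the square patch are illustrative assumptions).

import numpy as np

def delay_matrix(points, c):
    # tau(r, rbar) = c * ||r - rbar||_2 between all pairs of sampled points;
    # c is the inverse of the propagation speed.
    diff = points[:, None, :] - points[None, :, :]
    tau = c * np.linalg.norm(diff, axis=-1)
    return tau, tau.max()          # tau_m = maximal delay over the sampled pairs

# Illustrative example: a 2D patch sampled on a 20 x 20 grid.
g = np.linspace(0.0, 1.0, 20)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
tau, tau_m = delay_matrix(pts, c=0.5)
print("maximal delay tau_m =", tau_m)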

2.2 Mathematical framework

A convenient functional setting for the non-delayed neural field equations (see [17-19]) is to use the space F = L²(Ω, ℝp), which is a Hilbert space endowed with the usual inner product:

⟨U, V⟩F = Σi=1,⋯,p ∫Ω Ui(r) Vi(r) dr

To give a meaning to (1), we define the history space C = C([-τm, 0], F) with ‖ϕ‖C = supθ∈[-τm, 0] ‖ϕ(θ)‖F, which is the Banach phase space associated with equation (3) below. Using the notation Vt(θ) = V(t + θ), θ ∈ [-τm, 0], we write (1) as:

d/dt V(t) = -L0 V(t) + L1 S(Vt) + Iext(t),  t ≥ 0,  V0 = ϕ ∈ C    (3)

where

(L1 ϕ)i(r) = Σj=1,⋯,p ∫Ω Jij(r, r̄) ϕj(r̄, -τij(r, r̄)) dr̄,  ϕ ∈ C,

is the linear continuous operator satisfying (the notation |||·||| is defined in definition A.2 of appendix A) |||L1||| ≤ ‖J‖L²(Ω²,ℝp×p). Notice that most of the papers on this subject assume Ω infinite, hence requiring τm = ∞. This raises difficult mathematical questions which we do not have to worry about, unlike [9,15,20-24].

We first recall the following proposition whose proof appears in [25].

Proposition 2.1. If the following assumptions are satisfied:

1. J ∈ L²(Ω², ℝp×p)

2. the external current Inline graphic

3. Inline graphic,

then for any Inline graphic, there exists a unique solution Inline graphic to (3).

Notice that this result gives existence on ℝ+: finite-time explosion is impossible for this delayed differential equation. Nevertheless, a particular solution could grow indefinitely; we now prove that this cannot happen.

2.3 Boundedness of solutions

A valid model of neural networks should only feature bounded membrane potentials. We find a bounded attracting set in the spirit of our previous work with non-delayed neural mass equations. The proof is almost the same as in [19] but some care has to be taken because of the delays.

Theorem 2.2. All the trajectories of the equation (3) are ultimately bounded by the same constant R (see the proof) if Inline graphic.

Proof. Let us define Inline graphicas

graphic file with name 2190-8567-1-1-i28.gif

We note l = mini = 1⋯p li and from lemma B.2 (see appendix B.1):

graphic file with name 2190-8567-1-1-i29.gif

Thus, if Inline graphic.

Let us show that the open ball BR of Inline graphic of center 0 and radius R is stable under the dynamics of equation (3). We know that V(t) is defined for all t ≥ 0 and that f < 0 on ∂BR, the boundary of BR. We consider three cases for the initial condition V0.

If Inline graphic, set Inline graphic. Suppose that T ∈ ℝ; then V(T) is defined and belongs to Inline graphic, the closure of BR, because Inline graphic is closed, in effect to ∂BR. We also have Inline graphic because V(T) ∈ ∂BR. Thus we deduce that for ε > 0 small enough, V(T + ε) ∈ Inline graphic, which contradicts the definition of T. Thus T ∉ ℝ and Inline graphic is stable.

Because f < 0 on ∂BR, V(0) ∈ ∂BR implies that ∀t > 0, V(t) ∈ BR.

Finally we consider the case Inline graphic. Suppose that ∀t > 0, Inline graphic; then ∀t > 0, Inline graphic, thus Inline graphic is monotonically decreasing and reaches the value R in finite time when V(t) reaches ∂BR. This contradicts our assumption. Thus ∃T > 0 such that V(T) ∈ BR.   □

3 Stability results

When studying a dynamical system, a good starting point is to look for invariant sets. Theorem 2.2 provides such an invariant set but it is a very large one, not sufficient to convey a good understanding of the system. Other invariant sets (included in the previous one) are stationary points. Notice that delayed and non-delayed equations share exactly the same stationary solutions, also called persistent states. We can therefore make good use of the harvest of results that are available about these persistent states, which we note Vf. Note that in most papers dealing with persistent states, the authors compute one of them and are satisfied with the study of the local dynamics around this particular stationary solution. Very few authors (we are aware only of [19,26]) address the problem of the computation of the whole set of persistent states. Despite these efforts, they have not yet been able to obtain a complete grasp of the global dynamics. To summarize, in order to understand the impact of the propagation delays on the solutions of the neural field equations, it is necessary to know all their stationary solutions and the dynamics in the region where these stationary solutions lie. Unfortunately such knowledge is currently not available. Hence we must be content with studying the local dynamics around each persistent state (computed for example with the tools of [19]) with and without propagation delays. This is already, we think, a significant step forward toward understanding delayed neural field equations.

From now on we note Vf a persistent state of (3) and study its stability.

We can identify at least three ways to do this:

1. to derive a Lyapunov functional,

2. to use a fixed point approach,

3. to determine the spectrum of the infinitesimal generator associated to the linearized equation.

Previous results concerning stability bounds in delayed neural mass equations are "absolute" results that do not involve the delays: they provide a sufficient condition, independent of the delays, for the stability of the fixed point (see [15,20-22]). The bound they find is similar to our second bound in proposition 3.13. They "proved" it by showing that if the condition was satisfied, the eigenvalues of the infinitesimal generator of the semigroup of the linearized equation had negative real parts. This is not sufficient because a more complete analysis of the spectrum (e.g., the essential part) is necessary, as shown below, in order to prove that the semigroup is exponentially bounded. In our case we prove this assertion in the case of a bounded cortex (see section 3.1). To our knowledge it is still unknown whether this is true in the case of an infinite cortex.

These authors also provide a delay-dependent sufficient condition to guarantee that no oscillatory instabilities can appear, i.e., they give a condition that forbids the existence of solutions of the form e^(i(k·r+ωt)). However, this result does not give any information regarding the stability of the stationary solution. We use the second method cited above, the fixed point method, to prove a more general result which takes into account the delay terms. We also use both the second method and the third, the spectral method, to prove the delay-independent bound from [15,20-22]. We then evaluate the conservativeness of these two sufficient conditions. Note that the delay-independent bound has been correctly derived in [25] using the first method, the Lyapunov method. It might be of interest to explore its potential to derive a delay-dependent bound.

We write the linearized version of (3) as follows. We choose a persistent state Vf and perform the change of variable U = V - Vf. The linearized equation reads

d/dt U(t) = -L0 U(t) + L̃1 Ut    (4)

where the linear operator L̃1 is given by

(L̃1 ϕ)i(r) = Σj=1,⋯,p ∫Ω Jij(r, r̄) σj S′[σj (Vfj(r̄) - hj)] ϕj(r̄, -τij(r, r̄)) dr̄,  i.e.  L̃1 ϕ = L1(DS(Vf) ϕ).

It is also convenient to define the following operator:

graphic file with name 2190-8567-1-1-i43.gif

3.1 Principle of linear stability analysis via characteristic values

We derive the stability of the persistent state Vf (see [19]) for the equation (1) or equivalently (3) using the spectral properties of the infinitesimal generator. We prove that if the eigenvalues of the infinitesimal generator of the right-hand side of (4) are in the left part of the complex plane, the stationary state U = 0 is asymptotically stable for equation (4). This result is difficult to prove because the spectrum (the main definitions for the spectrum of a linear operator are recalled in appendix A) of the infinitesimal generator neither reduces to the point spectrum (set of eigenvalues of finite multiplicity) nor is contained in a cone of the complex plane C (such an operator is said to be sectorial). The "principle of linear stability" is the fact that the linear stability of U is inherited by the state Vf for the nonlinear equations (1) or (3). This result is stated in the corollaries 3.7 and 3.8.

Following [27-31], we note (T(t))t≥0 the strongly continuous semigroup of (4) on Inline graphic (see definition A.3 in appendix A) and A its infinitesimal generator. By definition, if U is the solution of (4) we have Ut = T(t)ϕ. In order to prove the linear stability, we need to find a condition on the spectrum Σ(A) of A which ensures that T(t) → 0 as t → ∞.

Such a "principle" of linear stability was derived in [29,30]. Their assumptions implied that Σ(A) was a pure point spectrum (it contained only eigenvalues) with the effect of simplifying the study of the linear stability because, in this case, one can link estimates of the semigroup T to the spectrum of A. This is not the case here (see proposition 3.4).

When the spectrum of the infinitesimal generator does not only contain eigenvalues, we can use the result in [[27], chapter 4, theorem 3.10 and corollary 3.12] for eventually norm continuous semigroups (see definition A.4 in appendix A) which links the growth bound of the semigroup to the spectrum of A:

ω0(T) = s(A) := sup{ℜλ : λ ∈ Σ(A)}    (5)

Thus, U is uniformly exponentially stable for (4) if and only if

sup{ℜλ : λ ∈ Σ(A)} < 0

We prove in lemma 3.6 (see below) that (T(t))t≥0 is eventually norm continuous. Let us start by computing the spectrum of A.

3.1.1 Computation of the spectrum of A

In this section we use L1 for Inline graphic for simplicity.

Definition 3.1. We define Inline graphicfor λ ∈ C by:

Lλ = -L0 + J(λ)

where J(λ) is the compact (it is a Hilbert-Schmidt operator, see [[32], chapter X.2]) operator

(J(λ)U)i(r) = Σj=1,⋯,p ∫Ω Jij(r, r̄) σj S′[σj (Vfj(r̄) - hj)] e^(-λ τij(r, r̄)) Uj(r̄) dr̄,  U ∈ F

We now apply results from the theory of delay equations in Banach spaces (see [27,28,31]) which give the expression of the infinitesimal generator Inline graphic as well as its domain of definition

graphic file with name 2190-8567-1-1-i51.gif

The spectrum Σ(A) consists of those λ ∈ ℂ such that the operator Δ(λ) of Inline graphic defined by

Δ(λ) = λId + L0 - J(λ) is not invertible. We use the following definition:

Definition 3.2 (Characteristic values (CV)). The characteristic values of A are the λs such that Δ(λ) has a kernel which is not reduced to 0, i.e., is not injective.

It is easy to see that the CV are the eigenvalues of A.

There are various ways to compute the spectrum of an operator in infinite dimensions. They are related to how the spectrum is partitioned (for example continuous spectrum, point spectrum...). In the case of operators which are compact perturbations of the identity such as Fredholm operators, which is the case here, there is no continuous spectrum. Hence the most convenient way for us is to compute the point spectrum and the essential spectrum (see the appendix A). This is what we achieve next.

Remark 1. In finite dimension (i.e. Inline graphic), the spectrum of A consists only of CV. We show that this is not the case here.

Notice that most papers dealing with delayed neural field equations only compute the CV and numerically assess the linear stability (see [9,24,33]).

We now show that we can link the spectral properties of A to the spectral properties of Lλ. This is important since the latter operator is easier to handle because it acts on a Hilbert space. We start with the following lemma (see [34] for similar results in a different setting).

Lemma 3.3. λ ∈ Σess(A) ⇔ λ ∈ Σess (Lλ)

Proof. Let us define the following operator. If λ ∈ ℂ, we define Inline graphic by Inline graphic, Inline graphic. From [[28], Lemma 34], Inline graphic is surjective and it is easy to check that Inline graphic, see [[28], Lemma 35]. Moreover Inline graphic is closed in Inline graphic iff Inline graphic is closed in Inline graphic, see [[28], Lemma 36].

Let us now prove the lemma. We already know that Inline graphic is closed in Inline graphic iff Inline graphic is closed in Inline graphic. Also, we have Inline graphic, hence Inline graphic. It remains to check that Inline graphic.

Suppose that Inline graphic. There exist Inline graphic such that Inline graphic. Consider Inline graphic. Because Inline graphic is surjective, for all Inline graphic, there exists Inline graphic satisfying Inline graphic. We write Inline graphic. Then Inline graphic where Inline graphici.e. Inline graphic.

Suppose that Inline graphic. There exist Inline graphic such that Inline graphic. As Inline graphic is surjective for all i = 1, ⋯, N there exists Inline graphic such that Inline graphic. Now consider Inline graphic. Inline graphic can be written Inline graphic where Inline graphic. But Inline graphic because Inline graphic. It follows that Inline graphic.

Lemma 3.3 is the key to obtain Σ(A). Note that it is true regardless of the form of L and could be applied to other types of delays in neural field equations. We now prove the important following proposition.

Proposition 3.4. A satisfies the following properties:

1. Σess(A) = Σ(-L0)

2. Σ(A) is at most countable.

3. Σ(A) = Σ(-L0) ∪ CV

4. For λ ∈ Σ(A)\Σ(-L0), the generalized eigenspace Inline graphicis finite dimensional and k ∈ ℕ, Inline graphic

Proof. 1. λ ∈ Σess(A) ⇔ λ ∈ Σess(Lλ) = Σess(-L0 + J(λ)). We apply [[35], Theorem IV.5.26]. It shows that the essential spectrum does not change under compact perturbation. As Inline graphic is compact, we find Σess(-L0 + J(λ)) = Σess(-L0).

Let us show that Σess(-L0) = Σ(-L0). The assertion "⊂" is trivial. Now if λ ∈ Σ(-L0), for example λ = -l1, then λId + L0 = diag(0, -l1 + l2, ...).

Then Inline graphic is closed but Inline graphic. Hence Inline graphic. Also Inline graphic, hence Inline graphic. Hence, according to definition A.7, Inline graphic.

2. We apply [[35], Theorem IV.5.33] stating (in its first part) that if Σess(A) is at most countable, so is Σ(A).

3. We apply again [[35], Theorem IV.5.33] stating that if Σess(A) is at most countable, any point in Σ(A)\Σess(A) is an isolated eigenvalue with finite multiplicity.

4. Because Σess(A) ⊂ Σess, Arino(A), we can apply [[28], Theorem 2] which precisely states this property.    □

As an example, Figure 1 shows the first 200 eigenvalues computed for a very simple one-dimensional model. We notice that they accumulate at λ = -1, which is the essential spectrum. These eigenvalues have been computed using TraceDDE [36], a very efficient method for computing the CVs.

Figure 1. Plot of the first 200 eigenvalues of A in the scalar case (p = 1, d = 1) with L0 = Id and J(x) = -1 + 1.5 cos(2x). The delay function τ(x) is the π-periodic saw-like function shown in Figure 2. Notice that the eigenvalues accumulate at λ = -1.

Last but not least, we can prove that the CVs are almost all, i.e. except for possibly a finite number of them, located on the left part of the complex plane. This indicates that the unstable manifold is always finite dimensional for the models we are considering here.

Corollary 3.5. Card(Σ(A) ∩ {λ ∈ ℂ, ℜλ > -l}) < ∞, where l = mini li.

Proof. If λ = ρ + iω ∈ Σ(A) and ρ > -l, then λ is a CV, i.e. Inline graphic, stating that 1 ∈ ΣP((λId + L0)-1J(λ)) (ΣP denotes the point spectrum).

But Inline graphic for λ big enough since Inline graphic is bounded.

Hence, by the spectral radius inequality, 1 ∉ ΣP((λId + L0)-1J(λ)) for |λ| large enough. This shows that the CVs λ satisfying ℜλ > -l are located in a bounded set of the right part of ℂ; given that the CVs are isolated, there is a finite number of them.    □

3.1.2 Stability results from the characteristic values

We start with a lemma stating regularity for (T(t))t ≥ 0:

Lemma 3.6. The semigroup (T(t))t ≥ 0 of (4) is norm continuous on Inline graphicfor t > τm.

Proof. We first notice that -L0 generates a norm continuous semigroup (in fact a group) Inline graphic on Inline graphic and that Inline graphic is continuous from Inline graphic to Inline graphic. The lemma follows directly from [[27], Theorem VI.6.6].    □

Using the spectrum computed in proposition 3.4, the previous lemma and the formula (5), we can state the asymptotic stability of the linear equation (4). Notice that because of corollary 3.5, the supremum in (5) is in fact a max.

Corollary 3.7 (Linear stability). Zero is asymptotically stable for (4) if and only if maxℜΣp(A) < 0.

We conclude by showing that the computation of the characteristic values of A is enough to state the stability of the stationary solution Vf.

Corollary 3.8. If maxℜΣp(A) < 0, then the persistent solution Vf of (3) is asymptotically stable.

Proof. Using U = V - Vf, we write (3) as Inline graphic. The function G is C2 and satisfies G(0) = 0, DG(0) = 0 and Inline graphic. We next apply a variation-of-constants formula. In the case of delayed equations, this formula is difficult to handle because the semigroup T should act on non-continuous functions, as shown by the formula Inline graphic, where X0(θ) = 0 if θ < 0 and X0(0) = 1. Note that the function θ ↦ X0(θ)G(Us) is not continuous at θ = 0.

It is however possible (note that a regularity condition has to be verified but this is done easily in our case) to extend (see [34]) the semigroup T(t) to the space Inline graphic. We note Inline graphic this extension, which has the same spectrum as T(t). Indeed, we can consider integral solutions of (4) with initial condition U0 in Inline graphic. However, as L0U0(0) has no meaning because ϕ ↦ ϕ(0) is not continuous on Inline graphic, the linear problem (4) is not well-posed in this space. This is why we have to extend the state space in order to make the linear operator in (4) continuous. Hence the correct state space is Inline graphic and any function Inline graphic is represented by the vector (ϕ(0), ϕ). The variation of constant formula becomes:

graphic file with name 2190-8567-1-1-i103.gif

where π2 is the projector on the second component.

Now we choose ω = - maxℜΣp(A)/2 > 0 and the spectral mapping theorem implies that there exists M >0 such that Inline graphic and Inline graphic. It follows that Inline graphic and from theorem 2.2, Inline graphic, which yields Inline graphic and concludes the proof.    □

Finally, we can use the CVs to derive a sufficient stability result.

Proposition 3.9. If Inline graphicthen Vf is asymptotically stable for (3).

Proof. Suppose that a CV λ of positive real part exists, this gives a vector in the Kernel of Δ(λ). Using straightforward estimates, it implies that Inline graphic, a contradiction.    □

3.1.3 Generalization of the model

In the description of our model, we have pointed out a possible generalization. It concerns the linear response of the chemical synapses, i.e., the left-hand side Inline graphic of (1). It can be replaced by a polynomial in Inline graphic, namely Inline graphic, where the zeros of the polynomials Pi have negative real parts. Indeed, in this case, when J is small, the network is stable. We obtain a diagonal matrix Inline graphic such that P(0) = L0 and change the initial condition (as in the theory of ODEs), while the history space becomes Inline graphic where ds + 1 = maxi deg Pi. With all this in mind, equation (1) becomes

graphic file with name 2190-8567-1-1-i115.gif (6)

Introducing the classical variable Inline graphic, we rewrite (6) as

graphic file with name 2190-8567-1-1-i117.gif (7)

where Inline graphic is the Vandermonde matrix associated to P (we put a minus sign in order to have a formulation very close to (1)) and Inline graphic. It appears that equation (7) has the same structure as (1): Inline graphic are bounded linear operators; we can conclude that there is a unique solution to (6). The linearized equation around a persistent state yields a strongly continuous semigroup Inline graphic which is eventually norm continuous. Hence the stability is given by the sign of Inline graphic where Inline graphic is the infinitesimal generator of Inline graphic. It is then routine to show that

graphic file with name 2190-8567-1-1-i124.gif

This indicates that the essential spectrum Inline graphic of Inline graphic is equal to ∪i Root(Pi), which is located in the left half of the complex plane. Thus the point spectrum is enough to characterize the linear stability:

Proposition 3.10. If Inline graphic the persistent solution Vf of (6) is asymptotically stable.

Using the same proof as in [20], one can prove that Inline graphic provided that Inline graphic.

Proposition 3.11. If Inline graphicthen Vf is asymptotically stable.
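
As a concrete illustration of the rewriting leading from (6) to (7) (our own example, with one population, decay rate l and the alpha-function synaptic response P1(X) = (X + l)², none of which is imposed by the article), setting W = (d/dt + l)V turns

(d/dt + l)² V(r, t) = ∫Ω J(r, r̄) S[σ (V(r̄, t - τ(r, r̄)) - h)] dr̄ + Iext(r, t)

into the first-order system

d/dt V(r, t) = -l V(r, t) + W(r, t),
d/dt W(r, t) = -l W(r, t) + ∫Ω J(r, r̄) S[σ (V(r̄, t - τ(r, r̄)) - h)] dr̄ + Iext(r, t),

which has the same structure as (1)/(3); the root -l of P1, with multiplicity two, is what shows up in the essential spectrum.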

3.2 Principle of linear stability analysis via fixed point theory

The idea behind this method (see [37]) is to write (4) as an integral equation. This integral equation is then interpreted as a fixed point problem. We already know that this problem has a unique solution in Inline graphic. However, by looking at the definition of (Lyapunov) stability, we can express the stability as the existence of a solution of the fixed point problem in a smaller space Inline graphic. The existence of a solution in Inline graphic gives the unique solution in Inline graphic. Hence, the method is to provide conditions for the fixed point problem to have a solution in Inline graphic; in the two cases presented below, we use the Picard fixed point theorem to obtain these conditions. Usually this method gives conditions on the averaged quantities arising in (4), whereas a Lyapunov method would give conditions on the sign of the same quantities. Neither method is to be preferred; rather, both should be applied to obtain the best bounds.
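
Schematically (this is a generic statement of the Picard argument, not the precise operators constructed below), the linearized equation is recast as

(Pψ)(t) = (term determined by the initial condition ϕ) + ∫0t K(t, s) ψs ds,

and one works in the complete space C0 = {ψ continuous on ℝ+ with values in F, ψ(t) → 0 as t → ∞}, equipped with the supremum norm. If P maps C0 into itself and satisfies ‖Pψ1 - Pψ2‖∞ ≤ α ‖ψ1 - ψ2‖∞ with α < 1, its unique fixed point is the solution of (4) and it tends to zero, which is the asymptotic stability we are after.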

In order to be able to derive our bounds we make the further assumption that there exists a β >0 such that:

τ^(-β) ∈ L²(Ω², ℝp×p)

Note that the notation τ^(-β) represents the matrix of elements τij^(-β).

Remark 2. For example, in the 2D one-population case with τ(r, r̄) = c ‖r - r̄‖2, we have 0 ≤ β < 1.

We rewrite (4) in two different integral forms to which we apply the fixed point method. The first integral form is obtained by a straightforward use of the variation-of-parameters formula. It reads

U(t) = e^(-L0 t) ϕ(0) + ∫0t e^(-L0 (t-s)) L̃1 Us ds,  t ≥ 0    (8)

The second integral form is less obvious. Let us define

graphic file with name 2190-8567-1-1-i136.gif

Note the slight abuse of notation, namely Inline graphic.

Lemma B.3 in appendix B.2 yields the upper bound Inline graphic. This shows that ∀t, Inline graphic.

Hence we propose the second integral form:

graphic file with name 2190-8567-1-1-i140.gif (9)

We have the following lemma.

Lemma 3.12. The formulation (9) is equivalent to (4).

Proof. The idea is to write the linearized equation as:

graphic file with name 2190-8567-1-1-i141.gif (10)

By the variation-of-parameters formula we have:

graphic file with name 2190-8567-1-1-i142.gif

We then use an integration by parts:

graphic file with name 2190-8567-1-1-i143.gif

which allows us to conclude.    □

Using the two integral formulations of (4) we obtain sufficient conditions of stability, as stated in the following proposition:

Proposition 3.13. If one of the following two conditions is satisfied:

1. Inline graphic and there exist α < 1, β > 0 such that

graphic file with name 2190-8567-1-1-i145.gif

where Inline graphic represents the matrix of elements Inline graphic .

2. Inline graphic.

then Vf is asymptotically stable for (3).

Proof. We start with the first condition.

The problem (4) is equivalent to solving the fixed point equation U = P2U for an initial condition Inline graphic. Let us define Inline graphic with the supremum norm written Inline graphic, as well as

graphic file with name 2190-8567-1-1-i152.gif

We define P2 on Inline graphic.

For all Inline graphic we have Inline graphic and (P2ψ)(0) = ϕ(0). We want to show that Inline graphic. We prove two properties.

1. P2ψ tends to zero at infinity.

Choose Inline graphic.

Using corollary B.3, we have Z(t) → 0 as t → ∞.

Let 0 <T <t, we also have

graphic file with name 2190-8567-1-1-i157.gif

For the first term we write:

graphic file with name 2190-8567-1-1-i158.gif

Similarly, for the second term we write

graphic file with name 2190-8567-1-1-i159.gif

Now for a given ε > 0 we choose T large enough so that Inline graphic. For such a T we choose t* large enough so that Inline graphic for t >t*. Putting all this together, for all t >t*:

graphic file with name 2190-8567-1-1-i162.gif

From (9), it follows that P2ψ → 0 when t → ∞.

Since P2ψ is continuous and has a limit when t → ∞ it is bounded and therefore Inline graphic.

2. P2 is contracting on Inline graphic

Using (9) for all ψ1 , Inline graphic we have

graphic file with name 2190-8567-1-1-i165.gif

We conclude from the Picard fixed point theorem that the operator P2 has a unique fixed point in Inline graphic.

It remains to link this fixed point to the definition of stability and first show that

graphic file with name 2190-8567-1-1-i166.gif

where U(t, ϕ) is the solution of (4).

Let us choose ε > 0 and M ≥ 1 such that Inline graphic. M exists because, by hypothesis, Inline graphic. We then choose δ satisfying

graphic file with name 2190-8567-1-1-i168.gif (11)

and Inline graphic such that Inline graphic. Next define

graphic file with name 2190-8567-1-1-i170.gif

We already know that P2 is a contraction on Inline graphic (which is a complete space). The last thing to check is Inline graphic, that is Inline graphic. Using lemma B.3 in appendix B.2:

graphic file with name 2190-8567-1-1-i174.gif

Thus P2 has a unique fixed point Uϕ,ε in Inline graphic which is the solution of the linear delayed differential equation i.e.

graphic file with name 2190-8567-1-1-i176.gif

As Uϕ,ε (t) → 0 in Inline graphic implies Inline graphic in Inline graphic, we have proved the asymptotic stability for the linearized equation.

The proof of the second property is straightforward. If 0 is asymptotically stable for (4), all the CVs have negative real parts and corollary 3.8 indicates that Vf is asymptotically stable for (3).

The second condition says that Inline graphic is a contraction because Inline graphic.

The asymptotic stability follows using the same arguments as in the case of P2.

We next simplify the first condition of the previous proposition to make it more amenable to numerics.

Corollary 3.14. Suppose that ∀t ≥ 0, Inline graphic with ε > 0.

If there exist α < 1, β > 0 such that Inline graphic, then Vf is asymptotically stable.

Proof. This corollary follows immediately from the following upper bound of the integral Inline graphic. Then if there exist α < 1, β > 0 such that Inline graphic, condition 1 in proposition 3.13 is satisfied, from which the asymptotic stability of Vf follows.    □

Notice that ε > 0 is equivalent to Inline graphic. The previous corollary is useful in at least the following cases:

• If Inline graphic is diagonalizable, with associated eigenvalues/eigenvectors: Inline graphic, then Inline graphic and Inline graphic.

• If L0 = l0Id and the range of Inline graphic is finite dimensional: Inline graphic where (ek)k∈ℕ is an orthonormal basis of Inline graphic, then Inline graphic and Inline graphic. Let us write J = (Jkl)k,l = 1 ⋯ N the matrix associated to Inline graphic (see above). Then Inline graphic is also a compact operator with finite range and Inline graphic. Finally, it gives Inline graphic.

• If Inline graphic is self-adjoint, then it is diagonalizable and we can choose Inline graphic, Mε = 1.

Remark 3. If we suppose that we have higher order time derivatives as in section 3.1.3, we can write the linearized equation as

graphic file with name 2190-8567-1-1-i196.gif (12)

Suppose that Inline graphicis diagonalizable then Inline graphicwhere Inline graphicand Inline graphic. Also notice that Inline graphic. Then using the same functionals as in the proof of proposition 3.13, we can find two bounds for the stability of a stationary state Vf:

Suppose that Inline graphici.e. Vf is stable for the non-delayed equation where Inline graphic. If there exist α < 1, β > 0 such that Inline graphic.

Inline graphic.

To conclude, we have found an easy-to-compute formula for the stability of the persistent state Vf. It can indeed be cumbersome to compute the CVs of neural field equations for different parameters in order to find the region of stability whereas the evaluation of the conditions in Corollary 3.14 is very easy numerically.

The conditions in proposition 3.13 and corollary 3.14 define a set of parameters for which Vf is stable. Notice that these conditions are only sufficient conditions: if they are violated, Vf may still remain stable. In order to find out whether the persistent state is destabilized we have to look at the characteristic values. Condition 1 in proposition 3.13 indicates that if Vf is a stable point for the non-delayed equation (see [18]) it is also stable for the delayed equation. Thus, according to this condition, it is not possible to destabilize a stable persistent state by the introduction of small delays, which is indeed meaningful from the biological viewpoint. Moreover this condition gives an indication of the amount of delay one can introduce without changing the stability.

Condition 2 is not very useful as it is independent of the delays: no matter what they are, the stable point Vf will remain stable. Also, if this condition is satisfied there is a unique stationary solution (see [18]) and the dynamics is trivial, i.e. it converges to the unique stationary point.

3.3 Summary of the different bounds and conclusion

The next proposition summarizes the results we have obtained in proposition 3.13 and corollary 3.14 for the stability of a stationary solution.

Proposition 3.15. If one of the following conditions is satisfied:

1. There exists ε > 0 such that Inline graphic and α < 1, β > 0 such that Inline graphic,

2. Inline graphic

then Vf is asymptotically stable for (3).

The only general results known so far for the stability of the stationary solutions are those of Atay and Hutt (see for example [20]): they found a bound similar to condition 2 in proposition 3.15 by using the CVs, but no proof of stability was given. Their condition involves the L1-norm of the connectivity function J and it was derived using the CVs in the same way as we did in the previous section. Thus our contribution with respect to condition 2 is that, once it is satisfied, the stationary solution is asymptotically stable: up until now this was numerically inferred on the basis of the CVs. We have proved it in two ways, first by using the CVs, and second by using the fixed point method which has the advantage of making the proof essentially trivial.

Condition 1 is of interest, because it allows one to find the minimal propagation speed that does not destabilize the stationary state. Notice that this bound, though very easy to compute, overestimates the minimal speed. As mentioned above, the bounds in condition 1 are sufficient conditions for the stability of the stationary state Vf. In order to evaluate the conservativeness of these bounds, we need to compare them to the stability predicted by the CVs. This is done in the next section.

4 Numerical application: Neural fields on a ring

In order to evaluate the conservativeness of the bounds derived above we compute the CVs in a numerical example. This can be done in two ways:

• Solve numerically the nonlinear equation satisfied by the CVs. This is possible when one has an explicit expression for the eigenvectors and periodic boundary conditions. It is the method used in [9].

• Discretize the history space Inline graphic in order to obtain a matrix version AN of the linear operator A: the CVs are approximated by the eigenvalues of AN. Following the scheme of [36], it can be shown that the convergence of the eigenvalues of AN to the CVs is in Inline graphic for a suitable discretization of Inline graphic. One drawback of this method is the size of AN, which can be very large in the case of several neuron populations in a two-dimensional cortex. A recent improvement (see [38]), based on a clever factorization of AN, allows a much faster computation of the CVs: this is the scheme we have been using. A minimal scalar illustration of this discretization idea is sketched right after this list.
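
To give an idea of how such a discretization works, here is a minimal sketch for the simplest possible case, a scalar delay differential equation with a single constant delay (the scheme, the choice of Chebyshev nodes and the test equation are our own illustrative choices and are much simpler than the operator A of the article).

import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix and nodes on [-1, 1]
    # (Trefethen, "Spectral Methods in MATLAB", program cheb.m).
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def dde_char_values(a, b, tau, N=40):
    # Approximate the characteristic values of u'(t) = a*u(t) + b*u(t - tau)
    # by discretizing the infinitesimal generator on Chebyshev nodes mapped to [-tau, 0].
    D, x = cheb(N)
    A = (2.0 / tau) * D                 # d/dtheta on the mapped interval [-tau, 0]
    A[0, :] = 0.0                       # node theta = 0: impose the splicing condition
    A[0, 0] = a                         # u'(0) = a*u(0) + b*u(-tau)
    A[0, N] = b
    return np.linalg.eigvals(A)

# Example: u'(t) = -u(t - 1); the rightmost CVs form a complex pair with negative real part.
lam = dde_char_values(a=0.0, b=-1.0, tau=1.0)
print(sorted(lam, key=lambda z: -z.real)[:4])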

The Matlab program used to compute the righthand side of (1) uses C++ code that can be run on multiple processors (with the OpenMP library) to speed up computations. It uses a trapezoidal rule to compute the integral. The time stepper dde23 of Matlab is also used.

In order to make the computation of the eigenvectors very straightforward, we study a network on a ring, but notice that all the tools (analytical/numerical) presented here also apply to a generic cortex. We reduce our study to scalar neural fields Ω ⊂ ℝ and one neuronal population, p = 1. With this in mind the connectivity is chosen to be homogeneous J(x, y) = J(x - y) with J even. To respect topology, we assume the same for the propagation delay function τ(x, y).

We therefore consider the scalar equation with axonal delays defined on Inline graphic with periodic boundary conditions. Hence Inline graphic and J is also π-periodic.

graphic file with name 2190-8567-1-1-i211.gif (13)

where the sigmoid S0 satisfies S0(0) = 0.

Remember that (13) has a Lyapunov functional when c = 0 and that all trajectories are bounded. The trajectories of the non-delayed form of (13) are heteroclinic orbits and no non-constant periodic orbit is possible.
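
For readers who want to reproduce trajectories like those of Figure 3, the following is a minimal fixed-step sketch of a direct simulation of (13). It assumes the explicit form ∂tV(x, t) = -V(x, t) + ∫Ω J(x - y) S0[σ V(y, t - c |x - y|π)] dy with S0 = tanh, Ω = (-π/2, π/2) and the connectivity of Figure 1; the time stepper, the step sizes and all numerical values are illustrative choices and differ from the dde23-based code described above.

import numpy as np

N, L  = 128, np.pi                      # grid points and ring period; Omega = (-pi/2, pi/2)
x     = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx    = L / N
sigma, c  = 2.0, 1.0                    # nonlinear gain and inverse propagation speed
dt, T_end = 0.01, 30.0

Jker  = -1.0 + 1.5 * np.cos(2.0 * (x[:, None] - x[None, :]))   # J(x - y), Figure 1
S0    = np.tanh                                                # odd sigmoid, S0(0) = 0

d     = np.abs(x[:, None] - x[None, :])
d     = np.minimum(d, L - d)            # pi-periodic distance |x - y|_pi
lag   = np.round(c * d / dt).astype(int)                       # delays in time steps
cols  = np.arange(N)[None, :]

rng   = np.random.default_rng(0)
V0    = 0.1 * rng.standard_normal(N)    # random in space, constant in time on [-tau_m, 0]
hist  = np.tile(V0, (lag.max() + 1, 1)) # hist[k, j] = V(x_j, t - k*dt)

for step in range(int(T_end / dt)):
    Vd    = hist[lag, cols]             # Vd[i, j] = V(x_j, t - c*|x_i - x_j|_pi)
    drive = dx * np.sum(Jker * S0(sigma * Vd), axis=1)   # rectangle rule on the ring
    Vnew  = hist[0] + dt * (-hist[0] + drive)            # explicit Euler step
    hist  = np.roll(hist, 1, axis=0)
    hist[0] = Vnew

print("max |V| at t =", T_end, ":", np.abs(hist[0]).max())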

We are looking at the local dynamics near the trivial solution V f = 0. Thus we study how the CVs vary as functions of the nonlinear gain σ and the "maximum" delay c. From the periodicity assumption, the eigenvectors of Δ(λ) are the functions cos(nx), sin(nx) which leads to the characteristic equation for the eigenvalues λ:

graphic file with name 2190-8567-1-1-i212.gif (14)

where Inline graphic is the Fourier transform of J and Inline graphic. This nonlinear scalar equation is solved with the Matlab toolbox TraceDDE (see [36]). Recall that the eigenvectors of A are given by the functions θ ↦ e^(λθ) cos(nx), θ ↦ e^(λθ) sin(nx) ∈ Inline graphic where λ is a solution of (14). A bifurcation point is a pair (c, σ) for which equation (14) has a solution with zero real part. Bifurcations are important because they signal a change in stability: a set of parameters ensuring stability is enclosed (if bounded) by bifurcation curves. Notice that if σ0 is a bifurcation point in the case c = 0, it remains a bifurcation point for the delayed system ∀c, hence ∀c, σ = σ0, 0 ∈ Σ(A). This is why there is a bifurcation line σ = σ0 in the bifurcation diagrams that are shown later.
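
Since (14) is not reproduced above, the following sketch assumes that, for the mode cos(nx) and with L0 = Id, S0′(0) = 1 and Vf = 0, it takes the form λ + 1 = σ ∫Ω J(y) e^(-λ c |y|π) cos(ny) dy on Ω = (-π/2, π/2); the exact normalization used in the article may differ, so this is only an illustration of how such a characteristic equation can be solved numerically.

import numpy as np
from scipy.optimize import fsolve

y    = np.linspace(-np.pi / 2, np.pi / 2, 2001)
Jy   = -1.0 + 1.5 * np.cos(2.0 * y)                 # connectivity of Figure 1
dist = np.minimum(np.abs(y), np.pi - np.abs(y))     # pi-periodic distance |y|_pi

def char_fun(lam, n, sigma, c):
    # Assumed characteristic function F(lambda) = lambda + 1 - sigma * J_hat_n(lambda)
    integ = np.trapz(Jy * np.exp(-lam * c * dist) * np.cos(n * y), y)
    return lam + 1.0 - sigma * integ

def solve_cv(lam0, n, sigma, c):
    # Find a characteristic value near the complex initial guess lam0.
    f = lambda z: [char_fun(z[0] + 1j * z[1], n, sigma, c).real,
                   char_fun(z[0] + 1j * z[1], n, sigma, c).imag]
    re, im = fsolve(f, [lam0.real, lam0.imag])
    return re + 1j * im

# Track one mode while increasing the gain; destabilization shows up as Re(lambda) > 0.
for sigma in (0.5, 1.0, 2.0):
    print(sigma, solve_cv(0.1 + 1.0j, n=2, sigma=sigma, c=1.0))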

The bifurcation diagram depends on the choice of the delay function τ. As explained in section 2.1, we use τ(x, y) = |x - y|π, where the lower index π indicates that it is a π-periodic function. The bifurcation diagram with respect to the parameters (c, σ) is shown in the righthand part of Figure 2 in the case when the connectivity J is equal to Inline graphic. The two bounds derived in section 3.3 are also shown. Note that the delay-dependent bound is computed using the fact that Inline graphic is self-adjoint. They are clearly very conservative. The lefthand part of the same figure shows the delay function τ.

Figure 2. Left: example of a periodic delay function, the saw-function. Right: plot of the CVs in the plane (c, σ); the line labelled P is the pitchfork line, the line labelled H is the Hopf curve. The two bounds of proposition 3.15 are also shown. Parameters are: L0 = Id, Inline graphic. The labels 1, 2, 3 indicate approximate positions in the parameter space (c, σ) at which the trajectories shown in Figure 3 are computed.

The first bound gives the minimal velocity 1/c below which the stationary state might be unstable; in this case, even for smaller speeds, the state is stable, as one can see from the CV boundary. Notice that in the parameter domain defined by the two conditions (bounds 1 and 2), the dynamics is very simple: it is characterized by a unique and asymptotically stable stationary state, Vf = 0.

In Figure 3 we show the dynamics for different parameters corresponding to the points labelled 1, 2 and 3 in the righthand part of Figure 2, for a random (in space) and constant (in time) initial condition ϕ (see (1)). When the parameter values are below the bound computed with the CVs, the dynamics converge to the stable stationary state Vf = 0. Along the Pitchfork line labelled (P) in the righthand part of Figure 2, there is a static bifurcation leading to the birth of new stable stationary states; this is shown in the middle part of Figure 3. The Hopf curve labelled (H) in the righthand part of Figure 2 indicates the transition to oscillatory behaviors, as one can see in the righthand part of Figure 3. Note that we have not proved that the Hopf curve is indeed a Hopf bifurcation curve; we have just inferred numerically from the CVs that the eigenvalues satisfy the usual conditions for the Hopf bifurcation.

Figure 3. Plot of the solution of (13) for different parameters corresponding to the points shown as 1, 2 and 3 in the righthand part of Figure 2, for a random (in space) and constant (in time) initial condition, see text. The horizontal axis corresponds to space, the range is Inline graphic. The vertical axis represents time.

Notice that the graph of the CVs shown in the righthand part of Figure 2 features some interesting points, for example the Fold-Hopf point at the intersection of the Pitchfork line and the Hopf curve. It is also possible that the multiplicity of the 0 eigenvalue could change on the Pitchfork line (P) to yield a Bogdanov-Takens point.

These numerical simulations reveal that the Lyapunov function derived in [39] is likely to be incorrect. Indeed, if such a function existed, as its value decreases along trajectories, it must be constant on any periodic orbit which is not possible. However the third plot in Figure 3 strongly suggests that we have found an oscillatory trajectory produced by a Hopf bifurcation (which we did not prove mathematically): this oscillatory trajectory converges to a periodic orbit which contradicts the existence of a Lyapunov functional such as the one proposed in [39].

Let us comment on the tightness of the delay-dependent bound: as shown in proposition 3.13, this bound involves the maximum delay value τm and the norm Inline graphic, hence the specific shape of the delay function, i.e. Inline graphic, is not completely taken into account in the bound. We can imagine many different delay functions with the same values for τm and Inline graphic that will cause possibly large changes to the dynamical portrait. For example in the previous numerical application the singularity σ = σ0, corresponding to the fact that 0 ∈ Σp(A), is independent of the details of the shape of the delay function: however for specific delay functions, the multiplicity of this 0-eigenvalue could change as in the Bogdanov-Takens bifurcation which involves changes in the dynamical portrait compared to the pitchfork bifurcation. Similarly, an additional purely imaginary eigenvalue could emerge (as for c ≈ 3.7 in the numerical example) leading to a Fold-Hopf bifurcation. These instabilities depend on the expression of the delay function (and the connectivity function as well). These reasons explain why the bound in proposition 3.13 is not very tight.

This suggests another way to attack the problem of the stability of fixed points: one could look for connectivity functions Inline graphic which have the following property: for all delay functions τ, the linearized equation (4) does not possess 'unstable solutions', i.e. for all delay functions τ, ℜΣp(A) < 0. In the literature (see [40,41]), this is termed all-delay stability or delay-independent stability. These remain questions for future work.

5 Conclusion

We have developed a theoretical framework for the study of neural field equations with propagation delays. This has allowed us to prove the existence, uniqueness, and the boundedness of the solutions to these equations under fairly general hypotheses.

We have then studied the stability of the stationary solutions of these equations. We have proved that the CVs are sufficient to characterize the linear stability of the stationary states. This was done using the theory of semigroups (see [27]).

By formulating the stability of the stationary solutions as a fixed point problem we have found delay-dependent sufficient conditions. These conditions involve all the parameters in the delayed neural field equations: the connectivity function, the nonlinear gain and the delay function. Albeit seemingly very conservative, they are useful to avoid the numerically intensive computation of the CVs.

From the numerical viewpoint we have used two algorithms [36,38] to compute the eigenvalues of the linearized problem in order to evaluate the conservativeness of our conditions. A potential application is the study of the bifurcations of the delayed neural field equations.

By providing easy-to-compute sufficient conditions to quantify the impact of the delays on neural field equations, we hope that our work will improve the study of models of cortical areas in which the propagation delays have so far been somewhat neglected due to a partial lack of theory.

A Operators and their spectra

We recall and gather in this appendix a number of definitions, results and hypotheses that are used in the body of the article to make it more self-sufficient.

Definition A.1. An operator T ∈ L(E, F), E, F being Banach spaces, is closed if its graph is closed in the direct sum E ⊕ F.

Definition A.2. We note Inline graphicthe operator norm of a bounded operator Inline graphic, i.e.

graphic file with name 2190-8567-1-1-i223.gif

It is known, see e.g. [35], that

graphic file with name 2190-8567-1-1-i224.gif

Definition A.3. A semigroup (T(t))t≥0 on a Banach space E is strongly continuous if ∀x ∈ E, t ↦ T(t)x is continuous from ℝ+ to E.

Definition A.4. A semigroup (T(t))t≥0 on a Banach space E is norm continuous if t ↦ T(t) is continuous from ℝ+ to L(E). It is said to be eventually norm continuous if it is norm continuous for t > t0 ≥ 0.

Definition A.5. A closed operator Inline graphicof a Banach space E is Fredholm if Inline graphicand Inline graphicare finite and Inline graphicis closed in E.

Definition A.6. A closed operator Inline graphicof a Banach space E is semi-Fredholm if Inline graphic or Inline graphicis finite and Inline graphicis closed in E.

Definition A.7. If Inline graphic is a closed operator of a Banach space E, the essential spectrum Σess(T) is the set of λs in ℂ such that λId - T is not semi-Fredholm, i.e. either Inline graphic is not closed or Inline graphic is closed but Inline graphic.

Remark 4. [28] uses the definition: λ ∈ Σess,Arino(T) if at least one of the following holds: Inline graphic is not closed or Inline graphic is infinite dimensional or λ is a limit point of Σ(T). Then

graphic file with name 2190-8567-1-1-i233.gif

B The Cauchy problem

B.1 Boundedness of solutions

We prove lemma B.2 which is used in the proof of the boundedness of the solutions to the delayed neural field equations (1) or (3).

Lemma B.1. We have Inline graphicand Inline graphic.

Proof. • We first check that L1 is well defined: if Inline graphic then ψ is measurable (it is Ω-measurable by definition and [0, τ]-measurable by continuity) on Ω × [-τm, 0] so that the integral in the definition of L1 is meaningful. As τ is continuous, it follows that Inline graphic is measurable on Ω2. Furthermore (ψd)2 ∈ L¹(Ω², ℝp×p).

• We now show that Inline graphic. We have for Inline graphic, Inline graphic.

With the Cauchy-Schwarz inequality:

graphic file with name 2190-8567-1-1-i239.gif (15)

Noting that

Inline graphic we obtain

graphic file with name 2190-8567-1-1-i241.gif

and L1 is continuous.    □

Lemma B.2. We have Inline graphic.

Proof. By the Cauchy-Schwarz inequality and lemma B.1:

Inline graphic because S is bounded by 1.    □

B.2 Stability

In this section we prove lemma B.3 which is central in establishing the first sufficient condition in proposition 3.13.

Lemma B.3. Let β > 0 be such that τ^(-β) ∈ L²(Ω², ℝp×p). Then we have the following bound:

graphic file with name 2190-8567-1-1-i244.gif

Proof. We have:

graphic file with name 2190-8567-1-1-i245.gif (16)

and if we set Inline graphic, we have:

Inline graphic and from the Cauchy-Schwarz inequality:

graphic file with name 2190-8567-1-1-i248.gif

Again, from the Cauchy-Schwarz inequality applied to Inline graphic:

graphic file with name 2190-8567-1-1-i250.gif (17)

Then, from the discrete Cauchy-Schwarz inequality:

graphic file with name 2190-8567-1-1-i251.gif (18)

which gives as stated:

graphic file with name 2190-8567-1-1-i252.gif

and allows us to conclude.    □

Competing interests

The authors declare that they have no competing interests.


Acknowledgements

We wish to thank Elias Jarlebring, who provided his program for computing the CVs.

This work was partially supported by the ERC grant 227747 - NERVI and the EC IP project #015879-FACETS.

References

  1. Wilson H, Cowan J. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biological Cybernetics. 1973;13(2):55–80. doi: 10.1007/BF00288786.
  2. Amari SI. Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics. 1977;27(2):77–87. doi: 10.1007/BF00337259.
  3. Curtu R, Ermentrout B. Pattern Formation in a Network of Excitatory and Inhibitory Cells with Adaptation. SIAM Journal on Applied Dynamical Systems. 2004;3:191. doi: 10.1137/030600503.
  4. Kilpatrick Z, Bressloff P. Effects of synaptic depression and adaptation on spatiotemporal dynamics of an excitatory neuronal network. Physica D: Nonlinear Phenomena. 2010;239(9):547–560. doi: 10.1016/j.physd.2009.06.003.
  5. Ben-Yishai R, Bar-Or R, Sompolinsky H. Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences. 1995;92(9):3844–3848. doi: 10.1073/pnas.92.9.3844.
  6. Bressloff P, Cowan J, Golubitsky M, Thomas P, Wiener M. Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Phil Trans R Soc Lond B. 2001;306(1407):299–330. doi: 10.1098/rstb.2000.0769.
  7. Coombes S, Laing C. Delays in activity based neural networks. Philosophical Transactions of the Royal Society A. 2009;367:1117–1129. doi: 10.1098/rsta.2008.0256.
  8. Roxin A, Brunel N, Hansel D. Role of Delays in Shaping Spatiotemporal Dynamics of Neuronal Activity in Large Networks. Physical Review Letters. 2005;94(23):238103. doi: 10.1103/PhysRevLett.94.238103.
  9. Venkov N, Coombes S, Matthews P. Dynamic instabilities in scalar neural field equations with space-dependent delays. Physica D: Nonlinear Phenomena. 2007;232:1–15. doi: 10.1016/j.physd.2007.04.011.
  10. Jirsa V, Kelso J. Spatiotemporal pattern formation in neural systems with heterogeneous connection topologies. Physical Review E. 2000;62(6):8462–8465. doi: 10.1103/PhysRevE.62.8462.
  11. Wu J. Symmetric functional differential equations and neural networks with memory. Transactions of the American Mathematical Society. 1998;350(12):4799–4838. doi: 10.1090/S0002-9947-98-02083-2.
  12. Bélair J, Campbell S, Van Den Driessche P. Frustration, stability, and delay-induced oscillations in a neural network model. SIAM Journal on Applied Mathematics. 1996. pp. 245–255.
  13. Bélair J, Campbell S. Stability and bifurcations of equilibria in a multiple-delayed differential equation. SIAM Journal on Applied Mathematics. 1994. pp. 1402–1424.
  14. Campbell S, Ruan S, Wolkowicz G, Wu J. Stability and bifurcation of a simple neural network with multiple time delays. Differential Equations with Application to Biology. 1999. pp. 65–79.
  15. Atay FM, Hutt A. Neural fields with distributed transmission speeds and long-range feedback delays. SIAM Journal of Applied Dynamical Systems. 2006;5(4):670–698. doi: 10.1137/050629367.
  16. Budd J, Kovács K, Ferecskó A, Buzás P, Eysel U, Kisvárday Z. Neocortical Axon Arbors Trade-off Material and Conduction Delay Conservation. PLoS Comput Biol. 2010;6(3):e1000711. doi: 10.1371/journal.pcbi.1000711.
  17. Faugeras O, Grimbert F, Slotine JJ. Absolute stability and complete synchronization in a class of neural fields models. SIAM Journal of Applied Mathematics. 2008;61:205–250.
  18. Faugeras O, Veltz R, Grimbert F. Persistent neural states: stationary localized activity patterns in nonlinear continuous n-population, q-dimensional neural networks. Neural Computation. 2009;21:147–187. doi: 10.1162/neco.2009.12-07-660.
  19. Veltz R, Faugeras O. Local/Global Analysis of the Stationary Solutions of Some Neural Field Equations. SIAM Journal on Applied Dynamical Systems. 2010;9(3):954–998. doi: 10.1137/090773611.
  20. Atay FM, Hutt A. Stability and bifurcations in neural fields with finite propagation speed and general connectivity. SIAM Journal on Applied Mathematics. 2005;65(2):644–666.
  21. Hutt A. Local excitation-lateral inhibition interaction yields oscillatory instabilities in nonlocally interacting systems involving finite propagation delays. Physics Letters A. 2008;372:541–546. doi: 10.1016/j.physleta.2007.08.018.
  22. Hutt A, Atay F. Effects of distributed transmission speeds on propagating activity in neural populations. Physical Review E. 2006;73(021906):1–5. doi: 10.1103/PhysRevE.73.021906.
  23. Coombes S, Venkov N, Shiau L, Bojak I, Liley D, Laing C. Modeling electrocortical activity through local approximations of integral neural field equations. Physical Review E. 2007;76(5):51901. doi: 10.1103/PhysRevE.76.051901.
  24. Bressloff P, Kilpatrick Z. Nonlocal Ginzburg-Landau equation for cortical pattern formation. Physical Review E. 2008;78(4):41916:1–16. doi: 10.1103/PhysRevE.78.041916.
  25. Faye G, Faugeras O. Some theoretical and numerical results for delayed neural field equations. Physica D. 2010;239(9):561–578. doi: 10.1016/j.physd.2010.01.010. [Special issue on Mathematical Neuroscience.]
  26. Ermentrout G, Cowan J. Large scale spatially organized activity in neural nets. SIAM Journal on Applied Mathematics. 1980. pp. 1–21.
  27. Engel K, Nagel R. One-parameter semigroups for linear evolution equations. Vol. 63. Springer; 2001.
  28. Arino O, Hbid M, Dads E. Delay differential equations and applications. Springer; 2006.
  29. Hale J, Lunel S. Introduction to functional differential equations. Springer Verlag; 1993.
  30. Wu J. Theory and applications of partial functional differential equations. Springer; 1996.
  31. Diekmann O. Delay equations: functional-, complex-, and nonlinear analysis. Springer; 1995.
  32. Yosida K. Functional analysis. Reprint of the sixth (1980) edition. Classics in Mathematics. 1995.
  33. Hutt A. Finite Propagation Speeds in Spatially Extended Systems. Complex Time-Delay Systems: Theory and Applications. 2009. p. 151.
  34. Bátkai A, Piazzera S. Semigroups for delay equations. AK Peters, Ltd; 2005.
  35. Kato T. Perturbation Theory for Linear Operators. Springer; 1995.
  36. Breda D, Maset S, Vermiglio R. TRACE-DDE: a Tool for Robust Analysis and Characteristic Equations for Delay Differential Equations. Topics in Time Delay Systems. 2009. pp. 145–155.
  37. Burton T. Stability by Fixed Point Theory for Functional Differential Equations. Dover Publications, Mineola, NY; 2006.
  38. Jarlebring E, Meerbergen K, Michiels W. An Arnoldi like method for the delay eigenvalue problem. 2010.
  39. Enculescu M, Bestehorn M. Liapunov functional for a delayed integro-differential equation model of a neural field. EPL (Europhysics Letters). 2007;77:68007. doi: 10.1209/0295-5075/77/68007.
  40. Chen J, Latchman H. Asymptotic stability independent of delays: simple necessary and sufficient conditions. Proceedings of the American Control Conference. 1994.
  41. Chen J, Xu D, Shafai B. On sufficient conditions for stability independent of delay. IEEE Transactions on Automatic Control. 1995;40(9):1675–1680. doi: 10.1109/9.412644.
