2014 Jul 29;71(2):259–300. doi: 10.1007/s00285-014-0807-6

Stochastic neural field equations: a rigorous footing

O. Faugeras, J. Inglis
PMCID: PMC4496531  PMID: 25069787

Abstract

We here consider a stochastic version of the classical neural field equation that is currently actively studied in the mathematical neuroscience community. Our goal is to present a well-known rigorous probabilistic framework in which to study these equations in a way that is accessible to practitioners currently working in the area, and thus to bridge some of the cultural/scientific gaps between probability theory and mathematical biology. In this way, the paper is intended to act as a reference that collects together relevant rigorous results about notions of solutions and well-posedness, which although may be straightforward to experts from SPDEs, are largely unknown in the neuroscientific community, and difficult to find in a very large body of literature. Moreover, in the course of our study we provide some new specific conditions on the parameters appearing in the equation (in particular on the neural field kernel) that guarantee the existence of a solution.

Keywords: Stochastic neural field equations, Spatially correlated noise, Multiplicative noise, Stochastic integro-differential equation, Existence and uniqueness

Introduction

Neural field equations have been widely used to study spatiotemporal dynamics of cortical regions. Arising as continuous spatial limits of discrete models, they provide a step towards an understanding of the relationship between the macroscopic spatially structured activity of densely populated regions of the brain, and the underlying microscopic neural circuitry. The discrete models themselves describe the activity of a large number of individual neurons with no spatial dimensions. Such neural mass models have been proposed by Lopes da Silva et al. (1974, 1976) to account for oscillatory phenomena observed in the brain, and were later put on a stronger mathematical footing in the study of epileptic-like seizures in Jansen and Rit (1995). When taking the spatial limit of such discrete models, one typically arrives at a nonlinear integro-differential equation, in which the integral term can be seen as a nonlocal interaction term describing the spatial distribution of synapses in a cortical region. Neural field models build on the original work of Wilson and Cowan (1972, 1973) and Amari (1977), and are known to exhibit a rich variety of phenomena including stationary states, traveling wave fronts, pulses and spiral waves. For a comprehensive review of neural field equations, including a description of their derivation, we refer to Bressloff (2012).

More recently several authors have become interested in stochastic versions of neural field equations (see for example Bressloff 2009, 2010; Bressloff and Webber 2012; Bressloff and Wilkerson 2012; Kilpatrick and Ermentrout 2013), in order to (amongst other things) model the effects of fluctuations on wave front propagation. In particular, in Bressloff and Webber (2012) a multiplicative stochastic term is added to the neural field equation, resulting in a stochastic nonlinear integro-differential equation of the form

dY(t,x) = \Big[ -Y(t,x) + \int_{\mathbb{R}} w(x,y)\, G(Y(t,y))\, dy \Big] dt + \sigma(Y(t,x))\, dW(t,x), \qquad (1.1)

for x ∈ ℝ, t ≥ 0, and some functions G (referred to as the nonlinear gain function), σ (the diffusion coefficient) and w (the neural field kernel, sometimes also called the connectivity function). Here (W(t,x))_{x∈ℝ, t≥0} is a stochastic process (notionally a “Gaussian random noise”) that depends on both space and time, and which may possess some spatial correlation.

Of course the first step towards understanding (1.1) rigorously is defining what we mean by a solution. This is in fact not completely trivial and is somewhat glossed-over in the neuroscientific literature. The main point is that any solution must involve an object of the form

\sigma(Y(t,x))\, dW(t,x) \qquad (1.2)

which must be precisely defined. Of course, in the case where there is no spatial dimension, the theory of such stochastic integrals is widely disseminated, but for integrals with respect to space-time white noise (for example) it is far less well-known. It is for this reason that we believe it to be extremely worthwhile making a detailed review of how to give sense to these objects, and moreover to solutions to (1.1) when they exist, in a way that is accessible to practitioners. Although such results are quite well-known in probability theory, the body of literature is very large and generalistic, posing a daunting prospect for a mathematical neuroscientist looking to apply a specific result. The fact that the equation fits into well-studied frameworks also opens up opportunities to apply existing abstract results (for example large deviation principles—see Remark 2.3).

There are in fact two distinct approaches to defining and interpreting the quantity (1.2), both of which allow one to build up a theory of stochastic partial differential equations (SPDEs). Although (1.1) does not strictly classify as a SPDE (since there is no derivative with respect to the spatial variable), both approaches provide a rigorous underlying theory upon which to base a study of such equations.

The first approach generalizes the theory of stochastic processes in order to give sense to solutions of SPDEs as random processes that take their values in a Hilbert space of functions [as presented by Da Prato and Zabczyk in (1992) and more recently by Prévôt and Röckner in (2007)]. With this approach, the quantity (1.2) is interpreted as a Hilbert space-valued integral i.e. “B(Y(t))dW(t)”, where (Y(t))t0 and (W(t))t0 take their values in a Hilbert space of functions, and B(Y(t)) is an operator between Hilbert spaces (depending on σ). The second approach is that of Walsh [as described in Walsh (1986)], which, in contrast, takes as its starting point a PDE with a random and highly irregular “white-noise” term. This approach develops integration theory with respect to a class of random measures, so that (1.2) can be interpreted as a random field in both t and x.

In the theory of SPDEs, there are advantages and disadvantages of taking both approaches. This is also the case with regards to the stochastic neural field Eq. (1.1), as described in the conclusion below (Sect. 5), and it is for this reason that we here review both approaches. Taking the functional approach of Da Prato and Zabczyk is perhaps more straightforward for those with knowledge of stochastic processes, and the existing general results can be applied more directly in order to obtain, for example, existence and uniqueness. This was the path taken in Kuehn and Riedler (2014) where the emphasis was on large deviations, though in a much less general setup than we consider here (see Remark 2.3). However, it can certainly be argued that solutions constructed in this way may be “non-physical”, since the functional theory tends to ignore any spatial regularity properties (solutions are typically L2-valued in the spatial direction). We argue that the approach of Walsh is more suited to looking for “physical” solutions that are at least continuous in the spatial dimension. A comparison of the two approaches in a general setting is presented in Dalang and Quer-Sardanyons (2011) or Jetschke (1982, 1986), and in our setting in Sect. 4 below. Our main conclusion is that in typical cases of interest for practitioners, the approaches are equivalent (see Example 4.2), but one or the other may be more suited to a particular need.

To reiterate, the main aim of this article is to present a review of an existing theory, which is accessible to readers unfamiliar with stochastic partial differential equations, that puts the study of stochastic neural field equations on a rigorous mathematical footing. As a by product we will be able to give general conditions on the functions G, σ and w that, as far as we know, do not appear anywhere else in the literature and guarantee the existence of a solution to (1.1) in some sense. Moreover, these conditions are weak enough to be satisfied for all typical choices of functions made by practitioners (see Sects. 2.6, 2.7 and 2.8). By collecting all these results in a single place, we hope this will provide a reference for practitioners in future works.

The layout of the article is as follows. We first present in Sect. 2 the necessary material in order to consider the stochastic neural field Eq. (1.1) as an evolution equation in a Hilbert space. This involves introducing the notion of a Q-Wiener process taking values in a Hilbert space and stochastic integration with respect to Q-Wiener processes. A general existence result from Prato and Zabczyk (1992) is then applied in Sect. 2.5 to yield a unique solution to (1.1) interpreted as a Hilbert space valued process. The second part of the paper switches track, and describes Walsh’s theory of stochastic integration (Sect. 3.1), with a view of giving sense to a solution to (1.1) as a random field in both time and space. To avoid dealing with distribution-valued solutions, we in fact consider a Gaussian noise that is smoothed in the spatial direction (Sect. 3.2), and show that, under some weak conditions, the neural field equation driven by such a smoothed noise has a unique solution in the sense of Walsh that is continuous in both time and space (Sect. 3.3). We finish with a comparison of the two approaches in Sect. 4, and summarize our findings in a conclusion (Sect. 5).

Notation: Throughout the article (Ω, F, P) will be a probability space, and L^2(Ω, F, P) will be the space of square-integrable random variables on (Ω, F, P). We will use the standard notation B(T) to denote the Borel σ-algebra on T for any topological space T. The Lebesgue space of p-integrable (with respect to the Lebesgue measure) functions over ℝ^N for N ∈ ℕ = {1, 2, …} will be denoted by L^p(ℝ^N), p ≥ 1, as usual, while L^p(ℝ^N, ρ), p ≥ 1, will be the Lebesgue space weighted by a measurable function ρ: ℝ^N → ℝ^+.

Stochastic neural field equations as evolution equations in Hilbert spaces

As stated in the introduction, the goal of this section is to provide the theory and conditions needed to interpret the solution to (1.1) as a process (Y(t))_{t≥0} that takes its values in a Hilbert space of functions, i.e. for each t ≥ 0, Y(t) is a function of the spatial variable x. This is in order to try and cast the problem into the well-known theoretical framework of stochastic evolution equations in Hilbert spaces, as detailed in Da Prato and Zabczyk (1992). In particular we will look for solutions to

dY(t) = \big[ -Y(t) + F(Y(t)) \big] dt + \text{“} B(Y(t))\, dW(t) \text{”}, \quad t \ge 0, \qquad (2.1)

such that Y(t) ∈ L^2(ℝ^N, ρ) for some measurable ρ: ℝ^N → ℝ^+ (to be determined), where F is now an operator on L^2(ℝ^N, ρ) given by

F(Y(t))(x) = \int_{\mathbb{R}^N} w(x,y)\, G(Y(t,y))\, dy, \quad x \in \mathbb{R}^N.

Here w: ℝ^N × ℝ^N → ℝ is the neural field kernel, and G: ℝ → ℝ is the nonlinear gain function. Note that we have made a slight generalization here in comparison with (1.1) in that we in fact work on ℝ^N, rather than ℝ. The term B(Y(t)) dW(t) represents a stochastic differential term that must be made sense of as a differential in the Hilbert space L^2(ℝ^N, ρ). This is done with the help of Sects. 2.1 and 2.2 below.

Notation: In this section we will also need the following basic notions from functional analysis. Let U and H be two separable Hilbert spaces. We will write L_0(U, H) to denote the space of all bounded linear operators from U to H with the usual norm (with the shorthand L_0(H) when U = H), and L_2(U, H) for the space of all Hilbert–Schmidt operators from U to H, i.e. those bounded linear operators B: U → H such that

\sum_{k \ge 1} \|B(e_k)\|_H^2 < \infty,

for some (and hence all) complete orthonormal systems {e_k}_{k≥1} of U. Finally, a bounded linear operator Q: U → U will be said to be trace-class if \mathrm{Tr}(Q) := \sum_{k \ge 1} \langle Q(e_k), e_k \rangle_U < \infty, again for some (and hence all) complete orthonormal systems {e_k}_{k≥1} of U.

Hilbert space valued Q-Wiener processes

The purpose of this section is to provide a basic understanding of how we can generalize the idea of an ℝ^d-valued Wiener process to one that takes its values in an infinite dimensional Hilbert space, which for convenience we fix to be U = L^2(ℝ^N) (this is simply for the sake of being concrete).

In the finite dimensional case, it is well-known that ℝ^d-valued Wiener processes are characterized by their d×d covariance matrices, which are symmetric and non-negative. The basic idea is that in the infinite dimensional setup the covariance matrices are replaced by covariance operators, which are linear, non-negative, symmetric and bounded.

Indeed, let Q: U → U be a non-negative, symmetric bounded linear operator on U. To avoid introducing extra embeddings, we also suppose Tr(Q) < ∞. Then, completely analogously to the finite dimensional case, there exists a sequence of non-negative real numbers (λ_k)_{k≥1} which are eigenvalues of Q, associated with a sequence of eigenfunctions {e_k}_{k≥1} (i.e. Q e_k = λ_k e_k) that form a complete orthonormal basis for U. Moreover, since Tr(Q) < ∞, it holds that

\sum_{k=1}^{\infty} \lambda_k < \infty.

By a Q-Wiener process W = (W(t))_{t≥0} on U we will simply mean that W(t) can be expanded as

W(t) = \sum_{k=1}^{\infty} \sqrt{\lambda_k}\, \beta_k(t)\, e_k, \qquad (2.2)

where (β_k(t))_{t≥0}, k = 1, 2, …, are mutually independent standard real-valued Brownian motions. We note that W(t) exists as a U-valued square-integrable random variable, i.e. W(t) ∈ L^2(Ω, F, P).

Equation (2.2) shows the role played by Q: the eigenvectors e_k are functions that determine “where” the noise “lives” in U, while the eigenvalues λ_k determine its dimensionality and relative strength. As an example of a covariance operator, let us compute the covariance operator of W. An easy computation based on (2.2) and the elementary properties of the standard real-valued Brownian motion shows that

\mathbb{E}\big[ \langle W(s), g \rangle_U \langle W(t), h \rangle_U \big] = (s \wedge t)\, \langle Qg, h \rangle_U, \quad \forall g, h \in U. \qquad (2.3)

It turns out that W is white in both space and time. The whiteness in time is apparent from the above expression. The whiteness in space is shown explicitly in Sect. 2.7.
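For practitioners who wish to simulate such a noise, the expansion (2.2) is directly usable once the sum is truncated. The following Python sketch is purely illustrative and not part of the construction above: it assumes the eigenvalues λ_k = k^{-2} (so that Tr(Q) < ∞) and an L^2-normalized sine basis on a bounded interval, neither of which is prescribed by the theory.

```python
import numpy as np

def simulate_Q_wiener(T=1.0, n_steps=200, n_grid=256, K=50, L=10.0, seed=0):
    """Simulate a truncated Q-Wiener process W(t) = sum_k sqrt(lambda_k) beta_k(t) e_k
    on [0, L], discretized on a spatial grid.  Illustrative assumptions:
    lambda_k = k^{-2} (summable) and e_k(x) = sqrt(2/L) sin(k pi x / L)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, L, n_grid)
    dt = T / n_steps
    k = np.arange(1, K + 1)
    lam = 1.0 / k**2                                            # eigenvalues of Q
    e = np.sqrt(2.0 / L) * np.sin(np.outer(k, np.pi * x / L))   # basis functions, shape (K, n_grid)
    # independent Brownian motions beta_k: cumulative sums of Gaussian increments
    dbeta = np.sqrt(dt) * rng.standard_normal((n_steps, K))
    beta = np.vstack([np.zeros(K), np.cumsum(dbeta, axis=0)])   # shape (n_steps+1, K)
    W = (beta * np.sqrt(lam)) @ e                               # W(t_i, x_j)
    return x, np.linspace(0.0, T, n_steps + 1), W

x, t, W = simulate_Q_wiener()
print(W.shape)   # (201, 256): one spatial noise profile per time step
```

Any other summable sequence (λ_k) and orthonormal family (e_k) could be substituted; the truncation level K controls how many spatial modes of the noise are retained.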

Stochastic integration with respect to Q-Wiener processes

The second point is that we would like to be able to define stochastic integration with respect to these Hilbert space valued Wiener processes. In particular we must determine for which integrands this can be done [exactly as in Prato and Zabczyk (1992)].

As above, let U = L^2(ℝ^N), Q: U → U be a non-negative, symmetric bounded linear operator on U such that Tr(Q) < ∞, and W = (W(t))_{t≥0} be a Q-Wiener process on U [given by (2.2)].

Unfortunately, in order to define stochastic integrals with respect to W, we need a couple of technical definitions from functional analysis. This is simply in order to control the convergence of the infinite series that appear in the construction, as we will see in the example below. Indeed, let Q^{1/2}(U) be the image of U under the operator Q^{1/2}, a subspace of U, which is a Hilbert space under the inner product

\langle u, v \rangle_{Q^{1/2}(U)} := \langle Q^{-1/2} u, Q^{-1/2} v \rangle_U, \quad u, v \in Q^{1/2}(U).

Q^{1/2}(U) is in fact simply the space generated by the orthonormal basis {√λ_k e_k} whenever {e_k} is the orthonormal basis for U consisting of eigenfunctions of Q. Moreover, let H = L^2(ℝ^N, ρ) for some measurable ρ: ℝ^N → ℝ^+ (again this is just for the sake of concreteness—one could instead take any separable Hilbert space). It turns out that the space L_2(Q^{1/2}(U), H) of all Hilbert–Schmidt operators from Q^{1/2}(U) into H plays an important role in the theory of stochastic integration with respect to W, and for this reason we detail the following simple but illuminating example.

Example 2.1

Let B: U → H be a bounded linear operator from U to H, i.e. B ∈ L_0(U, H). Then, by definition,

\|B\|_{L_2(Q^{1/2}(U),H)}^2 = \sum_{k=1}^{\infty} \|B(Q^{1/2}(e_k))\|_H^2 \le \|B\|_{L_0(U,H)}^2 \sum_{k=1}^{\infty} \|Q^{1/2}(e_k)\|_U^2 = \|B\|_{L_0(U,H)}^2 \sum_{k=1}^{\infty} \langle Q^{1/2}(e_k), Q^{1/2}(e_k) \rangle_U = \|B\|_{L_0(U,H)}^2 \sum_{k=1}^{\infty} \langle Q(e_k), e_k \rangle_U = \|B\|_{L_0(U,H)}^2\, \mathrm{Tr}(Q) < \infty,

since Tr(Q) < ∞, where {e_k}_{k≥1} is again a complete orthonormal system for U. In other words, B ∈ L_0(U, H) implies B ∈ L_2(Q^{1/2}(U), H).
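The chain of inequalities in Example 2.1 can be checked numerically in a finite-dimensional truncation, where operators become matrices and the Hilbert–Schmidt norm over Q^{1/2}(U) reduces to the Frobenius norm of BQ^{1/2}. The following sketch uses arbitrary illustrative matrices (nothing here is prescribed by the text) to verify the bound ‖B‖²_{L_2(Q^{1/2}(U),H)} ≤ ‖B‖²_{L_0(U,H)} Tr(Q).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30                                   # finite-dimensional truncation of U and H
B = rng.standard_normal((n, n))          # an arbitrary bounded operator U -> H (a matrix)
lam = 1.0 / np.arange(1, n + 1)**2       # eigenvalues of Q, so Tr(Q) = sum(lam) < infinity
Q_half = np.diag(np.sqrt(lam))           # Q^{1/2} in its eigenbasis

hs_norm_sq = np.linalg.norm(B @ Q_half, 'fro')**2   # sum_k ||B Q^{1/2} e_k||^2
op_norm_sq = np.linalg.norm(B, 2)**2                # ||B||^2 in the operator norm
trace_Q = lam.sum()

print(hs_norm_sq, op_norm_sq * trace_Q)
assert hs_norm_sq <= op_norm_sq * trace_Q + 1e-10   # the bound of Example 2.1
```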

The main point of the section is the following. According to the construction detailed in Chapter 4 of Da Prato and Zabczyk (1992), we have that for a (random) process (Φ(t))_{t≥0} the integral

\int_0^t \Phi(s)\, dW(s) \qquad (2.4)

has a sense as an element of H when Φ(s) ∈ L_2(Q^{1/2}(U), H), Φ(s) is knowable at time s, and if

\mathbb{P}\Big( \int_0^t \|\Phi(s)\|_{L_2(Q^{1/2}(U),H)}^2\, ds < \infty \Big) = 1.

Now in view of Example 2.1, the take-away message is simply that the stochastic integral (2.4) has a sense in H if Φ(s): U → H is a bounded linear operator, i.e. is in L_0(U, H) for all s ∈ [0, t], and the norm of Φ(s) is bounded on [0, t]. In fact this is the only knowledge that will be needed below.

The stochastic neural field equation: interpretation in language of Hilbert space valued processes

With the previous two sections in place, we can now return to (2.1) and interpret it (and in particular the noise term) in a rigorous way. Indeed, as above, let W be an L^2(ℝ^N)-valued Q-Wiener process, with Q a non-negative, symmetric bounded linear operator on L^2(ℝ^N) such that Tr(Q) < ∞ (trace-class). The rigorous interpretation of (2.1) as an equation for a process (Y(t))_{t≥0} taking its values in the Hilbert space L^2(ℝ^N, ρ) is then

dY(t) = \big[ -Y(t) + F(Y(t)) \big] dt + B(Y(t))\, dW(t), \quad Y(0) = Y_0 \in L^2(\mathbb{R}^N, \rho), \qquad (2.5)

where B is a map from L^2(ℝ^N, ρ) into the space of bounded linear operators L_0(L^2(ℝ^N), L^2(ℝ^N, ρ)). Note that if B is such a map, then the integrated noise term of this equation has a sense thanks to Sect. 2.2.

We in fact work with a general map B satisfying a Lipschitz condition (see below), but we keep in mind the following example which provides the link with the diffusion coefficient σ in (1.1):

B(h)(u)(x) = \sigma(h(x)) \int_{\mathbb{R}^N} \varphi(x-y)\, u(y)\, dy, \quad x \in \mathbb{R}^N, \qquad (2.6)

for h ∈ L^2(ℝ^N, ρ) and u ∈ L^2(ℝ^N), where σ and φ are some functions that must be chosen to ensure the conditions stated below are satisfied. We detail potential choices of σ and φ (and their significance from a modeling point of view—in particular how φ controls the spatial correlation) in Sect. 2.7 below.
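On a spatial grid, applying B(h) to u amounts to multiplying the convolution φ∗u pointwise by σ(h(x)). The sketch below is a minimal illustration of (2.6); the Gaussian φ, the linear σ and the grid are illustrative assumptions only.

```python
import numpy as np

def apply_B(h, u, x, sigma, phi):
    """Evaluate B(h)(u)(x) = sigma(h(x)) * int phi(x - y) u(y) dy on a uniform grid
    via a direct Riemann-sum approximation of the convolution."""
    dx = x[1] - x[0]
    conv = np.array([np.sum(phi(xi - x) * u) * dx for xi in x])
    return sigma(h) * conv

x = np.linspace(-10, 10, 401)
phi = lambda z: np.exp(-z**2 / 2.0) / np.sqrt(2 * np.pi)   # an illustrative phi in L^2
sigma = lambda a: 0.5 * a                                  # sigma(a) = lambda * a, as in Sect. 2.7
h = np.tanh(x)                                             # an arbitrary state h
u = np.exp(-x**2)                                          # an arbitrary test function u
print(apply_B(h, u, x, sigma, phi).shape)                  # (401,)
```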

To summarize, we are here concerned with the solvability of (2.5) in L^2(ℝ^N, ρ) (for some measurable ρ: ℝ^N → ℝ^+ to be determined), where

F(h)(x) = \int_{\mathbb{R}^N} w(x,y)\, G(h(y))\, dy, \quad x \in \mathbb{R}^N,\ h \in L^2(\mathbb{R}^N, \rho), \qquad (2.7)

and B: L^2(ℝ^N, ρ) → L_0(L^2(ℝ^N), L^2(ℝ^N, ρ)). To this end, we make the following two Lipschitz assumptions on B and the nonlinear gain function G:

  • B: H → L_0(U, H) is such that
    \|B(g) - B(h)\|_{L_0(U,H)} \le C_\sigma \|g - h\|_H, \quad \forall g, h \in L^2(\mathbb{R}^N, \rho),
    where U = L^2(ℝ^N) and H = L^2(ℝ^N, ρ) for notational simplicity;
  • G: ℝ → ℝ is bounded and globally Lipschitz, i.e. such that there exists a constant C_G with sup_{a∈ℝ} |G(a)| ≤ C_G and
    |G(a) - G(b)| \le C_G |a - b|, \quad \forall a, b \in \mathbb{R}.
    Typically the nonlinear gain function G is taken to be a sigmoid function, for example G(a) = (1 + e^{-a})^{-1}, a ∈ ℝ, which certainly satisfies this assumption.

Discussion of conditions on the neural field kernel w and ρ

Of particular interest to us are the conditions on the neural field kernel w which will allow us to prove existence and uniqueness of a solution to (2.5) by quoting a standard result from Prato and Zabczyk (1992).

In Kuehn and Riedler (2014, footnote 1) it is suggested that the condition

\int_{\mathbb{R}^N} \int_{\mathbb{R}^N} |w(x,y)|^2\, dx\, dy < \infty \qquad (C1)

together with symmetry of w is enough to ensure that there exists a unique L^2(ℝ^N)-valued solution to (2.5). However, the problem is that it does not follow from (C1) that the operator F is stable on the space L^2(ℝ^N). For instance, suppose that in fact G ≡ 1 (so that G is trivially globally Lipschitz). Then for h ∈ L^2(ℝ^N) (and assuming w ≥ 0) we have that

\|F(h)\|_{L^2(\mathbb{R}^N)}^2 = \int_{\mathbb{R}^N} \|w(x,\cdot)\|_{L^1(\mathbb{R}^N)}^2\, dx. \qquad (2.8)

The point is that we can choose positive w such that (C1) holds, while (2.8) is not finite. For example in the case N = 1 we could take w(x,y) = (1+|x|)^{-1}(1+|y|)^{-1} for x, y ∈ ℝ. In such a case Eq. (2.5) is ill-posed: if Y(t) ∈ L^2(ℝ) then F(Y(t)) is not guaranteed to be in L^2(ℝ), which in turn implies that Y(t) ∉ L^2(ℝ)!

With this in mind we argue two points. Firstly, if we want a solution in L2(RN), we must make the additional strong assumption that

\forall x \in \mathbb{R}^N\ (y \mapsto w(x,y)) \in L^1(\mathbb{R}^N), \quad \text{and} \quad \big( x \mapsto \|w(x,\cdot)\|_{L^1(\mathbb{R}^N)} \big) \in L^2(\mathbb{R}^N). \qquad (C2)

Indeed, below we will show that (C1) together with (C2) are enough to yield the existence of a unique L2(RN)-valued solution to (2.5).

On the other hand, if we don’t want to make the strong assumptions that (C1) and (C2) hold, then we have to work instead in a weighted space L2(RN,ρ), in order to ensure that F is stable. In this case, we will see that if

\exists\, \rho_w \in L^1(\mathbb{R}^N) \ \text{s.t.}\ \int_{\mathbb{R}^N} |w(x,y)|\, \rho_w(x)\, dx \le \Lambda_w\, \rho_w(y) \quad \forall y \in \mathbb{R}^N, \qquad (C1')

for some Λw>0, and

\forall x \in \mathbb{R}^N\ (y \mapsto w(x,y)) \in L^1(\mathbb{R}^N), \quad \text{and} \quad \sup_{x\in\mathbb{R}^N} \|w(x,\cdot)\|_{L^1(\mathbb{R}^N)} \le C_w \qquad (C2')

for some constant Cw, then we can prove the existence of a unique L2(RN,ρw)-valued solution to (2.5).

Condition (C1') is in fact a non-trivial eigenvalue problem, and it is not straightforward to see whether it is satisfied for a given function w. However, we choose to state the theorem below in a general way, and then provide some important examples of when it can be applied.

We will discuss these abstract conditions from a modeling point of view below. However, we first present the existence and uniqueness result.

Existence and uniqueness

Theorem 2.2

Suppose that the neural field kernel w either

  • (i)

    satisfies conditions (C1) and (C2); or

  • (ii)

    satisfies conditions (C1’) and (C2’).

If (i) holds set ρ_w ≡ 1, while if (ii) holds let ρ_w be the function appearing in condition (C1').

Then, whenever Y_0 is an L^2(ℝ^N, ρ_w)-valued random variable with finite p-moments for all p ≥ 2, the neural field Eq. (2.5) has a unique solution taking values in the space L^2(ℝ^N, ρ_w). To be precise, there exists a unique L^2(ℝ^N, ρ_w)-valued process (Y(t))_{t≥0} such that for all T > 0

\mathbb{P}\Big( \int_0^T \|Y(s)\|_{L^2(\mathbb{R}^N,\rho_w)}^2\, ds < \infty \Big) = 1,

and

Y(t) = e^{-t} Y_0 + \int_0^t e^{-(t-s)} F(Y(s))\, ds + \int_0^t e^{-(t-s)} B(Y(s))\, dW(s), \quad \mathbb{P}\text{-a.s.}

Moreover, (Y(t))_{t≥0} has a continuous modification, and satisfies the bounds

\sup_{t\in[0,T]} \mathbb{E}\big[ \|Y(t)\|_{L^2(\mathbb{R}^N,\rho_w)}^p \big] \le C_T(p) \Big( 1 + \mathbb{E}\big[ \|Y_0\|_{L^2(\mathbb{R}^N,\rho_w)}^p \big] \Big), \quad \forall T > 0, \qquad (2.9)

for all p ≥ 2, while for p > 2,

\mathbb{E}\Big[ \sup_{t\in[0,T]} \|Y(t)\|_{L^2(\mathbb{R}^N,\rho_w)}^p \Big] \le C_T(p) \Big( 1 + \mathbb{E}\big[ \|Y_0\|_{L^2(\mathbb{R}^N,\rho_w)}^p \big] \Big), \quad \forall T > 0. \qquad (2.10)

Proof

We simply check the hypotheses of Da Prato and Zabczyk (1992, Theorem 7.4) (a standard reference in the theory) in both cases (i) and (ii). This involves showing that (a) F maps L^2(ℝ^N, ρ_w) into L^2(ℝ^N, ρ_w); (b) the operator B(h) ∈ L_2(Q^{1/2}(U), H) for all h ∈ H [recalling that U = L^2(ℝ^N) and H = L^2(ℝ^N, ρ_w)]; and (c) F and B are globally Lipschitz.

(a): We check that F maps L^2(ℝ^N, ρ_w) into L^2(ℝ^N, ρ_w). In case (i) this holds since ρ_w ≡ 1 and for any h ∈ L^2(ℝ^N)

\|F(h)\|_{L^2(\mathbb{R}^N)}^2 = \int_{\mathbb{R}^N} \Big( \int_{\mathbb{R}^N} w(x,y)\, G(h(y))\, dy \Big)^2 dx \le C_G^2 \int_{\mathbb{R}^N} \|w(x,\cdot)\|_{L^1(\mathbb{R}^N)}^2\, dx < \infty,

by assumption (C2). Similarly in case (ii) for any h ∈ L^2(ℝ^N, ρ_w)

\|F(h)\|_{L^2(\mathbb{R}^N,\rho_w)}^2 = \int_{\mathbb{R}^N} \Big( \int_{\mathbb{R}^N} w(x,y)\, G(h(y))\, dy \Big)^2 \rho_w(x)\, dx \le C_G^2 \sup_{x\in\mathbb{R}^N} \|w(x,\cdot)\|_{L^1(\mathbb{R}^N)}^2\, \|\rho_w\|_{L^1(\mathbb{R}^N)} < \infty.

Hence in either case F in fact maps L2(RN,ρw) into a metric ball in L2(RN,ρw).

(b): To show (b) in both cases, we know by Example 2.1 that for h ∈ H, B(h) ∈ L_2(Q^{1/2}(U), H) whenever B(h) ∈ L_0(U, H), which is true by assumption.

(c): To show (c), we first want F: L^2(ℝ^N, ρ_w) → L^2(ℝ^N, ρ_w) to be globally Lipschitz. To this end, for any g, h ∈ L^2(ℝ^N, ρ_w), we see that in either case

\|F(g)-F(h)\|_{L^2(\mathbb{R}^N,\rho_w)}^2 = \int_{\mathbb{R}^N} |F(g)-F(h)|^2(x)\, \rho_w(x)\, dx \le \int_{\mathbb{R}^N} \Big( \int_{\mathbb{R}^N} |w(x,y)|\, |G(g(y))-G(h(y))|\, dy \Big)^2 \rho_w(x)\, dx \le C_G^2 \int_{\mathbb{R}^N} \Big( \int_{\mathbb{R}^N} |w(x,y)|\, |g(y)-h(y)|\, dy \Big)^2 \rho_w(x)\, dx,

where we have used the Lipschitz property of G. Now in case (i) it clearly follows from the Cauchy–Schwarz inequality that

\|F(g)-F(h)\|_{L^2(\mathbb{R}^N)}^2 \le C_G^2 \int_{\mathbb{R}^N} \int_{\mathbb{R}^N} |w(x,y)|^2\, dx\, dy\ \|g-h\|_{L^2(\mathbb{R}^N)}^2,

so that by condition (C1), F is indeed Lipschitz.

In case (ii), by Cauchy–Schwarz and the specific property of ρ_w given by (C1'), we see that

\|F(g)-F(h)\|_{L^2(\mathbb{R}^N,\rho_w)}^2 \le C_G^2 \sup_{x\in\mathbb{R}^N} \|w(x,\cdot)\|_{L^1(\mathbb{R}^N)} \int_{\mathbb{R}^N} |g(y)-h(y)|^2 \int_{\mathbb{R}^N} |w(x,y)|\, \rho_w(x)\, dx\, dy \le C_G^2\, \Lambda_w \sup_{x\in\mathbb{R}^N} \|w(x,\cdot)\|_{L^1(\mathbb{R}^N)}\, \|g-h\|_{L^2(\mathbb{R}^N,\rho_w)}^2,

so that again F is Lipschitz. Since we have assumed that B:HL0(U,H) is Lipschitz, we are done.

Remark 2.3

(Large Deviation Principle) The main focus of Kuehn and Riedler (2014) was a large deviation principle for the stochastic neural field Eq. (2.5) with small noise, but in a less general situation than we consider here. In particular, the authors only considered the neural field equation driven by a simple additive noise, white in both space and time.

We would therefore like to remark that in our more general case, and under much weaker conditions than those imposed in Kuehn and Riedler (2014) (our conditions are for example satisfied for a connectivity function w that is homogeneous, as we will see in Example 2 below), an LDP result for the solution identified by the above theorem still holds and can be quoted from the literature. Indeed, such a result is presented in Peszat (1994, Theorem 7.1). The main conditions required for the application of this result have essentially already been checked above (global Lipschitz properties of F and B), and it thus remains to check conditions (E.1)–(E.4) as they appear in Peszat (1994). In fact these are trivialities, since the strongly continuous contraction semigroup S(t) is generated by the identity in our case.

Discussion of conditions on w and ρ in practice

Our knowledge about the kinds of neural field kernels that are found in the brains of mammals is still quite limited. Since visual perception is the most active area of research, it should not come as a surprise that it is in cortical regions involved in visual perception that this knowledge is the most extensive, and in particular in the primary visual area called V1 in humans. In models of this region it is usually assumed that w is the sum of two parts: a local part w_loc corresponding to local neuronal connections, and a non-local part w_lr corresponding to longer range connections. As suggested in Lund et al. (2003), Mariño et al. (2005), w_loc is well approximated by a Gaussian function (or a difference of such functions, see below):

w_{loc}(x,y) = K \exp\big( -|x-y|^2 / 2\beta_{loc}^2 \big), \quad x, y \in \mathbb{R}^N,\ K > 0, \qquad (2.11)

where βloc is the extent of the local connectivity. Hence wloc is isotropic and homogeneous. In fact for practitioners, a very common assumption on w is that it is homogeneous and in L1(RN), which thus concentrates on modeling the local interactions (Bressloff and Folias 2004; Bressloff and Webber 2012; Bressloff and Wilkerson 2012; Folias and Bressloff 2004; Kilpatrick and Ermentrout 2013; Owen et al. 2007). However, when w is homogeneous it is clear that neither (C1) nor (C2) of the above theorem are satisfied, and so we instead must try to show that (C1’) is satisfied [(C2’) trivially holds], and look for solutions in a weighted L2 space. This is done in the second example below.

Long range connectivity is best described by assuming N = 2. It is built upon the existence of maps of orientation sensitivity in which the preferred visual orientation at each point x is represented by a function θ(x) ∈ [0, π). This function is smooth except at countably many points called the pinwheels where it is undefined. Depending on the species, the long range connections feature an anisotropy, meaning that they tend to align themselves with the preferred orientation at x. One way to take this into account is to introduce the function A(χ, x) = exp[−((1−χ)² x_1² + x_2²)/2β_lr²], where x = (x_1, x_2), χ ∈ [0, 1), and β_lr is the extent of the long range connectivity. When χ = 0 there is no anisotropy (as for the macaque monkey for example) and when χ ∈ (0, 1) there is some anisotropy (as for the tree shrew, for example). Let R_α represent the rotation by angle α around the origin. The long range neural field kernel is then defined by (Baker and Cowan 2009; Bressloff 2003)

w_{lr}(x,y) = \varepsilon_{lr}\, A\big( \chi, R_{-2\theta(x)}(x-y) \big) \cdot G_{\beta_\theta}\big( \theta(x) - \theta(y) \big),

where ε_lr ≪ 1 and G_{β_θ} is the one-dimensional Gaussian density with 0 mean and variance β_θ². Note that w_lr is not homogeneous, even in the case χ = 0, because θ(x) − θ(y) is not a function of x − y. It is easy to verify that w_lr ∈ L^2(ℝ²).

Combining the local and non-local parts, one then writes for the neural field kernel of the primary visual area:

w_{pva}(x,y) = w_{loc}(x-y) + w_{lr}(x,y). \qquad (2.12)

In view of our results, in the case where w = w_pva, since the first part is homogeneous while the second is non-homogeneous but is in L^2(ℝ²), we need a combination of the results above. Indeed, the homogeneous part dictates that we work in L^2(ℝ², ρ_{w_loc}) (ρ_{w_loc} ∈ L^1(ℝ²)). The second kernel dictates that we work in L^2(ℝ²). But L^2(ℝ²) ⊂ L^2(ℝ², ρ_{w_loc}), because, as shown in Example 2 below, ρ_{w_loc} can be chosen to be bounded, and hence there is no problem.

Another commonly used type of (homogeneous) neural field kernel, when modeling excitatory and inhibitory populations of neurons is the so-called “Mexican hat” kernel defined by

w_{mh}(x,y) = K_1 \exp\big( -|x-y|^2 / 2\beta_1^2 \big) - K_2 \exp\big( -|x-y|^2 / 2\beta_2^2 \big), \quad x, y \in \mathbb{R}^N, \qquad (2.13)

for some K_1, K_2 > 0. If β_2 > β_1 and K_1 > K_2 for example, this is locally excitatory and remotely inhibitory.
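Both (2.11) and (2.13) are straightforward to implement; the short sketch below uses arbitrary illustrative parameter values satisfying β_2 > β_1 and K_1 > K_2.

```python
import numpy as np

def w_loc(x, y, K=1.0, beta_loc=1.0):
    """Local Gaussian kernel (2.11); x and y may be arrays of positions."""
    return K * np.exp(-np.abs(x - y)**2 / (2.0 * beta_loc**2))

def w_mh(x, y, K1=2.0, K2=1.0, beta1=1.0, beta2=2.0):
    """'Mexican hat' kernel (2.13): locally excitatory, remotely inhibitory
    when beta2 > beta1 and K1 > K2 (as in the illustrative defaults here)."""
    d2 = np.abs(x - y)**2
    return K1 * np.exp(-d2 / (2.0 * beta1**2)) - K2 * np.exp(-d2 / (2.0 * beta2**2))

x = np.linspace(-5, 5, 11)
print(w_mh(x, 0.0))   # kernel profile seen from a neuron at the origin
```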

It is also important to mention the role of ρ_w from a modeling perspective. The first point is that in the case where w is homogeneous, it is very natural to look for solutions that live in L^2(ℝ^N, ρ) for some ρ ∈ L^1(ℝ^N), rather than in L^2(ℝ^N). This is because in the deterministic case (see Ermentrout and McLeod 1993), solutions of interest are of the form of traveling waves, which are constant at infinity, and thus are not integrable.

Moreover, we emphasize that in Theorem 2.2 and the examples in the next section we identify a single ρ_w ∈ L^1(ℝ^N) so that the standard existence result of Da Prato and Zabczyk (1992) can be directly applied through Theorem 2.2. We do not claim that this is the only weight ρ for which the solution can be shown to exist in L^2(ℝ^N, ρ) (see also Example 2 below).

Remark 2.4

If we replace the spatial coordinate space ℝ^N by a bounded domain D ⊂ ℝ^N, so that the neural field Eq. (2.5) describes the activity of a neuron found at position x ∈ D, then checking the conditions as done in Theorem 2.2 becomes rather trivial (under appropriate boundary conditions). Indeed, by doing this one can see that there exists a unique L^2(D)-valued solution to (2.5) under the condition (C2') only (with ℝ^N replaced by D). Although working in a bounded domain seems more physical (since any physical section of cortex is clearly bounded), the unbounded case is still often used, see Bressloff and Webber (2012) or the review Bressloff (2012), and is mathematically more interesting. The problem in passing to the unbounded case stems from the fact that the nonlocal term in (2.5) naturally ‘lives’ in the space of bounded functions, while according to the theory the noise naturally lives in an L^2 space. These are not compatible when the underlying space is unbounded.

Discussion of the noise term in (2.5)

It is important to understand the properties of the noise term in the neural field Eq. (2.5) which we now know has a solution in some sense. As mentioned above, one particular form of the noise operator B that is of special importance from a modeling point of view is given by (2.6) i.e.

B(h)(u)(x) = \sigma(h(x)) \int_{\mathbb{R}^N} \varphi(x-y)\, u(y)\, dy, \quad x \in \mathbb{R}^N, \qquad (2.14)

for h ∈ L^2(ℝ^N, ρ) and u ∈ L^2(ℝ^N), and some functions σ and φ. This is because such noise terms are spatially correlated depending on φ (as we will see below) and make the link with the original Eq. (1.1) considered in Bressloff and Webber (2012), where spatial correlations are important.

An obvious question is then for which choices of σ and φ can we apply the above results? In particular we need to check that B(h) is a bounded linear operator from L^2(ℝ^N) to L^2(ℝ^N, ρ) for all h ∈ L^2(ℝ^N, ρ), and that B is Lipschitz (assuming as usual that ρ ∈ L^1(ℝ^N)).

To this end, suppose φ ∈ L^2(ℝ^N) and that there exists a constant C_σ such that

|\sigma(a) - \sigma(b)| \le C_\sigma |a-b|, \quad \text{and} \quad |\sigma(a)| \le C_\sigma (1 + |a|), \quad \forall a, b \in \mathbb{R}. \qquad (2.15)

In other words σ: ℝ → ℝ is assumed to be Lipschitz and of linear growth. Then for any h ∈ L^2(ℝ^N, ρ) and u ∈ L^2(ℝ^N),

\|B(h)(u)\|_{L^2(\mathbb{R}^N,\rho)}^2 = \int_{\mathbb{R}^N} \sigma^2(h(x)) \Big( \int_{\mathbb{R}^N} \varphi(x-y)\, u(y)\, dy \Big)^2 \rho(x)\, dx \le 2 C_\sigma^2\, \|u\|_{L^2(\mathbb{R}^N)}^2\, \|\varphi\|_{L^2(\mathbb{R}^N)}^2 \Big( \|\rho\|_{L^1(\mathbb{R}^N)} + \|h\|_{L^2(\mathbb{R}^N,\rho)}^2 \Big).

Thus B(h) is indeed a bounded linear operator from L^2(ℝ^N) to L^2(ℝ^N, ρ). Moreover, a similar calculation yields the Lipschitz property of B, so that the above results can be applied. In particular our results hold when σ(a) = λa, for some λ ∈ ℝ. This is important because it is this choice of σ that is used for the simulations carried out in Bressloff and Webber (2012, Section 2.3).

To see the spatial correlation in the noise term in (2.5) when B has the form (2.14) with φ ∈ L^2(ℝ^N), consider the case σ ≡ 1 (so that the noise is purely additive). Then

\int_0^t B(Y(s))\, dW(s) = \int_0^t B\, dW(s) =: X(t), \quad t \ge 0,

where

B(u)(x) = \int_{\mathbb{R}^N} \varphi(x-y)\, u(y)\, dy, \quad x \in \mathbb{R}^N,\ u \in L^2(\mathbb{R}^N),

and X(t) is a well-defined L^2(ℝ^N, ρ)-valued process since B is bounded from L^2(ℝ^N) into L^2(ℝ^N, ρ) (see Sect. 2.2). Moreover, by Theorem 5.25 of Da Prato and Zabczyk (1992), (X(t))_{t≥0} is Gaussian with mean zero and

\mathrm{Cov}\big( X(t), X(s) \big) = (s \wedge t)\, B Q B^*, \quad s, t \ge 0,

where B^*: L^2(ℝ^N, ρ) → L^2(ℝ^N) is the adjoint of B. In other words, for all g, h ∈ L^2(ℝ^N, ρ), s, t ≥ 0, we have, by definition of the covariance operator, that

\mathbb{E}\big[ \langle g, X(s) \rangle_{L^2(\mathbb{R}^N,\rho)} \langle h, X(t) \rangle_{L^2(\mathbb{R}^N,\rho)} \big] = (s \wedge t)\, \langle B Q B^* g, h \rangle_{L^2(\mathbb{R}^N,\rho)}.

That is, for any g, h ∈ L^2(ℝ^N, ρ)

\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} \mathbb{E}\big[ X(s,x) X(t,y) \big] g(x) h(y) \rho(x) \rho(y)\, dx\, dy = (s \wedge t)\, \langle Q B^* h, B^* g \rangle_{L^2(\mathbb{R}^N)} = (s \wedge t) \int_{\mathbb{R}^N} Q B^* g(z)\, B^* h(z)\, dz = (s \wedge t) \int_{\mathbb{R}^N} Q^{1/2} B^* g(z)\, Q^{1/2} B^* h(z)\, dz. \qquad (2.16)

Now, by definition, for u ∈ L^2(ℝ^N) and f ∈ L^2(ℝ^N, ρ)

\int_{\mathbb{R}^N} u(y)\, B^*(f)(y)\, dy = \int_{\mathbb{R}^N} B(u)(x)\, f(x)\, \rho(x)\, dx = \int_{\mathbb{R}^N} u(y) \int_{\mathbb{R}^N} \varphi(x-y)\, f(x)\, \rho(x)\, dx\, dy,

so that B^*(f)(y) = \int_{\mathbb{R}^N} \varphi(x-y)\, f(x)\, \rho(x)\, dx. Using this in (2.16), we see that

\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} \mathbb{E}\big[ X(s,x) X(t,y) \big] g(x) h(y) \rho(x) \rho(y)\, dx\, dy = (s \wedge t) \int_{\mathbb{R}^N} \Big( \int_{\mathbb{R}^N} Q^{1/2}\varphi(x-z)\, g(x)\, \rho(x)\, dx \Big) \Big( \int_{\mathbb{R}^N} Q^{1/2}\varphi(y-z)\, h(y)\, \rho(y)\, dy \Big) dz,

for all g, h ∈ L^2(ℝ^N, ρ), since Q is a linear operator and is self-adjoint. We can then conclude that

\mathbb{E}\big[ X(s,x) X(t,y) \big] = (s \wedge t) \int_{\mathbb{R}^N} Q^{1/2}\varphi(x-z)\, Q^{1/2}\varphi(y-z)\, dz = (s \wedge t)\, c(x-y), \qquad (2.17)

where c(x) = Q^{1/2}φ ∗ Q^{1/2}φ̃(x) and φ̃(x) = φ(−x). Hence (X(t))_{t≥0} is white in time but stationary and colored in space with covariance function (s ∧ t) c(x). We remark that the manipulations above are certainly not new [they are for example used in Brzeźniak and Peszat (1999)], but they illustrate nicely the spatial correlation property of the noise we consider.

We conclude that (2.14) is exactly the rigorous interpretation of the noise described in Bressloff and Webber (2012), when interpreting a solution to the stochastic neural field equation as a process taking values in L2(RN,ρw).

Remark 2.5

Note that in the case where B is the identity, X(t) = W(t). We can, at least formally, carry out the above computation with φ = δ_0 and find that

\mathbb{E}\big[ W(s,x) W(t,y) \big] = (s \wedge t)\, Q\delta_0(x-y), \qquad (2.18)

which yields for any g, h ∈ L^2(ℝ^N)

\mathbb{E}\big[ \langle W(s), g \rangle_{L^2(\mathbb{R}^N)} \langle W(t), h \rangle_{L^2(\mathbb{R}^N)} \big] = \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} \mathbb{E}\big[ W(s,x) W(t,y) \big] g(x) h(y)\, dx\, dy = (s \wedge t)\, \langle Qg, h \rangle_{L^2(\mathbb{R}^N)},

which is Eq. (2.3). Equation (2.18) is the reason why we stated in Sect. 2.1 that W was a white noise in space and time.

Examples

As mentioned we now present two important cases where the conditions (C1') and (C2') are satisfied. For convenience, in both cases we in fact show that (C1') is satisfied for some ρ_w ∈ L^1(ℝ^N) that is also bounded.

Example 1: |w| defines a compact integral operator. Suppose that

  • given ε > 0, there exists δ > 0 and R > 0 such that for all θ ∈ ℝ^N with |θ| < δ
    • (i)
      for almost all x ∈ ℝ^N,
      \int_{\mathbb{R}^N\setminus B(0,R)} |w(x,y)|\, dy < \varepsilon, \qquad \int_{\mathbb{R}^N} |w(x,y+\theta) - w(x,y)|\, dy < \varepsilon,
    • (ii)
      for almost all y ∈ ℝ^N,
      \int_{\mathbb{R}^N\setminus B(0,R)} |w(x,y)|\, dx < \varepsilon, \qquad \int_{\mathbb{R}^N} |w(x+\theta,y) - w(x,y)|\, dx < \varepsilon,
    where B(0,R) denotes the ball of radius R in ℝ^N centered at the origin;
  • there exists a bounded subset Ω ⊂ ℝ^N of positive measure such that
    \inf_{y\in\Omega} \int_{\Omega} |w(x,y)|\, dx > 0, \quad \text{or} \quad \inf_{x\in\Omega} \int_{\Omega} |w(x,y)|\, dy > 0;
  • w satisfies (C2') and moreover
    \forall y \in \mathbb{R}^N\ (x \mapsto w(x,y)) \in L^1(\mathbb{R}^N), \quad \text{and} \quad \sup_{y\in\mathbb{R}^N} \|w(\cdot,y)\|_{L^1(\mathbb{R}^N)} < \infty.

We claim that these assumptions are sufficient for (C1') so that we can apply Theorem 2.2 in this case. Indeed, let X be the Banach space of functions in L^1(ℝ^N) ∩ L^∞(ℝ^N) equipped with the norm ‖·‖_X = max{‖·‖_{L^1(ℝ^N)}, ‖·‖_{L^∞(ℝ^N)}}. Thanks to the last point above, we can well-define the map J: X → X by

Jh(y) = \int_{\mathbb{R}^N} |w(x,y)|\, h(x)\, dx, \quad h \in X.

Moreover, it follows from Eveson (1995, Corollary 5.1) that the first condition we have here imposed on w is in fact necessary and sufficient for both the operators J: L^1(ℝ^N) → L^1(ℝ^N) and J: L^∞(ℝ^N) → L^∞(ℝ^N) to be compact. We therefore clearly also have that the condition is necessary and sufficient for the operator J: X → X to be compact.

Note now that the space K of positive functions in X is a cone in X such that J(K) ⊂ K, and that the cone is reproducing (i.e. X = {f − g : f, g ∈ K}). If we can show that the spectral radius r(J) is strictly positive, we can thus finally apply the Krein–Rutman Theorem [see for example Du (2006, Theorem 1.1)] to see that r(J) is an eigenvalue with corresponding non-zero eigenvector ρ ∈ K.

To show that r(J) > 0, suppose first of all that there exists a bounded Ω ⊂ ℝ^N of positive measure such that \inf_{y\in\Omega} \int_{\Omega} |w(x,y)|\, dx > 0. Define h = 1 on Ω, 0 elsewhere, so that ‖h‖_X = max{1, |Ω|}. Then, trivially,

\|Jh\|_X \ge \sup_{y\in\mathbb{R}^N} \int_{\Omega} |w(x,y)|\, dx \ge \inf_{y\in\Omega} \int_{\Omega} |w(x,y)|\, dx =: m > 0,

by assumption. Replacing h by h̃ = h / max{1, |Ω|} yields ‖h̃‖_X = 1 and

\|J\tilde{h}\|_X \ge m / \max\{1, |\Omega|\}.

Thus ‖J‖ ≥ m / max{1, |Ω|}. Similarly

\|J^2 h\|_X \ge \sup_{y\in\mathbb{R}^N} \int_{\mathbb{R}^N} |w(x_1,y)| \int_{\Omega} |w(x_2,x_1)|\, dx_2\, dx_1 \ge \int_{\mathbb{R}^N} |w(x_1,y)| \int_{\Omega} |w(x_2,x_1)|\, dx_2\, dx_1 \ \ \forall y \in \mathbb{R}^N, \ \ \ge \inf_{x_1\in\Omega} \int_{\Omega} |w(x_2,x_1)|\, dx_2\ \int_{\Omega} |w(x_1,y)|\, dx_1 \ \ \forall y \in \mathbb{R}^N.

Therefore

\|J^2 h\|_X \ge m^2,

so that ‖J²‖ ≥ m² / max{1, |Ω|}. In fact we have ‖J^k‖ ≥ m^k / max{1, |Ω|} for all k ≥ 1, so that, by the spectral radius formula, r(J) ≥ m > 0. The case where \inf_{x\in\Omega} \int_{\Omega} |w(x,y)|\, dy > 0 holds instead is proved similarly, by instead taking h = 1/|Ω| on Ω (0 elsewhere) and working with the L^1(ℝ^N) norm of Jh in place of the L^∞(ℝ^N) norm.

We have thus found a non-negative, non-zero function ρ = ρ_w ∈ L^1(ℝ^N) ∩ L^∞(ℝ^N) such that

\int_{\mathbb{R}^N} |w(x,y)|\, \rho_w(x)\, dx = r(J)\, \rho_w(y), \quad \forall y \in \mathbb{R}^N,

so that (C1’) is satisfied.
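When the assumptions of this example hold, r(J) and a corresponding eigenfunction ρ_w can be approximated numerically by discretizing J on a large but bounded grid and running a power iteration, since J has a positive kernel. The sketch below does this for an illustrative non-homogeneous kernel; the kernel, grid and iteration count are assumptions made only for the sake of the example.

```python
import numpy as np

def approximate_rho_w(w, x, n_iter=200):
    """Approximate the spectral radius r(J) and an eigenfunction rho_w of
    Jh(y) = int |w(x, y)| h(x) dx by power iteration on a grid discretization."""
    dx = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing='ij')   # X[i, j] = x_i, Y[i, j] = x_j
    M = np.abs(w(X, Y)).T * dx                # M[j, i] ~ |w(x_i, x_j)| dx
    h = np.ones_like(x)
    for _ in range(n_iter):
        h_new = M @ h                         # one application of the discretized J
        r = np.max(h_new)                     # normalization; converges to r(J)
        h = h_new / r
    return r, h                               # r ~ r(J), h ~ rho_w (up to scaling)

# an illustrative non-homogeneous kernel, truncated to a bounded grid
w = lambda x, y: np.exp(-(x - y)**2) / (1.0 + 0.1 * y**2)
x = np.linspace(-15.0, 15.0, 301)
r, rho = approximate_rho_w(w, x)
print(r)
```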

Example 2: Homogeneous case. Suppose that

  • w is homogeneous, i.e. w(x,y) = w(x−y) for all x, y ∈ ℝ^N;

  • w ∈ L^1(ℝ^N) and is continuous;

  • \int_{\mathbb{R}^N} |x|^{2N} |w(x)|\, dx < \infty.

These conditions are satisfied for many typical choices of the neural field kernel in the literature [e.g. the “Mexican hat” kernel Bressloff et al. 2001; Faye et al. 2011; Owen et al. 2007; Veltz and Faugeras 2010 and (2.13) above]. However, it is clear that we are not in the case of the previous example, since for any R>0

\sup_{x\in\mathbb{R}^N} \int_{\mathbb{R}^N\setminus B(0,R)} |w(x-y)|\, dy = \|w\|_{L^1(\mathbb{R}^N)},

which is not uniformly small. We thus again show that (C1’) is satisfied in this case so that [since (C2’) is trivially satisfied] Theorem 2.2 yields the existence of a unique L2(RN,ρw)-valued solution to (2.5).

In order to do this, we use the Fourier transform. Let v=|w|, so that v is continuous and in L1(RN). Let Fv be the Fourier transform of v i.e.

Fv(\xi) := \int_{\mathbb{R}^N} e^{-2\pi i x\cdot\xi}\, v(x)\, dx, \quad \xi \in \mathbb{R}^N.

Therefore Fv is continuous and bounded by

\sup_{\xi\in\mathbb{R}^N} |Fv(\xi)| \le \|v\|_{L^1(\mathbb{R}^N)} = \|w\|_{L^1(\mathbb{R}^N)}.

Now let Λ_w = ‖w‖_{L^1(ℝ^N)} + 1, and z(x) := e^{−|x|²/2}, x ∈ ℝ^N, so that z is in the Schwartz space of smooth rapidly decreasing functions, which we denote by S(ℝ^N). Then define

\hat{\rho}(\xi) := \frac{Fz(\xi)}{\Lambda_w - Fv(\xi)}. \qquad (2.19)

We note that the denominator is continuous and strictly bounded away from 0 (indeed by construction Λ_w − Fv(ξ) ≥ 1 for all ξ ∈ ℝ^N). Thus ρ̂ is continuous, bounded and in L^1(ℝ^N) (since Fz ∈ S(ℝ^N) by the standard stability result for the Fourier transform on S(ℝ^N)).

We now claim that F^{-1}ρ̂ ∈ L^1(ℝ^N), where the map F^{-1} is defined by

F^{-1}g(x) := \int_{\mathbb{R}^N} e^{2\pi i x\cdot\xi}\, g(\xi)\, d\xi, \quad g \in L^1(\mathbb{R}^N).

Indeed, we note that for any k ∈ {1, …, N},

\partial_k^{2N} Fv(\xi) = (-2\pi i)^{2N} \int_{\mathbb{R}^N} e^{-2\pi i x\cdot\xi}\, x_k^{2N}\, v(x)\, dx,

which is well-defined and bounded thanks to our assumption on the integrability of x ↦ |x|^{2N} |w(x)|. Since Fz is rapidly decreasing, we can thus see that the function ρ̂(ξ) is 2N times differentiable with respect to every component and ∂_k^{2N} ρ̂(ξ) is absolutely integrable for every k ∈ {1, …, N}. Finally, since F^{-1}(\partial_k^{2N}\hat{\rho})(x) = (2\pi i)^{2N} x_k^{2N}\, F^{-1}\hat{\rho}(x) for each k ∈ {1, …, N}, we have that

|F^{-1}\hat{\rho}(x)| \le \frac{\sum_{k=1}^N |F^{-1}(\partial_k^{2N}\hat{\rho})(x)|}{(2\pi)^{2N} \sum_{k=1}^N x_k^{2N}} \le \frac{N^{N-1} \sum_{k=1}^N \|\partial_k^{2N}\hat{\rho}\|_{L^1(\mathbb{R}^N)}}{(2\pi)^{2N} |x|^{2N}},

for all x ∈ ℝ^N. Thus there exists a constant K such that |F^{-1}ρ̂(x)| ≤ K/|x|^{2N}. Moreover, since we also have the trivial bound

|F^{-1}\hat{\rho}(x)| \le \|\hat{\rho}\|_{L^1(\mathbb{R}^N)},

for all x ∈ ℝ^N, it follows that |F^{-1}ρ̂(x)| ≤ K/(1 + |x|^{2N}), by adjusting the constant K. Since this is integrable over ℝ^N, the claim is proved.

Now, by the classical Fourier Inversion Theorem (which is applicable since ρ̂ and F^{-1}ρ̂ are both in L^1(ℝ^N)), we thus have that

F F^{-1}\hat{\rho}(\xi) = \hat{\rho}(\xi),

for all ξ ∈ ℝ^N.

By setting ρ(x) = F^{-1}ρ̂(x), we see that

\Lambda_w\, F\rho(\xi) - F\rho(\xi)\, Fv(\xi) = Fz(\xi).

We may finally again apply the inverse Fourier transform F^{-1} to both sides, so that by the Inversion Theorem again (along with the standard convolution formula) it holds that

\Lambda_w\, \rho(y) - \int_{\mathbb{R}^N} v(x-y)\, \rho(x)\, dx = e^{-|y|^2/2}, \quad y \in \mathbb{R}^N.

It then follows that

\int_{\mathbb{R}^N} |w(x-y)|\, \rho(x)\, dx \le \Lambda_w\, \rho(y), \quad \forall y \in \mathbb{R}^N,

as claimed.

Moreover, Eq. (2.19) shows that ρ̂ is in Schwartz space, hence so is ρ, implying that it is bounded. Note that Eq. (2.19) provides a way of explicitly computing one possible function ρ_w appearing in condition (C1') in the cases where the neural field kernel is homogeneous [for example given by (2.11) and (2.13)]. That particular function can be varied for example by changing the function z and/or the constant Λ_w.
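Formula (2.19) also lends itself to a direct numerical evaluation with the fast Fourier transform, which gives an explicit (approximate) weight ρ_w for a given homogeneous kernel. The sketch below does this in one dimension for the Mexican hat kernel (2.13) on a periodic grid; the grid, kernel parameters and the final sanity check are illustrative assumptions.

```python
import numpy as np

def compute_rho_w(w_vals, x):
    """Approximate rho_w from (2.19): rho_hat = F z / (Lambda_w - F v), with v = |w|,
    z(x) = exp(-|x|^2/2) and Lambda_w = ||w||_{L^1} + 1, using FFTs on a periodic grid."""
    dx = x[1] - x[0]
    v = np.abs(w_vals)
    z = np.exp(-x**2 / 2.0)
    Lambda_w = np.sum(v) * dx + 1.0
    # continuous Fourier transform (convention e^{-2 pi i x xi}) approximated by the DFT
    Fv = np.fft.fft(np.fft.ifftshift(v)) * dx
    Fz = np.fft.fft(np.fft.ifftshift(z)) * dx
    rho_hat = Fz / (Lambda_w - Fv)
    rho = np.fft.fftshift(np.fft.ifft(rho_hat)).real / dx
    return rho, Lambda_w

dx = 0.05
x = np.arange(-40.0, 40.0, dx)                               # x = 0 sits at index len(x)//2
w = 2.0 * np.exp(-x**2 / 2.0) - np.exp(-x**2 / 8.0)          # Mexican hat kernel (2.13)
rho, Lam = compute_rho_w(w, x)

# numerical sanity check of (C1'): int |w(x-y)| rho(x) dx <= Lambda_w rho(y)
ys = x[::200]
lhs = np.array([np.sum(np.abs(2.0*np.exp(-(x-y)**2/2.0) - np.exp(-(x-y)**2/8.0)) * rho) * dx
                for y in ys])
print(np.max(lhs - Lam * rho[::200]))   # should be <= 0, up to discretization error
```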

Stochastic neural fields as Gaussian random fields

In this section we take an alternative approach, and try to give sense to a solution to the stochastic neural field Eq. (1.1) as a random field, using Walsh’s theory of integration.

This approach generally takes as its starting point a deterministic PDE, and then attempts to include a term which is random in both space and time. With this in mind, consider first the well studied deterministic neural field equation

\partial_t Y(t,x) = -Y(t,x) + \int_{\mathbb{R}^N} w(x,y)\, G(Y(t,y))\, dy, \quad x \in \mathbb{R}^N,\ t \ge 0. \qquad (3.1)

Under some conditions on the neural field kernel w (boundedness, condition (C2') above and L^1-Lipschitz continuity), this equation has a unique solution (t,x) ↦ Y(t,x) that is bounded and continuous in x and continuously differentiable in t, whenever x ↦ Y(0,x) is bounded and continuous (Potthast 2010).

The idea then is to directly add a noise term to this equation, and try and give sense to all the necessary objects in order to be able to define what we mean by a solution. Indeed, consider the following stochastic version of (3.1),

\partial_t Y(t,x) = -Y(t,x) + \int_{\mathbb{R}^N} w(x,y)\, G(Y(t,y))\, dy + \sigma(Y(t,x))\, \dot{W}(t,x), \qquad (3.2)

where Ẇ is a “space-time white noise”. Informally we may think of the object Ẇ(t,x) as the random distribution which, when integrated against a test function h ∈ L^2(ℝ_+ × ℝ^N)

\dot{W}(h) := \int_0^\infty \int_{\mathbb{R}^N} h(t,x)\, \dot{W}(t,x)\, dt\, dx, \quad h \in L^2(\mathbb{R}_+ \times \mathbb{R}^N),

yields a zero-mean Gaussian random field (Ẇ(h))_{h∈L^2(ℝ_+×ℝ^N)} with covariance

\mathbb{E}\big[ \dot{W}(g) \dot{W}(h) \big] = \int_0^\infty \int_{\mathbb{R}^N} g(t,x)\, h(t,x)\, dx\, dt, \quad g, h \in L^2(\mathbb{R}_+ \times \mathbb{R}^N).

The point is that with this interpretation of space-time white noise, since Eq. (3.2) specifies no regularity in the spatial direction (the map x ↦ Y(t,x) is simply assumed to be Lebesgue measurable so that the integral makes sense), it is clear that any solution will be distribution-valued in the spatial direction, which is rather unsatisfactory. Indeed, consider the extremely simple linear case when G ≡ 0 and σ ≡ 1, so that (3.2) reads

\partial_t Y(t,x) = -Y(t,x) + \dot{W}(t,x). \qquad (3.3)

Formally, the solution to this equation is given by

Y(t,x) = e^{-t} Y(0,x) + \int_0^t e^{-(t-s)}\, \dot{W}(s,x)\, ds, \quad t \ge 0,\ x \in \mathbb{R}^N,

and since the integral is only over time it is clear (at least formally) that xY(t,x) is a distribution for all t0. This differs significantly from the usual SPDE situation, when one would typically have an equation such as (3.3) where a second order differential operator in space is applied to the first term on the right-hand side (leading to the much studied stochastic heat equation). In such a case, the semigroup generated by the second order differential operator can be enough to smooth the space-time white noise in the spatial direction, leading to solutions that are continuous in both space and time [at least when the spatial dimension is 1—see for example Pardoux (2007, Chapter 3) or Walsh (1986, Chapter 3)].

Of course one can develop a theory of distribution-valued processes [as is done in Walsh (1986, Chapter 4)] to interpret solutions of (3.2) in the obvious way: one says that the random field (Y(t,x))_{t≥0, x∈ℝ^N} is a (weak) solution to (3.2) if for all ϕ ∈ C_0^∞(ℝ^N) it holds that

\int_{\mathbb{R}^N} \phi(x)\, Y(t,x)\, dx = e^{-t} \int_{\mathbb{R}^N} \phi(x)\, Y(0,x)\, dx + \int_0^t \int_{\mathbb{R}^N} e^{-(t-s)}\, \phi(x) \int_{\mathbb{R}^N} w(x,y)\, G(Y(s,y))\, dy\, dx\, ds + \int_0^t \int_{\mathbb{R}^N} e^{-(t-s)}\, \phi(x)\, \sigma(Y(s,x))\, \dot{W}(s,x)\, dx\, ds,

for all t ≥ 0. Here all the integrals can be well-defined, which makes sense intuitively if we think of Ẇ(t,x) as a distribution. In fact it is more common to write \int_0^t \int_{\mathbb{R}^N} e^{-(t-s)}\, \phi(x)\, \sigma(Y(s,x))\, W(ds\, dx) for the stochastic integral term, once it has been rigorously defined.

However, we argue that it is not worth developing this theory here, since distribution valued solutions are of little interest physically. It is for this reason that we instead look for other types of random noise to add to the deterministic Eq. (3.1) which in particular will be correlated in space that will produce solutions that are real-valued random fields, and are at least Hölder continuous in both space and time. In the theory of SPDEs, when the spatial dimension is 2 or more, the problem of an equation driven by space-time white noise having no real-valued solution is a well-known and much studied one [again see for example Pardoux (2007, Chapter 3) or Walsh (1986, Chapter 3) for a discussion of this]. To get around the problem, a common approach (Dalang and Frangos 1998; Ferrante and Sanz-Solé 2006; Sanz-Solé and Sarrà 2002) is to consider random noises that are smoother than white noise, namely a Gaussian noise that is white in time but has a smooth spatial covariance. Such random noise is known as either spatially colored or spatially homogeneous white-noise. One can then formulate conditions on the covariance function to ensure that real-valued Hölder continuous solutions to the specific SPDE exist.

It should also be mentioned, as remarked in Dalang and Frangos (1998), that in trying to model physical situations, there is some evidence that white-noise smoothed in the spatial direction is more natural, since spatial correlations are typically of a much larger order of magnitude than time correlations.

In the stochastic neural field case, since we have no second order differential operator, our solution will only ever be as smooth as the noise itself. We therefore look to add a noise term to (3.1) that is at least Hölder continuous in the spatial direction instead of pure white noise, and then proceed to look for solutions to the resulting equation in the sense of Walsh.

The section is structured as follows. First we briefly introduce Walsh’s theory of stochastic integration, for which the classical reference is Walsh (1986). This theory will be needed to well-define the stochastic integral in our definition of a solution to the neural field equation. We then introduce the spatially smoothed space-time white noise that we will consider, before finally applying the theory to analyze solutions of the neural field equation driven by this spatially smoothed noise under certain conditions.

Walsh’s stochastic integral

We will not go into the details of the construction of Walsh’s stochastic integral, since a very nice description is given by D. Khoshnevisan in Dalang et al. (2009) [see also Walsh (1986)]. Instead we present the bare essentials needed in the following sections.

The elementary object of study is the centered Gaussian random field

\dot{W} := \big( \dot{W}(A) \big)_{A\in\mathcal{B}(\mathbb{R}_+\times\mathbb{R}^N)}

indexed by A ∈ B(ℝ_+ × ℝ^N) (where ℝ_+ := [0, ∞)) with covariance function

\mathbb{E}\big[ \dot{W}(A) \dot{W}(B) \big] = |A \cap B|, \quad A, B \in \mathcal{B}(\mathbb{R}_+\times\mathbb{R}^N), \qquad (3.4)

where |A ∩ B| denotes the Lebesgue measure of A ∩ B. We say that Ẇ is a white noise on ℝ_+ × ℝ^N. We then define the white noise process W := (W_t(A))_{t≥0, A∈B(ℝ^N)} by

W_t(A) := \dot{W}([0,t] \times A), \quad t \ge 0. \qquad (3.5)

Now define the norm

\|f\|_W^2 := \mathbb{E}\Big[ \int_0^T \int_{\mathbb{R}^N} |f(t,x)|^2\, dt\, dx \Big], \qquad (3.6)

for any (random) function f that is knowable at time t given (W_s(A))_{s≤t, A∈B(ℝ^N)}. Then let P_W be the set of all such functions f for which ‖f‖_W < ∞. The point is that this space forms the set of integrands that can be integrated against the white noise process according to Walsh’s theory.

Indeed, we have the following theorem (Walsh 1986, Theorem 2.5).

Theorem 3.1

For all f ∈ P_W, t ∈ [0,T] and A ∈ B(ℝ^N),

\int_0^t \int_A f(s,x)\, W(ds\, dx)

can be well-defined in L^2(Ω, F, P). Moreover, for all t ∈ (0,T] and A, B ∈ B(ℝ^N), \mathbb{E}\big[ \int_0^t \int_A f(s,x)\, W(ds\, dx) \big] = 0 and

\mathbb{E}\Big[ \int_0^t \int_A f(s,x)\, W(ds\, dx) \int_0^t \int_B f(s,x)\, W(ds\, dx) \Big] = \mathbb{E}\Big[ \int_0^t \int_{A\cap B} f^2(s,x)\, dx\, ds \Big].

The following inequality will also be fundamental:

Theorem 3.2

(Burkholder’s inequality) For all p ≥ 2 there exists a constant c_p (with c_2 = 1) such that for all f ∈ P_W, t ∈ (0,T] and A ∈ B(ℝ^N),

\mathbb{E}\Big[ \Big| \int_0^t \int_A f(s,x)\, W(ds\, dx) \Big|^p \Big] \le c_p\, \mathbb{E}\Big[ \Big( \int_0^T \int_{\mathbb{R}^N} |f(t,x)|^2\, dt\, dx \Big)^{p/2} \Big].

Spatially smoothed space-time white noise

Let W = (W_t(A))_{t≥0, A∈B(ℝ^N)} be a white-noise process as defined in the previous section. For φ ∈ L^2(ℝ^N), we can well-define the (Gaussian) random field (W^φ(t,x))_{t≥0, x∈ℝ^N} for any T > 0 by

W^\varphi(t,x) := \int_0^t \int_{\mathbb{R}^N} \varphi(x-y)\, W(ds\, dy). \qquad (3.7)

To see this one just needs to check that φ(x−·) ∈ P_W for every x, where P_W is as above. The function φ(x−·) is clearly completely determined by W for each x (since it is non-random) and for every T > 0

\|\varphi(x-\cdot)\|_W^2 = \mathbb{E}\Big[ \int_0^T \int_{\mathbb{R}^N} |\varphi(x-z)|^2\, dt\, dz \Big] = T\, \|\varphi\|_{L^2(\mathbb{R}^N)}^2 < \infty,

so that the integral in (3.7) is indeed well-defined in the sense of the above construction. Moreover, by Theorem 3.1 the random field (W^φ(t,x))_{t≥0, x∈ℝ^N} has spatial covariance

\mathbb{E}\big[ W^\varphi(t,x)\, W^\varphi(t,y) \big] = \mathbb{E}\Big[ \int_0^t \int_{\mathbb{R}^N} \varphi(x-z)\, W(ds\, dz) \int_0^t \int_{\mathbb{R}^N} \varphi(y-z)\, W(ds\, dz) \Big] = \int_0^t \int_{\mathbb{R}^N} \varphi(x-z)\, \varphi(y-z)\, dz\, ds = t\, \varphi * \tilde{\varphi}(x-y),

where ∗ denotes the convolution operator as usual, and φ̃(x) = φ(−x). Thus the random field (W^φ(t,x))_{t≥0, x∈ℝ^N} is spatially correlated.
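On a finite periodic grid the field (3.7) is easy to simulate: each time increment is an independent discretized white noise in space, convolved with φ. The following sketch assumes a Gaussian φ and a one-dimensional periodic grid purely for illustration.

```python
import numpy as np

def simulate_W_phi(phi_vals, dx, T=1.0, n_steps=200, seed=0):
    """Simulate W^phi(t, x) = int_0^t int phi(x - y) W(ds, dy) on a periodic 1-d grid:
    each increment over [t, t+dt] is a discretized space-time white noise
    convolved (via FFT) with phi."""
    rng = np.random.default_rng(seed)
    n = phi_vals.size
    dt = T / n_steps
    phi_hat = np.fft.fft(np.fft.ifftshift(phi_vals))
    W_phi = np.zeros((n_steps + 1, n))
    for i in range(n_steps):
        # white-noise increment: variance dt*dx per cell, scaled to a density
        xi = rng.standard_normal(n) * np.sqrt(dt / dx)
        dW_phi = np.fft.ifft(phi_hat * np.fft.fft(xi)).real * dx   # convolution phi * xi
        W_phi[i + 1] = W_phi[i] + dW_phi
    return W_phi

dx = 0.1
x = np.arange(-20.0, 20.0, dx)
phi = np.exp(-x**2)                 # an illustrative phi in L^2(R)
W_phi = simulate_W_phi(phi, dx)
print(W_phi.shape)                  # (201, 400): spatially smooth, Brownian in time
```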

The regularity in time of this process is the same as that of a Brownian path:

Lemma 3.3

For any x ∈ ℝ^N, the path t ↦ W^φ(t,x) has an η-Hölder continuous modification for any η ∈ (0, 1/2).

Proof

For x ∈ ℝ^N, s, t ≥ 0 with s ≤ t and any p ≥ 2 we have by Burkholder’s inequality (Theorem 3.2 above) that

\mathbb{E}\big[ |W^\varphi(t,x) - W^\varphi(s,x)|^p \big] \le c_p\, \|\varphi\|_{L^2(\mathbb{R}^N)}^p\, (t-s)^{p/2}.

The result follows from the standard Kolmogorov continuity theorem [see for example Theorem 4.3 of Dalang et al. (2009, Chapter 1)].

More importantly, if we impose some (very weak) regularity on φ then Wφ inherits some spatial regularity:

Lemma 3.4

Suppose that there exists a constant Cφ such that

\|\varphi - \tau_z(\varphi)\|_{L^2(\mathbb{R}^N)} \le C_\varphi\, |z|^\alpha, \quad z \in \mathbb{R}^N, \qquad (3.8)

for some α ∈ (0, 1], where τ_z indicates the shift by z operator (so that τ_z(φ)(y) := φ(y + z) for all y, z ∈ ℝ^N). Then for all t ≥ 0, the map x ↦ W^φ(t,x) has an η-Hölder continuous modification, for any η ∈ (0, α).

Proof

For x, x̃ ∈ ℝ^N, t ≥ 0, and any p ≥ 2 we have (again by Burkholder’s inequality) that

\mathbb{E}\big[ |W^\varphi(t,x) - W^\varphi(t,\tilde{x})|^p \big] \le t^{p/2} c_p \Big( \int_{\mathbb{R}^N} |\varphi(x-y) - \varphi(\tilde{x}-y)|^2\, dy \Big)^{p/2} = t^{p/2} c_p \Big( \int_{\mathbb{R}^N} |\varphi(y) - \varphi(y+\tilde{x}-x)|^2\, dy \Big)^{p/2} \le t^{p/2} c_p\, C_\varphi^p\, |x-\tilde{x}|^{p\alpha}.

The result follows by Kolmogorov’s continuity theorem.

Remark 3.5

The condition (3.8) with α = 1 is true if and only if the function φ is in the Sobolev space W^{1,2}(ℝ^N) (Brezis 2010, Proposition 9.3).

When α < 1 the set of functions φ ∈ L^2(ℝ^N) which satisfy (3.8) defines a Banach space denoted by N^{α,2}(ℝ^N) which is known as the Nikolskii space. This space is closely related to the more familiar fractional Sobolev space W^{α,2}(ℝ^N) though they are not identical. We refer to Simon (1990) for a detailed study of such spaces and their relationships. An example of when (3.8) holds with α = 1/2 is found by taking φ to be an indicator function. It is in this way we see that (3.8) is a rather weak condition.
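The indicator-function example is easily verified: for φ = 1_{[0,1]} one has ‖φ − τ_z(φ)‖²_{L^2(ℝ)} = 2|z| for |z| ≤ 1, so (3.8) holds with α = 1/2 and C_φ = √2. A short numerical confirmation (the grid resolution is an arbitrary choice):

```python
import numpy as np

dx = 1e-4
x = np.arange(-2.0, 3.0, dx)
phi = ((x >= 0) & (x <= 1)).astype(float)                   # phi = indicator of [0, 1]

for z in [0.01, 0.1, 0.5]:
    shifted = ((x + z >= 0) & (x + z <= 1)).astype(float)   # tau_z(phi)(x) = phi(x + z)
    norm = np.sqrt(np.sum((phi - shifted)**2) * dx)
    print(z, norm, np.sqrt(2 * z))                          # matches sqrt(2) * |z|^{1/2}
```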

The stochastic neural field equation driven by spatially smoothed space-time white noise

We now have everything in place to define and study the solution to the stochastic neural field equation driven by a spatially smoothed space-time white noise. Indeed, consider the equation

\partial_t Y(t,x) = -Y(t,x) + \int_{\mathbb{R}^N} w(x,y)\, G(Y(t,y))\, dy + \sigma(Y(t,x))\, \partial_t W^\varphi(t,x), \qquad (3.9)

with initial condition Y(0,x) = Y_0(x) for x ∈ ℝ^N and t ≥ 0, where (W^φ(t,x))_{t≥0, x∈ℝ^N} is the spatially smoothed space-time white noise defined by (3.7) for some φ ∈ L^2(ℝ^N). As above, we will impose Lipschitz assumptions on σ and G, by supposing that

  • σ: ℝ → ℝ is globally Lipschitz [exactly as in (2.15)], i.e. there exists a constant C_σ such that
    |\sigma(a) - \sigma(b)| \le C_\sigma |a-b|, \quad \text{and} \quad |\sigma(a)| \le C_\sigma (1+|a|), \quad \forall a, b \in \mathbb{R};
  • G: ℝ → ℝ is bounded and globally Lipschitz (exactly as above), i.e. such that there exists a constant C_G with sup_{a∈ℝ} |G(a)| ≤ C_G and
    |G(a) - G(b)| \le C_G |a-b|, \quad \forall a, b \in \mathbb{R}.

Although the above equation is not well-defined (∂_t W^φ(t,x) does not exist), we will interpret a solution to (3.9) in the following way.

Definition 3.6

By a solution to (3.9) we will mean a real-valued random field (Y(t,x))_{t≥0, x∈ℝ^N} such that

Y(t,x) = e^{-t} Y_0(x) + \int_0^t e^{-(t-s)} \int_{\mathbb{R}^N} w(x,y)\, G(Y(s,y))\, dy\, ds + \int_0^t \int_{\mathbb{R}^N} e^{-(t-s)}\, \sigma(Y(s,x))\, \varphi(x-y)\, W(ds\, dy), \qquad (3.10)

almost surely for all t ≥ 0 and x ∈ ℝ^N, where the stochastic integral term is understood in the sense described in Sect. 3.1.
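Although the present section is only concerned with well-posedness, Definition 3.6 also suggests how a practitioner might simulate (3.9): discretize space, replace the stochastic integral by spatially smoothed noise increments as in Sect. 3.2, and step forward in time. The following explicit Euler sketch is one naive possibility; the scheme, grid, parameters and the choice of a homogeneous kernel are all illustrative assumptions, and no convergence claim is made here.

```python
import numpy as np

def simulate_neural_field(x, w_row, G, sigma, phi, Y0, T=5.0, n_steps=500, seed=0):
    """Naive explicit Euler scheme for (3.9)/(3.10) on a periodic 1-d grid:
    Y_{k+1} = Y_k + dt * ( -Y_k + conv(w, G(Y_k)) ) + sigma(Y_k) * dW^phi_k,
    where w is homogeneous with profile w_row and dW^phi_k is a spatially
    smoothed white-noise increment (cf. Sect. 3.2)."""
    rng = np.random.default_rng(seed)
    dx = x[1] - x[0]
    dt = T / n_steps
    w_hat = np.fft.fft(np.fft.ifftshift(w_row))
    phi_hat = np.fft.fft(np.fft.ifftshift(phi))
    Y = Y0.copy()
    for _ in range(n_steps):
        drift = -Y + np.fft.ifft(w_hat * np.fft.fft(G(Y))).real * dx
        xi = rng.standard_normal(x.size) * np.sqrt(dt / dx)
        dW_phi = np.fft.ifft(phi_hat * np.fft.fft(xi)).real * dx
        Y = Y + dt * drift + sigma(Y) * dW_phi
    return Y

dx = 0.1
x = np.arange(-20.0, 20.0, dx)
w_row = 2.0 * np.exp(-x**2 / 2.0) - np.exp(-x**2 / 8.0)   # Mexican hat profile (2.13)
G = lambda a: 1.0 / (1.0 + np.exp(-a))                    # sigmoid gain
sigma = lambda a: 0.1 * a                                 # sigma(a) = lambda * a
phi = np.exp(-x**2)                                       # spatial smoothing of the noise
Y0 = np.exp(-x**2)
Y_T = simulate_neural_field(x, w_row, G, sigma, phi, Y0)
print(Y_T.shape)
```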

Once again we are interested in the conditions on the neural field kernel w that allow us to prove the existence of solutions in this new sense. Recall that in Sect. 2 we either required conditions (C1) and (C2) or (C1') and (C2') to be satisfied. The difficulty was to keep everything well-behaved in the Hilbert space L^2(ℝ^N) (or L^2(ℝ^N, ρ)). However, when looking for solutions in the sense of random fields (Y(t,x))_{t≥0, x∈ℝ^N} such that (3.10) is satisfied, such restrictions are no longer needed, principally because we no longer have to concern ourselves with the behavior in space at infinity. Indeed, in this section we simply work with the condition (C2'), i.e. that

\forall x \in \mathbb{R}^N\ (y \mapsto w(x,y)) \in L^1(\mathbb{R}^N), \quad \text{and} \quad \sup_{x\in\mathbb{R}^N} \|w(x,\cdot)\|_{L^1(\mathbb{R}^N)} \le C_w,

for some constant Cw. Using the standard technique of a Picard iteration scheme [closely following Walsh (1986, Theorem 3.2)] and the simple properties of the Walsh stochastic integral stated in Sect. 3.1, we can prove the following:

Theorem 3.7

Suppose that the map x ↦ Y_0(x) is Borel measurable almost surely, and that

\sup_{x\in\mathbb{R}^N} \mathbb{E}\big[ |Y_0(x)|^2 \big] < \infty.

Suppose moreover that the neural field kernel w satisfies condition (C2'). Then there exists an almost surely unique predictable random field (Y(t,x))_{t≥0, x∈ℝ^N} which is a solution to (3.9) in the sense of Definition 3.6 such that

\sup_{t\in[0,T], x\in\mathbb{R}^N} \mathbb{E}\big[ |Y(t,x)|^2 \big] < \infty, \qquad (3.11)

for any T>0.

Proof

The proof proceeds in a classical way, but where we are careful to interpret all stochastic integrals as described in Sect. 3.1, and so we provide the details.

Uniqueness: Suppose that (Y(t,x))_{t≥0, x∈ℝ^N} and (Z(t,x))_{t≥0, x∈ℝ^N} are both solutions to (3.9) in the sense of Definition 3.6. Let D(t,x) = Y(t,x) − Z(t,x) for x ∈ ℝ^N and t ≥ 0. Then we have

D(t,x) = \int_0^t e^{-(t-s)} \int_{\mathbb{R}^N} w(x,y)\big[ G(Y(s,y)) - G(Z(s,y)) \big] dy\, ds + \int_0^t \int_{\mathbb{R}^N} e^{-(t-s)}\big[ \sigma(Y(s,x)) - \sigma(Z(s,x)) \big] \varphi(x-y)\, W(ds\, dy).

Therefore

\mathbb{E}\big[ |D(t,x)|^2 \big] \le 2\, \mathbb{E}\Big[ \Big( \int_0^t e^{-(t-s)} \int_{\mathbb{R}^N} |w(x,y)|\, |G(Y(s,y)) - G(Z(s,y))|\, dy\, ds \Big)^2 \Big] + 2\, \mathbb{E}\Big[ \Big( \int_0^t \int_{\mathbb{R}^N} e^{-(t-s)}\big[ \sigma(Y(s,x)) - \sigma(Z(s,x)) \big] \varphi(x-y)\, W(ds\, dy) \Big)^2 \Big] \le 2t \int_0^t e^{-2(t-s)}\, \mathbb{E}\Big[ \Big( \int_{\mathbb{R}^N} |w(x,y)|\, |G(Y(s,y)) - G(Z(s,y))|\, dy \Big)^2 \Big] ds + 2 \int_0^t \int_{\mathbb{R}^N} e^{-2(t-s)}\, \mathbb{E}\big[ |\sigma(Y(s,x)) - \sigma(Z(s,x))|^2 \big]\, |\varphi(x-y)|^2\, ds\, dy,

where we have used Cauchy–Schwarz and Burkholder’s inequality (Theorem 3.2) with p = 2. Thus, using the Lipschitz property of σ and G,

\mathbb{E}\big[ |D(t,x)|^2 \big] \le 2t\, C_G^2 \int_0^t e^{-2(t-s)}\, \mathbb{E}\Big[ \Big( \int_{\mathbb{R}^N} |w(x,y)|\, |D(s,y)|\, dy \Big)^2 \Big] ds + 2\, C_\sigma^2\, \|\varphi\|_{L^2(\mathbb{R}^N)}^2 \int_0^t e^{-2(t-s)}\, \mathbb{E}\big[ |D(s,x)|^2 \big]\, ds.

By the Cauchy–Schwarz inequality once again

\mathbb{E}\big[ |D(t,x)|^2 \big] \le 2t\, C_G^2\, \|w(x,\cdot)\|_{L^1(\mathbb{R}^N)} \int_0^t e^{-2(t-s)} \int_{\mathbb{R}^N} |w(x,y)|\, \mathbb{E}\big[ |D(s,y)|^2 \big]\, dy\, ds + 2\, C_\sigma^2\, \|\varphi\|_{L^2(\mathbb{R}^N)}^2 \int_0^t e^{-2(t-s)}\, \mathbb{E}\big[ |D(s,x)|^2 \big]\, ds.

Let H(s) := \sup_{x\in\mathbb{R}^N} \mathbb{E}\big[ |D(s,x)|^2 \big], which is finite since we are assuming Y and Z satisfy (3.11). Writing K = 2\max\{C_\sigma^2, C_G^2\}, we have

\mathbb{E}\big[ |D(t,x)|^2 \big] \le K \big( t\, C_w^2 + \|\varphi\|_{L^2(\mathbb{R}^N)}^2 \big) \int_0^t e^{-2(t-s)} H(s)\, ds, \quad \text{so that} \quad H(t) \le K \big( t\, C_w^2 + \|\varphi\|_{L^2(\mathbb{R}^N)}^2 \big) \int_0^t H(s)\, ds.

An application of Gronwall’s lemma then yields \sup_{s \le t} H(s) = 0 for all t ≥ 0. Hence Y(t,x) = Z(t,x) almost surely for all t ≥ 0, x ∈ ℝ^N.

Existence: Let Y^0(t,x) = Y_0(x). Then define iteratively for n ∈ ℕ_0, t ≥ 0, x ∈ ℝ^N,

Y^{n+1}(t,x) := e^{-t} Y_0(x) + \int_0^t e^{-(t-s)} \int_{\mathbb{R}^N} w(x,y)\, G(Y^n(s,y))\, dy\, ds + \int_0^t \int_{\mathbb{R}^N} e^{-(t-s)}\, \sigma(Y^n(s,x))\, \varphi(x-y)\, W(ds\, dy). \qquad (3.12)

We first check that the stochastic integral is well-defined, under the assumption that

\sup_{t\in[0,T], x\in\mathbb{R}^N} \mathbb{E}\big[ |Y^n(t,x)|^2 \big] < \infty, \qquad (3.13)

for any T > 0, which we know is true for n = 0 by assumption, and which we show below by induction to be true for each integer n ≥ 1 as well. To this end, for any T > 0

\mathbb{E}\Big[ \int_0^T \int_{\mathbb{R}^N} e^{-2(t-s)}\, \sigma^2(Y^n(s,x))\, \varphi^2(x-y)\, ds\, dy \Big] \le 2\, C_\sigma^2\, \|\varphi\|_{L^2(\mathbb{R}^N)}^2 \int_0^T \big( 1 + \mathbb{E}\big[ |Y^n(s,x)|^2 \big] \big)\, ds \le 2\, C_\sigma^2\, \|\varphi\|_{L^2(\mathbb{R}^N)}^2\, T \Big( 1 + \sup_{t\in[0,T], x\in\mathbb{R}^N} \mathbb{E}\big[ |Y^n(t,x)|^2 \big] \Big) < \infty.

This shows that the integrand in the stochastic integral is in the space P_W (for all T > 0), which in turn implies that the stochastic integral in the sense of Walsh is indeed well-defined (by Theorem 3.1).

Now define D^n(t,x) := Y^{n+1}(t,x) − Y^n(t,x) for n ∈ ℕ_0, t ≥ 0 and x ∈ ℝ^N. Then exactly as in the uniqueness calculation we have

\mathbb{E}\big[ |D^n(t,x)|^2 \big] \le 2t\, C_G^2\, C_w \int_0^t e^{-2(t-s)} \int_{\mathbb{R}^N} |w(x,y)|\, \mathbb{E}\big[ |D^{n-1}(s,y)|^2 \big]\, dy\, ds + 2\, C_\sigma^2\, \|\varphi\|_{L^2(\mathbb{R}^N)}^2 \int_0^t \mathbb{E}\big[ |D^{n-1}(s,x)|^2 \big]\, e^{-2(t-s)}\, ds.

This implies that by setting H_n(s) = \sup_{x\in\mathbb{R}^N} \mathbb{E}\big[ |D^n(s,x)|^2 \big],

H_n(t) \le K^n \big( t\, C_w^2 + \|\varphi\|_{L^2(\mathbb{R}^N)}^2 \big)^n \int_0^t \int_0^{t_1} \cdots \int_0^{t_{n-1}} H_0(t_n)\, dt_n \cdots dt_1, \qquad (3.14)

for all n ∈ ℕ_0 and t ≥ 0. Now, similarly, we can find a constant C_t such that

\mathbb{E}\big[ |D^0(s,x)|^2 \big] \le C_t \Big( 1 + \sup_{x\in\mathbb{R}^N} \mathbb{E}\big[ |Y_0(x)|^2 \big] \Big),

for any x ∈ ℝ^N and s ∈ [0,t], so that for s ∈ [0,t],

H_0(s) = \sup_{x\in\mathbb{R}^N} \mathbb{E}\big[ |D^0(s,x)|^2 \big] \le C_t \Big( 1 + \sup_{x\in\mathbb{R}^N} \mathbb{E}\big[ |Y_0(x)|^2 \big] \Big).

Using this in (3.14), we see that

H_n(t) \le C_t\, K^n \big( t\, C_w^2 + \|\varphi\|_{L^2(\mathbb{R}^N)}^2 \big)^n \Big( 1 + \sup_{x\in\mathbb{R}^N} \mathbb{E}\big[ |Y_0(x)|^2 \big] \Big) \frac{t^n}{n!},

for all t ≥ 0. This is sufficient to see that (3.13) holds uniformly in n.

By completeness, for each t ≥ 0 and x ∈ ℝ^N there exists Y(t,x) ∈ L^2(Ω, F, P) such that Y(t,x) is the limit in L^2(Ω, F, P) of the sequence of square-integrable random variables (Y^n(t,x))_{n≥1}. Moreover, the convergence is uniform on [0,T] × ℝ^N, i.e.

\sup_{t\in[0,T], x\in\mathbb{R}^N} \mathbb{E}\big[ |Y^n(t,x) - Y(t,x)|^2 \big] \to 0.

From this we can see that (3.11) is satisfied for the random field (Y(t,x))_{t≥0, x∈ℝ^N}. It remains to show that (Y(t,x))_{t≥0, x∈ℝ^N} satisfies (3.10) almost surely. By the above uniform convergence, we have that

\mathbb{E}\Big[ \Big( \int_0^t \int_{\mathbb{R}^N} e^{-(t-s)}\big[ \sigma(Y^n(s,x)) - \sigma(Y(s,x)) \big] \varphi(x-y)\, W(ds\, dy) \Big)^2 \Big] \to 0,

and

\mathbb{E}\Big[ \Big( \int_0^t e^{-(t-s)} \int_{\mathbb{R}^N} w(x,y)\big[ G(Y^n(s,y)) - G(Y(s,y)) \big]\, dy\, ds \Big)^2 \Big] \to 0,

uniformly for all t ≥ 0 and x ∈ ℝ^N. Thus taking the limit as n → ∞ in (3.12) [in the L^2(Ω, F, P) sense] proves that (Y(t,x))_{t≥0, x∈ℝ^N} does indeed satisfy (3.10) almost surely.

In a very similar way, one can also prove that the solution remains L^p-bounded whenever the initial condition is L^p-bounded for any p > 2. Moreover, this also allows us to conclude that the solution has time continuous paths for all x ∈ ℝ^N.

Theorem 3.8

Suppose that we are in the situation of Theorem 3.7, but in addition we have that \sup_{x\in\mathbb{R}^N} \mathbb{E}[|Y_0(x)|^p] < \infty for some p > 2. Then the solution (Y(t,x))_{t≥0, x∈ℝ^N} to (3.9) in the sense of Definition 3.6 is L^p-bounded on [0,T] × ℝ^N for any T, i.e.

\sup_{t\in[0,T], x\in\mathbb{R}^N} \mathbb{E}\big[ |Y(t,x)|^p \big] < \infty,

and the map t ↦ Y(t,x) has a continuous version for all x ∈ ℝ^N.

If the initial condition has finite p-moments for all p > 2, then t ↦ Y(t,x) has an η-Hölder continuous version, for any η ∈ (0, 1/2) and any x ∈ ℝ^N.

Proof

The proof of the first part of this result uses similar techniques as in the proof of Theorem 3.7 in order to bound \mathbb{E}[|Y(t,x)|^p] uniformly in t ∈ [0,T] and x ∈ ℝ^N. In particular, we use the form of Y(t,x) given by (3.10), Burkholder’s inequality (see Theorem 3.2), Hölder’s inequality and Gronwall’s lemma, as well as the conditions imposed on w, σ, G and φ.

For the time continuity, we again use similar techniques to achieve the bound

\[
\mathbb{E}\left[\left|Y(t,x) - Y(s,x)\right|^p\right] \le C_T(p)\left(1 + \sup_{r\in[0,T],\,y\in\mathbb{R}^N}\mathbb{E}|Y(r,y)|^p\right)(t-s)^{\frac{p}{2}},
\]

for all $s,t\in[0,T]$ with $s\le t$ and $x\in\mathbb{R}^N$, for some constant $C_T(p)$. The results then follow from Kolmogorov's continuity theorem once again.
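To spell out this last step (using only the bound just obtained and the standard form of Kolmogorov's continuity criterion), write $(t-s)^{p/2} = (t-s)^{1 + (p/2 - 1)}$: for $p>2$ this yields an $\eta$-Hölder continuous version of $t\mapsto Y(t,x)$ for any

\[
\eta < \frac{\tfrac{p}{2}-1}{p} = \frac{1}{2} - \frac{1}{p},
\]

and letting $p\to\infty$ (possible when the initial condition has finite moments of every order) gives the stated range $\eta\in(0,1/2)$.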

Spatial regularity of solution

As mentioned in the introduction to this section, the spatial regularity of the solution $(Y(t,x))_{t\ge 0, x\in\mathbb{R}^N}$ to (3.9) is of interest. In particular, we would like to find conditions under which it is at least continuous in space. As we saw in Lemma 3.4, under the weak condition on $\varphi$ given by (3.8), the spatially smoothed space-time white noise is continuous in space. We here show that under this assumption, together with a Hölder continuity type condition on the neural field kernel $w$, the solution $(Y(t,x))_{t\ge 0, x\in\mathbb{R}^N}$ inherits the spatial regularity of the driving noise.

It is worth mentioning that the neural field equation fits into the class of degenerate diffusion SPDEs (indeed there is no diffusion term), and that regularity theory for such equations is an area that is currently very active [see for example Hofmanová (2013) and references therein]. However, in our case we are not concerned with any kind of sharp regularity results [in contrast to those found in Dalang and Sanz-Solé (2009) for the stochastic wave equation], and simply want to assert that for most typical choices of neural field kernels w made by practitioners, the random field solution to the neural field equation is at least regular in space. The results of the section are simple applications of standard techniques to prove continuity in space of random field solutions to SPDEs, as is done for example in Walsh (1986, Corollary 3.4).

The condition we introduce on w is the following:

\[
\exists K_w \ge 0 \quad\text{s.t.}\quad \|w(x,\cdot) - w(\tilde{x},\cdot)\|_{L^1(\mathbb{R}^N)} \le K_w|x - \tilde{x}|^\alpha, \quad \forall x,\tilde{x}\in\mathbb{R}^N,
\tag{C3'}
\]

for some $\alpha\in(0,1]$.

Remark 3.9

This condition is certainly satisfied for all typical choices of neural field kernel $w$. In particular, any smooth rapidly decaying function will satisfy (C3').
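For instance (a worked case, not taken from the text), for a homogeneous kernel $w(x,y) = g(x-y)$ with $g$ smooth, integrable and with integrable gradient, the fundamental theorem of calculus and Fubini's theorem give

\[
\|w(x,\cdot) - w(\tilde{x},\cdot)\|_{L^1(\mathbb{R}^N)}
= \int_{\mathbb{R}^N}\left|\int_0^1\nabla g\big(\tilde{x} + \theta(x-\tilde{x}) - y\big)\cdot(x-\tilde{x})\,d\theta\right|dy
\le \|\nabla g\|_{L^1(\mathbb{R}^N)}\,|x-\tilde{x}|,
\]

so that (C3') holds with $\alpha = 1$ and $K_w = \|\nabla g\|_{L^1(\mathbb{R}^N)}$; for the exponential kernel of Example 4.2 below this constant is finite.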

Theorem 3.10

(Regularity) Suppose that we are in the situation of Theorem 3.7 and

\[
\sup_{x\in\mathbb{R}^N}\mathbb{E}|Y_0(x)|^p < \infty
\]

for all $p\ge 2$. Suppose moreover that there exists $\alpha\in(0,1]$ such that

  • w satisfies (C3’);

  • φ satisfies (3.8) i.e.
    φ-τz(φ)L2(RN)Cφ|z|α,zRN,
    where τz indicates the shift by zRN operator;
  • xY0(x) is α-Hölder continuous.

Then (Y(t,x))t0,xRN has a modification such that (t,x)Y(t,x) is (η1,η2)-Hölder continuous, for any η1(0,1/2) and η2(0,α).

Proof

Let $(Y(t,x))_{t\ge 0, x\in\mathbb{R}^N}$ be the mild solution to (3.9), which exists and is unique by Theorem 3.7. The stated regularity in time is given in Theorem 3.8. It thus remains to prove the regularity in space.

Let $t\ge 0$, $x\in\mathbb{R}^N$. Then by (3.10)

\[
Y(t,x) = e^{-t}Y_0(x) + I_1(t,x) + I_2(t,x),
\tag{3.15}
\]

for all $t\ge 0$ and $x\in\mathbb{R}^N$, where $I_1(t,x) = \int_0^t e^{-(t-s)}\int_{\mathbb{R}^N} w(x,y)G(Y(s,y))\,dy\,ds$ and $I_2(t,x) = \int_0^t\!\!\int_{\mathbb{R}^N} e^{-(t-s)}\sigma(Y(s,x))\varphi(x-y)\,W(ds\,dy)$.

Now let $p\ge 2$. The aim is to estimate $\mathbb{E}|Y(t,x) - Y(t,\tilde{x})|^p$ for $x,\tilde{x}\in\mathbb{R}^N$, and then to use Kolmogorov's theorem to get the stated spatial regularity. To this end, we have that

\[
\mathbb{E}\left[\left|I_1(t,x) - I_1(t,\tilde{x})\right|^p\right]
\le \mathbb{E}\left[\left(\int_0^t\!\!\int_{\mathbb{R}^N}|w(x,y) - w(\tilde{x},y)||G(Y(s,y))|\,dy\,ds\right)^p\right]
\le C_G^p\,t^p\,\|w(x,\cdot) - w(\tilde{x},\cdot)\|^p_{L^1(\mathbb{R}^N)}
\le C_G^p\,t^p\,K_w^p\,|x-\tilde{x}|^{p\alpha},
\tag{3.16}
\]

where we have used (C3’). Moreover, by Hölder’s and Burkhölder’s inequalities once again, we see that

\[
\begin{aligned}
\mathbb{E}\left[\left|I_2(t,x) - I_2(t,\tilde{x})\right|^p\right]
&\le 2^{p-1}\,\mathbb{E}\left[\left|\int_0^t\!\!\int_{\mathbb{R}^N} e^{-(t-s)}\left[\sigma(Y(s,x)) - \sigma(Y(s,\tilde{x}))\right]\varphi(x-y)\,W(dy\,ds)\right|^p\right]\\
&\quad + 2^{p-1}\,\mathbb{E}\left[\left|\int_0^t\!\!\int_{\mathbb{R}^N} e^{-(t-s)}\sigma(Y(s,\tilde{x}))\left[\varphi(x-y) - \varphi(\tilde{x}-y)\right]W(dy\,ds)\right|^p\right]\\
&\le 2^{p-1}c_p\,\mathbb{E}\left[\left(\int_0^t\!\!\int_{\mathbb{R}^N}\left|\sigma(Y(s,x)) - \sigma(Y(s,\tilde{x}))\right|^2\varphi(x-y)^2\,dy\,ds\right)^{\frac{p}{2}}\right]\\
&\quad + 2^{p-1}c_p\,\mathbb{E}\left[\left(\int_0^t\!\!\int_{\mathbb{R}^N}\sigma(Y(s,\tilde{x}))^2\left|\varphi(x-y) - \varphi(\tilde{x}-y)\right|^2\,dy\,ds\right)^{\frac{p}{2}}\right],
\end{aligned}
\]

for all $x,\tilde{x}\in\mathbb{R}^N$ and $p\ge 2$. Thus

\[
\mathbb{E}\left[\left|I_2(t,x) - I_2(t,\tilde{x})\right|^p\right]
\le 2^{p-1}c_p\,C_\sigma^p\,t^{\frac{p}{2}-1}\|\varphi\|^p_{L^2(\mathbb{R}^N)}\int_0^t\mathbb{E}\left|Y(s,x) - Y(s,\tilde{x})\right|^p ds
+ 2^{2(p-1)}c_p\,C_\sigma^p\,t^{\frac{p}{2}}\,\|\varphi - \tau_{\tilde{x}-x}(\varphi)\|^p_{L^2(\mathbb{R}^N)}\left(1 + \sup_{s\in[0,T],\,y\in\mathbb{R}^N}\mathbb{E}\left|Y(s,y)\right|^p\right),
\tag{3.17}
\]

where we note that the right-hand side is finite thanks to Theorem 3.8. Returning to (3.15) and using estimates (3.16) and (3.17), we see that there exists a constant $C_T(p)$ (depending on $T$, $p$, $C_G$, $K_w$, $C_\sigma$, $C_\varphi$, $\|\varphi\|_{L^2(\mathbb{R}^N)}$, as well as $\sup_{s\in[0,T],\,y\in\mathbb{R}^N}\mathbb{E}|Y(s,y)|^p$) such that

\[
\mathbb{E}\left[\left|Y(t,x) - Y(t,\tilde{x})\right|^p\right]
\le C_T(p)\left(\mathbb{E}\left|Y_0(x) - Y_0(\tilde{x})\right|^p + |x-\tilde{x}|^{p\alpha} + \int_0^t\mathbb{E}\left|Y(s,x) - Y(s,\tilde{x})\right|^p ds\right)
\le C_T(p)\left(|x-\tilde{x}|^{p\alpha} + \int_0^t\mathbb{E}\left|Y(s,x) - Y(s,\tilde{x})\right|^p ds\right),
\]

where the last line follows from our assumptions on $Y_0$ and by adjusting the constant $C_T(p)$. This bound holds for all $t\ge 0$, $x,\tilde{x}\in\mathbb{R}^N$ and $p\ge 2$. The proof is then completed using Gronwall's inequality, and Kolmogorov's continuity theorem once again.
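To make the final exponent count explicit (using only the bound above, Gronwall's inequality, and the standard $N$-parameter form of Kolmogorov's criterion), Gronwall's inequality gives

\[
\mathbb{E}\left[\left|Y(t,x) - Y(t,\tilde{x})\right|^p\right] \le C_T(p)\,e^{C_T(p)\,T}\,|x-\tilde{x}|^{N + (p\alpha - N)},
\]

so that, whenever $p\alpha > N$, the field $x\mapsto Y(t,x)$ admits an $\eta_2$-Hölder continuous modification for every $\eta_2 < \alpha - N/p$; letting $p\to\infty$ yields the full range $\eta_2\in(0,\alpha)$, which, combined with the temporal estimate of Theorem 3.8, gives the stated joint $(\eta_1,\eta_2)$-Hölder modification.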

Comparison of the two approaches

The purpose of this section is to compare the two different approaches taken in Sects. 2 and 3 above to give sense to the stochastic neural field equation. Such a comparison of the two approaches in a general setting has existed for a long time in the probability literature [see for example Jetschke (1982, 1986), or more recently Dalang and Quer-Sardanyons (2011)], but we provide a proof of the main result (Theorem 4.1) in the Appendix for completeness.

Our starting point is the random field solution, given by Theorem 3.7. Suppose that the conditions of Theorem 3.7 are satisfied [i.e. $\varphi\in L^2(\mathbb{R}^N)$, $\sigma:\mathbb{R}\to\mathbb{R}$ Lipschitz, $G:\mathbb{R}\to\mathbb{R}$ Lipschitz and bounded, $w$ satisfies (C2') and the given assumptions on the initial condition]. Then, by that result, there exists a unique random field $(Y(t,x))_{t\ge 0, x\in\mathbb{R}^N}$ such that

\[
Y(t,x) = e^{-t}Y_0(x) + \int_0^t e^{-(t-s)}\int_{\mathbb{R}^N} w(x,y)G(Y(s,y))\,dy\,ds + \int_0^t\!\!\int_{\mathbb{R}^N} e^{-(t-s)}\sigma(Y(s,x))\varphi(x-y)\,W(ds\,dy),
\tag{4.1}
\]

where

\[
\sup_{t\in[0,T],\,x\in\mathbb{R}^N}\mathbb{E}|Y(t,x)|^2 < \infty,
\tag{4.2}
\]

for all $T>0$, and we say that $(Y(t,x))_{t\ge 0, x\in\mathbb{R}^N}$ is the random field solution to the stochastic neural field equation.

It turns out that this random field solution is equivalent to the Hilbert space valued solution constructed in Sect. 2, in the following sense.

Theorem 4.1

Suppose the conditions of Theorem 3.7 and Theorem 3.8 are satisfied. Moreover suppose that condition (C1') is satisfied for some $\rho_w\in L^1(\mathbb{R}^N)$. Then the random field $(Y(t,x))_{t\ge 0, x\in\mathbb{R}^N}$ satisfying (4.1) and (4.2) is such that $(Y(t))_{t\ge 0} := (Y(t,\cdot))_{t\ge 0}$ is the unique $L^2(\mathbb{R}^N,\rho_w)$-valued solution to the stochastic evolution equation

\[
dY(t) = \left[-Y(t) + F(Y(t))\right]dt + B(Y(t))\,dW(t), \quad t\in[0,T],
\tag{4.3}
\]

constructed in Theorem 2.2, where $B: H\to L_0(U,H)$ (with $U = L^2(\mathbb{R}^N)$ and $H = L^2(\mathbb{R}^N,\rho_w)$) is given by (2.14), i.e.

\[
B(h)(u)(x) := \sigma(h(x))\int_{\mathbb{R}^N}\varphi(x-y)u(y)\,dy, \quad h\in H,\ u\in U.
\]
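On a spatial grid this operator is simply a pointwise multiplication applied after a convolution-type quadrature. The following minimal sketch (the grid, the quadrature rule, and the choices of $\sigma$, $\varphi$, $h$ and $u$ are all illustrative assumptions) makes the action of $B$ explicit:

import numpy as np

def apply_B(h, u, x, sigma, phi):
    # B(h)(u)(x_i) = sigma(h(x_i)) * int phi(x_i - y) u(y) dy, via a Riemann-sum quadrature
    dx = x[1] - x[0]
    smoothing = phi(x[:, None] - x[None, :]) @ u * dx
    return sigma(h) * smoothing

# Illustrative usage
x = np.linspace(-10.0, 10.0, 401)
h = np.exp(-x ** 2)                                   # a state in H
u = np.cos(x)                                         # a direction in U
Bhu = apply_B(h, u, x, sigma=lambda a: 0.1 * a, phi=lambda z: np.exp(-z ** 2))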

Example 4.2

We finish this section with an example illustrating the above result, and the applicability of the two approaches. Indeed, we make the same choices for the neural field kernel w and noise term as in Bressloff and Webber (2012), by taking

\[
w(x,y) = \frac{1}{2\beta}e^{-\frac{|x-y|}{\beta}}, \quad x,y\in\mathbb{R}^N, \qquad \sigma(a) = \lambda a, \quad a\in\mathbb{R},
\]

where $\beta$ and $\lambda$ are constants. As noted in Sect. 2.6, $\beta$ determines the range of the local synaptic connections. Then, first of all, it is clear that condition (C2') is satisfied (indeed $x\mapsto\|w(x,\cdot)\|_{L^1(\mathbb{R}^N)}$ is constant) and $\sigma$ is Lipschitz and of linear growth, so that (assuming the initial condition has finite moments) Theorems 3.7 and 3.8 can be applied to yield a unique random field solution $(Y(t,x))_{t\ge 0, x\in\mathbb{R}^N}$ to the stochastic neural field equation. Moreover, by Example 2 in Sect. 2.8, we also see that (C1') is satisfied. Thus Theorem 2.2 can also be applied to construct a Hilbert space valued solution to the stochastic neural field equation (Eq. (4.3)). By Theorem 4.1, the solutions are equivalent.
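For readers who wish to experiment with these modelling choices, the following sketch applies a basic Euler–Maruyama step to the stochastic neural field equation on a truncated, discretized one-dimensional domain; the correlation function $\varphi$, the gain $G$, the initial condition and all numerical parameters below are illustrative assumptions, not choices made in Bressloff and Webber (2012).

import numpy as np

# Illustrative Euler-Maruyama discretization with w(x,y) = exp(-|x-y|/beta)/(2*beta)
# and sigma(a) = lambda*a; all parameter values are assumptions for this sketch.
L, n, T, dt = 20.0, 401, 5.0, 0.01
beta, lam = 1.0, 0.05
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
rng = np.random.default_rng(1)

w = np.exp(-np.abs(x[:, None] - x[None, :]) / beta) / (2 * beta)
G = lambda a: 1.0 / (1.0 + np.exp(-10.0 * (a - 0.2)))      # assumed sigmoidal gain
phi = np.exp(-(x[:, None] - x[None, :]) ** 2)               # assumed correlation function phi(x-y)

Y = np.exp(-x ** 2)                                         # assumed initial bump Y_0(x)
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt * dx), size=n)          # space-time white noise increment
    drift = -Y + (w @ G(Y)) * dx                            # -Y + int w(x,y) G(Y(y)) dy
    Y = Y + drift * dt + lam * Y * (phi @ dW)               # one Euler-Maruyama step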

Conclusion

We have here explored two rigorous frameworks in which stochastic neural field equations can be studied in a mathematically precise fashion. Both these frameworks are useful in the mathematical neuroscience literature: the approach using the theory of Hilbert space valued processes is adopted in Kuehn and Riedler (2014), while the random field framework is more natural for Bressloff, Ermentrout and their associates in Bressloff and Webber (2012), Bressloff and Wilkerson (2012), Kilpatrick and Ermentrout (2013).

It turns out that the constructions are equivalent (see Sect. 4), when all the conditions are satisfied (which we emphasize is certainly the case for all usual modeling choices of the neural field kernel w and noise terms made in the literature; see Sects. 2.6, 2.7 and Example 4.2). However, there are still some advantages and disadvantages to taking one approach over the other, depending on the purpose. For example, an advantage of the construction of a solution as a stochastic process taking values in a Hilbert space, carried out in Sect. 2, is that it allows one to consider more general diffusion coefficients. Moreover, it is easy to apply results from a large body of literature taking this approach (for example large deviation principle results; see Remark 2.3). A disadvantage is that we have to be careful to impose conditions which control the behavior of the solution in space at infinity and guarantee the integrability of the solution. In particular, we require that the connectivity function w either satisfies the strong conditions (C1) and (C2), or the weaker but harder to check conditions (C1') and (C2').

On the other hand, the advantage of the random field approach developed in Sect. 3 is that one no longer needs to control what happens at infinity. We therefore require fewer conditions on the connectivity function w to ensure the existence of a solution [(C2’) is sufficient—see Theorem 3.7]. Moreover, with this approach, it is easier to write down conditions that guarantee the existence of a solution that is continuous in both space and time (as opposed to the Hilbert space approach, where spatial regularity is somewhat hidden). However, in order to avoid non-physical distribution valued solutions, we had to impose a priori some extra spatial regularity on the noise (see Sect. 3.2).

Acknowledgments

The authors are grateful to James Maclaurin for suggesting the use of the Fourier transform in Example 2 on page 18, to Etienne Tanré for discussions, and to the referees for their useful suggestions and references.

Appendix

Proof (of Theorem 4.1)

The proof of the result involves some technical definition chasing, and is in fact contained in Dalang and Quer-Sardanyons (2011), though rather implicitly; see also Jetschke (1982, 1986). It is for this reason that we carry out the proof explicitly in our situation, by closely following Dalang and Quer-Sardanyons (2011, Proposition 4.10). The most important point is to relate the stochastic integrals that appear in the two different formulations of a solution. To this end, define

\[
I(t,x) := \int_0^t\!\!\int_{\mathbb{R}^N} e^{-(t-s)}\sigma(Y(s,x))\varphi(x-y)\,W(ds\,dy), \quad x\in\mathbb{R}^N,\ t\ge 0,
\]

to be the Walsh integral that appears in the random field solution (4.1). Our aim is to show that

\[
I(t,\cdot) = \int_0^t e^{-(t-s)}B(Y(s))\,dW(s),
\tag{6.1}
\]

where the integral on the right-hand side is the H-valued stochastic integral which appears in the solution to (4.3).

Step 1: Adapting Proposition 2.6 of Dalang and Quer-Sardanyons (2011) very slightly, we have that the Walsh integral $I(t,x)$ can be written as the integral with respect to the cylindrical Wiener process $\mathcal{W} = \{\mathcal{W}_t(u): t\ge 0, u\in U\}$ with covariance $\mathrm{Id}_U$ (see footnote 8). Precisely, we have

\[
I(t,x) = \int_0^t g_s^{t,x}\,d\mathcal{W}_s,
\]

for all $t\ge 0$, $x\in\mathbb{R}^N$, where $g_s^{t,x}(y) := e^{-(t-s)}\sigma(Y(s,x))\varphi(x-y)$, $y\in\mathbb{R}^N$, which is in $L^2(\Omega\times[0,T];U)$ for any $T>0$ thanks to (4.2). By definition, the integral with respect to the cylindrical Wiener process $\mathcal{W}$ is given by

\[
\int_0^t g_s^{t,x}\,d\mathcal{W}_s = \sum_{k=1}^\infty\int_0^t\left\langle g_s^{t,x}, e_k\right\rangle_U d\beta_k(s),
\]

where $\{e_k\}_{k=1}^\infty$ is a complete orthonormal basis for $U$, and $(\beta_k(t))_{t\ge 0} := (\mathcal{W}_t(e_k))_{t\ge 0}$ are independent real-valued Brownian motions. This series is convergent in $L^2(\Omega)$.

Step 2: Fix an arbitrary $T>0$. As in Section 3.5 of Dalang and Quer-Sardanyons (2011), we can consider the process $\{W(t), t\in[0,T]\}$ defined by

\[
W(t) = \sum_{k=1}^\infty\beta_k(t)J(e_k),
\tag{6.2}
\]

where $J: U\to U$ is a Hilbert-Schmidt operator.

$W(t)$ takes its values in $U$, where it is a $Q\,(=JJ^*)$-Wiener process with $\mathrm{Tr}(Q)<\infty$ [Proposition 3.6 of Dalang and Quer-Sardanyons (2011)]. We define $J(u) := \sum_k\lambda_k\langle u, e_k\rangle_U\,e_k$ for a sequence of positive real numbers $(\lambda_k)_{k\ge 1}$ such that $\sum_k\lambda_k < \infty$.
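As a purely illustrative aside (the bounded domain, sine basis and weights below are assumptions made only for this sketch, not choices from the text), the construction (6.2) can be visualized by truncating the series with an explicit orthonormal basis and summable weights:

import numpy as np

# Truncated version of W(t) = sum_k beta_k(t) J(e_k) on U = L^2(0,1) (assumed here
# for illustration), with e_k(y) = sqrt(2) sin(k pi y) and J(e_k) = lambda_k e_k.
K, n, m, T = 50, 200, 500, 1.0
y = np.linspace(0.0, 1.0, n)
dt = T / m
rng = np.random.default_rng(2)

e = np.sqrt(2.0) * np.sin(np.pi * np.arange(1, K + 1)[:, None] * y[None, :])   # e_k(y_i)
lam = 1.0 / np.arange(1, K + 1) ** 2                                            # lambda_k, summable
beta = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(m, K)), axis=0)             # beta_k(t_j)

W = beta @ (lam[:, None] * e)   # W[j, i] approximates the Q-Wiener sample path W(t_j)(y_i)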

Now define

\[
\Phi_s^{t,x}(u) = \left\langle g_s^{t,x}, u\right\rangle_U,
\]

which takes values in $\mathbb{R}$. Proposition 3.10 of Dalang and Quer-Sardanyons (2011) tells us that the process $\{\Phi_s^{t,x}, s\in[0,T]\}$ defines a predictable process with values in $L_2(U,\mathbb{R})$ and

\[
\int_0^t\Phi_s^{t,x}\,dW(s) = \int_0^t g_s^{t,x}\,d\mathcal{W}_s,
\tag{6.3}
\]

where the integral on the left-hand side is defined as in Sect. 2.2, and takes values in $\mathbb{R}$.

Step 3: We now note that the original Walsh integral satisfies $I(\cdot,\cdot)\in L^2(\Omega\times[0,T];H)$. Indeed, by Burkholder's inequality with $p=2$,

\[
\|I\|^2_{L^2(\Omega\times[0,T];H)} = \mathbb{E}\int_0^T\|I(t,\cdot)\|^2_H\,dt = \int_0^T\!\!\int_{\mathbb{R}^N}\mathbb{E}|I(t,x)|^2\rho_w(x)\,dx\,dt
\le \|\varphi\|^2_{L^2(\mathbb{R}^N)}\int_0^T\!\!\int_{\mathbb{R}^N}\int_0^t e^{-2(t-s)}\,\mathbb{E}\left[\sigma^2(Y(s,x))\right]ds\,\rho_w(x)\,dx\,dt,
\]

which is finite, again thanks to (4.2). Hence I(t,·) takes values in H, and we can therefore write

\[
I(t,\cdot) = \sum_{j=1}^\infty\left\langle I(t,\cdot), f_j\right\rangle_H f_j = \sum_{j=1}^\infty\left\langle\int_0^t\Phi_s^{t,\cdot}\,dW(s), f_j\right\rangle_H f_j,
\]

by (6.3), where $\{f_j\}_{j=1}^\infty$ is a complete orthonormal basis in $H$. Moreover, by using (6.2),

\[
I(t,\cdot) = \sum_{j=1}^\infty\left[\int_{\mathbb{R}^N}\left(\int_0^t\Phi_s^{t,x}\,dW(s)\right)f_j(x)\rho_w(x)\,dx\right]f_j
= \sum_{j=1}^\infty\left[\int_{\mathbb{R}^N}\left(\sum_{k=1}^\infty\int_0^t\Phi_s^{t,x}(\lambda_k e_k)\,d\beta_k(s)\right)f_j(x)\rho_w(x)\,dx\right]f_j.
\tag{6.4}
\]

Finally, consider the H-valued stochastic integral

\[
\int_0^t e^{-(t-s)}B(Y(s))\,dW(s),
\]

where $B: H\to L_0(U,H)$ is given above. Then similarly

\[
\begin{aligned}
\int_0^t e^{-(t-s)}B(Y(s))\,dW(s)
&= \sum_{j=1}^\infty\left\langle\int_0^t e^{-(t-s)}B(Y(s))\,dW(s), f_j\right\rangle_H f_j\\
&= \sum_{j=1}^\infty\left\langle\sum_{k=1}^\infty\int_0^t e^{-(t-s)}\lambda_k B(Y(s))(e_k)\,d\beta_k(s), f_j\right\rangle_H f_j\\
&= \sum_{j=1}^\infty\left[\int_{\mathbb{R}^N}\left(\sum_{k=1}^\infty\int_0^t e^{-(t-s)}\lambda_k B(Y(s))(e_k)(x)\,d\beta_k(s)\right)f_j(x)\rho_w(x)\,dx\right]f_j.
\end{aligned}
\]

Here, by definition, for $x\in\mathbb{R}^N$ and $0\le s\le t$,

\[
e^{-(t-s)}\lambda_k B(Y(s))(e_k)(x) = \int_{\mathbb{R}^N} e^{-(t-s)}\sigma(Y(s,x))\varphi(x-y)\lambda_k e_k(y)\,dy = \left\langle e^{-(t-s)}\sigma(Y(s,x))\varphi(x-\cdot),\,\lambda_k e_k\right\rangle_U = \Phi_s^{t,x}(\lambda_k e_k),
\]

which proves (6.1) by comparison with (6.4).

Step 4: To conclude, it suffices to note that the pathwise integrals appearing in (4.1) and in the $H$-valued solution to (4.3) coincide as elements of $H$. Indeed, it is clear that, by definition of $F$,

\[
\int_0^t e^{-(t-s)}\int_{\mathbb{R}^N} w(\cdot,y)G(Y(s,y))\,dy\,ds = \int_0^t e^{-(t-s)}F(Y(s))\,ds,
\]

where the latter is an element of $H$.

Footnotes

1

The norm of $B\in L_0(U,H)$ is classically defined as $\sup_{x\ne 0}\frac{\|Bx\|_H}{\|x\|_U}$.

2

The covariance operator $C: U\to U$ of $W$ is defined by $\mathbb{E}\left[\langle W(s), g\rangle_U\langle W(t), h\rangle_U\right] = (s\wedge t)\langle Cg, h\rangle_U$ for all $g, h\in U$.

3

Technically this means that $\Phi(s)$ is measurable with respect to the $\sigma$-algebra generated by all left-continuous processes that are known at time $s$ when $(W(u))_{u\le s}$ is known (such processes are said to be adapted to the filtration generated by $W$).

4

This would be for an infinite size cortex. The cortex is in effect of finite size, but the spatial extents of $w_{\mathrm{loc}}$ and $w_{\mathrm{lr}}$ are very small with respect to this size, and hence the model in which the cortex is $\mathbb{R}^2$ is acceptable.

5

This can also be obtained by applying the operator B to the representation (2.2) of W.

6

Recall that a collection of random variables $X = \{X(\theta)\}_{\theta\in\Theta}$ indexed by a set $\Theta$ is a Gaussian random field on $\Theta$ if $(X(\theta_1),\dots,X(\theta_k))$ is a $k$-dimensional Gaussian random vector for every $\theta_1,\dots,\theta_k\in\Theta$. It is characterized by its mean and covariance functions.

7

Precisely, we consider functions $f$ such that $(t,x,\omega)\mapsto f(t,x,\omega)$ is measurable with respect to the $\sigma$-algebra generated by linear combinations of functions of the form $X(\omega)\mathbf{1}_{(a,b]}(t)\mathbf{1}_A(x)$, where $a,b\in\mathbb{R}_+$, $A\in\mathcal{B}(\mathbb{R}^N)$, and $X:\Omega\to\mathbb{R}$ is bounded and measurable with respect to the $\sigma$-algebra generated by $(W_s(A))_{s\le a,\,A\in\mathcal{B}(\mathbb{R}^N)}$.

8

This is a family of random variables such that for each $u\in U$, $(\mathcal{W}_t(u))_{t\ge 0}$ is a Brownian motion with variance $t\|u\|^2_U$, and for all $s,t\ge 0$, $u_1,u_2\in U$, $\mathbb{E}\left[\mathcal{W}_t(u_1)\mathcal{W}_s(u_2)\right] = (s\wedge t)\langle u_1, u_2\rangle_U$. See for example Dalang and Quer-Sardanyons (2011), Section 2.1.

This work was partially supported by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 269921 (BrainScaleS), no. 318723 (Mathemacs), and by the ERC advanced grant NerVi no. 227747.

Contributor Information

O. Faugeras, Email: olivier.faugeras@inria.fr

J. Inglis, Email: james.inglis@inria.fr

References

  1. Amari SI. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27(2):77–87. doi: 10.1007/BF00337259.
  2. Baker T, Cowan J. Spontaneous pattern formation and pinning in the primary visual cortex. J Physiol Paris. 2009;103(1–2):52–68. doi: 10.1016/j.jphysparis.2009.05.011.
  3. Bressloff P. Spatially periodic modulation of cortical patterns by long-range horizontal connections. Phys D Nonlinear Phenom. 2003;185(3–4):131–157. doi: 10.1016/S0167-2789(03)00238-0.
  4. Bressloff P. Stochastic neural field theory and the system-size expansion. SIAM J Appl Math. 2009;70:1488–1521. doi: 10.1137/090756971.
  5. Bressloff P (2010) Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics. Phys Rev E 82(5):051903
  6. Bressloff P (2012) Spatiotemporal dynamics of continuum neural fields. J Phys A Math Theor 45(3):033001
  7. Bressloff P, Cowan J, Golubitsky M, Thomas P, Wiener M. Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philos Trans R Soc Lond B. 2001;306(1407):299–330. doi: 10.1098/rstb.2000.0769.
  8. Bressloff P, Webber M (2012) Front propagation in stochastic neural fields. SIAM J Appl Dyn Syst 11(2):708–740
  9. Bressloff PC, Folias SE. Front bifurcations in an excitatory neural network. SIAM J Appl Math. 2004;65(1):131–151. doi: 10.1137/S0036139903434481.
  10. Bressloff PC, Wilkerson J (2012) Traveling pulses in a stochastic neural field model of direction selectivity. Front Comput Neurosci 6(90)
  11. Brezis H. Functional analysis, Sobolev spaces and Partial Differential Equations. Berlin: Springer; 2010.
  12. Brzeźniak Z, Peszat S. Space-time continuous solutions to SPDE's driven by a homogeneous Wiener process. Studia Math. 1999;137(3):261–299.
  13. Dalang R, Khoshnevisan D, Mueller C, Nualart D, Xiao Y (2009) In: Khoshnevisan and Firas Rassoul-Agha (eds) A minicourse on stochastic partial differential equations, Lecture Notes in Mathematics, vol 1962. Springer, Berlin. Held at the University of Utah, Salt Lake City
  14. Dalang RC, Frangos NE. The stochastic wave equation in two spatial dimensions. Ann. Probab. 1998;26(1):187–212. doi: 10.1214/aop/1022855416.
  15. Dalang RC, Quer-Sardanyons L. Stochastic integrals for spde's: a comparison. Expo. Math. 2011;29(1):67–109. doi: 10.1016/j.exmath.2010.09.005.
  16. Dalang RC, Sanz-Solé M. Hölder-Sobolev regularity of the solution to the stochastic wave equation in dimension three. Mem. Am. Math. Soc. 2009;199(931):vi+70.
  17. Du Y. Order structure and topological methods in nonlinear partial differential equations, vol 1. Hackensack: World Scientific Publishing Co., Pte. Ltd.; 2006.
  18. Ermentrout G, McLeod J (1993) Existence and uniqueness of travelling waves for a neural network. In: Proceedings of the Royal Society of Edinburgh, vol 123, pp 461–478
  19. Eveson SP. Compactness criteria for integral operators in L∞ and L1 spaces. Proc. Am. Math. Soc. 1995;123(12):3709–3716.
  20. Faye G, Chossat P, Faugeras O (2011) Analysis of a hyperbolic geometric model for visual texture perception. J Math Neurosci 1(4)
  21. Ferrante M, Sanz-Solé M. SPDEs with coloured noise: analytic and stochastic approaches. ESAIM Probab. Stat. 2006;10:380–405. doi: 10.1051/ps:2006016.
  22. Folias SE, Bressloff PC. Breathing pulses in an excitatory neural network. SIAM J Appl Dyn Syst. 2004;3(3):378–407. doi: 10.1137/030602629.
  23. Hofmanová M. Degenerate parabolic stochastic partial differential equations. Stoch Process Appl. 2013;123(12):4294–4336. doi: 10.1016/j.spa.2013.06.015.
  24. Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol Cybern. 1995;73:357–366. doi: 10.1007/BF00199471.
  25. Jetschke G (1982) Different approaches to stochastic parabolic differential equations. In: Proceedings of the 10th Winter School on Abstract Analysis, pp 161–169
  26. Jetschke G. On the equivalence of different approaches to stochastic partial differential equations. Math Nachr. 1986;128(1):315–329. doi: 10.1002/mana.19861280127.
  27. Kilpatrick ZP, Ermentrout B. Wandering bumps in stochastic neural fields. SIAM J Appl Dyn Syst. 2013;12(1):61–94. doi: 10.1137/120877106.
  28. Kuehn C, Riedler MG (2014) Large deviations for nonlocal stochastic neural fields. J Math Neurosci 4(1)
  29. Lopes da Silva F, Hoeks A, Zetterberg L. Model of brain rhythmic activity. Kybernetik. 1974;15:27–37. doi: 10.1007/BF00270757.
  30. Lopes da Silva F, van Rotterdam A, Barts P, van Heusden E, Burr W. Model of neuronal populations. The basic mechanism of rhythmicity. In: Corner MA, Swaab DF, editors. Progress in brain research. Amsterdam: Elsevier; 1976. pp. 281–308.
  31. Lund JS, Angelucci A, Bressloff PC. Anatomical substrates for functional columns in macaque monkey primary visual cortex. Cereb Cortex. 2003;12:15–24. doi: 10.1093/cercor/13.1.15.
  32. Mariño J, Schummers J, Lyon D, Schwabe L, Beck O, Wiesing P, Obermayer K, Sur M. Invariant computations in local cortical networks with balanced excitation and inhibition. Nat Neurosci. 2005;8(2):194–201. doi: 10.1038/nn1391.
  33. Owen M, Laing C, Coombes S. Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities. New J Phys. 2007;9(10):378–401. doi: 10.1088/1367-2630/9/10/378.
  34. Pardoux E (2007) Stochastic partial differential equations. Lectures given in Fudan University, Shanghaï
  35. Peszat S. Large deviation principle for stochastic evolution equations. Probab Theory Relat Fields. 1994;98(1):113–136. doi: 10.1007/BF01311351.
  36. Potthast R, Beim Graben P. Existence and properties of solutions for neural field equations. Math Methods Appl Sci. 2010;33(8):935–949.
  37. Prato GD, Zabczyk J. Stochastic equations in infinite dimensions. Cambridge: Cambridge University Press; 1992.
  38. Prévôt C, Röckner M. A concise course on stochastic partial differential equations. Berlin: Springer; 2007.
  39. Sanz-Solé M, Sarrà M (2002) Hölder continuity for the stochastic heat equation with spatially correlated noise. In: Seminar on Stochastic Analysis, Random Fields and Applications, III (Ascona, 1999), Progr. Probab., vol 52. Birkhäuser, Basel, pp 259–268
  40. Simon J. Sobolev, Besov and Nikol'skiĭ fractional spaces: embeddings and comparisons for vector valued spaces on an interval. Ann. Math. Pura Appl. 1990;4(157):117–148. doi: 10.1007/BF01765315.
  41. Veltz R, Faugeras O. Local/global analysis of the stationary solutions of some neural field equations. SIAM J Appl Dyn Syst. 2010;9(3):954–998. doi: 10.1137/090773611.
  42. Walsh JB (1986) An introduction to stochastic partial differential equations. In: École d'été de probabilités de Saint-Flour, XIV–1984, Lecture Notes in Mathematics. Springer, Berlin, pp 265–439
  43. Wilson H, Cowan J. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12:1–24. doi: 10.1016/S0006-3495(72)86068-5.
  44. Wilson H, Cowan J. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biol Cybern. 1973;13(2):55–80. doi: 10.1007/BF00288786.
