Journal of Mathematical Neuroscience. 2011 Aug 25;1:8. doi: 10.1186/2190-8567-1-8

Statistics of spike trains in conductance-based neural networks: Rigorous results

Bruno Cessac
PMCID: PMC3496623  PMID: 22657160

Abstract

We consider a conductance-based neural network inspired by the generalized Integrate and Fire model introduced by Rudolph and Destexhe in 1996. We show the existence and uniqueness of a Gibbs distribution characterizing spike train statistics. The corresponding Gibbs potential is explicitly computed. These results hold in the presence of a time-dependent stimulus and therefore apply to non-stationary dynamics.

1 Introduction

Neural networks have an overwhelming complexity. While an isolated neuron can exhibit a wide variety of responses to stimuli [1], from regular spiking to chaos [2,3], neurons coupled in a network via synapses (electrical or chemical) may show an even wider variety of collective dynamics [4] resulting from the conjunction of non-linear effects, time propagation delays, synaptic noise, synaptic plasticity, and external stimuli [5]. Focusing on the action potentials, this complexity is manifested by drastic changes in the spiking activity, for instance when switching from spontaneous to evoked activity (see for example A. Riehle’s team experiments on the monkey motor cortex [6-9]). However, beyond this complexity there may exist some hidden laws ruling a (hypothetical) “neural code” [10].

One way of unraveling these hidden laws is to seek regularities or reproducibility in the statistics of spikes. While early investigations of spiking activity focused on firing rates, where neurons are considered as independent sources, researchers have more recently concentrated on collective statistical indicators such as pairwise correlations. Thorough experiments in the retina [11,12] as well as in the parietal cat cortex [13] suggested that such correlations are crucial for understanding spiking activity. Those conclusions were obtained using the maximal entropy principle [14]. Assume that the average values of observable quantities (e.g., firing rates or spike correlations) have been measured. Those average values constitute constraints for the statistical model. In the maximal entropy principle, assuming stationarity, one looks for the probability distribution which maximizes the statistical entropy given those constraints. This leads to a (time-translation invariant) Gibbs distribution. In particular, fixing firing rates and the probability of pairwise coincidences of spikes leads to a Gibbs distribution having the same form as the Ising model. This idea was introduced by Schneidman et al. in [11] for the analysis of retina spike trains. They accurately reproduce the probability of spatial spiking patterns. Since then, their approach has met with great success (see, e.g., [15-17]), although some authors have raised solid objections to this model [12,18-20], while several papers have pointed out the importance of temporal patterns of activity at the network level [21-23]. As a consequence, a few authors [13,24,25] have attempted to define time-dependent models of Gibbs distributions where constraints include time-dependent correlations between pairs, triplets, and so on [26]. As a matter of fact, the analysis of the data of [11] with such models describes more accurately the statistics of spatio-temporal spike patterns [27].

Taking into account all constraints inherent to experiments, it seems extremely difficult to find an optimal model describing spike train statistics. It is in fact likely that there is not one model, but many, depending on the experiment, the stimulus, the investigated part of the nervous system, and so on. Additionally, the assumptions made in the works quoted above are difficult to control. In particular, the maximal entropy principle assumes a stationary dynamics, while many experiments consider a time-dependent stimulus generating a time-dependent response where the stationary approximation may not be valid. At this stage, having an example where one knows the explicit form of the spike trains’ probability distribution would be helpful to control those assumptions and to design related experiments.

This can be done by considering neural network models. Although, to be tractable, such models may be quite far from biological plausibility, they can give hints on which statistics can be expected in real neural networks. But, even in the simplest examples, characterizing the spike statistics arising from the conjunction of non-linear effects, time propagation delays, synaptic noise, synaptic plasticity, and external stimuli is far from trivial on mathematical grounds.

In [28], we have nevertheless proposed an exact and explicit result for the characterization of spike train statistics in a discrete-time version of a Leaky Integrate-and-Fire neural network. The results were quite surprising. It has been shown that whatever the parameter values (in particular the synaptic weights), spike trains are distributed according to a Gibbs distribution whose potential can be explicitly computed. The first surprise lies in the fact that this potential has infinite range, namely spike statistics have an infinite memory. This is because the membrane potential evolution integrates its past values and the past influence of the network via the leak term. Although leaky integrate and fire models have a reset mechanism that erases the memory of the neuron whenever it spikes, it is not possible to upper bound the next firing time. As a consequence, the statistics are non-Markovian (for recent examples of non-Markovian behavior in neural models, see also [29]). The infinite range of the potential corresponds, in the maximal entropy principle interpretation, to having infinitely many constraints.

Nevertheless, the influence of the leak term decays exponentially fast with time (this property guarantees the existence and uniqueness of a Gibbs distribution). As a consequence, one can approximate the exact Gibbs distribution by the invariant probability of a Markov chain, with a memory depth proportional to the log of the (discrete-time) leak term. In this way, the truncated potential corresponds to a finite number of constraints in the maximal entropy principle interpretation. However, the second surprise is that this approximated potential is nevertheless far from the Ising model or any of the models discussed above, which appear as quite bad approximations. In particular, one needs to consider n-tuples of spikes with time delays. This mere fact raises hard problems about evidencing such types of potentials in experiments. Notably, new types of algorithms for spike train analysis have to be developed [30].

The model considered in [28] is rather academic: time evolution is discrete, synaptic interactions are instantaneous, dynamics is stationary (the stimulus is time-constant) and, as in a leaky integrate and fire model, conductances are constant. It is therefore necessary to investigate whether our conclusions remain valid for more realistic neural network models. In the present paper, we consider a conductance-based model introduced by Rudolph and Destexhe in [31], called the “generalized Integrate and Fire” (gIF) model. This model allows one to consider realistic synaptic responses and conductances depending on spikes arising in the past of the network, leading to a rather complex dynamics which has been characterized in [32] in the deterministic case (no noise in the dynamics). Moreover, the biological plausibility of this model is well accepted [33,34].

Here, we analyze spike statistics in the gIF model with noise and with a time-dependent stimulus. Moreover, the post-synaptic potential profiles considered are quite general and subsume all the examples that we know of in the literature. Our main result is to prove the existence and uniqueness of a Gibbs measure characterizing spike train statistics, for all parameters compatible with physical constraints (finite synaptic weights, bounded stimulus, and positive conductances). Here, as in [28], the corresponding Gibbs potential has infinite range, corresponding to a non-Markovian dynamics, although Markovian approximations can be proposed in the gIF model too. The Gibbs potential depends on all parameters in the model (especially connectivity and stimulus) and has a form considerably more complex than Ising-like models. As a by-product of the proof of our main result, additional useful notions and results are produced, such as continuity with respect to a raster, or exponential decay of memory thanks to the shape of synaptic responses.

The paper is organized as follows. In Section 2, we briefly introduce integrate and fire models and propose two important extensions of the classical models: the spike has a duration and the membrane potential is reset to a non-constant value. These extensions, which are necessary for the validity of our mathematical results, nevertheless render the model more biologically plausible (see Section 9). One of the keys of the present work is to consider spike trains (raster plots) as infinite sequences. Since, in gIF models, conductances are updated upon the occurrence of spikes, one has to consider two types of variables with distinct types of dynamics. On the one hand, the membrane potential, which is the physical variable associated with neuron dynamics, evolves continuously. On the other hand, spikes are discrete events. Conductances are updated according to these discrete-time events. The formalism introduced in Sections 2 and 3 allows us to handle this mixed dynamics properly. As a consequence, these sections define the gIF model with more mathematical structure than the original paper [31] and mostly contain original results. Moreover, we add to the model several original features, such as the consideration of a general form of synaptic profile with exponential decay or the introduction of noise. Section 4 proposes a preliminary analysis of gIF model dynamics. In Sections 5 and 6, we provide several useful mathematical propositions as a necessary step toward the analysis of spike statistics, developed in Section 7, where we prove the main result of the paper: existence and uniqueness of a Gibbs distribution describing spike statistics. Sections 8 and 9 are devoted to a discussion of the practical consequences of our results for neuroscience.

2 Integrate and fire model

We consider the evolution of a set of $N$ neurons. Here, neurons are considered as “points” instead of spatially extended and structured objects. As a consequence, we define, for each neuron $k \in \{1, \dots, N\}$, a variable $V_k(t)$ called the “membrane potential of neuron $k$ at time $t$”, without specification of which part of a real neuron (axon, soma, dendritic spine, …) it corresponds to. Denote $V(t)$ the vector $(V_k(t))_{k=1}^{N}$.

We focus here on “integrate and fire models”, where dynamics always consists of two regimes.

2.1 The “integrate regime”

Fix a real number $\theta$ called the “firing threshold of the neuron”.¹ Below the threshold, $V_k < \theta$, neuron $k$’s dynamics is driven by an equation of the form:

\[ C_k \frac{dV_k}{dt} + g_k V_k = i_k, \tag{1} \]

where $C_k$ is the membrane capacity of neuron $k$. In its most general form, the neuron’s membrane conductance $g_k > 0$ depends on $V_k$ plus additional variables, such as the probability of having ionic channels open (see, e.g., Hodgkin-Huxley equations [35]), as well as on time $t$. The explicit form of $g_k$ in the present model is developed in Section 3.4. The current $i_k$ typically depends on time $t$ and on the past activity of the network. It also contains a stochastic component modeling noise in the system (e.g., synaptic transmission, see Section 3.5).

2.2 LIF model

A classical example of an integrate and fire model is the Leaky Integrate and Fire (LIF) model introduced in [36], where Equation (1) reads:

\[ \frac{dV_k}{dt} = -\frac{V_k}{\tau_L} + \frac{i_k(t)}{C_k}, \tag{2} \]

where $g_k$ is a constant and $\tau_L = \frac{C_k}{g_k}$ is the characteristic time for membrane potential decay when no current is present (“leak term”).
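As an illustration, here is a minimal numerical sketch of the LIF dynamics (2), integrated with a forward Euler scheme and equipped with the classical instantaneous threshold-and-reset mechanism (the refinements introduced below, spike duration and random reset, are omitted). All names and parameter values are arbitrary placeholders, not taken from the paper.

import numpy as np

def simulate_lif(i_ext, theta=1.0, v_reset=0.0, tau_L=20.0, C=1.0, dt=0.1):
    # Forward-Euler integration of dV/dt = -V/tau_L + i(t)/C, with a
    # spike and an instantaneous reset whenever V reaches theta.
    v = np.zeros(len(i_ext))
    spike_times = []
    for n in range(1, len(i_ext)):
        v[n] = v[n - 1] + dt * (-v[n - 1] / tau_L + i_ext[n - 1] / C)
        if v[n] >= theta:
            spike_times.append(n * dt)
            v[n] = v_reset
    return v, spike_times

v, spikes = simulate_lif(np.full(5000, 0.08))  # constant input current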

2.3 Spikes

The dynamical evolution (1) may eventually lead $V_k$ to exceed $\theta$. If, at some time $t$, $V_k(t) = \theta$, then neuron $k$ emits a spike or “fires”. In our model, as in biophysics, a spike has a finite duration $\delta > 0$; this is a generalization of the classical formulation of integrate and fire models, where the spike is considered instantaneous. On biophysical grounds, $\delta$ is of the order of a millisecond. Changing the time units, we may set $\delta = 1$ without loss of generality. Additionally, neurons have a refractory period $\tau_{\mathrm{refr}} > 0$ during which they are not able to emit a new spike, although their membrane potential can fluctuate below the threshold (see Figure 1). Hence, spikes emitted by a given neuron are separated by a minimal time scale

Fig. 1. Time course of the membrane potential in our model. The blue dashed curve illustrates the shape of a real spike, but what we model is the red curve.

\[ \tau_{\mathrm{sep}} = \delta + \tau_{\mathrm{refr}}. \tag{3} \]

2.4 Raster plots

In experiments, spiking neuron activity is represented by “raster plots”, namely a graph with time in abscissa and a neuron labeling in ordinate, such that a vertical bar is drawn each “time” a neuron emits a spike. Since spikes have a finite duration $\delta$, such a representation limits the time resolution: events with a time scale smaller than $\delta$ are not distinguished. As a consequence, if neuron 1 fires at time $t_1$ and neuron 2 at time $t_2$ with $|t_2 - t_1| < \delta = 1$, the two spikes appear to be simultaneous on the raster. Thus, the raster representation introduces a time quantization and has a tendency to enhance synchronization. In gIF models, conductances are updated upon the occurrence of spikes (see Section 3.2), which are considered as such discrete events. This could correspond to the following “experiment”. Assume that we measure the spikes emitted by a set of in vitro neurons and that we use this information to update the conductances of a model, in order to see how this model “matches” the real neurons (see [34] for a nice investigation in this spirit). Then, we would have to take into account that the information provided by the experimental raster plot is discrete, even if the membrane potential evolves continuously. The consequences of this time discretization as well as the limit $\delta \to 0$ are developed in the discussion section.

As a consequence, one has to consider two types of variables with distinct types of dynamics. On the one hand, the membrane potential, which is the physical variable associated with neuron dynamics, evolves in continuous time. On the other hand, spikes, which are the quantities of interest in the present paper, are discrete events. To properly define this mixed dynamics and study its properties, we have to model spike times and raster plots.

2.5 Spike times

If, at time $t$, $V_k(t) = \theta$, a spike is registered at the integer time immediately after $t$, called the spike time. Choosing integers for the spike time occurrence is a direct consequence of setting $\delta = 1$. Thus, to each neuron $k$ and integer $n$, we associate a “spiking state” defined by:

\[ \omega_k(n) = \begin{cases} 1 & \text{if } \exists t \in \, ]n-1, n] \text{ such that } V_k(t) = \theta; \\ 0 & \text{otherwise.} \end{cases} \]

For convenience, and in order to simplify the notations in the mathematical developments, we call $[t]$ the largest integer which is $\leq t$ (thus $[-1.2] = -2$ and $[1.2] = 1$). Thus, the integer immediately after $t$ is $[t + 1]$, and we have therefore that $\omega_k([t + 1]) = 1$ whenever $V_k(t) = \theta$. Although the characteristic events in a raster plot are spikes (neuron firing), it is useful in subsequent developments to also consider the case when a neuron is not firing ($\omega_k(n) = 0$).

2.6 Reset

In the classical formulation of integrate and fire models, the spike occurs simultaneously with a reset of the membrane potential to some constant value $V_{\mathrm{reset}}$, called the “reset potential”. Instantaneous reset is a source of pathologies, as discussed in [32,37] and in the discussion section. Here, we consider that reset occurs after the time delay $\tau_{\mathrm{sep}} \geq 1$ including the spike duration and the refractory period. We set:

\[ V_k(t) = \theta \;\Rightarrow\; V_k\left([t + \tau_{\mathrm{sep}}]\right) = V_{\mathrm{reset}}. \tag{4} \]

The reason why the reset time is the integer number $[t + \tau_{\mathrm{sep}}]$ instead of the real $t + \tau_{\mathrm{sep}}$ is that it eases the notations and proofs. Since the reset value is random (see below and Figure 1), this assumption has no impact on the dynamics.

Indeed, in our model, the reset value $V_{\mathrm{reset}}$ is not a constant. It is a Gaussian random variable with mean zero (we set the rest potential to zero without loss of generality) and variance $\sigma_R^2 > 0$. In this way, we model the spike duration and refractory period, as well as the random oscillations of the membrane potential during the refractory period. As a consequence, the value of $V_k$ when the neuron can fire again is not a constant, as it is in classical IF models. A related reference (spiking neurons with partial reset) is [38]. The assumption that $\sigma_R^2 > 0$ is necessary for our mathematical developments (see the bounds (37)). We assume $\sigma_R^2$ to be small, to avoid trivial and unrealistic situations where $V_{\mathrm{reset}} \geq \theta$ with a large probability, leading the neuron to fire all the time. Note, however, that this is not a required assumption to establish our mathematical results. We also assume that, in successive resets, the random variables $V_{\mathrm{reset}}$ are independent.

2.7 The shape of membrane potential during the spike

On biophysical grounds, the time course of the membrane potential during the spike includes a depolarization and a re-polarization phase, due to the non-linear effects of gated ionic channels on the conductance. This leads one to introduce, in modeling, additional variables such as activation/inactivation probabilities, as in the Hodgkin-Huxley model [35], or an adaptation current, as, e.g., in the FitzHugh-Nagumo model [39-42] (see the discussion section for extensions of our results to those models). Here, since we are considering only one variable for the neuron state, the membrane potential, we need to define the spike profile, i.e., the course of $V_k(t)$ during the time interval $(t, [t + \tau_{\mathrm{sep}}])$. It turns out that the precise shape of this profile plays no role in the developments proposed here, where we concentrate on spike statistics. Indeed, a spike is registered whenever $V_k(t) = \theta$, and this does not depend on the spike shape. What we need is therefore to define the membrane potential evolution before the spike, given by (1), and after the spike, given by (4) (see Figure 1).

2.8 Mathematical representation of raster plots

The “spiking pattern” of the neural network at integer time $n$ is the vector $\omega(n) = (\omega_k(n))_{k=1}^{N}$. For $m < n$, we note $\omega_m^n = \{\omega(m)\, \omega(m+1) \cdots \omega(n)\}$ the ordered sequence of spiking patterns between $m$ and $n$. Such sequences are called spike blocks. Additionally, we note $\omega_m^{n_1 - 1}\, \omega_{n_1}^{n} = \omega_m^n$, the concatenation of the blocks $\omega_m^{n_1 - 1}$ and $\omega_{n_1}^{n}$.

Call $\mathcal{A} = \{0, 1\}^N$ the set of spiking patterns (alphabet). An element of $X \stackrel{\mathrm{def}}{=} \mathcal{A}^{\mathbb{Z}}$, i.e., a bi-infinite ordered sequence $\omega = \{\omega(n)\}_{n = -\infty}^{+\infty}$ of spiking patterns, is called a “raster plot”. It tells us which neurons are firing at each time $n \in \mathbb{Z}$. In experiments, raster plots are obviously finite sequences of spiking patterns, but the extension to $\mathbb{Z}$, especially the possibility of considering an arbitrarily distant past (negative times), is a key of the present work. In particular, the notation $\omega_{-\infty}^{n}$ refers to spikes occurring from $-\infty$ to $n$.

To each raster $\omega \in X$ and each neuron index $j = 1, \dots, N$, we associate an ordered (generically infinite) list of “spike times” $\{t_j^{(r)}(\omega)\}_{r=1}^{\infty}$ (integer numbers) such that $t_j^{(r)}(\omega)$ is the $r$-th firing time of neuron $j$ in the raster $\omega$. In other words, we have $\omega_j(n) = 1$ if and only if $n = t_j^{(r)}(\omega)$ for some $r = 1, \dots, +\infty$. We use here the following convention: the index $k$ is used for a post-synaptic neuron, while the index $j$ refers to pre-synaptic neurons. Spiking events are used to update the conductance of neuron $k$ according to spikes emitted by pre-synaptic neurons. That is why we label the spike times with an index $j$.

We introduce here two specific rasters which are of use in the paper. We note $\Omega^0$ the raster such that $\omega_k(n) = 0$, $k = 1, \dots, N$, $n \in \mathbb{Z}$ (no neuron ever fires) and $\Omega^1$ the raster such that $\omega_k(n) = 1$, $k = 1, \dots, N$, $n \in \mathbb{Z}$ (each neuron fires at every integer time).
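In code, a finite window of a raster can be stored as a binary array of shape $(N, T)$; the following sketch (helper names are hypothetical) extracts the spike-time lists $\{t_j^{(r)}(\omega)\}$ and builds finite windows of the two reference rasters $\Omega^0$ and $\Omega^1$.

import numpy as np

def spike_times(raster):
    # raster: binary array of shape (N, T); raster[j, n] == 1 iff
    # neuron j fires at integer time n. Returns, for each neuron j,
    # the ordered list of its spike times t_j^(r).
    return [np.flatnonzero(raster[j]).tolist() for j in range(raster.shape[0])]

N, T = 3, 10
omega_0 = np.zeros((N, T), dtype=int)  # no neuron ever fires
omega_1 = np.ones((N, T), dtype=int)   # every neuron fires at every time step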

Finally, we use the following notation borrowed from [43]. We note, for $n \in \mathbb{Z}$, $m \geq 0$, and $r$ integer:

\[ \omega \stackrel{m,n}{=} \omega' \quad \text{if} \quad \omega(r) = \omega'(r),\ \forall r \in \{n - m, \dots, n\}. \tag{5} \]

For simplicity, we consider that $\tau_{\mathrm{refr}}$, the refractory period, is smaller than 1, so that a neuron can fire at two consecutive time steps (i.e., one can have $\omega_k(n) = 1$ and $\omega_k(n + 1) = 1$). This constraint is discussed in Section 9.2.

2.9 Representation of time-dependent functions

Throughout the paper, we use the following convention. For a real function of $t$ and $\omega$, we write $f(t, \omega)$ for $f(t, \omega_{-\infty}^{[t]})$ to simplify notations. This notation takes into account the duality between variables such as the membrane potential, evolving with respect to a continuous time, and raster plots, labeled with discrete time. Thus, the function $f(t, \omega)$ is a function of the continuous variable $t$ and of the spike block $\omega_{-\infty}^{[t]}$, where by definition $[t] \leq t$; namely, $f(t, \omega)$ depends on the spike sequences occurring before $t$. This constraint is imposed by causality.

2.10 Last reset time

We define $\tau_k(t, \omega)$ as the last time before $t$ where neuron $k$’s membrane potential has been reset, in the raster $\omega$. This is $-\infty$ if the membrane potential has never been reset. As a consequence of our choice (4) for the reset time, $\tau_k(t, \omega)$ is an integer number fixed by $t$ and the raster before $t$. The membrane potential value of neuron $k$ at time $t$ is controlled by the reset value $V_{\mathrm{reset}}$ at time $\tau_k(t, \omega)$ and by the further subthreshold evolution (1) from time $\tau_k(t, \omega)$ to time $t$.

3 Generalized integrate and fire models

In this paper, we concentrate on an extension of (2), called the “generalized Integrate-and-Fire” (gIF) model, introduced in [31] and closer to biology [33,34], since it treats neuron interactions via synaptic responses more seriously.

3.1 Synaptic conductances

Depending on the neurotransmitter they use for synaptic transmission (AMPA, NMDA, GABA A, GABA B [44]), neurons can be excitatory (population $\mathcal{E}$) or inhibitory (population $\mathcal{I}$). This is modeled by introducing reversal potentials: $E^+$ for excitatory synapses (typically $E^+ \simeq 0$ mV for AMPA and NMDA) and $E^-$ for inhibitory ones ($E^- \simeq -70$ mV for GABA A and $E^- \simeq -95$ mV for GABA B). We focus here on one population of excitatory and one population of inhibitory neurons, although extensions to several populations may be considered as well. Also, each neuron is submitted to a current $I_k(t)$. We assume that this current has some stochastic component that mimics synaptic noise (Section 3.5).

The variation in the membrane potential of neuron k at time t reads:

\[ C_k \frac{dV_k}{dt} = -g_{L,k}(V_k - E_L) - g_k^{(E)}(t)(V_k - E^+) - g_k^{(I)}(t)(V_k - E^-) + I_k(t), \tag{6} \]

where $g_{L,k}$ is a leak conductance, $E_L < 0$ is the leak reversal potential (about $-65$ mV), $g_k^{(E)}(t)$ the conductance of the excitatory population and $g_k^{(I)}(t)$ the conductance of the inhibitory population. They are given by:

\[ g_k^{(E)}(t) = \sum_{j \in \mathcal{E}} g_{kj}(t); \qquad g_k^{(I)}(t) = \sum_{j \in \mathcal{I}} g_{kj}(t), \tag{7} \]

where $g_{kj}$ is the conductance of the synaptic contact $j \to k$.

We may rewrite Equation (6) in the form (1), setting

\[ g_k(t) = g_{L,k} + g_k^{(E)}(t) + g_k^{(I)}(t), \]

and

\[ i_k(t) = g_{L,k} E_L + g_k^{(E)}(t)\, E^+ + g_k^{(I)}(t)\, E^- + I_k(t). \]

3.2 Conductance update upon a spike occurrence

The conductances $g_{kj}(t)$ in (7) depend on time $t$ but also on pre-synaptic spikes occurring before $t$. This is a general statement, which is modeled in gIF models as follows. Upon arrival of a spike from the pre-synaptic neuron $j$ at time $t_j^{(r)}(\omega)$, the membrane conductance of the post-synaptic neuron $k$ is modified as:

\[ g_{kj}(t) = g_{kj}\left(t_j^{(r)}(\omega)\right) + G_{kj}\, \alpha_{kj}\left(t - t_j^{(r)}(\omega)\right), \quad t > t_j^{(r)}(\omega). \tag{8} \]

In this equation, the quantity $G_{kj} \geq 0$ characterizes the maximal amplitude of the conductance during a post-synaptic potential. We use the convention that $G_{kj} = 0$ if and only if there is no synapse between $j$ and $k$. This allows us to encode the graph structure of the neural network in the matrix $G$ with entries $G_{kj}$. Note that the $G_{kj}$’s can evolve in time due to synaptic plasticity mechanisms (see Section 9.4).

The function $\alpha_{kj}$ (called the “alpha” profile [44]) mimics the time course of the synaptic conductance upon the occurrence of the spike. Classical examples are:

\[ \alpha_{kj}(t) = e^{-\frac{t}{\tau_{kj}}} H(t), \tag{9} \]

(exponential profile) or:

\[ \alpha_{kj}(t) = \frac{t}{\tau_{kj}}\, e^{-\frac{t}{\tau_{kj}}} H(t), \tag{10} \]

with $H$ the Heaviside function (that mimics causality) and $\tau_{kj}$ the characteristic decay time of the synaptic response. Since $t$ is a time, the division by $\tau_{kj}$ ensures that $\alpha_{kj}(t)$ is a dimensionless quantity: this eases the legibility of the subsequent equations on physical grounds (dimensionality of physical quantities).

Contrary to (9), the synaptic profile (10), with $\alpha_{kj}(0) = 0$ while $\alpha_{kj}(t)$ is maximal for $t = \tau_{kj}$, allows one to smoothly delay the spike action on the post-synaptic neuron. More general forms of synaptic responses could be considered as well. For example, the $\alpha$ profile may obey a Green equation of type [45]:

\[ \sum_{l=0}^{k} a_{kj}^{(l)}\, \frac{d^l \alpha_{kj}}{dt^l}(t) = \delta(t), \]

where $k = 1$, $a_{kj}^{(0)} = \frac{1}{\tau_{kj}}$, $a_{kj}^{(1)} = 1$, corresponds to (9), and so on.
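For concreteness, here is a direct transcription of the two classical profiles (9) and (10), vectorized over arrays of times, with the Heaviside factor implemented as a mask (a sketch; names are illustrative):

import numpy as np

def alpha_exp(t, tau):
    # Exponential profile (9): exp(-t/tau) * H(t).
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    mask = t >= 0.0
    out[mask] = np.exp(-t[mask] / tau)
    return out

def alpha_delayed(t, tau):
    # Profile (10): (t/tau) * exp(-t/tau) * H(t); it vanishes at t = 0,
    # peaks at t = tau, and smoothly delays the synaptic action.
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    mask = t >= 0.0
    out[mask] = (t[mask] / tau) * np.exp(-t[mask] / tau)
    return out

ts = np.linspace(-1.0, 10.0, 111)
y = alpha_delayed(ts, tau=2.0)  # dimensionless, maximal near t = tau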

3.3 Mathematical constraints on the synaptic responses

Throughout the paper, we assume that the $\alpha_{kj}$’s are positive and bounded. Moreover, we assume that:

\[ \alpha_{kj}(t) \lesssim \frac{t^d}{\tau_{kj}^d}\, e^{-\frac{t}{\tau_{kj}}} \stackrel{\mathrm{def}}{=} f_{kj}(t), \quad t \to +\infty, \tag{11} \]

for some integer $d$, so that $\alpha_{kj}(t)$ decays exponentially fast as $t \to +\infty$, with a characteristic time $\tau_{kj}$, the decay time of the evoked post-synaptic potential. This constraint matches all synaptic response kernels that we know of (where typically $d = 0, 1$) [44,45].

This has the following consequence. For all $t$, $M < t$ integer, $r$ integer, we have, setting $t = \{t\} + [t]$, where $\{t\}$ is the fractional part:

\[ \sum_{r < M} \alpha_{kj}(t - r) = \sum_{r < M} \alpha_{kj}\left([t] - r + \{t\}\right) = \sum_{n > [t] - M} \alpha_{kj}\left(n + \{t\}\right). \]

Therefore, as $M \to -\infty$,

\[ \sum_{r < M} \alpha_{kj}(t - r) \lesssim \sum_{n \geq [t] - M + 1} f_{kj}\left(n + \{t\}\right) < \int_{t - M}^{+\infty} f_{kj}(u)\, du = P_d\!\left(\frac{t - M}{\tau_{kj}}\right) e^{-\frac{t - M}{\tau_{kj}}}, \]

where $P_d(\cdot)$ is a polynomial of degree $d$.

We introduce the following (Hardy) notation: if a function $f(t)$ is bounded from above, as $t \to +\infty$, by a function $g(t)$, we write $f(t) \lesssim g(t)$. Using this notation, we have therefore:

Proposition 1

\[ \sum_{r < M} \alpha_{kj}(t - r) \lesssim P_d\!\left(\frac{t - M}{\tau_{kj}}\right) e^{-\frac{t - M}{\tau_{kj}}}, \tag{12} \]

as $M \to -\infty$.

Additionally, the constraint (11) implies that there is some $\alpha^+ < +\infty$ such that, for all $t$, for all $k$, $j$:

\[ \sum_{r < t} \alpha_{kj}(t - r) \leq \alpha^+. \tag{13} \]

Indeed, for $n \geq 0$ integer, call $A_{kj}(n) = \sup\{\alpha_{kj}(n + x);\ x \in [0, 1[\,\}$. Then,

\[ \sum_{r < t} \alpha_{kj}(t - r) = \sum_{n=1}^{+\infty} \alpha_{kj}\left(n + \{t\}\right) \leq \sum_{n=1}^{+\infty} A_{kj}(n). \]

Due to (11), this series converges (e.g., from the Cauchy criterion). We set:

\[ \alpha^+ = \max_{kj} \sum_{n=1}^{+\infty} A_{kj}(n). \]

On physical grounds, this implies that the conductance $g_k$ remains bounded, even if each pre-synaptic neuron is firing all the time (see Equation (29) below).

3.4 Synaptic summation

Assume that Equation (8) remains valid for an arbitrary number of pre-synaptic spikes emitted by neuron $j$ within a finite time interval $[s, t]$ (i.e., neglecting non-linear effects such as the fact that there is a finite amount of neurotransmitter, leading to saturation effects). Then, one obtains the following equation for the conductance $g_{kj}$ at time $t$, upon the arrival of spikes at times $t_j^{(r)}(\omega)$ in the time interval $[s, t]$:

\[ g_{kj}(t) = g_{kj}(s) + G_{kj} \sum_{\{r;\ s \leq t_j^{(r)}(\omega) < t\}} \alpha_{kj}\left(t - t_j^{(r)}(\omega)\right). \]

The conductance at time $s$, $g_{kj}(s)$, depends on neuron $j$’s activity preceding $s$. This term is therefore unknown unless one knows exactly the past evolution before $s$. One way to circumvent this problem is to take $s$ arbitrarily far in the past, i.e., to take $s \to -\infty$, in order to remove the dependence on initial conditions. This corresponds to the following situation. When one observes a real neural network, the time where the observation starts, say $t = 0$, is usually not the time when the system has begun to exist, $s$ in our notations. Taking $s$ arbitrarily far in the past corresponds to assuming that the system has evolved long enough to have reached a sort of “permanent regime”, not necessarily stationary, when the observation starts. On phenomenological grounds, it is enough to take $-s$ larger than all characteristic relaxation times in the system (e.g., leak rate and synaptic decay rate). Here, for mathematical purposes, it is easier to take the limit $s \to -\infty$.

Since $g_{kj}(t)$ depends on the raster plot up to time $t$, via the spiking times $t_j^{(r)}(\omega)$, this limit only makes sense when taking it “conditionally” to a prescribed raster plot $\omega$. In other words, one can know the value of the conductances $g_{kj}$ at time $t$ only if the past spike times of the network are known. We write $g_{kj}(t, \omega)$ from now on to make this dependence explicit.

We set

\[ \alpha_{kj}(t, \omega) = \lim_{s \to -\infty} \sum_{\{r;\ s \leq t_j^{(r)}(\omega) < t\}} \alpha_{kj}\left(t - t_j^{(r)}(\omega)\right) \equiv \sum_{\{r;\ t_j^{(r)}(\omega) < t\}} \alpha_{kj}\left(t - t_j^{(r)}(\omega)\right), \tag{14} \]

with the convention that an empty sum is 0, so that $\alpha_{kj}(t, \Omega^0) = 0$ (recall that $\Omega^0$ is the raster such that no neuron ever fires). The limit (14) exists from (13).
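Numerically, the limit (14) is approximated by truncating the spike history at a finite horizon; by (12), the truncation error decays exponentially fast with the horizon. A minimal sketch using the delayed profile (10) (function and parameter names are illustrative):

import numpy as np

def alpha_kj_truncated(t, spike_times_j, tau_kj, horizon=50.0):
    # Approximates alpha_kj(t, omega): the sum of profile (10) over all
    # pre-synaptic spike times t_j^(r) < t. Spikes older than `horizon`
    # are dropped, with an error controlled by (12).
    total = 0.0
    for s in spike_times_j:
        if t - horizon <= s < t:
            u = (t - s) / tau_kj
            total += u * np.exp(-u)
    return total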

3.5 Noise

We allow, in the definition of the current $I_k(t)$ in Equation (6), the possibility of having a stochastic term corresponding to noise, so that:

\[ I_k(t) = i_k^{(\mathrm{ext})}(t) + \sigma_B\, \xi_k(t), \tag{15} \]

where $i_k^{(\mathrm{ext})}(t)$ is a deterministic external current and $\xi_k(t)$ a noise term whose amplitude is controlled by $\sigma_B > 0$. The model affords an extension where $\sigma_B$ depends on $k$, but this extension is straightforward and we do not develop it here. The noise term can be interpreted as the random variation in the ionic flux of charges crossing the membrane per unit time at the post-synaptic bouton, upon opening of ionic channels due to the binding of neurotransmitter.

We assume that $\xi_k(t)$ is a white noise, $\xi_k(t) = \frac{dB_k}{dt}$, where $B_k(t)$ is a Wiener process, so that $B(t) = (B_k(t))_{k=1}^{N}$ is an $N$-dimensional Wiener process. Call $P$ the noise probability distribution and $E[\,\cdot\,]$ the expectation under $P$. Then, by definition, $E[dB_k(t)] = 0$, $k = 1, \dots, N$, $t \in \mathbb{R}$, and $E[dB_k(s)\, dB_l(t)] = \delta_{kl}\, \delta(t - s)\, dt$, where $\delta_{kl} = 1$ if $l = k$ and 0 otherwise, $l, k = 1, \dots, N$, and $\delta(t - s)$ is the Dirac distribution.

3.6 Differential equation for the integrate regime of gIF

Summarizing, we write Equation (6) in the form:

\[ C_k \frac{dV_k}{dt} + g_k(t, \omega)\, V_k = i_k(t, \omega), \tag{16} \]

where:

\[ g_k(t, \omega) = g_{L,k} + \sum_{j=1}^{N} G_{kj}\, \alpha_{kj}(t, \omega). \tag{17} \]

This is the most general conductance form considered in this paper.

Moreover,

\[ i_k(t, \omega) = g_{L,k} E_L + \sum_{j=1}^{N} W_{kj}\, \alpha_{kj}(t, \omega) + i_k^{(\mathrm{ext})}(t) + \sigma_B\, \xi_k(t), \tag{18} \]

where $W_{kj}$ is the synaptic weight:

\[ \begin{cases} W_{kj} = E^+ G_{kj}, & \text{if } j \in \mathcal{E}, \\ W_{kj} = E^- G_{kj}, & \text{if } j \in \mathcal{I}. \end{cases} \]

These equations hold when the membrane potential is below the threshold (Integrate regime).

Therefore, gIF models constitute rather complex dynamical systems: the vector field (r.h.s.) of the differential Equation (16) depends on an auxiliary “variable”, which is the past spike sequence $\omega_{-\infty}^{[t]}$, and to properly define the evolution of $V_k$ from time $t$ to later times, one needs to know the spikes arising before $t$. This is precisely what makes gIF models more interesting than LIF. The definition of conductances introduces long-term memory effects.

IF models implement a reset mechanism on the membrane potential: if neuron $k$ has been reset between $s$ and $t$, say at time $\tau$, then $V_k(t)$ depends only on $V_k(\tau)$ and not on previous values, as in (4). But in the gIF model, contrary to LIF, there is also a dependence on the past via the conductance, and this dependence is not erased by the reset. That is why we have to consider a system with infinite memory.

3.7 The parameters space

The stochastic dynamical system (16) depends on a huge set of parameters: the membrane capacities $C_k$, $k = 1, \dots, N$; the threshold $\theta$; the reversal potentials $E_L$, $E^+$, $E^-$; the leak conductance $g_L$; the maximal synaptic conductances $G_{kj}$, $k, j = 1, \dots, N$, which define the neural network topology; the characteristic decay times $\tau_{kj}$, $k, j = 1, \dots, N$, of the synaptic responses; the noise amplitude $\sigma_B$; and, additionally, the parameters defining the external current $i_k^{(\mathrm{ext})}$. Although some parameters can be fixed from biology, such as $C_k$, the reversal potentials, $\tau_{kj}$, … some others, such as the $G_{kj}$’s, must be allowed to vary freely in order to leave open the possibility of modeling very different neural network structures.

In this paper, we are not interested in describing properties arising for specific values of those parameters, but instead in generic properties that hold on sets of parameters. More specifically, we denote the list of all parameters $\left((C_k)_{k=1}^{N}, E_L, E^+, E^-, (G_{kj})_{k,j=1}^{N}, \dots\right)$ by the symbol $\gamma$. This is a vector in $\mathbb{R}^K$, where $K$ is the total number of parameters. In this paper, we assume that $\gamma$ belongs to a bounded subset $\mathcal{H} \subset \mathbb{R}^K$. Basically, we want to avoid situations where some parameters become infinite, which would be unphysical. So the limits of $\mathcal{H}$ are the limits imposed by biophysics. Additionally, we assume that $\sigma_R > 0$ and $\sigma_B > 0$. Together with physical constraints such as “conductances are positive”, these are the only assumptions made on parameters. All mathematical results stated in the paper hold for any $\gamma \in \mathcal{H}$.

4 gIF model dynamics for a fixed raster

We assume that the raster $\omega$ is fixed, namely the spike history is given. Then, it is possible to integrate Equation (16) (Integrate regime) and to obtain explicitly the value of the membrane potential of a neuron at time $t$, given the membrane potential value at time $s$. Additionally, the reset condition (4) has the consequence of removing the dependence of neuron $k$ on the past anterior to $\tau_k(t, \omega)$.

4.1 Integrate regime

For $t_1 \leq t_2$, $t_1, t_2 \in \mathbb{R}$, set:

\[ \Gamma_k(t_1, t_2, \omega) = e^{-\frac{1}{C_k} \int_{t_1}^{t_2} g_k(u, \omega)\, du}. \tag{19} \]

We have:

\[ \Gamma_k(t_1, t_1, \omega) = \Gamma_k(t_2, t_2, \omega) = 1, \]

and:

\[ \frac{\partial \Gamma_k(t_1, t_2, \omega)}{\partial t_1} = \frac{g_k(t_1, \omega)}{C_k}\, \Gamma_k(t_1, t_2, \omega). \]

Fix two times $s < t$ and assume that, for neuron $k$, $V_k(u) < \theta$, $s \leq u \leq t$, so that the membrane potential $V_k$ obeys (16). Then,

\[ \frac{\partial}{\partial t_1}\left[ \Gamma_k(t_1, t_2, \omega)\, V_k(t_1) \right] = \Gamma_k(t_1, t_2, \omega)\left[ \frac{dV_k}{dt_1} + \frac{g_k(t_1, \omega)}{C_k} V_k(t_1) \right] = \Gamma_k(t_1, t_2, \omega)\, \frac{i_k(t_1, \omega)}{C_k}. \]

Integrating the previous equation with respect to $t_1$ between $s$ and $t$, and setting $t_2 = t$, we obtain:

\[ V_k(t) = \Gamma_k(s, t, \omega)\, V_k(s) + \frac{1}{C_k} \int_s^t \Gamma_k(t_1, t, \omega)\, i_k(t_1, \omega)\, dt_1. \]

This equation gives the variation in membrane potential during a period of rest (no spike) of the neuron. Note, however, that this neuron can still receive spikes from the other neurons via the update of conductances (made explicit in the previous equation by the dependence on the raster plot $\omega$).
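For a fixed raster, once the conductance trace $g_k(\cdot, \omega)$ is known, this solution can be evaluated by direct quadrature. A crude sketch with Riemann sums (names are hypothetical; the nested loop is kept for readability, not efficiency):

import numpy as np

def gamma(g, t1, t2, C=1.0, dt=1e-2):
    # Effective leak (19): exp(-(1/C) * integral_{t1}^{t2} g(u) du).
    u = np.arange(t1, t2, dt)
    return np.exp(-np.sum(g(u)) * dt / C)

def v_membrane(g, i, v_s, s, t, C=1.0, dt=1e-2):
    # V(t) = Gamma(s,t) V(s) + (1/C) * integral_s^t Gamma(t1,t) i(t1) dt1.
    t1 = np.arange(s, t, dt)
    integral = sum(gamma(g, x, t, C, dt) * i(x) for x in t1) * dt / C
    return gamma(g, s, t, C, dt) * v_s + integral

# constant conductance and current reproduce the LIF subthreshold solution
v = v_membrane(g=lambda u: np.full_like(u, 0.05), i=lambda x: 0.1,
               v_s=0.0, s=0.0, t=10.0)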

The term $\Gamma_k(s, t, \omega)$ given by (19) is an effective leak between $s$ and $t$. In the leaky integrate and fire model, it would have been equal to $e^{-\int_s^t \frac{dt_1}{\tau_L}} = e^{-\frac{t - s}{\tau_L}}$. The term:

\[ V_k(s, t, \omega) \stackrel{\mathrm{def}}{=} \frac{1}{C_k} \int_s^t \Gamma_k(t_1, t, \omega)\, i_k(t_1, \omega)\, dt_1, \]

has the dimension of a voltage. It corresponds to the integration of the total current between $s$ and $t$, weighted by the effective leak term $\Gamma_k(t_1, t, \omega)$. It decomposes as

\[ V_k(s, t, \omega) = V_k^{(\mathrm{syn})}(s, t, \omega) + V_k^{(\mathrm{ext})}(s, t, \omega) + V_k^{(B)}(s, t, \omega), \]

where,

\[ V_k^{(\mathrm{syn})}(s, t, \omega) = \frac{1}{C_k} \sum_{j=1}^{N} W_{kj} \int_s^t \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega)\, dt_1, \tag{20} \]

is the synaptic contribution. Moreover,

\[ V_k^{(\mathrm{ext})}(s, t, \omega) = \frac{E_L}{\tau_{L,k}} \int_s^t \Gamma_k(t_1, t, \omega)\, dt_1 + \frac{1}{C_k} \int_s^t i_k^{(\mathrm{ext})}(t_1)\, \Gamma_k(t_1, t, \omega)\, dt_1, \]

where we set:

\[ \tau_{L,k} \stackrel{\mathrm{def}}{=} \frac{C_k}{g_{L,k}}, \tag{21} \]

the characteristic leak time of neuron $k$. We have included the leak reversal potential term in this “external” term for convenience. Therefore, even if there is no external current, this term is nevertheless non-zero.

The sum of the synaptic and external terms gives the deterministic contribution in the membrane potential. We note:

\[ V_k^{(\mathrm{det})}(s, t, \omega) = V_k^{(\mathrm{syn})}(s, t, \omega) + V_k^{(\mathrm{ext})}(s, t, \omega). \]

Finally,

\[ V_k^{(B)}(s, t, \omega) \stackrel{\mathrm{def}}{=} \frac{\sigma_B}{C_k} \int_s^t \Gamma_k(t_1, t, \omega)\, \xi_k(t_1)\, dt_1 = \frac{\sigma_B}{C_k} \int_s^t \Gamma_k(t_1, t, \omega)\, dB_k(t_1), \tag{22} \]

is a noise term. This is a Gaussian process with mean 0 and variance:

\[ \left(\frac{\sigma_B}{C_k}\right)^2 E\left[\left( \int_s^t \Gamma_k(t_1, t, \omega)\, dB_k(t_1) \right)^2\right] = \left(\frac{\sigma_B}{C_k}\right)^2 \int_s^t \Gamma_k^2(t_1, t, \omega)\, dt_1. \tag{23} \]

The square root of this quantity has the dimension of a voltage.
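The identity (23) is the Itô isometry. A quick Monte Carlo sanity check for a generic deterministic integrand (the exponential stand-in for $\Gamma_k$ and all values are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
dt, T, trials = 0.01, 2.0, 20000
t = np.arange(0.0, T, dt)
f = np.exp(-(T - t) / 0.5)                 # stand-in for Gamma_k(t1, t, omega)
dB = rng.normal(0.0, np.sqrt(dt), size=(trials, t.size))
samples = (f * dB).sum(axis=1)             # realizations of integral f dB
print(samples.var(), (f ** 2).sum() * dt)  # both close to integral of f^2 dt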

As a final result, for a fixed $\omega$, the variation in membrane potential during a period of rest (no spike) of neuron $k$ between $s$ and $t$ reads (subthreshold oscillations):

\[ V_k(t) = \Gamma_k(s, t, \omega)\, V_k(s) + V_k^{(\mathrm{det})}(s, t, \omega) + V_k^{(B)}(s, t, \omega). \tag{24} \]

4.2 Reset

In Equation (4), as in all IF models that we know, the reset of the membrane potential has the effect of removing the dependence of $V_k$ on its past, since $V_k([t + \tau_{\mathrm{sep}}])$ is replaced by $V_{\mathrm{reset}}$. Hence, reset removes the dependence on the initial condition $V_k(s)$ in (24) provided that neuron $k$ fires between $s$ and $t$ in the raster $\omega$. As a consequence, Equation (24) holds from the “last reset time” introduced in Section 2.10 up to time $t$. Then, Equation (24) reads

\[ V_k(t) = V_k^{(\mathrm{det})}(\tau_k(t, \omega), t, \omega) + V_k^{(\mathrm{noise})}(\tau_k(t, \omega), t, \omega), \tag{25} \]

where:

\[ V_k^{(\mathrm{noise})}(\tau_k(t, \omega), t, \omega) = \Gamma_k(\tau_k(t, \omega), t, \omega)\, V_{\mathrm{reset}} + V_k^{(B)}(\tau_k(t, \omega), t, \omega), \tag{26} \]

is a Gaussian process with mean zero and variance:

\[ \sigma_k^2(\tau_k(t, \omega), t, \omega) = \Gamma_k^2(\tau_k(t, \omega), t, \omega)\, \sigma_R^2 + \left(\frac{\sigma_B}{C_k}\right)^2 \int_{\tau_k(t, \omega)}^{t} \Gamma_k^2(t_1, t, \omega)\, dt_1. \tag{27} \]

5 Useful bounds

We now prove several bounds used throughout the paper.

5.1 Bounds on the conductance

From (13), and since $\alpha_{kj}(t) \geq 0$:

\[ 0 = \alpha_{kj}(t, \Omega^0) \leq \alpha_{kj}(t, \omega) = \sum_{\{r;\ t_j^{(r)}(\omega) < t\}} \alpha_{kj}\left(t - t_j^{(r)}(\omega)\right) \leq \sum_{r < t} \alpha_{kj}(t - r) = \alpha_{kj}(t, \Omega^1) \leq \alpha^+. \tag{28} \]

Therefore,

\[ g_{L,k} = g_k(t, \Omega^0) \leq g_k(t, \omega) \leq g_k(t, \Omega^1) \stackrel{\mathrm{def}}{=} g_{M,k} \leq g_{L,k} + \alpha^+ \sum_{j=1}^{N} G_{kj}, \tag{29} \]

so that the conductance is uniformly bounded in $t$ and $\omega$. The minimal conductance is attained when no neuron ever fires, so that $\Omega^0$ is the “lowest conductance state”. On the opposite, the maximal conductance is reached when all neurons fire all the time, so that $\Omega^1$ is the “highest conductance state”. To simplify notations, we note $\tau_{M,k} = \frac{C_k}{g_{M,k}}$. This is the minimal relaxation time scale for neuron $k$, while $\tau_{L,k} = \frac{C_k}{g_{L,k}}$ is the maximal relaxation time:

\[ \tau_{M,k} = \frac{C_k}{g_{M,k}} \leq \tau_{L,k} = \frac{C_k}{g_{L,k}}. \tag{30} \]

5.2 Bounds on membrane potential

Now, from (19), we have, for $s < t$:

\[ 0 \leq \Gamma_k(s, t, \Omega^1) = e^{-\frac{t - s}{\tau_{M,k}}} \leq \Gamma_k(s, t, \omega) \leq \Gamma_k(s, t, \Omega^0) = e^{-\frac{t - s}{\tau_{L,k}}} < 1. \tag{31} \]

As a consequence, $\Gamma_k(s, t, \omega) \to 0$ exponentially fast as $s \to -\infty$.

Moreover,

\[ 0 \leq \int_s^t \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega)\, dt_1 \leq \alpha^+ \int_s^t e^{-\frac{t - t_1}{\tau_{L,k}}} dt_1 = \alpha^+ \tau_{L,k} \left(1 - e^{-\frac{t - s}{\tau_{L,k}}}\right) \leq \alpha^+ \tau_{L,k}, \tag{32} \]

so that:

\[ \frac{\alpha^+}{g_{L,k}} \sum_{j \in \mathcal{I}} W_{kj} \leq V_k^{(\mathrm{syn})}(s, t, \omega) \leq \frac{\alpha^+}{g_{L,k}} \sum_{j \in \mathcal{E}} W_{kj}. \tag{33} \]

Thus, $V_k^{(\mathrm{syn})}(s, t, \omega)$ is uniformly bounded in $s$, $t$.

Establishing similar bounds for $V_k^{(\mathrm{ext})}(s, t, \omega)$ requires the assumption that $A \leq i_k^{(\mathrm{ext})}(t) \leq B$, but obtaining tighter bounds requires additionally the knowledge of the sign of $\frac{A}{C_k} + \frac{E_L}{\tau_{L,k}}$ and of $\frac{B}{C_k} + \frac{E_L}{\tau_{L,k}}$. Here, we only have to consider that:

\[ \left| i_k^{(\mathrm{ext})}(t) \right| \leq i^+. \]

In this case,

\[ \left| V_k^{(\mathrm{ext})}(s, t, \omega) \right| = \left| \frac{E_L}{\tau_{L,k}} \int_s^t \Gamma_k(t_1, t, \omega)\, dt_1 + \frac{1}{C_k} \int_s^t i_k^{(\mathrm{ext})}(t_1)\, \Gamma_k(t_1, t, \omega)\, dt_1 \right| \leq \left[ \frac{|E_L|}{\tau_{L,k}} + \frac{i^+}{C_k} \right] \int_s^t \Gamma_k(t_1, t, \omega)\, dt_1 \leq \left[ \frac{|E_L|}{\tau_{L,k}} + \frac{i^+}{C_k} \right] \int_s^t e^{-\frac{t - t_1}{\tau_{L,k}}} dt_1, \]

so that:

\[ \left| V_k^{(\mathrm{ext})}(s, t, \omega) \right| \leq \left( |E_L| + \frac{i^+}{g_{L,k}} \right) \left( 1 - e^{-\frac{t - s}{\tau_{L,k}}} \right) \leq |E_L| + \frac{i^+}{g_{L,k}}. \tag{34} \]

Consequently,

Proposition 2

\[ V_k^- \stackrel{\mathrm{def}}{=} \frac{\alpha^+}{g_{L,k}} \sum_{j \in \mathcal{I}} W_{kj} - \left( |E_L| + \frac{i^+}{g_{L,k}} \right) < V_k^{(\mathrm{det})}(s, t, \omega) < V_k^+ \stackrel{\mathrm{def}}{=} \frac{\alpha^+}{g_{L,k}} \sum_{j \in \mathcal{E}} W_{kj} + |E_L| + \frac{i^+}{g_{L,k}}, \tag{35} \]

which provides uniform bounds in $s$, $t$, $\omega$ for the deterministic part of the membrane potential.

5.3 Bounds on the noise variance

Let us now consider the stochastic part $V_k^{(\mathrm{noise})}(\tau_k(t, \omega), t, \omega)$. It has zero mean, and its variance (27) obeys the bounds:

\[ e^{-2\frac{t - \tau_k(t, \omega)}{\tau_{M,k}}} \sigma_R^2 + \frac{\tau_{M,k}}{2} \left(\frac{\sigma_B}{C_k}\right)^2 \left(1 - e^{-2\frac{t - \tau_k(t, \omega)}{\tau_{M,k}}}\right) \leq \sigma_k^2(\tau_k(t, \omega), t, \omega) \leq e^{-2\frac{t - \tau_k(t, \omega)}{\tau_{L,k}}} \sigma_R^2 + \frac{\tau_{L,k}}{2} \left(\frac{\sigma_B}{C_k}\right)^2 \left(1 - e^{-2\frac{t - \tau_k(t, \omega)}{\tau_{L,k}}}\right). \]

If $\sigma_R^2 < \frac{\tau_{M,k}}{2}\left(\frac{\sigma_B}{C_k}\right)^2$, the left-hand side is an increasing function of $u = t - \tau_k(t, \omega) \geq 0$, so that the minimum, $\sigma_R^2$, is reached for $u = 0$ while the maximum, $\frac{\tau_{M,k}}{2}\left(\frac{\sigma_B}{C_k}\right)^2$, is reached as $u \to +\infty$. The opposite holds if $\sigma_R^2 \geq \frac{\tau_{M,k}}{2}\left(\frac{\sigma_B}{C_k}\right)^2$. The same argument holds mutatis mutandis for the right-hand side. We set:

\[ \sigma_k^- \stackrel{\mathrm{def}}{=} \min\left( \frac{\sigma_B}{C_k} \sqrt{\frac{\tau_{M,k}}{2}},\ \sigma_R \right); \qquad \sigma_k^+ \stackrel{\mathrm{def}}{=} \max\left( \frac{\sigma_B}{C_k} \sqrt{\frac{\tau_{L,k}}{2}},\ \sigma_R \right), \tag{36} \]

so that:

Proposition 3

\[ 0 < \sigma_k^- \leq \sigma_k(\tau_k(t, \omega), t, \omega) \leq \sigma_k^+ < +\infty. \tag{37} \]

5.4 The limit $\tau_k(t, \omega) \to -\infty$

For fixed $s$ and $t$, there are infinitely many rasters such that $\tau_k(t, \omega) < s$ (we remind that rasters are infinite sequences). One may argue that, taking the difference $t - s$ sufficiently large, the probability of such sequences should vanish. It is indeed possible to show (Section 8.1) that this probability vanishes exponentially fast with $t - s$, meaning unfortunately that it is positive whatever $t - s$. So we have to consider cases where $\tau_k(t, \omega)$ can go arbitrarily far in the past (this is also a key toward an extension of the present analysis to more general conductance-based models, as discussed in Section 9.3). Therefore, we have to check that the quantities introduced in the previous sections are well defined as $\tau_k(t, \omega) \to -\infty$.

Fix $s$ real. For all $\omega$ such that $\tau_k(t, \omega) \leq s$ (this condition ensuring that $k$ does not fire between $s$ and $t$) we have, from (28), (31), $0 \leq \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega) \leq \alpha^+ e^{-\frac{t - t_1}{\tau_{L,k}}}$, $t_1 \in [s, t]$. Now, since $\lim_{s \to -\infty} \int_s^t e^{-\frac{t - t_1}{\tau_{L,k}}} dt_1 = \tau_{L,k}$ exists, the limit

\[ \lim_{\tau_k(t, \omega) \to -\infty} V_k^{(\mathrm{syn})}(\tau_k(t, \omega), t, \omega) = \frac{1}{C_k} \sum_{j=1}^{N} W_{kj} \int_{-\infty}^{t} \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega)\, dt_1 \]

exists as well. The same holds for the external term $V_k^{(\mathrm{ext})}(\tau_k(t, \omega), t, \omega)$.

Finally, since $\Gamma_k(\tau_k(t, \omega), t, \omega) \to 0$ as $\tau_k(t, \omega) \to -\infty$, the noise term (26) becomes, in the limit:

\[ \lim_{\tau_k(t, \omega) \to -\infty} V_k^{(\mathrm{noise})}(\tau_k(t, \omega), t, \omega) = \frac{\sigma_B}{C_k} \int_{-\infty}^{t} \Gamma_k(t_1, t, \omega)\, dB_k(t_1), \]

which is a Gaussian process with mean 0 and a variance $\left(\frac{\sigma_B}{C_k}\right)^2 \int_{-\infty}^{t} \Gamma_k^2(t_1, t, \omega)\, dt_1$ which obeys the bounds (37).

6 Continuity with respect to a raster

6.1 Definition

Due to the particular structure of gIF models, we have seen that the membrane potential at time $t$ is both a function of $t$ and of the full sequence of past spikes $\omega_{-\infty}^{[t]}$. One expects, however, the dependence with respect to the past spikes to decay as those spikes are more distant in the past. This issue is related to a notion of continuity with respect to a raster that we now characterize.

Definition 1 Let $m$ be a positive integer. The $m$-variation of a function $f(t, \omega) \equiv f(t, \omega_{-\infty}^{[t]})$ is:

\[ \mathrm{var}_m[f(t, \cdot)] = \sup\left\{ \left| f(t, \omega) - f(t, \omega') \right| :\ \omega \stackrel{m,[t]}{=} \omega' \right\}, \tag{38} \]

where the definition of $\stackrel{m,[t]}{=}$ is given in Equation (5). Hence, this notion characterizes the maximal variation of $f(t, \cdot)$ on the set of spike sequences identical from time $[t] - m$ to time $[t]$ (a cylinder set). It implements the fact that one may truncate the spike history to time $[t] - m$ and make an error which is at most $\mathrm{var}_m[f(t, \cdot)]$.

Definition 2 The function $f(t, \omega)$ is continuous if $\mathrm{var}_m[f(t, \cdot)] \to 0$ as $m \to +\infty$.

Additional information is provided by the rate of convergence to 0 with $m$. The faster this convergence, the smaller the error made when replacing an infinite raster by a spike block on a finite time horizon.
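The $m$-variation can also be probed empirically: sample pairs of histories that agree on the last $m$ steps and differ arbitrarily before, and record the largest discrepancy observed (a lower bound on the supremum in (38)). A toy sketch with a scalar exponential “conductance”; all names and parameters are illustrative:

import numpy as np

rng = np.random.default_rng(1)

def g_of_history(hist, tau=2.0):
    # Toy scalar "conductance": sum of exponential profiles (9) over all
    # past spikes in the binary history hist[0..T-1] (last index is the
    # most recent step), evaluated at time T.
    T = hist.size
    ages = T - np.flatnonzero(hist)  # time elapsed since each spike
    return np.exp(-ages / tau).sum()

def var_m_estimate(m, T=60, pairs=2000):
    worst = 0.0
    for _ in range(pairs):
        a = rng.integers(0, 2, T)
        b = a.copy()
        b[:T - m] = rng.integers(0, 2, T - m)  # rewrite the remote past
        worst = max(worst, abs(g_of_history(a) - g_of_history(b)))
    return worst

print([round(var_m_estimate(m), 4) for m in (5, 10, 20)])  # decays with m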

6.2 Continuity of conductances

Proposition 4 The conductance $g_k(t, \omega)$ is continuous in $\omega$, for all $t$, for all $k = 1, \dots, N$.

Proof Fix $k = 1, \dots, N$, $t \in \mathbb{R}$, $m > 0$ integer. We have, for $\omega \stackrel{m,[t]}{=} \omega'$:

\[ \left| \alpha_{kj}(t, \omega) - \alpha_{kj}(t, \omega') \right| = \left| \sum_{\{r;\ t_j^{(r)}(\omega) < t\}} \alpha_{kj}\left(t - t_j^{(r)}(\omega)\right) - \sum_{\{r;\ t_j^{(r)}(\omega') < t\}} \alpha_{kj}\left(t - t_j^{(r)}(\omega')\right) \right| = \left| \sum_{\{r;\ t_j^{(r)}(\omega) < t - m\}} \alpha_{kj}\left(t - t_j^{(r)}(\omega)\right) - \sum_{\{r;\ t_j^{(r)}(\omega') < t - m\}} \alpha_{kj}\left(t - t_j^{(r)}(\omega')\right) \right|, \]

since the sets of firing times $\{t - m \leq t_j^{(r)}(\omega) < t\}$ and $\{t - m \leq t_j^{(r)}(\omega') < t\}$ are identical by hypothesis. So, since $\alpha_{kj}(x) \geq 0$,

\[ \left| \alpha_{kj}(t, \omega) - \alpha_{kj}(t, \omega') \right| \leq \sum_{\{r;\ t_j^{(r)}(\omega) < t - m\}} \alpha_{kj}\left(t - t_j^{(r)}(\omega)\right) + \sum_{\{r;\ t_j^{(r)}(\omega') < t - m\}} \alpha_{kj}\left(t - t_j^{(r)}(\omega')\right) \leq 2 \sum_{r < t - m} \alpha_{kj}(t - r). \]

Therefore, as $m \to +\infty$, from (12) and setting $M = t - m$,

\[ \mathrm{var}_m[\alpha_{kj}(t, \cdot)] \leq 2 \sum_{r < M} \alpha_{kj}(t - r) \lesssim 2\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}}, \]

which converges to 0 as $m \to +\infty$.

Therefore, from (17), $g_k(t, \omega)$ is continuous, with a variation

\[ \mathrm{var}_m[g_k(t, \cdot)] \lesssim 2 \sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}}, \]

which converges exponentially fast to 0 as $m \to +\infty$. □

6.3 Continuity of the membrane potentials

Proposition 5 The deterministic part of the membrane potential, $V_k^{(\mathrm{det})}(\tau_k(t, \omega), t, \omega)$, is continuous, and its $m$-variation decays exponentially fast with $m$.

Proof In the proof, we shall establish precise upper bounds for the variation in $V_k^{(\mathrm{syn})}(\tau_k(t, \cdot), t, \cdot)$ and $V_k^{(\mathrm{ext})}(\tau_k(t, \cdot), t, \cdot)$, since they are used later on for the proof of uniqueness of a Gibbs measure (Section 7.4.1). From the previous result, it is easy to show that, for all $\omega \stackrel{m,[t]}{=} \omega'$, $t_1 \leq t_2 \leq t$:

\[ \left| \Gamma_k(t_1, t_2, \omega) - \Gamma_k(t_1, t_2, \omega') \right| \leq \Gamma_k(t_1, t_2, \omega) \left( e^{\frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1)} - 1 \right). \]

Therefore, from (31),

\[ \mathrm{var}_m[\Gamma_k(t_1, t_2, \cdot)] \leq e^{-\frac{t_2 - t_1}{\tau_{L,k}}} \left( e^{\frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1)} - 1 \right), \]

and $\Gamma_k(t_1, t_2, \omega)$ is continuous in $\omega$.

Now, the product $\Gamma_k(t_1, t_2, \omega)\, \alpha_{kj}(t_1, \omega)$ is continuous, as a product of continuous functions. Moreover,

\[ \mathrm{var}_m[\Gamma_k(t_1, t_2, \cdot)\, \alpha_{kj}(t_1, \cdot)] \leq \sup_{\omega \in X} \Gamma_k(t_1, t_2, \omega)\, \mathrm{var}_m[\alpha_{kj}(t_1, \cdot)] + \sup_{\omega \in X} \alpha_{kj}(t_1, \omega)\, \mathrm{var}_m[\Gamma_k(t_1, t_2, \cdot)] = e^{-\frac{t_2 - t_1}{\tau_{L,k}}} \left[ \mathrm{var}_m[\alpha_{kj}(t_1, \cdot)] + \left( e^{\frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1)} - 1 \right) \alpha_{kj}(t_1, \Omega^1) \right], \]

so that:

\[ \mathrm{var}_m[\Gamma_k(t_1, t_2, \cdot)\, \alpha_{kj}(t_1, \cdot)] < e^{-\frac{t_2 - t_1}{\tau_{L,k}}} \left[ 2\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \left( e^{\frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1)} - 1 \right) \alpha^+ \right]. \]

Since, as $m \to +\infty$:

\[ e^{\frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1)} - 1 \sim \frac{\mathrm{var}_m[g_k]}{C_k}(t_2 - t_1) \lesssim \frac{2(t_2 - t_1)}{C_k} \sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}}, \]

we have,

\[ \mathrm{var}_m[\Gamma_k(t_1, t_2, \cdot)\, \alpha_{kj}(t_1, \cdot)] \lesssim 2\, e^{-\frac{t_2 - t_1}{\tau_{L,k}}} \left[ P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \frac{\alpha^+ (t_2 - t_1)}{C_k} \sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} \right], \]

which converges to 0 as $m \to +\infty$.

Let us show the continuity of $V_k^{(\mathrm{syn})}(\tau_k(t, \cdot), t, \cdot)$. We have, from (20),

\[ \left| V_k^{(\mathrm{syn})}(\tau_k(t, \omega), t, \omega) - V_k^{(\mathrm{syn})}(\tau_k(t, \omega'), t, \omega') \right| \leq \frac{1}{C_k} \sum_{j=1}^{N} |W_{kj}| \times \left| \int_{\tau_k(t, \omega)}^{t} \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega)\, dt_1 - \int_{\tau_k(t, \omega')}^{t} \Gamma_k(t_1, t, \omega')\, \alpha_{kj}(t_1, \omega')\, dt_1 \right|. \]

The following inequality is used in several places in the paper. For a $t_1$-integrable function $f(t_1, t, \omega)$, we have:

\[ \left| \int_{\tau_k(t, \omega)}^{t} f(t_1, t, \omega)\, dt_1 - \int_{\tau_k(t, \omega')}^{t} f(t_1, t, \omega')\, dt_1 \right| \leq \int_{\tau_k(t, \omega)}^{t} \left| f(t_1, t, \omega) - f(t_1, t, \omega') \right| dt_1 + \left| \int_{\tau_k(t, \omega)}^{\tau_k(t, \omega')} f(t_1, t, \omega')\, dt_1 \right|. \tag{39} \]

Here, it gives, for $t_1 \leq t$:

\[ \left| \int_{\tau_k(t, \omega)}^{t} \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega)\, dt_1 - \int_{\tau_k(t, \omega')}^{t} \Gamma_k(t_1, t, \omega')\, \alpha_{kj}(t_1, \omega')\, dt_1 \right| \leq \int_{\tau_k(t, \omega)}^{t} \left| \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega) - \Gamma_k(t_1, t, \omega')\, \alpha_{kj}(t_1, \omega') \right| dt_1 + \left| \int_{\tau_k(t, \omega)}^{\tau_k(t, \omega')} \Gamma_k(t_1, t, \omega')\, \alpha_{kj}(t_1, \omega')\, dt_1 \right|. \]

For the first term, we have,

\[ \int_{\tau_k(t, \omega)}^{t} \left| \Gamma_k(t_1, t, \omega)\, \alpha_{kj}(t_1, \omega) - \Gamma_k(t_1, t, \omega')\, \alpha_{kj}(t_1, \omega') \right| dt_1 \leq \int_{\tau_k(t, \omega)}^{t} \mathrm{var}_m[\Gamma_k(t_1, t, \cdot)\, \alpha_{kj}(t_1, \cdot)]\, dt_1 \leq \int_{-\infty}^{t} \mathrm{var}_m[\Gamma_k(t_1, t, \cdot)\, \alpha_{kj}(t_1, \cdot)]\, dt_1 \]
\[ \lesssim 2 \int_{-\infty}^{t} e^{-\frac{t - t_1}{\tau_{L,k}}} \left[ P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \frac{\alpha^+ (t - t_1)}{C_k} \sum_{j'=1}^{N} G_{kj'}\, P_d\!\left(\frac{m}{\tau_{kj'}}\right) e^{-\frac{m}{\tau_{kj'}}} \right] dt_1 = 2 \tau_{L,k} \left[ P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \frac{\alpha^+ \tau_{L,k}}{C_k} \sum_{j'=1}^{N} G_{kj'}\, P_d\!\left(\frac{m}{\tau_{kj'}}\right) e^{-\frac{m}{\tau_{kj'}}} \right]. \]

Let us now consider the second term. If $\tau_k(t, \omega) \geq t - m$ or $\tau_k(t, \omega') \geq t - m$, then $\tau_k(t, \omega) = \tau_k(t, \omega')$ and this term vanishes. Therefore, the supremum in the definition of $\mathrm{var}_m[V_k^{(\mathrm{syn})}(\tau_k(t, \cdot), t, \cdot)]$ is attained if $\tau_k(t, \omega) < t - m$ and $\tau_k(t, \omega') < t - m$. We may assume, without loss of generality, that $\tau_k(t, \omega) \leq \tau_k(t, \omega')$. Then, from (32),

\[ \int_{\tau_k(t, \omega)}^{\tau_k(t, \omega')} \Gamma_k(t_1, t, \omega')\, \alpha_{kj}(t_1, \omega')\, dt_1 < \alpha^+ \int_{\tau_k(t, \omega)}^{\tau_k(t, \omega')} e^{-\frac{t - t_1}{\tau_{L,k}}} dt_1 = \alpha^+ \tau_{L,k}\, e^{-\frac{t - \tau_k(t, \omega')}{\tau_{L,k}}} \left( 1 - e^{-\frac{\tau_k(t, \omega') - \tau_k(t, \omega)}{\tau_{L,k}}} \right) \leq \alpha^+ \tau_{L,k}\, e^{-\frac{t - \tau_k(t, \omega')}{\tau_{L,k}}} \leq \alpha^+ \tau_{L,k}\, e^{-\frac{m}{\tau_{L,k}}}. \]

So, we have, for the variation of $V_k^{(\mathrm{syn})}(\tau_k(t, \cdot), t, \cdot)$, using (21):

\[ \mathrm{var}_m[V_k^{(\mathrm{syn})}(\tau_k(t, \cdot), t, \cdot)] \lesssim \frac{1}{g_{L,k}} \sum_{j=1}^{N} |W_{kj}| \left[ 2\left( P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \frac{\alpha^+}{g_{L,k}} \sum_{j'=1}^{N} G_{kj'}\, P_d\!\left(\frac{m}{\tau_{kj'}}\right) e^{-\frac{m}{\tau_{kj'}}} \right) + \alpha^+ e^{-\frac{m}{\tau_{L,k}}} \right], \]

so that finally,

\[ \mathrm{var}_m[V_k^{(\mathrm{syn})}(\tau_k(t, \cdot), t, \cdot)] \lesssim \sum_{j=1}^{N} A_{kj}^{(\mathrm{syn})}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + B_k^{(\mathrm{syn})}\, e^{-\frac{m}{\tau_{L,k}}}, \tag{40} \]

with

\[ A_{kj}^{(\mathrm{syn})} = \frac{1}{g_{L,k}} \left( 2 |W_{kj}| + \frac{\alpha^+ G_{kj}}{g_{L,k}} \sum_{j'=1}^{N} |W_{kj'}| \right), \tag{41} \]
\[ B_k^{(\mathrm{syn})} = \frac{\alpha^+}{g_{L,k}} \sum_{j=1}^{N} |W_{kj}|, \tag{42} \]

and $\mathrm{var}_m[V_k^{(\mathrm{syn})}(\tau_k(t, \cdot), t, \cdot)]$ converges to 0 exponentially fast as $m \to +\infty$.

Now, let us show the continuity of $V_k^{(\mathrm{ext})}(\tau_k(t, \omega), t, \omega)$ with respect to $\omega$. We have:

\[ \left| V_k^{(\mathrm{ext})}(\tau_k(t, \omega), t, \omega) - V_k^{(\mathrm{ext})}(\tau_k(t, \omega'), t, \omega') \right| = \left| \int_{\tau_k(t, \omega)}^{t} \left( \frac{E_L}{\tau_{L,k}} + \frac{i_k^{(\mathrm{ext})}(t_1)}{C_k} \right) \Gamma_k(t_1, t, \omega)\, dt_1 - \int_{\tau_k(t, \omega')}^{t} \left( \frac{E_L}{\tau_{L,k}} + \frac{i_k^{(\mathrm{ext})}(t_1)}{C_k} \right) \Gamma_k(t_1, t, \omega')\, dt_1 \right| \]
\[ \leq \int_{\tau_k(t, \omega)}^{t} \left| \frac{E_L}{\tau_{L,k}} + \frac{i_k^{(\mathrm{ext})}(t_1)}{C_k} \right| \left| \Gamma_k(t_1, t, \omega) - \Gamma_k(t_1, t, \omega') \right| dt_1 + \int_{\tau_k(t, \omega)}^{\tau_k(t, \omega')} \left| \frac{E_L}{\tau_{L,k}} + \frac{i_k^{(\mathrm{ext})}(t_1)}{C_k} \right| \Gamma_k(t_1, t, \omega')\, dt_1 \]
\[ \leq \left( \frac{|E_L|}{\tau_{L,k}} + \frac{i^+}{C_k} \right) \left( \int_{\tau_k(t, \omega)}^{t} \mathrm{var}_m[\Gamma_k(t_1, t, \cdot)]\, dt_1 + \int_{\tau_k(t, \omega)}^{\tau_k(t, \omega')} e^{-\frac{t - t_1}{\tau_{L,k}}} dt_1 \right) \]
\[ \lesssim \left( \frac{|E_L|}{\tau_{L,k}} + \frac{i^+}{C_k} \right) \left( \frac{2 \tau_{L,k}^2}{C_k} \sum_{j=1}^{N} G_{kj}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + \tau_{L,k}\, e^{-\frac{m}{\tau_{L,k}}} \right), \]

where, in the last inequality, we have used that the supremum in the variation is attained for $\tau_k(t, \omega) < t - m$ and $\tau_k(t, \omega') < t - m$. Finally:

\[ \mathrm{var}_m[V_k^{(\mathrm{ext})}(\tau_k(t, \cdot), t, \cdot)] \lesssim \sum_{j=1}^{N} A_{kj}^{(\mathrm{ext})}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + B_k^{(\mathrm{ext})}\, e^{-\frac{m}{\tau_{L,k}}}, \tag{43} \]

where,

\[ A_{kj}^{(\mathrm{ext})} = \frac{2 G_{kj}}{g_{L,k}}\, B_k^{(\mathrm{ext})}, \tag{44} \]
\[ B_k^{(\mathrm{ext})} = |E_L| + \frac{i^+}{g_{L,k}}, \tag{45} \]

and $V_k^{(\mathrm{ext})}(\tau_k(t, \cdot), t, \cdot)$ is continuous.

As a conclusion, $V_k^{(\mathrm{det})}(\tau_k(t, \cdot), t, \cdot)$ is continuous, as the sum of two continuous functions. □

6.4 Continuity of the variance of $V_k^{(\mathrm{noise})}(\tau_k(t, \cdot), t, \cdot)$

Using the same type of arguments, one can also prove that

Proposition 6 The variance $\sigma_k^2(\tau_k(t, \omega), t, \omega)$ is continuous in $\omega$, for all $t$, for all $k = 1, \dots, N$.

Proof We have, from (27),

\[ \left| \sigma_k^2(\tau_k(t, \omega), t, \omega) - \sigma_k^2(\tau_k(t, \omega'), t, \omega') \right| \leq \sigma_R^2\, \mathrm{var}_m[\Gamma_k^2(\tau_k(t, \cdot), t, \cdot)] + \left(\frac{\sigma_B}{C_k}\right)^2 \left| \int_{\tau_k(t, \omega)}^{t} \Gamma_k^2(t_1, t, \omega)\, dt_1 - \int_{\tau_k(t, \omega')}^{t} \Gamma_k^2(t_1, t, \omega')\, dt_1 \right|. \]

For the first term, we have that the sup in $\mathrm{var}_m[\Gamma_k^2(\tau_k(t, \cdot), t, \cdot)]$ is attained for $\tau_k(t, \omega), \tau_k(t, \omega') < t - m$, and:

\[ \mathrm{var}_m[\Gamma_k^2(\tau_k(t, \cdot), t, \cdot)] \leq 2 \sup\left\{ \Gamma_k^2(\tau_k(t, \omega), t, \omega);\ \omega\ \text{s.t.}\ \tau_k(t, \omega) < t - m \right\} \leq 2\, e^{-\frac{2m}{\tau_{L,k}}}. \]

For the second term, we have:

\[ \left| \int_{\tau_k(t, \omega)}^{t} \Gamma_k^2(t_1, t, \omega)\, dt_1 - \int_{\tau_k(t, \omega')}^{t} \Gamma_k^2(t_1, t, \omega')\, dt_1 \right| \leq \int_{\tau_k(t, \omega)}^{t} \left| \Gamma_k^2(t_1, t, \omega) - \Gamma_k^2(t_1, t, \omega') \right| dt_1 + \int_{\tau_k(t, \omega)}^{\tau_k(t, \omega')} \Gamma_k^2(t_1, t, \omega')\, dt_1 \]
\[ \leq \int_{\tau_k(t, \omega)}^{t} \mathrm{var}_m[\Gamma_k^2(t_1, t, \cdot)]\, dt_1 + \int_{\tau_k(t, \omega)}^{\tau_k(t, \omega')} e^{-\frac{2(t - t_1)}{\tau_{L,k}}} dt_1 \leq \int_{-\infty}^{t} e^{-\frac{2(t - t_1)}{\tau_{L,k}}} \left( e^{\frac{2\, \mathrm{var}_m[g_k(t, \cdot)]\, (t - t_1)}{C_k}} - 1 \right) dt_1 + \frac{\tau_{L,k}}{2}\, e^{-\frac{2m}{\tau_{L,k}}} \lesssim \frac{\tau_{L,k}^2}{2 C_k}\, \mathrm{var}_m[g_k(t, \cdot)] + \frac{\tau_{L,k}}{2}\, e^{-\frac{2m}{\tau_{L,k}}}, \]

so that finally,

\[ \mathrm{var}_m[\sigma_k^2(t, \cdot)] \lesssim \sum_{j=1}^{N} A_{kj}^{(\sigma)}\, P_d\!\left(\frac{m}{\tau_{kj}}\right) e^{-\frac{m}{\tau_{kj}}} + C_k^{(\sigma)}\, e^{-\frac{2m}{\tau_{L,k}}}, \tag{46} \]

with

\[ A_{kj}^{(\sigma)} = \frac{G_{kj}}{g_{L,k}} \left( \frac{\sigma_B \sqrt{\tau_{L,k}}}{C_k} \right)^2, \qquad C_k^{(\sigma)} = \frac{1}{2} \left( \frac{\sigma_B \sqrt{\tau_{L,k}}}{C_k} \right)^2 + 2 \sigma_R^2, \]

and continuity follows. □

6.5 Remark

Note that the variation in all quantities considered here decays exponentially with $m$, with a time constant given by $\max(\tau_{kj}, \tau_{L,k})$. This is physically satisfactory: the loss of memory in the system is controlled by the leak time and the decay of the post-synaptic potential.

7 Statistics of raster plots

7.1 Conditional probability distribution of $V_k(t)$

Recall that $P$ is the joint distribution of the noise and $E[\,\cdot\,]$ the expectation under $P$. Under $P$, the membrane potential $V$ is a stochastic process whose evolution, below the threshold, is given by Equations (24), (25) and, above, by (4). It follows from the previous analysis that:

Proposition 7 Conditionally to $\omega_{-\infty}^{[t]}$, $V(t)$ is Gaussian with mean:

\[ E\left[ V_k(t) \mid \omega_{-\infty}^{[t]} \right] = V_k^{(\mathrm{det})}(\tau_k(t, \omega), t, \omega), \quad k = 1, \dots, N, \]

and covariance:

\[ \mathrm{Cov}\left[ V_k(t), V_l(t) \mid \omega_{-\infty}^{[t]} \right] = \sigma_k^2(\tau_k(t, \omega), t, \omega)\, \delta_{kl}, \quad k, l = 1, \dots, N, \]

where $\sigma_k^2(\tau_k(t, \omega), t, \omega)$ is given by (27).

Moreover, the $V_k(t)$’s, $k = 1, \dots, N$, are conditionally independent.

Proof Essentially, the proof is a direct consequence of Equations (24), (25) and the Gaussian nature of the noise $V_k^{(\mathrm{noise})}(\tau_k(t, \omega), t, \omega)$. The conditional independence results from the fact that:

\[ \mathrm{Cov}\left[ V_k(t), V_l(t) \mid \omega_{-\infty}^{[t]} \right] = \frac{\sigma_B^2}{C_k C_l} E\left[ \int_{\tau_k(t, \omega)}^{t} \Gamma_k(t_1, t, \omega)\, dB_k(t_1) \int_{\tau_l(t, \omega)}^{t} \Gamma_l(t_2, t, \omega)\, dB_l(t_2) \,\Big|\, \omega \right] + \mathrm{Cov}\left[ \Gamma_k(\tau_k(t, \omega), t, \omega)\, V_{\mathrm{reset}},\ \Gamma_l(\tau_l(t, \omega), t, \omega)\, V_{\mathrm{reset}} \right] \]
\[ = \frac{\sigma_B^2}{C_k C_l} \int_{\tau_k(t, \omega)}^{t} \int_{\tau_l(t, \omega)}^{t} \Gamma_k(t_1, t, \omega)\, \Gamma_l(t_2, t, \omega)\, E\left[ dB_k(t_1)\, dB_l(t_2) \right] + \sigma_R^2\, \Gamma_k^2(\tau_k(t, \omega), t, \omega)\, \delta_{kl} \]
\[ = \delta_{kl} \left[ \left(\frac{\sigma_B}{C_k}\right)^2 \int_{\tau_k(t, \omega)}^{t} \int_{\tau_k(t, \omega)}^{t} \Gamma_k(t_1, t, \omega)\, \Gamma_k(t_2, t, \omega)\, \delta(t_1 - t_2)\, dt_1\, dt_2 + \sigma_R^2\, \Gamma_k^2(\tau_k(t, \omega), t, \omega) \right] = \sigma_k^2(\tau_k(t, \omega), t, \omega)\, \delta_{kl}. \]

 □

7.2 The transition probability

We now compute the probability of a spiking pattern $\omega(n)$ at time $n = [t]$, given the past sequence $\omega_{-\infty}^{n-1}$.

Proposition 8 The probability of $\omega(n)$ conditionally to $\omega_{-\infty}^{n-1}$ is given by:

\[ P\left[ \omega(n) \mid \omega_{-\infty}^{n-1} \right] = \prod_{k=1}^{N} P\left[ \omega_k(n) \mid \omega_{-\infty}^{n-1} \right], \tag{47} \]

with

\[ P\left[ \omega_k(n) \mid \omega_{-\infty}^{n-1} \right] = \omega_k(n)\, \pi\left(X_k(n-1, \omega)\right) + \left(1 - \omega_k(n)\right)\left(1 - \pi\left(X_k(n-1, \omega)\right)\right), \tag{48} \]

where

\[ X_k(n-1, \omega) = \frac{\theta - V_k^{(\mathrm{det})}(\tau_k(n-1, \omega), n-1, \omega)}{\sigma_k(\tau_k(n-1, \omega), n-1, \omega)}, \tag{49} \]

and

\[ \pi(x) = \frac{1}{\sqrt{2\pi}} \int_x^{+\infty} e^{-\frac{u^2}{2}}\, du. \tag{50} \]

Proof We have, using the conditional independence of the $V_k(n-1)$’s:

\[ P\left[ \omega(n) \mid \omega_{-\infty}^{n-1} \right] = \prod_{k=1}^{N} \left( \omega_k(n)\, P\left[ V_k(n-1) \geq \theta \mid \omega_{-\infty}^{n-1} \right] + \left(1 - \omega_k(n)\right) P\left[ V_k(n-1) < \theta \mid \omega_{-\infty}^{n-1} \right] \right). \]

Since the $V_k(n-1)$’s are conditionally Gaussian, with mean $V_k^{(\mathrm{det})}(\tau_k(n-1, \omega), n-1, \omega)$ and variance $\sigma_k^2(\tau_k(n-1, \omega), n-1, \omega)$, we directly obtain (47), (48).

Note that since $\sigma_k(\tau_k(n-1, \omega), n-1, \omega)$ is bounded from below by a positive quantity (see (37)), the ratio (49) in (48) is defined for all $\omega \in X$. □
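In code, $\pi(x)$ in (50) is the standard Gaussian tail probability, available in closed form through the complementary error function. A sketch of the conditional firing probability (48), where the deterministic potential and the variance are assumed to be computed elsewhere:

import math

def pi_tail(x):
    # pi(x) = (1/sqrt(2*pi)) * integral_x^infinity exp(-u^2/2) du.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def p_spike(omega_k_n, v_det, sigma, theta=1.0):
    # Equation (48), with X_k = (theta - V^det) / sigma, cf. (49).
    x = (theta - v_det) / sigma
    p_fire = pi_tail(x)
    return p_fire if omega_k_n == 1 else 1.0 - p_fire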

7.3 Chains with complete connections

The transition probabilities (47) define a stochastic process on the set of raster plots, where the underlying membrane potential dynamics is summarized in the terms $V_k^{(\mathrm{det})}(\tau_k(n-1, \omega), n-1, \omega)$ and $\sigma_k(\tau_k(n-1, \omega), n-1, \omega)$. While the integral defining these terms extends from $\tau_k(n-1, \omega)$ to $n-1$, where $\tau_k(n-1, \omega)$ can go arbitrarily far in the past, the integrand involves the conductance $g_k(n-1, \omega)$, which summarizes a history dating back to $s \to -\infty$. As a consequence, the transition probabilities generate a stochastic process with unbounded memory, thus non-Markovian. One may argue that this property is a result of our procedure of taking the initial condition in an infinite past, $s \to -\infty$, to remove the unresolved dependency on $g_k(s)$ (Section 3.4). So the alternative is the following: either we keep $s$ finite, in order to have a Markovian process; then, we have to fix arbitrarily $g_k(s)$ and the probability distribution of $V_k(s)$. Or we take $s \to -\infty$, removing the initial condition, at the price of considering a non-Markovian process. Actually, such processes are widely studied in the literature under the name of “chains with complete connections” [43,46-49] and several important results can be used here. So we adopt the second branch of the alternative. As a by-product, the knowledge of the Gibbs measure provided by this analysis allows one, a posteriori, to fix the probability distribution of $V_k(s)$.

For the sake of completeness, we give here the definition of a chain with complete connections (see [43] for more details). For $n \in \mathbb{Z}$, we note $\mathcal{A}_{-\infty}^{n-1}$ the set of sequences $\omega_{-\infty}^{n-1}$ and $\mathcal{F}_{n-1}$ the related $\sigma$-algebra, while $\mathcal{F}$ is the $\sigma$-algebra related with $X = \mathcal{A}^{\mathbb{Z}}$. $\mathcal{P}(X, \mathcal{F})$ is the set of probability measures on $(X, \mathcal{F})$.

Definition 3 A system of transition probabilities is a family $\{P_n\}_{n \in \mathbb{Z}}$ of functions

\[ P_n[\,\cdot \mid \cdot\,] :\ \mathcal{A} \times \mathcal{A}_{-\infty}^{n-1} \to [0, 1], \]

such that the following conditions hold for every $n \in \mathbb{Z}$:

• For every $\omega(n) \in \mathcal{A}$, the function $P_n[\omega(n) \mid \cdot\,]$ is measurable with respect to $\mathcal{F}_{n-1}$.

• For every $\omega_{-\infty}^{n-1} \in \mathcal{A}_{-\infty}^{n-1}$,

\[ \sum_{\omega(n) \in \mathcal{A}} P_n\left[ \omega(n) \mid \omega_{-\infty}^{n-1} \right] = 1. \]

A probability measure $\mu$ in $\mathcal{P}(X, \mathcal{F})$ is consistent with a system of transition probabilities $\{P_n\}_{n \in \mathbb{Z}}$ if, for all $n \in \mathbb{Z}$ and all $\mathcal{F}_n$-measurable functions $f$:

\[ \int f(\omega_{-\infty}^{n})\, \mu(d\omega) = \int \sum_{\omega(n) \in \mathcal{A}} f(\omega_{-\infty}^{n-1}\, \omega(n))\, P_n\left[ \omega(n) \mid \omega_{-\infty}^{n-1} \right] \mu(d\omega). \]

Such a measure $\mu$ is called a “chain with complete connections consistent with the system of transition probabilities $\{P_n\}_{n \in \mathbb{Z}}$”.

The transition probabilities (47) constitute such a system of transition probabilities: the summation to 1 is obvious, while the measurability follows from the continuity of $P_n[\omega(n) \mid \omega_{-\infty}^{n-1}]$ proved below. To simplify notations, we write $p(n, \omega)$ instead of $P_n[\omega(n) \mid \omega_{-\infty}^{n-1}]$ whenever this causes no confusion.

7.4 Existence of a consistent probability measure $\mu$

In the definition above, the measure $\mu$ summarizes the statistics of spike trains from $-\infty$ to $+\infty$. Its marginals allow the characterization of finite spike blocks. So, $\mu$ provides the characterization of spike train statistics in gIF models. Its existence is established by a standard result in the frame of chains with complete connections, stating that a system of continuous transition probabilities on a compact space has at least one probability measure consistent with it [43]. Since $\pi$ is a continuous function, the continuity of $p(n, \omega)$ with respect to $\omega$ follows from the continuity of $V_k^{(\mathrm{det})}(\tau_k(n-1, \omega), n-1, \omega)$ and the continuity of $\sigma_k(\tau_k(n-1, \omega), n-1, \omega)$, proved in Section 6.

Therefore, there is at least one probability measure consistent with (47).

7.4.1 The Gibbs distribution

A system of transition probabilities is non-null if, for all $n \in \mathbb{Z}$ and all $\omega_{-\infty}^{n-1} \in \mathcal{A}_{-\infty}^{n-1}$, $P[\omega(n) \mid \omega_{-\infty}^{n-1}] > 0$. Following [50], a chain with complete connections $\mu$ is a Gibbs measure consistent with the system of transition probabilities $p(n, \cdot)$ if this system is continuous and non-null. Gibbs distributions play an important role in statistical physics, as well as in ergodic theory and stochastic processes. In statistical physics, they are usually derived from the maximal entropy principle [14]. Here, we use them in a more general context, affording the consideration of non-stationary processes. It turns out that the spike train statistics in the gIF model is given by such a Gibbs measure. In this section, we prove the main mathematical result of this paper (uniqueness of the Gibbs measure). The consequences for spike train characterization are discussed in the next section.

Theorem 1 For each choice of parameters $\gamma \in \mathcal{H}$, the gIF model (16) has a unique Gibbs distribution.

The proof of uniqueness is based on the following criterion due to Fernandez and Maillard [50].

Proposition 9 Let:²

\[ m(p) = \inf_{n \in \mathbb{Z}}\ \inf_{\omega \in \mathcal{A}_{-\infty}^{n}} p(n, \omega), \]

and

\[ v(p) = \sup_{m \in \mathbb{Z}} \sum_{n \geq m} \mathrm{var}_{n - m}[p(n, \cdot)]. \]

If $m(p) > 0$ and $v(p) < +\infty$, then there exists at most one Gibbs measure consistent with it.

So, to prove the uniqueness, we only have to establish that

\[ m(p) > 0, \tag{51} \]
\[ v(p) < +\infty. \tag{52} \]

Proof $m(p) > 0$.

Recall that:

\[ p(n, \omega) = \prod_{k=1}^{N} \left[ \omega_k(n)\, \pi\left(X_k(n-1, \omega)\right) + \left(1 - \omega_k(n)\right)\left(1 - \pi\left(X_k(n-1, \omega)\right)\right) \right]. \]

From (35), (37), we have:

\[ -\infty < \frac{\theta - V_k^+}{\sigma_k^+} < X_k(n-1, \omega) < \frac{\theta - V_k^-}{\sigma_k^-} < +\infty. \tag{53} \]

Since $\pi$, given by (50), is monotonically decreasing, we have:

\[ 0 < \pi_k^- \stackrel{\mathrm{def}}{=} \pi\!\left( \frac{\theta - V_k^-}{\sigma_k^-} \right) < \pi\left(X_k(n-1, \omega)\right) < \pi_k^+ \stackrel{\mathrm{def}}{=} \pi\!\left( \frac{\theta - V_k^+}{\sigma_k^+} \right) < 1, \]

so that:

\[ 0 < \Pi_k^- \stackrel{\mathrm{def}}{=} \min\left(\pi_k^-,\ 1 - \pi_k^+\right) < \omega_k(n)\, \pi_k^- + \left(1 - \omega_k(n)\right)\left(1 - \pi_k^+\right) < p_k(n, \omega) < \omega_k(n)\, \pi_k^+ + \left(1 - \omega_k(n)\right)\left(1 - \pi_k^-\right) < \Pi_k^+ \stackrel{\mathrm{def}}{=} \max\left(\pi_k^+,\ 1 - \pi_k^-\right) < 1. \tag{54} \]

Finally,

\[ m(p) > \prod_{k=1}^{N} \Pi_k^- > 0, \]

which proves (51). This also proves the non-nullness of the system of transition probabilities.

$v(p) < +\infty$.

The proof, which is rather long, is given in the appendix. □

8 Consequences

8.1 The probability that neuron $k$ does not fire in the time interval $[s, t]$

In Section 4.2, we argued that this probability vanishes exponentially fast with $t - s$. This probability is $\mu\left[ \bigcap_{n=[s]+1}^{[t]} \{\omega_k(n) = 0\} \right]$. We now prove this result.

Proposition 10 The probability that neuron $k$ does not fire within the time interval $[s, t]$, $t - s > 1$, has the following bounds:

\[ 0 < \Pi_-^{[t] - [s]} < \mu\left[ \bigcap_{n=[s]+1}^{[t]} \{\omega_k(n) = 0\} \right] < \Pi_+^{[t] - [s]} < 1, \]

for some constants $0 < \Pi_- < \Pi_+ < 1$ depending on the system parameters $\gamma \in \mathcal{H}$.

Proof We have:

\[ \mu\left[ \bigcap_{n=[s]+1}^{[t]} \{\omega_k(n) = 0\} \right] = \int_{\mathcal{A}_{-\infty}^{[s]}} \mu\left[ \bigcap_{n=[s]+1}^{[t]} \{\omega_k(n) = 0\} \,\Big|\, \omega_{-\infty}^{[s]} \right] d\mu(\omega) = \int_{\mathcal{A}_{-\infty}^{[s]}} \prod_{n=[s]+1}^{[t]} \mu\left[ \{\omega_k(n) = 0\} \mid \omega_{-\infty}^{n-1} \right] d\mu(\omega) = \int_{\mathcal{A}_{-\infty}^{[s]}} \prod_{n=[s]+1}^{[t]} P\left[ \omega_k(n) = 0 \mid \omega_{-\infty}^{n-1} \right] d\mu(\omega), \]

where $P[\omega_k(n) = 0 \mid \omega_{-\infty}^{n-1}]$ is given by (47) and obeys the bounds (54). Therefore, setting $\Pi_- = \prod_{k=1}^{N} \Pi_k^-$ and $\Pi_+ = \prod_{k=1}^{N} \Pi_k^+$, we have

\[ \Pi_-^{[t] - [s]} \int_{\mathcal{A}_{-\infty}^{[s]}} d\mu(\omega) = \Pi_-^{[t] - [s]} \leq \mu\left[ \tau_k(t, \omega) \leq s \right] \leq \Pi_+^{[t] - [s]} \int_{\mathcal{A}_{-\infty}^{[s]}} d\mu(\omega) = \Pi_+^{[t] - [s]}. \]

 □

8.2 Back to spike trains analysis with the maximal entropy principle

Here, we briefly develop the consequences of our results in relation to the statistical model estimation discussed in the introduction. A more detailed discussion will be published elsewhere (in preparation and [51]). Set:

\[ \phi(n, \omega) \stackrel{\mathrm{def}}{=} \log p(n, \omega) = \sum_{k=1}^{N} \phi_k(n, \omega), \tag{55} \]

with

\[ \phi_k(n, \omega) \stackrel{\mathrm{def}}{=} \omega_k(n) \log \pi\left(X_k(n-1, \omega)\right) + \left(1 - \omega_k(n)\right) \log\left(1 - \pi\left(X_k(n-1, \omega)\right)\right). \tag{56} \]

The function $\phi$ is a Gibbs potential [52]. Indeed, we have, for all $m < n$ and all $\omega_m^n$:

\[ \mu\left[ \omega_m^n \mid \omega_{-\infty}^{m-1} \right] = \exp \sum_{l=m}^{n} \phi(l, \omega). \]

This equation emphasizes the connection with Gibbs distributions in statistical physics, which considers probability distributions on multidimensional lattices with specified boundary conditions and their behavior under space translations [52]. The correspondence with our case is that “time” is represented by a mono-dimensional space and the “boundary conditions” are the past $\omega_{-\infty}^{m-1}$. Note that in our case, the partition function is equal to 1.

For simplicity, assume stationarity (this is equivalent to assuming a time-independent external current). In this case, it is sufficient to consider the potential at time $n = 0$.

Thanks to the bounds (53), one can make a series expansion of the functions $\log(\pi)$ and $\log(1 - \pi)$ and rewrite the potential in the form of the expansion:

\[ \phi(0, \omega) = \sum_{r \geq 1} \sum_{(k_1, n_1), \dots, (k_r, n_r) \in \mathcal{P}(N, R)} \lambda_{(k_1, n_1), \dots, (k_r, n_r)}\, \omega_{k_1}(n_1) \cdots \omega_{k_r}(n_r), \tag{57} \]

where $\mathcal{P}(N, R)$ is the set of non-repeated pairs of integers $(k, n)$ with $k \in \{1, \dots, N\}$ and $n \in \{-R, \dots, 0\}$. We call the product $\omega_{k_1}(n_1) \cdots \omega_{k_r}(n_r)$ a monomial. It is 1 if and only if neuron $k_1$ fires at time $n_1$, …, and $k_r$ fires at time $n_r$. The $\lambda_{(k_1, n_1), \dots, (k_r, n_r)}$’s are explicit functions of the parameters $\gamma$. Due to the causal form of the potential, where the time-0 spike, $\omega_k(0)$, is multiplied by a function of the past $\omega_{-\infty}^{-1}$, the polynomial expansion does not contain monomials of the form $\omega_{k_1}(0) \cdots \omega_{k_r}(0)$, $r > 1$ (the corresponding coefficient $\lambda$ vanishes).

Since the potential has infinite range, the expansion (57) contains infinitely many terms. One can nevertheless consider truncations to a range $R = D + 1$, corresponding to truncating the memory of the process to some memory depth $D$. Note that although truncations with a memory depth $D$ are approximations, the distance to the exact potential converges exponentially fast to 0 as $D \to +\infty$, thanks to the continuity of the potential, with a decay rate controlled by the synaptic responses and the leak rate.

The truncated Gibbs potential has the form:

$$\phi^{(D)}(\omega_{-D}^{0})=\sum_{l}\lambda_l\,\phi_l(\omega_{-D}^{0}),\tag{58}$$

where l stands for $(k_1,n_1),\dots,(k_r,n_r)$ and enumerates the elements of $\mathcal{P}(N,D+1)$, and where $\phi_l$ is the corresponding monomial. Due to the truncation, (58), contrary to (55), is not normalized. Its partition function (see footnote 3) is not equal to 1, and its computation rapidly becomes intractable as the number of neurons and the memory depth increase.
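To see this intractability concretely, here is a minimal brute-force Python sketch (all names, sizes, and coefficients are illustrative assumptions, not the model's actual values): the conditional partition function of footnote 3 is a sum over the $2^N$ spike patterns at time 0, and one such sum must be computed for each of the $2^{ND}$ possible pasts (boundary conditions).

import math
from itertools import product

N, D = 3, 2  # toy sizes; realistic N and D make this enumeration explode

# a few made-up monomials: each is a set of (neuron, time) events with time
# in {-D, ..., 0}, together with an arbitrary coefficient lambda_l
monomials = [({(0, 0)}, 0.5), ({(1, 0), (0, -1)}, -0.3), ({(2, 0), (2, -2)}, 0.2)]

def phi_D(block):
    # block[k][t + D] in {0, 1} encodes omega_k(t) for t in {-D, ..., 0}
    return sum(lam for events, lam in monomials
               if all(block[k][t + D] for (k, t) in events))

def Z(past):
    # conditional partition function: sum over the 2^N patterns at time 0,
    # the past (times -D, ..., -1) being held fixed as a boundary condition
    total = 0.0
    for w0 in product((0, 1), repeat=N):
        block = [past[k] + (w0[k],) for k in range(N)]
        total += math.exp(phi_D(block))
    return total

past = ((0, 1), (1, 0), (0, 0))  # one particular past block of depth D
print(Z(past))  # one such sum per past: 2^(N*D) boundary conditions in total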

Clearly, (58) is precisely the form of potential obtained under the maximal entropy principle, where the $\phi_l$'s are constraints of the type "neuron $k_1$ is firing at time $n_1$, neuron $k_2$ is firing at time $n_2$, ...", and the $\lambda_l$'s are the conjugate Lagrange multipliers. Thus, using the maximal entropy principle to characterize spike statistics in the gIF model by expressing constraints in terms of spike events (monomials), one can at best obtain an approximation, which can be rather bad, especially if those constraints focus on instantaneous spike patterns (D=0) or on patterns with short memory. Moreover, increasing the memory depth to better approach the exact statistics leads to an exponential increase in the number of monomials, which rapidly becomes intractable. Finally, the Lagrange multipliers $\lambda_l$ are rather difficult to interpret.

By contrast, the analytic form (55) depends only on a finite number of parameters (γ) constraining the neural network dynamics, which have straightforward interpretations as physical quantities. This shows that, at least in the gIF model, the linear Gibbs potential (58) obtained from the maximal entropy principle is not really appropriate, even for empirical/numerical purposes, and that a form (55), in which the infinite memory $\omega_{-\infty}^{-1}$ is replaced by $\omega_{-D}^{-1}$, could be more efficient, although non-linear.

To finish this section, let us discuss the link with the Ising model in light of the present work. The Ising model corresponds to a memoryless case, hence to D=0. Since the causal structure of the Gibbs potential forbids monomials of the form $\omega_{k_1}(0)\cdots\omega_{k_r}(0)$, the D=0 expansion of the gIF Gibbs potential corresponds to a Bernoulli distribution where neurons are independent: $\phi^{(0)}(0,\omega)=\sum_{k=1}^{N}\lambda_k\omega_k(0)$. The Ising model is therefore irrelevant for approximating the exact potential of the gIF model if one wants to reproduce spike statistics at the minimal discretization time scale δ without considering memory effects.

However, in real data analysis, one usually bins the data, with a time window of width w ≈ 10–20 ms. Binning consists of recoding the raster plot by amalgamating spikes. The binned raster b consists of "spikes" $b_k(n)\in\{0,1\}$, where $b_k(n)=1$ if neuron k fired at least once in the time window $[nw,(n+1)w[$. In the expansion (57), this amounts to collecting all monomials corresponding to $b_k(n)=1$ into a unique monomial. In this way, the binned potential does indeed contain an Ising term, but one that mixes all spike events occurring within the time window w. These events appear simultaneous because of binning, leading to the Ising pairwise term $b_{k_1}(0)b_{k_2}(0)$, while events occurring on smaller time scales are scrambled by this procedure.
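A minimal Python sketch of this recoding (bin_raster is a hypothetical helper and the toy raster is made up; w is expressed in time steps of size δ):

import numpy as np

def bin_raster(raster, w):
    # raster: array of shape (N, T) with entries in {0, 1}, time step delta;
    # returns b of shape (N, T // w) with b[k, n] = 1 iff neuron k fired at
    # least once in the window [n*w, (n+1)*w)
    N, T = raster.shape
    M = T // w
    return raster[:, :M * w].reshape(N, M, w).max(axis=2)

raster = np.array([[0, 1, 0, 0,  0, 0, 0, 0,  1, 0, 1, 0],
                   [0, 0, 0, 0,  1, 0, 0, 1,  0, 0, 0, 0]])
print(bin_raster(raster, 4))  # -> [[1 0 1], [0 1 0]]

Any monomial of (57) involving at least one spike of neuron k inside a given window is thus amalgamated into the single binned event $b_k(n)=1$.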

The effect of binning on the Gibbs potential requires, however, a more detailed description. This will be discussed elsewhere.

9 Discussion

To conclude this paper, we would like to discuss several consequences and possible extensions of this work.

9.1 The spike time discretization

In the gIF model, the membrane potential evolves continuously, while conductances are updated upon spike occurrences, considered as discrete events. Here, we discuss this time discretization. There are actually two distinct questions.

9.1.1 The limit of time-bin tending to 0

This limit would correspond to a case where the spike is instantaneous and modeled by a Dirac distribution. As discussed in [32], this limit raises serious difficulties. To summarize, in real neurons, firing occurs within a finite time δ corresponding to the rise and fall of the membrane potential. This involves physicochemical processes that cannot be instantaneous. The time course of the membrane potential during the spike is described by differential equations, like Hodgkin-Huxley's [35]. Although the time scale dt appearing in these differential equations has the mathematical meaning of being arbitrarily small, on biophysical grounds this time scale cannot be arbitrarily small; otherwise, the Hodgkin-Huxley equations lose their meaning. Indeed, they correspond to an average over microscopic phenomena such as ionic channel dynamics. In particular, their time scale must be sufficiently large to ensure that the description of ionic channel dynamics (opening and closing) in terms of probabilities is valid, so dt must be larger than the characteristic time of opening-closing of ionic channels, τ_P. Additionally, Hodgkin-Huxley's equations use a Markovian approach (master equation) for the dynamics of the h, m, n gates. This requires that dt be quite a bit larger than the characteristic decay time of the correlations of gate activity, τ_C. Summarizing, we must have 0 < τ_C, τ_P < dt < δ. Thus, on biophysical grounds, δ cannot be arbitrarily small.

In our case, however, the δ→0 limit is harmless, provided we keep a non-zero refractory period, ensuring that only finitely many spikes occur in a finite time interval. Taking the limit δ→0 without a refractory period raises mathematical problems: one can in principle have uncountably many spikes in a finite time interval, leading to the divergence of physical quantities like energy. One can also generate nice causal paradoxes [37]. Take a loop with two neurons, one excitatory and one inhibitory, and assume instantaneous propagation (the α profile is then represented by a Dirac distribution). Then, depending on the synaptic weight values, one can have a situation where neuron 1 fires instantaneously, instantaneously making neuron 2 fire, which instantaneously prevents neuron 1 from firing, and so on. So taking the limit δ→0 together with τ_ref→0 induces pathologies that are not inherent to our approach but to IF models.

9.1.2 Synchronization for distinct neurons

There is a more subtle issue, pointed out in [53]. We do not only discretize time for each neuron's spikes; we also align the spikes emitted by distinct neurons on a discrete-time grid, as an experimental raster does. As shown in [32], this induces, in gIF models with a purely deterministic dynamics (no noise and reset to a constant value), an artificial synchronization. As a consequence, the deterministic dynamics of gIF models generically has only stable periodic orbits, although the periods can be larger than any accessible computational time in specific regions of the parameter space. Additionally, these periods increase as δ decreases. The addition of noise to the dynamics and to the reset value, as we propose in this paper, removes this synchronization effect.

9.2 Refractory period

In the definition of the model, we have assumed that the refractory period τ_ref is smaller than 1. The consequence for raster plots is that one can have two consecutive 1's in the spike sequence of a neuron. The extension to the case where τ_ref>1 is straightforward as far as spike statistics are concerned. Such a refractory period forbids certain sequences. For example, if 1<τ_ref≤2, then all sequences containing two consecutive 1's for one neuron (1,1) are forbidden. If 2<τ_ref≤3, sequences containing 1,ℓ,1 for a given neuron, where ℓ=0,1, are forbidden, and so on. More generally, the procedure consisting of forbidding specific (finite) spike blocks is equivalent to introducing a grammar in the spike generation. This grammar can be implemented in the Gibbs potential: forbidden sequences have a potential equal to −∞ (equivalently, a zero probability). In this case, the set of all possible rasters, X = A^ℤ, is replaced by the subset where forbidden sequences have been removed.
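As an illustration, here is a minimal single-neuron grammar check in Python (admissible is a hypothetical helper; the convention that a spike at time n forbids spikes at times n′ with n′−n < τ_ref is our reading of the examples above):

def admissible(spike_train, tau_ref):
    # spike_train: sequence in {0, 1} for one neuron; tau_ref: integer
    # refractory period in time steps. A spike at time n forbids spikes at
    # times n' with n' - n < tau_ref, i.e. the finite blocks "11" for
    # 1 < tau_ref <= 2, then "11" and "101" for 2 < tau_ref <= 3, etc.
    last = None
    for n, s in enumerate(spike_train):
        if s == 1:
            if last is not None and n - last < tau_ref:
                return False  # contains a forbidden block
            last = n
    return True

print(admissible([1, 0, 0, 1, 0], tau_ref=3))  # True: inter-spike distance 3
print(admissible([1, 0, 1, 0, 0], tau_ref=3))  # False: contains the block 101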

9.3 Beyond IF models

Let us now discuss the extension of the present work to more general neuron models. First, one characteristic feature of integrate-and-fire models is the reset, which has the consequence that the memory of the activity preceding the spike is lost after the reset. Although in the deterministic (noiseless) case this is a simplifying feature, allowing for example to fully characterize the asymptotic dynamics of (discrete-time) IF models [32,54], here it somewhat complicates the analysis. Indeed, it led us to introduce the notion of "last reset time" and, at some points in the proofs (see, e.g., Equation (39)), obliged us to consider several situations (e.g., $\tau_k(t,\omega)\ge t-m$ versus $\tau_k(t,\omega)<t-m$, for each of the two rasters compared in the proof of continuity, Section 6). By contrast, considering a model where no such reset occurs would simply lead us to a model where $\tau_k(t,\omega)=-\infty$, ∀ω, ∀k. This case is already covered by our formalism, and actually taking $\tau_k(t,\omega)=-\infty$, ∀ω, ∀k, simplifies the proofs (for example, it eliminates the second term in Equation (39)).

As a matter of fact, the theorems established in this paper should therefore also hold without reset. But this requires replacing the firing condition (4) by another condition stating what the membrane potential does during the spike. Although it would be possible to propose an ad hoc form for the spike, it would certainly be more interesting to extend the present results to models where neural activity depends on additional variables, such as adaptation currents, as in the FitzHugh-Nagumo model [39-42], or activation-inactivation variables, as in the Hodgkin-Huxley model [35].

The present formalism affords an extension toward such models, where a neuron fires whenever its state belongs to a region of the phase space, delimited by the membrane potential plus additional variables such as adaptation currents or activation-inactivation variables, and where the spike is controlled by the global dynamics of all these variables. But while here the firing of a neuron is described by the crossing of a fixed threshold, in the FitzHugh-Nagumo model it is given by the crossing of a separatrix in the (voltage, adaptation current) plane, and by a more complex "frontier" in the Hodgkin-Huxley model [2,55]. One difficulty is to define this region precisely. To our knowledge, there is no clear agreement for the Hodgkin-Huxley model (some authors [2] even suggest that the "spike region" could have a fractal frontier). The extension toward FitzHugh-Nagumo seems more manageable.

Finally, the most important difficulty in extending the present results to more realistic neural networks is the definition of the synaptic spike response. In IF models, the spike is thought of as a punctual "event" (typically, an "instantaneous" pulse), while the synaptic response is described by a convolution kernel (the α-profile). This leads one to consider a somewhat artificial mixed dynamics where the membrane potential evolves continuously while spikes are discrete events. In more realistic models, one would have to consider kinetic equations for neurotransmitter release, receptor binding, and the opening of post-synaptic ionic channels [5,44]. Additionally, the consideration of these mechanisms calls for a spatially extended model of the neuron, with time delays. In this case, all variables evolve continuously, and the statistics of spike trains would be characterized by the statistics of return times in the "spike region". These statistics are induced by some probability measure on the phase space; a natural candidate would be the Sinai-Ruelle-Bowen measure [56-58] for stationary dynamics, or the time-dependent SRB measure for non-stationary cases, as defined, e.g., in [59]. These measures are Gibbs measures as well [60]. Here, the main mathematical property ensuring the existence and uniqueness of such a measure would be uniform hyperbolicity. To our knowledge, conditions ensuring such a property in networks have not yet been established, neither for Hodgkin-Huxley's nor for FitzHugh-Nagumo's models.

9.4 Synaptic plasticity

As the results established in this paper hold for any value of the synaptic weights, they hold as well for networks undergoing synaptic plasticity mechanisms. The effects of a joint evolution of spike dynamics, depending on the synaptic weight distribution, and of synaptic weights, evolving according to the spike dynamics, have been studied in [61]. In particular, it has been shown that mechanisms such as Spike-Time-Dependent Plasticity are related to a variational principle for a quantity, the topological pressure, derived from the thermodynamic formalism of Gibbs distributions. In [61], the fact that spike train statistics are given by a Gibbs distribution was a working assumption. The present work therefore establishes a firm ground for [61].

Appendix

Here, we establish (52). We use the following lemma.

Lemma 1 For a collection $0\le a_k,a'_k\le 1$, $k=1,\dots,N$, we have

$$\left|\prod_{k=1}^{N}a_k-\prod_{k=1}^{N}a'_k\right|\le\sum_{k=1}^{N}|a_k-a'_k|.\tag{59}$$

This lemma is easily proved by induction.
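For completeness, the induction step reads (using $0\le a_k,a'_k\le 1$ and the triangle inequality):

$$\left|\prod_{k=1}^{n+1}a_k-\prod_{k=1}^{n+1}a'_k\right|\le a_{n+1}\left|\prod_{k=1}^{n}a_k-\prod_{k=1}^{n}a'_k\right|+|a_{n+1}-a'_{n+1}|\prod_{k=1}^{n}a'_k\le\sum_{k=1}^{n+1}|a_k-a'_k|.$$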

We have, for $n\in\mathbb{Z}$, $m\ge 0$:

$$\mathrm{var}_m[p(n,\cdot)]=\sup\left\{\left|\prod_{k=1}^{N}a_k-\prod_{k=1}^{N}a'_k\right|;\ \omega\overset{m,n}{=}\omega'\right\},$$

where:

$$a_k=\omega_k(n)\pi(X_k(n-1,\omega))+(1-\omega_k(n))\left(1-\pi(X_k(n-1,\omega))\right),\qquad a'_k=\omega'_k(n)\pi(X_k(n-1,\omega'))+(1-\omega'_k(n))\left(1-\pi(X_k(n-1,\omega'))\right).$$

Therefore, using inequality (59),

$$\mathrm{var}_m[p(n,\cdot)]\le\sum_{k=1}^{N}\sup\left\{|a_k-a'_k|;\ \omega\overset{m,n}{=}\omega'\right\}.$$

The condition $\omega\overset{m,n}{=}\omega'$ implies $\omega_k(n)=\omega'_k(n)$, so that:

$$|a_k-a'_k|=\left|\pi(X_k(n-1,\omega))-\pi(X_k(n-1,\omega'))\right|.$$

We have

$$\left|\pi(X_k(n-1,\omega))-\pi(X_k(n-1,\omega'))\right|\le\left|X_k(n-1,\omega)-X_k(n-1,\omega')\right|\,\|\pi'\|_\infty,$$

with $\|\pi'\|_\infty=\sup_{x\in\mathbb{R}}|\pi'(x)|=\frac{1}{\sqrt{2\pi}}$, so that

$$\mathrm{var}_m[p(n,\cdot)]\le\frac{1}{\sqrt{2\pi}}\sum_{k=1}^{N}\mathrm{var}_m[X_k(n-1,\cdot)].$$

We now have to bound from above $\mathrm{var}_m[X_k(n-1,\cdot)]=\sup\{|X_k(n-1,\omega)-X_k(n-1,\omega')|;\ \omega\overset{m,n-1}{=}\omega'\}$. We have

$$\mathrm{var}_m[X_k(n-1,\cdot)]\le\mathrm{var}_m\!\left[\theta-V_k^{(det)}(\tau_k(n-1,\cdot),n-1,\cdot)\right]\,\sup_{\omega\in X}\frac{1}{\sigma_k(\tau_k(n-1,\omega),n-1,\omega)}+\sup_{\omega\in X}\left|\theta-V_k^{(det)}(\tau_k(n-1,\omega),n-1,\omega)\right|\,\mathrm{var}_m\!\left[\frac{1}{\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)}\right],$$

with,

$$\mathrm{var}_m\!\left[\theta-V_k^{(det)}(\tau_k(n-1,\cdot),n-1,\cdot)\right]=\mathrm{var}_m\!\left[V_k^{(det)}(\tau_k(n-1,\cdot),n-1,\cdot)\right]\le\mathrm{var}_m\!\left[V_k^{(syn)}(\tau_k(n-1,\cdot),n-1,\cdot)\right]+\mathrm{var}_m\!\left[V_k^{(ext)}(\tau_k(n-1,\cdot),n-1,\cdot)\right],$$

so that, from (40), (43):

$$\mathrm{var}_m\!\left[\theta-V_k^{(det)}(\tau_k(n-1,\cdot),n-1,\cdot)\right]\le\sum_{j=1}^{N}A_{kj}^{(det)}\,P_d\!\left(\frac{m}{\tau_{kj}}\right)e^{-\frac{m}{\tau_{kj}}}+B_k^{(det)}e^{-\frac{m}{\tau_{L,k}}},\tag{60}$$

where:

$$A_{kj}^{(det)}=A_{kj}^{(syn)}+A_{kj}^{(ext)},$$

(see Equations (41), (44)) and:

$$B_k^{(det)}=B_k^{(syn)}+B_k^{(ext)},$$

(see Equations (42), (45)).

Moreover, from (37),

$$\sup_{\omega\in X}\frac{1}{\sigma_k(\tau_k(n-1,\omega),n-1,\omega)}\le\frac{1}{\sigma_k^-}.\tag{61}$$

From (35),

$$\sup_{\omega\in X}\left|\theta-V_k^{(det)}(\tau_k(n-1,\omega),n-1,\omega)\right|\le\max\left(|\theta-V_k^-|,|\theta-V_k^+|\right).\tag{62}$$

Finally,

$$\mathrm{var}_m\!\left[\frac{1}{\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)}\right]\le\mathrm{var}_m\!\left[\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)\right]\,\sup_{\omega\in X}\frac{1}{\sigma_k^2(\tau_k(n-1,\omega),n-1,\omega)}\le\frac{1}{(\sigma_k^-)^2}\,\mathrm{var}_m\!\left[\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)\right],$$

while

$$\mathrm{var}_m\!\left[\sigma_k^2(\tau_k(n-1,\cdot),n-1,\cdot)\right]\ge 2\,\sigma_k^-\,\mathrm{var}_m\!\left[\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)\right],$$

from (37), so that:

$$\mathrm{var}_m\!\left[\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)\right]\le\frac{1}{2\sigma_k^-}\,\mathrm{var}_m\!\left[\sigma_k^2(\tau_k(n-1,\cdot),n-1,\cdot)\right],$$

and, from (46),

$$\mathrm{var}_m\!\left[\frac{1}{\sigma_k(\tau_k(n-1,\cdot),n-1,\cdot)}\right]\le\frac{1}{2(\sigma_k^-)^3}\left[\sum_{j=1}^{N}A_{kj}^{(\sigma)}\,P_d\!\left(\frac{m}{\tau_{kj}}\right)e^{-\frac{m}{\tau_{kj}}}+C_k^{(\sigma)}e^{-2\frac{m}{\tau_{L,k}}}\right].\tag{63}$$

Summarizing (60), (61), (62), (63):

$$\mathrm{var}_m[X_k(n-1,\cdot)]\le\frac{1}{\sigma_k^-}\left(\sum_{j=1}^{N}A_{kj}^{(det)}\,P_d\!\left(\frac{m}{\tau_{kj}}\right)e^{-\frac{m}{\tau_{kj}}}+B_k^{(det)}e^{-\frac{m}{\tau_{L,k}}}\right)+\max\left(|\theta-V_k^-|,|\theta-V_k^+|\right)\frac{1}{2(\sigma_k^-)^3}\left[\sum_{j=1}^{N}A_{kj}^{(\sigma)}\,P_d\!\left(\frac{m}{\tau_{kj}}\right)e^{-\frac{m}{\tau_{kj}}}+C_k^{(\sigma)}e^{-2\frac{m}{\tau_{L,k}}}\right].$$

Therefore, we have:

$$\mathrm{var}_m[X_k(n-1,\cdot)]\le\sum_{j=1}^{N}A_{kj}^{(X)}\,P_d\!\left(\frac{m}{\tau_{kj}}\right)e^{-\frac{m}{\tau_{kj}}}+B_k^{(X)}e^{-\frac{m}{\tau_{L,k}}}+C_k^{(X)}e^{-2\frac{m}{\tau_{L,k}}},$$

for constants $A_{kj}^{(X)}$, $B_k^{(X)}$, $C_k^{(X)}$.

As a consequence,

$$\mathrm{var}_m[p(n,\cdot)]\le\frac{1}{\sqrt{2\pi}}\sum_{k=1}^{N}\left[\sum_{j=1}^{N}A_{kj}^{(X)}\,P_d\!\left(\frac{m}{\tau_{kj}}\right)e^{-\frac{m}{\tau_{kj}}}+B_k^{(X)}e^{-\frac{m}{\tau_{L,k}}}+C_k^{(X)}e^{-2\frac{m}{\tau_{L,k}}}\right].$$

Therefore, $\sum_{n\ge m}\mathrm{var}_{n-m}[p(n,\cdot)]$ is bounded from above by the series

$$\frac{1}{\sqrt{2\pi}}\sum_{k=1}^{N}\left[\sum_{j=1}^{N}A_{kj}^{(X)}\sum_{n\ge m}P_d\!\left(\frac{n-m}{\tau_{kj}}\right)e^{-\frac{n-m}{\tau_{kj}}}+B_k^{(X)}\sum_{n\ge m}e^{-\frac{n-m}{\tau_{L,k}}}+C_k^{(X)}\sum_{n\ge m}e^{-2\frac{n-m}{\tau_{L,k}}}\right]=\frac{1}{\sqrt{2\pi}}\sum_{k=1}^{N}\left[\sum_{j=1}^{N}A_{kj}^{(X)}\sum_{l\ge 0}P_d\!\left(\frac{l}{\tau_{kj}}\right)e^{-\frac{l}{\tau_{kj}}}+B_k^{(X)}\sum_{l\ge 0}e^{-\frac{l}{\tau_{L,k}}}+C_k^{(X)}\sum_{l\ge 0}e^{-2\frac{l}{\tau_{L,k}}}\right],$$

which converges, uniformly in m. As a consequence, in (52), $v(p)<+\infty$, and we are done.

Competing interests

The author declares that he has no competing interests.

Footnotes

1. We assume that all neurons have the same firing threshold. The notion of threshold is already an approximation, which is not sharply defined in the Hodgkin-Huxley [35] or FitzHugh-Nagumo [40,41] models (more precisely, it is not a constant but depends on the dynamical variables [55]). Recent experiments [62-64] even suggest that there may be no real potential threshold.

2. In [50], the authors use the following definition for the variation, which reads in our notation:

$$\mathrm{var}_m[p(n,\cdot)]=\sup\left\{|p(n,\omega)-p(n,\omega')|;\ \omega,\omega'\in A^{n},\ \omega_m^n={\omega'}_m^n\right\},\quad m\le n.$$

It therefore differs slightly from our definition (5), (38). The correspondence is $\mathrm{var}_m[p(n,\cdot)]=\mathrm{var}_{n-m}[p(n,\cdot)]$. The definition of $v(p)$ takes this correspondence into account.

3. For D>0, this is $Z(\omega_{-D}^{-1})=\sum_{\omega(0)}e^{\phi^{(D)}(\omega_{-D}^{0})}$, ensuring that $e^{\phi^{(D)}(\omega_{-D}^{0})}/Z(\omega_{-D}^{-1})$ is a conditional probability $P[\omega(0)\,|\,\omega_{-D}^{-1}]$. Hence, it is not a constant but a function of the past $\omega_{-D}^{-1}$, in a similar way to statistical physics on lattices, where the partition function depends on the boundary conditions. Only in the memoryless case D=0 is it a constant.

Acknowledgements

I would like to thank both reviewers for a careful reading of the manuscript, constructive criticism, and helpful comments. I also acknowledge G. Maillard and J. R. Chazottes for invaluable advice and references. I am also grateful to O. Faugeras and T. Viéville for useful comments. This work has been supported by the ERC grant Nervi, the ANR grant KEOPS, and the European grant BrainScales.

References

1. Izhikevich E. Which model to use for cortical spiking neurons? IEEE Trans. Neural Netw. 2004;15(5):1063–1070. doi:10.1109/TNN.2004.832719.
2. Guckenheimer J, Oliva R. Chaos in the Hodgkin-Huxley model. SIAM J. Appl. Dyn. Syst. 2002;1:105.
3. Faure P, Korn H. A nonrandom dynamic component in the synaptic noise of a central neuron. Proc. Natl. Acad. Sci. USA. 1997;94:6506–6511. doi:10.1073/pnas.94.12.6506.
4. Faure P, Korn H. Is there chaos in the brain? I. Concept of nonlinear dynamics and methods of investigation. C. R. Acad. Sci., Sér. 3 Sci. Vie. 2001;324:773–793. doi:10.1016/s0764-4469(01)01377-4.
5. Ermentrout GB, Terman D. Foundations of Mathematical Neuroscience. Springer, Berlin; 2010.
6. Riehle A, Grün S, Diesmann M, Aertsen A. Spike synchronization and rate modulation differentially involved in motor cortical function. Science. 1997;278:1950–1953. doi:10.1126/science.278.5345.1950.
7. Grammont F, Riehle A. Precise spike synchronization in monkey motor cortex involved in preparation for movement. Exp. Brain Res. 1999;128:118–122. doi:10.1007/s002210050826.
8. Riehle A, Grammont F, Diesmann M, Grün S. Dynamical changes and temporal precision of synchronized spiking activity in monkey motor cortex during movement preparation. J. Physiol. (Paris) 2000;94:569–582. doi:10.1016/s0928-4257(00)01100-1.
9. Grammont F, Riehle A. Spike synchronization and firing rate in a population of motor cortical neurons in relation to movement direction and reaction time. Biol. Cybern. 2003;88(5):360–373. doi:10.1007/s00422-002-0385-3.
10. Rieke F, Warland D, de Ruyter van Steveninck R, Bialek W. Spikes: Exploring the Neural Code. Bradford Books, Denver, CO; 1997.
11. Schneidman E, Berry M, Segev R, Bialek W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature. 2006;440(7087):1007–1012. doi:10.1038/nature04701.
12. Tkačik G, Schneidman E, Berry MJ II, Bialek W. Spin glass models for a network of real neurons. ArXiv:0912.5409v1 (2009).
13. Marre O, Boustani SE, Frégnac Y, Destexhe A. Prediction of spatiotemporal patterns of neural activity from pairwise correlations. Phys. Rev. Lett. 2009;102:138101. doi:10.1103/PhysRevLett.102.138101.
14. Jaynes E. Information theory and statistical mechanics. Phys. Rev. 1957;106:620.
15. Shlens J, Field G, Gauthier J, Grivich M, Petrusca D, Sher A, Litke A, Chichilnisky E. The structure of multi-neuron firing patterns in primate retina. J. Neurosci. 2006;26(32). doi:10.1523/JNEUROSCI.1282-06.2006.
16. Cocco S, Leibler S, Monasson R. Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. Proc. Natl. Acad. Sci. USA. 2009;106(33):14058–14062. doi:10.1073/pnas.0906705106.
17. Tang A, Jackson D, Hobbs J, Chen W, Smith JL, Patel H, Prieto A, Petrusca D, Grivich MI, Sher A, Hottowy P, Dabrowski W, Litke AM, Beggs JM. A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. J. Neurosci. 2008;28(2):505–518. doi:10.1523/JNEUROSCI.3359-07.2008.
18. Roudi Y, Nirenberg S, Latham P. Pairwise maximum entropy models for studying large biological systems: when they can work and when they can't. PLoS Comput. Biol. 2009;5(5):1–18. doi:10.1371/journal.pcbi.1000380.
19. Shlens J, Field GD, Gauthier JL, Greschner M, Sher A, Litke AM, Chichilnisky EJ. The structure of large-scale synchronized firing in primate retina. J. Neurosci. 2009;29(15):5022–5031. doi:10.1523/JNEUROSCI.5187-08.2009.
20. Ohiorhenuan IE, Mechler F, Purpura KP, Schmid AM, Hu Q, Victor JD. Sparse coding and high-order correlations in fine-scale cortical networks. Nature. 2010;466(7):617–621. doi:10.1038/nature09178.
21. Lindsey B, Morris K, Shannon R, Gerstein G. Repeated patterns of distributed synchrony in neuronal assemblies. J. Neurophysiol. 1997;78:1714–1719. doi:10.1152/jn.1997.78.3.1714.
22. Villa AEP, Tetko IV, Hyland B, Najem A. Spatiotemporal activity patterns of rat cortical neurons predict responses in a conditioned task. Proc. Natl. Acad. Sci. USA. 1999;96(3):1106–1111. doi:10.1073/pnas.96.3.1106.
23. Segev R, Baruchi I, Hulata E, Ben-Jacob E. Hidden neuronal correlations in cultured networks. Phys. Rev. Lett. 2004;92:118102. doi:10.1103/PhysRevLett.92.118102.
24. Amari SI. Information geometry of multiple spike trains. In: Grün S, Rotter S, editors. Analysis of Parallel Spike Trains. Springer, Berlin; 2010. pp. 221–253.
25. Roudi Y, Hertz J. Mean field theory for non-equilibrium network reconstruction. ArXiv:1009.5946v1 (2010).
26. Vasquez JC, Viéville T, Cessac B. Parametric estimation of Gibbs distributions as generalized maximum-entropy models for the analysis of spike train statistics. Research Report RR-7561, INRIA (2011).
27. Vasquez JC, Palacios AG, Marre O, Berry MJ, Cessac B. Gibbs distribution analysis of temporal correlation structure on multicell spike trains from retina ganglion cells. J. Physiol. (Paris) (in press) (2011).
28. Cessac B. A discrete time neural network model with spiking neurons II. Dynamics with noise. J. Math. Biol. (in press) (2010). http://www.springerlink.com/content/j602452n1224x602/
29. Kravchuk K, Vidybida A. Delayed feedback causes non-Markovian behavior of neuronal firing statistics. J. Phys. A (2010). ArXiv:1012.6019.
30. Vasquez JC, Viéville T, Cessac B. Entropy-based parametric estimation of spike train statistics. Research Report, INRIA (2010). http://arxiv.org/abs/1003.3157
31. Rudolph M, Destexhe A. Analytical Integrate and Fire Neuron models with conductance-based dynamics for event driven simulation strategies. Neural Comput. 2006;18:2146–2210. doi:10.1162/neco.2006.18.9.2146.
32. Cessac B, Viéville T. On dynamics of integrate-and-fire neural networks with adaptive conductances. Front. Comput. Neurosci. 2008;2(2).
33. Jolivet R, Lewis T, Gerstner W. Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J. Neurophysiol. 2004;92:959–976. doi:10.1152/jn.00190.2004.
34. Jolivet R, Rauch A, Lescher HR, Gerstner W. Integrate-and-Fire Models with Adaptation Are Good Enough. MIT Press, Cambridge; 2006.
35. Hodgkin A, Huxley A. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952;117:500–544. doi:10.1113/jphysiol.1952.sp004764.
36. Lapicque L. Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. J. Physiol. (Paris) 1907;9:620–635.
37. Cessac B. A view of neural networks as dynamical systems. Int. J. Bifurc. Chaos. 2010;20(6):1585–1629. http://lanl.arxiv.org/abs/0901.2203
38. Kirst C, Geisel T, Timme M. Sequential desynchronization in networks of spiking neurons with partial reset. Phys. Rev. Lett. 2009;102:068101. doi:10.1103/PhysRevLett.102.068101.
39. FitzHugh R. Mathematical models of threshold phenomena in the nerve membrane. Bull. Math. Biophys. 1955;17:257–278.
40. FitzHugh R. Impulses and physiological states in models of nerve membrane. Biophys. J. 1961;1:445–466. doi:10.1016/s0006-3495(61)86902-6.
41. Nagumo J, Arimoto S, Yoshizawa S. An active pulse transmission line simulating nerve axon. Proc. IRE. 1962;50:2061–2070.
42. FitzHugh R. Mathematical Models of Excitation and Propagation in Nerve. McGraw-Hill, New York; 1969. chap. 1.
43. Maillard G. Introduction to Chains with Complete Connections. École Polytechnique Fédérale de Lausanne (2007).
44. Destexhe A, Mainen ZF, Sejnowski TJ. Kinetic models of synaptic transmission. In: Methods in Neuronal Modeling. MIT Press, Cambridge; 1998. pp. 1–25.
45. Ermentrout B. Neural networks as spatio-temporal pattern-forming systems. Rep. Prog. Phys. 1998;61:353–430.
46. Chazottes J. Entropie relative, dynamique symbolique et turbulence. Ph.D. thesis, Université de Provence-Aix Marseille I (1999).
47. Ledrappier F. Principe variationnel et systèmes dynamiques symboliques. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1974;30:185–202.
48. Coelho Z, Quas A. Criteria for d̄-continuity. Trans. Am. Math. Soc. 1998;350(8):3257–3268.
49. Bressaud X, Fernandez R, Galves A. Decay of correlations for non-Hölderian dynamics. A coupling approach. Electron. J. Probab. 1999;4(3):1–19.
50. Fernández R, Maillard G. Chains with complete connections: general theory, uniqueness, loss of memory and mixing properties. J. Stat. Phys. 2005;118:555–588.
51. Cessac B, Palacios A. Spike Train Statistics from Empirical Facts to Theory: The Case of the Retina. Springer, Berlin; 2011.
52. Georgii HO. Gibbs Measures and Phase Transitions. de Gruyter, Berlin; 1988.
53. Kirst C, Timme M. How precise is the timing of action potentials? Front. Neurosci. 2009;3:2–3. doi:10.3389/neuro.01.009.2009.
54. Cessac B. A discrete time neural network model with spiking neurons. Rigorous results on the spontaneous dynamics. J. Math. Biol. 2008;56(3):311–345. doi:10.1007/s00285-007-0117-3.
55. Cronin J. Mathematical Aspects of Hodgkin-Huxley Theory. Cambridge University Press, Cambridge; 1987.
56. Sinai Y. Gibbs measures in ergodic theory. Russ. Math. Surv. 1972;27(4):21–69.
57. Bowen R. Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms. Springer-Verlag, New York; 1975.
58. Ruelle D. Thermodynamic Formalism. Addison-Wesley, Reading, Massachusetts; 1978.
59. Ruelle D. Smooth dynamics and new theoretical ideas in nonequilibrium statistical mechanics. J. Stat. Phys. 1999;95:393–468.
60. Keller G. Equilibrium States in Ergodic Theory. Cambridge University Press, Cambridge; 1998.
61. Cessac B, Rostro-Gonzalez H, Vasquez JC, Viéville T. How Gibbs distribution may naturally arise from synaptic adaptation mechanisms: a model based argumentation. J. Stat. Phys. 2009;136(3):565–602. http://www.springerlink.com/content/3617162602160603
62. Naundorf B, Wolf F, Volgushev M. Unique features of action potential initiation in cortical neurons. Nature. 2006;440:1060–1063. doi:10.1038/nature04610.
63. McCormick D, Shu Y, Yu Y. Neurophysiology: Hodgkin and Huxley model - still standing? Nature. 2007;445:E1–E2. doi:10.1038/nature05523.
64. Naundorf B, Wolf F, Volgushev M. Neurophysiology: Hodgkin and Huxley model - still standing? (Reply). Nature. 2007;445:E2–E3. doi:10.1038/nature05523.
