Chaos. 2016 Sep 8; 26(9): 093104. doi: 10.1063/1.4962326

Average activity of excitatory and inhibitory neural populations

Javier Roulet 1, Gabriel B Mindlin 1
PMCID: PMC5018006  PMID: 27781447

Abstract

We develop an extension of the Ott-Antonsen method [E. Ott and T. M. Antonsen, Chaos 18(3), 037113 (2008)] that allows obtaining the mean activity (spiking rate) of a population of excitable units. By means of the Ott-Antonsen method, equations for the dynamics of the order parameters of coupled excitatory and inhibitory populations of excitable units are obtained, and their mean activities are computed. Two different excitable systems are studied: Adler units and theta neurons. The resulting bifurcation diagrams are compared with those obtained from studying the phenomenological Wilson-Cowan model in some regions of the parameter space. Compatible behaviors, as well as higher dimensional chaotic solutions, are observed. We perform numerical simulations to further validate the equations.


An active area of research in Physics deals with establishing connections across different scales of description for out-of-equilibrium systems. This is the reason why, for example, macroscopic models of nervous systems are usually made phenomenologically as opposed to statistically. In 2008, Ott and Antonsen developed a statistical method for obtaining the evolution of the macroscopic “order parameter” of a large ensemble of coupled oscillators, which describes its degree of synchronization.1 This method has recently been applied to densely connected neural populations. However, it is often the case that the mean activities of the populations (i.e., their spiking rates) are the variables of interest, particularly for behavioral control. In this paper, we extend the Ott-Antonsen method, in order to obtain equations for the mean activity of a population in terms of its order parameter. We apply this result to two different models of a “neural oscillator,” consisting of coupled excitatory and inhibitory populations of excitable units, and compare the resulting dynamics with those of the frequently used phenomenological Wilson-Cowan model. We obtain compatible behaviors in a wide range of parameter values, as well as more complex chaotic solutions.

I. INTRODUCTION

The description of how thousands of fireflies, crickets, or neurons fall into step, collectively synchronizing, has attracted the attention of dynamicists for decades. Winfree made significant progress in this field by arguing that in certain limits, amplitude variations could be neglected, and the oscillators could be described solely by their phases along their limit cycles.2 Kuramoto introduced a model for a large set of weakly coupled, nearly identical oscillators, with interactions depending sinusoidally on the phase difference between each pair of units.3 Interestingly, stationary solutions of this nonlinear model can be found exactly, in the infinite-N limit, by applying self-consistency arguments.4

In 2008, Ott and Antonsen introduced an ansatz for studying the behavior of globally coupled oscillators,1 which has been the most convenient for studying continuous time-dependent collective dynamics. The ansatz refers to the statistical description of the oscillators, and the result of this technique is a low dimensional system of reduced equations that describe the asymptotic behavior of the order parameter of the system. This order parameter, which is the resultant phasor of the system, describes the degree of synchrony of the ensemble.

Phase equations are not only an adequate representation for oscillatory dynamics but they can also describe the dynamics of a class of excitable systems, and large sets of coupled excitable units are a natural proxy for understanding the dynamics of many dynamically rich systems, neural networks among them. Recently, the Ott and Antonsen ansatz was used to explore the macroscopic dynamics of large ensembles of coupled excitable units.5–7 Yet, the macroscopic dynamics in those works was described in terms of the order parameter (as in the case of coupled oscillators), while for the study of neural arrangements, a natural macroscopic observable is the activity of the network.8,9

In this work, we study the average activity of a large set of coupled excitable units. We are interested in a particular architecture: the neural oscillator, built out of coupled excitatory and inhibitory units. We show that the average activity of the network can be analytically computed in terms of the order parameters of the problem and investigate the dynamics displayed by those macroscopic variables. We compare our results with the solutions of the phenomenologically derived Wilson-Cowan dynamical system. The comparison between the analytical expressions and the averages computed from numerical simulations allows us to unveil the subpopulation dynamics that coexist with different average behaviors. We analyze two different cases. In the first one, the dynamics of the individual units (both excitatory and inhibitory) is modeled by Adler's equations.10 In the second case, the individual units are “theta neurons.”11 In both cases, we emphasize the similarities and differences between the macroscopic solutions and those of the phenomenological Wilson-Cowan system.

The work is organized as follows. Section II presents the analysis of the first model, which consists of a set of impulsively coupled excitable phase oscillators, whose individual dynamics are ruled by Adler's equations. Section III contains the analytical results for that case, which include the computation of the average activity as a function of the order parameters of the problem. Section IV presents a similar analysis for the second case under study, corresponding to the theta neurons. In Section V, we discuss the bifurcation diagrams for the order parameter equations of the two models under analysis, and we compare them with a bifurcation diagram for the Wilson Cowan model. We report regions of the parameter space where the dynamics of our macroscopic models derived from first principles is similar to those of the phenomenological Wilson-Cowan system. We also report and discuss the departures from it. Numerical simulations of extended systems are described in Section VI. We finish with our discussion and conclusions in Section VII.

II. COUPLED ADLER'S EQUATIONS

By “neural oscillator,” we refer to an ensemble of two large populations of globally coupled excitable units: one excitatory and the other inhibitory. The proposed dynamics for the individual units is a phase oscillator in an excitable regime. One hypothesis in this approach is that all the relevant information about the internal state of an individual unit can be contained in a phase variable θ on the unit circle. Consequently, the microscopic variables in our model are as many phases {θi} as there are units in the population. If these excitable oscillators are used to model the dynamics of neurons, then the cycles of θ are interpreted as the neuron's spikes.

One widely used model of an excitable oscillator is given by the Adler equation $\dot\theta_i=\omega_i-\cos\theta_i$.10 It features a Saddle-Node in Limit Cycle (SNILC) bifurcation at $\omega = 1$, which is the known mechanism for the onset of spiking activity in Type-I neurons. For $\omega_i \lesssim 1$, the unit is said to be in the excitable regime, with a stable resting state close to an unstable one, near $\theta \approx 0$. That is, perturbations to the resting state larger than a certain threshold can trigger a large reaction on the system (a "spike"). The threshold size depends on $\omega$, which can be interpreted as the intrinsic excitability of the unit. The Adler model has another SNILC bifurcation at $\omega=-1$, where the unit starts spiking with its phase running backwards (an unwanted dynamical feature if the excitable units are asked to represent neurons).
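
For reference, the resting and threshold states and the spiking period of a single, uncoupled Adler unit follow from a short, standard computation (not spelled out in the paper) on $\dot\theta=\omega-\cos\theta$:

$$\cos\theta^{*}=\omega \;\Rightarrow\; \theta_{s}=-\arccos\omega\ (\text{stable}),\qquad \theta_{u}=+\arccos\omega\ (\text{unstable}),\qquad |\omega|<1,$$
$$T(\omega)=\int_{0}^{2\pi}\frac{d\theta}{\omega-\cos\theta}=\frac{2\pi}{\sqrt{\omega^{2}-1}},\qquad \omega>1.$$

Both fixed points approach $\theta=0$ as $\omega\to 1^{-}$, and the period $T$ diverges at the SNILC bifurcation, the hallmark of Type-I excitability mentioned above.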

In turn, the units are supposed to be globally coupled, so that their evolutions obey the following equations:

$$\dot\theta_i(t)=\omega_i-\cos\theta_i(t)+I\big(\{\theta_j(t)\},\{\tilde\theta_j(t)\}\big), \qquad (1a)$$
$$\dot{\tilde\theta}_i(t)=\tilde\omega_i-\cos\tilde\theta_i(t)+\tilde I\big(\{\theta_j(t)\},\{\tilde\theta_j(t)\}\big), \qquad (1b)$$

where the untilded variables refer to the units in the excitatory population, and tilded variables (∼) to the inhibitory ones. The first two terms in Eq. (1) describe the internal dynamics of each unit in the neural oscillator, and the couplings $I,\tilde I$ parametrize the interaction between them. These are chosen to be

$$I\big(\{\theta_j\},\{\tilde\theta_j\}\big)=\frac{k_E}{N}\sum_{j=1}^{N}\big(1-\cos\theta_j\big)-\frac{k_I}{\tilde N}\sum_{j=1}^{\tilde N}\big(1-\cos\tilde\theta_j\big), \qquad (2a)$$
$$\tilde I\big(\{\theta_j\},\{\tilde\theta_j\}\big)=\frac{\tilde k_E}{N}\sum_{j=1}^{N}\big(1-\cos\theta_j\big)-\frac{\tilde k_I}{\tilde N}\sum_{j=1}^{\tilde N}\big(1-\cos\tilde\theta_j\big), \qquad (2b)$$

where the different $k, \tilde k>0$ describe the coupling strengths between neurons, and $N$ and $\tilde N$ are the number of neurons in each of the two populations. The functional form is chosen so that the j-th unit influences the others via an impulsive term proportional to $(1-\cos\theta_j)$. This term is maximum at $\theta_j=\pi$ (where the spike occurs), and nearly zero close to the resting state $\theta_j\approx 0$. It differs from the Kuramoto coupling $\sin(\theta_j-\theta_i)$ in that it only depends on the phase of the pre-synaptic unit, and it is always excitatory for excitatory units (and inhibitory for inhibitory units). This is represented by the sign accompanying each term, which determines whether the phases of the post-synaptic units are driven towards or away from the spiking threshold from their resting states.

A first macroscopic variable describing the collective behavior of the system, the Kuramoto order parameter, can be defined for each of the two sub-populations, averaging their phasors

$$z(t)=\frac{1}{N}\sum_{j=1}^{N}e^{i\theta_j(t)}, \qquad (3a)$$
$$\tilde z(t)=\frac{1}{\tilde N}\sum_{j=1}^{\tilde N}e^{i\tilde\theta_j(t)}. \qquad (3b)$$

These variables account for the synchrony within the sub-populations: if all the oscillators are in phase, these order parameters will present moduli equal to one. On the other hand, if the populations are active, i.e., with units spiking, but out of synchrony, the order parameters will present small values. In this sense, the order parameters do not capture all the features of a macroscopic state.

These order parameters allow us to rewrite Eq. (2) in a compact way

$$I(z,\tilde z)=k_E\,(1-\operatorname{Re}z)-k_I\,(1-\operatorname{Re}\tilde z), \qquad (4a)$$
$$\tilde I(z,\tilde z)=\tilde k_E\,(1-\operatorname{Re}z)-\tilde k_I\,(1-\operatorname{Re}\tilde z), \qquad (4b)$$

which makes the system in Eq. (1) suitable to the application of the method introduced in 2008 by Ott and Antonsen.1 The essence of the computation consists in describing the state of each population of neurons through a distribution function, expanding it in Fourier modes, and using a continuity equation to derive the dynamical rules governing their behavior. Following their procedure, we first approximate the problem assuming an infinite population. The system description is now made in terms of the distributions f(θ,ω,t),f˜(θ,ω,t) that represent the density of units with a given excitability ω and phase θ in the excitatory and inhibitory populations, respectively. In the following discussion, we show the steps of our calculation for the excitatory population, and a completely analogous procedure is carried out for the inhibitory one.

The distribution functions are normalized so that

$$\int_{-\infty}^{\infty}d\omega\int_{0}^{2\pi}d\theta\,f(\theta,\omega,t)=1,$$

with the excitabilities distributed according to

$$g(\omega)=\int_{0}^{2\pi}f(\theta,\omega,t)\,d\theta, \qquad (5)$$

which is time-independent since the excitabilities are assumed to be constant. In this representation, the order parameters will be expressed as integrals, namely,

$$z(t)=\int_{-\infty}^{\infty}d\omega\int_{0}^{2\pi}d\theta\,f(\theta,\omega,t)\,e^{i\theta},$$

and our macroscopic questions can be addressed as we compute the distributions f and f˜.

Conservation of neurons with excitability ω means that these satisfy the continuity equation

$$\frac{\partial f}{\partial t}+\frac{\partial}{\partial\theta}\big(f\,v\big)=0, \qquad (6)$$

where the velocity is

$$v(\theta,\omega,t)=\omega-\cos\theta+I\big(z(t),\tilde z(t)\big). \qquad (7)$$

One way to solve this problem consists of performing a mode decomposition of the distributions and finding the dynamics of the mode amplitudes. By virtue of Eq. (5), f can be decomposed as

$$f(\theta,\omega,t)=\frac{g(\omega)}{2\pi}\left[1+\left(\sum_{n\ge 1}\alpha_n(\omega,t)\,e^{-in\theta}+\mathrm{c.c.}\right)\right], \qquad (8)$$

where c.c. means the complex conjugate of the preceding term. In principle, substitution of Eq. (8) in Eq. (6) leads to an infinite set of equations for the evolution of each αn. Yet, Ott and Antonsen found an ansatz (OA ansatz) that simplifies the problem1 by proposing

$$\alpha_n(\omega,t)=\big[\alpha(\omega,t)\big]^{\,n}. \qquad (9)$$

The equations for all the modes are satisfied as long as the first mode satisfies

$$\frac{\partial\alpha}{\partial t}=i\big(\omega+I(z,\tilde z)\big)\,\alpha-\frac{i}{2}\big(1+\alpha^{2}\big). \qquad (10)$$
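
For completeness, here is the intermediate step, written with the conventions of Eqs. (7)–(9) above (it is not spelled out in the original): writing $\cos\theta=(e^{i\theta}+e^{-i\theta})/2$ and collecting the coefficient of $e^{-in\theta}$ in Eq. (6) gives, for every $n\ge 1$,

$$\frac{\partial\alpha_n}{\partial t}=in\Big[\big(\omega+I\big)\,\alpha_n-\tfrac{1}{2}\big(\alpha_{n-1}+\alpha_{n+1}\big)\Big],\qquad \alpha_0\equiv 1.$$

Substituting the ansatz $\alpha_n=\alpha^{n}$ makes each of these equations equal to $n\,\alpha^{\,n-1}$ times Eq. (10), so the whole infinite hierarchy collapses onto the single equation for $\alpha$.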

The relation between α and z is obtained by multiplying Eq. (8) by eiθ and integrating in θ and ω

$$\int_{-\infty}^{\infty}g(\omega)\,\alpha(\omega,t)\,d\omega=z(t), \qquad (11)$$

(α(ω,t) can be interpreted as an order parameter restricted to the units with excitability ω). To solve Eq. (10), we still have to compute the integral in Eq. (11), which requires assuming a specific distribution g(ω) for the system's excitabilities. The Lorentzian distribution is particularly useful here. It is defined as

$$g(\omega)=\frac{\Delta}{\pi}\,\frac{1}{(\omega-\omega_0)^{2}+\Delta^{2}}, \qquad (12)$$

which has a maximum at ω0 and a half-width at half-maximum Δ. Setting g(ω) as in Eq. (12) and assuming that α(ω,t) is analytic in the complex ω upper half-plane, we can solve Eq. (11) by contour integration, evaluating α at the pole ω0+iΔ

$$\alpha(\omega_0+i\Delta,t)=z(t). \qquad (13)$$

Evaluating Eq. (10) at the pole, the partial differential equations for the first mode amplitudes $\alpha$ and $\tilde\alpha$ become a coupled, 4-dimensional system of first order ordinary differential equations for the order parameters, namely,

$$\dot z=\big[-\Delta+i\big(\omega_0+I(z,\tilde z)\big)\big]\,z-\frac{i}{2}\big(1+z^{2}\big), \qquad (14a)$$
$$\dot{\tilde z}=\big[-\tilde\Delta+i\big(\tilde\omega_0+\tilde I(z,\tilde z)\big)\big]\,\tilde z-\frac{i}{2}\big(1+\tilde z^{2}\big). \qquad (14b)$$

Ott and Antonsen demonstrated that the ansatz Eq. (9) defines an invariant manifold, which is globally attracting for the order parameters under very general conditions.12 In this way, Eq. (14) describes the long term solution of the problem, regardless of the initial conditions.
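
As an illustration, a minimal numerical sketch of Eq. (14) follows (this is not the authors' code; the values of $\Delta$, $\tilde\Delta$ and the couplings are borrowed from the caption of Fig. 2, while the values of $\omega_0$, $\tilde\omega_0$, the initial condition, and the integration time are arbitrary choices):

```python
import numpy as np

# Mean-field equations (14) for the two order parameters.
w0, w0t = 1.0, 1.0          # omega_0, omega_0~ (arbitrary point of the swept plane)
D, Dt = 0.10, 0.11          # Delta, Delta~
kE, kEt = 3.0, 2.7          # k_E, k_E~
kI, kIt = 2.45, 2.35        # k_I, k_I~

def rhs(z, zt):
    """Right-hand sides of Eqs. (14a) and (14b)."""
    I = kE * (1 - z.real) - kI * (1 - zt.real)       # coupling, Eq. (4a)
    It = kEt * (1 - z.real) - kIt * (1 - zt.real)    # coupling, Eq. (4b)
    dz = (-D + 1j * (w0 + I)) * z - 0.5j * (1 + z**2)
    dzt = (-Dt + 1j * (w0t + It)) * zt - 0.5j * (1 + zt**2)
    return dz, dzt

def rk4_step(z, zt, h):
    k1z, k1t = rhs(z, zt)
    k2z, k2t = rhs(z + 0.5*h*k1z, zt + 0.5*h*k1t)
    k3z, k3t = rhs(z + 0.5*h*k2z, zt + 0.5*h*k2t)
    k4z, k4t = rhs(z + h*k3z, zt + h*k3t)
    return (z + h/6*(k1z + 2*k2z + 2*k3z + k4z),
            zt + h/6*(k1t + 2*k2t + 2*k3t + k4t))

z, zt = 0.1 + 0j, 0.1 + 0j   # arbitrary initial order parameters
h = 0.01
traj = np.empty(20000, dtype=complex)
for n in range(traj.size):
    z, zt = rk4_step(z, zt, h)
    traj[n] = z
print("late-time |z|, |z~|:", abs(z), abs(zt))
```

Depending on the chosen parameters, the trajectory settles onto a fixed point or a limit cycle of the kind shown in the insets of Fig. 2.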

III. COMPUTATION OF THE AVERAGE ACTIVITY

As we discussed in Section II, the order parameters z and z˜ describe the degree of synchrony of the system. Another sensible description of its macroscopic behavior is the level of activity of the sub-populations, understood as the total number of spikes taking place per unit of time. This quantity can be computed as the flux of phasors through θ=π

$$\phi(t)=\int_{-\infty}^{\infty}f(\theta,\omega,t)\,v(\theta,\omega,t)\Big|_{\theta=\pi}\,d\omega, \qquad (15a)$$
$$\tilde\phi(t)=\int_{-\infty}^{\infty}\tilde f(\theta,\omega,t)\,\tilde v(\theta,\omega,t)\Big|_{\theta=\pi}\,d\omega, \qquad (15b)$$

with v,v˜ satisfying Eq. (7). A convenient expression for f(π,ω,t) can be obtained by imposing the OA ansatz Eq. (9) explicitly in Eq. (8). Now, each sum becomes a geometric series that can be written in terms of α

$$\sum_{n\ge 1}\alpha^{n}e^{-i\pi n}=\sum_{n\ge 1}(-\alpha)^{n}=\frac{1}{1+\alpha}-1.$$

The expression for f given in Eq. (8) is not analytical when extended to the complex plane, because of the appearance of α* in the complex conjugate term, and therefore, Eq. (15) cannot be integrated by means of the residue theorem. In order to solve this problem, we propose decomposing f into two terms, one of them analytical, and the other its complex conjugate. This yields

$$f(\pi,\omega,t)=\frac{g(\omega)}{2\pi}\left(\frac{1}{1+\alpha(\omega,t)}-\frac{1}{2}\right)+\mathrm{c.c.}, \qquad (16)$$
$$\phi(t)=\int_{-\infty}^{\infty}\frac{g(\omega)}{2\pi}\left(\frac{1}{1+\alpha(\omega,t)}-\frac{1}{2}\right)v(\pi,\omega,t)\,d\omega+\mathrm{c.c.}, \qquad (17)$$

since both g and v are real for real ω and therefore equal to their complex conjugates. The integrand in Eq. (17) is now analytical and we can apply the residue theorem.

This integral needs to be evaluated in principal value, as $v(\omega)\sim\omega$ for large $\omega$, causing the integral to diverge at $\pm\infty$. The infinite contribution to the mean activity made by the "unphysical" units with $\omega\to\infty$ is canceled by the negative infinite activity of the equally unphysical units with $\omega\to-\infty$, leaving only the contribution of the ("physical") intermediate-$\omega$ units. In Appendix A, we give a rigorous method for avoiding these infinities by slightly changing the distribution function $g(\omega)$.

We can perform the integral in Eq. (17) by means of the residue theorem. To do so, we enclose the upper complex half-plane with a semicircle of radius R and subtract its contribution to the integral, which yields

$$\phi(t)=\left[\frac{1}{2\pi}\left(\frac{1}{1+z(t)}-\frac{1}{2}\right)v(\pi,\omega_0+i\Delta,t)-\lim_{R\to\infty}\frac{i\Delta}{2\pi^{2}}\int_{0}^{\pi}\left(\frac{1}{\alpha(Re^{i\varphi},t)+1}-\frac{1}{2}\right)d\varphi\right]+\mathrm{c.c.},$$

where we made use of Eq. (13) to write the first term as a function of the order parameter. Using that $\alpha(\omega,t>0)\to 0$ as $\operatorname{Im}\omega\to\infty$ (which follows from Eq. (10)), we can perform the integral in the second term. This leads to the important result

$$\phi(z,\tilde z)=\frac{1}{\pi}\left(\frac{1+\operatorname{Re}z}{|1+z|^{2}}-\frac{1}{2}\right)\big(\omega_0+1+I(z,\tilde z)\big)+\frac{\Delta}{\pi}\,\frac{\operatorname{Im}z}{|1+z|^{2}}, \qquad (18a)$$
$$\tilde\phi(z,\tilde z)=\frac{1}{\pi}\left(\frac{1+\operatorname{Re}\tilde z}{|1+\tilde z|^{2}}-\frac{1}{2}\right)\big(\tilde\omega_0+1+\tilde I(z,\tilde z)\big)+\frac{\tilde\Delta}{\pi}\,\frac{\operatorname{Im}\tilde z}{|1+\tilde z|^{2}}. \qquad (18b)$$

Equations (18) give an explicit relation for the mean activities in terms of the order parameters. They were obtained under the OA ansatz without making any additional assumption. In this way, once we compute the order parameters satisfying our nonlinear ordinary differential equations (14), we can obtain the activity of each sub-population by the evaluation of the algebraic expression above.
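
For concreteness, Eq. (18a) translates into a one-line evaluation once the order parameters are known. A minimal sketch follows, with the same hypothetical parameter names used in the snippet above; the inhibitory activity, Eq. (18b), is obtained by exchanging the roles of the two populations:

```python
import numpy as np

def activity(z, zt, w0, Delta, kE, kI):
    """Mean activity of the excitatory population, Eq. (18a)."""
    I = kE * (1 - z.real) - kI * (1 - zt.real)      # coupling, Eq. (4a)
    denom = abs(1 + z)**2
    return ((1 + z.real)/denom - 0.5) * (w0 + 1 + I)/np.pi + Delta * z.imag/(np.pi*denom)
```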

This result allows us to make a connection between the order-parameter-based description of the neural oscillator suggested by the Ott-Antonsen statistical method and the mean-activity-based description made by phenomenological, additive models (such as the Wilson-Cowan neural oscillator model). A similar search for a macroscopic description of coupled excitable cells in terms of activity was carried out recently by Montbrió et al. in Ref. 9. Another derivation of activity rates in a population of theta neurons, valid for order parameters constant in time, was done in Ref. 13.

Note that the mean activities obtained from Eq. (18) are a projection of the 4-dimensional dynamics given by Eq. (14), while the Wilson-Cowan model obeys a simpler, 2-dimensional system. Thus, their qualitative behaviors can only be compatible if a further dimensional collapse occurs.

IV. RESULTS FOR THE THETA NEURON MODEL

A word of caution should be said about the consequences that the Lorentzian distribution proposal has on the mean activity defined by Eq. (15). In general, the single-unit models subjected to the Ott-Antonsen method are meaningful in a range of parameter values (physical regime) but feature some kind of pathological behavior when the frequency parameter becomes large, like the arbitrarily fast or "backward" spikes in the Adler model presented above. However, the Ott-Antonsen prescription requires integrating a nonzero (Lorentzian) distribution over the whole infinite range of frequencies. It is difficult to prevent the single-unit models from having some unphysical regime at large parameter values (for example, by changing the phasor velocities' dependence on $\omega$), because our prescription for computing the mean activities requires that $g\cdot v$ be analytical and integrable in the whole upper complex $\omega$ plane. Thus, we can try to overcome this problem by choosing a narrow distribution width $\Delta$, so that the vast majority of the units lie in the physical regime. Indeed, the impact of the unphysical units on the order parameter dynamics is limited, since the influence $(k/N)(1-\cos\theta)$ that each neuron has on the others is bounded by $2k/N$, independently of the spiking frequency. In this way, "a small proportion of unphysical units" means that they make a small contribution to the system dynamics. However, we defined the mean activity as the spiking frequency of the population, to which the unphysical units can significantly contribute. We have shown in Appendix A the mechanism by which the problem resolves in the Adler-units model, which involves the cancellation of opposite diverging contributions made by the high- and low-$\omega$ tails of the Lorentzian distribution.

It is worth exploring a model for which no parameter value places the units into a backwards oscillation. This can be achieved if the individual units are "theta neurons," a canonical model for neurons with Type-I excitability.11 The proposed dynamics for the individual units is also a phase oscillator in an excitable regime, but the equation driving its dynamics is given by

$$\dot\theta_i(t)=1-\cos\theta_i(t)+\big(1+\cos\theta_i(t)\big)\,\eta_i, \qquad (19)$$

where now $\eta$ plays the role of the frequency parameter. As in the previous model, the system undergoes a saddle node in a limit cycle, a global bifurcation leading the unit from an excitable regime to an oscillatory one, at $\eta = 0$. The main difference between this model and the previous one is that now no parameter value puts our units into a backwards oscillation. This automatically resolves the divergence at the negative-$\eta$ end of the integral in Eq. (15). The spiking frequency of the individual (uncoupled) neurons still diverges for large $\eta$, but now only as $\sqrt\eta$ instead of linearly: $\tau(\eta)=\int_0^{2\pi}\dot\theta^{-1}\,d\theta=\pi/\sqrt\eta$ for $\eta>0$. The $\eta^{-2}$ decay of the Lorentzian function is then sufficient to render the integral finite.
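
The closed form quoted for the period follows from the half-angle substitution $u=\tan(\theta/2)$ (a standard computation, added here for reference):

$$\tau(\eta)=\int_{0}^{2\pi}\frac{d\theta}{(1-\cos\theta)+(1+\cos\theta)\,\eta}=\int_{-\infty}^{\infty}\frac{du}{u^{2}+\eta}=\frac{\pi}{\sqrt{\eta}},\qquad \eta>0,$$

using $1-\cos\theta=2u^{2}/(1+u^{2})$, $1+\cos\theta=2/(1+u^{2})$, and $d\theta=2\,du/(1+u^{2})$.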

The equations now read

$$\dot\theta_i(t)=1-\cos\theta_i(t)+\big(1+\cos\theta_i(t)\big)\Big[\eta_i+I\big(\{\theta_j(t)\},\{\tilde\theta_j(t)\}\big)\Big], \qquad (20a)$$
$$\dot{\tilde\theta}_i(t)=1-\cos\tilde\theta_i(t)+\big(1+\cos\tilde\theta_i(t)\big)\Big[\tilde\eta_i+\tilde I\big(\{\theta_j(t)\},\{\tilde\theta_j(t)\}\big)\Big], \qquad (20b)$$

with I and I˜ given by Eq. (4).

A procedure analogous to the one followed in Section II leads to the following equations for the order parameters

$$\dot z=2iz+\frac{1}{2}\Big[-\Delta+i\big(\eta_0-1+I(z,\tilde z)\big)\Big](z+1)^{2}, \qquad (21a)$$
$$\dot{\tilde z}=2i\tilde z+\frac{1}{2}\Big[-\tilde\Delta+i\big(\tilde\eta_0-1+\tilde I(z,\tilde z)\big)\Big](\tilde z+1)^{2}. \qquad (21b)$$

The computation of the activity for the populations can be carried out by following the steps described in Section III, and we obtain

$$\phi(z)=\frac{2}{\pi}\left(\frac{1+\operatorname{Re}z}{|1+z|^{2}}-\frac{1}{2}\right), \qquad (22a)$$
$$\tilde\phi(\tilde z)=\frac{2}{\pi}\left(\frac{1+\operatorname{Re}\tilde z}{|1+\tilde z|^{2}}-\frac{1}{2}\right). \qquad (22b)$$

Notably, in this case, each sub-population's activity is determined solely by its own order parameter (i.e., it does not depend on the order parameter of the other sub-population). The reason is that the phase velocity of every unit at $\theta=\pi$ equals 2, independently of $\eta$ and of the coupling, so the interaction enters Eq. (22) only through the dynamics of $z$ and $\tilde z$ themselves.

V. BIFURCATION DIAGRAMS

In this section, we compare a bifurcation diagram of the Wilson-Cowan neural oscillator with the diagrams obtained for the coupled Adler units and for the coupled theta neurons. The local bifurcations were computed numerically with PyDSTool.14

The Wilson-Cowan oscillator is a phenomenologically derived model for the activity of two coupled neural populations, excitatory and inhibitory. The variables x and y represent their activities, and their dynamics are prescribed by the following differential equations:

$$\dot x=-x+S\big(\rho_x+ax-by\big), \qquad (23a)$$
$$\dot y=-y+S\big(\rho_y+cx-dy\big), \qquad (23b)$$

where $S(\xi)=1/(1+e^{-\xi})$ is a sigmoidal function that represents the nonlinear nature of the response: beyond some input level, the average activity of a population no longer increases its value. The excitatory or inhibitory nature of x and y is represented by the sign accompanying the positive coupling parameters a, b, c, and d. The parameters $\rho_x$ and $\rho_y$ describe the external inputs to the excitatory and inhibitory populations, respectively. These inputs could come, for instance, from other regions of the nervous system, and are expected to be the parameters of the system that change most dynamically. Therefore, it is useful to understand the bifurcation diagram for the parameters $\rho_x$ and $\rho_y$.
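
To make the comparison concrete, here is a minimal sketch that integrates Eq. (23) for one choice of external inputs (a plain initial-value integration, not the PyDSTool continuation used for Fig. 1; the values of $\rho_x$, $\rho_y$ and the initial condition are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 15.0, 15.0, 12.0, 5.0      # coupling parameters as in Fig. 1
rx, ry = -2.0, -4.0                     # external inputs (arbitrary example point)

def S(xi):
    return 1.0 / (1.0 + np.exp(-xi))    # sigmoidal response

def wilson_cowan(t, s):
    x, y = s
    return [-x + S(rx + a*x - b*y), -y + S(ry + c*x - d*y)]

sol = solve_ivp(wilson_cowan, (0, 500), [0.1, 0.1], max_step=0.1)
print("asymptotic (x, y):", sol.y[0, -1], sol.y[1, -1])
```

Sweeping (rx, ry) over a grid and classifying the asymptotic behavior reproduces the qualitative regions of Fig. 1.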

The region of the parameter space that we chose to display in Fig. 1 presents a variety of dynamical regimes. At the blue dashed curves labeled "Hopf," oscillations are born in Hopf bifurcations. The curves labeled "SN" correspond to saddle node bifurcations, where saddles and attractors (or repellers) meet and disappear. "SNILC" denotes the saddle-node in a limit cycle bifurcation. In this case, before the disappearance of a saddle and a node, the unstable manifold of the saddle was part of the stable manifold of the attractor. In this way, at the bifurcation, the manifolds become a limit cycle, and an oscillation of infinite period and finite (nonzero) amplitude is born. The juncture of two SN curves is known as a "cusp," and these two colliding SN curves delimit a region in the parameter space where three fixed points exist (labeled 5, 6, and 2 in Fig. 1(b)). The global description of the stationary dynamics is the following: increasing $\rho_x$, the attractor with small x value and a saddle collide, and the only surviving attractor is one with high x value: the system has been excited (regions 3 and 4, the right side of region 1/4). Analogously, decreasing $\rho_x$ leads to the disappearance of the excited attractor, and the surviving attractor is the "off," nonexcited state (region 1, the left part of region 1/4). In between the Hopf bifurcation lines (regions 3 and 6), the system displays stable oscillations: the excitatory and inhibitory populations are sequentially excited. The dotted green line, labeled "Hom," corresponds to a homoclinic bifurcation, at which a limit cycle collides with the saddle of the system. This curve is born tangent to a SN curve at a point where a Hopf and a SN curve touch, a codimension-2 Bogdanov-Takens bifurcation ("BT"), and ends, non-tangentially, on another SN curve, at a Saddle-Node in Saddle-Loop (SNSL) codimension-2 bifurcation.

FIG. 1.

Bifurcation diagram of the Wilson-Cowan model (Eq. (23)) for the parameters ρx and ρy. The remaining parameters have been set to a = 15, b = 15, c = 12, and d = 5. The right panel (b) shows a detail near one of the Bogdanov-Takens bifurcations. The bifurcation curves define 5 regions with qualitatively different limit sets. The insets show limit trajectories in the phase space (x, y) at parameter values representative of each region, labeled 1–6: filled dots represent stable fixed points, empty dots represent unstable fixed points, and closed curves represent limit cycles. Regions 1 and 4 have been identified to help later comparison to the other models.

Fig. 2 displays the bifurcation diagram that we obtained studying Eqs. (14) for the order parameters of the Adler system, at the parameter values that we report in the caption. Each circular inset represents the $|z|\le 1$ disk, where the limit sets of the system in Eq. (14) at different parameter values are projected. Fig. 3 displays a similar diagram, for the equations of the theta neuron model. Both diagrams show a direct correspondence in their bifurcations and the limit sets in each region. The two SN curves approach significantly, which suggests the presence of a cusp bifurcation (an $8-2=6$ dimensional hypersurface in the 8-dimensional parameter space) nearby. However, these curves do not intersect; rather, they approach and then separate (Fig. 2(b)), which means that the cusp does not cross the hyperplane defined by our parameter choice. We conjecture that the regions 1 and 4 are actually the same region at the two sides of the postulated cusp (i.e., in the full parameter space, they could be connected without crossing any bifurcation). Then, these bifurcation diagrams also match the Wilson-Cowan one, displayed in Fig. 1(b). They both share with the Wilson-Cowan model the coexistence of "on" and "off" stationary attracting states separated by a saddle, the existence of simple oscillations where the activities of the competing populations alternate, and a series of global bifurcations that allow the possibility of oscillations with critical slowing down.

FIG. 2.

(a) Bifurcation diagram of the Adler-units model, Eq. (14), for the parameters $\omega_0$ and $\tilde\omega_0$. The remaining 6 parameters were set to $\Delta=0.1$, $\tilde\Delta=0.11$, $k_E=3.0$, $\tilde k_E=2.7$, $k_I=2.45$, $\tilde k_I=2.35$. The insets show the z component of the limit trajectories in the $(z,\tilde z)$ space for a representative set $(\omega_0,\tilde\omega_0)$ in each region. (b) A "zoom out" of the same bifurcation diagram, showing the SN curves (solid red) approaching and separating.

FIG. 3.

Bifurcation diagram of the theta neuron model (Eq. (21)) for the parameters η0 and η˜0. The remaining 6 parameters were chosen as in Fig. 2.

Recently, complex motor patterns in birdsong production were described as the solutions displayed by a neural oscillator at the expiratory-related area of the song system, when driven by inputs from other neural structures.15 In that model, the key dynamical feature that allows reproducing those patterns was associated with the proximity between a SN and a Hopf curve, a dynamical scenario present in the Wilson-Cowan model and shared by the solutions of the average equations derived from first principles.

Since the equations for the order parameters (in the two models analyzed in this work) are four dimensional, it is possible to find behaviors more complicated than those present in the Wilson-Cowan model. As an example, Fig. 4 displays a chaotic solution. However, in wide regions of the biophysically relevant parameter space volume, the system's attractors are those characteristic of a two dimensional dynamical system: fixed points or simple, "untwisted" limit cycles. In the Adler-units model, this behavior was observed in about 95% of 6500 runs, varying independently all 8 parameters in Eq. (14) in the ranges $\Delta\in[0.05,0.27]$, $\omega_0\in[0.6,1.5]$, $k\in[2,9.5]$ for each population.

FIG. 4.

(a) Projection of a chaotic attractor for the order parameters of the Adler units system, at parameters $\omega_0=0.35$, $\tilde\omega_0=1.73$, $\Delta=0.15$, $\tilde\Delta=0.27$, $k_E=9.0$, $\tilde k_E=5.0$, $k_I=3.5$, $\tilde k_I=2.5$. Stable and unstable fixed points coexist, plotted as filled or empty circles, respectively. (b) Bifurcation diagram showing the birth of the chaotic attractor as the parameters are changed. In the vertical axis, we plot the imaginary part of the solution for z at the intersections $z_n$ with a Poincaré section at angle $\psi=\pi/2$. At $\omega_0=0$ (and all the other parameters constant), the system has a period 1 limit cycle, seen as a single intersection. Increasing $\omega_0$ continuously brings it through a cascade of period-doubling bifurcations that gives birth to the strange attractor, and a crisis in which it becomes more complex. Further increase of $\omega_0$ brings the attractor back to a simple limit cycle through the same changes in reverse order.

This suggests the existence of an attracting, invariant two dimensional manifold within the four dimensional phase space for those parameter values. Indeed, it is possible to find it analytically in the special case where the two populations have symmetric parameters (i.e., $\omega_0=\tilde\omega_0$, $\Delta=\tilde\Delta$, $k_E=\tilde k_E$, $k_I=\tilde k_I$). In Appendix B, we show that for this specific case, the plane manifold $z=\tilde z$ is invariant and stable. We expect that departing from the symmetric case first deforms the two dimensional manifold before the system explores the full dimensionality. Notice that the parameters of the bifurcation diagrams in Figs. 2 and 3 are close to fulfilling the symmetry condition, and the system displays rich two dimensional behavior. The parameters of Fig. 4, on the other hand, are not, and the system can explore a higher dimensionality.

VI. NUMERICAL SIMULATIONS

In this section, we analyze simulations of the full system, Eq. (1), for a network of $10^4$ Adler units in each of the two populations. These simulations allow us to test the validity of the mean field Eqs. (14) and (18) for the dynamics of the order parameters and the mean activity, respectively. Moreover, they help us in gaining further insight into the role that the synchronization of units at the microscopic level plays on the macroscopic dynamics.

The order parameters of the simulated populations can be computed from the individual phases by means of Eq. (3). The accuracy of the mean field Eq. (14) can be tested by comparing the simulated and predicted trajectories in the $(z,\tilde z)$ space. Furthermore, the mean activity can be computed in the simulation directly from its definition, as the fraction of phasors that cross $\theta=\pi$ in a small time interval $\delta t$, divided by $\delta t$. Equation (18) makes a testable prediction of these mean activities from the order parameters of the simulation.
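
A sketch of these two measurements on simulated data follows (the array names are hypothetical; the phases are assumed to be kept unwrapped, i.e., cumulative, so that crossings of $\theta=\pi+2\pi k$ can be counted with their sign):

```python
import numpy as np

def order_parameter(theta):
    """Kuramoto order parameter of one population, Eq. (3)."""
    return np.mean(np.exp(1j * theta))

def mean_activity(theta_old, theta_new, dt):
    """Spikes per unit and per unit time between two snapshots taken dt apart:
    net number of crossings of theta = pi (mod 2*pi), counted with sign.
    Both arrays must contain unwrapped (cumulative) phases."""
    crossings = (np.floor((theta_new - np.pi) / (2*np.pi))
                 - np.floor((theta_old - np.pi) / (2*np.pi)))
    return crossings.sum() / (len(theta_old) * dt)
```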

In the simulations, the sets of individual excitabilities and initial phases {ωi,θi},{ω˜i,θ˜i} were chosen so that the resulting distribution functions satisfied the Ott-Antonsen ansatz (i.e., were given by Eq. (16)) and had an arbitrary initial condition for their order parameters. This was achieved by proposing an uncorrelated initial distribution function f(θ,ω,0)=g(ω)h(θ), with g(ω) given by Eq. (12) and

$$h(\theta)=\frac{1}{2\pi}\left(\frac{1}{1-z(0)\,e^{-i\theta}}-\frac{1}{2}\right)+\mathrm{c.c.},$$

which fulfills both conditions automatically. The initial phases were chosen randomly according to $h(\theta)$, and the excitabilities were generated by taking $\omega_i=\omega_0+\Delta\tan x_i$ with $\{x_i\}$ distributed uniformly in the interval $(-\pi/2,\pi/2)$. This yields the desired Lorentzian distribution for the $\{\omega_i\}$. A similar procedure was applied to the inhibitory population.
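
A sketch of this initialization follows. The excitabilities use the tangent transformation described above; the phases are drawn from $h(\theta)$ by rejection sampling, using the fact that $h$ is bounded by $(1/2\pi)(1+|z(0)|)/(1-|z(0)|)$. Population size, seed, and the target $z(0)$ are arbitrary choices:

```python
import numpy as np
rng = np.random.default_rng(0)

N, w0, Delta = 10_000, 1.0, 0.10      # population size and Lorentzian parameters
z0 = 0.3 * np.exp(0.5j)               # arbitrary target initial order parameter, |z0| < 1

# Excitabilities: Lorentzian distribution via the tangent of a uniform variable
x = rng.uniform(-np.pi/2, np.pi/2, N)
omega = w0 + Delta * np.tan(x)

# Phases: rejection sampling from h(theta)
def h(theta):
    return (1.0/(2*np.pi)) * 2*np.real(1.0/(1.0 - z0*np.exp(-1j*theta)) - 0.5)

hmax = (1.0/(2*np.pi)) * (1 + abs(z0)) / (1 - abs(z0))   # upper bound of h
theta = np.empty(N)
filled = 0
while filled < N:
    cand = rng.uniform(0, 2*np.pi, N)
    keep = cand[rng.uniform(0, hmax, N) < h(cand)]
    take = keep[: N - filled]
    theta[filled:filled + len(take)] = take
    filled += len(take)

print("sampled order parameter:", np.mean(np.exp(1j*theta)), "target:", z0)
```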

The differential equations (1) and (14) were integrated numerically using the order 4 Runge-Kutta method with a time step of size 0.01. A time interval of δt=0.8 was used to compute the mean activities from the definition as described above.

Fig. 5 shows the attracting limit sets for the order parameters of simulations of the full network at the representative parameter values chosen in Fig. 2. These are labeled 1–6, for each of the six regions of the analyzed parameter space. All the simulations are in good agreement with the mean field prediction, although the "active" fixed point in regions 4 and 5 presents somewhat large fluctuations with a predominating frequency (see upper panel of Fig. 6(b)). Simulations with larger N and $\tilde N$ present smaller fluctuations (not shown), suggesting that these are due to finite size effects. Even this behavior can be accounted for by the mean field equation: in these cases, the stable fixed point lies in a slow two dimensional manifold to which all trajectories nearby are rapidly attracted before they spiral into the fixed point, as linearization of the mean field Eq. (14) at the fixed point shows. The fixed point has a weak stability along the slow manifold (notice that it is close to losing it at the Hopf bifurcation), and the system is sensitive to fluctuations in those directions. For example, the Jacobian at the fixed point in region 4 has eigenvalues $\lambda_{1,2}=-0.026\pm 0.49i$ associated with the slow manifold and $\lambda_{3,4}=-0.34\pm 0.65i$ associated with the fast one. From the imaginary part of the slow eigenvalues, we can correctly predict the period of the fluctuation oscillations, given by $\tau=2\pi/0.49=12.8$. This is in agreement with the one observed in Fig. 6(b). The real parts of the eigenvalues determine the timescale of the transient motions, which are an order of magnitude faster in the fast manifold, supporting the idea of an effective dimensional collapse.

FIG. 5.

z component of the attractor sets for the order parameters in a simulation of the coupled Adler units ($10^4$ in each population), at the parameter values labeled 1–6 in Fig. 2.

FIG. 6.

Spiking configuration of the excitatory population at the three qualitatively different activity regimes, with (a) low, (b) high, or (c) oscillating activity, corresponding to the attractors in regions 2, 4, and 3 of Fig. 2, respectively. The upper panel in each inset shows the mean activity computed by counting spikes (light green) or by means of Eq. (18) from the order parameters (dark green). The lower panel shows the raster plot of the full population.

To validate Eq. (18), we plot in the upper panels of Fig. 6 the mean activity of the excitatory population obtained by counting spikes directly (light green) or by computing it from the order parameters (dark green), as described above. This is done for each of the three qualitatively different regimes found: with low, high, or oscillating activity. The agreement between both methods is impressive. Even in the active fixed point discussed earlier, the activity obtained from the order parameters of the simulation accurately reproduces the fluctuations. These three regimes appear to correspond, respectively, to the partially synchronous rest state, partially synchronous spiking state, and collective periodic wave reported by So et al. in Ref. 7.

We can gain further insight on the mechanisms at the level of the individual spikes that generate the distinct macroscopic behaviors by recording, in the simulation, each unit's spiking times (i.e., the times at which $\theta_i=\pi$). This is somewhat analogous to the experimentalists' raster plots, but for the whole population. Three "raster plots" are displayed in the lower panels of Fig. 6, one for each of the regimes studied. On the horizontal axis, we represent time, and on the vertical axis we display the unit's index. We ordered the units according to their intrinsic excitability $\omega_i$. The dots represent the individual spikes. Thus, horizontal patterns mean that each unit's behavior depends on its individual excitability, and vertical patterns are associated with synchronization between spikes. To make the synchronization structure clearer, in each case we chose a reference unit that spiked at regular (maximum) time intervals, thereby defining the fundamental frequency of the population. We used the reference unit's spikes, plotted as vertical broken lines, as a natural way to bin time. In this way, each unit in the population fires an integer number of times in each bin, which we use to color-code the spikes. Spikes occurring with $\dot\theta_i<0$ are colored in grey, and we see that only a negligible fraction at the lowest-$\omega$ end of the population fires backwards in the studied cases. The inhibitory population displays behavior qualitatively similar to the excitatory one in all these cases (not shown).

Fig. 6(a) shows that in the fixed point solution, the units either spike non-synchronously or do not spike at all, depending on whether their intrinsic excitability is above or below a definite threshold. This can be understood by looking at the coupling terms in Eq. (1), which only depend on time through the order parameters. For this reason, they present a constant value at the fixed point, say $I_0$. Thus, each unit's evolution is ruled by $\dot\theta_i=\omega_i-\cos\theta_i+I_0$, which yields a threshold $\omega=1-I_0$ below which units do not spike. Above it, they spike with different periods $\tau_i=\int_0^{2\pi}(\omega_i+I_0-\cos\theta)^{-1}\,d\theta$, so that no synchronization can possibly occur at a fixed point of the order parameters. The choice of the reference unit was arbitrary here, since this solution has no natural timescale.
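
For reference, this period integral has a closed form (a standard computation, not given explicitly in the text):

$$\tau_i=\int_{0}^{2\pi}\frac{d\theta}{\omega_i+I_0-\cos\theta}=\frac{2\pi}{\sqrt{(\omega_i+I_0)^{2}-1}},\qquad \omega_i+I_0>1,$$

so the periods vary continuously with $\omega_i$ and are all different, which is why no locking can occur while the order parameters remain constant.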

The main difference that Fig. 6(b) presents is that a larger fraction of the population is above threshold, yielding a higher mean activity. Although the majority of the population fires non-synchronously, a number of units tend to synchronize their spikes, causing the mean activity to present fluctuations with a regular frequency. Conversely, it is these fluctuations in the order parameters that render the argument above inapplicable to this case and provide a mechanism for synchronization.

In Fig. 6(c), the role that synchronization has in the time dependence of the mean field variables becomes clear: periodic activity peaks are caused by synchronized firing of a large fraction of units within a range of excitabilities. In turn, smaller groups form that synchronize with different spiking ratios 2:1, 3:1, …, 7:1 (notice that each group has a different color). This forms what is called a “chimera state.” Notably, the mean field variables (the order parameters and mean activities) do not reflect such a nontrivial pattern of underlying rhythms. Horizontal strips of non-synchronous spikes form at the critical excitabilities at which the firing ratios change, and these units fail to lock to the mean field. The mechanism that synchronizes the different units is the now time-dependent order parameter in the coupling terms of Eq. (1).

We conclude that, in this model, synchronized firing is intimately related to the time dependence of the mean field variables.

VII. DISCUSSION AND CONCLUSIONS

Most mathematicians and physicists who study brain functions use empirical models, simple dynamical systems reflecting one or more important neuro-physiological observations. One celebrated case is the additive Wilson-Cowan empirical model of neural networks. This model is based on the observation that the activity of a neural population increases non-linearly with its input (with the non-linearity reflecting the saturating nature of the response). Remarkably, the simple model that is obtained with coupled excitatory and inhibitory populations is capable of displaying a rich set of dynamical solutions.

Recent advances in the study of coupled oscillators allowed us to obtain equations for variables describing some aspects of the global behavior of the network. In particular, equations were derived for the order parameter of a neural population, describing the degree of synchrony of the solutions. In order to compare a statistical study of a set of coupled oscillators with a phenomenological model such as the Wilson-Cowan neural oscillator, it was necessary to go beyond the order parameter and derive the equations for the activity of the network: the average of the actual number of spikes generated over the whole network, at a given time. In this work, we performed this calculation and obtained analytical expressions which could be computed as functions of the order parameters. Two cases were studied in this work: the coupling of Adler units (i.e., elements whose dynamics without coupling were ruled by Adler's equations) and the coupling of theta neurons. In both cases, the couplings were impulsive.

We have found regions of the parameter space where the dynamics of the order parameters derived for the models presented here was equivalent to what is observed in the Wilson-Cowan oscillator. Remarkably, this very simple model is capable of capturing many of the subtle features that a population of coupled units displays after computing its macroscopic behavior from first principles.

ACKNOWLEDGMENTS

We thank Matías Leoni, Ana Amador, and Gastón Giribet for illuminating comments. This work was supported by CONICET, ANCyT, UBA, and NIH through R01-DC-012859 and R01-DC-006876.

APPENDIX A: ON THE DIVERGENCES IN THE MEAN ACTIVITY

In this Appendix, we show a method to avoid the divergences in the mean activity (Eq. (17)) occurring for large values of $\omega$ by slightly changing the distribution function $g(\omega)$. The divergences are due to the slow decay of the Lorentzian function and the term linear in $\omega$ in $v(\theta,\omega,t)$, which means that for large $\omega$, $g\cdot v\sim\omega^{-1}$, whose integral diverges. As it has been said in the main text, a similar behavior but with the opposite sign occurs for large negative $\omega$, which compensates the divergence giving a finite result.

Thus, the divergence would not occur if g(ω) were a sharper function. The Ott-Antonsen method contemplates distribution functions having any number of poles off the real axis and being analytical everywhere else. In particular, a slight perturbation of the Lorentzian distribution can be made by taking g(ω) to be the product of two Lorentzian functions with the same ω0 but different widths Δ and D, i.e.,

$$g(\omega)=\frac{D\,\Delta\,(D+\Delta)}{\pi\,\big((\omega-\omega_0)^{2}+\Delta^{2}\big)\big((\omega-\omega_0)^{2}+D^{2}\big)}, \qquad (A1)$$

which has been normalized according to $\int_{-\infty}^{\infty}g(\omega)\,d\omega=1$. The distribution in Eq. (A1) can be made arbitrarily close to the Lorentzian (Eq. (12)) by taking D to be sufficiently large, while at the same time no divergences occur in the integral in Eq. (17) for any finite D, as now $g\cdot v\sim\omega^{-3}$ for large $\omega$. Although, as we show below, the dynamics of the order parameters now becomes 8-dimensional, it is logical to expect that it will converge to the 4-dimensional one described in Eq. (14) for sufficiently large D. We will now make this statement quantitative.

With the new distribution function, all steps in the main text remain valid until Eq. (13), which, introducing Eq. (A1) in Eq. (11) and integrating by residues, now becomes

$$z(t)=(1+\mu)\,z_1(t)-\mu\,z_2(t),$$

where we have defined the two new complex quantities $z_1(t)=\alpha(\omega_0+i\Delta,t)$ and $z_2(t)=\alpha(\omega_0+iD,t)$, and the perturbation parameter $\mu=\Delta/(D-\Delta)$. We also define the analogous quantities $\tilde z_1(t),\tilde z_2(t),\tilde\mu$, etc., for the inhibitory population. In the limit $\mu\to 0$, the two distributions Eqs. (12) and (A1) become identical, and $z_1(t)\to z(t)$. In the general case, evaluation of Eq. (10) at $\omega=\omega_0+i\Delta$ and $\omega=\omega_0+iD$ for each of the two populations yields the 8-dimensional mean-field dynamics
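
The weights $(1+\mu)$ and $-\mu$ follow from the residues of Eq. (A1) at its two poles in the upper half-plane (a short check of the statement above):

$$2\pi i\operatorname{Res}_{\,\omega_0+i\Delta}g=\frac{D}{D-\Delta}=1+\mu,\qquad 2\pi i\operatorname{Res}_{\,\omega_0+iD}g=-\frac{\Delta}{D-\Delta}=-\mu,$$

so closing the contour of Eq. (11) in the upper half-plane gives $z=(1+\mu)\,\alpha(\omega_0+i\Delta,t)-\mu\,\alpha(\omega_0+iD,t)=(1+\mu)z_1-\mu z_2$.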

$$\dot z_1=\big[-\Delta+i\big(\omega_0+I(z,\tilde z)\big)\big]z_1-\frac{i}{2}\big(1+z_1^{2}\big), \qquad (A2a)$$
$$\dot{\tilde z}_1=\big[-\tilde\Delta+i\big(\tilde\omega_0+\tilde I(z,\tilde z)\big)\big]\tilde z_1-\frac{i}{2}\big(1+\tilde z_1^{2}\big), \qquad (A2b)$$
$$\dot z_2=\big[-D+i\big(\omega_0+I(z,\tilde z)\big)\big]z_2-\frac{i}{2}\big(1+z_2^{2}\big), \qquad (A2c)$$
$$\dot{\tilde z}_2=\big[-\tilde D+i\big(\tilde\omega_0+\tilde I(z,\tilde z)\big)\big]\tilde z_2-\frac{i}{2}\big(1+\tilde z_2^{2}\big), \qquad (A2d)$$

where

$$I(z,\tilde z)=k_E\big(1-(1+\mu)\operatorname{Re}z_1+\mu\operatorname{Re}z_2\big)-k_I\big(1-(1+\tilde\mu)\operatorname{Re}\tilde z_1+\tilde\mu\operatorname{Re}\tilde z_2\big),$$
$$\tilde I(z,\tilde z)=\tilde k_E\big(1-(1+\mu)\operatorname{Re}z_1+\mu\operatorname{Re}z_2\big)-\tilde k_I\big(1-(1+\tilde\mu)\operatorname{Re}\tilde z_1+\tilde\mu\operatorname{Re}\tilde z_2\big)$$

couple all the equations.

For large D, $z_1$ and $\tilde z_1$ become the only active degrees of freedom; departures from the 4-dimensional dynamics Eq. (14) are represented by the coupling terms with $z_2$, $\tilde z_2$, of the form $k\mu\operatorname{Re}z_2$. These perturbation terms are small not only because they are weighted by the small parameters $\mu,\tilde\mu$ but also because $z_2$ and $\tilde z_2$ are themselves small, as can be seen by taking the radial component of Eqs. (A2c) and (A2d). Letting $z_2=\rho_2 e^{i\psi_2}$, then $\dot\rho_2=\operatorname{Re}\{\dot z_2\,e^{-i\psi_2}\}$ is bounded from above

$$\dot\rho_2=-D\rho_2-\frac{1}{2}\big(1-\rho_2^{2}\big)\sin\psi_2\;\le\;-D\rho_2+\frac{1}{2}.$$

Thus, in the stationary regime, $\rho_2\le(2D)^{-1}$, as $\dot\rho_2<0$ for larger $\rho_2$, and similarly $\tilde\rho_2\le(2\tilde D)^{-1}$. The perturbation terms are then bounded by $|k\mu\operatorname{Re}z_2|\le k\Delta/\big(2D(D-\Delta)\big)\approx k\Delta/(2D^{2})$. Taking $D,\tilde D\gg 1\gg\Delta,\tilde\Delta$, all the perturbation terms quickly become negligible.

APPENDIX B: DIMENSIONAL COLLAPSE FOR SYMMETRIC PARAMETERS

In this Appendix, we show analytically that $z=\tilde z$ is a (two dimensional) invariant, stable manifold in the special case that the system's parameters are chosen symmetrically for both populations. This degenerate choice should be considered as a convenient "attack point" on Eqs. (14), since the existence of a two dimensional invariant stable manifold should be robust under variations of the parameters to some extent.

We first rewrite Eq. (14) in terms of the new variables $z_\pm\equiv(z\pm\tilde z)/2$, which account for the average and the difference between the two populations' order parameters, and evolve according to

$$\dot z_+=\Big[-\Delta_++i\Big(\omega_{0+}+(k_{E+}-k_{I+})(1-\operatorname{Re}z_+)-(k_{E+}+k_{I+})\operatorname{Re}z_-\Big)\Big]z_+ +\Big[-\Delta_-+i\Big(\omega_{0-}+(k_{E-}-k_{I-})(1-\operatorname{Re}z_+)-(k_{E-}+k_{I-})\operatorname{Re}z_-\Big)\Big]z_- -\frac{i}{2}\big(1+z_+^{2}+z_-^{2}\big), \qquad (B1a)$$
$$\dot z_-=\Big[-\Delta_-+i\Big(\omega_{0-}+(k_{E-}-k_{I-})(1-\operatorname{Re}z_+)-(k_{E-}+k_{I-})\operatorname{Re}z_-\Big)\Big]z_+ +\Big[-\Delta_++i\Big(\omega_{0+}+(k_{E+}-k_{I+})(1-\operatorname{Re}z_+)-(k_{E+}+k_{I+})\operatorname{Re}z_-\Big)\Big]z_- -i\,z_+z_-. \qquad (B1b)$$

Here, the parameters have been redefined in an analogous way: $\Delta_\pm=(\Delta\pm\tilde\Delta)/2$ and so on. The symmetric-parameter case corresponds to $\Delta_-=\omega_{0-}=k_{E-}=k_{I-}=0$, in which the radial component of Eq. (B1b) reduces to

$$\dot\rho_-=\big(-\Delta_++\operatorname{Im}z_+\big)\,\rho_-,$$

with $\rho_-=|z_-|$. Now, $z_-=0$ is a solution, which defines the invariant two dimensional manifold $z=\tilde z$. Moreover, it will be the only long term solution for $z_-$ unless $\operatorname{Im}z_+>\Delta_+$, at least in some part of its evolution. We continue the argument by ruling this possibility out: if we set $z_-=0$, then the radial part of Eq. (B1a) reads

$$\dot\rho_+=-\Delta_+\rho_+-\frac{1-\rho_+^{2}}{2\rho_+}\operatorname{Im}z_+,$$

which is always negative for $\operatorname{Im}z_+>0$. Therefore, $z_+$ cannot have solutions lying exclusively in the upper half plane (in particular, fixed points). Since $z_+$ follows a bounded 2-dimensional dynamics, the only other possible attractor with $\operatorname{Im}z_+>\Delta_+$ somewhere would be a limit cycle that shrunk towards the origin while in the upper half plane and expanded away from it in the lower one. However, this last possibility can also be severely constrained by noting that, if $z_-=0$, the evolution Eq. (B1a) becomes the equation of a single, purely excitatory or inhibitory (depending on the sign of $k_{E+}-k_{I+}$) population, which can hardly oscillate in the parameter range explored in this work.16 Therefore, $z_-=0$ is quite generally a stable invariant manifold for any symmetric choice of the parameters, with a number of fixed points in the $z_+$ lower half plane. Numerical simulations support this result.

Even if no limit cycles exist on the manifold $z=\tilde z$ for the exactly symmetric case, small departures from it in the parameter space can produce rich two-dimensional behavior on the (deforming) stable manifold before a higher dimensionality is explored, as seen in Fig. 2. In Fig. 7, we show the $z_-$ component of the trajectories in the $(z,\tilde z)$ space (or equivalently, the $(z_+,z_-)$ space), at the same parameter values 1–6 of the insets of Fig. 2. Comparing both figures, we see that $z_-$ is much smaller than $z$ (and thus than $\tilde z$ and $z_+$) in the whole region. This supports the idea that the two dimensional manifold to which the dynamics collapse is a deformation of the one defined by $z_-=0$, which would correspond to the exactly symmetric case.

FIG. 7.

$z_-$ component of the attractor sets for the order parameters of the coupled Adler units model (Eq. (14)), at the parameter values labeled 1–6 in Fig. 2. Filled dots represent stable fixed points, empty dots represent unstable fixed points, and closed curves represent limit cycles.

References

  • 1. Ott E. and Antonsen T. M., "Low dimensional behavior of large systems of globally coupled oscillators," Chaos 18(3), 037113 (2008). 10.1063/1.2930766
  • 2. Winfree A. T., "Biological rhythms and the behavior of populations of coupled oscillators," J. Theor. Biol. 16(1), 15–42 (1967). 10.1016/0022-5193(67)90051-3
  • 3. Kuramoto Y., "Self-entrainment of a population of coupled non-linear oscillators," in International Symposium on Mathematical Problems in Theoretical Physics (Springer Verlag, New York, 1975), pp. 420–422.
  • 4. Strogatz S. H., "From Kuramoto to Crawford: Exploring the onset of synchronization in populations of coupled oscillators," Physica D 143(1–4), 1–20 (2000). 10.1016/S0167-2789(00)00094-4
  • 5. Alonso L. M., Alliende J. A., and Mindlin G. B., "Dynamical origin of complex motor patterns," Eur. Phys. J. D (2), 361–367 (2010). 10.1140/epjd/e2010-00225-2
  • 6. Alonso L. M. and Mindlin G. B., "Average dynamics of a driven set of globally coupled excitable units," Chaos 21(2), 023102 (2011). 10.1063/1.3574030
  • 7. So P., Luke T. B., and Barreto E., "Networks of theta neurons with time-varying excitability: Macroscopic chaos, multistability, and final-state uncertainty," Physica D, 16–26 (2014). 10.1016/j.physd.2013.04.009
  • 8. Hoppensteadt F. C. and Izhikevich E. M., Weakly Connected Neural Networks (Springer, New York, 1997).
  • 9. Montbrió E., Pazó D., and Roxin A., "Macroscopic description for networks of spiking neurons," Phys. Rev. X 5(2), 021028 (2015). 10.1103/PhysRevX.5.021028
  • 10. Adler R., "A study of locking phenomena in oscillators," Proc. IRE 34(6), 351–357 (1946). 10.1109/JRPROC.1946.229930
  • 11. Ermentrout G. B. and Kopell N., "Parabolic bursting in an excitable system coupled with a slow oscillation," SIAM J. Appl. Math. 46(2), 233–253 (1986). 10.1137/0146017
  • 12. Ott E. and Antonsen T. M., "Long time evolution of phase oscillator systems," Chaos 19(2), 023117 (2009). 10.1063/1.3136851
  • 13. Laing C. R., "Derivation of a neural field model from a network of theta neurons," Phys. Rev. E 90(1), 010901(R) (2014). 10.1103/PhysRevE.90.010901
  • 14. Clewley R., "Hybrid models and biological model reduction with PyDSTool," PLoS Comput. Biol. 8(8), e1002628 (2012). 10.1371/journal.pcbi.1002628
  • 15. Alonso R. G., Trevisan M. A., Amador A., Goller F., and Mindlin G. B., "A circular model for song motor control in Serinus canaria," Front. Comput. Neurosci. 9, 41 (2015). 10.3389/fncom.2015.00041
  • 16. Reports exist of a population of coupled inhibitory theta neurons presenting oscillatory dynamics, but for units in a region of the parameter space where the dynamics are qualitatively different from those of neurons, for example, where the phasors' slowing down would occur at θ = π instead of θ = 0 (Ref. 7).
