Author manuscript; available in PMC: 2010 Oct 14.
Published in final edited form as: SIAM J Appl Dyn Syst. 2008;7(2):609–649. doi: 10.1137/070705842

Mechanisms for Frequency Control in Neuronal Competition Models

Rodica Curtu, Asya Shpiro, Nava Rubin, John Rinzel
PMCID: PMC2954747  NIHMSID: NIHMS222139  PMID: 20953287

Abstract

We investigate analytically a firing rate model for a two-population network based on mutual inhibition and slow negative feedback in the form of spike frequency adaptation. Both neuronal populations receive external constant input whose strength determines the system’s dynamical state—a steady state of identical activity levels or periodic oscillations or a winner-take-all state of bistability. We prove that oscillations appear in the system through supercritical Hopf bifurcations and that they are antiphase. The period of oscillations depends on the input strength in a nonmonotonic fashion, and we show that the increasing branch of the period versus input curve corresponds to a release mechanism and the decreasing branch to an escape mechanism. In the limiting case of infinitely slow feedback we characterize the conditions for release, escape, and occurrence of the winner-take-all behavior. Some extensions of the model are also discussed.

Keywords: Hopf bifurcation, antiphase oscillations, slow negative feedback, winner-take-all, release and escape, binocular rivalry, central pattern generators

1. Introduction

Competition models have a long tradition in ecology and population biology (see, e.g., [22]). Typically, the competition involves negative interactions in the battle for a common resource. Eventually, one of the participant populations emerges as the winner eliminating the competitors. This framework has appeared in models of neuronal development where the competition is for synapse formation such as the development of neuromuscular connections for innervated muscle fibers and for the formation of ocular dominance columns and topographic maps (as reviewed in [35]). The notion of competition has also been applied in the modeling of various neuronal computational tasks. Winner-take-all behavior, when one neural population remains active and the others inactive indefinitely as a result of inhibitory interactions, has been proposed in models for short term memory and attention [13] or for the selection and switching in the striatum of the basal ganglia under both normal and pathological conditions [15, 21].

The winner-take-all steady state may persist for a long time but not indefinitely if some mechanism for slow fatigue or adaptation is at work. In this case one population may be dominant for a while, then another, and so on. Competition between, say, two neuronal populations, via reciprocal inhibition and slow adaptation underlies models for alternating rhythmic behavior in central pattern generators (CPGs) [11, 32, 6, 33] and in perceptual bistability [17, 37, 24]. CPGs consist of neural circuits that drive alternately contracting muscle groups. Perceptual bistability refers to a class of phenomena in which a deeply ambiguous stimulus gives rise to two different interpretations that alternate over time, only one being perceived at any given moment. Slow adaptation may be implemented via a cellular mechanism (fatigue in the spike generation mechanism) or a negative feedback in the coupling (depression of the synaptic transmission mechanism). In some neuronal competition models the alternations may be irregular and caused primarily by noise, with adaptation playing a secondary role [30, 10, 24].

Both for CPGs and perceptual bistability the issue of oscillations’ frequency or period detection (and eventually control) seems to be important. For example, a classical example of perceptual bistability is binocular rivalry whose properties were summarized in the so-called Levelt’s propositions [19]. In binocular rivalry, a subject views an ambiguous stimulus in which each eye is presented with a drastically different image. Instead of perceiving a mixture of the two images, the subject reports (over a large range of stimulus conditions) an alternation between the two competing percepts; one image is perceived for a while (a few seconds), then the other, etc. Levelt’s proposition IV (LP-IV) states that increasing the contrast of the rivaling images increases the frequency of percept switching, or, in other words, that dominance times of both perceived images decrease with equal increase of stimulus strength. Since 1968, binocular rivalry has been investigated intensively in other psychophysics experiments [2, 25, 20, 1, 28, 29, 3], in experiments using fMRI techniques [34, 26, 38, 18], and also in modeling studies [17, 37, 10, 24].

In a recent modeling paper, Shpiro et al. [31] show that for a class of competition models, the LP-IV type of dynamics occurs in fact only within a limited range of stimulus strength. Outside this range four other types of behavior were observed: (i) fusion at a very high level of activity, (ii) winner-take-all behavior, (iii) a region where dominance times increase with stimulus strength (as opposed to LP-IV), and then (iv) fusion again for very low levels of activity (see Figure 3F in section 2). These differences between experimental reports and theory have important implications, either predicting new possible dynamics in binocular rivalry or, if future experiments do not confirm them, pointing to the necessity for other types of models. Meanwhile, it is important to understand the sources or mechanisms that lead to the nonmonotonic dependence of oscillation period on the stimulus strength for this class of neuronal competition models. Our paper aims to investigate this issue.

Figure 3.


Bifurcation diagrams and examples of activity timecourses for neuronal competition model (2.1) with parameter values g = 0.5, τ = 100, r = 10, θ = 0.2, and β = 1.1 (A–G), respectively, β = 0.75 (H–I). Timecourses of u1, u2 corresponding to panel F for different values of I: (A) I = 1.86, (B) I = 1.5, (C) I = 1, (D) I = 0.5, and (E) I = 0.08. Bifurcation diagrams of period T of the network oscillation versus input strength I (F and H). Bifurcation diagram of population activity u1 versus I (G and I).

We analyze a firing rate model in which competition between populations is a result of a combination of reciprocal inhibition and a slow negative feedback process. We prove that, as the input strength changes, oscillations appear in the system through a Hopf bifurcation and that they are antiphase. Due to the two time scales involved in the system there is a regime where periodic solutions take the form of relaxation-oscillators. Their period of oscillations depends nonmonotonically on the input strength, say, I: in a range of low values for I, the period increases with I, and we show that the dynamics is due to a release mechanism; on the other hand, in a range of higher values for I, the period decreases with I, and escape is the underlying mechanism (see section 4; we define release as the case when the switch in dominance during oscillation occurs due to a significant change in the response to an input to the dominant population. On the contrary, escape corresponds to the case of a significant change in the input-output function for the suppressed population). For intermediate values of I, winner-take-all is possible and we explain how it appears. Then in section 5 we present some model modifications that allow for reducing, or even excluding, one of the escape and release regimes, thus leading to a monotonic period versus input curve.

2. The mathematical model

The model we investigate in this paper assumes a network architecture of two populations of neurons (Figure 1) that respond to two competing stimuli of equal strength:

$\dot{u}_1 = -u_1 + S(I - \beta u_2 - g a_1), \quad \dot{u}_2 = -u_2 + S(I - \beta u_1 - g a_2), \quad \tau\dot{a}_1 = -a_1 + u_1, \quad \tau\dot{a}_2 = -a_2 + u_2.$ (2.1)

Variables uj (j = 1, 2) measure short-time and spatially averaged firing rates of the two populations that inhibit each other. The system is nonlinear due to the gain function S; it is the steady input-output function for the population and it has a sigmoid shape as in Figure 2A. The strength of inhibition is modeled by the positive parameter β, while I is the control parameter directly associated with the external stimulus strength (e.g., it grows with increasing stimulus strength such as contrast). Each population is subject to a slow negative feedback process aj, such as spike frequency adaptation, of positive strength g. Since the variables aj evolve on a much slower time scale than uj, the parameter τ takes large values, τ ⪢ 1 (e.g., the time scale for uj is about 10 msec, while for aj it is about 1000 msec).
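For readers who wish to reproduce the timecourses discussed below, a minimal Python sketch of (2.1) follows. It is our own illustration, not the authors' code; the parameter values mirror those used for Figure 3, while the initial condition and integration settings are arbitrary choices of ours (the asymmetric initial state simply breaks the u1/u2 symmetry).

    # Minimal sketch (not from the paper): integrate system (2.1) with the logistic gain (2.2).
    import numpy as np
    from scipy.integrate import solve_ivp

    g, tau, r, theta, beta, I = 0.5, 100.0, 10.0, 0.2, 1.1, 1.5   # Figure 3 values; I lies in region II

    def S(x):
        return 1.0 / (1.0 + np.exp(-r * (x - theta)))             # gain function (2.2)

    def rhs(t, y):
        u1, u2, a1, a2 = y
        return [-u1 + S(I - beta * u2 - g * a1),                  # fast activities
                -u2 + S(I - beta * u1 - g * a2),
                (-a1 + u1) / tau,                                 # slow adaptation
                (-a2 + u2) / tau]

    sol = solve_ivp(rhs, (0.0, 3000.0), [0.6, 0.1, 0.3, 0.05], max_step=1.0)
    # sol.y[0] and sol.y[1] alternate in antiphase, as in Figure 3B.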

Figure 1.


Network architecture of neuronal competition model.

Figure 2.


Graphical representations of generic (A) gain function S; (B) its inverse F; and (C) first derivative F’.

Remark 2.1. In general, firing-rate models like (2.1) include in the equation of uj a nonlinear term of the form S(I + αuj − βuk − gaj). The product αuj is associated with the intrapopulation recurrent excitation. It is important to note that for neuronal competition model (2.1) we have disallowed recurrent excitation (taken α = 0) in order to preclude an isolated population (β = 0) from oscillating on its own. This is a restriction imposed by experimental observations on binocular rivalry and other perceptual bistable phenomena.

The nonlinear gain function S that appears in the differential equations for uj is usually defined in neuronal models by

$S(x) = \dfrac{1}{1 + e^{-r(x-\theta)}}$ (2.2)

with positive r and real θ.

Function S is invertible, with F = S−1 a C∞(0,1)-function and F’(u) = 1/(r u(1 − u)) (Figure 2B–C). Based on this example, we consider the following assumptions for the gain function S.

S : R → (0, 1) is a differentiable, monotonically increasing function with S(θ) = u0 ∈ (0, 1) and limx→−∞ S(x) = 0, limx→∞ S(x) = 1. Moreover, its first and second derivatives satisfy the conditions limx→±∞ S’(x) = 0, S”(x) > 0 for x < θ, S”(x) < 0 for x > θ, and S”(θ) = 0, so S’ has a maximum at θ.

As a consequence, S is invertible with F = S−1 : (0, 1) → R a monotonically increasing function such that limu→0 F(u) = −∞, limu→1 F(u) = ∞, F(u0) = θ, and limu→0 F’(u) = limu→1 F’(u) = ∞, F”(u) < 0 for u ∈ (0, u0), F”(u) > 0 for u ∈ (u0, 1), F”(u0) = 0. Obviously, F’ has a minimum value at u0 which is F’(u0) = 1/S’(θ).

Additionally we assume that F is a C∞-function on (0, 1), or at least C2 on (0, 1) and C∞ on (0, 1) \ {u0}.

The typical graphs of function S and its corresponding F and F’ are drawn in Figure 2A–C. We used the example (2.2) with parameter values r = 10 and θ = 0.2; obviously in this case S(θ) = u0 = 0.5.
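For concreteness, the example gain (2.2) together with F = S−1 and F’ can be evaluated directly. The short Python sketch below is our own (with r = 10 and θ = 0.2 as above); it also checks that the minimum of F’ equals 1/S’(θ) = 0.4.

    # Sketch: the logistic gain (2.2), its inverse F = S^(-1), and F' (cf. Figure 2).
    import numpy as np

    r, theta = 10.0, 0.2

    def S(x):
        return 1.0 / (1.0 + np.exp(-r * (x - theta)))

    def F(u):                      # inverse of S, defined on (0, 1)
        return theta + np.log(u / (1.0 - u)) / r

    def Fprime(u):                 # F'(u) = 1 / (r u (1 - u)), minimum at u0 = 1/2
        return 1.0 / (r * u * (1.0 - u))

    u = np.linspace(0.01, 0.99, 99)
    assert np.allclose(S(F(u)), u)                 # F really inverts S
    print(Fprime(0.5), 1.0 / (r * 0.5 * 0.5))      # both equal 1/S'(theta) = 0.4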

All the experiments that motivated our work report oscillatory phenomena with frequencies tightly connected to the stimulus strengths. Moreover, as Levelt [19] pointed out for binocular rivalry, those experiments show large ranges for stimulus strength where the corresponding oscillation periods/frequencies behave monotonically. What kind of possible mechanism is behind this type of dynamics is the question we will focus on in this paper.

Given the neuronal competition model (2.1), the goal is to examine the effect the parameter I has on the existence of oscillations and on their period. The system is a simplified version of an entire class of competition models that, as we found [31], share important dynamical features.

To illustrate those commonalities we draw in Figure 3A–E the timecourses of activity variables u1(t) and u2(t) for different values of control parameter I. Then we summarize the result in the bifurcation diagram of the period T versus I (Figure 3F). Other parameter values are fixed to β = 1.1, g = 0.5, τ = 100, and S as in (2.2) with r = 10 and θ = 0.2 (here u0 = 1/2). The system (2.1) exhibits five possible types of behavior: for large values of I (region I in Figure 3F) both populations are active at identically high levels (Figure 3A); the timecourses of u1(t) and u2(t) tend to a stable steady state larger than u0. As I decreases (region II, Figure 3F) the system starts oscillating with u1(t) and u2(t) alternately on and off (Figure 3B); in this region the period of oscillation decreases with increasing input strength. At intermediate values of I a winner-take-all kind of behavior is observed (region III, Figure 3F); depending on the choice of initial conditions, one of the two populations is active indefinitely, while the other one remains inactive (Figure 3C). Decreasing I even more (region IV, Figure 3F) the neuronal model becomes oscillatory again (Figure 3D) with u1 and u2 competing for the active state; however, for this range of parameters the oscillation period T increases with input value I—an opposite behavior to that observed in region II. Lastly, for small values of stimulus strength (region V, Figure 3F) both populations remain inactive at identically low level firing rates (Figure 3E); the timecourses of u1(t) and u2(t) tend to a stable steady state less than u0.
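The qualitative shape of the period-versus-input curve in Figure 3F can be recovered by measuring dominance durations directly from simulated timecourses. The sketch below is our own rough recipe (the helper name, initial condition, transient handling, and sample inputs are our choices): it estimates T from the times at which u1 − u2 changes sign.

    # Sketch: estimate the oscillation period T of (2.1) for a given input I
    # from the sign changes of u1(t) - u2(t); NaN is returned when there is no oscillation.
    import numpy as np
    from scipy.integrate import solve_ivp

    g, tau, r, theta, beta = 0.5, 100.0, 10.0, 0.2, 1.1
    S = lambda x: 1.0 / (1.0 + np.exp(-r * (x - theta)))

    def period(I, t_end=6000.0):
        rhs = lambda t, y: [-y[0] + S(I - beta * y[1] - g * y[2]),
                            -y[1] + S(I - beta * y[0] - g * y[3]),
                            (-y[2] + y[0]) / tau,
                            (-y[3] + y[1]) / tau]
        sol = solve_ivp(rhs, (0.0, t_end), [0.6, 0.1, 0.3, 0.05], max_step=0.5)
        d = sol.y[0] - sol.y[1]
        switches = sol.t[1:][np.sign(d[1:]) != np.sign(d[:-1])]      # dominance switches
        if len(switches) < 4:
            return np.nan                                            # fusion or winner-take-all
        return 2.0 * np.mean(np.diff(switches[len(switches) // 2:])) # T = twice the dominance duration

    for I in (0.5, 1.0, 1.5):      # release branch, winner-take-all, escape branch (Figure 3F)
        print(I, period(I))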

To further characterize system (2.1)’s dynamics as the input value I is varied, we also construct the local bifurcation diagram of amplitude response u1 to I (Figure 3G). For the parameter ranges I and V the trajectories are attracted to a stable fixed point satisfying the condition ũ1 = ũ2 (thick line in Figure 3G). This fixed point becomes unstable (dashed line) in regions II, III, and IV, where the attractor is replaced by either a stable limit cycle (regions II and IV: branched filled-circle curves corresponding to the maximum and minimum amplitudes during rivalry oscillations) or another stable fixed point with ũ1 ≠ ũ2 (region III). Due to the symmetry in the equations of (2.1), whenever (ũ1, ũ2, ã1, ã2) is an equilibrium point, (ũ2, ũ1, ã2, ã1) is as well. The local bifurcation diagram suggests the existence of some Hopf and pitchfork bifurcations in the model that we will further investigate in section 3.

There is another common feature of many neuronal competition models based on reciprocal inhibition architecture with slow negative feedback in the form of spike frequency adaptation and/or synaptic depression [31]: the absence of the winner-take-all behavior when inhibition strength β is sufficiently small. As we illustrate in Figure 3H–I for β = 0.75 (all other parameters are the same as above), the winner-take-all regime at intermediate I disappears. However, the dependency of period T on stimulus strength remains nonmonotonic.

An intriguing question is: What are the neuronal mechanisms underlying the two distinct dynamics—one of increasing period with increase of stimulus strength (for smaller values of input), and one of decreasing period (for larger values)? This issue is discussed in section 4 and then extended in section 5.

Our general goal is to understand and possibly to analytically characterize the numerical results obtained for this specific competition model (2.1). Consequently, as already pointed out, this step will help us understand the typical behavior of an entire class of neuronal competition models.

3. Oscillatory antiphase solutions and local analysis

In this section we use methods from local bifurcation theory [14, 16] to prove the existence of periodic solutions (u1(t), u2(t), a1(t), a2(t)) for the two-population network (2.1). We also show that the main variables u1 and u2 oscillate in antiphase, therefore competing for the ON/active state.

The bifurcation diagrams obtained numerically in section 2 suggest the existence of an equilibrium point satisfying u1 = u2 no matter the value of parameter I. Let us now investigate system (2.1) theoretically.

All equilibria satisfy the conditions u1 = S(I − βu2 − ga1), u2 = S(I − βu1 − ga2), a1 = u1, and a2 = u2, which are equivalent (due to the invertibility of S) to F(u1) = I − βu2 − ga1, F(u2) = I − βu1 − ga2, and a1 = u1, a2 = u2. Looking for a particular type of equilibrium point, that is, for points with u1 = u2, we obtain the equation I = H(u) with H defined by

$H : (0,1) \to \mathbb{R}, \qquad H(u) \stackrel{\mathrm{def}}{=} F(u) + (\beta + g)u.$ (3.1)

Since F is monotonically increasing on (0, 1) with vertical asymptotes limu→0 F(u) = −∞ and limu→1 F(u) = ∞, (3.1) has a unique solution uI ∈ (0, 1) for any real value of the parameter I. Moreover, from the identity I = F(uI) + (β + g)uI we compute

$\dfrac{du_I}{dI} = \dfrac{1}{\beta + g + F'(u_I)},$ (3.2)

so a decrease in I leads to a decrease in uI with limI→∞ uI = 1 and limI→−∞ uI = 0.
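Equation (3.1) is scalar and strictly increasing in u, so uI can be obtained by standard bracketed root finding; a small sketch (ours, using the logistic F and the Figure 3 parameters, with hypothetical helper names) follows.

    # Sketch: solve I = H(u_I) = F(u_I) + (beta + g) u_I, eq. (3.1), for the symmetric equilibrium value.
    import numpy as np
    from scipy.optimize import brentq

    g, r, theta, beta = 0.5, 10.0, 0.2, 1.1
    F = lambda u: theta + np.log(u / (1.0 - u)) / r
    H = lambda u: F(u) + (beta + g) * u

    def u_of_I(I, eps=1e-12):
        # H increases monotonically from -inf to +inf on (0, 1), so the root is always bracketed.
        return brentq(lambda u: H(u) - I, eps, 1.0 - eps)

    print(u_of_I(1.0))   # for I = 1 this returns u_I = 0.5, since H(1/2) = theta + (beta + g)/2 = 1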

The neuronal competition model (2.1) always possesses an equilibrium of the type (uI, uI, uI, uI). Its stability properties are then determined by the linearized system dY/dt = AY, Y = (u1 − uI, u2 − uI, a1 − uI, a2 − uI)T, where (·)T stands for the transpose, and matrix

$A = \begin{pmatrix} -1 & -\frac{\beta}{F'(u_I)} & -\frac{g}{F'(u_I)} & 0 \\ -\frac{\beta}{F'(u_I)} & -1 & 0 & -\frac{g}{F'(u_I)} \\ \frac{1}{\tau} & 0 & -\frac{1}{\tau} & 0 \\ 0 & \frac{1}{\tau} & 0 & -\frac{1}{\tau} \end{pmatrix}.$

This form of the matrix relies on the equality S’(I − βu2 − ga1) = S’(F(u1)) = S’(S−1(u1)) = 1/(S−1)’(u1) = 1/F’(u1), which is true at the equilibrium point.

The characteristic equation of A takes the form

$\left[(\lambda+1)\left(\lambda+\tfrac{1}{\tau}\right)+\tfrac{g}{\tau F'(u_I)}\right]^2 - \left[\tfrac{\beta}{F'(u_I)}\left(\lambda+\tfrac{1}{\tau}\right)\right]^2 = 0.$

As a difference of squares it can be decomposed into two quadratic equations: the first is $\lambda^2 + \lambda\left(1+\tfrac{1}{\tau}+\tfrac{\beta}{F'(u_I)}\right) + \tfrac{1}{\tau}\left(1+\tfrac{g+\beta}{F'(u_I)}\right) = 0$, so two eigenvalues of the matrix A, say, λ1 and λ2, have negative real part no matter the value of parameter I. The other eigenvalues λ3 and λ4 satisfy the second quadratic equation $\lambda^2 + \lambda\left(1+\tfrac{1}{\tau}-\tfrac{\beta}{F'(u_I)}\right) + \tfrac{1}{\tau}\left(1+\tfrac{g-\beta}{F'(u_I)}\right) = 0$, and their real part can change sign when I is varied.

For |I| sufficiently large, uI is close to either zero or one, keeping F’(uI) larger than both β/(1 + 1/τ) and β − g (see Figure 2C); the corresponding equilibrium point (uI, uI, uI, uI) is asymptotically stable.

There are two ways this equilibrium point can lose stability: either through a pair of purely imaginary eigenvalues λ3,4 = ±iω at F’(uI) = β/(1 + 1/τ) or through a zero eigenvalue λ3 = 0, λ4 < 0 at F’(uI) = β − g. Which of these two cases occurs first depends on the relationship between β/(1 + 1/τ) and β − g: if β/(1 + 1/τ) > β − g, i.e., β/g < τ + 1, then the eigenvalues λ3, λ4 change the sign of their real part from negative to positive by crossing the imaginary axis (λ3,4 = ±iω); if β/g > τ + 1, then the case λ3 = 0, λ4 < 0 is encountered first.
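These sign conditions can be cross-checked against a direct numerical eigenvalue computation. The sketch below (ours, not the authors' code; the sample inputs are arbitrary) assembles the matrix A of the linearization at eI for a few values of I and prints the real parts of its eigenvalues.

    # Sketch: eigenvalues of the linearization A at e_I = (u_I, u_I, u_I, u_I) of (2.1).
    import numpy as np
    from scipy.optimize import brentq

    g, tau, r, theta, beta = 0.5, 100.0, 10.0, 0.2, 1.1
    F  = lambda u: theta + np.log(u / (1.0 - u)) / r
    Fp = lambda u: 1.0 / (r * u * (1.0 - u))

    def real_parts(I):
        uI = brentq(lambda u: F(u) + (beta + g) * u - I, 1e-12, 1.0 - 1e-12)
        c = 1.0 / Fp(uI)                       # S'(F(u_I)) = 1 / F'(u_I)
        A = np.array([[-1.0,      -beta * c, -g * c,    0.0     ],
                      [-beta * c, -1.0,       0.0,     -g * c   ],
                      [ 1.0/tau,   0.0,      -1.0/tau,  0.0     ],
                      [ 0.0,       1.0/tau,   0.0,     -1.0/tau ]])
        return np.sort(np.linalg.eigvals(A).real)

    for I in (0.0, 1.0, 2.0):                  # stable, unstable, stable (cf. Figure 3G)
        print(I, real_parts(I))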

At this point we remind the reader of our assumption of a large time constant value τ. (The competition between the populations in the network comes from the combination of two important ingredients: reciprocal inhibition and the addition of a slow negative feedback process.) Therefore, it makes sense to situate ourselves in the case of β/g ≪ τ, which implies

$\beta < g(\tau + 1).$ (3.3)

Inequality (3.3) can be interpreted as a feature of the neuronal competition model: the system is (adaptation) feedback-dominated rather than (inhibitory) coupling-dominated.

We will assume in the following that parameters g and τ are fixed and that β is chosen such that (3.3) is true.

Another observation is that the graph of F’ has a well-like shape with positive minimum at 1/S’(θ) as in Figure 2C. Consequently the straight horizontal line y = β/(1 + 1/τ) intersects it twice (if β/(1 + 1/τ) > 1/S’(θ)), once (for the equality), or not at all (if β/(1 + 1/τ) < 1/S’(θ)). As observed in numerical simulations, in order for the system to oscillate, a sufficiently large inhibition strength has to be considered. Mathematically that reduces to

$\beta > \dfrac{1 + \frac{1}{\tau}}{S'(\theta)}.$ (3.4)

Thus we are able to characterize the stability of the equilibrium (uI, uI, uI, uI).

Theorem 3.1. The dynamical system (2.1) has a unique equilibrium point with u1 = u2, say, eI = (uI, uI, uI, uI), for any real I. The value uI increases monotonically with I and belongs to the interval (0, 1).

Let us assume that the adaptation-dominance condition (3.3) is true.

  1. If β < (1 + 1/τ)/S’(θ), then eI is asymptotically stable for all I ∈ R.

  2. If β > (1 + 1/τ)/S’(θ), then there exist exactly two values uhb, u*hb ∈ (0, 1) such that uhb < u0 < u*hb and
    $F'(u_{hb}) = F'(u^{*}_{hb}) = \dfrac{\beta}{1 + \frac{1}{\tau}}.$ (3.5)

The equilibrium point eI is asymptotically stable for all I ∈ (−∞, Ihb) ∪ (I*hb, ∞) and unstable for I ∈ (Ihb, I*hb), where Ihb = H(uhb) and I*hb = H(u*hb) are defined by (3.1). At Ihb and I*hb the stability is lost through a pair of purely imaginary eigenvalues.

Proof.

  1. Since β − g < β/(1 + 1/τ) < 1/S’(θ) = min(F’), all eigenvalues of the linearized system about eI have negative real part.

  2. The conclusion is based on the properties of F’, which decreases on the interval (0, u0) and increases on (u0, 1) with F’(u0) = min(F’). The sum λ3 + λ4 changes sign from negative to positive when I increases through Ihb and then back from positive to negative when passing through I*hb. For I ∈ (Ihb, I*hb) at least one of λ3 and λ4 has positive real part, so eI is unstable. At I = Ihb and I = I*hb we have λ3,4 = ±iω. For all other values of I we have λ3 + λ4 < 0, λ3λ4 > 0; so eI is asymptotically stable.

Since the equilibrium point eI becomes unstable through a pair of purely imaginary eigenvalues as I crosses the values Ihb and I*hb, we expect to find there two Hopf bifurcation points. Indeed, in section 3.1 we prove the existence of a supercritical Hopf bifurcation at both Ihb and I*hb and, as a consequence, the existence of stable oscillatory solutions for system (2.1).
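For the logistic gain (2.2), condition (3.5) reduces to a quadratic in u, so the Hopf points and the corresponding critical inputs can be written down at once. The sketch below is our own numerical illustration with the Figure 3 parameter values; the variable names are ours.

    # Sketch: Hopf values u_hb < u0 < u*_hb from F'(u) = beta/(1 + 1/tau), eq. (3.5),
    # and the critical inputs I_hb = H(u_hb), I*_hb = H(u*_hb), eq. (3.1).
    import numpy as np

    g, tau, r, theta, beta = 0.5, 100.0, 10.0, 0.2, 1.1
    H = lambda u: theta + np.log(u / (1.0 - u)) / r + (beta + g) * u

    target = beta / (1.0 + 1.0 / tau)            # required value of F' at the Hopf points
    # For (2.2), F'(u) = 1/(r u (1-u)) = target  <=>  u(1-u) = 1/(r*target), a quadratic in u.
    disc = np.sqrt(1.0 - 4.0 / (r * target))
    u_hb, u_hb_star = (1.0 - disc) / 2.0, (1.0 + disc) / 2.0
    print(u_hb, u_hb_star)                       # approximately 0.102 and 0.898
    print(H(u_hb), H(u_hb_star))                 # I_hb ~ 0.15 and I*_hb ~ 1.85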

3.1. Normal form for the Hopf bifurcation

In the following we assume that both inequalities (3.3) and (3.4) are true; that is, we take the coupling in (2.1) to be sufficiently strong but still adaptation-dominated.

Let us use the notation I* for either of the critical values Ihb and I*hb and similarly the notation u* for the corresponding element of {uhb, u*hb}. Then the linearization matrix A at u* becomes

$A_0 = \begin{pmatrix} -1 & -\left(1+\frac{1}{\tau}\right) & -\frac{g}{\beta}\left(1+\frac{1}{\tau}\right) & 0 \\ -\left(1+\frac{1}{\tau}\right) & -1 & 0 & -\frac{g}{\beta}\left(1+\frac{1}{\tau}\right) \\ \frac{1}{\tau} & 0 & -\frac{1}{\tau} & 0 \\ 0 & \frac{1}{\tau} & 0 & -\frac{1}{\tau} \end{pmatrix},$

and it has eigenvalues λ1,2 with Re(λ1,2) < 0 and λ3,4 = ±iω,

$\omega = \dfrac{1}{\tau}\sqrt{\dfrac{g(\tau+1)}{\beta} - 1}.$ (3.6)

The system (2.1) has an equilibrium eI for any I ∈ R; we translate eI to the origin with the change of variables vj = uj − uI, bj = aj − uI (j = 1, 2) and obtain a system topologically equivalent to (2.1),

$\dot{v}_1 = -v_1 + S(F(u_I) - \beta v_2 - g b_1) - u_I, \quad \dot{v}_2 = -v_2 + S(F(u_I) - \beta v_1 - g b_2) - u_I, \quad \tau\dot{b}_1 = -b_1 + v_1, \quad \tau\dot{b}_2 = -b_2 + v_2.$ (3.7)

Near u*, u* ≠ u0, we expand the nonlinear terms in (3.7) with respect to uI and obtain for v1 (and similarly for v2) an equation of the form

$\dot{v}_1 = -v_1 - S'(F(u_I))(\beta v_2 + g b_1) + \tfrac{1}{2}S''(F(u_I))(\beta v_2 + g b_1)^2 - \tfrac{1}{6}S'''(F(u_I))(\beta v_2 + g b_1)^3 + \text{h.o.t.}$ (3.8)

Here h.o.t. means the higher order terms. The parameter value I* is a possible Hopf bifurcation point, so we consider small perturbations about it and about the solution u*. That is, we take

$I - I^{*} = \varepsilon^2\alpha, \qquad V(t) = \varepsilon V_0(t) + \varepsilon^2 V_1(t) + \varepsilon^3 V_2(t) + \cdots,$ (3.9)

where α is the bifurcation parameter and V = (v1, v2, b1, b2)T.

The expansions of the coefficients S(k)(F(uI)), k = 1, 2, 3, …, with respect to ε take the form S’(F(uI)) = (1 + 1/τ)/β + αAε² + O(ε⁴), S”(F(uI)) = B + O(ε²), and S”’(F(uI)) = D + O(ε²), where A, B, and D are defined by

$A = -\dfrac{F''(u^{*})}{F'(u^{*})^2\,(\beta + g + F'(u^{*}))}, \qquad B = -\dfrac{F''(u^{*})}{F'(u^{*})^3}, \qquad D = \dfrac{3F''(u^{*})^2 - F'(u^{*})F'''(u^{*})}{F'(u^{*})^5}$ (3.10)

(see Appendix A for more details). Let us introduce the following notation: for two vectors U = (v1, v2, b1, b2)T and W = (w1, w2, c1, c2)T we define first the quantities $\tilde{v}_{ij} = \beta v_j + g b_i$, $\tilde{w}_{ij} = \beta w_j + g c_i$ and then, using the scalars from (3.10), we define the operators

$L_0 U = \dfrac{dU}{dt} - A_0 U, \qquad \Lambda U = -\alpha A\,(\tilde{v}_{12}, \tilde{v}_{21}, 0, 0)^T, \qquad B(U,W) = \dfrac{B}{2}(\tilde{v}_{12}\tilde{w}_{12}, \tilde{v}_{21}\tilde{w}_{21}, 0, 0)^T, \qquad C(U,U,U) = -\dfrac{D}{6}(\tilde{v}_{12}^3, \tilde{v}_{21}^3, 0, 0)^T.$

Then based on (3.8), (3.9), and (3.10), we write system (3.7) as

$L_0 V_0 = \varepsilon\,[B(V_0,V_0) - L_0 V_1] + \varepsilon^2\,[C(V_0,V_0,V_0) + 2B(V_0,V_1) + \Lambda V_0 - L_0 V_2] + O(\varepsilon^3).$

The construction of the normal form relies on an algorithm that we describe in Appendix A (see also [4, 5] for a similar approach) and that involves tedious calculations; we present here only the main result.

Theorem 3.2. Let us assume that conditions (3.3) and (3.4) are true (sufficiently strong coupling and adaptation-dominated system), and take I* ∈ {Ihb, I*hb}, u* ∈ {uhb, u*hb} as in Theorem 3.1. Then the system (2.1) has in the neighborhood of I* the normal form

$\dot{z} = A\varphi\,(I - I^{*})\,z - L\,z^2\bar{z},$ (3.11)

where z is a complex variable and

$L = \dfrac{\bar{\varphi}\,\bar{\psi}}{4\tau^2\omega^2|\varphi|^2|\psi|^2}\left(\dfrac{\beta+g}{2} + i\beta\omega\tau\right)B^2 + \dfrac{\bar{\varphi}}{2\tau^2\omega^2|\varphi|^2}\left(\dfrac{2(\beta+g)}{1 + \left(\frac{g}{\beta}+1\right)\left(1+\frac{1}{\tau}\right)}\,B^2 - D\right)$

with φ = β/2 + i gβ/(2τω), ψ = 2 − 3g/β + (1/τ)(5 − 3g/β) − 4iω(τ + 1), and A, B, D as in (3.10).

In order to show that Ihb and I*hb are supercritical Hopf bifurcation points, we need to check that the coefficient of z²z̄ has negative real part, i.e., that Re(L) > 0.

As we prove in the appendix, for our range of parameters β, g, and τ, the first term in L has indeed positive real part (see (A.3)); on the other hand, the real part of the second term (see (A.4)) is larger than

$\dfrac{\beta}{\tau^2\omega^2|\varphi|^2}\,\dfrac{B^2}{F'(u^{*})}\left(\dfrac{F'(u^{*})F'''(u^{*})}{F''(u^{*})^2} - 2\right).$

Remark 3.1. The inverse of function S defined by (2.2) satisfies the condition

$F'(u^{*})\,F'''(u^{*}) > 2\,F''(u^{*})^2.$ (3.12)

Moreover, the inequality (3.12) is true not only for u* but for all u ∈ (0, 1).

We use this observation to state our next result.

Theorem 3.3. Let us assume the same hypotheses as in Theorem 3.2. Given at u* ∈ {uhb, u*hb} the property (3.12) for the gain function S, the input value I* is a supercritical Hopf bifurcation point for system (2.1). The stable limit cycle occurs on the left side of I*hb (that is, for I < I*hb sufficiently close) and on the right side of Ihb (I > Ihb).

Proof. The Hopf bifurcation is supercritical since Re(L) > 0 in the normal form; the nondegeneracy condition is Re(Aφ) = Aβ/2 ≠ 0.

The sign of A is opposite to the sign of F”(u*), so A > 0 for uhb and A < 0 for u*hb. Consequently the sign of Re(Aφ(I − I*)), which determines the direction of the limit cycle bifurcation, is positive for I > Ihb and for I < I*hb.

Remark 3.2. It is possible to obtain supercritical Hopf bifurcation points for other types of gain function than (2.2), as long as Re(L) has a positive value. When (3.12) is not valid, the sign of Re(L) has to be computed directly from the definition formula of L.

3.2. Antiphase oscillations

The stable limit cycle that exists in the neighborhood of the bifurcation points Ihb (for I > Ihb) and I*hb (for I < I*hb) is a periodic solution L1(t) = (u1(t), u2(t), a1(t), a2(t)) of period, say, T. Due to the symmetry of the system (2.1) with respect to the group Γ = {14, γ}, where 14 is the 4-by-4 identity matrix and

$\gamma = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix},$

L2(t) = γL1(t) = (u2(t), u1(t), a2(t), a1(t)) is also a periodic solution for (2.1). L2 results automatically from solution L1 by relabeling the network’s two populations. Moreover, it belongs to the same neighborhood of the equilibrium point as L1 does.

Since the limit cycle born through the Hopf bifurcation is unique, the corresponding phase space trajectories of L1 and L2 coincide. Therefore, there exists a phase shift α0 ∈ [0, T) such that L2(t) = L1(t + α0). This implies u2(t) = u1(t + α0) and u1(t) = u2(t + α0), i.e., u1(t) = u1(t + 2α0) for all real t [12]. The phase shift α0 needs to satisfy α0 = kT/2 with k an integer, so k is either k = 0 or k = 1. If k = 0, the two populations in the network will oscillate in synchrony. If k = 1, we have u2(t) = u1(t + T/2) and a2(t) = a1(t + T/2), which means an antiphase oscillation.

Let us assume for the moment that k = 0. Then U(t) = (u1(t), u1(t), a1(t), a1(t)) is a periodic solution of (2.1), and, as a consequence, (u1(t), a1(t)) is a periodic solution of the two-dimensional system

$\dot{u}_1 = -u_1 + S(I - \beta u_1 - g a_1), \qquad \tau\dot{a}_1 = -a_1 + u_1.$

This contradicts Bendixson’s criterion: the expression

$\dfrac{\partial}{\partial u_1}\big(-u_1 + S(I - \beta u_1 - g a_1)\big) + \dfrac{\partial}{\partial a_1}\left(-\dfrac{a_1}{\tau} + \dfrac{u_1}{\tau}\right) = -1 - \beta S'(I - \beta u_1 - g a_1) - \dfrac{1}{\tau}$

is always negative, so our initial assumption should be false.

Excluding the first case, the limit cycle has to be an antiphase solution of (2.1): the two populations compete indeed for the active state (see, for example, Figure 3B or 3D).
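The half-period shift can also be checked numerically on a simulated limit cycle: after discarding a transient, u2(t) should approximately coincide with u1(t + T/2). A rough sketch of such a check (ours; the input I = 1.5 and all other numerical choices are our own):

    # Sketch: verify numerically that u2(t) ~ u1(t + T/2) on the limit cycle of (2.1).
    import numpy as np
    from scipy.integrate import solve_ivp

    g, tau, r, theta, beta, I = 0.5, 100.0, 10.0, 0.2, 1.1, 1.5
    S = lambda x: 1.0 / (1.0 + np.exp(-r * (x - theta)))
    rhs = lambda t, y: [-y[0] + S(I - beta * y[1] - g * y[2]),
                        -y[1] + S(I - beta * y[0] - g * y[3]),
                        (-y[2] + y[0]) / tau,
                        (-y[3] + y[1]) / tau]

    t = np.arange(2000.0, 4000.0, 0.1)                     # sample after the transient
    sol = solve_ivp(rhs, (0.0, t[-1]), [0.6, 0.1, 0.3, 0.05], t_eval=t, max_step=0.5)
    u1, u2 = sol.y[0], sol.y[1]
    switches = np.flatnonzero(np.diff(np.sign(u1 - u2)))   # indices of dominance switches
    half = int(round(np.mean(np.diff(switches))))          # ~ T/2 measured in samples
    print(np.max(np.abs(u2[:-half] - u1[half:])))          # small value => antiphase oscillation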

We state our conclusion in the following theorem.

Theorem 3.4. Let us assume that conditions (3.3) and (3.4) are true and the coefficient L in the normal form (3.11) has positive real part. Then the stable limit cycle obtained at the supercritical Hopf bifurcation (as I crosses either Ihb or Ihb) corresponds to an antiphase oscillation: the limit cycle of period T satisfies u2(t) = u1(t + T/2) and a2(t) = a1(t + T/2) for any real t.

3.3. Multiple equilibria for large enough inhibition

Our local analysis shows how stable oscillations occur in system (2.1)—through a Hopf bifurcation. The uniform equilibrium point eI has four eigenvalues, λ1 and λ2 with negative real part independent of I (Re(λ1,2) < 0), and λ3 and λ4 that can cross the imaginary axis. Besides Hopf, another type of local bifurcation appears in (2.1) when one of the eigenvalues λ3, λ4 takes zero value, that is, when F’(uI) = β − g. Because of the system’s symmetry we expect it to be a pitchfork bifurcation.

Numerical simulations of system (2.1) reveal indeed the existence of additional equilibrium points. However, they exist for stronger (Figure 3G, β = 1.1) but not for weaker inhibition (Figure 3I, β = 0.75). We explain analytically how that happens.

Theorem 3.5.

  1. If β − g < 1/S’(θ), then the dynamical system (2.1) has a unique equilibrium point for all real I and this is eI = (uI, uI, uI, uI).

  2. For strong inhibition,
    $\beta - g > \dfrac{1}{S'(\theta)},$ (3.13)
    there are exactly two values, say, upf, u*pf ∈ (0, 1) such that upf < u0 < u*pf and
    $F'(u_{pf}) = F'(u^{*}_{pf}) = \beta - g.$ (3.14)
    At Ipf = H(upf) and I*pf = H(u*pf) defined by (3.1), the equilibrium point eI has a zero eigenvalue.

Proof. The condition that characterizes the equilibrium points of (2.1) is equivalent to G(u1) = G(u2) = I − β(u1 + u2), where we define G by G(u) = F(u) + (g − β)u, u ∈ (0, 1). We have limu→0 G(u) = −∞, limu→1 G(u) = ∞, and G’(u) = F’(u) + g − β ≥ F’(u0) + g − β = 1/S’(θ) + g − β. Obviously, based on hypothesis (i), we have G’(u) > 0. Therefore, G is a monotonically increasing function; so it is injective, and the conclusion follows immediately. Statement (ii) results from the shape of F’.
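For the logistic gain the pitchfork values can be computed just like the Hopf values, now from F’(u) = β − g; with the Figure 3 parameters this gives the inputs Ipf ≈ 0.41 and I*pf ≈ 1.59 quoted later in section 4.2. A sketch of this computation (ours):

    # Sketch: pitchfork values u_pf < u0 < u*_pf from F'(u) = beta - g, eq. (3.14),
    # and the corresponding inputs I_pf = H(u_pf), I*_pf = H(u*_pf).
    import numpy as np

    g, r, theta, beta = 0.5, 10.0, 0.2, 1.1
    H = lambda u: theta + np.log(u / (1.0 - u)) / r + (beta + g) * u

    disc = np.sqrt(1.0 - 4.0 / (r * (beta - g)))   # from u(1-u) = 1/(r (beta - g))
    u_pf, u_pf_star = (1.0 - disc) / 2.0, (1.0 + disc) / 2.0
    print(H(u_pf), H(u_pf_star))                   # approximately 0.4064 and 1.5936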

By constructing the normal form of the system around the bifurcation point Ipf or I*pf, we prove the existence of a subcritical pitchfork bifurcation. Therefore, in the neighborhood of Ipf or I*pf the system (2.1) has multiple (three) equilibria. However, since the pitchfork is subcritical, the two newly born equilibrium points are unstable. In the four-dimensional eigenspace they actually possess two unstable modes (see Remark 3.4).

In some cases these nonuniform equilibria (having u1 ≠ u2) might change their stability for I between Ipf and I*pf. Depending on the initial condition a trajectory will be attracted either to the fixed point with u1 > u2 or to that with u1 < u2, so one population is dominant and the other is suppressed forever. We call this type of dynamics in (2.1) winner-take-all behavior. The issue of the existence of the winner-take-all regime will be discussed separately in section 4.2. In this section we focus only on the mechanism that introduces additional equilibrium points to system (2.1).

Theorem 3.6. Let us assume β/(1 + 1/τ) > β − g > 1/S’(θ) and take I° ∈ {Ipf, I*pf}, u° ∈ {upf, u*pf} as in (3.14), I° = F(u°) + (β + g)u°. Then the system (2.1) has in the neighborhood of I° the normal form

$\dot{z} = \dfrac{(I - I^{\circ})F''(u^{\circ})}{2\beta[g(\tau+1)-\beta]}\,z + \dfrac{(\beta-g)^2}{6[g(\tau+1)-\beta]}\left(\dfrac{F'(u^{\circ})F'''(u^{\circ}) - 3F''(u^{\circ})^2}{2F''(u^{\circ})^2} + \dfrac{3g}{2\beta}\right)z^3.$ (3.15)

Moreover, if the gain function S satisfies (3.12) at u° ∈ {upf, u*pf}, then I° is a subcritical pitchfork bifurcation point for the system (2.1). Two additional unstable equilibrium points occur on the left side of I*pf (I < I*pf) and on the right side of Ipf (I > Ipf).

Proof. The construction of the normal form is sketched in Appendix B. Since (3.12) is true, the coefficient of z3 in the normal form is positive and the pitchfork is subcritical. Additional equilibrium points appear for (I − I°)F”(u°) < 0, with F”(u°) negative at upf and positive at u*pf.

Remark 3.3. In the case of adaptation-dominated systems (when condition (3.3) or, equivalently, β/(1 + 1/τ) > β − g is true) we conclude the following: (1) for weak inhibition, β − g < β/(1 + 1/τ) < 1/S’(θ), system (2.1) has a unique equilibrium point eI which is asymptotically stable for all I; (2) for some intermediate value of inhibition (β − g < 1/S’(θ) < β/(1 + 1/τ)), the system still has a unique equilibrium point for all I but this becomes unstable in the interval (Ihb, I*hb). However, in order to obtain this case we need to properly adjust the maximum gain to the adaptation parameters (i.e., we need S’(θ) > 1/(τg)); (3) for strong inhibition (1/S’(θ) < β − g < β/(1 + 1/τ)), additional equilibrium points occur in system (2.1) for I > Ipf and I < I*pf.

Remark 3.4. In the case of strong inhibition and an adaptation-dominated system we obtain uhb < upf < u0 < u*pf < u*hb and

Ihb < Ipf < I0 < I*pf < I*hb.

(Note that I0 = H(u0) is independent of β.) At each I between Ihb and I*hb, the equilibrium point eI has at least one eigenvalue with positive real part. In fact for I ∈ (Ihb, Ipf) ∪ (I*pf, I*hb) it has exactly two eigenvalues with positive real part and for I ∈ (Ipf, I*pf) it has only one eigenvalue with positive real part. Due to the multidimensionality of the eigenspace, at Ipf and I*pf the equilibrium eI does not actually change its stability (even if an eigenvalue takes zero value). Instead two new (nonuniform) equilibria are born (e.g., Figure 3G). The two nonuniform equilibria inherit the number of unstable modes from their “parent” fixed point, which means they have exactly two unstable modes.

As we see, condition (3.13) is necessary but not sufficient to obtain winner-take-all behavior in system (2.1). In section 4.2, equations (4.11) and (4.10), we will determine the minimum value of β for which winner-take-all exists (βwta) and the corresponding values Iw, I*w where the transition from oscillation to winner-take-all dynamics takes place. We summarize all these results in Figure 4 by drawing the bifurcation diagram in the parameter plane (I, β).

Figure 4.


Dynamical regimes in system (2.1) as inhibition strength β and stimulus strength I vary (other parameters are fixed: g = 0.5, τ = 100, r = 10, θ = 0.2). To the left of curve Ihb (solid line) the system has a unique stable equilibrium, and this satisfies u1 = u2 (simultaneous activity); in the region between curves Ihb and Iw (dashed line) the system oscillates; then to the right of Iw the system has a winner-take-all behavior (two stable and one unstable equilibria). The curve Ipf (dotted line) indicates a transition from one equilibrium to multiple equilibria in (2.1). While the attractor’s type (limit cycle) does not change between Ihb and Iw, the number of equilibria does: we find one unstable equilibrium between Ihb, Ipf and three unstable equilibria between Ipf, Iw. The turning points of curves Ihb, Ipf, and Iw are obtained at β = (1 + 1/τ)/S’(θ) = 0.404, β = g + 1/S’(θ) = 0.9, and βwta = 1.0387, respectively.

4. Release, escape, and winner-take-all mechanisms in neuronal competition models

As we mentioned in section 1, some common features are observed for a large class of neuronal competition models based on mutual inhibition and slow negative feedback process. An important example is the nonmonotonic dependency of the rivalry-oscillation’s period T on the stimulus strength I: in a range of small values for I the period increases with input strength; however, there exists another range for I, at larger values, where the period decreases with stimulus strength. These two dynamical regimes are usually separated by another one that is nonoscillatory; it occurs for sufficiently strong inhibition and corresponds to winner-take-all behavior (see Figures 3F and 3H).

The goal of this section is to characterize the underlying mechanisms of the above dynamical scheme. We aim to understand what causes the two opposite rivalry dynamics: as we will see, a release kind of mechanism is associated with the increasing branch of the T versus I curve (region IV in Figure 3F); on the other hand, for the decreasing branch of the I-T curve (region II in Figure 3F), an escape mechanism is responsible.

The terms release and escape were previously introduced by [36] for inhibition-mediated rhythmic patterns in thalamic model neurons and then extended and refined by [32] for the Morris–Lecar equations. Most cases include an autocatalytic process, either intrinsic (voltage-gated persistent inward currents) or synaptic (intrapopulation recurrent excitation). In the neuronal competition model (2.1) mutual inhibition plays the role of autocatalysis: one population inhibits the network partner that inhibited it; thus the combination of these two negative factors has a positive effect on its own activity. Rhythmicity is obtained due to a fast positive feedback (disinhibition) and a slow negative feedback process. The slow negative feedback process can be either an intrinsic property of the neuronal populations (e.g., spike frequency adaptation as in (2.1)) or a property of the inhibitory connections between them (e.g., synaptic depression). A simplified model similar to (2.1) but with synaptic depression and a Heaviside step-gain function was analytically investigated in [33]. Numerical results for models with synaptic depression and smooth sigmoid gain functions were also reported in [33] and [31].

In the context of neuronal competition models we define release and escape mechanisms as follows: The two populations in the network oscillate in antiphase competing for the active state; for the dominant population of variable, say, u1, the net input I − βu2 − ga1 decreases as the slow negative feedback accumulates; on the contrary, for the suppressed population u2 the feedback recovers (decays) so the net input I − βu1 − ga2 increases. However, since function S is highly nonlinear, equal changes in the net input I − βuj − gak of both populations can lead to drastically different changes in the corresponding effective response S(I − βuj − gak). This transformation has a direct influence on the variation of u1 and u2. The switch in dominance is due either to a significant, more abrupt change (decrease) in the response to an input to the dominant population, or, on the contrary, to a significant change (increase) in the response to an input to the suppressed population. In the first case the dominant population loses control, its activity drops, and it no longer suppresses its competitor, which becomes active. We call this mechanism release. In the latter case, when the input-output function S of the suppressed population changes faster, this population regains control, its activity rises, and it forces its competitor into the inhibited state. We call this mechanism escape.

Intuitively, escape occurs for higher stimulus ranges than release. Therefore, we expect that an escape (release) mechanism underlies the dynamics in region II (region IV) in Figure 3F with decreasing (increasing) I-T curve. For large values of I the gain function for the dominant population is relatively constant and close to 1 while that for the suppressed population falls in the interval where it is steeper. For example, let us consider the fast plane (u1, u2) and assume that u1 is ON and u2 is OFF; then the dominance switching point is on the shallow part of the active population nullcline u1 = S(I − βu2 − ga1) and on the steeper part of the down population nullcline u2 = S(I − βu1 − ga2) (see the animation 70584_01.gif [3.7MB] in Appendix C). A larger variation in u2 than in u1 is expected, and that corresponds to the escape mechanism. For small values of I the gain function for the dominant population is steeper (the steeper part of the active population nullcline u1 = S(I − βu2 − ga1)), while the gain function for the suppressed population is relatively constant and close to 0 (the shallow part of the down population nullcline u2 = S(I − βu1 − ga2))—see the animation 70584_02.gif [3.8MB] in Appendix C. That is what we call release.

For sufficiently large inhibition β, at intermediate I the effective response to an input to both populations might end up being relatively constant: closer to 1 for the active population and closer to 0 for the down population. The switching does not take place anymore; instead a winner-take-all dynamics is obtained.

In the following we investigate analytically how the period of oscillations for system (2.1) depends on the input strength and show that the increasing (decreasing) branch of the I-T curve is associated with the release (escape) mechanism. Our analysis is done in two steps: first, in section 4.1, we consider the limiting case of the sigmoid function, S(x) = Heav(x − θ), such that S(x) = 0 if x < θ and S(x) = 1 if x > θ; the function does not obey the hypotheses in section 2, but it provides a useful example where escape, release, and winner-take-all dynamics are easily characterized. Then in section 4.2 we return to the case of a smooth sigmoid function and describe the notions defined above in this more general context. We find a precise mathematical characterization for the minimum value of β and then for the corresponding input values, say, Iw and I*w, where the winner-take-all regime appears. In the absence of a winner-take-all regime we provide a mathematical definition for the transition between escape and release.

4.1. A relevant example: The Heaviside step function

The choice of S(x) = Heav(x − θ) allows us to solve completely for the intervals of the stimulus strength I where oscillations and winner-take-all dynamics exist. For system (2.1) with a Heaviside step function, there are only four possible equilibrium points: (1, 1, 1, 1), (1, 0, 1, 0), (0, 1, 0, 1), and (0, 0, 0, 0). Here (1, 1, 1, 1) and (0, 0, 0, 0) correspond respectively to the simultaneously high and low activity states observed in numerical simulations in regions I and V of Figure 3F. Oscillations can occur only between the states (u1, u2) = (1, 0) and (u1, u2) = (0, 1) in which one population is dominant and the other one is suppressed. Let us now determine the necessary and sufficient conditions for the oscillations to exist. The idea used in our analysis is similar to that in [17] and [33].

In the fast plane, the nullclines of u1 and u2 consist of two constant plateaus of zero and unit value discontinuously connected at a “threshold” point (I − (θ + ga1))/β and (I − (θ + ga2))/β, respectively (Figure 5A). During oscillation, due to the change in the slow variables a1 and a2, these thresholds move along the vertical and horizontal axes. For example, assuming u1 = 1, u2 = 0, the slow equations become τȧ1 = −a1 + 1 > 0 and τȧ2 = −a2 < 0; thus the u1-nullcline moves down while the u2-nullcline moves to the right. If these nullclines slide enough and the thresholds cross either 0 (for the u1-nullcline) or 1 (for the u2-nullcline), i.e., either a1J = (I − θ)/g or a2J = (I − (θ + β))/g is reached, then the equilibrium point (u1, u2) = (1, 0) disappears and the system will be attracted to (u1, u2) = (0, 1). The switch takes place and the slow equations change to τȧ1 = −a1 < 0, τȧ2 = −a2 + 1 > 0, now pushing the nullclines in opposite directions. As we explain below, depending on which of the two jumping values a1J or a2J is reached first, a release or an escape mechanism will underlie the oscillation.

Figure 5.


System (2.1)’s dynamics for large inhibition strength (β/g > 1) and Heaviside step function. (A) Nullclines in the plane of fast variables (u1, u2); (B–H) system’s dynamics in the slow variables plane (a1, a2) for: (B) I ≥ θ + β + g; (C) θ + β + g/2 ≤ I < θ + β + g; (D) θ + β < I < θ + β + g/2; (E) θ + g ≤ I ≤ θ + β; (F) θ + g/2 < I < θ + g; (G) θ ≤ I ≤ θ + g/2; (H) I < θ.

We note that for an oscillatory solution, u1 + u2 = 1 always, and so τ(ȧ1 + ȧ2) = 1 − (a1 + a2). Asymptotically, the slow dynamics will occur along the diagonal a1 + a2 = 1 of the unit square (Figure 5D or 5F). The positions that the horizontal line a2 = (I − (θ + β))/g and the vertical line a1 = (I − θ)/g have relative to the unit square are important when the trajectory points to the lower-right corner; on the other hand, when the trajectory points to the upper-left corner, the positions of the vertical line a1 = (I − (θ + β))/g and the horizontal line a2 = (I − θ)/g matter. Therefore, in the slow plane (a1, a2) (see Figure 5B–5H) we are interested in the intersection of the unit square (grey) with the square defined by the possible jumping values (I − (θ + β))/g (blue) and (I − θ)/g (red). If the red line is reached, then the active cell (uj = 1) becomes suddenly inactive (uj = 0), allowing its competitor to go up. That is why we call this case a release mechanism; otherwise, if the blue line is reached first, then the suppressed cell becomes suddenly active, forcing its competitor to go down. That is the escape mechanism.

As parameter I decreases, the blue-red square slides down along the first diagonal, starts to intersect the grey unit square, and then leaves it (Figure 5B–5H). The way those two squares intersect determines the system’s dynamics, so we need to take into account the relative size of their sides: 1 and β/g.

Theorem 4.1 (five modes of behavior for large enough inhibition strength). Let us assume that β/g > 1. The following five dynamical regimes exist for the neuronal competition model (2.1) with S(x) = Heav(xθ).

  1. If I ≥ θ + β + g/2, then the system’s attractor is the simultaneously high activity state u1 = u2 = 1.

  2. If θ + β < I < θ + β + g/2, then the system oscillates between the states (u1, u2) = (1, 0) and (u1, u2) = (0, 1) due to an escape mechanism. The period of oscillations decreases with I and satisfies
    $T_{escape} = 2\tau \ln\left(\dfrac{g}{I - (\theta + \beta)} - 1\right).$ (4.1)
  3. If θ + g ≤ I ≤ θ + β, then the system is in a winner-take-all regime with fast variables either (1, 0) or (0, 1) depending on the initial condition choice.

  4. If θ + g/2 < I < θ + g, then the system oscillates between the states (u1, u2) = (1, 0) and (u1, u2) = (0, 1) due to a release mechanism. The period of oscillations increases with I and satisfies
    $T_{release} = 2\tau \ln\left(\dfrac{g}{\theta + g - I} - 1\right).$ (4.2)
  5. If I ≤ θ + g/2, then the system’s attractor is the simultaneously low activity state u1 = u2 = 0.

Proof. Since β/g > 1, there are exactly seven relative positions of the blue-red square to the unit square that lead to conclusions (i) to (v).

  1. For 1 ≤ (I − (θ + β))/g (Figure 5B) both a1, a2 are smaller than (I − (θ + β))/g, so I − βui − gak ≥ I − β − g ≥ θ, and we always have u1 = u2 = 1.

    If 1/2 ≤ (I − (θ + β))/g < 1 < (I − θ)/g (Figure 5C), let us assume u1 = 1, u2 = 0, and a1 + a2 = 1 as initial values. The slow variable a1 increases while a2 decreases; in this case only a2 can cross the horizontal blue line, producing the jump of u2 from 0 to 1. However, just after the jump, in the equation of u1 we have I − βu2 − ga1 = I − β − g(1 − a2J) = 2(I − (β + θ)) − g + θ ≥ θ, which keeps u1 at its value 1. Therefore, the point will be attracted to the corner (a1, a2) = (1, 1) and the oscillation dies.

  2. For 0 < (I − (θ + β))/g < 1/2 < 1 < (I − θ)/g (Figure 5D), a2 will first cross the blue line and induce a sudden change in u2. Then in the u1-equation we obtain I − βu2 − ga1 = I − β − g(1 − a2J) = 2(I − (β + θ)) − g + θ < θ, which forces u1 to take zero value. The fast system switches from (1, 0) to (0, 1) and the slow dynamics changes its direction of movement along the upper-left–lower-right diagonal of the unit square. The change I − βu1 − ga2 = I − ga2 ≥ I − g ≥ θ does not affect the new value u2 = 1; oscillation exists indeed and its projection on the slow plane is the segment defined by the intersection of the unit square’s secondary diagonal with the two blue lines. When I decreases, the length of this segment increases, so it will take longer to go from one endpoint to the other. We expect the period T to increase as I decreases. Indeed, at the jumping point, the slow variable for the suppressed population takes the value af ≔ a2J = (I − (θ + β))/g; however, just after the previous jump it was ai ≔ 1 − a2J = 1 − (I − (θ + β))/g. Therefore, from τȧ2 = −a2 we compute the solution a(t) = ai e−t/τ, t ∈ (0, T/2). The period T of oscillation is T = 2τ ln(ai/af), i.e., exactly (4.1). Moreover, dT/dI < 0, so T decreases with I.

  3. By choosing I such that (I − (θ + β))/g < 0 < 1 ≤ (I − θ)/g, the unit square falls completely inside the blue-red square (Figure 5E). Starting at u1 = 1, u2 = 0 we have I − βu2 − ga1 = I − ga1 ≥ I − g ≥ θ and I − βu1 − ga2 = I − β − ga2 ≤ I − β < θ; the slow variables a1 and a2 continue to increase, respectively, decrease, approaching the lower-right corner of the unit square, and a winner-take-all state is achieved. For u1 = 0, u2 = 1, the point (a1, a2) will be attracted to the upper-left corner.

  4. Decreasing I even more, we enter the region (I − (θ + β))/g < 0 < 1/2 < (I − θ)/g < 1 (Figure 5F). Here the threshold is crossed at the red line and (when u1 = 1, u2 = 0) u1 jumps from 1 to 0. In the u2-equation, the expression I − βu1 − ga2 becomes I − g(1 − a1J) = 2(I − θ) − g + θ ≥ θ, which leads to u2 = 1. That is the release mechanism. In the slow plane (a1, a2) the point changes its direction of movement; the oscillation occurs along the segment defined by the intersection of the unit square’s secondary diagonal with the two red lines. The length of this segment decreases as I decreases; it takes less time to go from one endpoint to the other, so we expect the period T to decrease. Indeed, for the release mechanism, af ≔ a1J = (I − θ)/g, with value just after the previous jump ai ≔ 1 − a1J = 1 − (I − θ)/g. The slow differential equation is now τȧ1 = 1 − a1, that is, a(t) = 1 − (1 − ai)e−t/τ, t ∈ (0, T/2). The period T of oscillation satisfies T = 2τ ln((1 − ai)/(1 − af)), or (4.2). Obviously dT/dI > 0.

  5. Oscillations cannot exist anymore for (I − (θ + β))/g < 0 ≤ (I − θ)/g ≤ 1/2 (Figure 5G). Say, again, that we have u1 = 1, u2 = 0: only a1 can reach the threshold (red) line for u1 to become inactive. However, just after the jump, in the u2-equation the net input I − βu1 − ga2 = I − g(1 − a1J) = 2(I − θ) − g + θ is still not larger than θ, and it keeps u2 at zero. Both slow variables start to decrease, approaching the corner (a1, a2) = (0, 0). The oscillation dies, and the fast system has (0, 0) as a single attractor. For even smaller values of input strength, (I − (θ + β))/g < (I − θ)/g < 0 (Figure 5H), it is true that I − βui − gak ≤ I < θ and u1 = u2 = 0.

Remark 4.1. We note that in the escape regime, T → 0 as I → θ + β + g/2 and T → ∞ as I → θ + β. Similarly, in the release regime, T → 0 as I → θ + g/2 and T → ∞ as I → θ + g.
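Formulas (4.1) and (4.2), together with the region boundaries of Theorem 4.1, are easy to tabulate. The short sketch below is our own illustration with the Figure 3 parameter values (for which β/g = 2.2 > 1); it prints T as a function of I and returns NaN in the fusion and winner-take-all ranges.

    # Sketch: period of (2.1) with the Heaviside gain, from (4.1)-(4.2) of Theorem 4.1.
    import numpy as np

    g, tau, theta, beta = 0.5, 100.0, 0.2, 1.1      # beta/g > 1, so a winner-take-all range exists

    def T_heaviside(I):
        if theta + beta < I < theta + beta + g / 2.0:        # escape branch, eq. (4.1)
            return 2.0 * tau * np.log(g / (I - (theta + beta)) - 1.0)
        if theta + g / 2.0 < I < theta + g:                  # release branch, eq. (4.2)
            return 2.0 * tau * np.log(g / (theta + g - I) - 1.0)
        return float("nan")                                  # fusion or winner-take-all

    for I in np.linspace(0.5, 1.5, 21):
        print(round(float(I), 2), T_heaviside(I))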

Case (iii) in Theorem 4.1 (case (E) in Figure 5) is not possible if the side of the blue-red square is smaller than unity. Therefore, the winner-take-all dynamics is eliminated. On the other hand, the blue-red square can lie completely inside the unit square. Then we need to distinguish between two cases: the point moving along the secondary diagonal of the unit square may first reach the blue line (Figure 6A) or the red line (Figure 6B).

Figure 6.


(A) Escape and (B) release mechanisms for low inhibition strength (β/g < 1) in system (2.1) with Heaviside step function.

Theorem 4.2 (no winner-take-all regime for low inhibition strength). Let us assume that β/g < 1. Then the neuronal competition model (2.1) with S(x) = Heav(x − θ) exhibits only four dynamical regimes.

If I ≥ θ + β + g/2, then the system’s attractor is the high activity state u1 = u2 = 1. Similarly, if I ≤ θ + g/2, the system’s attractor is the low activity state u1 = u2 = 0.

For intermediate values of input strength, the system oscillates between the states (u1, u2) = (1, 0) and (u1, u2) = (0, 1). If θ + (β + g)/2 < I < θ + β + g/2, oscillations occur due to an escape mechanism; the period T decreases with I and satisfies (4.1). If θ + g/2 < I < θ + (β + g)/2, then a release mechanism underlies the oscillations and T increases with I according to (4.2).

Moreover, at ITmax = θ + (β + g)/2 the system has maximum oscillation period

$T_{max} = 2\tau \ln\left(\dfrac{1 + \beta/g}{1 - \beta/g}\right).$

Proof. Let us assume again initial conditions u1 = 1, u2 = 0, and a1 + a2 = 1; therefore, a1 increases and a2 decreases. In order to first reach the blue line (Figure 6A), the inequality 1 − (I − (θ + β))/g < (I − θ)/g, i.e., I > ITmax, must be true. When the blue line is reached, the down variable u2 switches from 0 to 1. If (I − (θ + β))/g < 1/2, the net input for u1 becomes Ne ≔ I − βu2 − ga1 = I − β − g(1 − a2J) = 2(I − β − θ) − g + θ < θ, so u1 changes to 0 and oscillation occurs due to an escape mechanism. If (I − (θ + β))/g ≥ 1/2, then Ne ≥ θ and u1 remains 1; the fast system has (1, 1) as an attractor. Similar arguments are used for the release mechanism; the condition for intersection with the red line (Figure 6B) is equivalent to the inequality 1 − (I − θ)/g > (I − (θ + β))/g, i.e., I < ITmax. For both (4.1) and (4.2), T → Tmax as I → θ + (β + g)/2. Also, Tescape → 0 as I → θ + β + g/2 and Trelease → 0 as I → θ + g/2.

4.2. Global features of the competition model with smooth sigmoid gain function

Numerical simulations of system (2.1) with smooth gain function S indicate that the limit cycle born through the Hopf bifurcation as in section 3 takes a relaxation-oscillator form just beyond the bifurcation (Figure 3B and 3D). That is, because of the two time-scales involved in the system, variables u1 and u2 evolve much faster than a1 and a2 (τ ⪢ 1). We use this observation to describe for (2.1) the relaxation-oscillator solution in the singular limit 1/τ = 0. In the plane (a1, a2) of slow variables we construct the curve of “jumping” points from the dominant to the suppressed state of each population, and back. This curve is the equivalent of the blue-red square from the case of Heaviside step function, and similarly it consists of two arcs associated with an escape and release mechanism, respectively. Then we give analytical conditions for the winner-take-all regime to exist.

4.2.1. The singular relaxation-oscillator solution

For large values of τ let us consider the slow time s = εt (ε = 1/τ; ′ = d/ds) and rewrite system (2.1) as εu1′ = −u1 + S(I − βu2 − ga1), εu2′ = −u2 + S(I − βu1 − ga2), a1′ = −a1 + u1, a2′ = −a2 + u2.

In the singular perturbation limit ε = 0 any solution will belong to the slow manifold Σ defined by −u1 + S(I − βu2 − ga1) = 0, −u2 + S(I − βu1 − ga2) = 0 or, based on the inverse property of S (S−1 = F),

$\Sigma = \left\{(u_1, u_2, a_1, a_2) : u_1, u_2 \in (0,1),\ a_1, a_2 \in \mathbb{R},\ \text{and}\ u_2 = S(I - \beta u_1 - g a_2),\ a_1 = \tfrac{1}{g}\left[I - F(u_1) - \beta S(I - \beta u_1 - g a_2)\right]\right\}.$ (4.3)

The surface Σ is multivalued and can be visualized by plotting a1 as a function of u1 and a2 (Figure 7).
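A point of Σ is determined by the pair (u1, a2) through (4.3). The sketch below is our own parametrization (with I = 1.5 and hypothetical helper names); tabulating a1 over a grid in this way is essentially how a surface such as the one in Figure 7 can be rendered.

    # Sketch: parametrize the slow manifold Sigma of (4.3) by (u1, a2) and compute u2 and a1.
    import numpy as np

    g, r, theta, beta, I = 0.5, 10.0, 0.2, 1.1, 1.5
    S = lambda x: 1.0 / (1.0 + np.exp(-r * (x - theta)))
    F = lambda u: theta + np.log(u / (1.0 - u)) / r

    def sigma_point(u1, a2):
        u2 = S(I - beta * u1 - g * a2)
        a1 = (I - F(u1) - beta * u2) / g          # from -u1 + S(I - beta*u2 - g*a1) = 0
        return u1, u2, a1, a2

    u1_grid = np.linspace(0.01, 0.99, 99)
    a2_grid = np.linspace(0.0, 1.0, 51)
    A1 = np.array([[sigma_point(u1, a2)[2] for a2 in a2_grid] for u1 in u1_grid])
    # A1[i, j] is the a1-coordinate of Sigma above (u1_grid[i], a2_grid[j]), cf. Figure 7A and 7D.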

Figure 7.


Projection of the slow manifold Σ on the space (u1, a1, a2). System (2.1)’s parameters are β = 1.1, g = 0.5, θ = 0.2, r = 10, and I = 1.5 (A–C), respectively, I = 1 (D–F). The curve SN of saddle-nodes (jumping points) is represented in blue-red (C, F). Nullclines N1 (magenta) and N2 (black curve) intersect in three points for (B) I = 1.5 and (E) I = 1. However, either the equilibrium points can all lie on the middle branch of Σ (A), that is, inside the curve of saddle-nodes (C), or two equilibrium points can move to the lateral branches, outside of SN (D, F).

Then the slow dynamics is governed by the equations a1′ = −a1 + ũ1(a1, a2), a2′ = −a2 + ũ2(a1, a2), where (ũ1, ũ2, a1, a2) ∈ Σ. The “slow” nullclines are characterized by additional conditions: a1 = u1 for the a1-nullcline (N1) and a2 = u2 for the a2-nullcline (N2); geometrically these are two curves situated on the surface Σ and defined by

$N_1\ (a_1' = 0) = \left\{(u_1, u_2, a_1, a_2) : u_1 = a_1,\ u_2 = S(I - \beta a_1 - g a_2(a_1)),\ a_2 = \tfrac{1}{g}\left[I - \beta a_1 - F\!\left(\tfrac{I - g a_1 - F(a_1)}{\beta}\right)\right],\ \text{with}\ a_1 \in (\alpha_1, \alpha_2)\right\}$ (4.4)

and

$N_2\ (a_2' = 0) = \left\{(u_1, u_2, a_1, a_2) : u_1 = S(I - \beta a_2 - g a_1(a_2)),\ u_2 = a_2,\ a_1 = \tfrac{1}{g}\left[I - \beta a_2 - F\!\left(\tfrac{I - g a_2 - F(a_2)}{\beta}\right)\right],\ \text{with}\ a_2 \in (\alpha_1, \alpha_2)\right\}.$ (4.5)

Here α1, α2 ∈ (0, 1) are the unique solutions of the equations F(α1) + gα1 = I − β and F(α2) + gα2 = I, respectively.

The equilibrium points are at the nullclines’ intersection, which means a1 = u1 and a2 = u2 simultaneously on Σ. They are characterized by the conditions

$N_1 \cap N_2 :\quad u_1 = a_1,\ u_2 = a_2,\quad F(a_1) + g a_1 + \beta a_2 = F(a_2) + g a_2 + \beta a_1 = I.$ (4.6)

We recall from section 3 that if β > (1 + 1/τ)/S’(θ) (which in the limit case ε = 0 corresponds to β > 1/S’(θ)), there exist two values of the input parameter Ihb < I*hb such that the system has a unique stable equilibrium point for I ∈ (−∞, Ihb) ∪ (I*hb, ∞). This equilibrium point takes the form eI = (uI, uI, uI, uI) with uI ∈ (0, 1), F(uI) + (β + g)uI = I, and it becomes unstable at Ihb and I*hb; a stable limit cycle appears for I > Ihb and I < I*hb. The critical parameter values are defined by Ihb = F(uhb) + (β + g)uhb and I*hb = F(u*hb) + (β + g)u*hb with uhb, u*hb according to (3.5). Again, in the limit case ε = 0, condition (3.5) becomes

$F'(u_{hb}) = F'(u^{*}_{hb}) = \beta,$ (4.7)

and obviously we have F’(uI) < β for each I ∈ (Ihb, I*hb).

Moreover, if β − g > 1/S’(θ), two additional equilibrium points occur for I > Ipf and I < I*pf through a pitchfork bifurcation. However, at least in the neighborhood of Ipf or I*pf, these equilibrium points are unstable. Their coordinates (u1p, u2p) satisfy (4.6), so either u1p < uI < u2p or u1p > uI > u2p. In addition, they are close to uI for all I sufficiently close to Ipf or I*pf; in conclusion, at least in a small region, F’(u1p) < β and F’(u2p) < β.

In Figure 7 we plotted the projection on the three-dimensional space (u1, a1, a2) of the slow manifold Σ and the nullclines N1 and N2 (N1 is colored in magenta and N2 is colored in black; because Σ is defined over the full range −∞ to ∞ with respect to a1 and a2, we plot only part of it). All plots are done for S as in (2.2), β = 1.1, g = 0.5, r = 10, θ = 0.2, and two values of input strength, I = 1.5 (Figure 7A–C) and I = 1 (Figure 7D–F). For these values of parameters we have Ipf = 0.4064 and I*pf = 1.5936, so at both I = 1.5 and I = 1 the system has three equilibrium points; the nullclines intersect three times and the middle intersection point is exactly eI. We note that all equilibrium points are unstable at I = 1.5 while two of them become stable at I = 1 (see also Figure 3G).

On the surface Σ we have $\frac{\partial u_1}{\partial a_1} = \frac{g}{\beta^2 S'(I - \beta u_1 - g a_2) - F'(u_1)}$ and $\frac{\partial u_1}{\partial a_2} = -\frac{\beta g}{\beta^2 - F'(u_1)/S'(I - \beta u_1 - g a_2)}$, i.e.,

$\dfrac{\partial u_1}{\partial a_1} = \dfrac{g F'(u_2)}{\beta^2 - F'(u_1)F'(u_2)} \qquad \text{and} \qquad \dfrac{\partial u_1}{\partial a_2} = -\dfrac{\beta g}{\beta^2 - F'(u_1)F'(u_2)}.$

For an initial condition (u1, u2, a1, a2) with u1 sufficiently large (close to 1) we have u1a1<0 and u1a2>0; therefore, a simultaneous increase in a1 and decrease in a2 lead to a decrease in u1. Similarly, for a choice of u1 sufficiently low (close to 0), u1a1<0, u1a2>0, and a simultaneous decrease in a1 and increase in a2 lead to an increase in u1. On the other hand, the limit cycle exists for some I(Ihb,Ihb). Since here eI ∈ Σ with F’(uI) < β, we have u1a1>0 and u1a2<0 in a neighborhood of this equilibrium point; the behavior of u1 with respect to a1 and a2 is opposite that previously described.

Therefore, relative to the plane (u1, a1) the surface Σ has a cubic-like shape: its left and right branches decrease with u1 while the middle branch increases. The curve of lower (u1 < u2) and upper (u1 > u2) knees on Σ is defined by F’(u1)F’(u2) = β2 (blue-red curve in Figure 7C and 7F). As we show in the following, when the trajectory on surface Σ reaches the curve of knees on its upper side, the point will jump from the right branch of Σ to its left branch. Similarly, when it reaches the lower side of the curve of knees, the point will jump from the left to the right branch of Σ. In the plane of fast variables (u1, u2) the curve of knees corresponds to a saddle-node bifurcation (a node approaching a saddle then merging with it and disappearing—see the animations 70584_01.gif [3.7MB] and 70584_02.gif [3.8MB] in Appendix C). For this reason we also call it the curve of saddle-nodes (SN) or the curve of jumping points. It is defined by the equations

SN:F(u1J)F(u2J)=β2,a1J=1g[IF(u1J)βu2J],a2J=1g[IF(u2J)βu1J]. (4.8)

We do not prove here the existence of the relaxation-oscillator singular solution. We aim only to provide the reader with the intuition for how oscillations occur in the competition model (2.1) if a smooth sigmoid is taken as a gain function. Thus, if the oscillations exist and, for example, u1 is dominant and u2 is suppressed (u1 > u2), we have −a1 + u1 > 0, −a2 + u2 < 0, so a1 increases and a2 decreases. They push u1 down and the point moves on the trajectory until it reaches SN at an upper knee U of coordinates (u1J, u2J); here the derivatives u1a1 and u1a2 become infinite, so u1 jumps from the upper to the lower branch of Σ (Figure 8A–8B). On the lower branch (u1 < u2) we have −a1 + u1 < 0 and −a2 + u2 > 0, so u1 increases and the point will move due to the decrease of a1 and increase of a2 until it touches SN at L. At L the point jumps up to the right branch of Σ. The projection of the limit cycle on the slow plane (a1, a2) is a closed curve that touches the projection of SN at two points (a1J, a2J) and (a2J, a1J) symmetric to the line a1 = a2 (Figures 8C and 9B).

Figure 8.

Figure 8

(A–B) Limit cycle solution and the slow manifold Σ for system (2.1) with parameters I = 1.5, β = 1.1, g = 0.5, r = 10, θ = 0.2, and τ = 5000. (C) The projection of the limit cycle on the slow plane (a1, a2); P1, P2, and P3 are the projections of the equilibrium points.

Figure 9.

Figure 9

(A) The graph of W = W(u1J) for upper knees on SN computed for g = 0.5, r = 10, θ = 0.2, and S as in (2.2). At β = 0.75 (dashed-dotted curve) the maximum value of W is less than unity: Winner-take-all regime does not exist; at β = 1.1 (thick curve) the maximum value of W is larger than unity: the values of I where winner-take-all occurs correspond to u1J at points a and b. (B) Projection Γ of the curve of saddle-nodes on the slow plane (a1, a2) for β = 0.75, g = 0.5, r = 10, θ = 0.2, S as in (2.2), and two values of input I = 1.2 and I = 1; Γ moves down on the first bisector direction as I decreases. The blue side is associated to the right branch of W in panel A, from W = 0 to Wmax (an escape mechanism), and the red side is associated to the left branch of W from Wmax to 0 (a release mechanism).

4.2.2. Release, escape, and winner-take-all in the competition model with smooth sigmoid gain function

Analogous to our analysis in section 4.1 we consider in the following the projection of the curve of jumping points SN on the slow plane (a1, a2). As mentioned above, if the up-to-down jump takes place at (a1J, a2J), then the down-to-up jump is at (a2J, a1J); so the projection is a closed curve symmetric to the diagonal a1 = a2.

We consider in (a1, a2) the curve Γ0 of equations

A1=1g[F(u1J)+βu2J],A2=1g[F(u2J)+βu1J]

with F’(u1J) F’(u2J) = β2. Then from (4.8) we have

a1J=A1+Ig,a2J=A2+Ig.

For a given I, the projection of SN on the slow plane (say, Γ = ΓI) is exactly the translation of Γ0 with quantity (I/g, I/g) along the first bisector. Obviously as I decreases, the curve Γ moves down on the upper-right–lower-left direction (Figure 9B). That is similar to the movement of the blue-red square for the case of the Heaviside step function in section 4.1 (Figure 5).

Let us consider now an oscillatory solution that exists for some I(Ihb,Ihb). If population 1 is dominant (u1 > u2), then −a1 + u1 > 0 and −a2 + u2 < 0 imply u1u2 > a1a2. On the other hand, since the trajectory is on Σ, we also have a1 = [IF(u1) − βu2]/g, a2 = [IF(u2) − βu1]/g, and thus a1a2 = [β(u1u2) − (F(u1) − F(u2))]/g. At the jumping point a1 reaches its maximum and a2 its minimum, so 0 < a1Ja2J < u1Ju2J can be written as

0<W(u1J)=1g(βF(u1J)F(u2J)u1Ju2J)<1. (4.9)

The point (u1J, u2J), u1J > u2J, satisfies F’(u1J)F’(u2J) = β2 and describes the part of SN that corresponds to the upper knees. Let u, u ∈ (0, 1) be the values defined by umβ<uhb<u0<uhb<uMβ, F’(u) = F’(u) = β2/F’(u0) = β2S’(θ) (see the shape of F’ in Figure 2C). Then the upper branch of SN results by gluing together three arcs on which (1) u1J increases between uhb and u (so u2J decreases from uhb to u0); (2) u1J decreases from u to u0 (and u2J decrease from u0 to u); (3) u1J continues to decrease from u0 to uhb (however, u2J increases now between u and uhb).

We can plot the expression W(u1J) from (4.9) for u1J[uhb,uMβ] and u1J[uhb,uMβ] with the corresponding u2J = u2(u1J) and check if the graph is below the horizontal line W = 1 (Figure 9A). If for all combinations (u1J, u2J) we have W < 1, then the winner-take-all regime does not exist in the interval (Ihb, Ihb). Otherwise, for (u1J, u2J) where W = 1 we can compute the values of I where winner-take-all occurs (see below).

Remark 4.2. Let us observe that W(u1J) = [βF’(ξ)]/g for some intermediate ξ between u2J and u1J. At u1J=u2J=uhb or u1J=u2J=uhb (that represents exactly the equilibrium point eI at I=Ihb and Ihb, respectively) we have F’(ξ) = β; therefore, we can extend by continuity the function W[uhb,uMβ] at u1J=uhb and the function W[uhb,uMβ] at u1J=uhb by taking W = 0 at these edges.

For gain function S as in (2.2) and different values of β we plotted the curve W; the curve W always has a maximum value Wmax for some u1JWmax(u0,uMβ) with u2 = u2(u1) ∈ (u, u0). Here ∥u2/∥u1 = -[F’(u2)F”(u1)]/[F’(u1)F”(u2)] > 0. W measures the relative distance between the values of slow variables against that between fast variables. Based on this observation we color SN, and obviously Γ, in blue for (u1J, u2J) starting at (uhb, uhb) and varying until it reaches (u1JWmax,u2JWmax) and in red for (u1J, u2J) between (u1JWmax,u2JWmax) and (uhb,uhb) (Figures 7C, 9B). The first corresponds to the right branch of W from W = 0 to Wmax, and the latter corresponds to the left branch of W from Wmax to 0.

Well inside the interval that defines the blue part of Γ (that is, not too close to the value u1JWmax that gives the maximum W) we have either u0<u2J<uhb<u1J or uhb<u2J<u0<uhb<u1J; so F’(u2J) < β < F’(u1J). However, we recall that F’(u2J) = 1/S’(Iβu2Jga2J) and F’(u1J) = 1/S’(Iβu2Jga1J). That means that the gain function S has at the jump a bigger slope at Iβu1ga2, the net input to the suppressed population, than at Iβu2ga1, the net input to the dominant population; in other words, the gain to the suppressed population falls in the range of steeper S. According to the definitions introduced at the beginning of section 4, this case corresponds to an escape type of dynamics.

On the red part of Γ there is an opposite behavior: at least away from the edge u1JWmax we have either u2J<uhb<u0<u1J<uhb or u2J<uhb<u1J<u0<uhb. That is, F’(u2J) = 1/S’(Iβu1Jga2J) > β > F’(u1J) = 1/S’(Iβu2Jga1J). At the jump the gain S’(Iβu2ga1) to the dominant population falls in the range of steeper S than that to the suppressed population. We called this a release mechanism.

In fact, the values u2JWmax and u1JWmax with u2JWmax<u0<u1JWmax (and not uhb and uhb) determine what we generically call the “steeper” part of S. For the particular case of gain function symmetric to its inflection point (θ, u0), i.e., for S such that S(θ + x) + S(θx) = 1 it can be shown that indeed u2JWmax=uhb and u1JWmax=uhb (for example, S as in (2.2); see Remark 4.3). However, in other cases that is not true anymore; then it is more difficult to interpret the terms escape and release for u1J close to u1JWmax, the passage point from one dynamical regime to another.

Occurrence of the winner-take-all regime

We can determine now the minimum value of β, say, βwta, where the winner-take-all regime occurs in system (2.1). Moreover, for a given β > βwta we find the corresponding values of I that delineate this regime.

We note that if I(Ihb,Ipf][Ipf,Ihb), then system (2.1) has a unique equilibrium point and it belongs to the middle branch of Σ (F’(uI) < β).

For I>Ipf or I<Ipf sufficiently close to the pitchfork bifurcation point, system (2.1) has three equilibria and all are situated again on the middle branch of Σ, inside SN, the curve of lower and upper knees (Figure 7A and 7C). That is because F’(u1p) < β and F’(u2p) < β, so F’(u1p)F’(u2p) < β2. A trajectory that starts on either the upper or the lower branch of Σ cannot approach an equilibrium point since it will first reach SN, and so the system oscillates. However, for intermediate values of I between Ipf and Ipf, two of the equilibrium points may move on the lateral branches of Σ and become stable (F’(u1p)F’(u2p) > β2; see Figure 7D and 7F). That is the case where the winner-take-all regime occurs. We should point out that the equilibrium eI = (uI, uI, uI, uI) always remains unstable for I(Ipf,Ipf)(Ihb,Ihb). The boundary between oscillatory and winner-take-all dynamics is obtained when the equilibrium points (u1p, u2p) belong to SN, that is, when on SN both a1 = u1 and a2 = u2 are true. We find in the following the values of I where winner-take-all appears.

The values of I, say, Iw, that delineate the winner-take-all regime are defined by

F(u1J)F(u2J)=β2,1g(βF(u1J)F(u2J)u1Ju2J)=1,Iw=F(u1J)+gu1J+βu2J[=F(u2J)+gu2J+βu1J]. (4.10)

The left-hand side of the second equation in (4.10) is in fact W(u1J). Since the curve W always has a maximum value Wmax for some u1JWmax(u0,uMβ), the line W = 1 can be either above the maximum or below it. If Wmax < 1 (Figure 9A, β = 0.75), then there is no winner-take-all regime. If Wmax > 1 (Figure 9A, β = 1.1), then there exist two values for u1J where the curve W intersects the horizontal line W = 1.

The critical (minimum) value βwta where the winner-take-all regime appears in system (2.1) results from the case of Wmax = 1; i.e., it satisfies the conditions W(u1J) = 1 and W’(u1J) = 0. Thus the minimum value βwta that introduces winner-take-all dynamics in system (2.1) is defined by

F(u1J)F(u2J)=β2,u1Ju2J,(F(u1J)F(u2J))[F(u1J)F(u2J)+F(u2J)F(u1J)]=(u1Ju2J)[F(u1J)2F(u2J)+F(u2J)2F(u1J)],βwta=g+F(u1J)F(u2J)u1Ju2J. (4.11)

Remark 4.3. We note that due to its particular form (2.2), S is symmetric about the point (θ, u0) with u0 = 0.5. The graph of Fis symmetric to the vertical line u = 0.5, i.e., F’(1 − u) = F’(u) for all u ∈ (0, 1), and so F”(1 − u) = F”(u). In this particular case we have F(uhb)=F(uhb)=β, so uhb=1uhb and F(uhb)=F(uhb). According to (4.11) the maximum value Wmax is obtained exactly at u1J=uhb, u2J=uhb.

For gain function S as in (2.2) and different values of β we plotted the curve W and determined the interval [Iw, Iw] where winner-take-all occurs. For example, choosing r = 10, θ = 0.2, g = 0.5, and β = 1.1, we have Wmax = 1.1046 and Iw=0.697 (computed at u1J = 0.7158, u2J = 0.0424) and Iw=1.303 (computed at u1J = 0.9576, u2J = 0.2842). The minimum value of β for the winner-take-all regime is βwta = 1.0387. As expected from our analysis in section 3.3, the value of β that guarantees existence of multiple equilibria in system (2.2), i.e., βpf = g + 1/S’(θ) = 0.9, is smaller than βwta.

For the same choice of parameters as above (β = 1.1, g = 0.5, and S as in (2.2) with r = 10, θ = 0.2), we plot the projection of the limit cycle and the curve of saddle-nodes on the slow plane (a1, a2) for different values of parameter I. Figure 10A gives the bifurcation diagram of activity u1 versus input strength I for τ = 100. In the rest of the panels we choose τ = 5000 to mimic the singular limit cycle solution with jumping points exactly on the curve of saddle-nodes (for smaller τ, e.g., τ = 100, the jumping points do not belong to the curve of saddle-nodes but fall close to it). In this case we have

Ihb=0.1434,Ipf=0.4064,Iw=0.697,Iw=1.303,Ipf=1.5936,Ihb=1.8566.

At larger values of I (I = 1.7 and I = 1.5) oscillation is due to an escape mechanism (Figure 10B–C); at an intermediate value I = 1 the system is in the winner-take-all regime (Figure 10D); at smaller values of I (I = 0.5 and I = 0.3) oscillation is due to a release mechanism (Figure 10E–F). Besides the limit cycle other important trajectories are the equilibrium points. There are three unstable equilibria for I = 1.5 and I = 0.5 but only one equilibrium point at I = 1.7 and I = 0.3. At I = 1 two out of three equilibria are stable.

Figure 10.

Figure 10

(A) Bifurcation diagram of activity u1 versus input I for β = 1.1, g = 0.5, r = 10, θ = 0.2, τ = 100. (B–F) Projection of the limit cycle and the curve of saddle-nodes on the slow plane (a1, a2) for different values of parameter I chosen according to the bifurcation diagram in panel A; however, in order to mimic the singular limit cycle solution we choose here τ = 5000. Symbol * indicates the location of equilibria. At large values of I oscillation is due to an escape mechanism: (B) I = 1.7, (C) I = 1.5; (D) at intermediate value I = 1 winner-take-all dynamics is observed; then at low values of I oscillation is due to a release mechanism: (E) I = 0.5, (F) I = 0.3. Single (panels B, F) or multiple (panels C, E) unstable equilibria can coexist with the limit cycle.

Remark 4.4. Equations (4.11) and (4.10), used to determine the critical β where the winner-take-all regime exists in system (2.1) and then, for β > βwta, to estimate Iw and Iw, prove to be reliable. The estimations obtained by this method are in excellent agreement with the results found in system (2.1)’s numerical simulations for both symmetric and asymmetric gain functions. The latter case is discussed in section 5.

5. Neuronal competition models that favor the escape (or release) dynamical regime

As seen in section 4.1, the dynamical scheme of T versus I is symmetric to I=θ+β+g2 for S, the Heaviside step function. When the winner-take-all regime exists, its corresponding I-input interval is equally split around the value θ+β+g2, that is, θ + gIθ + β as in Theorem 4.1. Moreover, an equal input range is found for both release and escape mechanisms: θ+g2<I<θ+g and θ+β<I<θ+β+g2 as in Theorem 4.1, or θ+g2<I<θ+β+g2 and θ+β+g2<I<θ+β+g2 as in Theorem 4.2. Therefore, it seems reasonable to ask to what extent the symmetry of the I-T dynamical scheme about a specific value I* relates to the geometry of S and, more generally, to the form of the equations in (2.1). We address this question in the following and find a heuristic method to reduce one of the two ranges of escape and release mechanisms while still maintaining the other one.

First let us note that we obtain a result similar to that in section 4.1 for any smooth sigmoid S as long as it is symmetric about its threshold. The threshold (say, th) is defined as the value where the gain function reaches its middle point: for S taking values between 0 and 1, it is S(th) = 0.5. The terminology comes from the fact that if a net input to one population is below the threshold (x < th), then it determines a weak effective response (S(x) < 1/2); at equilibrium that would correspond to an inactive state. On the other hand, a net input above the threshold (x > th) determines a strong effective response (S(x) > 1/2) which, at equilibrium, corresponds to an active state. The symmetry condition of S with respect to the threshold is described mathematically by the equality S(th + x) + S(thx) = 1 for any real x or, equivalently, S(x) + S(2 thx) = 1. The gain function defined by (2.2) is such an example: in this case the threshold is exactly the inflection point θ (S”(θ) = 0; S(θ) = 0.5).

Theorem 5.1. If the gain function S satisfies S(θ + x) + S(θx) = 1, then system (2.1) with input I* is diffeomorphic equivalent to system (2.1) with input I = 2θ + β + gI*.

Proof. Due to the symmetry of S, equation u~˙1=u~1+S(Iβu~2ga~1) with the change u1=1u~1, u2=1u~2, a1=1a~1, a2=1a~2 becomes u.1=u1+1S(Iβ(1u2)g(1a1))=u1+S((2θ+β+gI)βu2ga1); on the other hand, equation τa~˙1=a~1+u~1 becomes τa.1=a1+u1. Therefore, system (2.1) with input I* is diffeomorphic equivalent to system (2.1) with input (2θ + β + gI*).

Remark 5.1. Theorem 5.1 implies that system (2.1) has the same type of solutions for any two values of input strength I1 and I1 such that 12(I1+I1)=θ+β+g2. Moreover, if at I1 and I1 an oscillatory solution of period T1 and T1 exists, then, due to the diffeomorphism, T1=T1. Obviously, if T1 < T2 for I1<I2<θ+β+g2, then T1<T2 for the corresponding values I1>I2>θ+β+g2. Therefore, for symmetric S to its inflection point (threshold in this case), the intervals of I for regions II and IV (see Figure 3F or H) have the same length and are symmetric to the line I=θ+β+g2.

In order to explore the effect the asymmetry of S has on the bifurcation diagram, we consider S to be

S(x)={2u0(1+er2u0(xθ)),xθ,12(1u0)(1+er2(1u0)(θx)),x>θ, (5.1)

with u0 ∈ (0, 1).

Therefore the inflection point θ and the threshold th satisfy S(θ) = u0 and th > θ if u0 < 1/2, respectively, th < θ if u0 > 1/2.

The graphs of S as in (5.1) and their corresponding F’ (where F = S−1) are drawn in Figure 11A–B with parameter values r = 10, θ = 0.2, and u0 = 0.1, 0.5, 0.9. Then the I-T bifurcation diagram for the competition model (2.1) is constructed at β = 0.75 (Figure 11C). Numerical results show that the symmetry of this bifurcation diagram is indeed a direct consequence of the symmetry of S to its threshold. Such is the case u0 = 0.5. For other choices of u0 one of the two regions that correspond to the increasing and decreasing I-T branch is favored (it is wider)—the former when u0 > 1/2 and the latter when u0 < 1/2.

Figure 11.

Figure 11

Consider parameter values r = 10, θ = 0.2, g = 0.5, τ = 100 and gain functions S as in (5.1) with u0 = 0.1, 0.5, 0.9. (A) Graphs of S. (B) Their corresponding Fwhere F = S−1. (C) Bifurcation diagram for period T versus input I in system (2.1) as β = 0.75. (D) The graph of W = W(u1J) for upper knees on SN computed as β = 0.75. (E–F) Projection Γ of the curve of saddle-nodes on the slow plane (a1, a2) for β = 0.75, I = 1.2, and u0 = 0.1, 0.9. The blue side is associated to the escape mechanism and the red side to the release mechanism. The asymmetry of S to the threshold favors (E) escape if u0 < 1/2 and (F) release if u0 > 1/2.

Recall from section 4 the definition (4.8) for the curve SN of jumping points (with its projection Γ on the plane of slow variables (a1, a2)) and the definition (4.9) for W that characterizes escape, release, and winner-take-all regimes. The shape of W changes dramatically with u0 (Figure 11D, β = 0.75) but not at all with β. However, for any fixed u0 the curve W always has a unique maximum point; this maximum moves down as β decreases.

The asymmetry of S leads to an asymmetry of the curve Γ: for u0 < 1/2 we have u0<u1JWmax<uhb, u2JWmax<uhb, and the curves SN and Γ have a longer blue part than red part; the escape mechanism is favored (Figure 11E, u0 = 0.1). On the contrary, for u0 > 1/2 the release mechanism is favored since the red part of Γ is longer: u1JWmax>uhb and uhb<u2JWmax<u0 (Figure 11F, u0 = 0.9). We can understand this specific behavior by looking at the network populations’ dynamics about the threshold: in case of u0 < 1/2 the maximum gain to each population is reached for some net input below the threshold (S’(θ) = max S’ with S(θ) = u0, so θ < th); then about the threshold th the gain S’ has a decreasing trend. Consequently, there is a wider range for inputs where the inactive population has a significant gain compared to the active population, and so the escape mechanism is favored (the suppressed population regains control on its own, thus becoming active). On the other hand, if u0 > 1/2, the maximum gain to the network’s populations is reached for some net input above the threshold (θ > th) and the gain has an increasing trend in the vicinity of th. Therefore, the release mechanism is indeed much more easily obtained.

Remark 5.2. The function S defined by (5.1) does not entirely satisfy the hypotheses introduced in section 2. In this case F”’(u0) does not exist anymore at u0 ≠ 1/2 even if we still have F as C2(0,1) and C((0,1)\{u0}). Nevertheless this property does not affect our analytical results (e.g., the expansion we used in section 3 to prove the existence of stable oscillatory solutions still makes sense since it is done locally about some point u* different than u0). Consequently, all the results found for system (2.1) in previous sections remain valid.

5.1. A competition model that favors escape

In [31] we investigated four distinct neuronal competition models: a model by Wilson [37], one by Laing and Chow (the LC-model [17]), and two other variations of the LC-model that we called depression-LC and adaptation-LC. The latter is exactly the system (2.1) with symmetric sigmoid function. Besides the models’ commonalities we also noticed some differences: in some cases the bifurcation diagrams show a preference of the system to the escape mechanism (the region of I that corresponds to the decreasing I-T branch is wider; see Figures 3 and 4 in [31]). Moreover, for sufficiently low inhibition in Wilson’s and the depression-LC models the increasing (release-related) branch can disappear completely. Based on the results obtained in the present paper, we can explain those numerical observations.

Let us take, for example, Wilson’s model for binocular rivalry [37, 31]. Since the time-scale for inhibition is much shorter than the time-scale for the (excitatory) firing rate, we can assume that the inhibitory population tracks the excitatory population almost instantaneously. Thus Wilson’s model becomes equivalent to a system of the form

u.1=u1+γ(Iβu2)+2(θ+a1)2+(Iβu2)+2,u.2=u2+γ(Iβu1)+2(θ+a2)2+(Iβu1)+2,τa.1=a1+gu1,τa.2=a2+gu2, (5.2)

where γ is a positive constant and [x]+ is defined as [x]+ = 0 if x < 0 and [x]+ = x if x ≥ 0. As in (2.1), parameters β and g represent here the strength of the inhibition and adaptation; I is the external input strength. We see that in the differential equations for uj the nonlinearity is introduced through a function

S~(x;ϴ)=γ[x]+2ϴ2+[x]+2,

that is, u.1=u1+S~(Iβu2;θ+a1) and u.2=u2+S~(Iβu1;θ+a2). We note that S~ is asymptotic to γ as x → ∞ and satisfies S~=0 for x ≤ 0 and S~(ϴ;ϴ)=γ2, which means Θ is the threshold. To compare system (5.2)’s dynamics with that of (2.1), we will assume without loss of generality that γ = 1.

By the change of variables w1 = βu1I, w2 = βu2I, A1 = βa1/gI, A2 = βa2/gI, the system (5.2) is diffeomorphic equivalent to

w.1=w1+βS~(w2;θ1)I,w.2=w2+βS~(w1;θ2)I,τA.1=A1+w1,τA.2=A2+w2 (5.3)

with θ1(t)=θ+gβ(A1(t)+I) and θ2(t)=θ+gβ(A2(t)+I).

On the other hand, let us consider the system (2.1) with S as in (2.2). By a similar change of variables w1 = βu1I, w2 = βu2I, A1 = βa1I, A2 = βa2I this system is diffeomorphic equivalent to

w.1=w1+βS(w2;θ1)I,w.2=w2+βS(w1;θ2)I,τA.1=A1+w1,τA.2=A2+w2 (5.4)

with θ1(t) and θ2(t) as above. The threshold of S (rewritten as S(x; θ) = 1/(1 + er(xθ))) is θ.

As we can see, up to the specific expression of the gain function, Wilson’s and the adaptation-LC models are equivalent. The reason Wilson’s model shows preference to the escape mechanism instead of release (while for the adaptation-LC model the interval ranges for escape and release dynamics have equal length) resides in the asymmetric shape of S~ with respect to its threshold. The threshold is Θ but the maximum gain is obtained at ϴ3<ϴ (S~xx(x;ϴ)=0 at x=ϴ3). The resulting behavior is similar to that of S as in (5.1) with u0 < 1/2 (S”(θ) = 0, S(θ) = u0, and θ < th). The role of θ is played by ϴ3 and the corresponding value for u0 is S~(ϴ3;ϴ)=14.

Remark 5.3. In fact, the difference between S~ and asymmetric S from (5.1) is more subtle. Take again γ = 1; the restriction of S~ on (0, ∞) is invertible with inverse F~ defined on (0, 1) by F~(u;ϴ)=ϴu1u. In systems (5.3) and (5.4) the graph of F~u, F~u(u;θj)=θj2(1u)u(1u) has a well-like shape similar to that of the graph of Fu(u;θj)=4u02ru(2u0u) if u ∈ (0, u0] and 4(1u0)2r(1u)(12u0+u) if u ∈ (u0, 1). However, F~u depends on θj (that implicitly means dependence on the slow variable) while Fu does not. That explains why, for S as in (5.1), when they exist, the Hopf bifurcation points always come in pairs (and oscillations start at both points with the same frequency—see (3.5) and (3.6) in section 3); so release and escape oscillatory regimes (even if not equally balanced) always coexist for (2.1). On the contrary, in Wilson’s model (5.2) the dependence of F~u on the slow variable allows us to find a (reduced) parameter regime where only the escape mechanism is possible [31].

5.2. Competition models with nonlinear slow negative feedback

The local analysis pursued in section 3 shows that the uniform equilibrium point eI can lose its stability through either a Hopf bifurcation (at exactly two values Ihb and Ihb) or a pitchfork bifurcation (at exactly two values Ipf and Ipf). This result comes from the intersection of the graph of F’ (which has a well-like shape) with the straight horizontal lines y=β1+1τ and y = βg, respectively. This type of intersection restricts the possibilities to either Ihb<Ipf<Ipf<Ihb or Ipf<Ihb<Ihb<Ipf. The first case corresponds to a feedback/adaptation-dominated neuronal competition model and ensures the existence of stable oscillations in (2.1). Moreover, these appear with the same frequency ω=1τg(τ+1)β1 at both values Ihb and Ihb, and they are due to two different mechanisms: escape (for larger values of I) and release (for lower values of I). Thus we conclude that in system (2.1) escape and release oscillatory regimes always coexist. The choice of asymmetric gain function with respect to its threshold helps to reduce one or another regime but cannot eliminate it completely.

Nevertheless there is a way to modify (2.1) such that the new obtained system shows preference to either the escape or release mechanim; moreover, as for Wilson’s model, in some parameter regime we can find only escape-based oscillations (or, vice versa, only release-based oscillations). In this sense we consider the neuronal competition model with nonlinear slow negative feedback

u.1=u1+S(Iβu2ga1),u.2=u2+S(Iβu1ga2),τa.1=a1+a(u1),τa.2=a2+a(u2) (5.5)

with S as in (2.2) and

a(x;θa)=1(1+era(xθa)). (5.6)

We obtain the following result, which is similar to Theorem 5.1.

Theorem 5.2. Consider the nonlinear term a(x; θa) defined by (5.6). If the gain function S satisfies S(θ + x) + S(θx) = 1, then system (5.5) with input I* and slow equation nonlinearity a(x; θa) is diffeomorphic equivalent to system (5.5) with input I = 2θ+β+gI* and slow nonlinearity a(x; 1 − θa).

Proof. With the change of variables u1=1u~1, u2=1u~2, a1=1a~1, a2=1a~2, system (5.5) with input I* and nonlinear term in the slow equation a(x; θa) takes the form (5.5) with input (2θ + β + gI*) and a(x; 1 − θa).

If θa = 1/2, we have a(x; θa) = a(x; 1 − θa), and thus the system (5.5) has the same type of solutions for any two values of input strength I1 and I1 such that 12(I1+I1)=θ+β+g2. As for the case of linear adaptation (where the choice was a(u) = u), the intervals of I where the period of oscillations increases and decreases have the same length and are symmetric to the line I=θ+β+g2.

The symmetry of the I-T bifurcation diagram can be destroyed by choosing θa ≠ 1/2. As numerical simulations of system (5.5) show, the choice of θa > 1/2 leads to a preference of the system for the escape mechanism (decreasing T versus I). On the contrary, for θa < 1/2 the system shows preference to the release mechanism (increasing T versus I).

For parameters β = 1.1, g = 0.5, τ = 100, and θ = 0.2, r = 10 for S and ra = 10 and θa = 0.7 for a, we plot in Figure 12 the bifurcation diagram of population activity u1 versus the stimulus strength I and then the period T versus I. In Figure 12A we observe that the Hopf bifurcation point that existed for low value of I in case of linear adaptation disappears now. Instead we find a supercritical pitchfork bifurcation where stable nonuniform equilibrium points are born (they correspond to the winner-take-all case). The increasing branch of T versus I graph disappears while the decreasing branch is still present (Figure 12B). We find only four dynamical regimes (as opposed to the five described in Figure 3F and G): fusion (equal activity levels) for large and low input, winner-take-all, and oscillations with decreasing T as function of I for intermediate values of stimulus strength.

Figure 12.

Figure 12

Bifurcation diagrams for neuronal competition model (5.5): (A) population activity u1 versus input strength I; (B) period T of the network oscillations versus I. Parameter values are β = 1.1, g = 0.5, τ = 100, θ = 0.2, r = 10, ra = 10, and θa = 0.7. Functions S and a are defined by (2.2) and (5.6).

By choosing θa = 0.3 the bifurcation diagrams in Figure 12 are virtually mirrored along the stimulus axis; in this case only the increasing I-T branch exists.

We explain these numerical results through an analytical approach. Similarly to the local analysis in section 3, we note that system (5.5) has a unique uniform equilibrium eI = (uI, uI, a(uI), a(uI)) for any real I. The value uI ∈ (0, 1) is defined by equation I = F(uI) + βuI + ga(uI) and decreases with a decrease in I; moreover, limI→∞ uI = 1 and limI→−∞ uI = 0. The characteristic equation of the linearization matrix about eI is a product of two factors: λ2+λ(1+1τ+βF(uI))+1τ(1+ga(uI)+βF(uI))=0 and λ2+λ(1+1τβF(uI))+1τ(1+ga(uI)βF(uI))=0. Thus two of the eigenvalues always have negative real part, while the other two show the type of stability for eI. Decreasing I, the stability of eI is lost either through a pair of purely imaginary eigenvalues at F(uI)=β(1+1τ), F(uI)+ga(uI)>β or through a zero eigenvalue at F(uI)+ga(uI)=β and F(uI)>β(1+1τ).

Assume now that θa is greater than 1/2 and close to 1. For large I the corresponding fixed point eI has uI close to 1 and in the vicinity of θa. Since this is the steeper (almost linear) part of a, we can approximate a(uI) ≈ kuI. The condition F(uI)=β(1+1τ), F(uI)>βga(uI)=βgk, can be attained (at least for adaptation-dominated systems) and stable oscillations occur. On the other hand, for small I, the fixed point eI has uI close to 0 and further away from θa. The function a is almost constant there, so a(uI)0. Then F(uI)+ga(uI)β>β(1+1τ) and the stability of eI is lost through a zero eigenvalue (a pitchfork bifurcation).

Remark 5.4. We note that the choice of θa > 1/2 sufficiently close to 1 helps the slow variable for the suppressed population, say, a1, to change faster than the slow variable for the dominant population, say, a2. Since a(u1) is approximately constant to zero and a(u2) falls onto the linear part of the sigmoid, a1 decays almost exponentially according to τa.1a1, while a2 grows more slowly according to τa.2a2+ku2. The input-output function of the suppressed population changes faster, favoring escape. An opposite effect is obtained for θa < 1/2.

6. Discussion

We investigated a class of competition models that describe rhythmic (alternating) phenomena that arise in a range of neural contexts including perception of ambiguous sensory stimuli (such as binocular rivalry) and motor coordination (as in CPGs). These models rely on mutual inhibition between populations of neurons and a slow process in the form of spike frequency adaptation and/or synaptic depression, and they have the following commonalities [31].

A decrease in the strength of the input to the noise-free system leads to five possible types of dynamics: (i) at high values of input strength both competing populations are active at equal levels; (ii) by decreasing the input strength the system enters an oscillatory regime with the oscillation period increasing as input decreases; (iii) then for lower input the system is in a winner-take-all regime where only one population is dominant while the other is suppressed forever; (iv) continuing to decrease the input strength, the system oscillates again; in this region the oscillation period decreases as the input decreases; (v) at low values of input, oscillations disappear and both competing populations are inactive at an equal level. In addition, for weak inhibition the winner-take-all regime does not occur; however, the period of oscillations still depends on the input strength in a nonmonotonic fashion.

In a computational study of a model similar to (2.1), the authors of [23] also report transitions between simultaneous activity (single equilibrium), oscillations, and winner-take-all. The bifurcation diagram (Figure 4) in the parameter plane (I, β) resembles Figure 9 in [23]. However, Moldakarimov et al. were interested mostly in how the system’s dynamics changes with inhibition strength (an internal parameter of the system) and not with stimulus strength. By analyzing the influence of stimulus on the oscillations’ frequency/period, we provide a refined characterization of possible behaviors in this class of competition models.

The five dynamical regimes mentioned above were illustrated in Figure 3 for system (2.1), our choice as a particular example. Despite the fact that it has some limitations such as a lack of recurrent excitation, a symmetric gain function, and a nonsaturating a(u), system (2.1) has the advantage of being simple enough to allow for a thorough analytical investigation while still displaying the dynamical characteristics found in a larger class of models. Thus, for (2.1) we proved the existence of oscillations and we showed that they are antiphase as expected for a network of two competing populations. Moreover, by considering the period of oscillations T as a function of input strength I, we proved that the I-T graph is nonmonotonic. We associated the increasing I-T branch with a release mechanism and the decreasing branch with an escape mechanism. Release occurs for lower values of I. It means that during oscillation the dominant population loses control due to accumulating slow negative feedback and it becomes unable to suppress its competitor; consequently the latter becomes active and thereby takes the role of suppressor. On the contrary, escape occurs for higher values of I and it means that the suppressed population regains control on its own, starts to inhibit its competitor, and forces it into the down state. Moreover, using singular perturbation techniques, we characterized in the limiting case the conditions for occurrence of winner-take-all dynamics at intermediate values of stimulus strength.

Our presumption is that the potential for alternation of percepts depends on neuronal competition. If competition were significantly reduced or eliminated (say, effectively making β very small in the model), alternations would not occur in the presence of a stimulus. That is, we suppose that an isolated population would not oscillate. To satisfy this constraint we have disallowed recurrent excitation in (2.1): this precludes oscillations in an isolated population for any input value I. Perhaps for this goal the complete elimination of recurrent excitation is an extreme way to satisfy the constraint. Alternatively, we could consider systems with fast equations of the form u.j=uj+S(I+αujβukgaj) and allow for some recurrent excitation but not strong enough to let an isolated population oscillate (e.g., take α<(1+1τ)S(θ)). This modification, however, does not affect our conclusion on the nonmonotonicity of the period of oscillations versus input strength curve (not shown): both “release” and “escape” branches still appear.

Other modifications of (2.1) that favor either release or escape as responsible for oscillations were discussed in section 5. One extension of (2.1) allows the gain function to be asymmetric with respect to the threshold. This maintains one of the two oscillatory regimes while reducing the other one. A different rendition of (2.1) invokes a nonlinear slow negative feedback (a sigmoidal-shaped a(u)) and completely eliminates one of the two regimes. The existence of the saturating branches for a sigmoidal a(u) introduces an asymmetry in the system; thus under some specific conditions either the suppressed population recovers from slow negative feedback faster than the dominant population accumulates its own negative feedback (favoring escape, see Figure 12), or the reverse occurs. Interestingly, adding noise to these specially designed models, we automatically recover the nonmonotonicity of the period versus input curve [31].

Oscillations in mutually inhibitory neuronal networks based on fast-slow dynamics [27] as well as the terms “release” and “escape” [36, 32] were previously discussed for neuronal networks in the presence of local autocatalysis. The autocatalysis was either an intrinsic process (like voltage-gated persistent inward currents) or a synaptic process (like intrapopulation recurrent excitation). Other models assumed networks of excitatory cells interacting through a global inhibitory feedback that typically produce the winner-take-all dynamics; the inhibition was dynamic with a slow time-scale and induced more complicated oscillatory patterns with one cell being active for a while and then spontaneously turning off and allowing another one to take over [8]. Contrary to these examples, there is no autocatalysis in the neuronal competition models we investigate here. Instead the alternation is a combined result of two processes: mutual inhibition that acts effectively as a fast positive feedback (disinhibition) and a slow negative feedback (adaptation, but that could alternatively be synaptic depression).

Acknowledgment

We thank Sukbin Lim for helpful discussions.

The work of this author was supported by the Romanian grants CNCSIS AT55 and PNCDI-2 11-039/2007.

The work of the second author was supported by NIH grant EY07-158-03. The work of the third author was supported by the Swartz Foundation.

The work of this author was supported by the Swartz Foundation.

Appendix A. Normal form for Hopf bifurcation

To construct the normal form for the Hopf bifurcation, in section 3 we use the expansion of S(k)(F(uI)), k = 1, 2, 3, … , with respect to ε. That is obtained as follows: take, for example,

f1(I)=defS(F(uI))=f1(I+ε2α)=f1(I)+f1(I)ε2α+O(ε4).

Here f1(I)=S(F(u))=1F(u)=(1+1τ)β and f1(I)=S(F(uI))F(uI)duIdI.

Based on (3.2), f1(I)=S(F(uI))F(uI)(β+g+F(uI)). On the other hand, since uS(F(u)), we have 1 ≡ S’(F(u))·F’(u), and further 0 ≡ S”(F (u))·F’(u)2+S’(F(u))·F”(u), i.e., S”(F(u)) = −F”(u)/F’(u)3. Therefore,

f1(I)=1+1τβF(u)F(u)2(β+g+F(u))ε2α+O(ε4).

Similarly, we compute

f2(I)=defS(F(uI))=f2(I+ε2α)=f2(I)+O(ε2)=F(u)F(u)3+O(ε2)

and f3(I)=defS(F(uI))=f3(I+ε2α)=f3(I)+O(ε2)=S(F(u))+O(ε2). From S”(F(u)) = −F”(u)/F’(u)3 we obtain S”’(F(u)) = [3F”(u)2F’(uF”’(u)]/F’(u*)5, so

f3(I)=3F(u)2F(u)F(u)F(u)5+O(ε2).

Normal form

Let us now present the main steps in the algorithm for the construction of the normal form starting with

L0V0=ε[B(V0,V0)L0V1]+ε2[C(V0,V0,V0)+2B(V0,V1)+ΛV0L0V2]+O(ε3). (A.1)

In the limit ε → 0, the vector V0 is a solution of the linear system L0V0 = 0 with two eigenvalues λ1,2 of negative real part and two purely imaginary eigenvalues λ3,4 = ±. Thus V0 belongs to the center manifold; i.e., for an eigenvector ξC4 of A0 that satisfies A0ξ=iωξ say,

ξ=(τω+i,τωi,i,i)T, (A.2)

the solution V0 takes the form

V0(t)=w(t)ξeiωt+w(t)ξeiωt.

However, since L0V0=O(ε), w(t) is ε-dependent, and it can be written in slow time s = ε2t as w = w(s). In the singular perturbation expansion, w(s)=w(s)ε=0+dwdsε=0ε2t+O(ε4). With notation Z = w(s)|ε=0, Z=dwdsε=0, and so on (here ’ stands for the derivative with respect to the slow time), we have w=Z+ε2tZ+O(ε4) and

V0=(Zξeiωt+Zξeiωt)+ε2t(Zξeiωt+Zξeiωt)+O(ε4).

Then we compute L0 (teiωtξ) = eiωtξ, L0 (teiωtξ‾) = eiωtξ‾ and B(ξeiωt,ξeiωt)=12Bb2pe2iωt, B(ξeiωt,ξeiωt)=12Bb2pe2iωt, B(ξeiωt,ξeiωt)=12Bb2p, where b = βτω + i(gβ) and p = (1, 1, 0, 0)T. Equation (A.1) becomes

L0V1=12Z2Bb2pe2iωt+12Z2Bb2pe2iωt+ZZBb2p+ε[ZξeiωtZξeiωt+C(V0,V0,V0)+2B(V0,V1)+ΛV0L0V2]+O(ε2),

so we look for solutions V1 of the form

V1=w2ξ1e2iωt+2wwξ2+w2ξ3e2iωt=Z2ξ1e2iωt+2ZZξ2+Z2ξ3e2iωt+O(ε2).

From the singular perturbation expression we determine ξ1=12Bb2(2iωIA0)1p, ξ2=12Bb2A01p and ξ3 = ξ‾1, that is,

ξ1=Bb2ψ2ψ2(1+2iτω,1+2iτω,1,1)T,ξ2=Bb22[1+(gβ+1)(1+1τ)](1,1,1,1)T,

where ψ=14ω2τ+(gβ+1)(1+1τ)4iω(τ+1).

Then we compute C(V0,V0,V0), B(V0,V1) and ΛV0=αA(Zbeiωt+Zbeiωt)q+O(ε2) with q = (1, −1, 0, 0)T and obtain

L0V2=(Zξ+αAZbq)eiωt+Z2Zb2bqeiωt(D2+(β+g)B21+(gβ+1)(1+1τ)+B2(β+g+2iβωt)ψ2ψ2)+e3iωt()+cc+O(ε).

In order for the solution V2 to exist, the right-hand side of the above equation should be orthogonal on the eigenvectors of the adjoint operator L0=ddtA0T on the space of periodic solutions V(t)=V(t+2πω) with inner product

V,W=ω2π02πωi=14vi(t)wi(t)dt,V=(vi)i=1,4T,W=(wi)i=1,4T.

That means the right-hand side should be orthogonal on {ηeiωt, η‾eiωt} with η solution of A0Tη=iωη and ξ · η‾ = 1; that is,

η=14ωt(1,1,1iωτ,1+iωτ)T.

We obtain the normal form Z=αAφZLZ2Z with L as in Theorem 3.2.

By rescaling II* = ε2α and z(t) = εZ(ε2t) = εZ(s), we have z.=dzdt=εdZdsdsdt=ε3Z; the above differential equation becomes exactly (3.11).

First Lyapunov coefficient

We would like to determine sufficient conditions for the Hopf bifurcation to be supercritical, i.e., for Re(L)>0.

We now use inequality (3.3) to show that the first term in the sum that defines L has positive real part:

Re(φψ(β+g2+iβωt))=(2βg)τ+14(7g2+5βg6β2)+14τ(7g26βg+3β2)>2β22βg+14(7g2+5βg6β2)+14τ(7g26βg+3β2)=14(7g23βg+2β2)+14τ(7g26βg+3β2)>0. (A.3)

For the second term, (3.3) implies

l1=2(β+g)1+(gβ+1)(1+1τ)=21β+g+1F(u)>(1+1τ+1)F(u)>F(u).

Therefore, the real part of the second term satisfies

βτ2ω2φ2B2(l1DB2)>βτ2ω2φ2B2F(u)(F(u)F(u)F(u)22). (A.4)

Appendix B. Normal form for pitchfork bifurcation

The construction of this normal form follows similar steps to those in section 3.1 and Appendix A. Here F’(u°) = βg with u°{upf,upf} and I°{Ipf,Ipf}.

The operators B, C, Λ, and L0 are defined in the same way as in section 3.1 with coefficients A, B, D as in (3.10). However, the derivatives of F at the bifurcation point u° take different values, so S(F(uI))=1(βg)+αAε2+O(ε4), S(F(uI))=B+O(ε2), and S(F(uI))=D+O(ε2) with A, B, and D evaluated at either upf or upf. The matrix of the linearized system is now

A0=(1ββggβg0ββg10gβg1τ01τ001τ01τ)

and has a zero eigenvalue. Therefore, we find an eigenvector ξ (A0ξ=0) and an eigenvector η of the adjoint matrix (A0Tη=0) such that ξ · η = 1. They are

ξ=(1,1,1,1)T,η=12(βg(τ+1))(βg,β+g,τg,τg)T.

With the perturbation II° = ε2α, V(t) = εV0(t) + ε2V1(t) + ε3V2(t) + ⋯ , we obtain that V0 belongs to the eigenspace, that is, V0 = w(t)ξ. The expansion with respect to the slow time (s = ε2t, w = w(s), Z = w(s)|ε=0, etc.) implies V0=(Z+ε2tZ+O(ε4))ξ and

L0V1=Z2B(ξ,ξ)+ε[Zξ+Z3C(ξ,ξ,ξ)+2B(V0,V1)+ZΛξL0V2]+O(ε2).

The vector V1 is chosen to be of the form V1=w2ξ1=Z2ξ1+O(ε2) with ξ1 orthogonal on η. It results in ξ1=A01B(ξ,ξ), i.e., ξ1=B2(βg)34β(1,1,1,1)T. The normal form is

Z=(Λξη)Z+[C(ξ,ξ,ξ)η+2B(ξ,ξ1)η]Z3.

From Λξ = αA(βg)q, C(ξ,ξ,ξ)=D6(βg)3q, B(ξ,ξ1)=B28β(βg)4(β+g)q (where q = (1, −1, 0, 0)T), and by rescaling II° = ε2α and z(t) = εZ(ε2t) = εZ(s), we obtain exactly (3.15).

Appendix C. Additional material

To illustrate system (2.1)’s dynamics under the escape and release mechanisms we run numerical simulations in XPPAUT [7, 9] with S as in (2.2), r = 10, θ = 0.2, and parameters β = 1.1, g = 0.5, τ = 100 as in Figure 3F. Then in the fast plane (u1, u2) we obtain the trajectory of the point on the rivalry limit cycle: the point is drawn as a black thick dot; the u1-nullcline is colored in red, and the u2-nullcline is colored in blue.

  • 70584_01.gif [3.7MB] illustrates the escape mechanism for I = 1.5.

  • 70584_02.gif [3.8MB] illustrates the release mechanism for I = 0.5.

The small black square corresponds to the slow plane (a1, a2), where we included in green the projection of the limit cycle trajectory. This picture shows how the slow negative feedback accumulates for the dominant population and then how it recovers for the suppressed population (e.g., if u1 is ON and u2 is OFF, then a1 increases and a2 decreases). Then the cycle repeats.

Footnotes

AMS subject classifications. 34C15, 34C23, 37G05, 92B20

REFERENCES

  • [1].Andrews TJ, Purves D. Similarities in normal and binocular rivalrous viewing. Proc. Natl. Acad. Sci. USA. 1997;94:9905–9908. doi: 10.1073/pnas.94.18.9905. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [2].Blake R. A neural theory of binocular rivalry. Psychol. Rev. 1989;96:145–167. doi: 10.1037/0033-295x.96.1.145. [DOI] [PubMed] [Google Scholar]
  • [3].Brascamp JW, van Ee R, Noest AJ, Jacobs RH, van den Berg AV. The time course of binocular rivalry reveals a fundamental role of noise. J. Vis. 2006;6:1244–1256. doi: 10.1167/6.11.8. [DOI] [PubMed] [Google Scholar]
  • [4].Curtu R, Ermentrout B. Oscillations in a refractory neural net. J. Math. Biol. 2001;43:81–100. doi: 10.1007/s002850100089. [DOI] [PubMed] [Google Scholar]
  • [5].Curtu R, Ermentrout B. Pattern formation in a network of excitatory and inhibitory cells with adaptation. SIAM J. Appl. Dyn. Syst. 2004;3:191–231. [Google Scholar]
  • [6].Calabrese RL. Half–center oscillators underlying rhythmic movements. In: Arbib MA, editor. The Handbook of Brain Theory and Neural Networks. MIT Press; Cambridge, MA: 1995. pp. 444–447. [Google Scholar]
  • [7].Ermentrout B. 2006 http://www.math.pitt.edu/~bard/xpp/xpp.html.
  • [8].Ermentrout B. Complex dynamics in winner–take–all neural nets with slow inhibition. Neural Networks. 1992;5:415–431. [Google Scholar]
  • [9].Ermentrout B. Software Environ. Tools. Vol. 14. SIAM; Philadelphia: 2002. Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students. [Google Scholar]
  • [10].Freeman AW. Multistage model for binocular rivalry. J. Neurophysiol. 2005;94:4412–4420. doi: 10.1152/jn.00557.2005. [DOI] [PubMed] [Google Scholar]
  • [11].Getting PA. Comparative analysis of invertebrate central pattern generators. In: Cohen AH, Rossignol S, Grillnet S, editors. Neural Control of Rhythmic Movements in Vertebrates. Wiley; New York: 1988. pp. 101–127. [Google Scholar]
  • [12].Golubitsky M, Stewart I. The Symmetry Perspective: From Equilibrium to Chaos in Phase Space and Physical Space. Birkhäuser; Basel: 2004. [Google Scholar]
  • [13].Grossberg S. Pattern formation by the global limits of a nonlinear competitive interaction in n dimensions. J. Math. Biol. 1977;4:237–256. doi: 10.1007/BF00280974. [DOI] [PubMed] [Google Scholar]
  • [14].Guckenheimer J, Holmes P. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Field. Springer-Verlag; New York: 1983. [Google Scholar]
  • [15].Gurney K, Prescott TJ, Redgrave P. A computational model action selection in the basal ganglia (I): A new functional anatomy. Biol. Cybernet. 2001;84:401–410. doi: 10.1007/PL00007984. [DOI] [PubMed] [Google Scholar]
  • [16].Kuznetsov YA. Elements of Applied Bifurcation Theory. 2nd ed. Springer-Verlag; New York: 1998. [Google Scholar]
  • [17].Laing CR, Chow CC. A spiking neuron model for binocular rivalry. J. Comput. Neurosci. 2002;12:39–53. doi: 10.1023/a:1014942129705. [DOI] [PubMed] [Google Scholar]
  • [18].Lee SH, Blake R, Heeger DJ. Traveling waves of activity in primary visual cortex during binocular rivalry. Nat. Neurosci. 2005;8:22–23. doi: 10.1038/nn1365. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Levelt WJM. Psychological Studies, Minor Series. Vol. 2. Mouton, The Hague, The Netherlands: 1968. On Binocular Rivalry. [Google Scholar]
  • [20].Logothetis NK, Leopold DA, Sheinberg DL. What is rivaling during binocular rivalry? Nature. 1996;380:621–624. doi: 10.1038/380621a0. [DOI] [PubMed] [Google Scholar]
  • [21].Mao ZH, Massaquoi SG. Dynamics of winner–take–all competition in recurrent neural networks with lateral inhibition. IEEE Trans. Neural Networks. 2007;18:55–69. doi: 10.1109/TNN.2006.883724. [DOI] [PubMed] [Google Scholar]
  • [22].Maynard Smith J. Models in Ecology. Cambridge University Press; Cambridge, UK: 1974. pp. 59–68. [Google Scholar]
  • [23].Moldakarimov S, Rollenhagen JE, Olson CR, Chow CC. Competitive dynamics in cortical responses to visual stimuli. J. Neurophysiol. 2005;94:3388–3396. doi: 10.1152/jn.00159.2005. [DOI] [PubMed] [Google Scholar]
  • [24].Moreno–Bote R, Rinzel J, Rubin N. Noise–induced alternations in an attractor network model of perceptual bi–stability. J. Neurophysiol. 2007;98:1125–1139. doi: 10.1152/jn.00116.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [25].Mueller TJ, Blake R. A fresh look at the temporal dynamics of binocular rivalry. Biol. Cybernet. 1989;61:223–232. doi: 10.1007/BF00198769. [DOI] [PubMed] [Google Scholar]
  • [26].Polonsky A, Blake R, Braun J, Heeger DJ. Neuronal activity in human primary visual cortex correlates with perception during binocular rivalry. Nat. Neurosci. 2000;3:1153–1159. doi: 10.1038/80676. [DOI] [PubMed] [Google Scholar]
  • [27].Rubin J, Terman D. Geometric singular perturbation analysis of neuronal dynamics. In: Fiedler B, editor. Handbook of Dynamical Systems. Vol. 2. Elsevier; Amsterdam: 2002. pp. 93–146. [Google Scholar]
  • [28].Rubin N. Binocular rivalry and perceptual multi–stability. Trends in Neurosci. 2003;26:289–291. doi: 10.1016/S0166-2236(03)00128-0. [DOI] [PubMed] [Google Scholar]
  • [29].Rubin N, Hupe JM. Dynamics of perceptual bi–stability: Plaids and binocular rivalry compared. In: Alais D, Blake R, editors. Binocular Rivalry and Bistable Perception. MIT Press; Cambridge, MA: 2004. [Google Scholar]
  • [30].Salinas E. Background synaptic activity as a switch between dynamical states in a network. Neural Comput. 2003;15:1439–1475. doi: 10.1162/089976603321891756. [DOI] [PubMed] [Google Scholar]
  • [31].Shpiro A, Curtu R, Rinzel J, Rubin N. Dynamical characteristics common to neuronal competition models. J. Neurophysiol. 2007;97:462–473. doi: 10.1152/jn.00604.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [32].Skinner FK, Kopell N, Marder E. Mechanisms for oscillation and frequency control in reciprocally inhibitory model neural networks. J. Comput. Neurosci. 1994;1:69–87. doi: 10.1007/BF00962719. [DOI] [PubMed] [Google Scholar]
  • [33].Taylor AL, Cottrell GW, Kristan WB., Jr. Analysis of oscillations in a reciprocally inhibitory network with synaptic depression. Neural Comput. 2002;14:561–581. doi: 10.1162/089976602317250906. [DOI] [PubMed] [Google Scholar]
  • [34].Tong F, Nakayama K, Vaughan JT, Kanwisher N. Binocular rivalry and visual awareness in human extrastriate cortex. Neuron. 1998;21:753–759. doi: 10.1016/s0896-6273(00)80592-9. [DOI] [PubMed] [Google Scholar]
  • [35].van Ooyen A. Competition in the development of nerve connections: A review of models. Network: Comput. in Neural Syst. 2001;12:1–47. [PubMed] [Google Scholar]
  • [36].Wang X-J, Rinzel J. Alternating and synchronous rhythms in reciprocally inhibitory model neurons. Neural Comput. 1992;4:84–97. [Google Scholar]
  • [37].Wilson HR. Computational evidence for a rivalry hierarchy in vision. Proc. Natl. Acad. Sci. USA. 2003;100:14499–14503. doi: 10.1073/pnas.2333622100. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Wilson HR, Blake R, Lee SH. Dynamics of travelling waves in visual perception. Nature. 2001;412:907–910. doi: 10.1038/35091066. [DOI] [PubMed] [Google Scholar]

RESOURCES