Published in final edited form as: Phys. Rev. E 82, 011903 (2010); doi: 10.1103/PhysRevE.82.011903

Stimulus-dependent suppression of chaos in recurrent neural networks

Kanaka Rajan,1,* L. F. Abbott,2 and Haim Sompolinsky3

Abstract

Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, but they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a nonmonotonic function of stimulus frequency, revealing a “resonant” frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero frequency and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.


Circuits of the central nervous system exhibit temporally irregular ongoing activity that is not directly related to sensory or behavioral events. The fact that this spontaneous activity is not suppressed by averaging over the large number of synaptic inputs to each neuron [1] suggests that chaotic network dynamics may represent a substantial local source of fluctuating activity in cortical and subcortical circuits. Previous modeling studies have shown that nonlinear random network models with strong recurrent excitatory and inhibitory connections generically exhibit chaotic dynamics [2–4]. In this work, we ask how intrinsically generated fluctuating activity affects neuronal responses to external stimuli. The nonlinear effects of oscillatory drive, including frequency dependence and phase locking, have been well explored in low-dimensional chaotic dynamical systems (see, e.g., [5–9]). However, relatively few studies have explored entrainment of extended high-dimensional spatiotemporal chaotic systems by external forcing (see, e.g., [10–14]). Here, we explore the locking of large chaotic neuronal networks to external stimuli and study how it depends on stimulus amplitude and frequency.

We study phenomenological firing-rate network models representing neurons in a localized circuit that are coupled by relatively strong excitatory and inhibitory connections randomly distributed in the network. Specifically, we consider a network of N interconnected neurons, each described by an activation variable x_i, for i = 1, 2, …, N, satisfying

$$\frac{dx_i}{dt} = -x_i + \sum_{j=1}^{N} J_{ij}\,\phi(x_j) + H_i, \qquad (1)$$

with φ(x_i), a saturating monotonic function of the total synaptic input x_i, representing a normalized firing rate relative to a fixed background rate r₀. Here, we choose

$$\phi(x) = \begin{cases} r_0 \tanh(x/r_0) & \text{for } x \le 0 \\ (2 - r_0)\tanh\!\big(x/(2 - r_0)\big) & \text{for } x > 0, \end{cases} \qquad (2)$$

so that the normalized firing rate varies from 0 to 2. For r₀ = 1, we recover the often-used tanh function, but we use a smaller value of r₀ = 0.1, which is more biologically reasonable [15]. The time variable in Eq. (1) is defined in units of the single-neuron time constant, τ_r = 10 ms. Each element of the network connectivity matrix J is chosen randomly and independently [16] from a Gaussian distribution with zero mean and variance g²/N, where the gain g acts as the control parameter of the network. The external input term is set to H_i = I cos(ωt + θ_i), with the phase θ_i chosen randomly and independently for each neuron from a uniform distribution between 0 and 2π. This corresponds to situations in which the oscillatory input does not introduce global temporal phase coherence, which occurs, for example, for a population of neurons with a broad range of preferred spatiotemporal phases.
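To make the model concrete, here is a minimal Python sketch of Eqs. (1) and (2) integrated with the Euler method. It is an illustration rather than the authors' code: the step size, simulation length, initial condition, and helper names (phi, simulate) are our own assumptions, while N, g, r₀, I, and f follow the text; time is in units of the 10 ms time constant, so a frequency f in Hz enters as ω = 2πf × 0.01 per unit time.

```python
import numpy as np

def phi(x, r0=0.1):
    # Saturating nonlinearity of Eq. (2); for r0 = 1 this reduces to tanh(x).
    return np.where(x <= 0.0,
                    r0 * np.tanh(x / r0),
                    (2.0 - r0) * np.tanh(x / (2.0 - r0)))

def simulate(N=1000, g=1.5, r0=0.1, I=0.04, f=4.0, T=500.0, dt=0.1, seed=0):
    # Euler integration of dx_i/dt = -x_i + sum_j J_ij phi(x_j) + I cos(w t + theta_i).
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # variance g^2 / N
    theta = rng.uniform(0.0, 2.0 * np.pi, size=N)     # random input phases
    w = 2.0 * np.pi * f * 0.01                        # f in Hz; tau_r = 10 ms
    x = rng.normal(0.0, 0.5, size=N)                  # arbitrary initial state
    rates = np.empty((int(T / dt), N))
    for n in range(rates.shape[0]):
        H = I * np.cos(w * n * dt + theta)
        x = x + dt * (-x + J @ phi(x, r0) + H)
        rates[n] = phi(x, r0)
    return rates

rates = simulate()  # rates[t, i]: normalized firing rate of unit i at step t
```

With I = 0 and g = 1.5, this should reproduce the qualitative behavior of Fig. 1(a); increasing I moves the network toward the entrained state of Fig. 1(c).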

To characterize the activity of the network, we make extensive use of the autocorrelation function of each neuronal rate averaged across all the units of the network,

$$C(\tau) = \frac{1}{N}\sum_{i=1}^{N} \big\langle \phi\big(x_i(t)\big)\,\phi\big(x_i(t+\tau)\big) \big\rangle, \qquad (3)$$

where the angular brackets denote a time average. C(0) is related to the total variance in the fluctuations of the firing rates of the network units, whereas C(τ) for nonzero τ provides information about the temporal structure of network activity.
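Estimating Eq. (3) from simulated activity is direct: average the lagged product over time, then over units. The helper below is our own sketch; it assumes a rates array of shape (time steps, units) such as the one produced by the simulation sketch above.

```python
import numpy as np

def avg_autocorrelation(rates, max_lag):
    # C(tau) = (1/N) sum_i < phi(x_i(t)) phi(x_i(t + tau)) >, Eq. (3);
    # the time average runs over all valid start times t.
    steps = rates.shape[0]
    return np.array([np.mean(rates[: steps - lag] * rates[lag:])
                     for lag in range(max_lag + 1)])

# Example with the `rates` array from the simulation sketch above:
# C = avg_autocorrelation(rates, max_lag=500)
# C[0] is the mean-squared rate; subtracting the squared mean rate gives
# the total variance referred to in the text.
```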

Previous work [2] has shown that, in the limit N → ∞ with no input (I = 0), this model displays only two types of activity: a trivial fixed point with all x_i = 0 when g < 1 and chaos when g > 1. The spontaneously chaotic state is characterized by highly irregular firing rates [Fig. 1(a)], a decaying average autocorrelation function [Fig. 1(d)], and a continuous power spectrum [Fig. 1(g)]. Note that the fluctuations in Fig. 1(a) are considerably slower than the 10 ms time constant of the model. The associated average autocorrelation function decays to zero as τ increases [Fig. 1(d)], implying that the temporal fluctuations of the spontaneous activity are uncorrelated over large time intervals, a characteristic of the chaotic state. The power spectrum decays from a peak at zero frequency [Fig. 1(g)] and, although it is broad, the power at high frequency is exponentially suppressed. Strong suppression of high-frequency fluctuations is another characteristic of the chaotic state in these networks. By comparison, the power spectrum of a nonchaotic network responding to a white-noise input falls off only as a power law at high frequencies.
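The power spectra of Figs. 1(g)–1(i) can be estimated by averaging per-unit periodograms of the simulated rates. The sketch below is one standard recipe, not the authors' procedure; the Hann window and the conversion of lags to seconds via the 10 ms time constant are our assumptions.

```python
import numpy as np

def avg_power_spectrum(rates, dt=0.1, tau_r=0.010):
    # Unit-averaged periodogram of mean-subtracted rates; dt is the sampling
    # step in units of the time constant tau_r (in seconds), so the returned
    # frequencies are in Hz.
    steps = rates.shape[0]
    r = (rates - rates.mean(axis=0)) * np.hanning(steps)[:, None]
    power = (np.abs(np.fft.rfft(r, axis=0)) ** 2).mean(axis=1)
    freqs = np.fft.rfftfreq(steps, d=dt * tau_r)
    return freqs, power

# In the chaotic state, log(power) should fall roughly linearly with
# frequency at high f, in contrast to the power-law tail of a noise-driven
# nonchaotic network.
```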

FIG. 1.

Activity of typical network units (left column), average autocorrelation function (middle column), and log power spectrum (right column) for a network with N = 1000 and g = 1.5. (a) With no input (I = 0), network activity is chaotic. (b) In the presence of a weak input (I = 0.04, f = ω/2π = 4 Hz), an oscillatory response is superposed on chaotic fluctuations. (c) For a stronger input (I = 0.2, f = 4 Hz), the network response is periodic. (d)–(f) Average autocorrelation function and (g)–(i) logarithm of the power versus frequency for the network states corresponding to (a)–(c).

When this network is driven with a relatively weak sinusoidal input [Figs. 1(b), 1(e), and 1(h)], the single-neuron response consists of periodic activity induced by the input superposed on a chaotic background [Fig. 1(b)]. The average autocorrelation function for the network driven by weak periodic input consequently reveals a mixture of periodic and chaotic activities [Fig. 1(e)]. Periodic oscillations at the input frequency appear at large values of τ, but the variance given by C(0) is larger than the height of the peaks in these oscillations. This indicates that the total firing-rate variance is not completely accounted for by the oscillatory response of the network to the external drive, with the additional variance arising from residual chaotic fluctuations. Similarly, the power spectrum shows a continuous component generated by the residual chaos, a prominent peak at the frequency of the input, and peaks at harmonics of the input frequency arising from network nonlinearities [Fig. 1(h)].

When the amplitude of the input is increased sufficiently, the single-neuron firing rates oscillate at the input frequency in a perfectly periodic manner [Fig. 1(c)], yielding a periodic autocorrelation function [Fig. 1(f)]. C(0) now matches the height of the subsequent peaks of the oscillation, meaning that the periodic component of C accounts for the entire response variance quantified by C(0). All of the network power is concentrated at the frequency of the input and its harmonics, also indicating a periodic response free of chaotic interference [Fig. 1(i)].

To explore these results analytically and more systematically, we developed dynamic mean-field equations appropriate for large N. The mean-field theory is based on the observation that the total recurrent synaptic input onto each network neuron can be approximated as Gaussian noise [2]. The temporal correlation of this noise is calculated self-consistently from the average autocorrelation function of the network. We begin by writing x_i = x_i⁰ + x_i¹, where x_i⁰ is the steady-state solution of dx_i⁰/dt = −x_i⁰ + I cos(ωt + θ_i) and x_i¹ satisfies dx_i¹/dt = −x_i¹ + Σ_j J_ij φ(x_j¹ + x_j⁰). This implies that x_i⁰(t) = h cos(ωt + θ̃_i), where h = I/√(1 + ω²) and we have incorporated a frequency-dependent phase shift into the phase θ̃_i. Mean-field theory replaces the network interaction term in the equation for x_i¹ with a Gaussian random variable η_i, so that dx_i¹/dt = −x_i¹ + η_i. Averages over time and network units, as in Eq. (3), are implemented by averaging over J, θ, and η (denoted by square brackets), an approximation valid for large N.

Self-consistency is imposed in the mean-field theory by requiring that the first two moments of η match the moments of the network interaction that it represents. Thus, we set [η_i(t)] = Σ_j [J_ij][φ(x_j(t))] = 0, because [J_ij] = 0. Similarly, using the identity [J_il J_jk] = g²δ_ij δ_kl/N, we find that

$$\big[\eta_i(t)\,\eta_j(t+\tau)\big] = \sum_{l=1}^{N}\sum_{k=1}^{N}\big[J_{il}J_{jk}\big]\,\big[\phi(x_l(t))\,\phi(x_k(t+\tau))\big] = \delta_{ij}\,\frac{g^2}{N}\sum_{k=1}^{N}\big[\phi(x_k(t))\,\phi(x_k(t+\tau))\big] = \delta_{ij}\,g^2\,C(\tau). \qquad (4)$$

Next, defining Δ(τ) = [x_i¹(t) x_i¹(t+τ)] and recalling that dx_i¹/dt = −x_i¹ + η_i, it follows that

$$\frac{d^2\Delta(\tau)}{d\tau^2} = \Delta(\tau) - g^2 C(\tau). \qquad (5)$$

The final step in the derivation of the mean-field equations is to note that, because x¹(t) and x¹(t+τ) are driven by Gaussian noise, they are Gaussian random variables with moments [x¹(t)] = [x¹(t+τ)] = 0, [x¹(t)x¹(t)] = [x¹(t+τ)x¹(t+τ)] = Δ(0), and [x¹(t)x¹(t+τ)] = Δ(τ). To realize these constraints, we introduce three Gaussian random variables with zero mean and unit variance, z_i for i = 1, 2, 3, and write

$$x^1(t) = \sqrt{\Delta(0) - |\Delta(\tau)|}\;z_1 + \operatorname{sgn}\!\big(\Delta(\tau)\big)\sqrt{|\Delta(\tau)|}\;z_3,$$
$$x^1(t+\tau) = \sqrt{\Delta(0) - |\Delta(\tau)|}\;z_2 + \sqrt{|\Delta(\tau)|}\;z_3.$$
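This construction is easy to verify numerically: for any Δ(0) and Δ(τ) with |Δ(τ)| ≤ Δ(0), sampling z₁, z₂, z₃ and forming the two combinations above reproduces the required variances and covariance. A quick check, with arbitrary illustrative values of the two parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
d0, dtau = 1.0, -0.3   # illustrative Delta(0), Delta(tau) with |dtau| <= d0
z1, z2, z3 = rng.normal(size=(3, 1_000_000))
x_t   = np.sqrt(d0 - abs(dtau)) * z1 + np.sign(dtau) * np.sqrt(abs(dtau)) * z3
x_tau = np.sqrt(d0 - abs(dtau)) * z2 + np.sqrt(abs(dtau)) * z3
print(x_t.var(), x_tau.var())   # both ~ Delta(0) = 1.0
print(np.mean(x_t * x_tau))     # ~ Delta(tau) = -0.3
```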

C can then be computed by writing x = x⁰ + x¹ and integrating over these Gaussian variables,

$$C(\tau) = \int_0^{2\pi}\!\frac{d\theta}{2\pi}\int\! Dz_3 \left[\int\! Dz_1\,\phi\!\left(\sqrt{\Delta(0)-|\Delta(\tau)|}\,z_1 + \operatorname{sgn}(\Delta(\tau))\sqrt{|\Delta(\tau)|}\,z_3 + h\cos\theta\right)\right] \left[\int\! Dz_2\,\phi\!\left(\sqrt{\Delta(0)-|\Delta(\tau)|}\,z_2 + \sqrt{|\Delta(\tau)|}\,z_3 + h\cos(\omega\tau+\theta)\right)\right], \qquad (6)$$

where Dz_i = dz_i exp(−z_i²/2)/√(2π) for i = 1, 2, 3, and θ = θ̃ + ωt. Equation (6) determines C(τ) as a nonlinear function of Δ(τ). Substituting this expression into Eq. (5) provides a nonlinear differential equation for Δ(τ), with g, h, ω, and Δ(0) as parameters.
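Because z₁ and z₂ appear in separate inner integrals and are independent, Eq. (6) equals an expectation over independently sampled θ, z₁, z₂, and z₃, so it can be estimated by plain Monte Carlo. The sketch below (the helper name C_of_Delta, the sample count, and the example parameter values are our own choices) evaluates C(τ) for given Δ(0), Δ(τ), h, ω, and τ:

```python
import numpy as np

def phi(x, r0=0.1):
    return np.where(x <= 0.0, r0 * np.tanh(x / r0),
                    (2.0 - r0) * np.tanh(x / (2.0 - r0)))

def C_of_Delta(d0, dtau, h, w, tau, r0=0.1, samples=200_000, seed=2):
    # Monte Carlo estimate of Eq. (6): average over theta ~ U(0, 2*pi)
    # and independent standard Gaussians z1, z2, z3.
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, samples)
    z1, z2, z3 = rng.normal(size=(3, samples))
    a, b = np.sqrt(d0 - abs(dtau)), np.sqrt(abs(dtau))
    u = phi(a * z1 + np.sign(dtau) * b * z3 + h * np.cos(theta), r0)
    v = phi(a * z2 + b * z3 + h * np.cos(w * tau + theta), r0)
    return np.mean(u * v)

print(C_of_Delta(d0=0.05, dtau=0.02, h=0.03, w=0.25, tau=4.0))
```

Substituting such an evaluation of C into Eq. (5) and integrating in τ while adjusting Δ(0) is one way to implement the iterative numerical solution described below.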

Equation (5) has the form of the equation of motion for a classical particle of unit mass and position Δ(τ) moving under the influence of a force that depends on C. This force is a function of the current position of the particle, Δ(τ) [as well as on its initial position Δ(0)], and it contains terms representing external forcing that are periodic in τ with period 2π/ω. For weak inputs and g greater than but close to 1, Eq. (5) reduces to an undamped forced Duffing oscillator, although we do not restrict our analysis to this limit.

The analogous mechanics problem has to be solved with the initial condition Δ̇(0) = 0, which imposes a smoothness constraint on the correlation function. The initial value Δ(0) is fixed by requiring that Δ(0) ≥ |Δ(τ)| for all τ. We solved Eq. (5) numerically, using iterative methods to determine Δ(0), and found two types of solutions. The first is a solution in which Δ(τ) is a periodic function of τ with frequency ω, as in Fig. 1(f). This solution, which represents a network state that is fully entrained by the oscillatory input, exists for all values of I, ω, and g. The second solution is characterized by a Δ(τ) that decays for small τ and oscillates for large τ, so that Δ(0) is larger than the peaks of the large-τ oscillations, as in Fig. 1(e). This solution, which corresponds to a nonperiodic state only partially locked to the oscillatory drive, exists only for I smaller than a critical value that depends on ω and g. A linear perturbation analysis of the mean-field theory shows that this nonperiodic solution is stable throughout the regime where it exists. The periodic solution is unstable in this regime and is stable outside it. The mean-field analysis also shows that the nonperiodic solution corresponds to a state with “exponential” sensitivity to initial conditions (a positive Lyapunov exponent) [2], i.e., a chaotic state.

The resulting phase diagram marks the transition between the periodic and nonperiodic states (Fig. 2). Surprisingly, the transition curves are nonmonotonic functions of frequency and reveal a resonant frequency at which it is easiest to entrain the chaotic network with a periodic input (even though there is no peak in the power spectrum of the chaotic activity at this frequency). This frequency is roughly twice the inverse time constant of the chaotic fluctuations in the spontaneous state and, for g not too much greater than 1, the corresponding period can be an order of magnitude longer than the single-neuron time constant. Figures 2 and 3(b) indicate that internally generated fluctuations are most easily suppressed by stimuli oscillating in the few Hz range.

FIG. 2.

Phase-transition curves showing the critical input amplitude that divides regions of periodic and chaotic activity as a function of input frequency. (a) Transition curves for g = 1.5 (dashed curve) and g = 1.8 (solid curve). The stars indicate parameter values used in Figs. 1(b), 1(e), and 1(h) and in Figs. 1(c), 1(f), and 1(i). The inset traces show representative single-unit firing rates for the regions indicated. (b) A comparison of the transition curve computed by mean-field theory (open circles and line) and by simulating a network (filled circles) for r₀ = 1, g = 2 and, for the simulation, N = 10,000.

FIG. 3.

Signal and noise amplitudes as a function of input amplitude and frequency. (a) Definition of the signal and noise amplitudes, σ_osc and σ_chaos, respectively, in terms of the mean-subtracted correlation function. (b) Signal and noise amplitudes for f = 20 Hz and g = 1.5 as a function of input amplitude. The transition from chaotic to nonchaotic regimes occurs at I = 0.44. (c) Same as (b), but with fixed input amplitude (I = 0.2) and varying input frequency. In the region between 3 and 7 Hz, responses of the network are free from chaotic noise. In (b) and (c), open circles denote the signal amplitude and filled circles denote the noise amplitude.

The phase-transition curve shifts upward and to the right as g increases [Figs. 2(a) and 2(b)], indicating a higher resonant frequency as well as a larger critical input amplitude. This occurs because the chaotic activity for larger g has a higher amplitude, making it more difficult to suppress, and a smaller correlation time, leading to a higher resonant frequency. The location of the phase transition computed by mean-field theory is in good agreement with simulation results for large networks [Fig. 2(b)].

To study the implications of the phase transition further, we divide network responses into signal and noise components by separating the full response variance into two terms, σ_osc² and σ_chaos². For this purpose, we subtract the square of the average value of φ from C(τ) and consider the mean-subtracted correlation function C(τ) − [φ]². The signal amplitude σ_osc is the square root of the amplitude of the oscillatory part of this correlation function for large τ [Fig. 3(a)]. The noise amplitude σ_chaos is the square root of the difference between the value of the mean-subtracted correlation function at τ = 0 and the peak of its oscillations [Fig. 3(a)]. In the frequency domain, σ_osc² measures the total power in the network activity at the input frequency and its harmonics, whereas σ_chaos² measures the residual power.
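In terms of a numerically estimated correlation function, this decomposition reduces to reading two numbers off C(τ) − [φ]². The sketch below is our own helper; taking the second half of the lag range as “large τ” is an assumption, and the toy C(τ) is purely illustrative.

```python
import numpy as np

def signal_noise_amplitudes(C, mean_rate, tail_fraction=0.5):
    # sigma_osc: sqrt of the peak of the mean-subtracted correlation at
    # large tau; sigma_chaos: sqrt of the excess of its tau = 0 value
    # over that peak.
    Cm = C - mean_rate ** 2                        # C(tau) - [phi]^2
    peak = Cm[int(len(Cm) * tail_fraction):].max()
    sigma_osc = np.sqrt(max(peak, 0.0))
    sigma_chaos = np.sqrt(max(Cm[0] - peak, 0.0))
    return sigma_osc, sigma_chaos

# Toy example: an exponentially decaying chaotic part plus an oscillation.
taus = np.linspace(0.0, 200.0, 2001)
C = 0.01 + 0.002 * np.exp(-taus / 20.0) + 0.001 * np.cos(0.25 * taus)
print(signal_noise_amplitudes(C, mean_rate=0.1))  # ~ (0.032, 0.045)
```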

The signal amplitude increases linearly with the strength of the input (I) over the range considered in Fig. 3(b). The noise amplitude has a more complex nonlinear dependence, reflecting the presence of the phase transition in Fig. 2 and duplicating the effect seen in Fig. 1, in which a sufficiently strong input completely suppresses the chaotic component of the response. An interesting feature to note is that there is no clear signature of this chaotic-to-periodic transition in the signal amplitude. When plotted as a function of input frequency for fixed I, the signal amplitude shows relatively weak frequency dependence below about 4 Hz and then rolls off at higher frequencies [Fig. 3(c)]. This is a result of the low-pass filtering property of the network. The noise amplitude has a more interesting dependence. Between 0 and 3 Hz, the noise amplitude drops steeply and vanishes for frequencies between 3 and 7 Hz, rising again above 7 Hz. This double transition is a consequence of the nonmonotonicity of the phase-transition curves in Fig. 2. As in Fig. 3(b), there is no apparent indication of these transitions in the signal amplitude.

It has previously been noted that chaotic activity in neuronal networks can be suppressed by either white-noise [13] or constant [14] input in discrete-time models. However, discrete-time versions fail to capture the rich dynamics of the chaotic fluctuations and their effect on responses to time-dependent inputs. Suppression of spatiotemporal chaos by periodic forcing has also been reported [10–12], mostly through numerical simulations. In some of these simulations, an optimal frequency for complete locking similar to Fig. 2 has been observed [10]. Our results show that such a resonance effect occurs even when the power spectrum of the unforced chaotic fluctuations falls monotonically from zero frequency (Fig. 1). The networks we considered only describe the effects of fluctuations induced by local interactions, whereas additional sources of variability carried by long-range connections or by local sources of stochasticity are present in real neurons. Therefore, we predict that an experimental plot of response variability versus stimulus frequency will follow a nonzero U-shaped curve with a minimum in the several Hz range, rather than falling to zero as in Fig. 3(c).

Variability in cortical responses is sometimes described by adding stochastic noise linearly to a deterministic response [17,18]. Our results indicate that the interaction between intrinsically generated “noise” and responses to external drive is highly nonlinear. Near the onset of chaos, complete noise suppression can be achieved with relatively low amplitude inputs, weaker, for example, than the strength of the internal feedback. Thus, suppression of spontaneously generated noise in neural networks does not require stimuli so strong that they simply overwhelm fluctuations through saturation. A number of experiments indicate that stimuli as well as attention can suppress firing-rate variability [19–23] (but see [24]). Although other mechanisms for nonlinear suppression of neuronal variability have been proposed [25–30], our analysis indicates that such suppression is a general property of the interaction between internally generated dynamics and external drive in a nonlinear network.

Spontaneous fluctuations in neural activity occur across a wide range of time scales, with increasing variability over long time intervals [31] and increasing power at low frequencies, although resonances may appear [24,32]. In this work we have focused on firing-rate fluctuations using smooth rate-based dynamics, not spiking dynamics. Spiking neuron models with strong “balanced” interactions can exhibit chaotic firing patterns [2,3], but the fluctuations they produce have relatively flat power spectra associated with variability in short interspike intervals. It will be interesting to study stimulus effects in spiking network models that exhibit slow irregular modulations of firing rates.

In our model, weak correlations (on the order of 1/N) in activity fluctuations exist between all pairs of neurons. These correlations are distributed evenly between negative and positive values across the population. Slow spontaneous rate fluctuations in the cortex are often associated with long-range spatial correlations, especially in anesthetized animals [33,34]. As in our model, the observed spatial correlations are weaker than the firing-rate autocorrelations. In some cases, both negative and positive rate fluctuations are also observed, such that the mean value of the pairwise correlations across a population is much smaller than the width of the distribution of correlations [35–37]. However, the extent of the contribution of local network dynamics to the observed low-frequency correlations is unclear [22,34].

Neuronal selectivity to stimulus features is typically studied by determining how the mean response across experimental trials depends on various stimulus parameters. The presence of nonlinear interactions between stimulus-evoked and spontaneous fluctuating activities indicates that response components that are not locked to the temporal modulation of the stimulus may also be sensitive to stimulus parameters. In general, our results suggest that experiments studying the stimulus dependence of the noise component of neural responses could provide important insights into the nature and origin of activity fluctuations in neuronal circuits, as well as their role in neuronal information processing.

Acknowledgments

K.R. and L.F.A. were supported by the National Science Foundation Grant No. IBN-0235463 and the NIH Director’s Pioneer Award Program (5-DP1-OD114-02), part of the NIH Roadmap for Medical Research. H.S. was supported by grants from the Israel Science Foundation and the Israeli Ministry of Defence. This research was also supported by the Swartz Foundation through the Swartz Centers at Columbia and Harvard.

Contributor Information

Kanaka Rajan, Lewis-Sigler Institute for Integrative Genomics, Icahn 262, Princeton University, Princeton, New Jersey 08544, USA.

L. F. Abbott, Department of Neuroscience and Department of Physiology and Cellular Biophysics, College of Physicians and Surgeons, Columbia University, New York, New York 10032-2695, USA

Haim Sompolinsky, Racah Institute of Physics, Interdisciplinary Center for Neural Computation, Hebrew University, Jerusalem, Israel.

References
