Author manuscript; available in PMC: 2019 Feb 6.
Published in final edited form as: Neuron. 2014 Jun 18;82(6):1394–1406. doi: 10.1016/j.neuron.2014.04.045

Optimal control of transient dynamics in balanced networks supports generation of complex movements

Guillaume Hennequin 1,2,*, Tim P Vogels 1,3, Wulfram Gerstner 1
PMCID: PMC6364799  EMSID: EMS81160  PMID: 24945778

Abstract

Populations of neurons in motor cortex engage in complex transient dynamics of large amplitude during the execution of limb movements. Traditional network models with stochastically assigned synapses cannot reproduce this behavior. Here we introduce a class of cortical architectures with strong and random excitatory recurrence that is stabilized by intricate, fine-tuned inhibition, optimized from a control theory perspective. Such networks transiently amplify specific activity states and can be used to reliably execute multidimensional movement patterns. As in the experimental observations, these transients must be preceded by a steady-state initialization phase from which the network relaxes back into the background state by way of complex internal dynamics. In our networks, excitation and inhibition are as tightly balanced as recently reported in experiments across several brain areas, suggesting inhibitory control of complex excitatory recurrence as a generic organizational principle in cortex.

Introduction

The neural basis for movement generation has been the focus of several recent experimental studies (Churchland et al., 2010, 2012; Ames et al., 2014). In a typical experiment (Figure 1a), a monkey is trained to prepare a particular arm movement and execute it after the presentation of a go-cue. Concurrent electrophysiological recordings in cortical motor and premotor areas show an activity transition from spontaneous firing into a movement-specific preparatory state with firing rates that remain stable until the go-cue is presented (Figure 1b). Following the go-cue, network dynamics begin to display quickly changing, multiphasic firing rate responses that form spatially and temporally complex patterns and eventually relax towards spontaneous activation levels (Churchland and Shenoy, 2007).

Figure 1. Dynamical systems view of movement planning and execution.

Figure 1

(a) A typical delayed movement generation task starts with the instruction of what movement must be prepared. The arm must then be held still until the go cue is given, upon which the movement is performed.

(b) During the preparatory period, model neurons receive a ramp input (green). Following the go-cue, that input is withdrawn, leaving the network activity free to evolve from the initial condition set up during the preparatory period. Model neurons then exhibit transient oscillations (black) which drive muscle activity (red).

(c) Black-box view on movement generation. Muscles (red, right) are thought to be activated by a population of motor cortical neurons (“neural dynamical system”, middle). To prepare the movement, this network is initialized in a desired state by the slow activation of a movement-specific pool of neurons (green, left).

Recent studies (Afshar et al., 2011; Shenoy et al., 2011) have suggested a mechanism similar to a spring loaded box, in which motor populations could act as a generic dynamical system that is driven into specific patterns of collective activity by preparatory stimuli (Figure 1). When released, intrinsic population dynamics would commandeer the network activity and orchestrate a sequence of motor commands leading to the correct movement. The requirements for a dynamical system of this sort are manifold. It must be highly malleable during the preparatory period, excitable and fast when movement is triggered, and stable enough to return to rest after an activity transient. Moreover, the dynamics must be sufficiently rich to support complex movement patterns (Maass et al., 2002; Sussillo and Abbott, 2009; Laje and Buonomano, 2013).

How the cortical networks at the heart of this black box (Figure 1c) could generate such complex transient amplification through recurrent interactions is still poorly understood. Randomly connected, globally balanced networks of leaky integrate-and-fire neurons exhibit stable background states (van Vreeswijk and Sompolinsky, 1996; Tsodyks et al., 1997; Brunel, 2000; Vogels et al., 2005; Renart et al., 2010) but cannot autonomously produce the substantial yet reliable, spatially patterned departure from background activity observed in the experiments. Networks with strong recurrent pathways can exhibit ongoing, complex rate fluctuations beyond the population mean (Sompolinsky et al., 1988; Sussillo and Abbott, 2009; Rajan and Abbott, 2010; Litwin-Kumar and Doiron, 2012; Ostojic, 2014) but do not capture the transient nature of movement-related activity. Moreover, such rate dynamics are chaotic, and this sensitivity to noise is problematic in a situation in which the initial conditions dictate the subsequent evolution of the system. Chaos can be controlled either through continuous external feedback loops or through modifications of the recurrent connectivity itself (Sussillo and Abbott, 2009; Hoerzer et al., 2012; Laje and Buonomano, 2013). However, all of these models violate Dale's principle, according to which neurons can be either excitatory or inhibitory, but not of a mixed type. In other words, there is currently no biologically plausible network model to implement the spring loaded box of Figure 1c, i.e. a system that, prompted by well-chosen inputs, autonomously generates multiphasic transients of large amplitude.

Here we introduce a new class of neuronal networks composed of excitatory and inhibitory neurons which, similarly to chaotic networks, rely on strong and intricate excitatory synaptic pathways. Because traditional homogeneous inhibition is not enough to quench and balance chaotic firing rate fluctuations in these networks, we build a sophisticated inhibitory counter-structure that successfully dampens chaotic behavior but allows strong and fast break-out transients of activity. This inhibitory architecture is constructed with the help of an optimization algorithm that aims to stabilize the activity of each unit by adjusting the strength of existing inhibitory synapses, or by adding or pruning inhibitory connections. The result is a strongly connected, but non-chaotic, balanced network, which otherwise looks random. We refer to such networks as “Stability-Optimized Circuits”, or SOCs. We study both a rate-based formulation of SOC dynamics and a more realistic spiking implementation. We show that external stimuli can force these networks into unique and stable activity states. When input is withdrawn, the subsequent free transient dynamics are in good qualitative agreement with the motor cortex data on single-cell and network-wide levels.

We show that SOCs connect hitherto unrelated aspects of balanced cortical dynamics. The mechanism that underlies the generation of large transients here is a more general form of “balanced amplification” (Murphy and Miller, 2009), which was previously discovered in the context of visual cortical dynamics. Additionally, during spontaneous activity in SOCs, a “detailed balance” (Vogels and Abbott, 2009) of excitatory and inhibitory inputs emerges that is much finer than expected from shared population fluctuations (Okun and Lampl, 2008; Cafaro and Rieke, 2010; Renart et al., 2010), and finer also than what recently published inhibitory learning rules (Vogels et al., 2011; Luz and Shamir, 2012) can achieve, since those rules only adjust the weights of existing inhibitory synapses without altering the structure of the network itself. Driving such an exquisitely balanced system into a desired initial state with an external stimulus then leads to a momentary but dramatic departure from balance, demonstrating how realistically shaped cortical architectures can produce a large library of unique, transient activity patterns that can be decoded into motor commands.

Results

We are interested in studying how neural systems (Figure 1c) can produce the large, autonomous, and stable “spring box dynamics” as described above. We will first investigate how to construct the architectures that display such behavior and show how their activity can be manipulated to produce motor-like activity. We then discuss the implications of this new type of architecture for the joint dynamics of excitation and inhibition. Finally, we confirm our results in a more realistic spiking network.

Stability-optimized circuits (SOCs)

We use N=200 interconnected rate units (Dayan and Abbott, 2001; Gerstner and Kistler, 2002), of which 100 are excitatory and 100 are inhibitory. These “rate units” are best thought of as subgroups of statistically comparable neurons within some area of cortex. We describe the temporal evolution of their “potentials”, gathered in a vector x(t), according to

τ dx/dt = −x(t) + I(t) + WΔr(x,t)    (1)

where τ=200 ms, the combined time constant of membrane and synaptic dynamics, is set to match the dominant timescale in the data of Churchland et al., 2012. I(t)=ξ(t)+S(t) denotes all external inputs, i.e. an independent noise term ξ(t) and a specific, patterned external stimulation S(t). The vector Δr(x,t) contains the instantaneous single-unit firing rates, measured relative to a low level of spontaneous activity (r0=5 Hz). These rates are given by a nonlinear function Δri = g(xi) of the potentials (Figure 3e and Experimental Procedures), although we will also consider the linear case Δri ∝ xi in our analysis. The final term in Equation 1 accounts for the recurrent dynamics of the system due to its connectivity W. We focus here on connectivities that obey Dale’s principle, i.e. on weight matrices in which each column has a single sign: non-negative columns for excitatory units, non-positive columns for inhibitory units.
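As a concrete illustration, Equation 1 can be integrated with a simple forward-Euler scheme. The sketch below is ours, not the authors' code: the function name is hypothetical, the noise term ξ(t) is omitted, and the linear case Δri = xi is used.

```python
import numpy as np

def simulate(W, x0, I_ext=None, T=1.0, dt=1e-3, tau=0.2):
    """Forward-Euler integration of Equation 1:
    tau dx/dt = -x(t) + I(t) + W @ Delta_r(x).
    Linear rates Delta_r = x are assumed; I_ext is a constant external
    input vector (the noise term xi(t) is omitted for clarity)."""
    if I_ext is None:
        I_ext = np.zeros_like(x0, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    traj = np.empty((int(T / dt), x.size))
    for t in range(traj.shape[0]):
        x = x + (dt / tau) * (-x + I_ext + W @ x)
        traj[t] = x
    return traj
```

For an unconnected network (W = 0) with no input, each unit simply decays as exp(−t/τ); this is the baseline against which amplification is measured below.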

Figure 3. Transient amplification in SOCs.

Figure 3

(a) The energy E evoked by N=200 orthogonal initial conditions (a1, …, aN) as the network evolves linearly (Δri=xi) with no further input according to Equation 1. The energy (Equation 4) is normalized such that it equals 1 for an unconnected network (W=0) irrespective of the initial condition (dashed horizontal line). Each successive initial condition ai is defined as the one that evokes maximum energy, within the subspace orthogonal to all previous input patterns aj, j < i (Experimental Procedures). The black arrowhead indicates the mean, or the expected evoked energy E0 when the neurons are initialized in a random activity state.

(b) Dynamics of the SOC in the linear regime. Top: time-evolution of ‖Δr‖/√N - which measures the momentary spread of firing rates in the network above or below baseline - as the dynamics unfold from any of the 10 best or 10 worst initial states (same color code as in (a)). Initial states have a standard deviation of σ=1.5 Hz across the population. The dashed gray line shows σ × exp(-t/τ), i.e. the behavior of an unconnected pool of neurons. Bottom: sample firing rate responses of 10 randomly chosen neurons following initialization in state a1 or a199. The red line indicates the momentary population-averaged firing rate.

(c,d) Same as in (b), now with the nonlinear gain function shown in (e). Unlike in the linear case, the dynamics now depend on the spread σ of the initial firing rates across the network (1.5 Hz in (c) as in (b), 2 Hz in (d)). The larger this spread, the longer the duration of the population transient. When σ>3 Hz, the network initiates self-sustained chaotic activity (not shown).

(e) Single-unit input-output nonlinearity (solid line, Δri=g(xi) given by Equation 2) and its linearization (dashed line, Δri=xi).

Random balanced networks can have qualitatively different types of dynamics depending on the overall magnitude of W. With weak synapses, their activity decays rapidly back to baseline when perturbed (not shown). To yield a more interesting, qualitatively different behavior, one can strengthen the existing connections (Figure 2c, left; Experimental Procedures), increasing the radius of the characteristically circular distribution of eigenvalues (Rajan and Abbott, 2006; Figure 2b). Small perturbations of the network dynamics can now propagate chaotically across the network (Sompolinsky et al., 1988; Rajan et al., 2010; Ostojic, 2014), generating uncontrollable, switch-like fluctuations in the neurons’ firing rates even without external drive (Figure 2e).

Figure 2. Random inhibition-stabilized networks (SOCs).

Figure 2

(a) Schematic of a SOC. A population of rate units is recurrently connected, with strong and intricate excitatory pathways (red) that would normally produce unstable, chaotic activity. Stabilization is achieved through fine-tuned inhibitory feedback (blue).

(b) Eigenvalue spectrum of the connectivity matrix of a SOC (black), and that of the chaotic random network from which it is derived (gray). Stability requires all eigenvalues to lie to the left of the dashed vertical line. Note the large negative real eigenvalue, which corresponds to the spatially uniform activity pattern.

(c) Matrices of synaptic connectivity before (unstable) and after (SOC) stability optimization through inhibitory tuning. By design, the excitatory weights are the same in both matrices. Matrices were thinned out to 40×40 for visualization purposes. The bottom row shows the strengths of all the inhibitory input synapses to a single sample neuron, in the unstable network (gray) and in the corresponding SOC (black).

(d) Distribution of inhibitory weights in the unstable network (10% connection density, gray peak at Wij ≈ -3.18) and in the stabilized version (40% connection density, black). The mean inhibitory weight of all possible synapses is the same before and after optimization (≈ -0.318, gray and black arrowheads).

(e,f) Spontaneous activity in the unstable network (e) and in the SOC (f), for four example units. Note the difference in firing rate scales.

Here we construct non-chaotic networks that exhibit stable background activity but retain interesting dynamical properties. Starting with the above deeply chaotic network Wchaos, we build a second, stability-optimized circuit (SOC, Figure 2a). The excitatory connections are kept identical to those in the reference network (Figure 2c), but the inhibitory connections are no longer drawn randomly. Instead, they are precisely matched against the excitatory connectivity. This “matching” is achieved by an algorithmic optimization procedure that modifies the inhibitory weights and wiring patterns of the reference network, aiming to pull the unstable eigenvalues of W towards stability (Experimental Procedures, and Supplementary Movie 1). The total number of inhibitory connections is increased and the distribution of their strengths is wider, but the mean inhibitory weight is kept the same (Figure 2d). The resulting SOC network is as strongly connected as the reference chaotic network, but is no longer chaotic, as indicated by the distribution of its eigenvalues in the complex plane, all of which lie well inside the stable region (Figure 2b, black dots). Accordingly, the background activity is now stable (Figure 2f), with small noisy fluctuations around the mean caused by ξ(t). Shuffling the optimal inhibitory connectivity results in chaotic dynamics similar to the reference network (not shown), indicating that it is not the broad, sparse distribution of inhibitory weights but the precise inhibitory wiring pattern that stabilizes the dynamics.
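A minimal sketch of the starting point of this construction: a sparse random weight matrix obeying Dale's principle, and the stability criterion used throughout (for the linearized dynamics τ dx/dt = −x + Wx, stability requires every eigenvalue of W to have real part below 1). The parameter values (density p, weight w, inhibitory dominance γ) are illustrative assumptions; the actual optimization of the inhibitory weights is described in the Experimental Procedures and not reproduced here.

```python
import numpy as np

def random_dale_matrix(n_exc=100, n_inh=100, p=0.1, w=0.9, gamma=3.0, seed=0):
    """Sparse random connectivity obeying Dale's principle:
    excitatory columns are non-negative, inhibitory columns non-positive.
    (p, w and the inhibitory dominance gamma are illustrative values.)"""
    rng = np.random.default_rng(seed)
    n = n_exc + n_inh
    mask = rng.random((n, n)) < p
    W = np.zeros((n, n))
    W[:, :n_exc] = w * mask[:, :n_exc]            # excitatory columns
    W[:, n_exc:] = -gamma * w * mask[:, n_exc:]   # inhibitory columns
    np.fill_diagonal(W, 0.0)                      # no self-connections
    return W

def spectral_abscissa(W):
    """Largest real part among the eigenvalues of W. The linearized
    dynamics tau dx/dt = -x + W x are stable iff this value is < 1,
    i.e. iff all eigenvalues lie left of the dashed line in Figure 2b."""
    return float(np.max(np.linalg.eigvals(W).real))
```

A stability-optimization procedure in the spirit of the one described above would repeatedly adjust inhibitory entries of W so as to push `spectral_abscissa(W)` below 1 while leaving the excitatory columns untouched.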

SOCs exhibit complex transient amplification

To test whether SOCs can produce the type of complex transient behavior seen in experiments (Churchland and Shenoy, 2007; Churchland et al., 2012; cf. also Figure 1), we momentarily clamp each unit to a specific firing rate, and observe the network as it relaxes to the background state (later we will model the preparatory period explicitly). Depending on the spatial pattern of initial stimulation, the network activity exhibits a variety of transient behaviors. Some initial conditions result in fast monotonous decay towards rest, while others drive large transient deviations from baseline rate in most neurons.

To quantify this amplifying behavior of the network in response to a stimulus, we introduce the notion of “evoked energy” E(a), measuring both the amplitude and duration of the collective transient evoked by initial condition a for a given SOC compared to an unconnected network (Experimental Procedures). Of all initial conditions Δr with constant power σ² = ∑i Δri²/N, we find the one that maximizes this energy and call it a1. We repeat this procedure among all patterns orthogonal to a1 to obtain the second best pattern a2, and iterate until we have filled a full basis of N=200 orthogonal initial conditions {a1, a2, …, aN} (an analytical solution exists for the linear case, Δri ∝ xi; cf. Experimental Procedures). A large set of these orthogonal initial conditions are transiently amplified by the connectivity of the network, with the strongest states evoking energies ~25 times greater than expected from the exponential decay of activity in unconnected neurons (Figure 3a). For these strongly amplifying states, the population-averaged firing rate remains roughly constant during the transient (red line in Figure 3b, middle), but the average absolute deviation from baseline firing rate per unit can grow dramatically (Figure 3b, top), because some units become more active and others become less active than baseline. Amplifying behavior progressively attenuates but subsists for roughly the first half of the basis (a1, a2, …, a100). Eventually, amplification disappears, and even turns into active dampening of the initial condition (Figure 3a, green dots). For a200, the least amplifying initial condition, the return to rest occurs three times faster than it would in unconnected neurons (Figure 3b). Here, the least-preferred state a200 corresponds to the uniform spatial mode of activity (1, 1, …, 1), i.e. the trivial case in which all neurons are initialized slightly above (or below) their baseline rate.

Finally, if we increase the firing rate standard deviation σ in the initial condition, such that a substantial number of (excitatory and inhibitory) neurons will reach lower saturation and stop firing during the transient, the duration of the response increases (Figure 3c,d). For σ>3 Hz the network response begins to self-sustain in the chaotic regime (not shown). This behavior is beyond the scope of our study, and in the following we set σ=1.5 Hz, which results in transients of ~1 second duration. Note also that we did not observe a return to chaotic behavior in the full spiking network, even though firing rates in the initial conditions deviated more dramatically from baseline.
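In the linear regime, the energy-ranked basis {a1, …, aN} described above has a closed-form characterization from control theory: E(a) ∝ aᵀQa, where Q is the observability Gramian of the linearized dynamics, so the basis consists of the eigenvectors of Q. The sketch below is our reconstruction of this standard route (the analytical solution mentioned in the text); the normalization convention is an assumption.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def evoked_energy_basis(W, tau=0.2):
    """For the linear dynamics tau dx/dt = (W - I) x, the evoked energy
    E(a) = integral of ||x(t)||^2 dt with x(0) = a equals a^T Q a, where
    Q solves the Lyapunov equation A^T Q + Q A = -I with A = (W - I)/tau.
    The eigenvectors of Q, sorted by decreasing eigenvalue, form the
    orthogonal basis a_1, ..., a_N ranked by evoked energy; energies are
    normalized so that an unconnected network (W = 0) gives 1."""
    n = W.shape[0]
    A = (W - np.eye(n)) / tau
    Q = solve_continuous_lyapunov(A.T, -np.eye(n))
    evals, evecs = np.linalg.eigh(Q)
    order = np.argsort(evals)[::-1]        # strongest amplification first
    return evals[order] / (tau / 2.0), evecs[:, order]
```

For W = 0 every initial condition evokes energy 1, while a strongly non-normal (e.g. feedforward) W yields energies well above and below 1, mirroring the spread in Figure 3a.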

SOC dynamics are consistent with experimental data

In Churchland et al., 2012, monkeys were trained to perform 27 different cued and delayed arm movements (Figure 1a). The activity of the neurons recorded during this task (Figure 4a) displayed transient activity similar to the responses of appropriately initialized SOCs (Figure 3c). To model this behavior, we assume that each of the 27 instructed movements is associated with a pool of prefrontal cortical neurons (Figure 1c) feeding the motor network through sets of properly tuned input weights (Experimental Procedures). For a given movement, the corresponding command pool becomes progressively more active during the one-second-long delay period (Amit and Brunel, 1997; Wang, 1999). Remarkably, this simple input drives the SOC into a stable steady state (Figure 4b). By adjusting the movement-specific input weights, we can manipulate this steady state and force the network into a specific spatial arrangement of activity. This is not possible in generic chaotic networks in which external inputs are overwhelmed by strong and uncontrolled recurrent activity. We chose the input weights such that, by the end of the delay period, the network arrives at a state that is one of 27 different linear combinations of a1 and a2, i.e. the two orthogonal activity states that evoke the strongest collective responses. The go cue quickly silences the command pool, leaving the network free to depart from its preparatory state and to engage in transient amplification. The resulting recurrent dynamics produce strong, multiphasic, and movement-specific responses in single units (Figure 4b), qualitatively similar to the data.

Figure 4. SOCs agree with experimental data.

Figure 4

(a) Experimental data, adapted with permission from Churchland et al., 2012. Each trace denotes the trial-averaged firing rate of a single cell (two sample cells are shown here) during a delayed reaching task. Each trace corresponds to one of 27 different movements. Vertical scale bars denote 20 spikes/sec. The go cue is not explicitly marked here, but occurs about 200 ms before movement onset.

(b) Time-varying firing rates of two neurons in the SOC, for 27 “conditions”, each characterized by a different collective steady state of preparatory activity (see text).

(c) Experimental data adapted from Churchland et al., 2012, showing the first 200 ms of movement-related population activity projected onto the top jPC plane. Each trajectory corresponds to one of the 27 conditions mentioned in (a).

(d) Same analysis as in (c), for the SOC.

In the data of Churchland et al., the complexity of the single-neuron multiphasic responses was in fact hiding orderly rotational dynamics on the population level. A plane of projection could be found in which the vector of population firing activity (Δr(t) in our model) would start rotating after the go cue, and consistently rotate in the same direction for all movements (Figure 4c). Our model, analyzed with the same dynamical variant of principal component analysis (jPCA, Churchland et al., 2012; Experimental Procedures) displays the same phenomenon (Figure 4d).
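The core of jPCA is a least-squares fit of the population dynamics by a rotation, i.e. Ẋ ≈ XM with M constrained to be skew-symmetric. One compact way to solve this (our sketch; the published jPCA pipeline also includes PCA preprocessing and other details omitted here) is to note that the stationarity condition of the constrained least-squares problem is a Sylvester equation:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def jpca_plane(X, dX):
    """Fit dX ~ X @ M with M skew-symmetric (rows of X are population
    states, rows of dX their time derivatives). Setting the gradient of
    ||dX - X M||^2 to zero over the skew-symmetric subspace gives
    (X^T X) M + M (X^T X) = X^T dX - dX^T X, a Sylvester equation whose
    unique solution is itself skew-symmetric. The top rotational plane
    is spanned by the real and imaginary parts of the eigenvector of M
    with the largest imaginary eigenvalue."""
    S = X.T @ X
    C = X.T @ dX - dX.T @ X          # skew-symmetric right-hand side
    M = solve_sylvester(S, S, C)
    evals, evecs = np.linalg.eig(M)
    u = evecs[:, np.argmax(evals.imag)]      # fastest rotation
    plane = np.column_stack([u.real, u.imag])
    return M, plane
```

Projecting the trajectories Δr(t) onto the returned plane yields rotational traces of the kind shown in Figure 4c,d.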

SOCs can generate complex movements

The complicated, multiphasic nature of the firing rate transients in SOCs suggests the possibility of reading out equally complex patterns of muscle activity. We illustrate this idea in a task in which the joint activation of two muscles must produce one of two target movements (“snake” or “butterfly” in Figure 5), within 500 ms following the go cue. Similarly to Figure 4, the preparatory input for the “snake” (resp. “butterfly”) movement is chosen such that, by the arrival of the go cue, the network activity matches the network's preferred initial condition a1 (resp. a2). Two readout units (“muscles”) compute a weighted sum of all neuronal activities in the network that we take to directly reflect the horizontal and vertical coordinates of the movement. Simple least-squares regression learning of the output weights (Experimental Procedures) can map the activity following each command onto the correct trajectory (compare the five test trials in Figure 5a).

Figure 5. Generation of complex movements through SOC dynamics.

Figure 5

(a) Firing rates versus time for 10 neurons of the SOC, as the system prepares and executes either of the two target movements (snake, left or butterfly, right). Five test trials are shown for each neuron. The corresponding muscle trajectories following the go cue are shown for the same five test trials (thin traces) and compared to the target movement (black trace and dots).

(b) Same as in (a), for a weakly connected (untuned) random balanced network (Experimental Procedures).

We conclude that the SOC's single neuron responses form a set of basis functions that is rich enough to allow read-out of non-trivial movements. This is not possible in untuned, chaotic balanced networks, whose high sensitivity to noise must first be tamed by exquisite feedback loops or supervised learning of lateral connections (Sussillo and Abbott, 2009; Hoerzer et al., 2012; Laje and Buonomano, 2013). Further, in balanced networks with weak connections, each neuron’s activity decays exponentially: this redundancy prevents the network from robustly learning the snake and butterfly trajectories (Figure 5b).
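The readout described above amounts to ordinary least-squares regression of two output weight vectors. A minimal sketch follows (the small ridge term is our addition for numerical robustness; the text specifies simple least squares):

```python
import numpy as np

def fit_readout(R, Y, reg=1e-3):
    """Least-squares fit of the output ('muscle') weights mapping network
    rates R (time x neurons) onto target trajectories Y (time x 2, the
    horizontal and vertical movement coordinates). A small ridge term
    (our addition) keeps the solution stable when rates are correlated."""
    n = R.shape[1]
    return np.linalg.solve(R.T @ R + reg * np.eye(n), R.T @ Y)
```

The muscle trajectories on test trials are then simply `R_test @ W_out`, as in Figure 5a.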

Interaction between excitation and inhibition in SOCs

To understand the mechanism by which SOCs amplify their preferred inputs, we dissociated the excitatory (cE) and inhibitory (cI) synaptic inputs each unit received from other units in the network in the absence of specific external stimulation (S(t)=0). We quantified the E/I balance by rEI(t), the momentary Pearson correlation coefficient between cE and cI across the network population. The preferred initial states of the SOC momentarily produce substantially negative E/I input correlations (Figure 6a), indicating an average mismatch between E and I inputs. Balance is then quickly restored by internal network dynamics, with rEI(t) reaching ~0.8 at the peak of the transient triggered by initial condition a1. The effect subsists, although progressively attenuated, for roughly the first 100 preferred initial states (a1, a2, …, a100), which are also the initial states that trigger amplified responses.
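The balance index rEI can be computed directly from the weight matrix and the momentary rates. In this sketch (ours; in particular, the sign convention of correlating excitation against the magnitude of inhibition is an assumption), +1 indicates perfect momentary balance across the population:

```python
import numpy as np

def r_EI(W, r, n_exc):
    """Momentary E/I balance index: the Pearson correlation, across the
    population, between the excitatory and (sign-flipped) inhibitory
    recurrent input each unit receives at one instant. Columns
    0..n_exc-1 of W are excitatory, the remaining columns inhibitory;
    r is the vector of momentary firing rates. Returns +1 when
    inhibition exactly mirrors excitation across units."""
    cE = W[:, :n_exc] @ r[:n_exc]   # excitatory recurrent input
    cI = W[:, n_exc:] @ r[n_exc:]   # inhibitory recurrent input (<= 0)
    return float(np.corrcoef(cE, -cI)[0, 1])
```

Evaluating this index along a simulated trajectory reproduces the time courses plotted in Figure 6a.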

Figure 6. Precise balance of excitation and inhibition in SOCs.

Figure 6

The network is initialized in state a1 (left), a10 (middle) or a100 (right), and runs freely thereafter. The amplitude of the initial condition is chosen weak enough for the dynamics of amplification to remain linear (cf. Figure 3).

(a) Temporal evolution of the Pearson correlation coefficient rEI between the momentary excitatory and inhibitory recurrent inputs across the population.

(b) Corresponding time course of the correlation coefficients between the network activity and the initial state, calculated from the activity of the entire population (black), of the excitatory subpopulation (red), and of the inhibitory one (blue).

(c) Temporal evolution of the correlation coefficient between the network activity when initialized in state ai (i=1 (left), 10 (middle) or 100 (right)) and when initialized in a different state aj (j ≠ i, j<100). Solid lines denote the average across j, and the dashed flanking lines indicate one standard deviation. Small values indicate that the responses to the various initial conditions ai are roughly decorrelated.

(d) Black: spontaneous fluctuations around baseline rate of a sample unit in the network. The corresponding rate distribution is shown on the right (black), and compared to the distribution obtained if the unit were not connected to the rest of the network (gray). The green line denotes the momentary population average rate, which fluctuates much less.

(e) Histogram of pairwise correlations between neuronal firing rates estimated from 100 seconds of spontaneous activity. The black triangular mark indicates the mean (~0.014).

(f) Excitatory (red) and inhibitory (blue) inputs taken in the same sample unit (top) or in a pair of different units (bottom), and normalized to z-scores. The corresponding Pearson correlation coefficients are indicated above each combination, and computed from 100 seconds of spontaneous dynamics.

(g) Brown: lagged cross-correlogram of excitatory and inhibitory inputs to single units, each normalized to z-score (cf. (f), top row). The solid line is an average across all neurons; flanking lines denote ± 1 std. Inhibition lags behind excitation by a few milliseconds. Cross-correlating the E input into one unit with the I input into another unit (cf. (f), bottom row) yields the black curve, which is an average over 1’000 randomly chosen such pairs in the SOC.

Notably, the patterns of neuronal activity after 100 ms of recurrent processing have a larger amplitude than the initial condition, but bear little spatial resemblance to it. This is reflected by a rapid decay (within 100 ms) of the correlation coefficient between the momentary network activity and the initial state (Figure 6b, black). However, considering the excitatory and inhibitory populations separately shows that the excitatory subpopulation remains largely in the same spatial activity mode throughout the transient, i.e. units that were initially active (resp. inactive) tend to remain active (resp. inactive) throughout the relaxation (Figure 6b, red). In contrast, the inhibitory subpopulation becomes negatively correlated with its initial pattern after only 60 ms (Figure 6b, blue). In other words, it is mostly the swift reversal of inhibitory activity that quenches a growing excitatory transient and pulls the system back to rest.

The amplifying dynamics of excitation and inhibition seen on the level of transient responses to some initial conditions also shape the spontaneous background activity in SOCs (Figure 2f and Figure 6d). In the absence of additional stimuli, the rate units are driven by private noise ξ(t) (Experimental Procedures), such that firing rate fluctuations can be observed even in the unconnected case (W=0; Figure 6d, gray histogram). The recurrent SOC connectivity amplifies these unstructured fluctuations by one third (Figure 6d, black histogram), because the noise stimulates each of the ai modes evenly, and although some modes are suppressed by the recurrent dynamics and others are amplified, the net result is a mild amplification (Figure 3a, black arrowhead). Furthermore, since only a few activity modes experience very strong amplification, the resulting distribution of pairwise correlations among neurons is wide with a small positive mean (Figure 6e).

SOCs also exhibit an exquisite temporal match between excitatory and inhibitory inputs to single units during spontaneous activity (Figure 6f). The correlation between these two input streams averages to ~0.66 across units, because any substantial mismatch between recurrent E and I inputs is instantly converted into a pattern of activity in which those inputs match again (cf. Figure 6a). The amplitude of such reactions is larger than the typical response to noise, so the network is constantly in a state of detailed E/I balance (Vogels and Abbott, 2009). Furthermore, we have seen that it is mostly the spatial pattern of inhibitory activity that reverses during the course of amplification to restore the balance, while the excitatory activity is much less affected (Figure 6b). Thus, during spontaneous activity, inhibitory inputs are expected to lag behind excitatory inputs by a few milliseconds, which can indeed be seen in their average cross-correlogram (Figure 6g) and has also been observed experimentally (Okun and Lampl, 2008; Cafaro and Rieke, 2010).
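The lag between excitatory and inhibitory input streams can be read off a lagged cross-correlogram of the z-scored traces, as in Figure 6g. A sketch (hypothetical function name; a peak at a positive lag means inhibition trails excitation by that delay):

```python
import numpy as np

def ei_lag(cE, cI, dt=1e-3, max_lag=0.02):
    """Lagged cross-correlogram between z-scored E and I input traces.
    Returns (lags in seconds, correlation at each lag); a peak at a
    positive lag means inhibition trails excitation by that delay."""
    zE = (cE - cE.mean()) / cE.std()
    zI = (cI - cI.mean()) / cI.std()
    L = int(round(max_lag / dt))
    lags = np.arange(-L, L + 1)
    cc = []
    for l in lags:
        a = zE[max(0, -l): len(zE) - max(0, l)]
        b = zI[max(0, l): len(zI) - max(0, -l)]
        cc.append(float(np.mean(a * b)))
    return lags * dt, np.array(cc)
```

Averaging such correlograms over units (and, for the control, over E/I pairs taken from different units) gives the brown and black curves of Figure 6g.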

The small temporal co-fluctuations in the firing rates of the excitatory and inhibitory populations are known to translate into correlated excitatory and inhibitory inputs to single neurons, in densely connected circuits (Renart et al., 2010). Here, interestingly, excitatory and inhibitory inputs are correlated more strongly than expected from the magnitude of such shared population fluctuations. This can be seen by correlating the excitatory input stream taken in one unit with the inhibitory input stream taken in another unit (Figure 6f, bottom row). Such correlations average to ~0.26 only (to be compared with ~0.66 above; Figure 6g).

Spiking implementation of a SOC

So far we have described neuronal activity on the level of firing rates. An important question is whether the dynamical features of rate-based SOCs are borne out in more realistic models of interconnected spiking neurons. To address this issue, we built a large-scale model of a SOC composed of 15’000 (12’000 exc. + 3’000 inh.) leaky integrate-and-fire model neurons. The network was structured such that each neuron belonged to one of 200 excitatory or 200 inhibitory small neuron subgroups (of size 60 and 15 respectively), whose average momentary activities can be interpreted as the “rate variables” discussed thus far.

In order to keep the network in the asynchronous and irregular firing regime, part of its connectivity was random and sparse, similar in this respect to traditional models (van Vreeswijk and Sompolinsky, 1996; Brunel, 2000; Vogels et al., 2005; Renart et al., 2010). In addition to these random, fast synapses, slower synapses were added that reflected the structured SOC connectivity between subgroups of neurons. The connectivity pattern between subgroups was given by a 400 × 400 SOC matrix obtained similarly to W in Figure 2. The value of a matrix element Wij set the probability that a neuron in subgroup j be chosen as a presynaptic partner to another neuron in group i (Experimental Procedures). Overall, the average connection probability between spiking neurons was 0.2.
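The expansion from the 400 × 400 group-level SOC matrix to neuron-level connectivity can be sketched as block-wise Bernoulli sampling, with |Wij| (suitably rescaled) as the connection probability and the sign of Wij setting the synapse type. The rescaling factor p_scale below is a hypothetical normalization, not a value from the paper:

```python
import numpy as np

def sample_group_connections(W_soc, sizes, p_scale=0.1, seed=0):
    """Expand a group-level SOC matrix into neuron-level connectivity.
    min(1, p_scale * |W_soc[i, j]|) is the probability that a neuron of
    subgroup j connects to a neuron of subgroup i; the sign of
    W_soc[i, j] sets the synapse type (+1 excitatory, -1 inhibitory).
    sizes[k] is the number of neurons in subgroup k; p_scale is a
    hypothetical normalization."""
    rng = np.random.default_rng(seed)
    starts = np.concatenate([[0], np.cumsum(sizes)])
    n = int(starts[-1])
    A = np.zeros((n, n))
    for i, j in np.ndindex(W_soc.shape):
        p = min(1.0, p_scale * abs(W_soc[i, j]))
        block = rng.random((sizes[i], sizes[j])) < p
        A[starts[i]:starts[i + 1], starts[j]:starts[j + 1]] = \
            np.sign(W_soc[i, j]) * block
    return A
```

In the full model the sampled structured connections would be given the slow synaptic kinetics of Figure 7b, on top of the fast random background connectivity.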

The spiking SOC operated in a balanced regime, with large subthreshold membrane potential fluctuations and occasional action potential firing (Figure 7a) with realistic rate and interspike interval statistics (Figure 7c). Spiking events were fully desynchronized on the level of the entire population, whose momentary activity was approximately constant at ~6 Hz.

Figure 7. Transient dynamics in a spiking SOC.

Figure 7

(a) The network is initialized in a mixture of its top two preferred initial states during the preparatory period. Top: raster plot of spiking activity over 200 trials for three cells (red, green, blue). Middle: temporal evolution of the trial-averaged activity of those cells (same color code), and that of the overall population activity (black). Rate traces were computed over 1'000 trials and smoothed with a Gaussian kernel (width 20 ms), to reproduce the analysis of Churchland et al., 2012. Bottom: sample voltage trace of a randomly chosen neuron.

(b) Fast (black) and slow (brown) synaptic PSPs, corresponding to random and structured connections in the spiking circuit, respectively.

(c) Distribution of average firing rates (top) and ISI CVs (bottom) during spontaneous activity.

(d) Trial-averaged firing rate traces for a single sample cell, when the preparatory input drives the SOC into one of 27 random mixtures of its first and second preferred initial conditions. Averages were computed over 1'000 trials, and smoothed as described in (a).

(e) First 200 ms of movement-related population activity, projected onto the top jPC plane. Each trajectory corresponds to a different initial condition in (d), using the same color code.

See also Figure S1.

Similar to our rate-based SOCs, the spiking network could be initialized in any desired activity state through the injection of specific ramping input currents into each neuron (Figure 7a). The go cue triggered sudden input withdrawal, resulting in large and rich transients in the trial-averaged spiking activities of single cells (Figure 7a, middle), which lasted for about 500 ms, and occurred reliably despite substantial trial-by-trial spiking variability in the preparation phase.

The trial-averaged firing rate responses to 27 different initial conditions, chosen in the same way as in Figure 4, as well as the diversity of single cell responses, were qualitatively similar to the data in Churchland et al., 2012 (Figure 7 and Supplementary Figure 1). When projected onto the top jPC plane, the population activity also showed orderly rotations, as it did in our rate SOC (Figure 7e).

During spontaneous activity, subgroups of neurons in the SOC display large, slow and graded activity fluctuations (Figure 8a), which are absent from a control, traditional random network with equivalent synaptic input statistics (Supplementary Figure 2; Experimental Procedures). Moreover, individual pairwise correlations between subgroup activities in the spiking SOC are accurately predicted by a linear rate model similar to Equation 1 (Figure 8b). Crucially, this rate model is non-chaotic, as the matrix that describes connectivity among subgroups has no eigenvalue larger than one (by construction of the SOC). We emphasize that our spiking network uses deterministic I&F neurons without external noise, so that the spontaneous activity fluctuations seen in individual subgroups must have been intrinsically generated, similar to the voltage fluctuations seen in classical balanced networks (van Vreeswijk and Sompolinsky, 1996; Renart et al., 2010). This is in contrast to the rate-based model where fluctuations arose from the amplification of an external source of noise (Equation 1).

Figure 8. Spontaneous activity in spiking SOCs.

Figure 8

(a) Top: raster plot of spontaneous spiking activity in the SOC. Only the neurons in the first 5 subgroups (300 neurons) are shown. Bottom: momentary activity of the whole population (black), and of the second (green) and third (magenta) subgroups. Traces were smoothed using a Gaussian kernel of 20 ms width.

(b) Pairwise correlations between instantaneous subgroup firing rates in the SOC, as empirically measured from a 1'000 second-long simulation (x-axis) versus theoretically predicted from a linear stochastic model (y-axis). Rate traces were first smoothed using a Gaussian kernel (20 ms width) as in (a). Distributions of pairwise correlations are shown at the top, for the SOC (black) and for a control random network with equivalent synaptic input statistics (brown; Experimental Procedures).

(c) Distributions of pairwise spike correlations in the SOC (top) and in the control random network (bottom), between pairs of neurons belonging to the same subgroup (blue), or to different subgroups (black). Spike trains were first convolved with a Gaussian kernel of width 100 ms. Gray curves were obtained by shuffling the ISIs, thus destroying correlations while preserving the ISI distribution.

(d) Distributions of subthreshold membrane potential correlations. Colors have the same meaning as in (c). Voltage traces were cut off at the spike threshold. Gray curves were obtained by shuffling the time bins independently for each voltage trace.

(e) Distributions of pairwise correlations between the slow excitatory and inhibitory currents, taken in the same cells (red) or in pairs of different cells (black).

(f) Full lagged cross-correlograms between the slow excitatory and inhibitory currents, taken in the same cells (red) or in pairs of different cells. Thick lines denote averages over such E/I current pairs across the network, and thin flanking lines denote ± 1 std. The peak at negative time lag corresponds to E currents leading I currents.

See also Figure S2.

Consistent with the effective rate picture, the distribution of spike correlations in the SOC (Figure 8c) is wide with a very small positive mean (ρ ≈ 0.0027), indicating that cells fire asynchronously. The same is true in the control random network (ρ ≈ 0.0005; Renart et al., 2010). However, within SOC subgroups, spiking was substantially correlated (Figure 8c, blue; ρ ≈ 0.17), particularly on the 100 ms timescale, suggesting that these correlations can be attributed to joint activity fluctuations of all neurons in a given subgroup. Thus, in situations where the subgroup partitioning is unknown a priori (e.g. in actual experiments), these large correlations, though rarely sampled, could in principle be used to identify the subgroups by clustering. Not surprisingly, membrane potentials followed a similar pattern of correlations (Figure 8d).

Importantly, the detailed balance prediction made above for the rate-based scenario (Figure 6f,g) remains true on the level of single cells in the spiking network. Slow E and I inputs (corresponding to the structured SOC recurrent synapses) to single neurons are substantially more correlated (r ≈ 0.24) than pairs of E and I currents taken from different neurons (r ≈ 0.12; compare red and black in Figure 8e,f). This is not true in the control random network, in which the balance of excitation is merely a reflection of the synchronized fluctuations of the excitatory and inhibitory populations as a whole.

Discussion

The motor cortex data of Churchland et al. (2012) showcase two seemingly conflicting characteristics. On the one hand, motor cortical areas appear to be precisely controllable during movement preparation, and dynamically stable with firing rates evolving well below saturation during movement execution. In most network models, such stability arises from weak recurrent interactions. On the other hand, the data show rich transient amplification of specific initial conditions, a phenomenon that requires strong recurrent excitation. To reconcile these opposing aspects, we introduced and studied the concept of “stability-optimized circuits” (SOCs), broadly defined as precisely balanced networks with strong and complex recurrent excitatory pathways. In SOCs, strong excitation mediates fast activity breakouts following appropriate input, while inhibition keeps track of the activity and acts as a retracting spring force. In the presence of intricate excitatory recurrence, inhibition cannot instantaneously quench such activity growth, leading to transient oscillations as excitation and inhibition waltz their way back to a stable background state. This results in spatially and temporally rich firing rate responses, qualitatively similar to those recorded by Churchland et al. (2012).

To build SOCs, we used progressive optimal refinement of the inhibitory synaptic connectivity within a normative, control-theoretic framework. Our method makes use of recent techniques for stability optimization (Vanbiervliet et al., 2009), and can in principle produce stability-optimized circuits from any given excitatory connectivity. In simple terms, we iteratively refined both the absence/presence and the strengths of the inhibitory connections to pull all the unstable eigenvalues of the network's connectivity matrix back into the stable regime (Figure 2b). Even though we constrained the procedure to yield plausible network connectivity, notably one that respects Dale's law (Dayan and Abbott, 2001, chap. 7), it does not constitute, and is not meant to be, a synaptic plasticity rule. However, the phenomenology achieved by recent models of inhibitory synaptic plasticity (Vogels et al., 2011; Luz and Shamir, 2012; Kullmann et al., 2012) is similar to, though cruder than, that of our SOCs. This raises the possibility that nature solves the problem of network stabilization through a form of inhibitory plasticity, potentially aided by appropriate pre- and re-wiring during development (Terauchi and Umemori, 2012). The shortcut of minimizing the “smoothed spectral abscissa” that we used here to stabilize the network may thus be difficult to observe experimentally in its current control-theoretic form, but may be implemented through several sequential network mechanisms.

In a protocol qualitatively similar to the experimental design of Churchland et al., 2012 (Figure 1), we could generate complex activity transients by forcing the SOC into one of a few specific preparatory states through the delivery of appropriate inputs, which were then withdrawn to release the network into free dynamics (Figure 4). These “engine dynamics” (Shenoy et al., 2011) could easily be converted into actual muscle trajectories. Simple linear readouts, with weights optimized through least-squares regression, were sufficient to produce fast and elaborate 2-dimensional movements (Figure 5). Three aspects of the SOC dynamics make this possible. First, the firing rates strongly deviate from baseline during the movement period, effectively increasing the signal-to-noise ratio in the network response. Second, the transients are multiphasic (Figure 4b), as opposed to simple rise-and-decay, allowing the readouts to capture multi-curved movements without overfitting. Third, the preferred initial conditions of the SOC are converted into activity modes that are largely non-overlapping (Figure 6c). Thus, not only is the system highly excitable from a large set of states, but those states produce responses that are distinguishable from one another, ensuring that different motor commands can be mapped onto distinct muscle trajectories (Figure 5).

Relation to balanced amplification and relevance to sensory circuits

Transient amplification in SOCs is an extended, more intricate form of “balanced amplification”, first described by Murphy and Miller, 2009 in a model of V1 synaptic organization. In their model, small patterns of spatial imbalance between excitation (E) and inhibition (I), or “difference modes”, drive large activity transients in which neighboring E and I neurons fire in unison (“sum modes”). Due to the absence of a topology in SOCs, it is impossible to tell which neuron is a neighbor to which, making sum and difference modes difficult to define. Nevertheless, they can be understood more broadly as patterns of average balance and imbalance in the E and I synaptic inputs to single cells. With this definition, we showed here (Figure 6a) that the phenomenology of amplification in SOCs is similar to balanced amplification, i.e. small stimulations of difference modes drive large activations of sum modes. This accounts for the large transient firing rate deflections of individual neurons that follow appropriate initialization. A key difference between SOCs and Murphy and Miller's model of V1 is the complexity of lateral excitatory connections in SOCs, which gives rise to temporally rich transients (Figures 3 and 4). Furthermore, although the “spring-box” analogy may not apply directly to sensory cortices, SOCs (as inhibition-stabilized networks) could still provide an appropriate conceptual framework for such cortical areas, as suggested by Ozeki et al., 2009. Likewise, the method we have used here to build such circuits could prove useful in finding conditions for inhibitory stabilization of known excitatory connectivities that are not easily reducible to analytically tractable models (see e.g. Ahmadian et al., 2013).

Relation to detailed excitation/inhibition balance

SOCs make a strong prediction regarding how excitation and inhibition interact in cortical networks: E and I synaptic inputs in single neurons should be temporally correlated in a way that cannot be explained by the activity co-fluctuations that occur on the level of the entire population.

During spontaneous activity in SOCs, balanced amplification of external noise (or intrinsically generated stochasticity, as in our spiking SOC) results in strongly correlated E/I inputs in single units. This phenomenon is a recurrent equivalent to what has been referred to as “detailed balance” in feedforward network models (Vogels and Abbott, 2009; Vogels et al., 2011; Luz and Shamir, 2012), and cannot be attributed here to mere co-fluctuations of the overall activity of E and I neurons. Such co-variations can be substantial in balanced networks (Vogels et al., 2005b, Kriener et al., 2008; Murphy and Miller, 2009), but have been quenched here by requiring inhibitory synaptic connections to be three times stronger than excitatory ones on average (Renart et al., 2010; Hennequin et al., 2012). The residual shared population fluctuations accounted for only one third of the total E/I input correlation (Figure 6f,g). Thus, the excess correlation can only be explained by the comparatively large fluctuations of balanced, zero-mean activity modes (the responses to the preferred initial conditions of the SOC; Figure 6a).

A certain degree of such E/I balance has been observed in several brain areas, and on levels as different as trial-averaged E and I synaptic input conductances in response to sensory stimuli (Wehr and Zador, 2003; Marino et al., 2005; Froemke et al., 2007; Dorrn et al., 2010; but see Haider et al., 2013), single-trial synaptic responses from which the trial average has been removed (“residuals”; Cafaro and Rieke, 2010), and spontaneous activity (Okun and Lampl, 2008; Cafaro and Rieke, 2010). However, these spontaneous E/I input fluctuations have been recorded simultaneously either in the same cell or in a pair of different cells, but not both, making it impossible to estimate the contribution of global population activity fluctuations to the overall E/I balance.

Spiking models of SOCs

The simplicity and analytical tractability of rate models make them appealing for theoretical studies such as ours. One may worry, however, that some fundamental aspects of collective dynamics are overlooked when spiking events are reduced to their probabilities of occurrence, i.e. to rate variables. To verify our results, we embedded a SOC in a standard balanced spiking network, in which millions of randomly assigned synapses connect two populations of excitatory and inhibitory neurons. The SOC structure was embodied by additional connections between subgroups of these neurons, each containing on the order of tens of spiking cells. The resulting network displayed simultaneous firing rate and spiking variability (Churchland and Abbott, 2012), and was thus phenomenologically similar to the networks of Litwin-Kumar and Doiron, 2012 and Ostojic, 2014. However, slow rate fluctuations in SOCs arise from a completely different mechanism. The sea of random synapses in our network induces strong excitatory and inhibitory inputs to single cells that cancel each other on average, leaving large subthreshold fluctuations in membrane potential and therefore irregular spiking whose variability is mostly “private” to each neuron. This feature is common to all traditional balanced network models (van Vreeswijk and Sompolinsky, 1996; Brunel, 2000; Vogels et al., 2005; Renart et al., 2010). On the level of subgroups of neurons, this source of variability is not entirely lost to averaging: although all the cells in a given subgroup n fire at the same rate rn at any given time, receiver neurons in another subgroup m will only “sense” a noisy sample estimate r̂n of this rate, because n connects onto m through a finite number of synapses. Now, because the connectivity between subgroups is strong, but stabilized, this intrinsic source of noise (the “residual” ξn = r̂n − rn) is continuously amplified into large, structured firing rate fluctuations on the level of subgroups.
The underlying mechanism is the same as for the rate model, i.e. balanced amplification of noise (Murphy and Miller, 2009), with the notable difference that the noise in the spiking network is intrinsically generated (the external excitatory drive that each neuron receives was chosen constant here to make this point).

In order to match the timescale of the rate transients in our spiking SOC to those in the data of Churchland et al., 2012, we assumed that the structured SOC synapses had slower time constants than the random ones. Functional segregation of fast and slow synapses has been reported in the visual cortex (Self et al., 2012) and could also be motivated by recent experiments in which the distance from soma along the dendritic arbor was shown to predict the magnitude of the NMDA component in the corresponding somatic PSPs (Branco and Häusser, 2011). Thus, distal synapses tend to evoke slower PSPs than proximal ones. It is in fact an interesting and testable prediction of our model that distal synapses are actively recruited in the motor cortex during movement preparation and generation. Finally, pilot simulations suggest that this separation of timescales, though necessary to obtain realistically long movement-related activity, is not a requirement for the emergence of large transients, which could indeed be obtained with a single synaptic time constant of 10 ms (not shown).

Summary

In summary, we have shown that specific, recurrent inhibition is a powerful means of stabilizing otherwise unstable, complex circuits. The resulting networks are collectively excitable, and display rich transient responses to appropriate stimuli that resemble the activity recorded in the motor cortex (Churchland et al., 2012) on both the single-neuron and population levels. We found that SOCs can be used as “spring-loaded motor engines” to generate complicated and reliable movements. The intriguing parallels to the detailed balance of excitatory and inhibitory inputs in cortical neurons, as well as to recent theories that apply specifically to the visual cortex (Ozeki et al., 2009; Murphy and Miller, 2009), suggest cortex-wide relevance for this new class of cortical architectures.

Experimental Procedures

Network setup and dynamics

Single-neuron dynamics followed Equation 1, which we integrated using a standard fourth-order Runge-Kutta method. Following Rajan et al., 2010, we used the gain function

g(x) = { r0 tanh(x/r0)                      if x < 0
       { (rmax − r0) tanh(x/(rmax − r0))    if x ≥ 0        (2)

with baseline firing rate r0 = 5 Hz and maximum rate rmax = 100 Hz (Figure 3e). Unless indicated otherwise, the input I(t) = ξ(t) + S(t) included a noise term ξ(t), which we modelled as an independent Ornstein-Uhlenbeck process for each neuron with time constant τξ = 50 ms. We set the variance of these processes to σ0²(τ+τξ)/τξ, such that, in the limit of very weak synaptic connectivity, the firing rate of each cell in the network fluctuated around baseline with a standard deviation σ0 = 0.2 Hz.
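Equation 2 is a rectified-tanh nonlinearity that confines firing rates to [0, rmax] around the baseline r0. A minimal Python sketch of this gain function (illustrative code on our part; the paper's simulations were written in OCaml):

```python
import numpy as np

def gain(x, r0=5.0, rmax=100.0):
    """Rectified-tanh gain of Equation 2 (Rajan et al., 2010).

    Relative to baseline r0, the output saturates at -r0 for strongly
    negative inputs (total firing cannot go below 0 Hz) and at
    rmax - r0 for strongly positive inputs; the slope at x = 0 is 1.
    """
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.0,
                    r0 * np.tanh(x / r0),
                    (rmax - r0) * np.tanh(x / (rmax - r0)))
```

For small |x|, g(x) ≈ x, which underlies the linearization used in the analyses of the preferred initial states below.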

In order to “prepare” the network and drive its activity x into a specific steady-state pattern ak (Figures 4 and 5), we delivered a slow ramping input to each cell during ongoing activity. This input was delivered as vector S(t)=R(t) Pk, where R(t) denotes the ramp activation of the input pool k and Pk are the projection weights from pool k onto the motor network (Figure 1b,c). The ramp R(t) had a slow exponential rise with time constant 400 ms beginning with the target cue at t=-1 sec., followed by a fast exponential decay with time constant 2 ms after the go cue. The projection weights were set to

Pk = ak − W g(ak)        (3)

in order to guarantee x(t=0) ≈ ak.
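Equation 3 follows from the steady-state condition of the rate dynamics: assuming Equation 1 takes the standard form τ dx/dt = −x + W g(x) + I(t), a constant input S = ak − W g(ak) cancels the drift exactly at x = ak. A sketch of this check (function names are ours; the gain is re-defined for self-containedness):

```python
import numpy as np

def gain(x, r0=5.0, rmax=100.0):
    """Rectified-tanh gain of Equation 2."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.0, r0 * np.tanh(x / r0),
                    (rmax - r0) * np.tanh(x / (rmax - r0)))

def projection_weights(a, W):
    """Equation 3: input pattern P_k for which x = a_k is a steady
    state of tau dx/dt = -x + W g(x) + S."""
    return a - W @ gain(a)

# toy check on a small random network: the drift vanishes at x = a
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((50, 50))
a = rng.standard_normal(50)
P = projection_weights(a, W)
drift = -a + W @ gain(a) + P
```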

In Figure 4b, the 27 arm reaching movements in Churchland et al., 2012 were modeled as 27 different initial conditions (b1, …, b27) for the SOC. We chose each vector bk as a random linear combination of the SOC's first and second preferred initial conditions a1 and a2 (see below). More precisely, bk = ∑c={1,2} skc zkc ac where the skc’s were random signs and the zkc’s were drawn uniformly between 0.5 and 1.

Preferred initial states

To find the preferred initial conditions of the SOC, we restricted ourselves to the linear regime in which Δri ≈ xi. To quantify the response evoked by some unit-norm initial condition Δr(t=0) ≡ a, we defined the “energy” E(a) of the response as

E(a) = (2/τ) ∫₀^∞ ‖Δr(t)‖² dt        (4)

also assuming that the network dynamics run freely without noise (ξ(t) = 0). Here 2/τ is a normalizing factor such that E = 1 for an unconnected network (W = 0), irrespective of the (unit-norm) initial condition a (in which case ‖Δr(t)‖² = exp(−2t/τ)). Since the SOC is linearly stable, E is finite, in the sense that any initial condition is bound to decay (exponentially) after sufficiently long periods of time.

The “best” input direction is then defined as the initial condition a1 that maximizes E(a). By iterating, we can define a collection a1, a2, …, aN of N orthogonal input states that each maximize the evoked energy within the subspace orthogonal to all previous best input directions. In the linear regime, this maximization can be performed analytically (Supplemental Information). Note that in the linear regime, E(ak) = E(−ak). In the nonlinear network, this need not be the case, and in Figure 3c,d we resolved this sign ambiguity by picking the sign that evoked the most energy.
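In the linear regime, this maximization has a classical control-theoretic form: E(a) = aᵀQa, where Q is an observability Gramian obtained from a Lyapunov equation, so the preferred initial conditions are the eigenvectors of Q sorted by eigenvalue. The sketch below is our reading of the analytical procedure (the paper's exact derivation is in its Supplemental Information), assuming the linearized dynamics τ dΔr/dt = (W − I) Δr:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def energy_gramian(W):
    """Gramian Q with E(a) = a^T Q a (Equation 4) for the linearized
    dynamics tau * dDr/dt = (W - I) Dr.

    Q solves (W - I)^T Q + Q (W - I) = -2 I; note that tau cancels,
    and the normalization gives E = 1 when W = 0."""
    N = W.shape[0]
    A = W - np.eye(N)
    return solve_continuous_lyapunov(A.T, -2.0 * np.eye(N))

def preferred_initial_conditions(W):
    """Orthogonal unit-norm inputs a_1, a_2, ... sorted by evoked
    energy: the eigenvectors of Q, largest eigenvalue first."""
    evals, evecs = np.linalg.eigh(energy_gramian(W))
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

# sanity check: an unconnected network evokes E(a) = 1 for any unit a
E, modes = preferred_initial_conditions(np.zeros((20, 20)))
```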

Construction of the SOC architecture

Random connectivity matrices of size N = 2M, with M positive (excitatory) columns and M negative (inhibitory) columns, were generated as in Hennequin et al., 2012 with connectivity density p = 0.1. Non-zero excitatory (resp. inhibitory) weights were set to w0/√N (resp. −γw0/√N), where w0² = 2R²/(p(1−p)(1+γ²)) and R is the desired spectral radius before stability optimization (Rajan and Abbott, 2006).
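This construction can be sketched as follows (a toy size M = 100 is used; whether self-connections are excluded is not specified in the text, and this sketch does not exclude them):

```python
import numpy as np

def random_dale_matrix(M=100, p=0.1, R=10.0, gamma=3.0, seed=0):
    """Random connectivity of size N = 2M: M excitatory (positive)
    columns and M inhibitory (negative) columns, density p, and
    expected spectral radius R before stability optimization
    (Rajan and Abbott, 2006)."""
    rng = np.random.default_rng(seed)
    N = 2 * M
    # w0^2 = 2 R^2 / (p (1 - p) (1 + gamma^2)), as in the text
    w0 = np.sqrt(2.0 * R**2 / (p * (1.0 - p) * (1.0 + gamma**2)))
    W = np.zeros((N, N))
    W[:, :M] = w0 / np.sqrt(N)            # excitatory columns
    W[:, M:] = -gamma * w0 / np.sqrt(N)   # inhibitory columns
    W *= rng.random((N, N)) < p           # keep a fraction p of synapses
    return W

W = random_dale_matrix()
```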

To generate a SOC, we generated such a random connectivity matrix with R = 10, producing unstable, deeply chaotic network behavior. After the creation of the initial W, all excitatory connections remained fixed. To achieve robust linear stability of the dynamics, we refined the inhibitory synapses to minimize the “smoothed spectral abscissa” (SSA) of W, a relaxation of the spectral abscissa (the largest real part among the eigenvalues of W) that, among other advantages, leads to tractable optimization (Vanbiervliet et al., 2009). In short, inhibitory weights followed a gradient descent on the SSA subject to three constraints. First, we kept the inhibitory weights inhibitory, i.e. negative. Second, we enforced a constant ratio between the average magnitude of the inhibitory weights and its excitatory counterpart (γ = 3, cf. Discussion). Third, the density of inhibitory connections was restricted to less than 40%, to yield realistically sparse connectivity structures. This constrained gradient descent usually converged within a few hundred iterations. All details can be found in our Supplemental Information.

Analysis of rotational dynamics

The plane of projection of Figure 4d was found with jPCA, a dynamical variant of principal component analysis used to extract low-dimensional rotations from multidimensional time series (Churchland et al., 2012). Given data of the form (y(t),dy(t)/dt), jPCA fits (through standard least-squares regression) a linear oscillatory model of the form dy/dt = Mskew y(t), where Mskew is a skew-symmetric matrix, therefore one with purely imaginary eigenvalues. The two leading eigenvectors of the best-fitting Mskew (associated with the largest conjugate pair of imaginary eigenvalues) define the plane in which the trajectory rotates most strongly.

Here we computed the jPC projection exactly as prescribed in Churchland et al., 2012. Our model data consisted of the population responses Δr(t) during the first 200 ms following the go cue for each of our 27 initial conditions, sampled in 1 ms time steps. Note that the temporal derivatives are directly given by Equation 1, except in the spiking network (see below) where we estimated those derivatives using a finite-difference approximation. To make sure that the jPC projection captures enough of the data variance, that is, that the observed rotational dynamics (if any) are significant, the data was first projected down to its top 6 standard principal components (as in Churchland et al., 2012).
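The skew-symmetric least-squares fit at the core of jPCA admits a closed form: setting the gradient of ‖dY/dt − Y Mᵀ‖² to zero over the space of skew matrices yields the Sylvester equation M C + C M = ẎᵀY − YᵀẎ with C = YᵀY. The sketch below uses this identity (an implementation choice on our part; Churchland et al., 2012 describe an equivalent vectorized regression):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def fit_jpca_plane(Y, dY):
    """Fit dy/dt = M y with M skew-symmetric (the core jPCA step).

    Y, dY: (T, n) arrays of states and temporal derivatives, one time
    point per row. Returns (M, plane), where plane (n, 2) is an
    orthonormal basis of the strongest rotational plane."""
    C = Y.T @ Y
    K = dY.T @ Y - Y.T @ dY        # skew-symmetric right-hand side
    M = solve_sylvester(C, C, K)   # unique solution of M C + C M = K
    # eigenvalues of a real skew matrix come in pairs +/- i*omega;
    # the eigenvector of the largest |omega| spans the top plane
    evals, evecs = np.linalg.eig(M)
    v = evecs[:, np.argmax(np.abs(evals.imag))]
    plane, _ = np.linalg.qr(np.stack([v.real, v.imag], axis=1))
    return M, plane

# toy data: a unit-speed circle in dims 0-1, plus weak noise in dims 2-3
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
Y = 0.01 * rng.standard_normal((400, 4))
Y[:, 0] += np.cos(t)
Y[:, 1] += np.sin(t)
dY = np.zeros((400, 4))
dY[:, 0], dY[:, 1] = -np.sin(t), np.cos(t)
M, plane = fit_jpca_plane(Y, dY)
```

On this toy data the recovered plane aligns with the first two dimensions, where the rotation lives.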

Muscle activation through linear readouts

In Figure 5, a single pair of muscle readouts was learned from 200 training trials (100 trials for each of the “snake” and “butterfly” movements). We assumed the following linear model:

zt = (m1; m2)ᵀ Δrt + b + εt        (5)

where zt (size 2) denotes the vector of target muscle activations at discrete time t, Δrt is the vector of momentary deviation from baseline firing rate in the network (size N), and εt is the vector of residual errors (size 2). The readout weights (column vectors m1 and m2) are parameters which we optimized through simple least-squares regression, together with a pair of biases b. The snake (resp. butterfly) target trajectory was made of 58 points (resp. 26 points), equally spaced in time over 500 ms following the go cue. Those points defined the discrete time variable t in Equation 5, and the activity vector Δrt was sampled accordingly for each movement.
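The regression of Equation 5 reduces to a single least-squares solve once a constant column is appended for the bias. A minimal sketch with synthetic data (names are illustrative, not the paper's code):

```python
import numpy as np

def fit_readout(Dr, z):
    """Least-squares fit of the linear muscle model of Equation 5:
    z_t = M^T Dr_t + b, with readout weights M (N, 2) and bias b (2,).

    Dr: (T, N) firing-rate deviations; z: (T, 2) target activations."""
    X = np.hstack([Dr, np.ones((Dr.shape[0], 1))])  # bias column
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    return coef[:-1], coef[-1]                      # (M, b)

# toy check: recover a known readout from noiseless synthetic data
rng = np.random.default_rng(1)
Dr = rng.standard_normal((200, 30))
M_true = rng.standard_normal((30, 2))
b_true = np.array([0.5, -1.0])
z = Dr @ M_true + b_true
M, b = fit_readout(Dr, z)
```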

Spiking network simulations

We simulated a network of 15'000 neurons, composed of 12'000 excitatory and 3'000 inhibitory neurons, with parameters listed in Table 1. This network was divided into 200 subgroups of excitatory neurons and 200 subgroups of inhibitory neurons, which can be interpreted as the “rate units” we have focused on until here.

Table 1.

description                   name           value   unit
membrane time constant        τm             20      ms
refractory period             τr             2       ms
axonal transmission delay     -              0.5     ms
resting potential             Vrest          -70     mV
spiking threshold             Vthresh.       -55     mV
voltage reset                 Vreset         -60     mV
PSC rise time                 τrise          1       ms
fast PSC decay time           τdecay,fast    10      ms
slow PSC decay time           τdecay,slow    100     ms

Single-neuron model

Single cells were modelled as leaky integrate-and-fire (LIF) neurons (e.g. Gerstner and Kistler, 2002, chap. 4) according to

τm dVm(i)/dt = -Vm(i) + Vrest + hexc.(i) + hinh.(i) + hext.        (6)

Neuron i emitted a spike whenever Vm(i)(t) crossed a threshold Vthresh. from below. Following a spike, the voltage was reset to Vreset and held constant for an absolute refractory period of τr. The excitatory and inhibitory synaptic inputs, hexc.(i) and hinh.(i) were sums of alpha-shaped postsynaptic currents (PSCs) of the form c[exp(-t/τdecay)-exp(-t/τrise)] where c is a synapse-type-specific scaling factor that regulates peak excitatory and inhibitory postsynaptic potential (PSP) amplitudes after further membrane integration through Equation (6) (Figure 7b).
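A minimal Euler integration of Equation 6 for a single neuron, using the parameters of Table 1 (a sketch, not the paper's OCaml implementation; the input h is assumed to be expressed in mV, i.e. already scaled by the membrane resistance):

```python
import numpy as np

# leaky integrate-and-fire parameters from Table 1 (ms, mV)
TAU_M, V_REST, V_THRESH, V_RESET = 20.0, -70.0, -55.0, -60.0
T_REF, DT = 2.0, 0.1

def simulate_lif(h_total, v0=V_REST):
    """Euler integration of Equation 6 for one neuron.

    h_total: summed synaptic + external input, one entry per 0.1 ms
    time step, in mV. Returns spike times (ms) and the voltage trace."""
    v, refr = v0, 0.0
    spikes, trace = [], np.empty(len(h_total))
    for i, h in enumerate(h_total):
        if refr > 0.0:
            refr -= DT                        # voltage held at V_RESET
        else:
            v += DT / TAU_M * (-v + V_REST + h)
            if v >= V_THRESH:                 # threshold crossing
                spikes.append(i * DT)
                v, refr = V_RESET, T_REF
        trace[i] = v
    return np.array(spikes), trace

# 1 s of constant suprathreshold drive (30 mV) gives regular firing
spikes, trace = simulate_lif(np.full(10000, 30.0))
```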

Recurrent synapses

Each neuron received input from 1'500 excitatory and 1'500 inhibitory network neurons. For 50% of those recurrent connections (750 exc. and 750 inh. synapses), the presynaptic partner was drawn randomly and uniformly from the corresponding population (exc. or inh.), providing a sea of unspecific, random synapses that was instrumental in maintaining the network in a regime of asynchronous and irregular firing. These connections were thought to target proximal dendritic zones and therefore to evoke fast PSCs (parameter τdecayfast in Table 1). The other half of the network synapses were used to mirror the structure of the network of rate units described throughout the paper, and were therefore drawn according to probabilities jointly determined by i) the subgroups that the pre- and postsynaptic neurons belonged to, ii) an optimized SOC matrix W of size 400×400 that described the connectivity between subgroups.

We first normalized the excitatory and inhibitory parts of each row of W, obtaining a matrix Ŵ of connection probabilities. Then, for any cell i in group m (1 ≤ m ≤ 400, exc. or inh.), each of its 750 exc. partners was chosen in two steps: first, a particular group n was picked with probability ŵmn, and second, a presynaptic neuron was picked at random from this group n. We applied the same procedure to generate the second half of the inhibitory synapses (750 per neuron). These structured SOC connections were given a slower PSC decay time constant τdecay,slow (cf. Table 1), and can be interpreted as targeting more distal dendritic parts.
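The two-step sampling of structured presynaptic partners can be sketched as follows (the helper name and the toy subgroup layout are ours):

```python
import numpy as np

def draw_structured_partners(W_hat, post_group, groups, k=750, rng=None):
    """Sample k presynaptic partners for one neuron in subgroup
    `post_group`: first pick a presynaptic subgroup n with probability
    W_hat[post_group, n], then pick a neuron uniformly within group n.

    W_hat: (G, G) row-normalized connection probabilities.
    groups: list of index arrays, one per subgroup."""
    if rng is None:
        rng = np.random.default_rng()
    G = W_hat.shape[0]
    ns = rng.choice(G, size=k, p=W_hat[post_group])        # step 1
    return np.array([rng.choice(groups[n]) for n in ns])   # step 2

# toy layout: 4 subgroups of 15 neurons each, uniform probabilities
groups = [np.arange(15 * g, 15 * (g + 1)) for g in range(4)]
W_hat = np.full((4, 4), 0.25)
pre = draw_structured_partners(W_hat, 0, groups, k=750,
                               rng=np.random.default_rng(2))
```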

Sample PSPs are shown in Figure 7b for all four types of synapses. The ratio between exc. and inh. synaptic efficacies was set to achieve a stable background firing rate of 5 Hz. Note that because of the amplifying behavior of SOCs and the superlinear nature of the input-output function of LIF neurons, the network settled at a mean rate of 6 Hz instead (Figure 7c).

Each neuron also received a constant positive external input current hext., set to the mean current a cell would receive from 5'000 independent Poisson sources firing at 5 Hz through fast synapses. We reduced this input to its mean to demonstrate that the slow, seemingly stochastic rate fluctuations observed in the spiking SOC (Figure 8a) did not require any external source of noise.

Generation of W

SOC matrices for spiking networks were generated in a similar manner as described above for rate-based networks, except for a few simple variations to account for the effective gains of the excitatory and inhibitory synaptic pathways between subgroups. These details are described in our Supplemental Information.

Control random network

The random network used for comparison in Figure 8 was identical in every respect to the SOC, except that presynaptic partners for slow synapses were drawn completely randomly (there was no notion of neuronal subgroups).

Simulations were custom-written in OCaml and parallelized onto 8 cores following the strategy developed in Morrison et al., 2005, taking advantage of a finite axonal propagation delay which we set to 0.5 ms. We used simple Euler integration of Equation (6) with a time step of 0.1 ms.

Supplementary Material

Supplementary information

References

  1. Afshar A, Santhanam G, Yu B, Ryu S, Sahani M, Shenoy K. Single-trial neural correlates of arm movement preparation. Neuron. 2011;71:555–564. doi: 10.1016/j.neuron.2011.05.047.
  2. Ahmadian Y, Rubin DB, Miller KD. Analysis of the stabilized supralinear network. Neural Comput. 2013;25:1994–2037. doi: 10.1162/NECO_a_00472.
  3. Ames KC, Ryu SI, Shenoy KV. Neural dynamics of reaching following incorrect or absent motor preparation. Neuron. 2014;81:438–451. doi: 10.1016/j.neuron.2013.11.003.
  4. Amit DJ, Brunel N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex. 1997;7:237–252. doi: 10.1093/cercor/7.3.237.
  5. Branco T, Häusser M. Synaptic integration gradients in single cortical pyramidal cell dendrites. Neuron. 2011;69:885–892. doi: 10.1016/j.neuron.2011.02.006.
  6. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci. 2000;8:183–208. doi: 10.1023/a:1008925309027.
  7. Cafaro J, Rieke F. Noise correlations improve response fidelity and stimulus encoding. Nature. 2010;468:964–967. doi: 10.1038/nature09570.
  8. Churchland MM, Abbott LF. Two layers of neural variability. Nat Neurosci. 2012;15:1472–1474. doi: 10.1038/nn.3247.
  9. Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature. 2012;487:51–56. doi: 10.1038/nature11129.
  10. Churchland MM, Cunningham JP, Kaufman MT, Ryu SI, Shenoy KV. Cortical preparatory activity: representation of movement or first cog in a dynamical machine? Neuron. 2010;68:387–400. doi: 10.1016/j.neuron.2010.09.015.
  11. Churchland MM, Shenoy KV. Temporal complexity and heterogeneity of single-neuron activity in premotor and motor cortex. J Neurophysiol. 2007;97:4235–4257. doi: 10.1152/jn.00095.2007.
  12. Dayan P, Abbott LF. Theoretical neuroscience. MIT Press; 2001.
  13. Dorrn AL, Yuan K, Barker AJ, Schreiner CE, Froemke RC. Developmental sensory experience balances cortical excitation and inhibition. Nature. 2010;465:932–936. doi: 10.1038/nature09119.
  14. Froemke RC, Merzenich MM, Schreiner CE. A synaptic memory trace for cortical receptive field plasticity. Nature. 2007;450:425–429. doi: 10.1038/nature06289.
  15. Ganguli S, Huh D, Sompolinsky H. Memory traces in dynamical systems. Proc Natl Acad Sci USA. 2008;105:18970–18975. doi: 10.1073/pnas.0804451105.
  16. Gerstner W, Kistler WM. Spiking neuron models: Single neurons, populations, plasticity. Cambridge University Press; 2002.
  17. Goldman MS. Memory without feedback in a neural network. Neuron. 2009;61:621–634. doi: 10.1016/j.neuron.2008.12.012.
  18. Haider B, Häusser M, Carandini M. Inhibition dominates sensory responses in the awake cortex. Nature. 2013;493:97–100. doi: 10.1038/nature11665.
  19. Hennequin G, Vogels TP, Gerstner W. Non-normal amplification in random balanced neuronal networks. Phys Rev E. 2012;86:011909. doi: 10.1103/PhysRevE.86.011909.
  20. Hoerzer GM, Legenstein R, Maass W. Emergence of complex computational structures from chaotic neural networks through reward-modulated Hebbian learning. Cerebral Cortex. 2012;24:677–690. doi: 10.1093/cercor/bhs348.
  21. Kriener B, Tetzlaff T, Aertsen A, Diesmann M, Rotter S. Correlations and population dynamics in cortical networks. Neural Comput. 2008;20:2185–2226. doi: 10.1162/neco.2008.02-07-474.
  22. Kullmann DM, Moreau AW, Bakiri Y, Nicholson E. Plasticity of inhibition. Neuron. 2012;75:951–962. doi: 10.1016/j.neuron.2012.07.030.
  23. Laje R, Buonomano DV. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat Neurosci. 2013;16:925–933. doi: 10.1038/nn.3405.
  24. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci. 2012;15:1498–1505. doi: 10.1038/nn.3220.
  25. Luz Y, Shamir M. Balancing feed-forward excitation and inhibition via Hebbian inhibitory synaptic plasticity. PLoS Comput Biol. 2012;8:e1002334. doi: 10.1371/journal.pcbi.1002334.
  26. Maass W, Natschläger T, Markram H. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 2002;14:2531–2560. doi: 10.1162/089976602760407955.
  27. Mariño J, Schummers J, Lyon DC, Schwabe L, Beck O, Wiesing P, Obermayer K, Sur M. Invariant computations in local cortical networks with balanced excitation and inhibition. Nat Neurosci. 2005;8:194–201. doi: 10.1038/nn1391.
  28. Morrison A, Mehring C, Geisel T, Aertsen A, Diesmann M. Advancing the boundaries of high-connectivity network simulation with distributed computing. Neural Comput. 2005;17:1776–1801. doi: 10.1162/0899766054026648.
  29. Murphy BK, Miller KD. Balanced amplification: A new mechanism of selective amplification of neural activity patterns. Neuron. 2009;61:635–648. doi: 10.1016/j.neuron.2009.02.005.
  30. Okun M, Lampl I. Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nat Neurosci. 2008;11:535–537. doi: 10.1038/nn.2105.
  31. Ostojic S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nat Neurosci. 2014;17:594–600. doi: 10.1038/nn.3658.
  32. Ozeki H, Finn IM, Schaffer ES, Miller KD, Ferster D. Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron. 2009;62:578–592. doi: 10.1016/j.neuron.2009.03.028.
  33. Rajan K, Abbott LF. Eigenvalue spectra of random matrices for neural networks. Phys Rev Lett. 2006;97:188104. doi: 10.1103/PhysRevLett.97.188104.
  34. Rajan K, Abbott LF, Sompolinsky H. Stimulus-dependent suppression of chaos in recurrent neural networks. Phys Rev E. 2010;82:011903. doi: 10.1103/PhysRevE.82.011903.
  35. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris K. The asynchronous state in cortical circuits. Science. 2010;327:587–590. doi: 10.1126/science.1179850.
  36. Self MW, Kooijmans RN, Supèr H, Lamme VA, Roelfsema PR. Different glutamate receptors convey feedforward and recurrent processing in macaque V1. Proc Natl Acad Sci USA. 2012;109:11031–11036. doi: 10.1073/pnas.1119527109.
  37. Shenoy KV, Kaufman MT, Sahani M, Churchland MM. A dynamical systems view of motor preparation: Implications for neural prosthetic system design. Progr Brain Res. 2011;192:33. doi: 10.1016/B978-0-444-53355-5.00003-8.
  38. Sompolinsky H, Crisanti A, Sommers HJ. Chaos in random neural networks. Phys Rev Lett. 1988;61:259–262. doi: 10.1103/PhysRevLett.61.259.
  39. Sussillo D, Abbott L. Generating coherent patterns of activity from chaotic neural networks. Neuron. 2009;63:544–557. doi: 10.1016/j.neuron.2009.07.018.
  40. Terauchi A, Umemori H. Specific sets of intrinsic and extrinsic factors drive excitatory and inhibitory circuit formation. The Neuroscientist. 2012;18:271–286. doi: 10.1177/1073858411404228.
  41. Tsodyks MV, Skaggs WE, Sejnowski TJ, McNaughton BL. Paradoxical effects of external modulation of inhibitory interneurons. J Neurosci. 1997;17:4382–4388. doi: 10.1523/JNEUROSCI.17-11-04382.1997.
  42. van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274:1724. doi: 10.1126/science.274.5293.1724.
  43. Vanbiervliet J, Vandereycken B, Michiels W, Vandewalle S, Diehl M. The smoothed spectral abscissa for robust stability optimization. SIAM J on Optim. 2009;20:156–171.
  44. Vogels TP, Abbott LF. Signal propagation and logic gating in networks of integrate-and-fire neurons. J Neurosci. 2005;25:10786–10795. doi: 10.1523/JNEUROSCI.3508-05.2005.
  45. Vogels TP, Abbott LF. Gating multiple signals through detailed balance of excitation and inhibition in spiking networks. Nat Neurosci. 2009;12:483–491. doi: 10.1038/nn.2276.
  46. Vogels TP, Rajan K, Abbott LF. Neural network dynamics. Annu Rev Neurosci. 2005;28:357–376. doi: 10.1146/annurev.neuro.28.061604.135637.
  47. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science. 2011;334:1569. doi: 10.1126/science.1211095.
  48. Wang X. Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J Neurosci. 1999;19:9587–9603. doi: 10.1523/JNEUROSCI.19-21-09587.1999.
  49. Wehr M, Zador A. Balanced inhibition underlies tuning and sharpens spike timing in auditory cortex. Nature. 2003;426:442–446. doi: 10.1038/nature02116.
