eLife. 2018 Sep 20;7:e37124. doi: 10.7554/eLife.37124

Learning recurrent dynamics in spiking networks

Christopher M Kim 1, Carson C Chow 1
PMCID: PMC6195349  PMID: 30234488

Abstract

Spiking activity of neurons engaged in learning and performing a task shows complex spatiotemporal dynamics. While the output of recurrent network models can learn to perform various tasks, the possible range of recurrent dynamics that emerges after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibility for synaptic and spiking rate dynamics of spiking networks to produce a wide range of spatiotemporal activity. We apply the training method to learn arbitrary firing patterns, stabilize irregular spiking activity in a network of excitatory and inhibitory neurons respecting Dale’s law, and reproduce the heterogeneous spiking rate patterns of cortical neurons engaged in motor planning and movement. We identify sufficient conditions for successful learning, characterize two types of learning errors, and assess the network capacity. Our findings show that synaptically-coupled recurrent spiking networks possess a vast computational capability that can support the diverse activity patterns in the brain.

Research organism: None

Introduction

Neuronal populations exhibit diverse patterns of recurrent activity that can be highly irregular or well-structured when learning or performing a behavioral task (Churchland and Shenoy, 2007; Churchland et al., 2012; Harvey et al., 2012; Pastalkova et al., 2008; Li et al., 2015). An open question is whether learning-induced synaptic rewiring is sufficient to give rise to the wide range of spiking dynamics that encodes and processes information throughout the brain.

It has been shown that a network of recurrently connected neuron models can be trained to perform complex motor and cognitive tasks. In this approach, synaptic connections to a set of outputs are trained to generate a desired population-averaged signal, while the activity of individual neurons within the recurrent network emerges in a self-organized way that harnesses chaotic temporally irregular activity of a network of rate-based neurons (Sompolinsky et al., 1988) that is made repeatable through direct feedback from the outputs or through training of the recurrent connections (Maass et al., 2007; Sussillo and Abbott, 2009). The resulting irregular yet stable dynamics provides a rich reservoir from which complex patterns such as motor commands can be extracted by trained output neurons (Sussillo and Abbott, 2009; Buonomano and Maass, 2009; Jaeger and Haas, 2004), and theoretical studies have shown that the network outputs are able to perform universal computations (Maass et al., 2007).

Here, we explore whether there is even a need for a set of output neurons. Instead, each unit in the recurrent network could be considered to be an output and learn target patterns directly while simultaneously serving as a reservoir. Laje and Buonomano (2013) showed that individual rate units in a recurrent network can learn to stabilize innate chaotic trajectories that an untrained network naturally generates. The trained trajectories are then utilized to accomplish timing tasks by summing their activities with trained weights. DePasquale et al. (2018) obtained a set of target trajectories from a target network driven externally by the desired network output. They showed that training individual units on such target trajectories and then adjusting the read-out weights yielded better performance than an untrained random recurrent network with a trained feedback loop (i.e. ‘traditional’ FORCE learning). Rajan et al., 2016 trained a small fraction of synaptic connections in a randomly connected rate network to produce sequential activity derived from cortical neurons engaged in decision making tasks.

Although these studies demonstrate that units within a rate-based network can learn recurrent dynamics defined by specific forms of target functions, the possible repertoire of the recurrent activity that a recurrent network can learn has not been extensively explored. Moreover, extending this idea to spiking networks, where neurons communicate with time-dependent, spike-induced synapses, poses an additional challenge because it is difficult to coordinate the spiking dynamics of many neurons, especially if spike times are variable as in a balanced network (London et al., 2010). Some success has been achieved by training spiking networks directly with a feedback loop (Nicola and Clopath, 2017) or using a rate-based network as an intermediate step (DePasquale et al., 2016; Thalmeier et al., 2016). A different top-down approach is to build networks that emit spikes optimally to correct the discrepancy between the actual and desired network outputs (Boerlin et al., 2013; Denève and Machens, 2016). This optimal coding strategy in a tightly balanced network can be learned with a local plasticity rule (Brendel et al., 2017) and is able to generate arbitrary network output at the spike level (Bourdoukan and Deneve, 2015; Denève et al., 2017).

We show that a network of spiking neurons is capable of supporting arbitrarily complex coarse-grained recurrent dynamics provided the spatiotemporal patterns of the recurrent activity are diverse, the synaptic dynamics are fast, and the number of neurons in the network is large. We give a theoretical basis for how a network can learn and show various examples, which include stabilizing strong chaotic rate fluctuations in a network of excitatory and inhibitory neurons that respects Dale’s law and constructing a recurrent network that reproduces the spiking rate patterns of a large number of cortical neurons involved in motor planning and movement. Our study suggests that individual neurons in a recurrent network have the capability to support near universal dynamics.

Results

Spiking networks can learn complex recurrent dynamics

We considered a network of $N$ quadratic integrate-and-fire neurons that are recurrently connected with spike-activated synapses weighted by a connectivity matrix $W$. We show below that our results do not depend on the spiking mechanism. We focused on two measures of coarse-grained time-dependent neuron activity: (1) the synaptic drive $u_i(t)$ to neuron $i$, which is given by the $W$-weighted sum of low-pass filtered incoming spike trains, and (2) the time-averaged spiking rate $R_i(t)$ of neuron $i$. The goal was to find a weight matrix $W$ that can autonomously generate desired recurrent target dynamics when the network of spiking neurons connected by $W$ is stimulated briefly with an external stimulus (Figure 1a). The target dynamics were defined by a set of functions $f_1(t), f_2(t), \ldots, f_N(t)$ on a time interval $[0, T]$. Learning of the recurrent connectivity $W$ was considered successful if $u_i(t)$ or $R_i(t)$ evoked by the stimulus matched the target function $f_i(t)$ over the time interval $[0, T]$ for every neuron $i = 1, 2, \ldots, N$.

Figure 1. Synaptic drive and spiking rate of neurons in a recurrent network can learn complex patterns.

(a) Schematic of network training. Blue square represents the external stimulus that elicits the desired response. Black curves represent the target output for each neuron. Red arrows represent the recurrent connectivity that is trained to produce the desired target patterns. (b) Synaptic drive of 10 sample neurons before, during and after training. Pre-training is followed by multiple training trials. An external stimulus (blue) is applied for 100 ms prior to training. Synaptic drive (black) is trained to follow the target (red). If the training is successful, the same external stimulus can elicit the desired response. Bottom shows the spike raster of 100 neurons. (c) Top, The Pearson correlation between the actual synaptic drive and the target output during training trials. Bottom, The matrix (Frobenius) norm of changes in recurrent connectivity normalized to the initial connectivity during training. (d) Filtered spike train of 10 neurons before, during and after training. As in (b), the external stimulus (blue) is applied immediately before training trials. The filtered spike train (black) learns to follow the target spiking rate (red), with large errors during the early trials. Applying the stimulus to a successfully trained network elicits the desired spiking rate patterns in every neuron. (e) Top, Same as in (c) but measures the correlation between filtered spike trains and target outputs. Bottom, Same as in (c).


Figure 1—figure supplement 1. Learning arbitrarily complex target patterns in a network of rate-based neurons.


The network dynamics obey $\tau \dot{x}_i = -x_i + \sum_{j=1}^{N} W_{ij} r_j + I_i$ where $r_j = \tanh(x_j)$. The synaptic current $x_i$ to every neuron in the network was trained to follow complex periodic functions $f(t) = A\sin(2\pi(t - T_0)/T_1)\sin(2\pi(t - T_0)/T_2)$, where the initial phase $T_0$ and frequencies $T_1, T_2$ were selected randomly. The elements of the initial connectivity matrix $W_{ij}$ were drawn from a Gaussian distribution with mean zero and standard deviation $\sigma/\sqrt{Np}$, where $\sigma = 2$ was strong enough to induce chaotic dynamics; network size $N = 500$, connection probability between neurons $p = 0.3$, and time constant $\tau = 10$ ms. External input $I_i$ with constant random amplitude was applied to each neuron for 50 ms (blue) and was set to zero elsewhere. (a) Before training, the network is in the chaotic regime and the synaptic current (black) of individual neurons fluctuates irregularly. (b) After learning to follow the target trajectories (red), the synaptic current tracks the target pattern closely in response to the external stimulus.
Figure 1—figure supplement 2. Training a network that has no initial connections.


The coupling strength of the initial recurrent connectivity is zero, and, prior to training, no synaptic or spiking activity appears beyond the first few hundred milliseconds. (a) Training synaptic drive patterns using the RLS algorithm. Black curves show the actual synaptic drive of 10 neurons and red curves show the target outputs. Blue shows the 100 ms external stimulus. (b) Correlation between synaptic drive and target function (top) and the Frobenius norm of changes in recurrent connectivity normalized to the initial connectivity during training (bottom). (c and d) Same as in (a) and (b), but spiking rate patterns are trained.

Previous studies have shown that recurrently connected rate units can learn specific forms of activity patterns, such as chaotic trajectories that the initial network could already generate (Laje and Buonomano, 2013), trajectories from a target network (DePasquale et al., 2018), and sequential activity derived from imaging data (Rajan et al., 2016). Our study expanded these results in two ways: first, we trained the recurrent dynamics of spiking networks, and, second, we showed that the repertoire of recurrent dynamics that can be encoded is vast. The primary goal of our paper was to investigate the computational capability of spiking networks to generate arbitrary recurrent dynamics, therefore we neither trained the network outputs (Sussillo and Abbott, 2009; Sussillo et al., 2015; Nicola and Clopath, 2017) nor constrained the target signals to those designed for performing specific computations (DePasquale et al., 2018). We focused on training the recurrent activity as in the work of Laje and Buonomano (2013) (without the read-outs) and Rajan et al., 2016, and considered arbitrary target functions. To train the activity of individual neurons within a spiking network, we extended the Recursive Least Squares (RLS) algorithm developed by Laje and Buonomano in rate-based networks (Laje and Buonomano, 2013). The algorithm was based on the FORCE algorithm (Haykin, 1996; Sussillo and Abbott, 2009), originally developed to train the network outputs by minimizing a quadratic cost function between the activity measure and the target together with a quadratic regularization term (see Materials and methods, 'Training recurrent dynamics'). Example code that trains a network of quadratic integrate-and-fire neurons is available at https://github.com/chrismkkim/SpikeLearning (Kim and Chow, 2018; copy archived at https://github.com/elifesciences-publications/SpikeLearning).
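For concreteness, the sketch below implements the core RLS update for the incoming weight vector of a single neuron in Python. It is a minimal sketch with our own variable names, not the authors' published code (see the repository above for their implementation); `lam` plays the role of the regularization parameter in the quadratic cost.

```python
import numpy as np

def rls_init(n_presyn, lam=1.0):
    # P tracks the inverse of the regularized correlation matrix of
    # presynaptic activity; lam sets the strength of the quadratic
    # regularization term in the cost function.
    return np.eye(n_presyn) / lam

def rls_step(w, P, r, f_target):
    # One recursive least-squares update of the incoming weights w of a
    # single neuron. r holds the filtered spike trains of its presynaptic
    # neurons and f_target the current value of this neuron's target.
    k = P @ r
    c = 1.0 / (1.0 + r @ k)            # scalar gain
    err = w @ r - f_target             # error of the current synaptic drive
    w = w - err * c * k                # shift w to reduce the error
    P = P - c * np.outer(k, k)         # rank-1 update of the inverse matrix
    return w, P
```

In the full network, one such update runs for every neuron's incoming weight vector (a row of $W$). Since $P$ depends only on the presynaptic activity, neurons reading from the same presynaptic pool can share it; with sparse connectivity, each neuron keeps its own $P$ over its own presynaptic subset.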

As a first example, we trained the network to produce synaptic drive patterns that matched a set of sine functions with random frequencies and the spiking rate to match the positive part of the same sine functions. The initial connectivity matrix had connection probability p=0.3 and the coupling strength was drawn from a Normal distribution with mean 0 and standard deviation σ. Prior to training, the synaptic drive fluctuated irregularly, but as soon as the RLS algorithm was instantiated, the synaptic drives followed the target with small error; rapid changes in W quickly adjusted the recurrent dynamics towards the target (Sussillo and Abbott, 2009) (Figure 1b,c). As a result, the population spike trains exhibited reproducible patterns across training trials. A brief stimulus preceded each training session to reset the network to a specific state. If the training was successful, the trained response could be elicited whenever the same stimulus was applied regardless of the network state. We were able to train a network of rate-based neurons to learn arbitrarily complex target patterns using the same learning scheme (Figure 1—figure supplement 1).

Training the spiking rate was more challenging than training the synaptic drive because small changes in recurrent connectivity did not immediately affect the spiking activity if the effect was below the spike threshold. Therefore, the spike trains might not follow the desired spiking rate pattern during the early stage of training, and the population spike trains no longer appeared similar across training trials (Figure 1d). This was also reflected in the relatively small changes in recurrent connectivity and the substantially larger number of training runs required to produce the desired spiking patterns (Figure 1e). However, by applying the training only when the total input to a neuron was suprathreshold, the spiking rate could be trained to reproduce the target patterns, as sketched below. The correlation between the actual filtered spike trains and the target spiking rate increased gradually as the training progressed.
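Continuing the sketch above, spiking rate training differs only in the error term and in a suprathreshold gate; the threshold `theta` and the gating criterion below are our reading of the text rather than the authors' exact implementation.

```python
import numpy as np

def rate_rls_step(w, P, r, R_filtered, f_rate, total_input, theta=0.0):
    # Spiking rate version of the update above: the error compares the
    # filtered spike train with the target rate, and the update is skipped
    # when the total input is below the threshold theta, where the
    # current-to-rate function is flat and a weight change cannot move
    # the spiking rate.
    if total_input > theta:
        k = P @ r
        c = 1.0 / (1.0 + r @ k)
        w = w - (R_filtered - f_rate) * c * k
        P = P - c * np.outer(k, k)
    return w, P
```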

Previous work that trained the network read-out had proposed that the initial recurrent network needed to be at the ‘edge of chaos’ to learn successfully (Bertschinger and Natschläger, 2004; Sussillo and Abbott, 2009; Abbott et al., 2016; Thalmeier et al., 2016; Nicola and Clopath, 2017). However, we found that the recurrent connectivity could learn to produce the desired recurrent dynamics regardless of the initial network dynamics and connectivity. Even when the initial network had no synaptic connections, the brief stimulus preceding the training session was sufficient to build a fully functioning recurrent connectivity that captured the target dynamics. The RLS algorithm could grow new synapses or tune existing ones as long as some of the neurons became active after the initial stimulus (Figure 1—figure supplement 2).

Learning was not limited to one set of targets; the same network was able to learn multiple sets of targets. We trained the network to follow two independent sets of targets, where each target function was a sine function with random frequency. Every neuron in the network learned both activity patterns after training, and, when stimulated with the appropriate cue, the network recapitulated the specified trained pattern of recurrent dynamics, regardless of initial activity. The synaptic drive and the spiking rate were both able to learn multiple target patterns (Figure 2).

Figure 2. Learning multiple target patterns.


(a) The synaptic drive of neurons learns two different target outputs. Blue stimulus evokes the first set of target outputs (red) and the green stimulus evokes the second set of target outputs (red). (b) The spiking rate of individual neurons learns two different target outputs.

Learning arbitrary patterns of activity

Next, we considered targets generated from various families of functions: complex periodic functions, chaotic trajectories, and Ornstein-Uhlenbeck (OU) noise. We randomly selected $N$ different target patterns from one of the families to create a set of heterogeneous targets, and trained the synaptic drive of a network consisting of $N$ neurons to learn the target dynamics. These examples demonstrated that the recurrent activity patterns a spiking network can generate are not limited to the specific forms considered in previous studies (Laje and Buonomano, 2013; Rajan et al., 2016; DePasquale et al., 2018), but can be arbitrary functions. The successful learning suggested that single neurons embedded in a spiking network have the capability to perform universal computations.

As we will show more rigorously in the next section, we identified two sufficient conditions on the dynamical state and spatiotemporal structure of target dynamics that ensure a wide repertoire of recurrent dynamics can be learned. The first is a ‘quasi-static’ condition that stipulates that the dynamical time scale of target patterns must be slow enough compared to the synaptic time scale and average spiking rate. The second is a ‘heterogeneity’ condition that requires the spatiotemporal structure of target patterns to be diverse enough. The target patterns considered in Figure 3 had slow temporal dynamics in comparison to the synaptic time constant (τs=20 ms) and the patterns were selected randomly to promote diverse structure. After training each neuron’s synaptic drive to produce the respective target pattern, the synaptic drive of every neuron in the network followed its target.

Figure 3. Quasi-static and heterogeneous patterns can be learned.

Example target patterns include complex periodic functions (product of sines with random frequencies), chaotic rate units (obtained from a random network of rate units), and OU noise (obtained by low-pass filtering white noise with time constant 100 ms). (a) Target patterns (red) overlaid with the actual synaptic drive (black) of a trained network; quasi-static prediction (Equation 1) of the synaptic drive in blue. (b) Spike trains of trained neurons elicited over multiple trials, the trial-averaged spiking rate calculated as the average number of spikes in 50 ms time bins (black), and the predicted spiking rate (blue). (c) Performance of the trained network as a function of the fraction of randomly selected targets. (d) Network response from a trained network after removing all the synaptic connections from 5%, 10% and 40% of randomly selected neurons in the network.


Figure 3—figure supplement 1. Learning target patterns with low-population spiking rate.


The synaptic drive of networks consisting of 500 neurons was trained to learn complex periodic functions $f(t) = A\sin(2\pi(t - T_0)/T_1)\sin(2\pi(t - T_0)/T_2)$, where the initial phase $T_0$ and frequencies $T_1, T_2$ were selected randomly from [500 ms, 1000 ms]. (a) The amplitude $A = 0.1$, resulting in a population spiking rate of 2.8 Hz in the trained window. (b) The amplitude $A = 0.05$, resulting in a population spiking rate of 1.5 Hz in the trained window. (c) The amplitude $A = 0.01$, resulting in a population spiking rate of 0.01 Hz in the trained window, and learning fails.
Figure 3—figure supplement 2. Learning recurrent dynamics with leaky integrate-and-fire and Izhikevich neuron models.


The synaptic drive of a network of spiking neurons was trained to follow 1000 ms long targets $f(t) = A\sin(2\pi(t - T_0)/T_1)\sin(2\pi(t - T_0)/T_2)$, where $T_0, T_1$ and $T_2$ were selected uniformly from the interval [500 ms, 1000 ms]. (a) The network consisted of $N = 200$ leaky integrate-and-fire neuron models, whose membrane potential obeys $\dot{v}_i = -(v_i - I_i)/\tau + u_i$ with a time constant $\tau = 10$ ms; the neuron spikes when $v_i$ exceeds the spike threshold $v_{thr} = -50$ mV, and then $v_i$ is reset to $v_{res} = -65$ mV. Red curves show the target pattern and black curves show the voltage trace and synaptic drive of a trained network. (b) Spike rastergram of a trained leaky integrate-and-fire neuron network generating the synaptic drive patterns. (c) The network consisted of $N = 200$ Izhikevich neurons, whose dynamics are described by two equations, $\dot{v}_i = 0.04 v_i^2 + 5 v_i + 140 - w_i + I_i + u_i$ and $\dot{w}_i = a(b v_i - w_i)$; the neuron spikes when $v_i$ exceeds 30 mV, then $v_i$ is reset to $c$ and $w_i$ is reset to $w_i + d$. Neuron parameters $a, b, c$ and $d$ were selected as in the original study (Izhikevich, 2003) so that there were equal numbers of regular spiking, intrinsic bursting, chattering, fast spiking and low threshold spiking neurons. The synaptic current $u_i$ is modeled as in Equation 6 for all neuron models, with synaptic decay time $\tau_s = 30$ ms. Red curves show the target patterns and black curves show the voltage trace and synaptic drive of a trained network. (d) Spike rastergram of a trained Izhikevich neuron network showing the trained response of different cell types.
Figure 3—figure supplement 3. Synaptic drive of a network of neurons is trained to learn an identical sine wave while external noise generated independently from OU process is injected to individual neurons.


The same external noise (gray curves) is applied repeatedly during and after training. (a)-(c) The amplitude of the external noise is varied from (a) low and (b) medium to (c) high. The target sine wave is shown in red and the synaptic drive of the neurons in black. The raster plot in (c) shows the ensemble of spike trains of a successfully trained network with strong external noise.

To verify the quasi-static condition, we compared the actual to a quasi-static approximation of the spiking rate and synaptic drive. The spiking rates of neurons were approximated using the current-to-rate transfer function with time-dependent synaptic input, and the synaptic drive was approximated by a weighted sum of the presynaptic neurons’ spiking rates. We elicited the trained patterns over multiple trials starting at random initial conditions to calculate the trial-averaged spiking rates. The quasi-static approximations of the synaptic drive and spiking rate closely matched the actual synaptic drive (Figure 3a) and trial-averaged spiking rates (Figure 3b).
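The comparison can be written compactly. The sketch below (our variable names, with `phi` standing for the current-to-rate transfer function of the neuron model) evaluates the right-hand side of Equation 1 on the measured synaptic drive and scores the prediction with a Pearson correlation, as in Figure 3a.

```python
import numpy as np

def quasi_static_drive(W, u, I, phi):
    # Right-hand side of Equation 1 evaluated on the measured drive:
    # U_i = sum_j W_ij * phi(U_j + I_j). u is an (N, T) array of synaptic
    # drives, I an (N,) vector of constant external inputs, and phi the
    # (vectorized) current-to-rate transfer function of the neuron model.
    return W @ phi(u + I[:, None])

def pearson(x, y):
    # Correlation used to score the match for a single neuron's trace.
    x, y = x - x.mean(), y - y.mean()
    return (x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum())
```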

To examine how the heterogeneity of target patterns may facilitate learning, we created sets of target patterns in which the fraction of randomly generated targets was varied systematically. For the non-random targets, we used the same target pattern repeatedly. A network was able to encode target patterns with high accuracy when the fraction of random targets was large, whereas networks trained on too many repeated target patterns failed to learn (Figure 3c). Beyond a certain fraction of random patterns, including additional patterns did not improve the performance, suggesting that the set of basis functions was over-complete. We probed the stability of over-complete networks under neuron loss by eliminating all the synaptic connections from a fraction of the neurons (a sketch of this probe follows below). A network was first trained to learn target outputs where all the patterns were selected randomly (i.e. the fraction of random targets equals 1) to ensure that the target patterns formed a set of redundant basis functions. Then, we elicited the trained patterns after removing a fraction of neurons from the network, which entails eliminating all the synaptic connections from the lost neurons. A trained network with 5% neuron loss was able to generate the trained patterns perfectly, 10% neuron loss resulted in a mild degradation of the network response, and the trained patterns completely disappeared after 40% neuron loss (Figure 3d).
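The neuron-loss probe amounts to a one-line operation on the connectivity; the sketch below is our implementation of the manipulation described above.

```python
import numpy as np

def remove_neurons(W, frac, seed=0):
    # Deleting a neuron eliminates all synaptic connections *from* it.
    # Since u_i = sum_j W_ij s_j, neuron j's output enters through column
    # j of W, so neuron loss zeroes the corresponding columns.
    rng = np.random.default_rng(seed)
    W = W.copy()
    lost = rng.choice(W.shape[0], size=int(frac * W.shape[0]), replace=False)
    W[:, lost] = 0.0
    return W
```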

The target dynamics considered in Figure 3 had population spiking rates of 9.1 Hz (periodic), 7.2 Hz (chaotic) and 12.1 Hz (OU) within the training window. To examine how population activity may influence learning, we trained networks to learn target patterns whose average amplitude was reduced gradually across target sets. The networks were able to learn when the population spiking rate of the target dynamics was as low as 1.5 Hz. However, the performance deteriorated as the population spiking rate decreased further (Figure 3—figure supplement 1). To demonstrate that learning does not depend on the spiking mechanism, we trained the synaptic drive of spiking networks using different neuron models. A network of leaky integrate-and-fire neurons, as well as a network of Izhikevich neurons whose neuron parameters were tuned to have five different firing patterns, successfully learned complex synaptic drive patterns (Figure 3—figure supplement 2).

Stabilizing rate fluctuations in a network respecting Dale’s law

A random network with balanced excitation and inhibition is a canonical model for a cortical circuit that produces asynchronous single unit activity (Sompolinsky et al., 1988; van Vreeswijk et al., 1996; Renart et al., 2010; Ostojic, 2014; Rosenbaum et al., 2017). The chaotic activity of balanced rate models (Sompolinsky et al., 1988) has been harnessed to accomplish complex tasks by including a feedback loop (Sussillo and Abbott, 2009), stabilizing chaotic trajectories (Laje and Buonomano, 2013) or introducing low-rank structure to the connectivity matrix (Mastrogiuseppe and Ostojic, 2017). Balanced spiking networks have been shown to possess similar capabilities (Thalmeier et al., 2016; DePasquale et al., 2016; Abbott et al., 2016; Nicola and Clopath, 2017; Denève and Machens, 2016), but it is unknown if it is possible to stabilize the heterogeneous fluctuations of the spiking rate in the strong coupling regime (Ostojic, 2014). Here, we extended the work of Laje and Buonomano (2013) to spiking networks and showed that strongly fluctuating single neuron activities can be turned into dynamic attractors by adjusting the recurrent connectivity.

We considered a network of randomly connected excitatory and inhibitory neurons that respected Dale’s Law. Prior to training, the synaptic and spiking activity of individual neurons showed large variations across trials because small discrepancies in the initial network state led to rapid divergence of the network dynamics. When simulated with two different initial conditions, the synaptic drives to the neurons deviated strongly from each other (Figure 4a), the spiking activity of single neurons was uncorrelated across trials, and the trial-averaged spiking rate had little temporal structure (Figure 4b). The network activity was also sensitive to small perturbations; the microstate of two identically prepared networks diverged rapidly if a single spike was deleted from one of the networks (Figure 4c). It has previously been questioned whether the chaotic nature of an excitatory-inhibitory network can be utilized to perform reliable computations (London et al., 2010; Monteforte and Wolf, 2012).

Figure 4. Learning innate activity in a network of excitatory and inhibitory neurons that respects Dale’s Law.


(a) Synaptic drive of sample neurons starting at random initial conditions in response to the external stimulus prior to training. (b) Spike raster of sample neurons evoked by the same stimulus over multiple trials with random initial conditions. (c) Single spike perturbation of an untrained network. (d)-(f) Synaptic drive, multi-trial spiking response and single spike perturbation in a trained network. (g) The average phase deviation of theta neurons due to single spike perturbation. (h) Left, distribution of eigenvalues of the recurrent connectivity before and after training as a function of their absolute values. Right, eigenvalue spectrum of the recurrent connectivity; the gray circle has unit radius. (i) The accuracy of the quasi-static approximation in untrained networks and the performance of trained networks as a function of coupling strength J and synaptic time constant τs. Color bar shows the Pearson correlation between predicted and actual synaptic drive in untrained networks (left) and between innate and actual synaptic drive in trained networks (right).

As in Laje and Buonomano (2013), we sought to tame the chaotic trajectories of single neuron activities when the coupling strength is strong enough to induce large and irregular spiking rate fluctuations in time and across neurons (Ostojic, 2014). We initiated the untrained network at random initial conditions to harvest innate synaptic activity, that is, a set of synaptic trajectories that the network already knows how to generate. Then, the recurrent connectivity was trained so that the synaptic drive of every neuron in the network follows the innate pattern when stimulated with an external stimulus. To respect Dale’s Law, the RLS learning rule was modified such that it did not apply updates that would change the sign of a synaptic connection.
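A minimal sketch of this sign constraint, assuming the candidate update is computed first and then projected (the paper specifies the rule only at the level of "do not update connections that would change sign", so the projection below is our implementation choice):

```python
import numpy as np

def dale_projected_update(w_old, w_new, is_excitatory):
    # Keep a candidate weight only if it retains the sign dictated by the
    # presynaptic cell type; otherwise revert it to its previous value.
    allowed = np.where(is_excitatory, w_new >= 0.0, w_new <= 0.0)
    return np.where(allowed, w_new, w_old)
```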

After training, the synaptic drive to every neuron in the network was able to track the innate trajectories in response to the external stimulus within the trained window and diverged from the target pattern outside the trained window (Figure 4d). When the trained network was stimulated to evoke the target patterns, the trial-averaged spiking rate developed a temporal structure that was not present in the untrained network (Figure 4e). To verify the reliability of learned spiking patterns, we simulated the trained network twice with identical initial conditions but deleted one spike 200 ms after evoking the trained response from one of the simulations. Within the trained window, the relative deviation of the microstate was markedly small in comparison to the deviation observed in the untrained network. Outside the trained window, however, two networks diverged rapidly again, which demonstrated that training the recurrent connectivity created an attracting flux tube around what used to be chaotic spike sequences (Monteforte and Wolf, 2012) (Figure 4f,g). Analyzing the eigenvalue spectrum of the recurrent connectivity revealed that the distribution of eigenvalues shifts towards zero and the spectral radius decreased as a result of training, which is consistent with the more stable network dynamics found in trained networks (Figure 4h).

To demonstrate that learning the innate trajectories works well when an excitatory-inhibitory network satisfies the quasi-static condition, we scanned the coupling strength J (see Materials and methods, 'Training recurrent dynamics' for the definition) and the synaptic time constant τs over a wide range and evaluated the accuracy of the quasi-static approximation in untrained networks. We found that increasing either J or τs promoted strong fluctuations in spiking rates (Ostojic, 2014; Harish and Hansel, 2015), hence improving the quasi-static approximation (Figure 4i). Learning performance was correlated with adherence to the quasi-static approximation, resulting in better performance for strong coupling and long synaptic time constants.

Generating an ensemble of in vivo spiking patterns

We next investigated whether the training method could be applied to actual spike recordings of a large number of neurons. In a previous study, a network of rate units was trained to match sequential activity imaged from posterior parietal cortex as a possible mechanism for short-term memory (Harvey et al., 2012; Rajan et al., 2016). Here, we aimed to construct recurrent spiking networks that captured the heterogeneous spiking activity of cortical neurons involved in motor planning and movement (Churchland and Shenoy, 2007; Churchland et al., 2012; Li et al., 2015).

The in vivo spiking data were obtained from the publicly available data of Li et al. (2015), who recorded the spike trains of a large number of neurons from the anterior lateral motor cortex of mice engaged in planning and executing directed licking over multiple trials. We compiled the trial-averaged spiking rates of $N_{cor} = 227$ cortical neurons from their data set (Li et al., 2014), and trained a recurrent network model to reproduce the spiking rate patterns of all $N_{cor}$ neurons autonomously in response to a brief external stimulus. We trained only the recurrent connectivity and did not alter the single neuron dynamics or external inputs.

First, we tested if a recurrent network of size $N_{cor}$ was able to generate the spiking rate patterns of the same number of cortical neurons. This network model assumed that the spiking patterns of the $N_{cor}$ cortical neurons could be self-generated within a recurrent network. After training, the spiking rate of the neuron models captured the overall trend of the cortical spiking rate, but not the rapid changes that may be pertinent to short-term memory and the motor response (Figure 5b). We hypothesized that the discrepancy might be attributed to other sources of input to the neurons not included in the model, such as recurrent input from other neurons in the local population or input from other areas of the brain, or to neuron dynamics that cannot be captured by our neuron model. We thus sought to improve the performance by adding $N_{aux}$ auxiliary neurons to the recurrent network to mimic the spiking activity of unobserved neurons in the local population, and trained the recurrent connectivity of a network of size $N_{cor} + N_{aux}$ (Figure 5a). The auxiliary neurons were trained to follow spiking rate patterns obtained from an OU process and provided heterogeneity to the overall population activity patterns. When $N_{aux}/N_{cor} = 2$, the spiking patterns of the neuron models accurately fit those of the cortical neurons (Figure 5c), and the population activity of all $N_{cor}$ cortical neurons was well captured by the network model (Figure 5d). The fit to cortical activity improved gradually as a function of the fraction of auxiliary neurons in the network, due to the increased heterogeneity of the target patterns (Figure 5e).

Figure 5. Generating in vivo spiking activity in a subnetwork of a recurrent network.


(a) Network schematic showing cortical (black) and auxiliary (white) neuron models trained to follow the spiking rate patterns of cortical neurons and target patterns derived from OU noise, respectively. Multi-trial spike sequences of sample cortical and auxiliary neurons in a successfully trained network. (b) Trial-averaged spiking rate of cortical neurons (red) and neuron models (black) when no auxiliary neurons are included. (c) Trial-averaged spiking rate of cortical and auxiliary neuron models when $N_{aux}/N_{cor} = 2$. (d) Spiking rate of all the cortical neurons from the data (left) and the recurrent network model (right) trained with $N_{aux}/N_{cor} = 2$. (e) The fit to cortical dynamics improves as the number of auxiliary neurons increases. (f) Random shuffling of synaptic connections between cortical neuron models degrades the fit to cortical data. Error bars show the standard deviation of results from 10 trials.

To verify that the cortical neurons in the network model were not simply driven by feedforward inputs from the auxiliary neurons, we randomly shuffled a fraction of the recurrent connections between cortical neurons after successful training. The fit to the cortical data deteriorated as the fraction of shuffled synaptic connections between cortical neurons increased, which confirmed that the recurrent connections between the cortical neurons played a role in generating the spiking patterns (Figure 5f).

Sufficient conditions for learning

We can quantify the sufficient conditions the target patterns need to satisfy in order to be successfully encoded in a network. The first condition is that the dynamical time scale of both neurons and synapses must be sufficiently fast compared to the target patterns such that targets can be considered constant (quasi-static) on a short time interval. In terms of network dynamics, the quasi-static condition implies that the synaptic and neuron dynamics operate as if in a stationary state even though the stationary values change as the network activity evolves in time. In this quasi-static state, we can use a mean field description of the spiking dynamics to derive a self-consistent equation that captures the time-dependent synaptic and spiking activity of neurons (Buice and Chow, 2013; Ostojic, 2014; Brunel, 2000) (see Materials and methods, 'Mean field description of the quasi-static dynamics'). Under the quasi-static approximation, the synaptic drive satisfies

$$U_i(t) = \sum_{j=1}^{N} W_{ij}\,\phi(U_j(t) + I_j), \qquad (1)$$

and the spiking rate $R_i = \phi(U_i + I_i)$ satisfies

$$R_i(t) = \phi\left(\sum_{j=1}^{N} W_{ij} R_j(t) + I_i\right), \qquad (2)$$

where $\phi$ is the current-to-rate transfer (i.e. gain) function and $I_i$ is a constant external input.

The advantage of operating in a quasi-static state is that both measures of network activity become conducive to learning new patterns. First, Equation 1 is closed in terms of $U$, which implies that training the synaptic drive is equivalent to training a rate-based network. Second, because Equation 1 is linear in $W$, the RLS algorithm can efficiently optimize the recurrent connectivity $W$ while the synaptic drive closely follows the target patterns, as shown in Figure 1b. The spiking rate also provides a closed description of the network activity, as described in Equation 2. However, because Equation 2 is nonlinear in $W$, the spiking rate learns only when the total input to a neuron is suprathreshold, that is, when the gradient of $\phi$ is positive. For this reason, the learning error cannot be controlled as tightly as for the synaptic drive, and additional trials are required for successful learning, as shown in Figure 1d.

The second condition requires the target patterns to be sufficiently heterogeneous in time and across neurons. Such complexity gives the ensemble of spiking activity a spatiotemporal structure rich enough to generate the desired activity patterns of every neuron within the network. From the perspective of 'reservoir computing' (Maass et al., 2002; Jaeger and Haas, 2004; Sussillo and Abbott, 2009), every neuron in a recurrent network is considered to be a read-out and, at the same time, part of the reservoir that is collectively used to produce desired patterns in single neurons. The heterogeneity condition is equivalent to having a set of complete (or over-complete) basis functions, that is, $\phi(U_j + I_j), j = 1, \ldots, N$ in Equation 1 and $R_j, j = 1, \ldots, N$ in Equation 2, for generating the target patterns, that is, the left-hand sides of Equations 1 and 2. The two conditions are not necessarily independent: heterogeneous targets also foster asynchronous spiking activity, which supports quasi-static dynamics.

We can illustrate the necessity of heterogeneous target functions with a simple argument. Successful learning is achieved for the synaptic drive when Equation 1 is satisfied. If we discretize time into $P$ 'quasi-static' bins, then we can consider the target $U_i(t)$ as an $N \times P$ matrix $U$ that satisfies the system of equations expressed in matrix form as $U = WV$, where $V \equiv \phi(U + I)$ is an $N \times P$ matrix. Since the elements of $W$ are the unknowns, it is convenient to consider the transpose of the matrix equation, $U^T = V^T W^T$. Solving for $W^T$ is equivalent to finding $w_i$ in $u_i = V^T w_i$ for $i = 1, \ldots, N$, where $u_i$ is a vector in $P$-dimensional Euclidean space $\mathbb{R}^P$ denoting the $i$th column of $U^T$ (the synaptic drive of neuron $i$) and $w_i$ is an $N$-dimensional vector denoting the $i$th column of $W^T$ (the incoming synaptic connections to neuron $i$). We also denote the column vectors of $V^T$ in $\mathbb{R}^P$ by $v_1, \ldots, v_N$ (the firing rate patterns of neurons induced by the target functions). For each $i$, the system of equations consists of $P$ equations and $N$ unknowns.

In general, the system of equations is solvable if all target functions $u_i, i = 1, \ldots, N$ lie in the subspace spanned by $v_1, \ldots, v_N$. This is equivalent to stating that the target functions can be self-consistently generated by the firing rate patterns induced by the target functions. We define target functions to be sufficiently heterogeneous if $\mathrm{rank}(V)$ is maximal, and show that this is a sufficient condition for solutions to exist. Since the span of $v_1, \ldots, v_N$ encompasses the largest possible subspace in $\mathbb{R}^P$ if $\mathrm{rank}(V)$ is maximal, it is justified as a mathematical definition of sufficiently heterogeneous. In particular, if $N \geq P$ and $\mathrm{rank}(V)$ is maximal, we have $\dim \mathrm{span}\{v_1, \ldots, v_N\} = P$, which implies that the set of firing rate vectors $v_1, \ldots, v_N$ fully spans $\mathbb{R}^P$, of which the target vectors $u_i$ are elements; in other words, $v_1, \ldots, v_N$ forms an (over-)complete set of basis functions of $\mathbb{R}^P$. On the other hand, if $N < P$ and $\mathrm{rank}(V)$ is maximal, we have $\dim \mathrm{span}\{v_1, \ldots, v_N\} = N$, which implies that the linearly independent $v_1, \ldots, v_N$ can only span an $N$-dimensional subspace of $\mathbb{R}^P$, but this subspace still attains the largest possible dimension.

Now we consider the solvability of $u_i = V^T w_i$ when $\mathrm{rank}(V)$ is maximal. For $N \geq P$, the set of vectors $v_1, \ldots, v_N$ fully spans $\mathbb{R}^P$; equivalently, there are more unknowns ($N$) than independent equations ($P$), in which case the equation can always be satisfied and learning the pattern is possible. If $N$ is strictly larger than $P$, then a regularization term is required for the algorithm to converge to a specific solution out of the many possible solutions, the number of which decreases as $P$ approaches $N$. For $N < P$, on the other hand, $v_1, \ldots, v_N$ spans an $N$-dimensional subspace of $\mathbb{R}^P$; equivalently, there are more equations than unknowns and perfect learning is not possible. However, since $\mathrm{rank}(V)$ is maximal, there is an approximate regression solution of the form $W = U V^T (V V^T)^{-1}$, where the inverse of $V V^T$ exists since the set of vectors $v_1, \ldots, v_N$ is linearly independent.
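For intuition, the regression solution can be written directly. The sketch below is the batch counterpart of the recursive training (not the authors' procedure, which updates $W$ iteratively as the dynamics unfold), with a small ridge term standing in for the RLS regularizer:

```python
import numpy as np

def batch_weight_solution(U, V, reg=1e-6):
    # U and V are N x P matrices of target drives and the firing rates
    # they induce. The ridge term keeps the Gram matrix invertible even
    # when rank(V) is not maximal.
    G = V @ V.T + reg * np.eye(V.shape[0])   # N x N Gram matrix
    return U @ V.T @ np.linalg.inv(G)        # W = U V^T (V V^T + reg I)^{-1}
```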

When $\mathrm{rank}(V)$ is not maximal, successful learning is still possible as long as all $u_i, i = 1, \ldots, N$ lie close to the subspace spanned by $v_1, \ldots, v_N$. However, success then depends on the specific choice of target functions: the dimension of the subspace spanned by $v_1, \ldots, v_N$ is strictly less than $P$, so whether the rows of $U$ are contained in or close to this subspace is determined by the geometry of the subspace. This shows why increasing pattern heterogeneity, which makes the columns of $V^T$ more independent and the rank higher, is beneficial for learning. Conversely, as a larger number of neurons is trained on the same target, as considered in Figure 3c, it becomes increasingly difficult to construct the target pattern $u_i$ from the limited set of basis functions $v_1, \ldots, v_N$.

This argument also shows why learning capability declines as $P$ increases, with a steep decline for $P > N$. If we ascribe a quasi-static bin to some fraction of the pattern correlation time, then $P$ will scale with the temporal length of the pattern. In this way, we can intuitively visualize the temporal storage capacity demonstrated through simulations below in Figure 7.

We note that although Equations 1 and 2 describe the dynamical state in which learning works well, merely finding W that satisfies one of the equations does not guarantee that a spiking network with recurrent connectivity W will produce the target dynamics in a stable manner. The recurrent connectivity W needs to be trained iteratively as the network dynamics unfold in time to ensure that the target dynamics is generated in a stable manner (Sussillo and Abbott, 2009). There are three aspects of the training scheme that promote stable dynamics around the target trajectories. First, the stimulus at the onset of the learning window is applied at random times so it only sets the initial network states close to each other but with some random deviations. Training with initial conditions sampled from a small region in the state space forces the trained network to be robust to the choice of initial condition, and the target dynamics can be evoked reliably. Second, various network states around the target trajectories are explored while W is learning the desired dynamics. In-between the time points when W is updated, the network states evolve freely with no constraints and can thus diverge from the desired trajectory. This allows the network to visit different network states in the neighborhood of the target trajectories during training, and the trained network becomes resistant to relatively small perturbations from the target trajectories. Third, the synaptic update rule is designed to reduce the error between the target and the ongoing network activity each time W is updated. Thus, the sequential nature of the training procedure automatically induces stable dynamics by contracting trajectories toward the target throughout the entire path. In sum, robustness to initial conditions and network states around the target trajectories, together with the contractive property of the learning scheme, allow the trained network to generate the target dynamics in a stable manner.

Characterizing learning error

Learning errors can be classified into two categories: tracking errors, which arise because the target is not a solution of the true spiking network dynamics, and sampling errors, which arise from encoding a continuous function with a finite number of spikes. We note that a rate network would have only a tracking error. We quantified these learning errors as a function of the network and target time scales. The intrinsic time scale of the spiking network dynamics was the synaptic decay constant $\tau_s$, and the time scale of the target dynamics was the decay constant $\tau_c$ of the OU noise. We used target patterns generated from OU noise since the trajectories have a predetermined time scale and their spatiotemporal patterns are sufficiently heterogeneous.

We systematically varied τs and τc from fast AMPA-like (~ 1 ms) to slow NMDA-like synaptic transmission (~ 100 ms) and trained the synaptic drive of networks with synaptic time scale τs to learn a set of OU trajectories with timescale τc. The parameter scan revealed a learning regime, where the networks successfully encoded the target patterns, and two error-dominant regimes. The tracking error was prevalent when synapses were slow in comparison to target patterns, and the sampling error dominated when the synapses were fast (Figure 6a).

Figure 6. Sampling and tracking errors.


Synaptic drive was trained to learn 1 s long trajectories generated from OU noise with decay time $\tau_c$. (a) Performance of networks of size $N = 500$ as a function of synaptic decay time $\tau_s$ and target decay time $\tau_c$. (b) Examples of trained networks whose responses show sampling error, tracking error, and successful learning. The target trajectories are identical and $\tau_c = 100$ ms. (c) Inverted 'U'-shaped curve as a function of synaptic decay time. Error bars show the s.d. of five trained networks of size $N = 500$. (d) Inverted 'U'-shaped curve for networks of sizes $N = 500$ and 1000 for $\tau_c = 100$ ms. (e) Network performance shown as a function of $\tau_s/\tau_c$, where $\tau_s$ ranges from 30 ms to 500 ms, $\tau_c$ ranges from 1 ms to 500 ms, and $N = 1000$. (f) Network performance shown as a function of $1/\sqrt{N\tau_s}$, where $\tau_s$ ranges from 1 ms to 30 ms, $N$ ranges from 500 to 1000, and $\tau_c = 100$ ms.

A network with a synaptic decay time $\tau_s = 200$ ms failed to follow rapid changes in the target patterns, but still captured the overall shape, when the target patterns had a faster time scale $\tau_c = 100$ ms (Figure 6b, Tracking error). This prototypical example showed that the synaptic dynamics were not fast enough to encode the target dynamics in the tracking error regime. With a faster synapse, $\tau_s = 30$ ms, the synaptic drive was able to learn the identical target trajectories with high accuracy (Figure 6b, Learning). Note that although the target time scale ($\tau_c = 100$ ms) was significantly slower than the synaptic time scale ($\tau_s = 30$ ms), tuning the recurrent synaptic connections was sufficient for the network to generate slow network dynamics using fast synapses. This phenomenon appeared robustly across the learning regime in Figure 6a, where learning occurred successfully for parameters lying above the diagonal line ($\tau_c > \tau_s$). When the synapse was too fast ($\tau_s = 5$ ms), however, the synaptic drive fluctuated around the target trajectories with high frequency (Figure 6b, Sampling error). This was a typical network response in the sampling error regime, where discrete spikes with narrow width and large amplitude were summed to 'sample' the target synaptic activity.

To better understand how network parameters determined the learning errors, we mathematically analyzed the errors assuming that (1) target dynamics can be encoded if the quasi-static condition holds, and (2) the mean field description of the target dynamics is accurate (see Materials and methods, 'Analysis of learning error'). The learning errors were characterized as a deviation of these assumptions from the actual spiking network dynamics. We found that the tracking errors $\epsilon_{\mathrm{track}}$ were substantial if the quasi-static condition was not valid, that is, if synapses were not fast enough for spiking networks to encode the targets, and the sampling errors $\epsilon_{\mathrm{sample}}$ occurred if the mean field description became inaccurate, that is, if the discrete representation of targets in terms of spikes deviated from their continuous representation in terms of spiking rates. The errors were estimated to scale as

$$\epsilon_{\mathrm{track}} \sim \tau_s/\tau_c, \qquad \epsilon_{\mathrm{sample}} \sim 1/\sqrt{N\tau_s}, \qquad (3)$$

which implied that the tracking error can be controlled as long as synapses are faster than the target patterns, and the sampling error can be controlled by either increasing $\tau_s$ to stretch the width of individual spikes or increasing $N$ to encode the targets with more input spikes. These error estimates reveal the versatility of recurrent spiking networks for encoding arbitrary patterns, since $\epsilon_{\mathrm{track}}$ can be reduced by making $\tau_s$ small enough and $\epsilon_{\mathrm{sample}}$ can be reduced by making $N$ large enough. In particular, target signals substantially slower than the synaptic dynamics (i.e. $\tau_s/\tau_c \ll 1$) can be encoded reliably as long as the network size is large enough to represent the slow signals with filtered spikes that have narrow widths. Such slow dynamics have also been investigated in randomly connected recurrent networks when coupling is strong (Sompolinsky et al., 1988; Ostojic, 2014) and when reciprocal connections are over-represented (Martí et al., 2018).
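To put rough numbers on Equation 3, the snippet below evaluates the two scalings for the three example networks of Figure 6b. The analysis fixes no proportionality constants, and the square-root form follows our reconstruction of Equation 3, so these are relative magnitudes only.

```python
import numpy as np

# Relative error scalings from Equation 3 for the settings of Figure 6b.
tau_c, N = 0.100, 500                       # 100 ms targets, 500 neurons
for tau_s in [0.200, 0.030, 0.005]:         # synaptic decay times in seconds
    track = tau_s / tau_c                   # tracking-error scaling
    sample = 1.0 / np.sqrt(N * tau_s)       # sampling-error scaling
    print(f"tau_s = {1e3 * tau_s:3.0f} ms: track ~ {track:.2f}, sample ~ {sample:.2f}")
# tau_s = 200 ms is tracking-dominated, tau_s = 5 ms is sampling-dominated,
# and tau_s = 30 ms keeps both terms small, matching the three regimes shown.
```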

We examined the performance of trained networks to verify whether the theoretical results could explain the learning errors. The learning curve, as a function of $\tau_s$, had an inverted U-shape when both types of errors were present (Figure 6c,d). Successful learning occurred in an optimal range of $\tau_s$: consistent with the error analysis, the performance decreased monotonically with $\tau_s$ on the right branch due to an increase in the tracking error, while it increased monotonically with $\tau_s$ on the left branch due to a decrease in the sampling error. The tracking error was reduced if the target patterns were slowed down from $\tau_c = 50$ ms to $\tau_c = 200$ ms, hence decreasing the ratio $\tau_s/\tau_c$. The learning curve then became sigmoidal, and the performance remained high even when $\tau_s$ was in the slow NMDA regime (Figure 6c). On the other hand, the sampling error was reduced if the network size was increased from $N = 500$ to 1500, which lifted the left branch of the learning curve (Figure 6d). Note that when the two error regimes were well separated, changes in the target time scale $\tau_c$ did not affect $\epsilon_{\mathrm{sample}}$, and changes in the network size $N$ did not affect $\epsilon_{\mathrm{track}}$, as predicted.

Finally, we condensed the training results over a wide range of target time scales in the tracking error regime (Figure 6e), and similarly condensed the training results over different network sizes in the sampling error regime (Figure 6f), to demonstrate that $\tau_s/\tau_c$ and $N\tau_s$ explained the overall performance in the tracking and sampling error regimes, respectively.

Learning capacity increases with network size

It has been shown that a recurrent rate network's capability to encode target patterns deteriorates as a function of the length of time (Laje and Buonomano, 2013), but an increase in network size can enhance its storage capacity (Jaeger, 2001; White et al., 2004; Rajan et al., 2016). Consistent with these results, we found that the performance of recurrent spiking networks learning complex trajectories decreased with target length and improved with network size (Figure 7a).

Figure 7. Capacity as a function of network size.


(a) Performance of trained networks as a function of target length $T$ for networks of size $N = 500$ and 1000. Target patterns were generated from OU noise with decay time $\tau_c = 100$ ms. (b) Networks of fixed sizes trained on a range of target lengths and correlations. Color bar shows the Pearson correlation between target and actual synaptic drive. The black lines show the function $T_{max} = \tilde{T}_{max}\tau_c$, where $\tilde{T}_{max}$ was fitted to minimize the least-squares error between the linear function and the maximal target length $T_{max}$ that could be successfully learned at each $\tau_c$. (c) Learning capacity $\tilde{T}_{max}$ shown as a function of network size.

To assess the storage capacity of spiking networks, we evaluated the maximal target length that can be encoded in a network as a function of network size. It was necessary to define the target length in terms of an 'effective length' to account for the fact that target patterns of the same duration may have different effective lengths due to their temporal structure; for instance, OU noise with a short temporal correlation time has more structure to be learned than a constant function. For target trajectories generated from an OU process with decay time $\tau_c$, we rescaled the target length $T$ by $\tau_c$ and defined the effective length $\tilde{T} = T/\tau_c$. The capacity of a network was the maximal $\tilde{T}$ that could be successfully encoded in the network.

To estimate the maximal $\tilde{T}$, we trained networks of fixed size to learn OU trajectories while varying $T$ and $\tau_c$ (each panel in Figure 7b). Then, for each $\tau_c$, we found the maximal target length $T_{max}$ that could be learned successfully, and estimated the maximal $\tilde{T}$ by finding the constant $\tilde{T}_{max}$ that best fits the line $T_{max} = \tilde{T}_{max}\tau_c$ to the training results (black lines in Figure 7b). Figure 7c shows that the learning capacity $\tilde{T}_{max}$ increases monotonically with network size.
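The fit itself is a one-line least-squares problem; the sketch below is our implementation of the slope estimate.

```python
import numpy as np

def fit_capacity(tau_c, T_max):
    # Least-squares slope of a line through the origin,
    # T_max = T~_max * tau_c, as drawn in the black lines of Figure 7b.
    tau_c, T_max = np.asarray(tau_c), np.asarray(T_max)
    return (tau_c @ T_max) / (tau_c @ tau_c)
```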

Discussion

Our findings show that individual neurons embedded in a recurrent network can learn to produce complex activity by adjusting the recurrent synaptic connections. Most previous research on learning in recurrent neural networks focused on training the network outputs to perform useful computations and subsequently analyzed the recurrent activity in comparison with measured neuron activity (Sussillo and Abbott, 2009; Sussillo et al., 2015; Sussillo and Barak, 2013; Wang et al., 2018; Chaisangmongkon et al., 2017; Remington et al., 2018). In contrast to such output-centric approaches, our study takes a network-centric perspective and directly trains the activity of neurons within a network individually. Several studies have trained a rate-based network model to learn specific forms of target recurrent activity, such as innate chaotic dynamics (Laje and Buonomano, 2013), sequential activity (Rajan et al., 2016), and trajectories from a target network (DePasquale et al., 2018). In this study, we showed that the synaptic drive and spiking rate of a synaptically-coupled spiking network can be trained to follow arbitrary spatiotemporal patterns. The necessary ingredients for learning are that the spike train inputs to a neuron are weakly correlated (i.e. heterogeneous target patterns), the synapses are fast enough (i.e. small tracking error), and the network is large enough (i.e. small sampling error and large capacity). We demonstrated that (1) a network consisting of excitatory and inhibitory neurons can learn to track its strongly fluctuating innate synaptic trajectories, and (2) a recurrent spiking network can learn to reproduce the spiking rate patterns of an ensemble of cortical neurons involved in motor planning and movement.

Our scheme works because the network quickly enters a quasi-static state where the instantaneous firing rate of a neuron is a fixed function of its inputs (Figure 3a,b; Equations 1, 2). Learning fails if the synaptic time scale is slow compared to the time scale of the target, in which case the quasi-static condition is violated and the tracking error becomes large. There is a trade-off between the tracking error and sampling noise: a fast synapse decreases the tracking error but increases the sampling noise. Increasing the network size decreases the sampling noise without affecting the tracking error (Figure 6e,f; Equation 3). Therefore, the analysis of learning error and our simulations suggest that it is possible to learn arbitrarily complex recurrent dynamics by adjusting the synaptic time scale and network size.

An important structural property of our network model is that the synaptic inputs are summed linearly, which allows the synaptic activity to be trained using a recursive form of linear regression (Sussillo and Abbott, 2009; Equation 6). Linear summation of synaptic inputs is a standard assumption for many spiking network models (van Vreeswijk et al., 1996; Renart et al., 2010; Brunel, 2000; Wang and Buzsáki, 1996; Rosenbaum et al., 2017) and there is physiological evidence that linear summation is prevalent (Cash and Yuste, 1998; Cash and Yuste, 1999). Training the spiking rate, on the other hand, cannot take full advantage of the linear synapse due to the nonlinear current-to-rate transfer function (Figure 1d,e; Equation 2). The network is capable of following a wide repertoire of patterns because, even though the network dynamics are highly nonlinear, the system effectively reduces to a linear system for learning. Moreover, the learning capacity can be estimated using a simple solvability condition for a linear system. However, nonlinear dendritic processing has been widely observed (Gasparini and Magee, 2006; Nevian et al., 2007) and may have computational consequences (Memmesheimer, 2010; Memmesheimer and Timme, 2012; Thalmeier et al., 2016). Whether a recurrent network with nonlinear synapses can be trained to learn arbitrary recurrent dynamics requires further investigation.

We note that our learning scheme does not train precise spike times; it trains either the spiking rate or the synaptic drive. The stimulus at the onset of the learning window attempts to set the network to a specific state, but due to the variability of the initial conditions the network states can only be set approximately close to each other across trials. Because of this discrepancy in network states at the onset, the spike times are not aligned precisely across trials. Hence, our learning scheme supports rate coding as opposed to spike coding. However, spike trains that have temporally irregular structure across neurons actually enhance the rate coding scheme by providing sufficient computational complexity to encode the target dynamics (Results, 'Sufficient conditions for learning'). In fact, all neurons in the network can be trained to follow the same target patterns as long as there is sufficient heterogeneity, for example noisy external input, and the neuron time constant is fast enough (Figure 3—figure supplement 3). We also note that the same learning scheme can be used to train the recurrent dynamics of rate-based networks (Figure 1—figure supplement 1). In fact, the learning is more efficient in a rate network since there is no sampling error to avoid.

The RLS algorithm, as demonstrated in this and other studies (Sussillo and Abbott, 2009; Sussillo et al., 2015; Laje and Buonomano, 2013; Rajan et al., 2016; DePasquale et al., 2018; Wang et al., 2018), successfully generates desired outputs in a stable manner because the synaptic update rule contracts the network activity towards the target output, and the synaptic connections are adjusted while the network explores various states around the target trajectories. It would be interesting to examine more rigorously how such an iterative learning scheme turns a set of arbitrary functions into dynamic attractors to which the network dynamics converge transiently. Recent studies investigated how stable dynamics emerge when the read-outs of a rate-based network are trained to learn fixed points or continuous values (Rivkind and Barak, 2017; Beer and Barak, 2018). In addition, previous studies have investigated the mathematical relationship between the patterns of stored fixed points and the recurrent connectivity in simple network models (Curto et al., 2013; Brunel, 2016).

Although our results demonstrated that recurrent spiking networks have the capability to generate a wide repertoire of recurrent dynamics, it is unlikely that a biological network is using this particular learning scheme. The learning rule derived from the recursive least squares algorithm is very effective but is nonlocal in time; that is, it uses the activity of all presynaptic neurons within the training window to update synaptic weights. Moreover, each neuron in the network is assigned a target signal, and the synaptic connections are updated at a fast time scale as the error function is computed in a supervised manner. It would be of interest to find out whether more biologically plausible learning schemes, such as reward-based learning (Fiete and Seung, 2006; Hoerzer et al., 2014; Miconi, 2017), can lead to similar performance.

Materials and methods

Network of spiking neurons

We considered a network of N randomly and sparsely connected quadratic integrate-and-fire neurons given by

$$\tau \dot{v}_i = I_i(t) + u_i(t) + v_i^2 \qquad (4)$$

where $v_i$ is a dimensionless variable representing the membrane potential, $I_i(t)$ is an applied input, $u_i(t)$ is the total synaptic drive the neuron receives from other neurons in the recurrent network, and τ=10 ms is the neuron time constant. The threshold to spiking is zero input. For negative total input, the neuron is at rest, and for positive input, $v_i$ will go to infinity, or 'blow up', in finite time from any initial condition. The neuron is considered to spike at $v_i = \infty$, whereupon it is reset to $-\infty$ (Ermentrout, 1996; Latham et al., 2000).

To simulate the dynamics of the quadratic integrate-and-fire neurons, we used the phase representation, that is, the theta neuron model, which can be derived by a simple change of variables, $v_i = \tan(\theta_i/2)$; its dynamics are governed by

$$\tau \dot{\theta}_i = 1 - \cos\theta_i + (I_i(t) + u_i(t))(1 + \cos\theta_i), \qquad (5)$$

where a spike is emitted when θ(t)=π. The synaptic drive to a neuron obeys

$$\tau_s \dot{u}_i(t) = -u_i(t) + \sum_{j=1}^{N} W_{ij}\, s_j(t), \qquad (6)$$

where $s_j(t) = \sum_{t_j^k < t} \delta(t - t_j^k)$ is the spike train neuron j generates up to time t, and $\tau_s$ is a synaptic time constant.

The recurrent connectivity Wij describes the synaptic coupling from neuron j to neuron i. It can be any real matrix but in many of the simulations we use a random matrix with connection probability p, and the coupling strength of non-zero elements is modeled differently for different figures.

Training recurrent dynamics

To train the synaptic and spiking rate dynamics of individual neurons, it is more convenient to divide the synaptic drive Equation 6 into two parts; one that isolates the spike train of single neuron and computes its synaptic filtering

$$\tau_s \dot{r}_i(t) = -r_i(t) + s_i(t), \qquad (7)$$

and the other that combines all the presynaptic neurons’ spiking activity and computes the synaptic drive

$$u_i(t) = \sum_{j=1}^{N} W_{ij}\, r_j(t). \qquad (8)$$

The synaptic drive ui and the filtered spike train ri are two measures of spiking activity that have been trained in this study. Note that Equations 7 and 8 generate synaptic dynamics that are equivalent to Equation 6.
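As a concrete reference point, a minimal sketch of how Equations 5, 7 and 8 could be integrated with an Euler scheme is given below; the time step, seed, and parameter values are our own illustrative choices, not the paper's simulation code.

```python
import numpy as np

# Euler integration of the theta-neuron network, Equations 5, 7 and 8.
# All parameter values are illustrative, not the paper's exact settings.
N, T, dt = 200, 1000.0, 0.05          # neurons, duration (ms), step (ms)
tau, tau_s = 10.0, 20.0               # neuron and synaptic time constants (ms)

rng = np.random.default_rng(0)
p, sigma = 0.3, 4.0
W = (rng.random((N, N)) < p) * rng.normal(0.0, sigma / np.sqrt(N * p), (N, N))

theta = rng.uniform(-np.pi, np.pi, N)  # neuron phases
r = np.zeros(N)                        # filtered spike trains r_i(t)
I_ext = rng.uniform(-1.0, 1.0, N)      # static external input

for _ in range(int(T / dt)):
    u = W @ r                          # synaptic drive, Equation 8
    drive = I_ext + u
    dtheta = (1.0 - np.cos(theta) + drive * (1.0 + np.cos(theta))) / tau
    theta_new = theta + dt * dtheta
    spiked = theta_new >= np.pi        # phase crossing pi emits a spike
    theta = np.where(spiked, theta_new - 2.0 * np.pi, theta_new)
    # Equation 7: each delta spike increments r by 1/tau_s
    r += -dt * r / tau_s + spiked / tau_s
```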

Training procedure

We select N target trajectories $f_1(t), ..., f_N(t)$ of length T ms for a recurrent network consisting of N neurons. We train either the synaptic drive or the spiking rate of individual neuron i to follow the target $f_i(t)$ over the time interval [0, T] for all i=1,...,N. An external stimulus $I_i$ with amplitude sampled uniformly from [−1, 1] is applied to neuron i for all i=1,2,...,N for 100 ms immediately preceding the training to situate the network at a specific state. During training, the recurrent connectivity W is updated every Δt ms using a learning rule described below in order to steer the network dynamics toward the target dynamics. The training is repeated multiple times until changes in the recurrent connectivity stabilize.

Training synaptic drive

Recent studies have extended RLS learning (also known as FORCE learning), developed for rate networks (Sussillo and Abbott, 2009), to train the output of spiking networks, either directly (Nicola and Clopath, 2017) or indirectly using rate networks as an intermediate step (DePasquale et al., 2016; Abbott et al., 2016; Thalmeier et al., 2016). Our learning rule also uses RLS but differs from previous studies in that (a) it trains the activity of individual neurons within a spiking network and (b) neurons are trained directly by adjusting the recurrent synaptic connections, without using any intermediate networks. We modified the learning rule developed by Laje and Buonomano for a network of rate units (Laje and Buonomano, 2013) and provide a mathematical derivation of the learning rules for both the synaptic drive and the spiking rate (see Materials and methods, 'Derivation of synaptic learning rules' for details).

When learning the synaptic drive patterns, the objective is to find recurrent connectivity W that minimizes the cost function

$$C[W] = \int_0^T \frac{1}{2}\,\| f(t) - u(t) \|_{L_2}^2 \, dt + \frac{\lambda}{2}\,\|W\|_{L_2}^2, \qquad (9)$$

which measures the mean-square error between the targets and the synaptic drives over the time interval [0, T], plus a quadratic regularization term. To derive the learning rule, we use Equation 8 to express u as a function of W, view the synaptic connections $W_{i1}, ..., W_{iN}$ to neuron i as the read-out weights that determine the synaptic drive $u_i$, and apply the learning rule to the row vectors of W. To keep the recurrent connectivity sparse, learning occurs only on synaptic connections that are non-zero prior to training.

Let wi(t) be the reduced row vector of W(t) consisting of elements that have non-zero connections to neuron i prior to training. Similarly, let ri(t) be a (column) vector of filtered spikes of presynaptic neurons that have non-zero connections to neuron i. The synaptic update to neuron i is

$$w_i(t)^T = w_i(t-\Delta t)^T + e_i(t)\, P(t)\, r_i(t), \qquad (10)$$

where the error term is

$$e_i(t) = f_i(t) - w_i(t-\Delta t)\, r_i(t) \qquad (11)$$

and the inverse of the correlation matrix of filtered spike trains is

$$P(t) = P(t-\Delta t) - \frac{P(t-\Delta t)\, r_i(t)\, r_i(t)^T\, P(t-\Delta t)}{1 + r_i(t)^T\, P(t-\Delta t)\, r_i(t)}, \qquad P(0) = \lambda^{-1} I. \qquad (12)$$

Finally, W(t) is obtained by concatenating the row vectors wi(t),i=1,...,N.
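To make the recursion concrete, here is a schematic NumPy implementation of Equations 10–12 for a single neuron; the class and variable names are our own, and in the sparse case `r_pre` would hold only the presynaptic neurons with non-zero connections to neuron i.

```python
import numpy as np

class SynapticDriveRLS:
    """RLS update of one neuron's incoming weights (Equations 10-12).
    A sketch; names and the dense presynaptic vector are our own
    simplifications."""

    def __init__(self, n_pre, lam=1.0):
        self.w = np.zeros(n_pre)            # w_i, weights onto neuron i
        self.P = np.eye(n_pre) / lam        # P(0) = I / lambda

    def update(self, r_pre, f_target):
        e = f_target - self.w @ r_pre       # error term, Equation 11
        Pr = self.P @ r_pre
        self.P -= np.outer(Pr, Pr) / (1.0 + r_pre @ Pr)   # Equation 12
        self.w += e * (self.P @ r_pre)      # weight update, Equation 10
        return e
```

Calling `update` every Δt ms with the current filtered spike trains and target value reproduces the training loop described above; the rank-one form avoids any explicit matrix inversion.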

Training spiking rate

To train the spiking rate of neurons, we approximate the spike train si(t) of neuron i with its spiking rate ϕ(ui(t)+Ii) where ϕ is the current-to-rate transfer function of theta neuron model. For constant input,

$$\phi_1(x) = \pi^{-1}\sqrt{[x]_+} \quad \text{where} \quad [x]_+ = \max(x, 0), \qquad (13)$$

and for noisy input

$$\phi_2(x) = \frac{1}{\pi}\sqrt{c \log\left(1 + e^{x/c}\right)}. \qquad (14)$$

Since ϕ2 is a good approximation of ϕ1 and has a smooth transition around x=0, we used ϕϕ2 with c=0.1 (Brunel and Latham, 2003). The objective is to find recurrent connectivity W that minimizes the cost function

$$C[W] = \int_0^T \frac{1}{2}\,\| f(t) - \phi(W r(t) + I) \|_{L_2}^2 \, dt + \frac{\lambda}{2}\,\|W\|_{L_2}^2. \qquad (15)$$

If we define wi and ri as before, we can derive the following synaptic update to neuron i

$$w_i^T(t) = w_i^T(t-\Delta t) + e_i(t)\, P(t)\, \tilde{r}_i(t), \qquad (16)$$

where the error term is

$$e_i(t) = f_i(t) - \phi(w_i(t-\Delta t)\, r_i(t) + I_i) \qquad (17)$$

and

$$P(t) = P(t-\Delta t) - \frac{P(t-\Delta t)\, \tilde{r}_i(t)\, \tilde{r}_i(t)^T\, P(t-\Delta t)}{1 + \tilde{r}_i(t)^T\, P(t-\Delta t)\, \tilde{r}_i(t)}, \qquad P(0) = \lambda^{-1} I \qquad (18)$$

(see Materials and methods, 'Derivation of synaptic learning rules' for details). Note that the nonlinear effects of the transfer function are included in

$$\tilde{r}_i(t) = \phi'(u_i(t) + I_i)\, r_i(t), \qquad (19)$$

which scales the filtered spike trains of the presynaptic neurons by the gain $\phi'$ of neuron i.

As before, W(t) is obtained by concatenating the row vectors $w_i(t), i=1,...,N$.
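The spiking rate version differs from the synaptic drive version only in the error term (Equation 17) and in the gain-scaled regressor of Equation 19. A sketch using the smooth transfer function of Equation 14 with c = 0.1, as in the text; the closed form for φ′ is our own differentiation of Equation 14, and the function names are hypothetical.

```python
import numpy as np

c = 0.1  # smoothing constant of phi_2, as in the text

def phi(x):
    # smooth current-to-rate transfer function, Equation 14
    return np.sqrt(c * np.logaddexp(0.0, x / c)) / np.pi

def phi_prime(x):
    # derivative of phi (our own differentiation of Equation 14)
    s = 1.0 / (1.0 + np.exp(-x / c))                 # logistic sigmoid
    return s / (2.0 * np.pi**2 * np.maximum(phi(x), 1e-12))

def rate_rls_update(w, P, r_pre, f_target, I_ext):
    """One spiking-rate RLS step (Equations 16-19) for a single neuron."""
    u = w @ r_pre                                    # current synaptic drive
    r_tilde = phi_prime(u + I_ext) * r_pre           # gain-scaled regressor, Eq. 19
    e = f_target - phi(u + I_ext)                    # rate error, Equation 17
    Pr = P @ r_tilde
    P -= np.outer(Pr, Pr) / (1.0 + r_tilde @ Pr)     # Equation 18
    w += e * (P @ r_tilde)                           # Equation 16
    return w, P
```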

Simulation parameters

Figure 1

A network of N=200 neurons was connected randomly with probability p=0.3 and the coupling strength was drawn from a Normal distribution with mean 0 and standard deviation $\sigma/\sqrt{Np}$ with σ=4. In addition, the average of all non-zero synaptic connections to a neuron was subtracted from the connections to the neuron such that the summed coupling strength was precisely zero. Networks with balanced excitatory and inhibitory connections produced highly fluctuating synaptic and spiking activity in all neurons. The synaptic decay time was τs=20 ms.

The target functions for the synaptic drive (Figure 1b) were sine waves $f(t) = A\sin(2\pi(t - T_0)/T_1)$ where the amplitude A, initial phase $T_0$, and period $T_1$ were sampled uniformly from [0.5, 1.5], [0, 1000 ms] and [300 ms, 1000 ms], respectively. We generated N distinct target functions of length T=1000 ms. The target functions for the spiking rate (Figure 1d) were $\pi^{-1}\sqrt{[f(t)]_+}$ where f(t) were the same synaptic drive patterns that had been generated.

Immediately before each training loop, every neuron was stimulated for 50 ms with a constant external stimulus whose amplitude was sampled randomly from [−1, 1]. The same external stimulus was used across training loops. The recurrent connectivity was updated every Δt=2 ms during training using the learning rule derived from the RLS algorithm, and the learning rate was λ=1. After training, the network was stimulated with the external stimulus to evoke the trained patterns. The performance was measured by calculating the average Pearson correlation between the target functions and the evoked network response.

Figure 2

The initial network and target functions were generated as in Figure 1 using the same parameters, but now the target functions consisted of two sets of N sine waves. To learn two sets of target patterns, the training loops alternated between two patterns, and immediately before each training loop, every neuron was stimulated for 50 ms with constant external stimuli that had random amplitudes, using a different stimulus for each pattern. Each target pattern was trained for 100 loops (i.e. total 200 training loops), synaptic update was every Δt=2 ms, and the learning rate was λ=10. To evoke one of the target patterns after training, the network was stimulated with the external stimulus that was used to train that target pattern.

The network consisted of N=500 neurons. The initial connectivity was sparsely connected with connection probability p=0.3 and the coupling strength was sampled from a Normal distribution with mean 0 and standard deviation $\sigma/\sqrt{Np}$ with σ=1. The synaptic decay time was τs=20 ms.

We considered three families of target functions with length T=1000 ms. The complex periodic functions were defined as a product of two sine waves, $f(t) = A\sin(2\pi(t - T_0)/T_1)\sin(2\pi(t - T_0)/T_2)$, where A, $T_0$, $T_1$ and $T_2$ were sampled randomly from the intervals [0.5, 1.5], [0, 1000 ms], [500 ms, 1000 ms], and [100 ms, 500 ms], respectively. The chaotic rate activity was generated from a network of N randomly connected rate units, $\tau\dot{x}_i = -x_i + \sum_{j=1}^{N} M_{ij}\, h(x_j)$, where τ=40 ms, $h(x) = \pi^{-1}\sqrt{[x]_+}$, and $M_{ij}$ is non-zero with probability p=0.3 and drawn from a Gaussian distribution with mean zero and standard deviation $g/\sqrt{Np}$ with g=5. The Ornstein–Uhlenbeck process was obtained by simulating $\tau_c\dot{x} = -x + s\xi(t)$, N times with random initial conditions and different realizations of the white noise ξ(t) satisfying $\langle\xi\rangle = 0$ and $\langle\xi(t)\xi(t')\rangle = \delta(t - t')$. The decay time constant was $\tau_c$=200 ms, and the amplitude of the target function was determined by s=0.3.
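A sketch of how such Ornstein–Uhlenbeck targets might be generated (Euler–Maruyama discretization; the function name, seed, and stationary initialization are our own choices):

```python
import numpy as np

def ou_targets(n_targets, T=1000.0, dt=1.0, tau_c=200.0, s=0.3, seed=0):
    """Euler-Maruyama simulation of tau_c * dx/dt = -x + s * xi(t),
    returning an (n_targets, n_steps) array of target trajectories."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = rng.normal(0.0, s / np.sqrt(2.0 * tau_c), n_targets)  # stationary start
    out = np.empty((n_targets, n_steps))
    for k in range(n_steps):
        noise = np.sqrt(dt) * rng.normal(0.0, 1.0, n_targets)
        x += (-x * dt + s * noise) / tau_c
        out[:, k] = x
    return out

targets = ou_targets(n_targets=500)   # one trajectory per trained neuron
```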

The recurrent connectivity was updated every Δt=2 ms during training, the learning rate was λ=1, and the training loop was repeated 30 times.

Figure 4

A balanced network had two populations, where the excitatory population consisted of $(1-f)N$ neurons and the inhibitory population consisted of $fN$ neurons, with ratio f=0.2 and network size N=1000. Each neuron received $p(1-f)N$ excitatory connections with strength J and $pfN$ inhibitory connections with strength −gJ from randomly selected excitatory and inhibitory neurons. The connection probability was set to p=0.1 to have sparse connectivity. The relative strength of inhibition to excitation g was set to 5 so that the network was inhibition dominant (Brunel, 2000). In Figure 4a–h, the initial coupling strength J=6 and synaptic decay time τs=60 ms were adjusted to be large enough that the synaptic drive and spiking rate of individual neurons fluctuated strongly and slowly prior to training.

After running the initial network from random initial conditions for 3 s, we recorded the synaptic drive of all neurons for 2 s to harvest target trajectories that are innate to the balanced network. Then, the synaptic drive was trained to learn the innate trajectories, where the synaptic update occurred every 10 ms, the learning rate was λ=10, and the training loop was repeated 40 times. To respect Dale's law while training the network, we did not modify the synaptic connections if the synaptic update reversed the sign of the original connections, either from excitatory to inhibitory or from inhibitory to excitatory; moreover, the synaptic connections that attempted to change their signs were excluded from subsequent training (see the sketch below). In Figure 4h, the initial and trained connectivity matrices were normalized by a factor $\sqrt{[(1-f)J^2 + f(gJ)^2]\,p(1-p)N}$ so that the spectral radius of the initial connectivity matrix is approximately one; we then plotted the eigenvalue spectrum of the normalized matrices.
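A sketch of how the sign constraint described above might be implemented as a mask applied after each weight update; the function and array names are hypothetical, and the vectorized masking is our own reading of the procedure.

```python
import numpy as np

def dale_constrained_update(W, dW, excitatory, frozen):
    """Apply a weight update dW while respecting Dale's law: updates that
    would flip a connection's sign are discarded, and those synapses are
    excluded from subsequent training. `excitatory` is a boolean array over
    presynaptic (column) neurons; `frozen` marks excluded synapses."""
    W_new = W + dW
    sign_ok = np.where(excitatory[None, :], W_new >= 0.0, W_new <= 0.0)
    allowed = sign_ok & ~frozen
    W = np.where(allowed, W_new, W)    # keep the old weight where disallowed
    frozen = frozen | ~sign_ok         # freeze sign-flipping synapses
    return W, frozen
```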

In Figure 4i, the coupling strength J was scanned from 1 to 6 in increments of 0.25, and the synaptic decay time τs was scanned from 5 ms to 100 ms in increments of 5 ms. To measure the accuracy of the quasi-static approximation in untrained networks, we simulated the network dynamics for each pair of J and τs, then calculated the average Pearson correlation between the predicted synaptic drive (Equation 1) and the actual synaptic drive. To measure the performance of trained networks, we repeated the training 10 times using different initial network configurations and innate trajectories, and calculated the Pearson correlation between the innate trajectories and the evoked network response for all 10 trainings. The heat map shows the best performance out of 10 trainings for each pair of J and τs.

Figure 5

The initial connectivity was sparsely connected with connection probability p=0.3 and the coupling strength was sampled from a Normal distribution with mean 0 and standard deviation $\sigma/\sqrt{Np}$ with σ=1. The synaptic decay time was τs=50 ms. There were in total N neurons in the network model, of which $N_{cor}$ neurons, called cortical neurons, were trained to learn the spiking rate patterns of cortical neurons, and $N_{aux}$ neurons, called auxiliary neurons, were trained to learn trajectories generated from an OU process.

We used the trial-averaged spiking rates of neurons recorded in the anterior lateral motor cortex of mice engaged in motor planning and movement that lasted 4600 ms (Li et al., 2015). The data was available from the website CRCNS.ORG (Li et al., 2014). We selected Ncor=227 neurons from the data set, whose average spiking rate during the behavioral task was greater than 5 Hz. Each cortical neuron in the network model was trained to learn the spiking rate pattern of one of the real cortical neurons.

To generate target rate functions for the auxiliary neurons, we simulated an OU process, $\tau_c\dot{x}(t) = -x(t) + s\xi(t)$, with $\tau_c$=800 ms and s=0.1, converted it into a spiking rate $\phi([x(t)]_+)$, and low-pass filtered it with decay time τs to make it smooth. Each auxiliary neuron was trained on a 4600 ms-long target rate function generated with a random initial condition.

Figures 6 and 7

Networks consisting of N=500 neurons with no initial connections and synaptic decay time τs were trained to learn an OU process with decay time $\tau_c$ and length T. In Figure 6, the target length was fixed at T=1000 ms while the time constants τs and $\tau_c$ were varied systematically from $10^0$ ms to $5\times10^2$ ms in log-scale. The trainings were repeated five times for each pair of τs and $\tau_c$ to find the average performance. In Figure 7, the synaptic decay time was fixed at τs=20 ms, and T was scanned from 250 ms to 5000 ms in increments of 250 ms, $\tau_c$ was scanned from 25 ms to 500 ms in increments of 25 ms, and N was scanned from 500 to 1000 in increments of 50.

To ensure that the network connectivity after training is sparse, synaptic learning occurred only on connections that were randomly selected with probability p=0.3 prior to training. Recurrent connectivity was updated every Δt=2 ms during training, learning rate was λ=1, and training loop was repeated 30 times. The average Pearson correlation between the target functions and the evoked synaptic activity was calculated to measure the network performance after training.

Derivation of synaptic learning rules

Here, we derive the synaptic update rules for the synaptic drive and spiking rate trainings, Equations 10 and 16. We use the RLS algorithm (Haykin, 1996) to learn target functions $f_i(t), i=1,2,...,N$ defined on a time interval [0, T], and the synaptic update occurs at evenly spaced time points, $0 = t_0 < t_1 < \cdots < t_K = T$.

In the following derivation, a superscript k on a variable, $X_i^k$, indicates that X is evaluated at $t_k$, and the subscript i indicates that X pertains to neuron i.

Training synaptic drive

The cost function measures the discrepancy between the target functions fi(t) and the synaptic drive ui(t) for all i=1,...,N at discrete time points t0,...,tK,

$$C[W] = \frac{1}{2}\sum_{k=0}^{K} \| f^k - u^k \|_{L_2}^2 + \frac{\lambda}{2}\,\|W\|_{L_2}^2. \qquad (20)$$

The Recursive Least Squares (RLS) algorithm solves the problem iteratively by finding a solution $W^n$ to Equation 20 at $t_n$ and updating the solution at the next time step $t_{n+1}$. We do not find the entire matrix $W^n$ directly; instead we find each row of $W^n$, that is, the synaptic connections to each neuron i that minimize the discrepancy between $u_i$ and $f_i$, and then combine the rows to obtain $W^n$.

To find the ith row of $W^n$, we denote it by $w_i^n$ and rewrite the cost function for neuron i, which evaluates the discrepancy between $f_i(t)$ and $u_i(t)$ on the time interval $[0, t_n]$,

$$C[w_i^n] = \frac{1}{2}\sum_{k=0}^{n} \left( f_i^k - w_i^n r^k \right)^2 + \frac{\lambda}{2}\,\|w_i^n\|_{L_2}^2. \qquad (21)$$

Calculating the gradient and setting it to 0, we obtain

$$0 = \nabla_{w_i^n} C = -\sum_{k=1}^{n} \left( f_i^k - w_i^n r^k \right) r^k + \lambda\, w_i^n.$$

We express the equation concisely as follows.

$$[R^n + \lambda I]\, w_i^n = q^n, \qquad R^n = \sum_{k=1}^{n} r^k (r^k)^T, \quad q^n = \sum_{k=1}^{n} f_i^k\, r^k. \qquad (22)$$

To find $w_i^n$ iteratively, we rewrite Equation 22 up to $t_{n-1}$,

$$[R^{n-1} + \lambda I]\, w_i^{n-1} = q^{n-1}, \qquad (23)$$

and subtract Equation 23 from Equation 22 to obtain

$$[R^n + \lambda I]\,[w_i^n - w_i^{n-1}] + r^n (r^n)^T w_i^{n-1} = f_i^n\, r^n. \qquad (24)$$

The update rule for $w_i^n$ is then given by

$$w_i^n = w_i^{n-1} + e_i^n\, [R^n + \lambda I]^{-1} r^n, \qquad (25)$$

where the error term is

$$e_i^n = f_i^n - w_i^{n-1} r^n. \qquad (26)$$

The matrix inverse $P^n = [R^n + \lambda I]^{-1}$ can be computed iteratively,

$$P^n = P^{n-1} - \frac{P^{n-1}\, r^n (r^n)^T\, P^{n-1}}{1 + (r^n)^T P^{n-1} r^n}, \qquad P^0 = \lambda^{-1} I,$$

using the matrix identity

$$(A + r r^T)^{-1} = A^{-1} - \frac{A^{-1} r r^T A^{-1}}{1 + r^T A^{-1} r}.$$
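This rank-one identity (the Sherman–Morrison formula) can be spot-checked numerically; a short sketch with arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
A = A @ A.T + n * np.eye(n)           # well-conditioned symmetric matrix
r = rng.normal(size=n)

lhs = np.linalg.inv(A + np.outer(r, r))
Ainv = np.linalg.inv(A)
rhs = Ainv - (Ainv @ np.outer(r, r) @ Ainv) / (1.0 + r @ Ainv @ r)
assert np.allclose(lhs, rhs)          # the identity holds numerically
```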

Training spiking rate

To train the spiking rate of neurons, we approximate the spike train si(t) of neuron i with its spiking rate ϕ(ui(t)+Ii) where ϕ is the current-to-rate transfer function of theta neuron model. For constant input,

$$\phi_1(x) = \pi^{-1}\sqrt{[x]_+} \quad \text{where} \quad [x]_+ = \max(x, 0), \qquad (27)$$

and for noisy input

$$\phi_2(x) = \frac{1}{\pi}\sqrt{c \log\left(1 + e^{x/c}\right)}. \qquad (28)$$

Since ϕ2 is a good approximation of ϕ1 and has a smooth transition around x=0, we used ϕϕ2 with c=0.1 (Brunel and Latham, 2003).

If the synaptic update occurs at discrete time points, t0,...,tK, the objective is to find recurrent connectivity W that minimizes the cost function

$$C[W] = \frac{1}{2}\sum_{k=0}^{K} \| f^k - \phi(W r^k + I) \|_{L_2}^2 + \frac{\lambda}{2}\,\|W\|_{L_2}^2. \qquad (29)$$

As in training the synaptic drive, to train each row of $W^n$ we optimize a cost function that evaluates the discrepancy between the spiking rate of neuron i and the target spiking rate $f_i$ over the time interval $[0, t_n]$,

$$C[w_i^n] = \frac{1}{2}\sum_{k=1}^{n} \left( f_i^k - \phi(w_i^n r^k + I_i^k) \right)^2 + \frac{\lambda}{2}\,|w_i^n|^2. \qquad (30)$$

Calculating the gradient and setting it to zero, we obtain

$$0 = \nabla_{w_i^n} C = -\sum_{k=1}^{n} \left[ f_i^k - \phi(w_i^n r^k + I_i^k) \right] \tilde{r}_i^k + \lambda\, w_i^n, \qquad (31)$$

where

$$\tilde{r}_i^k = \phi'(u_i^k + I_i^k)\, r^k \qquad (32)$$

is the vector of filtered spike trains scaled by the gain of neuron i. Note that when evaluating $\phi'$ in Equation 32, we use the approximation $u_i^k \approx w_i^n r^k$ to avoid introducing nonlinear functions of $w_i^n$.

To find an update rule for $w_i^n$, we rewrite Equation 31 up to $t_{n-1}$,

$$0 = -\sum_{k=1}^{n-1} \left[ f_i^k - \phi(w_i^{n-1} r^k + I_i^k) \right] \tilde{r}_i^k + \lambda\, w_i^{n-1}, \qquad (33)$$

and subtract Equation 33 from Equation 31 to obtain

$$0 = \sum_{k=1}^{n} \left[ \phi(w_i^n r^k + I_i^k) - \phi(w_i^{n-1} r^k + I_i^k) \right] \tilde{r}_i^k - \left[ f_i^n - \phi(w_i^{n-1} r^n + I_i^n) \right] \tilde{r}_i^n + \lambda\,[w_i^n - w_i^{n-1}]. \qquad (34)$$

Since $w_i^{n-1}$ is updated by a small increment, we can approximate the difference of transfer functions in Equation 34,

$$\phi(w_i^n r^k + I_i^k) - \phi(w_i^{n-1} r^k + I_i^k) \approx [w_i^n - w_i^{n-1}]\, \tilde{r}_i^k, \qquad (35)$$

where we use the approximation $u_i^k \approx w_i^n r^k$, as before, to evaluate the derivative $\phi'$. Substituting Equation 35 into Equation 34, we obtain the update rule

$$w_i^n = w_i^{n-1} + e_i^n\,[R^n + \lambda I]^{-1} \tilde{r}_i^n, \qquad (36)$$

where the error is

$$e_i^n = f_i^n - \phi(w_i^{n-1} r^n + I_i^n), \qquad (37)$$

and the correlation matrix of the normalized spiking activity is

$$R^n = \sum_{k=1}^{n} \tilde{r}_i^k (\tilde{r}_i^k)^T. \qquad (38)$$

As shown above, the matrix inverse $P^n = [R^n + \lambda I]^{-1}$ can be computed iteratively,

$$P^n = P^{n-1} - \frac{P^{n-1}\, \tilde{r}_i^n (\tilde{r}_i^n)^T\, P^{n-1}}{1 + (\tilde{r}_i^n)^T P^{n-1}\, \tilde{r}_i^n}, \qquad P^0 = \lambda^{-1} I.$$

Mean field description of the quasi-static dynamics

We say that a network is in a quasi-static state if the synaptic drive to a neuron changes slowly compared to the dynamical time scales of the neurons and synapses. Here, we use a formalism developed by Buice and Chow (2013) to derive Equations 1 and 2, which provide a mean field description of the synaptic and spiking rate dynamics of neurons in the quasi-static state.

First, we recast the single-neuron dynamics (Equation 5) in terms of the empirical distribution of a neuron's phase, $\eta_i(\theta, t) = \delta(\theta_i(t) - \theta)$. Since the number of neurons in the network is conserved, we can write the Klimontovich equation for the phase distribution

$$\partial_t\, \eta_i(\theta, t) + \partial_\theta \left[ F(\theta, u_i + I_i)\, \eta_i(\theta, t) \right] = 0, \qquad (39)$$

where $F(\theta, I) = 1 - \cos\theta + I(1 + \cos\theta)$. The synaptic drive (Equation 6) can be written in the form

$$\tau_s \dot{u}_i(t) = -u_i(t) + 2\sum_{j=1}^{N} W_{ij}\, \eta_j(\pi, t), \qquad (40)$$

since $s_j(t) = \eta_j(\pi, t)\,\dot{\theta}_j|_{\theta_j=\pi}$ and $\dot{\theta}_j|_{\theta_j=\pi} = 2$ for the theta neuron model. Equation 39, together with Equation 40, fully describes the network dynamics.

Next, to obtain a mean field description of the spiking dynamics, we take the ensemble average over initial conditions and ignore the contribution of higher order moments arising from the nonlinear term $u_i\eta_i$. We then obtain the mean field equations

$$\partial_t\, \rho_i(\theta, t) + \partial_\theta \left[ F(\theta, U_i + I_i)\, \rho_i(\theta, t) \right] = 0, \qquad (41)$$
$$\tau_s \dot{U}_i = -U_i + 2\sum_{j=1}^{N} W_{ij}\, \rho_j(\pi, t), \qquad (42)$$

where $\langle u_i \rangle = U_i$ and $\langle \eta_i \rangle = \rho_i$. We note that the mean field Equations 41 and 42 provide a good description of the trained network dynamics because W is learned over repeated training trials, starting at random initial conditions, to minimize the error between the target trajectories and the actual neuron activity.

Now, we assume that the temporal dynamics of synaptic drive and neuron phase can be suppressed in the quasi-static state,

$$\tau_s \dot{U}_i \approx 0, \qquad \partial_t \rho \approx 0. \qquad (43)$$

Substituting Equation 43 into Equation 41, but allowing U(t) to be time-dependent, we obtain the quasi-static solution of the phase density

$$\rho_i(\theta, t) = \frac{\sqrt{[U_i(t) + I_i]_+}}{\pi \left[ 1 - \cos\theta + (U_i(t) + I_i)(1 + \cos\theta) \right]}, \qquad \int_{-\pi}^{\pi} \rho_i(\theta)\, d\theta = 1, \qquad (44)$$
and the spiking rate
$$\phi(U_i(t) + I_i) = 2\rho_i(\pi, t) = \sqrt{[U_i(t) + I_i]_+}\,/\,\pi, \qquad (45)$$

the current-to-rate transfer function of the theta neuron model. Substituting Equations 43 and 45 into Equation 42, we obtain the quasi-static solution of the synaptic drive

$$U_i(t) = \sum_{j=1}^{N} W_{ij}\, \phi(U_j(t) + I_j). \qquad (46)$$

If we define the spiking rate of a neuron as Ri(t)=ϕ(Ui+Ii), we immediately obtain

$$R_i(t) = \phi\!\left( \sum_{j=1}^{N} W_{ij}\, R_j + I_i \right). \qquad (47)$$
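Equation 46 is a fixed-point equation for U given W. Whether simple iteration converges depends on W, but a damped iteration such as the sketch below can be used to compare the quasi-static prediction against simulation (as done for Figure 4i); the damping factor and iteration count are our own illustrative choices.

```python
import numpy as np

def phi(x):
    # theta-neuron current-to-rate transfer function, Equation 45
    return np.sqrt(np.maximum(x, 0.0)) / np.pi

def quasi_static_drive(W, I_ext, n_iter=1000, alpha=0.1):
    """Damped fixed-point iteration of U = W phi(U + I), Equation 46.
    Convergence is not guaranteed for strong coupling; a sketch only."""
    U = np.zeros(W.shape[0])
    for _ in range(n_iter):
        U = (1.0 - alpha) * U + alpha * (W @ phi(U + I_ext))
    return U
```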

Analysis of learning error

In this section, we identify and analyze two types of learning errors, assuming that for sufficiently heterogeneous targets, (1) the learning rule finds a recurrent connectivity W that can generate the target patterns if the quasi-static condition holds, and (2) the mean field description of the spiking network dynamics is accurate, owing to the error function and repeated training trials. These assumptions imply that Equations 46 and 47 hold for the target patterns $U_i(t)$ and the trained W. We show that learning errors arise when these assumptions become inaccurate, so that the network dynamics described by Equations 46 and 47 deviate from the actual spiking network dynamics. As we will see, the tracking error is prevalent if the target is not an exact solution of the mean field dynamics (i.e. the quasi-static approximation fails), and the sampling error dominates if the discrete spikes do not accurately represent the continuous targets (i.e. the mean field approximation fails).

Suppose we are trying to learn a target $\hat{u}_i$ which obeys an Ornstein–Uhlenbeck process

$$\left( \tau_c \frac{d}{dt} + 1 \right) \hat{u}_i = \xi_i(t) \qquad (48)$$

on a time interval 0 < t < T, where the $\xi_i(t)$ are independent white noise processes with zero mean and variance $\sigma^2$. The time constant $\tau_c$ determines the temporal correlation of a target trajectory. For training to be perfect, the target dynamics (Equation 48) need to be compatible with the network dynamics (Equation 6); in other words, there must exist a recurrent connectivity W such that the following equation,

$$\left( \tau_s \frac{d}{dt} + 1 \right) \hat{u}_i(t) = \sum_{j=1}^{N} W_{ij}\, s[\hat{u}_j(t)], \qquad (49)$$

obtained by substituting the solution of Equation 48 into Equation 6, holds for 0 < t < T. Here, $s[\hat{u}_j(t)]$ maps the synaptic drive $\hat{u}_j(t)$ to the spike train $s_j(t)$.

It is very difficult to find a W that solves Equation 49 exactly, since doing so requires fully understanding the solution space of a high-dimensional system of nonlinear ordinary differential equations. Instead, we assume that the target patterns are quasi-static and that the learning rule finds a recurrent connectivity W satisfying

$$\hat{u}_i(t) = \sum_{j=1}^{N} W_{ij}\, \phi(\hat{u}_j(t)). \qquad (50)$$

We then substitute Equation 50 into Equation 49 to estimate how the quasi-static mean field dynamics deviate from the actual spiking network dynamics. A straightforward calculation shows that

$$\hat{u}_i(t) - \sum_{j=1}^{N} W_{ij}\, \phi(\hat{u}_j(t)) + \epsilon_{\mathrm{track}} + \epsilon_{\mathrm{sample}} = 0, \qquad (51)$$

where we define the tracking and sampling errors as

$$\epsilon_{\mathrm{track}} = \tau_s \frac{d\hat{u}_i}{dt} \qquad (52)$$

and

$$\epsilon_{\mathrm{sample}} = \sum_{j=1}^{N} W_{ij} \left( \phi(\hat{u}_j(t)) - s[\hat{u}_j(t)] \right) \qquad (53)$$

on the time interval 0<t<T.

Tracking error

From its definition, $\epsilon_{\mathrm{track}}$ captures the deviation of the quasi-static solution (Equation 50) from the exact solution of the mean field description obtained when $\epsilon_{\mathrm{sample}} = 0$. $\epsilon_{\mathrm{track}}$ becomes large if the quasi-static condition (Equation 43) fails; in such a network state, the synaptic dynamics are unable to 'track' the target patterns, and learning is obstructed. In the following, we estimate $\epsilon_{\mathrm{track}}$ in terms of the two time scales $\tau_s$ and $\tau_c$.

First, we take the Fourier transform of Equation 52 and obtain

$$\mathcal{F}[\epsilon_{\mathrm{track}}](\omega) = i \tau_s \omega\, \mathcal{F}[\hat{u}](\omega). \qquad (54)$$

Next, we normalize $\mathcal{F}[\epsilon_{\mathrm{track}}]$ by $\mathcal{F}[\hat{u}]$ to estimate the tracking error for target patterns with different amplitudes, and then compute the power of the normalized tracking error:

$$\frac{1}{\Omega} \int_0^{\Omega} \left| \frac{\mathcal{F}[\epsilon_{\mathrm{track}}]}{\mathcal{F}[\hat{u}]} \right| d\omega = \frac{1}{2}\tau_s \Omega \,\Big|_{\Omega = \Omega_c} = \frac{1}{4\pi}\frac{\tau_s}{\tau_c}, \qquad (55)$$

where $\Omega_c = 1/(2\pi\tau_c)$ is the cut-off frequency of the power spectrum of the Gaussian process, $S_{GP}(\omega) = \sigma^2\tau_c^2/(1 + 4\pi^2\tau_c^2\omega^2)$. Thus, the tracking error scales as $\tau_s/\tau_c$.

Sampling error

$\epsilon_{\mathrm{sample}}$ captures how the actual representation of the target patterns in terms of spikes deviates from their continuous representation in terms of rate functions. In the following, we estimate $\epsilon_{\mathrm{sample}}$ in terms of $\tau_s$ and N under the assumption that the continuous representation provides an accurate description of the target patterns.

We low-pass filtered $\epsilon_{\mathrm{sample}}$ to estimate the sampling error, since the synaptic drive (i.e. the target variable in this estimate) is a W-weighted sum of filtered spikes whose width scales with $\tau_s$. If the spike trains of neurons are uncorrelated (i.e. cross product terms are negligible),

$$\mathrm{Var}[\epsilon_{\mathrm{sample}}^{\mathrm{filtered}}] = \sum_{j=1}^{N} W_{ij}^2 \left\langle (\bar{r}_j - r_j(t))^2 \right\rangle \qquad (56)$$

where $r_j(t)$ is the filtered spike train and $\bar{r}_j = \langle r_j(t) \rangle = \frac{1}{\Delta t}\int_{t_k}^{t_{k+1}} r_j(s)\, ds$ is the empirical estimate of the mean spiking rate on a short time interval.

First, we calculate the fluctuation of the filtered spike trains under the assumption that a neuron generates spikes sparsely, so that the filtered spikes are non-overlapping. Let $s_j(t) = \sum_k \delta(t - t_j^k)$ be the spike train of neuron j and $r_j(t) = \frac{1}{\tau_s}\sum_k e^{-(t - t_j^k)/\tau_s}\, H(t - t_j^k)$ the filtered spike train, where H is the Heaviside step function. Then, the rate fluctuation of neuron j is

$$\left\langle (r_j(t) - \bar{r}_j)^2 \right\rangle = \left\langle r_j^2(t) \right\rangle - \bar{r}_j^2 \qquad (57)$$
$$= \frac{1}{\tau_s^2} \left\langle \sum_k e^{-2(t - t_j^k)/\tau_s}\, H(t - t_j^k) \right\rangle - \bar{r}_j^2 \qquad (58)$$
$$= \bar{r}_j \left( \frac{1}{2\tau_s} - \bar{r}_j \right) \qquad (59)$$

where k is summed over the average number of spikes, $\bar{r}_j\Delta t$, generated in the time interval of length $\Delta t$.

Next, to estimate the effect of network size on the sampling error, we examine Equation 50 and observe that $O(W) \sim 1/N$. This follows because, for pre-determined target patterns, $O(U), O(\phi(U)) \sim 1$ regardless of the network size, so $O(W)$ must scale as $1/N$ for both sides of the equation to be consistent. If the network is dense, that is, the number of synaptic connections to a neuron is $pN$ on average, then the sampling error scales as follows:

$$O\!\left( \mathrm{Var}[\epsilon_{\mathrm{sample}}^{\mathrm{filtered}}] \right) \sim \sum_{j=1}^{N} O(W_{ij}^2)\, O\!\left( \left\langle (\bar{r}_j - r_j(t))^2 \right\rangle \right) \sim \frac{1}{\tau_s N} \qquad (60)$$
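The variance estimate in Equation 59 can be spot-checked with a sparse spike train filtered through Equation 7; the periodic train below satisfies the non-overlap assumption, and all parameters are illustrative.

```python
import numpy as np

dt, T, tau_s = 0.1, 4.0e4, 20.0   # ms; simulation long enough to average
rate = 0.005                       # spikes/ms (5 Hz), so ISI >> tau_s

# sparse, non-overlapping spikes (a periodic train matches the assumption)
n_steps = int(T / dt)
spikes = np.zeros(n_steps)
spikes[::int(1.0 / (rate * dt))] = 1.0

r = np.zeros(n_steps)
for k in range(1, n_steps):
    # Equation 7: tau_s * dr/dt = -r + s(t); each spike adds 1/tau_s
    r[k] = r[k - 1] - dt * r[k - 1] / tau_s + spikes[k] / tau_s

empirical = r[int(10 * tau_s / dt):].var()
predicted = rate * (1.0 / (2.0 * tau_s) - rate)   # Equation 59
print(empirical, predicted)        # agree up to discretization error
```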

Acknowledgements

This research was supported [in part] by the Intramural Research Program of the NIH, The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK).

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Christopher M Kim, Email: chrismkkim@gmail.com.

Carson C Chow, Email: carsonc@niddk.nih.gov.

Funding Information

This paper was supported by the following grant:

  • National Institute of Diabetes and Digestive and Kidney Diseases Intramural Research Program to Christopher M Kim, Carson C Chow.

Additional information

Competing interests

No competing interests declared.

Author contributions

Christopher M Kim: Data curation, Software, Formal analysis, Investigation, Visualization, Methodology, Writing—original draft, Writing—review and editing.

Carson C Chow: Conceptualization, Supervision, Funding acquisition, Investigation, Methodology, Writing—review and editing.

Additional files

Transparent reporting form
DOI: 10.7554/eLife.37124.014

Example computer code that trains recurrent spiking networks is available at https://github.com/chrismkkim/SpikeLearning (copy archived at https://github.com/elifesciences-publications/SpikeLearning)

References

  1. Abbott LF, DePasquale B, Memmesheimer RM. Building functional networks of spiking model neurons. Nature Neuroscience. 2016;19:350–355. doi: 10.1038/nn.4241.
  2. Beer C, Barak O. Dynamics of dynamics: following the formation of a line attractor. arXiv. 2018. https://arxiv.org/abs/1805.09603
  3. Bertschinger N, Natschläger T. Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation. 2004;16:1413–1436. doi: 10.1162/089976604323057443.
  4. Boerlin M, Machens CK, Denève S. Predictive coding of dynamical variables in balanced spiking networks. PLoS Computational Biology. 2013;9:e1003258. doi: 10.1371/journal.pcbi.1003258.
  5. Bourdoukan R, Denève S. Enforcing balance allows local supervised learning in spiking recurrent networks. Advances in Neural Information Processing Systems 28 (NIPS 2015); 2015. pp. 982–990.
  6. Brendel W, Bourdoukan R, Vertechi P, Machens CK, Denève S. Learning to represent signals spike by spike. arXiv. 2017. doi: 10.1371/journal.pcbi.1007692. https://arxiv.org/abs/1703.03777
  7. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience. 2000;8:183–208. doi: 10.1023/A:1008925309027.
  8. Brunel N, Latham PE. Firing rate of the noisy quadratic integrate-and-fire neuron. Neural Computation. 2003;15:2281–2306. doi: 10.1162/089976603322362365.
  9. Brunel N. Is cortical connectivity optimized for storing information? Nature Neuroscience. 2016;19:749–755. doi: 10.1038/nn.4286.
  10. Buice MA, Chow CC. Dynamic finite size effects in spiking neural networks. PLoS Computational Biology. 2013;9:e1002872. doi: 10.1371/journal.pcbi.1002872.
  11. Buonomano DV, Maass W. State-dependent computations: spatiotemporal processing in cortical networks. Nature Reviews Neuroscience. 2009;10:113–125. doi: 10.1038/nrn2558.
  12. Cash S, Yuste R. Input summation by cultured pyramidal neurons is linear and position-independent. The Journal of Neuroscience. 1998;18:10–15. doi: 10.1523/JNEUROSCI.18-01-00010.1998.
  13. Cash S, Yuste R. Linear summation of excitatory inputs by CA1 pyramidal neurons. Neuron. 1999;22:383–394. doi: 10.1016/S0896-6273(00)81098-3.
  14. Chaisangmongkon W, Swaminathan SK, Freedman DJ, Wang XJ. Computing by robust transience: how the fronto-parietal network performs sequential, category-based decisions. Neuron. 2017;93:1504–1517. doi: 10.1016/j.neuron.2017.03.002.
  15. Churchland MM, Shenoy KV. Temporal complexity and heterogeneity of single-neuron activity in premotor and motor cortex. Journal of Neurophysiology. 2007;97:4235–4257. doi: 10.1152/jn.00095.2007.
  16. Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature. 2012;487:51–56. doi: 10.1038/nature11129.
  17. Curto C, Degeratu A, Itskov V. Encoding binary neural codes in networks of threshold-linear neurons. Neural Computation. 2013;25:2858–2903. doi: 10.1162/NECO_a_00504.
  18. Denève S, Machens CK. Efficient codes and balanced networks. Nature Neuroscience. 2016;19:375–382. doi: 10.1038/nn.4243.
  19. Denève S, Alemi A, Bourdoukan R. The brain as an efficient and robust adaptive learner. Neuron. 2017;94:969–977. doi: 10.1016/j.neuron.2017.05.016.
  20. DePasquale B, Churchland MM, Abbott LF. Using firing-rate dynamics to train recurrent networks of spiking model neurons. arXiv. 2016. https://arxiv.org/abs/1601.07620
  21. DePasquale B, Cueva CJ, Rajan K, Escola GS, Abbott LF. full-FORCE: a target-based method for training recurrent networks. PLoS One. 2018;13:e0191527. doi: 10.1371/journal.pone.0191527.
  22. Ermentrout B. Type I membranes, phase resetting curves, and synchrony. Neural Computation. 1996;8:979–1001. doi: 10.1162/neco.1996.8.5.979.
  23. Fiete IR, Seung HS. Gradient learning in spiking neural networks by dynamic perturbation of conductances. Physical Review Letters. 2006;97:048104. doi: 10.1103/PhysRevLett.97.048104.
  24. Gasparini S, Magee JC. State-dependent dendritic computation in hippocampal CA1 pyramidal neurons. Journal of Neuroscience. 2006;26:2088–2100. doi: 10.1523/JNEUROSCI.4428-05.2006.
  25. Harish O, Hansel D. Asynchronous rate chaos in spiking neuronal circuits. PLoS Computational Biology. 2015;11:e1004266. doi: 10.1371/journal.pcbi.1004266.
  26. Harvey CD, Coen P, Tank DW. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature. 2012;484:62–68. doi: 10.1038/nature10918.
  27. Haykin S. Adaptive Filter Theory. Third edition. Upper Saddle River, NJ: Prentice-Hall; 1996.
  28. Hoerzer GM, Legenstein R, Maass W. Emergence of complex computational structures from chaotic neural networks through reward-modulated Hebbian learning. Cerebral Cortex. 2014;24:677–690. doi: 10.1093/cercor/bhs348.
  29. Izhikevich EM. Simple model of spiking neurons. IEEE Transactions on Neural Networks. 2003;14:1569–1572. doi: 10.1109/TNN.2003.820440.
  30. Jaeger H. Short Term Memory in Echo State Networks. Vol. 5. GMD-German National Research Institute for Computer Science; 2001.
  31. Jaeger H, Haas H. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science. 2004;304:78–80. doi: 10.1126/science.1091277.
  32. Kim C, Chow C. SpikeLearning. GitHub. 2018. https://github.com/chrismkkim/SpikeLearning
  33. Laje R, Buonomano DV. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nature Neuroscience. 2013;16:925–933. doi: 10.1038/nn.3405.
  34. Latham PE, Richmond BJ, Nelson PG, Nirenberg S. Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology. 2000;83:808–827. doi: 10.1152/jn.2000.83.2.808.
  35. Li N, Chen TW, Guo ZV, Gerfen CR, Svoboda K. Extracellular recordings from anterior lateral motor cortex (ALM) neurons of adult mice performing a tactile decision behavior. Collaborative Research in Computational Neuroscience - Data Sharing; 2014. https://crcns.org/data-sets/motor-cortex/alm-1
  36. Li N, Chen TW, Guo ZV, Gerfen CR, Svoboda K. A motor cortex circuit for motor planning and movement. Nature. 2015;519:51–56. doi: 10.1038/nature14178.
  37. London M, Roth A, Beeren L, Häusser M, Latham PE. Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature. 2010;466:123–127. doi: 10.1038/nature09086.
  38. Maass W, Natschläger T, Markram H. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation. 2002;14:2531–2560. doi: 10.1162/089976602760407955.
  39. Maass W, Joshi P, Sontag ED. Computational aspects of feedback in neural circuits. PLoS Computational Biology. 2007;3:e165. doi: 10.1371/journal.pcbi.0020165.
  40. Martí D, Brunel N, Ostojic S. Correlations between synapses in pairs of neurons slow down dynamics in randomly connected neural networks. Physical Review E. 2018;97:062314. doi: 10.1103/PhysRevE.97.062314.
  41. Mastrogiuseppe F, Ostojic S. Linking connectivity, dynamics and computations in recurrent neural networks. arXiv. 2017. doi: 10.1016/j.neuron.2018.07.003. https://arxiv.org/abs/1711.09672
  42. Memmesheimer RM. Quantitative prediction of intermittent high-frequency oscillations in neural networks with supralinear dendritic interactions. PNAS. 2010;107:11092–11097. doi: 10.1073/pnas.0909615107.
  43. Memmesheimer RM, Timme M. Non-additive coupling enables propagation of synchronous spiking activity in purely random networks. PLoS Computational Biology. 2012;8:e1002384. doi: 10.1371/journal.pcbi.1002384.
  44. Miconi T. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks. eLife. 2017;6:e20899. doi: 10.7554/eLife.20899.
  45. Monteforte M, Wolf F. Dynamic flux tubes form reservoirs of stability in neuronal circuits. Physical Review X. 2012;2:041007. doi: 10.1103/PhysRevX.2.041007.
  46. Nevian T, Larkum ME, Polsky A, Schiller J. Properties of basal dendrites of layer 5 pyramidal neurons: a direct patch-clamp recording study. Nature Neuroscience. 2007;10:206–214. doi: 10.1038/nn1826.
  47. Nicola W, Clopath C. Supervised learning in spiking neural networks with FORCE training. Nature Communications. 2017;8:2208. doi: 10.1038/s41467-017-01827-3.
  48. Ostojic S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature Neuroscience. 2014;17:594–600. doi: 10.1038/nn.3658.
  49. Pastalkova E, Itskov V, Amarasingham A, Buzsáki G. Internally generated cell assembly sequences in the rat hippocampus. Science. 2008;321:1322–1327. doi: 10.1126/science.1159775.
  50. Rajan K, Harvey CD, Tank DW. Recurrent network models of sequence generation and memory. Neuron. 2016;90:128–142. doi: 10.1016/j.neuron.2016.02.009.
  51. Remington ED, Narain D, Hosseini EA, Jazayeri M. Flexible sensorimotor computations through rapid reconfiguration of cortical dynamics. Neuron. 2018;98:1005–1019. doi: 10.1016/j.neuron.2018.05.020.
  52. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD. The asynchronous state in cortical circuits. Science. 2010;327:587–590. doi: 10.1126/science.1179850.
  53. Rivkind A, Barak O. Local dynamics in trained recurrent neural networks. Physical Review Letters. 2017;118:258101. doi: 10.1103/PhysRevLett.118.258101.
  54. Rosenbaum R, Smith MA, Kohn A, Rubin JE, Doiron B. The spatial structure of correlated neuronal variability. Nature Neuroscience. 2017;20:107–114. doi: 10.1038/nn.4433.
  55. Sompolinsky H, Crisanti A, Sommers HJ. Chaos in random neural networks. Physical Review Letters. 1988;61:259–262. doi: 10.1103/PhysRevLett.61.259.
  56. Sussillo D, Abbott LF. Generating coherent patterns of activity from chaotic neural networks. Neuron. 2009;63:544–557. doi: 10.1016/j.neuron.2009.07.018.
  57. Sussillo D, Barak O. Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation. 2013;25:626–649. doi: 10.1162/NECO_a_00409.
  58. Sussillo D, Churchland MM, Kaufman MT, Shenoy KV. A neural network that finds a naturalistic solution for the production of muscle activity. Nature Neuroscience. 2015;18:1025–1033. doi: 10.1038/nn.4042.
  59. Thalmeier D, Uhlmann M, Kappen HJ, Memmesheimer RM. Learning universal computations with spikes. PLoS Computational Biology. 2016;12:e1004895. doi: 10.1371/journal.pcbi.1004895.
  60. van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274:1724–1726. doi: 10.1126/science.274.5293.1724.
  61. Wang XJ, Buzsáki G. Gamma oscillation by synaptic inhibition in a hippocampal interneuronal network model. The Journal of Neuroscience. 1996;16:6402–6413. doi: 10.1523/JNEUROSCI.16-20-06402.1996.
  62. Wang J, Narain D, Hosseini EA, Jazayeri M. Flexible timing by temporal scaling of cortical responses. Nature Neuroscience. 2018;21:102–110. doi: 10.1038/s41593-017-0028-6.
  63. White OL, Lee DD, Sompolinsky H. Short-term memory in orthogonal neural networks. Physical Review Letters. 2004;92:148102. doi: 10.1103/PhysRevLett.92.148102.

Decision letter


In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your article "Learning recurrent dynamics in spiking networks" for consideration by eLife. Your article has been reviewed by Timothy Behrens as the Senior Editor and two reviewers, one of whom is the Reviewing Editor. One of the reviewers, Brian DePasquale, has agreed to reveal his identity.

The Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

The focus of most (maybe all?) research on RNNs has been on output: how do you train recurrent networks to produce a low dimensional (compared to the dimensionality of the network) target function? This paper asks a different question: how do you train recurrent networks so that neurons within the network produce their own target function? Here the target can be either firing rate or synaptic drive. What's interesting about this paper is that the authors give the conditions under which training is possible. These rules provide insight into the limits of performance of spiking RNNs, something that has, I believe, been lacking in the field.

Essential revisions:

Reviewers A and B's full reviews, along with the subsequent discussion are included below. You should focus on making A happy (which is pretty easy, since he/she had only minor comments). However, when you revise the paper, you probably should keep B's review in mind – his/her interpretation of your paper may be shared by others.

Reviewer A comments:

The focus of most (maybe all?) research on RNNs has been on output: how do you train recurrent networks to produce a low dimensional (compared to the dimensionality of the network) target function? This paper asks a different question: how do you train recurrent networks so that neurons within the network produce their own target function? Here the target can be either firing rate or synaptic drive.

What's interesting about this paper is that the authors give the conditions under which training is possible:

- The target functions have to be sufficiently diverse.

- A sufficiently large number of neurons have to be associated with targets.

- The synaptic time constant can't be too long compared to the timescale of the target functions.

- The synaptic time constant can't be too short compared to 1/N (N is the number of neurons in the network).

- The training period can't be too long compared to N.

These rules provide insight into the limits of performance of spiking RNNs, something that has, I believe, been lacking in the field. [Minor comments not shown.]

Reviewer B comments:

The authors present an interesting study examining the capability of recurrent networks of QIF neurons for learning various types of signals. They apply a RLS learning rule so that the network produces various periodic functions, filtered noise, the activity of chaotic networks, and experimental data. They present evidence to support their contention that two key conditions need be met for learning: (1) including a sufficiently rich set of target functions; (2) appropriately matched synaptic time-scale relative to the target dynamics. Additionally, a mean field analysis was performed and shown to be consistent with dynamics after learning.

While a handful of curious findings are presented, for the following two reasons I do not believe the work, as presented, is suitable for publication in this journal.

First, the examples, learning conditions and supporting experiments do not constitute a significant advance over previous work. While I cannot refer to prior work that has studied the ability to produce the specific signals presented, the authors do not properly motivate the importance of these particular signals. Prior publications that train RNNs (spiking too) typically focus on constructing networks that can perform tasks, where the motivation for the signals they construct is obvious: they use signals necessary for performing computations. For three of the examples (rate chaos, OU noise and spiking chaos) it's not clear what computation these networks might perform after learning (in fact, the primary motivation for stabilizing rate chaos, e.g. in FORCE learning, was so that the network COULD produce a computation after learning). In the case of the work of Laje and Buonomano (which the authors claim to extend to spiking networks), they sought to stabilize chaotic rate trajectories as a model for understanding timing tasks; here the authors do not present any use for these dynamical patterns, just that they can be learned.

Second, the presented work is simultaneously too technical (and therefore too specific) and too vague to be of significant interest to the general neuroscience community. The mean field analysis is interesting and stands out when compared to other published works, which have not extensively studied how analytic solutions compare to simulations in trained networks. Nevertheless, this is a minor, technical point, not likely to be interesting to most neuroscientists (unless the authors can explain why this is interesting).

While the question of universal computation is of general interest, the presented results are not sufficiently rigorous to match the claims. While they claim that previous work has been impressive but not sufficient to showcase the full richness of what can be expressed by spiking RNNs, the authors claim to learn "arbitrary" and "near universal" dynamics but only show examples (as previous work has) from a rather small set of potential dynamics. If this paper were to be published, the authors would need to clarify what they mean by this to accurately reflect the presented results or include a proof of the universality they are claiming to show. Another question of general interest is what constitutes a "sufficient basis" for learning but here again the authors only provide a few examples that are not shown to extend beyond the direct application studied. This approach is inferior to previous methods, since no prescription is offered for establishing sufficiency other than generating additional random signals; in previous work where specific computations were examined, the only signals required were those necessary for performing the computation and these could be "bootstrapped" by exploiting knowledge about the train-ability of firing rate networks. The authors offer a bold claim (identifying the sufficiency of a rich basis) with pretty weak support (just keep adding random signals).

I conclude by highlighting the most interesting and promising result: training with neural data. That the training required auxiliary dynamics is interesting and provides a prediction of a set of dynamical patterns that should exist in the brain in order for the network to perform the task it did. Expounding on these results would yield a very interesting finding, albeit beyond the scope of a simple revision to the present work.

A's response to B's comments (during discussion phase):

My main feeling is that they are looking at a very different problem than anybody else: they're training every single neuron (or a large fraction of neurons) to produce target functions, with different target functions (but similar timescales) for each neuron. This is, in my opinion, completely unrealistic – it's much more realistic to train networks to produce relatively low dimensional output patterns. But what I thought was interesting was that the paper gave bounds on performance in general in terms of timescales of the network and the target output of each neuron. If you can't produce target functions with a particular timescale when you can train each neuron individually, I would think that you certainly can't train the network as a whole to produce target functions with that timescale. Which is a point they might have emphasized more, but that would be easy to fix. But maybe I'm interested simply because it was something I always wondered about.

With this view, your first objection, which is that they didn't consider interesting functions, becomes, I think, moot. Your second objection is that the work is too technical and too vague. I agree the mean field analysis is very technical, and will be followed by only a handful of people. But it doesn't seem to be an integral part of the paper. And I think the rest is pretty easy to follow. Not sure what you meant by too vague – could you expand?

You also say that their results are not sufficiently rigorous. However, if they can make the arguments rigorous, would you be satisfied, at least about the rigorousness (but not necessarily about the first two points)?

B's response to A:

I disagree with you that they are looking at a different problem than anybody else. However, none of this work addresses the conditions for learning; I agree with you there.

I agree with you that understanding the limits of learning in RNNs is interesting and lacking, but to me, their results related to that point only account for a fraction of the full paper. As written, that point is embedded within a range of other findings that are unmotivated (e.g. the various other signals they train networks to perform that do not support their results about the limits of learning), incomplete (using data to train networks, as I wrote in my review) and that do not ultimately support the point about learnability. However, if they were to remove these other results, the point on learnability alone would not constitute a full paper.

In my view, they introduce two main conditions for learning: that the synaptic timescale should roughly match the targets and that the targets should be "sufficiently rich". The second condition highlights my concerns about the vagueness and rigor of the results. They illustrate the need for sufficient richness by removing targets from some of the training sets they used and show that these networks cannot learn (Figure 3). While this is anecdotally interesting, I do not believe that illustrating that this is true in a handful of cases is sufficient to say anything general about the conditions necessary for learning. At best, this is good enough to make some pretty hand-waving arguments about the need for an overcomplete basis, which is precisely what they do in Section 2. Again, it's not that I don't find this interesting, I just don't see much substance there.

Their first condition for learning is definitely much more rigorously addressed (the simulations and calculations of Figure 6 are nice), but I find this neither surprising nor groundbreaking (you can't learn slow signals without slow synapses, or fast signals without fast synapses...). I think this point could be stated much more simply without the need for the various training targets they used (chaotic rate models and stochastic noise), which again strengthens my resolve that much of what they do feels unmotivated. Even though Figure 6 is nicely done, one figure is not enough for a paper.

Finally, I think the tasks they are learning are perfectly "interesting", just unmotivated.


Author response


We would like to thank the reviewers for their valuable comments. We have made the code for learning recurrent spiking dynamics available on-line at https://github.com/chrismkkim/SpikeLearning

We will use following summary as reference points when responding to the reviewers’ comments:

S1) As Reviewer A pointed out, most research on reservoir computing in RNNs has focused on training the network output to perform a specific computation [Sussillo and Abbott, 2009; Laje and Buonomano, 2013; Rajan, Harvey, and Tank, 2016; DePasquale et al., 2018]. The implicit idea is that the RNN’s job is to provide a reservoir of trajectories that can be combined linearly with trainable weights to produce a desired output. In contrast, we ask the question of whether the output neurons are necessary at all. What if the RNN could be trained such that the output and reservoir were one and the same? And can this be done in a spiking network? None of these questions had answers before our work.

S2) In order for the reservoir computing scheme to work, the RNN needs to be rich enough and repeatable, which is nontrivial to attain: [Sussillo and Abbott, 2009] achieves this by stabilizing an inherently chaotic network with feedback or training, [Laje and Buonomano, 2013] trains the weights of the RNN to stabilize specific chaotic trajectories, [Rajan, Harvey, and Tank, 2016] uses sequential activity derived from experimental data, and [DePasquale et al., 2018] uses network activity extracted from a “target network”.

S3) We identified two conditions under which learning is successful for any arbitrary signal: (1) the synaptic time scales must be fast enough and the network size must be large enough, and (2) the set of target functions must be sufficiently heterogeneous.
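To make (S1) concrete, the following minimal sketch trains the recurrent weights themselves with a FORCE-style recursive least squares update, so that each unit learns its own target and no readout layer is involved. It is an illustration only, written for a rate network; the network size, time constant, and sinusoidal targets are hypothetical choices of ours, and the paper’s actual spiking implementation is in the repository linked above.

    import numpy as np

    # Minimal sketch: train the recurrent weights themselves with
    # recursive least squares (RLS) so that each unit reproduces its
    # own target function; no separate readout layer is involved.
    # Rate network and all parameter values are illustrative assumptions.
    N, T, dt, tau = 200, 3000, 1e-3, 10e-3
    rng = np.random.default_rng(0)

    W = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))   # initial random weights
    times = np.arange(T) * dt
    freqs = rng.uniform(1.0, 5.0, (N, 1))           # one target per unit
    targets = np.sin(2 * np.pi * freqs * times[None, :])

    x = rng.normal(0.0, 0.5, N)                     # synaptic drive of each unit
    P = np.eye(N)                                   # inverse correlation estimate
    for t in range(T):
        r = np.tanh(x)                              # firing rates
        x += dt / tau * (-x + W @ r)                # recurrent dynamics
        if t % 2 == 0:                              # apply RLS every other step
            k = P @ r
            c = 1.0 / (1.0 + r @ k)
            P -= c * np.outer(k, k)                 # rank-one update of P
            err = x - targets[:, t]                 # per-unit learning error
            W -= c * np.outer(err, k)               # same rank-one direction for all rows

Because every row of W regresses onto the same rate vector r(t), a single inverse correlation matrix P can be shared across all N units, which keeps the cost of updating the full recurrent connectivity manageable.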

Response to essential revisions:

Reviewer B commented that the set of target functions a network learns (e.g. OU noise) was not motivated for performing computations. We respectfully disagree. As stated in (S1), our goal was to show that an RNN could produce arbitrary patterns such that there would be no need to distinguish between the RNN and the output neurons. The question we address is specifically the computational capability of an RNN in terms of the patterns it can produce. As stated in (S2), previous studies considered only specific forms of target functions for the RNN, hence it remained unknown whether a spiking network can learn “arbitrary” functions. Laje and Buonomano even stated (see their Online Methods) that “choosing innate trajectories from another network did not produce effective training”, suggesting that recurrent networks may not be able to learn arbitrary functions. Our examples show that this is not the case and that individual neurons may be able to perform universal computations. The target functions considered in Figure 3 (periodic functions, rate chaos, OU noise) provide a few examples of random functions that a network can learn. We felt that this provided a generic selection of arbitrary functions, and we also showed that the network is able to reproduce firing patterns of an animal performing a task. We revised the introduction of the original manuscript and added additional explanation in the first paragraph of the Discussion to make it clear that our goal was to investigate the computational capability of RNNs to encode recurrent spiking dynamics.
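For concreteness, OU-noise targets of the sort used in Figure 3 are simple to generate. A minimal sketch follows (our illustration; the parameter values are assumptions, not taken from the paper):

    import numpy as np

    # Generate N independent Ornstein-Uhlenbeck target trajectories,
    # one per neuron; tau_c sets the target time scale.
    # Parameter values are illustrative, not taken from the paper.
    N, T, dt, tau_c = 200, 5000, 1e-3, 100e-3
    rng = np.random.default_rng(1)

    f = np.zeros((N, T))
    for t in range(1, T):
        # Euler-Maruyama step of dx = -(x / tau_c) dt + sqrt(2 / tau_c) dW
        f[:, t] = f[:, t - 1] * (1.0 - dt / tau_c) \
                  + np.sqrt(2.0 * dt / tau_c) * rng.standard_normal(N)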

We also wish to correct Reviewer B’s interpretation of the first condition for learning (small sampling and tracking errors), since his statement “you can’t learn slow signals without slow synapses, or fast signals without fast synapses” is not correct, and we apologize for not making this point clearer in our original text. It is possible to learn slow signals without slow synapses. As stated in (S3), the first condition requires that the synapses be fast enough (i.e. small tracking error, e_track ~ τ_s/τ_c) and the network size be large enough (i.e. small sampling error, e_sample ~ 1/√(τ_s N)); when both hold, the network can learn any signals, in particular slow ones. As long as the synapse is faster than the target (i.e. their time scales need not be comparable), and there are enough neurons in the network to produce continuous target signals, forming appropriate recurrent connections can compensate for the short synaptic time scale and allow the network to generate slow signals. This is the main conclusion of the error analysis and is stated immediately after Equation 3. Our simulation results also support the error analysis: in Figure 6A, the parameter regime above the diagonal line (i.e. slow signal and fast synapses, τ_c > τ_s) learns successfully, and similarly learning is successful if τ_s/τ_c < 1 in Figure 6E. Such slow dynamics, consistent with our results, have been studied in random rate networks [Sompolinsky, Crisanti and Sommers, 1988], random spiking networks [Ostojic, 2014] and random rate networks in which reciprocal connections are over-represented [Martí, Brunel and Ostojic, 2018]. The example shown in Figure 6B also demonstrates that networks with fast synapses (τ_s = 30 ms) can learn slow signals (τ_c = 100 ms). We pointed out these facts in the third and fourth paragraphs of the Results section. What may have led to the confusion is that for a fixed network size, synapses cannot be too fast or too slow, as there is a trade-off between tracking and sampling errors (see the inverted U-shaped curves in Figure 6C,D). However, as we also show, the sampling error can be overcome with more neurons, so synapses can be arbitrarily fast as long as the network can be arbitrarily large (Figure 6D).
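Both error terms can be checked directly in a few lines of simulation. The sketch below is our own illustration, not the paper’s code; the filter model, firing rate, and all parameter values are assumptions. It shows that the error of a leaky filter tracking a slow sinusoid grows with τ_s/τ_c, while the residual fluctuation of an average of N filtered Poisson spike trains shrinks roughly as 1/√(τ_s N), so fast synapses can be compensated by a larger network:

    import numpy as np

    # Numerically illustrate the two error scalings discussed above.
    # All values are hypothetical; this is not the paper's code.
    dt, T = 1e-4, 50000
    times = np.arange(T) * dt
    rng = np.random.default_rng(2)

    def tracking_error(tau_s, tau_c=1.0):
        """RMS error of a leaky filter following a slow sinusoid."""
        f = np.sin(2 * np.pi * times / tau_c)
        u = np.zeros(T)
        for i in range(1, T):
            u[i] = u[i - 1] + dt / tau_s * (f[i - 1] - u[i - 1])
        return np.sqrt(np.mean((u - f) ** 2))       # grows with tau_s / tau_c

    def sampling_error(tau_s, N, rate=20.0):
        """Relative fluctuation of the average of N filtered spike trains."""
        s = np.zeros(N)
        drive = np.zeros(T)
        for i in range(T):
            spikes = rng.random(N) < rate * dt       # Poisson spikes this step
            s += -s * dt / tau_s + spikes / tau_s    # synaptic filtering
            drive[i] = s.mean()
        burn = T // 5                                # discard the transient
        return np.std(drive[burn:]) / np.mean(drive[burn:])  # ~ 1/sqrt(tau_s*N)

    for tau_s in (5e-3, 20e-3, 80e-3):
        print(tau_s, tracking_error(tau_s), sampling_error(tau_s, N=100))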

The second condition for learning is that the set of target functions needs to be sufficiently heterogeneous. Reviewer B criticized our definition of the second condition as vague and lacking rigor, since we only provided anecdotal examples in Figure 3. However, we actually discussed the mathematical basis for the second condition in the subsection “Learning capacity increases with network size” of the original manuscript. We now realize that our original treatment was not convincing and have strived to improve it so that it better articulates our argument. Here we present the main ideas of the mathematical argument (see the revised manuscript for details).

The network dynamics under the quasi-static condition can be expressed in matrix form as U = WV, where U and V = φ(U + I) are N × P matrices. Here, the ith rows of U and V are the time-discretized trajectories of neuron i’s synaptic drive and spiking rate, respectively. The question of successful learning is reduced to analyzing the solution set of W for this system of equations. In principle, learning is possible when the rows of U are spanned by the rows of V, that is, when the target functions can be self-consistently generated by the firing rate patterns induced by the targets. We define target functions to be sufficiently heterogeneous if the firing rate matrix V is full rank, because under this condition the image of V encompasses the largest possible space.

We show that rank(V) being maximal is sufficient for an exact or approximate solution W to exist. When N ≥ P, the maximal rank(V) implies that the rows of V span the entire Euclidean space R^P, of which the target functions (i.e. rows of U) are elements. Equivalently, we can say that the number of unknowns (N) exceeds or equals the number of independent equations (P). Therefore, multiple perfect solutions are possible, and the regularization term is required for the learning scheme to converge to one solution. When N < P, the maximal rank(V) implies that the linearly independent rows of V span an N-dimensional subspace of R^P. The number of unknowns (N) is less than the number of equations (P), so perfect learning is not possible and we can only find an approximate regression solution W = UV^T(VV^T)^{-1}, where the inverse of VV^T exists because rank(V) is maximal. If rank(V) is not maximal, a solution can still exist if the rows of U are close to the subspace spanned by the rows of V. However, in this case success depends on the specific choice of target functions: the dimension of the subspace spanned by V is strictly less than P, so whether the rows of U are contained in or close to this subspace is determined by the geometry of the subspace. This shows why increasing pattern heterogeneity, which makes the rows of V independent and thus the rank higher, is beneficial for learning. Conversely, if a large number of neurons are trained on the same target, it becomes increasingly difficult to develop the target patterns with a limited set of basis functions. Given that this mathematical explanation is easily missed, we discuss the underlying mathematics in detail after introducing the second condition for learning in the fourth paragraph of the Results section in the revised manuscript.
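The rank argument can be replicated numerically. The following sketch is our illustration with toy dimensions and a tanh nonlinearity (both assumptions): with heterogeneous targets, V is full rank and the regression solution W = UV^T(VV^T)^{-1} exists, whereas duplicating a single target across all neurons collapses the rank of V:

    import numpy as np

    # Toy check of the rank argument: discretize N targets into the
    # N x P matrix U, form V = phi(U + I), and solve U = WV by regression.
    rng = np.random.default_rng(3)
    N, P = 100, 400                       # N < P: only approximate solutions
    U = rng.standard_normal((N, P))       # heterogeneous target functions
    I_ext = rng.standard_normal((N, 1))   # static external input
    V = np.tanh(U + I_ext)                # quasi-static firing rates

    # regression solution W = U V^T (V V^T)^{-1}; the inverse exists
    # because rank(V) is maximal (equal to N here)
    W = U @ V.T @ np.linalg.inv(V @ V.T)
    print(np.linalg.matrix_rank(V))                        # -> N (full rank)
    print(np.linalg.norm(U - W @ V) / np.linalg.norm(U))   # relative error of
                                                           # the N < P solution

    # identical targets collapse the rank, so the rows of V no longer
    # span a space large enough to contain arbitrary targets
    U_same = np.tile(U[:1], (N, 1))
    print(np.linalg.matrix_rank(np.tanh(U_same)))          # -> 1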

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Citations

    1. Li N, Chen T-W, Guo ZV, Gerfen CR, Svoboda K. 2014. Extracellular recordings from anterior lateral motor cortex (alm) neurons of adult mice performing a tactile decision behavior. Collaborative Research in Computational Neuroscience - Data Sharing. https://crcns.org/data-sets/motor-cortex/alm-1

    Supplementary Materials

    Transparent reporting form
    DOI: 10.7554/eLife.37124.014
