eLife. 2021 Mar 24;10:e53151. doi: 10.7554/eLife.53151

Network dynamics underlying OFF responses in the auditory cortex

Giulio Bondanelli 1,2, Thomas Deneux 3, Brice Bathellier 3,4, Srdjan Ostojic 1
Editors: Peter Latham 5, Barbara G Shinn-Cunningham 6
PMCID: PMC8057817  PMID: 33759763

Abstract

Across sensory systems, complex spatio-temporal patterns of neural activity arise following the onset (ON) and offset (OFF) of stimuli. While ON responses have been widely studied, the mechanisms generating OFF responses in cortical areas have so far not been fully elucidated. We examine here the hypothesis that OFF responses are single-cell signatures of recurrent interactions at the network level. To test this hypothesis, we performed population analyses of two-photon calcium recordings in the auditory cortex of awake mice listening to auditory stimuli, and compared them to linear single-cell and network models. While the single-cell model explained some prominent features of the data, it could not capture the structure across stimuli and trials. In contrast, the network model accounted for the low-dimensional organization of population responses and their global structure across stimuli, where distinct stimuli activated mostly orthogonal dimensions in the neural state-space.

Research organism: Mouse

Introduction

Neural responses within the sensory cortices are inherently transient. In the auditory cortex (AC), even the simplest stimulus, for instance a pure tone, evokes neural responses that strongly vary in time following the onset and offset of the stimulus. A number of past studies have reported a prevalence of ON- compared to OFF-responsive neurons in different auditory areas (Phillips et al., 2002; Luo et al., 2008; Fu et al., 2010; Pollak and Bodenhamer, 1981). As a result, the transient onset component has long been considered the dominant feature of auditory responses and has been extensively studied across the auditory pathway (Liu et al., 2019b; Kuwada and Batra, 1999; Grothe et al., 1992; Guo and Burkard, 2002; Yu et al., 2004; Heil, 1997a; Heil, 1997b), with respect to both its neurophysiological basis and its perceptual meaning (Phillips et al., 2002). In contrast, because OFF-responsive neurons are observed less often in anaesthetized animals, cortical OFF responses have received comparatively less attention. Yet, OFF responses have been observed in awake animals throughout the auditory pathway, and in the mouse AC they arise in 30–70% of the responsive neurons (Scholl et al., 2010; Keller et al., 2018; Joachimsthaler et al., 2014; Liu et al., 2019a; Sollini et al., 2018).

While the generation of ON responses has been attributed to neural mechanisms based on short-term adaptation, most likely inherited from the auditory nerve fibers (Phillips et al., 2002; Smith and Brachman, 1980; Smith and Brachman, 1982), the mechanisms that generate OFF responses are more diverse and appear to be area-specific (Xu et al., 2014; Kopp-Scheinpflug et al., 2018). In subcortical regions, neurons in the dorsal cochlear nucleus and in the superior paraolivary nucleus of the brainstem may generate OFF responses by post-inhibitory rebound, a synaptic mechanism in which a neuron emits one or more spikes following the cessation of a prolonged hyperpolarizing current (Suga, 1964; Hancock and Voigt, 1999; Kopp-Scheinpflug et al., 2011). In the midbrain inferior colliculus and in the medial geniculate body of the thalamus, OFF responses appear to be mediated by excitatory inputs from upstream areas, potentially boosted by a post-inhibitory facilitation mechanism (Kasai et al., 2012; Vater et al., 1992; Yu et al., 2004; He, 2003). Unlike in subcortical areas, OFF responses in AC do not appear to be driven by hyperpolarizing inputs during the presentation of the stimulus, since synaptic inhibition has been found to be only transient with respect to the stimulus duration (Qin et al., 2007; Scholl et al., 2010). The precise cellular or network mechanisms underlying transient OFF responses in cortical areas therefore remain to be fully elucidated.

Previous studies investigating transient responses in the auditory system mostly adopted a single-neuron perspective (Henry, 1985; Scholl et al., 2010; Qin et al., 2007; He, 2002; Wang et al., 2005; Wang, 2007). In recent years, however, population approaches to neural data have proven valuable for understanding the role of transient dynamics in various cortical areas (Buonomano and Maass, 2009; Remington et al., 2018; Saxena and Cunningham, 2019). Work in the olfactory system has shown that ON and OFF responses encode the stimulus identity in the dynamical patterns of activity across the neural population (Mazor and Laurent, 2005; Stopfer et al., 2003; Broome et al., 2006; Friedrich and Laurent, 2001; Saha et al., 2017). In motor and premotor cortex, transient responses during movement execution form complex population trajectories (Churchland and Shenoy, 2007; Churchland et al., 2010) that have been hypothesized to be generated by a mechanism based on recurrent network dynamics (Shenoy et al., 2011; Hennequin et al., 2014; Sussillo et al., 2015; Stroud et al., 2018). In the AC, previous work has suggested that dynamics across large neural populations play a central role in the coding of different auditory features (Deneux et al., 2016; Saha et al., 2017; Lim et al., 2016), yet how these dynamics are generated remains an open question.

Leveraging the observation that the AC constitutes a network of recurrently connected neurons (Linden and Schreiner, 2003; Oswald and Reyes, 2008; Oswald et al., 2009; Barbour and Callaway, 2008; Bizley et al., 2015), in this study we test the hypothesis that transient OFF responses are generated by a recurrent network mechanism broadly analogous to the one proposed for motor cortex (Churchland et al., 2006; Hennequin et al., 2014). We first analyzed OFF responses evoked by multiple auditory stimuli in large neural populations recorded using calcium imaging in the mouse AC (Deneux et al., 2016). These analyses identified three prominent features of the auditory cortical data: (i) OFF responses correspond to transiently amplified trajectories of population activity; (ii) for each stimulus, the corresponding trajectory explores a low-dimensional subspace; and (iii) responses to different stimuli lie mostly in orthogonal subspaces. We then determined to what extent these features can be accounted for by a linear single-cell or network model. We show that the single-cell model can reproduce the first two features of population dynamics in response to individual stimuli, but cannot capture the structure across stimuli and single trials. In contrast, the network model accounts for all three features. Identifying the mechanisms responsible for these features led to additional predictions that we verified on the data.

Results

ON/OFF responses in AC reflect transiently amplified population dynamics

We analyzed the population responses of 2343 cells from the AC of three awake mice recorded using calcium imaging (data from Deneux et al., 2016). The neurons were recorded while the mice passively listened to randomized presentations of different auditory stimuli. In this study we consider a total of 16 stimuli, consisting of two groups of intensity-modulated UP- or DOWN-ramping sounds. Within each group, stimuli differed in their frequency content (either 8 kHz pure tones or white noise [WN] sounds), their duration (1 or 2 s), and their intensity range (50–85 dB or 60–85 dB for UP-ramps, and the reversed ranges for DOWN-ramps; see Table 1 and Materials and methods, Section 'The data set').

Table 1. Stimuli set.

Stimulus   Direction   Frequency   Duration (s)   Modulation (dB)
1          UP          8 kHz       1              50–85
2          UP          8 kHz       1              60–85
3          UP          8 kHz       2              50–85
4          UP          8 kHz       2              60–85
5          UP          WN          1              50–85
6          UP          WN          1              60–85
7          UP          WN          2              50–85
8          UP          WN          2              60–85
9          DOWN        8 kHz       1              85–50
10         DOWN        8 kHz       1              85–60
11         DOWN        8 kHz       2              85–50
12         DOWN        8 kHz       2              85–60
13         DOWN        WN          1              85–50
14         DOWN        WN          1              85–60
15         DOWN        WN          2              85–50
16         DOWN        WN          2              85–60

We first illustrate the responses of single cells to the presentation of different auditory stimuli, focusing on the periods following the onset and offset of the stimulus. The responses of individual neurons to different stimuli were highly heterogeneous. In response to a single stimulus, we found individual neurons that were strongly active only during the onset of the stimulus (ON responses) or only during the offset (OFF responses), while other neurons in the population responded to both stimulus onset and offset, consistent with previous analyses (Deneux et al., 2016). Importantly, across stimuli, some neurons in the population exhibited ON and/or OFF responses only when specific stimuli were presented, showing stimulus selectivity of the transient responses, while others responded strongly at the onset and/or offset of multiple stimuli (Figure 1A).

Figure 1. Strong transient ON and OFF responses in auditory cortex of mice passively listening to different sounds.

(A) Top: deconvolved calcium signals averaged over 20 trials showing the activity (estimated firing rate) of 20 out of 2343 neurons in response to an 8 kHz 1 s UP-ramp with intensity range 60–85 dB. We selected neurons with high signal-to-noise ratios (ratio between the peak activity during ON/OFF responses and the standard deviation of the activity before stimulus presentation). Neurons were ordered according to the difference between peak activity during ON and OFF response epochs. Bottom: activity of the same neurons as in the top panel in response to a white noise (WN) sound with the same duration and intensity profile. In all panels dashed lines indicate onset and offset of the stimulus, and green solid lines show the temporal region where OFF responses were analyzed (from 50 ms before stimulus offset to 300 ms after stimulus offset). (B) Left: cartoon showing the OFF response to one stimulus as a neural trajectory in the state space, where each coordinate represents the firing rate of one neuron (with respect to the baseline B). The length of the dashed line represents the distance between the population activity vector and its baseline firing rate, that is, ||𝐫(t)||. Right: the red trace shows the distance from baseline ||𝐫(t)|| computed for the population response to the 8 kHz sound in A. The gray trace shows the distance from baseline averaged over the 8 kHz sounds of 1 s duration (four stimuli). The gray shading represents ±1 standard deviation. The dashed horizontal line shows the average level of the distance ||𝐫(t)|| before stimulus presentation (even though baseline-subtracted responses were used, a nonzero norm is expected because of noise in the spontaneous activity before stimulus onset). (C) Accuracy of classification between an 8 kHz and a WN UP-ramping sound over time, based on single trials (20 trials). The decoder is computed at each time step (spaced by ∼50 ms) and accuracy is computed using leave-one-out cross-validation. Orange trace: average classification accuracy over the cross-validation folds. Orange shaded area corresponds to ±1 standard error. The same process is repeated after shuffling stimulus labels across trials at each time step (chance level). Chance level is represented by the gray trace and shading, corresponding to its average and ±1 standard deviation computed over time. The red markers on top indicate the time points where the average classification accuracy is lower than the average accuracy during the ON transient response (P<0.01, two-tailed t-test).

Because of the intrinsic heterogeneity of single-cell responses, we examined the structure of the transient ON and OFF responses to different stimuli using a population approach (Buonomano and Maass, 2009; Saxena and Cunningham, 2019). The temporal dynamics of the collective response of all the neurons in the population can be represented as a sequence of states in a high-dimensional state space, in which the i-th coordinate corresponds to the (baseline-subtracted) firing activity ri(t) of the i-th neuron in the population. At each time point, the population response is described by a population activity vector 𝐫(t) which draws a neural trajectory in the state space (Figure 1B left panel).

To quantify the strength of the population transient ON and OFF responses, we computed the distance of the population activity vector from its baseline firing level (average firing rate before stimulus presentation), corresponding to the norm of the population activity vector ||𝐫(t)|| (Mazor and Laurent, 2005). This revealed that the distance from baseline computed during ON and OFF responses was larger than the distance computed for the state at the end of stimulus presentation (Figure 1B right panel). We refer to this feature of the population transient dynamics as the transient amplification of ON and OFF responses.
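
This distance can be computed directly from the recorded activity. The following minimal sketch (not the authors' analysis code) assumes a hypothetical trial-averaged array `rates` of shape neurons × time points and a known number of pre-stimulus bins:

```python
import numpy as np

def distance_from_baseline(rates, n_baseline_bins):
    """Norm ||r(t)|| of the baseline-subtracted population activity vector.

    rates : array (n_neurons, n_timepoints), trial-averaged activity.
    n_baseline_bins : number of time bins before stimulus onset.
    """
    baseline = rates[:, :n_baseline_bins].mean(axis=1, keepdims=True)
    r = rates - baseline                 # baseline-subtracted responses r(t)
    return np.linalg.norm(r, axis=0)     # distance from baseline at each time point
```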

To examine what the transient amplification of ON and OFF responses implies in terms of stimulus decoding, we trained a simple decoder to classify pairs of stimuli that differed in their frequency content, but had the same intensity modulation and duration. We found that the classification accuracy was highest during the transient phases corresponding to ON and OFF responses, while it decreased at the end of stimulus presentation (Figure 1C). This result revealed a robust encoding of the stimulus features during ON and OFF responses, as previously found in the locust olfactory system (Mazor and Laurent, 2005; Saha et al., 2017).
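
A time-resolved decoding analysis of this kind can be sketched as follows; the logistic-regression decoder is an illustrative stand-in for the 'simple decoder' of the text, and the single-trial arrays `trials_a` and `trials_b` are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

def timewise_accuracy(trials_a, trials_b):
    """Leave-one-out cross-validated accuracy of a linear decoder per time step.

    trials_a, trials_b : arrays (n_trials, n_neurons, n_timepoints),
        single-trial responses to the two stimuli being classified.
    """
    X_all = np.concatenate([trials_a, trials_b], axis=0)
    y = np.array([0] * len(trials_a) + [1] * len(trials_b))
    accuracy = []
    for t in range(X_all.shape[2]):
        X = X_all[:, :, t]               # population state at time t
        scores = cross_val_score(LogisticRegression(max_iter=1000),
                                 X, y, cv=LeaveOneOut())
        accuracy.append(scores.mean())
    return np.array(accuracy)
```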

OFF responses rely on orthogonal low-dimensional subspaces

To further explore the structure of the neural trajectories associated with the population OFF responses to different stimuli, we analyzed neural activity using dimensionality reduction techniques (Cunningham and Yu, 2014). We focused specifically on responses within the period from 50 ms before to 300 ms after stimulus offset.

By performing principal component analysis (PCA) independently for the responses to individual stimuli, we found that the dynamics during the transient OFF responses to individual stimuli explored only about five dimensions, as 80% of the variance of the OFF responses to individual stimuli was explained on average by the first five principal components (Figure 2A; see Figure 2—figure supplement 1 for cross-validated controls; note that, given the temporal resolution, the maximal dimensionality explaining 80% of the variance of the responses to individual stimuli was 9). The projection of the low-dimensional OFF responses to each stimulus onto the first two principal components revealed circular activity patterns, where the population vector smoothly rotated between the two dominant dimensions (Figure 2B).
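
The per-stimulus dimensionality estimate can be reproduced with standard PCA. A minimal sketch, assuming a baseline-subtracted, trial-averaged response matrix with time points as samples:

```python
import numpy as np
from sklearn.decomposition import PCA

def n_components_for_variance(response, threshold=0.8):
    """Smallest number of PCs whose cumulative explained variance reaches `threshold`.

    response : array (n_timepoints, n_neurons), baseline-subtracted OFF response
        to one stimulus.
    """
    cumvar = np.cumsum(PCA().fit(response).explained_variance_ratio_)
    return int(np.searchsorted(cumvar, threshold)) + 1, cumvar
```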

Figure 2. Low-dimensional structure of population OFF responses.

(A) Cumulative variance explained for OFF responses to individual stimuli, as a function of the number of principal components. The blue trace shows the cumulative variance averaged across all 16 stimuli. Error bars are smaller than the symbol size. The triangular marker indicates the number of PCs explaining 80% (red line) of the total response variance for individual stimuli. (B) Left: projection of the population OFF response to the 8 kHz and white noise (WN) sounds on the first two principal components computed for the OFF response to the 8 kHz sound. Right: projection of both responses onto the first two principal components computed for the OFF response to the WN sound. PCA was performed on the period from −50 ms to 300 ms with respect to stimulus offset. We plot the response from 50 ms before stimulus offset to the end of the trial duration. (C) Cumulative variance explained for the OFF responses to all 16 stimuli together as a function of the number of principal components. The triangular marker indicates the number of PCs explaining 80% of the total response variance for all stimuli. (D) Overlap between the subspaces defined by the first five principal components of the OFF responses corresponding to pairs of stimuli. The overlap is measured by the cosine of the principal angle between these subspaces (see Materials and methods, Section 'Correlations between OFF response subspaces'). (E) Left: correlation matrix between the initial population activity states 𝐫0(s)=𝐫(s)(0) at the end of stimulus presentation (50 ms before offset) for each pair of stimuli. Right: linear correlation between subspace overlaps (D) and the overlap between initial states 𝐫0(s) (E left panel) for each stimulus pair. The component of the dynamics along the corresponding initial states was subtracted before computing the subspace overlaps.

Figure 2—figure supplement 1. Controls for PCA of OFF responses to individual stimuli and across stimuli.

(A) Solid blue trace: cumulative variance explained for population OFF responses to individual stimuli evaluated using PCA on raw calcium traces (13 data points). Gray trace: cumulative variance explained after shuffling cell indices independently at each time step to remove temporal correlations between time points. Responses where the temporal correlations have been removed exhibit higher dimensionality with respect to the original responses. For comparison, the cumulative variance explained computed on the temporally smoothed responses (see Materials and methods, Section 'Principal component analysis') is shown (dashed blue trace). (B) Cumulative variance explained evaluated using the cross-validated PCA (cvPCA) method (Stringer et al., 2019) applied to the OFF responses to individual stimuli. For each stimulus we evaluated the cvPCA spectrum averaging over all pairs of trials extracted from 20 total trials (see Materials and methods, Section 'Cross-validated PCA'). The solid trace and color shading represent the mean and standard deviation of the cumulative variance over stimuli. (C) Cumulative variance explained evaluated using the cvPCA method applied to the OFF responses to all stimuli at once. Here the solid trace and color shading represent the mean and standard error of the cumulative variance computed over all pairs of trials. In both plots the triangular red markers indicate the cvPCA dimensionalities that explain at least 80% of the response variance.
Figure 2—figure supplement 2. Controls for the orthogonality between OFF response subspaces.

(A) Statistical control testing the hypothesis that small subspace overlaps are due to the high dimensionality of neural responses. Blue and gray regions indicate stimulus pairs for which the p-value for the difference between the data and the controls is, respectively, smaller than or greater than or equal to 0.05 (lower tail test). Blue regions therefore indicate the stimulus pairs for which low values of the subspace overlaps are not attributable to the high dimensionality of the state space. The number of shuffles is set to 200 (see Materials and methods, Section 'Controls for subspace overlaps and initial state-peak correlations'). (B) Statistical control testing the hypothesis that small subspace overlaps are due to the trial-to-trial variability of neural responses. Blue and gray regions indicate stimulus pairs for which the p-value for the difference between the data and the controls is, respectively, smaller than or greater than or equal to 0.05 (two-tailed independent t-test). Blue regions therefore indicate the stimulus pairs for which low values of the subspace overlaps are not attributable to the trial-to-trial variability of neural activity. The number of trial subsamplings is set to 200 (see Materials and methods, Section 'Controls for subspace overlaps and initial state-peak correlations').
Figure 2—figure supplement 3. Relation between ON and OFF responses in the auditory cortex.

(A) Left: projection of the population ON and OFF responses to an 8 kHz pure tone (1 s, UP-ramp, with intensity range 60–85 dB) on the first two principal components computed on the ON response. Right: projection of both ON and OFF responses to the same stimulus on the first two principal components computed on the OFF response. PCA was performed on the period from −50 ms to 300 ms with respect to stimulus onset and offset. (B) Subspace overlaps between the subspaces spanned by the ON and OFF population responses to individual stimuli. The subspace overlap is computed as the cosine of the principal angle between the ON and OFF subspaces, defined by the first five principal components of the ON and OFF population responses respectively. Error bars correspond to ±1 standard deviation computed over 100 subsamplings of 80% of the units. (C) Projection of the population response for the stimulus in A along the population activity vector at the peak of the transient ON and OFF responses (corresponding to the maximum distance from baseline).
Figure 2—figure supplement 4. Overlap between the states at the peak of the transient OFF responses.

(A) Left: overlap between the states at the peak of the OFF responses for each pair of stimuli. The peak time is defined as the time at which the distance from baseline of the population vector is maximum. Right: linear correlation between the overlaps between peak states and the overlaps between initial conditions for each stimulus pair. (B) Same as A except that, for each stimulus, the component of the peak state along the corresponding initial condition has been subtracted.

A central observation revealed by the dimensionality reduction analysis was that the OFF response trajectories corresponding to stimuli with different frequency content spanned orthogonal low-dimensional subspaces. For instance, the response to the 8 kHz sound was poorly represented on the plane defined by the two principal components of the response to the WN sound (Figure 2B), and vice versa, showing that the two responses evolved in distinct subspaces. To quantify the relationship between the subspaces spanned by the OFF responses to different stimuli, we proceeded as follows. We first computed the first five principal components of the (baseline-subtracted) OFF response 𝐫(s)(t) to each individual stimulus s; for each stimulus s, these components define a five-dimensional subspace. Then, for each pair of stimuli, we computed the relative orientation of the corresponding pair of subspaces, measured by the subspace overlap (Figure 2D; see Materials and methods, Section 'Correlations between OFF response subspaces').
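
A minimal version of this computation is sketched below; it measures the overlap as the cosine of the smallest principal angle between the two five-dimensional PCA subspaces, which is one standard choice (the exact definition used here is given in the Materials and methods):

```python
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.decomposition import PCA

def subspace_overlap(resp_s1, resp_s2, n_pcs=5):
    """Cosine of the smallest principal angle between the PCA subspaces
    of two OFF responses (1 = aligned subspaces, 0 = orthogonal).

    resp_s1, resp_s2 : arrays (n_timepoints, n_neurons), baseline-subtracted
        OFF responses to two stimuli.
    """
    Q1 = PCA(n_components=n_pcs).fit(resp_s1).components_.T   # (n_neurons, n_pcs)
    Q2 = PCA(n_components=n_pcs).fit(resp_s2).components_.T
    return np.cos(subspace_angles(Q1, Q2).min())
```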

This approach revealed an interesting structure between the OFF response subspaces for different stimuli (Figure 2D and Figure 2—figure supplement 2). Stimuli with different frequency content evoked in general non-overlapping OFF responses reflected in low values of the subspace overlap. Two clusters of correlated OFF responses emerged, corresponding to the 8 kHz UP-ramps and WN UP-ramps of different durations and intensity. Surprisingly, DOWN-ramps evoked OFF responses that were less correlated than UP-ramps, even for sounds with the same frequency content.

The fact that most of the stimuli evoked non-overlapping OFF responses is reflected in the number of principal components that explain 80% of the variance for all OFF responses considered together, which is around 60 (Figure 2C and Figure 2—figure supplement 1). This number is in fact close to the number of dominant components we would expect if the responses to the 16 individual stimuli (see Table 1) evolved in fully uncorrelated subspaces (#PCs per stimulus × #stimuli = 5 × 16 = 80). Notably, this implies that while the OFF responses to individual stimuli span low-dimensional subspaces, the joint response across stimuli has a high-dimensional structure.

We finally examined to what extent the structure observed between the OFF response trajectories 𝐫(s)(t) to different stimuli (Figure 2D) could be predicted from the structure of the population activity states reached at the end of stimulus presentation, corresponding to the initial states 𝐫(s)(0). Remarkably, we found that the initial states exhibited structure across stimuli that closely matched the structure of the OFF response dynamics (Figure 2E left panel and Figure 2—figure supplement 4), even when the component along the initial states was subtracted from the corresponding OFF response trajectories (Figure 2E right panel and Figure 2—figure supplement 4). This suggests that initial states contribute to determining the subsequent dynamics of the OFF responses.

Single-cell model for OFF responses

Our analyses of auditory cortical data identified three prominent features of population dynamics: (i) OFF responses correspond to transiently amplified trajectories; (ii) responses to individual stimuli explore low-dimensional subspaces; (iii) responses to different stimuli lie in largely orthogonal subspaces. We next examined to what extent these three features could be accounted for by a single-cell mechanism for OFF response generation.

The AC is not the first stage where OFF responses arise. Robust OFF responses are found throughout the auditory pathway, and in subcortical areas the generation of OFF responses most likely relies on mechanisms that depend on the interaction between the inhibitory and excitatory synaptic inputs to single cells (e.g. post-inhibitory rebound or facilitation, see Kopp-Scheinpflug et al., 2018). To examine the possibility that OFF responses in AC are consistent with a similar single-cell mechanism, we considered a simplified, linear model that could be directly fit to calcium recordings in the AC. Adapting previously used models (Anderson and Linden, 2016; Meyer et al., 2016), we assumed that the cells received no external input after stimulus offset, and that the response of neuron i after stimulus offset is specified by a characteristic linear filter, which describes the cell’s intrinsic response generated by intracellular or synaptic mechanisms (Figure 3A). We moreover assumed that the shape of this temporal response is set by intrinsic properties, and is therefore identical across different stimuli. In the model, each stimulus modulates the response of a single neuron linearly depending on its activity at stimulus offset, so that the (baseline-subtracted) OFF response of neuron i to stimulus s is written as:

$r_i^{(s)}(t) = r_{0,i}^{(s)}\,L_i(t),$ (1)

where r0,i(s) is the modulation factor for neuron i and stimulus s. We estimated the single-cell responses Li(t) from the data by linear regression, fitting basis functions to the neurons' responses after normalizing them by the modulation factors r0,i(s) (see Materials and methods, Section 'Single-cell model for OFF response generation'). We first fitted the single-cell model to the responses to a single stimulus, and then increased the number of stimuli.
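
The sketch below illustrates one way to carry out this fit, assuming hypothetical arrays `R` (stimuli × neurons × time) and `r0` (stimuli × neurons) and an illustrative Gaussian temporal basis; instead of dividing each response by its modulation factor, the factor is absorbed into the design matrix, a closely related weighted formulation that avoids division by small values:

```python
import numpy as np

def fit_single_cell_model(R, r0, n_basis=10, width=2.0):
    """Least-squares estimate of the single-cell filters L_i(t) in Equation (1).

    R  : array (n_stimuli, n_neurons, n_timepoints), baseline-subtracted OFF responses.
    r0 : array (n_stimuli, n_neurons), per-stimulus modulation factors
         (e.g. activity at stimulus offset).
    """
    n_stim, n_neur, n_t = R.shape
    t = np.arange(n_t)
    centers = np.linspace(0, n_t - 1, n_basis)
    # Gaussian temporal basis (illustrative choice; any smooth basis works).
    B = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)
    L = np.zeros((n_neur, n_t))
    for i in range(n_neur):
        # Solve min_w sum_s || R[s, i] - r0[s, i] * (B @ w) ||^2 for neuron i:
        # the modulation factor scales the basis rather than normalizing the data.
        X = np.vstack([r0[s, i] * B for s in range(n_stim)])
        y = R[:, i, :].ravel()
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        L[i] = B @ w                     # shared temporal filter L_i(t)
    return L
```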

Figure 3. Comparison between single-cell and network models for OFF response generation.

(A) Cartoon illustrating the single-cell model defined in Equation (1). The activity of each unit is described by a characteristic temporal response Li(t) (colored traces). Across stimuli, only the relative activation of the single units changes, while the temporal shape of the response of each unit Li(t) does not vary. (B) Distance from baseline of the population activity vector during the OFF response to one example stimulus (red trace; same stimulus as in C left panel). The dashed vertical line indicates the time of stimulus offset. Gray lines correspond to the norms of the population activity vectors obtained from fitting the single-cell model to the OFF response to a single stimulus (dashed line) and simultaneously to the OFF responses to two stimuli (dash-dotted line; in this example, the two fitted stimuli are the ones considered in panel C). In light gray we plot the norm of 100 realizations of the fitted OFF response, obtained by simulating different initial states distributed in the vicinity of the fitted initial state (shown only for the simultaneous fit of two stimuli, for clarity). Note that in the single-cell model fit (see Materials and methods, Section 'Fitting the single-cell model'), the fitted initial condition can substantially deviate from the initial condition taken from the data. (C) Colored traces: projection of the population OFF responses to two distinct example stimuli (same stimuli as in Figure 2B) onto the first two principal components of either stimulus. As in panel B, gray dashed and dash-dotted traces correspond to the projection of the OFF responses obtained when fitting the single-cell model to one or two stimuli at once. (D) Goodness of fit (as quantified by the coefficient of determination R²) computed by fitting the single-cell model (blue trace) and the network model (red trace) to the calcium activity data, as a function of the number of stimuli included in the fit. Both traces show the cross-validated value of the goodness of fit (10-fold cross-validation in the time domain). Error bars represent the standard deviation over multiple subsamplings of a fixed number of stimuli (reported on the abscissa). Prior to fitting, for each subsampling of stimuli, we reduced the dimensionality of the responses using PCA and kept the dominant dimensions that accounted for 90% of the variance. (E) Examples of the OFF responses of two different auditory cortical neurons to distinct stimuli (each panel corresponds to a different neuron). The same neuron exhibits different temporal response profiles for different stimuli, a feature consistent with the network model (see Figure 6A,D) but not with the single-cell model. (F) Illustration of the recurrent network model. The variables defining the state of the network are the (baseline-subtracted) firing rates of the units, denoted by ri(t). The strength of the connection from unit j to unit i is denoted by Jij. (G) Distance from baseline of the population activity vector during the OFF response to one example stimulus (red trace; same stimulus as in C left panel). Gray traces correspond to the norms of the population activity vectors obtained from fitting the network model to the OFF response to a single stimulus and generated using the fitted connectivity matrix. The initial conditions were chosen in a neighborhood of the population activity vector 50 ms before sound offset. 100 traces corresponding to 100 random choices of the initial condition are shown. Dashed trace: average distance from baseline.
(H) Colored traces: projection of the population OFF responses to two different stimuli (same stimuli as in Figure 2B) on the first two principal components. Gray traces: projections of multiple trajectories generated by the network model using the connectivity matrix fitted to the individual stimuli, as in G. The initial condition is indicated with a dot. 100 traces are shown. Dashed trace: projection of the average trajectory. (I) Left: overlap between the subspaces corresponding to OFF response trajectories to pairs of stimuli (as in Figure 2D), generated using the connectivity matrix fitted to all stimuli at once. Right: linear correlation between subspace overlaps and the overlaps between the initial states 𝐫0(s) for each stimulus pair, computed using the trajectories obtained from the fitted network model. The component of the dynamics along the corresponding initial states was subtracted before computing the subspace overlaps.

Figure 3—figure supplement 1. Goodness of fit for the network and single-cell models adjusted for the number of parameters.

Goodness of fit, as quantified by the coefficient of determination adjusted for the number of parameters (adjusted R²), when fitting the single-cell model (blue trace) and the network model (red trace) to the calcium activity data, as a function of the number of stimuli included in the fit. Both traces show the cross-validated value of the goodness of fit (10-fold cross-validation in the time domain). Error bars represent the standard deviation over multiple subsamplings of a fixed number of stimuli (on the abscissa). Prior to fitting, for each subsampling of stimuli, we reduced the dimensionality of the responses using PCA and kept the dominant dimensions accounting for 90% of the variance.
Figure 3—figure supplement 2. Goodness of fit for the original and surrogate data sets.

Value of the goodness of fit for the network model (as quantified by the coefficient of determination R²) for the original data sets and for three types of surrogate data sets, computed using the activity pooled across the three animals (All) and the activity of individual animals (mouse 1, 2, and 3). For the original data set, the dot shows the goodness of fit averaged over the cross-validation folds. For the surrogate data sets, the average goodness of fit over cross-validation folds is shown for multiple (n=500) surrogates of the original data. The network model fits the original data significantly better than all three types of surrogate data sets (*** corresponds to p<0.001; upper tail test using the value of the goodness of fit averaged over the cross-validation folds for each data set). Box-and-whisker plots show the distribution of the values for the three types of surrogates (Tukey convention: box lower end, middle line, and upper end represent the first quartile, median, and third quartile of the distributions; lower and upper whiskers extend for 1.5 times the interquartile range). Fitting is performed on the population OFF responses to all 16 stimuli at once. The goodness of fit is computed using 10-fold cross-validation in the time domain. The parameters for the reduced-rank ridge regression for the four data sets are respectively: λAll=2, RAll=70; λ1=2, R1=70; λ2=0.8, R2=90; λ3=0.6, R3=90.
Figure 3—figure supplement 3. Goodness of fit as a function of PC dimensionality for original and surrogate data sets.

Value of the goodness of fit as a function of PC dimensionality for the original data set (black trace) and for the surrogate data sets (colored traces). The value of the goodness of fit was computed using ridge regression using 10-fold cross-validation. The ridge parameter is optimized for each choice of the PC dimensionality. For the original data set, error bars represent the standard error over the 10 cross-validation folds; for the surrogate data sets, error bars represent the average cross-validation standard error over all the surrogates. Here the number of surrogates is set to 100. For each choice of the PC dimensionality, red crosses indicate that the difference in goodness of fit between the original data set and the most constraining surrogate (TKC) is statistically significant (p<0.01; upper tail test using the mean value of the goodness of fit over the cross-validation folds for each data set).

The single-cell model accounted well for the first two features observed in the data: (i) because the single-cell responses Li(t) are in general non-monotonic, the distance from baseline of the population activity vector displayed amplified dynamics (Figure 3B); (ii) at the population level, the trajectories generated by the single-cell model spanned low-dimensional subspaces of at least two dimensions, and provided excellent fits to the trajectories in the data when the response to a single stimulus was considered (Figure 3C). However, we found that the single-cell model could not account for the structure of responses across multiple stimuli. Indeed, although fitting the model to OFF responses to a single stimulus led to an excellent match with the auditory cortical data (coefficient of determination R²=0.75), increasing the number of simultaneously fitted stimuli led to increasing deviations from the data (Figure 3B,C) and strongly decreased the goodness of fit (Figure 3D). When fitting all stimuli at once, the goodness of fit was extremely poor (R²=0.1), so the single-cell model could not provide useful information about the third feature of the data, the structure of subspaces spanned in response to different stimuli. A simple explanation for this inability to capture structure across stimuli is that in the single-cell model the temporal shape of the response of each neuron is the same across all stimuli, while this was not the case in the data (Figure 3E).

Network model for OFF responses

We contrasted the single-cell model with a network model that generated OFF responses through collective interactions between neurons. Specifically, we studied a network of N recurrently coupled linear rate units whose time evolution is given by:

$\dot{r}_i = -r_i + \sum_{j=1}^{N} J_{ij}\,r_j.$ (2)

The quantity ri(t) represents the deviation of the activity of the unit i from its baseline firing rate, while Jij denotes the effective strength of the connection from neuron j to neuron i (Figure 3F). As in the single-cell model, we assumed that the network received no external input after stimulus offset. The only effect of the preceding stimulus was to set the initial pattern of activity of the network at t=0. The internal recurrent dynamics then fully determined the subsequent evolution of the activity. Each stimulus s was thus represented by an initial state 𝐫0(s) that was mapped onto a trajectory of activity 𝐫(s)(t). We focused on the dynamics of the network following stimulus offset, which we represent as t=0.

To quantify to what extent recurrent interactions could account for auditory cortical dynamics, we fitted the network model to OFF responses using linear regression (see Materials and methods, Section 'Fitting the network model'). For each stimulus, the pattern of initial activity 𝐫0(s) was taken from the data, and the connectivity matrix 𝐉 was fitted to the responses to a progressively increasing number of stimuli.
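
One simple version of this fit discretizes the model dynamics: since Equation (2) implies dr/dt = (𝐉 − 𝐈)𝐫, the connectivity can be estimated by ridge-regressing finite-difference derivatives on the states and adding back the identity. The sketch below is a simplified stand-in for the procedure in the Materials and methods (which additionally projects onto dominant PCs and constrains the rank):

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_network_model(trajectories, dt, alpha=1.0):
    """Estimate the effective connectivity J of dr/dt = -r + J r.

    trajectories : list of arrays (n_timepoints, n_units), one OFF-response
        trajectory per stimulus (e.g. projections on the dominant PCs).
    dt : sampling interval of the recordings.
    """
    X, Y = [], []
    for r in trajectories:
        X.append(r[:-1])                         # states r(t)
        Y.append((r[1:] - r[:-1]) / dt)          # finite-difference dr/dt
    X, Y = np.vstack(X), np.vstack(Y)
    # dr/dt = (J - I) r : regress derivatives on states, then add the identity.
    A = Ridge(alpha=alpha, fit_intercept=False).fit(X, Y).coef_
    return A + np.eye(A.shape[0])
```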

Qualitatively, we found that the fitted network model reproduced well the first two features of the data, transient amplification and low-dimensional population trajectories (Figure 3G,H). Quantitatively, we evaluated the goodness of fit by computing the coefficient of determination R². While the goodness of fit of the network model was lower than that of the single-cell model when fitting the response to a single stimulus (R²=0.52), for the network model the goodness of fit remained consistently high as the number of simultaneously fitted stimuli was increased (Figure 3D; R²=0.52 when fitted on the responses to all stimuli). Computing the goodness of fit while taking into account the number of parameters of the network and single-cell models led to the same conclusion (Figure 3—figure supplement 1). When fitted to all stimuli at once, the network model captured the structure of the subspace overlaps (Figure 3I left panel) and its correlation with the structure of initial states (Figure 3I right panel). In contrast to the single-cell model, the network model therefore accounted well for the third feature of the data, the structure of responses across stimuli. This can be explained by the fact that, in the network model, the temporal OFF responses of single cells can in general differ across stimuli (see Figure 6A,D), similar to the activity in the AC (Figure 3E).

Testing the network mechanisms of OFF responses on the data

Having found that the fitted network model reproduced the three main features of the data, we next analyzed this model to identify the mechanisms responsible for each feature. The identified mechanisms provided new predictions that we tested on the effective connectivity matrix 𝐉 obtained from the fit to the data.

Transiently amplified OFF responses

The first feature of the data was that the dynamics were transiently amplified, i.e. the OFF responses first deviated from baseline before eventually converging to it. A preliminary requirement to reproduce this feature is that dynamics are stable, that is, eventually decay to baseline following any initial state. This requirement leads to the usual condition that the eigenvalues of the connectivity matrix 𝐉 have real parts less than unity. Provided this basic requirement is met, during the transient dynamics from the initial state at stimulus offset, the distance from baseline can either monotonically decrease, or transiently increase before eventually decreasing. To generate the transient increase, the connectivity matrix needs to belong to the class of non-normal matrices (Trefethen and Embree, 2005; Murphy and Miller, 2009; Goldman, 2009; Hennequin et al., 2012; see Materials and methods, Section 'Normal and non-normal connectivity matrices'), but this is not a sufficient condition. More specifically, the largest eigenvalue of the symmetric part defined by 𝐉S=(𝐉+𝐉T)/2 needs to be larger than unity, while the initial state 𝐫0 needs to belong to a subset of amplified patterns (Bondanelli and Ostojic, 2020; see Materials and methods, Section 'Sufficient condition for amplified OFF responses').

To test the predictions derived from the condition for amplified transient responses, that is, that 𝐉 belongs to a specific subset of non-normal matrices, we examined the spectra of the full connectivity 𝐉 and of its symmetric part 𝐉S (Bondanelli and Ostojic, 2020). The eigenvalues of the fitted connectivity matrix had real parts smaller than one, indicating stable dynamics, and large imaginary parts (Figure 4A). Our theoretical criterion predicted that, for OFF responses to be amplified, the spectrum of the symmetric part 𝐉S must have at least one eigenvalue larger than unity. Consistent with this prediction, we found that the symmetric part of the connectivity had a large number of eigenvalues larger than one (Figure 4B) and could therefore produce amplified responses (Figure 3G).
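
Both spectral conditions are straightforward to check numerically on a fitted connectivity matrix, as in this minimal sketch:

```python
import numpy as np

def amplification_criteria(J):
    """Check stability (Re lambda(J) < 1) and the necessary condition for
    transient amplification (largest eigenvalue of J_S = (J + J^T)/2 above 1).
    """
    eig_J = np.linalg.eigvals(J)
    eig_S = np.linalg.eigvalsh((J + J.T) / 2)    # symmetric part: real spectrum
    stable = bool(np.all(eig_J.real < 1))
    amplifying = bool(eig_S.max() > 1)
    return stable, amplifying, eig_J, eig_S
```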

Figure 4. Spectra of the connectivity matrix of the fitted network model.

(A) Eigenvalues of the connectivity matrix 𝐉 obtained by fitting the network model to the population OFF responses to all 1 s stimuli at once. The dashed line marks the stability boundary given by Re λ(𝐉) = 1. (B) Probability density distribution of the eigenvalues of the symmetric part of the effective connectivity, 𝐉S. Eigenvalues larger than unity (λ(𝐉S) > 1; highlighted in blue) determine strongly non-normal dynamics. In A and B the total dimensionality of the responses was set to 100. The fitting was performed on 20 bootstrap subsamplings (with replacement) of 80% of the neurons out of the total population. Each bootstrap subsampling resulted in a set of 100 eigenvalues. In panel A we plot the eigenvalues of 𝐉 obtained across all subsamplings. In panel B the thin black lines indicate the standard deviation of the eigenvalue probability density across subsamplings. In panels A and B, the temporal window considered for the fit was extended from 350 ms to 600 ms to include the decay to baseline of the OFF responses. The extension of the temporal window was possible only for the 1 s long stimuli (n=8), since the length of the temporal window following stimulus offset for the 2 s stimuli was limited by the length of the neural recordings.

Low-dimensionality of OFF response trajectories

The second feature of auditory cortical data was that each stimulus generated a population response embedded in an approximately five-dimensional subspace of the full state space. We hypothesized that low-dimensional responses in the fitted network model originated from a low-rank structure in the connectivity (Mastrogiuseppe and Ostojic, 2018), implying that the connectivity matrix could be approximated in terms of R ≪ N modes, that is, as

$\mathbf{J} = \mathbf{u}^{(1)}\mathbf{v}^{(1)T} + \mathbf{u}^{(2)}\mathbf{v}^{(2)T} + \dots + \mathbf{u}^{(R)}\mathbf{v}^{(R)T},$ (3)

where each mode was specified by two N-dimensional vectors 𝐮(r) and 𝐯(r), which we term the right and left connectivity patterns respectively (Mastrogiuseppe and Ostojic, 2018). This set of connectivity patterns is uniquely defined from the singular value decomposition (SVD) of the connectivity matrix 𝐉 (see Materials and methods, Section 'Low-dimensional dynamics').

To test the low-rank hypothesis, we re-fitted the network model while constraining the rank of connectivity matrix (Figure 5A, Figure 5—figure supplement 1, Figure 5—figure supplement 2; see Materials and methods, Section 'Fitting the network model'). Progressively increasing the rank R, we found that R=6 was sufficient to capture more than 80% of the variance explained by the network model when fitting the responses to individual stimuli (Figure 5A).
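
A standard construction of reduced-rank ridge regression, sketched below, first computes the full-rank ridge solution and then projects the fitted values onto their top-R principal directions; the exact estimator and hyperparameter selection used here (see Figure 5—figure supplement 1) may differ in detail:

```python
import numpy as np

def reduced_rank_ridge(X, Y, rank, alpha=1.0):
    """Solve min ||Y - X B||^2 + alpha ||B||^2 subject to rank(B) <= rank."""
    n_features = X.shape[1]
    # Full-rank ridge solution.
    B_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)
    # Project the fitted values onto their top-`rank` principal directions.
    _, _, Vt = np.linalg.svd(X @ B_ridge, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]                  # rank-constrained projector
    return B_ridge @ P
```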

Figure 5. Low-dimensional structure of the dynamics of OFF responses to individual stimuli.

(A) Goodness of fit (coefficient of determination) as a function of the rank R of the fitted network, normalized by the goodness of fit computed using ridge regression without rank constraint. The shaded area represents the standard deviation across individual stimuli. For R=6 reduced-rank regression captures more than 80% (red solid line) of the variance explained by the unconstrained regression. (B) Left: overlap matrix 𝐉ov consisting of the overlaps between left and right connectivity patterns of the connectivity 𝐉 fitted on one example stimulus. The color code shows strong (and opposite in sign) correlation values between left and right connectivity patterns within pairs of nearby modes. Weak coupling is instead present between different rank-2 channels (highlighted by dark boxes). Right: histogram of the absolute values of the correlation between right and left connectivity patterns, across all stimuli, and across 20 random subsamplings of 80% of the neurons in the population for each stimulus. Left and right connectivity vectors are weakly correlated, except for the pairs corresponding to the off-diagonal couplings within each rank-2 channel. The two markers indicate the average values of the correlations within each group. When fitting individual stimuli the rank parameter R and the number of principal components are set to 6 and 100, respectively. (C) Left: absolute value of the difference between Δ1 and Δ2 (see panel B) divided by 2, across stimuli. For a rank-R channel (here R=6) comprising R/2 rank-2 channels, the maximum difference |Δ1−Δ2|/2 across the R/2 rank-2 channels is considered. Large values of this difference indicate that the dynamics of the corresponding rank-R channel are amplified. Right: component of the initial state 𝐫0 on the left connectivity vectors 𝐯(2;k) (i.e. (Σk α2(k)²)^(1/2) in Equation (75)), obtained from the connectivity 𝐉 fitted to individual stimuli (in green). The component of the initial condition along the left connectivity vectors 𝐯(2;k) (green box) is larger than the component of random vectors along the same vectors (gray box). In both panels the boxplots show the distributions across all stimuli of the corresponding average values over 20 random subsamplings of 80% of the neurons. The rank parameter and the number of principal components are the same as in B. (D) For each stimulus, the correlation between the state at the end of stimulus presentation and the state at the peak of the OFF response, defined as the time of maximum distance from baseline. Error bars represent the standard deviation computed over 2000 bootstrap subsamplings of 50% of the neurons in the population (2343 neurons). (E) Correlation between initial state and peak state obtained from fitting the single-cell (in green) and network models (in red) to a progressively increasing number of stimuli. For each fitted response the peak state is defined as the population vector at the time of maximum distance from baseline of the original response. The colored shaded areas correspond to the standard error over 100 subsamplings of a fixed number of stimuli (reported on the abscissa) out of 16 stimuli. For each subsampling of stimuli, the correlation is computed for one random stimulus.

Figure 5—figure supplement 1. Selection of hyperparameters in rank-reduced ridge regression.

The ridge and rank hyperparameters λ and R are selected using 10-fold cross-validation in the time domain, when fitting the activity of the whole pseudopopulation (All) or the activity of individual animals (mouse 1, 2, and 3). (A) Goodness of fit as a function of the hyperparameters λ and R. The gray trace shows the value of λ for which the goodness of fit is maximum for each value of the rank parameter R. (B) Goodness of fit as a function of the hyperparameters λ for three different choices of the rank R. The value of λ for which the goodness of fit is maximum is indicated by the dashed line. (C) Goodness of fit as a function of the rank R for λ=1. The value of the goodness of fit as a function of the rank R does not exhibit a clear maximum, but rather tends to saturate at a certain value of the rank R*.
Figure 5—figure supplement 2. Model recovery for simulated OFF responses using a low-rank network model.

Model recovery for OFF responses generated through a low-rank network model. The number of units was set to N=500. The connectivity matrix consisted of P=30 orthogonal transient channels, each of rank one, given by 𝐉 = 𝐮(1)𝐯(1)T + 𝐮(2)𝐯(2)T + ⋯ + 𝐮(P)𝐯(P)T, where all the left and right connectivity vectors are mutually orthogonal (𝐯(i)T𝐮(j) = 0 for all i, j). We generated 10 OFF responses by setting the initial state of the network to 10 orthogonal left connectivity vectors 𝐯(r). Input noise was injected into the system. Thus, the dynamics across stimuli spanned at least 20 dimensions. (A) Goodness of fit as a function of the rank parameter R in the reduced-rank regression fit, for two values of the PC dimensionality. Ten-fold cross-validation is used; error bars represent the standard error across the 10 cross-validation folds. The goodness of fit increases as a function of the rank R and stays approximately constant for values of the rank greater than or equal to 20 (dashed line). (B) Goodness of fit as a function of the PC dimensionality computed using ordinary least-squares regression (with no rank constraint, gray trace) and reduced-rank regression (with R=20, red trace). The dashed line indicates the number of dimensions spanned by the dynamics across all stimuli in the absence of input noise (equal to 20 in this case). For values of the PC dimensionality larger than the number of dimensions spanned by the dynamics, the goodness of fit decreases due to overfitting. Cross-validation is performed as in A. (C) Similarity, as quantified by the coefficient of determination R², between the elements of the connectivity resulting from the fitting procedure 𝐉fit and the projection of the true connectivity 𝐉 onto the top principal components, i.e. 𝐉PC = 𝐐T𝐉𝐐 (where 𝐐 contains the principal components as columns), as a function of the number of principal components considered. The gray trace corresponds to ordinary least-squares regression, and the red trace to reduced-rank regression with rank R=20. (D) Spectra of the matrices 𝐉fit (green dots) and 𝐉PC (red crosses) using ordinary least-squares regression (left) and reduced-rank regression (right) for fixed dimensionality (equal to 80, black marker in panel C). In all panels, the ridge parameter is set to λ=0.1.
Figure 5—figure supplement 3. Controls for the orthogonality between initial and peak state.

(A) Statistical control testing the hypothesis that small correlations are due to the high dimensionality of neural responses. The dark trace represents the absolute value of the correlation between initial and peak state for individual stimuli. The blue trace represents the correlations between initial and peak state after shuffling the labels 'initial state' and 'peak state' across trials (mean and standard error across shuffles are shown). Blue markers indicate stimulus indices for which the p-value for the difference between the data and the controls is smaller than 0.05 (lower tail test), where a low value of the correlation is not attributable to the high dimensionality of the state space (see Materials and methods, Section 'Controls for subspace overlaps and initial state-peak correlations'). The number of shuffles is set to 1000. (B) Statistical control testing the hypothesis that small correlations are due to the trial-to-trial variability of neural responses. The dark trace represents the absolute value of the correlation between initial and peak state for individual stimuli (mean and standard error over 1000 subsamplings of 10 trials are shown). The yellow trace represents the correlations between responses at the same time points but averaged over two different sets of trials (mean and standard error over 1000 permutations of trials are shown). Yellow markers indicate stimulus indices for which the p-value for the difference between the data and the controls is smaller than 0.05 (two-tailed independent t-test), where a low value of the correlation is not attributable to the trial-to-trial variability of neural responses (see Materials and methods, Section 'Controls for subspace overlaps and initial state-peak correlations').

The dynamics in the obtained low-rank network model are therefore fully specified by a set of patterns over the neural population: patterns corresponding to initial states 𝐫0 determined by the stimuli, and patterns corresponding to connectivity vectors 𝐮(r) and 𝐯(r), for r = 1, …, R, that determine the network dynamics. Recurrent neural networks based on such connectivity directly generate low-dimensional dynamics: for a given stimulus s, the trajectory of the dynamics lies in the subspace spanned by the initial state 𝐫0(s) and the set of right connectivity patterns 𝐮(1), …, 𝐮(R) (Mastrogiuseppe and Ostojic, 2018; Bondanelli and Ostojic, 2020; Schuessler et al., 2020; Beiran et al., 2020). More specifically, the transient dynamics can be written as (see Materials and methods, Section 'Low-dimensional dynamics')

$\mathbf{r}(t) = e^{-t}\,\mathbf{r}_0 + e^{-t}\,\mathbf{U}\,\mathbf{J}_{ov}^{-1}\left[\exp\!\left(t\,\mathbf{J}_{ov}\right) - \mathbf{I}\right]\left(\mathbf{V}^T\mathbf{r}_0\right),$ (4)

where 𝐔 and 𝐕 are N×R matrices that contain the R right and left connectivity vectors as columns, respectively, and 𝐉ov is the R×R matrix of overlaps (𝐉ov)kl = 𝐯(k)T𝐮(l) between left and right connectivity vectors. This overlap matrix therefore fully determines the dynamics in the network (see Materials and methods, Section 'Low-dimensional dynamics').
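
Equation (4) can be evaluated directly once the connectivity patterns are known; the sketch below assumes 𝐉ov is invertible (a singular 𝐉ov requires the limiting form of the same expression):

```python
import numpy as np
from scipy.linalg import expm

def low_rank_trajectory(U, V, r0, times):
    """Evaluate the closed-form solution of Equation (4) for dr/dt = -r + U V^T r.

    U, V : arrays (N, R), right and left connectivity vectors as columns.
    r0   : array (N,), initial state at stimulus offset.
    """
    J_ov = V.T @ U                               # R x R overlap matrix
    J_ov_inv = np.linalg.inv(J_ov)
    kappa0 = V.T @ r0                            # projection of r0 on left vectors
    I_R = np.eye(J_ov.shape[0])
    traj = [np.exp(-t) * (r0 + U @ (J_ov_inv @ (expm(t * J_ov) - I_R) @ kappa0))
            for t in times]
    return np.array(traj)                        # shape (len(times), N)
```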

Inspecting the overlap matrix 𝐉ov obtained from the fitted connectivity matrix revealed a clear structure, where left and right vectors were strongly correlated within pairs, but uncorrelated between different pairs (Figure 5B left panel). This structure effectively defined a series of R/2 rank-2 channels that were orthogonal to each other. Within individual rank-2 channels, strong correlations were observed only across the two modes (e.g. between 𝐯(1) and 𝐮(2), 𝐯(2) and 𝐮(1), etc.; Figure 5B), so that the connectivity matrix corresponding to each rank-2 channel can be written as

$\mathbf{J}_2 = \Delta_1\,\mathbf{v}^{(2)}\mathbf{v}^{(1)T} - \Delta_2\,\mathbf{v}^{(1)}\mathbf{v}^{(2)T},$ (5)

where Δ1 and Δ2 are two scalar values. Rank-2 matrices of this type have purely imaginary eigenvalues given by ±i√(Δ1Δ2), reflecting the strong imaginary components in the eigenspectrum of the full matrix (Figure 4A). These imaginary eigenvalues lead to strongly rotational dynamics in the plane defined by 𝐯(1) and 𝐯(2) (Figure 6C; see Materials and methods, Section 'Dynamics of a low-rank rotational channel'), qualitatively similar to the low-dimensional dynamics in the data (Figure 2B). Such rotational dynamics however do not necessarily imply transient amplification. In fact, a rank-2 connectivity matrix as in Equation (5) generates amplified dynamics only if two conditions are satisfied (Figure 6B; see Materials and methods, Section 'Dynamics of a low-rank rotational channel'): (i) the difference |Δ2−Δ1|/2 is greater than unity; (ii) the initial state 𝐫0 overlaps strongly with the left connectivity patterns 𝐯(r). A direct consequence of these constraints is that when dynamics are transiently amplified, the population activity vector at the peak of the transient dynamics lies along a direction that is largely orthogonal to the initial state at stimulus offset (Figure 6C,E; see Materials and methods, Section 'Correlation between initial and peak state').
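
The sketch below simulates a single rank-2 channel of this form, with illustrative values Δ1 = 1 and Δ2 = 7 (so that |Δ2 − Δ1|/2 = 3 > 1) and an initial state along 𝐯(2); the distance from baseline transiently exceeds its initial value before decaying, and the peak state is carried mostly by 𝐯(1), nearly orthogonal to the initial state:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Delta1, Delta2 = 200, 1.0, 7.0
v1, v2 = np.linalg.qr(rng.standard_normal((N, 2)))[0].T     # orthonormal v1, v2
J2 = Delta1 * np.outer(v2, v1) - Delta2 * np.outer(v1, v2)  # Equation (5)

dt, T = 0.01, 8.0
r = v2.copy()                       # initial state along the amplified direction
norms = []
for _ in range(int(T / dt)):
    r = r + dt * (-r + J2 @ r)      # Euler step of dr/dt = -r + J r
    norms.append(np.linalg.norm(r))
# max(norms) > 1: transient amplification with rotation in the (v1, v2) plane.
```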

Figure 6. Population OFF responses in a network model with low-rank rotational structure.

(A) Single-unit OFF responses to two distinct stimuli generated by orthogonal rank-2 rotational channels (see Materials and methods, Section 'Dynamics of a low-rank rotational channel'). The two stimuli were modeled as two different activity states 𝐫0(1) and 𝐫0(2) at the end of stimulus presentation (t=0). We simulated a network of 1000 units. The connectivity consisted of the sum of P=20 rank-2 rotational channels of the form given by Equations (5) and (6) (R1 = R2 = ⋯ = RP = 2 in Equation (6), and Δ1 = ||𝐮(1;s)|| = 1, Δ2 = ||𝐮(2;s)|| = 7 for all s = 1, …, P). Here the two stimuli were chosen along two orthogonal vectors 𝐯(2;s=1) and 𝐯(2;s=2). Dashed lines indicate the time of stimulus offset. (B) Distance from baseline of the population activity vector during the OFF responses to the two stimuli in A. For each stimulus, the amplitude of the offset response (quantified by the peak value of the distance from baseline) is controlled by the difference between the lengths of the right connectivity vectors of each rank-2 channel, that is, |Δ2−Δ1|/2. (C) Projection of the population OFF responses to the two stimuli onto the first two principal components of either stimulus. The projections of the vectors 𝐯(2;s=1) and 𝐯(1;s=1) (resp. 𝐯(2;s=2) and 𝐯(1;s=2)) on the subspace defined by the first two principal components of stimulus 1 (resp. 2) are shown as solid and dashed arrows, respectively. The subspaces defined by the vector pairs (𝐯(1;s=1), 𝐯(2;s=1)) and (𝐯(1;s=2), 𝐯(2;s=2)) are mutually orthogonal, so that the OFF responses to stimuli 1 and 2 evolve along orthogonal dimensions. (D) Firing rate of one example neuron in response to two distinct stimuli: in the recurrent network model, the time course of the activity of one unit is not necessarily the same across stimuli. (E) Correlation between the initial state (i.e. the state at the end of stimulus presentation) and the state at the peak of the OFF response, for five example stimuli. Error bars represent the standard deviation computed over 100 bootstrap subsamplings of 50% of the units in the population.

The theoretical analyses of the model therefore provide a new set of predictions that we directly tested on the connectivity fitted to the data. We first computed the differences |Δ2−Δ1|/2 for each channel using the SVD of the fitted matrix 𝐉(s), and found that they were sufficiently large to amplify the dynamics within each rank-2 channel (Figure 5C left panel). We next examined for each stimulus the component of the initial state 𝐫0 on the 𝐯(r)'s and found that it was significantly larger than the component of a random vector on the 𝐯(r)'s, across all stimuli (Figure 5C right panel). Finally, computing the correlation between the peak state and the initial state at the end of stimulus presentation, we found that this correlation took values lower than predicted by chance for almost all stimuli, consistent with the prediction of the network model (Figure 5D, Figure 5—figure supplement 3). The single-cell model instead predicts stronger correlations between initial and peak states, in clear conflict with the data (Figure 5E).

Orthogonal OFF response trajectories across stimuli

The third feature observed in the data was that population responses evoked by different stimuli were orthogonal for many of the considered stimuli. Orthogonal trajectories for different stimuli can be reproduced in the model under the conditions that (i) initial patterns of activity corresponding to different stimuli are orthogonal, (ii) the connectivity matrix is partitioned into a set of orthogonal low-rank terms, each individually leading to transiently amplified dynamics, and (iii) each stimulus activates one of the low-rank terms. Altogether, for P stimuli leading to orthogonal responses, the connectivity matrix can be partitioned into P different groups of modes:

\mathbf{J} = \sum_{s=1}^{P} \mathbf{J}^{(s)}, \qquad (6)

with

\mathbf{J}^{(s)} = \sum_{r=1}^{R_s} \mathbf{u}^{(r;s)} \mathbf{v}^{(r;s)T}, \qquad (7)

where the vectors 𝐯(r;s) have unit norm. The set of modes indexed by s corresponds to the s-th stimulus, so that the pattern of initial activity 𝐫0(s) evoked by stimulus s overlaps only with those modes. Moreover, modes corresponding to different groups are orthogonal, and generate dynamics that span orthogonal subspaces (Figure 2D). Each term in Equation (6) can therefore be interpreted as an independent transient coding channel that can be determined from the OFF response to stimulus s alone.

We therefore examined whether the fitted connectivity 𝐉 consisted of independent low-rank coding channels (Equation (6)), a constraint that would generate orthogonal responses to different stimuli as observed in the data. If 𝐉 consisted of fully independent channels as postulated in Equation (6), then it could be equivalently estimated by fitting the recurrent model to the OFF responses to each stimulus s independently, leading to one matrix 𝐉(s) for each stimulus. The hypothesis of fully independent channels then implies that (i) the connectivity vectors for the different connectivity matrices are mutually orthogonal, (ii) the full connectivity matrix can be reconstructed by summing the matrices 𝐉(s) estimated for individual stimuli. To test these two predictions, we estimated the connectivity matrices 𝐉(s) (the rank parameter for each stimulus was set to R=5). We then computed the overlaps between the connectivity vectors corresponding to different pairs of stimuli (see Materials and methods, Section 'Analysis of the transient channels') and compared them to the overlaps between the subspaces spanned in response to the same pairs of stimuli (see Figure 2D). We found a close match between the two quantities (Figure 7A,B): pairs of stimuli with strong subspace overlaps corresponded to high connectivity overlaps, and pairs of stimuli with low subspace overlaps corresponded to low connectivity overlaps. This indicated that some of the stimuli, but not all, corresponded to orthogonal coding channels. We further reconstructed the OFF responses using the matrix 𝐉Sum=s𝐉(s). We then compared the goodness of fit computed using the matrices 𝐉Sum, 𝐉Full and multiple controls where the elements of the matrix 𝐉Sum were randomly shuffled. While the fraction of variance explained when using the matrix 𝐉Sum was necessarily lower than the one computed using 𝐉Full, the model with connectivity 𝐉Sum yielded values of the goodness of fit significantly higher than the ones obtained from the shuffles (Figure 7C). These results together indicated that the full matrix 𝐉Full can indeed be approximated by the sum of the low-rank channels represented by the 𝐉(s).

Figure 7. Structure of the transient channels across stimuli.

Figure 7.

(A) Overlaps between the transient channels corresponding to individual stimuli (connectivity overlaps), as quantified by the overlap between the right connectivity vectors 𝐮(r;s) of the connectivities 𝐉(s) fitted to each individual stimulus s (see Materials and methods, Section 'Analysis of the transient channels'). The ridge and rank parameters, and the number of principal components have been respectively set to λ=5, R=5 and #PC=100 (the choice of R=5 maximized the coefficient of determination between the connectivity overlaps, A, and the subspace overlaps, Figure 2D). (B) Scatterplot showing the relationship between the subspace overlaps (see Figure 2D) and connectivity overlaps (panel A) for each pair of stimuli. (C) Goodness of fit computed by predicting the population OFF responses using the connectivity 𝐉Full (Full) or the connectivity given by the sum of the individual channels 𝐉Sum (Sum). These values are compared with the goodness of fit obtained by shuffling the elements of the matrix 𝐉Sum (Shuffle). Box-and-whisker plots show the distributions of the goodness of fit computed for each cross-validation partition (10-fold). For 𝐉Full the ridge and rank parameters, and the number of principal components were set to λ=2, R=70 and #PC=100 (Figure 5—figure supplement 1).

Amplification of single-trial fluctuations

So far we examined the activity obtained by averaging for each neuron the response to the same stimulus over multiple trials. We next turned to trial-to-trial variability of simultaneously recorded neurons and compared the structure of the variability in the data with the network and single-cell models. Specifically, we examined the variance of the activity along a direction 𝐳0 of state-space (Figure 8A; Hennequin et al., 2012), defined as:

\mathrm{var}(\mathbf{z}_0; t) = \left\langle \left( \mathbf{z}_0^T \mathbf{r}(t) - \left\langle \mathbf{z}_0^T \mathbf{r}(t) \right\rangle \right)^2 \right\rangle, \qquad (8)

where ⟨⋅⟩ denotes averaging across trials. We compared the variance along the direction 𝐫ampl corresponding to the maximum amplification (distance from baseline) of the trial-averaged dynamics with the variance along a random direction 𝐫rand.

Figure 8. Structure of trial-to-trial variability generated by the network and single-cell models, compared to the auditory cortical data.

Figure 8.

(A) Cartoon illustrating the structure of single-trial population responses along different directions in the state-space. The thick and thin blue lines represent respectively the trial-averaged and single-trial responses. Single-trial responses start in the vicinity of the trial-averaged activity 𝐫0. Both the network and single-cell mechanisms dynamically shape the structure of the single-trial responses, here represented as graded blue ellipses. The red and black lines represent respectively the amplified and random directions considered in the analyses. (B) Time course of the variability computed along the amplified direction (solid trace) and along a random direction (dashed trace) for one example stimulus and one example session (287 simultaneously recorded neurons). In this and all the subsequent panels the amplified direction is defined as the population activity vector at the time when the norm of the trial-averaged activity of the pseudo-population pooled over sessions and animals reaches its maximum value (thick dashed line). Thin dashed lines denote stimulus offset. Shaded areas correspond to the standard deviation computed over 20 bootstrap subsamplings of 19 trials out of 20. (C, E, and G) Variability amplification (VA) computed for the amplified and random directions on the calcium activity data (panel C), and on trajectories generated using the network model (panel E) and the single-cell model (panel G) (see Materials and methods, Section 'Single-trial analysis of population OFF responses'), for one example stimulus (same as in B). The network and single-cell models were first fitted to the trial-averaged responses to individual stimuli, independently across all recording sessions (13 sessions, 180 ± 72 neurons). 100 single-trial responses were then generated by simulating the fitted models on 100 initial conditions drawn from a random distribution with mean 𝐫0 and covariance matrix equal to the covariance matrix computed from the single-trial initial states of the original responses (across neurons for the single-cell model, across PC dimensions for the recurrent model). Results did not change when drawing the initial conditions from a distribution with mean 𝐫0 and isotropic covariance matrix (i.e. proportional to the identity matrix, as assumed for the theoretical analysis in Materials and methods, Section 'Single-trial analysis of population OFF responses'). In the three panels, the values of VA were computed over 50 subsamplings of 90% of the cells (or PC dimensions for the recurrent model) and 50 shuffles. Error bars represent the standard deviation over multiple subsamplings, after averaging over all sessions and shuffles. Significance levels were evaluated by first computing the difference in VA between amplified and random directions (ΔVA) and then computing the p-value on the difference between ΔVA(Real) and ΔVA(Shuffles) across subsamplings (two-sided independent t-test). For the network model, the VA is higher for the amplified direction than for a random direction, and this effect is significantly stronger for the real than for the shuffled responses. Instead, for the single-cell model the values of the VA computed on the real responses are not significantly different from the ones computed on the shuffled responses. (D, F, and H) Values of the VA computed as in panels C, E, and G pooled across all 16 stimuli. Error bars represent the standard error across stimuli.
Significance levels were evaluated by computing the p-value on the difference between ΔVA(Real) and ΔVA(Shuffles) across stimuli (two-sided Wilcoxon signed-rank test). The fits of the network and single-cell models of panels E, G, F, and H were generated using ridge regression (λ=0.5 for both models).

Inspecting the variability during the OFF-responses in the AC data revealed two prominent features: (i) fluctuations across trials are amplified along the same direction 𝐫ampl of state-space as trial-averaged dynamics, but not along random directions (Figure 8B,C,D); (ii) cancelling cross-correlations by shuffling trial labels independently for each cell strongly reduces the amplification of the variance (Figure 8C,D). This second feature demonstrates that the amplification of variance is due to noise-correlations across neurons that have been previously reported in the AC (Rothschild et al., 2010). Indeed the variance along a direction 𝐳0 at time t can be expressed as

\mathrm{var}(\mathbf{z}_0; t) = \mathbf{z}_0^T\, \mathbf{C}(t)\, \mathbf{z}_0, \qquad (9)

where 𝐂(t) represents the covariance matrix of the population activity at time t. Shuffling trial labels independently for each cell effectively cancels the off-diagonal elements of the covariance matrix and leaves only the diagonal elements that correspond to single-neuron variances. The comparison between simultaneous and shuffled data (Figure 8C,D) demonstrates that an increase in single-neuron variance is not sufficient to explain the total increase in variance along the amplified direction 𝐫ampl.

Simulations of the fitted models and a mathematical analysis show that the network model reproduces both features of trial-to-trial variability observed in the data (Figure 8E,F), based on a minimal assumption that variability originates from independent noise in the initial condition at stimulus offset (see Materials and methods, Section 'Structure of single-trial responses in the network model'). Surprisingly, the single-cell model also reproduces the first feature, the amplification of fluctuations along the same direction of state-space as trial-averaged dynamics, although in that model the different neurons are not coupled and noise-correlations are absent (Figure 8G,H). Instead, the single-cell model fails to capture the second feature, as it produces amplified variability in both simultaneous and shuffled activity. This demonstrates that the amplification of variability in the single-cell model is due to an amplification of single-cell variances, that is, the diagonal elements of the covariance matrix (see Materials and methods, Section 'Structure of single-trial responses in the single-cell model'). Somewhat counter-intuitively, the disagreement between the single-cell model and the data is not directly due to the lack of noise-correlations in that model, but due to the fact that the single-cell model does not predict accurately the variance of single-neuron activity during the OFF responses, and therefore fails to capture variance in shuffled activity.

In summary, the network model accounts better than the single-cell model for the structure of single-trial fluctuations present in the AC data.

Discussion

Adopting a population approach, we showed that strong OFF responses observed in auditory cortical neurons form transiently amplified trajectories that encode individual stimuli within low-dimensional subspaces. A geometrical analysis revealed a clear structure in the relative orientation of these subspaces, where subspaces corresponding to different auditory stimuli were largely orthogonal to each other. We found that a simple, linear single-neuron model captures well the average response to individual stimuli, but not the structure across stimuli and across individual trials. In contrast, we showed that a simple, linear recurrent network model accounts for all the properties of the population OFF responses, notably their global structure across multiple stimuli and the fluctuations across single trials. Our results therefore argue for a potential role of network interactions in shaping OFF responses in AC. Ultimately, future work could combine both single-cell and network mechanisms in a network model with more complex intrinsic properties of individual neurons (Beiran and Ostojic, 2019; Muscinelli et al., 2019).

In this study, we focused specifically on the responses following stimulus offset. Strong transients during stimulus onset display similar transient coding properties (Mazor and Laurent, 2005) and could be generated by the same network mechanism as we propose for the OFF responses. However, during ON-transients, a number of additional mechanisms are likely to play a role, in particular single-cell adaptation, synaptic depression, or feed-forward inhibition. Indeed, recent work has shown that ON and OFF trajectories elicited by a stimulus are orthogonal to each other in the state-space (Saha et al., 2017), and this was also the case in our data set (Figure 2—figure supplement 3). Linear network models instead produce ON and OFF responses that are necessarily correlated and cannot account for the observed orthogonality of ON and OFF responses for a given stimulus. Distinct ON and OFF response dynamics could also result from intrinsic nonlinearities known to play a role in the AC (Calhoun and Schreiner, 1998; Rotman et al., 2001; Sahani and Linden, 2003; Machens et al., 2004; Williamson et al., 2016; Deneux et al., 2016).

A major assumption of both the single-cell and network models we considered is that the AC does not receive external inputs after the auditory stimulus is removed. Indeed, the observation that the structure of the dynamics across stimuli could be predicted by the structure of their initial states (Figure 2D,E and Figure 2—figure supplement 4) indicates that autonomous dynamics at least partly shape the OFF responses. The comparisons between data and models moreover show that autonomous dynamics are in principle sufficient to reproduce a number of features of OFF responses in the absence of any external drive. However, neurons in the AC do receive direct afferents from the medial geniculate body of the thalamus, and indirect input from upstream regions of the auditory pathway, where strong OFF responses have been observed. Thus, in principle, OFF responses observed in AC could be at least partly inherited from upstream auditory regions (Kopp-Scheinpflug et al., 2018). Disentangling the contributions of upstream inputs and recurrent dynamics is challenging if one has access only to auditory cortical activity (but see Seely et al., 2016 for an interesting computational approach). Ultimately, the origin of OFF responses in AC needs to be addressed by comparing responses between related areas, an approach recently adopted in the context of motor cortical dynamics (Lara et al., 2018). A direct prediction of our model is that the inactivation of recurrent excitation in auditory cortical areas should weaken OFF responses (Li et al., 2013). However, recurrency in the auditory system is not present only within the cortex but also between different areas along the pathway (Ito and Malmierca, 2018; Winer et al., 1998; Lee et al., 2011). Therefore OFF responses could be generated at a higher level of recurrency and might not be abolished by inactivation of AC.

The dimensionality of the activity in large populations of neurons in the mammalian cortex is currently the subject of debate. A number of studies have found that neural activity explores low-dimensional subspaces during a variety of simple behavioral tasks (Gao et al., 2017). In contrast, a recent study in the visual cortex has shown that the response of large populations of neurons to a large number of visual stimuli is instead high-dimensional (Stringer et al., 2019). Our results provide a potential way of reconciling these two sets of observations. We find that the population OFF responses evoked by individual auditory stimuli are typically low-dimensional, but lie in orthogonal spaces, so that the dimensionality of the responses increases when considering an increasing number of stimuli. Note that in contrast to Stringer et al., 2019, we focused here on the temporal dynamics of the population response. The dimensionality of these dynamics is in particular limited by the temporal resolution of the recordings (Figure 2—figure supplement 1).

The analyses we performed in this study were directly inspired by an analogy between OFF responses in the sensory areas and neural activity in the motor cortices (Churchland and Shenoy, 2007; Churchland et al., 2010). Starting at movement onset, single-neuron activity recorded in motor areas exhibits strongly transient and multiphasic firing lasting a few hundred milliseconds. Population-level dynamics alternate between at least two dimensions, shaping neural trajectories that appear to rotate in the state-space. These results have been interpreted as signatures of an underlying dynamical system implemented by recurrent network dynamics (Churchland et al., 2012; Shenoy et al., 2011), where the population state at movement onset provides the initial condition able to generate the transient dynamics used for movement generation. Computational models have explored this hypothesis (Sussillo et al., 2015; Hennequin et al., 2014; Stroud et al., 2018) and showed that the complex transient dynamics observed in motor cortex can be generated in network models with strong recurrent excitation balanced by fine-tuned inhibition (Hennequin et al., 2014). Surprisingly, fitting a linear recurrent network model to auditory cortical data, we found that the arrangement of the eigenvalues of the connectivity matrix was qualitatively similar to the spectrum of this class of networks (Hennequin et al., 2014), suggesting that a common mechanism might account for the responses observed in both areas.

The perceptual significance of OFF responses in the auditory pathway is still a matter of ongoing research. Single-cell OFF responses observed in the auditory and visual pathways have been postulated to form the basis of duration selectivity (Brand et al., 2000; Alluri et al., 2016; He, 2002; Aubie et al., 2009; Duysens et al., 1996). In the auditory brainstem and cortex, OFF responses of single neurons exhibit tuning in the frequency-intensity domain, and their receptive field has been reported to be complementary to the receptive field of ON responses (Henry, 1985; Scholl et al., 2010). The complementary spatial organization of ON and OFF receptive fields may result from two distinct sets of synaptic inputs to cortical neurons (Scholl et al., 2010), and has been postulated to form the basis for higher-order stimulus feature selectivity in single cells, such as frequency-modulated (FM) sounds (Sollini et al., 2018) and amplitude-modulated (AM) sounds (Deneux et al., 2016), both important features of natural sounds (Sollini et al., 2018; Nelken et al., 1999). Complementary tuning has also been observed between cortical ON and OFF responses to binaural localization cues, suggesting OFF responses may contribute to the encoding of sound-source location or motion (Hartley et al., 2011). At the population level, the proposed mechanism for OFF response generation may provide the basis for encoding complex sequences of sounds. Seminal work in the olfactory system has shown that sequences of odors evoked specific transient trajectories that depend on the history of the stimulation (Broome et al., 2006; Buonomano and Maass, 2009). Similarly, within our framework, different combinations of sounds could bring the activity at stimulus offset to different regions of the state-space, setting the initial condition for the subsequent OFF responses. If the initial conditions corresponding to different sequences are associated with distinct transient coding channels, different sound sequences would evoke transient trajectories along distinct dimensions during the OFF responses, therefore supporting the robust encoding of complex sound sequences.

Materials and methods

Data analysis

The data set

Neural recordings

Neural data was recorded and described in previous work (Deneux et al., 2016). We analyzed the activity of 2343 neurons in mouse AC recorded using two-photon calcium imaging while mice were passively listening to different sounds. Each sound was presented 20 times. Data included recordings from three mice across 13 different sessions. Neural recordings in the three mice comprised respectively 1251, 636, and 456 neurons. Recordings in different sessions were performed at different depths, typically with a 50 µm difference (never less than 20 µm). Since the soma diameter is ∼15 µm, this ensured that different cells were recorded in different sessions. We analyzed the trial-averaged activity of a pseudo-population of neurons built by pooling neural activity across all recording sessions and all animals. The raw calcium traces (imaging done at 31.5 frames per second) were smoothed using a Gaussian kernel with standard deviation σ=32 ms. We then subtracted the baseline firing activity (i.e. the average neural activity before stimulus presentation) from the activity of each neuron, and used the baseline-subtracted neural activity for the analyses.

The stimuli set

The stimuli consisted of a randomized presentation of 16 different sounds, 8 UP-ramping sounds, and 8 DOWN-ramping sounds. For each type, sounds had different frequency content (either 8 kHz pure tones or white noise, WN), different durations (1 or 2 s), and different combinations of onset and offset intensity levels (for UP-ramps either 50–85 dB or 60–85 dB, while for DOWN-ramps 85–50 dB or 85–60 dB). The descriptions of the stimuli are summarized in Table 1.

Decoding analysis

To assess the accuracy of stimulus discrimination (8 kHz vs. WN sound) on single trials, we trained and tested a linear discriminant classifier (Bishop, 2006) using cross-validation. For each trial, pseudo-population activity vectors (built by pooling across sessions and animals) were constructed at each 50 ms time bin. We used leave-one-out cross-validation: at each time bin we used 19 out of 20 trials for the training set, and tested the trained decoder on the remaining trial. The classification accuracy was computed as the average fraction of correctly classified stimuli over all 20 cross-validation folds.

At each time t the decoder for classification between stimuli s1 and s2 was trained using the trial-averaged pseudo-population vectors 𝐜1t and 𝐜2t. These vectors defined the decoder 𝐰t and the bias term bt as:

\mathbf{w}_t = \mathbf{c}_1^t - \mathbf{c}_2^t, \qquad b_t = \mathbf{w}_t^T\, \frac{\mathbf{c}_1^t + \mathbf{c}_2^t}{2}.

A given population vector 𝐱 was classified as either stimulus s1 or stimulus s2 according to the sign of the function y(𝐱) = 𝐰tT𝐱 − bt:

if y(𝐱) > 0, then 𝐱 is classified as stimulus s1; if y(𝐱) < 0, then 𝐱 is classified as stimulus s2.

Random performance was evaluated by training and testing the classifier using cross-validation on surrogate data sets built by shuffling stimulus single-trial labels at each time bin. We performed the same analysis using more sophisticated decoder algorithms, that is, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), and C-SVM decoders. For none of these decoder algorithms was the cross-validated accuracy substantially better than the naive classifier reported in Figure 1C (and for the QDA algorithm the accuracy was substantially worse; not shown).
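For concreteness, the decoding procedure can be summarized in a few lines. The sketch below (Python/NumPy; array names and shapes are hypothetical, with X1 and X2 holding trials × neurons pseudo-population vectors for the two stimuli at one time bin) implements the difference-of-means decoder with leave-one-out cross-validation:

import numpy as np

def loo_accuracy(X1, X2):
    """Leave-one-out accuracy of the difference-of-means linear decoder."""
    X = np.vstack([X1, X2])
    labels = np.array([0] * len(X1) + [1] * len(X2))
    correct = 0
    for i in range(len(X)):
        train = np.delete(np.arange(len(X)), i)
        c1 = X[train][labels[train] == 0].mean(axis=0)  # trial-averaged vector, s1
        c2 = X[train][labels[train] == 1].mean(axis=0)  # trial-averaged vector, s2
        w = c1 - c2                                     # decoder direction
        b = w @ (c1 + c2) / 2                           # bias: projected midpoint
        y = w @ X[i] - b
        predicted = 0 if y > 0 else 1
        correct += (predicted == labels[i])
    return correct / len(X)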

Principal component analysis

To perform PCA on the population responses to C stimuli s_{i1},…,s_{iC} we considered the matrix 𝐗 ∈ ℝ^(N×TC), where N is the number of neurons and T is the number of time steps considered. The matrix 𝐗 contained the population OFF responses to the stimuli s_{i1},…,s_{iC}, centered around the mean over times and stimuli. If we denote by λi the i-th eigenvalue of the covariance matrix 𝐗𝐗T/(TC), the percentage of variance explained by the i-th principal component is given by:

\mathsf{VAR}(i) = \lambda_i \Big/ \sum_{j=1}^{N} \lambda_j,

while the cumulative percentage of variance explained by the first M principal components (Figure 2A,C) is given by:

\mathsf{CUMVAR}(M) = \sum_{j=1}^{M} \lambda_j \Big/ \sum_{j=1}^{N} \lambda_j. \qquad (13)

In Figure 2A, for each stimulus s we considered the matrix 𝐗^(s) ∈ ℝ^(N×T) containing the population OFF responses to stimulus s. We computed the cumulative variance explained as a function of the number of principal components M and then averaged over all stimuli.
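As a minimal sketch (assuming X_s is an N×T baseline-subtracted response matrix for one stimulus), the cumulative variance of Equation (13) can be computed as:

import numpy as np

def cumvar(X_s, M):
    """Cumulative fraction of variance explained by the first M PCs (Equation (13))."""
    Xc = X_s - X_s.mean(axis=1, keepdims=True)          # center over time
    lam = np.linalg.eigvalsh(Xc @ Xc.T / Xc.shape[1])   # eigenvalues of the covariance
    lam = np.sort(lam)[::-1]                            # descending order
    return lam[:M].sum() / lam.sum()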

Cross-validated PCA

We used cross-validation to estimate the variance of OFF responses to individual stimuli and across stimuli attributable to the stimulus-related component, discarding the component due to trial-to-trial variability (Figure 2—figure supplement 1). Specifically, we applied to the calcium activity data the method of cross-validated PCA (cvPCA) developed in Stringer et al., 2019. This method provides an unbiased estimate of the stimulus-related variance by computing the covariance between a training and a test set of responses (e.g. two different trials or two different sets of trials) to an identical collection of stimuli. Let 𝐗(train) and 𝐗(test) be the N×TC matrices containing the mean-centered responses to the same C stimuli. We consider the training and test responses to be two distinct trials. We first perform ordinary PCA on the training responses 𝐗(train) and find the principal components 𝐮i(train) (i=1,…,N). We then evaluate the cross-validated PCA spectrum {λi} as:

\lambda_i = \frac{1}{C}\, \mathbf{u}_i^{(\mathrm{train})T}\, \mathbf{X}^{(\mathrm{test})}\, \mathbf{X}^{(\mathrm{train})T}\, \mathbf{u}_i^{(\mathrm{train})}. \qquad (14)

We repeat the procedure for all pairs of trials (i,j) with ij and average the result over pairs. The cross-validated cumulative variance is finally computed as in Equation (13).
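A minimal sketch of the cvPCA computation (assuming Xtr and Xte are the mean-centered N×TC response matrices from two distinct trials) reads:

import numpy as np

def cvpca_spectrum(Xtr, Xte, C):
    """Cross-validated PCA spectrum (Equation (14)) for one pair of trials."""
    U, _, _ = np.linalg.svd(Xtr, full_matrices=False)  # training PCs (columns of U)
    A = U.T @ Xte    # projections of the test responses on the training PCs
    B = U.T @ Xtr    # projections of the training responses on the training PCs
    return (A * B).sum(axis=1) / C

The output would then be averaged over all pairs of trials, as described above.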

Correlations between OFF response subspaces

To quantify the degree of correlation between pairs of OFF responses corresponding to two different stimuli, termed subspace overlap, we computed the cosine of the principal angle between the corresponding low-dimensional subspaces (Figure 2D, Figure 3I). In general, the principal angle θP between two subspaces U and V corresponds to the smallest angle between any pair of vectors in U and V respectively, and it is defined by (Bjorck and Golub, 1973; Knyazev and Argentati, 2002):

\cos\theta_P = \max_{\substack{\mathbf{u} \in U,\ \mathbf{v} \in V \\ ||\mathbf{u}|| = ||\mathbf{v}|| = 1}} \mathbf{u}^T \mathbf{v}. \qquad (15)

To compute the correlations between the OFF responses to stimuli s1 and s2 we first identified the first K=5 principal components of the response to stimulus s1 and organized them in an N×K matrix 𝐐(s1). We repeated this for stimulus s2, which yielded a matrix 𝐐(s2). Therefore the columns of 𝐐(s1) and 𝐐(s2) define the two subspaces on which the responses to stimuli s1 and s2 live. The cosine of the principal angle between these two subspaces is given by (Bjorck and Golub, 1973; Knyazev and Argentati, 2002):

\cos\theta_P(s_1, s_2) = \sigma_{\max}\!\left( \mathbf{Q}^{(s_1)T} \mathbf{Q}^{(s_2)} \right), \qquad (16)

where σmax(𝐀) denotes the largest singular value of a matrix 𝐀. We note that this procedure directly relates to canonical correlation analysis (CCA; see Hotelling, 1936; Uurtio et al., 2018). In particular the first principal angle corresponds to the first canonical weight between the subspaces spanned by the columns of 𝐐(s1) and 𝐐(s2) (Golub and Zha, 1992; Bjorck and Golub, 1973).
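A minimal sketch of this computation (assuming X1 and X2 are trial-averaged N×T responses to the two stimuli) is:

import numpy as np

def subspace_overlap(X1, X2, K=5):
    """Cosine of the first principal angle between the K-dimensional PC
    subspaces of two OFF responses (Equation (16))."""
    Q1 = np.linalg.svd(X1 - X1.mean(1, keepdims=True), full_matrices=False)[0][:, :K]
    Q2 = np.linalg.svd(X2 - X2.mean(1, keepdims=True), full_matrices=False)[0][:, :K]
    # sigma_max of Q1^T Q2 is the largest canonical correlation (Equation (16)).
    return np.linalg.svd(Q1.T @ Q2, compute_uv=False)[0]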

Controls for subspace overlaps and initial state-peak correlations

In this section we describe the controls that we used to evaluate the statistical significance of the measures of orthogonality between subspaces spanned by neural activity during OFF responses to different stimuli (Figure 2D, Figure 3I), and between initial and peak activity vectors for a single stimulus (Figure 5D). We assessed the significance of the orthogonality between OFF response subspaces across stimuli using two separate controls, which aim at testing for two distinct null hypotheses (Figure 2D, Figure 3I, Figure 2—figure supplement 2). The first hypothesis is that small subspace overlaps (i.e. low correlations) between OFF responses to different stimuli may be due to the high number of dimensions of the state-space in which they are embedded. To test for this hypothesis we compared the subspace overlap computed on the trial-averaged activity with the subspace overlaps computed on the trial-averaged activity where the stimulus labels had been shuffled across trials for each pair of stimuli. We shuffled stimulus labels multiple times, resulting in one value of the subspace overlap for each shuffle. For each pair of stimuli, significance levels were then computed as the fraction of shuffles for which the subspace overlap was lower than the subspace overlap for the real data (lower tail test; Figure 2—figure supplement 2A).

Alternatively, small subspace overlaps could be an artifact of the trial-to-trial variability present in the calcium activity data. In fact, for a single stimulus, maximum values of the correlation between two different trials were of the order of 0.2 (Deneux et al., 2016). To test for the possibility that small subspace overlaps may be due to trial-to-trial variability, for each pair of stimuli we computed the values of the subspace overlaps from the trial-averaged activity computed on only half of the trials (10 trials), subsampling the set of 10 trials multiple times for each stimulus. This yielded a set of values cosθreal(s1,s2,n), where s1 and s2 are the specific stimuli considered and n=1,…,Nshuffle. We then computed the subspace overlaps between the trial-averaged responses to the same stimulus, but averaged over two different sets of 10 trials each, over multiple permutations of the trials, resulting in a set of values cosθshuffle(s,n), where s∈{s1,s2} is the specific stimulus considered and n=1,…,Nshuffle. For each pair of stimuli s1 and s2, significance levels were computed using a two-tailed t-test, taking the maximum between the p-values p(cosθreal(s1,s2,:), cosθshuffle(s1,:)) and p(cosθreal(s1,s2,:), cosθshuffle(s2,:)), for those stimulus pairs for which ⟨cosθreal(s1,s2,:)⟩n < ⟨cosθshuffle(s1,:)⟩n and ⟨cosθreal(s1,s2,:)⟩n < ⟨cosθshuffle(s2,:)⟩n, where ⟨⋅⟩n indicates the mean over shuffles (Figure 2—figure supplement 2B).

The same null hypotheses were used to test the significance of the orthogonality between initial state and peak state for individual stimuli (Figure 5D, Figure 5—figure supplement 3). A procedure analogous to the one outlined above was employed. Here, instead of the subspace overlaps, the quantity of interest is the correlation (scalar product) between the initial and peak state. For the first control, shuffled data were obtained by shuffling the labels ‘initial state’ and ‘peak state’ across trials. Significance levels were evaluated as outlined above for the first control (Figure 5—figure supplement 3A). To test for the second hypothesis, we computed correlations between activity vectors defined at the same time point, but averaged over two different sets of 10 trials each. For each trial permutation we computed these correlations for all time points belonging to the OFF response (35 time points) and averaged over time points. Significance levels were then assessed as outlined above for the second control (Figure 5—figure supplement 3B).

Single-cell model for OFF response generation

In the next section we describe the procedure used to fit the single-cell model to the auditory cortical OFF responses. We consider the single-cell model given by Equation (1), where r0,i(s)Li(0) represents the initial state of the response of unit i to stimulus s. Without loss of generality we assume that the dynamic range of the temporal filters, defined as |maxt Li(t) − mint Li(t)|, is equal to unity, so that r0,i(s) represents the firing rate range of neuron i for stimulus s. If that were not true, that is, if the single-neuron responses were given by ri(s)(t) = r0,i(s)Ki(t) with αi = |maxt Ki(t) − mint Ki(t)| ≠ 1, we could always write the responses as ri(s)(t) = r̃0,i(s)L̃i(t), where r̃0,i(s) = r0,i(s)αi and L̃i(t) = Ki(t)/αi, formally equivalent to Equation (1).

Fitting the single-cell model

To fit the single-cell OFF responses ri(s)(t), we expressed the single-cell responses as a combination of basis functions (Pillow et al., 2008):

r_i^{(s)}(t) = \sum_{j=1}^{N_{\mathrm{basis}}} a_{ij}^{(s)} f_j(t), \qquad (17)

where the shape and the number Nbasis of basis functions are predetermined. We choose the functions fi(t) to be Gaussian functions centered around a value t̄i and with a given width wi, that is, fi(t) = exp(−(t − t̄i)²/2wi²). The problem then consists in finding the coefficients aij(s) that best approximate Equation (17). By dividing the left- and right-hand sides of Equation (17) by the firing rate range r0,i(s) we obtain:

\frac{r_i^{(s)}(t)}{r_{0,i}^{(s)}} = \sum_{j=1}^{N_{\mathrm{basis}}} b_{ij}^{(s)} f_j(t), \qquad b_{ij}^{(s)} = a_{ij}^{(s)} / r_{0,i}^{(s)}. \qquad (18)

In general the coefficients bij(s) could be fitted independently for each stimulus. However, the single-cell model assumes that the coefficients bij do not change across stimuli (see Equation (1)). Therefore, to find the stimulus-independent coefficients bij that best approximate Equation (18) across a given set of stimuli, we minimize the mean squared error given by:

\mathrm{MSE} = \sum_{i,s} \int dt \left( \frac{r_i^{(s)}(t)}{r_{0,i}^{(s)}} - \sum_{j=1}^{N_{\mathrm{basis}}} b_{ij} f_j(t) \right)^2. \qquad (19)

The minimization of the mean squared error can be solved using linear regression techniques. Suppose we want to fit the population responses to C different stimuli simultaneously. Let 𝐑(C) be the matrix of size N×TC obtained by concatenating the N×T matrices (𝐑norm^(s))_{it} = ri(s)(t)/r0,i(s) (i=1,…,N, t=1,…,T, s=1,…,C) corresponding to the normalized responses to the C stimuli. Let 𝐅(C) be the Nbasis×TC matrix obtained by concatenating C times the Nbasis×T matrix (𝐅)_{it} = fi(t). Let 𝐁 be the N×Nbasis matrix given by 𝐁_{ij} = bij. Then Equation (18) can be written as:

\mathbf{R}^{(C)} = \mathbf{B}\, \mathbf{F}^{(C)}, \qquad (20)

which can be solved using linear regression.

In order to avoid normalizing by very small values, we fit only the most responsive neurons in the population, as quantified by their firing rate range. We set wi = w = 35 ms for all i = 1,…,Nbasis, and we set the number of basis functions to Nbasis = 10, corresponding to the minimal number of basis functions sufficient to reach the highest cross-validated goodness of fit.
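A minimal sketch of this fit (Equations (18)-(20)); the variable names are illustrative, with R_norm the N×TC matrix of range-normalized responses concatenated over stimuli and t the vector of T time points (in seconds) tiled C times:

import numpy as np

def fit_single_cell_model(R_norm, t, n_basis=10, w=0.035):
    """Stimulus-independent basis coefficients B solving R_norm = B F (Equation (20))."""
    centers = np.linspace(t.min(), t.max(), n_basis)   # basis centers (an assumption)
    F = np.exp(-(t[None, :] - centers[:, None])**2 / (2 * w**2))  # n_basis x TC
    B = R_norm @ np.linalg.pinv(F)                     # least-squares solution
    return B, F

Here w = 35 ms matches the width quoted above, while the placement of the centers t̄i is an assumption, as it is not specified in the text.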

The network model

We consider a recurrent network model of N randomly coupled linear rate units. Each unit i is described by the time-dependent variable ri(t), which represents the difference between the firing rate of neuron i at time t and its baseline firing level. The equation governing the temporal dynamics of the network reads:

\tau \dot{r}_i = -r_i + \sum_{j=1}^{N} J_{ij} r_j, \qquad (21)

where τ represents the time constant (fixed to unity), and Jij is the effective synaptic strength from neuron j to neuron i. The system has only one fixed point corresponding to ri = 0 for all i. To have stable dynamics, we require that the real parts of the eigenvalues of the connectivity matrix 𝐉 be smaller than unity, that is, (ℜλ(𝐉))max < 1. We represent each stimulus as the state of the system reached at the end of stimulus presentation, which we denote by the vector 𝐫0. We moreover assume that during the OFF response the network receives no external input. The OFF response to the stimulus associated with the state 𝐫0 therefore corresponds to the network dynamics starting from the initial condition 𝐫0.

Fitting the network model

In this section we describe the procedure we used for fitting a linear recurrent network model (Equation (21)) to the auditory cortical data using ridge and reduced-rank ridge regression. To fit the network model to the responses to C stimuli, we first concatenate the C matrices 𝐗^(s) ∈ ℝ^(T×D) (where 𝐗_{t,i}^(s) = ri(s)(t), with s=1,…,C and i=1,…,D) containing the neural responses to stimulus s across T timesteps and D neurons (or PC dimensions) along the first dimension, obtaining a matrix 𝐗 ∈ ℝ^(CT×D) that contains the neural activity across C stimuli, T time steps, and D neurons or PC dimensions. We then fit the network model 𝐗̇ = 𝐗(𝐉−𝐈) by first computing from the data the velocity of the trajectory as

\dot{\mathbf{X}} = \frac{\mathbf{X}(t_{i+1}) - \mathbf{X}(t_i)}{t_{i+1} - t_i}. \qquad (22)

Both ridge and reduced-rank ridge regression aim at minimizing the mean squared error ||𝐗̇ − 𝐗(𝐉−𝐈)||² subject to different constraints, which are described below. Since ridge regression involves computationally expensive matrix inversion, we first reduced the dimensionality of the original data set by using PCA. Unless otherwise stated, we kept a number of PC components equal to K = 100 for the fit of individual stimuli (Figure 3G,H, Figure 5A–C, Figure 8, Figure 7) and for the fit of all stimuli together (Figure 3I, Figure 4, Figure 7C, Figure 3—figure supplement 2, Figure 3—figure supplement 3, Figure 5—figure supplement 1; K = 100 principal components sufficed to explain more than 90% of the variance of the responses to all stimuli together). As a result, the data matrix 𝐗 ∈ ℝ^(TC×K) contains the activity across T time bins and across all K dimensions for a number C of stimuli.

Ridge regression

Ridge regression aims at determining the connectivity matrix 𝐉 that minimizes the mean squared error ||𝐗̇ − 𝐗(𝐉−𝐈)||² with a penalty for large entries of the matrix 𝐉, so that the cost function to minimize is given by ||𝐗̇ − 𝐗(𝐉−𝐈)||² + λ||𝐉−𝐈||², where λ is a hyperparameter of the model. We can write the ridge regression optimization problem as:

(\mathbf{J} - \mathbf{I})^*_{\lambda} = \underset{\mathbf{J} - \mathbf{I}}{\operatorname{argmin}} \left\| \dot{\mathbf{X}}_\lambda - \mathbf{X}_\lambda (\mathbf{J} - \mathbf{I}) \right\|^2, \qquad (23)

where 𝐗̇λ is obtained by appending K rows of zeros to 𝐗̇, and 𝐗λ by appending the rows of √λ𝐈 to 𝐗. The solution to Equation (23) is given by:

\mathbf{J}^*_{\lambda} = \mathbf{I} + \left( \mathbf{X}_\lambda^T \mathbf{X}_\lambda \right)^{-1} \mathbf{X}_\lambda^T \dot{\mathbf{X}}_\lambda. \qquad (24)

This procedure was used for fits shown in Figure 3, Figure 3—figure supplement 1, Figure 3—figure supplement 3, and Figure 4.
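A minimal sketch of the ridge fit (Equations (22)-(24)), written in the equivalent normal-equation form; X is assumed to be the time × dimensions PC-projected response matrix for a single stimulus (when concatenating stimuli, the finite differences must be computed separately per stimulus):

import numpy as np

def fit_ridge_connectivity(X, dt, lam):
    """Ridge estimate of J from X_dot = X (J - I) (Equation (24))."""
    Xdot = (X[1:] - X[:-1]) / dt    # velocity of the trajectory (Equation (22))
    Xc = X[:-1]
    K = X.shape[1]
    JmI = np.linalg.solve(Xc.T @ Xc + lam * np.eye(K), Xc.T @ Xdot)
    return np.eye(K) + JmI

Solving the normal equations with the λ𝐈 term is algebraically equivalent to the augmented formulation of Equation (24).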

Reduced-rank ridge regression

Reduced-rank regression aims at minimizing the mean squared error ||𝐗̇ − 𝐗(𝐉−𝐈)||² under a rank constraint on the matrix 𝐉, that is, rank 𝐉 ≤ R, where R is a hyperparameter of the model (Izenman, 1975; Davies and Tso, 1982). Here we combined reduced-rank and ridge regression in the same framework. The reduced-rank ridge regression optimization problem with hyperparameters R and λ can therefore be written as (Mukherjee et al., 2015):

(\mathbf{J} - \mathbf{I})^*_{R,\lambda} = \underset{\operatorname{rank} \mathbf{J} \le R}{\operatorname{argmin}} \left\| \dot{\mathbf{X}}_\lambda - \mathbf{X}_\lambda (\mathbf{J} - \mathbf{I}) \right\|^2. \qquad (25)

To solve Equation (25) we consider the solution to the ridge regression problem given by Equation (24). If the matrix 𝐗λ𝐉λ* has SVD given by 𝐗λ𝐉λ*=𝐔Σ𝐕T, then it can be shown that the solution to the reduced-rank ridge regression problem given by Equation (25) can be written as:

\mathbf{J}^*_{R,\lambda} = \mathbf{J}^*_{\lambda} \sum_{i=1}^{R} \mathbf{V}_i \mathbf{V}_i^T. \qquad (26)

We note that each term in the sum of Equation (26) has unit rank, so that the resulting matrix 𝐉R,λ* has rank equal to or less than R.

This procedure was used for fits shown in Figure 5, Figure 7, Figure 8, Figure 3—figure supplement 2, Figure 5—figure supplement 1, and Figure 5—figure supplement 2.
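A minimal sketch of the reduced-rank step (Equations (25)-(26)), reusing fit_ridge_connectivity from the previous sketch:

import numpy as np

def fit_reduced_rank_ridge(X, dt, lam, R):
    """Rank-R ridge estimate of J obtained by projecting the ridge solution."""
    J_ridge = fit_ridge_connectivity(X, dt, lam)            # Equation (24)
    K = X.shape[1]
    X_lam = np.vstack([X[:-1], np.sqrt(lam) * np.eye(K)])   # augmented regressors
    _, _, Vt = np.linalg.svd(X_lam @ J_ridge, full_matrices=False)
    P = Vt[:R].T @ Vt[:R]      # projector sum_{i<=R} V_i V_i^T
    return J_ridge @ P         # Equation (26): rank at most R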

Selection of hyperparameters for ridge regression

To select the value of the hyperparameter λ, we fitted the network model to the data and computed the goodness of fit as the coefficient of determination R2(λ) using cross-validation. We then selected the value of λ which maximized the goodness of fit.

Selection of hyperparameters for reduced-rank ridge regression

To select the values of the hyperparameters λ and R, we fitted the network model to the data with hyperparameters (λ,R) and computed the goodness of fit as the coefficient of determination R²(λ,R) using cross-validation. We repeated the process for a range of values of (λ,R). We observed that, independent of the value of λ, the goodness of fit as a function of the rank R saturates at a particular value R* of the rank, but does not exhibit a clear maximum. We took the value R* as the minimal rank hyperparameter, while we defined the best ridge parameter as λ* = argmaxλ R²(λ,R*) (Figure 5—figure supplement 1).

For both ridge and reduced-rank ridge regression, we used K-fold cross-validation, with K=10. When fitting multiple stimuli at once, for each stimulus we partitioned the temporal activity into K chunks, resulting in a total of KC chunks. At the i-th iteration of the cross-validation procedure, we left out the i-th partition for each stimulus to construct the training set (consisting of (K−1)C chunks) and tested the trained model on the remaining C chunks.

In Figure 4A,B, the temporal window considered for the fit was extended from 350 ms to 600 ms to include the decay to baseline of the OFF responses, thus obtaining stable eigenvalues. The extension of the temporal window was possible only for the 1 s long stimuli (n=8), since the length of the temporal window following stimulus offset for the 2 s stimuli was limited to approximately 380 ms by the length of the neural recordings.

Control data sets

To evaluate whether the fitted network model captured nontrivial collective dynamics of auditory cortical OFF responses, we followed the approach introduced in Elsayed and Cunningham, 2017. We first computed the goodness of fit (as quantified by the coefficient of determination R²), then repeated the model fitting on control data sets which kept no additional structure beyond the one defined by correlations across time, neurons, and stimuli of the original neural responses. We found that the goodness of fit obtained from fitting the model to the original data set was significantly higher than the one obtained from fitting the control data sets (Figure 3—figure supplement 2, Figure 3—figure supplement 3), confirming that the recurrent network model captured nontrivial collective dynamics in the data beyond the correlations across time, neurons, and stimuli.

We generated the control data sets using a recent method based on a maximum entropy model (Savin and Tkačik, 2017) and described in Elsayed and Cunningham, 2017. This method, termed Tensor Maximum Entropy, allowed us to build surrogate data sets that are maximally random (entropy maximization), but constrained in a way that their marginal means and covariances are equal to the means and covariances of the original data set.

Marginal means and covariances

Let the temporal activity along all K dimensions for all C stimuli be organized in a tensor 𝐙 ∈ ℝ^(T×K×C). The mean tensor 𝐌 is defined as the tensor that makes all the marginal means of 𝐙 vanish. Specifically, if 𝐙̄ = 𝐙 − 𝐌, the tensor 𝐌 is such that:

\sum_{k=1}^{K} \sum_{c=1}^{C} \bar{Z}_{tkc} = 0, \qquad \sum_{t=1}^{T} \sum_{c=1}^{C} \bar{Z}_{tkc} = 0, \qquad \sum_{t=1}^{T} \sum_{k=1}^{K} \bar{Z}_{tkc} = 0. \qquad (27)

The marginal covariances of the tensor 𝐙¯ across times, neural dimensions, and stimuli are therefore defined as:

\Sigma^{T}_{ij} = \sum_{k=1}^{K} \sum_{c=1}^{C} \bar{Z}_{ikc} \bar{Z}_{jkc}, \qquad \Sigma^{K}_{ij} = \sum_{t=1}^{T} \sum_{c=1}^{C} \bar{Z}_{tic} \bar{Z}_{tjc}, \qquad \Sigma^{C}_{ij} = \sum_{t=1}^{T} \sum_{k=1}^{K} \bar{Z}_{tki} \bar{Z}_{tkj}. \qquad (28)
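As a minimal sketch (assuming Z is a T×K×C NumPy array), the centering of Equation (27) and the marginal covariances of Equation (28) can be computed as follows; the additive form of the mean tensor is one standard choice that makes all three marginal means vanish:

import numpy as np

def center_tensor(Z):
    """Subtract a mean tensor M so that all marginal means of Z - M vanish (Equation (27))."""
    mu = Z.mean()
    a = Z.mean(axis=(1, 2), keepdims=True)   # marginal mean over dimensions and stimuli
    b = Z.mean(axis=(0, 2), keepdims=True)   # marginal mean over times and stimuli
    g = Z.mean(axis=(0, 1), keepdims=True)   # marginal mean over times and dimensions
    return Z - (a + b + g - 2 * mu)

def marginal_covariances(Zb):
    """Marginal covariances across times, dimensions, and stimuli (Equation (28))."""
    S_T = np.einsum('ikc,jkc->ij', Zb, Zb)
    S_K = np.einsum('tic,tjc->ij', Zb, Zb)
    S_C = np.einsum('tki,tkj->ij', Zb, Zb)
    return S_T, S_K, S_C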

Tensor maximum entropy method

The method generates the desired number of surrogate data sets 𝐒(i)T×K×C. Each of these surrogates is randomly drawn from a probability distribution that assumes a priori no structure apart from that expected from the marginal means and covariances of the original data. Let µ, ΛT, ΛK, and ΛC be the marginal means and covariances of surrogate 𝐒. The method computes the probability P(𝐒) over the surrogates that maximizes the entropy function

P(\mathbf{S}) = \underset{y(\mathbf{S})}{\operatorname{argmax}} \left[ -\int y(\mathbf{S}) \log y(\mathbf{S})\, d\mathbf{S} \right], \quad \text{with} \quad \int P(\mathbf{S})\, d\mathbf{S} = 1, \qquad (29)

subject to the constraints

\mathbb{E}_P[\mu] = \mathbf{M}, \qquad \mathbb{E}_P[\Lambda^T] = \Sigma^T, \qquad \mathbb{E}_P[\Lambda^K] = \Sigma^K, \qquad \mathbb{E}_P[\Lambda^C] = \Sigma^C, \qquad (30)

where 𝔼P[⋅] denotes the expectation over the probability density P. We used three types of surrogate data sets, denoted as T, TK, and TKC. All three types of surrogates obey the first constraint in Equation (30) on the marginal means. In addition, surrogates of type T obey the constraint on the time covariances, surrogates of type TK on the time and dimension covariances, while surrogates of type TKC obey all three covariance constraints.

Analysis of the transient channels

In this section, we provide details on the analysis of the structure of the transient channels obtained from fitting the network model to OFF responses to individual stimuli using reduced-rank regression (Figure 7, see Materials and methods, Section 'Fitting the network model'). Let 𝐉Full be the connectivity matrix resulting from fitting the network model to the responses to all stimuli at once, and 𝐉(s) the connectivity obtained from fitting the model to the response to stimulus s. Before fitting both matrices 𝐉Full and 𝐉(s), we reduced the dimensionality of the responses to the first 100 principal components. We set the value of the rank parameter to R=70 and R=5 respectively for 𝐉Full and 𝐉(s) (Figure 5). Using the SVD, we can write the matrices 𝐉(s) as:

\mathbf{J}^{(s)} = \hat{\mathbf{U}}^{(s)} \Sigma^{(s)} \mathbf{V}^{(s)T}, \qquad (31)

where 𝐔̂(s) and 𝐕(s) contain respectively the Rs right and left connectivity vectors as columns (see Equation (7)), while Σ(s) contains the norms of the right connectivity vectors 𝐮(r;s) on the diagonal (see Equation (7)). The transient dynamics elicited by stimulus s have a strong component along the state-space dimensions specified by the right connectivity vectors 𝐮(r;s) (columns of 𝐔(s); see also Equation (43)). We therefore define the overlap between the transient channels (connectivity overlaps) corresponding to pairs of stimuli s1 and s2 (Figure 7A) as the cosine of the principal angle between the subspaces defined by 𝐔̂(s1) and 𝐔̂(s2) (see Equations (15) and (16)). Figure 7B shows that the structure of the connectivity overlaps (Figure 7A) matched well the structure of the subspace overlaps (Figure 2D) across pairs of stimuli.

To test whether the connectivity fitted on all stimuli at once 𝐉Full consisted of the sum of low-rank transient coding channels, we defined the matrix 𝐉Sum as the sum of the individual transient channels for all stimuli (see Equation 6):

\mathbf{J}_{\mathrm{Sum}} = \hat{\mathbf{U}}^{(1)} \Sigma^{(1)} \mathbf{V}^{(1)T} + \hat{\mathbf{U}}^{(2)} \Sigma^{(2)} \mathbf{V}^{(2)T} + \cdots + \hat{\mathbf{U}}^{(P)} \Sigma^{(P)} \mathbf{V}^{(P)T}. \qquad (32)

We then compared the cross-validated goodness of fit of the population OFF responses using the full matrix 𝐉Full and the matrix 𝐉Sum (Figure 7C).

In Figure 7, when fitting individual stimuli, the ridge and rank hyperparameters have been set respectively to λ=5, R=5, which correspond to the values that maximize the agreement (coefficient of determination) between the connectivity overlaps (Figure 7A) and the subspace overlaps (Figure 2D). For these values the coefficient of determination between the transient channel overlaps and the subspace overlaps is >0.7.

Analysis of the network model

In this section we provide a detailed mathematical analysis of the network model given by Equation (21):

\tau \dot{r}_i = -r_i + \sum_{j=1}^{N} J_{ij} r_j.

Specifically, we derive the conditions on the network components (i.e. connectivity spectra, connectivity vectors, and initial conditions) to produce low-dimensional, transiently amplified OFF responses in networks with low-rank connectivity. We then focus on the specific case of rotational channels in the connectivity.

Normal and non-normal connectivity matrices

In this section, we summarize the relationship between amplified OFF responses and non-normal connectivity matrices in linear recurrent networks previously reported in Trefethen and Embree, 2005; Hennequin et al., 2012; Bondanelli and Ostojic, 2020. To characterize the amplification of OFF responses we focus on the temporal dynamics of the distance from baseline, defined as the norm of the population activity vector ||𝐫(t)|| (Hennequin et al., 2014). The network generates an amplified OFF response to the stimulus associated with the initial condition 𝐫0 when the value of ||𝐫(t)|| transiently increases before decaying to its asymptotic value ||𝐫(t→∞)|| = 0. Note that a transiently increasing value of the distance from baseline implies that the OFF response ri(t) of at least one unit displays non-monotonic temporal dynamics. Importantly, the transient behavior of ||𝐫(t)|| depends on the stimulus through 𝐫0, and on the properties of the connectivity matrix 𝐉, in particular on the relationship between its eigenvectors (Trefethen and Embree, 2005).

Connectivity matrices for which the eigenvectors are orthogonal to each other are called normal matrices and they are formally defined as matrices 𝐉 that satisfy 𝐉𝐉T=𝐉T𝐉. In networks with normal connectivity, any stimulus 𝐫0 evokes an OFF response for which the distance from baseline decays monotonically to zero. Such networks thus cannot produce amplified OFF responses, as defined by a transiently increasing ||𝐫(t)||. Note that any symmetric matrix is normal.

On the other hand, connectivity matrices for which some eigenvectors are not mutually orthogonal are called non-normal (Trefethen and Embree, 2005); they consist of all connectivity matrices 𝐉 for which 𝐉𝐉T ≠ 𝐉T𝐉. It is well known that non-normal networks can lead to transiently increasing values of ||𝐫(t)||, therefore producing amplified OFF responses. However, the non-normality of the network connectivity 𝐉 constitutes only a necessary, but not a sufficient, condition for the generation of amplified responses.

Sufficient condition for amplified OFF responses

In this section we identify the necessary and sufficient condition for the generation of amplified OFF responses in linear recurrent networks. To find the necessary and sufficient condition for amplified responses, we start by writing the differential equation for the dynamics of the distance from baseline as (Neubert and Caswell, 1997; Bondanelli and Ostojic, 2020):

\frac{1}{||\mathbf{r}||} \frac{d||\mathbf{r}||}{dt} = \frac{\mathbf{r}^T (\mathbf{J}_S - \mathbf{I})\, \mathbf{r}}{||\mathbf{r}||^2}, \qquad \mathbf{J}_S = \frac{\mathbf{J} + \mathbf{J}^T}{2}, \qquad (33)

where 𝐉S denotes the symmetric part of the connectivity 𝐉. A linear recurrent network exhibits amplified responses when the rate of change of the distance from baseline, d||𝐫||/dt, takes positive values at time t=0. The right-hand side of Equation (33) takes its largest value when the initial condition 𝐫0 is aligned with the eigenvector of 𝐉S associated with the largest eigenvalue λmax(𝐉S). In this case, the rate of change of the distance from baseline at time t=0 takes the value λmax(𝐉S) − 1. From Equation (33) it can be shown that the necessary and sufficient condition for the generation of amplified responses in a recurrent network with connectivity 𝐉 is given by

\lambda_{\max}(\mathbf{J}_S) > 1. \qquad (34)

This criterion defines two classes of networks based on the properties of the connectivity matrix: networks in which amplified responses cannot be evoked by any stimulus, and networks able to generate amplified responses to at least one stimulus.
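The criterion translates directly into code. A minimal sketch (for an arbitrary stable connectivity matrix J):

import numpy as np

def supports_amplification(J):
    """Check the necessary and sufficient condition of Equation (34)."""
    JS = (J + J.T) / 2                     # symmetric part of the connectivity
    lam_max = np.linalg.eigvalsh(JS)[-1]   # largest eigenvalue of J_S
    return lam_max > 1

The optimal initial condition is the eigenvector of 𝐉S associated with λmax(𝐉S): starting there, the initial rate of change of the distance from baseline equals λmax(𝐉S) − 1.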

Low-rank networks

In the following section we examine OFF dynamics in networks with low-rank connectivity of the form given by Equation (3):

\mathbf{J} = \mathbf{u}^{(1)} \mathbf{v}^{(1)T} + \mathbf{u}^{(2)} \mathbf{v}^{(2)T} + \cdots + \mathbf{u}^{(R)} \mathbf{v}^{(R)T}. \qquad (35)

We first show that such connectivity directly constrains the network dynamics to a low-dimensional subspace. We then derive the conditions for the stability and amplification of OFF responses. Finally, we apply these results to the specific case of low-rank rotational channels as in Equation (5).

Low-dimensional dynamics

Here we study the dynamics of the population activity vector for a unit-rank network (R=1) and for a general network with rank-R connectivity structure. We consider low-rank networks in which the initial state is set to 𝐫(0)=𝐫0 and no external input acts on the system at later times t>0. By construction, the autonomous dynamics generated by low-rank networks are constrained by the rank R of the connectivity matrix and are therefore low-dimensional when R ≪ N.

We first illustrate the linear dynamics in the case of a unit-rank connectivity (R=1), given by

\mathbf{J} = \mathbf{u}^{(1)} \mathbf{v}^{(1)T}, \qquad (36)

with 𝐯(1) of unit norm. The vectors 𝐮(1) and 𝐯(1) are respectively the right and left eigenvectors of 𝐉, and in the following analysis we refer to them respectively as the right and left connectivity vectors. In this case, the dynamics following the initial state 𝐫0 can be explicitly computed as the product between the time-dependent propagator of the dynamics, given by 𝐏t = exp(t(𝐉−𝐈)), and the initial condition 𝐫0 (Arnold, 1973; Bondanelli and Ostojic, 2020):

\mathbf{r}(t) = \mathbf{P}_t\, \mathbf{r}_0 = \exp\!\left( t (\mathbf{u}^{(1)} \mathbf{v}^{(1)T} - \mathbf{I}) \right) \mathbf{r}_0. \qquad (37)

By expanding the exponential matrix exp(t(𝐉-𝐈)) in power series, we can write the propagator for the unit-rank dynamics as:

\exp\!\left( t (\mathbf{u}^{(1)} \mathbf{v}^{(1)T} - \mathbf{I}) \right) = e^{-t} \sum_{k=0}^{\infty} \frac{\left( t\, \mathbf{u}^{(1)} \mathbf{v}^{(1)T} \right)^k}{k!} = e^{-t} \left[ \mathbf{I} + \frac{\mathbf{u}^{(1)} \mathbf{v}^{(1)T}}{\lambda} \left( 1 + \lambda t + \frac{1}{2} \lambda^2 t^2 + \dots - 1 \right) \right] = e^{-t} \left[ \mathbf{I} + \frac{e^{\lambda t} - 1}{\lambda}\, \mathbf{u}^{(1)} \mathbf{v}^{(1)T} \right], \qquad (38)

where λ is the only nonzero eigenvalue of 𝐉, given by

\lambda = ||\mathbf{u}^{(1)}|| \cos\theta, \qquad \cos\theta = \frac{\mathbf{u}^{(1)T} \mathbf{v}^{(1)}}{||\mathbf{u}^{(1)}||}. \qquad (39)

As a result, the full dynamics in the case of rank-1 connectivity structure can be written as:

\mathbf{r}(t) = e^{-t}\, \mathbf{r}_0 + e^{-t}\, \mathbf{u}^{(1)} a(t), \qquad (40)

where

a(t) = \left( \mathbf{v}^{(1)T} \mathbf{r}_0 \right) \left( e^{\lambda t} - 1 \right) / \lambda. \qquad (41)

Since 𝐮(1) is the right eigenvector of 𝐉 corresponding to λ, from Equation (40) we note that, when 𝐫0 is fully aligned with 𝐮(1), the dynamics are one-dimensional and exhibit a monotonic decay along the same dimension. Instead, when the initial state is not fully aligned with 𝐮(1), the dynamics are confined to the plane defined by 𝐫0 and 𝐮(1). In this case, while the component of the dynamics along 𝐫0 decays exponentially as a function of time, the component along the direction of 𝐮(1) increases initially in proportion to the value of its norm, ||𝐮(1)|| (since at time t=0, d(e^{−t}a(t))/dt = 𝐯(1)T𝐫0, the component e^{−t}a(t)𝐮(1) initially grows at a rate proportional to ||𝐮(1)||). Eventually, the activity decays to its asymptotic value 𝐫(t→∞) = 0. Therefore in a unit-rank network the dynamics draw a neural trajectory that explores at most two dimensions in the state space.
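A minimal sketch checking the closed-form solution of Equations (40)-(41) against numerical integration (the vectors and parameter values are hypothetical, chosen so that λ = 𝐯(1)T𝐮(1) < 1, ensuring stability):

import numpy as np

rng = np.random.default_rng(1)
N = 200
v = rng.standard_normal(N); v /= np.linalg.norm(v)
w = rng.standard_normal(N); w -= (w @ v) * v; w /= np.linalg.norm(w)
u = 0.5 * v + 4.0 * w            # lambda = v.u = 0.5 < 1: stable dynamics

lam = v @ u
r0 = v.copy()                    # initial state overlapping the left vector
t = 2.0
a_t = (v @ r0) * (np.exp(lam * t) - 1) / lam      # Equation (41)
r_closed = np.exp(-t) * (r0 + u * a_t)            # Equation (40)

# Euler integration of dr/dt = -r + u (v^T r), with tau = 1.
r, h = r0.copy(), 1e-3
for _ in range(int(t / h)):
    r = r + h * (-r + u * (v @ r))
print(np.linalg.norm(r - r_closed))   # close to 0, up to integration error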

These observations can be extended to the general case of rank-R connectivity matrices. For simplicity we rewrite Equation (3) as

\mathbf{J} = \mathbf{U} \mathbf{V}^T, \qquad (42)

where the matrices 𝐔 and 𝐕 contain respectively the R right and left connectivity vectors as columns. Writing the connectivity matrix in this form is always possible by applying the SVD on 𝐉. The SVD allows us to write the connectivity matrix as 𝐉=𝐔^𝐒𝐕T, where 𝐔=𝐔^𝐒, and 𝐔^T𝐔^=𝐕T𝐕=𝐈. In particular, this implies the norm of each left connectivity vector 𝐯(r) is unity, while the right connectivity vectors 𝐮(r) are not normalized.

Following steps similar to Equation (38), the linear dynamics evoked by the initial state 𝐫0 can be written as

\mathbf{r}(t) = \exp(t(\mathbf{J} - \mathbf{I}))\, \mathbf{r}_0 = e^{-t} \left( \mathbf{I} + \sum_{m=1}^{\infty} \frac{(t\, \mathbf{U} \mathbf{V}^T)^m}{m!} \right) \mathbf{r}_0 = e^{-t}\, \mathbf{r}_0 + e^{-t}\, \mathbf{U}\, \mathbf{a}(t), \qquad (43)

where we defined the R-dimensional column vector

\mathbf{a}(t) = (\mathbf{V}^T \mathbf{U})^{-1} \left[ \exp(t\, \mathbf{V}^T \mathbf{U}) - \mathbf{I} \right] (\mathbf{V}^T \mathbf{r}_0), \qquad (44)

in analogy with Equation (40). Therefore, in the case of rank-R connectivity matrix, the dynamics evolve in a (R+1)-dimensional space determined by the initial state 𝐫0 and the R right connectivity vectors 𝐮(r) (columns of 𝐔). Equation (44) shows that the dynamics of a rank-R system are determined by the matrix of scalar products between left and right connectivity vectors, which we refer to as the overlap matrix (Schuessler et al., 2020; Beiran et al., 2020)

\mathbf{J}_{ov} = \mathbf{V}^T \mathbf{U}. \qquad (45)

We conclude that low-rank connectivity matrices of the form given by Equation (3) with R ≪ N generate low-dimensional dynamics that explore at most R+1 dimensions.
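For completeness, a minimal sketch of the rank-R solution via the overlap matrix (Equations (43)-(45)); it assumes that 𝐉ov is invertible, with U and V given as N×R arrays of right and left connectivity vectors:

import numpy as np
from scipy.linalg import expm

def low_rank_response(U, V, r0, t):
    """r(t) = e^{-t} r0 + e^{-t} U a(t) for J = U V^T (Equations (43)-(44))."""
    Jov = V.T @ U                # R x R overlap matrix (Equation (45))
    R = Jov.shape[0]
    a_t = np.linalg.solve(Jov, (expm(t * Jov) - np.eye(R)) @ (V.T @ r0))
    return np.exp(-t) * (r0 + U @ a_t)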

Conditions for stability and amplification of OFF responses in low-rank networks

In this paragraph we examine the conditions required to generate stable amplified dynamics in networks with low-rank connectivity 𝐉 and initial state 𝐫0. Specifically, two sets of conditions need to be satisfied. The first set of conditions is directly derived by applying the general criterion for stable amplified dynamics given by Equation (34) to low-rank networks, which effectively constrains the connectivity 𝐉 through the relative arrangement of the connectivity vectors 𝐮(r) and 𝐯(r) (r=1,…,R). When this criterion is satisfied, amplified dynamics can be generated only if the initial condition 𝐫0 is aligned with specific directions in the state-space. We thus examine a second set of conditions on the initial state 𝐫0 for which amplified trajectories can be evoked, and express these conditions in terms of the relationship between 𝐫0 and the modes 𝐮(r)-𝐯(r). As before, without loss of generality, we assume that the norm of the vectors 𝐯(r) is equal to one, while the norm of the vectors 𝐮(r) can vary.

Conditions on the modes 𝐮(r)-𝐯(r)

We first consider a network with unit-rank connectivity 𝐉=𝐮(1)𝐯(1)T, with 𝐯(1) of unit norm. In unit-rank networks, the dynamics are stable only if the nonzero eigenvalue λ is smaller than one. From Equation (39), this yields the stability condition

\cos\theta<\frac{1}{\|\mathbf{u}^{(1)}\|}. (46)

The condition for the generation of amplified responses given by Equation (34) can be derived in terms of 𝐮(1) and 𝐯(1) by computing the eigenvalues of the symmetric part of the connectivity, 𝐉S=(𝐮(1)𝐯(1)T+𝐯(1)𝐮(1)T)/2. The matrix 𝐉S is of rank two, and has in general two nonzero eigenvalues, given by:

\lambda_{\max,\min}(\mathbf{J}_S)=\frac{\|\mathbf{u}^{(1)}\|\,(\cos\theta\pm 1)}{2}. (47)

Therefore the condition for amplified OFF responses Equation (34) can be written in terms of the length of the vector 𝐮(1) as

\|\mathbf{u}^{(1)}\|>\frac{2}{\cos\theta+1}. (48)

Previous work has shown that amplification in a unit-rank network can take arbitrarily large values only if 0≤cosθ<1/‖𝐮(1)‖ (Bondanelli and Ostojic, 2020). As a result, stable and unbounded amplified dynamics can be generated in a unit-rank network if and only if the norm of 𝐮(1) is sufficiently large (Equation (48)) and the correlation between the connectivity vectors is positive and sufficiently small (Equation (46)).

The conditions for stability and amplification derived for a rank-1 network can be implicitly generalized to a rank-R network model, using the fact that the nonzero eigenvalues λ of the low-rank connectivity matrix 𝐉=𝐔𝐕T (where 𝐉 is an N×N matrix) are equal to the nonzero eigenvalues of its overlap matrix 𝐉ov=𝐕T𝐔 (where 𝐉ov is an R×R matrix) (Nakatsukasa, 2019):

\mathrm{eig}_{\lambda\neq 0}(\mathbf{J})=\mathrm{eig}_{\lambda\neq 0}(\mathbf{J}_{ov}). (49)

Therefore, the condition for stability can be implicitly written in terms of the nonzero eigenvalues λ of 𝐉ov as:

\begin{cases}\det(\mathbf{V}^T\mathbf{U}-\lambda\mathbf{I})=0\\ \operatorname{Re}\lambda<1.\end{cases} (50)

To derive the conditions for transient amplification, we require that at least one eigenvalue of the symmetric part of the connectivity be larger than unity, where the symmetric part 𝐉S is a matrix of rank 2R given by:

\mathbf{J}_S=\frac{1}{2}\left(\mathbf{U}\mathbf{V}^T+\mathbf{V}\mathbf{U}^T\right)=\frac{1}{2}\,[\mathbf{U},\mathbf{V}]\begin{bmatrix}\mathbf{V}^T\\ \mathbf{U}^T\end{bmatrix}. (51)

Using Equation (49) (Nakatsukasa, 2019), we can write the nonzero eigenvalues λS of 𝐉S as:

\mathrm{eig}_{\lambda_S\neq 0}(\mathbf{J}_S)=\frac{1}{2}\,\mathrm{eig}_{\lambda_S\neq 0}\left(\begin{bmatrix}\mathbf{V}^T\\ \mathbf{U}^T\end{bmatrix}[\mathbf{U},\mathbf{V}]\right)=\frac{1}{2}\,\mathrm{eig}_{\lambda_S\neq 0}\begin{pmatrix}\mathbf{V}^T\mathbf{U} & \mathbf{I}\\ \mathbf{S}^2 & \mathbf{U}^T\mathbf{V}\end{pmatrix}, (52)

where we used the fact that 𝐕T𝐕=𝐈, and defined 𝐒²=diag(‖𝐮(1)‖², ‖𝐮(2)‖², …). By definition, the eigenvalues of the symmetric part multiplied by 2, that is, 2λS, should satisfy:

0=\det(\mathbf{V}^T\mathbf{U}-2\lambda_S\mathbf{I})\,\det\!\left(\mathbf{U}^T\mathbf{V}-2\lambda_S\mathbf{I}-\mathbf{S}^2(\mathbf{V}^T\mathbf{U}-2\lambda_S\mathbf{I})^{-1}\right), (53)

where we used the expression of the determinant of a 2×2 block matrix (Horn and Johnson, 2012). Since for two square matrices 𝐀 and 𝐁 we have that det(𝐀𝐁)=det(𝐀)det(𝐁), det(𝐀T)=det(𝐀), and det(𝐀-1)=1/det(𝐀), we can write:

0=\det(\mathbf{V}^T\mathbf{U}-2\lambda_S\mathbf{I})\,\det\!\left\{\left[(\mathbf{U}^T\mathbf{V}-2\lambda_S\mathbf{I})(\mathbf{V}^T\mathbf{U}-2\lambda_S\mathbf{I})-\mathbf{S}^2\right](\mathbf{V}^T\mathbf{U}-2\lambda_S\mathbf{I})^{-1}\right\}
=\det(\mathbf{V}^T\mathbf{U}-2\lambda_S\mathbf{I})\,\det\!\left[(\mathbf{U}^T\mathbf{V}-2\lambda_S\mathbf{I})(\mathbf{V}^T\mathbf{U}-2\lambda_S\mathbf{I})-\mathbf{S}^2\right]\,\det(\mathbf{V}^T\mathbf{U}-2\lambda_S\mathbf{I})^{-1}
=\det\!\left[(\mathbf{U}^T\mathbf{V}-2\lambda_S\mathbf{I})(\mathbf{V}^T\mathbf{U}-2\lambda_S\mathbf{I})-\mathbf{S}^2\right]. (54)

Thus, the condition for amplification becomes:

\begin{cases}\det\!\left[\mathbf{B}(\lambda_S)\,\mathbf{B}(\lambda_S)^T-\mathbf{S}^2\right]=0,\quad\text{where }\mathbf{B}(\lambda_S)=\mathbf{U}^T\mathbf{V}-2\lambda_S\mathbf{I}\\ \lambda_S>1.\end{cases} (55)

Thus, in the general case of a rank-R network, the condition for stable and amplified dynamics is given by Equations (50) and (55).
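
Both criteria can be evaluated directly from the connectivity vectors. The sketch below is our own illustration with arbitrary parameters: it checks stability through the eigenvalues of the overlap matrix and amplification through the eigenvalues of the symmetric part, for a rank-1 connectivity chosen to satisfy Equations (46) and (48):

import numpy as np

def stable_and_amplified(U, V):
    # Stability: all nonzero eigenvalues of J_ov = V^T U have real part < 1 (Equation (50))
    Jov = V.T @ U
    stable = np.max(np.linalg.eigvals(Jov).real) < 1
    # Amplification: largest eigenvalue of the symmetric part J_S exceeds 1 (Equation (55))
    J = U @ V.T
    JS = (J + J.T) / 2
    amplified = np.max(np.linalg.eigvalsh(JS)) > 1
    return bool(stable), bool(amplified)

rng = np.random.default_rng(1)
N = 100
v = rng.standard_normal((N, 1))
v /= np.linalg.norm(v)
u = rng.standard_normal((N, 1))
u -= (v.T @ u) * v                   # cos(theta) = 0 < 1/||u||: stable
u *= 4.0 / np.linalg.norm(u)         # ||u|| = 4 > 2/(cos(theta) + 1) = 2: amplified
print(stable_and_amplified(u, v))    # (True, True)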

Dynamics of a low-rank rotational channel

In this section we describe the structure of the low-rank connectivity obtained by fitting the network model to the OFF responses to individual stimuli (Figure 5B,C) and analyze the resulting dynamics. When fitting a low-rank network model to the OFF responses to individual stimuli, we observed a specific structure in the pattern of correlations between the right and left connectivity vectors 𝐮(i) and 𝐯(j) of the fitted connectivity 𝐉. This pattern exhibited low values of the correlation for almost all pairs of connectivity vectors, except for pairs of left and right vectors coupled across nearby modes, e.g. 𝐮(1)-𝐯(2), 𝐮(2)-𝐯(1), 𝐮(3)-𝐯(4), 𝐮(4)-𝐯(3), and so forth. This structure gives rise to independent rank-2 channels grouping pairs of modes of the form 𝐉2=𝐮(1)𝐯(1)T+𝐮(2)𝐯(2)T. Within a single channel, the values of the correlation were high, and opposite in sign, for different pairs of vectors, that is, û(1)⋅𝐯(2)≈1 and û(2)⋅𝐯(1)≈−1. As a result, each of these channels can be cast in the form:

𝐉2=Δ1𝐯(2)𝐯(1)T-Δ2𝐯(1)𝐯(2)T, (56)

where we set

\mathbf{u}^{(1)}=\Delta_1\mathbf{v}^{(2)},\qquad \mathbf{u}^{(2)}=-\Delta_2\mathbf{v}^{(1)}, (57)

with Δ1, Δ2 two positive scalars and the vectors 𝐯's of unit norm. Scheme 1 illustrates the structure of the rank-2 channel in terms of the left and right connectivity vectors, and the dynamics evoked by an initial condition along 𝐯(1).

Scheme 1. Schematics of the rank-2 rotational channel.

For a rank-2 connectivity as in Equation (56), the 2×2 overlap matrix 𝐉ov is therefore given by

\mathbf{J}_{ov}=\mathbf{V}^T\mathbf{U}=\begin{pmatrix}0 & -\Delta_2\\ \Delta_1 & 0\end{pmatrix}=\mathbf{V}^T\mathbf{J}_2\mathbf{V}\equiv\tilde{\mathbf{J}}_2. (58)

Note that the overlap matrix also corresponds to the connectivity matrix 𝐉2 projected on the basis of the left connectivity vectors [𝐯(1),𝐯(2)] (third equality in Equation (58)).

To derive the necessary and sufficient conditions for the channel to exhibit stable and amplified dynamics in response to at least one initial condition 𝐫0, we need to compute the nonzero eigenvalues of the connectivity matrix and of its symmetric part (equal to the eigenvalues of 𝐉~2 and of its symmetric part). The eigenvalues of 𝐉~2 are purely imaginary for all values of Δ1 and Δ2 and are given by λ1,2 = ±i√(Δ1Δ2) ≡ ±iω. The dynamics are therefore always stable. Next, we compute the eigenvalues of the symmetric part of the connectivity, which reads:

\tilde{\mathbf{J}}_{2,S}=\frac{1}{2}\left(\tilde{\mathbf{J}}_2+\tilde{\mathbf{J}}_2^T\right)=\begin{pmatrix}0 & (\Delta_1-\Delta_2)/2\\ (\Delta_1-\Delta_2)/2 & 0\end{pmatrix}. (59)

The eigenvalues of 𝐉~2,S are ±|Δ2-Δ1|/2, so that the dynamics are amplified when

|\Delta_2-\Delta_1|>2. (60)

We next derive the full dynamics of the rank-2 connectivity matrix 𝐉2 in Equation (56). To this end, we use the general expression of the propagator for a rank-R network given by Equations (43),(44) in terms of the overlap matrix 𝐉ov (Equation (58)). To compute the inverse and the exponential of the overlap matrix, we start by diagonalizing 𝐉ov. Its eigenvalues are given by λ1,2=±iω, while the corresponding eigenvectors are specified in the columns of the matrix 𝐄, where 𝐄 and 𝐄-1 read:

\mathbf{E}=\frac{1}{C}\begin{pmatrix}1 & 1\\ -i\sqrt{\Delta_1/\Delta_2} & i\sqrt{\Delta_1/\Delta_2}\end{pmatrix},\qquad \mathbf{E}^{-1}=\frac{C}{2}\begin{pmatrix}1 & i\sqrt{\Delta_2/\Delta_1}\\ 1 & -i\sqrt{\Delta_2/\Delta_1}\end{pmatrix},\qquad C=\sqrt{1+\Delta_1/\Delta_2}. (61)

The inverse and exponential of the overlap matrix can therefore be computed as:

(\mathbf{V}^T\mathbf{U})^{-1}=\mathbf{E}\begin{pmatrix}-i/\omega & 0\\ 0 & i/\omega\end{pmatrix}\mathbf{E}^{-1},\qquad \exp(t\,\mathbf{V}^T\mathbf{U})=\mathbf{E}\begin{pmatrix}e^{i\omega t} & 0\\ 0 & e^{-i\omega t}\end{pmatrix}\mathbf{E}^{-1}, (62)

so that we obtain the term 𝐚(t) in Equation (44) as:

\mathbf{a}(t)=\begin{pmatrix}\frac{1}{\omega}\sin\omega t & \frac{1}{\Delta_1}(\cos\omega t-1)\\ -\frac{1}{\Delta_2}(\cos\omega t-1) & \frac{1}{\omega}\sin\omega t\end{pmatrix}(\mathbf{V}^T\mathbf{r}_0). (63)

From Equations (43), (44), (57), and (63), we can write the expression for the dynamics of the transient channel 𝐉2 with initial condition given by 𝐫0 = α1𝐯(1) + α2𝐯(2) + β𝐳(⊥) (with 𝐳(⊥) a vector orthogonal to both 𝐯(1) and 𝐯(2)) as:

\mathbf{r}(t)=e^{-t}\beta\,\mathbf{z}^{(\perp)}+e^{-t}\left[\mathbf{v}^{(1)}\left(\alpha_1\cos\omega t-\sqrt{\tfrac{\Delta_2}{\Delta_1}}\,\alpha_2\sin\omega t\right)+\mathbf{v}^{(2)}\left(\alpha_2\cos\omega t+\sqrt{\tfrac{\Delta_1}{\Delta_2}}\,\alpha_1\sin\omega t\right)\right]. (64)

Therefore, the squared distance from baseline of the trajectory reads:

\|\mathbf{r}(t)\|^2=e^{-2t}\beta^2+e^{-2t}\left[\alpha_1^2\cos^2\omega t+\tfrac{\Delta_2}{\Delta_1}\alpha_2^2\sin^2\omega t-2\alpha_1\alpha_2\sqrt{\tfrac{\Delta_2}{\Delta_1}}\cos\omega t\sin\omega t+\alpha_2^2\cos^2\omega t+\tfrac{\Delta_1}{\Delta_2}\alpha_1^2\sin^2\omega t+2\alpha_1\alpha_2\sqrt{\tfrac{\Delta_1}{\Delta_2}}\cos\omega t\sin\omega t\right]. (65)

Note that when Δ2=Δ1, the connectivity matrix is purely anti-symmetric (and therefore normal) and cannot produce amplified dynamics. In fact, in this case, the evolution of the distance from baseline reduces to an exponential decay, ‖𝐫(t)‖ = e^{-t}. Amplification therefore requires that Δ1 ≠ Δ2, and specifically that Equation (60) holds.
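
The role of Equation (60) can be checked by direct integration of the rank-2 channel. In the following sketch (our own illustration; the values of Δ1 and Δ2 are arbitrary), the peak distance from baseline exceeds unity only when |Δ2−Δ1|>2:

import numpy as np
from scipy.linalg import expm

def peak_distance(D1, D2, N=100, seed=2):
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((N, 2)))
    v1, v2 = V[:, :1], V[:, 1:]
    J2 = D1 * v2 @ v1.T - D2 * v1 @ v2.T      # Equation (56)
    r0 = v2[:, 0]                              # initial state along v^(2) (amplified when D2 > D1)
    ts = np.linspace(0, 5, 200)
    norms = [np.linalg.norm(expm(t * (J2 - np.eye(N))) @ r0) for t in ts]
    return max(norms)

print(peak_distance(1.0, 1.5))   # |D2 - D1| < 2: peak stays at ||r0|| = 1 (monotonic decay)
print(peak_distance(1.0, 6.0))   # |D2 - D1| > 2: the trajectory transiently rises above 1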

We observe that while an anti-symmetric matrix (therefore normal, and with purely imaginary eigenvalues) cannot produce amplified dynamics, nonetheless a matrix with purely imaginary eigenvalues is not necessarily normal, and can thus produce amplified dynamics. Qualitatively, this is reflected in the dynamics tracing elliptical rotations (as opposed to circular rotations in the case of an anti-symmetric matrix) which determine oscillations (and therefore rising and decaying phases) in the distance from baseline. This analysis can be extended to the case of a superposition of K rank-2 mutually orthogonal channels with connectivity matrix given by:

\mathbf{J}_{2K}=\sum_{k=1}^{K}\left[\Delta_1^{(k)}\,\mathbf{v}^{(2;k)}\mathbf{v}^{(1;k)T}-\Delta_2^{(k)}\,\mathbf{v}^{(1;k)}\mathbf{v}^{(2;k)T}\right], (66)

where each channel k is defined by the connectivity vectors 𝐯(1;k), 𝐯(2;k) (Figure 5B), defining together a rank-2K rotational channel. If we write the initial condition along the connectivity vectors of the different K channels as

\mathbf{r}_0=\sum_{k=1}^{K}\left(\alpha_1^{(k)}\mathbf{v}^{(1;k)}+\alpha_2^{(k)}\mathbf{v}^{(2;k)}\right)+\beta\,\mathbf{z}^{(\perp)}=\sum_{k=1}^{K}\mathbf{r}_0^{(k)}+\beta\,\mathbf{z}^{(\perp)}, (67)

with ∑k=1,…,K (α1(k)² + α2(k)²) + β² = 1, we can write the resulting dynamics as the sum of the dynamics within each channel as:

\mathbf{r}_{2K}(t)=\sum_{k=1}^{K}\mathbf{r}(t;\mathbf{r}_0^{(k)})+e^{-t}\beta\,\mathbf{z}^{(\perp)}, (68)

where the terms 𝐫(t;𝐫0(k)) have the same form as the second term on the right-hand side of Equation (64).

Conditions on the initial state 𝐫0

In this paragraph we consider the rotational channel examined in Section 'Dynamics of a low-rank rotational channel' and study the conditions on the relationship between the initial condition 𝐫0 and the connectivity vectors that ensure that the evoked dynamics be amplified. We first examine the case of a rank-2 rotational channel, and then generalize to a superposition of orthogonal rank-2 channels.

We consider the equation for the distance from baseline of the dynamics for a rank-2 channel (Equation (65)) and assume, without loss of generality, that Δ2>Δ1. We observe that when Δ2>Δ1, an initial condition along 𝐯(1) does not evoke amplified dynamics. On the other hand, an initial condition along 𝐯(2) can evoke amplified dynamics, depending on the values of Δ1 and Δ2. In fact, we have:

\|\mathbf{r}(t;\mathbf{r}_0{=}\mathbf{v}^{(1)})\|^2=e^{-2t}\left[1+\left(\tfrac{\Delta_1}{\Delta_2}-1\right)\sin^2\omega t\right]<e^{-2t}\quad\forall t
\|\mathbf{r}(t;\mathbf{r}_0{=}\mathbf{v}^{(2)})\|^2=e^{-2t}\left[1+\left(\tfrac{\Delta_2}{\Delta_1}-1\right)\sin^2\omega t\right]. (69)

Therefore, when Δ2>Δ1 (resp. Δ1>Δ2), the initial condition 𝐫0 needs to have a substantial component along the vector 𝐯(2) (resp. 𝐯(1)). To formalize this observation, we compute the peak time t* at which ‖𝐫(t;𝐫0=𝐯(2))‖² takes its largest value. If Equation (60) holds and Δ2 ≫ Δ1, the peak time can be approximated by ωt* ≈ π/2.

We then examine the value of the distance from baseline ‖𝐫(t*;𝐫0)‖² when the initial condition 𝐫0 is a linear combination of the vectors 𝐯(1), 𝐯(2) and a vector orthogonal to both 𝐯's, that is, 𝐫0 = α1𝐯(1) + α2𝐯(2) + β𝐳(⊥), with α1² + α2² + β² = 1. At the peak time we can write the distance from baseline as:

\|\mathbf{r}(t^*;\mathbf{r}_0)\|^2=e^{-2t^*}\left[\tfrac{\Delta_1}{\Delta_2}\alpha_1^2+\tfrac{\Delta_2}{\Delta_1}\alpha_2^2+\beta^2\right]. (70)

By requiring that, at the peak time, the distance from baseline be larger than unity, we can derive the following sufficient condition on the components of the initial condition along the vectors 𝐯(1), 𝐯(2) and 𝐳(⊥):

\tfrac{\Delta_1}{\Delta_2}\alpha_1^2+\tfrac{\Delta_2}{\Delta_1}\alpha_2^2+\beta^2>e^{2t^*}. (71)

When the component of the initial condition along 𝐳(⊥) is zero (β=0), Equation (71) is satisfied when

\alpha_2^2>\left(\tfrac{\Delta_2}{\Delta_1}e^{2t^*}-1\right)\Big/\left(\tfrac{\Delta_2^2}{\Delta_1^2}-1\right). (72)

Equation (71) shows that in an amplified rotational channel the initial state 𝐫0 may have a significant component orthogonal to the vector 𝐯(2) and yet be able to generate amplified dynamics. However, from Equation (70) we observe that, for a fixed value of Δ1 and Δ2 (with Δ2>Δ1), the amplification decreases when the component along 𝐯(2), that is, α2, decreases. Therefore, to have amplification the component of 𝐫0 along 𝐯(2) should be sufficiently strong.

The above condition Equation (71) can be generalized to the case of a superposition of orthogonal rank-2 rotational channels (see Equation (68)) if we assume that the peak time is approximately the same when considering separately the dynamics within each rank-2 channel. Under these assumptions, the distance from baseline at the peak time can be written as

\|\mathbf{r}(t^*;\mathbf{r}_0)\|^2=e^{-2t^*}\sum_{k=1}^{K}\left[\tfrac{\Delta_1^{(k)}}{\Delta_2^{(k)}}\alpha_1^{(k)2}+\tfrac{\Delta_2^{(k)}}{\Delta_1^{(k)}}\alpha_2^{(k)2}\right]+e^{-2t^*}\beta^2, (73)

and the condition for the relationship between initial condition and connectivity vectors in a rank-2K channel becomes:

\sum_{k=1}^{K}\left[\tfrac{\Delta_1^{(k)}}{\Delta_2^{(k)}}\alpha_1^{(k)2}+\tfrac{\Delta_2^{(k)}}{\Delta_1^{(k)}}\alpha_2^{(k)2}\right]+\beta^2>e^{2t^*}. (74)

When the component of the initial condition along 𝐳(⊥) is zero (β=0) and the Δi(k) do not depend on the channel k, Equation (74) is satisfied when

\sum_{k=1}^{K}\alpha_2^{(k)2}>\left(\tfrac{\Delta_2}{\Delta_1}e^{2t^*}-1\right)\Big/\left(\tfrac{\Delta_2^2}{\Delta_1^2}-1\right), (75)

where the left-hand side represents the squared norm of the component of the initial condition on the vectors 𝐯(2;k)'s. From Equation (73) we note that, for fixed values of the Δ1(k) and Δ2(k) (assuming Δ2(k)>Δ1(k) for all k), amplification decreases by decreasing the components of the initial condition on the vectors 𝐯(2;k)'s. Therefore, to generate amplified dynamics, the components of the initial condition 𝐫0 on the vectors 𝐯(2;k)'s should be sufficiently strong.
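
These conditions can be illustrated by evaluating Equation (70) directly. In the sketch below (our own illustration; Δ1, Δ2, and the values of α2 are arbitrary), amplification at the approximate peak time appears only once the component α2 along 𝐯(2) exceeds the threshold set by Equation (72):

import numpy as np

D1, D2 = 1.0, 9.0
omega = np.sqrt(D1 * D2)
t_star = (np.pi / 2) / omega              # peak-time approximation for D2 >> D1

# Threshold on alpha_2^2 from Equation (72), for beta = 0
thr = (D2 / D1 * np.exp(2 * t_star) - 1) / (D2**2 / D1**2 - 1)
print(np.sqrt(thr))                       # ~0.56 for these parameters

for alpha2 in [0.2, 0.6, 1.0]:
    alpha1 = np.sqrt(1 - alpha2**2)       # beta = 0: initial state in the v^(1)-v^(2) plane
    peak2 = np.exp(-2 * t_star) * (D1 / D2 * alpha1**2 + D2 / D1 * alpha2**2)  # Equation (70)
    print(alpha2, peak2 > 1)              # amplified only for sufficiently large alpha2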

Correlation between initial and peak state

In this section we derive the expression for the correlation between the initial state 𝐫0 and the state at the peak of the transient dynamics, for a rank-2K rotational channel consisting of K rank-2 rotational channels (see Section 'Dynamics of a low-rank rotational channel'). We set the initial condition to 𝐫0 = ∑k α1(k)𝐯(1;k) + ∑k α2(k)𝐯(2;k) + β𝐳(⊥) and evaluate the state at the peak time t* (corresponding to the peak time when 𝐫0 has a strong component on the 𝐯(2;k)'s and Δ2(k) ≫ Δ1(k)). The initial condition and the peak state are therefore given by:

\mathbf{r}(0)=\sum_{k=1}^{K}\alpha_1^{(k)}\mathbf{v}^{(1;k)}+\sum_{k=1}^{K}\alpha_2^{(k)}\mathbf{v}^{(2;k)}+\beta\,\mathbf{z}^{(\perp)}
\mathbf{r}(t^*)=\frac{1}{e^{t^*}}\sum_{k=1}^{K}\left[-\mathbf{v}^{(1;k)}\sqrt{\tfrac{\Delta_2^{(k)}}{\Delta_1^{(k)}}}\,\alpha_2^{(k)}+\mathbf{v}^{(2;k)}\sqrt{\tfrac{\Delta_1^{(k)}}{\Delta_2^{(k)}}}\,\alpha_1^{(k)}\right]+\frac{1}{e^{t^*}}\beta\,\mathbf{z}^{(\perp)}. (76)

From the mutual orthogonality between 𝐯(1;k), 𝐯(2;k), and 𝐳(⊥) it follows that

\|\mathbf{r}(0)\|^2=1
\|\mathbf{r}(t^*)\|^2=\frac{1}{e^{2t^*}}\sum_{k=1}^{K}\left[\tfrac{\Delta_2^{(k)}}{\Delta_1^{(k)}}\alpha_2^{(k)2}+\tfrac{\Delta_1^{(k)}}{\Delta_2^{(k)}}\alpha_1^{(k)2}\right]+\frac{1}{e^{2t^*}}\beta^2. (77)

Thus, we can write the correlation between the initial condition and the peak state as

\frac{\mathbf{r}(0)\cdot\mathbf{r}(t^*)}{\|\mathbf{r}(0)\|\,\|\mathbf{r}(t^*)\|}=\frac{\sum_{k=1}^{K}\left(\sqrt{\tfrac{\Delta_1^{(k)}}{\Delta_2^{(k)}}}-\sqrt{\tfrac{\Delta_2^{(k)}}{\Delta_1^{(k)}}}\right)\alpha_1^{(k)}\alpha_2^{(k)}+\beta^2}{e^{t^*}\,\|\mathbf{r}(t^*)\|}. (78)

If the initial condition has a strong component on the vectors 𝐯(2;k)'s (and the component on 𝐳(⊥) can be neglected, that is, β=0), so that α1(k)² = ϵ² and α2(k)² = 1−ϵ² with ϵ small, then the correlation between initial and peak state satisfies

\frac{\mathbf{r}(0)\cdot\mathbf{r}(t^*)}{\|\mathbf{r}(0)\|\,\|\mathbf{r}(t^*)\|}\simeq-\frac{1}{\sqrt{K}}\sum_{k=1}^{K}\alpha_1^{(k)}\alpha_2^{(k)}=-\sqrt{K}\,\epsilon, (79)

and is equal to zero when the initial condition is exactly a linear combination of the vectors 𝐯(2;k)'s (ϵ=0).
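
A short numerical check (our own illustration; parameters arbitrary): for an initial state lying entirely along the 𝐯(2;k)'s (ϵ=0), the state at the approximate peak time ωt*=π/2 is amplified yet orthogonal to the initial state, consistent with Equation (79):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
N, K, D1, D2 = 100, 2, 1.0, 9.0
V, _ = np.linalg.qr(rng.standard_normal((N, 2 * K)))   # orthonormal connectivity vectors
J = sum(D1 * V[:, [2 * k + 1]] @ V[:, [2 * k]].T
        - D2 * V[:, [2 * k]] @ V[:, [2 * k + 1]].T for k in range(K))   # Equation (66)

r0 = (V[:, 1] + V[:, 3]) / np.sqrt(2)       # along the two v^(2;k)'s, i.e. epsilon = 0
t_star = (np.pi / 2) / np.sqrt(D1 * D2)     # peak-time approximation
peak = expm(t_star * (J - np.eye(N))) @ r0

print(np.linalg.norm(peak))                 # > 1: the dynamics are amplified
print(r0 @ peak / np.linalg.norm(peak))     # ~0: the peak state is orthogonal to r0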

Single-trial analysis of population OFF responses

In this section we focus on the single-trial structure of the dynamics generated by the network and single-cell models. In particular, we consider a setting where dynamics are amplified and examine the amount of single-trial variability along specific directions in the state space: the direction corresponding to the maximum amplification of the dynamics, denoted by 𝐫ampl, and a random direction 𝐫rand. Variability along a given direction 𝐳0 at time t can be written as a function of the covariance matrix 𝐂(t) of the population activity at time t as:

var(𝐳0;t)=𝐳0T𝐂(t)𝐳0. (80)

In particular, for a given direction 𝐳0, we examine the ratio between the variability at the time of maximum amplification t=t* and the variability at time t=0 (corresponding to the end of stimulus presentation), which we termed variability amplification (VA), defined as:

\mathrm{VA}(\mathbf{z}_0)=\frac{\mathrm{var}(\mathbf{z}_0;t^*)}{\mathrm{var}(\mathbf{z}_0;0)}. (81)

We show that the network model predicts that the variability should increase from t=0 to t=t* along the amplified direction 𝐫ampl, but not along a random direction 𝐫rand, resulting in the relationship VA(𝐫ampl)>VA(𝐫rand). In contrast, in the single-cell model the presence of VA depends on the shapes of the single-cell responses. Further, we show that changes in VA between the amplified and random directions are due to distinct mechanisms in the network and single-cell models: in the network model they are due to the inter-neuron correlations generated by network recurrency; in the single-cell model they directly stem from changes in the variance of single neurons in the population.

Structure of single-trial responses in the network model

In the following we examine the structure of single-trial population OFF responses generated by a general linear network model. We show that the network model is able to reproduce the experimental observations on VA under the minimal assumption that the variability in the single-trial responses is due to the noise in the initial condition 𝐫0 at stimulus offset. The analysis can nevertheless be extended to the case where external input noise is fed into the network without affecting the key results.

We consider a linear network model with connectivity 𝐉 and a single stimulus s modeled by the trial-averaged initial condition 𝐫0. We assume that the trial-to-trial noise in the initial condition is Gaussian, so that for trial ν the corresponding initial condition 𝐫0(ν) is given by

𝐫0(ν)=𝐫0+η(ν), (82)

where ⟨ηi(ν)⟩=0 and ⟨ηi(ν)ηj(ν)⟩=s²δij, so that ⟨𝐫0(ν)⟩=𝐫0. The solution of the linear system can thus be analytically expressed as:

\mathbf{r}^{(\nu)}(t)=\mathbf{P}_t\,\mathbf{r}_0^{(\nu)}=\exp\big(t(\mathbf{J}-\mathbf{I})\big)\left(\mathbf{r}_0+\boldsymbol{\eta}^{(\nu)}\right), (83)

where the time-dependent matrix 𝐏t is the propagator of the system (Arnold, 1973).

We start by computing the covariance matrix of the population activity at an arbitrary time t as defined by:

\mathbf{C}(t)=\left\langle\left(\mathbf{r}^{(\nu)}(t)-\langle\mathbf{r}^{(\nu)}(t)\rangle\right)\left(\mathbf{r}^{(\nu)}(t)-\langle\mathbf{r}^{(\nu)}(t)\rangle\right)^T\right\rangle. (84)

Using Equations (83) and (84), we can write the covariance matrix as (Farrell and Ioannou, 1996; Farrell and Ioannou, 2001):

\mathbf{C}(t)=\left\langle\mathbf{P}_t\,\boldsymbol{\eta}^{(\nu)}\boldsymbol{\eta}^{(\nu)T}\mathbf{P}_t^T\right\rangle=\mathbf{P}_t\left\langle\boldsymbol{\eta}^{(\nu)}\boldsymbol{\eta}^{(\nu)T}\right\rangle\mathbf{P}_t^T=s^2\,\mathbf{P}_t\mathbf{P}_t^T, (85)

so that 𝐂(0)=s²𝐈.

At a given time t=t*, the SVD of the propagator is given by

\mathbf{P}_{t^*}=\sum_{i=1}^{N}\sigma_i\,\mathbf{m}^{(i)}\mathbf{n}^{(i)T}=\mathbf{M}\boldsymbol{\Sigma}\mathbf{N}^T, (86)

where the singular values σi are positive numbers. If none of the singular values σi is larger than unity, no initial condition 𝐫0 can lead to amplified dynamics at time t*, meaning that for all initial conditions ‖𝐫(t*)‖<1 (Bondanelli and Ostojic, 2020). Instead, suppose that the first K singular values {σk}k=1,…,K are larger than unity, while the remaining N−K singular values are smaller than one (with arbitrary K). Under this condition, any initial condition 𝐫0 consisting of a linear combination of the first K vectors 𝐧(k) (i.e. 𝐫0 = ∑k αk𝐧(k)) will be amplified, so that the norm of the population vector at time t* is larger than one (Bondanelli and Ostojic, 2020):

\mathbf{r}(t^*)=\mathbf{P}_{t^*}\mathbf{r}_0=\sum_{k=1}^{K}\alpha_k\sigma_k\,\mathbf{m}^{(k)},\qquad \|\mathbf{r}(t^*)\|>1. (87)

In the following analysis, we explore this last scenario, where the first K singular values are larger than unity. We make the following two assumptions. First, we assume that the initial condition is a linear combination of the first K vectors 𝐧(k)'s, so that the amplified direction is given by 𝐫ampl = 𝐫(t*)/‖𝐫(t*)‖ (see Equation (87)). Notably, for any specified initial condition 𝐫0 defined in this way, the following analysis holds for all times t* for which ‖𝐫(t*)‖>1, including the peak time where the distance from baseline is maximum. Second, we assume that averages of the first or second powers of the singular values are of order (at most) one, that is, ∑i=1,…,M σi/M and ∑i=1,…,M σi²/M are O(1) for any integer M. This assumption holds for typically studied network topologies (e.g. Gaussian random, low-rank networks), where a few amplified dimensions (corresponding to σi>1) coexist with many non-amplified ones (corresponding to σi<1) (see Bondanelli and Ostojic, 2020).

Using Equation (84) and the SVD of the propagator 𝐏t* (Equation (86)), we can write the covariance matrix at time t* as

\mathbf{C}(t^*)=s^2\,\mathbf{M}\boldsymbol{\Sigma}^2\mathbf{M}^T. (88)

Thus, we can compute the amount of variability along the amplified direction 𝐫ampl (Equation (87)) and a random direction 𝐫rand (with 𝐫rand = ∑i βi𝐦(i) and βi ∼ 𝒩(0,1/N)), using Equation (80) as:

\mathrm{var}(\mathbf{r}_{ampl};0)=s^2
\mathrm{var}(\mathbf{r}_{rand};0)=s^2
\mathrm{var}(\mathbf{r}_{ampl};t^*)=s^2\sum_{k=1}^{K}\alpha_k^2\sigma_k^4\Big/\sum_{k=1}^{K}\alpha_k^2\sigma_k^2>s^2
\left[\mathrm{var}(\mathbf{r}_{rand};t^*)\right]_{\mathbf{r}_{rand}}=s^2\left[\sum_{i=1}^{N}\beta_i^2\sigma_i^2\right]=s^2\sum_{i=1}^{N}\left[\beta_i^2\right]\sigma_i^2=\frac{s^2}{N}\sum_{i=1}^{N}\sigma_i^2=O(s^2), (89)

where in the third equation the sum runs only over the singular values larger than one, while in the fourth equation the sum runs over all singular values (both larger and smaller than unity), and brackets denote average over realizations of rrand. Equation (89) shows that in a linear network model the variability along the amplified direction 𝐫ampl increases from the initial time t=0 to the peak time t=t*, while the variability along a random direction does not, and instead decreases. We quantified these observations by computing the VA, defined in Equation (81) (Figure 8). Using Equation (89) we obtain:

\mathrm{VA}(\mathbf{r}_{rand})=O(1),\qquad \mathrm{VA}(\mathbf{r}_{ampl})\gg 1, (90)

thus showing that, for a linear network model, VA is larger when computed for the amplified direction than a random direction.
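
The predictions of Equations (89) and (90) can be illustrated by direct simulation. The sketch below is our own illustration (not part of the released analysis code; network parameters, noise level, and trial count are arbitrary): noisy initial conditions are propagated through a non-normal rank-1 network, and the VA is compared between the amplified and a random direction:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
N, s, t_star, n_trials = 100, 0.1, 1.0, 500

# Rank-1 non-normal connectivity with u orthogonal to v and ||u|| large
v = rng.standard_normal((N, 1))
v /= np.linalg.norm(v)
u = rng.standard_normal((N, 1))
u -= (v.T @ u) * v
u *= 6.0 / np.linalg.norm(u)
J = u @ v.T
P = expm(t_star * (J - np.eye(N)))           # propagator at time t*

r0 = v[:, 0]                                  # initial state along v evokes amplified dynamics
r_ampl = P @ r0
r_ampl /= np.linalg.norm(r_ampl)              # amplified direction r(t*)/||r(t*)||
r_rand = rng.standard_normal(N)
r_rand /= np.linalg.norm(r_rand)              # random direction

X0 = r0 + s * rng.standard_normal((n_trials, N))   # noisy single-trial initial states
Xt = X0 @ P.T                                      # single-trial states at t*

def VA(direction):
    return np.var(Xt @ direction) / np.var(X0 @ direction)

print(VA(r_ampl), VA(r_rand))   # VA well above 1 along the amplified direction, O(1) otherwise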

Shuffled responses

In the following we compare the VA of the original responses with the VA obtained from the responses where the trial labels have been shuffled independently for each cell. The covariance matrix of the shuffled responses 𝐂(sh)(t) is obtained by retaining only the diagonal elements of the real covariance matrix 𝐂(t):

\mathbf{C}^{sh}_{ij}(t)=\mathbf{C}_{ij}(t)\,\delta_{ij}=s^2\sum_{l=1}^{N}\mathbf{m}^{(l)}_i\sigma_l^2\,\mathbf{m}^{(l)}_j\,\delta_{ij}. (91)

The first two quantities in Equation (89) computed using 𝐂sh instead of 𝐂 do not change, since 𝐂(0) is already diagonal. Instead, for shuffled responses the variability along both directions at the peak time t* is of order O(s²), since:

\left[\mathrm{var}(\mathbf{r}_{rand};t^*;\text{shuffled})\right]_{\mathbf{r}_{rand},\{\mathbf{m}^{(k)}\}_k}=s^2\left[\sum_{i,j,l=1}^{N}r_{rand,i}\,\mathbf{m}^{(l)}_i\sigma_l^2\,\mathbf{m}^{(l)}_j\,\delta_{ij}\,r_{rand,j}\right]
=s^2\sum_{i,j,l=1}^{N}\sigma_l^2\,\delta_{ij}\left[r_{rand,i}\,r_{rand,j}\right]\left[\mathbf{m}^{(l)}_i\mathbf{m}^{(l)}_j\right]
=\frac{s^2}{N^2}\sum_{i,j,l=1}^{N}\sigma_l^2\,\delta_{ij}=\frac{s^2}{N}\sum_{l=1}^{N}\sigma_l^2=O(s^2). (92)

Following the same steps, it is possible to show that [var(𝐫ampl;t*;shuffled)]𝐫ampl,{𝐦(k)}k = O(s²). In fact, if we set, according to Equation (87), 𝐫ampl = ∑k αkσk𝐦(k)/C, where C = (∑k αk²σk²)^{1/2} = O(1), we have:

\left[\mathrm{var}(\mathbf{r}_{ampl};t^*;\text{shuffled})\right]_{\mathbf{r}_{ampl},\{\mathbf{m}^{(k)}\}_k}=\frac{s^2}{C^2}\sum_{i,j,l=1}^{N}\sum_{k_1,k_2=1}^{K}\sigma_{k_1}\sigma_{k_2}\sigma_l^2\left[\alpha_{k_1}\alpha_{k_2}\right]\left[\mathbf{m}^{(k_1)}_i\mathbf{m}^{(k_2)}_j\mathbf{m}^{(l)}_i\mathbf{m}^{(l)}_j\right]\delta_{ij}
=\frac{s^2}{C^2}\sum_{i,l=1}^{N}\sum_{k_1,k_2=1}^{K}\sigma_{k_1}\sigma_{k_2}\sigma_l^2\left[\alpha_{k_1}\alpha_{k_2}\right]\left(\delta_{lk_1}\delta_{k_1k_2}\tfrac{1}{N^2}+\delta_{k_1k_2}\tfrac{1}{N^2}\right)
=\frac{s^2}{C^2}\left(O(1/N)+O(1)\right)=O(s^2), (93)

where in the last equality we assumed that [αk²] ≃ 1/K if αk ∼ 𝒩(0,1/K). To summarize, for shuffled responses we have:

\mathrm{var}(\mathbf{r}_{ampl};0;\text{shuffled})=s^2,\qquad \mathrm{var}(\mathbf{r}_{rand};0;\text{shuffled})=s^2, (94)

so that the VA along both directions is given by:

\mathrm{VA}(\mathbf{r}_{rand})=O(1),\qquad \mathrm{VA}(\mathbf{r}_{ampl})=O(1), (95)

implying a weaker modulation of the VA between random and amplified direction for shuffled responses with respect to the original responses. These analyses show that in the network model, the difference in the VA between the amplified and random directions is a result of the recurrent interactions within the network, reflected in the off-diagonal elements of the covariance matrix 𝐂(t), and it therefore cannot be explained solely by changes in the variability of single units.

Structure of single-trial responses in the single-cell model

Here we examine the structure of the single-trial population responses generated by the single-cell model (Equation (1)). We consider single-trial responses r(ν)(t) to a single stimulus generated by perturbing the initial state of the trial averaged responses r0, yielding

r^{(\nu)}_i(t)=r^{(\nu)}_{0,i}\,L_i(t)=\left(r_{0,i}+\eta^{(\nu)}_i\right)L_i(t), (96)

where η(ν) has zero mean and variance equal to s² (⟨ηi(ν)⟩=0 and ⟨ηi(ν)ηj(ν)⟩=s²δij). From Equation (96) we can compute the covariance matrix of the population response for the single-cell model as:

\mathbf{C}_{ij}(t)=L_i(t)\left\langle\eta^{(\nu)}_i\eta^{(\nu)}_j\right\rangle L_j(t)=s^2 L_i^2(t)\,\delta_{ij}. (97)

We can obtain the variability along a given direction 𝐳0 as:

\mathrm{var}(\mathbf{z}_0;t)=s^2\sum_{i=1}^{N}\mathbf{z}_{0,i}^2\,L_i^2(t). (98)

Notably, the term on the right-hand side is proportional to the norm of the dynamics that would be evoked by the initial condition 𝐳0. In the single-cell model, the variability is maximally amplified along the initial condition that leads to largest amplification of the trial averaged dynamics. In contrast, in the network model, the variability is maximally amplified along the amplified direction of the trial-averaged dynamics, which is in general orthogonal to the corresponding initial condition (see Materials and methods, Section 'Conditions on the initial state 𝐫0'; Figure 6C,E). In general, the relationship between the modulation of the variability along the amplified direction and modulation along a random direction depends on the specific shape of the temporal filters Li(t), and we used simulations of the fitted model to determine it. We can however contrast the single-trial structure generated by the single-cell model with the structure generated by the recurrent model by noting that the covariance matrix given by Equation (97) is diagonal, and is therefore equal to the covariance matrix of the shuffled responses, 𝐂ij(t)=𝐂ijsh(t). This implies that no difference is observed when computing the VA for the amplified and random directions for the real and shuffled responses (i.e. VA(𝐫rand)=VA(𝐫rand;shuffled) and VA(𝐫ampl)=VA(𝐫ampl;shuffled)), indicating that changes in the trial-to-trial variability can be fully explained in terms of changes in the variability of single units.
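
Because the covariance in Equation (97) is diagonal, shuffling trial labels independently for each cell leaves the variability along any direction unchanged in the single-cell model. A minimal sketch (our own illustration; the filter shapes L_i(t) below are placeholders, not the filters fitted to the data):

import numpy as np

rng = np.random.default_rng(4)
N, s, n_trials, t_star = 50, 0.1, 1000, 1.0

L = np.exp(-t_star / rng.uniform(0.5, 2.0, N))   # per-neuron temporal filters evaluated at t*
r0 = rng.standard_normal(N)
X = (r0 + s * rng.standard_normal((n_trials, N))) * L   # single-trial states at t* (Equation (96))

# Shuffle trial labels independently for each cell
X_sh = np.column_stack([rng.permutation(X[:, i]) for i in range(N)])

z = rng.standard_normal(N)
z /= np.linalg.norm(z)
print(np.var(X @ z), np.var(X_sh @ z))   # equal up to sampling noise: shuffling has no effect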

In Figure 8E,G,F,H, for both network and single-cell models, single-trial population responses were generated by drawing the single-trial initial conditions from a random distribution with mean 𝐫0 and covariance matrix computed from the single-trials initial conditions of the original data (across neurons for the single-cell model, across PC dimensions for the network model). The results do not substantially change if we draw the single-trial initial conditions from a random distribution with mean 𝐫0 and isotropic covariance matrix (𝐫0(ν)𝒩(𝐫0,𝐂(0)), with 𝐂(0)=s2𝐈), as assumed in the analysis above (not shown).

Software

Numerical simulations and data analyses were done using Python 3.6 (using NumPy [Harris et al., 2020], SciPy [Virtanen et al., 2020], and the scikit-learn package [Pedregosa et al., 2011]). Code is available at https://github.com/gbondanelli/OffResponses; Bondanelli, 2021; copy archived at swh:1:rev:2438e688ad719eb9870af8c032803a7367fe1140. Surrogate data sets for model validation (see Materials and methods, Section 'Control data sets', Figure 3, Figure 3—figure supplement 2, Figure 3—figure supplement 3) were generated using the Matlab code provided by Elsayed and Cunningham, 2017 available at https://github.com/gamaleldin/TME.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Srdjan Ostojic, Email: srdjan.ostojic@ens.fr.

Peter Latham, University College London, United Kingdom.

Barbara G Shinn-Cunningham, Carnegie Mellon University, United States.

Funding Information

This paper was supported by the following grants:

  • Agence Nationale de la Recherche ANR-16-CE37-0016 to Giulio Bondanelli, Srdjan Ostojic.

  • Agence Nationale de la Recherche ANR-17-EURE-0017 to Giulio Bondanelli, Srdjan Ostojic.

Additional information

Competing interests

Reviewing editor, eLife.

No competing interests declared.

Author contributions

Conceptualization, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing.

Data curation, Methodology.

Conceptualization, Resources, Data curation, Investigation, Methodology, Writing - review and editing.

Conceptualization, Formal analysis, Supervision, Funding acquisition, Validation, Investigation, Methodology, Project administration, Writing - review and editing.

Additional files

Transparent reporting form

Data availability

Python code and data are available at https://github.com/gbondanelli/OffResponses; a copy is archived at https://archive.softwareheritage.org/swh:1:rev:2438e688ad719eb9870af8c032803a7367fe1140/.

The following datasets were generated:

References

  1. Alluri RK, Rose GJ, Hanson JL, Leary CJ, Vasquez-Opazo GA, Graham JA, Wilkerson J. Phasic, suprathreshold excitation and sustained inhibition underlie neuronal selectivity for short-duration sounds. PNAS. 2016;113:E1927–E1935. doi: 10.1073/pnas.1520971113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Anderson LA, Linden JF. Mind the gap: two dissociable mechanisms of temporal processing in the auditory system. The Journal of Neuroscience. 2016;36:1977–1995. doi: 10.1523/JNEUROSCI.1652-15.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Arnold VI. Ordinary differential equations. The MIT Press; 1973. [Google Scholar]
  4. Aubie B, Becker S, Faure PA. Computational models of millisecond level duration tuning in neural circuits. Journal of Neuroscience. 2009;29:9255–9270. doi: 10.1523/JNEUROSCI.1085-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Barbour DL, Callaway EM. Excitatory local connections of superficial neurons in rat auditory cortex. Journal of Neuroscience. 2008;28:11174–11185. doi: 10.1523/JNEUROSCI.2093-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Beiran M, Dubreuil A, Valente A, Mastrogiuseppe F, Ostojic S. Shaping dynamics with multiple populations in low-rank recurrent networks. arXiv. 2020 doi: 10.1162/neco_a_01381. https://arxiv.org/abs/2007.02062 [DOI] [PubMed]
  7. Beiran M, Ostojic S. Contrasting the effects of adaptation and synaptic filtering on the timescales of dynamics in recurrent networks. PLOS Computational Biology. 2019;15:e1006893. doi: 10.1371/journal.pcbi.1006893. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Bishop CM. Pattern recognition and machine learning. Springer-Verlag; 2006. [Google Scholar]
  9. Bizley JK, Bajo VM, Nodal FR, King AJ. Cortico-Cortical connectivity within ferret auditory cortex. Journal of Comparative Neurology. 2015;523:2187–2210. doi: 10.1002/cne.23784. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Bjorck A, Golub GH. Numerical methods for computing angles between linear subspaces. Mathematics of Computation. 1973;27:579–594. doi: 10.2307/2005662. [DOI] [Google Scholar]
  11. Bondanelli G. OffResponses. Software Heritage; 2021. swh:1:rev:2438e688ad719eb9870af8c032803a7367fe1140. https://archive.softwareheritage.org/swh:1:dir:ae6576e399aa359e62f4491b59683fb4d47fc101;origin=https://github.com/gbondanelli/OffResponses;visit=swh:1:snp:adffa57374648ef49c4ac9f4ef744b39c9fd41d5;anchor=swh:1:rev:2438e688ad719eb9870af8c032803a7367fe1140/
  12. Bondanelli G, Ostojic S. Coding with transient trajectories in recurrent neural networks. PLOS Computational Biology. 2020;16:e1007655. doi: 10.1371/journal.pcbi.1007655. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Brand A, Urban R, Grothe B. Duration tuning in the mouse auditory midbrain. Journal of Neurophysiology. 2000;84:1790–1799. doi: 10.1152/jn.2000.84.4.1790. [DOI] [PubMed] [Google Scholar]
  14. Broome BM, Jayaraman V, Laurent G. Encoding and decoding of overlapping odor sequences. Neuron. 2006;51:467–482. doi: 10.1016/j.neuron.2006.07.018. [DOI] [PubMed] [Google Scholar]
  15. Buonomano DV, Maass W. State-dependent computations: spatiotemporal processing in cortical networks. Nature Reviews Neuroscience. 2009;10:113–125. doi: 10.1038/nrn2558. [DOI] [PubMed] [Google Scholar]
  16. Calhoun BM, Schreiner CE. Spectral envelope coding in cat primary auditory cortex: linear and non-linear effects of stimulus characteristics. European Journal of Neuroscience. 1998;10:926–940. doi: 10.1046/j.1460-9568.1998.00102.x. [DOI] [PubMed] [Google Scholar]
  17. Churchland MM, Yu BM, Ryu SI, Santhanam G, Shenoy KV. Neural variability in premotor cortex provides a signature of motor preparation. Journal of Neuroscience. 2006;26:3697–3712. doi: 10.1523/JNEUROSCI.3762-05.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Churchland MM, Cunningham JP, Kaufman MT, Ryu SI, Shenoy KV. Cortical preparatory activity: representation of movement or first cog in a dynamical machine? Neuron. 2010;68:387–400. doi: 10.1016/j.neuron.2010.09.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature. 2012;487:51–56. doi: 10.1038/nature11129. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Churchland MM, Shenoy KV. Temporal complexity and heterogeneity of single-neuron activity in premotor and motor cortex. Journal of Neurophysiology. 2007;97:4235–4257. doi: 10.1152/jn.00095.2007. [DOI] [PubMed] [Google Scholar]
  21. Cunningham JP, Yu BM. Dimensionality reduction for large-scale neural recordings. Nature Neuroscience. 2014;17:1500–1509. doi: 10.1038/nn.3776. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Davies PT, Tso MK-S. Procedures for Reduced-Rank regression. Applied Statistics. 1982;31:244–255. doi: 10.2307/2347998. [DOI] [Google Scholar]
  23. Deneux T, Kempf A, Daret A, Ponsot E, Bathellier B. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities. Nature Communications. 2016;7:12682. doi: 10.1038/ncomms12682. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Duysens J, Schaafsma SJ, Orban GA. Cortical off response tuning for stimulus duration. Vision Research. 1996;36:3243–3251. doi: 10.1016/0042-6989(96)00040-5. [DOI] [PubMed] [Google Scholar]
  25. Elsayed GF, Cunningham JP. Structure in neural population recordings: an expected byproduct of simpler phenomena? Nature Neuroscience. 2017;20:1310–1318. doi: 10.1038/nn.4617. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Farrell BF, Ioannou PJ. Generalized stability theory. Part I: autonomous operators. Journal of the Atmospheric Sciences. 1996;53:2025–2040. doi: 10.1175/1520-0469(1996)053<2025:GSTPIA>2.0.CO;2. [DOI] [Google Scholar]
  27. Farrell BF, Ioannou PJ. Accurate low-dimensional approximation of the linear dynamics of fluid flow. Journal of the Atmospheric Sciences. 2001;58:2771–2789. doi: 10.1175/1520-0469(2001)058<2771:ALDAOT>2.0.CO;2. [DOI] [Google Scholar]
  28. Friedrich RW, Laurent G. Dynamic optimization of odor representations by slow temporal patterning of mitral cell activity. Science. 2001;291:889–894. doi: 10.1126/science.291.5505.889. [DOI] [PubMed] [Google Scholar]
  29. Fu ZY, Tang J, Jen PH, Chen QC. The auditory response properties of single-on and double-on responders in the inferior colliculus of the leaf-nosed bat, Hipposideros armiger. Brain Research. 2010;1306:39–52. doi: 10.1016/j.brainres.2009.10.002. [DOI] [PubMed] [Google Scholar]
  30. Gao P, Trautmann E, Yu B, Santhanam G, Ryu S, Shenoy K, Ganguli S. A theory of multineuronal dimensionality, dynamics and measurement. bioRxiv. 2017 doi: 10.1101/214262. [DOI]
  31. Goldman MS. Memory without feedback in a neural network. Neuron. 2009;61:621–634. doi: 10.1016/j.neuron.2008.12.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Golub GH, Zha H. The canonical correlations of matrix pairs and their numerical computation. In: Bojanczyk A, Cybenko G, editors. Linear Algebra for Signal Processing. Stanford University; 1992. pp. 27–49. [DOI] [Google Scholar]
  33. Grothe B, Vater M, Casseday JH, Covey E. Monaural interaction of excitation and inhibition in the medial superior olive of the mustached bat: an adaptation for biosonar. PNAS. 1992;89:5108–5112. doi: 10.1073/pnas.89.11.5108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Guo Y, Burkard R. Onset and offset responses from inferior colliculus and auditory cortex to paired noisebursts: inner hair cell loss. Hearing Research. 2002;171:158–166. doi: 10.1016/S0378-5955(02)00496-3. [DOI] [PubMed] [Google Scholar]
  35. Hancock KE, Voigt HF. Wideband inhibition of dorsal cochlear nucleus type IV units in cat: a computational model. Annals of Biomedical Engineering. 1999;27:73–87. doi: 10.1114/1.150. [DOI] [PubMed] [Google Scholar]
  36. Harris CR, Millman KJ, van der Walt SJ, Gommers R, Virtanen P, Cournapeau D, Wieser E, Taylor J, Berg S, Smith NJ, Kern R, Picus M, Hoyer S, van Kerkwijk MH, Brett M, Haldane A, Del Río JF, Wiebe M, Peterson P, Gérard-Marchant P, Sheppard K, Reddy T, Weckesser W, Abbasi H, Gohlke C, Oliphant TE. Array programming with NumPy. Nature. 2020;585:357–362. doi: 10.1038/s41586-020-2649-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Hartley DE, Dahmen JC, King AJ, Schnupp JW. Binaural sensitivity changes between cortical on and off responses. Journal of Neurophysiology. 2011;106:30–43. doi: 10.1152/jn.01070.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. He J. OFF responses in the auditory thalamus of the guinea pig. Journal of Neurophysiology. 2002;88:2377–2386. doi: 10.1152/jn.00083.2002. [DOI] [PubMed] [Google Scholar]
  39. He J. Corticofugal modulation on both ON and OFF responses in the nonlemniscal auditory thalamus of the guinea pig. Journal of Neurophysiology. 2003;89:367–381. doi: 10.1152/jn.00593.2002. [DOI] [PubMed] [Google Scholar]
  40. Heil P. Auditory cortical onset responses revisited. I. First-spike timing. Journal of Neurophysiology. 1997a;77:2616–2641. doi: 10.1152/jn.1997.77.5.2616. [DOI] [PubMed] [Google Scholar]
  41. Heil P. Auditory cortical onset responses revisited. II. response strength. Journal of Neurophysiology. 1997b;77:2642–2660. doi: 10.1152/jn.1997.77.5.2642. [DOI] [PubMed] [Google Scholar]
  42. Hennequin G, Vogels TP, Gerstner W. Non-normal amplification in random balanced neuronal networks. Physical Review E. 2012;86:011909. doi: 10.1103/PhysRevE.86.011909. [DOI] [PubMed] [Google Scholar]
  43. Hennequin G, Vogels TP, Gerstner W. Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron. 2014;82:1394–1406. doi: 10.1016/j.neuron.2014.04.045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Henry KR. Tuning of the auditory brainstem OFF responses is complementary to tuning of the auditory brainstem ON response. Hearing Research. 1985;19:115–125. doi: 10.1016/0378-5955(85)90115-7. [DOI] [PubMed] [Google Scholar]
  45. Horn RA, Johnson CR. Matrix Analysis. Cambridge: University Press; 2012. [Google Scholar]
  46. Hotelling H. Relations between two sets of variates. Biometrika. 1936;28:321–377. doi: 10.1093/biomet/28.3-4.321. [DOI] [Google Scholar]
  47. Ito T, Malmierca MS. Neurons, Connections, and Microcircuits of the Inferior Colliculus. In: Oliver D. L, editor. Mammalian Auditory Pathways: Synaptic Organization and Microcircuits. Springer International Publishing; 2018. pp. 127–167. [DOI] [Google Scholar]
  48. Izenman AJ. Reduced-rank regression for the multivariate linear model. Journal of Multivariate Analysis. 1975;5:248–264. doi: 10.1016/0047-259X(75)90042-1. [DOI] [Google Scholar]
  49. Joachimsthaler B, Uhlmann M, Miller F, Ehret G, Kurt S. Quantitative analysis of neuronal response properties in primary and higher-order auditory cortical fields of awake house mice (Mus musculus) The European Journal of Neuroscience. 2014;39:904–918. doi: 10.1111/ejn.12478. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Kasai M, Ono M, Ohmori H. Distinct neural firing mechanisms to tonal stimuli offset in the inferior colliculus of mice in vivo. Neuroscience Research. 2012;73:224–237. doi: 10.1016/j.neures.2012.04.009. [DOI] [PubMed] [Google Scholar]
  51. Keller CH, Kaylegian K, Wehr M. Gap encoding by parvalbumin-expressing interneurons in auditory cortex. Journal of Neurophysiology. 2018;120:105–114. doi: 10.1152/jn.00911.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Knyazev AV, Argentati ME. Principal Angles between Subspaces in an A -Based Scalar Product: Algorithms and Perturbation Estimates. SIAM Journal on Scientific Computing. 2002;23:2008–2040. doi: 10.1137/S1064827500377332. [DOI] [Google Scholar]
  53. Kopp-Scheinpflug C, Tozer AJ, Robinson SW, Tempel BL, Hennig MH, Forsythe ID. The sound of silence: ionic mechanisms encoding sound termination. Neuron. 2011;71:911–925. doi: 10.1016/j.neuron.2011.06.028. [DOI] [PubMed] [Google Scholar]
  54. Kopp-Scheinpflug C, Sinclair JL, Linden JF. When sound stops: offset responses in the auditory system. Trends in Neurosciences. 2018;41:712–728. doi: 10.1016/j.tins.2018.08.009. [DOI] [PubMed] [Google Scholar]
  55. Kuwada S, Batra R. Coding of sound envelopes by inhibitory rebound in neurons of the superior olivary complex in the unanesthetized rabbit. The Journal of Neuroscience. 1999;19:2273–2287. doi: 10.1523/JNEUROSCI.19-06-02273.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Lara AH, Cunningham JP, Churchland MM. Different population dynamics in the supplementary motor area and motor cortex during reaching. Nature Communications. 2018;9:2754. doi: 10.1038/s41467-018-05146-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Lee CC, Kishan AU, Winer JA. Wiring of divergent networks in the central auditory system. Frontiers in Neuroanatomy. 2011;5:46. doi: 10.3389/fnana.2011.00046. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Li LY, Li YT, Zhou M, Tao HW, Zhang LI. Intracortical multiplication of thalamocortical signals in mouse auditory cortex. Nature Neuroscience. 2013;16:1179–1181. doi: 10.1038/nn.3493. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Lim Y, Lagoy R, Shinn-Cunningham BG, Gardner TJ. Transformation of temporal sequences in the zebra finch auditory system. eLife. 2016;5:e18205. doi: 10.7554/eLife.18205. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Linden JF, Schreiner CE. Columnar transformations in auditory cortex? A comparison to visual and somatosensory cortices. Cerebral Cortex. 2003;13:83–89. doi: 10.1093/cercor/13.1.83. [DOI] [PubMed] [Google Scholar]
  61. Liu J, Whiteway MR, Sheikhattar A, Butts DA, Babadi B, Kanold PO. Parallel processing of sound dynamics across mouse auditory cortex via spatially patterned thalamic inputs and distinct areal intracortical circuits. Cell Reports. 2019a;27:872–885. doi: 10.1016/j.celrep.2019.03.069. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Liu X, Zhang O, Qi J, Chen A, Hu K, Yan J. The onset and post-onset auditory responses of cochlear nucleus neurons are modulated differently by cortical activation. Hearing Research. 2019b;373:96–102. doi: 10.1016/j.heares.2018.12.013. [DOI] [PubMed] [Google Scholar]
  63. Luo F, Metzner W, Wu F, Wu FJ, Zhang S, Zhang SY, Chen Q, Chen QC. Duration-sensitive neurons in the inferior colliculus of horseshoe bats: adaptations for using CF-FM echolocation pulses. Journal of Neurophysiology. 2008;99:284–296. doi: 10.1152/jn.00935.2007. [DOI] [PubMed] [Google Scholar]
  64. Machens CK, Wehr MS, Zador AM. Linearity of cortical receptive fields measured with natural sounds. Journal of Neuroscience. 2004;24:1089–1100. doi: 10.1523/JNEUROSCI.4445-03.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Mastrogiuseppe F, Ostojic S. Linking connectivity, dynamics, and computations in Low-Rank recurrent neural networks. Neuron. 2018;99:609–623. doi: 10.1016/j.neuron.2018.07.003. [DOI] [PubMed] [Google Scholar]
  66. Mazor O, Laurent G. Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. Neuron. 2005;48:661–673. doi: 10.1016/j.neuron.2005.09.032. [DOI] [PubMed] [Google Scholar]
  67. Meyer AF, Williamson RS, Linden JF, Sahani M. Models of neuronal Stimulus-Response functions: elaboration, estimation, and evaluation. Frontiers in Systems Neuroscience. 2016;10:109. doi: 10.3389/fnsys.2016.00109. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Mukherjee A, Chen K, Wang N, Zhu J. On the degrees of freedom of reduced-rank estimators in multivariate regression. Biometrika. 2015;102:457–477. doi: 10.1093/biomet/asu067. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Murphy BK, Miller KD. Balanced amplification: a new mechanism of selective amplification of neural activity patterns. Neuron. 2009;61:635–648. doi: 10.1016/j.neuron.2009.02.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Muscinelli SP, Gerstner W, Schwalger T. How single neuron properties shape chaotic dynamics and signal transmission in random neural networks. PLOS Computational Biology. 2019;15:e1007122. doi: 10.1371/journal.pcbi.1007122. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Nakatsukasa Y. The low-rank eigenvalue problem. arXiv. 2019 https://arxiv.org/abs/1905.11490
  72. Nelken I, Rotman Y, Bar Yosef O. Responses of auditory-cortex neurons to structural features of natural sounds. Nature. 1999;397:154–157. doi: 10.1038/16456. [DOI] [PubMed] [Google Scholar]
  73. Neubert MG, Caswell H. Alternatives to resilience for measuring the responses of ecological systems to perturbations. Ecology. 1997;78:653–665. doi: 10.1890/0012-9658(1997)078[0653:ATRFMT]2.0.CO;2. [DOI] [Google Scholar]
  74. Oswald AM, Doiron B, Rinzel J, Reyes AD. Spatial profile and differential recruitment of GABAB modulate oscillatory activity in auditory cortex. Journal of Neuroscience. 2009;29:10321–10334. doi: 10.1523/JNEUROSCI.1703-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Oswald AM, Reyes AD. Maturation of intrinsic and synaptic properties of layer 2/3 pyramidal neurons in mouse auditory cortex. Journal of Neurophysiology. 2008;99:2998–3008. doi: 10.1152/jn.01160.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay É. Scikit-learn: machine learning in Python. Journal of Machine Learning Research. 2011;12:2825–2830. [Google Scholar]
  77. Phillips DP, Hall SE, Boehnke SE. Central auditory onset responses, and temporal asymmetries in auditory perception. Hearing Research. 2002;167:192–205. doi: 10.1016/S0378-5955(02)00393-3. [DOI] [PubMed] [Google Scholar]
  78. Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky EJ, Simoncelli EP. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature. 2008;454:995–999. doi: 10.1038/nature07140. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Pollak GD, Bodenhamer RD. Specialized characteristics of single units in inferior colliculus of mustache bat: frequency representation, tuning, and discharge patterns. Journal of Neurophysiology. 1981;46:605–620. doi: 10.1152/jn.1981.46.3.605. [DOI] [PubMed] [Google Scholar]
  80. Qin L, Chimoto S, Sakai M, Wang J, Sato Y. Comparison between offset and onset responses of primary auditory cortex ON-OFF neurons in awake cats. Journal of Neurophysiology. 2007;97:3421–3431. doi: 10.1152/jn.00184.2007. [DOI] [PubMed] [Google Scholar]
  81. Remington ED, Egger SW, Narain D, Wang J, Jazayeri M. A dynamical systems perspective on flexible motor timing. Trends in Cognitive Sciences. 2018;22:938–952. doi: 10.1016/j.tics.2018.07.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Rothschild G, Nelken I, Mizrahi A. Functional organization and population dynamics in the mouse primary auditory cortex. Nature Neuroscience. 2010;13:353–360. doi: 10.1038/nn.2484. [DOI] [PubMed] [Google Scholar]
  83. Rotman Y, Bar-Yosef O, Nelken I. Relating cluster and population responses to natural sounds and tonal stimuli in cat primary auditory cortex. Hearing Research. 2001;152:110–127. doi: 10.1016/S0378-5955(00)00243-4. [DOI] [PubMed] [Google Scholar]
  84. Saha D, Sun W, Li C, Nizampatnam S, Padovano W, Chen Z, Chen A, Altan E, Lo R, Barbour DL, Raman B. Engaging and disengaging recurrent inhibition coincides with sensing and unsensing of a sensory stimulus. Nature Communications. 2017;8:15413. doi: 10.1038/ncomms15413. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Sahani M, Linden JF. How linear are auditory cortical responses?. Advances in Neural Information Processing Systems; 2003. pp. 125–132. [Google Scholar]
  86. Savin C, Tkačik G. Maximum entropy models as a tool for building precise neural controls. Current Opinion in Neurobiology. 2017;46:120–126. doi: 10.1016/j.conb.2017.08.001. [DOI] [PubMed] [Google Scholar]
  87. Saxena S, Cunningham JP. Towards the neural population doctrine. Current Opinion in Neurobiology. 2019;55:103–111. doi: 10.1016/j.conb.2019.02.002. [DOI] [PubMed] [Google Scholar]
  88. Scholl B, Gao X, Wehr M. Nonoverlapping sets of synapses drive on responses and off responses in auditory cortex. Neuron. 2010;65:412–421. doi: 10.1016/j.neuron.2010.01.020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  89. Schuessler F, Dubreuil A, Mastrogiuseppe F, Ostojic S, Barak O. Dynamics of random recurrent networks with correlated low-rank structure. Physical Review Research. 2020;2:013111. doi: 10.1103/PhysRevResearch.2.013111. [DOI] [Google Scholar]
  90. Seely JS, Kaufman MT, Ryu SI, Shenoy KV, Cunningham JP, Churchland MM. Tensor analysis reveals distinct population structure that parallels the different computational roles of Areas M1 and V1. PLOS Computational Biology. 2016;12:e1005164. doi: 10.1371/journal.pcbi.1005164. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Shenoy KV, Kaufman MT, Sahani M, Churchland MM. A dynamical systems view of motor preparation: implications for neural prosthetic system design. Progress in Brain Research. 2011;192:33–58. doi: 10.1016/B978-0-444-53355-5.00003-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  92. Smith RL, Brachman ML. Operating range and maximum response of single auditory nerve fibers. Brain Research. 1980;184:499–505. doi: 10.1016/0006-8993(80)90817-3. [DOI] [PubMed] [Google Scholar]
  93. Smith RL, Brachman ML. Adaptation in auditory-nerve fibers: a revised model. Biological Cybernetics. 1982;44:107–120. doi: 10.1007/BF00317970. [DOI] [PubMed] [Google Scholar]
  94. Sollini J, Chapuis GA, Clopath C, Chadderton P. ON-OFF receptive fields in auditory cortex diverge during development and contribute to directional sweep selectivity. Nature Communications. 2018;9:2084. doi: 10.1038/s41467-018-04548-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Stopfer M, Jayaraman V, Laurent G. Intensity versus identity coding in an olfactory system. Neuron. 2003;39:991–1004. doi: 10.1016/j.neuron.2003.08.011. [DOI] [PubMed] [Google Scholar]
  96. Stringer C, Pachitariu M, Steinmetz N, Carandini M, Harris KD. High-dimensional geometry of population responses in visual cortex. Nature. 2019;571:361–365. doi: 10.1038/s41586-019-1346-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Stroud JP, Porter MA, Hennequin G, Vogels TP. Motor primitives in space and time via targeted gain modulation in cortical networks. Nature Neuroscience. 2018;21:1774–1783. doi: 10.1038/s41593-018-0276-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. Suga N. Single unit activity in cochlear nucleus and inferior colliculus of echo-locating bats. The Journal of Physiology. 1964;172:449–474. doi: 10.1113/jphysiol.1964.sp007432. [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. Sussillo D, Churchland MM, Kaufman MT, Shenoy KV. A neural network that finds a naturalistic solution for the production of muscle activity. Nature Neuroscience. 2015;18:1025–1033. doi: 10.1038/nn.4042. [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. Trefethen LN, Embree M. Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators. Princeton University Press; 2005. [DOI] [Google Scholar]
  101. Uurtio V, Monteiro JM, Kandola J, Shawe-Taylor J, Fernandez-Reyes D, Rousu J. A tutorial on canonical correlation methods. ACM Computing Surveys. 2018;50:1–33. doi: 10.1145/3136624. [DOI] [Google Scholar]
  102. Vater M, Habbicht H, Kössl M, Grothe B. The functional role of GABA and glycine in monaural and binaural processing in the inferior colliculus of horseshoe bats. Journal of Comparative Physiology A. 1992;171:541–553. doi: 10.1007/BF00194587. [DOI] [PubMed] [Google Scholar]
  103. Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, Burovski E, Peterson P, Weckesser W, Bright J, van der Walt SJ, Brett M, Wilson J, Millman KJ, Mayorov N, Nelson ARJ, Jones E, Kern R, Larson E, Carey CJ, Polat İ, Feng Y, Moore EW, VanderPlas J, Laxalde D, Perktold J, Cimrman R, Henriksen I, Quintero EA, Harris CR, Archibald AM, Ribeiro AH, Pedregosa F, van Mulbregt P, SciPy 1.0 Contributors SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods. 2020;17:261–272. doi: 10.1038/s41592-019-0686-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. Wang X, Lu T, Snider RK, Liang L. Sustained firing in auditory cortex evoked by preferred stimuli. Nature. 2005;435:341–346. doi: 10.1038/nature03565. [DOI] [PubMed] [Google Scholar]
  105. Wang X. Neural coding strategies in auditory cortex. Hearing Research. 2007;229:81–93. doi: 10.1016/j.heares.2007.01.019. [DOI] [PubMed] [Google Scholar]
  106. Williamson RS, Ahrens MB, Linden JF, Sahani M. Input-Specific gain modulation by local sensory context shapes cortical and thalamic responses to complex sounds. Neuron. 2016;91:467–481. doi: 10.1016/j.neuron.2016.05.041. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Winer JA, Larue DT, Diehl JJ, Hefti BJ. Auditory cortical projections to the cat inferior colliculus. The Journal of Comparative Neurology. 1998;400:147–174. doi: 10.1002/(SICI)1096-9861(19981019)400:2<147::AID-CNE1>3.0.CO;2-9. [DOI] [PubMed] [Google Scholar]
  108. Xu N, Fu Z-Y, Chen Q-C. The function of offset neurons in auditory information processing. Translational Neuroscience. 2014;5:275–285. doi: 10.2478/s13380-014-0235-5. [DOI] [Google Scholar]
  109. Yu YQ, Xiong Y, Chan YS, He J. In vivo intracellular responses of the medial geniculate neurones to acoustic stimuli in anaesthetized guinea pigs. The Journal of Physiology. 2004;560:191–205. doi: 10.1113/jphysiol.2004.067678. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision letter

Editor: Peter Latham1
Reviewed by: Guillaume Hennequin2

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

This paper explores the hypothesis that OFF responses are single-cell signatures of network dynamics, and shows convincingly that a model based on recurrent interactions does a much better job explaining the data than the de facto standard model, which is based purely on single cell dynamics. In addition, the proposed recurrent network model generates transient OFF responses very similar to what is seen in experiments.

Decision letter after peer review:

Thank you for submitting your article "Population coding and network dynamics during OFF responses in auditory cortex" for consideration by eLife. Your article has been reviewed by three peer reviewers, one of whom is a member of our Board of Reviewing Editors, and the evaluation has been overseen by Barbara Shinn-Cunningham as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Guillaume Hennequin (Reviewer #2).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission. Please aim to submit the revised version within two months.

Summary:

In this paper, Bondanelli and colleagues study the spatiotemporal structure of OFF responses in A1. They propose that these responses arise from a simple linear mechanism involving a certain form of nonnormal population dynamics. They fit such a network model to A1 data and find that it explains several important aspects of OFF responses, some, but importantly, not all, of which are also captured by a single-neuron mechanism.

Essential revisions:

We're very sorry for the long delay. That's in part because at least one of us found the paper very hard to read, and it took a long time to figure out what was going on. And as you'll see, it's not clear if we did that very well. We think the problem is that you're missing quantitative analysis; in many cases it seems that we're supposed to understand why a particular piece of analysis supported your model, but it simply wasn't clear to us why that was true. Explicit examples of this will be given below.

A major issue for us has been that you begin with a discussion of special cases of Equation (2) (last two paragraphs of page 6) but then go on to test predictions of THE model without telling us which of these special assumptions you are actually making (if any) in the production of your results (Figure 4 specifically). We believe that the first paragraph of "Testing the network mechanisms on the data" should begin with a clear description of the actual model considered (including whether your transient channels are mutually orthogonal, and how initial conditions relate to those planes).

More specifically, to summarize our current understanding: under the assumption that the v's are orthonormal, the u's are orthogonal, and uk dot vl = 0 for all k \ne l (not strictly true, but close enough in high dimensions), the most general solution (a generalization of Eq. 34 to an arbitrary number of modes) is

r(t) = [uperp + sumk uk (ak + vk dot uk t)] exp(-t)

where uperp is perpendicular to all the uk and the ak are arbitrary.

This appears late in Methods. But it's important, because there seem to be hidden assumptions about uperp and the ak. Namely,

- the ak are all set to zero

- uperp = uk for only one k, with k dependent on the stimulus.

Assuming this is correct, it's a critical part of the model. And it is, in fact, an extra assumption.

Given that setup, we believe there were several predictions:

a. Lines 216-228: There's a prediction about the correlational structure that we didn't fully understand. Equations are needed, in the main text, that make the predictions explicit.

b. Lines 229-238: Correlations between the beginning and the peak of the off response should be small. Again, equations are needed; this should be quantified.

c. Lines 239-253:

"To further examine to which extent the population dynamics during OFF responses could be accounted for by a non-normal linear dynamical system, we next fitted our network model population OFF responses to all stimuli at once using reduced-rank regression (Figure S1; see Methods). Qualitatively, we found that the fit reproduced well the low-dimensional population trajectories (Figure 4C)."

As far as we could tell, this just tells us that the fit is good; it doesn't tell us anything about whether the OFF responses could be accounted for by a non-normal linear dynamical system. Is that correct, or did we miss something? If that is correct, it seems like this paragraph is mainly a distractor.

d. Lines 254-262: The fitted connectivity, J, had eigenvalues less than 1 and the symmetrized version had eigenvalues greater than 1. That's consistent with the low rank, non-normal model. However, couldn't such a J come from a high rank, non-normal model? If so, it's not strong evidence for your model. This should be clarified.

e. Lines 263-283: There were several results here, but it's not clear how much they supported the model. For example,

"We computed the overlap between the subspaces spanned by the transient channels for each pair of stimuli s1 and s2 , analogous to the overlap between the vectors u(s1) and u(s2) in Eq. (2) (see Methods). We found that the overlap between the transient channels matched the overlap between the response dynamics and population states at the end of stimulus presentation (Figure 4F, right panel), consistent with the interpretation that individual OFF responses may be generated through transient coding channels of the form given by Eq. (2)."

It's not clear what we should expect here. Is this strong or weak evidence for the model? You need to quantify what we should expect, and be clear whether this is only consistent with the model, or provides evidence for it.

"While the fraction of variance explained when using the matrix JSum was necessarily lower than the one computed using JFull , the model with connectivity JSum could still explain a consistent fraction of the total variance, and yielded values of R2 significantly higher than the ones obtained from the shuffles."

Again, it's not clear what we should expect. Is this strong or weak evidence for the model?

f. Lines 328-331: for the single cell model in Eq. 3, the correlations between the beginning and peak of the off response are high. This seems easily fixable with a very minor tweak. Suppose there are two kinds of neurons, some that peak early,

ri(t) = ai(s) fi(t)

and some that peak late,

ri(t) = bi(s) gi(t)

(for instance, fi(t) = Theta(t) e^{-t} and gi(t) = Theta(t) t*e^{-t}). Different stimuli (s) could activate different sets of neurons.

Alternatively, the single neuron responses could be of the form

ri(t) = ai(s) fi(t) + bi(s) gi(t)

where the a's and b's are anticorrelated (either a is large and b is small or vice versa) and fi and gi are as above.
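To make the first of these variants concrete, a toy simulation (our own sketch; amplitudes and kernels are arbitrary stand-ins, numpy assumed) shows that it yields both transient amplification and decorrelated initial/peak states:

import numpy as np

rng = np.random.default_rng(1)
N = 200
t = np.linspace(0.0, 5.0, 500)
early = np.arange(N) < N // 2                         # half the cells peak early
amp = np.abs(rng.standard_normal(N)) * np.where(early, 1.0, 3.0)
kern = np.where(early[:, None], np.exp(-t), t * np.exp(-t))  # fi(t) or gi(t)
r = amp[:, None] * kern                               # ri(t) = amplitude x kernel

norm = np.linalg.norm(r, axis=0)
k = norm.argmax()                                     # time of peak population activity
print(norm[k] / norm[0])                              # > 1: transient amplification
print(np.corrcoef(r[:, 0], r[:, k])[0, 1])            # near 0: initial and peak states decorrelate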

Are these models ruled out by the data? If not, that seems to weaken the paper (although maybe there are other things wrong with these models). In either case, models like these should be discussed.

g. Lines 332-353:

"A second important difference between the two models is that in the single-cell model the temporal shape of the response of each neuron is the same across all stimuli, while for the recurrent model this is not necessarily the case."

Given this opening sentence, why not just show that for the recurrent network model, the temporal shape of the response is not the same across all stimuli? Instead, there's a lot of analysis that we didn't fully understand. In particular, there's a rather sudden switch from specifying the form of Li(t) (Eq. 44) to fitting Li(t) (Eq. 48). What's missing is a quantitative prediction of what we should expect, followed by a demonstration that the data is consistent with that prediction.

In addition, we have some specific suggestions.

1. Lines 174-177,

"We focus on two outstanding properties identified in the OFF responses to different stimuli. The first one is the strong amplification of OFF responses, as quantified by the transient increase of the distance from baseline ||r(t)||. The second important feature is the low-dimensionality of OFF response trajectories and their global structure across different stimuli."

It seems that this is all you're going to focus on, but it's not. It would be nice to know, at some point early in the manuscript, exactly what you're going to do.

2. What's the difference between Figures 2D and 4A (left panel)?

3. Lines 329-31,

"The single-cell model predicts that these two states are only weakly decorrelated and essentially lie along a single dimension (Figure 5E and S7), which is inconsistent with experimental observations"

Which experimental observations?

4. Throughout the paper we found it unclear how initial conditions were chosen for the model, e.g. in Figure 3. From line 167, we are led to think that they are picked from the actual data, "we model the firing activity at the end of stimulus presentation as the initial condition of the network dynamics" -- since the model doesn't explicitly model the stimulus period as far as we understand (?), we infer this activity is taken from the data? But does the model even have the same number of neurons?

5. The purpose of Figure 2 is to convey that different stimuli evoke orthogonal OFF responses, and these OFF responses are low dimensional. By my calculation, for each conditional mean (over 20 trials), there are 400ms * 31.5 frame/s ~= 13 samples used to estimate the PCs and variance explained resulting in a maximum dimensionality (PCs to get to 80%) of 13. If this is the case, a more careful comparison to the cross-condition dimensionality is warranted, which accounts for sample size.

Also, we would expect there to be fewer time points on the traces in Figure 2B. How were the trial-averaged responses interpolated?

What steps were taken to ensure that the same unit wasn't double counted during pooling across sessions? Mistakes in this process can contribute to inflated variance along primary PCs, creating the illusion of low dimensionality.

6. Statements of relative degree of subspace overlap should be verified with a hypothesis test. For data, this can be done by resampling neurons.

7. You should justify why LDA (naive covariance assumption) is more suitable for assessing linear separability in Figure 1C than an SVM. Absent justification, an SVM should be used to get the clearest assessment of linear separability.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "Network dynamics underlying OFF responses in the auditory cortex" for further consideration by eLife. Your revised article has been evaluated by Barbara Shinn-Cunningham (Senior Editor) and a Reviewing Editor.

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below:

The good news is that the paper is much improved. The writing is more clear, and the analysis of dimensions of amplified trial-to-trial variability is compelling. The less good news is that now that we understand it better, additional issues have come up. All can, we believe, be dealt with relatively easily.

Let's start with the data you are trying to explain. Basically, after the offset of an auditory stimulus, there is transient amplification, with different stimuli activating nearly orthogonal subspaces.

To model this, low rank linear dynamics is used, of the form

dr/dt = sum_{k=1}^R uk vk r – r,

where r (= r1, r2, …, rN) is the trial averaged activity. The network had a few additional properties,

a. The rank, R, is much less than the number of neurons, N.

b. The u's and v's are nearly orthogonal.

c. The symmetric part of the connectivity matrix must have eigenvalues with real part >1.

d. The model has no external input, so the response to a particular stimulus (after offset) depends only on the value of r immediately after stimulus offset, here denoted r0(s) where s is the stimulus.

e. For any stimulus, s, r0(s) has appreciable overlap with a small number of vk (on the order of 5).

This model did an OK job explaining the data. In particular, Figure 4H, left (the model), is supposed to match Figure 2D (the data). But the match is not so good: the off-diagonal channel overlap in Figure 2D is near zero, whereas it's around 0.5 in Figure 4H. And the full model explains only about half the variance (Figure 4H, right).

In addition, you claim that the dynamics is autonomous. The test for this is described on lines 251-9:

"We first tested the starting assumption that the OFF response trajectories are consistent with linear, autonomous network dynamics driven only by the initial pattern of activity and network interactions. If that is the case, the OFF response dynamics are fully determined by the initial activity at stimulus offset (t = 0). This assumption implies that the geometric structure of the initial activity states r0(s) fully predicts the geometric structure of the dynamics r(s)(t) during the OFF responses (see Materials and methods). In particular, two stimuli s1 and s2 corresponding to correlated initial states r0(s1) and r0(s2) will lead to correlated trajectories r(s1)(t) and r(s2)(t) in the model. Conversely, if two OFF-response trajectories r(s1)(t) and r(s2)(t) are orthogonal, the initial states r0(s1) and r0(s2) are necessarily orthogonal."

We're pretty sure that isn't correct: correlations are preserved only if the dynamics causes pure rotation at constant angular speed, something that doesn't happen in general, even for linear dynamics.
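A two-dimensional toy example (a shear matrix of our own choosing) makes this explicit:

import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 0.0],
              [ 5.0, -1.0]])                          # non-normal: decay plus shear
P = expm(A)                                           # propagator at t = 1
ra = P @ np.array([1.0, 0.0])                         # image of initial state e1
rb = P @ np.array([0.0, 1.0])                         # image of the orthogonal state e2
print(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))   # ~0.98

Two orthogonal initial states end up nearly collinear after one time constant, so a linear flow need not preserve angles between states.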

And finally, Figure 4E, right, shows that many u's and v's are not orthogonal -- judging from that figure, on the order of 10-20%. These terms were of the form

|u1| v2 v1 – |u2| v1 v2

where v1 and v2 are nearly orthogonal. You claim that terms like this can't lead to transient amplification. That's true if |u1| = |u2|. But if |u1| and |u2| are sufficiently different, it's not true. But maybe |u1| \approx |u2| in the data? If so, that should be clear. In any case, this seems like a detail -- it's certainly possible to do the analysis with these terms included.

The lack of orthogonality between the u's and v's doesn't seem so serious. But the lack of agreement between Figures 2D and 4H, combined with no strong evidence for autonomous dynamics, seems potentially problematic. Of course, no model ever fits perfectly. But it's hard to figure out exactly what the take-home message is. If there was strong evidence that the dynamics is autonomous, that would help -- at least we would know that the intrinsic dynamics is responsible for the transient amplification.

As a potentially important aside: the eigenvalue spectrum of Figure 4D shows one marginally stable, non-oscillatory mode. Based on past experience of one of the reviewers, it may be that the lack of agreement between Figures 2D and 4H is because of this mode. The reasoning is that if this slow mode is present by a small amount in all initial conditions, it will generate the fairly weak overlap in Figure 2D, but because it sticks, it might end up dominating the late off-responses for each stimulus after all the rest has decayed away. If that's the case, then this mode will be part of the principal subspace extracted for each stimulus. The measure of overlap based on "subspace angle" is very sensitive to this, which could explain the large overlaps in Figure 4H. Importantly, that marginally stable eigenvalue may be an artifact of insufficient regularization. There are a lot of "ifs" in the above reasoning, but if this mode can be eliminated, perhaps by different regularization, agreement may improve. We don't know if that's possible, but it's worth thinking about.

There is a lot of very nice analysis in the paper, and all reviewers agree that it should be published. However, it needs a clear take-home message. Exactly what that is depends, we believe, on how strongly the evidence is for autonomous dynamics. If that's very strong, then at the very least the take-home message is:

"Off cells show transient amplification that is generated by internal dynamics, not external input. That transient dynamics is mildly consistent with low rank connectivity in which dynamics can mainly be reduced to multiple 2-D systems. However, we can't rule out more complex behavior, with transient amplification fundamentally involving several dimensions."

Possibly you can come up with a stronger conclusion, but that's the best we could come up with.

If the evidence for autonomous dynamics is weak, that seems to be a fundamental problem, since the transient amplification may be caused solely by external input. But presumably that can never be ruled out -- a fact that you should probably mention. It's not a plausible scenario, so we don't think it will weaken the paper.

Our main comment, then, is to explore more carefully whether or not dynamics is autonomous, and/or be more careful about the conclusions. One possible way to test for autonomous dynamics is to plot the trajectories and look for intersections, or near intersections -- what Mark Churchland calls tangled trajectories, for which quantification is possible. Absence of near intersections would at least be consistent with autonomous dynamics. But there may be better methods. For instance, you might try LFADS: you can fit both an RNN and a linear dynamical model, and see which one fits better. That will, though, be a huge amount of work, and you should do it only if you want to.
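For reference, one common formulation of that tangling metric is Q(t) = max_{t'} ||r'(t) - r'(t')||^2 / (||r(t) - r(t')||^2 + eps); a few-line sketch (our own, numpy assumed):

import numpy as np

def tangling(R, dt, eps=1e-6):
    # R: trajectory over time, shape (T, D); returns Q(t) for each time point
    dR = np.gradient(R, dt, axis=0)                   # estimated state derivatives
    num = ((dR[:, None, :] - dR[None, :, :]) ** 2).sum(-1)
    den = ((R[:, None, :] - R[None, :, :]) ** 2).sum(-1)
    return (num / (den + eps)).max(axis=1)

High Q flags time points where nearby states have very different derivatives, which a smooth autonomous system cannot produce.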

In addition, we suggest exploring a richer single-neuron model. The current model has the form

ri^s(t) = r_{0i}^s Li(t).

If ri is non-negative, this can be written

dr_i/dt = fi(t) ri,

with ri(t=0) a function of the stimulus. It would be more convincing to consider a model with a nonlinearity; say

dri/dt = fi(t) gi(ri).

The nonlinearity gi could be parameterized with, probably, only a handful of parameters. For instance, it could be written as a sum of basis functions, much the way Li was parameterised. The single-cell model was rejected because it explained only about 20% of the variance, versus 50% for the recurrent network. It would be reassuring if adding a nonlinearity didn't bring this up to 50%. Again, this also is only a suggestion. However, if you stick with the current single cell model, you'll need to frame your comparison less generally than "recurrent versus single cell".
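Concretely, with Gaussian bumps as basis functions the suggestion amounts to something like the following (all names and values are illustrative; the coefficients c would be fitted to the data):

import numpy as np
from scipy.integrate import solve_ivp

centers = np.linspace(0.0, 2.0, 5)                    # basis-function centers (toy)
c = np.array([0.8, 0.3, -0.4, -0.9, -1.2])            # coefficients, to be fitted in practice

def g(r):                                             # g(r) = sum_j c_j exp(-(r - mu_j)^2)
    return float(np.sum(c * np.exp(-(r - centers) ** 2)))

def rhs(t, r):                                        # dr/dt = f(t) g(r), with f(t) = e^{-t} (toy)
    return [np.exp(-t) * g(r[0])]

sol = solve_ivp(rhs, (0.0, 5.0), [1.2])               # r(0) set by the stimulus
print(sol.y[0, -1])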

eLife. 2021 Mar 24;10:e53151. doi: 10.7554/eLife.53151.sa2

Author response


Essential revisions:

We're very sorry for the long delay. That's in part because at least one of us found the paper very hard to read, and it took a long time to figure out what was going on. And as you'll see, it's not clear if we did that very well. We think the problem is that you're missing quantitative analysis; in many cases it seems that we're supposed to understand why a particular piece of analysis supported your model, but it simply wasn't clear to us why that was true. Explicit examples of this will be given below.

We thank the reviewers for very constructive feedback which has allowed us to significantly improve the manuscript. Following the reviewers' suggestions we have rewritten large parts of the Results and Methods. Here we first briefly list the main analyses we added. Further below, we explain the new structure of the manuscript, and how the new analyses address the reviewers' comments.

List of new analyses:

– We theoretically derived quantitative predictions on the structure of the network connectivity for a general low-rank connectivity matrix and tested these predictions on the AC data (Figure 4E, Methods Section 2.3, 2.7);

– We derived quantitative predictions for the relationship between the structure of the network connectivity and the population vector at stimulus offset and tested these predictions on AC data (Figure 4F, Methods Section 2.3);

– We derived a quantitative prediction for the value of the correlation between the population activity vectors at stimulus offset and at the time of maximum distance from baseline (Figure 4G, Methods Section 2.5);

– We added a method section illustrating the prediction for the correlation between the structure of the population vectors at stimulus offset and the dynamical trajectories (Figure 4A) for a general low-rank network model (Methods Section 2.4);

– We adopted a different approach when comparing the recurrent model with the single-cell model. In the new version of the manuscript, we directly fit the temporal response profiles Li(t) of the single units to the data (we correspondingly updated Figure 5B-C; Methods Section 3.1). We then compare the predictions of the two models on AC data (Figure 5F).

– We added a new (unrequested) analysis comparing the predictions of the recurrent and single-cell model on the population dynamics across single trials (Figure 6, Methods Section 3.2, 3.3). We show that the two models yield quantitatively different predictions for the structure of the trial-to-trial response variability, and that the AC data is consistent with the recurrent model and not with the single-cell model. This adds substantial new evidence in favor of the recurrent model.

– We added controls for the principal component analysis of the OFF responses to individual stimuli and across stimuli (Figure S1, Methods Section 1.4), and for the values of subspace overlap (i.e. Figure 2D) and correlation between initial and peak states (i.e. Figure 4G). These controls are described in detail in Methods Section 1.6 and shown in Figures S7, S8.

A major issue for us has been that you begin with a discussion of special cases of Equation (2) (last two paragraphs of page 6) but then go on to test predictions of THE model without telling us which of these special assumptions you are actually making (if any) in the production of your results (Figure 4 specifically). We believe that the first paragraph of "Testing the network mechanisms on the data" should begin with a clear description of the actual model considered (including whether your transient channels are mutually orthogonal, and how initial conditions relate to those planes).

Following the reviewers’ suggestion, we have totally rewritten the subsection “Recurrent network mechanism” (and the corresponding Methods) and most of the subsections “Testing the network mechanism” and “Comparison with a single-cell model”.

To help the reviewers navigate the changes, here we provide a summary of the new organization of the manuscript.

First two subsections of Results: we use population analyses of calcium activity to identify three main features of OFF responses in the auditory cortex:

F1. OFF responses correspond to transiently amplified trajectories;

F2. responses to individual stimuli lie in low-dimensional subspaces;

F3. responses to different stimuli lie in largely orthogonal subspaces.

Subsection “Recurrent network mechanism”: we propose a recurrent network model for OFF responses based on only two hypotheses:

H1. dynamics are driven exclusively by recurrent interactions (Eq. 1), and the only effect of each stimulus s is to set the initial state r0s that corresponds to activity at stimulus offset;

H2. the recurrent connectivity is of low-rank type, i.e. characterized by a set of connectivity vectors ur and vr (Eq. 1).

We then systematically derive a set of sufficient conditions on vectors ur, vr and r0s to reproduce the observed features F1-F3. These sufficient conditions provide corresponding predictions P1-P3 that can be tested on the data.

Subsection “Testing the network mechanism”: we systematically test on the data the hypotheses H1 and H2, as well as the predictions P1-P3. The structure of this subsection now mirrors the structure of “Recurrent network mechanism”.

Subsection “Comparison with a single-cell model”: we first compare predictions of the fitted models on trial-averaged data; we then compare new predictions for trial-to-trial variability.

More specifically, to summarize our current understanding: under the assumption that the v's are orthonormal, the u's are orthogonal, and uk dot vl = 0 for all k \ne l (not strictly true, but close enough in high dimensions), the most general solution (a generalization of Eq. 34 to an arbitrary number of modes) is

r(t) = [uperp + sumk uk (ak + vk dot uk t)] exp(-t)

where uperp is perpendicular to all the uk and the ak are arbitrary.

This solution appears late in the Methods. But it's important, because there seem to be hidden assumptions about uperp and the ak. Namely,

– the ak are all set to zero

– uperp = uk for only one k, with k dependent on the stimulus.

Assuming this is correct, it's a critical part of the model. And it is, in fact, an extra assumption.

The new version of the text clarifies in detail the questions raised by the reviewers. In short, we make no a priori assumptions on the vectors ur, vr and r0s (the initial state) other than that the v's are orthonormal and the u's are orthogonal; there is no loss of generality there (such u's and v's can be uniquely specified by an SVD of the connectivity matrix).

We then derive sufficient conditions on ur and vr (uk, vk in the reviewers' notation) and r0 to reproduce F1-F3, and find in particular:

– An upper bound on ur dot vs (always satisfied if ur dot vs = 0 for all r,s) to have transient amplification (F1)

– Conditions on r0 with respect to ur, vr, that correspond to conditions on uperp and ak in the reviewers’ notation

We next fit the network model on the data (without specific constraints on ur, vr and r0s), and check whether the predicted constraints on ur, vr and r0s hold.

The Methods now start with equations equivalent to the general solution put forward by the reviewers (Eq. 33 in a unit-rank example, and Eq. 35 for arbitrary rank).

Given that setup, we believe there were several predictions:

a. Lines 216-228: There's a prediction about the correlational structure that we didn't fully understand. Equations are needed, in the main text, that make the predictions explicit.

The goal was simply to test H1, i.e. that the states of initial activity at stimulus offset determine the ensuing dynamics. We have rewritten that part of the text, and added a detailed section in the Methods (Section 2.4, "Correlation between structure of initial states and structure of the dynamics").

b. Lines 229-238: Correlations between the beginning and the peak of the off response should be small. Again, equations are needed; this should be quantified.

In the new organization of the text, this prediction now appears later. We have now derived a quantitative prediction for the upper bound of the correlation between initial state and peak response (Methods Section 2.5, "Correlation between initial and peak state"), and added a panel (Figure 4G) demonstrating this prediction is quantitatively satisfied.

c. Lines 239-253:

"To further examine to which extent the population dynamics during OFF responses could be accounted for by a non-normal linear dynamical system, we next fitted our network model population OFF responses to all stimuli at once using reduced-rank regression (Figure S1; see Methods). Qualitatively, we found that the fit reproduced well the low-dimensional population trajectories (Figure 4C)."

As far as we could tell, this just tells us that the fit is good; it doesn't tell us anything about whether the OFF responses could be accounted for by a non-normal linear dynamical system. Is that correct, or did we miss something? If that is correct, it seems like this paragraph is mainly a distractor.

The aim of this paragraph is twofold: (i) to test H1, i.e. to show that an autonomous dynamical system is a good model of the data, using the approach introduced in Elsayed and Cunningham (2017); and (ii) to determine the effective connectivity matrix and extract the vectors ur and vr, which are then used to test the predictions on the relationship between ur, vr and r0s (Figure 4E and F). We have now rephrased that part (and reorganized the whole subsection).

d. Lines 254-262: The fitted connectivity, J, had eigenvalues less than 1 and the symmetrized version had eigenvalues greater than 1. That's consistent with the low rank, non-normal model. However, couldn't such a J come from a high rank, non-normal model? If so, it's not strong evidence for your model. This should be clarified.

The reviewers are correct: the examination of the eigenvalues tests only the predictions that ensue from H1 (autonomous dynamical system), not H2 (low-rank structure). So this test provides only evidence for H1, not H2, and we have clarified that in the new version of the text. H2 and the ensuing predictions are tested separately.

e. Lines 263-283: There were several results here, but it's not clear how much they supported the model. For example,

"We computed the overlap between the subspaces spanned by the transient channels for each pair of stimuli s1 and s2 , analogous to the overlap between the vectors u(s1) and u(s2) in Eq. (2) (see Methods). We found that the overlap between the transient channels matched the overlap between the response dynamics and population states at the end of stimulus presentation (Figure 4F, right panel), consistent with the interpretation that individual OFF responses may be generated through transient coding channels of the form given by Eq. (2)."

It's not clear what we should expect here. Is this strong or weak evidence for the model? You need to quantify what we should expect, and be clear whether this is only consistent with the model, or provides evidence for it.

Here we test the predictions that result from the requirement that the model reproduces F3 (orthogonal trajectories in response to different stimuli). The corresponding prediction has now been made explicit in the “Recurrent network mechanism” subsection. Specifically, we test Eqs. 3-4.

"While the fraction of variance explained when using the matrix J=Sum was necessarily lower than the one computed using J=Full , the model with connectivity J=Sum could still explain a consistent fraction of the total variance, and yielded values of R2 significantly higher than the ones obtained from the shuffles."

Again, it's not clear what we should expect. Is this strong or weak evidence for the model?

We have now rephrased this part. If the prediction (Eqs. 3-4) is exact, the explained variance should be the same whether the model is fit on the responses to all stimuli at once or separately on the responses to different stimuli. We find that the explained variance is slightly lower in the second case, but still much larger than chance (as could be expected, since the responses are orthogonal to many, but not all, stimuli). Hence Eq. 3 is not exact, but a reasonable approximation. The strength of the evidence for the model is therefore directly quantified by the value of R2 when fitting the model separately to different stimuli.

f. Lines 328-331: for the single cell model in Eq. 3, the correlations between the beginning and peak of the off response are high. This seems easily fixable with a very minor tweak. Suppose there are two kinds of neurons, some that peak early,

ri(t) = ai(s) fi(t)

and some that peak late,

ri(t) = bi(s) gi(t)

(for instance, fi(t) = Theta(t) e^{-t} and gi(t) = Theta(t) t*e^{-t}). Different stimuli (s) could activate different sets of neurons.

Alternatively, the single neuron responses could be of the form

ri(t) = ai(s) fi(t) + bi(s) gi(t)

where the a's and b's are anticorrelated (either a is large and b is small or vice versa) and fi and gi are as above.

Are these models ruled out by the data? If not, that seems to weaken the paper (although maybe there are other things wrong with these models). In either case, models like these should be discussed.

In the original manuscript, the correlations between initial and peak activity were assessed by assuming a specific form of the single-neuron response, without constraining it by the data. Rather than explore this quantity for various assumptions on the single-neuron responses, we decided to first fit the single-neuron responses to the data, and then compute the resulting correlations between initial and peak activity. Figure 5F shows that these correlations are much higher than those on the actual activity trajectories. This clearly demonstrates that the single-cell model does not capture the decorrelation present in the data. (Note that the second form suggested by the reviewers, ri(t) = ai(s) fi(t) + bi(s) gi(t), is not consistent with our hypothesis that a single-cell response depends only on the activity of the cell at stimulus offset; it is instead equivalent to a unit-rank network.)

We moreover added a new analysis and figure (Figure 6) comparing the predictions of the recurrent and single-cell mechanisms for trial-to-trial fluctuations. The results clearly indicate that the single-cell model is inconsistent with the data, and provide significant new evidence in favor of the network model.

g. Lines 332-353:

"A second important difference between the two models is that in the single-cell model the temporal shape of the response of each neuron is the same across all stimuli, while for the recurrent model this is not necessarily the case."

Given this opening sentence, why not just show that for the recurrent network model, the temporal shape of the response is not the same across all stimuli? Instead, there's a lot of analysis that we didn't fully understand. In particular, there's a rather sudden switch from specifying the form of Li(t) (Eq. 44) to fitting Li(t) (Eq. 48). What's missing is a quantitative prediction of what we should expect, followed by a demonstration that the data is consistent with that prediction.

We thank the reviewers for this important comment. In the new version of the manuscript, we have entirely removed the part where we assumed a specific form of single-neuron responses. We now fit it to the data right away.

Following the reviewers’ suggestion, we also show that in the network model the temporal shape of the response is not the same across stimuli (Figure 3E).

In addition, we have some specific suggestions.

1. Lines 174-177,

"We focus on two outstanding properties identified in the OFF responses to different stimuli. The first one is the strong amplification of OFF responses, as quantified by the transient increase of the distance from baseline ||r(t)||. The second important feature is the low-dimensionality of OFF response trajectories and their global structure across different stimuli."

It seems that this is all you're going to focus on, but it's not. It would be nice to know, at some point early in the manuscript, exactly what you're going to do.

As mentioned above, we have now completely reorganized the manuscript to make clear that we focus on three features identified in the data (F1-F3 listed above). These features are now clearly listed in the Abstract and Introduction, and provide the guiding thread throughout the Results.

2. What's the difference between Figures 2D and 4A (left panel)?

Figure 2D displays the correlation matrix between the subspaces explored during the OFF responses to different stimuli. Figure 4A displays the correlation matrix of the initial states (= activity at stimulus offset, not included in the OFF responses). We have now clarified this in the text and figure legends.

3. Lines 329-31,

"The single-cell model predicts that these two states are only weakly decorrelated and essentially lie along a single dimension (Figure 5E and S7), which is inconsistent with experimental observations"

Which experimental observations?

We have rephrased this sentence. As mentioned above, in the new version of the manuscript, we don't fix the shape of the single-cell responses, but instead fit them directly to the data. We compute the correlation between initial state and peak state from the fitted responses when fitting the single-cell model to a progressively increasing number of stimuli. We found that the value of the correlation predicted by the single-cell model deviated from the experimental value as more stimuli were included in the fit, in contrast to the recurrent model, which predicted a value of the correlation consistent with data for any number of fitted stimuli (Figure 5F).

4. Throughout the paper we found it unclear how initial conditions were chosen for the model, e.g. in Figure 3. From line 167, we are led to think that they are picked from the actual data, "we model the firing activity at the end of stimulus presentation as the initial condition of the network dynamics" -- since the model doesn't explicitly model the stimulus period as far as we understand (?), we infer this activity is taken from the data? But does the model even have the same number of neurons?

Figure 3 is only an illustration of the model (with rank = 1), where the initial conditions r0 are picked to satisfy the conditions derived on the relationship between ur, vr and r0 to reproduce F1-F3. Once we turn to testing the predictions on the data, the initial conditions are taken from the data: for each stimulus they correspond to the calcium activity recorded at stimulus offset (50 ms before the termination of the stimulus). Prior to fitting the model, we applied dimensionality reduction (PCA) to the data to avoid overfitting, so that the dimensionality of the fitted model is equal to the number of principal components. This has been specified in Section 1.7 of the Methods, "Linear dynamical system fit".
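For illustration, the pipeline has the following shape (stand-in random data; the exact preprocessing and regularization are given in the Methods):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X = rng.standard_normal((16, 13, 500))                # stimuli x time points x neurons (stand-in)

D = 100                                               # PCA dimensionality used for denoising
pca = PCA(n_components=D).fit(X.reshape(-1, X.shape[-1]))
Z = pca.transform(X.reshape(-1, X.shape[-1])).reshape(16, 13, D)

Z_now = Z[:, :-1].reshape(-1, D)                      # states at time t, pooled over stimuli
Z_next = Z[:, 1:].reshape(-1, D)                      # states at time t + 1
A = Ridge(alpha=1.0, fit_intercept=False).fit(Z_now, Z_next).coef_
print(A.shape)                                        # (100, 100): effective propagator on the PCs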

5. The purpose of Figure 2 is to convey that different stimuli evoke orthogonal OFF responses, and these OFF responses are low dimensional. By my calculation, for each conditional mean (over 20 trials), there are 400ms * 31.5 frame/s ~= 13 samples used to estimate the PCs and variance explained resulting in a maximum dimensionality (PCs to get to 80%) of 13. If this is the case, a more careful comparison to the cross-condition dimensionality is warranted, which accounts for sample size.

We thank the reviewers for pointing this out. We have now added two types of controls for the principal component analysis, which aim at controlling respectively for sample size and for trial-to-trial response variability. To control for sample size we performed PCA on the trial-averaged responses to individual stimuli using the raw data (13 time points). Next we performed PCA on the responses where we shuffled cell labels independently for each time point, thus preserving the firing rate statistics at each time point but removing temporal correlations in the response. We found that the dimensionality of the shuffled responses was higher than the dimensionality of the real responses, indicating that the value of the dimensionality obtained for individual stimuli is not a direct consequence of sample size. These results are shown in Figure S1A.

To account for trial-to-trial variability in the response we performed a second control which uses cross-validated PCA. This approach (Stringer et al. 2018) provides an unbiased estimate of the signal variance by computing the covariance between a training and a test set of responses (e.g. two different trials) to an identical collection of stimuli. We show that the same number of dimensions obtained with ordinary PCA explains 80% of the variance when the trial-to-trial variability of the responses to a fixed stimulus is taken into account. We describe the details of this procedure in the Methods, Section 1.4 "Cross-validated PCA", and in Figure S1B-C.
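In sketch form, the cross-validated estimate replaces the PC variance by a covariance across independent trial sets (our simplified rendering of the Stringer et al. 2018 procedure, on stand-in data):

import numpy as np

def cv_pca(train, test, n_pc=10):
    # train, test: independent estimates (e.g. disjoint trial averages) of the
    # responses to the same stimuli, shape (n_samples, n_neurons)
    train = train - train.mean(0)
    test = test - test.mean(0)
    pcs = np.linalg.svd(train, full_matrices=False)[2][:n_pc].T   # training-set PCs
    return ((train @ pcs) * (test @ pcs)).sum(0)      # signal variance per PC

rng = np.random.default_rng(3)
signal = rng.standard_normal((13, 500))               # shared stimulus-locked component
x1 = signal + 0.5 * rng.standard_normal((13, 500))    # trial set 1
x2 = signal + 0.5 * rng.standard_normal((13, 500))    # trial set 2
print(cv_pca(x1, x2)[:5])                             # independent noise cancels in the cross-product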

Also, we would expect there to be fewer time points on the traces in Figure 2B. How were the trial-averaged responses interpolated?

Indeed, the time points were interpolated using a Gaussian filter with standard deviation equal to 32ms. This is specified in Section 1.1 "Neural recordings".
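(In code, the smoothing step corresponds to something like the following; the trace is a stand-in.)

import numpy as np
from scipy.ndimage import gaussian_filter1d

frame_rate = 31.5                                     # Hz
sigma = 0.032 * frame_rate                            # 32 ms expressed in frames
trace = np.random.default_rng(4).standard_normal(13)  # one trial-averaged response (stand-in)
smoothed = gaussian_filter1d(trace, sigma)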

What steps were taken to ensure that the same unit wasn't double counted during pooling across sessions? Mistakes in this process can contribute to inflated variance along primary PCs, creating the illusion of low dimensionality.

Recordings in different sessions were performed at different depths, typically with a 50 micron difference (never less than 20 microns). Since the soma diameter is 15 microns, this ensured different cells were recorded in different sessions. We now specify this in the Methods (Section 1.1).

6. Statements of relative degree of subspace overlap should be verified with a hypothesis test. For data, this can be done by resampling neurons.

We thank the reviewers for pointing this out. We performed two types of statistical tests for assessing the significance of the orthogonality between responses to different stimuli, reflected in low values of the subspace overlap (Figure 2D). The two controls correspond to two different null hypotheses.

The first hypothesis is that low values of the subspace overlap may be due to the high dimensionality of the state space in which the responses are embedded (e.g. the correlation between any pair of random vectors in high dimension is likely to be low). To test this hypothesis we compared the subspace overlap computed for each pair of stimuli s1 and s2 on the real trial-averaged responses with the subspace overlap computed on responses where the stimulus labels s1 and s2 had been shuffled across trials, and then averaged over trials. This yielded pairs of responses that are embedded in the same dimensions as the real responses but are otherwise random. We found that for most of the stimuli the subspace overlap was significantly lower for the real responses than for the shuffled responses, indicating that low values of the subspace overlap were not a direct consequence of the high dimensionality of the state space.

The second null hypothesis is that low values of the subspace overlap might be an artifact of the trial-to-trial variability present in the calcium activity data. To test for this possibility, for each pair of stimuli we computed the subspace overlap using trial-averaged activity estimated from only half of the trials (10 trials), subsampling the set of 10 trials multiple times for each pair of stimuli. We then compared these values with the values of the subspace overlap between the trial-averaged responses to the same stimulus, but averaged over two different sets of 10 trials each, over multiple permutations of the trials (if the trial-to-trial variability were low, these values should be close to 1 since we compute them on a single stimulus, but they should decrease as a function of the variability). We found that for most of the stimuli the subspace overlap was significantly lower for the real responses than for the controls, indicating that low values of the subspace overlap were not a direct consequence of the trial-to-trial response variability.

The same type of controls has been applied to test for the decorrelation between the initial state and the peak state (see Figure 4G). We describe the controls in detail in the Methods, Section 1.6 "Controls for subspace overlaps and initial state-peak correlations", while the results are reported in Figures S7-S8.
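A compact sketch of the overlap measure and of the second control (our own implementation choices here; the Methods define the exact estimators):

import numpy as np
from scipy.linalg import subspace_angles

def top_pcs(resp, k=5):
    # resp: trial-averaged response to one stimulus, shape (time, neurons)
    return np.linalg.svd(resp - resp.mean(0), full_matrices=False)[2][:k].T

def overlap(r1, r2, k=5):
    # cosine of the smallest principal angle between the two PC subspaces
    return np.cos(subspace_angles(top_pcs(r1, k), top_pcs(r2, k))).max()

def split_half_null(trials, k=5, n_perm=100, seed=5):
    # trials: responses to ONE stimulus, shape (n_trials, time, neurons);
    # overlap between disjoint 10-trial averages of the same stimulus
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_perm):
        p = rng.permutation(len(trials))
        vals.append(overlap(trials[p[:10]].mean(0), trials[p[10:20]].mean(0), k))
    return np.array(vals)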

7. You should justify why LDA (naive covariance assumption) is more suitable for assessing linear separability in Figure 1C than an SVM. Absent justification, an SVM should be used to get the clearest assessment of linear separability.

We do not make a strong statement on linear separability in Figure 1C. LDA assuming naive covariance was used simply for convenience, and for consistency with previous studies that did not quantify the impact of noise correlations on decoding (Mazor and Laurent 2005, Saha et al. 2014).
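For completeness, swapping decoders is a one-line change in standard toolboxes; a sketch on stand-in data (scikit-learn assumed; this is not the exact pipeline used in the paper):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)
y = rng.integers(0, 16, 320)                          # 16 stimulus labels, ~20 trials each
X = rng.standard_normal((320, 50)) + 2.0 * rng.standard_normal((16, 50))[y]

print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
print(cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean())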

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below:

The good news is that the paper is much improved. The writing is more clear, and the analysis of dimensions of amplified trial-to-trial variability is compelling. The less good news is that now that we understand it better, additional issues have come up. All can, we believe, be dealt with relatively easily.

We thank the reviewers for detailed and constructive feedback. We have significantly revised the manuscript to address the reviewers’ comments.

A major comment was that the paper lacked a clear take-home message. Our main message is in fact very simple: A1 OFF responses across stimuli are better described by a network mechanism than by a single-cell mechanism. This is an important message, because the whole OFF-response literature is based on single-cell mechanisms (see Kopp-Scheinpflug, Sinclair and Linden, Trends in Neurosciences 2018), and a network model has been lacking.

The reviewers’ feedback however made us realize that this message was not communicated in a sufficiently clear manner. We have therefore thoroughly reorganized the manuscript to make this message more prominent.

The new organisation of the paper is the following:

1. The first part is unchanged: we start with model-free analyses of auditory cortex data.

2. We now first discuss the single-cell model, and show that it describes well responses to a single stimulus, but not responses across stimuli.

3. We then introduce the network model, and show that it accounts better for the responses across stimuli than the single-cell model.

4. We next analyze the structure of the network model, which leads to additional predictions we verify on the data. A subsection of this part has also been substantially revised following the reviewers’ feedback (see below).

As a consequence of this reorganization, the figures have also been thoroughly rearranged, and several panels have been added:

– Figure 1: unchanged.

– Figure 2: panel E (previously in former Figure 4) has been added.

– Figure 3 contains the fit of the single-cell and network models (previously in former Figure 5 and Figure 4 respectively). We added one more example neuron to panel E. We added panel G (distance from baseline for the fitted network model) and panel I (subspace overlaps for the fitted network model).

– Figure 4: the spectra of the fitted connectivity matrix J and of its symmetric part JS have been re-computed using ridge regression (without low-rank constraints), and extending the temporal window of the OFF response from 350 to 600 ms to get a stable connectivity matrix. To evaluate uncertainty in the connectivity spectra, we now compute the spectra over multiple neuron subsamplings.

– Figure 5 (former Figure 3) has been updated, and now illustrates the dynamics of orthogonal rank-2 rotational channels.

– Figure 6 illustrates the structure of the connectivity fitted to individual stimuli, and the predictions ensuing from this structure. We added panel A showing the goodness of fit computed using reduced-rank regression normalized by its value computed when using ordinary least squares regression. We added panel C (left) (condition for amplification for orthogonal rotational channels) and removed former Figure 4F (right). Former Figure 5F has been moved to Figure 6E.

– Figure 7 contains the results shown in former Figure 4H. Panel B has been added (correlation between subspace overlap and overlap between transient channels).

– Figure 8 (former Figure 6): the panels have been re-organized: following the structure of the main text, the data panels are now shown before the model panels.

– Former Figure 4C has been removed and is now shown in Figure 3-Supplementary3.

– Figure 3-Supplementary1 has been updated to include up to 16 stimuli.

Below we reply to the details of the reviewers’ comments.

Let's start with the data you are trying to explain. Basically, after the offset of an auditory stimulus, there is transient amplification, with different stimuli activating nearly orthogonal subspaces.

To model this, low rank linear dynamics is used, of the form

dr/dt = sum_{k=1}^R uk vk r – r,

where r (= r1, r2, …, rN) is the trial averaged activity. The network had a few additional properties,

a. The rank, R, is much less than the number of neurons, N.

b. The u's and v's are nearly orthogonal.

c. The symmetric part of the connectivity matrix must have eigenvalues with real part >1.

d. The model has no external input, so the response to a particular stimulus (after offset) depends only on the value of r immediately after stimulus offset, here denoted r0(s) where s is the stimulus.

e. For any stimulus, s, r0(s) has appreciable overlap with a small number of vk (on the order of 5).

This model did an OK job explaining the data. In particular, Figure 4H, left (the model), is supposed to match Figure 2D (the data). But the match is not so good: the off-diagonal channel overlap in Figure 2D is near zero, whereas it's around 0.5 in Figure 4H.

We thank the reviewers for pointing out the inconsistency between the former Figure 4H and Figure 2D. We have traced it back to a mismatch between preprocessing parameters, and updated former Figure 4H (now Figure 7A) so that it now matches Figure 2D much better.

As indicated in the Methods (sections "Fitting the network model" and "Analysis of the transient channels"), before fitting the network model we denoise the data using PCA. In the original Figure 4H, the total number of dimensions (PC dimensionality) after denoising was set to D = 50, which was too low compared to the dimensionality of the responses to individual stimuli, and introduced correlations between the fitted low-rank components. Increasing this preprocessing parameter to D = 100 and setting the rank parameter to R = 5 instead leads to a close match between Figure 2D and former Figure 4H (new Figure 7C; coeff. of determination > 0.7).
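For reference, the reduced-rank recipe has the following shape (stand-in data; the Methods specify the exact regularization and cross-validation):

import numpy as np

def reduced_rank_fit(X, Y, rank, alpha=1.0):
    # ridge solution projected onto the top right singular vectors of the fit
    B = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    V = np.linalg.svd(X @ B, full_matrices=False)[2][:rank].T
    return B @ V @ V.T                                # rank-constrained map from t to t + 1

rng = np.random.default_rng(7)
Z = rng.standard_normal((192, 100))                   # states at time t, D = 100 PCs (stand-in)
Znext = rng.standard_normal((192, 100))               # states at time t + 1
J = reduced_rank_fit(Z, Znext, rank=5)
print(np.linalg.matrix_rank(J))                       # 5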

And the full model explains only about half the variance (Figure 4H, right).

Our main argument for the network model is not based on the absolute value of the explained variance, but on the fact that the explained variance does not decrease as the number of fitted stimuli increases, in contrast to the single-cell model.

The total variance explained by our model is consistent with previous reports in the literature that used similar methods. Fitting a linear dynamical system to motor cortical population activity yielded R2 values between 0.6 and 0.8 (see Elsayed et al. 2017, Lara et al. 2018). In our case, for D = 100, R = 5 and λ = 1 for individual stimuli, we found that the cross-validated (10-fold) goodness of fit, as quantified by the coefficient of determination R2, was equal to 0.65. Note also that we used calcium recordings, which are significantly noisier than the electrophysiological recordings used in Elsayed et al. 2017 and Lara et al. 2018.

In addition, you claim that the dynamics is autonomous. The test for this is described on lines 251-9:

"We first tested the starting assumption that the OFF response trajectories are consistent with linear, autonomous network dynamics driven only by the initial pattern of activity and network interactions. If that is the case, the OFF response dynamics are fully determined by the initial activity at stimulus offset (t = 0). This assumption implies that the geometric structure of the initial activity states r0(s) fully predicts the geometric structure of the dynamics r(s)(t) during the OFF responses (see Materials and methods). In particular, two stimuli s1 and s2 corresponding to correlated initial states r0(s1) and r0(s2) will lead to correlated trajectories r(s1)(t) and r(s2)(t) in the model. Conversely, if two OFF-response trajectories r(s1)(t) and r(s2)(t) are orthogonal, the initial states r0(s1) and r0(s2) are necessarily orthogonal."

We're pretty sure that isn't correct: correlations are preserved only if the dynamics causes pure rotation at constant angular speed, something that doesn't happen in general, even for linear dynamics.

We apologize, this paragraph appears to have caused a lot of confusion, and we have now removed it. Our goal was in fact to simply show that the OFF responses depend on the initial state of the network at the end of stimulus presentation. This is consistent with a dynamical mechanism, as has been argued previously in the motor cortex (Churchland et al. 2012), but does not exclude a contribution from external inputs. Since this aspect is consistent with both single-cell and network models, we have now moved the previous Figure 4A to model-free analysis in Figure 2.

We did not intend to claim to demonstrate that the OFF-response dynamics in the auditory cortex are autonomous. In fact the word “autonomous” was used only twice in the Results (and only in the paragraph copied above), and we have now completely removed it, as technically the single-cell model we use is not an autonomous dynamical system.

A paragraph in the Discussion (“A major assumption of our model…”) clarifies that we cannot exclude contributions from external inputs during the OFF responses, and do not claim the dynamics are purely autonomous.

The reviewers are absolutely correct that linear dynamics do not in general preserve correlations – additional constraints are needed. The relationship between initial conditions and subsequent responses found in the data is non-trivial. We have now added a plot showing that the fitted network model accounts for this observation (new Figure 3I).

And finally, Figure 4E, right, shows that many u's and v's are not orthogonal -- judging from that figure, on the order of 10-20%. These terms were of the form

|u1| v2 v1 – |u2| v1 v2

where v1 and v2 are nearly orthogonal. You claim that terms like this can't lead to transient amplification. That's true if |u1| = |u2|. But if |u1| and |u2| are sufficiently different, it's not true. But maybe |u1| \approx |u2| in the data? If so, that should be clear. In any case, this seems like a detail -- it's certainly possible to do the analysis with these terms included.

The reviewers are absolutely correct. We have now thoroughly reorganized this part of the results and the corresponding methods. Specifically:

– We no longer assume orthogonality between the u’s and v’s.

– Instead, based on the properties of the fitted model, we now test the hypothesis that the network can be approximated as a sum of orthogonal rank-2 channels of the form |u1| v2 v1 – |u2| v1 v2

– We now describe under which conditions such channels lead to amplified dynamics (new Figure 5). In particular, as pointed out by the reviewers, a key condition is ||u1| - |u2|| > 2 (see the sketch after this list).

– We then test those conditions in the data, in particular in the new panel Figure 5C.
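The amplification condition is easy to check numerically; a minimal sketch with random orthonormal v1, v2 and toy norms:

import numpy as np

rng = np.random.default_rng(8)
N = 100
v1 = rng.standard_normal(N); v1 /= np.linalg.norm(v1)
v2 = rng.standard_normal(N); v2 -= (v2 @ v1) * v1; v2 /= np.linalg.norm(v2)

for n1, n2 in [(3.0, 3.0), (4.0, 1.0)]:               # |u1| = |u2| vs ||u1| - |u2|| > 2
    J = n1 * np.outer(v2, v1) - n2 * np.outer(v1, v2) # |u1| v2 v1^T - |u2| v1 v2^T
    lam = np.linalg.eigvalsh((J + J.T) / 2).max()     # largest eigenvalue of symmetric part: |n1 - n2| / 2
    print(lam > 1.0)                                  # amplification in dr/dt = (J - I) r iff lam > 1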

The lack of orthogonality between the u's and v's doesn't seem so serious. But the lack of agreement between Figures 2D and 4H, combined with no strong evidence for autonomous dynamics, seems potentially problematic. Of course, no model ever fits perfectly. But it's hard to figure out exactly what the take-home message is. If there was strong evidence that the dynamics is autonomous, that would help -- at least we would know that the intrinsic dynamics is responsible for the transient amplification.

As pointed out at the beginning of the reply, our claim is not that the dynamics are autonomous. Our main message is that A1 OFF responses across stimuli are better described by a network mechanism than by a single-cell mechanism.

The manuscript has been thoroughly reorganized to better communicate this message.

As a potentially important aside: the eigenvalue spectrum of Figure 4D shows one marginally stable, non-oscillatory mode. Based on past experience of one of the reviewers, it may be that the lack of agreement between Figures 2D and 4H is because of this mode. The reasoning is that if this slow mode is present by a small amount in all initial conditions, it will generate the fairly weak overlap in Figure 2D, but because it sticks, it might end up dominating the late off-responses for each stimulus after all the rest has decayed away. If that's the case, then this mode will be part of the principal subspace extracted for each stimulus. The measure of overlap based on "subspace angle" is very sensitive to this, which could explain the large overlaps in Figure 4H. Importantly, that marginally stable eigenvalue may be an artifact of insufficient regularization. There are a lot of "ifs" in the above reasoning, but if this mode can be eliminated, perhaps by different regularization, agreement may improve. We don't know if that's possible, but it's worth thinking about.

We thank the reviewer for suggesting this possibility. As explained above, the disagreement between Figure 2D and the former Figure 4H (new Figure 7A) was due to an incorrect setting in the preprocessing. Changing the PC dimensionality from 50 (as in the previous version of the manuscript) to 100 (new version) fixed the disagreement.

As noted by the reviewers, there was indeed a marginally stable eigenvalue in the full spectrum of J. We traced this back to the fact that, for the longest stimuli, the inter-trial interval is too short and the OFF responses do not have enough time to decay to baseline. Excluding the longest stimuli and extending the analyzed window eliminates the marginal eigenvalue (updated Figure 4).

There is a lot of very nice analysis in the paper, and all reviewers agree that it should be published. However, it needs a clear take-home message. Exactly what that is depends, we believe, on how strongly the evidence is for autonomous dynamics. If that's very strong, then at the very least the take-home message is:

"Off cells show transient amplification that is generated by internal dynamics, not external input. That transient dynamics is mildly consistent with low rank connectivity in which dynamics can mainly be reduced to multiple 2-D systems. However, we can't rule out more complex behavior, with transient amplification fundamentally involving several dimensions."

Possibly you can come up with a stronger conclusion, but that's the best we could come up with.

If the evidence for autonomous dynamics is weak, that seems to be a fundamental problem, since the transient amplification may be caused solely by external input. But presumably that can never be ruled out -- a fact that you should probably mention. It's not a plausible scenario, so we don't think it will weaken the paper.

As pointed out at the beginning of the reply, our claim is not that the dynamics are autonomous. While we show evidence that initial conditions play a role in the subsequent dynamics, we cannot exclude that external inputs are present. This is explicitly acknowledged in the Discussion.

Our main message is instead that A1 OFF responses across stimuli are better described by a network mechanism than by a single-cell mechanism. As summarized at the top of the response, we have thoroughly reorganized the manuscript to better communicate this message.

Our main comment, then, is to explore more carefully whether or not dynamics is autonomous, and/or be more careful about the conclusions. One possible way to test for autonomous dynamics is to plot the trajectories and look for intersections, or near intersections -- what Mark Churchland calls tangled trajectories, for which quantification is possible. Absence of near intersections would at least be consistent with autonomous dynamics. But there may be better methods. For instance, you might try LFADS: you can fit both an RNN and a linear dynamical model, and see which one fits better. That will, though, be a huge amount of work, and you should do it only if you want to.

We thank the reviewers for this suggestion. We have however decided not to follow this route, because our main claim is not that the dynamics are purely autonomous, as explained above.

In addition, we suggest exploring a richer single-neuron model. The current model has the form

ri^s(t) = r_{0i}^s Li(t).

If ri is non-negative, this can be written

dri/dt = fi(t) ri,

with ri(t=0) a function of the stimulus. It would be more convincing to consider a model with a nonlinearity; say

dri/dt = fi(t) gi(ri).

The nonlinearity gi could be parameterized with, probably, only a handful of parameters. For instance, it could be written as a sum of basis functions, much the way Li was parameterised. The single-cell model was rejected because it explained only about 20% of the variance, versus 50% for the recurrent network. It would be reassuring if adding a nonlinearity didn't bring this up to 50%. Again, this also is only a suggestion. However, if you stick with the current single cell model, you'll need to frame your comparison less generally than "recurrent versus single cell".

We thank the reviewers for this suggestion. We now explicitly frame our results as comparing linear single-cell and network models (Abstract).

Note that the single-cell model captures about 75% of the variance when fitted to the responses to a single stimulus. It is therefore clearly expressive enough to capture the dynamics seen in the data. What it does not capture is the dynamics across stimuli, and it is not clear to us how a non-linearity could fix this, while linear network interactions do. The effects of a non-linearity would need to be included in both the single-cell and network models, and this goes beyond the scope of the present study.

