Abstract
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet, this parallel activity is often confined to relatively low-dimensional manifolds. This implies strong coordination even among neurons that are most likely not directly connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: We find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multi-synaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.
Research organism: Rhesus macaque
Introduction
Complex brain functions require coordination between large numbers of neurons. Unraveling mechanisms of neuronal coordination is therefore a core ingredient towards answering the long-standing question of how neuronal activity represents information. Population coding is one classical paradigm (Georgopoulos et al., 1983) in which entire populations of similarly tuned neurons behave coherently, thus leading to positive correlations among their members. The emergence and dynamical control of such population-averaged correlations has been studied intensely (Ginzburg and Sompolinsky, 1994; Renart et al., 2010; Helias et al., 2014; Rosenbaum and Doiron, 2014). More recently, evidence accumulated that neuronal activity often evolves within more complex low-dimensional manifolds, which imply more involved ways of neuronal activity coordination (Gallego et al., 2017; Gallego, 2018; Gallego et al., 2020): A small number of population-wide activity patterns, the neural modes, are thought to explain most variability of neuronal activity. In this case, individual neurons do not necessarily follow a stereotypical activity pattern that is identical across all neurons contributing to a representation. Instead, the coordination among the members is determined by more complex relations. Simulations of recurrent network models indeed indicate that networks trained to perform a realistic task exhibit activity organized in low-dimensional manifolds (Sussillo et al., 2015). The dimensionality of such manifolds is determined by the structure of correlations (Abbott et al., 2011; Mazzucato et al., 2016) and tightly linked to the complexity of the task the network has to perform (Gao, 2017) as well as to the dimensionality of the stimulus (Stringer et al., 2019). Recent work has started to decipher how neural modes and the dimensionality of activity are shaped by features of network connectivity, such as heterogeneity of connections (Smith et al., 2018; Dahmen et al., 2019), block structure (Aljadeff et al., 2015; Aljadeff et al., 2016), and low-rank perturbations (Mastrogiuseppe and Ostojic, 2018) of connectivity matrices, as well as connectivity motifs (Recanatesi et al., 2019; Dahmen et al., 2021; Hu and Sompolinsky, 2020). Yet, these works neglected the spatial organization of network connectivity (Schnepel et al., 2015) that becomes more and more important with current experimental techniques that allow the simultaneous recording of ever more neurons. How distant neurons that are likely not connected can still be strongly coordinated to participate in the same neural mode therefore remains an open question.
To answer this question, we combine analyses of parallel spiking data from macaque motor cortex with the analytical investigation of a spatially organized neuronal network model. We here quantify coordination by Pearson correlation coefficients and pairwise covariances, which measure how temporal departures of the neurons’ activities away from their mean firing rate are correlated. We show that, even with only unstructured and short-range connections, strong covariances across distances of several millimeters emerge naturally in balanced networks if their dynamical state is close to an instability within a ‘critical regime’. This critical regime arises from strong heterogeneity in local network connections that is abundant in brain networks. Intuitively, it arises because activity propagates over a large number of different indirect paths. Heterogeneity, here in the form of sparse random connectivity, is thus essential to provide a rich set of such paths. While mean covariances are readily accessible by mean-field techniques and have been shown to be small in balanced networks (Renart et al., 2010; Tetzlaff et al., 2012), explaining covariances on the level of individual pairs requires methods from statistical physics of disordered systems. With such a theory, here derived for spatially organized excitatory-inhibitory networks, we show that large individual covariances arise at all distances if the network is close to the critical point. These predictions are confirmed by recordings of macaque motor cortex activity. The long-range coordination found in this study is not merely determined by the anatomical connectivity, but depends substantially on the network state, which is characterized by the individual neurons’ mean firing rates. This allows the network to adjust the neuronal coordination pattern in a dynamic fashion, which we demonstrate through simulations and by comparing two behavioral epochs of a reach-to-grasp experiment.
Results
Macaque motor cortex shows long-range coordination patterns
We first analyze data from motor cortex of macaques during rest, recorded with 100-electrode Utah arrays with 400 µm inter-electrode distance (Figure 1A). The resting condition of motor cortex in monkeys is ideal to assess intrinsic coordination between neurons during ongoing activity. In particular, our analyses focus on true resting state data, devoid of movement-related transients in neuronal firing (see Materials and methods). Parallel single-unit spiking activity of the neurons recorded in each session, sorted into putative excitatory and inhibitory cells, shows strong spike-count correlations across the entire Utah array, well beyond the typical scale of the underlying short-range connectivity profiles (Figure 1B and D). Positive and negative correlations form patterns in space that are furthermore seemingly unrelated to the neuron types. All populations show a large dispersion of both positive and negative correlation values (Figure 1C).
The classical view on pairwise correlations in balanced networks (Ginzburg and Sompolinsky, 1994; Renart et al., 2010; Pernice et al., 2011; Pernice et al., 2012; Tetzlaff et al., 2012; Helias et al., 2014) focuses on averages across many pairs of cells: average correlations are small if the network dynamics is stabilized by an excess of inhibitory feedback, giving rise to dynamics known as the ‘balanced state’ (van Vreeswijk and Sompolinsky, 1996; Amit and Brunel, 1997; van Vreeswijk and Sompolinsky, 1998): Negative feedback counteracts any coherent increase or decrease of the population-averaged activity, preventing the neurons from fluctuating in unison (Tetzlaff et al., 2012). Breaking this balance in different ways leads to large correlations (Rosenbaum and Doiron, 2014; Darshan et al., 2018; Baker et al., 2019). Can the observation of significant correlations between individual cells across large distances be reconciled with the balanced state? In the following, we provide a mechanistic explanation.
Multi-synaptic connections determine correlations
Connections mediate interactions between neurons. Many studies therefore directly relate connectivity and correlations (Pernice et al., 2011; Pernice et al., 2012; Trousdale et al., 2012; Brinkman et al., 2018; Kobayashi et al., 2019). From direct connectivity, one would expect positive correlations between excitatory neurons, negative correlations between inhibitory neurons, and a mix of negative and positive correlations only for excitatory-inhibitory pairs. Likewise, a shared input from inside or outside the network only imposes positive correlations between any two neurons (Figure 2A). The observations that excitatory neurons may have negative correlations (Figure 1D), as well as the broad distribution of correlations covering both positive and negative values (Figure 1C), are not compatible with this view. In fact, the sign of correlations appears to be independent of the neuron types. So how do negative correlations between excitatory neurons arise?
The view that equates connectivity with correlation implicitly assumes that the effect of a single synapse on the receiving neuron is weak. This view, however, regards each synapse in isolation. Could there be states in the network where, collectively, many weak synapses cooperate, as perhaps required to form low-dimensional neuronal manifolds? In such a state, interactions may not only be mediated via direct connections but also via indirect paths through the network (Figure 2B). Such effective multi-synaptic connections may explain our observation that far apart neurons that are basically unconnected display considerable correlation of arbitrary sign.
Let us here illustrate the ideas first and corroborate them in subsequent sections. Direct connections yield correlations of a predefined sign, leading to correlation distributions with multiple peaks, for example a positive peak for excitatory neurons that are connected and a peak at zero for neurons that are not connected. Multi-synaptic paths, however, involve both excitatory and inhibitory intermediate neurons, which contribute to the interaction with different signs (Figure 2B). Hence, a single indirect path can contribute to the total interaction with arbitrary sign (Pernice et al., 2011). If indirect paths dominate the interaction between two neurons, the sign of the resulting correlation becomes independent of their type. Given that the connecting paths in the network are different for any two neurons, the resulting correlations can fall in a wide range of both positive and negative values, giving rise to the broad distributions for all combinations of neuron types in Figure 1C. This provides a hypothesis why there may be no qualitative difference between the distribution of correlations for excitatory and inhibitory neurons. In fact, their widths are similar and their mean is close to zero (see Materials and methods for exact values); the latter being the hallmark of the negative feedback that characterizes the balanced state. The subsequent model-based analysis will substantiate this idea and show that it also holds for networks with spatially organized heterogeneous connectivity.
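The following minimal sketch (with hypothetical parameters, not the model analyzed below) illustrates this point in linear-response theory: the propagator $(\mathbb{1}-W)^{-1}=\sum_{k\ge 0}W^{k}$ sums contributions of paths of all lengths, and once the spectral bound is close to one, long indirect paths dominate and excitatory-excitatory covariances of either sign appear.

```python
# Sketch: covariances from direct plus indirect paths (assumed parameters).
# Long-timescale (spike-count) covariances in linear response:
# C = (1 - W)^(-1) D (1 - W)^(-T), here with unit input noise D = 1.
import numpy as np

rng = np.random.default_rng(0)
N, NE = 200, 160                       # 80% excitatory, 20% inhibitory
p, w, g = 0.1, 0.1, 5.0                # connection prob., weight, rel. inhibition

W = (rng.random((N, N)) < p) * w       # sparse random connectivity
W[:, NE:] *= -g                        # inhibitory columns get negative sign
np.fill_diagonal(W, 0.0)

# rescale so the spectral bound (largest real part of eigenvalues) is 0.9
R_target = 0.9
W *= R_target / np.max(np.linalg.eigvals(W).real)

P = np.linalg.inv(np.eye(N) - W)       # sums paths of all lengths
C = P @ P.T                            # covariance matrix for white-noise input

cEE = C[:NE, :NE][np.triu_indices(NE, k=1)]
print(f"E-E covariances with negative sign: {np.mean(cEE < 0):.0%}")
```

Even though all direct E-to-E connections are excitatory, a substantial fraction of E-E pairs ends up negatively covarying, because the sign of the summed indirect paths is arbitrary.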
To take this hypothesis further, an important consequence of the dominance of multi-synaptic connections could be that correlations are not restricted to the spatial range of direct connectivity. Through interactions via indirect paths the reach of a single neuron could effectively be increased. But the details of the spatial profile of the correlations in principle could be highly complex, as it depends on the interplay of two antagonistic effects: On the one hand, signal propagation becomes weaker with distance, as the signal has to pass several synaptic connections. Along these paths mean firing rates of neurons are typically diverse, and so are their signal transmission properties (de la Rocha et al., 2007). On the other hand, the number of contributing indirect paths between any pair of neurons proliferates with their distance. With single neurons typically projecting to thousands of other neurons in cortex, this leads to involved combinatorics; intuition here ceases to provide a sensible hypothesis on the effective spatial profile and range of coordination between neurons. It is also unclear on which parameters these coordination patterns depend. The model-driven and analytical approach of the next section will provide such a hypothesis.
Networks close to instability show shallow exponential decay of covariances
We first note that the large magnitude and dispersion of individual correlations in the data and their spatial structure primarily stem from features in the underlying covariances between neuron pairs (Appendix 1—figure 1). Given the close relationship between correlations and covariances (Appendix 1—figure 1D and E), in the following we analyze covariances, as these are less dependent on single neuron properties and thus analytically simpler to treat. To gain an understanding of the spatial features of intrinsically generated covariances in balanced critical networks, we investigate a network of excitatory and inhibitory neurons on a two-dimensional sheet, where each neuron receives external Gaussian white noise input (Figure 3A). We investigate the covariance statistics in this model with the help of linear-response theory, which has been shown to approximate spiking neuron models well (Pernice et al., 2012; Trousdale et al., 2012; Tetzlaff et al., 2012; Helias et al., 2013; Grytskyy et al., 2013; Dahmen et al., 2019). To allow for multapses, the connections between two neurons are drawn from a binomial distribution, and the connection probability decays with inter-neuronal distance on a characteristic length scale (for more details see Materials and methods). Previous studies have used linear-response theory in combination with methods from statistical physics and field theory to gain analytic insights into both mean covariances (Ginzburg and Sompolinsky, 1994; Lindner et al., 2005; Pernice et al., 2011; Tetzlaff et al., 2012) and the width of the distribution of covariances (Dahmen et al., 2019). Field-theoretic approaches, however, were so far restricted to purely random networks devoid of any network structure and thus not suitable to study spatial features of covariances. To analytically quantify the relation between the spatial ranges of covariances and connections, we therefore here develop a theory for spatially organized random networks with multiple populations. The randomness in our model is based on the sparseness of connections, which is one of the main sources of heterogeneity in cortical networks in that it contributes strongly to the variance of connections (see Appendix 1 Section 15).
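A sketch of such a network construction could look as follows (all parameter values are assumed placeholders): neurons sit on a periodic grid, and the number of synapses between any two of them is drawn from a binomial distribution whose probability decays with torus distance, so that multapses are possible.

```python
# Sketch of the spatial model (assumed parameters): neurons on a periodic
# 2D grid, binomially distributed synapse numbers, distance-dependent
# connection probability.
import numpy as np

rng = np.random.default_rng(1)
L = 40                                 # grid of L x L neurons, periodic
d = 2.0                                # decay constant of connectivity (grid units)
n_trials, p_max = 3, 0.3               # binomial parameters -> multapses allowed

xs, ys = np.meshgrid(np.arange(L), np.arange(L))
pos = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

# pairwise distances on the torus (periodic boundary conditions)
delta = np.abs(pos[:, None, :] - pos[None, :, :])
delta = np.minimum(delta, L - delta)
dist = np.hypot(delta[..., 0], delta[..., 1])

p_conn = p_max * np.exp(-dist / d)     # probability decays with distance
n_syn = rng.binomial(n_trials, p_conn) # 0..n_trials synapses per ordered pair
np.fill_diagonal(n_syn, 0)

print("mean number of synapses at distance 1:",
      n_syn[np.isclose(dist, 1.0)].mean())
```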
A distance-resolved histogram of the covariances in the spatially organized E-I network shows that the mean covariance is close to zero but the width or variance of the covariance distribution stays large, even for large distances (Figure 3C). Analytically, we derive that, despite the complexity of the various indirect interactions, both the mean and the variance of covariances follow simple exponential laws in the long-distance limit (see Appendix 1 Section 4 - Section 12). These laws are universal in that they do not depend on details of the spatial profile of connections. Our theory shows that the associated length scales are strikingly different for means and variances of covariances. They each depend on the reach of direct connections and on specific eigenvalues of the effective connectivity matrix. These eigenvalues summarize various aspects of network connectivity and signal transmission into a single number: Each eigenvalue belongs to a ‘mode’, a combination of neurons that act collaboratively, rather than independently, coordinating neuronal activity within a one-dimensional subspace. To start with, there are as many such subspaces as there are neurons. But if the spectral bound in Figure 3B is close to one, only a relatively small fraction of them, namely those close to the spectral bound, dominate the dynamics; the dynamics is then effectively low-dimensional. Additionally, the eigenvalue quantifies how fast a mode decays when transmitted through a network. The eigenvalues of the dominating modes are close to one, which implies a long lifetime. The corresponding fluctuations thus still contribute significantly to the overall signal, even if they passed by many synaptic connections. Therefore, indirect multi-synaptic connections contribute significantly to covariances if the spectral bound is close to one, and in that case we expect to see long-range covariances.
To quantify this idea, we find that the dominant behavior of the mean covariance is an exponential decay with distance. The corresponding length scale is determined by a particular eigenvalue, the population eigenvalue, corresponding to the mode in which all neurons are excited simultaneously. Its position solely depends on the ratio between excitation and inhibition in the network and becomes more negative in more strongly inhibition-dominated networks (Figure 3B). We show in Appendix 1 Section 9.4 that this leads to a steep decay of mean covariances with distance. The variance of covariances, however, predominantly decays exponentially on a length scale $d_{\mathrm{eff}}$ that is determined by the spectral bound $R$, the largest real part among all eigenvalues (Figure 3B and D). In inhibition-dominated networks, $R$ is determined by the heterogeneity of connections. For $R \lesssim 1$ we obtain the effective length scale
$$d_{\mathrm{eff}} \;\propto\; \frac{d}{\sqrt{1-R^{2}}} \qquad (R \to 1), \tag{1}$$
where $d$ denotes the length scale of direct connections and the constant of proportionality is of order one and depends on the shape of the connectivity profile.
What this means is that precisely at the point where $R$ is close to one, when neural activity occupies a low-dimensional manifold, the length scale $d_{\mathrm{eff}}$ on which covariances decay exceeds the reach of direct connections by a large factor (Figure 3D). As the network approaches instability, which corresponds to the spectral bound going to one, the effective decay constant diverges (Figure 3D inset) and so does the range of covariances.
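A short numerical illustration of the scaling in Equation (1), up to the profile-dependent prefactor and for an assumed anatomical reach, shows how quickly $d_{\mathrm{eff}}$ grows as $R$ approaches one:

```python
# Worked example for Equation (1), up to the order-one prefactor.
import numpy as np

d = 0.2  # assumed anatomical decay constant in mm (order of cortical reach)
for R in (0.5, 0.9, 0.99, 0.999):
    d_eff = d / np.sqrt(1.0 - R**2)
    print(f"R = {R:5.3f}  ->  d_eff ~ {d_eff:5.2f} mm  ({d_eff / d:6.1f} x d)")
```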
Our population-resolved theoretical analysis, furthermore, shows that the larger the spectral bound the more similar the decay constants between different populations, with only marginal differences for $R \lesssim 1$ (Figure 3E). This holds strictly if connection weights only depend on the type of the presynaptic neuron but not on the type of the postsynaptic neuron. Moreover, we find a relation between the squared effective decay constants and the squared anatomical decay constants of the form
$$d_{\mathrm{eff},\mathrm{E}}^{2} \;-\; d_{\mathrm{eff},\mathrm{I}}^{2} \;=\; c\,\big(d_{\mathrm{E}}^{2}-d_{\mathrm{I}}^{2}\big), \tag{2}$$
with $d_{\mathrm{E}}$ and $d_{\mathrm{I}}$ the anatomical decay constants of excitatory and inhibitory connections.
This relation is independent of the eigenvalues of the effective connectivity matrix, as the constant $c$ of order one only depends on the choice of the connectivity profile. For $R \to 1$, this means that even though the absolute values of both effective length scales on the left-hand side are large, their relative difference is small because it equals the small difference of anatomical length scales on the right-hand side.
Pairwise covariances in motor cortex decay on a millimeter scale
To check if these predictions are confirmed by the data from macaque motor cortex, we first observe that, indeed, covariances in the resting state show a large dispersion over almost all distances on the Utah array (Figure 4). Moreover, the variance of covariances agrees well with the predicted exponential law: Performing an exponential fit reveals length constants above 1 mm. These large length constants have to be compared to the spatial reach of direct connections, which is about an order of magnitude shorter, in the range of 100-400 μm (Schnepel et al., 2015), so below the 400 μm inter-electrode distance of the Utah array. The shallow decay of the variance of covariances is, next to the broad distribution of covariances, a second indication that the network is in the dynamically balanced critical regime, in line with the prediction by Equation (1).
The population-resolved fits to the data show a larger length constant for excitatory covariances than for inhibitory ones (Figure 4A). This is qualitatively in line with the prediction of Equation (2) given the – by tendency – longer reach of excitatory connections compared to inhibitory ones, as derived from morphological constraints (Reimann et al., 2017, Fig. S2). In the dynamically balanced critical regime, however, the predicted difference in slope for all three fits is practically negligible. Therefore, we performed a second fit where the slope of the three exponentials is constrained to be identical (Figure 4B). The error of this fit is only marginally larger than the ones of fitting individual slopes (Figure 4C). This shows that differences in slopes are hardly detectable given the empirical evidence, thus confirming the predictions of the theory given by Equation (1) and Equation (2).
Firing rates alter connectivity-dependent covariance patterns
Since covariances measure the coordination of temporal fluctuations around the individual neurons’ mean firing rates, they are determined by how strongly a neuron transmits such fluctuations from input to output (Abeles, 1991). To leading order this is explained by linear-response theory (Ginzburg and Sompolinsky, 1994; Lindner et al., 2005; Pernice et al., 2011; Tetzlaff et al., 2012): How strongly a neuron reacts to a small change in its input depends on its dynamical state, foremost the mean and variance of its total input, called ‘working point’ in the following. If a neuron receives almost no input, a small perturbation in the input will not be able to make the neuron fire. If the neuron receives a large input, a small perturbation will not change the firing rate either, as the neuron is already saturated. Only in the intermediate regime the neuron is susceptible to small deviations of the input. Mathematically, this behavior is described by the gain of the neuron, which is the derivative of the input-output relation (Abeles, 1991). Due to the non-linearity of the input-output relation, the gain is vanishing for very small and very large inputs and non-zero in the intermediate regime. How strongly a perturbation in the input to one neuron affects one of the subsequent neurons therefore not only depends on the synaptic weight but also on the gain and thereby the working point. This relation is captured by the effective connectivity, which scales the synaptic weights by the corresponding gains. What is the consequence of the dynamical interaction among neurons depending on the working point? Can it be used to reshape the low-dimensional manifold, the collective coordination between neurons?
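Schematically (all parameters below are hypothetical), the effective connectivity is the anatomical weight matrix with each row scaled by the postsynaptic neuron's gain at its working point; for a threshold-linear unit the gain simply switches between zero and one:

```python
# Sketch: effective connectivity = gain at the working point x anatomical weight.
import numpy as np

def gain_relu(mean_input):
    """Slope of a ReLU input-output function at the working point."""
    return (mean_input > 0).astype(float)

rng = np.random.default_rng(2)
N = 100
W = -0.05 * (rng.random((N, N)) < 0.1)   # anatomical (inhibitory) weights
mu = rng.normal(0.0, 1.0, size=N)        # neuron-specific mean inputs

# row i = inputs to neuron i, scaled by that neuron's gain
W_eff = gain_relu(mu)[:, None] * W

# shifting the working point changes W_eff without touching W
W_eff_shifted = gain_relu(mu + 0.3)[:, None] * W
print("effective connections switched on/off:",
      int(np.sum(W_eff != W_eff_shifted)))
```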
The first part of this study finds that long-range coordination can be achieved in a network with short-range random connections if effective connections are sufficiently strong. Alteration of the working point, for example by a different external input level, can affect the covariance structure: The pattern of coordination between individual neurons can change, even though the anatomical connectivity remains the same. In this way, routing of information through the network can be adapted dynamically on a mesoscopic scale. This is a crucial difference of such coordination as opposed to coordination imprinted by complex but static connection patterns.
Here, we first illustrate this concept by simulations of a network of 2000 sparsely connected threshold-linear (ReLU) rate neuron models that receive Gaussian white noise inputs centered around neuron-specific non-zero mean values (see Materials and methods and Appendix 1 Section 14 for more details). The ReLU activation function thereby acts as a simple model for the vanishing gain for neurons with too low input levels. Note that in cortical-like scenarios with low firing rates, neuronal working points are far away from the high-input saturation discussed above, which is therefore neglected by the choice of the ReLU activation function. For independent and stationary external inputs covariances between neurons are solely generated inside the network via the sparse and random recurrent connectivity. External inputs only have an indirect impact on the covariance structure by setting the working point of the neurons.
We simulate two networks with identical structural connectivity and identical external input fluctuations, but small differences in mean external inputs between corresponding neurons in the two simulations (Figure 5A). These small differences in mean external inputs create different gains and firing rates and thereby differences in effective connectivity and covariances. Since mean external inputs are drawn from the same distribution in both simulations (Figure 5B), the overall distributions of firing rates and covariances across all neurons are very similar (Figure 5E1, F1). But individual neurons’ firing rates do differ (Figure 5E2). For the simple ReLU activation used here, we in particular observe neurons that switch between non-zero and zero firing rate between the two simulations. This resulting change of working points substantially affects the covariance patterns (Figure 5F2): Differences in firing rates and covariances between the two simulations are significantly larger than the differences across two different epochs of the same simulation (Figure 5C). The larger the spectral bound, the more sensitive are the intrinsically generated covariances to the changes in firing rates (Figure 5D). Thus, a small offset of individual firing rates is an effective parameter to control network-wide coordination among neurons. As the input to the local network can be changed momentarily, we predict that in the dynamically balanced critical regime coordination patterns should be highly dynamic.
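A condensed sketch of this paired-simulation protocol is shown below (assumed parameters and a simple Euler-Maruyama integration in NumPy, standing in for the NEST simulation described in Materials and methods; a fixed noise seed mimics identical external fluctuations in both runs):

```python
# Sketch: same connectivity and noise, slightly shifted mean inputs,
# compared via the correlation of the resulting covariance patterns.
import numpy as np

rng = np.random.default_rng(3)
N, K, w = 300, 50, -0.2             # network size, indegree, inhibitory weight
tau, dt, steps = 1.0, 0.1, 50_000   # time constant, Euler step, duration
sigma = 1.0                         # white-noise strength

W = np.zeros((N, N))
for i in range(N):                  # fixed-indegree random wiring
    W[i, rng.choice(N, size=K, replace=False)] = w

def simulate(mu, noise_seed=42):
    """Integrate the ReLU rate dynamics; same noise realization in each call."""
    noise_rng = np.random.default_rng(noise_seed)
    z = np.zeros(N)
    rates = np.empty((steps // 10, N))
    for t in range(steps):
        nu = np.maximum(z, 0.0)                     # ReLU output rates
        z += (dt / tau * (-z + W @ nu + mu)
              + sigma * np.sqrt(2 * dt / tau) * noise_rng.normal(size=N))
        if t % 10 == 0:
            rates[t // 10] = nu
    return np.cov(rates.T)

mu = rng.normal(1.0, 1.0, size=N)                   # neuron-specific mean inputs
C1 = simulate(mu)
C2 = simulate(mu + rng.normal(0.0, 0.2, size=N))    # slightly shifted working point

iu = np.triu_indices(N, k=1)
print("correlation of covariance patterns:",
      round(np.corrcoef(C1[iu], C2[iu])[0, 1], 2))
```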
Coordination patterns in motor cortex depend on behavioral context
In order to test the theoretical prediction in experimental data, we analyze parallel spiking activity from macaque motor cortex, recorded during a reach-to-grasp experiment (Riehle et al., 2013; Brochier et al., 2018). In contrast to the resting state, where the animal was in an idling state, here the animal is involved in a complex task with periods of different cognitive and behavioral conditions (Figure 6A). We compare two epochs in which the animal is requested to wait and is sitting still but which differ in cognitive conditions. The first epoch is a starting period (S), where the monkey has self-initiated the behavioral trial and is attentive because it is expecting a cue. The second epoch is a preparatory period (P), where the animal has just received partial information about the upcoming trial and is waiting for the missing information and the GO signal to initiate the movement.
Within each epoch, S or P, the neuronal firing rates are mostly stationary, likely due to the absence of arm movements which create relatively large transient activities in later epochs of the task, which are not analyzed here (see Appendix 1 Section 3). The overall distributions of the firing rates are comparable for epochs S and P, but the firing rates are distributed differently across the individual neurons: Figure 6C shows one example session of monkey N, where the changes in firing rates between the two epochs are visible in the spread of markers around the diagonal line in panel C2. To assess the extent of these changes, we split each epoch, S and P, into two disjoint sub-periods, S1/S2 and P1/P2 (Figure 6A). We compute the correlation coefficient between the firing rate vectors of two sub-periods of different epochs (‘between’ markers in Figure 6E) and compare it to the correlation coefficient between the firing rate vectors of two sub-periods of the same epoch (‘within’ markers): Firing rate vectors in S1 are almost perfectly correlated with firing rate vectors in S2 (correlation coefficients close to one for all of the five/eight different recording sessions from different recording days for monkey E/N; similarly for P1 and P2), confirming the stationarity investigated in Appendix 1 Section 3. Firing rate vectors in S1 or S2, however, show significantly lower correlation to firing rate vectors in P1 and P2, confirming a significant change in network state between epochs S and P (Figure 6E).
The mechanistic model in the previous section shows a qualitatively similar scenario (Figure 5C and E). By construction it produces different firing rate patterns in the two simulations. While the model is simplistic and in particular not adapted to quantitatively reproduce the experimentally observed activity statistics, its simulations and our underlying theory make a general prediction: Differences in firing rates impact the effective connectivity between neurons and thereby evoke even larger differences in their coordination if the network is operating in the dynamically balanced critical regime (Figure 5D). To check this prediction, we repeat the correlation analysis between the two epochs, which we described above for the firing rates, but this time for the covariance patterns. Despite similar overall distributions of covariances in S and P (Figure 6D1), covariances between individual neuron pairs are clearly different between S and P: Figure 6B shows the covariance pattern for one representative reference neuron in one example recording session of monkey N. In both epochs, this covariance pattern has a salt-and-pepper structure as for the resting state data in Figure 1D. Yet, neurons change their individual coordination: a large number of neuron pairs even changes from positive covariance values to negative ones and vice versa. These neurons fire cooperatively in one epoch of the task while they show antagonistic firing in the other epoch. The covariances of all neuron pairs of that particular recording session are shown in Figure 6D2. Markers in the upper left and lower right quadrant show neuron pairs that switch the sign of their coordination (45 % of all neuron pairs). The extent of covariance changes between epochs is again quantified by correlation coefficients between the covariance patterns of two sub-periods (Figure 6F). As for the firing rates, we find rather large correlations between covariance patterns in S1 and S2 as well as between covariance patterns in P1 and P2. Note, however, that correlation coefficients are around 0.8 rather than 1, presumably since covariance estimates from 200 ms periods are noisier than firing rate estimates. The covariance patterns in S1 or S2 are, however, significantly more distinct from covariance patterns in P1 and P2, with correlation coefficients around 0.5 (Figure 6F). This more pronounced change of covariances compared to firing rates is predicted by a network whose effective connectivity has a large spectral bound, in the dynamically balanced critical state. In particular, the theory provides a mechanistic explanation for the different coordination patterns between neurons on the mesoscopic scale (range of a Utah array), which are observed in the two states S and P (Figure 6B). The coordination between neurons is thus considerably reshaped by the behavioral condition.
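The comparison underlying Figure 6E and F can be summarized by a small helper function, shown here with synthetic placeholder spike counts (array shapes and names are assumptions, not the analysis code of the study):

```python
# Sketch: within- vs between-epoch similarity of rate and covariance patterns.
import numpy as np

def pattern_similarity(counts_a, counts_b):
    """counts_*: (trials, neurons) spike counts of two sub-periods."""
    rates_a, rates_b = counts_a.mean(0), counts_b.mean(0)
    cov_a, cov_b = np.cov(counts_a.T), np.cov(counts_b.T)
    iu = np.triu_indices(cov_a.shape[0], k=1)
    return (np.corrcoef(rates_a, rates_b)[0, 1],      # rate-pattern similarity
            np.corrcoef(cov_a[iu], cov_b[iu])[0, 1])  # covariance-pattern similarity

rng = np.random.default_rng(4)
S1, S2, P1 = (rng.poisson(5.0, size=(100, 50)) for _ in range(3))
# with real spike counts, 'within' clearly exceeds 'between' (Figure 6E, F);
# the independent placeholder data here yield values near zero for both
print("within S :", pattern_similarity(S1, S2))
print("between  :", pattern_similarity(S1, P1))
```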
Discussion
In this study, we investigate coordination patterns of many neurons across mesoscopic distances in macaque motor cortex. We show that these patterns have a salt-and-pepper structure, which can be explained by a network model with a spatially dependent random connectivity operating in a dynamically balanced critical state. In this state, cross-covariances are shaped by a large number of parallel, multi-synaptic pathways, leading to interactions reaching far beyond the range of direct connections. Strikingly, this coordination on the millimeter scale is only visible if covariances are resolved on the level of individual neurons; the population mean of covariances quickly decays with distance and is overall very small. In contrast, the variance of covariances is large and predominantly decreases exponentially on length scales of up to several millimeters, even though direct connections typically only reach a few hundred micrometers.
Since the observed coordination patterns are determined by the effective connectivity of the network, they are dynamically controllable by the network state; for example, due to modulations of neuronal firing rates. Parallel recordings in macaque motor cortex during resting state and in different epochs of a reach-to-grasp task confirm this prediction. Simulations indeed exhibit a high sensitivity of coordination patterns to weak modulations of the individual neurons’ firing rates, providing a plausible mechanism for these dynamic changes.
Models of balanced networks have been investigated before (van Vreeswijk and Sompolinsky, 1996; Brunel, 2000; Renart et al., 2010; Tetzlaff et al., 2012) and experimental evidence for cortical networks operating in the balanced state is overwhelming (Okun and Lampl, 2008; Reinhold et al., 2015; Dehghani et al., 2016). Excess of inhibition in such networks yields stable and balanced population-averaged activities as well as low average covariances (Tetzlaff et al., 2012). Recently, the notion of balance has been combined with criticality in the dynamically balanced critical state that results from large heterogeneity in the network connectivity (Dahmen et al., 2019). Here, we focus on another ubiquitous property of cortical networks, their spatial organization, and study the interplay between balance, criticality, and spatial connectivity in networks of excitatory and inhibitory neurons. We show that in such networks, heterogeneity generates widely dispersed covariances between individual neurons across large length scales, with a salt-and-pepper structure.
Spatially organized balanced network models have been investigated before in the limit of infinite network size, as well as under strong and potentially correlated external drive, as is the case, for example, in primary sensory areas of the brain (Rosenbaum et al., 2017; Baker et al., 2019). In this scenario, intrinsically generated contributions to covariances are much smaller than external ones. Population-averaged covariances then fulfill a linear equation, called the ‘balance condition’ (van Vreeswijk and Sompolinsky, 1996; Hertz, 2010; Renart et al., 2010; Rosenbaum and Doiron, 2014), that predicts a non-monotonous change of population-averaged covariances with distance (Rosenbaum et al., 2017). In contrast, we here consider covariances on the level of individual cells in finite-size networks receiving only weak inputs. While we cannot strictly rule out that the observed covariance patterns in motor cortex are a result of very specific external inputs to the recorded local network, we believe that the scenario of weak external drive is more suitable for non-sensory brain areas, such as, for example, the motor cortex in the resting state conditions studied here. Under such conditions, covariances have been shown to be predominantly generated locally rather than from external inputs: Helias et al., 2014 investigated intrinsic and extrinsic sources of covariances in ongoing activity of balanced networks and found that for realistic sizes of correlated external populations the major contribution to covariances is generated from local network interactions (Figure 7a in Helias et al., 2014). Dahmen et al., 2019 investigated the extreme case, where the correlated external population is of the same size as the local population (Fig. S6 in Dahmen et al., 2019). Despite sizable external input correlations projected onto the local circuit via potentially strong afferent connections, the dependence of the statistics of covariances on the spectral bound of the local recurrent connectivity is predicted well by the theory that neglects correlated external inputs (see supplement section 3 in Dahmen et al., 2019).
Our analysis of covariances on the single-neuron level goes beyond the balance condition and requires the use of field-theoretical techniques to capture the heterogeneity in the network (Dahmen et al., 2019; Helias and Dahmen, 2020). It relies on linear-response theory, which has previously been shown to faithfully describe correlations in balanced networks of nonlinear (spiking) units (Tetzlaff et al., 2012; Trousdale et al., 2012; Pernice et al., 2012; Grytskyy et al., 2013; Helias et al., 2013; Dahmen et al., 2019). These studies mainly investigated population-averaged correlations with small spectral bounds of the effective connectivity. Subsequently, Dahmen et al., 2019 showed the quantitative agreement of this linear-response theory for covariances between individual neurons in networks of spiking neurons for the whole range of spectral bounds, including the dynamically balanced critical regime. The long-range coordination studied in the current manuscript requires the inclusion of spatially non-homogeneous coupling to analyze excitatory-inhibitory random networks on a two-dimensional sheet with spatially decaying connection probabilities. This new theory allows us to derive expressions for the spatial decay of the variance of covariances. We primarily evaluate these expressions in the long-range limit, which agrees well with simulations for distances beyond a few length scales of direct connections, a condition fulfilled for most distances on the Utah array (Figure 3, Appendix 1—figure 7). For these distances, we find that the decay of covariances is dominated by a simple exponential law. Unexpectedly, its decay constant is essentially determined by only two measures, the spectral bound of the effective connectivity, and the length scale of direct connections. The length scale of covariances diverges when approaching the breakdown of linear stability. In this regime, differences in covariances induced by differences in length scales of excitatory and inhibitory connections become negligible. The predicted emergence of a single length scale of covariances is consistent with our data.
This study focuses on local and isotropic connection profiles to show that long-range coordination does not rely on specific connection patterns but can result from the network state alone. Alternative explanations for long-range coordination are based on specifically imprinted network structures: Anisotropic local connection profiles have been studied and shown to create spatio-temporal sequences (Spreizer et al., 2019). Likewise, embedded excitatory feed-forward motifs and cell assemblies via excitatory long-range patchy connections (DeFelipe et al., 1986) can create positive covariances at long distances (Diesmann et al., 1999; Litwin-Kumar and Doiron, 2012). Yet, these connections cannot provide an explanation for the large negative covariances between excitatory neurons at long distances (see e.g. Figure 1D). Long-range connectivity, for example arising from a salt-and-pepper organization of neuronal selectivity with connections preferentially targeting neurons with equal selectivity (Ben-Yishai et al., 1995; Hansel and Sompolinsky, 1998; Roxin et al., 2005; Blumenfeld et al., 2006), would produce salt-and-pepper covariance patterns even in networks with small spectral bounds where interactions are only mediated via direct connections. However, in this scenario, one would expect that neurons which have similar selectivity would throughout show positive covariance due to their mutual excitatory connections and due to the correlated input they receive. Yet, when analyzing two different epochs of the reach-to-grasp task, we find that a large fraction of neuron pairs actually switches from being significantly positively correlated to negatively correlated and vice versa (see Figure 6D2, upper left and lower right quadrant). This state-dependence of covariances is in line with the here suggested mechanism of long-range coordination by indirect interactions: Such indirect interactions depend on the effective strengths of various connections and can therefore change considerably with network state. In contrast, correlations due to imprinted network structures are static, so that a change in gain of the neurons will either strengthen or weaken the specific activity propagation, but it will not lead to a change of the sign of covariances that we see in our data. The static impact of these connectivity structures on covariances could nevertheless in principle be included in the presented formalism. Long-range coordination can also be created from short-range connections with random orientations of anisotropic local connection profiles (Smith et al., 2018). This finding can be linked to the emergence of tuning maps in the visual cortex. The mechanism is similar to ours in that it uses nearly linearly unstable modes that are determined by spatial connectivity structures and heterogeneity. Given the different source of heterogeneity, the modes and corresponding covariance patterns are different from the ones discussed here: Starting from fully symmetric networks with corresponding symmetric covariance patterns, Smith et al., 2018 found that increasing heterogeneity (anisotropy) yields more randomized, but still patchy regions of positive and negative covariances that are in line with low-dimensional activity patterns found in visual cortex. In motor cortex we instead find salt-and-pepper patterns that can be explained in terms of heterogeneity through sparsity. 
We provide the theoretical basis and explicit link between connectivity eigenspectra and covariances and show that heterogeneity through sparsity is sufficient to generate the dynamically balanced critical state as a simple explanation for the broad distribution of covariances in motor cortex, the salt-and-pepper structure of coordination, its long spatial range, and its sensitive dependence on the network state. Note that both mechanisms of long-range coordination, the one studied in Smith et al., 2018 and the one presented here, rely on the effective connectivity for the network to reside in the dynamically balanced critical regime. The latter regime is, however, not just one single point in parameter space, but an extended region that can be reached via a multitude of control mechanisms for the effective connectivity, for example by changing neuronal gains (Salinas and Sejnowski, 2001a; Salinas and Sejnowski, 2001b), synaptic strengths (Sompolinsky et al., 1988), and network microcircuitry (Dahmen et al., 2021).
What are possible functional implications of the coordination on mesoscopic scales? Recent work demonstrated activity in motor cortex to be organized in low-dimensional manifolds (Gallego et al., 2017; Gallego, 2018; Gallego et al., 2020). Dimensionality reduction techniques, such as PCA or GPFA (Yu et al., 2009), employ covariances to expose a dynamical repertoire of motor cortex that is comprised of neuronal modes. Previous work started to analyze the relation between the dimensionality of activity and connectivity (Aljadeff et al., 2015; Aljadeff et al., 2016; Mastrogiuseppe and Ostojic, 2018; Dahmen et al., 2019; Dahmen et al., 2021; Hu and Sompolinsky, 2020), but only in spatially unstructured networks, where each neuron can potentially be connected to any other neuron. The majority of connections within cortical areas, however, stems from local axonal arborizations (Schnepel et al., 2015). Here, we add this biological constraint and demonstrate that these networks, too, support a dynamically balanced critical state. This state in particular exhibits neural modes which are spanned by neurons spread across the experimentally observed large distances. In this state a small subset of modes that are close to the point of instability dominates the variability of the network activity and thus spans a low-dimensional neuronal manifold. As opposed to specifically designed connectivity spectra via plasticity mechanisms (Hennequin et al., 2014) or low-rank structures embedded into the connectivity (Mastrogiuseppe and Ostojic, 2018), the dynamically balanced critical state is a mechanism that only relies on the heterogeneity which is inherent to sparse connectivity and abundant across all brain areas.
While we here focus on covariance patterns in stationary activity periods, the majority of recent works studied transient activity during motor behavior (Gallego et al., 2017). How are stationary and transient activities related? During stationary ongoing activity states, covariances are predominantly generated intrinsically (Helias et al., 2014). Changes in covariance patterns therefore arise from changes in the effective connectivity via changes in neuronal gains, as demonstrated here in the two periods of the reach-to-grasp experiment and in our simulations for networks close to criticality (Figure 5D). During transient activity, on top of gain changes, correlated external inputs may directly drive specific neural modes to create different motor outputs, thereby restricting the dynamics to certain subspaces of the manifold. In fact, Elsayed et al., 2016 reported that the covariance structures during movement preparation and movement execution are unrelated and corresponding to orthogonal spaces within a larger manifold. Also, Luczak et al., 2009 studied auditory and somatosensory cortices of awake and anesthetized rats during spontaneous and stimulus-evoked conditions and found that neural modes of stimulus-evoked activity lie in subspaces of the neural manifold spanned by the spontaneous activity. Similarly, visual areas V1 and V2 seem to exploit distinct subspaces for processing and communication (Semedo et al., 2019), and motor cortex uses orthogonal subspaces capturing communication with somatosensory cortex or behavior-generating dynamics (Perich et al., 2021). Gallego, 2018 further showed that manifolds are not identical, but to a large extent preserved across different motor tasks due to a number of task-independent modes. This leads to the hypothesis that the here described mechanism for long-range cooperation in the dynamically balanced critical state provides the basis for low-dimensional activity by creating such spatially extended neural modes, whereas transient correlated inputs lead to their differential activation for the respective target outputs. The spatial spread of the neural modes thereby leads to a distributed representation of information that may be beneficial to integrate information into different computations that take place in parallel at various locations. Further investigation of these hypotheses is an exciting endeavor for the years to come.
Materials and methods
Experimental design and statistical analysis
Two adult macaque monkeys (monkey E - female, and monkey N - male) are recorded in behavioral experiments of two types: resting state and reach-to-grasp. The recordings of neuronal activity in motor and pre-motor cortex (hand/arm region) are performed with a chronically implanted Utah array (Blackrock Microsystems). Details on surgery, recordings, spike sorting and classification of behavioral states can be found in Riehle et al., 2013; Riehle et al., 2018; Brochier et al., 2018; Dąbrowska et al., 2020. All animal procedures were approved by the local ethical committee (C2EA 71; authorization A1/10/12) and conformed to the European and French government regulations.
Resting state data
During the resting state experiment, the monkey is seated in a primate chair without any task or stimulation. Registration of electrophysiological activity is synchronized with a video recording of the monkey’s behavior. Based on this, periods of ‘true resting state’ (RS), defined as no movements and eyes open, are chosen for the analysis. Eye movements and minor head movements are included. Each monkey is recorded twice, with a session lasting approximately 15 and 20 min for monkeys E (sessions E1 and E2) and N (sessions N1 and N2), respectively, and the behavior is classified by visual inspection with single second precision, resulting in 643 and 652 s of RS data for monkey E and 493 and 502 s of RS data for monkey N.
Reach-to-grasp data
In the reach-to-grasp experiment, the monkeys are trained to perform an instructed delayed reach-to-grasp task to obtain a reward. Trials are initiated by a monkey closing a switch (TS, trial start). After 400 ms a diode is illuminated (WS, warning signal), followed by a cue after another 400 ms (CUE-ON), which provides partial information about the upcoming trial. The cue lasts 300 ms and its removal (CUE-OFF) initiates a 1 s preparatory period, followed by a second cue, which also serves as GO signal. Two epochs, divided into 200 ms sub-periods, within such defined trials are chosen for analysis: the first 400 ms after TS (starting period, S1 and S2), and the 400 ms directly following CUE-OFF (preparatory period, P1 and P2) (Figure 6A). Five selected sessions for monkey E and eight for monkey N provide a total of 510 and 1111 correct trials, respectively. For detailed numbers of trials and single units per recording session see Appendix 1—table 1.
Separation of putative excitatory and inhibitory neurons
Offline spike-sorted single units (SUs) are separated into putative excitatory (broad-spiking) and putative inhibitory (narrow-spiking) cells based on their spike waveform width (Barthó et al., 2004; Kaufman et al., 2010; Kaufman et al., 2013; Peyrache, 2012; Peyrache and Destexhe, 2019). The width is defined as the time (number of data samples) between the trough and peak of the waveform. Widths of all average waveforms from all selected sessions (both resting state and reach-to-grasp) per monkey are collected. Thresholds for ‘broadness’ and ‘narrowness’ are chosen based on the monkey-specific distribution of widths, such that intermediate values stay unclassified. For monkey E the thresholds are 0.33 ms and 0.34 ms and for monkey N 0.40 ms and 0.41 ms. Next, a two-step classification is performed session by session. Firstly, the thresholds are applied to average SU waveforms. Secondly, the thresholds are applied to SU single waveforms and the percentage of single waveforms pre-classified as the same type as the average waveform is calculated. SUs for which this percentage is high enough are marked as classified. All remaining SUs are grouped as unclassified. We verify the robustness of our results with respect to changes in the spike sorting procedure in Appendix 1 Section 2.
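A compact sketch of this two-step classification is given below (monkey E thresholds from above; the required agreement fraction is an assumed placeholder, as the text only requires it to be ‘high enough’):

```python
# Sketch of the waveform-width classification (assumed agreement threshold).
import numpy as np

FS = 30_000                       # sampling rate in Hz (1/30 ms resolution)
NARROW, BROAD = 0.33e-3, 0.34e-3  # monkey E thresholds, in seconds

def width(waveform):
    """Trough-to-peak time of a spike waveform (1D array of samples)."""
    trough = np.argmin(waveform)
    peak = trough + np.argmax(waveform[trough:])
    return (peak - trough) / FS

def label(w):
    if w <= NARROW:
        return "narrow"   # putative inhibitory
    if w >= BROAD:
        return "broad"    # putative excitatory
    return None           # intermediate widths stay unclassified

def classify_unit(avg_waveform, single_waveforms, min_fraction=0.6):
    avg_label = label(width(avg_waveform))
    if avg_label is None:
        return "unclassified"
    # second step: a sufficient fraction of single spikes must agree
    same = np.mean([label(width(wf)) == avg_label for wf in single_waveforms])
    return avg_label if same >= min_fraction else "unclassified"

rng = np.random.default_rng(7)
t = np.arange(48)
avg_wf = -np.exp(-(t - 10.0)**2 / 8.0) + 0.5 * np.exp(-(t - 22.0)**2 / 18.0)
singles = [avg_wf + 0.05 * rng.normal(size=t.size) for _ in range(20)]
print(classify_unit(avg_wf, singles))   # -> 'broad' (putative excitatory)
```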
Synchrofacts, that is, spike-like synchronous events across multiple electrodes at the sampling resolution of the recording system (1/30 ms) (Torre, 2016), are removed. In addition, only SUs with a signal-to-noise ratio (Hatsopoulos et al., 2004) of at least 2.5 and a minimal average firing rate of 1 Hz are considered for the analysis, to ensure enough and clean data for valid statistics.
Statistical analysis
All RS periods per resting state recording are concatenated and binned into 1 s bins. Next, pairwise covariances of all pairs of SUs are calculated according to the following formula:
$$\mathrm{COV}(b_{i}, b_{j}) \;=\; \frac{\langle b_{i}, b_{j}\rangle \;-\; n\,\bar{b}_{i}\,\bar{b}_{j}}{n-1}, \tag{3}$$
with $b_{i}$, $b_{j}$ the binned spike trains, $\bar{b}_{i}$, $\bar{b}_{j}$ their mean values, $n$ the number of bins, and $\langle\cdot,\cdot\rangle$ the scalar product. Obtained values are broadly distributed but low on average in every recorded session, for E-E, E-I, and I-I pairs alike.
To explore the dependence of covariance on the distance between the considered neurons, the obtained values are grouped according to distances between electrodes on which the neurons are recorded. For each distance the average and variance of the obtained distribution of cross-covariances is calculated. The variance is additionally corrected for bias due to a finite number of measurements (Dahmen et al., 2019). In most cases, the correction does not exceed 0.01%.
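In code, the pipeline up to this point might look as follows (synthetic placeholder data; Equation (3) as defined above; the bias correction is omitted for brevity):

```python
# Sketch: pairwise covariances (Equation (3)) grouped by electrode distance.
import numpy as np

def pairwise_covariance(b_i, b_j):
    """Equation (3): unbiased covariance of two binned spike trains."""
    n = len(b_i)
    return (np.dot(b_i, b_j) - n * b_i.mean() * b_j.mean()) / (n - 1)

def variance_by_distance(binned, positions, bin_edges):
    """Variance of pairwise covariances, grouped by electrode distance."""
    i, j = np.triu_indices(binned.shape[0], k=1)
    covs = np.array([pairwise_covariance(binned[a], binned[b])
                     for a, b in zip(i, j)])
    dists = np.linalg.norm(positions[i] - positions[j], axis=1)
    idx = np.digitize(dists, bin_edges)
    return [covs[idx == k].var() if np.any(idx == k) else np.nan
            for k in range(1, len(bin_edges))]

rng = np.random.default_rng(8)
binned = rng.poisson(3.0, size=(30, 600))            # 30 units, 600 one-second bins
positions = rng.integers(0, 10, size=(30, 2)) * 0.4  # electrode grid in mm
print(variance_by_distance(binned, positions, np.arange(0.0, 4.4, 0.4)))
```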
In the following step, exponential functions $v(x) = a\,e^{-x/d}$ are fitted to the obtained distance-resolved variances of cross-covariances (with $v$ the variance and $x$ the distance between neurons), each fit yielding a pair of values $(a, d)$. The least squares method implemented in the Python scipy.optimize module (SciPy v.1.4.1) is used. Firstly, three independent fits are performed to the data for excitatory-excitatory, excitatory-inhibitory, and inhibitory-inhibitory pairs. Secondly, analogous fits are performed, with the constraint that the decay constant $d$ should be the same for all three curves.
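Both fitting procedures can be sketched with SciPy as follows (synthetic data with an assumed common decay constant; the actual analysis uses the measured distance-resolved variances):

```python
# Sketch: independent exponential fits vs a joint fit with shared decay constant.
import numpy as np
from scipy.optimize import curve_fit, least_squares

def expo(x, a, d):
    return a * np.exp(-x / d)

rng = np.random.default_rng(5)
x = np.linspace(0.4, 4.0, 10)                  # electrode distances in mm
data = [(x, expo(x, a, 1.2) + 0.01 * rng.normal(size=x.size))
        for a in (1.0, 0.8, 0.6)]              # synthetic E-E, E-I, I-I variances

# firstly: three independent fits, one decay constant each
indiv = [curve_fit(expo, xd, vd, p0=(1.0, 1.0))[0] for xd, vd in data]

# secondly: joint fit, decay constant d shared by all three curves
def joint_residuals(params, data):
    *amps, d = params
    return np.concatenate([vd - expo(xd, a, d)
                           for a, (xd, vd) in zip(amps, data)])

joint = least_squares(joint_residuals, x0=(1.0, 1.0, 1.0, 1.0), args=(data,))
print("individual d:", [round(p[1], 2) for p in indiv],
      " shared d:", round(joint.x[-1], 2))
```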
Covariances in the reach-to-grasp data are calculated analogously but with different time resolution. For each chosen sub-period of a trial, data are concatenated and binned into 200 ms bins, meaning that the number of spikes in a single bin corresponds to a single trial. The mean of these counts normalized to the bin width gives the average firing rate per SU and sub-period. The pairwise covariances are calculated according to Equation (3). To assess the similarity of neuronal activity in different periods of a trial, Pearson product-moment correlation coefficients are calculated on vectors of SU-resolved rates and pair-resolved covariances. Correlation coefficients from all recording sessions per monkey are separated into two groups: using sub-periods of the same epoch (within-epoch), and using sub-periods of different epochs of a trial (between-epochs). These groups are tested for differences with significance level $\alpha = 0.05$. Firstly, to check if the assumptions for parametric tests are met, the normality of each obtained distribution is assessed with a Shapiro-Wilk test, and the equality of variances with an F-test. Secondly, a t-test is applied to compare within- and between-epochs correlations of rates or covariances. Since there are two within and four between correlation values per recording session, the number of degrees of freedom equals $n_{\mathrm{within}} + n_{\mathrm{between}} - 2 = 6\,n_{\mathrm{sessions}} - 2$, which is 28 for monkey E (five sessions) and 46 for monkey N (eight sessions). To estimate the confidence intervals for obtained differences, the mean difference between groups and their pooled standard deviation are calculated for each comparison
$$\Delta m \;=\; m_{\mathrm{within}} - m_{\mathrm{between}}, \qquad s_{\mathrm{pooled}} \;=\; \sqrt{\frac{(n_{\mathrm{within}}-1)\,s_{\mathrm{within}}^{2} \;+\; (n_{\mathrm{between}}-1)\,s_{\mathrm{between}}^{2}}{n_{\mathrm{within}} + n_{\mathrm{between}} - 2}},$$
with $m_{\mathrm{within}}$ and $m_{\mathrm{between}}$ being the means, $s_{\mathrm{within}}$ and $s_{\mathrm{between}}$ the standard deviations, and $n_{\mathrm{within}}$ and $n_{\mathrm{between}}$ the number of within- and between-epoch correlation coefficient values, respectively.
This results in 95 % confidence intervals for the within-between differences, computed separately for rates and covariances in each monkey.
For both monkeys the within-epoch rate-correlation distribution does not fulfill the normality assumption of the t-test. We therefore perform an additional non-parametric Kolmogorov-Smirnov test for the rate comparison; the differences are again significant for both monkeys.
For all tests we use the implementations from the Python scipy.stats module (SciPy v.1.4.1).
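A sketch of this test cascade on placeholder correlation values is shown below (the two-sided F-test is composed from the F distribution, as scipy.stats has no dedicated routine):

```python
# Sketch: normality check, variance check, then t-test and KS fallback.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
within = rng.normal(0.9, 0.05, size=10)    # placeholder correlation values
between = rng.normal(0.5, 0.10, size=20)

W_w, p_w = stats.shapiro(within)           # Shapiro-Wilk normality tests
W_b, p_b = stats.shapiro(between)

# two-sided F-test for equality of variances
F = within.var(ddof=1) / between.var(ddof=1)
dfn, dfd = within.size - 1, between.size - 1
p_F = 2 * min(stats.f.cdf(F, dfn, dfd), stats.f.sf(F, dfn, dfd))

t, p_t = stats.ttest_ind(within, between)  # parametric comparison
ks, p_ks = stats.ks_2samp(within, between) # non-parametric fallback
print(f"shapiro p = {p_w:.2f}/{p_b:.2f}, F-test p = {p_F:.2f}, "
      f"t-test p = {p_t:.1e}, KS p = {p_ks:.1e}")
```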
Mean and variance of covariances for a two-dimensional network model with excitatory and inhibitory populations
The mean and variance of covariances are calculated for a two-dimensional network consisting of one excitatory and one inhibitory population of neurons. The connectivity profile, which describes the probability of a neuron having a connection to another neuron at distance $x$, decays with distance. We assume periodic boundary conditions and place the neurons on a regular grid (Figure 3A), which imposes translation and permutation symmetries that enable the derivation of closed-form solutions for the distance-dependent mean and variance of the covariance distribution. These simplifying assumptions are common practice and simulations show that they do not alter the results qualitatively.
Our aim is to find an expression for the mean and variance of covariances as functions of distance between two neurons. While the theory in Dahmen et al., 2019 is restricted to homogeneous connections, understanding the spatial structure of covariances here requires us to take into account the spatial structure of connectivity. Field-theoretic methods, combined with linear-response theory, allow us to obtain expressions for the mean covariance and variance of covariance
$$\bar{c} \;=\; D\,\big(\mathbb{1}-M\big)^{-1}\big(\mathbb{1}-M^{\mathrm{T}}\big)^{-1}, \qquad \operatorname{Var}(c) \;\propto\; D^{2}\,\big(\mathbb{1}-S\big)^{-1}\big(\mathbb{1}-S^{\mathrm{T}}\big)^{-1}, \tag{4}$$
with identity matrix $\mathbb{1}$, mean $M$ and variance $S$ of the connectivity matrix, input noise strength $D$, and spectral bound $R$ (the spectral radius of $S$ equals $R^{2}$). Since both expressions have a similar structure, the mean and variance can be derived in the same way, which is why we only consider variances in the following.
To simplify Equation (4), we need to find a basis in which $S$, and therefore also $\big(\mathbb{1}-S\big)^{-1}$, is diagonal. Due to invariance under translation, the translation operators and the matrix $S$ have common eigenvectors, which can be derived using that translation operators $T$ satisfy $T^{N} = \mathbb{1}$, where $N$ is the number of lattice sites in $x$- or $y$-direction (see Appendix 1). Projecting onto a basis of these eigenvectors shows that the eigenvalues of $S$ are given by a discrete two-dimensional Fourier transform of the connectivity profile
$$\hat{s}(\mathbf{k}) \;=\; \sum_{\mathbf{x}} s(\mathbf{x})\, e^{-i\,\mathbf{k}\cdot\mathbf{x}}, \qquad \mathbf{k} = \frac{2\pi}{N}\,(n_{x}, n_{y}).$$
Expressing $\big(\mathbb{1}-S\big)^{-1}$ in the eigenvector basis yields the propagator $G(\mathbf{x})$, a discrete inverse Fourier transform of the kernel $1/\big(1-\hat{s}(\mathbf{k})\big)$. Assuming a large network with respect to the connectivity profiles allows us to take the continuum limit
$$G(\mathbf{x}) \;=\; \int \frac{\mathrm{d}^{2}k}{(2\pi)^{2}}\; \frac{e^{i\,\mathbf{k}\cdot\mathbf{x}}}{1-\hat{s}(\mathbf{k})}.$$
As we are only interested in the long-range behavior, which corresponds to $|\mathbf{x}| \to \infty$, or $\mathbf{k} \to 0$, respectively, we can approximate the Fourier kernel around $\mathbf{k} = 0$ by a rational function, quadratic in the denominator, using a Padé approximation. This allows us to calculate the integral, which yields
$$G(\mathbf{x}) \;\propto\; K_{0}\!\big(x/d_{\mathrm{eff}}\big),$$
where $K_{0}$ denotes the modified Bessel function of second kind and zeroth order (Olver et al., 2010), and the effective decay constant $d_{\mathrm{eff}}$ is given by Equation (1). In the long-range limit, the modified Bessel function behaves like
$$K_{0}(x) \;\simeq\; \sqrt{\frac{\pi}{2x}}\; e^{-x}, \qquad x \to \infty.$$
Writing Equation (4) in terms of K_0 gives
with the double asterisk denoting a two-dimensional convolution. The result is proportional to the modified Bessel function of second kind and first order, K_1 (Olver et al., 2010), which has the same exponential long-range limit.
Hence, the effective decay constant of the variances is given by d_eff. Further details of the above derivation can be found in Appendix 1, Sections 4–12.
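The quoted long-range behavior is easy to check numerically; the following sketch compares K_0 to its asymptotic form for an assumed, illustrative value of d_eff.

```python
# Numerical check of the asymptotics quoted above: K0(x/d_eff) approaches
# sqrt(pi*d_eff/(2x)) * exp(-x/d_eff), so the dominant long-range behavior of
# the covariance statistics is exponential. d_eff here is an assumed value.
import numpy as np
from scipy.special import k0

d_eff = 2.0
x = np.linspace(2.0, 20.0, 4)
exact = k0(x / d_eff)
asymptotic = np.sqrt(np.pi * d_eff / (2 * x)) * np.exp(-x / d_eff)
print(np.c_[x, exact, asymptotic])  # relative error shrinks as x/d_eff grows
```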
Network model simulation
The explanation of the network state dependence of covariance patterns presented in the main text is based on linear-response theory, which has been shown to yield results quantitatively in line with non-linear network models, in particular networks of spiking leaky integrate-and-fire neuron models (Tetzlaff et al., 2012; Trousdale et al., 2012; Pernice et al., 2012; Grytskyy et al., 2013; Helias et al., 2013; Dahmen et al., 2019). The derived mechanism is thus largely model independent. We here chose to illustrate it with a particularly simple non-linear input-output model, the rectified linear unit (ReLU). In this model, a shift of the network’s working point can turn some neurons completely off, while activating others, thereby leading to changes in the effective connectivity of the network. In the following, we describe the details of the network model simulation.
We performed a simulation with the neural simulation tool NEST (Jordan, 2019) using the parameters listed in Appendix 1—table 4. We simulated a network of inhibitory neurons (threshold_lin_rate_ipn, Hahne, 2017), which follow the dynamical equation
τ dz_i(t)/dt = −z_i(t) + Σ_j W_ij ν_j(t) + μ_i + ξ_i(t) (5)
where z_i is the input to neuron i and ν_i = φ(z_i) is the output firing rate, with threshold-linear activation function φ(z) = max(z, 0),
time constant τ, connectivity matrix W, a constant external input μ_i, and uncorrelated Gaussian white noise ξ_i(t) with noise strength σ. The neurons were connected using the fixed_indegree connection rule, with a fixed connection probability and indegree, and delta-synapses (rate_connection_instantaneous) of weight w.
The constant external input μ_i to each neuron was normally distributed, with mean μ̄ and standard deviation σ_μ. It was used to set the firing rates of the neurons, which, via the effective connectivity, influence the intrinsically generated covariances in the network. The two parameters μ̄ and σ_μ were chosen such that, in the stationary state, half of the neurons were expected to be above threshold. Which neurons are active depends on the realization of μ and is therefore different for different networks.
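The following NumPy sketch is a minimal re-implementation of the dynamics of Equation (5); the actual simulations used NEST's threshold_lin_rate_ipn model, and all parameter values here are illustrative rather than those of Appendix 1—table 4.

```python
# Minimal NumPy re-implementation of Equation (5); the paper's simulations
# use NEST (threshold_lin_rate_ipn). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, p = 200, 0.1
tau, dt, steps = 10e-3, 0.1e-3, 20000
sigma, mu_mean, mu_std, w = 0.05, 1.0, 1.0, -0.1   # inhibitory weight assumed

W = w * (rng.random((N, N)) < p)        # sparse, purely inhibitory coupling
np.fill_diagonal(W, 0.0)
mu = rng.normal(mu_mean, mu_std, N)     # heterogeneous constant external input
phi = lambda z: np.maximum(z, 0.0)      # threshold-linear activation

z = np.zeros(N)
z_mean = np.zeros(N)
for t in range(steps):                  # Euler-Maruyama integration
    drift = (-z + W @ phi(z) + mu) / tau
    z = z + dt * drift + (sigma / tau) * np.sqrt(dt) * rng.standard_normal(N)
    if t >= steps // 2:                 # average over the second half only
        z_mean += z / (steps - steps // 2)

# Roughly half of the neurons end up above threshold for a suitable mu_mean.
print("fraction of active neurons:", (z_mean > 0).mean())
```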
To assess the distribution of firing rates, we first considered the static variability of the network and studied the stationary solution of the noise-averaged input z̄, which follows from Equation (5) as
z̄_i = Σ_j W_ij φ(z̄_j) + μ_i (6)
Note that z̄, through the nonlinearity φ, in principle depends on the fluctuations of the system. This dependence is, however, small for the chosen threshold-linear φ, which is only nonlinear at the point z = 0.
The derivation of the firing rate distribution is based on the following mean-field considerations: according to Equation (6), the mean input to a neuron in the network is given by the sum of the external input and the recurrent input
The variance of the input is given by
The mean firing rate can be calculated using the diffusion approximation (Tuckwell, 2009; Amit and Tsodyks, 2009), which assumes a normal distribution of inputs, justified by the central limit theorem, and uses the fact that a threshold-linear neuron only fires if its input is positive
where p(ν) denotes the probability density of the firing rate ν. The variance of the firing rates is given by
The number of active neurons is the number of neurons with a positive input, which we set equal to N/2. This is only fulfilled for vanishing mean input. Inserting this condition simplifies the equations above and leads to
For the purpose of relating the synaptic weight w and the spectral bound R, we can view the nonlinear network as an effective linear network with half the population size (only the active neurons). In the latter case, we obtain
For a given spectral bound R, this relation allows us to derive the value
(7) |
that, for an arbitrarily fixed mean external input, makes half of the population active. We were aiming for an effective connectivity with only weak fluctuations in the stationary state. Therefore, we fixed the noise strength of all neurons to a value small compared to the external input, such that the noise fluctuations did not have a large influence on the calculation above, which determines which neurons are active.
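As an illustration of this reasoning, the following sketch inverts a standard circular-law estimate of the spectral bound of the effective linear subnetwork for the weight; the exact prefactors of the paper's Equation (7) may differ, so this is an assumption-based sketch only.

```python
# Assumption-based sketch of the logic behind Equation (7): treat the ~N/2
# active units as an effective linear random network and invert the standard
# circular-law estimate R ~ |w| * sqrt(N_active * p * (1 - p)) for the weight.
import numpy as np

def weight_from_spectral_bound(R, N=2000, p=0.1):
    n_active = N // 2                                 # half the population
    return -R / np.sqrt(n_active * p * (1.0 - p))     # inhibitory, hence < 0

for R in (0.1, 0.5, 0.9):
    print(R, weight_from_spectral_bound(R))
```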
To show the effect of a change in the effective connectivity on the covariances, we simulated two networks with identical connectivity, but supplied them with slightly different external inputs. This was realized by choosing
with
and k = 1, 2 indexing the two networks. The main component of the external input was the same for both networks, but the small component was drawn independently for the two networks. This choice ensures that the two networks have a similar external input distribution (Figure 5B1), but with the external inputs distributed differently across the single neurons (Figure 5B2). How similarly the external inputs are distributed across the single neurons is determined by the mixing parameter (0.1, see Appendix 1—table 4).
The two networks have a very similar firing rate distribution (Figure 5E1), but, akin to the external inputs, the way the firing rates are distributed across the single neurons differs between the two networks (Figure 5E2). As the effective connectivity depends on the firing rates
this leads to a difference in the effective connectivities of the two networks and therefore to different covariance patterns, as discussed in Figure 5.
We performed the simulation for spectral bounds ranging from 0.1 to 0.9 in increments of 0.1. We calculated the correlation coefficient of firing rates and the correlation coefficient of time-lag integrated covariances between neurons in the two networks (Figure 5D) and studied the dependence on the spectral bound.
To check whether the simulation was long enough to yield reliable estimates of the rates and covariances, we split each simulation into two halves and calculated the correlation coefficient between the rates and covariances from the first half of the simulation and those from the second half. They were almost perfectly correlated (Figure 5C). Then, we calculated the correlation coefficients comparing all halves of the first simulation with all halves of the second simulation, showing that the covariance patterns changed much more than the rate patterns (Figure 5C).
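The split-half check amounts to the following sketch on synthetic patterns; variable names and noise levels are illustrative.

```python
# Illustrative split-half consistency check: estimates from the two halves of
# one simulation should be almost perfectly correlated if the simulation is
# long enough.
import numpy as np

rng = np.random.default_rng(2)
true_pattern = rng.random(100)                    # e.g. per-neuron rates
half_1 = true_pattern + 0.01 * rng.standard_normal(100)
half_2 = true_pattern + 0.01 * rng.standard_normal(100)
print("split-half reliability:", np.corrcoef(half_1, half_2)[0, 1])
```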
Acknowledgements
This work was partially supported by HGF young investigator's group VH-NG-1028, the European Union's Horizon 2020 research and innovation program under Grant agreements No. 785907 (Human Brain Project SGA2) and No. 945539 (Human Brain Project SGA3), ANR grant GRASP, and partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 368482240/GRK2416. We are grateful to our colleagues in the NEST and Elephant developer communities for continuous collaboration. All network simulations were carried out with NEST 2.20.0 (http://www.nest-simulator.org). All data analyses were performed with Elephant (https://neuralensemble.org/elephant/). We thank Sebastian Lehmann for help with the design of the figures.
Appendix 1
1 Correlations and covariances
A typical measure for the strength of neuronal coordination is the Pearson correlation coefficient, here applied to spike counts in bins. Correlation coefficients, however, comprise features of both auto- and cross-covariances. From a theoretical point of view, it is simpler to study cross-covariances separately. Indeed, linear-response theory has been shown to faithfully predict cross-covariances in spiking leaky integrate-and-fire networks (Tetzlaff et al., 2012; Pernice et al., 2012; Trousdale et al., 2012; Helias et al., 2013; Dahmen et al., 2019; Grytskyy et al., 2013). Appendix 1—figure 1 justifies the investigation of cross-covariances instead of correlation coefficients for the purpose of this study. It shows that the spatial organization of correlations closely matches the spatial organization of cross-covariances.
2 Robustness to E/I separation
The analysis of the experimental data involves a number of preprocessing steps, which may affect the resulting statistics. In our study, one such critical step is the separation of putative excitatory and inhibitory units, which is partially based on setting thresholds on the widths of spike waveforms, as described in the Methods section. We tested the robustness of our conclusions with respect to these thresholds.
As mentioned in the Methods, two thresholds for the width of a spike waveform are chosen based on all SU average waveforms: a width larger than the ''broadness'' threshold indicates a putative excitatory neuron; a width lower than the ''narrowness'' threshold indicates a putative inhibitory neuron. Units with intermediate widths remain unclassified. Additionally, to increase the reliability of the classification, we perform it in two steps: first on the SU's average waveform, and second on all its single waveforms. We calculate the percentage of single waveforms classified as either type. Finally, only SUs showing a high enough percentage of single waveforms classified the same as the average waveform are sorted as the respective type. The minimal percentage required, referred to as consistency C, is initially set to the lowest value which ensures no contradictions between average- and single-waveform thresholding results. While the ''broadness'' and ''narrowness'' thresholds are chosen based on all available data for a given monkey, the required consistency is determined separately for each recording session. For monkey N, C is set to 0.6 in all but one session: in resting state session N1 it is increased to 0.62. For monkey E, the value of C equals 0.6 in the resting state recordings and takes the following values in the five analyzed reach-to-grasp sessions: 0.6, 0.89, 0.65, 0.61, 0.64.
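The two-step procedure can be summarized by the following illustrative sketch; the threshold values and helper names are placeholders, not the values used for the actual recordings.

```python
# Illustrative two-step waveform classification; thresholds and the required
# consistency are placeholders, not the values used for the actual data.
import numpy as np

def classify_width(width, narrow_thr=0.3, broad_thr=0.4):
    if width >= broad_thr:
        return "exc"            # broad waveform: putative excitatory
    if width <= narrow_thr:
        return "inh"            # narrow waveform: putative inhibitory
    return "unclassified"

def classify_su(avg_width, single_widths, consistency=0.6):
    label = classify_width(avg_width)            # step 1: average waveform
    if label == "unclassified":
        return label
    same = np.mean([classify_width(w) == label for w in single_widths])
    return label if same >= consistency else "unclassified"  # step 2

rng = np.random.default_rng(3)
print(classify_su(0.45, rng.normal(0.45, 0.05, size=100)))
```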
The only step of our analysis for which the separation of putative excitatory and inhibitory neurons is crucial is the fitting of exponentials to the distance-resolved covariances. This step only involves resting state data. To test the robustness of our conclusions, we manipulate the required consistency value for sessions E1, E2, N1, and N2 by setting it to 0.75. Appendix 1—figure 2 and Appendix 1—table 1 summarize the resulting fits.
It turns out that increasing C to 0.75, which implies disregarding about 20–25 percent of all data, does not have a strong effect on the fitting results. The obtained decay constants are smaller than for the lower value, but they stay in a range about an order of magnitude larger than that of the anatomical connectivity. We furthermore see that fitting individual slopes to different populations in some sessions leads to unreliable results (cf. yellow lines in Appendix 1—figure 2A, I and blue lines in Appendix 1—figure 2C, D, K, L). The data are therefore not sufficient to detect differences in decay constants between neuronal populations. Fitting a single decay constant instead yields trustworthy results (cf. yellow lines in Appendix 1—figure 2E, M and blue lines in Appendix 1—figure 2G, H, O, P). Our data thus clearly expose that the decay constants of covariances are in the millimeter range.
Appendix 1—table 1. Summary of exponential fits to distance-resolved variance of covariance.
C | | E1 | E2 | N1 | N2
---|---|---|---|---|---
0.6 (default) | #exc/#inh | 56/50 | 67/56 | 76/45 | 78/62
 | unclassified | 0.078 | 0.075 | 0.069 | 0.091
 | relative error | 1.1157 | 1.0055 | 1.0097 | 1.0049
 | 1-slope fit | 1.674 | 1.029 | 1.676 | 4.273
 | I-I | 1.919 | 0.996 | 1.647 | 4.156
 | I-E | 0.537 | 1.206 | 2.738 | 96100.688
 | E-E | 1.642 | 1.308 | 80308.482 | 94096.871
0.75 | #exc/#inh | 45/42 | 47/48 | 70/36 | 74/48
 | unclassified | 0.24 | 0.28 | 0.18 | 0.21
 | relative error | 1.1778 | 1.0141 | 1.0102 | 1.0090
 | 1-slope fit | 1.357 | 0.874 | 1.420 | 2.587
 | I-I | 1.794 | 0.809 | 1.394 | 2.550
 | I-E | 0.496 | 1.123 | 3.682 | 40.852
 | E-E | 1.390 | 1.199 | 80548.500 | 10310.780
3 Stationarity of behavioral data
The linear-response theory, with the aid of which we develop our predictions about the covariance structure in the network, assumes that the processes under examination are stationary in time. However, this assumption is not necessarily met in experimental data, especially in motor cortex during active behavioral tasks. For this reason, we analyzed the stationarity of the average single-unit firing rate and of the pairwise zero time-lag covariance throughout a reach-to-grasp trial, similarly to Dahmen et al., 2019. Although the spiking activity becomes highly non-stationary during movement, the epochs chosen for the analysis in our study (S and P) show only moderate variability in time (Appendix 1—figure 3). An analysis on the level of single-unit resolved activity also shows that the majority of neurons have stationary activity statistics within the relevant epochs S and P, especially when compared to the whole dynamic range explored during movement transients towards the end of the task (Appendix 1—figure 5). Appendix 1—figure 6 shows that there are, however, a few exceptions (e.g. units 11 and 84 in this session) that show moderate transients also within an epoch. Nevertheless, these transients are small compared to the changes between the two epochs S and P.
Thus, both the population-level and the single-unit-level analyses are in line with the second test for stationarity that we show in Figure 6. There we compare the firing rate and covariance changes between two 200 ms segments of the same epoch to the changes between two 200 ms segments of different epochs. If the neural activity were not stationary within an epoch, we would not obtain correlation coefficients of almost one between firing rates in Figure 6E and correlation coefficients of up to 0.9 between covariance patterns within one epoch in Figure 6F. In summary, these analyses together make us confident that assuming stationarity within an epoch is a good approximation for showing that there are significant behaviorally related changes in covariances across epochs of the reach-to-grasp experiment.
Appendix 1—table 2. Numbers of trials and single units per reach-to-grasp recording session.
Session | Ntrials | Nsingle units |
---|---|---|
e161212-002 | 108 | 129 |
e161214-001 | 99 | 118 |
e161222-002 | 102 | 118 |
e170105-002 | 101 | 116 |
e170106-001 | 100 | 113 |
i140613-001 | 93 | 137 |
i140617-001 | 129 | 155 |
i140627-001 | 138 | 145 |
i140702-001 | 157 | 134 |
i140703-001 | 142 | 142 |
i140704-001 | 141 | 124 |
i140721-002 | 160 | 96 |
i140725-002 | 151 | 106 |
4 Network model
We consider neuronal network models with isotropic, distance-dependent connection profiles. Ultimately, we are interested in describing cortical networks with a two-dimensional sheet-like structure. But for developing the theory, we first consider the simpler case of a one-dimensional ring and subsequently develop the theory on a two-dimensional torus, ensuring periodic boundary conditions in both cases. Equidistantly distributed neurons form a grid on these manifolds, with the position of neuron i described by the vector x_i. The connections from neuron j to neuron i are drawn randomly with a connection probability that decays with the distance between the neurons, described by a normalized connectivity profile, which we assume to obey radial symmetry. The connection probability decays on a characteristic length scale d. As we are working on discrete lattices, we introduce the probability of two neurons being connected, defined via the normalized profile and the lattice spacing. We set the synaptic weights for connections of a single type to a fixed value w, but allow for multiple connections between neurons; that is, the number of connections between a pair of neurons is binomially distributed. Such multapses are required to simultaneously meet biological constraints on neuronal indegrees, neuron densities, and spatial ranges of connections. If one instead assumed Bernoulli connectivity, an analysis analogous to Eq. 7 of Senk et al., 2018 would yield a connection probability exceeding unity.
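A minimal sketch of drawing such multapse connectivity on a ring, with an exponential profile assumed purely for illustration:

```python
# Sketch of multapse connectivity on a ring: the number of synapses n_ij is
# binomial, so single pairs can be connected by more than one synapse. The
# exponential profile and all parameter values are assumed for illustration.
import numpy as np

rng = np.random.default_rng(4)
N, K, d, w = 200, 40, 10.0, 0.2
x = np.arange(N)
dist = np.minimum(np.abs(x[:, None] - x[None, :]),
                  N - np.abs(x[:, None] - x[None, :]))   # periodic boundaries
profile = np.exp(-dist / d)
profile /= profile.sum(axis=1, keepdims=True)            # normalized profile

n_syn = rng.binomial(K, profile)     # multapses: entries can exceed 1
np.fill_diagonal(n_syn, 0)           # no self-connections
W = w * n_syn
print("mean indegree:", n_syn.sum(axis=1).mean())        # close to K
```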
We introduce two populations of neurons: excitatory (E) and inhibitory (I). Each population has a fixed number of neurons, and the ratio of excitatory to inhibitory neurons is, for convenience, assumed to be an even number (see the permutation symmetry below). The connection from one population to another has a fixed synaptic weight and a characteristic decay length of the connectivity profile. The average number of inputs drawn per neuron is fixed. In order to preserve translation symmetry, the corresponding number of excitatory neurons and one inhibitory neuron are put onto the same lattice point, as shown in Figure 3A in the main text.
Linear-response theory has been shown to faithfully capture the statistics of fluctuations in asynchronous irregular network states (Lindner et al., 2005). Here we follow Grytskyy et al., 2013, who show that different types of neuronal network models can be mapped to an Ornstein-Uhlenbeck process and that the low-frequency limit of this simple rate model describes spike count covariances of spiking models well (Tetzlaff et al., 2012). In particular, Dahmen et al., 2019 showed quantitative agreement of linear-response predictions for the statistics of spike-count covariances in leaky integrate-and-fire networks for the full range of spectral bounds . Therefore, we consider a network of linear rate neurons, whose activity is described by
with uncorrelated Gaussian white noise ξ_i(t) of strength D. The solution to this differential equation can be found by multiplying the whole equation with the left eigenvectors of the connectivity matrix W
(8) |
where λ_α denotes the corresponding eigenvalue of W. Neglecting the noise term, the solutions are given by
(9) |
with Heaviside function H(t). These are the eigenmodes of the linear system, and they are linear combinations of the individual neuronal rates
Note that the weights of these linear combinations depend on the details of the effective connectivity matrix W. The stability of an eigenmode is determined by the corresponding eigenvalue λ. If Re(λ) < 1, the eigenmode is stable and decays exponentially. If Re(λ) > 1, the eigenmode is unstable and grows exponentially. If Im(λ) ≠ 0, the eigenmode is oscillatory with an exponential envelope. Re(λ) = 1 is here referred to as the critical point. This type of stability is also called linear stability, to stress that these considerations are only valid in the linear approximation. Realistic neurons have a saturation at high rates, which prevents activity from diverging indefinitely. A network is called linearly stable if all modes are stable. This is determined by the real part of the largest eigenvalue of W, called the spectral bound R. In inhibition-dominated networks, the spectral bound is determined by the heterogeneity in connections and defines the dynamically balanced critical state (Dahmen et al., 2019).
The different noise components excite the corresponding eigenmodes of the system and act as a driving force. A noise vector that is not parallel to a single eigenvector excites several eigenmodes, each with its corresponding strength.
Note that the different eigenmodes do not interact, which is why the total activity is given by a linear combination, or superposition, of the eigenmodes
where v_α denotes the α-th right eigenvector of the connectivity matrix W.
5 Covariances
Time-lag integrated covariances can be computed analytically for the linear dynamics (Gallego et al., 2020). They follow from the connectivity and the noise strength as (Pernice et al., 2011; Trousdale et al., 2012; Grytskyy et al., 2013; Lindner et al., 2005)
C = (1 − W)^(−1) D (1 − W^T)^(−1) (10)
with identity matrix 1. These covariances are equivalent to covariances of spike counts in large time windows, given by the zero-frequency component of the Fourier transform of the time-lagged covariance function (sometimes referred to as the Wiener-Khinchin theorem, Gardiner, 1985, even though the theorem proper applies in cases where the Fourier transforms of the signals do not exist). Spike count covariances (Figure 1B in the main text) can be computed from trial-resolved spiking data (Dahmen et al., 2019). This equivalence allows us to directly relate theoretical predictions for covariances to the experimentally observed ones.
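For a small random network, Equation (10) can be evaluated directly, as in the following sketch (a scalar noise strength times the identity is assumed for simplicity).

```python
# Direct evaluation of Equation (10) for a small random network; a scalar
# noise strength D times the identity is assumed for simplicity.
import numpy as np

rng = np.random.default_rng(5)
N, D = 100, 1.0
W = rng.normal(0.0, 0.5 / np.sqrt(N), size=(N, N))  # spectral bound ~ 0.5
A = np.linalg.inv(np.eye(N) - W)
C = D * A @ A.T                                     # Equation (10)
off_diag = C[~np.eye(N, dtype=bool)]                # cross-covariances only
print("mean and variance:", off_diag.mean(), off_diag.var())
```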
While Equation (10) provides the full information on covariances between any two neurons in the network, this information is not available in the experimental data. Only a small subset of neuronal activities can be recorded, such that inference of connectivity parameters from Equation (10) is infeasible. We recently proposed in Dahmen et al., 2019 to instead consider the statistics of covariances as the basis for comparison between models and data. Using Equation (8) and Equation (10) as a starting point, field-theoretical techniques allow the derivation of equations for the mean and variance of cross-covariances in relation to the mean M and variance S of the connectivity matrix (Dahmen et al., 2019):
(11) |
(12) |
M and S are defined in the subsequent section. The renormalized input noise strength is given by
(13) |
with input noise covariance D and the all-ones vector. Note that Equation (12) only holds for cross-covariances (i ≠ j). The diagonal terms, that is, the variances of auto-covariances, do get a second contribution, which is negligible for the cross-covariances considered here.
6 Cumulant generating function of connectivity matrix
For calculating the mean and variance of the covariances of the network activity (Equations (11) and (12)), we need the mean M and variance S of the connectivity W. In the following, we derive the cumulant-generating function (Gardiner, 1985) of the connectivity matrix elements.
The number of connections from neuron j to neuron i is a binomial random variable, with the number of trials given by the indegree and the probability of success given by the distance-dependent connection probability (in the following, for brevity, we suppress the indices i and j)
The average number of connections from neuron j to neuron i assures the correct average total indegree
The moment generating function of a connectivity matrix element is given by
In a realistic network, the number of trials is very large. In the limit of infinitely many trials at fixed mean, the binomial distribution converges to a Poisson distribution and we can write
Taking the logarithm leads to the cumulant generating function
and the first two cumulants
7 Note on derivation of variance of covariances
Note that M and S have an identical structure, determined by the connectivity profile, and the structure of the covariance equation is identical for the mean, Equation (11), and the variance, Equation (12), as well. This is why in the following we only derive the results for the mean of covariances. The results for the variance of covariances are obtained by substituting M by S and the noise strength by its renormalized counterpart. As we show, divergences in expressions related to the mean covariances arise if the population eigenvalue of the effective connectivity matrix approaches one. In expressions related to the variance of covariances, the divergences are caused by the squared spectral bound being close to one. In general expressions, we sometimes use a single symbol to denote either the population eigenvalue or the spectral bound, depending on the context of mean or variance of covariances.
8 Utilizing symmetries to reduce dimensionality
For real neuronal networks, the anatomical connectivity is never known completely, let alone the effective connectivity. This is why we consider disorder-averaged systems, described by the mean and variance of the connectivity. The latter inherit the underlying symmetries of the network, such as the same radially symmetric connectivity profile for all neurons of one type. As neuronal networks are high-dimensional systems, calculating covariances from Equation (11) and Equation (12) at first seems like a daunting task. But leveraging the aforementioned symmetries, similarly as in Kriener et al., 2013, allows for an effective reduction of the dimensionality of the system, thereby rendering the problem manageable.
As a demonstrative example of how this is done, consider a random network of neurons on a one-dimensional ring, in which a neuron can form a connection of weight w to any other neuron with probability p0. In that case, the mean connectivity M is a homogeneous matrix, with all entries given by the same average connectivity weight
This corresponds to an all-to-all connected ring network. Due to the symmetry of the system, moving all neurons by one lattice constant does not change the system. The translation operator T, representing this operation mathematically, is defined via its effect on the vector of neuron activities
Applying T N-times yields the identity operation
Hence, its eigenvalues are given by complex roots of one
with N denoting the circumference of the ring. This shows that T has one-dimensional eigenspaces. Since the system is invariant under translation, M is invariant under conjugation with T, and thus M and T commute. As M leaves the eigenspaces of T invariant (if v is an eigenvector of T, then Mv is an eigenvector with the same eigenvalue, so they need to be multiples of each other), all eigenvectors of T must be eigenvectors of M. Accordingly, knowing the eigenvectors of T allows diagonalizing M. The normalized (left and right) eigenvectors of T are given by
We get the eigenvalues of M by multiplying it with the eigenvectors of T
which is always zero, except for k = 0, which corresponds to the population eigenvalue of M (Figure 3C in the main text). Now, we can simply write down the diagonalized form of M
and we have effectively reduced the N-dimensional problem to a one-dimensional one. Inverting 1 − M in Equation (11) is now straightforward, since it is diagonal in the new basis; suppressing the mode index in the following, its inverse is given by
The renormalized noise can be evaluated using the fact that the all-ones vector occurring in Equation (13) is an eigenvector of S. After identifying the eigenvalue s0 with the squared spectral bound R², we find
which allows us to express the mean cross-covariances (see Equation (11)) and the variance of cross-covariances (see Equation (12)) in terms of the eigenvectors of M and S, respectively
9 One-dimensional network with one population
The simplest network with spatial connectivity is a one-dimensional ring with a single population of neurons. Following Section 6, the mean connectivity matrix has the form
As the connectivity profile only depends on the distance between two neurons, the rows of M are identical, but shifted by one index.
9.1 Dimensionality reduction
We follow the procedure developed in Section 8, as this system is invariant under translation as well. Suppressing the subscripts, we get the eigenvalues of M
where the sum runs over all lattice sites. We used the translational symmetry from the first to the second line. The change of sign in the exponential from line two to three is due to the fact that we are summing over the second index of M. Thus, the eigenvalues are effectively given by the discrete Fourier transform of the connectivity profile. Expressing M in terms of the eigenvectors of T leads to
(14) |
where we extracted an identity for later convenience and defined the remaining Fourier kernel accordingly.
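The central statement of this section, that translation invariance renders the eigenvalues of the mean connectivity equal to the discrete Fourier transform of the connectivity profile, can be verified numerically:

```python
# Numerical check: for a translation-invariant (circulant) mean connectivity
# on a ring, the eigenvalues equal the discrete Fourier transform of the
# connectivity profile.
import numpy as np

N, d = 64, 5.0
m = np.arange(N)
profile = np.exp(-np.minimum(m, N - m) / d)            # symmetric ring profile
M = np.array([np.roll(profile, i) for i in range(N)])  # circulant matrix

eig_direct = np.sort(np.linalg.eigvals(M).real)
eig_fourier = np.sort(np.fft.fft(profile).real)  # imaginary parts vanish here
print(np.allclose(eig_direct, eig_fourier))      # True
```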
Next, we consider the renormalized noise, which is given by Equation (13). Using that the all-ones vector in the second term is the eigenvector of corresponding to , we get
Again, we identify s0 with the spectral bound , and find
(15) |
Inserting Equation (14) and Equation 15 into Equation (11) yields
9.2 Continuum limit
As we assume the lattice constant to be small, we know that the connectivity profile is sampled densely, and we are allowed to take the continuum limit. Therefore, we write
Note that the sign convention in the exponential again follows from summing over the second index. If the decay constant of the connectivity profile is small compared to the size of the network, we can take the network size to infinity and finally end up with
(16) |
Analogously, we find
(17) |
where we defined
(18) |
with
(19) |
Finally, we get
(20) |
where the asterisk denotes the convolution.
9.3 Prediction of exponential decay of covariance statistics
Note that the integral in Equation (18) can be interpreted as an integral in the complex plane. According to the residue theorem, the solution to this integral is a weighted sum of exponentials, evaluated at the poles of the integrand. As this integral appears in the equation for the mean covariances, and the convolution of two exponentials is again an exponential up to a prefactor, we expect the dominant behavior to be an exponential decay in the long-range limit, with decay constants given by the inverse imaginary parts of the poles. The poles closest to zero are the ones that lead to the most shallow, and thereby dominant, decay. A non-vanishing real part of the poles leads to oscillations.
9.4 Long-range limit
We cannot expect to solve the integral in Equation (17) for arbitrary connectivity profiles. To continue our analysis, we make use of the Padé method, which approximates arbitrary functions as rational functions (Basdevant, 1972). We approximate the Fourier kernel around k = 0 using a Padé approximation of order (0,2)
with
(21) |
This allows us to calculate the approximate poles of the integrand
(22) |
As the second-order coefficient will be negative, due to the factor i² = −1 from the second derivative of the Fourier integral, we write
Closing the integral contour in Equation (18) in the upper half plane for positive distances, and in the lower half plane for negative distances, we get
where we defined the effective decay constant for the mean covariances
where the coefficients are determined by the Fourier transform of the connectivity profile, Equation (16). Note that the prefactor is again the population eigenvalue of the effective connectivity matrix. For evaluating Equation (11) and Equation (12), we need to calculate the convolution of μ with itself
The final expression for the mean covariances is
Equivalently, for the variance of covariances we obtain the final result
where
Note that the quality of the Padé approximation depends on the outlier eigenvalue and the spectral bound. For the variances, the approximation works best for spectral bounds close to one. The reason is that we are approximating the position of the poles in the complex integral Equation (18): we make the approximation around k = 0, and Equation (22) shows that the position of the complex poles moves closer to zero as the spectral bound approaches one.
General results:
Using Equation (21)
we find
with
For the variance we use
to get
with
Exponential connectivity profile:
Using an exponential connectivity profile given by
we find and
with for the mean, and for the variance.
Gaussian connectivity profile:
Analogously, using a Gaussian connectivity profile given by
we find , and get
(23) |
10 One-dimensional network with two populations
Realistic neuronal networks consist of excitatory and inhibitory neurons, so we need to introduce a second population into our network. Typically, there are more excitatory than inhibitory neurons in the brain. Therefore, we introduce a fixed number of excitatory neurons for each inhibitory neuron. We place these excitatory neurons and one inhibitory neuron together in one cell. The cells are distributed equally along the ring.
The structure of the connectivity matrix depends on the choice of the activity vector . For later convenience we choose
where the first block is a vector denoting the activity of the excitatory neurons in cell i. The connectivity matrix correspondingly decomposes into blocks and qualitatively has the structure
(24) |
The blocks are matrices whose dimensions are set by the numbers of excitatory and inhibitory neurons per cell, and they describe the connectivities from population β in cell j to population α in cell i. The entries are given by
The difference stems from the fact that we have several times as many excitatory as inhibitory neurons. As the total indegree from excitatory neurons should be preserved, we need to introduce a reducing factor, because the connection probability is normalized to one.
10.1 Dimensionality reduction
In the following, we reduce the dimensionality of the problem as done before in the case with one population. First, we make use of the symmetry within the cells: all entries of the connectivity corresponding to connections coming from excitatory neurons of the same cell need to be the same. For that reason, we change the basis to
(25) |
where the first component denotes a vector containing only ones. For a full basis, we need to include all vectors in which this all-ones vector is replaced by a vector containing all possible permutations of equal numbers of ±1. In this basis, the connectivity matrix is block diagonal
and the remaining block is a matrix with the same qualitative structure as in Equation (24), but with the submatrices replaced by
Next, we use translational symmetry of the cells. The translation operator is defined by
As the system is invariant under moving each cell to the next lattice site, the connectivity matrix is invariant under the corresponding transformation
Again, the eigenvalues of the translation operator can be determined as before, and they are the same as in the case of one population. But note that here the eigenspaces corresponding to the single eigenvalues are two-dimensional. The eigenvectors
belong to the same eigenvalue. In this basis, is block diagonal, with each block consisting of a 2×2 matrix, corresponding to one value of ,
Since all block matrices can be treated equally, we further reduced the problem to diagonalizing a 2×2 matrix. The submatrices take the form
with the discrete Fourier transform
(26) |
Note that the Fourier variables are still discrete here, but we could take the continuum limit at this point. The eigenvalues are given by
(27) |
The corresponding eigenvectors are
(28) |
with normalization . The eigenvectors written in the Fourier basis are given by
(29) |
and we can get the eigenvectors in the basis we started with by extending the components to full vectors similar to Equation (25), where the elements corresponding to excitatory neurons are repeated. Note that the normalization of the original basis leads to an additional factor in the first term of Equation (29).
Analogously, we can find the left eigenvectors by conducting the same steps with the transpose of the connectivity matrix
(30) |
and the vectors in the original basis are obtained similarly to the right eigenvectors. The normalization is chosen such that
which leads to
Now, we can express in terms of the eigenvalues and eigenvectors of
(31) |
which leads to
(32) |
where we defined the kernel similarly to Equation (19). Let E and I denote the sets of indices referring to excitatory and inhibitory neurons, respectively. We find
with
(33) |
and
10.2 General results
The renormalized noise is evaluated using the same trick as in the one-population case: we express the all-ones vector using eigenvectors of the variance matrix
Evaluating the coefficients and inserting the corresponding solutions into Equation (13) yields
(34) |
with
with the eigenvalues of the variance matrix. We again identified the spectral bound
(35) |
The mean covariances can be written as
We can distinguish three different kinds of covariances, depending on the types of neurons involved
with
10.3 Long-range limit
From here on, we consider the special case in which the synaptic connections only depend on the type of the presynaptic neuron and not on the type of the postsynaptic neuron. This is in agreement with network parameters used in established cortical network models (Potjans and Diesmann, 2014; Senk et al., 2018), in which the connection probabilities to both types of target neurons in the same layer are usually of the same order of magnitude. In that case, all expressions become independent of the first population index, and the only expressions we need to evaluate are
with
and
(36) |
After taking the continuum limit, we can make a (0,2)-Padé approximation again
which leads to the poles
or the effective decay constant of the mean covariances
Using
we get
after introducing relative parameters
The renormalized noise in Equation (13) reduces to
(37) |
The mean covariances are
with
and
Note that expressions coming from both populations contribute to each kind of covariance. Therefore, all mean covariances contain a part that decays with either of the decay constants we just determined. If, for example, the inhibitory decay constant is much larger than the excitatory one, the covariances will decay with the inhibitory decay constant in the long-range limit
Exponential connectivity profile:
Just as in Section 9.4 we get
with for the decay constant of the mean covariances, and for the decay constant of the variances.
Gaussian connectivity profile:
And similar to Section 9.4 we get
11 Two-dimensional network with one population
In the following, we consider two-dimensional networks, which are intended to mimic a single-layered cortical network. Neurons are positioned on a two-dimensional lattice with periodic boundary conditions in both dimensions (a torus). We define the activity vector to be of the form
The connectivity matrix is defined correspondingly.
11.1 Dimensionality reduction
In two dimensions, we have to define two translation operators that move all neurons one step in the x-direction or in the y-direction, respectively. They are defined via their action on the activity vector
(38) |
Similar reasoning as in one dimension leads to the eigenvalues
and similarly for the y-direction. The eigenvectors can be inferred from the recursion relations
where entries of the vector are defined analogously to Equation (38). The eigenvectors are given by
where we again suppressed the subscripts. Using that these eigenvectors are also eigenvectors of the connectivity matrix yields its eigenvalues
In the continuum limit, this becomes the two-dimensional Fourier transform
(39) |
The inverse of A is given by
(40) |
with the inverse two-dimensional Fourier transform
(41) |
The expression for the renormalized noise is the same as in the one-dimensional case with one population. Hence, the mean covariances are given by
(42) |
which is the one-dimensional expression, except for the convolution, which is replaced by its two-dimensional analog, denoted here by the double asterisk.
11.2 Long-range limit
Employing the symmetry of the connectivity kernel, we rewrite the integral using polar coordinates
(43) |
with , and make a Padé approximation of order (0,2) of the integration kernel
(44) |
Following (Goldenfeld, 1992, p.160f), we can interpret this as calculating the Green’s function of the heat equation
(45) |
which can be solved using the fact that the solution can only be a function of the radial distance, due to the symmetry of the kernel. Rewriting leads to
with the effective decay constant
(46) |
and . Defining , , and using , we get
The solution to this equation is given by the modified Bessel function of second kind and zeroth order, K0
Reinserting the defined variables yields
(47) |
Note that the modified Bessel functions of second kind decay exponentially for long distances,

K_0(x) ≈ sqrt(π/(2x)) e^(−x) for x → ∞. (48)
Note, however, that the inverse square root of the distance appears as a prefactor of the exponential. Formally, this is the one-dimensional result; the only differences are that the kernel is a two-dimensional Fourier transform instead of a one-dimensional one, and that the result contains modified Bessel functions of second kind instead of exponentials.
In order to evaluate the expression for the mean covariances, Equation (42), one needs to calculate the two-dimensional convolution of a modified Bessel function of second kind with itself, for which we use the following trick
where F denotes the Fourier transform and H the Hankel transform. The last step can be found in Abramowitz and Stegun, 1964, 9.6.27.
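The identity can also be checked numerically by an FFT-based convolution; the closed form used in the sketch below, a self-convolution of K_0(r/d) equal to π d r K_1(r/d), is our own illustrative restatement of the Fourier/Hankel argument above.

```python
# FFT-based check of the convolution identity: for f(r) = K0(r/d), the 2D
# self-convolution equals pi*d*r*K1(r/d) (our restatement of the argument).
import numpy as np
from scipy.special import k0, k1

L, n, d = 40.0, 512, 2.0
ax = (np.arange(n) - n // 2) * (L / n)
X, Y = np.meshgrid(ax, ax)
r = np.hypot(X, Y)
r[r == 0] = 1e-9                       # regularize log-singularity at r = 0
f = k0(r / d)

dA = (L / n) ** 2                      # area element for the Riemann sum
conv = np.fft.fftshift(np.fft.ifft2(np.fft.fft2(f) ** 2).real) * dA
pred = np.pi * d * r * k1(r / d)
i = n // 2 + 40                        # probe a distance away from the origin
print(conv[n // 2, i], pred[n // 2, i])  # the two values agree closely
```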
The mean covariances are given by
Using
(49) |
we get the effective decay constant
(50) |
Exponential connectivity profile:
Using a two-dimensional exponential connectivity profile
leads to , and we get
with , and .
Gaussian connectivity profile:
Using a two-dimensional Gaussian connectivity profile
leads to , and we get
11.3 Note on higher order approximation
While the (0,2)-Padé approximation yields good results in the one-dimensional cases, in two dimensions the results only coincide for large spectral radii (Appendix 1—figure 7). One can extract a higher-order approximation of the poles of the integration kernel, and thereby of the effective decay constant d_eff, using the DLog-Padé method, for which one calculates a Padé approximation of the logarithmic derivative of the integration kernel around zero (Pelizzola, 1994). Using a (1,2)-Padé approximation leads to
which coincides with our previous results in the limit of the spectral bound approaching one, and thus for large spectral radii. Note that this expression contains the fourth moment of the connectivity kernel.
12 Two-dimensional network with two populations
Finally, we consider a two-dimensional network with two populations of neurons. As in the one-dimensional case, the neurons are gathered in cells, which contain one inhibitory and several excitatory neurons. Again, they are placed on a two-dimensional lattice with periodic boundary conditions. The activity vector takes the form
(51) |
where each entry denotes the vector of neuronal activities of one cell.
12.1 Dimensionality reduction
We apply the procedure developed so far, which leads to the results found in the one-dimensional case with two populations, with Fourier transforms and convolutions replaced by their two-dimensional analogs and exponentials replaced by modified Bessel functions of second kind. We end up with
and given by (33) and the two-dimensional Fourier transform
The renormalized noise is given by Equation (34) with spectral bound Equation (35), with the eigenvalues replaced by their two-dimensional Fourier transforms.
12.2 Long-range limit
Again, considering the special case in which the synaptic connections only depend on the type of the presynaptic neuron and not on the type of the postsynaptic neuron, the expressions simplify to
(52) |
with
A Padé approximation of the Fourier kernel, integration following Goldenfeld, 1992, p. 160f, and suppressing the zero arguments leads to
(53) |
with
After introducing the same relative parameters as in Section 10.3, we find
(54) |
The two-dimensional convolutions are given by
(55) |
The renormalized noise simplifies to Equation 37. The mean covariances are given by
(56) |
Remember that the result for the variances of the covariances is obtained by substituting the weights and the Fourier kernels by their squares and adjusting the noise term accordingly.
Equation (2) in the main text can be proven by inserting the result for
Using an exponential connectivity profile yields , a Gaussian connectivity profile yields .
Exponential connectivity profile:
Using the results from 11.2, we find
with , and .
Gaussian connectivity profile:
Using the results from 11.2, we find
12.3 Higher order approximation
Using a (1,2)-DLog-Padé method as in Section 11.3 yields
(57) |
which again contains the fourth moments of the connectivity kernels.
13 Validation of theory
In order to validate our results, we performed simulations in which an effective connectivity matrix of a two-dimensional network was drawn randomly, and covariances were calculated using the result from Pernice et al., 2011, Trousdale et al., 2012, and Lindner et al., 2005
The elements of the different components of the effective connectivity matrix, structured as in Equation (24), were drawn from a binomial distribution whose success probability depends on the distance between the neurons via Equation (36).
We compared the results to the predictions of our discrete theory, continuum theory, and the long-range limit. We did this for all cases presented above: one dimension with one population, one dimension with two populations, two dimensions with one population, and two dimensions with two populations. In the cases of two populations, we solely considered the special case of synaptic connections depending only on the type of the presynaptic neuron. The first three cases are not reported here. We simulated several sets of parameters, varying the number of neurons, the number of inputs, the decay constants, and the spectral bound; we only report the set using the parameters listed in Appendix 1—table 3, because the results do not differ qualitatively. Using
and choosing
we calculated the synaptic weights
The comparison of simulation and discrete theory is shown in Appendix 1—figure 7A: simulation and theory match almost perfectly. The continuum theory, which is shown in Figure 3D, E of the main text, matches as well as the discrete theory (not shown here). The slight shift in y-direction in Appendix 1—figure 7A is due to the fact that in the random realization of the network the spectral bound does not exactly match the desired value, but is slightly different for each realization and distributed around the chosen value. Note that the simulated networks were small compared to the decay constant of the connectivity profile, in order to keep simulation times reasonable. This is why the variances do not fall off linearly in the semi-log plot. The kink and the related spreading at large distances are a finite-size effect due to the periodic boundary conditions: because of these boundary conditions, the covariance between two neurons increases once the distance between them exceeds the maximal distance along an axis. This, together with the fact that the curve is the result of the discrete Fourier transform of Equation (52), implies a zero slope at the boundary. This holds for any direction in the two-dimensional plane, but the maximal distance between two neurons is longer for directions not aligned with any axis and depends on the precise direction, which explains the observed spreading.
In order to validate the long-range limit, we compared our discrete theory with the result from the Padé approximation at large distances (Appendix 1—figure 7B); we do not expect the Padé approximation to hold at small distances. We are mainly interested in the slope of the variance of covariances, because the slope determines how fast typical pairwise covariances decay with increasing inter-neuronal distance. The slope at large distances for the (0,2)-Padé approximation is smaller than the prediction of our theory, but the higher-order approximation matches our theory very well (Appendix 1—figure 7C). In the limit of the spectral bound approaching one, both Padé predictions yield similar results. The absolute value of the covariances in the Padé approximation can be obtained from a residue analysis. The (0,2)-Padé approximation yields absolute values with a small offset, analogous to the slope results; calculating the residues for the (1,2)-DLog-Padé approximation would lead to a better approximation. Note that for plotting the higher-order prediction in Appendix 1—figure 7B, we inserted Equation (57) into Equation (53) and Equation (55).
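A condensed version of this validation, reduced to a one-dimensional ring to keep the example fast (the two-dimensional torus is analogous but slower), could look as follows; the weight scaling is a crude assumption rather than the calibration used for the figures.

```python
# Condensed validation sketch: draw binomial multapse connectivity on a ring,
# compute covariances by linear response, and inspect the distance-resolved
# variance of cross-covariances. The weight scaling is a crude assumption.
import numpy as np

rng = np.random.default_rng(6)
N, K, d, R = 400, 40, 10.0, 0.8
x = np.arange(N)
dist = np.minimum(np.abs(x[:, None] - x[None, :]),
                  N - np.abs(x[:, None] - x[None, :]))
profile = np.exp(-dist / d)
profile /= profile.sum(axis=1, keepdims=True)

n_syn = rng.binomial(K, profile)         # binomial numbers of synapses
np.fill_diagonal(n_syn, 0)
W = -(R / np.sqrt(K)) * n_syn            # inhibitory; crude spectral scaling

A = np.linalg.inv(np.eye(N) - W)
C = A @ A.T                              # covariances for unit noise strength

mask = ~np.eye(N, dtype=bool)
for dd in (5, 20, 40, 80):               # variance decays with distance
    sel = mask & (dist == dd)
    print(dd, C[sel].var())
```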
Appendix 1—table 3. Parameters used to create theory figures. Decay constants are given in units of the lattice constant.

Figure 3B, C | Figure 3D, E | App 1—fig 7A | App 1—fig 7B | Parameter
---|---|---|---|---
61 | 201 | 61 | 1001 | Number of neurons in x-direction
61 | 201 | 61 | 1001 | Number of neurons in y-direction
4 | 4 | 4 | 4 | Ratio of excitatory to inhibitory neurons
100 | 100 | 100 | 100 | Number of excitatory inputs per neuron
50 | 50 | 50 | 50 | Number of inhibitory inputs per neuron
20 | 20 | 20 | 20 | Decay constant of excitatory connectivity profile
10 | 10 | 10 | 10 | Decay constant of inhibitory connectivity profile
1 | 1 | 1 | 1 | Squared noise amplitude
0.95 | 0.95 | 0.8 | 0.95 | Spectral bound
exponential | exponential | exponential | exponential | Connectivity kernel
14 Parameters of NEST simulation
Appendix 1—table 4. Parameters used for NEST simulation and subsequent analysis.
Symbol | Value | Description
---|---|---
Network parameters | |
 | 2000 | Number of neurons
 | 0.1 | Connection probability
 | | Time constant
 | | Standard deviation of external input
 | | Standard deviation of noise
 | | Spectral bound
 | 0.1 | Parameter controlling difference of the two network simulations
Simulation parameters | |
 | | Simulation step size
t_init | | Initialization time
t_sim | | Simulation time without initialization time
t_sample | | Sample resolution at which rates were recorded
Analysis parameters | |
 | 200 | Sample size
 | | Correlation time window
15 Sources of heterogeneity
Sparseness of connections is a large source of heterogeneity in cortical networks. It contributes strongly to the variance of effective connection weights that determines the spectral bound, the quantity that controls the stability of balanced networks (Sompolinsky et al., 1988; Dahmen et al., 2019): consider the following simple model for the effective connection weights, W = a x, where the a are independent Bernoulli numbers, which are 1 with probability p and 0 with probability 1 − p, and the x are independently distributed amplitudes. The a encode the sparseness of connections and the x encode the experimentally observed distributions of synaptic amplitudes and single-neuron heterogeneities that lead to different neuronal gains. Since a and x are independent, the variance of W is

Var(W) = p Var(x) + p(1 − p) E[x]².
For the low connection probabilities observed in cortex, assessing the different contributions to the variance thus amounts to comparing the mean and standard deviation of the amplitudes x. Even though synaptic amplitudes are broadly distributed in cortical networks, one typically finds that their mean and standard deviation are of the same magnitude (see e.g. Sayer et al., 1990, Tab 1; Feldmeyer et al., 1999, Tab 1; Song et al., 2005, Fig. 1; Lefort et al., 2009, Tab 2; Ikegaya et al., 2013, Fig. 1; Loewenstein et al., 2011, Fig. 2). Sparseness of connections (the second term on the right-hand side) is thus one of the dominant contributors to the variance of connections. For simplicity, the other sources, in particular the distribution of synaptic amplitudes, are left out in this study. They can, however, be straightforwardly added to the model and the theoretical formalism, because the theory only depends on the mean and variance of the connectivity.
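The stated variance decomposition can be verified numerically:

```python
# Numerical check of Var(a*x) = p*Var(x) + p*(1-p)*E[x]^2 for independent
# Bernoulli gating a and amplitudes x (lognormal chosen only for illustration).
import numpy as np

rng = np.random.default_rng(7)
p, n = 0.1, 1_000_000
a = rng.random(n) < p                           # Bernoulli connectivity mask
x = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # broad amplitude distribution
w = a * x

print(w.var(), p * x.var() + p * (1 - p) * x.mean() ** 2)  # agree closely
```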
Funding Statement
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Contributor Information
David Dahmen, Email: d.dahmen@fz-juelich.de.
Moritz Helias, Email: m.helias@fz-juelich.de.
Stephanie E Palmer, The University of Chicago, United States.
Timothy E Behrens, University of Oxford, United Kingdom.
Funding Information
This paper was supported by the following grants:
Helmholtz Association VH-NG-1028 to David Dahmen, Moritz Helias.
European Commission HBP (785907 945539) to David Dahmen, Moritz Layer, Lukas Deutz, Nicole Voges, Michael von Papen, Markus Diesmann, Sonja Grün, Moritz Helias.
Deutsche Forschungsgemeinschaft 368482240/GRK2416 to Moritz Layer, Markus Diesmann, Moritz Helias.
Agence Nationale de la Recherche GRASP to Thomas Brochier, Alexa Riehle.
Additional information
Competing interests
No competing interests declared.
Author contributions
Conceptualization, Formal analysis, Investigation, Methodology, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review and editing.
Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review and editing, Conceptualization.
Formal analysis, Methodology, Validation, Visualization, Writing – original draft.
Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft, Writing – review and editing.
Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review and editing.
Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft.
Data curation, Funding acquisition, Resources, Writing – original draft, Writing – review and editing.
Data curation, Funding acquisition, Resources, Writing – original draft, Writing – review and editing.
Conceptualization, Funding acquisition, Investigation, Resources, Supervision, Writing – original draft, Writing – review and editing.
Conceptualization, Funding acquisition, Investigation, Resources, Supervision, Writing – original draft, Writing – review and editing.
Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review and editing.
Ethics
All animal procedures were approved by the local ethical committee (C2EA 71; authorization A1/10/12) and conformed to the European and French government regulations.
Additional files
Data availability
All code and data required to reproduce the figures are available in a public zenodo repository at https://zenodo.org/record/5524777. Source data/code files are also attached as zip folders to the individual main figures of this submission.
References
- Abbott LF, Rajan K, Sompolinsky H. The Dynamic Brain: An Exploration of Neuronal Variability and Its Functional Significance. Oxford Scholarship Online; 2011. [DOI] [Google Scholar]
- Abeles M. Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press; 1991. [Google Scholar]
- Abramowitz M, Stegun IA. In: Applied Mathematics Series. Abramowitz M, editor. National Bureau of Standards; 1964. Handbook of Mathematical Functions; pp. 55–57. [Google Scholar]
- Aljadeff J, Stern M, Sharpee T. Transition to chaos in random networks with cell-type-specific connectivity. Physical Review Letters. 2015;114:088101. doi: 10.1103/PhysRevLett.114.088101. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aljadeff J, Renfrew D, Vegué M, Sharpee TO. Low-dimensional dynamics of structured random networks. Physical Review E. 2016;93:022302. doi: 10.1103/PhysRevE.93.022302. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Amit DJ, Brunel N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex. 1997;7:237–252. doi: 10.1093/cercor/7.3.237. [DOI] [PubMed] [Google Scholar]
- Amit DJ, Tsodyks MV. Quantitative study of attractor neural network retrieving at low spike rates: I. substrate—spikes, rates and neuronal gain. Network. 2009;2:259–273. doi: 10.1088/0954-898X_2_3_003. [DOI] [Google Scholar]
- Baker C, Ebsch C, Lampl I, Rosenbaum R. Correlated states in balanced neuronal networks. Physical Review. E. 2019;99:052414. doi: 10.1103/PhysRevE.99.052414. [DOI] [PubMed] [Google Scholar]
- Barthó P, Hirase H, Monconduit L, Zugaro M, Harris KD, Buzsáki G. Characterization of neocortical principal cells and interneurons by network interactions and extracellular features. Journal of Neurophysiology. 2004;92:600–608. doi: 10.1152/jn.01170.2003. [DOI] [PubMed] [Google Scholar]
- Basdevant JL. The Padé Approximation and its Physical Applications. Fortschritte Der Physik. 1972;20:283–331. doi: 10.1002/prop.19720200502. [DOI] [Google Scholar]
- Ben-Yishai R, Bar-Or RL, Sompolinsky H. Theory of orientation tuning in visual cortex. PNAS. 1995;92:3844–3848. doi: 10.1073/pnas.92.9.3844. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Blumenfeld B, Bibitchkov D, Tsodyks M. Neural network model of the primary visual cortex: from functional architecture to lateral connectivity and back. Journal of Computational Neuroscience. 2006;20:219–241. doi: 10.1007/s10827-006-6307-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brinkman BAW, Rieke F, Shea-Brown E, Buice MA. Predicting how and when hidden neurons skew measured synaptic interactions. PLOS Computational Biology. 2018;14:e1006490. doi: 10.1371/journal.pcbi.1006490. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brochier T, Zehl L, Hao Y, Duret M, Sprenger J, Denker M, Grün S, Riehle A. Massively parallel recordings in macaque motor cortex during an instructed delayed reach-to-grasp task. Scientific Data. 2018;5:180055. doi: 10.1038/sdata.2018.55. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience. 2000;8:183–208. doi: 10.1023/a:1008925309027. [DOI] [PubMed] [Google Scholar]
- Dąbrowska PA, Voges N, von Papen M, Ito J, Dahmen D, Riehle A, Brochier T, Grün S. On the Complexity of Resting State Spiking Activity in Monkey Motor Cortex. Cerebral Cortex Communications. 2020;2:tgab033. doi: 10.1093/texcom/tgab033. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dahmen D, Grün S, Diesmann M, Helias M. Second type of criticality in the brain uncovers rich multiple-neuron dynamics. PNAS. 2019;116:13051–13060. doi: 10.1073/pnas.1818972116.
- Dahmen D, Recanatesi S, Jia X, Ocker GK, Campagnola L, Jarsky T, Seeman S, Helias M, Shea-Brown E. Strong and Localized Coupling Controls Dimensionality of Neural Activity across Brain Areas. bioRxiv. 2021 doi: 10.1101/2020.11.02.365072.
- Darshan R, van Vreeswijk C, Hansel D. Strength of Correlations in Strongly Recurrent Neuronal Networks. Physical Review X. 2018;8:031072. doi: 10.1103/PhysRevX.8.031072.
- de la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A. Correlation between neural spike trains increases with firing rate. Nature. 2007;448:802–806. doi: 10.1038/nature06028.
- DeFelipe J, Conley M, Jones EG. Long-range focal collateralization of axons arising from corticocortical cells in monkey sensory-motor cortex. The Journal of Neuroscience. 1986;6:3749–3766. doi: 10.1523/JNEUROSCI.06-12-03749.1986.
- Dehghani N, Peyrache A, Telenczuk B, Le Van Quyen M, Halgren E, Cash SS, Hatsopoulos NG, Destexhe A. Dynamic Balance of Excitation and Inhibition in Human and Monkey Neocortex. Scientific Reports. 2016;6:23176. doi: 10.1038/srep23176.
- Diesmann M, Gewaltig MO, Aertsen A. Stable propagation of synchronous spiking in cortical neural networks. Nature. 1999;402:529–533. doi: 10.1038/990101.
- Elsayed GF, Lara AH, Kaufman MT, Churchland MM, Cunningham JP. Reorganization between preparatory and movement population responses in motor cortex. Nature Communications. 2016;7:13239. doi: 10.1038/ncomms13239.
- Feldmeyer D, Egger V, Lubke J, Sakmann B. Reliable synaptic connections between pairs of excitatory layer 4 neurones within a single “barrel” of developing rat somatosensory cortex. The Journal of Physiology. 1999;521(Pt 1):169–190. doi: 10.1111/j.1469-7793.1999.00169.x.
- Gallego JA, Perich MG, Miller LE, Solla SA. Neural Manifolds for the Control of Movement. Neuron. 2017;94:978–984. doi: 10.1016/j.neuron.2017.05.025.
- Gallego JA. Cortical population activity within a preserved neural manifold underlies multiple motor behaviors. Nature Communications. 2018;9:4233. doi: 10.1038/s41467-018-06560-z.
- Gallego JA, Perich MG, Chowdhury RH, Solla SA, Miller LE. Long-term stability of cortical population dynamics underlying consistent behavior. Nature Neuroscience. 2020;23:260–270. doi: 10.1038/s41593-019-0555-4.
- Gao P. A theory of multineuronal dimensionality, dynamics and measurement. bioRxiv. 2017 doi: 10.1101/214262.
- Gardiner CW. Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences, No. 13 in Springer Series in Synergetics. Springer; 1985.
- Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT. Interruption of motor cortical discharge subserving aimed arm movements. Experimental Brain Research. 1983;49:327–340. doi: 10.1007/BF00238775.
- Ginzburg I, Sompolinsky H. Theory of correlations in stochastic neural networks. Physical Review E. 1994;50:3171–3191. doi: 10.1103/PhysRevE.50.3171.
- Goldenfeld N. Lectures on Phase Transitions and the Renormalization Group. Perseus Books; 1992.
- Grytskyy D, Tetzlaff T, Diesmann M, Helias M. A unified view on weakly correlated recurrent networks. Frontiers in Computational Neuroscience. 2013;7:131. doi: 10.3389/fncom.2013.00131.
- Hahne J. An SDE waveform-relaxation method with application in distributed neural network simulations. PAMM. 2019;19:e201900373. doi: 10.1002/pamm.201900373.
- Hansel D, Sompolinsky H. Methods in Neuronal Modeling. MIT Press; 1998.
- Hatsopoulos N, Joshi J, O’Leary JG. Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. Journal of Neurophysiology. 2004;92:1165–1174. doi: 10.1152/jn.01245.2003.
- Helias M, Tetzlaff T, Diesmann M. Echoes in correlated neural systems. New Journal of Physics. 2013;15:023002. doi: 10.1088/1367-2630/15/2/023002.
- Helias M, Tetzlaff T, Diesmann M. The correlation structure of local neuronal networks intrinsically results from recurrent dynamics. PLOS Computational Biology. 2014;10:e1003428. doi: 10.1371/journal.pcbi.1003428.
- Helias M, Dahmen D. Statistical Field Theory for Neural Networks. Cham: Springer International Publishing; 2020.
- Hennequin G, Vogels TP, Gerstner W. Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron. 2014;82:1394–1406. doi: 10.1016/j.neuron.2014.04.045.
- Hertz J. Cross-correlations in high-conductance states of a model cortical network. Neural Computation. 2010;22:427–447. doi: 10.1162/neco.2009.06-08-806.
- Hu Y, Sompolinsky H. The Spectrum of Covariance Matrices of Randomly Connected Recurrent Neuronal Networks. bioRxiv. 2020 doi: 10.1101/2020.08.31.274936.
- Ikegaya Y, Sasaki T, Ishikawa D, Honma N, Tao K, Takahashi N, Minamisawa G, Ujita S, Matsuki N. Interpyramid spike transmission stabilizes the sparseness of recurrent network activity. Cerebral Cortex. 2013;23:293–304. doi: 10.1093/cercor/bhs006.
- Jordan J. NEST 2.18.0. Zenodo. 2019 doi: 10.5281/zenodo.2605422.
- Kaufman MT, Churchland MM, Santhanam G, Yu BM, Afshar A, Ryu SI, Shenoy KV. Roles of monkey premotor neuron classes in movement preparation and execution. Journal of Neurophysiology. 2010;104:799–810. doi: 10.1152/jn.00231.2009.
- Kaufman MT, Churchland MM, Shenoy KV. The roles of monkey M1 neuron classes in movement preparation and execution. Journal of Neurophysiology. 2013;110:817–825. doi: 10.1152/jn.00892.2011.
- Kobayashi R, Kurita S, Kurth A, Kitano K, Mizuseki K, Diesmann M, Richmond BJ, Shinomoto S. Reconstructing neuronal circuitry from parallel spike trains. Nature Communications. 2019;10:4468. doi: 10.1038/s41467-019-12225-2.
- Kriener B, Helias M, Rotter S, Diesmann M, Einevoll GT. How pattern formation in ring networks of excitatory and inhibitory spiking neurons depends on the input current regime. Frontiers in Computational Neuroscience. 2013;7:187. doi: 10.3389/fncom.2013.00187.
- Lefort S, Tomm C, Petersen CCH. The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex. Neuron. 2009;61:301–316. doi: 10.1016/j.neuron.2008.12.020.
- Lindner B, Doiron B, Longtin A. Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Physical Review E. 2005;72:061919. doi: 10.1103/PhysRevE.72.061919.
- Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience. 2012;15:1498–1505. doi: 10.1038/nn.3220.
- Loewenstein Y, Kuras A, Rumpel S. Multiplicative dynamics underlie the emergence of the log-normal distribution of spine sizes in the neocortex in vivo. The Journal of Neuroscience. 2011;31:9481–9488. doi: 10.1523/JNEUROSCI.6130-10.2011.
- Luczak A, Barthó P, Harris KD. Spontaneous events outline the realm of possible sensory responses in neocortical populations. Neuron. 2009;62:413–425. doi: 10.1016/j.neuron.2009.03.014.
- Mastrogiuseppe F, Ostojic S. Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron. 2018;99:609–623. doi: 10.1016/j.neuron.2018.07.003.
- Mazzucato L, Fontanini A, La Camera G. Stimuli Reduce the Dimensionality of Cortical Activity. Frontiers in Systems Neuroscience. 2016;10:11. doi: 10.3389/fnsys.2016.00011.
- Okun M, Lampl I. Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nature Neuroscience. 2008;11:535–537. doi: 10.1038/nn.2105.
- Olver FWJ, Lozier DW, Boisvert RF, Clark CW. NIST Handbook of Mathematical Functions. Cambridge University Press; 2010.
- Pelizzola A. Cluster variation method, Padé approximants, and critical behavior. Physical Review E. 1994;49:R2503–R2506. doi: 10.1103/PhysRevE.49.R2503.
- Perich MG, Arlt C, Soares S, Young ME, Mosher CP, Minxha J, Carter E, Rutishauser U, Rudebeck PH, Harvey CD, Rajan K. Inferring Brain-Wide Interactions Using Data-Constrained Recurrent Neural Network Models. bioRxiv. 2021 doi: 10.1101/2020.12.18.423348.
- Pernice V, Staude B, Cardanobile S, Rotter S. How structure determines correlations in neuronal networks. PLOS Computational Biology. 2011;7:e1002059. doi: 10.1371/journal.pcbi.1002059.
- Pernice V, Staude B, Cardanobile S, Rotter S. Recurrent interactions in spiking networks with arbitrary topology. Physical Review E. 2012;85:031916. doi: 10.1103/PhysRevE.85.031916.
- Peyrache A. Spatiotemporal dynamics of neocortical excitation and inhibition during human sleep. PNAS. 2012;109:1731–1736. doi: 10.1073/pnas.1109895109.
- Peyrache A, Destexhe A. Electrophysiological monitoring of inhibition in mammalian species, from rodents to humans. Neurobiology of Disease. 2019;130:104500. doi: 10.1016/j.nbd.2019.104500.
- Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cerebral Cortex. 2014;24:785–806. doi: 10.1093/cercor/bhs358.
- Recanatesi S, Ocker GK, Buice MA, Shea-Brown E. Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLOS Computational Biology. 2019;15:e1006446. doi: 10.1371/journal.pcbi.1006446.
- Reimann MW, Horlemann AL, Ramaswamy S, Muller EB, Markram H. Morphological Diversity Strongly Constrains Synaptic Connectivity and Plasticity. Cerebral Cortex. 2017;27:4570–4585. doi: 10.1093/cercor/bhx150.
- Reinhold K, Lien AD, Scanziani M. Distinct recurrent versus afferent dynamics in cortical visual processing. Nature Neuroscience. 2015;18:1789–1797. doi: 10.1038/nn.4153.
- Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD. The asynchronous state in cortical circuits. Science. 2010;327:587–590. doi: 10.1126/science.1179850.
- Riehle A, Wirtssohn S, Grün S, Brochier T. Mapping the spatio-temporal structure of motor cortical LFP and spiking activities during reach-to-grasp movements. Frontiers in Neural Circuits. 2013;7:48. doi: 10.3389/fncir.2013.00048.
- Riehle A, Brochier T, Nawrot M, Grün S. Behavioral Context Determines Network State and Variability Dynamics in Monkey Motor Cortex. Frontiers in Neural Circuits. 2018;12:52. doi: 10.3389/fncir.2018.00052.
- Rosenbaum R, Doiron B. Balanced Networks of Spiking Neurons with Spatially Dependent Recurrent Connections. Physical Review X. 2014;4:021039. doi: 10.1103/PhysRevX.4.021039.
- Rosenbaum R, Smith MA, Kohn A, Rubin JE, Doiron B. The spatial structure of correlated neuronal variability. Nature Neuroscience. 2017;20:107–114. doi: 10.1038/nn.4433.
- Roxin A, Brunel N, Hansel D. Role of delays in shaping spatiotemporal dynamics of neuronal activity in large networks. Physical Review Letters. 2005;94:238103. doi: 10.1103/PhysRevLett.94.238103.
- Salinas E, Sejnowski TJ. Correlated neuronal activity and the flow of neural information. Nature Reviews Neuroscience. 2001a;2:539–550. doi: 10.1038/35086012.
- Salinas E, Sejnowski TJ. Gain modulation in the central nervous system: where behavior, neurophysiology, and computation meet. The Neuroscientist. 2001b;7:430–440. doi: 10.1177/107385840100700512.
- Sayer RJ, Friedlander MJ, Redman SJ. The time course and amplitude of EPSPs evoked at synapses between pairs of CA3/CA1 neurons in the hippocampal slice. The Journal of Neuroscience. 1990;10:826–836. doi: 10.1523/JNEUROSCI.10-03-00826.1990.
- Schnepel P, Kumar A, Zohar M, Aertsen A, Boucsein C. Physiology and Impact of Horizontal Connections in Rat Neocortex. Cerebral Cortex. 2015;25:3818–3835. doi: 10.1093/cercor/bhu265.
- Semedo JD, Zandvakili A, Machens CK, Yu BM, Kohn A. Cortical Areas Interact through a Communication Subspace. Neuron. 2019;102:249–259. doi: 10.1016/j.neuron.2019.01.026.
- Senk J, Hagen E, van Albada SJ, Diesmann M. Reconciliation of Weak Pairwise Spike-Train Correlations and Highly Coherent Local Field Potentials across Space. arXiv. 2018 https://arxiv.org/abs/1805.10235
- Smith GB, Hein B, Whitney DE, Fitzpatrick D, Kaschube M. Distributed network interactions and their emergence in developing neocortex. Nature Neuroscience. 2018;21:1600–1608. doi: 10.1038/s41593-018-0247-5.
- Sompolinsky H, Crisanti A, Sommers HJ. Chaos in random neural networks. Physical Review Letters. 1988;61:259–262. doi: 10.1103/PhysRevLett.61.259.
- Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLOS Biology. 2005;3:e68. doi: 10.1371/journal.pbio.0030068.
- Spreizer S, Aertsen A, Kumar A. From space to time: Spatial inhomogeneities lead to the emergence of spatiotemporal sequences in spiking neuronal networks. PLOS Computational Biology. 2019;15:e1007432. doi: 10.1371/journal.pcbi.1007432.
- Stringer C, Pachitariu M, Steinmetz N, Carandini M, Harris KD. High-dimensional geometry of population responses in visual cortex. Nature. 2019;571:361–365. doi: 10.1038/s41586-019-1346-5.
- Sussillo D, Churchland MM, Kaufman MT, Shenoy KV. A neural network that finds a naturalistic solution for the production of muscle activity. Nature Neuroscience. 2015;18:1025–1033. doi: 10.1038/nn.4042.
- Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of neural-network activity by inhibitory feedback. PLOS Computational Biology. 2012;8:e1002596. doi: 10.1371/journal.pcbi.1002596.
- Torre E. Synchronous Spike Patterns in Macaque Motor Cortex during an Instructed-Delay Reach-to-Grasp Task. Journal of Neuroscience. 2016;36:8329–8340. doi: 10.1523/JNEUROSCI.4375-15.2016.
- Trousdale J, Hu Y, Shea-Brown E, Josić K. Impact of network structure and cellular response on spike time correlations. PLOS Computational Biology. 2012;8:e1002408. doi: 10.1371/journal.pcbi.1002408.
- Tuckwell HC. Introduction to Theoretical Neurobiology. Cambridge University Press; 2009.
- van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274:1724–1726. doi: 10.1126/science.274.5293.1724.
- van Vreeswijk C, Sompolinsky H. Chaotic balanced state in a model of cortical circuits. Neural Computation. 1998;10:1321–1371. doi: 10.1162/089976698300017214.
- Yu BM, Cunningham JP, Santhanam G, Ryu SI, Shenoy KV, Sahani M. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Journal of Neurophysiology. 2009;102:614–635. doi: 10.1152/jn.90941.2008.