eLife. 2022 Jan 20;11:e68422. doi: 10.7554/eLife.68422

Global organization of neuronal activity only requires unstructured local connectivity

David Dahmen 1,†, Moritz Layer 1,2, Lukas Deutz 1,3, Paulina Anna Dąbrowska 1,2, Nicole Voges 1,4, Michael von Papen 1, Thomas Brochier 4, Alexa Riehle 1,4, Markus Diesmann 1,5,6, Sonja Grün 1,7, Moritz Helias 1,5
Editors: Stephanie E Palmer 8, Timothy E Behrens 9
PMCID: PMC8776256  PMID: 35049496

Abstract

Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet, this parallel activity is often confined to relatively low-dimensional manifolds. This implies strong coordination also among neurons that are most likely not even connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: We find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multi-synaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.

Research organism: Rhesus macaque

Introduction

Complex brain functions require coordination between large numbers of neurons. Unraveling the mechanisms of neuronal coordination is therefore a core ingredient towards answering the long-standing question of how neuronal activity represents information. Population coding is one classical paradigm (Georgopoulos et al., 1983) in which entire populations of similarly tuned neurons behave coherently, thus leading to positive correlations among their members. The emergence and dynamical control of such population-averaged correlations has been studied intensely (Ginzburg and Sompolinsky, 1994; Renart et al., 2010; Helias et al., 2014; Rosenbaum and Doiron, 2014). More recently, evidence accumulated that neuronal activity often evolves within more complex low-dimensional manifolds, which imply more involved ways of coordinating neuronal activity (Gallego et al., 2017; Gallego, 2018; Gallego et al., 2020): A small number of population-wide activity patterns, the neural modes, are thought to explain most variability of neuronal activity. In this case, individual neurons do not necessarily follow a stereotypical activity pattern that is identical across all neurons contributing to a representation. Instead, the coordination among the members is determined by more complex relations. Simulations of recurrent network models indeed indicate that networks trained to perform a realistic task exhibit activity organized in low-dimensional manifolds (Sussillo et al., 2015). The dimensionality of such manifolds is determined by the structure of correlations (Abbott et al., 2011; Mazzucato et al., 2016) and tightly linked to the complexity of the task the network has to perform (Gao, 2017) as well as to the dimensionality of the stimulus (Stringer et al., 2019). Recent work has started to decipher how neural modes and the dimensionality of activity are shaped by features of network connectivity, such as heterogeneity of connections (Smith et al., 2018; Dahmen et al., 2019), block structure (Aljadeff et al., 2015; Aljadeff et al., 2016), and low-rank perturbations (Mastrogiuseppe and Ostojic, 2018) of connectivity matrices, as well as connectivity motifs (Recanatesi et al., 2019; Dahmen et al., 2021; Hu and Sompolinsky, 2020). Yet, these works neglected the spatial organization of network connectivity (Schnepel et al., 2015), which becomes increasingly important as current experimental techniques allow the simultaneous recording of ever more neurons. How distant neurons that are likely not connected can still be strongly coordinated to participate in the same neural mode remains an open question.

To answer this question, we combine analyses of parallel spiking data from macaque motor cortex with the analytical investigation of a spatially organized neuronal network model. We here quantify coordination by Pearson correlation coefficients and pairwise covariances, which measure how temporal departures of the neurons’ activities away from their mean firing rate are correlated. We show that, even with only unstructured and short-range connections, strong covariances across distances of several millimeters emerge naturally in balanced networks if their dynamical state is close to an instability within a ‘critical regime’. This critical regime arises from strong heterogeneity in local network connections that is abundant in brain networks. Intuitively, it arises because activity propagates over a large number of different indirect paths. Heterogeneity, here in the form of sparse random connectivity, is thus essential to provide a rich set of such paths. While mean covariances are readily accessible by mean-field techniques and have been shown to be small in balanced networks (Renart et al., 2010; Tetzlaff et al., 2012), explaining covariances on the level of individual pairs requires methods from statistical physics of disordered systems. With such a theory, here derived for spatially organized excitatory-inhibitory networks, we show that large individual covariances arise at all distances if the network is close to the critical point. These predictions are confirmed by recordings of macaque motor cortex activity. The long-range coordination found in this study is not merely determined by the anatomical connectivity, but depends substantially on the network state, which is characterized by the individual neurons’ mean firing rates. This allows the network to adjust the neuronal coordination pattern in a dynamic fashion, which we demonstrate through simulations and by comparing two behavioral epochs of a reach-to-grasp experiment.

Results

Macaque motor cortex shows long-range coordination patterns

We first analyze data from motor cortex of macaques during rest, recorded with 4×4 mm², 100-electrode Utah arrays with 400 µm inter-electrode distance (Figure 1A). The resting condition of motor cortex in monkeys is ideal to assess intrinsic coordination between neurons during ongoing activity. In particular, our analyses focus on true resting state data, devoid of movement-related transients in neuronal firing (see Materials and methods). Parallel single-unit spiking activity of 130 neurons per recording session, sorted into putative excitatory and inhibitory cells, shows strong spike-count correlations across the entire Utah array, well beyond the typical scale of the underlying short-range connectivity profiles (Figure 1B and D). Positive and negative correlations form patterns in space that are furthermore seemingly unrelated to the neuron types. All populations show a large dispersion of both positive and negative correlation values (Figure 1C).

Figure 1. Salt-and-pepper structure of covariances in motor cortex.

Figure 1.

(A) Sketch of 10 × 10 Utah electrode array recording in motor cortex of macaque monkey during rest. (B) Spikes are sorted into putative excitatory (blue triangles) and inhibitory (red circles) single units according to widths of spike waveforms (see Appendix 1 Section 2). Resulting spike trains are binned in 1 s bins to obtain spike counts. (C) Population-resolved distribution of pairwise spike-count Pearson correlation coefficients in session E2 (E-E: excitatory-excitatory, E-I: excitatory-inhibitory, I-I: inhibitory-inhibitory). (D) Pairwise spike-count correlation coefficients with respect to the neuron marked by black triangle in one recording (session E2, see Materials and methods). Grid indicates electrodes of a Utah array, triangles and circles correspond to putative excitatory and inhibitory neurons, respectively. Size as well as color of markers represent correlation. Neurons within the same square were recorded on the same electrode.

Figure 1—source data 1. Code and data.

The classical view on pairwise correlations in balanced networks (Ginzburg and Sompolinsky, 1994; Renart et al., 2010; Pernice et al., 2011; Pernice et al., 2012; Tetzlaff et al., 2012; Helias et al., 2014) focuses on averages across many pairs of cells: average correlations are small if the network dynamics is stabilized by an excess of inhibitory feedback, giving rise to dynamics known as the ‘balanced state’ (van Vreeswijk and Sompolinsky, 1996; Amit and Brunel, 1997; van Vreeswijk and Sompolinsky, 1998), in which negative feedback counteracts any coherent increase or decrease of the population-averaged activity, preventing the neurons from fluctuating in unison (Tetzlaff et al., 2012). Breaking this balance in different ways leads to large correlations (Rosenbaum and Doiron, 2014; Darshan et al., 2018; Baker et al., 2019). Can the observation of significant correlations between individual cells across large distances be reconciled with the balanced state? In the following, we provide a mechanistic explanation.

Multi-synaptic connections determine correlations

Connections mediate interactions between neurons. Many studies therefore directly relate connectivity and correlations (Pernice et al., 2011; Pernice et al., 2012; Trousdale et al., 2012; Brinkman et al., 2018; Kobayashi et al., 2019). From direct connectivity, one would expect positive correlations between excitatory neurons, negative correlations between inhibitory neurons, and a mix of negative and positive correlations only for excitatory-inhibitory pairs. Likewise, shared input from inside or outside the network imposes only positive correlations between any two neurons (Figure 2A). The observations that excitatory neurons may have negative correlations (Figure 1D), and that the distribution of correlations is broad, covering both positive and negative values (Figure 1C), are not compatible with this view. In fact, the sign of correlations appears to be independent of the neuron types. So how do negative correlations between excitatory neurons arise?

Figure 2. Correlations from direct and indirect connections.

Figure 2.

(A) Positive correlation (green neuron i) follows from direct excitatory connection (top) or shared input (middle). (B) Negative correlation (magenta) between two excitatory neurons cannot be explained by direct connections: Neuronal interactions are not only mediated via direct connections (n=1; sign uniquely determined by presynaptic neuron type) but also via indirect paths of different length n>1. The latter may have any sign (positive: green; negative: purple) due to intermediate neurons of arbitrary type (triangle: excitatory, circle: inhibitory).

The view that equates connectivity with correlation implicitly assumes that the effect of a single synapse on the receiving neuron is weak. This view, however, regards each synapse in isolation. Could there be states in the network where, collectively, many weak synapses cooperate, as perhaps required to form low-dimensional neuronal manifolds? In such a state, interactions may not only be mediated via direct connections but also via indirect paths through the network (Figure 2B). Such effective multi-synaptic connections may explain our observation that far apart neurons that are basically unconnected display considerable correlation of arbitrary sign.

Let us first illustrate the ideas here and corroborate them in the subsequent sections. Direct connections yield correlations of a predefined sign, leading to correlation distributions with multiple peaks, for example a positive peak for excitatory neurons that are connected and a peak at zero for neurons that are not connected. Multi-synaptic paths, however, involve both excitatory and inhibitory intermediate neurons, which contribute to the interaction with different signs (Figure 2B). Hence, a single indirect path can contribute to the total interaction with arbitrary sign (Pernice et al., 2011). If indirect paths dominate the interaction between two neurons, the sign of the resulting correlation becomes independent of their type. Given that the connecting paths in the network are different for any two neurons, the resulting correlations can fall in a wide range of both positive and negative values, giving rise to the broad distributions for all combinations of neuron types in Figure 1C. This provides a hypothesis for why there may be no qualitative difference between the distributions of correlations for excitatory and inhibitory neurons. In fact, their widths are similar and their means are close to zero (see Materials and methods for exact values); the latter being the hallmark of the negative feedback that characterizes the balanced state. The subsequent model-based analysis will substantiate this idea and show that it also holds for networks with spatially organized heterogeneous connectivity.
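This sign mixing can be made concrete in the linear-response picture used throughout this study, where the total interaction between two neurons sums contributions from paths of all lengths, (1 − W)⁻¹ = 1 + W + W² + …. The following minimal sketch (a toy network with illustrative parameters, not the model analyzed below) shows that although direct excitatory-excitatory couplings are never negative, the summed interactions between excitatory pairs take both signs once indirect paths contribute:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 80% excitatory neurons (positive outgoing weights),
# 20% inhibitory (negative outgoing weights), sparse random connections.
N, p, g = 500, 0.1, 4.0
exc = np.arange(N) < int(0.8 * N)
W = (rng.random((N, N)) < p).astype(float)
W[:, ~exc] *= -g                    # columns of inhibitory neurons are negative
np.fill_diagonal(W, 0.0)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # rescale so the path series converges

# Interactions via paths of length n are carried by the matrix power of W;
# summing over all path lengths gives the total linear interaction (1 - W)^{-1}.
total = np.linalg.inv(np.eye(N) - W)

# Direct E-E couplings are non-negative by construction, yet the summed
# interaction between excitatory pairs takes either sign.
ee = total[np.ix_(exc, exc)]
print("fraction of negative E-E interactions:", (ee < 0).mean())
```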

Taking this hypothesis one step further, an important consequence of the dominance of multi-synaptic connections could be that correlations are not restricted to the spatial range of direct connectivity. Through interactions via indirect paths, the reach of a single neuron could effectively be increased. But the details of the spatial profile of the correlations could in principle be highly complex, as it depends on the interplay of two antagonistic effects: On the one hand, signal propagation becomes weaker with distance, as the signal has to pass several synaptic connections. Along these paths, mean firing rates of neurons are typically diverse, and so are their signal transmission properties (de la Rocha et al., 2007). On the other hand, the number of contributing indirect paths between any pair of neurons proliferates with their distance. With single neurons typically projecting to thousands of other neurons in cortex, this leads to involved combinatorics; intuition here ceases to provide a sensible hypothesis on the effective spatial profile and range of coordination between neurons. It is also unclear on which parameters these coordination patterns depend. The model-driven and analytical approach of the next section will provide such a hypothesis.

Networks close to instability show shallow exponential decay of covariances

We first note that the large magnitude and dispersion of individual correlations in the data and their spatial structure primarily stem from features in the underlying covariances between neuron pairs (Appendix 1—figure 1). Given the close relationship between correlations and covariances (Appendix 1—figure 1D and E), in the following we analyze covariances, as these are less dependent on single-neuron properties and thus analytically simpler to treat. To gain an understanding of the spatial features of intrinsically generated covariances in balanced critical networks, we investigate a network of excitatory and inhibitory neurons on a two-dimensional sheet, where each neuron receives external Gaussian white noise input (Figure 3A). We investigate the covariance statistics in this model with the help of linear-response theory, which has been shown to approximate spiking neuron models well (Pernice et al., 2012; Trousdale et al., 2012; Tetzlaff et al., 2012; Helias et al., 2013; Grytskyy et al., 2013; Dahmen et al., 2019). To allow for multapses, the connections between two neurons are drawn from a binomial distribution, and the connection probability decays with inter-neuronal distance on a characteristic length scale d (for more details see Materials and methods). Previous studies have used linear-response theory in combination with methods from statistical physics and field theory to gain analytic insights into both mean covariances (Ginzburg and Sompolinsky, 1994; Lindner et al., 2005; Pernice et al., 2011; Tetzlaff et al., 2012) and the width of the distribution of covariances (Dahmen et al., 2019). Field-theoretic approaches, however, were so far restricted to purely random networks devoid of any network structure and thus not suitable to study spatial features of covariances. To analytically quantify the relation between the spatial ranges of covariances and connections, we therefore here develop a theory for spatially organized random networks with multiple populations. The randomness in our model is based on the sparseness of connections, which is one of the main sources of heterogeneity in cortical networks in that it contributes strongly to the variance of connections (see Appendix 1 Section 15).
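The following sketch illustrates the type of computation the theory describes, reduced to a one-dimensional ring instead of the two-dimensional sheet for brevity; connectivity parameters, weights, and the unit-strength noise are illustrative assumptions rather than the values of the analyzed model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Neurons on a ring of length L; connection probability decays with
# distance on the length scale d (the paper's model lives on a 2D sheet).
N, L, d, p0, R = 1000, 10.0, 0.3, 0.05, 0.95
pos = np.linspace(0.0, L, N, endpoint=False)
dist = np.abs(pos[:, None] - pos[None, :])
dist = np.minimum(dist, L - dist)              # periodic boundaries

exc = rng.permutation(N) < int(0.8 * N)        # 80% excitatory neurons
W = (rng.random((N, N)) < p0 * np.exp(-dist / d)).astype(float)
W[:, ~exc] *= -5.0                             # inhibition-dominated weights
np.fill_diagonal(W, 0.0)
W *= R / np.linalg.eigvals(W).real.max()       # set the spectral bound to R

# Linear-response covariances for unit white-noise input:
# C = (1 - W)^{-1} (1 - W)^{-T}.
B = np.linalg.inv(np.eye(N) - W)
C = B @ B.T

# Variance of cross-covariances, resolved by distance: the log-variance
# falls off roughly linearly, i.e. exponentially on a scale >> d for R near 1.
off = ~np.eye(N, dtype=bool)
bins = np.linspace(0.0, L / 2, 21)
idx = np.digitize(dist[off], bins)
log_var = [np.log(np.var(C[off][idx == k])) for k in range(1, len(bins))]
print(np.round(log_var, 2))
```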

Figure 3. Spatially organized E-I network model.

Figure 3.

(A) Network model: space is divided into cells with four excitatory (triangles) and one inhibitory (circle) neuron each. Distance-dependent connection probabilities (shaded areas) are defined with respect to cell locations. (B) Eigenvalues λ of the effective connectivity matrix for a network in the dynamically balanced critical state. Each dot shows the real part Re(λ) and imaginary part Im(λ) of one complex eigenvalue. The spectral bound (dashed vertical line) denotes the right-most edge of the eigenvalue spectrum. (C) Simulation: covariances of excitatory neurons over distance x between cells (blue dots: individual pairs; cyan: mean; orange: standard deviation; sample of 150 covariances at each of 200 chosen distances). (D) Theory: variance of the covariance distribution as a function of distance x for different spectral bounds of the effective connectivity matrix. Inset: the effective decay constant of variances diverges as the spectral bound approaches one. (E) For large spectral bounds, the variances of E-E, E-I, and I-I covariances decay on a similar length scale. Spectral bound R=0.95. For other parameters, see Appendix 1—table 3.

Figure 3—source data 1. Code and data.

A distance-resolved histogram of the covariances in the spatially organized E-I network shows that the mean covariance is close to zero but the width or variance of the covariance distribution stays large, even for large distances (Figure 3C). Analytically, we derive that, despite the complexity of the various indirect interactions, both the mean and the variance of covariances follow simple exponential laws in the long-distance limit (see Appendix 1 Section 4 - Section 12). These laws are universal in that they do not depend on details of the spatial profile of connections. Our theory shows that the associated length scales are strikingly different for means and variances of covariances. They each depend on the reach of direct connections and on specific eigenvalues of the effective connectivity matrix. These eigenvalues summarize various aspects of network connectivity and signal transmission into a single number: Each eigenvalue belongs to a ‘mode’, a combination of neurons that act collaboratively, rather than independently, coordinating neuronal activity within a one-dimensional subspace. To start with, there are as many such subspaces as there are neurons. But if the spectral bound in Figure 3B is close to one, only a relatively small fraction of them, namely those close to the spectral bound, dominates the dynamics; the dynamics is then effectively low-dimensional. Additionally, each eigenvalue quantifies how fast a mode decays when transmitted through the network. The eigenvalues of the dominating modes are close to one, which implies a long lifetime. The corresponding fluctuations thus still contribute significantly to the overall signal, even after passing through many synaptic connections. Therefore, indirect multi-synaptic connections contribute significantly to covariances if the spectral bound is close to one, and in that case we expect to see long-range covariances.

To quantify this idea, we find for the mean covariance c̄ that the dominant behavior is an exponential decay c̄ ∝ exp(−x/d̄) on a length scale d̄. This length scale is determined by a particular eigenvalue, the population eigenvalue, corresponding to the mode in which all neurons are excited simultaneously. Its position solely depends on the ratio between excitation and inhibition in the network and becomes more negative in more strongly inhibition-dominated networks (Figure 3B). We show in Appendix 1 Section 9.4 that this leads to a steep decay of mean covariances with distance. The variance of covariances, however, predominantly decays exponentially on a length scale d_eff that is determined by the spectral bound R, the largest real part among all eigenvalues (Figure 3B and D). In inhibition-dominated networks, R is determined by the heterogeneity of connections. For R ≲ 1 we obtain the effective length scale

$$\frac{d_\mathrm{eff}}{d} \simeq \sqrt{\frac{R^2}{1-R^2} + \mathrm{const.}} \gg 1. \tag{1}$$

What this means is that precisely at the point where R is close to one, when neural activity occupies a low-dimensional manifold, the length scale d_eff on which covariances decay exceeds the reach of direct connections by a large factor (Figure 3D). As the network approaches instability, which corresponds to the spectral bound R going to one, the effective decay constant diverges (Figure 3D, inset) and so does the range of covariances.
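For a feeling of the magnitudes, the leading term of Equation (1) can be evaluated directly, dropping the additive constant of order one (purely for illustration):

```python
import numpy as np

# Leading behavior of Eq. (1): d_eff / d ~ sqrt(R^2 / (1 - R^2)).
for R in (0.5, 0.8, 0.95, 0.99, 0.999):
    print(f"R = {R}: d_eff/d ~ {np.sqrt(R**2 / (1 - R**2)):.1f}")
```

With an anatomical length scale d of a few hundred micrometers, spectral bounds around 0.95 and beyond thus push the decay constant of covariances into the millimeter range.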

Our population-resolved theoretical analysis furthermore shows that the larger the spectral bound, the more similar the decay constants of the different populations, with only marginal differences for R ≈ 1 (Figure 3E). This holds strictly if connection weights only depend on the type of the presynaptic neuron but not on the type of the postsynaptic neuron. Moreover, we find a relation between the squared effective decay constants and the squared anatomical decay constants of the form

$$d_{\mathrm{eff},E}^{2} - d_{\mathrm{eff},I}^{2} = \mathrm{const.}\left(d_{E}^{2} - d_{I}^{2}\right). \tag{2}$$

This relation is independent of the eigenvalues of the effective connectivity matrix, as the constant of order O(1) only depends on the choice of the connectivity profile. For R ≈ 1, this means that even though the absolute values of both effective length scales on the left-hand side are large, their relative difference is small because it equals the small difference of the anatomical length scales on the right-hand side.

Pairwise covariances in motor cortex decay on a millimeter scale

To check if these predictions are confirmed by the data from macaque motor cortex, we first observe that, indeed, covariances in the resting state show a large dispersion over almost all distances on the Utah array (Figure 4). Moreover, the variance of covariances agrees well with the predicted exponential law: Performing an exponential fit reveals length constants above 1 mm. These large length constants have to be compared to the spatial reach of direct connections, which is about an order of magnitude shorter, in the range of 100-400 μm (Schnepel et al., 2015), so below the 400 μm inter-electrode distance of the Utah array. The shallow decay of the variance of covariances is, next to the broad distribution of covariances, a second indication that the network is in the dynamically balanced critical regime, in line with the prediction by Equation (1).

Figure 4. Long-range covariances in macaque motor cortex.

Figure 4.

Variance of covariances as a function of distance. (A) Population-specific exponential fits (lines) to variances of covariances (dots) in session E2, with fitted decay constants indicated in the legend (I-I: putative inhibitory neuron pairs, I-E: inhibitory-excitatory, E-E: excitatory pairs). Dots show the empirical estimate of the variance of the covariance distribution for each distance. Size of the dots represents relative count of pairs per distance and was used as weighting factor for the fits to compensate for uncertainty at large distances, where variance estimates are based on fewer samples. Mean squared error 2.918. (B) Population-specific exponential fits (lines) analogous to (A), with slopes constrained to be identical. This procedure yields a single fitted decay constant of 1.029 mm. Mean squared error 2.934. (C) Table listing decay constants fitted as in (B) for all recording sessions and the ratios between mean squared errors of the fits obtained in procedures B and A.

Figure 4—source data 1. Code and data.

The population-resolved fits to the data show a larger length constant for excitatory covariances than for inhibitory ones (Figure 4A). This is qualitatively in line with the prediction of Equation (2), given the tendency of excitatory connections to reach farther than inhibitory ones, as derived from morphological constraints (Reimann et al., 2017, Fig. S2). In the dynamically balanced critical regime, however, the predicted difference in slope for all three fits is practically negligible. Therefore, we performed a second fit in which the slope of the three exponentials is constrained to be identical (Figure 4B). The error of this fit is only marginally larger than that of the fit with individual slopes (Figure 4C). This shows that differences in slopes are hardly detectable given the empirical evidence, thus confirming the predictions of the theory given by Equation (1) and Equation (2).

Firing rates alter connectivity-dependent covariance patterns

Since covariances measure the coordination of temporal fluctuations around the individual neurons’ mean firing rates, they are determined by how strongly a neuron transmits such fluctuations from input to output (Abeles, 1991). To leading order this is explained by linear-response theory (Ginzburg and Sompolinsky, 1994; Lindner et al., 2005; Pernice et al., 2011; Tetzlaff et al., 2012): How strongly a neuron reacts to a small change in its input depends on its dynamical state, foremost the mean and variance of its total input, called ‘working point’ in the following. If a neuron receives almost no input, a small perturbation in the input will not be able to make the neuron fire. If the neuron receives a large input, a small perturbation will not change the firing rate either, as the neuron is already saturated. Only in the intermediate regime is the neuron susceptible to small deviations of the input. Mathematically, this behavior is described by the gain of the neuron, which is the derivative of the input-output relation (Abeles, 1991). Due to the non-linearity of the input-output relation, the gain vanishes for very small and very large inputs and is non-zero in the intermediate regime. How strongly a perturbation in the input to one neuron affects one of the subsequent neurons therefore depends not only on the synaptic weight J but also on the gain S, and thereby on the working point. This relation is captured by the effective connectivity W=SJ. What is the consequence of the dynamical interaction among neurons depending on the working point? Can it be used to reshape the low-dimensional manifold, the collective coordination between neurons?
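For the threshold-linear units used in the simulations below, the gain is simply the step function of the mean input, so the effective connectivity can be written in a few lines. A minimal sketch (illustrative parameters; for brevity, the gain is evaluated at the external mean input only, neglecting the recurrent contribution to the working point):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # fixed synaptic weights

def relu_gain(mean_input):
    # Gain of a threshold-linear (ReLU) unit: derivative of max(0, x),
    # i.e. 0 for silent neurons and 1 for active ones.
    return (mean_input > 0.0).astype(float)

mu1 = rng.normal(0.2, 0.5, N)          # working point 1
mu2 = mu1 + rng.normal(0.0, 0.1, N)    # weakly perturbed working point 2

# Effective connectivity W = S J: row i is scaled by the gain of neuron i.
W1 = relu_gain(mu1)[:, None] * J
W2 = relu_gain(mu2)[:, None] * J
print("neurons switching between active and silent:",
      int((relu_gain(mu1) != relu_gain(mu2)).sum()))
```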

The first part of this study finds that long-range coordination can be achieved in a network with short-range random connections if effective connections are sufficiently strong. Alteration of the working point, for example by a different external input level, can affect the covariance structure: The pattern of coordination between individual neurons can change, even though the anatomical connectivity remains the same. In this way, routing of information through the network can be adapted dynamically on a mesoscopic scale. This is a crucial difference of such coordination as opposed to coordination imprinted by complex but static connection patterns.

Here, we first illustrate this concept by simulations of a network of 2000 sparsely connected threshold-linear (ReLU) rate neuron models that receive Gaussian white noise inputs centered around neuron-specific non-zero mean values (see Materials and methods and Appendix 1 Section 14 for more details). The ReLU activation function thereby acts as a simple model for the vanishing gain for neurons with too low input levels. Note that in cortical-like scenarios with low firing rates, neuronal working points are far away from the high-input saturation discussed above, which is therefore neglected by the choice of the ReLU activation function. For independent and stationary external inputs covariances between neurons are solely generated inside the network via the sparse and random recurrent connectivity. External inputs only have an indirect impact on the covariance structure by setting the working point of the neurons.

We simulate two networks with identical structural connectivity and identical external input fluctuations, but small differences in mean external inputs between corresponding neurons in the two simulations (Figure 5A). These small differences in mean external inputs create different gains and firing rates and thereby differences in effective connectivity and covariances. Since mean external inputs are drawn from the same distribution in both simulations (Figure 5B), the overall distributions of firing rates and covariances across all neurons are very similar (Figure 5E1, F1). But individual neurons’ firing rates do differ (Figure 5E2). For the simple ReLU activation used here, we in particular observe neurons that switch between non-zero and zero firing rate between the two simulations. This resulting change of working points substantially affects the covariance patterns (Figure 5F2): Differences in firing rates and covariances between the two simulations are significantly larger than the differences across two different epochs of the same simulation (Figure 5C). The larger the spectral bound, the more sensitive are the intrinsically generated covariances to the changes in firing rates (Figure 5D). Thus, a small offset of individual firing rates is an effective parameter to control network-wide coordination among neurons. As the input to the local network can be changed momentarily, we predict that in the dynamically balanced critical regime coordination patterns should be highly dynamic.
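The logic of this comparison can be condensed by computing, for two working points, the covariances implied by linear-response theory and correlating them entry-wise, in the spirit of Figure 5C and F. This self-contained sketch again uses illustrative parameters and a ReLU-type gain:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # fixed synaptic weights

def effective_connectivity(mu, J):
    # ReLU gain: silent neurons (mean input <= 0) transmit nothing.
    return (mu > 0.0).astype(float)[:, None] * J

def covariances(W):
    # Linear-response covariances for unit-strength independent noise:
    # C = (1 - W)^{-1} (1 - W)^{-T}.
    B = np.linalg.inv(np.eye(W.shape[0]) - W)
    return B @ B.T

mu1 = rng.normal(0.2, 0.5, N)
mu2 = mu1 + rng.normal(0.0, 0.1, N)   # small shift of mean external inputs

C1 = covariances(effective_connectivity(mu1, J))
C2 = covariances(effective_connectivity(mu2, J))

off = ~np.eye(N, dtype=bool)
print("correlation of mean inputs:    ", np.corrcoef(mu1, mu2)[0, 1])
print("correlation of covariance maps:", np.corrcoef(C1[off], C2[off])[0, 1])
```

Typically the covariance patterns decorrelate more strongly than the mean inputs that cause the change, and increasingly so the closer the effective connectivity is to instability (compare Figure 5D).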

Figure 5. Changes in effective connectivity modify coordination patterns.

Figure 5.

(A) Visualization of effective connectivity: A sparse random network with given structural connectivity (left network sketch) is simulated with two different input levels for each neuron (depicted by insets), resulting in different firing rates (grayscale in right network sketches) and therefore different effective connectivities (thickness of connections) in the two simulations. Parameters can be found in Appendix 1—table 4. (B1) Histogram of input currents across neurons for the two simulations (N1 and N2). (B2) Scatter plot of inputs to a subset of 1500 corresponding neurons in the first and the second simulation (Pearson correlation coefficient ρ=0.90). (C) Correlation coefficients of rates and of covariances between the two simulations (b, black) and within two epochs of the same simulation (w, gray). (D) Correlation coefficient of rates (gray) and covariances (black) between the two simulations as a function of the spectral bound R. (E1) Distribution of rates in the two simulations (excluding silent neurons with |rate| < 10⁻³). (E2) Scatter plot of rates in the first compared to the second simulation (Pearson correlation coefficient ρ=0.81). (F1) Distribution of covariances in the two simulations (excluding silent neurons). (F2) Scatter plot of a sample of 5000 covariances in the first compared to the second simulation (Pearson correlation coefficient ρ=0.40). Here, silent neurons are included (accumulation of markers on the axes). Other parameters: number of neurons N=2000, connection probability p=0.1, spectral bound for panels B, C, E, F is R=0.8.

Figure 5—source data 1. Code and data.

Coordination patterns in motor cortex depend on behavioral context

In order to test the theoretical prediction in experimental data, we analyze parallel spiking activity from macaque motor cortex, recorded during a reach-to-grasp experiment (Riehle et al., 2013; Brochier et al., 2018). In contrast to the resting state, where the animal was in an idling state, here the animal is involved in a complex task with periods of different cognitive and behavioral conditions (Figure 6A). We compare two epochs in which the animal is requested to wait and is sitting still but which differ in cognitive conditions. The first epoch is a starting period (S), where the monkey has self-initiated the behavioral trial and is attentive because it is expecting a cue. The second epoch is a preparatory period (P), where the animal has just received partial information about the upcoming trial and is waiting for the missing information and the GO signal to initiate the movement.

Figure 6. Behavioral condition reshapes mesoscopic neuronal coordination.

Figure 6.

(A) Trial structure of the reach-to-grasp experiment (Brochier et al., 2018). Blue segments above the time axis indicate data pieces at trial start (dark blue: S (S1 + S2)) and during the preparatory period (light blue: P (P1 + P2)). (B) Salt-and-pepper structure of covariance during two different epochs (S1 and P1) of one recording session of monkey N (151 trials, 106 single units, Figure 1 for recording setup). For some neurons, the covariance completely reverses, while in the others it does not change. Inhibitory reference neuron indicated by black circle. (C1) Distributions of firing rates during S1 and P1. (C2) Scatter plot comparing firing rates in S1 and P1 (Pearson correlation coefficient ρ=0.69). (D1/D2) Same as panels C1/C2, but for covariances (Pearson correlation coefficient ρ=0.40). (E) Correlation coefficient of firing rates across neurons in different epochs of a trial for eight recorded sessions. Correlations between sub-periods of the same epoch (S1-S2, P1-P2; within-epoch, gray) and between sub-periods of different epochs (Sx-Py; between-epochs, black). Data shown in panels B-D is from session 8. Box plots to the right of the black dashed line show distributions obtained after pooling across all analyzed recording sessions per monkey. The line in the center of each box represents the median, the box area represents the interquartile range, and the whiskers indicate the minimum and maximum of the distribution (outliers excluded). Those distributions differ significantly (Student t-test, two-sided, p < 0.001). (F) Correlation coefficient of covariances, analogous to panel E. The distributions of values pooled across sessions also differ significantly (Student t-test, two-sided, p < 0.001). For details of the statistical tests, see Materials and methods. Details on the number of trials and units in each recording session are provided in Appendix 1—table 1.

Figure 6—source data 1. Code and data.

Within each epoch, S or P, the neuronal firing rates are mostly stationary, likely due to the absence of arm movements, which create relatively large transient activities in later epochs of the task that are not analyzed here (see Appendix 1 Section 3). The overall distributions of the firing rates are comparable for epochs S and P, but the firing rates are distributed differently across the individual neurons: Figure 6C shows one example session of monkey N, where the changes in firing rates between the two epochs are visible in the spread of markers around the diagonal line in panel C2. To assess the extent of these changes, we split each epoch, S and P, into two disjoint sub-periods, S1/S2 and P1/P2 (Figure 6A). We compute the correlation coefficient between the firing rate vectors of two sub-periods of different epochs (‘between’ markers in Figure 6E) and compare it to the correlation coefficient between the firing rate vectors of two sub-periods of the same epoch (‘within’ markers): Firing rate vectors in S1 are almost perfectly correlated with firing rate vectors in S2 (ρ ≈ 1 for all of the five/eight different recording sessions from different recording days for monkey E/N, similarly for P1 and P2), confirming the stationarity investigated in Appendix 1 Section 3. Firing rate vectors in S1 or S2, however, show significantly lower correlation to firing rate vectors in P1 and P2, confirming a significant change in network state between epochs S and P (Figure 6E).

The mechanistic model in the previous section shows a qualitatively similar scenario (Figure 5C and E). By construction it produces different firing rate patterns in the two simulations. While the model is simplistic and in particular not adapted to quantitatively reproduce the experimentally observed activity statistics, its simulations and our underlying theory make a general prediction: Differences in firing rates impact the effective connectivity between neurons and thereby evoke even larger differences in their coordination if the network is operating in the dynamically balanced critical regime (Figure 5D). To check this prediction, we repeat the correlation analysis between the two epochs, which we described above for the firing rates, but this time for the covariance patterns. Despite similar overall distributions of covariances in S and P (Figure 6D1), covariances between individual neuron pairs are clearly different between S and P: Figure 6B shows the covariance pattern for one representative reference neuron in one example recording session of monkey N. In both epochs, this covariance pattern has a salt-and-pepper structure as for the resting state data in Figure 1D. Yet, neurons change their individual coordination: a large number of neuron pairs even changes from positive covariance values to negative ones and vice versa. These neurons fire cooperatively in one epoch of the task while they show antagonistic firing in the other epoch. The covariances of all neuron pairs of that particular recording session are shown in Figure 6D2. Markers in the upper left and lower right quadrant show neuron pairs that switch the sign of their coordination (45 % of all neuron pairs). The extent of covariance changes between epochs is again quantified by correlation coefficients between the covariance patterns of two sub-periods (Figure 6F). As for the firing rates, we find rather large correlations between covariance patterns in S1 and S2 as well as between covariance patterns in P1 and P2. Note, however, that correlation coefficients are around 0.8 rather than 1, presumably since covariance estimates from 200 ms periods are noisier than firing rate estimates. The covariance patterns in S1 or S2 are, however, significantly more distinct from covariance patterns in P1 and P2, with correlation coefficients around 0.5 (Figure 6F). This more pronounced change of covariances compared to firing rates is predicted by a network whose effective connectivity has a large spectral bound, in the dynamically balanced critical state. In particular, the theory provides a mechanistic explanation for the different coordination patterns between neurons on the mesoscopic scale (range of a Utah array), which are observed in the two states S and P (Figure 6B). The coordination between neurons is thus considerably reshaped by the behavioral condition.

Discussion

In this study, we investigate coordination patterns of many neurons across mesoscopic distances in macaque motor cortex. We show that these patterns have a salt-and-pepper structure, which can be explained by a network model with a spatially dependent random connectivity operating in a dynamically balanced critical state. In this state, cross-covariances are shaped by a large number of parallel, multi-synaptic pathways, leading to interactions reaching far beyond the range of direct connections. Strikingly, this coordination on the millimeter scale is only visible if covariances are resolved on the level of individual neurons; the population mean of covariances quickly decays with distance and is overall very small. In contrast, the variance of covariances is large and predominantly decreases exponentially on length scales of up to several millimeters, even though direct connections typically only reach a few hundred micrometers.

Since the observed coordination patterns are determined by the effective connectivity of the network, they are dynamically controllable by the network state; for example, due to modulations of neuronal firing rates. Parallel recordings in macaque motor cortex during resting state and in different epochs of a reach-to-grasp task confirm this prediction. Simulations indeed exhibit a high sensitivity of coordination patterns to weak modulations of the individual neurons’ firing rates, providing a plausible mechanism for these dynamic changes.

Models of balanced networks have been investigated before (van Vreeswijk and Sompolinsky, 1996; Brunel, 2000; Renart et al., 2010; Tetzlaff et al., 2012) and experimental evidence for cortical networks operating in the balanced state is overwhelming (Okun and Lampl, 2008; Reinhold et al., 2015; Dehghani et al., 2016). Excess of inhibition in such networks yields stable and balanced population-averaged activities as well as low average covariances (Tetzlaff et al., 2012). Recently, the notion of balance has been combined with criticality in the dynamically balanced critical state that results from large heterogeneity in the network connectivity (Dahmen et al., 2019). Here, we focus on another ubiquitous property of cortical networks, their spatial organization, and study the interplay between balance, criticality, and spatial connectivity in networks of excitatory and inhibitory neurons. We show that in such networks, heterogeneity generates disperse covariance structures between individual neurons on large length-scales with a salt-and-pepper structure.

Spatially organized balanced network models have been investigated before in the limit of infinite network size, as well as under strong and potentially correlated external drive, as is the case, for example, in primary sensory areas of the brain (Rosenbaum et al., 2017; Baker et al., 2019). In this scenario, intrinsically generated contributions to covariances are much smaller than external ones. Population-averaged covariances then fulfill a linear equation, called the ‘balance condition’ (van Vreeswijk and Sompolinsky, 1996; Hertz, 2010; Renart et al., 2010; Rosenbaum and Doiron, 2014), that predicts a non-monotonic change of population-averaged covariances with distance (Rosenbaum et al., 2017). In contrast, we here consider covariances on the level of individual cells in finite-size networks receiving only weak inputs. While we cannot strictly rule out that the observed covariance patterns in motor cortex are a result of very specific external inputs to the recorded local network, we believe that the scenario of weak external drive is more suitable for non-sensory brain areas, such as, for example, the motor cortex in the resting state conditions studied here. Under such conditions, covariances have been shown to be predominantly generated locally rather than from external inputs: Helias et al., 2014 investigated intrinsic and extrinsic sources of covariances in ongoing activity of balanced networks and found that for realistic sizes of correlated external populations the major contribution to covariances is generated from local network interactions (Figure 7a in Helias et al., 2014). Dahmen et al., 2019 investigated the extreme case, where the correlated external population is of the same size as the local population (Fig. S6 in Dahmen et al., 2019). Despite sizable external input correlations projected onto the local circuit via potentially strong afferent connections, the dependence of the statistics of covariances on the spectral bound of the local recurrent connectivity is predicted well by the theory that neglects correlated external inputs (see supplement section 3 in Dahmen et al., 2019).

Our analysis of covariances on the single-neuron level goes beyond the balance condition and requires the use of field-theoretical techniques to capture the heterogeneity in the network (Dahmen et al., 2019; Helias and Dahmen, 2020). It relies on linear-response theory, which has previously been shown to faithfully describe correlations in balanced networks of nonlinear (spiking) units (Tetzlaff et al., 2012; Trousdale et al., 2012; Pernice et al., 2012; Grytskyy et al., 2013; Helias et al., 2013; Dahmen et al., 2019). These studies mainly investigated population-averaged correlations with small spectral bounds of the effective connectivity. Subsequently, Dahmen et al., 2019 showed the quantitative agreement of this linear-response theory for covariances between individual neurons in networks of spiking neurons for the whole range of spectral bounds, including the dynamically balanced critical regime. The long-range coordination studied in the current manuscript requires the inclusion of spatially non-homogeneous coupling to analyze excitatory-inhibitory random networks on a two-dimensional sheet with spatially decaying connection probabilities. This new theory allows us to derive expressions for the spatial decay of the variance of covariances. We primarily evaluate these expressions in the long-range limit, which agrees well with simulations for distances x > 2d ≈ O(1 mm), a condition fulfilled for most distances on the Utah array (Figure 3, Appendix 1—figure 7). For these distances, we find that the decay of covariances is dominated by a simple exponential law. Unexpectedly, its decay constant is essentially determined by only two measures: the spectral bound of the effective connectivity and the length scale of direct connections. The length scale of covariances diverges when approaching the breakdown of linear stability. In this regime, differences in covariances induced by differences in length scales of excitatory and inhibitory connections become negligible. The predicted emergence of a single length scale of covariances is consistent with our data.

This study focuses on local and isotropic connection profiles to show that long-range coordination does not rely on specific connection patterns but can result from the network state alone. Alternative explanations for long-range coordination are based on specifically imprinted network structures: Anisotropic local connection profiles have been studied and shown to create spatio-temporal sequences (Spreizer et al., 2019). Likewise, embedded excitatory feed-forward motifs and cell assemblies via excitatory long-range patchy connections (DeFelipe et al., 1986) can create positive covariances at long distances (Diesmann et al., 1999; Litwin-Kumar and Doiron, 2012). Yet, these connections cannot provide an explanation for the large negative covariances between excitatory neurons at long distances (see e.g. Figure 1D). Long-range connectivity, for example arising from a salt-and-pepper organization of neuronal selectivity with connections preferentially targeting neurons with equal selectivity (Ben-Yishai et al., 1995; Hansel and Sompolinsky, 1998; Roxin et al., 2005; Blumenfeld et al., 2006), would produce salt-and-pepper covariance patterns even in networks with small spectral bounds where interactions are only mediated via direct connections. However, in this scenario, one would expect that neurons with similar selectivity would consistently show positive covariance due to their mutual excitatory connections and due to the correlated input they receive. Yet, when analyzing two different epochs of the reach-to-grasp task, we find that a large fraction of neuron pairs actually switches from being significantly positively correlated to negatively correlated and vice versa (see Figure 6D2, upper left and lower right quadrant). This state-dependence of covariances is in line with the mechanism of long-range coordination by indirect interactions suggested here: Such indirect interactions depend on the effective strengths of various connections and can therefore change considerably with network state. In contrast, correlations due to imprinted network structures are static, so that a change in gain of the neurons will either strengthen or weaken the specific activity propagation, but it will not lead to the change of the sign of covariances that we see in our data. The static impact of these connectivity structures on covariances could nevertheless in principle be included in the presented formalism. Long-range coordination can also be created from short-range connections with random orientations of anisotropic local connection profiles (Smith et al., 2018). This finding can be linked to the emergence of tuning maps in the visual cortex. The mechanism is similar to ours in that it uses nearly linearly unstable modes that are determined by spatial connectivity structures and heterogeneity. Given the different source of heterogeneity, the modes and corresponding covariance patterns are different from the ones discussed here: Starting from fully symmetric networks with corresponding symmetric covariance patterns, Smith et al., 2018 found that increasing heterogeneity (anisotropy) yields more randomized, but still patchy regions of positive and negative covariances that are in line with low-dimensional activity patterns found in visual cortex. In motor cortex we instead find salt-and-pepper patterns that can be explained in terms of heterogeneity through sparsity. We provide the theoretical basis and explicit link between connectivity eigenspectra and covariances and show that heterogeneity through sparsity is sufficient to generate the dynamically balanced critical state as a simple explanation for the broad distribution of covariances in motor cortex, the salt-and-pepper structure of coordination, its long spatial range, and its sensitive dependence on the network state. Note that both mechanisms of long-range coordination, the one studied in Smith et al., 2018 and the one presented here, rely on the effective connectivity placing the network in the dynamically balanced critical regime. The latter regime is, however, not just one single point in parameter space, but an extended region that can be reached via a multitude of control mechanisms for the effective connectivity, for example by changing neuronal gains (Salinas and Sejnowski, 2001a; Salinas and Sejnowski, 2001b), synaptic strengths (Sompolinsky et al., 1988), and network microcircuitry (Dahmen et al., 2021).

What are possible functional implications of the coordination on mesoscopic scales? Recent work demonstrated activity in motor cortex to be organized in low-dimensional manifolds (Gallego et al., 2017; Gallego, 2018; Gallego et al., 2020). Dimensionality reduction techniques, such as PCA or GPFA (Yu et al., 2009), employ covariances to expose a dynamical repertoire of motor cortex that is comprised of neuronal modes. Previous work started to analyze the relation between the dimensionality of activity and connectivity (Aljadeff et al., 2015; Aljadeff et al., 2016; Mastrogiuseppe and Ostojic, 2018; Dahmen et al., 2019; Dahmen et al., 2021; Hu and Sompolinsky, 2020), but only in spatially unstructured networks, where each neuron can potentially be connected to any other neuron. The majority of connections within cortical areas, however, stems from local axonal arborizations (Schnepel et al., 2015). Here, we add this biological constraint and demonstrate that these networks, too, support a dynamically balanced critical state. This state in particular exhibits neural modes which are spanned by neurons spread across the experimentally observed large distances. In this state a small subset of modes that are close to the point of instability dominates the variability of the network activity and thus spans a low-dimensional neuronal manifold. As opposed to specifically designed connectivity spectra via plasticity mechanisms (Hennequin et al., 2014) or low-rank structures embedded into the connectivity (Mastrogiuseppe and Ostojic, 2018), the dynamically balanced critical state is a mechanism that only relies on the heterogeneity which is inherent to sparse connectivity and abundant across all brain areas.

While we here focus on covariance patterns in stationary activity periods, the majority of recent works studied transient activity during motor behavior (Gallego et al., 2017). How are stationary and transient activities related? During stationary ongoing activity states, covariances are predominantly generated intrinsically (Helias et al., 2014). Changes in covariance patterns therefore arise from changes in the effective connectivity via changes in neuronal gains, as demonstrated here in the two periods of the reach-to-grasp experiment and in our simulations for networks close to criticality (Figure 5D). During transient activity, on top of gain changes, correlated external inputs may directly drive specific neural modes to create different motor outputs, thereby restricting the dynamics to certain subspaces of the manifold. In fact, Elsayed et al., 2016 reported that the covariance structures during movement preparation and movement execution are unrelated and corresponding to orthogonal spaces within a larger manifold. Also, Luczak et al., 2009 studied auditory and somatosensory cortices of awake and anesthetized rats during spontaneous and stimulus-evoked conditions and found that neural modes of stimulus-evoked activity lie in subspaces of the neural manifold spanned by the spontaneous activity. Similarly, visual areas V1 and V2 seem to exploit distinct subspaces for processing and communication (Semedo et al., 2019), and motor cortex uses orthogonal subspaces capturing communication with somatosensory cortex or behavior-generating dynamics (Perich et al., 2021). Gallego, 2018 further showed that manifolds are not identical, but to a large extent preserved across different motor tasks due to a number of task-independent modes. This leads to the hypothesis that the here described mechanism for long-range cooperation in the dynamically balanced critical state provides the basis for low-dimensional activity by creating such spatially extended neural modes, whereas transient correlated inputs lead to their differential activation for the respective target outputs. The spatial spread of the neural modes thereby leads to a distributed representation of information that may be beneficial to integrate information into different computations that take place in parallel at various locations. Further investigation of these hypotheses is an exciting endeavor for the years to come.

Materials and methods

Experimental design and statistical analysis

Two adult macaque monkeys (monkey E, female; monkey N, male) are recorded in behavioral experiments of two types: resting state and reach-to-grasp. The recordings of neuronal activity in motor and pre-motor cortex (hand/arm region) are performed with a chronically implanted 4×4 mm² Utah array (Blackrock Microsystems). Details on surgery, recordings, spike sorting, and classification of behavioral states can be found in Riehle et al., 2013; Riehle et al., 2018; Brochier et al., 2018; Dąbrowska et al., 2020. All animal procedures were approved by the local ethical committee (C2EA 71; authorization A1/10/12) and conformed to the European and French government regulations.

Resting state data

During the resting state experiment, the monkey is seated in a primate chair without any task or stimulation. Registration of electrophysiological activity is synchronized with a video recording of the monkey’s behavior. Based on this, periods of ‘true resting state’ (RS), defined as no movements and eyes open, are chosen for the analysis. Eye movements and minor head movements are included. Each monkey is recorded twice, with sessions lasting approximately 15 and 20 min for monkeys E (sessions E1 and E2) and N (sessions N1 and N2), respectively. The behavior is classified by visual inspection with single-second precision, resulting in 643 and 652 s of RS data for monkey E and 493 and 502 s of RS data for monkey N.

Reach-to-grasp data

In the reach-to-grasp experiment, the monkeys are trained to perform an instructed delayed reach-to-grasp task to obtain a reward. Trials are initiated by the monkey closing a switch (TS, trial start). After 400 ms, a diode is illuminated (WS, warning signal), followed by a cue after another 400 ms (CUE-ON), which provides partial information about the upcoming trial. The cue lasts 300 ms and its removal (CUE-OFF) initiates a 1 s preparatory period, followed by a second cue, which also serves as GO signal. Two epochs, divided into 200 ms sub-periods, within such defined trials are chosen for analysis: the first 400 ms after TS (starting period, S1 and S2), and the 400 ms directly following CUE-OFF (preparatory period, P1 and P2) (Figure 6A). Five selected sessions for monkey E and eight for monkey N provide a total of 510 and 1111 correct trials, respectively. For detailed numbers of trials and single units per recording session see Appendix 1—table 1.

Separation of putative excitatory and inhibitory neurons

Offline spike-sorted single units (SUs) are separated into putative excitatory (broad-spiking) and putative inhibitory (narrow-spiking) units based on their spike waveform width (Barthó et al., 2004; Kaufman et al., 2010; Kaufman et al., 2013; Peyrache, 2012; Peyrache and Destexhe, 2019). The width is defined as the time (number of data samples) between the trough and peak of the waveform. Widths of all average waveforms from all selected sessions (both resting state and reach-to-grasp) per monkey are collected. Thresholds for ‘broadness’ and ‘narrowness’ are chosen based on the monkey-specific distribution of widths, such that intermediate values stay unclassified. For monkey E the thresholds are 0.33 ms and 0.34 ms and for monkey N 0.40 ms and 0.41 ms. Next, a two-step classification is performed session by session. Firstly, the thresholds are applied to average SU waveforms. Secondly, the thresholds are applied to SU single waveforms and the percentage of single waveforms pre-classified as the same type as the average waveform is calculated. SUs for which this percentage is high enough are marked as classified. All remaining SUs are grouped as unclassified. We verify the robustness of our results with respect to changes in the spike sorting procedure in Appendix 1 Section 2.

Synchrofacts, that is, spike-like synchronous events across multiple electrodes at the sampling resolution of the recording system (1/30 ms) (Torre, 2016), are removed. In addition, only SUs with a signal-to-noise ratio (Hatsopoulos et al., 2004) of at least 2.5 and a minimal average firing rate of 1 Hz are considered for the analysis, to ensure sufficient and clean data for valid statistics.

Statistical analysis

All RS periods per resting state recording are concatenated and binned into 1 s bins. Next, pairwise covariances of all pairs of SUs are calculated according to the following formula:

$\mathrm{COV}(i,j)=\frac{\langle b_i-\mu_i,\;b_j-\mu_j\rangle}{l-1},$ (3)

with $b_i$, $b_j$ the binned spike trains, $\mu_i$, $\mu_j$ their mean values, $l$ the number of bins, and $\langle x,y\rangle$ the scalar product of vectors $x$ and $y$. The obtained values are broadly distributed, but low on average in every recorded session: in session E1, E-E pairs: 0.19±1.10 (M±SD), E-I: 0.24±2.31, I-I: 0.90±4.19; in session E2, E-E: 0.060±1.332, E-I: 0.30±2.35, I-I: 1.0±4.5; in session N1, E-E: 0.24±1.13, E-I: 0.66±2.26, I-I: 2.4±4.9; in session N2, E-E: 0.41±1.47, E-I: 1.0±3.1, I-I: 3.9±7.3.

To explore the dependence of covariance on the distance between the considered neurons, the obtained values are grouped according to distances between the electrodes on which the neurons are recorded. For each distance the average and variance of the obtained distribution of cross-covariances is calculated. The variance is additionally corrected for bias due to a finite number of measurements (Dahmen et al., 2019). In most cases, the correction does not exceed 0.01%.
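For illustration, a minimal NumPy sketch of this computation (not the original analysis code; spike counts, bin size, and electrode positions are placeholder assumptions, and the finite-sample bias correction of the variance is omitted here):

```python
import numpy as np

def binned_covariances(b):
    """b: spike counts of shape (n_units, n_bins), from 1 s bins.
    Returns the pairwise covariance matrix of Equation (3)."""
    n_units, l = b.shape
    centered = b - b.mean(axis=1, keepdims=True)
    return centered @ centered.T / (l - 1)

def variance_by_distance(cov, pos):
    """Group cross-covariances by electrode distance and return the
    variance of the covariance distribution at each distance."""
    i, j = np.triu_indices(cov.shape[0], k=1)
    dist = np.linalg.norm(pos[i] - pos[j], axis=1).round(6)
    cc = cov[i, j]
    return {d: cc[dist == d].var(ddof=1) for d in np.unique(dist)}
```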

In the following step, exponential functions $y=a\,e^{-x/d}$ are fitted to the obtained distance-resolved variances of cross-covariances ($y$ corresponding to the variance and $x$ to the distance between neurons), which yields a pair of values $(a,d)$. The least squares method implemented in the Python scipy.optimize module (SciPy v.1.4.1) is used. Firstly, three independent fits are performed to the data for excitatory-excitatory, excitatory-inhibitory, and inhibitory-inhibitory pairs. Secondly, analogous fits are performed, with the constraint that the decay constant $d$ should be the same for all three curves.
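A sketch of the constrained fit (assumed data layout, not the original analysis code): the three amplitudes are free while the decay constant $d$ is shared across the E-E, E-I, and I-I curves:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_shared_decay(x_by_pop, y_by_pop, d0=1.0):
    """x_by_pop, y_by_pop: lists of three arrays (distances, variances)
    for E-E, E-I, and I-I pairs. Returns [a_EE, a_EI, a_II, d]."""
    def residuals(params):
        *amps, d = params
        return np.concatenate([y - a * np.exp(-x / d)
                               for a, x, y in zip(amps, x_by_pop, y_by_pop)])
    x0 = [float(y.max()) for y in y_by_pop] + [d0]
    return least_squares(residuals, x0=x0).x
```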

Covariances in the reach-to-grasp data are calculated analogously but with different time resolution. For each chosen sub-period of a trial, data are concatenated and binned into 200 ms bins, meaning that the number of spikes in a single bin corresponds to a single trial. The mean of these counts normalized to the bin width gives the average firing rate per SU and sub-period. The pairwise covariances are calculated according to Equation (3). To assess the similarity of neuronal activity in different periods of a trial, Pearson product-moment correlation coefficients are calculated on vectors of SU-resolved rates and pair-resolved covariances. Correlation coefficients from all recording sessions per monkey are separated into two groups: using sub-periods of the same epoch (within-epoch), and using sub-periods of different epochs of a trial (between-epochs). These groups are tested for differences with significance level $\alpha=0.05$. Firstly, to check if the assumptions for parametric tests are met, the normality of each obtained distribution is assessed with a Shapiro-Wilk test, and the equality of variances with an F-test. Secondly, a t-test is applied to compare within- and between-epochs correlations of rates or covariances. Since there are two within and four between correlation values per recording session, the number of degrees of freedom equals $df=(N_{\mathrm{sessions}}\cdot 2-1)+(N_{\mathrm{sessions}}\cdot 4-1)$, which is 28 for monkey E and 46 for monkey N. To estimate the confidence intervals for the obtained differences, the mean difference between groups $m$ and their pooled standard deviation $s$ are calculated for each comparison

$m=m_{\mathrm{within}}-m_{\mathrm{between}},\qquad s=\sqrt{\frac{(N_{\mathrm{within}}-1)\,s_{\mathrm{within}}^{2}+(N_{\mathrm{between}}-1)\,s_{\mathrm{between}}^{2}}{N_{\mathrm{within}}+N_{\mathrm{between}}-2}},$

with $m_{\mathrm{within}}$ and $m_{\mathrm{between}}$ being the means, $s_{\mathrm{within}}$ and $s_{\mathrm{between}}$ the standard deviations, and $N_{\mathrm{within}}$ and $N_{\mathrm{between}}$ the numbers of within- and between-epoch correlation coefficient values, respectively.

This results in 95% confidence intervals $m\pm t_{(df)}\,s$ of 0.192±0.093 for rates and 0.32±0.14 for covariances in monkey E, and 0.19±0.14 for rates and 0.26±0.17 for covariances in monkey N.

For both monkeys the within-epoch rate-correlations distribution does not fulfill the normality assumption of the t-test. We therefore perform an additional non-parametric Kolmogorov-Smirnov test for the rate comparison. The differences are again significant; for monkey E $D=1.00$, $p=6.66\times 10^{-8}$; for monkey N $D=1.00$, $p=8.87\times 10^{-13}$.

For all tests we use the implementations from the Python scipy.stats module (SciPy v.1.4.1).
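The test cascade could be sketched as follows (illustrative only; scipy.stats provides the Shapiro-Wilk, t-, and Kolmogorov-Smirnov tests, while the F-test for equal variances is assembled from the F-distribution):

```python
import numpy as np
from scipy import stats

def compare_within_between(within, between, alpha=0.05):
    within, between = np.asarray(within), np.asarray(between)
    normal = (stats.shapiro(within)[1] > alpha
              and stats.shapiro(between)[1] > alpha)
    # Two-sided F-test for equality of variances
    f = within.var(ddof=1) / between.var(ddof=1)
    dfn, dfd = within.size - 1, between.size - 1
    p_var = 2 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))
    if normal and p_var > alpha:
        return stats.ttest_ind(within, between)      # parametric
    return stats.ks_2samp(within, between)           # non-parametric fallback
```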

Mean and variance of covariances for a two-dimensional network model with excitatory and inhibitory populations

The mean and variance of covariances are calculated for a two-dimensional network consisting of one excitatory and one inhibitory population of neurons. The connectivity profile p(x), describing the probability of a neuron having a connection to another neuron at distance x, decays with distance. We assume periodic boundary conditions and place the neurons on a regular grid (Figure 3A), which imposes translation and permutation symmetries that enable the derivation of closed-form solutions for the distance-dependent mean and variance of the covariance distribution. These simplifying assumptions are common practice and simulations show that they do not alter the results qualitatively.

Our aim is to find an expression for the mean and variance of covariances as functions of the distance between two neurons. While the theory in Dahmen et al., 2019 is restricted to homogeneous connections, understanding the spatial structure of covariances here requires us to take into account the spatial structure of connectivity. Field-theoretic methods, combined with linear-response theory, allow us to obtain expressions for the mean covariance $\bar{c}$ and variance of covariance $\overline{\delta c^{2}}$

$\bar{c}=[\mathbb{1}-M]^{-1}\,\frac{D}{1-R^{2}}\,[\mathbb{1}-M]^{-\mathrm{T}},\qquad \overline{\delta c^{2}}=[\mathbb{1}-S]^{-1}\left(\frac{D}{1-R^{2}}\right)^{2}[\mathbb{1}-S]^{-\mathrm{T}},$ (4)

with identity matrix 1, mean M and variance S of connectivity matrix W, input noise strength D, and spectral bound R. Since M and S have a similar structure, the mean and variance can be derived in the same way, which is why we only consider variances in the following.

To simplify Equation (4), we need to find a basis in which $S$, and therefore also $A=\mathbb{1}-S$, is diagonal. Due to invariance under translation, the translation operators $T$ and the matrix $S$ have common eigenvectors, which can be derived using that translation operators satisfy $T^{N}=\mathbb{1}$, where $N$ is the number of lattice sites in x- or y-direction (see Appendix 1). Projecting onto a basis of these eigenvectors shows that the eigenvalues $s_{\mathbf{k}}$ of $S$ are given by a discrete two-dimensional Fourier transform of the connectivity profile, $s_{\mathbf{k}}\propto\sum_{\mathbf{x}}p(\mathbf{x})\,e^{-i\mathbf{k}\cdot\mathbf{x}}$.

Expressing $A^{-1}$ in the eigenvector basis yields $A^{-1}(\mathbf{x})=\mathbb{1}+B(\mathbf{x})$, where $B(\mathbf{x})$ is a discrete inverse Fourier transform of the kernel $s_{\mathbf{k}}/(1-s_{\mathbf{k}})$. Assuming a network that is large compared to the range of the connectivity profile allows us to take the continuum limit

$B(\mathbf{x})=\frac{1}{(2\pi)^{2}}\int d^{2}k\,\frac{s(\mathbf{k})}{1-s(\mathbf{k})}\,e^{i\mathbf{k}\cdot\mathbf{x}}.$

As we are only interested in the long-range behavior, which corresponds to $|\mathbf{x}|\to\infty$ or, equivalently, $|\mathbf{k}|\to 0$, we can approximate the Fourier kernel around $|\mathbf{k}|=0$ by a rational function, quadratic in the denominator, using a Padé approximation. This allows us to calculate the integral, which yields

$B(\mathbf{x})\propto K_{0}\!\left(|\mathbf{x}|/d_{\mathrm{eff}}\right),$

where $K_{0}(x)$ denotes the modified Bessel function of second kind and zeroth order (Olver et al., 2010), and the effective decay constant $d_{\mathrm{eff}}$ is given by Equation (1). In the long-range limit, the modified Bessel function behaves like

$B(\mathbf{x})\overset{|\mathbf{x}|\to\infty}{\sim}\frac{\exp(-|\mathbf{x}|/d_{\mathrm{eff}})}{\sqrt{|\mathbf{x}|}}.$

Writing Equation (4) in terms of $B(\mathbf{x})$ gives

$\overline{\delta c^{2}}(\mathbf{x})=\left(\frac{D}{1-R^{2}}\right)^{2}\left[\delta(\mathbf{x})+2B(\mathbf{x})+(B\ast\ast B)(\mathbf{x})\right],$

with the double asterisk denoting a two-dimensional convolution. $(B\ast\ast B)(\mathbf{x})$ is proportional to the modified Bessel function of second kind and first order (Olver et al., 2010), which has the long-range limit

$(B\ast\ast B)(\mathbf{x})\overset{|\mathbf{x}|\to\infty}{\sim}\sqrt{|\mathbf{x}|}\,\exp(-|\mathbf{x}|/d_{\mathrm{eff}}).$

Hence, the effective decay constant of the variances is given by deff. Note that further details of the above derivation can be found in the Appendix 1 Section 4 - Section 12.

Network model simulation

The explanation of the network state dependence of covariance patterns presented in the main text is based on linear-response theory, which has been shown to yield results quantitatively in line with non-linear network models, in particular networks of spiking leaky integrate-and-fire neuron models (Tetzlaff et al., 2012; Trousdale et al., 2012; Pernice et al., 2012; Grytskyy et al., 2013; Helias et al., 2013; Dahmen et al., 2019). The derived mechanism is thus largely model independent. We here chose to illustrate it with a particularly simple non-linear input-output model, the rectified linear unit (ReLU). In this model, a shift of the network’s working point can turn some neurons completely off, while activating others, thereby leading to changes in the effective connectivity of the network. In the following, we describe the details of the network model simulation.

We performed a simulation with the neural simulation tool NEST (Jordan, 2019) using the parameters listed in Appendix 1—table 4. We simulated a network of N inhibitory neurons (threshold_lin_rate_ipn, Hahne, 2017), which follow the dynamical equation

$\tau\frac{dz_{i}}{dt}=-z_{i}+\sum_{j}J_{ij}\nu_{j}+\mu_{\mathrm{ext},i}+\xi_{i}\sqrt{\tau}\,\sigma_{\mathrm{noise},i},$ (5)

where $z_{i}$ is the input to neuron $i$ and $\nu$ the output firing rate, with the threshold-linear activation function

$\nu=\phi(z)=\begin{cases}0 & \text{for } z\leq 0\\ z & \text{for } z>0,\end{cases}$

time constant $\tau$, connectivity matrix $J$, a constant external input $\mu_{\mathrm{ext},i}$, and uncorrelated Gaussian white noise with $\langle\xi_{i}(t)\rangle=0$, $\langle\xi_{i}(s)\xi_{j}(t)\rangle=\delta_{ij}\delta(s-t)$, and noise strength $\sqrt{\tau}\,\sigma_{\mathrm{noise},i}$. The neurons were connected using the fixed_indegree connection rule, with connection probability $p$, indegree $K=pN$, and delta-synapses (rate_connection_instantaneous) of weight $w$.
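For readers without a NEST installation, the dynamics of Equation (5) can be sketched with a plain Euler-Maruyama integration in NumPy (a minimal sketch, not the NEST implementation; step size and duration are illustrative):

```python
import numpy as np

def simulate_rate_network(J, mu_ext, tau=1.0, sigma_noise=0.1,
                          dt=0.01, steps=200_000, seed=0):
    """Integrate Equation (5) for connectivity J and constant external
    input mu_ext; returns the rate trajectories nu(t)."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    z = np.zeros(n)
    rates = np.empty((steps, n))
    for t in range(steps):
        nu = np.maximum(z, 0.0)              # threshold-linear phi(z)
        z += (-z + J @ nu + mu_ext) / tau * dt \
             + sigma_noise * np.sqrt(dt / tau) * rng.standard_normal(n)
        rates[t] = nu
    return rates
```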

The constant external input μext,i to each neuron was normally distributed, with mean μext, and standard deviation σext. It was used to set the firing rates of neurons, which, via the effective connectivity, influence the intrinsically generated covariances in the network. The two parameters μext and σext were chosen such that, in the stationary state, half of the neurons were expected to be above threshold. Which neurons are active depends on the realization of μext,i and is therefore different for different networks.

To assess the distribution of firing rates, we first considered the static variability of the network and studied the stationary solution of the noise-averaged input $\langle z\rangle_{\mathrm{noise}}$, which follows from Equation (5) as

$\langle z_{i}\rangle_{\mathrm{noise}}=\sum_{j}J_{ij}\langle\nu_{j}\rangle_{\mathrm{noise}}+\mu_{\mathrm{ext},i}.$ (6)

Note that $\langle\nu_{j}\rangle_{\mathrm{noise}}=\langle\phi(z_{j})\rangle_{\mathrm{noise}}$, through the nonlinearity $\phi$, in principle depends on the fluctuations of the system. This dependence is, however, small for the chosen threshold-linear $\phi$, which is nonlinear only at the point $z=0$.

The derivation of μext is based on the following mean-field considerations: according to Equation (6) the mean input to a neuron in the network is given by the sum of external input and recurrent input

$\mu=\mu_{\mathrm{ext}}+\mu_{\mathrm{recurrent}}=\mu_{\mathrm{ext}}+Kw\,\mathrm{Mean}(\nu).$

The variance of the input is given by

$\sigma^{2}=\sigma_{\mathrm{ext}}^{2}+\sigma_{\mathrm{recurrent}}^{2}=\sigma_{\mathrm{ext}}^{2}+Kw^{2}\,\mathrm{Var}(\nu).$

The mean firing rate can be calculated using the diffusion approximation (Tuckwell, 2009; Amit and Tsodyks, 2009), which assumes a normal distribution of inputs due to the central limit theorem, and the fact that a threshold-linear neuron only fires if its input is positive

$\mathrm{Mean}(\nu)=\int d\nu\,P(\mu,\sigma^{2},\nu)\,\nu=\int dz\,\mathcal{N}(\mu,\sigma^{2},z)\,\phi(z)=\int_{0}^{\infty}dz\,\mathcal{N}(\mu,\sigma^{2},z)\,z=\frac{\sigma}{\sqrt{2\pi}}\exp\left(-\frac{\mu^{2}}{2\sigma^{2}}\right)+\frac{\mu}{2}\left[1+\mathrm{erf}\left(\frac{\mu}{\sqrt{2}\sigma}\right)\right],$

where $P$ denotes the probability density of the firing rate $\nu$. The variance of the firing rates is given by

$\mathrm{Var}(\nu)=\mathrm{Mean}(\nu^{2})-\mathrm{Mean}(\nu)^{2}=\frac{\mu^{2}}{4}\left[1-\mathrm{erf}^{2}\left(\frac{\mu}{\sqrt{2}\sigma}\right)\right]+\frac{\sigma^{2}}{2}\left[1-\frac{1}{\pi}\exp\left(-\frac{\mu^{2}}{\sigma^{2}}\right)+\mathrm{erf}\left(\frac{\mu}{\sqrt{2}\sigma}\right)\right]-\frac{\mu\sigma}{\sqrt{2\pi}}\exp\left(-\frac{\mu^{2}}{2\sigma^{2}}\right)\mathrm{erf}\left(\frac{\mu}{\sqrt{2}\sigma}\right).$
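As a quick plausibility check of these rectified-Gaussian moments, the following sketch compares the closed forms with a Monte Carlo estimate (values are illustrative):

```python
import numpy as np
from scipy.special import erf

def relu_moments(mu, sigma):
    e = erf(mu / (np.sqrt(2) * sigma))
    g = np.exp(-mu**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi)
    mean = sigma * g + 0.5 * mu * (1 + e)
    var = (mu**2 / 4 * (1 - e**2)
           + sigma**2 / 2 * (1 - np.exp(-mu**2 / sigma**2) / np.pi + e)
           - mu * sigma * g * e)
    return mean, var

z = np.random.default_rng(1).normal(0.3, 1.2, 10**7)
nu = np.maximum(z, 0.0)
print(relu_moments(0.3, 1.2))   # closed form
print(nu.mean(), nu.var())      # Monte Carlo estimate
```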

The number of active neurons is the number of neurons with a positive input, which we set equal to $N/2$

$\frac{N}{2}\overset{!}{=}N\int_{0}^{\infty}dz\,\mathcal{N}(\mu,\sigma^{2},z)=\frac{N}{2}\left[1+\mathrm{erf}\left(\frac{\mu}{\sqrt{2}\sigma}\right)\right],$

which is only fulfilled for $\mu=0$. Inserting this condition simplifies the equations above and leads to

$\mu_{\mathrm{ext}}=-Kw\,\frac{\sigma}{\sqrt{2\pi}}.$

For the purpose of relating synaptic weight $w$ and spectral bound $R$, we can view the nonlinear network as an effective linear network of half the population size (only the active neurons). In the latter case, we obtain

$w=-\frac{R}{\sqrt{\frac{N}{2}\,p\,(1-p)}}.$

For a given spectral bound $R$, this relation allows us to derive the value

$\mu_{\mathrm{ext}}=\sqrt{\frac{Np}{\pi(1-p)-(\pi-1)R^{2}}}\;\sigma_{\mathrm{ext}}\,R,$ (7)

that, for an arbitrarily fixed $\sigma_{\mathrm{ext}}$ (here $\sigma_{\mathrm{ext}}=1$), makes half of the population active. We were aiming for an effective connectivity with only weak fluctuations in the stationary state. Therefore, we fixed the noise strength for all neurons to $\sigma_{\mathrm{noise}}=0.1\,\sigma_{\mathrm{ext}}$, small compared to the external input, such that the noise fluctuations did not have a large influence on the calculation above that determines which neurons are active.
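In code, this parameter choice can be sketched as follows (a hypothetical helper, not part of the original simulation scripts; N and p are illustrative):

```python
import numpy as np

def derive_parameters(R, N=2000, p=0.1, sigma_ext=1.0):
    """Synaptic weight w (effective linear network of N/2 active
    neurons) and external input mu_ext from Equation (7)."""
    w = -R / np.sqrt(N / 2 * p * (1 - p))
    mu_ext = R * sigma_ext * np.sqrt(
        N * p / (np.pi * (1 - p) - (np.pi - 1) * R**2))
    return w, mu_ext
```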

To show the effect of a change in the effective connectivity on the covariances, we simulated two networks with identical connectivity, but supplied them with slightly different external inputs. This was realized by choosing

$\mu_{\mathrm{ext},i}^{(\alpha)}=\mu_{\mathrm{ext},i}+\Delta\mu_{\mathrm{ext},i}^{(\alpha)},$

with

$\mu_{\mathrm{ext},i}\sim\mathcal{N}\!\left(\mu_{\mathrm{ext}},\,[1-\epsilon]\,\sigma_{\mathrm{ext}}^{2}\right),\qquad \Delta\mu_{\mathrm{ext},i}^{(\alpha)}\sim\mathcal{N}\!\left(0,\,\epsilon\,\sigma_{\mathrm{ext}}^{2}\right),$

$\epsilon\ll 1$, and $\alpha\in\{1,2\}$ indexing the two networks. The main component $\mu_{\mathrm{ext},i}$ of the external input was the same for both networks, but the small component $\Delta\mu_{\mathrm{ext},i}^{(\alpha)}$ was drawn independently for the two networks. This choice ensures that the two networks have a similar external input distribution (Figure 5B1), but with the external inputs distributed differently across the single neurons (Figure 5B2). How similarly the external inputs are distributed across the single neurons is determined by $\epsilon$.
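A minimal sketch of this shared-plus-private construction (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
N, eps, sigma_ext, mu_ext = 2000, 0.05, 1.0, 3.0
base = rng.normal(mu_ext, np.sqrt(1 - eps) * sigma_ext, N)  # shared part
mu_net = {alpha: base + rng.normal(0.0, np.sqrt(eps) * sigma_ext, N)
          for alpha in (1, 2)}                              # private parts
```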

The two networks have a very similar firing rate distribution (Figure 5E1), but, akin to the external inputs, the way the firing rates are distributed across the single neurons differs between the two networks (Figure 5E2). As the effective connectivity depends on the firing rates

$W_{ij}=J_{ij}\,\phi'(\nu_{j}),$

this leads to a difference in the effective connectivities of the two networks and therefore to different covariance patterns, as discussed in Figure 5.

We performed the simulation for spectral bounds ranging from 0.1 to 0.9 in increments of 0.1. We calculated the correlation coefficient of firing rates and the correlation coefficient of time-lag integrated covariances between Nsample neurons in the two networks (Figure 5D) and studied the dependence on the spectral bound.

To check whether the simulation was long enough to yield reliable estimates of the rates and covariances, we split each simulation into two halves and calculated the correlation coefficient between the rates and covariances from the first half of the simulation and those from the second half. They were almost perfectly correlated (Figure 5C). Then, we calculated the correlation coefficients comparing all halves of the first simulation with all halves of the second simulation, showing that the covariance patterns changed much more than the rate patterns (Figure 5C).
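The pattern correlations can be sketched with a small helper that correlates the off-diagonal covariance entries (or, analogously, the rate vectors) of two data sets:

```python
import numpy as np

def pattern_correlation(cov_a, cov_b):
    """Pearson correlation of two cross-covariance patterns."""
    iu = np.triu_indices_from(cov_a, k=1)
    return np.corrcoef(cov_a[iu], cov_b[iu])[0, 1]
```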

Acknowledgements

This work was partially supported by HGF young investigator’s group VH-NG-1028, the European Union’s Horizon 2020 research and innovation program under Grant agreements No. 785907 (Human Brain Project SGA2) and No. 945539 (Human Brain Project SGA3), ANR grant GRASP, and partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 368482240/GRK2416. We are grateful to our colleagues in the NEST and Elephant developer communities for continuous collaboration. All network simulations were carried out with NEST 2.20.0 (http://www.nest-simulator.org). All data analyses were performed with Elephant (https://neuralensemble.org/elephant/). We thank Sebastian Lehmann for help with the design of the figures.

Appendix 1

1 Correlations and covariances

A typical measure for the strength of neuronal coordination is the Pearson correlation coefficient, here applied to spike counts in 1s bins. Correlation coefficients, however, comprise features of both auto- and cross-covariances. From a theoretical point of view, it is simpler to study cross-covariances separately. Indeed, linear-response theory has been shown to faithfully predict cross-covariances in spiking leaky integrate-and-fire networks (Tetzlaff et al., 2012; Pernice et al., 2012; Trousdale et al., 2012; Helias et al., 2013; Dahmen et al., 2019; Grytskyy et al., 2013). Appendix 1—figure 1 justifies the investigation of cross-covariances instead of correlation coefficients for the purpose of this study. It shows that the spatial organization of correlations closely matches the spatial organization of cross-covariances.

Appendix 1—figure 1. Correlations and covariances.


The shown data is taken from session E2. (E-E: excitatory-excitatory, E-I: excitatory-inhibitory, I-I: inhibitory-inhibitory). (A) Population-resolved distribution of pairwise spike-count Pearson correlation coefficients. Same data as in Figure 1C. (B) Population-resolved distribution of pairwise spike-count covariances. (C) Population-resolved distribution of variances. (D) Pairwise spike-count correlation coefficients with respect to the neuron marked by black triangle. Grid indicates electrodes of a Utah array, triangles and circles correspond to putative excitatory and inhibitory neurons, respectively. Size as well as color of markers represent correlation. Neurons within the same square were recorded on the same electrode. Same data as in Figure 1D. (E) Pairwise spike-count covariances with respect to the neuron marked by black triangle.

2 Robustness to E/I separation

The analysis of the experimental data involves a number of preprocessing steps, which may affect the resulting statistics. In our study one such critical step is the separation of putative excitatory and inhibitory units, which is partially based on setting thresholds on the widths of spike waveform, as described in the Methods section. We tested the robustness of our conclusions with respect to these thresholds.

As mentioned in the Methods, two thresholds for the width of a spike waveform are chosen, based on all SU average waveforms: a width larger than the ‘broadness’ threshold indicates a putative excitatory neuron; a width lower than the ‘narrowness’ threshold indicates a putative inhibitory neuron. Units with intermediate widths are unclassified. Additionally, to increase the reliability of the classification, we perform it in two steps: first on the SU’s average waveform, and second on all its single waveforms. We calculate the percentage of single waveforms classified as either type. Finally, only SUs showing a high enough percentage of single waveforms classified the same as the average waveform are sorted as the respective type. The minimal percentage required, referred to as consistency c, is initially set to the lowest value which ensures no contradictions between average- and single-waveform thresholding results. While the ‘broadness’ and ‘narrowness’ thresholds are chosen based on all available data for a given monkey, the required consistency is determined separately for each recording session. For monkey N, c is set to 0.6 in all but one session: in resting state session N1 it is increased to 0.62. For monkey E, the value of c equals 0.6 in the resting state recordings and takes the following values in the five analyzed reach-to-grasp sessions: 0.6, 0.89, 0.65, 0.61, 0.64.

The only step of our analysis for which the separation of putative excitatory and inhibitory neurons is crucial is the fitting of exponentials to the distance-resolved covariances. This step only involves resting state data. To test the robustness of our conclusions, we manipulate the required consistency value for sessions E1, E2, N1, and N2 by setting it to 0.75. Appendix 1—figure 2 and Appendix 1—table 1 summarize the resulting fits.

Appendix 1—figure 2. Distance-resolved variance of covariance: robustness of decay constant estimation.


Exponential fits (lines) to variances of covariances (dots) analogous to Figure 4A and B in the main text (columns 1&3 and 2&4, respectively) for all analyzed resting state sessions. The two sets of plots differ in the E/I separation consistency values chosen during data preprocessing. Panels A-H: default (lowest) required consistency (~0.6), used throughout the main analysis; panels I-P: c=0.75. The values of the obtained decay constants are listed in Appendix 1—table 1.

It turns out that increasing c to 0.75, which implies disregarding about 20-25 percent of all data, does not have a strong effect on the fitting results. The obtained decay constants are smaller than for a lower c value, but they stay in a range about an order of magnitude larger than the anatomical connectivity. We furthermore see that fitting individual slopes to different populations in some sessions leads to unreliable results (cf. yellow lines in Appendix 1—figure 2A, I and blue lines in Appendix 1—figure 2C,D,K,L). Therefore, the data is not sufficient to detect differences in decay constants for different neuronal populations. Fitting instead a single decay constant yields trustworthy results (cf. yellow lines in Appendix 1—figure 2E,M and blue lines in Appendix 1—figure 2G,H,O,P). Our data thus clearly expose that decay constants of covariances are in the millimeter range.

Appendix 1—table 1. Summary of exponential fits to distance-resolved variance of covariance.

For each value of the E/I separation consistency c, the numbers of sorted putative neurons and the percentage of unclassified units (which are therefore not considered for fitting) are listed per resting state session, along with the resulting fit parameters (Figure 4 in the main text).

c              quantity         E1        E2        N1         N2
0.6 (default)  #exc/#inh        56/50     67/56     76/45      78/62
               unclassified     0.078     0.075     0.069      0.091
               relative error   1.1157    1.0055    1.0097     1.0049
               1-slope fit      1.674     1.029     1.676      4.273
               I-I              1.919     0.996     1.647      4.156
               I-E              0.537     1.206     2.738      96100.688
               E-E              1.642     1.308     80308.482  94096.871
0.75           #exc/#inh        45/42     47/48     70/36      74/48
               unclassified     0.24      0.28      0.18       0.21
               relative error   1.1778    1.0141    1.0102     1.0090
               1-slope fit      1.357     0.874     1.420      2.587
               I-I              1.794     0.809     1.394      2.550
               I-E              0.496     1.123     3.682      40.852
               E-E              1.390     1.199     80548.500  10310.780

3 Stationarity of behavioral data

The linear-response theory, with the aid of which we develop our predictions about the covariance structure in the network, assumes that the processes under examination are stationary in time. However, this assumption is not necessarily met in experimental data, especially in motor cortex during active behavioral tasks. For this reason we analyzed the stationarity of average single unit firing rate and pairwise zero time-lag covariance throughout a reach-to-grasp trial, similarly to Dahmen et al., 2019. Although the spiking activity becomes highly non-stationary during the movement, those epochs that are chosen for the analysis in our study (S and P) show only moderate variability in time (Appendix 1—figure 3). An analysis on the level of single-unit resolved activity also shows that the majority of neurons has stationary activity statistics within the relevant epochs S and P, especially when comparing to their whole dynamic range that is explored during movement transients towards the end of the task (Appendix 1—figure 5). Appendix 1—figure 6 shows that there are, however, a few exceptions (e.g. units 11, 84 in this session) that show moderate transients also within an epoch. Nevertheless, these transients are small compared to changes between the two epochs S and P.

Appendix 1—figure 3. Rate and covariance stationarity during a reach-to-grasp trial: monkey E.


Black line indicates population mean and gray area +/- 1 population standard deviation of single unit firing rate (left column) and pairwise zero time-lag covariance (right column) during trial of a given session (row). Blue bars indicate starting (S) and preparatory (P) periods used in the analysis (Figure 6 in the main text). First, second and fourth dashed lines indicate visual signals lighting up and the third dashed line indicates the removal of a visual cue and beginning of a waiting period.

Appendix 1—figure 4. Rate and covariance stationarity during a reach-to-grasp trial: monkey N.


Analogous to Appendix 1—figure 3.

Appendix 1—figure 5. Stationarity of single-unit activity during a reach-to-grasp trial (monkey N, session i140627-001).


Black lines indicate mean and gray areas +/- 1 standard deviation across trials of single unit activity in each panel (sliding window analysis with 5 ms step size and 100 ms window length). Blue bars indicate starting (S) and preparatory (P) periods used in the analysis (Figure 6 in the main text). First, second and fourth dashed lines (marked with stars) indicate visual signals lighting up and the third dashed line indicates the removal of a visual cue and beginning of a waiting period.

Appendix 1—figure 6. Stationarity of single-unit activity during a reach-to-grasp trial (monkey N, session i140704-001).


Analogous to Appendix 1—figure 5.

Thus both the population-level and single-unit level analyses are in line with the second test for stationarity that we show in Figure 6. There we compare the firing rate and covariance changes between two 200 ms segments of the same epoch to the firing rate and covariance changes between two 200 ms segments of different epochs. If the neural activity was not stationary within an epoch then we would not obtain correlation coefficients of almost one between firing rates in Figure 6E and correlation coefficients up to 0.9 between covariance patterns within one epoch in Figure 6F. In summary, the analyses together make us confident that assuming stationarity within an epoch is a good approximation to show that there are significant behaviorally related changes in covariances across epochs of the reach-to-grasp experiment.

Appendix 1—table 2. Numbers of trials and single units per reach-to-grasp recording session.

Session names starting with “e” correspond to monkey E and session names starting with “i” to monkey N.

Session Ntrials Nsingle units
e161212-002 108 129
e161214-001 99 118
e161222-002 102 118
e170105-002 101 116
e170106-001 100 113
i140613-001 93 137
i140617-001 129 155
i140627-001 138 145
i140702-001 157 134
i140703-001 142 142
i140704-001 141 124
i140721-002 160 96
i140725-002 151 106

4 Network model

We are considering neuronal network models with isotropic, distance-dependent connection profiles. Ultimately, we are interested in describing cortical networks with a two-dimensional, sheet-like structure. But, for developing the theory, we first consider the simpler case of a one-dimensional ring and subsequently develop the theory on a two-dimensional torus, ensuring periodic boundary conditions in both cases. $N$ equidistantly distributed neurons form a grid on these manifolds. The position of neuron $i\in\{1,\dots,N\}$ is described by the vector $\mathbf{r}_{i}\in\mathbb{R}^{D}$, with $D\in\{1,2\}$. The connections $W_{ij}$ from neuron $j$ to neuron $i$ are drawn randomly with a connection probability that decays with the distance $|\mathbf{r}_{i}-\mathbf{r}_{j}|$ between the neurons, described by the normalized connectivity profile $p(\mathbf{r})$, $\int p(\mathbf{r})\,d^{D}r=1$, which we assume to obey radial symmetry. The connection probability decays on a characteristic length scale $d$. As we are working on discrete lattices, we introduce the probability $p_{ij}$ of two neurons being connected, which is defined by the relation $p(\mathbf{r}_{i}-\mathbf{r}_{j})=\lim_{a\to 0}p_{ij}/a$, with lattice spacing $a$. We set the synaptic weights for connections of a single type to a fixed value $w$, but allow for multiple connections between neurons, that is $W_{ij}\in\{0,w,2w,\dots\}=n_{ij}w$ for all sending neurons $j$ of a given type, where $n_{ij}$ is binomially distributed. Such multapses are required to simultaneously meet biological constraints on neuronal indegrees, neuron densities, and spatial ranges of connections. If one instead assumed Bernoulli connectivity, an analysis analogous to Eq. 7 of Senk et al., 2018 would yield a connection probability exceeding unity.

We introduce two populations of neurons, excitatory (E) and inhibitory (I) neurons. The number of neurons of a given population $a\in\{E,I\}$ is $N_{a}$, and their ratio is $q=N_{E}/N_{I}$, which, for convenience, we assume to be an even number (see permutation symmetry below). The connection from population $b$ to population $a$ has the synaptic weight $w_{ab}$ and characteristic decay length $d_{ab}$ of the connectivity profile. The average number of inputs drawn per neuron is fixed to $K_{ab}$. In order to preserve translation symmetry, $q$ excitatory neurons and one inhibitory neuron are put onto the same lattice point, as shown in Figure 3A in the main text.

Linear-response theory has been shown to faithfully capture the statistics of fluctuations in asynchronous irregular network states (Lindner et al., 2005). Here we follow Grytskyy et al., 2013, who show that different types of neuronal network models can be mapped to an Ornstein-Uhlenbeck process and that the low-frequency limit of this simple rate model describes spike count covariances of spiking models well (Tetzlaff et al., 2012). In particular, Dahmen et al., 2019 showed quantitative agreement of linear-response predictions for the statistics of spike-count covariances in leaky integrate-and-fire networks for the full range of spectral bounds $R\in[0,1)$. Therefore, we consider a network of linear rate neurons, whose activity $\mathbf{x}\in\mathbb{R}^{N}$ is described by

$\tau\frac{d}{dt}\mathbf{x}=-\mathbf{x}+W\mathbf{x}+\boldsymbol{\xi},$

with uncorrelated Gaussian white noise $\boldsymbol{\xi}$, $\langle\xi_{i}(t)\rangle=0$, $\langle\xi_{i}(s)\xi_{j}(t)\rangle=D_{i}\,\delta_{ij}\,\delta(s-t)$. The solution to this differential equation can be found by multiplying the whole equation with the left eigenvectors $\mathbf{u}_{\alpha}$ of $W$

$\tau\frac{d}{dt}y_{\alpha}=-y_{\alpha}+\lambda_{\alpha}y_{\alpha}+\xi_{\alpha},$ (8)

where $y_{\alpha}=\mathbf{u}_{\alpha}^{\mathrm{T}}\mathbf{x}$, $\xi_{\alpha}=\mathbf{u}_{\alpha}^{\mathrm{T}}\boldsymbol{\xi}$, and $\lambda_{\alpha}$ denotes the corresponding eigenvalue of $W$. Neglecting the noise term, the solutions are given by

$y_{\alpha}(t)\propto\Theta(t)\,\exp\left[-\frac{t}{\tau}\left(1-\lambda_{\alpha}\right)\right],$ (9)

with Heaviside function $\Theta(t)$. These are the eigenmodes of the linear system and they are linear combinations of the individual neuronal rates

$y_{\alpha}=\sum_{i=1}^{N}(\mathbf{u}_{\alpha})_{i}\,x_{i}.$

Note that the weights $(\mathbf{u}_{\alpha})_{i}$ of these linear combinations depend on the details of the effective connectivity matrix $W$. The stability of an eigenmode is determined by the corresponding eigenvalue $\lambda_{\alpha}$. If $\mathrm{Re}(\lambda_{\alpha})<1$, the eigenmode is stable and decays exponentially. If $\mathrm{Re}(\lambda_{\alpha})>1$, the eigenmode is unstable and grows exponentially. If $\mathrm{Im}(\lambda_{\alpha})\neq 0$, the eigenmode is oscillatory with an exponential envelope. $\mathrm{Re}(\lambda_{\alpha})=1$ is here referred to as the critical point. This type of stability is also called linear stability to stress that these considerations are only valid in the linear approximation. Realistic neurons have a saturation at high rates, which prevents activity from diverging indefinitely. A network is called linearly stable if all modes are stable. This is determined by the real part of the largest eigenvalue of $W$, called the spectral bound $R$. In inhibition-dominated networks, the spectral bound is determined by the heterogeneity in connections, and $R\to 1$ defines the dynamically balanced critical state (Dahmen et al., 2019).

The different noise components $\xi_{\alpha}$ excite the corresponding eigenmodes of the system and act as a driving force. A noise vector $\boldsymbol{\xi}$ that is not parallel to a single eigenvector $\mathbf{v}_{\alpha}$ excites several eigenmodes, each with the corresponding strength $\xi_{\alpha}$.

Note that the different eigenmodes do not interact, which is why the total activity x is given by a linear combination, or superposition, of the eigenmodes

$\mathbf{x}=\sum_{\alpha=1}^{N}y_{\alpha}\,\mathbf{v}_{\alpha},$

where $\mathbf{v}_{\alpha}$ denotes the $\alpha$-th right eigenvector of the connectivity matrix $W$.

5 Covariances

Time-lag integrated covariances $c_{ij}=\int d\tau\,\left[\langle x_{i}(t)\,x_{j}(t+\tau)\rangle-\langle x_{i}(t)\rangle\langle x_{j}(t+\tau)\rangle\right]$ can be computed analytically for the linear dynamics (Gallego et al., 2020). They follow from the connectivity $W$ and the noise strength $D$ as (Pernice et al., 2011; Trousdale et al., 2012; Grytskyy et al., 2013; Lindner et al., 2005)

$c=[\mathbb{1}-W]^{-1}\,D\,[\mathbb{1}-W]^{-\mathrm{T}},$ (10)

with identity matrix $\mathbb{1}$. These covariances are equivalent to covariances of spike counts in large time windows, given by the zero-frequency component of the Fourier transform of $\mathbf{x}$ (sometimes referred to as the Wiener-Khinchin theorem, Gardiner, 1985; even though the theorem proper applies in cases where the Fourier transforms of the signals $\mathbf{x}$ do not exist). Spike count covariances (Figure 1B in the main text) can be computed from trial-resolved spiking data (Dahmen et al., 2019). This equivalence allows us to directly relate theoretical predictions for covariances to the experimentally observed ones.
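Equation (10) is straightforward to evaluate numerically; a minimal NumPy sketch (assuming a stable $W$ with spectral bound below one):

```python
import numpy as np

def lag_integrated_covariance(W, D):
    """c = (1 - W)^{-1} diag(D) (1 - W)^{-T}, Equation (10)."""
    A_inv = np.linalg.inv(np.eye(W.shape[0]) - W)
    return A_inv @ np.diag(D) @ A_inv.T
```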

While Equation (10) provides the full information on covariances between any two neurons in the network, this information is not available in the experimental data. Only a small subset of neuronal activities can be recorded such that inference of connectivity parameters from Equation 10 is unfeasible. We recently proposed in Dahmen et al., 2019 to instead consider the statistics of covariances as the basis for comparison between models and data. Using Equation 8 and Equation 10 as a starting point, field theoretical techniques allow the derivation of equations for the mean c¯ and variance δc2¯ of cross-covariances in relation to the mean M and variance S of the connectivity matrix W (Dahmen et al., 2019):

$\bar{c}=[\mathbb{1}-M]^{-1}\,D_{r}\,[\mathbb{1}-M]^{-\mathrm{T}},$ (11)
$\overline{\delta c^{2}}=[\mathbb{1}-S]^{-1}\,D_{r}^{2}\,[\mathbb{1}-S]^{-\mathrm{T}}.$ (12)

$M$ and $S$ are defined in the subsequent section. The renormalized input noise strength is given by

$D_{r}=D+\mathrm{diag}\left[D\,(\mathbb{1}-S)^{-1}S\,\mathbb{I}\right],$ (13)

with input noise covariance $D$, and the all-ones vector $\mathbb{I}=(1,\dots,1)^{\mathrm{T}}\in\mathbb{R}^{N}$. Note that Equation (12) only holds for cross-covariances ($i\neq j$). The diagonal terms $[\overline{\delta c^{2}}]_{ii}$, that is the variances of auto-covariances, do get a second contribution, which is negligible for the cross-covariances considered here.

6 Cumulant generating function of connectivity matrix

For calculating the mean and variance of the covariances of the network activity ((11) and (12)) we need mean M and variance S of connectivity W. In the following, we derive the cumulant generating function (Gardiner, 1985) of Wij.

The number of connections $n$ from neuron $j$ to neuron $i$ is a binomial random variable with $K$ trials and success probability $p_{ij}$ (in the following, for brevity, we suppress the index $i$, $p_{ij}\to p_{j}$)

$\mathrm{Prob}_{j}(n)=\binom{K}{n}\,p_{j}^{n}\,(1-p_{j})^{K-n}.$

The average number of connections from neuron $j$ to neuron $i$ is $K_{j}=p_{j}K$, which assures the correct average total indegree

$\sum_{j}K_{j}=K\sum_{j}p_{j}=K.$

The moment generating function of a connectivity matrix element $W_{j}\equiv W_{ij}\in\{0,w,2w,\dots\}$ is given by

$Z_{W_{j}}(k)=\sum_{n=0}^{K}\binom{K}{n}\,p_{j}^{n}\,(1-p_{j})^{K-n}\,e^{nwk}.$

In a realistic network, $K$ is very large. In the limit $K\to\infty$, while keeping $Kp=\mathrm{const.}$, the binomial distribution converges to a Poisson distribution and we can write

$Z_{W_{j}}(k)\approx\sum_{n=0}^{K}\frac{K_{j}^{n}}{n!}\,e^{-K_{j}}\,e^{nwk}=\sum_{n=0}^{K}\frac{\left(K_{j}e^{wk}\right)^{n}}{n!}\,e^{-K_{j}}\overset{K\to\infty}{\to}\exp\left[K_{j}\left(e^{wk}-1\right)\right].$

Taking the logarithm leads to the cumulant generating function

$G_{W_{j}}(k)\approx p_{j}K\left(e^{wk}-1\right),$

and the first two cumulants

$M_{ij}=\left.\frac{\partial}{\partial k}G_{W_{j}}(k)\right|_{k=0}=p_{j}Kw=p(|x_{i}-x_{j}|)\,Kw,\qquad S_{ij}=\left.\frac{\partial^{2}}{\partial k^{2}}G_{W_{j}}(k)\right|_{k=0}=p_{j}Kw^{2}=p(|x_{i}-x_{j}|)\,Kw^{2}.$
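These two cumulants are easily verified by sampling (a sketch; parameters illustrative). For small $p_{j}$, the binomial variance $Kp_{j}(1-p_{j})w^{2}$ is already close to the Poisson-limit value $p_{j}Kw^{2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, p_j, w = 1000, 0.01, 0.2
W_j = w * rng.binomial(K, p_j, size=10**6)   # multapse weights n_ij * w
print(W_j.mean(), p_j * K * w)               # first cumulant  M_ij
print(W_j.var(), p_j * K * w**2)             # second cumulant S_ij (Poisson limit)
```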

7 Note on derivation of variance of covariances

Note that $M$ and $S$ have an identical structure determined by the connectivity profile, and the structure of the covariance equation is identical for the mean, Equation (11), and the variance, Equation (12), as well. This is why in the following we only derive the results for the mean of covariances. The results for the variance of covariances are obtained by substituting $w\to w^{2}$ and $D_{r}\to D_{r}^{2}$. As we show, divergences in expressions related to the mean covariances arise if the population eigenvalue $\lambda_{0}$ of the effective connectivity matrix approaches one. In expressions related to the variance of covariances, the divergences are caused by the squared spectral bound $R^{2}$ being close to one. In general expressions, we sometimes write $\zeta$ in order to denote either the population eigenvalue or the spectral bound, corresponding to the context of mean or variance of covariances.

8 Utilizing symmetries to reduce dimensionality

For real neuronal networks, the anatomical connectivity is never known completely, let alone the effective connectivity. This is why we are considering disorder-averaged systems. They are described by the mean M and variance S of the connectivity. The latter inherit the underlying symmetries of the network, like for example the same radially symmetric connectivity profile for all neurons of one type. As neuronal networks are high dimensional systems, calculating covariances from Equation (11) and Equation (12) first seems like a daunting task. But, leveraging the aforementioned symmetries similarly as in Kriener et al., 2013 allows for an effective reduction of the dimensionality of the system, thereby rendering the problem manageable.

As a demonstrative example of how this is done, consider a random network of N neurons on a one-dimensional ring, in which a neuron can form a connection with weight w to any other neuron with probability p0. In that case, M is a homogeneous matrix, with all entries given by the same average connectivity weight

$M=p_{0}w\begin{pmatrix}1&1&\cdots&1\\1&1&\cdots&1\\\vdots&\vdots&\ddots&\vdots\\1&1&\cdots&1\end{pmatrix}.$

This corresponds to an all-to-all connected ring network. Due to the symmetry of the system, moving all neurons by one lattice constant does not change the system. The translation operator T, representing this operation mathematically, is defined via its effect on the vector of neuron activity x

$T\mathbf{x}=T\begin{pmatrix}x_{1}\\x_{2}\\\vdots\\x_{N}\end{pmatrix}=\begin{pmatrix}x_{N}\\x_{1}\\\vdots\\x_{N-1}\end{pmatrix}.$

Applying $T$ $N$ times yields the identity operation

$T^{N}=\mathbb{1}.$

Hence, its eigenvalues are given by complex roots of one

$e^{-i2\pi l/N}=e^{-i2\pi la/L}=e^{-ik_{l}a},\qquad l\in\{0,1,\dots,N-1\},$

with $L=Na$ denoting the circumference of the ring. This shows that $T$ has $N$ one-dimensional eigenspaces. Since the system is invariant under translation, $M$ is invariant under the transformation $TMT^{-1}=M$, and thus $M$ and $T$ commute. As $M$ leaves eigenspaces of $T$ invariant (if $\mathbf{v}$ is an eigenvector of $T$, $M\mathbf{v}$ is an eigenvector with the same eigenvalue, so they need to be multiples of each other), all eigenvectors of $T$ must be eigenvectors of $M$. Accordingly, knowing the eigenvectors of $T$ allows diagonalizing $M$. The normalized (left and right) eigenvectors of $T$ are given by

$\mathbf{v}_{k_{l}}=\frac{1}{\sqrt{N}}\begin{pmatrix}1\\e^{ik_{l}a}\\e^{i2k_{l}a}\\\vdots\\e^{i(N-1)k_{l}a}\end{pmatrix}.$

We get the eigenvalues of $M$ by multiplying it with the eigenvectors of $T$

$m_{k_{l}}=\mathbf{v}_{k_{l}}^{\dagger}M\mathbf{v}_{k_{l}}=\frac{1}{\sqrt{N}}\,p_{0}w\;\mathbf{v}_{k_{l}}^{\dagger}\begin{pmatrix}\sum_{j=0}^{N-1}e^{ik_{l}aj}\\\vdots\\\sum_{j=0}^{N-1}e^{ik_{l}aj}\end{pmatrix},$

which is always zero, except for $l=0$, which corresponds to the population eigenvalue $\lambda_{0}:=m_{k_{0}}=Np_{0}w$ of $W$ (Figure 3C in the main text). Now, we can simply write down the diagonalized form of $M$

$\begin{pmatrix}\lambda_{0}&0&\cdots&0\\0&0&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&0\end{pmatrix},$

and we have effectively reduced the $N$-dimensional problem to a one-dimensional one. Inverting $A:=\mathbb{1}-M$ in Equation (11) is straightforward now, since it is diagonal in the new basis. Its eigenvalues can be written as $a_{k}=1-m_{k}$, where we suppressed the index $l$. Therefore its inverse is given by

$A_{ij}^{-1}=\sum_{k}a_{k}^{-1}\,(\mathbf{v}_{k})_{i}(\mathbf{v}_{k}^{\ast})_{j}=\frac{1}{N}\sum_{k}\frac{1}{1-m_{k}}\,e^{ik(x_{i}-x_{j})}=\frac{1}{N}\sum_{k}\left(1+\frac{m_{k}}{1-m_{k}}\right)e^{ik(x_{i}-x_{j})}=\delta_{ij}+\frac{1}{N}\frac{\lambda_{0}}{1-\lambda_{0}}.$

The renormalized noise can be evaluated using that the all-ones vector occurring in Equation (13) is the eigenvector $\mathbf{v}_{0}$ of $S$. After identifying the eigenvalue $s_{0}$ with the squared spectral bound $R^{2}$, we find

$D_{r}=\mathrm{diag}\left(\frac{D}{1-R^{2}}\right),$

which allows us to express the mean cross-covariances $\bar{c}$ (see Equation (11)) and the variance of cross-covariances $\overline{\delta c^{2}}$ (see Equation (12)) in terms of the eigenvectors of $M$ and $S$, respectively

$\bar{c}=\frac{D}{1-R^{2}}\left\{\frac{2\lambda_{0}}{N(1-\lambda_{0})}+\left[\frac{\lambda_{0}}{N(1-\lambda_{0})}\right]^{2}N\right\},$
$\overline{\delta c^{2}}=\left(\frac{D}{1-R^{2}}\right)^{2}\left\{\frac{2R^{2}}{N(1-R^{2})}+\left[\frac{R^{2}}{N(1-R^{2})}\right]^{2}N\right\}.$
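The closed form for the mean is easy to check numerically for the mean-connectivity part (a sketch with illustrative parameters; the noise renormalization factor $1/(1-R^{2})$ is left out here, i.e. $D_{r}$ is replaced by plain $D$):

```python
import numpy as np

N, p0, w, D = 200, 1.0, 0.004, 1.0            # lambda_0 = N p0 w = 0.8
lam0 = N * p0 * w
A_inv = np.linalg.inv(np.eye(N) - np.full((N, N), p0 * w))
c = A_inv @ (D * np.eye(N)) @ A_inv.T
mu = lam0 / (N * (1 - lam0))
print(c[0, 1], D * (2 * mu + N * mu**2))      # off-diagonal entries agree
```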

9 One-dimensional network with one population

The simplest network with spatial connectivity is a one-dimensional ring with a single population of neurons. Following Section 6, the mean connectivity matrix has the form

$M=Kw\begin{pmatrix}p_{11}&p_{12}&\cdots&p_{1N}\\p_{21}&p_{22}&\cdots&p_{2N}\\\vdots&\vdots&\ddots&\vdots\\p_{N1}&p_{N2}&\cdots&p_{NN}\end{pmatrix}.$

As pij only depends on the distance of two neurons, the rows in M are identical, but shifted by one index.

9.1 Dimensionality reduction

We follow the procedure developed in Section 8, as the system is invariant under translation as well. Suppressing the subscripts of k, we get the eigenvalues of M

$m_{k}=\frac{1}{N}\,Kw\left(1,e^{-ika},\dots,e^{-i(N-1)ka}\right)\begin{pmatrix}\sum_{j=0}^{N-1}p_{1(j+1)}e^{ikaj}\\\sum_{j=0}^{N-1}p_{2(j+1)}e^{ikaj}\\\vdots\\\sum_{j=0}^{N-1}p_{N(j+1)}e^{ikaj}\end{pmatrix}=Kw\sum_{j=0}^{N-1}p_{(j+1)}\,e^{-ikaj}=Kw\,a\sum_{x}p(x)\,e^{-ikx},$

where the sum over $x$ denotes a sum over all lattice sites. We used the translational symmetry from the first to the second line. The change of sign in the exponential from line two to three is due to the fact that we are summing over the second index of $p_{ij}$. Thus, the eigenvalues are effectively given by the discrete Fourier transform of the connectivity profile. Expressing $A^{-1}$ using the eigenvectors $\mathbf{v}_{k}$ of $M$ leads to

$A_{ij}^{-1}=\frac{1}{N}\sum_{k}\frac{1}{1-m_{k}}\,e^{ik(x_{i}-x_{j})}=\frac{1}{N}\sum_{k}\left(1+\frac{m_{k}}{1-m_{k}}\right)e^{ik(x_{i}-x_{j})}=\delta_{ij}+\frac{1}{N}\sum_{k}\frac{m_{k}}{1-m_{k}}\,e^{ik(x_{i}-x_{j})}\equiv\delta_{ij}+\mu_{ij},$ (14)

where we extracted an identity for later convenience, and we defined $\mu_{ij}$.

Next, we consider the renormalized noise, which is given by Equation (13). Using that the all-ones vector $\mathbb{I}$ in the second term is the eigenvector of $S$ corresponding to $k=0$, we get

$D\,(\mathbb{1}-S)^{-1}S\,\mathbb{I}=\frac{D\,s_{0}}{1-s_{0}}\,\mathbb{I}.$

Again, we identify $s_{0}$ with the squared spectral bound $R^{2}$, and find

$D_{r}=D+\frac{D\,R^{2}}{1-R^{2}}=\frac{D}{1-R^{2}}.$ (15)

Inserting Equation (14) and Equation (15) into Equation (11) yields

$\bar{c}_{ij}=\frac{D}{1-R^{2}}\left(\delta_{ij}+2\mu_{ij}+\sum_{k}\mu_{ik}\mu_{kj}\right).$

9.2 Continuum limit

As we assume the lattice constant to be small, we know that the connectivity profile is sampled densely, and we are allowed to take the continuum limit. Therefore, we write

$m_{k}=Kw\sum_{j}p_{(j+1)}\,e^{-ikaj}=Kw\sum_{j}a\,\frac{p_{(j+1)}}{a}\,e^{-ikaj}\overset{a\to 0}{\to}Kw\int_{-L/2}^{L/2}dx\,p(-x)\,e^{-ikx}=Kw\int_{-L/2}^{L/2}dx\,p(x)\,e^{ikx}.$

Note that $\lim_{a\to 0}p_{j}/a=\lim_{a\to 0}p(x_{i}-x_{j})/a=p(-x)$ with $x=x_{j}-x_{i}$, because we are summing over the second index $j$. If the decay constant $d$ of the connectivity profile is small compared to the size $L$ of the network, we can take $L$ to infinity and finally end up with

$m(k)=Kw\int dx\,p(x)\,e^{-ikx}.$ (16)

Analogously, we find

$A^{-1}(x)=\delta(x)+\frac{1}{2\pi}\int dk\,\frac{m(k)}{1-m(k)}\,e^{ikx}\equiv\delta(x)+\mu(x),$ (17)

where we defined

$\mu(x)=\frac{1}{2\pi}\int dk\,\mu(k)\,e^{ikx},$ (18)

with

$\mu(k)=\frac{m(k)}{1-m(k)}.$ (19)

Finally, we get

$\bar{c}(x)=\frac{D}{1-R^{2}}\left[\delta(x)+2\mu(x)+(\mu\ast\mu)(x)\right],$ (20)

where the asterisk denotes the convolution.

9.3 Prediction of exponential decay of covariance statistics

Note that the integral in Equation (18) can be interpreted as an integral in the complex plane. According to the residue theorem, the solution to this integral is a weighted sum of exponentials, evaluated at the poles of $[1-m(k)]^{-1}$. As $\mu(x)$ appears in the equation for the mean covariances, and the convolution of two exponentials is an exponential with a prefactor (const. + $|x|$), we expect the dominant behavior to be an exponential decay in the long-range limit, with decay constants given by the inverse imaginary parts of the poles. The poles closest to zero are the ones which lead to the shallowest, and thereby dominant, decay. A nonzero real part of the poles leads to oscillations in $\mu(x)$.

9.4 Long-range limit

We cannot expect to solve the integral in Equation 17 for arbitrary connectivity profiles. To continue our analysis, we make use of the Padé method, which approximates arbitrary functions as rational functions (Basdevant, 1972). We approximate μ(k) around k=0 using a Padé approximation of order (0,2)

$\mu(k)\approx\frac{m(0)}{1-m(0)-\frac{m''(0)}{2m(0)}k^{2}},$

with

$m(0)=Kw\int dx\,p(x)=Kw=\lambda_{0},\qquad m''(0)=-Kw\int dx\,x^{2}p(x)=-Kw\,\langle x^{2}\rangle.$ (21)

This allows us to calculate the approximate poles of $\mu(k)$

$k_{0}=\pm\sqrt{\frac{2m(0)}{m''(0)}\left[1-m(0)\right]}.$ (22)

As $2m(0)/m''(0)$ will be negative, due to the factor $i^{2}$ from the second derivative of the Fourier integral, we write

$k_{0}=\pm i\sqrt{-\frac{2m(0)}{m''(0)}\left[1-m(0)\right]}.$

Closing the integral contour in Equation (18) in the upper half plane for $x>0$, and in the lower half plane for $x<0$, we get

$\mu(x)=-\frac{m(0)^{2}}{m''(0)}\sqrt{-\frac{m''(0)}{2m(0)}\frac{1}{1-m(0)}}\;\exp\left(-|x|\Big/\sqrt{-\frac{m''(0)}{2m(0)}\frac{1}{1-m(0)}}\right)\equiv-\frac{m(0)^{2}}{m''(0)}\,\bar{d}\,\exp\left(-\frac{|x|}{\bar{d}}\right),$

where we defined the effective decay constant for the mean covariances

$\bar{d}=\sqrt{-\frac{m''(0)}{2m(0)}\frac{1}{1-m(0)}}=\sqrt{\frac{\langle x^{2}\rangle}{2}\frac{1}{1-\lambda_{0}}},$

with $m(0)=\lambda_{0}$ and $m''(0)=-\lambda_{0}\langle x^{2}\rangle$, since $m(k)$ is the Fourier transform of the connectivity profile, Equation (16). Note that $\lambda_{0}=Kw$ again is the population eigenvalue of the effective connectivity matrix $W$. For evaluating Equation (11) and Equation (12), we need to calculate the convolution of $\mu$ with itself

$(\mu\ast\mu)(x)=\int dy\,\mu(x-y)\,\mu(y)=\frac{m(0)^{4}}{m''(0)^{2}}\,\bar{d}^{2}\,(\bar{d}+|x|)\,\exp\left(-\frac{|x|}{\bar{d}}\right).$

The final expression for the mean covariances is

$\bar{c}(x)=\frac{D}{1-R^{2}}\left\{\delta(x)+\left[\left(\frac{m(0)^{4}}{m''(0)^{2}}\bar{d}^{2}-\frac{2m(0)^{2}}{m''(0)}\right)\bar{d}+\frac{m(0)^{4}}{m''(0)^{2}}\bar{d}^{2}\,|x|\right]\exp\left(-\frac{|x|}{\bar{d}}\right)\right\}.$

Equivalently, for the variance of covariances we obtain the final result

$\overline{\delta c^{2}}(x)=\left(\frac{D}{1-R^{2}}\right)^{2}\left\{\delta(x)+\left[\left(\frac{s(0)^{4}}{s''(0)^{2}}d_{\mathrm{eff}}^{2}-\frac{2s(0)^{2}}{s''(0)}\right)d_{\mathrm{eff}}+\frac{s(0)^{4}}{s''(0)^{2}}d_{\mathrm{eff}}^{2}\,|x|\right]\exp\left(-\frac{|x|}{d_{\mathrm{eff}}}\right)\right\},$

where

$s(k)=Kw^{2}\int dx\,p(x)\,e^{-ikx}.$

Note that the quality of the Padé approximation depends on the outlier eigenvalue and the spectral bound. For the variances, the approximation works best for spectral bounds $R$ close to 1. The reason for this is that we are approximating the position of the poles in the complex integral, Equation (18). We make an approximation around $k=0$, and Equation (22) shows that the position of the complex poles moves closer to $k=0$ as $s(0)=R^{2}\to 1$.

General results:

Using Equation (21)

$m(0)=Kw=\lambda_{0},\qquad m''(0)=-Kw\,\langle x^{2}\rangle,$

we find

$\bar{c}(x)=\frac{D}{1-R^{2}}\left\{\delta(x)+\left[\frac{Kw\,(4-3Kw)}{2\langle x^{2}\rangle\,(1-Kw)}\,\bar{d}+\frac{(Kw)^{2}}{\langle x^{2}\rangle^{2}}\,\bar{d}^{2}\,|x|\right]\exp\left(-\frac{|x|}{\bar{d}}\right)\right\},$

with

$\bar{d}=\sqrt{\left|\frac{\langle x^{2}\rangle}{2}\frac{1}{1-\lambda_{0}}\right|}.$

For the variance we use

$s(0)=Kw^{2}=R^{2},\qquad s''(0)=-Kw^{2}\,\langle x^{2}\rangle,$

to get

$\overline{\delta c^{2}}(x)=\frac{D^{2}}{(1-R^{2})^{2}}\left\{\delta(x)+\left[\frac{Kw^{2}\,(4-3Kw^{2})}{2\langle x^{2}\rangle\,(1-Kw^{2})}\,d_{\mathrm{eff}}+\frac{(Kw^{2})^{2}}{\langle x^{2}\rangle^{2}}\,d_{\mathrm{eff}}^{2}\,|x|\right]\exp\left(-\frac{|x|}{d_{\mathrm{eff}}}\right)\right\},$

with

$d_{\mathrm{eff}}=\sqrt{\left|\frac{\langle x^{2}\rangle}{2}\frac{1}{1-R^{2}}\right|}.$

Exponential connectivity profile:

Using an exponential connectivity profile given by

$p(x)=\frac{1}{2d}\,e^{-|x|/d},$

we find $\langle x^{2}\rangle=2d^{2}$ and

$\bar{d}=\sqrt{\left|\frac{1}{1-\lambda_{0}}\right|}\;d,\qquad d_{\mathrm{eff}}=\sqrt{\left|\frac{1}{1-R^{2}}\right|}\;d,$

with $\lambda_{0}=Kw$ for the mean, and $R^{2}=Kw^{2}$ for the variance.

Gaussian connectivity profile:

Analogously, using a Gaussian connectivity profile given by

$p(x)=\frac{1}{\sqrt{2\pi d^{2}}}\,e^{-x^{2}/(2d^{2})},$

we find $\langle x^{2}\rangle=d^{2}$, and get

$\bar{d}=\sqrt{\left|\frac{1}{2}\frac{1}{1-\lambda_{0}}\right|}\;d,\qquad d_{\mathrm{eff}}=\sqrt{\left|\frac{1}{2}\frac{1}{1-R^{2}}\right|}\;d.$ (23)
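Evaluated numerically, these one-dimensional results read (a sketch; the anatomical scale $d$ and the spectral bounds are illustrative):

```python
import numpy as np

def d_eff_1d(R, d, profile="gaussian"):
    """Effective decay constant of the variance of covariances,
    Equation (23) (Gaussian) and its exponential-profile analogue."""
    factor = {"gaussian": 0.5, "exponential": 1.0}[profile]
    return np.sqrt(np.abs(factor / (1 - R**2))) * d

for R in (0.5, 0.9, 0.99):
    print(R, d_eff_1d(R, d=0.2), d_eff_1d(R, d=0.2, profile="exponential"))
```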

10 One-dimensional network with two populations

Realistic neuronal networks consist of excitatory and inhibitory neurons, so we need to introduce a second population into our network. Typically, there are more excitatory than inhibitory neurons in the brain. Therefore, we introduce $q$ excitatory neurons for each inhibitory neuron. We place $q$ excitatory neurons and one inhibitory neuron together in one cell. The cells are distributed equally along the ring. For convenience, we define $N\equiv N_{I}$.

The structure of the connectivity matrix depends on the choice of the activity vector x. For later convenience we choose

$\mathbf{x}=\left(\mathbf{x}_{1}^{(E)},x_{1}^{(I)},\mathbf{x}_{2}^{(E)},x_{2}^{(I)},\dots,\mathbf{x}_{N}^{(E)},x_{N}^{(I)}\right)^{\mathrm{T}},$

where xi(E) is a q-dimensional vector denoting the activity of the q excitatory neurons in cell i. M is a (q+1)N×(q+1)N-matrix, which qualitatively has the structure

$M=\begin{pmatrix}EE_{11}&EI_{11}&EE_{12}&EI_{12}&\cdots&EE_{1N}&EI_{1N}\\IE_{11}&II_{11}&IE_{12}&II_{12}&\cdots&IE_{1N}&II_{1N}\\EE_{21}&EI_{21}&EE_{22}&EI_{22}&\cdots&EE_{2N}&EI_{2N}\\IE_{21}&II_{21}&IE_{22}&II_{22}&\cdots&IE_{2N}&II_{2N}\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\EE_{N1}&EI_{N1}&EE_{N2}&EI_{N2}&\cdots&EE_{NN}&EI_{NN}\\IE_{N1}&II_{N1}&IE_{N2}&II_{N2}&\cdots&IE_{NN}&II_{NN}\end{pmatrix}.$ (24)

Note that $EE_{ij}$ are $q\times q$ matrices, $EI_{ij}$ are $q\times 1$ matrices, $IE_{ij}$ are $1\times q$ matrices, and $II_{ij}$ are $1\times 1$ matrices. The entries $ab_{ij}$ describe the connectivities from population $b$ in cell $j$ to population $a$ in cell $i$. The entries are given by

$ab_{ij}=\begin{cases}\frac{1}{q}\,w_{ab}K_{ab}\,(p_{ab})_{ij} & \text{if } b=E\\ w_{ab}K_{ab}\,(p_{ab})_{ij} & \text{if } b=I.\end{cases}$

The difference stems from the fact that we have $q$ times as many excitatory neurons. As the total number of indegrees from excitatory neurons should be given by $K_{aE}$, we need to introduce a reducing factor of $1/q$, as the connection probability is normalized to one.

10.1 Dimensionality reduction

In the following, we will reduce the dimensionality of M as done before in the case with one population. First, we make use of the symmetry within the cells. All entries in M corresponding to connections coming from excitatory neurons of the same cell need to be the same. For that reason, we change the basis to

$\mathbf{e}_{i}^{(E)}=\frac{1}{\sqrt{q}}\left(0,\dots,0,\mathbb{I},0,\dots,0\right)^{\mathrm{T}},\qquad \mathbf{e}_{i}^{(I)}=\left(0,\dots,0,1,0,\dots,0\right)^{\mathrm{T}},$ (25)

where $\mathbb{I}$ denotes a $q$-dimensional vector containing only ones. For a full basis, we need to include all the vectors with $\mathbb{I}$ being replaced by a vector containing all possible permutations of equal numbers of $\pm 1$. In this basis $M$ is block diagonal,

$\begin{pmatrix}M'&0\\0&0\end{pmatrix},$

and $M'$ is a $2N\times 2N$ matrix, which has the same qualitative structure as shown in Equation (24), but the submatrices $ab_{ij}$ are replaced by

$ab_{ij}=\begin{cases}w_{EE}K_{EE}\,(p_{EE})_{ij} & \text{if } ab=EE\\ \sqrt{q}\,w_{EI}K_{EI}\,(p_{EI})_{ij} & \text{if } ab=EI\\ w_{IE}K_{IE}\,(p_{IE})_{ij}/\sqrt{q} & \text{if } ab=IE\\ w_{II}K_{II}\,(p_{II})_{ij} & \text{if } ab=II.\end{cases}$

Next, we use translational symmetry of the cells. The translation operator is defined by

$T\mathbf{x}=T\left(\mathbf{x}_{1}^{(E)},x_{1}^{(I)},\mathbf{x}_{2}^{(E)},x_{2}^{(I)},\dots,\mathbf{x}_{N}^{(E)},x_{N}^{(I)}\right)^{\mathrm{T}}=\left(\mathbf{x}_{N}^{(E)},x_{N}^{(I)},\mathbf{x}_{1}^{(E)},x_{1}^{(I)},\dots,\mathbf{x}_{N-1}^{(E)},x_{N-1}^{(I)}\right)^{\mathrm{T}}.$

As the system is invariant under moving each cell to the next lattice site, $M$ is invariant under the transformation

$TMT^{-1}=M.$

Again, the eigenvalues of $T$ can be determined using $T^{N}=\mathbb{1}$, and they are the same as in the case of one population. But note that here the eigenspaces corresponding to the single eigenvalues are two-dimensional. The eigenvectors

$\mathbf{v}_{k}^{(E)}=\frac{1}{\sqrt{N}}\left(1,0,e^{ika},0,\dots,e^{i(N-1)ka},0\right)^{\mathrm{T}},\qquad \mathbf{v}_{k}^{(I)}=\frac{1}{\sqrt{N}}\left(0,1,0,e^{ika},\dots,0,e^{i(N-1)ka}\right)^{\mathrm{T}}$

belong to the same eigenvalue. In this basis, $M$ is block diagonal, with each block consisting of a $2\times 2$ matrix, corresponding to one value of $k_{l}=\frac{2\pi l}{L}$, $l\in\{0,\dots,N-1\}$

$M=\begin{pmatrix}M_{k_{0}}&0&\cdots&0\\0&M_{k_{1}}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&M_{k_{N-1}}\end{pmatrix}.$

Since all block matrices can be treated equally, we further reduced the problem to diagonalizing a $2\times 2$ matrix. The submatrices take the form

$M_{k}=\begin{pmatrix}m_{EE}(k)&\sqrt{q}\,m_{EI}(k)\\m_{IE}(k)/\sqrt{q}&m_{II}(k)\end{pmatrix},$

with the discrete Fourier transform

$m_{ab}(k)=K_{ab}\,w_{ab}\sum_{x=-Na/2}^{Na/2}p_{ab}(x)\,e^{-ikx}.$ (26)

Note that x and k are still discrete here, but we could take the continuum limit at this point. The eigenvalues of Mk are given by

$m_{\pm}(k)=\frac{1}{2}\left(m_{EE}(k)+m_{II}(k)\right)\pm\frac{1}{2}\sqrt{m_{EE}(k)^{2}+m_{II}(k)^{2}-2m_{EE}(k)\,m_{II}(k)+4m_{EI}(k)\,m_{IE}(k)}.$ (27)

The corresponding eigenvectors are

$\mathbf{v}_{1,2}(k)=\mathcal{N}_{\pm}\begin{pmatrix}\sqrt{q}\,m_{EI}(k)\\m_{\pm}(k)-m_{EE}(k)\end{pmatrix},$ (28)

with normalization $\mathcal{N}_{\pm}$. The eigenvectors written in the Fourier basis are given by

$\mathbf{v}_{\pm}(k)=\mathcal{N}_{\pm}\left[\sqrt{q}\,m_{EI}(k)\,\mathbf{v}_{k}^{(E)}+\left(m_{\pm}(k)-m_{EE}(k)\right)\mathbf{v}_{k}^{(I)}\right],$ (29)

and we can get the eigenvectors $\tilde{\mathbf{v}}_{\pm}(k)$ in the basis we started with by extending $\mathbf{v}_{k}^{(E)}$ and $\mathbf{v}_{k}^{(I)}$ to vectors similar to Equation (25), where the elements corresponding to excitatory neurons are repeated $q$ times. Note that the normalization of the original basis leads to an additional factor $1/\sqrt{q}$ in the first term of Equation (29).

Analogously, we can find the left eigenvectors of $M$ by conducting the same steps with the transpose of $M$

$\mathbf{u}_{\pm}(k)=\mathcal{N}_{\pm}\left[m_{IE}(k)\,\mathbf{v}_{k}^{(E)}/\sqrt{q}+\left(m_{\pm}(k)-m_{EE}(k)\right)\mathbf{v}_{k}^{(I)}\right],$ (30)

and the vectors in the original basis $\tilde{\mathbf{u}}_{\pm}(k)$ are obtained similarly to the right eigenvectors. The normalization $\mathcal{N}_{\pm}$ is chosen such that

$\tilde{\mathbf{u}}_{+}(k)\,\tilde{\mathbf{v}}_{+}(k)=1,\qquad \tilde{\mathbf{u}}_{+}(k)\,\tilde{\mathbf{v}}_{-}(k)=0,\qquad \tilde{\mathbf{u}}_{-}(k)\,\tilde{\mathbf{v}}_{+}(k)=0,\qquad \tilde{\mathbf{u}}_{-}(k)\,\tilde{\mathbf{v}}_{-}(k)=1,$

which leads to

$\mathcal{N}_{\pm}=\left[m_{EI}(k)\,m_{IE}(k)+\left(m_{\pm}(k)-m_{EE}(k)\right)^{2}\right]^{-1/2}.$

Now, we can express A-1 in terms of the eigenvalues and eigenvectors of M

$A^{-1}=\mathbb{1}+\sum_{k}\left(\frac{m_{+}(k)}{1-m_{+}(k)}\,\tilde{\mathbf{v}}_{+}(k)\,\tilde{\mathbf{u}}_{+}^{\mathrm{T}}(k)+\frac{m_{-}(k)}{1-m_{-}(k)}\,\tilde{\mathbf{v}}_{-}(k)\,\tilde{\mathbf{u}}_{-}^{\mathrm{T}}(k)\right),$ (31)

which leads to

$A_{ij}^{-1}=\delta_{ij}+\frac{1}{N}\sum_{k}\mu_{ij}(k)\,e^{ik|x_{i}-x_{j}|},$ (32)

where we defined $\mu(k)$ similar to Equation (19). Let $\mathcal{E}$ and $\mathcal{I}$ be the sets of indices referring to excitatory or inhibitory neurons, respectively. We find

$\mu_{ij}(k)=\begin{cases}\mu_{EE}(k) & \text{for } i,j\in\mathcal{E}\\\mu_{EI}(k) & \text{for } i\in\mathcal{E},\,j\in\mathcal{I}\\\mu_{IE}(k) & \text{for } i\in\mathcal{I},\,j\in\mathcal{E}\\\mu_{II}(k) & \text{for } i,j\in\mathcal{I},\end{cases}$

with

$\mu_{EE}(k)=\frac{1}{q}\,\frac{m_{EE}(k)+m_{IE}(k)\,m_{EI}(k)-m_{EE}(k)\,m_{II}(k)}{1-\zeta(k)},\qquad \mu_{EI}(k)=\frac{m_{EI}(k)}{1-\zeta(k)},\qquad \mu_{IE}(k)=\frac{1}{q}\,\frac{m_{IE}(k)}{1-\zeta(k)},\qquad \mu_{II}(k)=\frac{m_{II}(k)+m_{IE}(k)\,m_{EI}(k)-m_{EE}(k)\,m_{II}(k)}{1-\zeta(k)},$ (33)

and

$\zeta(k)=m_{EE}(k)+m_{II}(k)+m_{EI}(k)\,m_{IE}(k)-m_{EE}(k)\,m_{II}(k).$

10.2 General results

The renormalized noise is evaluated using the same trick as in the one population case. We express the all-ones vector using eigenvectors of the variance matrix S

$\mathbb{I}=a\,\tilde{\mathbf{v}}_{+}(0)+b\,\tilde{\mathbf{v}}_{-}(0).$

Evaluating the coefficients $a$ and $b$ and inserting the corresponding solutions into Equation (13) yields

$D_{r}=\mathrm{diag}\big(\underbrace{D_{r}^{(E)},\dots,D_{r}^{(E)}}_{q\text{ times}},\,D_{r}^{(I)},\;\underbrace{D_{r}^{(E)},\dots,D_{r}^{(E)}}_{q\text{ times}},\,D_{r}^{(I)},\;\dots\big)\quad\text{with } N(q+1)\text{ entries},$ (34)

with

$D_{r}^{(E)}=D\left[1+\frac{s_{EE}(0)+s_{EI}(0)+s_{EI}(0)\,s_{IE}(0)-s_{EE}(0)\,s_{II}(0)}{1-R^{2}}\right],\qquad D_{r}^{(I)}=D\left[1+\frac{s_{IE}(0)+s_{II}(0)+s_{EI}(0)\,s_{IE}(0)-s_{EE}(0)\,s_{II}(0)}{1-R^{2}}\right],$

with the eigenvalues $s_{ab}(k)$ of $S$. We again identified the spectral bound

$R^{2}=s_{EE}(0)+s_{II}(0)+s_{EI}(0)\,s_{IE}(0)-s_{EE}(0)\,s_{II}(0).$ (35)

The mean covariances can be written as

$\bar{c}=D_{r}+\mu D_{r}+D_{r}\mu^{\mathrm{T}}+\mu D_{r}\mu^{\mathrm{T}},$

where $\mu=\mu(x)$. We can distinguish three different kinds of covariances, depending on the type of neurons involved,

$\bar{c}_{ij}=\begin{cases}\overline{c_{EE}}(x) & \text{for } i,j\in\mathcal{E}\\\overline{c_{EI}}(x) & \text{for } i\in\mathcal{E},\,j\in\mathcal{I}\text{ or } i\in\mathcal{I},\,j\in\mathcal{E}\\\overline{c_{II}}(x) & \text{for } i,j\in\mathcal{I},\end{cases}$

with

$\overline{c_{EE}}(x)=D_{r}^{(E)}\delta(x)+2D_{r}^{(E)}\mu_{EE}(x)+D_{r}^{(E)}q\,(\mu_{EE}\ast\mu_{EE})(x)+D_{r}^{(I)}(\mu_{EI}\ast\mu_{EI})(x),$
$\overline{c_{EI}}(x)=D_{r}^{(E)}\mu_{IE}(x)+D_{r}^{(I)}\mu_{EI}(x)+D_{r}^{(E)}q\,(\mu_{EE}\ast\mu_{IE})(x)+D_{r}^{(I)}(\mu_{II}\ast\mu_{EI})(x),$
$\overline{c_{II}}(x)=D_{r}^{(I)}\delta(x)+2D_{r}^{(I)}\mu_{II}(x)+D_{r}^{(E)}q\,(\mu_{IE}\ast\mu_{IE})(x)+D_{r}^{(I)}(\mu_{II}\ast\mu_{II})(x).$

10.3 Long-range limit

From here on, we consider the special case in which the synaptic connections only depend on the type of the presynaptic neuron and not on the type of the postsynaptic neuron. This is in agreement with network parameters used in established cortical network models (Potjans and Diesmann, 2014; Senk et al., 2018), in which the connection probabilities to both types of target neurons in the same layer are usually of the same order of magnitude. In that case, all expressions become independent of the first population index, $m_{ab}(k)\to m_{b}(k)$, and the only expressions we need to evaluate become

$\mu_{a}(k)=\gamma_{a}\,\frac{m_{a}(k)}{1-\zeta(k)},$

with

$\zeta(k)=m_{E}(k)+m_{I}(k),$

and

$\gamma_{a}=\begin{cases}1 & \text{if } a=I\\1/q & \text{if } a=E.\end{cases}$ (36)

After taking the continuum limit, we can make a (0,2)-Padé approximation again

$\mu_{a}(k)\approx\gamma_{a}\,\frac{m_{a}(0)}{1-\zeta(0)-\left[\frac{\zeta''(0)}{2}+\left(1-\zeta(0)\right)\frac{m_{a}''(0)}{2m_{a}(0)}\right]k^{2}},$

which leads to the poles

$k_{0}=\pm\left[\frac{\zeta''(0)}{2\zeta(0)}\,\frac{\zeta(0)}{1-\zeta(0)}+\frac{m_{a}''(0)}{2m_{a}(0)}\right]^{-1/2},$

or the effective decay constant of the mean covariances

$\bar{d}_{a}=\mathrm{Im}(k_{0})^{-1}=\sqrt{-\frac{\zeta''(0)}{2\zeta(0)}\,\frac{\zeta(0)}{1-\zeta(0)}-\frac{m_{a}''(0)}{2m_{a}(0)}}.$

Using

$\zeta(0)=w_{E}K_{E}+w_{I}K_{I},\qquad \zeta''(0)=-w_{E}K_{E}\langle x^{2}\rangle_{E}-w_{I}K_{I}\langle x^{2}\rangle_{I},\qquad m_{a}(0)=w_{a}K_{a},\qquad m_{a}''(0)=-w_{a}K_{a}\langle x^{2}\rangle_{a},$

we get

$\bar{d}_{a}=\sqrt{\frac{w_{E}K_{E}\langle x^{2}\rangle_{E}+w_{I}K_{I}\langle x^{2}\rangle_{I}}{w_{E}K_{E}+w_{I}K_{I}}\,\frac{\zeta}{1-\zeta}\,\frac{1}{2}+\frac{\langle x^{2}\rangle_{a}}{2}}=\sqrt{\frac{\omega\kappa\tilde{\eta}^{2}+1}{\omega\kappa+1}\,\frac{\zeta}{1-\zeta}\,\frac{\langle x^{2}\rangle_{I}}{2}+\frac{\langle x^{2}\rangle_{a}}{2}},$

after introducing relative parameters

$\omega=\frac{w_{E}}{w_{I}},\qquad \kappa=\frac{K_{E}}{K_{I}},\qquad \tilde{\eta}^{2}=\frac{\langle x^{2}\rangle_{E}}{\langle x^{2}\rangle_{I}},\qquad \eta=\frac{d_{E}}{d_{I}}.$

The renormalized noise in Equation (13) reduces to

$D_{r}=\frac{D}{1-R^{2}}.$ (37)

The mean covariances are

$\overline{c_{EE}}(x)=D_{r}\left[\delta(x)+2\mu_{E}(x)+q\,(\mu_{E}\ast\mu_{E})(x)+(\mu_{I}\ast\mu_{I})(x)\right],$
$\overline{c_{EI}}(x)=D_{r}\left[\mu_{E}(x)+\mu_{I}(x)+q\,(\mu_{E}\ast\mu_{E})(x)+(\mu_{I}\ast\mu_{I})(x)\right],$
$\overline{c_{II}}(x)=D_{r}\left[\delta(x)+2\mu_{I}(x)+q\,(\mu_{E}\ast\mu_{E})(x)+(\mu_{I}\ast\mu_{I})(x)\right],$

with

$\mu_{a}(x)=\gamma_{a}\,\frac{m_{a}(0)}{2\,(1-\zeta)\,\bar{d}_{a}}\,\exp\left(-\frac{|x|}{\bar{d}_{a}}\right),$

and

$(\mu_{a}\ast\mu_{a})(x)=\left(\gamma_{a}\,\frac{m_{a}(0)}{2\,(1-\zeta)\,\bar{d}_{a}}\right)^{2}(\bar{d}_{a}+|x|)\,\exp\left(-\frac{|x|}{\bar{d}_{a}}\right).$

Note that expressions coming from both populations contribute to each kind of covariance. Therefore, all mean covariances contain a part that decays with either of the decay constants we just determined. If, for example, the inhibitory decay constant is much larger than the excitatory one, $\overline{c_{EI}}(x)$ will decay with the largest decay constant in the long-range limit.

Exponential connectivity profile:

Just as in Section 9.4 we get

$\bar{d}_{a}=\sqrt{\frac{\omega\kappa\eta^{2}+1}{\omega\kappa+1}\,\frac{\lambda_{0}}{1-\lambda_{0}}\,d_{I}^{2}+d_{a}^{2}},\qquad d_{\mathrm{eff},a}=\sqrt{\frac{\omega^{2}\kappa\eta^{2}+1}{\omega^{2}\kappa+1}\,\frac{R^{2}}{1-R^{2}}\,d_{I}^{2}+d_{a}^{2}},$

with $\lambda_{0}=w_{E}K_{E}+w_{I}K_{I}$ for the decay constant of the mean covariances, and $R^{2}=w_{E}^{2}K_{E}+w_{I}^{2}K_{I}$ for the decay constant of the variances.

Gaussian connectivity profile:

And similar to Section 9.4 we get

$\bar{d}_{a}=\sqrt{\frac{\omega\kappa\eta^{2}+1}{\omega\kappa+1}\,\frac{\lambda_{0}}{1-\lambda_{0}}\,\frac{d_{I}^{2}}{2}+\frac{d_{a}^{2}}{2}},\qquad d_{\mathrm{eff},a}=\sqrt{\frac{\omega^{2}\kappa\eta^{2}+1}{\omega^{2}\kappa+1}\,\frac{R^{2}}{1-R^{2}}\,\frac{d_{I}^{2}}{2}+\frac{d_{a}^{2}}{2}}.$

11 Two-dimensional network with one population

In the following, we are considering two-dimensional networks, which are supposed to mimic a single-layered cortical network. Neurons are positioned on a two-dimensional lattice ($N_{x}\times N_{y}$ grid) with periodic boundary conditions in both dimensions (a torus). We define the activity vector to be of the form

$\mathbf{x}=\left(x_{1,1},x_{1,2},\dots,x_{1,N_{y}},x_{2,1},\dots,x_{2,N_{y}},\dots,x_{N_{x},1},\dots,x_{N_{x},N_{y}}\right)^{\mathrm{T}}.$

The connectivity matrix is defined correspondingly.

11.1 Dimensionality reduction

In two dimensions we have to define two translation operators that move all neurons either one step in the x-direction, or the y-direction, respectively. They are defined via their action on x

$T_{x}\mathbf{x}=\left(x_{N_{x},1},x_{N_{x},2},\dots,x_{N_{x},N_{y}},x_{1,1},\dots,x_{1,N_{y}},\dots,x_{N_{x}-1,1},\dots,x_{N_{x}-1,N_{y}}\right)^{\mathrm{T}},\qquad T_{y}\mathbf{x}=\left(x_{1,N_{y}},x_{1,1},\dots,x_{1,N_{y}-1},x_{2,N_{y}},\dots,x_{2,N_{y}-1},\dots,x_{N_{x},N_{y}},\dots,x_{N_{x},N_{y}-1}\right)^{\mathrm{T}}.$ (38)

Similar reasoning as in one dimension leads to the eigenvalues

$e^{-ik_{l}^{(x)}a},\qquad k_{l}^{(x)}=\frac{2\pi}{L_{x}}\,l,\qquad l\in\{0,1,\dots,N_{x}-1\},$

and similar for the y-direction. The eigenvectors can be inferred from the recursion relations

$T_{x}\mathbf{v}=e^{-ik_{l}^{(x)}a}\,\mathbf{v}\;\Leftrightarrow\;v_{(\alpha+1)\beta}=e^{ik_{l}^{(x)}a}\,v_{\alpha\beta},\qquad T_{y}\mathbf{v}=e^{-ik_{l}^{(y)}a}\,\mathbf{v}\;\Leftrightarrow\;v_{\alpha(\beta+1)}=e^{ik_{l}^{(y)}a}\,v_{\alpha\beta},$

where the entries $v_{\alpha\beta}$ of the vector $\mathbf{v}$ are defined analogously to Equation (38). The eigenvectors are given by

$\mathbf{v}_{\mathbf{k}}=\frac{1}{\sqrt{N_{x}N_{y}}}\begin{pmatrix}\mathbf{v}^{(x)}\\ e^{ik^{(y)}a}\,\mathbf{v}^{(x)}\\ \vdots\\ e^{-ik^{(y)}a}\,\mathbf{v}^{(x)}\end{pmatrix},\qquad \mathbf{v}^{(x)}=\begin{pmatrix}1\\ e^{ik^{(x)}a}\\ \vdots\\ e^{i\frac{N_{x}-1}{2}k^{(x)}a}\\ e^{-i\frac{N_{x}-1}{2}k^{(x)}a}\\ \vdots\\ e^{-ik^{(x)}a}\end{pmatrix},$

where we suppressed the subscripts of $k^{(x)}$ and $k^{(y)}$ again. Using that these eigenvectors are eigenvectors of $M$ as well yields the eigenvalues of $M$

$m_{\mathbf{k}}=\mathbf{v}_{\mathbf{k}}^{\dagger}M\mathbf{v}_{\mathbf{k}}=Kw\sum_{x}\sum_{y}p(|\mathbf{x}|)\,e^{-i\mathbf{k}\cdot\mathbf{x}}.$

In the continuum limit, this becomes the two-dimensional Fourier transform

$m(\mathbf{k})=Kw\int d^{2}x\,p(\mathbf{x})\,e^{-i\mathbf{k}\cdot\mathbf{x}}.$ (39)

The inverse of A is given by

$A^{-1}(\mathbf{x})=\delta(\mathbf{x})+\mu(\mathbf{x}),$ (40)

with the inverse two-dimensional Fourier transform

$\mu(\mathbf{x})=\frac{1}{(2\pi)^{2}}\int d^{2}k\,\frac{m(\mathbf{k})}{1-m(\mathbf{k})}\,e^{i\mathbf{k}\cdot\mathbf{x}}.$ (41)

The expression for the renormalized noise is the same as in the one-dimensional case with one population. Hence, the mean covariances are given by

c¯(x)=D1-R2[δ(x)+2μ(x)+(μμ)(x)], (42)

which is the one-dimensional expression, except that the convolution is replaced by its two-dimensional analogon, denoted here by the double asterisk.

11.2 Long-range limit

Employing the symmetry of the connectivity kernel, we rewrite the integral in μ(x) using polar coordinates

$$\mu(\mathbf{x})=\frac{1}{(2\pi)^2}\int_0^{\infty}\mathrm{d}k\int_0^{2\pi}\mathrm{d}\varphi\;k\,\frac{m(k)}{1-m(k)}\,e^{ikr\cos(\varphi)},\tag{43}$$

with $r=|\mathbf{x}|$, and make a Padé approximation of order (0,2) of the integration kernel

$$\mu(\mathbf{x})=\frac{1}{(2\pi)^2}\int_0^{\infty}\mathrm{d}k\int_0^{2\pi}\mathrm{d}\varphi\;k\,\frac{m(0)}{1-m(0)-\frac{m''(0)}{2m(0)}k^2}\,e^{ikr\cos(\varphi)}.\tag{44}$$

Following (Goldenfeld, 1992, p.160f), we can interpret this as calculating the Green’s function of the heat equation

$$\left[1-m(0)+\frac{m''(0)}{2m(0)}\nabla^2\right]\mu(\mathbf{x})=m(0)\,\delta(\mathbf{x}),\tag{45}$$

which can be solved using the fact that, due to the symmetry of the kernel, $\mu(\mathbf{x})$ can only be a function of the radial distance r. Rewriting leads to

$$\left[-\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}\left(r\frac{\mathrm{d}}{\mathrm{d}r}\right)+\bar{d}^{\,-2}\right]\mu(r)=\Gamma\,\delta(r),$$

with the effective decay constant

$$\bar{d}=\sqrt{-\frac{m''(0)}{2m(0)}\,\frac{1}{1-m(0)}},\tag{46}$$

and $\Gamma=-2m(0)^2/m''(0)$. Defining $\rho\equiv r/\bar{d}$, $\tilde{\mu}(\rho)\equiv\mu(\rho\bar{d})$, and using $\delta(\rho\bar{d})=\bar{d}^{\,-2}\delta(\rho)$, we get

$$\left[-\frac{1}{\rho}\frac{\mathrm{d}}{\mathrm{d}\rho}\left(\rho\frac{\mathrm{d}}{\mathrm{d}\rho}\right)+1\right]\tilde{\mu}(\rho)=\Gamma\,\delta(\rho).$$

The solution to this equation is given by the modified Bessel function of the second kind and zeroth order, $K_0$:

$$\tilde{\mu}(\rho)=\frac{\Gamma}{2\pi}\,K_0(\rho).$$

Reinserting the defined variables yields

$$\mu(r)=-\frac{m(0)^2}{\pi\,m''(0)}\,K_0\!\left(\frac{r}{\bar{d}}\right).\tag{47}$$

Note that the modified Bessel functions of the second kind decay exponentially for long distances:

$$K_i\!\left(\frac{r}{\bar{d}}\right)\xrightarrow{r\to\infty}\sqrt{\frac{\pi\bar{d}}{2r}}\,e^{-r/\bar{d}}.\tag{48}$$

Note, however, that the exponential is multiplied by the inverse square root of the distance. Formally, this is the one-dimensional result; the only differences are that $m(k)$ is a two-dimensional instead of a one-dimensional Fourier transform, and that $\mu(r)$ contains modified Bessel functions of the second kind instead of exponentials.
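That $K_0$ indeed solves the radial equation away from the point source at $\rho=0$ can be checked with finite differences; the following is a minimal sketch (our own illustration; the grid is arbitrary and scipy is assumed to be available):

```python
import numpy as np
from scipy.special import k0

rho = np.linspace(1.0, 10.0, 4000)
h = rho[1] - rho[0]
mu = k0(rho)   # modified Bessel function of the second kind, order zero

# radial Laplacian (1/rho) d/drho (rho d/drho) via central differences
lap = np.gradient(rho * np.gradient(mu, h), h) / rho

# [-Laplacian + 1] mu vanishes away from the point source at rho = 0
residual = -lap + mu
assert np.max(np.abs(residual[5:-5])) < 1e-3
```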

In order to evaluate the expression for the mean covariances, Equation (42), one needs to calculate the two-dimensional convolution of a modified Bessel function of the second kind with itself, for which we use the following trick

$$\begin{aligned}
(K_0**K_0)\!\left(\tfrac{r}{\bar{d}}\right)&=\mathcal{F}^{-1}\big[\tilde{K}_0\tilde{K}_0\big]\!\left(\tfrac{r}{\bar{d}}\right)=\frac{1}{2\pi}\,\mathcal{H}^{-1}\!\left[\frac{1}{(\beta+k^2)^2}\right]\!\big(\sqrt{\beta}\,r\big)\\
&=-\frac{1}{2\pi}\frac{\mathrm{d}}{\mathrm{d}\beta}\,\mathcal{H}^{-1}\!\left[\frac{1}{\beta+k^2}\right]\!\big(\sqrt{\beta}\,r\big)=-\frac{1}{2\pi}\frac{\mathrm{d}}{\mathrm{d}\beta}\,K_0\big(\sqrt{\beta}\,r\big)=\frac{\bar{d}\,r}{4\pi}\,K_1\!\left(\frac{r}{\bar{d}}\right),
\end{aligned}$$

where $\mathcal{F}$ denotes the (two-dimensional) Fourier transform, $\mathcal{H}$ denotes the Hankel transform, and $\beta=\bar{d}^{\,-2}$. The last step can be found in Abramowitz and Stegun, 1964, 9.6.27.
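The final derivative step can be cross-checked numerically with a central finite difference in $\beta$; a minimal sketch (our illustration; scipy assumed):

```python
import numpy as np
from scipy.special import k0, k1

dbar = 2.0
beta = dbar ** -2
r = np.linspace(0.5, 10.0, 50)

# -(1/2pi) d/dbeta K0(sqrt(beta) r), evaluated by a central difference
h = 1e-6
lhs = -(k0(np.sqrt(beta + h) * r) - k0(np.sqrt(beta - h) * r)) / (2 * h) / (2 * np.pi)
rhs = dbar * r / (4 * np.pi) * k1(r / dbar)

assert np.allclose(lhs, rhs, rtol=1e-4)
```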

The mean covariances are given by

$$\bar{c}(r)=\frac{D}{1-R^2}\left[\delta(r)-\frac{2m(0)^2}{\pi m''(0)}\,K_0\!\left(\frac{r}{\bar{d}}\right)+\frac{m(0)^4\,\bar{d}\,r}{4\pi^3\,m''(0)^2}\,K_1\!\left(\frac{r}{\bar{d}}\right)\right]\xrightarrow{r\to\infty}\frac{D}{1-R^2}\left[\delta(r)-\frac{m(0)^2}{m''(0)}\sqrt{\frac{2\bar{d}}{\pi r}}\,e^{-r/\bar{d}}+\frac{m(0)^4}{m''(0)^2}\sqrt{\frac{\bar{d}^3 r}{32\pi^5}}\,e^{-r/\bar{d}}\right].$$

Using

$$m(0)=Kw\equiv\zeta,\qquad m''(0)=-Kw\,\langle r^2\rangle,\tag{49}$$

we get the effective decay constant

$$\bar{d}=\sqrt{\frac{1}{1-\zeta}\,\frac{\langle r^2\rangle}{2}}.\tag{50}$$

Exponential connectivity profile:

Using a two-dimensional exponential connectivity profile

$$p(\mathbf{x})=\frac{1}{2\pi d^2}\,e^{-|\mathbf{x}|/d},$$

leads to $\langle r^2\rangle=6d^2$, and we get

$$\bar{d}=\sqrt{\frac{3}{1-\lambda_0}}\,d,\qquad d_{\mathrm{eff}}=\sqrt{\frac{3}{1-R^2}}\,d,$$

with $\lambda_0=Kw$ and $R^2=Kw^2$.

Gaussian connectivity profile:

Using a two-dimensional Gaussian connectivity profile

$$p(\mathbf{x})=\frac{1}{2\pi d^2}\,e^{-\mathbf{x}^2/(2d^2)},$$

leads to $\langle r^2\rangle=2d^2$, and we get

$$\bar{d}=\sqrt{\frac{1}{1-\lambda_0}}\,d,\qquad d_{\mathrm{eff}}=\sqrt{\frac{1}{1-R^2}}\,d.$$

11.3 Note on higher order approximation

While the (0,2)-Padé approximation yields good results in the one-dimensional cases, in two dimensions the results only coincide for large spectral radii (Appendix 1—figure 7). One can extract a higher-order approximation of the poles of the integration kernel of $\mu(\mathbf{x})$, and thereby of the effective decay constant $d_{\mathrm{eff}}$, using the DLog-Padé method, for which one calculates an (n,n+1)-Padé approximation of the logarithmic derivative of the integration kernel around zero (Pelizzola, 1994). Using a (1,2)-Padé approximation leads to

$$\bar{d}=\sqrt{-\frac{3\left(2m(0)-1\right)m''(0)^2+\left(1-m(0)\right)m(0)\,m''''(0)}{6\,m''(0)\,m(0)\left(1-m(0)\right)}},$$

which coincides with our previous results in the limit $m(0)\to 1$, and thus for large spectral radii. Note that this expression contains the fourth moment of the connectivity kernel, $m''''(0)=Kw\langle x^4\rangle$.

12 Two-dimensional network with two populations

Finally, we consider a two-dimensional network with two populations of neurons. As in the one-dimensional case, the neurons are gathered in cells, which contain one inhibitory and q excitatory neurons. Again, they are placed on a two-dimensional lattice with periodic boundary conditions. The activity vector takes the form

$$x=\left(x_{1,1}^{(E)},\,x_{1,1}^{(I)},\,x_{1,2}^{(E)},\,x_{1,2}^{(I)},\,\ldots,\,x_{1,N_y}^{(E)},\,x_{1,N_y}^{(I)},\,x_{2,1}^{(E)},\,x_{2,1}^{(I)},\,\ldots,\,x_{N_x,N_y}^{(E)},\,x_{N_x,N_y}^{(I)}\right)^{\mathrm{T}},\tag{51}$$

where $x_{i,j}^{(E)}$ denotes a q-dimensional vector.

12.1 Dimensionality reduction

We apply the procedure developed so far, which leads to the results we found in the one-dimensional case with two populations, with Fourier transforms and convolutions replaced by their two-dimensional analogons, and with modified Bessel functions of the second kind instead of exponentials. So, we end up with

$$\begin{aligned}
\overline{c_{EE}}(\mathbf{x})&=D_r^{(E)}\delta(\mathbf{x})+2D_r^{(E)}\mu_{EE}(\mathbf{x})+D_r^{(E)}q\,(\mu_{EE}**\mu_{EE})(\mathbf{x})+D_r^{(I)}(\mu_{EI}**\mu_{EI})(\mathbf{x}),\\
\overline{c_{EI}}(\mathbf{x})&=D_r^{(E)}\mu_{IE}(\mathbf{x})+D_r^{(I)}\mu_{EI}(\mathbf{x})+D_r^{(E)}q\,(\mu_{EE}**\mu_{IE})(\mathbf{x})+D_r^{(I)}(\mu_{II}**\mu_{EI})(\mathbf{x}),\\
\overline{c_{II}}(\mathbf{x})&=D_r^{(I)}\delta(\mathbf{x})+2D_r^{(I)}\mu_{II}(\mathbf{x})+D_r^{(E)}q\,(\mu_{IE}**\mu_{IE})(\mathbf{x})+D_r^{(I)}(\mu_{II}**\mu_{II})(\mathbf{x}),
\end{aligned}$$

with $\mu_{ab}(\mathbf{x})$ given by Equation (33) and the two-dimensional Fourier transform

$$m_{ab}(\mathbf{k})=K_{ab} w_{ab}\int\mathrm{d}^2x\;p_{ab}(\mathbf{x})\,e^{-i\mathbf{k}\cdot\mathbf{x}}.$$

The renormalized noise is given by Equation (34) with the spectral bound from Equation (35), with the eigenvalues $s_{ab}(k)$ replaced by the two-dimensional Fourier transforms $s_{ab}(\mathbf{k})$.

12.2 Long-range limit

Again, considering the special case in which the synaptic connections only depend on the type of the presynaptic neuron and not on the type of the postsynaptic neuron, the expressions simplify to

$$\mu_a(\mathbf{k})=\frac{\gamma_a\,m_a(\mathbf{k})}{1-\zeta(\mathbf{k})},\tag{52}$$

with

$$\zeta(\mathbf{k})=m_E(\mathbf{k})+m_I(\mathbf{k}).$$

Padé approximation of the Fourier kernel, integration using (Goldenfeld, 1992, p.160f) and suppressing the zero arguments of ζ and ma leads to

$$\mu_a(r)=\frac{\gamma_a w_a K_a}{2\pi(1-\zeta)\bar{d}_a^2}\,K_0\!\left(\frac{r}{\bar{d}_a}\right)\xrightarrow{r\to\infty}\frac{\gamma_a w_a K_a}{1-\zeta}\,\sqrt{\frac{1}{8\pi r\bar{d}_a^3}}\,e^{-r/\bar{d}_a},\tag{53}$$

with

$$\bar{d}_a=\sqrt{-\frac{\zeta''}{2\zeta}\,\frac{\zeta}{1-\zeta}-\frac{m_a''}{2m_a}}.$$

After introducing the same relative parameters as in Section 10.3, we find

$$\bar{d}_a=\sqrt{\frac{\omega\kappa\tilde{\eta}^2+1}{\omega\kappa+1}\,\frac{\zeta}{1-\zeta}\,\frac{\langle x^2\rangle_I}{2}+\frac{\langle x^2\rangle_a}{2}}.\tag{54}$$

The two-dimensional convolutions are given by

$$(\mu_a**\mu_a)(r)=\left[\frac{\gamma_a w_a K_a}{4(1-\zeta)}\right]^{2}\frac{r}{\pi^3\bar{d}_a^3}\,K_1\!\left(\frac{r}{\bar{d}_a}\right)\xrightarrow{r\to\infty}\left[\frac{\gamma_a w_a K_a}{4(1-\zeta)}\right]^{2}\sqrt{\frac{r}{2\pi^5\bar{d}_a^5}}\,e^{-r/\bar{d}_a}.\tag{55}$$

The renormalized noise simplifies to Equation (37). The mean covariances are given by

$$\begin{aligned}
\overline{c_{EE}}(\mathbf{x})&=D_r\left[\delta(\mathbf{x})+2\mu_E(\mathbf{x})+q\,(\mu_E**\mu_E)(\mathbf{x})+(\mu_I**\mu_I)(\mathbf{x})\right],\\
\overline{c_{EI}}(\mathbf{x})&=D_r\left[\mu_E(\mathbf{x})+\mu_I(\mathbf{x})+q\,(\mu_E**\mu_E)(\mathbf{x})+(\mu_I**\mu_I)(\mathbf{x})\right],\\
\overline{c_{II}}(\mathbf{x})&=D_r\left[\delta(\mathbf{x})+2\mu_I(\mathbf{x})+q\,(\mu_E**\mu_E)(\mathbf{x})+(\mu_I**\mu_I)(\mathbf{x})\right].
\end{aligned}\tag{56}$$

Remember that the result for the variances of the covariances is obtained by substituting $D_r$ by its square, $w_a$ (and hence $\omega$) by its square, and setting $\zeta=R^2$.

Equation (2) in the main text follows by inserting Equation (54) into the difference of the squared decay constants,

$$\bar{d}_E^2-\bar{d}_I^2=\frac{\langle x^2\rangle_E-\langle x^2\rangle_I}{2}=\mathrm{const.}\left(d_E^2-d_I^2\right).$$

An exponential connectivity profile yields const. = 3; a Gaussian connectivity profile yields const. = 1.

Exponential connectivity profile:

Using the results from Section 11.2, we find

$$\bar{d}_a=\sqrt{3\left[\frac{\omega\kappa\eta^2+1}{\omega\kappa+1}\,\frac{\lambda_0}{1-\lambda_0}\,d_I^2+d_a^2\right]},\qquad d_{\mathrm{eff},a}=\sqrt{3\left[\frac{\omega^2\kappa\eta^2+1}{\omega^2\kappa+1}\,\frac{R^2}{1-R^2}\,d_I^2+d_a^2\right]},$$

with $\lambda_0=w_E K_E+w_I K_I$ and $R^2=w_E^2 K_E+w_I^2 K_I$.

Gaussian connectivity profile:

Using the results from Section 11.2, we find

$$\bar{d}_a=\sqrt{\frac{\omega\kappa\eta^2+1}{\omega\kappa+1}\,\frac{\lambda_0}{1-\lambda_0}\,d_I^2+d_a^2},\qquad d_{\mathrm{eff},a}=\sqrt{\frac{\omega^2\kappa\eta^2+1}{\omega^2\kappa+1}\,\frac{R^2}{1-R^2}\,d_I^2+d_a^2}.$$
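As a sanity check on Equation (2) of the main text, the following minimal Python sketch (our own illustration; all parameter values are hypothetical) evaluates the exponential- and Gaussian-profile expressions and verifies $\bar d_E^2 - \bar d_I^2 = \mathrm{const.}\,(d_E^2 - d_I^2)$ with const. = 3 and 1, respectively:

```python
import numpy as np

def dbar_a_2d(omega, kappa, eta2, lam0, d_I, d_a, profile):
    # decay constant of the mean covariances in 2D, cf. the expressions above
    pref = {"exponential": 3.0, "gaussian": 1.0}[profile]
    return np.sqrt(pref * ((omega * kappa * eta2 + 1) / (omega * kappa + 1)
                           * lam0 / (1 - lam0) * d_I**2 + d_a**2))

omega, kappa = -0.25, 2.0    # hypothetical w_E/w_I and K_E/K_I
d_E, d_I = 20.0, 10.0        # decay constants as in Appendix 1 - table 3
eta2 = (d_E / d_I) ** 2
lam0 = -0.5                  # mean eigenvalue, assumed < 1

for profile, const in (("exponential", 3.0), ("gaussian", 1.0)):
    dbar_E = dbar_a_2d(omega, kappa, eta2, lam0, d_I, d_E, profile)
    dbar_I = dbar_a_2d(omega, kappa, eta2, lam0, d_I, d_I, profile)
    assert np.isclose(dbar_E**2 - dbar_I**2, const * (d_E**2 - d_I**2))
```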

12.3 Higher order approximation

Using a (1,2)-DLog-Padé method as in Section 11.3 yields

$$\bar{d}_a=\sqrt{-\frac{(1-\zeta)^2\left(m_a m_a''''-3 m_a''^2\right)+m_a^2\left[(1-\zeta)\zeta''''+3\zeta''^2\right]}{6\,m_a(1-\zeta)\left[(1-\zeta)m_a''+m_a\zeta''\right]}},\tag{57}$$

which again contains the fourth moments of the connectivity kernels.

13 Validation of theory

Appendix 1—figure 7. Comparison of simulation and theory.

(A) Variance of EE, EI, and II covariances as a function of distance. Darker dots show the results of the simulation; lighter ones the prediction of the discrete theory. (B) Variance of EE covariances as a function of distance (Equation (56) for variances). The lightest blue dots are the predictions of the discrete theory ($\mu_a$ replaced by the discrete Fourier transform of Equation (52), taking into account Section 7), the medium blue line is the (0,2)-Padé prediction ($\mu_a$ replaced by its Padé approximation, Equation (53), taking into account Section 7), and the dark blue line is the higher-order (1,2)-DLog-Padé prediction ($\mu_a$ replaced by its Padé approximation, Equation (53), using Equation (57), and taking into account Section 7). (C) Fitted slope of the linear regions in panel B for different spectral bounds R (light blue: discrete theory; medium blue: Padé approximation; dark blue: higher-order Padé approximation).

In order to validate our results, we performed simulations in which an effective connectivity matrix W of a two-dimensional network was drawn randomly, and covariances were calculated using the result from Pernice et al., 2011; Trousdale et al., 2012; and Lindner et al., 2005:

$$c(W)=(\mathbb{1}-W)^{-1}\,D\,(\mathbb{1}-W)^{-\mathrm{T}}.$$

The elements of the different components $W_{ab}$ of the effective connectivity matrix were, similar to Equation (24), drawn from a binomial distribution with $K_b$ trials and a success probability of $\gamma_b\,p_b(|\mathbf{x}|)$, with $\gamma_b$ given by Equation (36) and $|\mathbf{x}|$ denoting the distance between the neurons.

We compared the results to the predictions of our discrete theory, the continuum theory, and the long-range limit. We did this for all cases presented above: one dimension with one population, one dimension with two populations, two dimensions with one population, and two dimensions with two populations. In the cases with two populations, we solely considered the special case of synaptic connections depending only on the type of the presynaptic neuron. The first three cases are not reported here. We simulated several sets of parameters, varying the number of neurons, the number of inputs, the decay constants, and the spectral bound; we only report the set using the parameters listed in Appendix 1—table 3, because the results do not differ qualitatively. Using

$$R^2=s(0)=K_E w_E^2+K_I w_I^2,$$

and choosing

$$\frac{w_I}{w_E}=-\frac{N_E}{N_I}=-q,$$

we calculated the synaptic weights

$$w_E=\frac{R}{\sqrt{K_E+q^2 K_I}},\qquad w_I=-\frac{qR}{\sqrt{K_E+q^2 K_I}}.$$
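A strongly reduced version of this validation can be sketched in a few lines. The following Python illustration is our own reconstruction, not the published simulation code: it uses a small one-dimensional ring instead of the full two-dimensional torus to keep the matrix inversion cheap, all parameter values are placeholders, and the factor $\gamma_b$ of Equation (36) is absorbed by normalizing the connection probabilities over all presynaptic partners of each type:

```python
import numpy as np

rng = np.random.default_rng(1)

# one-dimensional ring of cells, each holding q excitatory and 1 inhibitory neuron
N_cells, q = 120, 4
K_E, K_I = 20, 10            # mean numbers of inputs per neuron
d_E, d_I = 8.0, 4.0          # decay constants (units of the lattice constant)
R, D = 0.8, 1.0              # spectral bound and squared noise amplitude

# synaptic weights calibrated to the spectral bound, as above
w_E = R / np.sqrt(K_E + q**2 * K_I)
w_I = -q * w_E

pos = np.repeat(np.arange(N_cells, dtype=float), q + 1)   # neuron positions
typ = np.tile(np.array(["E"] * q + ["I"]), N_cells)       # neuron types
N = len(pos)

diff = np.abs(pos[:, None] - pos[None, :])
dist = np.minimum(diff, N_cells - diff)                   # periodic distance

# draw each element from a binomial distribution with K_b trials;
# per-row normalization of the profile plays the role of gamma_b
W = np.zeros((N, N))
for b, K_b, w_b, d_b in (("E", K_E, w_E, d_E), ("I", K_I, w_I, d_I)):
    cols = typ == b
    prob = np.exp(-dist[:, cols] / d_b)
    prob /= prob.sum(axis=1, keepdims=True)
    W[:, cols] = w_b * rng.binomial(K_b, prob)

# covariance matrix c(W) = (1 - W)^{-1} D (1 - W)^{-T}
B = np.linalg.inv(np.eye(N) - W)
c = D * B @ B.T

# variance of EE covariances as a function of distance (cf. App 1 - fig 7)
E = typ == "E"
cEE, dEE = c[np.ix_(E, E)], dist[np.ix_(E, E)]
off_diag = ~np.eye(E.sum(), dtype=bool)
for d0 in (5, 15, 25, 35):
    sel = off_diag & (dEE == d0)
    print(d0, cEE[sel].var())
```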

The comparison of simulation and discrete theory is shown in Appendix 1—figure 7A. Simulation and theory match almost perfectly. The continuum theory, which is shown in Figure 3D,E of the main text, matches as well as the discrete theory (not shown here). The slight shift in y-direction in Appendix 1—figure 7A is due to the fact that in a random realization of the network the spectral bound does not exactly match the desired value, but is slightly different for each realization and distributed around the chosen value. This jittering around the targeted spectral bound is more pronounced as R→1. Note that the simulated networks were small compared to the decay constant of the connectivity profile, in order to keep simulation times reasonable. This is why the variances do not fall off linearly in the semi-log plot. The kink and the related spreading starting around x/d=1.5 are finite-size effects due to the periodic boundary conditions: the maximal distance between two neurons along the axes, in units of spatial decay constants, is (N/2)/d≈1.5. Because of the periodic boundary conditions, the covariance between two neurons increases once the distance between them exceeds the maximal distance along an axis. This, together with the fact that the curve is the result of the discrete Fourier transform of Equation (52), implies a zero slope at the boundary. This holds for any direction in the two-dimensional plane, but the maximal distance between two neurons is larger for directions not aligned with an axis and depends on the precise direction, which explains the observed spreading.

In order to validate the long-range limit, we compared our discrete theory with the result from the Padé approximation at large distances (Appendix 1—figure 7B). We do not expect the Padé approximation to hold at small distances. We are mainly interested in the slope of the variance of covariances, because the slope determines how fast typical pairwise covariances decay with increasing inter-neuronal distance. The slope at large distances for the (0,2)-Padé approximation is smaller than the prediction of our theory, but the higher-order approximation matches our theory very well (Appendix 1—figure 7C). In the limit R→1, both Padé predictions yield similar results. The absolute value of the covariances in the Padé approximation can be obtained from a residue analysis. The (0,2)-Padé approximation yields absolute values with a small offset, analogous to the slope results. Calculating the residues for the (1,2)-DLog-Padé approximation would lead to a better approximation. Note that for plotting the higher-order prediction in Appendix 1—figure 7B, we simply inserted Equation (57) into Equation (53) and Equation (55).

Appendix 1—table 3. Parameters used to create theory figures. Decay constants in units of the lattice constant a.

Parameter   Figure 3B,C   Figure 3D,E   App 1—fig 7A   App 1—fig 7B   Description
Nx          61            201           61             1,001          Number of neurons in x-direction
Ny          61            201           61             1,001          Number of neurons in y-direction
q           4             4             4              4              Ratio of excitatory to inhibitory neurons
KE          100           100           100            100            Number of excitatory inputs per neuron
KI          50            50            50             50             Number of inhibitory inputs per neuron
dE          20            20            20             20             Decay constant of excitatory connectivity profile
dI          10            10            10             10             Decay constant of inhibitory connectivity profile
D           1             1             1              1              Squared noise amplitude
R           0.95          0.95          0.8            0.95           Spectral bound
kernel      exponential   exponential   exponential    exponential    Connectivity kernel

14 Parameters of NEST simulation

Appendix 1—table 4. Parameters used for NEST simulation and subsequent analysis.

Network parameters
N          2000                   Number of neurons
p          0.1                    Connection probability
τ          1 ms                   Time constant
σμ         1 Hz                   Standard deviation of external input
σnoise     0.1 Hz                 Standard deviation of noise
R          [0.1, 0.2, …, 0.9]     Spectral bound
ϵ          0.1                    Parameter controlling difference of two network simulations
Simulation parameters
dt         0.1 ms                 Simulation step size
t_init     100 ms                 Initialization time
t_sim      2,000,000 ms           Simulation time without initialization time
t_sample   1 ms                   Sample resolution at which rates were recorded
Analysis parameters
N_sample   200                    Sample size
T          100 ms                 Correlation time window

15 Sources of heterogeneity

Sparseness of connections is a large source of heterogeneity in cortical networks. It contributes strongly to the variance of effective connection weights, which determines the spectral bound, the quantity that controls the stability of balanced networks (Sompolinsky et al., 1988; Dahmen et al., 2019): Consider the following simple model $W_{ij}=\hat{W}_{ij}\,\zeta_{ij}$ for the effective connection weights $W_{ij}$, where $\zeta_{ij}\in\{0,1\}$ are independent Bernoulli variables, which are 1 with probability p and 0 with probability 1-p, and $\hat{W}_{ij}$ are independently distributed amplitudes. The $\zeta_{ij}$ encode the sparseness of connections, and the $\hat{W}_{ij}$ encode the experimentally observed distributions of synaptic amplitudes and the single-neuron heterogeneities that lead to different neuronal gains. Since $\hat{W}_{ij}$ and $\zeta_{ij}$ are independent, the variance of $W_{ij}$ is

$$\operatorname{Var}(W_{ij})=p\operatorname{Var}(\hat{W}_{ij})+p(1-p)\operatorname{Mean}(\hat{W}_{ij})^2.$$

For low connection probabilities as observed in cortex ($p(1-p)\approx p$), assessing the different contributions to the variance thus amounts to comparing the mean and standard deviation of $\hat{W}_{ij}$. Even though synaptic amplitudes are broadly distributed in cortical networks, one typically finds that their mean and standard deviation are of the same magnitude (see e.g. Sayer et al., 1990, Tab. 1; Feldmeyer et al., 1999, Tab. 1; Song et al., 2005, Fig. 1; Lefort et al., 2009, Tab. 2; Ikegaya et al., 2013, Fig. 1; Loewenstein et al., 2011, Fig. 2). Sparseness of connections (the second term on the right-hand side) is thus one of the dominant contributors to the variance of connections. For simplicity, the other sources, in particular the distribution of synaptic amplitudes, are left out in this study. They can, however, be straightforwardly added to the model and the theoretical formalism, because the formalism only depends on $\operatorname{Var}(W_{ij})$.
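The variance decomposition above is easy to verify by Monte Carlo sampling; a minimal sketch (the lognormal amplitude distribution is a hypothetical choice, motivated by the broad amplitude distributions cited above):

```python
import numpy as np

rng = np.random.default_rng(0)

p, n = 0.1, 1_000_000
amp = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # amplitudes W-hat_ij
mask = rng.binomial(1, p, size=n)                  # Bernoulli variables zeta_ij
W = amp * mask

lhs = W.var()
rhs = p * amp.var() + p * (1 - p) * amp.mean() ** 2
assert np.isclose(lhs, rhs, rtol=1e-2)             # equal up to sampling error
```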

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

David Dahmen, Email: d.dahmen@fz-juelich.de.

Moritz Helias, Email: m.helias@fz-juelich.de.

Stephanie E Palmer, The University of Chicago, United States.

Timothy E Behrens, University of Oxford, United Kingdom.

Funding Information

This paper was supported by the following grants:

  • Helmholtz Association VH-NG-1028 to David Dahmen, Moritz Helias.

  • European Commission HBP (785907 945539) to David Dahmen, Moritz Layer, Lukas Deutz, Nicole Voges, Michael von Papen, Markus Diesmann, Sonja Grün, Moritz Helias.

  • Deutsche Forschungsgemeinschaft 368482240/GRK2416 to Moritz Layer, Markus Diesmann, Moritz Helias.

  • Agence Nationale de la Recherche GRASP to Thomas Brochier, Alexa Riehle.

Additional information

Competing interests

No competing interests declared.

Author contributions

Conceptualization, Formal analysis, Investigation, Methodology, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review and editing.

Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review and editing, Conceptualization.

Formal analysis, Methodology, Validation, Visualization, Writing – original draft.

Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft, Writing – review and editing.

Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review and editing.

Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft.

Data curation, Funding acquisition, Resources, Writing – original draft, Writing – review and editing.

Data curation, Funding acquisition, Resources, Writing – original draft, Writing – review and editing.

Conceptualization, Funding acquisition, Investigation, Resources, Supervision, Writing – original draft, Writing – review and editing.

Conceptualization, Funding acquisition, Investigation, Resources, Supervision, Writing – original draft, Writing – review and editing.

Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review and editing.

Ethics

All animal procedures were approved by the local ethical committee (C2EA 71; authorization A1/10/12) and conformed to the European and French government regulations.

Additional files

Transparent reporting form

Data availability

All code and data required to reproduce the figures are available in a public zenodo repository at https://zenodo.org/record/5524777. Source data/code files are also attached as zip folders to the individual main figures of this submission.

References

1. Abbott LF, Rajan K, Sompolinsky H. The Dynamic Brain: An Exploration of Neuronal Variability and Its Functional Significance. Oxford Scholarship Online; 2011.
2. Abeles M. Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press; 1991.
3. Abramowitz M, Stegun IA. Handbook of Mathematical Functions. Applied Mathematics Series. National Bureau of Standards; 1964. pp. 55–57.
4. Aljadeff J, Stern M, Sharpee T. Transition to chaos in random networks with cell-type-specific connectivity. Physical Review Letters. 2015;114:088101. doi: 10.1103/PhysRevLett.114.088101.
5. Aljadeff J, Renfrew D, Vegué M, Sharpee TO. Low-dimensional dynamics of structured random networks. Physical Review E. 2016;93:022302. doi: 10.1103/PhysRevE.93.022302.
6. Amit DJ, Brunel N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex. 1997;7:237–252. doi: 10.1093/cercor/7.3.237.
7. Amit DJ, Tsodyks MV. Quantitative study of attractor neural network retrieving at low spike rates: I. substrate—spikes, rates and neuronal gain. Network. 2009;2:259–273. doi: 10.1088/0954-898X_2_3_003.
8. Baker C, Ebsch C, Lampl I, Rosenbaum R. Correlated states in balanced neuronal networks. Physical Review E. 2019;99:052414. doi: 10.1103/PhysRevE.99.052414.
9. Barthó P, Hirase H, Monconduit L, Zugaro M, Harris KD, Buzsáki G. Characterization of neocortical principal cells and interneurons by network interactions and extracellular features. Journal of Neurophysiology. 2004;92:600–608. doi: 10.1152/jn.01170.2003.
10. Basdevant JL. The Padé Approximation and its Physical Applications. Fortschritte der Physik. 1972;20:283–331. doi: 10.1002/prop.19720200502.
11. Ben-Yishai R, Bar-Or RL, Sompolinsky H. Theory of orientation tuning in visual cortex. PNAS. 1995;92:3844–3848. doi: 10.1073/pnas.92.9.3844.
12. Blumenfeld B, Bibitchkov D, Tsodyks M. Neural network model of the primary visual cortex: from functional architecture to lateral connectivity and back. Journal of Computational Neuroscience. 2006;20:219–241. doi: 10.1007/s10827-006-6307-y.
13. Brinkman BAW, Rieke F, Shea-Brown E, Buice MA. Predicting how and when hidden neurons skew measured synaptic interactions. PLOS Computational Biology. 2018;14:e1006490. doi: 10.1371/journal.pcbi.1006490.
14. Brochier T, Zehl L, Hao Y, Duret M, Sprenger J, Denker M, Grün S, Riehle A. Massively parallel recordings in macaque motor cortex during an instructed delayed reach-to-grasp task. Scientific Data. 2018;5:180055. doi: 10.1038/sdata.2018.55.
15. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience. 2000;8:183–208. doi: 10.1023/a:1008925309027.
16. Dąbrowska PA, Voges N, von Papen M, Ito J, Dahmen D, Riehle A, Brochier T, Grün S. On the Complexity of Resting State Spiking Activity in Monkey Motor Cortex. Cerebral Cortex Communications. 2020;2:tgab033. doi: 10.1093/texcom/tgab033.
17. Dahmen D, Grün S, Diesmann M, Helias M. Second type of criticality in the brain uncovers rich multiple-neuron dynamics. PNAS. 2019;116:13051–13060. doi: 10.1073/pnas.1818972116.
18. Dahmen D, Recanatesi S, Jia X, Ocker GK, Campagnola L, Jarsky T, Seeman S, Helias M, Shea-Brown E. Strong and Localized Coupling Controls Dimensionality of Neural Activity across Brain Areas. bioRxiv. 2021. doi: 10.1101/2020.11.02.365072.
19. Darshan R, van Vreeswijk C, Hansel D. Strength of Correlations in Strongly Recurrent Neuronal Networks. Physical Review X. 2018;8:031072. doi: 10.1103/PhysRevX.8.031072.
20. de la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A. Correlation between neural spike trains increases with firing rate. Nature. 2007;448:802–806. doi: 10.1038/nature06028.
21. DeFelipe J, Conley M, Jones EG. Long-range focal collateralization of axons arising from corticocortical cells in monkey sensory-motor cortex. The Journal of Neuroscience. 1986;6:3749–3766. doi: 10.1523/JNEUROSCI.06-12-03749.1986.
22. Dehghani N, Peyrache A, Telenczuk B, Le Van Quyen M, Halgren E, Cash SS, Hatsopoulos NG, Destexhe A. Dynamic Balance of Excitation and Inhibition in Human and Monkey Neocortex. Scientific Reports. 2016;6:23176. doi: 10.1038/srep23176.
23. Diesmann M, Gewaltig MO, Aertsen A. Stable propagation of synchronous spiking in cortical neural networks. Nature. 1999;402:529–533. doi: 10.1038/990101.
24. Elsayed GF, Lara AH, Kaufman MT, Churchland MM, Cunningham JP. Reorganization between preparatory and movement population responses in motor cortex. Nature Communications. 2016;7:13239. doi: 10.1038/ncomms13239.
25. Feldmeyer D, Egger V, Lubke J, Sakmann B. Reliable synaptic connections between pairs of excitatory layer 4 neurones within a single "barrel" of developing rat somatosensory cortex. The Journal of Physiology. 1999;521:169–190. doi: 10.1111/j.1469-7793.1999.00169.x.
26. Gallego JA, Perich MG, Miller LE, Solla SA. Neural Manifolds for the Control of Movement. Neuron. 2017;94:978–984. doi: 10.1016/j.neuron.2017.05.025.
27. Gallego JA. Cortical population activity within a preserved neural manifold underlies multiple motor behaviors. Nature Communications. 2018;9:4233. doi: 10.1038/s41467-018-06560-z.
28. Gallego JA, Perich MG, Chowdhury RH, Solla SA, Miller LE. Long-term stability of cortical population dynamics underlying consistent behavior. Nature Neuroscience. 2020;23:260–270. doi: 10.1038/s41593-019-0555-4.
29. Gao P. A theory of multineuronal dimensionality, dynamics and measurement. bioRxiv. 2017. doi: 10.1101/214262.
30. Gardiner CW. Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences. No. 13 in Springer Series in Synergetics. Springer; 1985.
31. Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT. Interruption of motor cortical discharge subserving aimed arm movements. Experimental Brain Research. 1983;49:327–340. doi: 10.1007/BF00238775.
32. Ginzburg I, Sompolinsky H. Theory of correlations in stochastic neural networks. Physical Review E. 1994;50:3171–3191. doi: 10.1103/PhysRevE.50.3171.
33. Goldenfeld N. Lectures on Phase Transitions and the Renormalization Group. Perseus Books; 1992.
34. Grytskyy D, Tetzlaff T, Diesmann M, Helias M. A unified view on weakly correlated recurrent networks. Frontiers in Computational Neuroscience. 2013;7:131. doi: 10.3389/fncom.2013.00131.
35. Hahne J. An SDE waveform-relaxation method with application in distributed neural network simulations. PAMM. 2017;19:201900373. doi: 10.1002/pamm.201900373.
36. Hansel D, Sompolinsky H. Methods in Neuronal Modeling. MIT Press; 1998.
37. Hatsopoulos N, Joshi J, O'Leary JG. Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. Journal of Neurophysiology. 2004;92:1165–1174. doi: 10.1152/jn.01245.2003.
38. Helias M, Tetzlaff T, Diesmann M. Echoes in correlated neural systems. New Journal of Physics. 2013;15:023002. doi: 10.1088/1367-2630/15/2/023002.
39. Helias M, Tetzlaff T, Diesmann M. The correlation structure of local neuronal networks intrinsically results from recurrent dynamics. PLOS Computational Biology. 2014;10:e1003428. doi: 10.1371/journal.pcbi.1003428.
40. Helias M, Dahmen D. Statistical Field Theory for Neural Networks. Cham: Springer International Publishing; 2020.
41. Hennequin G, Vogels TP, Gerstner W. Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron. 2014;82:1394–1406. doi: 10.1016/j.neuron.2014.04.045.
42. Hertz J. Cross-correlations in high-conductance states of a model cortical network. Neural Computation. 2010;22:427–447. doi: 10.1162/neco.2009.06-08-806.
43. Hu Y, Sompolinsky H. The Spectrum of Covariance Matrices of Randomly Connected Recurrent Neuronal Networks. bioRxiv. 2020. doi: 10.1101/2020.08.31.274936.
44. Ikegaya Y, Sasaki T, Ishikawa D, Honma N, Tao K, Takahashi N, Minamisawa G, Ujita S, Matsuki N. Interpyramid spike transmission stabilizes the sparseness of recurrent network activity. Cerebral Cortex. 2013;23:293–304. doi: 10.1093/cercor/bhs006.
45. Jordan J. NEST 2.18.0. Zenodo. 2019. doi: 10.5281/zenodo.2605422.
46. Kaufman MT, Churchland MM, Santhanam G, Yu BM, Afshar A, Ryu SI, Shenoy KV. Roles of monkey premotor neuron classes in movement preparation and execution. Journal of Neurophysiology. 2010;104:799–810. doi: 10.1152/jn.00231.2009.
47. Kaufman MT, Churchland MM, Shenoy KV. The roles of monkey M1 neuron classes in movement preparation and execution. Journal of Neurophysiology. 2013;110:817–825. doi: 10.1152/jn.00892.2011.
48. Kobayashi R, Kurita S, Kurth A, Kitano K, Mizuseki K, Diesmann M, Richmond BJ, Shinomoto S. Reconstructing neuronal circuitry from parallel spike trains. Nature Communications. 2019;10:4468. doi: 10.1038/s41467-019-12225-2.
49. Kriener B, Helias M, Rotter S, Diesmann M, Einevoll GT. How pattern formation in ring networks of excitatory and inhibitory spiking neurons depends on the input current regime. Frontiers in Computational Neuroscience. 2013;7:187. doi: 10.3389/fncom.2013.00187.
50. Lefort S, Tomm C, Petersen CCH. The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex. Neuron. 2009;61:301–316. doi: 10.1016/j.neuron.2008.12.020.
51. Lindner B, Doiron B, Longtin A. Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Physical Review E. 2005;72:061919. doi: 10.1103/PhysRevE.72.061919.
52. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience. 2012;15:1498–1505. doi: 10.1038/nn.3220.
53. Loewenstein Y, Kuras A, Rumpel S. Multiplicative dynamics underlie the emergence of the log-normal distribution of spine sizes in the neocortex in vivo. The Journal of Neuroscience. 2011;31:9481–9488. doi: 10.1523/JNEUROSCI.6130-10.2011.
54. Luczak A, Barthó P, Harris KD. Spontaneous events outline the realm of possible sensory responses in neocortical populations. Neuron. 2009;62:413–425. doi: 10.1016/j.neuron.2009.03.014.
55. Mastrogiuseppe F, Ostojic S. Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron. 2018;99:609–623. doi: 10.1016/j.neuron.2018.07.003.
56. Mazzucato L, Fontanini A, La Camera G. Stimuli Reduce the Dimensionality of Cortical Activity. Frontiers in Systems Neuroscience. 2016;10:11. doi: 10.3389/fnsys.2016.00011.
57. Okun M, Lampl I. Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nature Neuroscience. 2008;11:535–537. doi: 10.1038/nn.2105.
58. Olver FWJ, Lozier DW, Boisvert RF, Clark CW. NIST Handbook of Mathematical Functions. Cambridge University Press; 2010.
59. Pelizzola A. Cluster variation method, Padé approximants, and critical behavior. Physical Review E. 1994;49:R2503–R2506. doi: 10.1103/PhysRevE.49.R2503.
60. Perich MG, Arlt C, Soares S, Young ME, Mosher CP, Minxha J, Carter E, Rutishauser U, Rudebeck PH, Harvey CD, Rajan K. Inferring Brain-Wide Interactions Using Data-Constrained Recurrent Neural Network Models. bioRxiv. 2021. doi: 10.1101/2020.12.18.423348.
61. Pernice V, Staude B, Cardanobile S, Rotter S. How structure determines correlations in neuronal networks. PLOS Computational Biology. 2011;7:e1002059. doi: 10.1371/journal.pcbi.1002059.
62. Pernice V, Staude B, Cardanobile S, Rotter S. Recurrent interactions in spiking networks with arbitrary topology. Physical Review E. 2012;85:031916. doi: 10.1103/PhysRevE.85.031916.
63. Peyrache A. Spatiotemporal dynamics of neocortical excitation and inhibition during human sleep. PNAS. 2012;109:1731–1736. doi: 10.1073/pnas.1109895109.
64. Peyrache A, Destexhe A. Electrophysiological monitoring of inhibition in mammalian species, from rodents to humans. Neurobiology of Disease. 2019;130:104500. doi: 10.1016/j.nbd.2019.104500.
65. Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cerebral Cortex. 2014;24:785–806. doi: 10.1093/cercor/bhs358.
66. Recanatesi S, Ocker GK, Buice MA, Shea-Brown E. Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLOS Computational Biology. 2019;15:e1006446. doi: 10.1371/journal.pcbi.1006446.
67. Reimann MW, Horlemann AL, Ramaswamy S, Muller EB, Markram H. Morphological Diversity Strongly Constrains Synaptic Connectivity and Plasticity. Cerebral Cortex. 2017;27:4570–4585. doi: 10.1093/cercor/bhx150.
68. Reinhold K, Lien AD, Scanziani M. Distinct recurrent versus afferent dynamics in cortical visual processing. Nature Neuroscience. 2015;18:1789–1797. doi: 10.1038/nn.4153.
69. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD. The asynchronous state in cortical circuits. Science. 2010;327:587–590. doi: 10.1126/science.1179850.
70. Riehle A, Wirtssohn S, Grün S, Brochier T. Mapping the spatio-temporal structure of motor cortical LFP and spiking activities during reach-to-grasp movements. Frontiers in Neural Circuits. 2013;7:48. doi: 10.3389/fncir.2013.00048.
71. Riehle A, Brochier T, Nawrot M, Grün S. Behavioral Context Determines Network State and Variability Dynamics in Monkey Motor Cortex. Frontiers in Neural Circuits. 2018;12:52. doi: 10.3389/fncir.2018.00052.
72. Rosenbaum R, Doiron B. Balanced Networks of Spiking Neurons with Spatially Dependent Recurrent Connections. Physical Review X. 2014;4:021039. doi: 10.1103/PhysRevX.4.021039.
73. Rosenbaum R, Smith MA, Kohn A, Rubin JE, Doiron B. The spatial structure of correlated neuronal variability. Nature Neuroscience. 2017;20:107–114. doi: 10.1038/nn.4433.
74. Roxin A, Brunel N, Hansel D. Role of delays in shaping spatiotemporal dynamics of neuronal activity in large networks. Physical Review Letters. 2005;94:238103. doi: 10.1103/PhysRevLett.94.238103.
75. Salinas E, Sejnowski TJ. Correlated neuronal activity and the flow of neural information. Nature Reviews Neuroscience. 2001a;2:539–550. doi: 10.1038/35086012.
76. Salinas E, Sejnowski TJ. Gain modulation in the central nervous system: where behavior, neurophysiology, and computation meet. The Neuroscientist. 2001b;7:430–440. doi: 10.1177/107385840100700512.
77. Sayer RJ, Friedlander MJ, Redman SJ. The time course and amplitude of EPSPs evoked at synapses between pairs of CA3/CA1 neurons in the hippocampal slice. The Journal of Neuroscience. 1990;10:826–836. doi: 10.1523/JNEUROSCI.10-03-00826.1990.
78. Schnepel P, Kumar A, Zohar M, Aertsen A, Boucsein C. Physiology and Impact of Horizontal Connections in Rat Neocortex. Cerebral Cortex. 2015;25:3818–3835. doi: 10.1093/cercor/bhu265.
79. Semedo JD, Zandvakili A, Machens CK, Yu BM, Kohn A. Cortical Areas Interact through a Communication Subspace. Neuron. 2019;102:249–259. doi: 10.1016/j.neuron.2019.01.026.
80. Senk J, Hagen E, van Albada SJ, Diesmann M. Reconciliation of Weak Pairwise Spike-Train Correlations and Highly Coherent Local Field Potentials across Space. arXiv. 2018. https://arxiv.org/abs/1805.10235
81. Smith GB, Hein B, Whitney DE, Fitzpatrick D, Kaschube M. Distributed network interactions and their emergence in developing neocortex. Nature Neuroscience. 2018;21:1600–1608. doi: 10.1038/s41593-018-0247-5.
82. Sompolinsky H, Crisanti A, Sommers HJ. Chaos in random neural networks. Physical Review Letters. 1988;61:259–262. doi: 10.1103/PhysRevLett.61.259.
83. Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLOS Biology. 2005;3:e68. doi: 10.1371/journal.pbio.0030068.
84. Spreizer S, Aertsen A, Kumar A. From space to time: Spatial inhomogeneities lead to the emergence of spatiotemporal sequences in spiking neuronal networks. PLOS Computational Biology. 2019;15:e1007432. doi: 10.1371/journal.pcbi.1007432.
85. Stringer C, Pachitariu M, Steinmetz N, Carandini M, Harris KD. High-dimensional geometry of population responses in visual cortex. Nature. 2019;571:361–365. doi: 10.1038/s41586-019-1346-5.
86. Sussillo D, Churchland MM, Kaufman MT, Shenoy KV. A neural network that finds a naturalistic solution for the production of muscle activity. Nature Neuroscience. 2015;18:1025–1033. doi: 10.1038/nn.4042.
87. Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of neural-network activity by inhibitory feedback. PLOS Computational Biology. 2012;8:e1002596. doi: 10.1371/journal.pcbi.1002596.
88. Torre E. Synchronous Spike Patterns in Macaque Motor Cortex during an Instructed-Delay Reach-to-Grasp Task. Journal of Neuroscience. 2016;36:8329–8340. doi: 10.1523/JNEUROSCI.4375-15.2016.
89. Trousdale J, Hu Y, Shea-Brown E, Josić K. Impact of network structure and cellular response on spike time correlations. PLOS Computational Biology. 2012;8:e1002408. doi: 10.1371/journal.pcbi.1002408.
90. Tuckwell HC. Introduction to Theoretical Neurobiology. Cambridge University Press; 2009.
91. van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274:1724–1726. doi: 10.1126/science.274.5293.1724.
92. van Vreeswijk C, Sompolinsky H. Chaotic balanced state in a model of cortical circuits. Neural Computation. 1998;10:1321–1371. doi: 10.1162/089976698300017214.
93. Yu BM, Cunningham JP, Santhanam G, Ryu SI, Shenoy KV, Sahani M. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Journal of Neurophysiology. 2009;102:614–635. doi: 10.1152/jn.90941.2008.

Editor's evaluation

Stephanie E Palmer

This is a thorough study showing that long-range correlations in the brain can arise without common input drive or long-range anatomical connections. These long-range correlations are modulated by the animal's behavioral state, a surprising finding that suggests a computational role for control of this kind of correlation. The paper details some analytical methods for modeling this behavior in disordered systems. The work will be of broad interest to neuroscientists, computational biologists, and biophysicists.

Decision letter

Editor: Stephanie E Palmer

Our editorial process produces two outputs: (i) public reviews designed to be posted alongside the preprint for the benefit of readers; (ii) feedback on the manuscript for the authors, including requests for revisions, shown below. We also include an acceptance summary that explains what the editors found interesting or important about the work.

Decision letter after peer review:

Thank you for submitting your article "Global organization of neuronal activity only requires unstructured local connectivity" for consideration by eLife. Your article has been reviewed by 2 peer reviewers, one of whom is a member of our Board of Reviewing Editors, and the evaluation has been overseen by Timothy Behrens as the Senior Editor. The reviewers have opted to remain anonymous.

The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this to help you prepare a revised submission.

Essential revisions:

1) The scenario put forward by the authors describes the data well, but a major drawback is that it needs fine tuning for the network to be close to the critical point. There are multiple other, and perhaps simpler, scenarios that could reproduce the data described by the authors, but they are not adequately discussed in this paper:

A: Fluctuating and broadly divergent external inputs could in principle generate the observed correlation structure. The authors quickly dismiss this scenario in the discussion: 'In such a scenario, covariances have been shown to be predominantly generated locally rather than from external inputs (4,15)'. It was not clear where this is shown in these papers. The authors should explain much more clearly why they believe the external input scenario is unlikely (see also issue #5).

B: Recurrent connectivity correlated with neuronal selectivity, as in visual cortex, but with salt-and-pepper organization. This would be a scenario similar to what has been proposed in multiple visual cortex models (see old work by Tsodyks, Sompolinsky, and more recently ref (14)). The authors also dismiss this scenario because of the lack of patchiness of the correlation structure in M1, but we would not expect any patchiness if selectivity is salt-and-pepper.

C: Long range connectivity within M1. The existence of horizontal long range connectivity has been demonstrated in multiple cortical areas, and in particular motor cortex (see DeFelipe et al., 1984). Such a long range connectivity could in principle give rise to the long-range correlations found by the authors.

D: Can this work be connected to the findings in Schwab et al., PRL 2014? If latent variables can give rise to signatures of criticality but do not require fine tuning of the brain's network state, might this not be a more biologically plausible origin for the long-range correlation observed here?

2) The main text is sorely lacking any details of the model analyzed and simulated by the authors. The reader has to go to p.47 of the Supplementary Material to finally understand the authors simulated a rate model with threshold-linear (ReLU) transfer functions. The model should be explained in the main text when it is first introduced, and the equations should be described in the Methods of the paper, and not confined in the Supplementary Material.

3) A related issue is that it remains unclear how general these theoretical results are. There are several places in the paper where the authors hint that their results are more general than the analyzed rate model, but it remains unclear whether this is the case, and in particular whether their results could also apply to networks of spiking neurons. This issue needs to be discussed more clearly.

4) Stationarity claims are not well substantiated. Figures 2 and 3 in the Appendix show that the population averaged activity is approximately stationary for different sessions. However, single neuron activity could be highly dynamic, but these fluctuations of activity could be washed out at the population level. It would be helpful to check whether trial-averaged single unit activity is indeed stationary during these epochs. If it is, then it should be shown in the Appendix. If the authors are not able to demonstrate stationarity at the single neuron level, then some of their conclusions should be toned down (in particular, the last paragraph of the Discussion). Note that this issue is strongly related to issue #1, scenario A.

5) There are claims of 'strong' spike count covariances at several points in the paper, but the covariances by themselves tell us nothing about the strength of correlations if we don't know the respective variances. In that respect, plotting correlations instead of covariances (as is done in many other studies of correlation structures in brain networks) would be much more informative about the strength of correlations between neurons. An example of why using covariances is problematic to assess strength of correlation is given by a comparison between Figures 5 and 6: In Figure 5 covariances are of order 0.01, but they are of order 1 in Figure 6, likely because of a difference in mean rates of a factor 10 between the two.

6) The most interesting and compelling section of the paper is arguably the dependence of correlation on behavioral state. This result, though, seems a bit weak in magnitude. Are there other data to shore this up, or a null model to reveal how surprising this change is? A direct quantification of the covariance pattern change between epochs in Figure 6B, for example, would be good to see.


Author response


Essential revisions:

1) The scenario put forward by the authors describes the data well, but a major drawback is that it needs fine tuning for the network to be close to the critical point. There are multiple other, and perhaps simpler, scenarios that could reproduce the data described by the authors, but they are not adequately discussed in this paper:

We thank the reviewers for agreeing with us that our derived mechanism describes the experimental data well. In the revised manuscript, we expand on the discussion of potential other mechanisms underlying the observed correlation structures and argue in more detail why we think the presented mechanism is the most plausible one.

A: Fluctuating and broadly divergent external inputs could in principle generate the observed correlation structure. The authors quickly dismiss this scenario in the discussion: 'In such a scenario, covariances have been shown to be predominantly generated locally rather than from external inputs (4,15)'. It was not clear where this is shown in these papers. The authors should explain much more clearly why they believe the external input scenario is unlikely (see also issue #5).

We agree with the reviewers that we cannot fully exclude the scenario in which the measured correlations in the local network below the Utah array are generated from particularly correlated fluctuating external inputs. However, this scenario would raise the question which mechanism causes these particular external input correlations. In sensory areas, neurons are strongly driven by input stimuli, which is often reflected in low-rank structures leading to sizable positive average covariances (Rosenbaum et al., 2017). The resting-state activity in motor cortex investigated here, however, lacks these direct stimulations as well as behaviorally related activity transients. We consequently observe only very weak average covariances that can be well explained by ongoing activity in balanced networks. Helias et al. (2014) (ref 4 in our previous version) investigates intrinsic and extrinsic sources of correlations in such networks and finds that intrinsic connectivity dominates correlations for recurrent network sizes $N<10^7$ ($c_{\mathrm{int}}\gg c_{\mathrm{ext}}$, see their Figure 7a). Furthermore, for realistic local network sizes, local correlations only weakly depend on the amount of external correlations (their Figure 3a). Dahmen et al. (2019) (ref 19 in our previous version) shows the extreme case of driving a local network with correlated input from outside (see supplement section 3, “Effect of correlated external input”). In that setting, the correlated external population is of the same size as the local population, exhibiting the same intrinsic correlation structure as the local population. While the resulting correlated external input has some quantitative effect on the local statistics of covariances, their qualitative dependence on the spectral radius of the local network connectivity remains unchanged, as can be seen by all curves of normalized covariances in panel E collapsing on the same theoretical prediction that neglects correlated external inputs. Moreover, Ref 19 (Figure 4C) shows that the dependence of the correlations on the spectral bound is qualitatively the same in spatially organized networks. In the revised manuscript, we extended our discussion on external inputs to clarify why the local origin of the observed correlation structure seems most plausible. The new text in the Discussion reads:

“Spatially organized balanced network models have been investigated before in the limit of infinite network size, as well as under strong and potentially correlated external drive, as is the case, for example, in primary sensory areas of the brain (Rosenbaum et al., 2017; Baker et al., 2019). […] Despite sizable external input correlations projected onto the local circuit via potentially strong afferent connections, the dependence of the statistics of covariances on the spectral bound of the local recurrent connectivity is predicted well by the theory that neglects correlated external inputs (see supplement section 3 in Dahmen et al. (2019)).”

B: Recurrent connectivity correlated with neuronal selectivity, as in visual cortex, but with salt-and-pepper organization. This would be a scenario similar to what has been proposed in multiple visual cortex models (see old work by Tsodyks, Sompolinsky, and more recently ref (14)). The authors also dismiss this scenario because of the lack of patchiness of the correlation structure in M1, but we would not expect any patchiness if selectivity is salt-and-pepper.

We thank the reviewers for this remark. We agree with the reviewers that this point was not sufficiently discussed in the previous version of our manuscript. In the revised manuscript, we expanded the text in the discussion in the following way:

“Long-range connectivity, for example arising from a salt-and-pepper organization of neuronal selectivity with connections preferentially targeting neurons with equal selectivity (Ben-Yishai et al., 1995; Hansel and Sompolinsky, 1998; Roxin et al., 2005; Blumenfeld et al., 2006), would produce salt-and-pepper covariance patterns even in networks with small spectral bounds where interactions are only mediated via direct connections. […] In contrast, correlations due to imprinted network structures are static, so that a change in gain of the neurons will either strengthen or weaken the specific activity propagation, but it will not lead to a change of the sign of covariances that we see in our data.”

Furthermore, in order to emphasize more clearly that covariance patterns are highly dynamic as opposed to hard-wired via long-range direct connectivity, we expanded the explanation of Figure 6 to clarify that covariance patterns change across the two different epochs (S and P) of the reach-to-grasp experiment in a manner that makes a static origin less plausible:

“Despite similar overall distributions of covariances in S and P (Figure 6D1), covariances between individual neuron pairs are clearly different between S and P: Figure 6B shows the covariance pattern for one representative reference neuron in one example recording session of monkey N. […] In particular, the theory provides a mechanistic explanation for the different coordination patterns between neurons on the mesoscopic scale (range of a Utah array), which are observed in the two states S and P (Figure 6B). The coordination between neurons is thus considerably reshaped by the behavioral condition.”

We finally note that ref 14 mentioned by the reviewers employs a different mechanism than long-range direct connectivity. There the authors find in simulations that long-range coordination emerges close to linear instability by tuning anisotropies of local connection kernels in a neural field model. Through communication with the authors we identified that their observation can presumably be explained with our theoretical findings. The smoother heterogeneity (anisotropies) in ref 14, however, leads to stronger patchiness of covariance patterns than observed here for sparse networks. We contacted the authors of ref 14 and made sure that they agree with the discussion of their results in our manuscript and the way we contrast our work.

C: Long range connectivity within M1. The existence of horizontal long range connectivity has been demonstrated in multiple cortical areas, and in particular motor cortex (see DeFelipe et al., 1984). Such a long range connectivity could in principle give rise to the long-range correlations found by the authors.

We agree with the reviewers that long-range connections could play an important role in long-range coordination. However, to our knowledge, long-range connections are mostly patchy and predominantly excitatory, which could explain positive covariances at long distances, but would not provide an explanation for the large negative covariances between excitatory neurons at long distances (see e.g. Figure 1D). Furthermore, as already discussed above in response to point B, long-range connectivity would be static and therefore not change as much as we observe during different epochs of the reach-to-grasp experiment. Therefore, we believe that our mechanism via indirect interactions is more plausible. We added the reference to DeFelipe et al. in the Discussion of the revised manuscript:

“Likewise, embedded excitatory feed-forward motifs and cell assemblies via excitatory long-range patchy connections (DeFelipe et al., 1986) can create positive covariances at long distances (Diesmann et al., 1999; Litwin-Kumar and Doiron, 2012). Yet these connections cannot provide an explanation for the large negative covariances between excitatory neurons at long distances (see e.g. Figure 1D).”

D: Can this work be connected to the findings in Schwab et al., PRL 2014? If latent variables can give rise to signatures of criticality but do not require fine tuning of the brain's network state, might this not be a more biologically plausible origin for the long-range correlation observed here?

The work by Schwab et al., PRL 2014 studies how a power-law shape of the ranked network-state probabilities not only occurs at criticality but more generally in the presence of unobserved external inputs. The authors study ranked network-state probabilities, whereas we here study the variance of covariances and its distance dependence, which we find to be exponential in the long-range limit. As such, these two measures are clearly different, which is why it is unclear how the explanation of Zipf distributions based on latent variables could account for the long-range coordination studied here. Yet, the reviewers are right that any critical state also produces long-range correlations and that latent external inputs can lead to spurious signatures of criticality. As discussed above, we cannot fully exclude that an individual experimental finding is produced by a more specific mechanism than ours. However, we want to emphasize that we derive a single general mechanism that at the same time explains multiple experimental findings: broad covariances, a shallow decay of correlations, and a sensitive state dependence. If, instead, external inputs and long-range connections were the underlying cause of these observations, it would require tuning of different sets of parameters to explain the observed correlation structures. Furthermore, we show that the mechanism proposed here is general in that it only relies on the local network connectivity. We agree that it requires sufficiently high variability of the connectivity for the network to reside in the dynamically balanced regime. Yet, this is not really fine-tuning, since the dynamically balanced regime is not just a single point in parameter space. Also, since the effective connectivity depends on a large number of network and neuron parameters, there is a plethora of options for the network to create and maintain a large value of the spectral bound. In the revised manuscript, we extended our discussion on this aspect in the following way:

“Note that both mechanisms of long-range coordination, the one studied in (Smith et al., 2018) and the one presented here, rely on the effective connectivity for the network to reside in the dynamically balanced critical regime. The latter regime is, however, not just one single point in parameter space, but an extended region that can be reached via a multitude of control mechanisms for the effective connectivity, for example by changing neuronal gains (Salinas and Sejnowski, 2001a,b), synaptic strengths (Sompolinsky et al., 1988), and network microcircuitry (Dahmen et al., 2020).”
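
As a toy illustration of why the dynamically balanced regime is an extended region rather than a point, the following sketch (our own code; all parameter values are hypothetical, not taken from the manuscript) shows how the spectral bound of a random connectivity matrix is moved continuously toward one by a single gain parameter:

import numpy as np

# Toy example (hypothetical parameters): spectral bound R of a random
# connectivity matrix with i.i.d. entries of variance g**2 / N. By the
# circular law, R is approximately g, so gains or synaptic strengths can
# place the network anywhere between R << 1 and the critical value R = 1.
rng = np.random.default_rng(0)
N = 1000
for g in (0.5, 0.8, 0.95):
    W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    R = np.abs(np.linalg.eigvals(W)).max()
    print(f"gain g = {g:.2f} -> spectral bound R = {R:.2f}")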

2) The main text is sorely lacking any details of the model analyzed and simulated by the authors. The reader has to go to p.47 of the Supplementary Material to finally understand that the authors simulated a rate model with threshold-linear (ReLU) transfer functions. The model should be explained in the main text when it is first introduced, and the equations should be described in the Methods of the paper, not confined to the Supplementary Material.

We thank the reviewers for this comment and moved the model details from the supplement to the main text and Methods section. We would like to point out that the specific rate model has only been chosen for Figure 5 to make the point of the state dependence intuitive. The main results are not derived for this specific model, but are based on model-independent properties of linear-response theory: in previous work we explicitly showed (see Grytskyy et al., 2013 and Figure 4 in Dahmen et al., 2019) that the theoretical predictions agree with simulations of networks of spiking leaky integrate-and-fire neurons. We discuss this in the new initial text of the Methods section “Network model simulation”.
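
For concreteness, a minimal sketch of the kind of threshold-linear rate model now described in the Methods (our own illustrative code; network size, gain, and noise amplitude are hypothetical, not the values used in the manuscript):

import numpy as np

# Threshold-linear (ReLU) rate network, integrated with the Euler method.
rng = np.random.default_rng(1)
N = 500                                   # number of units
g = 0.8                                   # controls the spectral bound
tau, dt, steps = 10.0, 0.1, 20000         # time constant (ms), step, length
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
mu = rng.normal(1.0, 0.2, size=N)         # static external drive

relu = lambda x: np.maximum(x, 0.0)
x = np.zeros(N)
rates = np.empty((steps, N))
for t in range(steps):
    noise = 0.1 * np.sqrt(dt) * rng.normal(size=N)   # additive white noise
    x += dt / tau * (-x + W @ relu(x) + mu) + noise
    rates[t] = relu(x)

# pairwise covariances, estimated from the stationary half of the run
cov = np.cov(rates[steps // 2:].T)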

3) A related issue is that it remains unclear how general these theoretical results are. There are several places in the paper where the authors hint that their results are more general than the analyzed rate model, but it remains unclear whether this is the case, and in particular whether their results could also apply to networks of spiking neurons. This issue needs to be discussed more clearly.

We thank the reviewers for pointing out that the previous version of our manuscript did not sufficiently stress the generality of our findings and their applicability to networks of spiking neurons. As mentioned above, our study builds on a series of works (Tetzlaff et al., 2012; Trousdale et al., 2012; Pernice et al., 2012; Grytskyy et al., 2013; Helias et al., 2013; Dahmen et al., 2019) that showed quantitative agreement between the correlations predicted by the employed linear-response theory and simulations of spiking neurons. We added the following explanation to the discussion of the revised manuscript:

“Our analysis of covariances on the single-neuron level goes beyond the balance condition and requires the use of field-theoretical techniques to capture the heterogeneity in the network (Dahmen et al., 2019; Helias and Dahmen, 2020). It relies on a linear-response theory, which has previously been shown to faithfully describe correlations in balanced networks of nonlinear (spiking) units (Tetzlaff et al., 2012; Trousdale et al., 2012; Pernice et al., 2012; Grytskyy et al., 2013; Helias et al., 2013; Dahmen et al., 2019). These studies mainly investigated population-averaged correlations with small spectral bounds of the effective connectivity. Subsequently, Dahmen et al. (2019) showed the quantitative agreement of this linear-response theory for covariances between individual neurons in networks of spiking neurons for the whole range of spectral bounds, including the dynamically balanced critical regime. The long-range coordination studied in the current manuscript requires the inclusion of spatially non-homogeneous coupling to analyze excitatory-inhibitory random networks on a two-dimensional sheet with spatially decaying connection probabilities. This new theory allows us to derive expressions for the spatial decay of the variance of covariances.”
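
As a compact reference for the linear-response relation invoked here (schematic notation of ours; the Methods contain the full expressions): for a linearized network with effective connectivity $W$ and input-noise covariance $D$, the matrix of time-integrated covariances takes the standard form

\[
C \;=\; (\mathbb{1} - W)^{-1}\, D\, (\mathbb{1} - W)^{-\mathsf{T}},
\]

which holds regardless of whether the underlying units are rate-based or spiking, as long as the linearization is accurate; the spread of the covariances grows as the spectral bound of $W$ approaches one.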

4) Stationarity claims are not well substantiated. Figures 2 and 3 in the Appendix show that the population-averaged activity is approximately stationary for different sessions. However, single-neuron activity could be highly dynamic, with these fluctuations washed out at the population level. It would be helpful to check whether trial-averaged single-unit activity is indeed stationary during these epochs. If it is, it should be shown in the Appendix. If the authors are not able to demonstrate stationarity at the single-neuron level, then some of their conclusions should be toned down (in particular, the last paragraph of the Discussion). Note that this issue is strongly related to issue #1, scenario A.

We agree with the reviewers that the population-averaged statistics shown in Figures 2 and 3 in the Appendix of the previous version were not sufficient to claim that the activity was stationary within a given epoch (S or P) of the reach-to-grasp experiment. We therefore performed the analysis suggested by the reviewers and provide trial-averaged single-unit activity in the new Figures 5 and 6 of the revised Appendix. These figures show that the majority of neurons have stationary activity statistics within the relevant epochs S and P, especially when compared to their whole dynamic range, which is explored during movement transients towards the end of the task. The statistics change notably between the two epochs S and P. This is in line with the second test for stationarity that we show in Figure 6E,F. There we compare the firing-rate and covariance changes between two 200 ms segments of the same epoch to the firing-rate and covariance changes between two 200 ms segments of different epochs. If the neural activity were not stationary within an epoch, we would not obtain correlation coefficients of almost 1 between firing rates in Figure 6E and correlation coefficients of up to 0.9 between covariance patterns within one epoch in Figure 6F. In summary, these analyses make us confident that assuming stationarity within an epoch is a good approximation for demonstrating significant, behaviorally related changes in covariances across epochs of the reach-to-grasp experiment.
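
The within-epoch stationarity check can be summarized in a few lines; the sketch below is our own illustration, with a hypothetical data layout (spike counts per trial and neuron for each 200 ms segment):

import numpy as np

# counts_*: arrays of shape (n_trials, n_neurons) of spike counts in one
# 200 ms segment; the variable names and layout are hypothetical.

def rate_vector(counts, seg_len_s=0.2):
    """Trial-averaged firing rate of each neuron, in spikes/s."""
    return counts.mean(axis=0) / seg_len_s

def rate_similarity(counts_a, counts_b):
    """Pearson correlation between two firing-rate patterns."""
    ra, rb = rate_vector(counts_a), rate_vector(counts_b)
    return np.corrcoef(ra, rb)[0, 1]

# Stationarity within an epoch predicts values close to 1 for
# rate_similarity(counts_S1, counts_S2) (two segments of epoch S),
# and markedly lower values across epochs, e.g.
# rate_similarity(counts_S1, counts_P1).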

5) There are claims of 'strong' spike count covariances at several points in the paper, but covariances by themselves tell us nothing about the strength of correlations if we don't know the respective variances. In that respect, plotting correlations instead of covariances (as is done in many other studies of correlation structures in brain networks) would be much more informative about the strength of correlations between neurons. An example of why using covariances is problematic for assessing the strength of correlation is given by a comparison between Figures 5 and 6: in Figure 5 covariances are of order 0.01, but they are of order 1 in Figure 6, likely because of a difference in mean rates of a factor 10 between the two.

We agree with the reviewers that correlation coefficients are easier to interpret than covariances. In the revised manuscript we therefore begin by discussing correlation coefficients of spike counts in the experimental data. We replaced the covariances in the previous version of Figure 1 by correlation coefficients, showing that spike counts of individual neuron pairs exhibit strong positive and negative correlations. To show that the differences in correlations across neuron pairs stem from largely differing covariances, we added a new figure to the revised Appendix (Appendix Figure 1) showing correlation coefficients, covariances, and variances side by side. Note that the spread of covariances is a more robust measure than the spread of correlation coefficients, because the latter depends on single-neuron properties such as the individual neurons’ firing rates. For that reason, a theory for covariances depends on fewer parameters. For the remainder of the revised manuscript we therefore prefer to show covariances rather than correlations, and we added explanations relating the covariance magnitudes in Figures 5 and 6, as the reviewers suggested.

“The mechanistic model in the previous section shows a qualitatively similar scenario (Figure 5C,E). By construction it produces different firing rate patterns in the two simulations. While the model is simplistic and in particular not adapted to quantitatively reproduce the experimentally observed activity statistics, its simulations and our underlying theory make a general prediction: Differences in firing rates impact the effective connectivity between neurons and thereby evoke even larger differences in their coordination if the network is operating in the dynamically balanced critical regime (Figure 5D).”
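
For reference, the conversion between the two measures discussed above uses the standard definitions: the correlation coefficient normalizes the covariance by the geometric mean of the variances,

\[
\rho_{ij} \;=\; \frac{c_{ij}}{\sqrt{c_{ii}\, c_{jj}}},
\]

and since spike-count variances scale roughly with firing rates (exactly so for Poisson counts), a factor of ten in rates alone shifts covariances by an order of magnitude or more at fixed correlation, which is the gist of the relation between the magnitudes in Figures 5 and 6.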

6) The most interesting and compelling section of the paper is arguably the dependence of correlation on behavioral state. This result, though, seems a bit weak in magnitude. Are there other data to shore this up, or a null model to reveal how surprising this change is? A direct quantification of the covariance pattern change between epochs in Figure 6B, for example, would be good to see.

We thank the reviewers for sharing our interest in the state dependence of covariances during the reach-to-grasp task. We agree that a reference case is needed to judge how surprising the change of covariances between epochs is. Such a reference was already present in the previous version of the manuscript, but we now see that it was not explained well enough. Since measurements of covariances based on a finite number of data points are noisy, we compare the changes in covariances between two 200 ms segments of the same epoch to the changes in covariances between 200 ms segments of two different epochs (Figure 6E,F). This statistic also includes a quantitative comparison between the two covariance patterns shown in Figure 6B as one data point (session 8). We clarify this in the revised caption of Figure 6. Furthermore, we expanded the explanation and discussion of Figure 6B-F in the text, explaining how we gauge the observed changes between behaviorally different epochs. The revised text reads:

“Within each epoch, S or P, the neuronal firing rates are mostly stationary, likely due to the absence of arm movements which create relatively large transient activities in later epochs of the task which are not analyzed here (see Appendix 1 Section 3). […] The coordination between neurons is thus considerably reshaped by the behavioral condition.”
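
The gauge described above can be made explicit with a short sketch (our own illustration; data layout hypothetical): correlate the pattern of pairwise covariances between two 200 ms segments, once within an epoch and once across epochs.

import numpy as np

# counts_*: arrays of shape (n_trials, n_neurons) of spike counts in one
# 200 ms segment; names and layout are hypothetical.

def covariance_pattern(counts):
    """Off-diagonal spike-count covariances, flattened to a vector."""
    c = np.cov(counts.T)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

def pattern_correlation(counts_a, counts_b):
    """Pearson correlation between two covariance patterns."""
    pa, pb = covariance_pattern(counts_a), covariance_pattern(counts_b)
    return np.corrcoef(pa, pb)[0, 1]

# Correlations of up to 0.9 within an epoch versus substantially lower
# values across epochs indicate a behaviorally related reshaping of
# neuronal coordination beyond measurement noise.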

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

Figure 1—source data 1. Code and data.
Figure 3—source data 1. Code and data.
Figure 4—source data 1. Code and data.
Figure 5—source data 1. Code and data.
Figure 6—source data 1. Code and data.
Transparent reporting form

Data Availability Statement

All code and data required to reproduce the figures are available in a public zenodo repository at https://zenodo.org/record/5524777. Source data/code files are also attached as zip folders to the individual main figures of this submission.

