NIHPA Author Manuscript; available in PMC: 2020 May 13.
Published in final edited form as: Cell Rep. 2020 Apr 14;31(2):107483. doi: 10.1016/j.celrep.2020.03.047

Network Analysis of Murine Cortical Dynamics Implicates Untuned Neurons in Visual Stimulus Coding

Maayan Levy 1, Olaf Sporns 4,5, Jason N MacLean 1,2,3,6,*
PMCID: PMC7218481  NIHMSID: NIHMS1584934  PMID: 32294431

SUMMARY

Unbiased and dense sampling of large populations of layer 2/3 pyramidal neurons in mouse primary visual cortex (V1) reveals two functional sub-populations: neurons tuned and untuned to drifting gratings. Whether functional interactions between these two groups contribute to the representation of visual stimuli is unclear. To examine these interactions, we summarize the population partial pairwise correlation structure as a directed and weighted graph. We find that tuned and untuned neurons have distinct topological properties, with untuned neurons occupying central positions in functional networks (FNs). Implementation of a decoder that utilizes the topology of these FNs yields accurate decoding of visual stimuli. We further show that decoding performance degrades comparably following manipulations of either tuned or untuned neurons. Our results demonstrate that untuned neurons are an integral component of V1 FNs and suggest that network interactions contain information about the stimulus that is accessible to downstream elements.

Graphical Abstract


In Brief

Levy et al. record populations of neurons in visual cortex responding to drifting gratings. Pairwise correlations, summarized as functional networks, are specific for each direction. Reliably responding neurons, i.e., tuned and untuned neurons, are differentially located in networks. Both classes of neurons and their connections contribute to the coding of direction.

INTRODUCTION

Neurons in sensory cortices collectively encode information about the external world. In primary visual cortex (V1), a neuron is considered selective, or tuned, to an orientation or direction of a moving bar or drifting grating if it exhibits a consistently increased firing rate in response to that orientation or direction (Hubel and Wiesel, 1959). Not all neurons are tuned to a particular statistical feature of the visual stimuli shown in an experiment: untuned neurons comprise roughly 20%–50% of pyramidal neurons in mouse V1, varying by lamina (Niell and Stryker, 2008; Ringach et al., 2016; Sun et al., 2016; Zariwala et al., 2011), and their role in visual coding remains understudied.

Imaging techniques allow us to densely sample large numbers of neurons in an unbiased manner, facilitating simultaneous examination of the full circuit response, including both tuned and untuned neurons (Olshausen and Field, 2005). This is helpful because one way by which untuned neurons are hypothesized to contribute to coding is through their correlation structure with both tuned and untuned neurons (Zylberberg, 2017). Indeed, studies that include pairwise correlations have demonstrated superior decoding performance compared with decoders assuming independent units (Chen et al., 2006; Graf et al., 2011; Shi et al., 2015). Theoretical studies have suggested that the correlation structure of a network of neurons can itself hold information about the stimulus, especially when the spatial decay of correlations is considered (Josić et al., 2009). Experimental evidence corroborates this postulation: noise correlations between direction-selective neurons in macaque middle temporal visual area (MT) were found to depend on the direction shown, indicating that those correlations themselves can be tuned (Ponce-Alvarez et al., 2013). Finally, correlational couplings between neurons, including coupling between tuned and untuned neurons, can be used to predict neuronal single-trial responses regardless of whether couplings represented tuned or untuned inputs (Dechery and MacLean, 2018). Yet the extent to which pairwise correlations across a large and functionally diverse neuronal population may contribute to decoding remains unknown.

We generated functional networks (FNs) as a summary of network activity because FNs maintain neuron-specific labels while simultaneously capturing all pairwise correlations. Specifically, in this framework, neurons are nodes, and statistical dependencies in the activity between neurons are edges, resulting in a weighted and directed matrix that can be structurally evaluated using graph theoretic tools (Dechery and MacLean, 2018; Kotekal and MacLean, 2020; for a review, see Bassett and Sporns, 2017). Here we find that FN topology is specific to a given visual stimulus. Untuned neurons inhabit central positions within the topology, acting as functional hubs because of their propensity to form a rich club of strong connections and their increased ranking in random walks. Using a two-stage model composed of a generative and a decoding component, we demonstrate that information about the stimulus is represented in the pattern of functional connections between tuned and untuned neurons. Hence the analysis of FNs, which naturally encompass circuit-wide interactions across multiple neuronal classes, provides an approach that unifies the neuron-centric and population-centric frameworks in visual system neuroscience.

RESULTS

Visual Cortical Responses to a Range of Visual Stimuli Overlap in Neuron Identities

To evaluate the circuit encoding of visual stimuli, we imaged populations of 73–347 layer 2/3 excitatory pyramidal neurons expressing GCaMP6s in murine primary visual cortex (see STAR Methods; data previously described in Dechery and MacLean, 2018). Mice were awake and free to ambulate while they viewed drifting gratings in 12 directions (Stimulus) interleaved with a gray mean luminance screen (Gray). To evaluate the viability of a coding scheme based on the identities of neurons, we examined the overlap of the most active population between directions. The most active neurons were defined as cells with time-averaged fluorescence values in the top n percent, with n varied between 5% and 30%. We found that overlap depends on stimulus similarity (Figures 1A and 1B), with the largest counts of shared neurons found between directions 180 degrees apart (which have the same orientation; 70.51% ± 8.44%) and neighboring directions (63.96% ± 10.01%). In contrast, 53.24% ± 12.28% of the most active neurons were shared between orthogonal directions. This extent of overlap suggests that the changing sets of neurons active from trial to trial across multiple stimuli would represent an inefficient coding scheme. Indeed, a feed-forward (FF) neural net decoder (STAR Methods) that used the identities of the neurons active in response to each grating direction as inputs failed to reach realistic levels of performance, and this remained the case when we varied the threshold defining the active population (Figure 1C). We confirmed this result with a more stringent maximum-likelihood (ML) decoder (STAR Methods; Avitan et al., 2016; Ponce-Alvarez et al., 2018), which decoded from the mean activity of all neurons rather than binarizing activity by setting the most active neurons to 1 and all other neurons to 0 as in the decoder in Figure 1C. The ML decoder had a mean performance of 28.12% ± 11.61% across datasets (Figure 1D), indicating, together with the FF decoder, that neither the identities of the most active neurons nor their level of activity was sufficient to decode from a population.
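The ML decoding scheme described above can be sketched as follows: fit, for each neuron and each direction, a conditional response distribution from training trials, then decode a test trial by maximizing the summed log-likelihood across neurons. This is a minimal illustration assuming independent Gaussian likelihoods per neuron; the function names, the Gaussian assumption, and all parameters are illustrative and not taken from the paper's STAR Methods.

```python
import numpy as np

def fit_ml_decoder(train_resp, train_dirs, n_dirs=12):
    """Fit per-neuron Gaussian likelihoods P(r_i | direction).

    train_resp: (n_trials, n_neurons) mean responses per trial
    train_dirs: (n_trials,) integer direction labels in [0, n_dirs)
    Returns (mu, sigma), each of shape (n_dirs, n_neurons).
    """
    mu = np.stack([train_resp[train_dirs == d].mean(axis=0) for d in range(n_dirs)])
    sigma = np.stack([train_resp[train_dirs == d].std(axis=0) + 1e-6
                      for d in range(n_dirs)])
    return mu, sigma

def decode_ml(test_resp, mu, sigma):
    """Return the direction maximizing the summed log-likelihood,
    assuming neurons are conditionally independent."""
    # log N(r | mu_d, sigma_d) summed over neurons, for each direction d
    ll = -0.5 * (((test_resp[None, :] - mu) / sigma) ** 2) - np.log(sigma)
    return int(np.argmax(ll.sum(axis=1)))
```

In practice the decoder is trained on a held-out split and scored by the fraction of test trials decoded correctly, which is the performance plotted in Figure 1D.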

Figure 1. Overlap between the Most Active Cells in Different Stimuli.


(A) An example field of view from dataset 3, with 347 neurons. Scale bars stand for 100 μm. FOV visualization was done by stitching together images we acquired for cell detection in a 4×4 grid, as described in STAR Methods. The top 20% of the most active neurons on average across trials are marked for drifting gratings at 30, 60, 120, and 210 degrees. Blue neurons are unique to the direction represented, whereas orange, green, red, and yellow neurons are shared between stimuli; i.e., they are in the most active group in at least two directions. Note that 60 degrees is a neighboring direction to 30 degrees, 120 degrees is the orthogonal direction, and 210 degrees has the same orientation as 30 degrees.

(B) Heatmaps quantifying the overlap in identities of the most active cells between each pair of directions of drifting grating, where the most active population was defined at the top 10% (left), 20% (middle), and 30% (right). Directions are on both axes of the heatmaps, with 1 being 30 degrees.

(C) Performance of a decoder trained with the identities of the most active cells. The input to the decoder was a binary vector where the most active cells in each trial received 1 and all other cells received 0. The most active cell pool was defined as the top 5%–30% cells with the largest fluorescence values averaged over the first 1.5 s of stimulus epochs. Line represents mean across datasets (n = 20), and shading stands for the standard error of the mean.

(D) Performance of a maximum-likelihood decoder. In this decoding approach we built conditional probability distributions for the activity of each neuron in response to each direction from a training set. Decoding was performed by taking the direction that produced the maximum-likelihood of a mean response in test set trials. Green dots represent datasets (n = 20), and the larger gray dot stands for the mean across datasets. Dashed line marks chance level.

(E) Tuned neurons (blue) did not differ in their latency to activate in stimulus (solid, 33.95 ± 34.66 frames) and gray (dashed, 33.03 ± 27.05 frames) trials, whereas untuned neurons (orange) activated significantly earlier in gray epochs (27.33 ± 24.92 frames) compared with drifting gratings (38.69 ± 38.64 frames; F = 9245.39; p < 0.001).

FNs Summarize Circuit Pairwise Correlations

We next explored a second coding scheme based on the co-activity between neurons. We sorted the population into significantly tuned (59.79% ± 19.95%) and untuned (40.21% ± 19.95%) neurons (Figures 2A–2C; see STAR Methods; Niell and Stryker, 2008; Ringach et al., 2016; Sun et al., 2016; Zariwala et al., 2011). When comparing the two sub-populations, we found that untuned neurons were consistently activated with a shorter latency during presentation of gray epochs (27.33 ± 24.92 frames) but with a longer latency during presentation of drifting gratings (38.69 ± 38.64 frames) as compared with tuned neurons (Gray: 33.03 ± 27.05 frames; Stimulus: 33.95 ± 34.66 frames; F = 9245.39; p < 0.001; Figure 1E). This longer latency of response to drifting gratings suggested that untuned neurons play a distinct functional role from tuned neurons in visual processing. To evaluate the potential roles of these two functional classes while also considering their interrelationships, we constructed FNs for each direction of drifting grating. We also separately considered FNs constructed from all stimulus epochs together (Stimulus) and FNs constructed from gray epochs (Gray; Figure 2E). In brief, we parsed the relevant epochs from fluorescence traces and calculated the partial correlation coefficient between each pair of neurons, factoring out the average responses of the neurons in the pair (analogous to signal correlation), as well as the overall level of population activity (corresponding to global state changes such as those that accompany locomotion; Figure 2D). Thus, the FN summarized the partial pairwise correlation that is independent of stimulus and internal state and is analogous to noise correlations. We assigned directionality to connections based on the lag that resulted in a peak in the neuronal pairwise cross-correlation (STAR Methods), with lag 0 taken to indicate a bi-directional connection.
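The edge-inference procedure described above can be sketched in a few lines: regress each neuron's trial-averaged response and the population average out of its single-trial trace, correlate the residuals to obtain the edge weight, and assign directionality from the lag of the peak cross-correlation. This is a simplified sketch, not the paper's exact STAR Methods implementation; the function names and the regression details are illustrative assumptions.

```python
import numpy as np

def residualize(trace, trial_avg, pop_avg):
    """Regress the neuron's trial-averaged response and the population
    average out of a single-trial trace; return the residual."""
    X = np.column_stack([np.ones_like(trace), trial_avg, pop_avg])
    beta, *_ = np.linalg.lstsq(X, trace, rcond=None)
    return trace - X @ beta

def lagged_corr(a, b, k):
    """Correlation between a at time t and b at time t + k."""
    if k > 0:
        return np.corrcoef(a[:-k], b[k:])[0, 1]
    if k < 0:
        return np.corrcoef(a[-k:], b[:k])[0, 1]
    return np.corrcoef(a, b)[0, 1]

def edge(trace_i, trace_j, trial_avg_i, trial_avg_j, pop_avg, max_lag=2):
    """Edge weight = correlation of the residual traces; direction from
    the lag of the peak cross-correlation (lag > 0: i leads j;
    lag 0: bi-directional edge)."""
    ri = residualize(trace_i, trial_avg_i, pop_avg)
    rj = residualize(trace_j, trial_avg_j, pop_avg)
    weight = np.corrcoef(ri, rj)[0, 1]
    lags = list(range(-max_lag, max_lag + 1))
    xc = [lagged_corr(ri, rj, k) for k in lags]
    return weight, lags[int(np.argmax(xc))]
```

The weight plays the role of the noise-correlation-like partial correlation, and the returned lag determines whether the edge is drawn i→j, j→i, or in both directions.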

Figure 2. Constructing a Network Comprising Two Functional Classes.


(A) An example field of view from dataset 3, with 347 neurons. Scale bar denotes 100 μm. FOV visualization was done by stitching together images we acquired for cell detection in a 4×4 grid, as described in STAR Methods. Neurons tuned to different directions are marked in different colors, and untuned neurons are marked in white. A tuned and untuned neuron are circled in blue and orange, respectively, for further illustration.

(B) Trial-averaged fluorescence traces for 12 directions of drifting gratings and a luminance-matched gray screen for a tuned neuron (top, blue) and untuned neuron (bottom, orange). Those neurons are marked in (A).

(C) The resulting tuning curves for the neurons shown in (A) and (B).

(D) An illustration of edge weight and direction computation for the pair of neurons shown in (A)–(C). Top: the activity of the tuned neuron (blue) and untuned neuron (orange) in a single movie. Middle: the averaged activity of those two neurons in all other movies (n = 9 for most datasets). Bottom (gray): average activity of all other neurons in the population in the same single movie plotted on the top. For each neuron, we regress out the average activity of the neuron in other movies and the population average in the same movie. We then correlate the residual activity of the neurons in the pair, and this value is used as the edge weight. To assign directionality, we examine the peak in the cross-correlogram (bottom right). In this example, the peak is at zero and results in a bi-directional edge.

(E) An illustration of the different functional networks (FNs) we construct from fluorescent activity. Stimulus FN (left) is inferred from all visually evoked activity regardless of stimulus identity, gray FN (middle) is inferred from all gray epochs, and stimulus-specific or direction FNs are inferred only from trials of specific direction of drifting gratings.

First-Order Topological Features Reflect Single-Neuron Response Properties

Examining Stimulus FNs, we observed that edge weights between tuned neurons were quantitatively related to the similarity between preferred directions (Figure 3A), in agreement with previous studies (Cossell et al., 2015; Nauhaus et al., 2009). This remained the case when FNs were constructed from spikes inferred from the calcium fluorescence traces (see STAR Methods and Figure S1). The strongest edge weights were present between neurons tuned to the same direction, whereas neurons tuned to orthogonal directions were connected by weaker edge weights (see also Dechery and MacLean, 2018). Notably, gray FNs exhibited less structure in the arrangement of weights related to tuning properties, likely reflecting the fact that similarly tuned neurons are more likely to be synaptically connected (Ko et al., 2013).

Figure 3. FNs Inferred from Fluorescence Contain Stimulus Features.


(A) Mean edge weight between pairs of tuned neurons is a function of similarity in their preferred direction in FNs inferred from stimulus epochs, but not in gray epochs. Lines represent means across datasets (n = 20), and shading represents the standard error of the mean. Note that pairs of neurons 180 degrees apart are composed of neurons that prefer the same orientation.

(B) Untuned neurons have larger realized out-degrees in gray FNs (gray), both to tuned and untuned targets. The opposite is true for tuned neurons, with bigger realized out-degrees to untuned neurons in Stimulus (green) FNs. Slices in the polar plot present averages across neurons; shading stands for standard deviation. Asterisks denote significance in Kolmogorov-Smirnov test.

(C) Same as in (B) for edge weights between pairs of neurons.

(D) Realized in-degrees for tuned neurons depend on the neuronal source and the input (Stimulus versus Gray). Slices show the means across neurons, and shading represents the standard deviation. Orange asterisks mark the mean as significantly different from all other conditions within the same FN (same color) according to Tukey-Kramer test. Two-way analysis of variance (FN type and incoming edge type) was also significant (F = 89.98; p < 0.001). In gray FNs, realized in-degrees from neurons tuned to neighboring directions and neurons tuned to the same orientation did not significantly differ from each other, but were different from the other two incoming edge types.

(E) Relative degrees in Stimulus (green) and Gray (gray) FNs for tuned and untuned neurons. Slices and shading represent the means and standard deviations across neurons, respectively. Asterisks stand for significant difference between Stimulus and Gray FNs according to Kolmogorov-Smirnov test, Bonferroni corrected. t, tuned neurons; ut, untuned neurons.

(F) Same as in (D) for relative in-degrees. F = 24.19; p < 0.01. In both Stimulus and Gray FNs, relative in-degree was similar from neurons tuned to the same and neighboring directions, and those two categories were significantly different (p < 0.01, Tukey-Kramer corrected) from the relative in-degree from orthogonal and same orientation selective neurons. In turn, orthogonal and same orientation inputs also did not differ in their relative portion.

See also Figures S1–S3.

We next evaluated whether tuned and untuned neurons possessed unique topological signatures in and across FNs. We found that both tuned and untuned neurons had similar in-degrees in both stimulus FNs and gray FNs. However, tuned and untuned neurons differed in their out-degrees, with untuned neurons displaying significantly larger out-degrees in gray FNs as compared with stimulus FNs (Figure S2). To clarify the source of this difference, we examined the composition of incoming and outgoing edges in both functional classes of neurons. Specifically, we looked at two neuron-centric metrics: (1) realized edges, defined as edge count out of the potential pool (e.g., the count of incoming edges from untuned neurons over the number of untuned neurons); and (2) relative degree, which is the proportion of edges from a certain class out of the total edges of the neuron (e.g., the count of outgoing edges to tuned neurons over the number of outgoing edges of the neuron in question; for more details, see STAR Methods).
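The two neuron-centric metrics defined above can be made concrete with a short sketch: for a given neuron, count its edges toward each functional class, then normalize either by the size of the potential target pool (realized edges) or by the neuron's own total edge count (relative degree). The function name and adjacency-matrix convention are illustrative assumptions, not the paper's code.

```python
import numpy as np

def degree_composition(A, is_tuned, node):
    """Realized and relative out-degree of `node` toward each class.

    A: (N, N) binary adjacency matrix, A[i, j] = 1 for an edge i -> j.
    is_tuned: boolean array of length N marking tuned neurons.
    """
    out = A[node].astype(bool).copy()
    out[node] = False                      # exclude any self-edge
    targets = {'tuned': is_tuned, 'untuned': ~is_tuned}
    realized, relative = {}, {}
    for name, mask in targets.items():
        pool = mask.copy()
        pool[node] = False                 # node cannot target itself
        n_edges = np.sum(out & pool)
        realized[name] = n_edges / max(pool.sum(), 1)   # out of potential pool
        relative[name] = n_edges / max(out.sum(), 1)    # out of node's edges
    return realized, relative
```

The in-degree versions are analogous, counting column entries A[:, node] instead of the row.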

We found that the average tuned neuron realizes more of the available potential outgoing edges to untuned neurons in Stimulus (0.32 ± 0.14) than in Gray (0.27 ± 0.15; Figure 3B) FNs, and untuned neurons constitute a larger portion of tuned neurons' outgoing pool in Stimulus (0.35 ± 0.19) than in Gray (0.29 ± 0.20; Figure 3E) FNs. Conversely, untuned neurons showed increased realized out-edges in Gray FNs regardless of whether the target was tuned: to untuned neurons (0.34 ± 0.17 versus 0.32 ± 0.15 in Stimulus) and to tuned neurons (0.33 ± 0.17 versus 0.26 ± 0.17 in Stimulus; Figure 3B). This latter trend also manifested as a larger relative out-degree from untuned to tuned neurons in Gray (0.47 ± 0.20) than in Stimulus (0.40 ± 0.21; Figure 3E) FNs. Edge weights reflected the same trends (Figure 3C). Hence neurons of the two functional classes show distinct correlation structure depending on the condition considered.

We next evaluated whether tuned neurons have a topological signature beyond partial pairwise correlation values. Tuned neurons displayed elevated realized in-degrees in Stimulus FNs dependent on the tuning of other tuned neurons. Elevated realized in-degrees were greatest from neurons preferring the same direction (0.50 ± 0.25), followed by incoming edges from neurons preferring the direction 180 degrees away, e.g., 30 and 210 degrees (0.44 ± 0.29), and then neurons tuned to neighboring directions (30 degrees apart; e.g., 30 and 60 degrees; 0.39 ± 0.22). Tuned neurons showed low realization rate of incoming edges from neurons selective to orthogonal directions (0.23 ± 0.23; Figure 3D), mirroring the dependency of edge weight on the difference in tuning (Figure 3A). In Gray FNs, however, realized in-degrees for tuned neurons depended less on the difference in tuning preference (same: 0.42 ± 0.25; neighboring: 0.37 ± 0.23; same orientation: 0.36 ± 0.28), and notably higher realization was found for incoming edges from neurons preferring orthogonal directions (0.30 ± 0.24; Figure 3D). The composition of incoming connections for tuned neurons also depended on tuning similarity with large relative degrees for neurons tuned to the same (0.18 ± 0.15) and neighboring (0.17 ± 0.12) directions. Surprisingly, only a small portion of the average tuned neuron in-degree in Stimulus FNs was due to edges arising from neurons selective to directions 180 degrees away, i.e., sharing the same orientation (0.07 ± 0.08; Figure 3F). Again, in Gray FNs, this effect was attenuated (same: 0.15 ± 0.16; neighboring: 0.15 ± 0.12; same orientation: 0.06 ± 0.08), and relative in-degrees from neurons tuned to orthogonal direction was larger (0.07 ± 0.08 versus 0.05 ± 0.07 in Stimulus; Figure 3F). Realized and relative out-degrees were similar to in-degrees and are depicted in Figure S3.

These results demonstrate that functional topology is nonrandom and specific to epochs of visual stimulation. Furthermore, tuned and untuned neurons differ in their correlation profiles in a manner that may enhance coding. Specifically, untuned neurons may contribute to visual stimulus encoding by modulating their interactions with tuned neurons, because these interactions are stimulus specific. It follows that tuning, or a lack thereof, is a manifestation of network interactions as much as it is a single-neuron property.

FNs Are Stimulus Specific

To further investigate the stimulus specificity of FNs, we compared the FNs that had been generated from each separate direction of drifting grating. Comparing the edge weights between a pair of neurons tuned to the same direction, we found the largest edge weights in FNs constructed from the direction the pair was tuned to (0.067 ± 0.11). Edge weights in FNs constructed from neighboring directions were found to be increased (0.058 ± 0.10) compared with FNs inferred from orthogonal directions (0.051 ± 0.09; Figure 4A). Noise correlations between pairs of neurons that are similarly tuned are thus stimulus dependent.

Figure 4. FNs Are Stimulus Specific.


(A) Pairs of neurons preferring the same direction have larger edge weights in FNs constructed from the same (blue) direction they are tuned for, followed by FNs inferred from the neighboring directions (green). Pairs have the smallest edge weights in FNs of orthogonal directions (p < 0.01, all three groups are different according to Kolmogorov-Smirnov test, Bonferroni corrected). Inset: zoom of large edge weights. Vertical lines and shading are means and standard deviations, respectively.

(B–D) Probability density distributions of normalized alignment scores for pairs of FNs that are built from trials of neighboring directions (green), trials of directions 180 degrees apart (same orientation, orange), and trials of orthogonal directions (gray) in full FNs (B; F = 22.87; p < 0.01), sub-FNs including only tuned neurons (C; F = 47.57; p < 0.01), and sub-FNs with only untuned neurons (D; F = 3.16, NS). Vertical lines and shading are means and standard deviations, respectively.

We next asked whether the topology of FNs as a whole is stimulus dependent as well, and sought to quantify this separately for tuned and untuned neurons. Graph alignment allows for a principled comparison of FNs by identifying common edges between graphs (Gemmetto et al., 2016; see STAR Methods). This metric preserves node identities, ranges between 0 and 1, and is normalized to control graphs to evaluate whether alignment is larger than expected by chance. We measured alignment between each pair of FNs constructed from trials of different drifting gratings directions. Alignment scores were calculated for FNs containing all neurons and in sub-networks with only the tuned or untuned neurons, where edges between untuned or tuned neurons were set to zero, respectively. To explore the relevance of FN topology to stimulus coding, we were especially interested in alignment between FNs for neighboring, opposite (180 degrees apart), and orthogonal directions. We found that edges were highly preserved across networks inferred from neighboring directions (0.225 ± 0.053 alignment score), were less similar (0.200 ± 0.053) for orthogonal directions, and finally were highly similar for opposite directions (0.237 ± 0.0056), which have the same orientation (F = 22.87; p < 0.01; Figure 4B). Alignment scores were driven by edges among tuned neurons, as the tuned sub-FNs showed the same stimulus specificity (F = 47.57; p < 0.01; Figure 4C). In contrast, the functional connectivity structure of untuned neurons was highly preserved regardless of stimulus similarity (F = 3.16, not significant [NS]; Figure 4D), suggesting unique roles of the two functional classes in stimulus coding.
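The logic of a normalized, label-preserving alignment score can be illustrated with a simple stand-in: compute the fraction of shared edges between two graphs over the same node set, then normalize by the overlap expected under node-permuted null graphs, so that chance-level alignment maps to approximately 1. This Jaccard-style sketch is an assumption for illustration and is not the exact metric of Gemmetto et al. (2016).

```python
import numpy as np

def edge_overlap(A, B):
    """Fraction of shared edges between two binary graphs with the same
    node labels (Jaccard index over the two edge sets)."""
    a, b = A.astype(bool), B.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / max(union, 1)

def normalized_alignment(A, B, n_null=100, seed=0):
    """Observed overlap divided by the mean overlap against
    node-permuted null graphs; values > 1 exceed chance."""
    rng = np.random.default_rng(seed)
    obs = edge_overlap(A, B)
    null = []
    for _ in range(n_null):
        perm = rng.permutation(A.shape[0])
        null.append(edge_overlap(A[np.ix_(perm, perm)], B))
    return obs / max(np.mean(null), 1e-12)
```

Applied to the tuned-only or untuned-only sub-FNs (with the other class's edges zeroed), the same score quantifies how much each class contributes to whole-network alignment.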

Untuned Neurons Form a Rich Club of Large Edge Weights

Recent studies of neuronal networks constructed from whole human brain imaging data (van den Heuvel and Sporns, 2011; van den Heuvel et al., 2012), cortical slices (Nigam et al., 2016), cortical cultures (Faber et al., 2019), and fronto-parietal cortex in monkeys (Dann et al., 2016) have found a rich-club topology, in which the nodes with the largest degrees are also densely connected among themselves. Here we examined whether FNs in L2/3 in V1 in vivo also exhibit a rich club organization (STAR Methods). We found that all datasets displayed significant rich club topology spanning the majority of neurons in the sample, in both the stimulus and gray FNs (Figures 5A–5C). To probe the position of untuned neurons within the rich club, we iteratively thresholded the networks according to edge weight. Consequently, at each iterative stage we included smaller and smaller weights, and networks became increasingly dense. We then sorted the neurons by their degrees in the resultant networks and examined the composition of the neurons with the kth percentile of top degrees (Figure 5D). Untuned neurons were found to be more prevalent in the group of neurons with the strongest weights and largest degrees, as evident in the weight-thresholded networks. As more small weights were included in the networks, untuned neurons no longer possessed the largest degrees in the network (Figures 5E and 5F). When pruning the networks in reverse order, keeping the smallest weights at each iteration (Pajevic and Plenz, 2012), untuned neurons were less likely to be among the most connected neurons in both Stimulus and Gray FNs when only small edge weights were included (Figure S4). This indicates that untuned neurons take part in a rich club of strong weights, putting them in a prime position for integration and computation (Dann et al., 2016; Faber et al., 2019).
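One slice of the sparsification analysis above can be sketched as follows: keep only edges above a given weight percentile, identify the top-degree neurons in the thresholded graph, and compute how over-represented untuned neurons are among them relative to their frequency in the population (a value above 1 indicates over-representation, as in Figure 5E). The function name and parameter choices are illustrative assumptions, not the paper's code.

```python
import numpy as np

def hub_prevalence(W, is_untuned, weight_pct, degree_pct):
    """Keep only edges at or above the weight_pct percentile of positive
    weights, then measure how over-represented untuned neurons are among
    the top-degree nodes, normalized by their population frequency."""
    w = W[W > 0]
    thresh = np.percentile(w, weight_pct)
    A = (W >= thresh)
    deg = A.sum(axis=0) + A.sum(axis=1)     # in-degree + out-degree
    cutoff = np.percentile(deg, degree_pct)
    hubs = deg >= cutoff
    frac_hub = is_untuned[hubs].mean() if hubs.any() else 0.0
    return frac_hub / max(is_untuned.mean(), 1e-12)
```

Sweeping weight_pct and degree_pct over a grid reproduces the structure of the heatmap analysis: at high weight thresholds (sparse, strong-edge networks) the returned ratio exceeds 1 for untuned neurons.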

Figure 5. Rich Club Structure in FNs.


(A) Examples from three datasets of rich club coefficients in stimulus (green) and gray (gray) FNs across percentiles of neurons, sorted in descending order by their degrees. Lines represent rich club coefficients, and shading represents significance with p < 0.01, tested against a distribution of rich club coefficients from 1,000 networks permuted to preserve degree-sequence distributions.

(B) Count of datasets (out of 20) that display significant rich club coefficient in stimulus networks. Significance was determined as described in (A).

(C) Same as (B), for gray FNs.

(D) A cartoon of the rich club sparsification procedure. Left: we start with all the edges in the network. Edge weight is specified next to each arrow, and the degree of each neuron is written on its node. Tuned neurons are in blue, and untuned neurons are in orange. At each step we discard the smallest edges and re-count the degrees of the neurons. We then ask how prevalent tuned and untuned neurons are within the group of neurons with the largest degrees, and normalize this prevalence by their frequency in the population.

(E) Results of the analysis described in (D). Untuned neurons form a rich club of strong weights. Warmer colors represent prevalence that is over the expected prevalence (value greater than 1). As FNs move from bottom to top on the heatmaps they become more sparse, with only the strongest edges remaining on the top. As FNs move from left to right more neurons are examined, with the fewest, largest degree neurons on the left side. See also Figure S4.

(F) Four cross sections of (E), showing that as networks are sparsified to hold only the strongest weights (top), untuned neurons (orange) are more prevalent among high-degree neurons than expected by their prevalence in the population. All FNs in this analysis are stimulus FNs; lines represent means across datasets (n = 20), and shading stands for the standard error.

Untuned Neurons Are Hubs

In order to better quantify the network contribution of untuned neurons, we measured the centrality of tuned and untuned neurons within the FN topology. Centrality can be evaluated by a number of different metrics that focus on local or global network patterns. Of particular relevance in the present context is the family of PageRank (eigenvector-based) algorithms, which take into account the network embedding of any given node. PageRank has been shown to capture the importance of nodes in a variety of biological systems. For example, in protein networks, PageRank identifies proteins' underlying traits and is predictive of prognosis (Wang and Marcotte, 2010), and in ecological networks, it identifies species that are crucial for biodiversity (Domínguez-García and Muñoz, 2015). In neuroscience, NeuronRank, a measure inspired by the PageRank algorithm, was found to be correlated with firing rates of single neurons (Fletcher and Wennekers, 2018) and, more importantly, of the population (Gürel et al., 2007) in networks of integrate-and-fire neurons. Here we used a variation of PageRank (Radicchi et al., 2009) that allowed us to assign an individual authority value to nodes a priori. This proved useful for testing a model in which tuned neurons are hypothesized to be a more reliable source of information. However, for the majority of the analyses, we kept the authority scores (z; see STAR Methods) equal across all neurons unless stated otherwise. We converted the raw ranking scores into relative rankings, which quantify for each neuron the percentage of neurons with a smaller ranking.
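A PageRank-style random walk with per-node authority values can be sketched as below: random walkers follow outgoing edge weights with probability q (the damping factor) and teleport according to the authority vector z with probability 1 − q, so setting unequal z values makes some neurons a priori more "trusted" restart targets. This is a generic sketch in the spirit of the cited variant, not the exact algorithm of Radicchi et al. (2009); all names and defaults are illustrative.

```python
import numpy as np

def authority_pagerank(W, z=None, q=0.85, tol=1e-10, max_iter=1000):
    """PageRank-style ranking on a weighted directed graph.

    W[i, j] is the weight of edge i -> j; z sets each node's a priori
    authority (teleportation preference); q is the damping factor.
    """
    n = W.shape[0]
    z = np.ones(n) if z is None else np.asarray(z, float)
    z = z / z.sum()
    rowsum = W.sum(axis=1, keepdims=True)
    # Row-normalize; nodes with no outgoing edges redistribute via z.
    P = np.divide(W, rowsum, out=np.tile(z, (n, 1)), where=rowsum > 0)
    r = np.ones(n) / n
    for _ in range(max_iter):
        r_new = q * (r @ P) + (1 - q) * z
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

def relative_ranking(r):
    """For each neuron, the fraction of neurons with a smaller score."""
    return np.array([(r < ri).mean() for ri in r])
```

Setting z(tuned) = x and z(untuned) = 1 for x between 0.25 and 4 reproduces the manipulation used in Figure 6D.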

We found that untuned neurons had larger rankings in FNs inferred from stimulus epochs (0.542 ± 0.292) as compared with tuned neurons (0.465 ± 0.281; t = 8.599; p < 0.001; Figure 6A). In contrast, tuned neurons displayed comparable rankings in gray FNs (0.518 ± 0.287) to untuned neurons (0.468 ± 0.288; Figure S5). This effect was robust across model parameters such as the damping value, which reflects time spent in the system (q; see STAR Methods; Figure 6B). The effect suggests that activity converges mostly onto untuned neurons across multiple timescales during stimulus presentation. This result was also robust to permuting incoming edges of untuned neurons (Figure 6C), consistent with the rich club structure of this sub-population. Interestingly, permuting incoming edges of tuned neurons resulted in a mild increase in the ranking of tuned neurons and a substantial decrease in the ranking of untuned neurons (Figure S6). Taken together with the robustness of ranking to configurations of incoming edges in untuned neurons, these results suggested that network topology is organized to form a backbone of untuned hubs. Finally, to assess a model of V1 in which one functional sub-population is a more trusted source of information, we varied the ratio between the authority values (z; see STAR Methods) of tuned and untuned neurons. The two sub-populations showed opposite trends: when the ratio favored tuned neurons, they showed increased rankings in gray FNs, whereas untuned neurons showed smaller rankings. In contrast, when untuned neurons were assigned larger authority values, they had even larger rankings in networks built from stimulus epochs, whereas tuned neurons had decreased ranking scores in random walks on these networks (Figure 6D). Importantly, even when tuned neurons were set to be four times more influential than untuned neurons, untuned neurons remained highly ranked in networks built from stimulus epochs.

Figure 6. Random Walks on Stimulus Networks Tend to Converge onto Untuned Neurons.

Figure 6.

(A) Probability density functions of the relative ranking for tuned (blue) and untuned (orange) neurons in networks constructed from stimulus epochs. Untuned neurons show increased relative ranking, a metric for the pooling of activity; t = 8.599; p < 0.001. No such effect is present in gray FNs (Figure S5).

(B) Larger ranking among untuned neurons in stimulus epochs is robust to the damping factor and can thus be thought of as occurring at multiple timescales. Lines represent the means for tuned (blue) and untuned cells (orange), and shading indicates the standard error. The right-side axis (green) shows the number of steps it took for the random walk to converge.

(C) Untuned neurons in stimulus networks are ranked highly regardless of their incoming edges. We gradually increased the proportion of untuned neurons whose incoming edges we randomly permuted, in steps of 5%. For each step we ran 30 trials, in which we chose neurons at random and permuted their incoming edges at random. Lines represent means over datasets, and shading indicates the standard error.

(D) Differences in the ranking of tuned and untuned neurons persist under a model in which one of the sub-populations is more influential. For each value of x ∊ [0.25, 4], we set z(tuned) = x and z(untuned) = 1. We then ran the algorithm and computed the relative ranking of each of the sub-populations. Untuned neurons tended to have larger rankings in stimulus FNs, especially when they had larger z values compared with tuned neurons, but also when tuned neurons were assigned authority values four times larger.

See also Figures S5 and S6.

A Decoder Model of V1 FNs Relies on Both Tuned and Untuned Neurons

In order to directly test the idea that the specific topology of a FN comprising tuned and untuned neurons contains information about the stimulus, we constructed a two-phase model. The first phase was a simple generative model, in which we simulated spiking activity within each of the 12 direction-specific FNs. Specifically, we used the edge weights from the networks as synaptic weights in a sparse recurrent neural network (RNN) and initiated activity by activating the small subset of neurons that had the shortest-latency response in the experimental data in each stimulus condition (12.41% ± 8.91% of neurons). In addition to using the edges from the FNs, inferred from data, as the weights in the RNN, we ensured that all other parameters in the RNN were biologically realistic (see STAR Methods). To do so, we performed a grid search and matched the activity produced by the RNN to the first-order statistics of spiking activity recorded from mouse V1. In the second phase, we used the spikes produced by each FN as inputs to a decoding framework, designed as a feed-forward neural network with 12 output units, one for each of the 12 directions. The connectivity between the N input units and the output layer was all-to-all, and the weights were trained on the simulated spiking data using conjugate gradient descent (STAR Methods; Figure 7A). Because we employed FNs inferred from data and froze the connections within them, the topological properties, as well as neuron identities, were preserved from the in vivo experimental data in the model.

Figure 7. Two-Phase Model of V1 Decodes Accurately from Both Tuned and Untuned Neurons.

Figure 7.

(A) Illustration of the two-phase decoder model. In phase I we instantiate each of the direction-specific FNs as a recurrent neural network (STAR Methods). Edges between tuned (blue) and untuned neurons (orange), as well as edges within the same functional class, are inferred from data, whereas inhibitory neurons and inhibitory edges (light gray) are added in a pseudo-random fashion, with parameters chosen to balance the activity. Inputs (yellow) consist of a spike inserted into each neuron that spiked in the first five imaging frames of the real data. Each one of the 12 direction FNs produced spiking activity that was used as input to phase II. In the second phase we either used spikes from one frame (i.e., a binary vector, B) or binned over frames to produce a rate vector (C). Each one of the neurons was connected in an all-to-all manner to 12 output units, and those connection weights were trained with conjugate gradient.

(B) Performance of 19 datasets when decoding from a single frame across the population of N neurons (green), compared with a Poisson decoder in which we permuted the spikes from the real data (gray). Practically, in this analysis we trained the decoder on binary vectors of size 1 × N, taken from time step t. We then decoded the direction of drifting gratings from a held-out set of binary vectors from the same t. Lines represent the mean across datasets, and shading is for the standard deviation. Note that chance level is 1/12 = 8.3% (dashed).

(C) Performance of the decoder is initially a function of bin size but quickly saturates (green). Performance of the Poisson decoder is plotted in gray. We trained the decoder on summed activity from n time steps and decoded from a held-out set of summed activity from the same n, starting and ending at the same frames. Lines represent the mean across datasets, and shading is for the standard deviation. Note that chance level is 1/12 = 8.3% (dashed).

(D) Decrease in performance is equivalent when the learned weights of tuned (blue) or untuned (orange) neurons to the output layer are permuted on a single-neuron basis. For each neuron, we randomly shuffled the 12 weights to the 12 output units. Probability density distributions are across 100 permutations in each dataset. At each permutation we also picked neurons at random from the more prevalent sub-population such that the count of permuted neurons is comparable with that of the smaller sub-population. This means that in 16/19 cases we did not permute all tuned neurons. Vertical lines and shading are for the means and standard deviations, respectively.

(E) Example dataset showing that performance degradation is linear with the percentage of permuted neurons (and therefore permuted weights). For each percentage of neurons from 5% to 100%, we performed 30 permutations: we first pseudo-randomly picked the neurons whose weights would be permuted by randomly sampling only tuned (blue) or untuned (orange) neurons until there were no more neurons left in that sub-population, and then randomly sampled from the other sub-population until the desired percentage was reached. Finally, we randomly shuffled the weights of the picked neurons to the output layer. Lines represent means across 30 permutations for each percentage, and shading indicates the standard deviation. The blue arrow marks the percentage at which all tuned neurons have been permuted and permutation of untuned neurons begins; the orange arrow marks the converse for untuned neurons.

(F) The same analysis as in (E), pooled across datasets. Lines and shading are the means and standard errors, respectively.

See also Figure S7.

This decoding approach proved to be efficacious. Decoding from a single time step in our model, which was a binary vector, resulted in performance that was 310.53% ± 72.68% over chance level and also significantly exceeded the performance of a Poisson decoder in which the neurons were considered independent (Figure 7B). Binning time steps and decoding from firing rates of units in each network saturated the performance in most datasets when bin sizes exceeded 11.94 ± 3.53 frames (STAR Methods; Figure 7C) and greatly exceeded the performance of the decoder based on the most active neurons (Figure 1C).

We next examined the degree to which the sub-populations of tuned and untuned neurons contributed to decoder performance. To assess their importance, we trained the weights to the output layer with all neurons, but then permuted the weights from tuned or untuned neurons and tested the decoder on held-out test set data. Performance degradation was equivalent regardless of the sub-population of neurons we permuted (Figure 7D). In fact, degradation in performance was a linear function of the fraction of cells whose weights we permuted, with no difference between tuned and untuned neurons (Figures 7E and 7F). Similar results were found when we trained the network with a subset of neurons, with tuned-only and untuned-only networks performing comparably (Figure S7).

DISCUSSION

Here we tested the hypothesis that FNs, composed of both untuned and tuned neurons, are stimulus specific, and that FN topology itself contains visual stimulus information that is accessible to and thus decodable by downstream neurons. Our findings indicate that the two sub-populations occupy specific topological positions in FNs, suggesting a collective role for all neurons, regardless of tuning, in the network representation of visual stimuli. This finding emphasizes the need for sufficiently broad and unbiased population sampling when studying cortical population coding. We note that although we sorted cells into two discrete categories in this study, in reality tuning is a continuous, graded property. Multiple quantification methods and parameter choices will engender varying proportions of tuned neurons, as well as tuning strength, rendering the dichotomy of tuned versus untuned somewhat arbitrary. However, the unique FN topological positions occupied by untuned neurons suggest that this functional designation is as much a manifestation of network interactions (Amsalem et al., 2016; Arakaki et al., 2017; Cossell et al., 2015; Tao et al., 2004) as it is a manifestation of single-neuron properties regardless of the specific parameter choices. In that regard, layer 2/3 may be different from layer 4, in which tuning properties are thought to be inherited from the dorsal lateral geniculate nucleus (Ringach et al., 2002). Finally, it is likely that distinct classes of stimuli, such as gratings, dots, or natural movies, will result in distinct FNs, consistent with the results presented here, and we hypothesize that the assignment of individual neurons to tuned and untuned categories will also change in a stimulus-dependent manner, again indicative of the fact that tuning is at least in part a consequence of network interactions. We suggest that our network-based approach can generalize across stimulus class.

This study highlights the crucial role of FN topology in stimulus coding. Previous studies have included statistical dependencies between neurons and found gains in decoding accuracy (Graf et al., 2011; Shi et al., 2015). We build on these results by generating and instantiating a complete functional topology from data, including tuned and untuned neurons, which preserves higher-order structure and also renders the readout layer of our decoder naive to all but the realistic spiking activity the network produces. Consequently, we show that every sensory neuron matters in the context of the active network and in the circuit-level representation of visual stimuli.

We find that untuned cells are strongly connected among themselves. This places untuned neurons at the core of the circuit, as evidenced in their propensity to form a rich club of strong weights. This demonstration of a rich club property in V1 FNs in vivo implicates rich clubs in processing on multiple spatial scales. Rich club topologies have been linked to increased integration of information (van den Heuvel and Sporns, 2011), synchronization (Watanabe, 2013), and fast decision making (Daniels and Romanczuk, 2019). Member nodes, or neurons, of the rich club were shown to perform disproportionate amounts of computation in cortical cultures (Faber et al., 2019). Furthermore, a spin glass model has demonstrated that a rich club organization supports a network’s capability to converge to a larger set of attractor states (Senden et al., 2014; see also Ponce-Alvarez et al., 2013) and hints at the potential role of rich club neurons in sensory discrimination. Our data agree with this hypothesis. The central position occupied by untuned neurons in visually evoked FNs is reinforced by their larger ranking in dynamic network models based on random walks, which suggests an important role as integrators, or poolers, of information. Taken together with their rich club structure of strong weights, untuned neurons may be especially crucial when the stimulus is ambiguous or low contrast (Nauhaus et al., 2009). Untuned neurons may hence aid the visual coding of more complex, naturalistic stimuli.

We propose that a model of V1 circuits based on FNs supports important functional roles in stimulus coding for both tuned and untuned neuronal populations. Moreover, our work suggests that the functional designation of a neuron as being tuned or untuned is a consequence of network topological interactions. Our work represents an example of how the application of network models and graph theory can provide insights and test hypotheses for future investigation of how neuronal populations encode and compute sensory signals.

STAR⋆METHODS

LEAD CONTACT AND MATERIALS AVAILABILITY

Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact Jason N. MacLean (jmaclean@uchicago.edu). This study did not generate new unique reagents.

EXPERIMENTAL MODEL AND SUBJECT DETAILS

Data collection was performed in accordance with the guidelines of, and approved by, the Institutional Animal Care and Use Committee at the University of Chicago. Experimental animals were 8 mice (4 males, 4 females) aged P84–P191, expressing GCaMP6s under the Thy-1 promoter (Dana et al., 2014; JAX #025776). Animals had ad libitum access to food and water. Information about the animals was previously described in Dechery and MacLean (2018).

METHOD DETAILS

Cranial window surgeries and two-photon calcium imaging

Fully described in Dechery and MacLean (2018). Briefly, a 3 mm diameter craniotomy was performed over left V1, and the anatomical location was verified using intrinsic signal imaging (Kalatsky and Stryker, 2003). During imaging sessions, animals were head-fixed, awake, and allowed to run voluntarily on a linear treadmill. A field of view (FOV) in V1 was found and compared against fiduciary markers from the images obtained by intrinsic signal imaging, and neurons were automatically detected by software written in house. To facilitate cell detection we divided the FOV into a 4 × 4 grid, zooming in on each grid section individually. Neurons were then identified in each grid zoom, and their coordinates in pixel space were transformed back into the FOV coordinate system, enabling us to perform an accurate line-scan on the entire population. Two-photon calcium imaging was performed at a 910 nm excitation wavelength with a scanning rate of 25–33 Hz, depending on the number of neurons imaged (72–347). Each frame is thus ~30 ms long.

Visual stimuli

We presented the mice with 8–10 repetitions of the same movie, consisting of pseudo-random presentations (trials) of drifting gratings in 12 directions (0.04 cycles/degree, 2 Hz), spaced 30 degrees apart. Drifting gratings (Stimulus) lasted 5 s and were interleaved with 3 s of uniform mean luminance screen (Gray). This provided 24–30 trials of each drifting grating direction.

Functional networks

Separate functional networks (FN) were inferred from data of stimulus and gray epochs (Figure 2D). We trimmed fluorescence traces from stimulus epochs to use the first 2 s of grating presentation, and traces from gray epochs to use the last 2 s of mean luminance screen. This was done to focus on the initial response to the drifting grating and to lessen the likelihood that fluorescence values had not returned to baseline from the preceding stimulus epoch, respectively. The trimmed individual traces from each epoch type were then concatenated in the order they were presented, separately for stimulus and gray and in each movie, and then smoothed with a running average window of 10 frames to discard discontinuities inserted by the concatenation procedure. This resulted in grand stimulus traces and grand gray traces for each movie. We then computed partial correlation between every pair of neurons, partialling out the mean response of the neurons in the pair in all other movies and the mean response of the population within the same movie (Figure 2D). Directionality was assigned to the partial correlation score by examining the peak (maximum value) in the cross-correlogram (see Dechery and MacLean, 2018).
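The core of this pipeline, partial correlation with covariates regressed out plus a direction assigned from the cross-correlogram peak, can be sketched as follows (a simplified Python illustration, not the authors' MATLAB implementation; the least-squares residualization and function names are our own choices):

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y after regressing out the covariates
    (here: mean responses in other movies and the population mean)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Z = np.column_stack([np.ones(len(x))] + [np.asarray(c, float) for c in covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residual of x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residual of y
    return float(np.corrcoef(rx, ry)[0, 1])

def lead_lag(x, y, max_lag=5):
    """Lag of the cross-correlogram peak; a positive value means x tends to
    lead y, suggesting a directed edge x -> y."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    c = np.correlate(x, y, mode="full")
    mid = len(x) - 1                      # index of zero lag in 'full' mode
    window = c[mid - max_lag: mid + max_lag + 1]
    return max_lag - int(np.argmax(window))
```

The sign of `lead_lag` determines the edge direction, and the magnitude of `partial_corr` sets the edge weight (negative edges are later discarded).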

Separate functional networks for each direction of drifting gratings were constructed by parsing out the traces of activity in response to trials of each of the 12 directions (Figures 2C and 2E). Only the first 2 s of each trial were kept and concatenated with other trials of the same direction. In most datasets we had 30 trials for each direction, and we concatenated every 5 consecutive trials together. These grand traces were then smoothed as described above and partial correlation was computed, regarding every grand trace as a movie. As the majority of our analysis tools, such as graph alignment, rich club analysis, and random walks, are designed for positive edges only, we discarded negative edges from all FNs for all analyses.

Graph alignment

We defined the graph alignment score A of each pair of networks M and N with k neurons as:

A = \frac{\sum_{i=1}^{k} \sum_{j=1}^{k} \min(M_{ij}, N_{ij})}{\sum_{i=1}^{k} \sum_{j=1}^{k} \max(M_{ij}, N_{ij})}

We then normalized the alignment score using a distribution of alignment scores from 100 degree-sequence-preserving (randmio_dir.m from Rubinov and Sporns, 2010) permuted FNs as follows:

\mathrm{norm}(A) = \frac{A - \hat{A}}{1 - \hat{A}}

where \hat{A} is the mean of the distribution from permuted FNs, so that alignment scores quantify similarity beyond what is expected by chance (Gemmetto et al., 2016).
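A minimal Python sketch of the alignment computation (assuming the denominator sums element-wise maxima, giving a weighted-Jaccard form; the null distribution would come from degree-preserving permutations of one network):

```python
import numpy as np

def alignment(M, N):
    """Overlap of two FN weight matrices: summed element-wise minima over
    summed element-wise maxima."""
    M, N = np.asarray(M, float), np.asarray(N, float)
    return float(np.minimum(M, N).sum() / np.maximum(M, N).sum())

def normalized_alignment(A, null_scores):
    """Normalize against the mean alignment of permuted-network surrogates,
    so 0 means chance-level similarity and 1 means identical networks."""
    A_hat = float(np.mean(null_scores))
    return (A - A_hat) / (1.0 - A_hat)
```

Identical networks give an alignment of 1; non-overlapping edge sets give 0.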

Rich club analysis

We sorted neurons in descending order by their combined in and out degree, and then we sequentially considered degrees starting from the largest. For each degree threshold, d, we calculated the rich club coefficient as:

R_d = \frac{\sum_{i : k(i) \geq d} \; \sum_{j : k(j) \geq d} W_{ij}}{D^2 - D}

where k(i) is the total degree of neuron i, W_{ij} is the FN edge weight, and D is the count of neurons with k ≥ d. We then performed 1000 degree-sequence-preserving (randmio_dir.m from Rubinov and Sporns, 2010) randomizations of the network and calculated the rich club coefficient in each of them. Significance was computed as the probability of the rich-club coefficient of real data being larger than the coefficients obtained from the population of randomized FNs.
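The per-threshold coefficient can be computed as in this Python sketch (degree counted as in-degree plus out-degree, as in the text; the degree-preserving null-model normalization is omitted):

```python
import numpy as np

def rich_club_coefficient(W, d):
    """Mean weight density among neurons whose total (in + out) degree >= d.

    W: directed, nonnegative weight matrix with zero diagonal.
    """
    A = W > 0
    total_degree = A.sum(axis=0) + A.sum(axis=1)   # in-degree + out-degree
    club = np.flatnonzero(total_degree >= d)
    D = len(club)
    if D < 2:
        return float("nan")                        # coefficient undefined
    sub = W[np.ix_(club, club)]                    # subnetwork of club members
    return float(sub.sum() / (D * D - D))
```

Sweeping d from the largest degree downward traces out the rich-club curve, which is then compared against the randomized networks.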

Random walks

The ranking P of each neuron i ∈ N was initialized randomly in (0,1] and then calculated iteratively as:

P_i = (1-q) \sum_{j=1}^{N} \frac{P_j}{s_j^{\mathrm{out}}} W_{ji} + q z_i + (1-q)\, z_i \sum_{j=1}^{N} P_j \, \delta(s_j^{\mathrm{out}})

where s_j^out = Σ_{k=1}^{N} W_jk, W_ji is the edge from neuron j to neuron i, and q is the damping factor (Radicchi et al., 2009). We picked q = 0.1 for all analyses except when we tested the robustness to this factor. This algorithm differs from PageRank in that it allows assignment of a fixed authority to each node in the network, denoted by z_i, which we set to 1 for all neurons and all analyses unless stated otherwise. The last term on the r.h.s. corrects for neurons that do not have outgoing edges, with δ(x) = 1 if x = 0 and δ(x) = 0 otherwise; it thus vanishes if there are no such neurons in the network, which was the case for all but one FN. The procedure repeats iteratively until the L2 norm between the ranking vectors at iterations k and k − 1 is smaller than a threshold set to 1 × 10^-6.
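A Python sketch of the iteration (here W[j, i] is taken as the edge from neuron j to neuron i, and the authority vector z is normalized to sum to 1 so the iteration converges to a probability vector; both conventions are our assumptions):

```python
import numpy as np

def rank_neurons(W, z=None, q=0.1, tol=1e-6, max_iter=10000, seed=0):
    """Modified PageRank with per-node authorities z (Radicchi-style).

    W[j, i]: weight of the edge from neuron j to neuron i (assumed orientation).
    """
    n = W.shape[0]
    z = np.ones(n) if z is None else np.asarray(z, float)
    z = z / z.sum()                           # normalize authorities (assumption)
    s_out = W.sum(axis=1)                     # out-strength of each neuron
    dangling = s_out == 0                     # neurons with no outgoing edges
    T = np.zeros_like(W, dtype=float)
    T[~dangling] = W[~dangling] / s_out[~dangling, None]   # row-stochastic walk
    P = np.random.default_rng(seed).uniform(1e-6, 1.0, n)  # random init in (0, 1]
    P /= P.sum()
    for _ in range(max_iter):
        # (1-q): follow an edge; q: teleport by authority; last term: dangling fix.
        P_new = (1 - q) * (T.T @ P) + q * z + (1 - q) * z * P[dangling].sum()
        if np.linalg.norm(P_new - P) < tol:
            return P_new
        P = P_new
    return P
```

With uniform z, mass accumulates on neurons whose in-edges pool activity from many strong sources, which is what the relative rankings in Figure 6 summarize.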

Two-phase decoder model

The decoder model was built in two phases: generation of artificial spike trains using a simple recurrent neural network model, and decoding direction using a feed-forward pattern recognition network. In the first phase, each of the 12 direction-specific FNs was used as a recurrent neural network. The probability of firing for each neuron was governed by:

p_i(t) = \left(1 + \exp\!\left(-\left[\sum_{j \neq i}^{N} s_j(t-1)\, W_{ij} - \sum_{k=1}^{M} s_k(t-1)\, a_{ki}\right]\right)\right)^{-1}

Once the probability of spiking p_i(t) was calculated, x ∈ [0, 1] was drawn at random from a uniform distribution. Then spiking at time t, s_i(t), was set as:

s_i(t) = \begin{cases} 1 & x \leq p_i(t) \\ 0 & x > p_i(t) \end{cases}

Each neuron i received excitatory and inhibitory inputs, described in the first and second terms of the exponent, respectively. The connectivity structure W_ij among the N excitatory neurons was governed by the FNs inferred from data and frozen. In contrast, M inhibitory neurons were added artificially (since we did not image inhibitory neurons) to balance the activity in the network. Inputs from excitatory to inhibitory neurons were set according to a fixed connection probability PEI, and the weight values were drawn from a lognormal distribution with parameters estimated from the FN. Weights from inhibitory to excitatory and inhibitory neurons (a_ki) followed a random wiring procedure with probabilities PIE and PII, respectively. These weights were again drawn from a lognormal distribution estimated from the edge weights in the functional network and then multiplied by a scaling factor, g.
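A single update of this probabilistic spiking rule might look as follows (Python sketch; reading the exponent as excitation minus inhibition, so that stronger excitatory drive raises the spike probability, is our assumption):

```python
import numpy as np

def step(s_exc, s_inh, W, a, rng):
    """One update of the probabilistic spiking rule for the excitatory units.

    s_exc, s_inh: 0/1 spike vectors at time t-1.
    W[i, j]: excitatory weight from neuron j onto neuron i (FN edge; zero diagonal).
    a[k, i]: weight from inhibitory neuron k onto excitatory neuron i.
    """
    drive = W @ s_exc - a.T @ s_inh          # excitation minus inhibition
    p = 1.0 / (1.0 + np.exp(-drive))         # sigmoid spike probability
    x = rng.uniform(size=len(p))             # one uniform draw per neuron
    return (x <= p).astype(int)
```

Note that with zero net drive a unit spikes with probability 0.5; in the full model the added inhibitory population and the grid-searched parameters keep the dynamics in a realistic regime.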

The parameters M, PEI, PIE, PII, and g were chosen by a grid search approach in which we set biologically plausible ranges for all the parameters (Chambers and MacLean, 2016; Song et al., 2005) and then ran the model with all possible combinations of values within these ranges. We then examined the resulting dynamics in excitatory neurons for three properties present in real cortical dynamics: persistent activity (Gutkin et al., 2001), realistic firing rates (Griffith and Horn, 1966; Koch and Fuster, 1989; Roxin et al., 2011), and asynchrony (Ecker et al., 2010; Renart et al., 2010; Zerlaut et al., 2019). Persistent, untruncated simulated activity was ensured by only examining parameters that sustained spiking through 140 time frames, which are equivalent to 5 s of stimulus presentation given our average scanning rate. Realistic firing rates were achieved by only considering parameters that produced excitatory firing rates within two standard deviations of the data firing rate as estimated from the fluorescence traces by the OASIS inference algorithm (Friedrich et al., 2017). Finally, asynchrony was guaranteed by examining the simulated rasters by eye and discarding parameters that resulted in simultaneous, locked spikes in more than 20% of the population. This procedure of picking parameters that satisfied these criteria was done separately for each FN; on average across networks and datasets, M = 0.28N, PEI = 0.39, PIE = 0.49, PII = 0.11, and g = 1.81. We failed to find parameter values that produced realistic activity for 1 of 20 datasets, and it was excluded from this analysis. Even though the parameters were fixed for each network, the matrix a was built anew, and thus different, in each trial.

We initiated trials by setting s_i(1) = 1 for all i ∈ N that spiked in the first 5 frames in the imaged data. We thus had 30 trials for each direction, with differing starting conditions that were directly informed by data. Typically, 12.41% ± 8.91% of neurons received an initial spike. We let activity propagate for 140 frames, at which point the simulation was terminated. For the performance-per-frame analysis (Figure 7B), we used the spikes from a single time frame, i.e., a binary vector, as inputs. We did not consider the first five frames, to allow the dynamics to develop. To examine the effect of bin size (Figure 7C), we binned frames into a rate vector to be used as input, again discarding the first 5 frames for all bin sizes. The decoder degradation analysis (Figures 7D–7F) was performed with rates binned over 100 time frames, from time frame 20 to 120. This analysis is described in more detail after the second phase of the decoder.

In the second phase of the decoder we used the binary vector or binned activity as input to a feed-forward neural network, in which N excitatory neurons in the input layer were connected in an all-to-all manner to 12 output units. The weights from the input to the output units were initialized uniformly at random in (0,1) and then trained in a supervised learning paradigm, with 90% of the inputs for each direction used for training and 10% held out as a test set. Training was performed using the MATLAB Network toolbox, with the objective of minimizing the cross-entropy between the output and the correct targets by adjusting the weights along the conjugate gradient direction. Once the network was trained, we took the decoded direction to be the identity of the output unit with the largest probability.
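A minimal stand-in for this readout is sketched below (Python; plain gradient descent on the cross-entropy replaces the MATLAB conjugate-gradient training, so this illustrates the objective rather than the authors' optimizer):

```python
import numpy as np

def train_softmax_decoder(X, y, n_classes, lr=0.5, epochs=500, seed=0):
    """All-to-all readout from N input units to n_classes output units,
    trained by minimizing cross-entropy with plain gradient descent."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.0, 1.0, (X.shape[1], n_classes))   # random init in (0, 1)
    Y = np.eye(n_classes)[y]                             # one-hot targets
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)                # softmax outputs
        W -= lr * X.T @ (P - Y) / len(X)                 # cross-entropy gradient
    return W

def decode(W, X):
    """Decoded direction = output unit with the largest probability."""
    return np.argmax(X @ W, axis=1)
```

Permuting rows of the trained W for a chosen sub-population of input neurons, as in the degradation analysis, leaves this architecture unchanged while scrambling that sub-population's contribution.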

We tested the contribution of tuned and untuned neurons to the performance of the model in two ways: 1) we permuted the trained weights from tuned or untuned neurons to the output layer, and 2) we trained the feed-forward network with a subset of tuned or untuned neurons. In both cases, we took steps to manipulate the same number of weights or units. Most datasets contained more tuned neurons, so we performed 100 manipulations, each time picking U tuned neurons to manipulate, with U being the count of untuned neurons.

Control decoders

The cell identity decoder (Figure 1C) was built by constructing a binary vector for each trial in each direction to be used as input to the second phase of our decoder, that is, the feed-forward pattern recognition network. For each trial we averaged the fluorescence traces of each neuron across the first 1.5 s of grating presentation. We then identified the most active neurons and set the binary input vector to 1 for neurons in this group, and 0 otherwise. We performed this procedure with the most active neurons defined as the top 5% to 30%, in steps of 5%. Partitioning the inputs into training and test sets and decoding performance were implemented as described for the two-phase decoder.

The maximum-likelihood (ML) decoder was constructed as described in Avitan et al. (2016) and Ponce-Alvarez et al. (2018). 90% of trials, picked at random, were used as a training set, and the remaining 10% of trials were used for testing. We trimmed the trials for each direction to the initial 2 s and estimated conditional probability distributions for the fluorescence of each neuron given the grating direction using MATLAB’s ksdensity function. To decode, we averaged the activity of each neuron over the 2 s in each test trial. We then collected the probabilities of obtaining this activity from the estimated distributions for the 12 directions and multiplied these probabilities across neurons, assuming they are independent. The decoded direction was taken as:

\mathrm{direction} = \underset{dir \in [1, 12]}{\operatorname{argmax}} \left( \prod_{i=1}^{N} p(f_i \mid dir) \right)

where N is the number of neurons and f_i is the time-averaged ΔF/F of neuron i.
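In practice a product of many small per-neuron likelihoods underflows, so such a decoder is typically evaluated in log space; a Python sketch (the callable-per-neuron interface is our own choice, standing in for the kernel density estimates fitted on training trials):

```python
import numpy as np

def ml_decode(mean_activity, likelihoods):
    """Maximum-likelihood direction under the neuron-independence assumption.

    likelihoods[d][i] is a callable returning p(f_i | direction d).
    """
    log_post = [
        sum(np.log(pdf(f) + 1e-300)          # log space avoids underflow
            for pdf, f in zip(direction_pdfs, mean_activity))
        for direction_pdfs in likelihoods
    ]
    return int(np.argmax(log_post))
```

Summing logs gives the same argmax as the product in the formula above, but remains numerically stable for large N.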

The Poisson control decoder was constructed by inferring spikes from fluorescent activity (OASIS; Friedrich et al., 2017), and then permuting the spikes for each neuron, thus preserving firing rates of individual units. For the frame analysis (Figure 7B), we used the spiking in a single frame as a binary vector input to the decoder as described for phase II. For the bin size analysis (Figure 7C), we binned the spikes and plugged in the rates as the input to the decoder. Partitioning the inputs to training and test sets and decoding performance were implemented as described for the two-phase decoder.

QUANTIFICATION AND STATISTICAL ANALYSIS

Analyses and modeling were performed in MATLAB 2017a or later. Sample sizes, statistical tests, and significance values can be found in the figure legends. Center and dispersion values in the Results section denote mean and standard deviation unless stated otherwise.

Tuning properties

We sorted neurons into tuned and untuned according to the following process: for each neuron we trimmed the fluorescence traces to the first 2 s of each stimulus epoch (out of 5 s) and the last 2 s of gray epochs (out of 3 s). We then averaged each trace and tested each stimulus condition against the preceding gray epoch in a paired-sample t test. p values were Bonferroni corrected for 12 directions (p < 0.05/12). Across datasets, 17.85% ± 15.64% (M ± SD) of neurons did not pass this test. We manually examined the cell bodies of those neurons in their respective fields of view, and their fluorescence traces, and subsequently 4 neurons across all 20 datasets were deemed artifacts and excluded. We included all other neurons that failed the test as untuned neurons in all our analyses. Cells that passed this procedure were then tested by repeated-measures analysis of variance for direction or orientation tuning significance (p < 0.01) to further split this population into tuned and untuned neurons. Responses of significantly tuned cells were then iteratively fit with a circular Gaussian (Mazurek et al., 2014), with R² = 0.72 ± 0.19 (M ± SD). We took the θ ∈ [0°, 330°] (in steps of 30°) closest to the peak of the Gaussian to be the categorical tuning of the cell.
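The first screening step, the Bonferroni-corrected paired t test of stimulus versus preceding gray responses, can be sketched as follows (Python with SciPy standing in for the MATLAB test; the (directions × trials) array layout is our assumption):

```python
import numpy as np
from scipy import stats

def is_responsive(stim_means, gray_means, alpha=0.05):
    """Paired t test of each direction's per-trial stimulus means against the
    preceding gray means, Bonferroni-corrected across directions.

    stim_means, gray_means: arrays of shape (n_directions, n_trials).
    """
    n_directions = len(stim_means)
    for d in range(n_directions):
        _, p = stats.ttest_rel(stim_means[d], gray_means[d])
        if p < alpha / n_directions:        # Bonferroni-corrected threshold
            return True                     # responsive to at least one direction
    return False
```

Neurons passing this screen would then proceed to the repeated-measures ANOVA that splits them into tuned and untuned.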

DATA AND CODE AVAILABILITY

Code used for analyses in this paper can be found at: https://figshare.com/projects/Levy_Sporns_MacLean2020_Cell_reports/76482

Supplementary Material

1
2

KEY RESOURCES TABLE

REAGENT or RESOURCE SOURCE IDENTIFIER
Experimental Models: Organisms/Strains
Mouse: C57BL/6J-Tg(Thy1-GCaMP6s)GP4.12Dkim/J The Jackson Laboratory JAX #025776
Software and Algorithms
MATLAB The MathWorks
Brain Connectivity Toolbox Rubinov and Sporns, 2010 https://sites.google.com/site/bctnet/
Analyses code This paper https://figshare.com/projects/Levy_Sporns_MacLean2020_Cell_reports/76482

Highlights.

  • Some neurons in V1 reliably respond to specific stimulus parameters and some do not

  • Functional networks containing these tuned and untuned neurons are stimulus specific

  • Tuned and untuned neurons have different positions in networks

  • Functional networks comprised of all neurons and connections improve decoding

ACKNOWLEDGMENTS

This study was funded by NIH grant EY022338. O.S. was supported by NIH grant 1R01MH121978-01. We thank members of the MacLean lab for helpful comments.

Footnotes

SUPPLEMENTAL INFORMATION

Supplemental Information can be found online at https://doi.org/10.1016/j.celrep.2020.03.047.

DECLARATION OF INTERESTS

The authors declare no competing interests.

REFERENCES

  1. Amsalem O, Van Geit W, Muller E, Markram H, and Segev I (2016). From Neuron Biophysics to Orientation Selectivity in Electrically Coupled Networks of Neocortical L2/3 Large Basket Cells. Cereb. Cortex 26, 3655–3668.
  2. Arakaki T, Barello G, and Ahmadian Y (2017). Capturing the diversity of biological tuning curves using generative adversarial networks. arXiv, arXiv:1707.04582. https://arxiv.org/abs/1707.04582.
  3. Avitan L, Pujic Z, Hughes NJ, Scott EK, and Goodhill GJ (2016). Limitations of Neural Map Topography for Decoding Spatial Information. J. Neurosci. 36, 5385–5396.
  4. Bassett DS, and Sporns O (2017). Network neuroscience. Nat. Neurosci. 20, 353–364.
  5. Chambers B, and MacLean JN (2016). Higher-Order Synaptic Interactions Coordinate Dynamics in Recurrent Networks. PLoS Comput. Biol. 12, e1005078.
  6. Chen Y, Geisler WS, and Seidemann E (2006). Optimal decoding of correlated neural population responses in the primate visual cortex. Nat. Neurosci. 9, 1412–1420.
  7. Cossell L, Iacaruso MF, Muir DR, Houlton R, Sader EN, Ko H, Hofer SB, and Mrsic-Flogel TD (2015). Functional organization of excitatory synaptic strength in primary visual cortex. Nature 518, 399–403.
  8. Dana H, Chen T-W, Hu A, Shields BC, Guo C, Looger LL, Kim DS, and Svoboda K (2014). Thy1-GCaMP6 transgenic mice for neuronal population imaging in vivo. PLoS ONE 9, e108697.
  9. Daniels BC, and Romanczuk P (2019). Quantifying the impact of network structure on speed and accuracy in collective decision-making. arXiv, arXiv:1903.09710. https://arxiv.org/abs/1903.09710.
  10. Dann B, Michaels JA, Schaffelhofer S, and Scherberger H (2016). Uniting functional network topology and oscillations in the fronto-parietal single unit network of behaving primates. eLife 5, e15719.
  11. Dechery JB, and MacLean JN (2018). Functional triplet motifs underlie accurate predictions of single-trial responses in populations of tuned and untuned V1 neurons. PLoS Comput. Biol. 14, e1006153.
  12. Domínguez-García V, and Muñoz MA (2015). Ranking species in mutualistic networks. Sci. Rep. 5, 8182.
  13. Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK, and Tolias AS (2010). Decorrelated neuronal firing in cortical microcircuits. Science 327, 584–587.
  14. Faber SP, Timme NM, Beggs JM, and Newman EL (2019). Computation is concentrated in rich clubs of local cortical networks. Netw. Neurosci. 3, 384–404.
  15. Fletcher JM, and Wennekers T (2018). From Structure to Activity: Using Centrality Measures to Predict Neuronal Activity. Int. J. Neural Syst. 28, 1750013.
  16. Friedrich J, Zhou P, and Paninski L (2017). Fast online deconvolution of calcium imaging data. PLoS Comput. Biol. 13, e1005423.
  17. Gemmetto V, Squartini T, Picciolo F, Ruzzenenti F, and Garlaschelli D (2016). Multiplexity and multireciprocity in directed multiplexes. Phys. Rev. E 94, 042316.
  18. Graf ABA, Kohn A, Jazayeri M, and Movshon JA (2011). Decoding the activity of neuronal populations in macaque primary visual cortex. Nat. Neurosci. 14, 239–245.
  19. Griffith JS, and Horn G (1966). An analysis of spontaneous impulse activity of units in the striate cortex of unrestrained cats. J. Physiol. 186, 516–534.
  20. Gürel T, De Raedt L, and Rotter S (2007). Ranking neurons for mining structure-activity relations in biological neural networks: NeuronRank. Neurocomputing 70, 1897–1901.
  21. Gutkin BS, Laing CR, Colby CL, Chow CC, and Ermentrout GB (2001). Turning on and off with excitation: the role of spike-timing asynchrony and synchrony in sustained neural activity. J. Comput. Neurosci. 11, 121–134.
  22. Hubel DH, and Wiesel TN (1959). Receptive fields of single neurones in the cat's striate cortex. J. Physiol. 148, 574–591.
  23. Josić K, Shea-Brown E, Doiron B, and de la Rocha J (2009). Stimulus-dependent correlations and population codes. Neural Comput. 21, 2774–2804.
  24. Kalatsky VA, and Stryker MP (2003). New paradigm for optical imaging: temporally encoded maps of intrinsic signal. Neuron 38, 529–545.
  25. Ko H, Cossell L, Baragli C, Antolik J, Clopath C, Hofer SB, and Mrsic-Flogel TD (2013). The emergence of functional microcircuits in visual cortex. Nature 496, 96–100.
  26. Koch KW, and Fuster JM (1989). Unit activity in monkey parietal cortex related to haptic perception and temporary memory. Exp. Brain Res. 76, 292–306.
  27. Kotekal S, and MacLean JN (2020). Recurrent interactions can explain the variance in single trial responses. PLoS Comput. Biol. 16, e1007591.
  28. Mazurek M, Kager M, and Van Hooser SD (2014). Robust quantification of orientation selectivity and direction selectivity. Front. Neural Circuits 8, 92.
  29. Nauhaus I, Busse L, Carandini M, and Ringach DL (2009). Stimulus contrast modulates functional connectivity in visual cortex. Nat. Neurosci. 12, 70–76.
  30. Niell CM, and Stryker MP (2008). Highly selective receptive fields in mouse visual cortex. J. Neurosci. 28, 7520–7536.
  31. Nigam S, Shimono M, Ito S, Yeh F-C, Timme N, Myroshnychenko M, Lapish CC, Tosi Z, Hottowy P, Smith WC, et al. (2016). Rich-Club Organization in Effective Connectivity among Cortical Neurons. J. Neurosci. 36, 670–684.
  32. Olshausen BA, and Field DJ (2005). How close are we to understanding V1? Neural Comput. 17, 1665–1699.
  33. Pajevic S, and Plenz D (2012). The organization of strong links in complex networks. Nat. Phys. 8, 429–436.
  34. Ponce-Alvarez A, Thiele A, Albright TD, Stoner GR, and Deco G (2013). Stimulus-dependent variability and noise correlations in cortical MT neurons. Proc. Natl. Acad. Sci. USA 110, 13162–13167.
  35. Ponce-Alvarez A, Jouary A, Privat M, Deco G, and Sumbre G (2018). Whole-Brain Neuronal Activity Displays Crackling Noise Dynamics. Neuron 100, 1446–1459.e6.
  36. Radicchi F, Fortunato S, Markines B, and Vespignani A (2009). Diffusion of scientific credits and the ranking of scientists. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 80, 056103.
  37. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, and Harris KD (2010). The asynchronous state in cortical circuits. Science 327, 587–590.
  38. Ringach DL, Shapley RM, and Hawken MJ (2002). Orientation selectivity in macaque V1: diversity and laminar dependence. J. Neurosci. 22, 5639–5651.
  39. Ringach DL, Mineault PJ, Tring E, Olivas ND, Garcia-Junco-Clemente P, and Trachtenberg JT (2016). Spatial clustering of tuning in mouse primary visual cortex. Nat. Commun. 7, 12270.
  40. Roxin A, Brunel N, Hansel D, Mongillo G, and van Vreeswijk C (2011). On the distribution of firing rates in networks of cortical neurons. J. Neurosci. 31, 16217–16226.
  41. Rubinov M, and Sporns O (2010). Complex network measures of brain connectivity: uses and interpretations. Neuroimage 52, 1059–1069.
  42. Senden M, Deco G, de Reus MA, Goebel R, and van den Heuvel MP (2014). Rich club organization supports a diverse set of functional network configurations. Neuroimage 96, 174–182.
  43. Shi L, Niu X, and Wan H (2015). Effect of the small-world structure on encoding performance in the primary visual cortex: an electrophysiological and modeling analysis. J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 201, 471–483.
  44. Song S, Sjöström PJ, Reigl M, Nelson S, and Chklovskii DB (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 3, e68.
  45. Sun W, Tan Z, Mensh BD, and Ji N (2016). Thalamus provides layer 4 of primary visual cortex with orientation- and direction-tuned inputs. Nat. Neurosci. 19, 308–315.
  46. Tao L, Shelley M, McLaughlin D, and Shapley R (2004). An egalitarian network model for the emergence of simple and complex cells in visual cortex. Proc. Natl. Acad. Sci. USA 101, 366–371.
  47. van den Heuvel MP, and Sporns O (2011). Rich-club organization of the human connectome. J. Neurosci. 31, 15775–15786.
  48. van den Heuvel MP, Kahn RS, Goñi J, and Sporns O (2012). High-cost, high-capacity backbone for global brain communication. Proc. Natl. Acad. Sci. USA 109, 11372–11377.
  49. Wang PI, and Marcotte EM (2010). It's the machine that matters: Predicting gene function and phenotype from protein networks. J. Proteomics 73, 2277–2289.
  50. Watanabe T (2013). Rich-club network topology to minimize synchronization cost due to phase difference among frequency-synchronized oscillators. Physica A 392, 1246–1255.
  51. Zariwala HA, Madisen L, Ahrens KF, Bernard A, Lein ES, Jones AR, and Zeng H (2011). Visual tuning properties of genetically identified layer 2/3 neuronal types in the primary visual cortex of cre-transgenic mice. Front. Syst. Neurosci. 4, 162.
  52. Zerlaut Y, Zucca S, Panzeri S, and Fellin T (2019). The Spectrum of Asynchronous Dynamics in Spiking Networks as a Model for the Diversity of Non-rhythmic Waking States in the Neocortex. Cell Rep. 27, 1119–1132.e7.
  53. Zylberberg J (2017). Untuned but not irrelevant: The role of untuned neurons in sensory information coding. bioRxiv. https://doi.org/10.1101/134379.

Associated Data


Supplementary Materials


Data Availability Statement

Code used for analyses in this paper can be found at: https://figshare.com/projects/Levy_Sporns_MacLean2020_Cell_reports/76482
