Abstract
In complex environments, animals can adopt diverse strategies to find rewards. How distinct strategies differentially engage brain circuits is not well understood. Here we investigate this question, focusing on the cortical Vip-Sst disinhibitory circuit. We characterize the behavioral strategies used by mice during a visual change detection task. Using a dynamic logistic regression model we find individual mice use mixtures of a visual comparison strategy and a statistical timing strategy. Separately, mice also have periods of task engagement and disengagement. Two-photon calcium imaging shows large strategy dependent differences in neural activity in excitatory, Sst inhibitory, and Vip inhibitory cells in response to both image changes and image omissions. In contrast, task engagement has limited effects on neural population activity. We find the diversity of neural correlates of strategy can be understood parsimoniously as increased activation of the Vip-Sst disinhibitory circuit during the visual comparison strategy which facilitates task appropriate responses.
eTOC blurb
Individuals may adopt unique strategies to find rewards, but it is unclear how strategy choice affects neural activity. Piet et al. find that mice solve a visual change detection task using two strategies, and that strategy choice influences the activity of cortical neurons in cell type-specific ways.
Introduction
Brain circuitry, including sensory cortex, does not operate in isolation but rather serves behavioral demands. Therefore, quantitative behavioral analysis is essential to understand brain circuits.1–4 Recently, computational tools have emerged for analysis of behavioral tasks, including the description of behavioral strategies.5–11 These tools vary in their statistical structure but all parameterize a space of possible behaviors which are then constrained by data. This has led to a greater appreciation of behavioral diversity, including across-subject variability and within-subject variability over time.
Behavioral strategies have been shown to alter which brain circuits are active during a task. In a tactile discrimination task, Gilad et al, 201812 found that mice used either an active strategy (proactively moving against the stimulus) or a passive strategy (allowing the stimulus to touch them). They found strategy choice altered the location of short term memory in the cortex between M2 and posterior cortex. Bolkan et al, 202213 found that task difficulty and the subject’s behavioral state determined the role of striatal pathways. Other studies have demonstrated strategy dependent neural dynamics across human fMRI,14, 15 rodent wide-field imaging,16 and cellular activity in the fruit fly.17 Prefrontal structures, including the anterior cingulate cortex and medial prefrontal cortex, have been proposed to track strategy preferences and control strategy execution.18–21 Behaviorally relevant signals also modify activity in sensory cortex,22 including signals for visual flow,23, 24 amplification of task relevant signals,25 dynamic scaling of the range of stimulus encoding,26 and value signals.27
Simultaneously, there has been growing appreciation that neural cell types mediate specific computations.28, 29 Merging lines of evidence suggest a disinhibitory circuit between vasoactive intestinal peptide-positive (Vip) interneurons, somatostatin-positive (Sst) interneurons, and excitatory cells.30–33 In this circuit, Vip neurons disinhibit pyramidal activity through inhibition of Sst inhibitory neurons.34 In the visual cortex, the Vip-Sst circuit has been implicated in multiple computations. Millman et al, 202035 found that Vip-Sst antagonism controlled the dynamic range of stimulus contrast for excitatory cells. Keller et al, 202036 found that the Vip-Sst circuit mediates the influence of visual context on excitatory cells. Fu et al, 201437 found that running in mice activates the Vip-Sst disinhibition of excitatory cells. Finally, anatomical34, 38, 39 and functional studies40 demonstrate that top-down feedback is preferentially routed through Vip neurons, influencing local circuits through Vip disinhibition.
Despite progress in analysis of behavior and cell type circuit dissection, it remains unclear how behavioral strategies influence cortical circuits through specific cell types. Here we investigate how the Vip-Sst circuit in visual cortex mediates strategy dependent demands. We used the Allen Institute Visual Behavior - 2p calcium imaging dataset to examine how the activity of Vip, Sst, and excitatory cells depend on strategy during a change detection task.41, 42 This dataset contains behavior from 376 imaging sessions from 82 mice, thus permitting analysis of behavioral diversity. We used a dynamic logistic regression model7 to identify the strategies used by mice. We find individual mice have stable mixtures of a visual comparison strategy and a statistical timing strategy. Vip, Sst, and excitatory cells recorded from mice predominantly using each strategy show dramatic differences in activity. Further, the effects of strategy were independent from task engagement and stimulus novelty. Finally, we show that strategy differences can be succinctly described by the degree of activation of the Vip-Sst disinhibitory circuit.
Results
Visual change detection task
We analyzed the Allen Institute Visual Behavior - 2p calcium imaging dataset,42 which contains 2-photon calcium imaging recordings from transgenic mice expressing the calcium indicator GCaMP6f in excitatory neurons (Slc17a7-IRES2-Cre;Camk2a-tTa;Ai93), Sst inhibitory neurons (Sst-IRES-Cre;Ai148), and Vip inhibitory neurons (Vip-IRES-Cre;Ai148). The data were collected using standardized collection and processing pipelines.41, 43 During imaging, head-fixed mice performed a visual change detection task (Fig. 1A). Mice were shown a series of natural images (250ms stimulus duration) interspersed with periods of a gray screen (500ms inter-stimulus duration). This task uses a roving baseline paradigm whereby an individual image repeats a variable number of times before a new image is presented and then itself repeats. Mice were given a water reward for licking in response to image changes. Premature licking delayed the time of the next image change. To serve as distractors, 5% of image repeats were omitted and replaced with a continuation of the gray screen for the same duration as image presentations. Image changes as well as the image immediately before the change were not omitted. Mice performed the task on a running disk. Running was “open loop” with respect to task progression and mice were free to run or not run without influence on stimulus or reward generation. Mice typically ran between licking bouts, and stopped running during licking and reward consumption (See Fig. S16). Mice learned the task through a standardized training pipeline and were selected for imaging based on a consistent criterion of minimum task performance.41, 44
Figure 1: Quantifying strategies during a change detection task.
(A) Head-fixed mice were shown repeating natural images and were rewarded for licking in response to image changes. Mice could perform this task using multiple strategies, including visually comparing images or learning statistical distributions of rewards. (B) Example lick rasters demonstrate multiple strategies. Each row is an epoch within one example session. Up to 20 examples for each strategy are shown. Gray bands show repeated images. Blue bands show “change images”, when the image changed from the previous image. Licks after change images generated water rewards (red markers). Dashed blue lines show omissions. Top left, licking aligned to image changes. Top right, licking aligned to omissions. Bottom left, licking aligned to post-omission images. Bottom right, licking aligned to a fixed time interval from the last licking bout, with epochs sorted into rewarded and unrewarded bouts. (C) Diagram of task structure, data processing, and strategies. Images were presented for 250ms with 500ms gray screens interleaved. 5% of all images were randomly omitted. Image changes were drawn from a geometric distribution. Individual licks were segmented into licking bouts. Bouts were assigned to the preceding image presentation. The model predicts whether a bout starts during each image interval, and therefore ignores images where the mouse was already licking. For each strategy we show the probability of starting a bout during each image interval. (D) Top, raster of licking, hits, and misses for full 1-hour session. Middle, time-varying strategy weights for each strategy. Bottom, licking probability in the data and model prediction smoothed with a one minute boxcar.
Mice could potentially perform this task in multiple ways. In one possible strategy, mice could hold a memory of the previous image and perform a visual comparison to the subsequent image. This visual comparison strategy may be vulnerable to distraction from image omissions. Alternatively, mice could learn the statistical distribution of when images change and make educated guesses based on the time since the last image change (Fig. 1A). To illustrate these strategies we can examine licking patterns around time points relevant to each strategy (Fig. 1B). In one example session we see: (top left) licking aligned to image changes, which results in a water reward, (top right) licking during image omissions, which never results in a reward, (bottom left) licking on the image after an omission, which never results in a reward, and (bottom right) licking on the fifth image after the last lick, which sometimes results in a water reward. Figure S1 shows more details of mouse behavior.
Behavioral strategy model
To identify these strategies across our dataset we used a dynamic logistic regression model.7 Our model predicts whether the mouse will start a licking bout in response to each image based on the weighted influence of each strategy. Importantly, the strategy weights were allowed to vary over the course of the session, constrained by a smoothing prior.
We will now describe how we processed the data and constructed the model (Fig. 1C). First, licks were segmented into licking bouts based on an inter-lick interval of 700ms (Fig. S2A). The duration of bouts was largely governed by whether the mouse received and then consumed a water reward (Fig. S2B). Therefore, we focused our analysis on predicting the start of each bout. Bout onsets were time-locked to image presentations, thus for each bout we identified the last image or omission presented before the bout started (Fig. S2C). Since our model predicts the start of bouts, we ignore images when the mouse was already in a bout.
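The bout-segmentation step can be sketched directly: a new bout begins whenever the gap since the previous lick exceeds the 700ms inter-lick interval from the text. The sketch below is illustrative only (the function name and timestamps are our own, not the paper's code):

```python
def segment_bouts(lick_times, max_interlick_s=0.700):
    """Group sorted lick timestamps (seconds) into bouts.

    A lick within 700 ms of the previous lick extends the current bout;
    otherwise it starts a new bout. Returns a list of (start, end) times.
    """
    bouts = []
    for t in lick_times:
        if bouts and t - bouts[-1][1] <= max_interlick_s:
            bouts[-1] = (bouts[-1][0], t)   # extend the current bout
        else:
            bouts.append((t, t))            # start a new bout
    return bouts

licks = [0.0, 0.1, 0.25, 2.0, 2.3, 5.0]
print(segment_bouts(licks))  # → [(0.0, 0.25), (2.0, 2.3), (5.0, 5.0)]
```

Each bout would then be assigned to the image or omission presented immediately before its start time, as described above.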
The design matrix of our strategy model describes the expected pattern of licking for each strategy. Our model uses 5 strategies, and each behavioral session had 4800 image presentations, resulting in a design matrix of size 4800 by 5. For each strategy, the entries in the design matrix express the probability that strategy would start a licking bout on each image presentation. Our model learns a weighted, time-varying, mixture of each strategy. The five strategies used in the model are a licking bias, a visual strategy, a timing strategy, an omission strategy, and a post-omission strategy. For the licking bias strategy, which is simply a bias term, the entries of the design matrix are 1 on all images representing a constant drive to lick. For the visual strategy, the entries (expected licking probabilities) are 1 on image changes, and 0 otherwise. For the omission strategy, the entries are 1 during omissions, and 0 otherwise. For the post-omission strategy, the entries are 1 during the first image after an omission, and 0 otherwise. For the timing strategy, the licking probability is a sigmoidal function of how many images have been presented since the end of the last licking bout. The shape of the sigmoidal function is controlled by a slope parameter and a midpoint parameter that were both learned from a subset of the data and then fixed for all model fits (Fig. S3). The best fit sigmoidal function has a licking probability that is low immediately after a licking bout, rises to 0.5 at 4 images after a licking bout, and then has a high licking probability at longer durations. All strategies other than the licking bias were mean-centered. This means, for example, that the visual strategy drives licking on image changes, but additionally suppresses licking on image repeats.
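Construction of the design matrix can be sketched as follows. This is a simplified illustration (our own function names; the sigmoid midpoint of 4 images follows the text, but the slope value is an arbitrary placeholder, and the handling of images during ongoing bouts is omitted):

```python
import math

def timing_prob(images_since_bout, midpoint=4.0, slope=2.0):
    """Sigmoidal timing-strategy licking probability.

    Rises to 0.5 at `midpoint` images after the last bout; the paper
    learned both parameters from data, here they are placeholders.
    """
    return 1.0 / (1.0 + math.exp(-slope * (images_since_bout - midpoint)))

def design_matrix(is_change, is_omission, images_since_bout):
    """One row of 5 strategy regressors per image presentation."""
    rows = []
    prev_omitted = False
    for chg, omit, n in zip(is_change, is_omission, images_since_bout):
        rows.append([
            1.0,                            # licking bias
            1.0 if chg else 0.0,            # visual (change detection)
            1.0 if omit else 0.0,           # omission
            1.0 if prev_omitted else 0.0,   # post-omission
            timing_prob(n),                 # timing
        ])
        prev_omitted = omit
    # mean-center every strategy column except the bias term
    for k in range(1, 5):
        mu = sum(r[k] for r in rows) / len(rows)
        for r in rows:
            r[k] -= mu
    return rows
```

After mean-centering, a positive weight on (for example) the visual strategy both raises the licking probability on changes and lowers it on repeats, as described above.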
For each image, we will refer to the appropriate slice of the design matrix as the strategy vector, xt, for that image. The strategy vector describes the probability of licking from each of the five strategies we are regressing against. Using standard, non-dynamic, logistic regression we would predict the probability a mouse licked on a given image presentation, pt, by using weights that are fixed across time, β, to combine the strategy vector for that image, xt, and passing that sum through a logistic function:

pt = 1 / (1 + exp(-β · xt))  (1)

However, by using the dynamic logistic regression model,7 we let the weight for each strategy, βk, vary for each image presentation subject to a smoothing prior. The prior is implemented by letting the weights for each strategy undergo a Gaussian random walk with standard deviation σk:

βt,k = βt-1,k + ηt,k  (2)

ηt,k ~ N(0, σk²)  (3)

The smoothing prior σk is a hyper-parameter unique to each strategy during each session and controls the volatility of each strategy. These hyper-parameters were fit by maximizing the model evidence as described in Roy et al, 2018.7 The smoothing prior constrains the strategy weights to evolve gradually over time. This balances the ability of the model to dynamically track changing behavioral patterns against over-fitting to the responses on each stimulus. The dynamic model has the same form as the standard model with the weights now a function of each image, βt:

pt = 1 / (1 + exp(-βt · xt))  (4)
The model was fit to each one-hour behavioral session by the maximum a posteriori (MAP) estimate of the weights given the data and the hyper-parameters. Figure 1D shows the example output of the model for one session. For this session, the learned weights, βt, for the licking bias and timing strategies are more volatile than the weights for the visual, omission, and post-omission strategies. The model captures the time-varying probability of licking in the data as the mouse goes through epochs of hits and misses, as well as high and low licking rates.
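The MAP objective can be sketched as a penalized negative log-likelihood: a Bernoulli term for the bout-start predictions plus a Gaussian random-walk penalty on each weight trajectory. This is a minimal illustration under our own naming (the published fits additionally optimize the σk hyper-parameters by maximizing model evidence, as in Roy et al, 2018):

```python
import math

def neg_log_posterior(betas, X, y, sigmas):
    """Negative log posterior of the dynamic logistic regression sketch.

    betas: T x K time-varying weights; X: T x K design matrix;
    y: length-T binary bout-start labels; sigmas: K random-walk std devs.
    Minimizing over betas gives the MAP weight trajectories.
    """
    nll = 0.0
    T, K = len(X), len(sigmas)
    for t in range(T):
        z = sum(betas[t][k] * X[t][k] for k in range(K))
        p = 1.0 / (1.0 + math.exp(-z))
        p = min(max(p, 1e-12), 1 - 1e-12)   # guard the logs
        nll -= y[t] * math.log(p) + (1 - y[t]) * math.log(1 - p)
    # Gaussian random-walk prior: penalize weight changes between images
    for t in range(1, T):
        for k in range(K):
            nll += (betas[t][k] - betas[t - 1][k]) ** 2 / (2 * sigmas[k] ** 2)
    return nll
```

Larger σk values make the penalty term cheaper, allowing a strategy's weight to track faster behavioral fluctuations; smaller values force a nearly constant weight, recovering the static model in the limit.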
Mouse behavior is largely described by the visual and timing strategies
We quantified the performance of the model with the area under the receiver operating characteristic (ROC) curve (Fig. 2A). In this analysis we use the model’s cross validated licking probability prediction on each image as a classifier of whether the mouse started a licking bout on each image presentation. The ROC curve computes the true positive rate and false positive rate of a classifier as the classification threshold is varied. The area under this curve (AUC) ranges from 0.5 (chance) to 1 (perfect classification across all thresholds). The dynamic model performs well, with an average AUC value of 0.83. See Figure S4 for additional model validation details. The static, or standard, logistic regression model performs poorly because it is unable to track within session fluctuations in licking rates, as seen in Fig. 1D.
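The AUC has a convenient rank-based interpretation: it equals the probability that a randomly chosen bout-start image receives a higher predicted licking probability than a randomly chosen non-bout image (ties counting one half). A minimal sketch of this identity (our own function name, not the paper's code):

```python
def roc_auc(scores, labels):
    """AUC via the rank-sum identity.

    scores: predicted licking probabilities; labels: 1 if a bout started
    on that image, 0 otherwise. Ties contribute 0.5.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0.9, 0.2, 0.7, 0.4], [1, 0, 1, 0]))  # → 1.0
```

An AUC of 0.83 therefore means that on 83% of image pairs, the model ranks the bout-start image above the non-bout image.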
Figure 2: Licking model reveals distinct task strategies.
(A) Cross validated model performance (area under the ROC curve) for the dynamic model (blue) and static model (gray) (n=382 sessions). The red line marks the average dynamic model performance (0.83). (B-H) Dots indicate individual sessions (n=382), and (B-D) black bars are population averages. (B) Average strategy weights. (C) Learned smoothing prior σ. (D) Reduction in model evidence when removing each strategy. (E) Average weights of the visual and post-omission strategies. Red line shows a linear correlation (R2 = 0.44). (F) Scatter plot of the absolute value of the reduction in model evidence (termed here as an index) for the visual and timing strategies. The strategy index is the difference between visual and timing indices. (G) Rewards per session compared with the strategy index. (H) Mice were sorted by their average strategy index. Each session from a mouse is shown in the same column. (I) 90 seconds of illustrative behavior for two example sessions with either a visual dominant strategy (top) or timing dominant strategy (bottom). Gray bands show image repeats, blue bands mark image changes, and dashed blue lines mark omissions.
For each session we can analyze the weights (Fig. 2B) and smoothing prior for each strategy (Fig. 2C). The weights are multiplied against the design matrix and then passed through the logistic function. The licking bias strategy sets the average licking probability on each image presentation. A licking bias weight of 0 would translate to a 50% licking probability, with smaller weights leading to lower licking probabilities, and larger weights leading to higher licking probabilities. The other strategies are mean-centered (the sum of their strategy design vectors equals 0), which means we can interpret the weights relative to the licking bias term. A weight of 0 would have no influence on the licking probability, a negative weight would result in less licking than the bias term, and a positive weight would result in more licking than the bias term. Across our population, the omission strategy has a negative value, meaning on average mice are less likely to lick during omissions. The other three strategies have, on average, positive values, meaning mice are more likely to lick on the image after an omission (post omission strategy), on image changes (visual strategy), and at the expected image change frequency (timing strategy). On average the strategies have smoothing priors within the same order of magnitude, with the licking bias and timing strategies generally being the most variable. However, there is significant variability across behavioral sessions in weights and priors. Figure S5 shows additional characterization of how strategy weights are correlated with the number of hits, misses, and licking probability within each session.
To evaluate the importance of each strategy, we measured the reduction in model evidence after removing each strategy (Fig. 2D). The model evidence, or marginal likelihood, measures the probability of the data given the hyper-parameters after integrating over possible parameter values. Model comparison metrics such as Bayes factor and Bayesian information criterion are based on comparing the model evidence of two models. If the model evidence decreases after removing a strategy, then the model performs worse at describing the data. We did not evaluate the model evidence without the licking bias term because it sets the average licking rate and removing it breaks the model in a trivial manner. Across our population, removing the omission strategy does not lead to a reduction in model evidence. Therefore, the omission strategy is not a meaningful descriptor of mouse behavior. Removing the post-omission strategy leads to a small decrease in model evidence, demonstrating a minor role in describing mouse behavior. Removing the visual and timing strategies leads to significant decreases in model performance, albeit with large variability across behavioral sessions. These conclusions from the model evidence analysis are supported by the frequency of licking bouts in response to image changes, omissions, and post-omission images (Fig. S1).
We focused the rest of our analysis on the visual and timing strategies based on three observations. First, the lack of change in model evidence for the omission strategy. Second, a strong correlation between the post-omission strategy and visual strategy in terms of both the changes in model evidence and their average weights (Fig. 2E). Third, after performing PCA on the matrix of changes in model evidence, we found the top two principal components explained 99.04% of the variance and were closely aligned with the timing and visual strategies, respectively (Fig. S6).
Plotting the absolute value of the change in model evidence after removing either the visual or timing strategies against each other (referred to as visual index and timing index), we find that there is a continuous spectrum of behaviors that mix the visual and timing strategies together (Fig. 2F). We term the strategy index as the difference in the change of model evidence after removing the visual and timing strategies (equivalently the difference between the visual index and timing index). A positive strategy index indicates the session was better described by the visual strategy. A negative strategy index indicates the session was better described by the timing strategy. All strategy mixes are able to earn a significant number of rewards per session (Fig. 2G). However, higher values of the strategy index tend to result in a higher number of earned rewards. The highest values of the strategy index do not always produce the most rewards because of differences in session engagement (explored below and in Fig. S9).
Mice were stable in their strategy preferences across multiple sessions (Fig. 2H, up to 4 sessions per mouse performed during calcium imaging). Mouse identity explains 72% of the variance in the strategy index across imaging sessions, compared with only 21.6 ± 3% after shuffling mouse identity (mean ± std across 10,000 shuffles, p=0). Consistent with this finding, we did not observe mice switching between strategies within a session. Further, strategy preferences emerged gradually over training (Fig. S7). Taken together we find mice develop unique strategy preferences between the visual and timing strategies that are stable over many days.
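The identity-shuffle analysis can be sketched as follows: compute the fraction of strategy-index variance captured by per-mouse means, then rebuild a null distribution by shuffling which mouse each session belongs to. Function names and the toy data are our own; the paper used 10,000 shuffles:

```python
import random

def variance_explained(values, groups):
    """Fraction of variance in `values` captured by group (mouse) means."""
    mean = sum(values) / len(values)
    total = sum((v - mean) ** 2 for v in values)
    by_group = {}
    for v, g in zip(values, groups):
        by_group.setdefault(g, []).append(v)
    gmeans = {g: sum(vs) / len(vs) for g, vs in by_group.items()}
    resid = sum((v - gmeans[g]) ** 2 for v, g in zip(values, groups))
    return 1.0 - resid / total

def shuffle_null(values, groups, n_shuffles=1000, seed=0):
    """Null distribution of variance explained under shuffled identity."""
    rng = random.Random(seed)
    shuffled = list(groups)
    null = []
    for _ in range(n_shuffles):
        rng.shuffle(shuffled)
        null.append(variance_explained(values, shuffled))
    return null
```

A true variance explained far above the shuffled null, as reported here (72% vs 21.6 ± 3%), indicates that strategy preference is a stable property of individual mice rather than of individual sessions.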
The remaining analyses categorize each session by the dominant strategy (equivalently, the sign of the strategy index). We refer to sessions best described by the visual strategy as visual strategy sessions. Likewise, we refer to sessions best described by the timing strategy as timing strategy sessions. Figure 2I illustrates 90 seconds of behavior from each of the two dominant strategies. Figure S8 provides additional characterization of the behavior from each strategy.
Task strategy is distinct from task engagement
Mice had clear patterns of disengagement when they stopped licking altogether. To demonstrate this we generated a contour plot of the licking bout rate and reward rate aggregated across sessions (Fig. 3A). The licking bout rate and reward rate were both computed with a rolling Gaussian window. Mouse behavior is divided between two regions. One region, which we term disengaged, has low licking rates and low reward rates. The other region, which we term engaged, encompasses a wider range of licking and reward rates. We define a threshold for disengagement as licking bout rates below 1 bout/10 s, and reward rates below 1 reward/120 s. Our choice of threshold and rolling window size is such that a single reward will briefly move the mouse into the engaged state. On average, mice are engaged 72.2% of the time. Figure 3B shows an illustrative example session in which the mouse transitions from engagement to disengagement.
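The engagement rule can be sketched as a simple per-timepoint classification, assuming the two rates have already been smoothed with the rolling Gaussian window described above (the function name is our own):

```python
def classify_engagement(bout_rate, reward_rate,
                        bout_thresh=1 / 10, reward_thresh=1 / 120):
    """Classify each timepoint as engaged (True) or disengaged (False).

    Disengaged only when the smoothed licking-bout rate AND reward rate
    are both below threshold (1 bout/10 s and 1 reward/120 s);
    engaged otherwise. Rates are in events per second.
    """
    return [not (b < bout_thresh and r < reward_thresh)
            for b, r in zip(bout_rate, reward_rate)]

# e.g. a point with high licking, a point with neither licking nor
# rewards, and a point with a recent reward despite low licking
print(classify_engagement([0.2, 0.05, 0.05], [0.0, 0.0, 0.01]))
# → [True, False, True]
```

Because either rate alone can exceed threshold, a single recent reward keeps the mouse classified as engaged even during a brief pause in licking, matching the design choice noted above.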
Figure 3: Strategy is distinct from engagement.
(A) Contour plot of reward rate and lick bout rate from all imaging sessions (n=382 sessions, 1,804,462 image intervals). Red line: threshold for classifying engaged behavior (1 reward/120s, 1 lick bout/10s). 60.1% of image intervals are classified as engaged. (B) Example session showing lick bout rate (solid black), licking threshold (dashed black), reward rate (red), and reward rate threshold (dashed red). (C, D) Average value of the visual and timing strategy weights across a range of licking and reward rates. Both panels show data from all imaging sessions. (E) Percentage of sessions in an engaged state at each point in the hour long behavioral session. (F) Response latency, defined as the time from the start of each licking bout to the most recent image onset, split by engaged and disengaged epochs. (G-H) Response latency for engaged (G) or disengaged (H) periods, split by visual or timing strategy sessions.
In order to determine if task engagement was related to task strategy, we plotted the average strategy weights across the same landscape of licking bout and reward rates. Both the visual (Fig. 3C) and timing (Fig. 3D) weights are at their lowest values in the disengaged region. In the engaged region, the visual strategy is highest in the upper left when the ratio of rewards to licks is maximized because the visual strategy efficiently transforms licking bouts into rewards. The timing strategy is highest in the lower end of the engaged region, because the timing strategy requires more false alarm licks to generate rewards. From this analysis we conclude that task engagement is separate from each of the dominant strategies.
Next, we examined the temporal pattern of engagement across each session (Fig. 3E). For both strategies, engagement is highest at the start of the session and gradually decreases throughout the session, presumably as the mice become sated. An equal percentage of mice performing both dominant strategies are in the engaged state at each time point. For a fixed level of engagement, mice with higher (more visual) strategy indices yield more rewards per session (Fig. S9).
Finally, we examined the timing of each licking bout with respect to the latency from the last image presentation (Fig. 3F). By definition, the engaged periods have more licking bouts. Engaged licking is time-locked to image onset, with the peak response time around 400ms. However, disengaged licking lacks this clear time-locking to image onset. Both visual and timing strategy sessions have time-locked responses during engagement (Fig. 3G), and both lack time-locked responses during disengagement (Fig. 3H). Figure S9 shows a quantification of the response time distributions. The time-locking of licking to image presentations suggests that mice performing the timing strategy make use of the fact that rewards are tied to image presentations, synchronizing their timing-based guesses to image onsets when engaged rather than licking at random without regard to the stimulus.
We conclude that mice performing both strategies go through periods of engagement and disengagement, which is a separate phenomenon from their strategy preferences. Mice performing both strategies gradually disengaged over the course of the session. Finally, mice performing both strategies have image-locked licking while engaged, and randomly timed licking when disengaged.
Strategy is reflected in neural activity across the Vip-Sst microcircuit
We next assessed whether the dominant behavioral strategy is reflected in neural activity. We focused on two cortical visual areas, V1 and LM (Fig. 4A). Previous studies found similar responses in these areas,43 and described them as early stages in a functional hierarchy of visual processing in the mouse cortex.45 We found similar neural responses in both areas (Fig. S10).
Figure 4: Neural correlates of behavioral strategy across multiple cell populations.
(A) Two-photon calcium imaging was performed in visual areas V1 and LM. (B) Cartoon of Vip-Sst microcircuit. Vip and Sst inhibitory neurons reciprocally inhibit each other. (C) Calcium events were regressed from the fluorescence traces. (D) Average calcium event magnitude of each cell class aligned to image omissions (left), hits (middle), and misses (right), split by dominant behavioral strategy. (E) Average calcium event magnitude ± hierarchically bootstrapped SEM in an interval around image changes split by strategy and whether the mouse responded. Excitatory and Sst cells show average events after image changes, (150, 250 ms) and (375, 750 ms) respectively. Vip cells show average events immediately before image changes (−375, 0 ms). (F) Average calcium event magnitude ± hb. SEM in the 750 ms interval after image presentations split by running speed and strategy. (G, H, I) Same as F after image omissions, hits, and misses. (All) Black stars indicate p<0.05 from a hierarchical bootstrap over imaging planes and cells, corrected for multiple comparisons. Gray stars indicate p<0.01.
Two-photon calcium imaging was performed in transgenic mice expressing the calcium indicator GCaMP6f in specific cell populations: excitatory neurons, Sst inhibitory neurons, and Vip inhibitory neurons. Recent work proposed a taxonomy in which excitatory neurons and GABAergic neurons are classes, and Sst and Vip neurons are subclasses of GABAergic neurons.46 For simplicity we refer to excitatory, Sst, and Vip as cell classes. The dataset contains recordings collected while mice performed the task with both familiar and novel stimuli;41 we restricted our analysis to familiar stimuli (See Fig. S20, S21).
Our dataset contains 8,619 excitatory cells (21 imaging sessions, 9 mice), 470 Sst cells (15 imaging sessions, 6 mice), and 1,239 Vip cells (21 imaging sessions, 9 mice). These cell classes are thought to form a microcircuit whereby Sst and Vip reciprocally inhibit each other (Fig. 4B). We performed our analyses on discrete calcium events that were regressed from the raw fluorescence traces, thus removing the slow decay dynamics of the calcium indicator GCaMP6f (Fig. 4C). The extracted calcium events are of variable magnitude and correspond to a transient increase in internal calcium levels.
We examined neural responses to three stimulus types: image repeats, image omissions, and image changes (Fig. 4D). We grouped cells by the dominant strategy used by the mouse during the session in which they were recorded. We first asked how each cell class responds to image repeats. Excitatory and Sst cells respond to image repeats with no difference in the population average between strategies. Excitatory cells are image selective, with each cell typically responding to only one of the 8 images presented during the session. In contrast, Sst cells are broadly image tuned; thus the population average for excitatory cells is an order of magnitude smaller than that of Sst cells. Vip cells are suppressed by image presentations and ramp their activity between image presentations. Vip cells from visual strategy sessions showed more activity between image presentations (Average calcium event magnitude ± hierarchically bootstrapped SEM, visual 0.0096 ± 0.00094, timing 0.0070 ± 0.00092, p=0.025). We assess significance and report the standard error of the mean (hb. SEM) by performing a non-parametric hierarchical bootstrap,47 which effectively controls Type I errors in the presence of hierarchical data. In summary, excitatory and Sst cells showed no strategy differences in response to image repeats, while Vip cells from visual strategy sessions showed increased activity between image repeats compared to cells from timing strategy sessions.
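The hierarchical bootstrap can be sketched as two nested levels of resampling with replacement: first imaging planes, then cells within each resampled plane. This is a minimal two-level illustration under our own naming (the published analysis follows the cited non-parametric hierarchical bootstrap procedure):

```python
import random

def hierarchical_bootstrap(planes, n_boot=1000, seed=0):
    """Two-level bootstrap of the grand mean.

    planes: list of lists, each inner list holding one response value per
    cell from a single imaging plane. Each iteration resamples planes
    with replacement, then cells within each chosen plane, and records
    the mean. The std of the returned distribution is the hierarchical
    SEM; comparing two such distributions gives a bootstrap p-value.
    """
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        sample = []
        for _ in range(len(planes)):
            plane = rng.choice(plane_list := planes)
            sample.extend(rng.choice(plane) for _ in range(len(plane)))
        means.append(sum(sample) / len(sample))
    return means
```

Resampling at the plane level as well as the cell level keeps the error bars honest when cells within a plane are correlated, which is why this controls Type I errors better than treating every cell as independent.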
We next assessed how each cell class responds to omissions. In response to omissions excitatory cells show an amplified response on the first image after the omission compared with the pre-omission image (pre-omission 0.0041 ± 0.00036, post-omission 0.0058 ± 0.00050, p = 0.0027), but this amplification was not strategy dependent (visual 0.0062 ± 0.0008, timing 0.0055 ± 0.0006, p = 0.25). Sst cells showed significant strategy dependent changes in activity during the second half of the omission interval. Cells from timing strategy sessions show increasing activity over the omission interval, while cells from visual strategy sessions show decreasing activity during the omission interval (visual 0.0052 ± 0.0021, timing 0.019 ± 0.0054, p=0.0037). Sst cells from both strategies show decreased responses on the first image after the omission compared to the pre-omission image (visual pre-omission 0.039 ± 0.0067, visual post-omission 0.026 ± 0.0046, p = 0.035, timing pre-omission 0.055 ± 0.0070, timing post-omission 0.040 ± 0.0058, p = 0.045). Vip cells ramp during the omission interval with significant differences between strategies (visual 0.036 ± 0.0038, timing 0.019 ± 0.0022, p=0.00). In summary, Sst cells from visual strategy sessions show lower activity following omissions, while Vip cells from visual strategy sessions show increased ramping activity following omissions.
We find strategy dependent differences in response to image changes, including hits (mouse licked) and misses (mouse did not lick), across all three cell classes. Figure 4E shows summary quantification of differences between hits and misses for each strategy.
Excitatory cells from visual strategy sessions show greater activity in response to hits (visual hit 0.014 ± 0.0017, timing hit 0.0097 ± 0.0014, visual hit vs timing hit p=0.04). Further, we find a significant difference between hits and misses for cells from visual, but not timing, strategy sessions (visual hit 0.014 ± 0.0017, visual miss 0.0081 ± 0.0010, timing hit 0.0097 ± 0.0014, timing miss 0.0080 ± 0.00095, visual hit vs visual miss p=0.0025). For Sst cells, cells from visual strategy sessions show lower activity during the interval after hits (visual hit 0.0015 ± 0.00074, timing hit 0.013 ± 0.0032, p=0.00). Further we find lower Sst activity following hits compared to misses for Sst cells from visual, but not timing, strategy sessions (visual hit 0.0015 ± 0.00074, visual miss 0.0059 ± 0.0021, timing hit 0.013 ± 0.0032, timing miss 0.0098 ± 0.0022, visual hit vs visual miss p=0.010). For Vip cells, in the interval before the image change we find significant differences in activation between the two strategies as well as between hits and misses for cells from visual, but not timing, strategy sessions (visual hit 0.016 ± 0.0022, visual miss 0.011 ± 0.0011, timing hit 0.0070 ± 0.00076, timing miss 0.0072 ± 0.00087, visual hit vs visual miss p=0.037, visual hit vs timing hit p=0.00, visual miss vs timing miss p=0.0003). In summary, cells from visual, but not timing, strategy sessions show differential activity in all three cell classes between hits and misses. Further, we see strategy dependent differences in responses to hits across all three cell types: excitatory cells from visual strategy sessions show greater activity after hits, Sst cells from visual strategy sessions show lower activity, and Vip cells from visual strategy sessions show greater activity before hits and misses.
Figure S11 shows the population activity censored by the reward on each trial to demonstrate that strategy dependent differences in neural activity after hits cannot be explained by reward signaling. We do not observe prominent neural correlates of false alarms, and we do not observe any evidence of timing related variables in visual cortex (Fig. S12).
Vip cells are modulated by locomotion.37 To assess whether the strategy differences we observe could be due to differences in running speed or patterns we looked at the Vip response to images, omissions, hits, and misses as a function of running speed (Fig. 4F–I). Broadly, we observe Vip activity in cells from visual strategy sessions is equal to or greater than cells from timing strategy sessions across all running speeds. We conclude that strategy differences in Vip cells cannot be a result of different running speeds or patterns. The supplemental figures show licking rates (Fig. S13), pupil diameter (Fig. S14), and running speeds aligned to task events (Fig. S16). Figure S15 also shows Vip activity on a subset of the data with running speeds matched between strategies.
Microcircuit disinhibition dynamics are amplified in the visual strategy
We now use the Vip-Sst microcircuit as a unifying model (Fig. 5A). We now group each population average by strategy in order to examine how the microcircuit responds to each stimulus (Fig. 5B). We emphasize here each cell class was recorded in a separate population of mice. We can condense this information by showing the three cell classes in a 3D state space plot (Fig. 5C), or for clarity in a 2D state space between excitatory and Vip cells. These plots reveal the dynamics of the microcircuit as a periodic cycle corresponding to the rhythmic stimulus presentations where the two strategies differ primarily in the Vip activation between image. Importantly, excitatory and Sst cells are correlated, each responding to image presentations, while Vip cells are suppressed by image presentations. Figure S18 shows 3D and 2D state space plots for all combinations of cell classes. Next, we consider omissions and image changes as perturbations to this microcircuit cycle.
Figure 5: Microcircuit disinhibition dynamics are amplified in the visual strategy.
(A) Cartoon of Vip-Sst microcircuit. (B) Population response to image repeats, grouped by strategy. (C) Population response to image repeats plotted in 3D space. (D) Population response to image repeats for excitatory cells against Vip cells. (C,D) Arrow marks forward progression of time. Black circle marks image onset in B. (E) Same as B,D for image omissions. (F) Same as B,D for hits. (G) Same as B,D for misses.
Previous work35 found that Vip-Sst antagonism regulates the gain of cortical circuits to broaden the range of contrast levels excitatory cells can encode. Comparing Vip and Sst cells in response to omissions we see this antagonism (Fig. 5E). During an omission, the Vip cells continue ramping until they are suppressed by the post-omission image. Sst cells have smaller, and excitatory cells have larger, responses to the post-omission image compared to the pre-omission image. The omission perturbs the periodic image cycle, causing Vip cells to respond to the lack of visual stimulus by amplifying the gain of the cortical circuit via inhibition of Sst cells and disinhibition of excitatory cells. Notably, these gain dynamics are amplified in cells from visual strategy sessions (Fig. 5E).
Image changes can also be understood as microcircuit perturbations (Fig. 5F, G). Excitatory cells have differential responses to both hits and misses. We emphasize here that the increased excitatory response on hits happens, on average, 200ms before the mouse responds and therefore cannot be interpreted as a reward response (Fig. S11). Vip cells from visual, but not timing, strategy sessions have differential activity before hits and misses (Fig. 4E). One interpretation of these data is that mice performing the visual strategy are more reliant on visual cortical activity to drive choices, and mice performing the timing strategy are less reliant on visual cortex activity to trigger responses, instead using some internal timing mechanism elsewhere in the brain. For mice performing the visual, but not timing, strategy, increased Vip activity could amplify excitatory responses and preferentially lead to behavioral responses. Thus, we see differential responses across all cell classes from visual, but not timing, strategy session. Increased Vip activity for visual, but not timing, strategy sessions may also explain why visual strategy sessions are more likely to respond to the post-omission image (Fig. 2E).
Trial by trial neural activity is more correlated with behavioral choices for visual than for timing strategy mice
To determine if the differences in neural activity between strategies were behaviorally relevant on an trial by trial basis, we used a random forest classifier trained on neural activity to predict either image changes versus repeats (change decoder), or hits versus misses (hit decoder). Decoding was performed on neural activity in the first 400ms after each stimulus presentation.
Changes versus repeats could be decoded equally well, for all cell classes, from visual and timing strategy sessions (Fig. 6A). We then measured the correlation between the decoder’s predictions on image changes (change vs repeat) and the animal’s choice (hit vs miss). For excitatory (p < 0.05, t-test), but not Vip or Sst cells, we find a stronger correlation for visual compared to timing sessions (Fig. 6B). For excitatory cells, the correlation between behavior and decoder predictions is more than twice as strong for visual sessions compared to timing sessions. This demonstrates that while image change information is equally present in neural activity from mice performing both strategies, it is more correlated with animals’ choices in visual strategy sessions. This is consistent with the interpretation that mice performing the visual strategy are more dependent on activity in visual cortex to drive responses.
Figure 6: Stronger behavioral choice signals in cells from visual strategy sessions.
(A-C). Decoding was performed in the first 400ms after image presentation. Error bars are SEM over imaging planes. Each cell type is plotted as a separate color, with marker size indicating the number of cells used for decoding from each imaging plane. Black asterisks mark significant differences between visual and timing sessions (p < 0.05, t-test) (A) Cross validated random forest classifier performance at decoding image changes and repeats (% correct). (B) Correlation between decoder prediction on image changes (change vs repeat) and animal behavior (hit vs miss). (C) Cross validated random forest classifier performance at decoding hits and misses (% correct).
Next, we decoded hits and misses. For excitatory and Vip cells (p < 0.05, t-test), but not Sst cells, decoder performance was higher for visual compared with timing strategy sessions. The strategy difference was not significant for the largest value of n, the number of cells used to decode, this could be from saturating information or limited data in those conditions. Finally, for all cell classes, false alarm decoding performance was very low, with no difference between strategies (Fig. S22).
Engagement state has limited modulation of neural activity
To investigate neural correlates of engagement, we plotted neural activity aligned to omissions and misses (the disengaged state does not contain hits) split by engagement (Fig. 7). Excitatory cells from visual, but not timing, strategy sessions, show elevated responses to misses when engaged (visual engaged 0.010 ± 0.0016, visual disengaged 0.0069 ± 0.00088, p = 0.032). We do not observe any other significant differences in average activity between engagement states, even when controlling for running speed in Vip cells (Fig. S19). Thus, the effects of strategy are separate from task engagement.
Figure 7: Task engagement has minor effects on neural population activity.
(A) Population response from each dominant strategy, split by epochs of task engagement and disengagement, aligned to image omissions (left), or image change misses (right). Error bars are ± SEM. (B) Average calcium event magnitude ± hierarchically bootstrapped SEM in a interval (150ms, 250ms) around image changes split by strategy and whether the mouse responded. * indicates p<0.05 from a hierarchical bootstrap over imaging planes and cells, corrected for multiple comparisons.
Both dominant strategies show robust effects of image novelty
Our analysis to this point has examined neural activity when mice are shown familiar stimuli they have seen many times. This dataset also contains neural activity in response to novel stimuli. Garrett et al, 202341 found, in this dataset, exposure to novel stimuli dramatically altered neural activity. This raises two questions. First, behaviorally, does strategy change with novel stimuli? Second, how do cells from each dominant strategy respond to novel stimuli? Analyzing behavior, we see a small but significant shift in strategy preference towards the visual strategy on the novel image session (Fig. S20). This shift manifests as most mice slightly increasing their strategy index, rather than individual mice making dramatic changes in strategy. Analyzing neural activity we see cells (Fig. S21) from both strategies show the effects of novelty documented in Garrett et al, 2023.41 Further, on the novel session the primary effects reported in Figure 4 are still present. Namely, Vip cells from visual strategy sessions had increased activity on omissions, and before hits. Likewise, for visual, but not timing, strategy session excitatory cells have increased responses to hits compared to misses. Thus, the effects of stimulus novelty are separate from strategy preference.
Discussion
We found that mice used unique mixtures of a visual comparison and statistical timing strategy on a change detection task. Individual mouse strategy preferences were stable over multiple behavioral sessions and emerged gradually over training. Mice performing either strategy gradually disengaged from the task throughout each behavioral session. We found a diversity of neural correlates of strategy across excitatory, Sst inhibitory, and Vip inhibitory neurons. These neural correlates can be understood through increased activation of the Vip-Sst circuit in mice performing the visual strategy. Despite clear effects of task strategy we found only limited neural correlates of task engagement. Additionally, we found the effects of strategy preference persist when the mice perform the task with novel stimuli, despite robust changes to population activity. Our findings demonstrate that behavioral strategy alters neural activity within visual cortex, and is mediated by specific cell classes.
Behavioral diversity
We note that it could have been possible to alter the training pipeline to push mice away from the timing strategy, skip over mice performing the timing strategy for neural imaging, or post-hoc exclude their data from neural analysis. Such practices are common in neuroscience laboratories due to the practicalities of limited experimental resources and a desire to clearly isolate single behaviors or computations. Recently there has been considerable discussion over the advantages of naturalistic behavior and tightly controlled laboratory tasks.48, 49 Naturalistic behavior offers ethological relevance and behavioral richness, while laboratory tasks can isolate behaviors and yield reproducibility. We demonstrate a middle ground that compliments the existing range of behavioral paradigms in the field: large scale brain observatories with a task that subjects can solve in multiple ways. We find behavioral richness across our population of mice, but still harness the advantages of well defined stimuli and task structure.
How do mice learn their strategy preferences?
Mice could adopt a strategy based on subtle biases such as visual ability, cognitive ability, or difficulty licking the reward spout. Alternatively, mice could be biased to one strategy based on variability in early behavioral exploration. The visual strategy maximizes rewards, but perhaps the timing strategy is a local maximum in behavioral performance that some mice settle into early in training. Finally, it is possible that prior to performing the task, some mice could be predispositioned to adopting a strategy as the result of genetics, or life experience. The learning processes that lead mice to each strategy may involve unique behavioral states and neural mechanisms.50, 51
How does strategy preference amplify Vip-Sst circuitry?
Vip cells are known to be preferential targets of top-down feedback,34, 39 and may facilitate strategy dependent modifications of visual cortex. External feedback could result in higher tonic activation of Vip cells during visual strategy sessions, or neuromodulatory input could alter intrinsic Vip firing patterns.37, 52, 53
Strategy preference could also recruit different visual pathways and brain structures.13 Our results suggest that the timing mice may execute this task primarily through other brain structures outside visual cortex. We re-iterate here that licking bouts from timing mice are time-locked to image presentations, which demonstrates that timing strategy mice still use visual input. Retinal ganglion cells project to many sub-cortical structures,54 which could facilitate the simpler visual processing required for the timing strategy.
Engagement and novelty
Mice performing both dominant strategies displayed periods of task engagement and disengagement (see also Ashwood et al, 20229). Across both dominant strategies we observed limited neural correlates of task engagement. This may be puzzling, especially given some similarities between engagement and visual attention. However, we caution that engagement is a separate phenomenon from visual attention. With this reservation in mind, we note that Myers-Joseph et al. 202255 found that attention modulation operates distinctly from Vip disinhibition. Consistent with our findings, Pho et al. 201856 found little modulation of neural activity by engagement in V1 but significant modulation in posterior parietal cortex.
Our full visual field stimuli with high contrast may be salient enough to evoke visual responses regardless of task engagement. If our task operated at a perceptual threshold we might observe stronger modulation of neural activity by task engagement.
Strategy differences in neural activity are present during both engaged and disengaged states, and persistent during novel stimulus presentations. The neural correlates of strategy during this task are therefore not a fleeting activity pattern, or a flexible state to be switched on and off, rather it appears to be a deeply ingrained change in the cortical circuit that the mouse develops to solve the task consistently well.
Future directions
A naive view of sensory circuits might expect veridical encoding of the sensory world. However our findings highlight that task strategy is mediated through changes in neural activity in sensory cortex. This result raises general questions about how, when, and why cognitive states alter sensory processing. Future studies should seek mechanistic understanding of how cognitive states influence local circuit processing in visual cortex, how local circuit processing changes with learning, and how cognitive states influence the propagation of sensory information up the visual hierarchy into deeper brain structures. Behavioral paradigm development could pursue task designs that motivate fast-time scale switching between strategies and engagement states which may facilitate uncovering neural mechanisms of strategy dependent sensory processing.
Our findings are based on neural population activity. However, Garrett et al, 202341 found functional clusters of neural activity within each cell type. Future studies should investigate how cognitive states influence cortical circuits through activation or modification of specific patterns of functional cell types above and beyond the population averages of genetically defined cell types studied here.
STAR Methods
Resource Availability
Lead Contact
For further details, please contact Alex Piet, alex.piet@alleninstitute.org
Materials availability
No new materials were created for this study.
Data and Code availability
This paper analyzes existing, publicly available data. All data used in this study, behavioral and neural, is available at https://portal.brain-map.org/explore/circuits/visual-behavior-2p
All original code has been deposited at Zenodo, with DOI for the version of record in the key resources table.
Any additional information required to reanalyze the data reported in this work paper is available from the Lead Contact upon request.
Key resources table
REAGENT or RESOURCE | SOURCE | IDENTIFIER |
---|---|---|
Software and algorithms | ||
Behavior analysis code | This study | https://zenodo.org/doi/10.5281/zenodo.10576343 |
Neural analysis code | This study | https://zenodo.org/doi/10.5281/zenodo.10576347 |
Psytrack | Roy et al, 20218 | https://github.com/nicholas-roy/psytrack/commit/c342a2c |
Other | ||
Behavior and neural data | Garrett et al, 202341 | https://portal.brain-map.org/explore/circuits/visual-behavior-2p |
Strategy model fits | This study | https://figshare.com/projects/Allen_Institute_Visual_Behavior_Strategy_Paper/160972 |
Experimental model and study participant details
The dataset analyzed here was previously described in (Garrett et al, 2023).41 In summary, the behavioral sessions and in vivo neural recordings were performed in male and female transgenic mice expressing GCaMP6 in various Cre-defined cell populations. The three genotypes used in this study were Slc17a7: Slc17a7-IRES2-Cre;Camk2atTA;Ai93(TITL-GCaMP6f), Sst: Sst-IRES-Cre;Ai148(TIT2L-GC6f-ICL-tTA2), Vip: Vip-IRES-Cre;Ai148(TIT2L-GC6fICL-tTA2). Mice were singly-housed and maintained on a reverse 12-hour light cycle; all experiments were performed during the dark cycle.
Method details
Data selection
The collection and processing of all data in the study was previously described in Garrett et al, 2023,41 and is available at https://portal.brain-map.org/explore/circuits/visual-behavior-2p. For our behavioral analysis we used all active behavioral sessions from mice in the V1 and LM datasets across all image set experience levels (familiar images, novel images, and repeated exposure to novel images). For all analyses we combined mice trained on image sets A and B. For neural analysis we used neurons recorded during familiar image set presentations on the multi-plane imaging rig. Except where noted we combined across cells from V1 and LM and across cortical depths.
Mouse Training
As described in Garrett et al, 2023,41 the training pipeline consisted of 7 stages before imaging:
Training 0 - Mice learned to lick for rewards
Training 1 - Mice earned rewards when a static grating changed orientation
Training 2 - Static gratings were interleaved with a gray screen
Training 3 - Static gratings are replaced with natural images
Training 4 - Rewards decrease in size, and free rewards are no longer given when the mouse misses 10 changes in a row
Training 5 - Performance must be consistently above a minimum threshold
Habitation - Mice performed the task on the imaging rig
Imaging - Mice performed the task with familiar images, then with novel images. Imaging was also performed during passive viewing of the same stimulus, which was not analyzed here.
Our main figures analyze the mouse behavior during the imaging sessions. See Figure S7 for mouse behavior over training.
Behavioral data processing
We performed all of our behavioral analysis after assigning behavioral events to each image presentation interval. By image presentation interval we refer to the 750 ms interval beginning with each image presentation. For image omissions we used the 750 ms following the time of the omission, when the image should have been presented. Licks were segmented into licking bouts using an inter-lick interval of 700 ms. This threshold was determined by visual inspection of the histogram of inter-lick intervals. The start and end of each licking bout was then assigned to an image presentation interval.
Strategy model
The strategy model predicts whether a licking bout started on each image presentation interval. We thus excluded from our model fits any image presentation intervals where a licking bout was on-going at the start of the interval. The licking bias strategy vector was defined as a 1 on every interval. The visual, omission, and post-omissions strategy vectors were defined as 1 on intervals with the respective stimulus and 0 otherwise. The timing strategy vector was a number between 0 and 1 based on a sigmoidal function of how many image intervals since the end of the last licking bout. See Figure S3 for details on how the sigmoidal function used in timing strategy was determined. Except for the licking bias, all strategy vectors where then mean-centered.
Constructing the timing strategy
A subset of 45 sessions were used to construct the timing regressor. The strategy model was fit with 10 timing regressors, each using 1-hot encoding for different length delays since the image with the end of the last licking bout. Then a four parameter sigmoid was fit to the average weights of each of these 1-hot timing regressors. The equation of the four parameter sigmoid used to construct the timing regressor is given by:
(5) |
Here, ymin and ymax scale the vertical limits of the sigmoid, a controls the midpoint of the sigmoid, and b influences the slope of the sigmoid. The slope of the sigmoid at the midpoint is given by −b/4a. After fitting the four parameter sigmoid to the average weights from the 45 sessions, we fixed ymin=0, ymax=1, but kept the best fit values for a = 4, b = −5. This had the effect of scaling the timing sigmoid between to have unit height. Since the timing regressor is ultimately multiplied by a weight on each image, scaling the timing sigmoid does not alter the flexibility of the model and aids in interpretation. The fixed sigmoid parameters used when fitting the full model to each session were: ymin=0, ymax=1, a = 4, b = −5. The timing regressor for each session was mean-centered. See Figure S3 for details.
Fitting the strategy model
We fit the strategy model using the PsyTrack package,7, 8 ( https://github.com/nicholas-roy/psytrack). The PsyTrack package fits the model through an empirical Bayes procedure. The hyper-parameters are first selected by maximizing the model evidence. Then the strategy weights were determined by the MAP estimate. The model hyper-parameters and strategy weights were fit separately for every behavioral session.
Model performance was determined using the cross-validated model predictions as a classifier to determine whether a mouse initiated a licking bout on each image interval. The receiver operator curve determines the rate of true positives against false positives as a function of different classifier thresholds. The area under the curve provides a summary statistic to compare models. We defined the visual (or timing) index as the absolute value of the percentage change in model evidence after removing the visual (or timing) strategy. A small visual index indicates the quality of model fit did not decrease much when the visual strategy was removed. A large visual index indicates the quality of the model fit did decrease substantially when the visual strategy was removed. The strategy index was defined as the difference between the visual index and timing index. Behavioral sessions were classified as visually dominant or timing dominant by determining which strategy led to the greater decrease in model evidence when that strategy was removed.
Task Engagement
Task engagement periods were determined by applying a threshold to the reward rate and lick bout rate. Task engagement was determined for each image presentation interval. Both rates were calculated by annotating which image presentations intervals had rewards and lick bout initiation. We then smoothed across image presentations with a filter. We used a Gaussian filter with standard deviation of 60 images. We then converted both rates into units of events per second. We set the thresholds of 1 reward per 120 seconds and 1 lick bout per 10 seconds through visual inspection of the behavioral landscape in Figure 3A. If either rate was above its threshold, then the interval was labeled engaged.
Neural data
For all analysis of neural data we used the detected calcium events as described in Garrett et al, 202341 and https://portal.brain-map.org/explore/circuits/visual-behavior-2p. This process produces, for each cell, a set of calcium events each with a time and magnitude. Except where noted we used familiar image set sessions collected on the multiplane calcium imaging rig.
To generate the population averages traces we compute the behavioral event triggered response for each cell to each behavioral event and then average across all cells in each population. We compute the behavioral event triggered response by isolating the calcium events around the triggering behavioral event, then linearly interpolating onto a consistent set of 30hz timestamps relative to the triggering behavioral event. This produces a vector of calcium event magnitudes for each cell relative to each behavioral event (omission, image change, or repeat).
To generate the average calcium response across a time interval, we first compute the event triggered response for each cell as described above. We then average across all time points in the same image interval, producing a single scalar for each cell on each image interval. Average calcium responses were computed for either the entire image interval (50, 800 ms), the first half of each image interval (50, 425 ms), the second half of each image interval (425, 800 ms), or a more narrow stimulus locked window for excitatory cells (150, 250 ms). We used a 50 ms delay for the image interval, (50, 800 ms) rather than (0, 750 ms), to account for signal propagation to visual cortex.
Hierarchical bootstrap analysis
To determine significance for average calcium response metrics we applied a non-parametric hierarchical bootstrap method.47 On each bootstrap iteration we sampled with replacement from first imaging planes, and then for each imaging plane we sampled with replacement from cells from that plane. Averaging across all of these cells produces one bootstrap sample. For all of our analyses, we used this procedure to generate 10,000 samples. The standard deviation of this set of samples produces an estimate of the standard error of the mean. To performance hypothesis testing we assigned samples from each condition into random pairs and performed pairwise comparisons to determine what fraction of samples from each condition was greater or less than the other condition. In this context an imaging plane is a specific cortical area and depth from one behavioral session, so by sampling imaging planes we are effectively sampling over sessions and mice. We corrected for multiple comparisons through the Benjamini-Hochberg procedure.57
Running speed
Running speed traces were processed in the same manner as calcium event traces. For each behavioral session, we computed the event triggered running trace by isolating running time-points around the triggering behavioral event then linearly interpolating onto a common 30hz timeseries. We then averaged across all points in the relevant time window.
Decoding analysis
To decode task signals on an image by image basis we used a random forest classifier to predict either image changes versus repeats (change decoder), or hits versus misses (hit decoder). We iterated over the number, n, of simultaneously recorded neurons used in the decoding analysis. For each imaging plane we sampled neurons and performed decoding until there was a 99% probability all neurons had been used in decoding. For each sample we took n neurons without replacement and concatenated their neural activity on each image presentation to make a k×nt matrix X. Here k is the number of images, and t is the number of timesteps from each neuron on each image. The change decoder used each image change and the image repeat immediately before the image change. The hit decoder used all image changes. We then performed 5-fold cross validated decoding using the RandomForestClassifier package from sklearn.58 We evaluated decoder performance as the percentage of test-set images correctly classified. We averaged the performance of all samples from the same imaging plane and report summary statistics as the mean ± SEM over imaging planes. We computed the correlation between the change decoder’s predictions and the animal’s choices using the phi coefficient.
False alarm analysis
We define false alarms as non-change image presentations (the image is a repeat of the previous presentation) where the mouse initiates a licking bout, the mouse did not lick during the preceding two images, and the preceding two images were not image omissions or image changes. This definition allows for a fair comparison with neural activity aligned to hits and misses as image changes had a minimum 4 image separation from previous image changes and licking bouts, and never happened immediately after image omissions.
Quantification and statistical analysis
All statistical analysis was performed using custom software that was written in Python, and is available in the Key resources table. Significance was assessed at p = 0.05, except as noted in Figure 4F–I which additionally reports significance at p = 0.1, after correcting for multiple comparisons with the Benjamini-Hochberg procedure. For all analyses, the statistical test performed, the size of the data used in the statistical test ”n”, and what n represents are described in the figure legends and Method details.
In our analysis of neural responses we adopted the hierarchical bootstrap approach due to the nested structure of our dataset (mice, cells, stimulus presentations), and to avoid assumptions about normality in the data.
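A hierarchical bootstrap for nested data of this kind resamples with replacement at every level of the hierarchy. The sketch below illustrates the idea on simulated data with three levels (mice, cells, presentations); the data structure, seed, and sample sizes are assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nested data: data[mouse][cell] is an array of responses,
# one per stimulus presentation.
data = {m: {c: rng.normal(loc=m, size=30) for c in range(5)}
        for m in range(4)}

def hierarchical_bootstrap_mean(data, n_boot=1000, rng=rng):
    """Bootstrap the grand mean, resampling at every level of nesting."""
    mice = list(data)
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        vals = []
        # Level 1: resample mice with replacement.
        for m in rng.choice(mice, size=len(mice), replace=True):
            cells = list(data[m])
            # Level 2: resample cells within the sampled mouse.
            for c in rng.choice(cells, size=len(cells), replace=True):
                resp = data[m][c]
                # Level 3: resample stimulus presentations.
                idx = rng.integers(0, len(resp), size=len(resp))
                vals.append(resp[idx].mean())
        boot_means[b] = np.mean(vals)
    return boot_means

boot = hierarchical_bootstrap_mean(data, n_boot=200)
ci = np.percentile(boot, [2.5, 97.5])   # nonparametric 95% CI, no normality assumed
```

The resulting bootstrap distribution supports percentile confidence intervals without assuming normality, which is the motivation stated above.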
Additional Resources
Tutorials on using the dataset are available at: https://portal.brain-map.org/explore/circuits/visual-behavior-2p
Supplementary Material
Figure S1. Quantification of mouse behavior, related to Figure 1
Figure S2. Licks were segmented into licking bouts and aligned to image onset, related to Figure 1
Figure S3. Constructing the timing regressor, related to Figure 1
Figure S4. Model validation, related to Figure 2
Figure S5. Average strategy weights are correlated with task events, related to Figure 2
Figure S6. Principal Components Analysis (PCA) on strategy index, related to Figure 2
Figure S7. Strategy over training, related to Figure 2
Figure S8. Strategy behavior over time, related to Figure 2
Figure S9. Analysis of engagement, related to Figure 3
Figure S10. Comparing neural activity in V1 and LM, related to Figure 4
Figure S11. Strategy differences in response to hits are not due to reward signals, related to Figure 4
Figure S12. Neural correlates of behavioral strategy aligned to false alarms, related to Figure 4
Figure S13. Licking rates aligned to task events, related to Figure 4
Figure S14. Pupil diameter aligned to task events, related to Figure 4
Figure S15. Distribution of running speeds split by strategy and transgenic line, related to Figure 4
Figure S16. Running speeds aligned to task events, related to Figure 4
Figure S17. Running matched Vip mice, related to Figure 4
Figure S18. Microcircuit dynamics, related to Figure 5
Figure S19. Running speed and task engagement, related to Figure 7
Figure S20. Stimulus novelty has a small influence on strategy, related to Figure 4
Figure S21. Both dominant strategies show robust changes to novel stimuli, related to Figure 4
Figure S22. False alarm decoding, related to Figure 6
Highlights
Mice were trained to perform a visual change detection task
They used distinct strategies – visual comparison or statistical timing
Two-photon calcium imaging shows strategy dependent differences across cell types
The visual strategy activates the Vip-Sst disinhibitory circuit in visual cortex
Acknowledgements
We thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement, and support. Funding for this study was provided by the Allen Institute. We thank the members of the Allen Institute for a fruitful scientific community and helpful discussions. AP thanks Tyler Boyd-Meredith for comments on the manuscript, and Nick Roy for developing the dynamic logistic regression model, the associated code package PsyTrack, and helpful discussions. Research reported in this publication was supported by the National Institute of Mental Health of the National Institutes of Health under Award Number U01MH130907. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Footnotes
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Declaration of interests
The authors declare no competing interests.
References
- 1. Carandini M (2012). From circuits to behavior: a bridge too far? Nature Neuroscience, 15 4, 507–509. ISSN 1546–1726. doi: 10.1038/nn.3043.
- 2. Gomez-Marin A, Paton JJ, Kampff AR, Costa RM, and Mainen ZF (2014). Big behavioral data: psychology, ethology and the foundations of neuroscience. Nature Neuroscience, 17 11, 1455–1462. ISSN 1546–1726. doi: 10.1038/nn.3812.
- 3. Krakauer JW, Ghazanfar AA, Gomez-Marin A, MacIver MA, and Poeppel D (2017). Neuroscience needs behavior: Correcting a reductionist bias. Neuron, 93 3, 480–490.
- 4. Niv Y (2021). The primacy of behavioral research for understanding the brain. Behav. Neurosci, 135 5, 601–609.
- 5. Brunton BW, Botvinick MM, and Brody CD (2013). Rats and humans can optimally accumulate evidence for decision-making. Science, 340 6128, 95–98.
- 6. Berman GJ (2018). Measuring behavior across scales. BMC Biol, 16 1, 23.
- 7. Roy NA, Bak JH, Akrami A, Brody C, and Pillow JW (2018). Efficient inference for time-varying behavior during learning. In Bengio S, Wallach H, Larochelle H, Grauman K, Cesa-Bianchi N, and Garnett R, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.
- 8. Roy NA, Bak JH, International Brain Laboratory, Akrami A, Brody CD, and Pillow JW (2021). Extracting the dynamics of behavior in sensory decision-making experiments. Neuron, 109 4, 597–610.e6.
- 9. Ashwood ZC, Roy NA, Stone IR, Urai AE, Churchland AK, Pouget A, Pillow JW, and Laboratory TIB (2022). Mice alternate between discrete strategies during perceptual decision-making. Nature Neuroscience, 25 2, 201–212. ISSN 1546–1726. doi: 10.1038/s41593-021-01007-z.
- 10. Jha A, Ashwood ZC, and Pillow JW (2022). Bayesian active learning for discrete latent variable models. doi: 10.48550/ARXIV.2202.13426.
- 11. Le NM, Yildirim M, Wang Y, Sugihara H, Jazayeri M, and Sur M (2022). Mixture of learning strategies underlies rodent behavior in dynamic foraging. bioRxiv. doi: 10.1101/2022.03.14.484338.
- 12. Gilad A, Gallero-Salas Y, Groos D, and Helmchen F (2018). Behavioral strategy determines frontal or posterior location of short-term memory in neocortex. Neuron, 99 4, 814–828.e7.
- 13. Bolkan SS, Stone IR, Pinto L, Ashwood ZC, Iravedra Garcia JM, Herman AL, Singh P, Bandi A, Cox J, Zimmerman CA, Cho JR, Engelhard B, Pillow JW, and Witten IB (2022). Opponent control of behavior by dorsomedial striatal pathways depends on task demands and internal state. Nat. Neurosci, 25 3, 345–357.
- 14. Venkatraman V, Payne JW, Bettman JR, Luce MF, and Huettel SA (2009). Separate neural mechanisms underlie choices and strategic preferences in risky decision making. Neuron, 62 4, 593–602.
- 15. Yang Y, Sibert C, and Stocco A (2023). Competing decision-making systems are adaptively chosen based on individual differences in brain connectivity. bioRxiv. doi: 10.1101/2023.01.10.523458.
- 16. Pinto L, Rajan K, DePasquale B, Thiberge SY, Tank DW, and Brody CD (2019). Task-dependent changes in the large-scale dynamics and necessity of cortical regions. Neuron, 104 4, 810–824.e9.
- 17. Calhoun AJ, Pillow JW, and Murthy M (2019). Unsupervised identification of the internal states that shape natural behavior. Nat. Neurosci, 22 12, 2040–2049.
- 18. Tervo DGR, Kuleshova E, Manakov M, Proskurin M, Karlsson M, Lustig A, Behnam R, and Karpova AY (2021). The anterior cingulate cortex directs exploration of alternative strategies. Neuron, 109 11, 1876–1887.e6.
- 19. Domenech P, Rheims S, and Koechlin E (2020). Neural mechanisms resolving exploitation-exploration dilemmas in the medial prefrontal cortex. Science, 369 6507, eabb0184.
- 20. Schuck NW, Gaschler R, Wenke D, Heinzle J, Frensch PA, Haynes J-D, and Reverberi C (2015). Medial prefrontal cortex predicts internally driven strategy shifts. Neuron, 86 1, 331–340.
- 21. Proskurin M, Manakov M, and Karpova AY (2022). ACC neural ensemble dynamics are structured by strategy prevalence. bioRxiv. doi: 10.1101/2022.11.17.516909.
- 22. Tseng S-Y, Chettih SN, Arlt C, Barroso-Luque R, and Harvey CD (2022). Shared and specialized coding across posterior cortical areas for dynamic navigation decisions. Neuron, 110 15, 2484–2502.e16.
- 23. Leinweber M, Ward DR, Sobczak JM, Attinger A, and Keller GB (2017). A sensorimotor circuit in mouse cortex for visual flow predictions. Neuron, 95 6, 1420–1432.e5.
- 24. Schneider DM (2020). Reflections of action in sensory cortex. Curr. Opin. Neurobiol, 64, 53–59.
- 25. Kim J, Erskine A, Cheung JA, and Hires SA (2020). Behavioral and neural bases of tactile shape discrimination learning in head-fixed mice. Neuron, 108 5, 953–967.e8.
- 26. Waiblinger C, McDonnell ME, Reedy AR, Borden PY, and Stanley GB (2022). Emerging experience-dependent dynamics in primary somatosensory cortex reflect behavioral adaptation. Nat. Commun, 13 1, 534.
- 27. Banerjee A, Parente G, Teutsch J, Lewis C, Voigt FF, and Helmchen F (2020). Value-guided remapping of sensory cortex by lateral orbitofrontal cortex. Nature, 585 7824, 245–250.
- 28. Pinto L and Dan Y (2015). Cell-type-specific activity in prefrontal cortex during goal-directed behavior. Neuron, 87 2, 437–450.
- 29. Sylwestrak EL, Jo Y, Vesuna S, Wang X, Holcomb B, Tien RH, Kim DK, Fenno L, Ramakrishnan C, Allen WE, Chen R, Shenoy KV, Sussillo D, and Deisseroth K (2022). Cell-type-specific population dynamics of diverse reward computations. Cell, 185 19, 3568–3587.e27.
- 30. Pfeffer CK, Xue M, He M, Huang ZJ, and Scanziani M (2013). Inhibition of inhibition in visual cortex: the logic of connections between molecularly distinct interneurons. Nat. Neurosci, 16 8, 1068–1076.
- 31. Kullander K and Topolnik L (2021). Cortical disinhibitory circuits: cell types, connectivity and function. Trends Neurosci, 44 8, 643–657.
- 32. Campagnola L, Seeman SC, Chartrand T, Kim L, Hoggarth A, Gamlin C, Ito S, Trinh J, Davoudian P, Radaelli C, Kim M-H, Hage T, Braun T, Alfiler L, Andrade J, Bohn P, Dalley R, Henry A, Kebede S, Alice M, Sandman D, Williams G, Larsen R, Teeter C, Daigle TL, Berry K, Dotson N, Enstrom R, Gorham M, Hupp M, Dingman Lee S, Ngo K, Nicovich PR, Potekhina L, Ransford S, Gary A, Goldy J, McMillen D, Pham T, Tieu M, Siverts L, Walker M, Farrell C, Schroedter M, Slaughterbeck C, Cobb C, Ellenbogen R, Gwinn RP, Keene CD, Ko AL, Ojemann JG, Silbergeld DL, Carey D, Casper T, Crichton K, Clark M, Dee N, Ellingwood L, Gloe J, Kroll M, Sulc J, Tung H, Wadhwani K, Brouner K, Egdorf T, Maxwell M, McGraw M, Pom CA, Ruiz A, Bomben J, Feng D, Hejazinia N, Shi S, Szafer A, Wakeman W, Phillips J, Bernard A, Esposito L, D’Orazi FD, Sunkin S, Smith K, Tasic B, Arkhipov A, Sorensen S, Lein E, Koch C, Murphy G, Zeng H, and Jarsky T (2022). Local connectivity and synaptic dynamics in mouse and human neocortex. Science, 375 6585, eabj5861.
- 33. Karnani MM, Jackson J, Ayzenshtat I, Tucciarone J, Manoocheri K, Snider WG, and Yuste R (2016). Cooperative subnetworks of molecularly similar interneurons in mouse neocortex. Neuron, 90 1, 86–100.
- 34. Kamigaki T (2019). Dissecting executive control circuits with neuron types. Neurosci. Res, 141, 13–22.
- 35. Millman DJ, Ocker GK, Caldejon S, Kato I, Larkin JD, Lee EK, Luviano J, Nayan C, Nguyen TV, North K, Seid S, White C, Lecoq J, Reid C, Buice MA, and de Vries SE (2020). VIP interneurons in mouse primary visual cortex selectively enhance responses to weak but specific stimuli. Elife, 9.
- 36. Keller AJ, Dipoppa M, Roth MM, Caudill MS, Ingrosso A, Miller KD, and Scanziani M (2020). A disinhibitory circuit for contextual modulation in primary visual cortex. Neuron, 108 6, 1181–1193.e8.
- 37. Fu Y, Tucciarone JM, Espinosa JS, Sheng N, Darcy DP, Nicoll RA, Huang ZJ, and Stryker MP (2014). A cortical circuit for gain control by behavioral state. Cell, 156 6, 1139–1152.
- 38. Williams LE and Holtmaat A (2019). Higher-order thalamocortical inputs gate synaptic long-term potentiation via disinhibition. Neuron, 101 1, 91–102.e4.
- 39. Ma G, Liu Y, Wang L, Xiao Z, Song K, Wang Y, Peng W, Liu X, Wang Z, Jin S, Tao Z, Li CT, Xu T, Xu F, Xu M, and Zhang S (2021). Hierarchy in sensory processing reflected by innervation balance on cortical interneurons. Sci. Adv, 7 20, eabf5676.
- 40. Pi H-J, Hangya B, Kvitsiani D, Sanders JI, Huang ZJ, and Kepecs A (2013). Cortical interneurons that specialize in disinhibitory control. Nature, 503 7477, 521–524.
- 41. Garrett M, Groblewski P, Piet A, Ollerenshaw D, Najafi F, Yavorska I, Amster A, Bennett C, Buice M, Caldejon S, Casal L, D’Orazi F, Daniel S, de Vries SE, Kapner D, Kiggins J, Lecoq J, Ledochowitsch P, Manavi S, Mei N, Morrison CB, Naylor S, Orlova N, Perkins J, Ponvert N, Roll C, Seid S, Williams D, Williford A, Ahmed R, Amine D, Billeh Y, Bowman C, Cain N, Cho A, Dawe T, Departee M, Desoto M, Feng D, Gale S, Gelfand E, Gradis N, Grasso C, Hancock N, Hu B, Hytnen R, Jia X, Johnson T, Kato I, Kivikas S, Kuan L, L’Heureux Q, Lambert S, Leon A, Liang E, Long F, Mace K, Magrans de Abril I, Mochizuki C, Nayan C, North K, Ng L, Ocker GK, Oliver M, Rhoads P, Ronellenfitch K, Schelonka K, Sevigny J, Sullivan D, Sutton B, Swapp J, Nguyen TK, Waughman X, Wilkes J, Wang M, Farrell C, Wakeman W, Zeng H, Phillips J, Mihalas S, Arkhipov A, Koch C, and Olsen SR (2023). Stimulus novelty uncovers coding diversity in visual cortical circuits. bioRxiv. doi: 10.1101/2023.02.14.528085.
- 42. Allen Institute (2022). https://portal.brain-map.org/explore/circuits/visual-behavior-2p.
- 43. de Vries SEJ, Lecoq JA, Buice MA, Groblewski PA, Ocker GK, Oliver M, Feng D, Cain N, Ledochowitsch P, Millman D, Roll K, Garrett M, Keenan T, Kuan L, Mihalas S, Olsen S, Thompson C, Wakeman W, Waters J, Williams D, Barber C, Berbesque N, Blanchard B, Bowles N, Caldejon SD, Casal L, Cho A, Cross S, Dang C, Dolbeare T, Edwards M, Galbraith J, Gaudreault N, Gilbert TL, Griffin F, Hargrave P, Howard R, Huang L, Jewell S, Keller N, Knoblich U, Larkin JD, Larsen R, Lau C, Lee E, Lee F, Leon A, Li L, Long F, Luviano J, Mace K, Nguyen T, Perkins J, Robertson M, Seid S, Shea-Brown E, Shi J, Sjoquist N, Slaughterbeck C, Sullivan D, Valenza R, White C, Williford A, Witten DM, Zhuang J, Zeng H, Farrell C, Ng L, Bernard A, Phillips JW, Reid RC, and Koch C (2020). A large-scale standardized physiological survey reveals functional organization of the mouse visual cortex. Nat. Neurosci, 23 1, 138–151.
- 44. Groblewski PA, Ollerenshaw DR, Kiggins JT, Garrett ME, Mochizuki C, Casal L, Cross S, Mace K, Swapp J, Manavi S, Williams D, Mihalas S, and Olsen SR (2020). Characterization of learning, motivation, and visual perception in five transgenic mouse lines expressing GCaMP in distinct cell populations. Front. Behav. Neurosci, 14, 104.
- 45. Siegle JH, Jia X, Durand S, Gale S, Bennett C, Graddis N, Heller G, Ramirez TK, Choi H, Luviano JA, Groblewski PA, Ahmed R, Arkhipov A, Bernard A, Billeh YN, Brown D, Buice MA, Cain N, Caldejon S, Casal L, Cho A, Chvilicek M, Cox TC, Dai K, Denman DJ, de Vries SEJ, Dietzman R, Esposito L, Farrell C, Feng D, Galbraith J, Garrett M, Gelfand EC, Hancock N, Harris JA, Howard R, Hu B, Hytnen R, Iyer R, Jessett E, Johnson K, Kato I, Kiggins J, Lambert S, Lecoq J, Ledochowitsch P, Lee JH, Leon A, Li Y, Liang E, Long F, Mace K, Melchior J, Millman D, Mollenkopf T, Nayan C, Ng L, Ngo K, Nguyen T, Nicovich PR, North K, Ocker GK, Ollerenshaw D, Oliver M, Pachitariu M, Perkins J, Reding M, Reid D, Robertson M, Ronellenfitch K, Seid S, Slaughterbeck C, Stoecklin M, Sullivan D, Sutton B, Swapp J, Thompson C, Turner K, Wakeman W, Whitesell JD, Williams D, Williford A, Young R, Zeng H, Naylor S, Phillips JW, Reid RC, Mihalas S, Olsen SR, and Koch C (2021). Survey of spiking in the mouse visual system reveals functional hierarchy. Nature, 592 7852, 86–92.
- 46. Tasic B, Yao Z, Graybuck LT, Smith KA, Nguyen TN, Bertagnolli D, Goldy J, Garren E, Economo MN, Viswanathan S, Penn O, Bakken T, Menon V, Miller J, Fong O, Hirokawa KE, Lathia K, Rimorin C, Tieu M, Larsen R, Casper T, Barkan E, Kroll M, Parry S, Shapovalova NV, Hirschstein D, Pendergraft J, Sullivan HA, Kim TK, Szafer A, Dee N, Groblewski P, Wickersham I, Cetin A, Harris JA, Levi BP, Sunkin SM, Madisen L, Daigle TL, Looger L, Bernard A, Phillips J, Lein E, Hawrylycz M, Svoboda K, Jones AR, Koch C, and Zeng H (2018). Shared and distinct transcriptomic cell types across neocortical areas. Nature, 563 7729, 72–78.
- 47. Saravanan V, Berman GJ, and Sober SJ (2020). Application of the hierarchical bootstrap to multi-level data in neuroscience. Neuron. Behav. Data Anal. Theory, 3 5.
- 48. Juavinett AL, Erlich JC, and Churchland AK (2018). Decision-making behaviors: weighing ethology, complexity, and sensorimotor compatibility. Curr. Opin. Neurobiol, 49, 42–50.
- 49. Musall S, Urai AE, Sussillo D, and Churchland AK (2019). Harnessing behavioral diversity to understand neural computations for cognition. Curr. Opin. Neurobiol, 58, 229–238.
- 50. Rosenberg M, Zhang T, Perona P, and Meister M (2021). Mice in a labyrinth show rapid learning, sudden insight, and efficient exploration. Elife, 10.
- 51. Meister M (2022). Learning, fast and slow. Current Opinion in Neurobiology, 75, 102555. ISSN 0959–4388. doi: 10.1016/j.conb.2022.102555.
- 52. Férézou I, Cauli B, Hill EL, Rossier J, Hamel E, and Lambolez B (2002). 5-HT3 receptors mediate serotonergic fast synaptic excitation of neocortical vasoactive intestinal peptide/cholecystokinin interneurons. J. Neurosci, 22 17, 7389–7397.
- 53. Prönneke A, Witte M, Möck M, and Staiger JF (2020). Neuromodulation leads to a burst-tonic switch in a subset of VIP neurons in mouse primary somatosensory (barrel) cortex. Cereb. Cortex, 30 2, 488–504.
- 54. Martersteck EM, Hirokawa KE, Evarts M, Bernard A, Duan X, Li Y, Ng L, Oh SW, Ouellette B, Royall JJ, Stoecklin M, Wang Q, Zeng H, Sanes JR, and Harris JA (2017). Diverse central projection patterns of retinal ganglion cells. Cell Rep, 18 8, 2058–2072.
- 55. Myers-Joseph D, Wilmes KA, Fernandez-Otero M, Clopath C, and Khan AG (2023). Attentional modulation is orthogonal to disinhibition by VIP interneurons in primary visual cortex. bioRxiv. doi: 10.1101/2022.11.28.518253.
- 56. Pho GN, Goard MJ, Woodson J, Crawford B, and Sur M (2018). Task-dependent representations of stimulus and choice in mouse parietal cortex. Nat. Commun, 9 1, 2596.
- 57. Benjamini Y, Drai D, Elmer G, Kafkafi N, and Golani I (2001). Controlling the false discovery rate in behavior genetics research. Behav. Brain Res, 125 1–2, 279–284.
- 58. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, and Duchesnay E (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825–2830.
Data Availability Statement
This paper analyzes existing, publicly available data. All behavioral and neural data used in this study are available at https://portal.brain-map.org/explore/circuits/visual-behavior-2p
All original code has been deposited at Zenodo; the DOI for the version of record is listed in the Key resources table.
Any additional information required to reanalyze the data reported in this paper is available from the Lead Contact upon request.
Key resources table
REAGENT or RESOURCE | SOURCE | IDENTIFIER |
---|---|---|
Software and algorithms | ||
Behavior analysis code | This study | https://zenodo.org/doi/10.5281/zenodo.10576343 |
Neural analysis code | This study | https://zenodo.org/doi/10.5281/zenodo.10576347 |
Psytrack | Roy et al., 2021 (ref. 8) | https://github.com/nicholas-roy/psytrack/commit/c342a2c |
Other | ||
Behavior and neural data | Garrett et al., 2023 (ref. 41) | https://portal.brain-map.org/explore/circuits/visual-behavior-2p |
Strategy model fits | This study | https://figshare.com/projects/Allen_Institute_Visual_Behavior_Strategy_Paper/160972 |