Author manuscript; available in PMC: 2026 Apr 18.
Published before final editing as: Curr Opin Neurobiol. 2026 Mar 27;98:103190. doi: 10.1016/j.conb.2026.103190

Addressing degeneracy in rodent studies of cognition

Christine M Constantinople 1,*
PMCID: PMC13089310  NIHMSID: NIHMS2155460  PMID: 41903432

Abstract

In order to relate neural dynamics to cognitive computations, it is critical to determine whether subjects are employing behavioral strategies that actually use those computations, as opposed to simpler heuristics. Here, I describe the problem of adjudicating between degenerate behavioral algorithms and degenerate neural implementations in perceptual and value-based decision-making tasks. Multi-dimensional behavioral measures allow researchers to build arguments from numerous lines of evidence in favor of or against hypotheses about algorithmic strategies. Neural perturbations and individual differences can help resolve degenerate neural implementations. Rigorously contending with these issues is critical for leveraging rodent models to reveal mechanisms of cognition.

Introduction

Degeneracy is a phenomenon in which different solutions can produce identical outputs1. DNA is degenerate, in that multiple codons can give rise to the same amino acid2. Different combinations of ion channels can produce similar biophysical properties in neurons3,4. At the circuit level, different combinations of synapse strengths, intrinsic conductances, and other neural properties can give rise to seemingly identical patterns of dynamic activity5.

The goal of this review is to highlight the problem of degeneracy in neuroscientific studies of cognition. Degeneracy poses challenges for studies across species, including rodents, non-human primates, and humans. However, in my view, the extent to which rodent studies contend with these issues is more variable and uneven than in primate studies. My hope is that greater awareness will increase the rigor and quality of mechanistic studies of cognition in rodents.

Marr famously argued that any information processing system, including the brain, can be studied at three levels of analysis6. The computational level seeks to understand the system’s goal or the problem it is trying to solve. The algorithmic level seeks to characterize the neural representations, transformations, and algorithms by which the computational goal is achieved. The implementational level describes the mechanisms by which those algorithms are instantiated in hardware, i.e., neural circuits. Most studies of cognition in animal models rely on fluid or food deprivation schedules and fluid or food reinforcers to incentivize task performance. Therefore, it is reasonable to assume that the computational goal of the animal is to maximize rewards and minimize various costs (effort, cognitive, time).

The thorny challenge is to identify the algorithms by which animals achieve that goal. Multiple algorithms can produce similar patterns of choice behavior, i.e., can be degenerate. Romo and colleagues highlighted this issue using a task in which they trained monkeys to compare the frequencies of two vibrotactile stimuli applied to the fingertip and report the higher frequency stimulus7,8. The stimuli were separated by a delay to enable the study of working memory of the first stimulus during the delay period. However, if the first stimulus was fixed and only the second stimulus varied, the monkeys ignored the first stimulus and simply compared the second stimulus to a learned threshold, producing compelling psychometric functions and high accuracy7. When both stimuli instead varied randomly on each trial, the monkeys compared them, enabling the study of working memory7–9. This is an example of algorithmic degeneracy: when the first stimulus was fixed, the threshold algorithm and parametric working memory both produced similar patterns of behavioral choices and psychometric functions.
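This degeneracy can be made concrete with a toy sketch (the frequencies and the criterion value below are illustrative, not taken from the original studies): with the first stimulus fixed, a strategy that compares the second stimulus to the remembered first one and a strategy that compares it to a learned criterion produce identical choices on every trial.

```python
# Toy illustration of the vibrotactile task degeneracy (values hypothetical).
import numpy as np

f1_fixed = 20.0                                   # fixed first stimulus (Hz)
f2_values = np.array([14.0, 16.0, 18.0, 22.0, 24.0, 26.0])  # second stimuli

working_memory = f2_values > f1_fixed             # compare f2 to remembered f1
threshold_rule = f2_values > 20.0                 # compare f2 to learned criterion

# With f1 fixed, the two strategies are behaviorally indistinguishable.
assert np.array_equal(working_memory, threshold_rule)
```

Only when the first stimulus varies from trial to trial do the two strategies make different predictions, which is exactly the manipulation Romo and colleagues introduced.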

At the implementational level, behavioral strategies can appear algorithmically identical, but differ in how they are implemented by neural circuits. Animal and human brains are remarkably robust to focal lesions or perturbations10,11, suggesting that distributed networks can compensate for local perturbations or damage. This is an example of implementational degeneracy: multiple combinations of brain areas can support similar algorithmic strategies. This review will describe examples of algorithmic and implementational degeneracy during decision-making tasks, and provide basic prescriptions for resolving them.

Algorithmic degeneracy

There is a strong tradition of considering algorithmic degeneracy in accumulation of evidence tasks, in which subjects are presented with one or more sequences of momentary sensory evidence whose mean across time drives a binary decision12,13. These tasks were designed to study the integration of evidence over time, and are often modeled as a drift diffusion process. Several algorithms can produce compelling psychometric performance in these tasks. In addition to integrating evidence throughout the entire trial, subjects might integrate evidence from brief, randomly selected epochs (a different 200 ms window on each trial), or integrate only brief bursts of strong evidence. Because evidence in any time window reflects the generative trial statistics, it will tend to favor the correct perceptual judgment, and that tendency strengthens with stimulus strength. Therefore, these algorithms produce psychometric functions that are consistent with true integration (Figure 1).
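A minimal simulation makes this concrete. The sketch below uses hypothetical noise levels, window lengths, and burst thresholds (not the parameters of any published model): three strategies applied to the same noisy momentary evidence all yield sigmoidal psychometric functions.

```python
# Sketch: degenerate decision strategies yield similar psychometric curves.
# Noise level, window length, and burst threshold are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def trial_evidence(coherence, n_steps=50, noise=1.0):
    """Momentary evidence whose mean tracks signed stimulus strength."""
    return coherence + noise * rng.standard_normal(n_steps)

def choose(evidence, strategy):
    """Return 1 for a rightward choice under a given strategy."""
    if strategy == "integrate":       # accumulate evidence over the whole trial
        dv = evidence.sum()
    elif strategy == "snapshot":      # a different brief window on each trial
        start = rng.integers(0, len(evidence) - 10)
        dv = evidence[start:start + 10].sum()
    else:                             # "burst": first sample exceeding a threshold
        strong = evidence[np.abs(evidence) > 2.0]
        dv = strong[0] if strong.size else evidence.sum()
    return int(dv > 0)

coherences = np.linspace(-0.5, 0.5, 9)
psychometric = {
    s: [np.mean([choose(trial_evidence(c), s) for _ in range(500)])
        for c in coherences]
    for s in ("integrate", "snapshot", "burst")
}
# Each curve rises from near 0 to near 1 with signed stimulus strength.
```

Plotting `psychometric` reproduces the qualitative point of Figure 1: the curves alone cannot distinguish the strategies, so diagnostic trial types are needed.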

Figure 1: Degenerate algorithms in accumulation of evidence.

Psychometric performance of model simulations solving the random-dot-motion task using different strategies: true integration, burst detection, or random snapshots. Reproduced from Stine et al., 2020, eLife.

One approach is to identify trials in which a non-integration algorithm would lead to errors: for burst detection, trials in which the first burst of stimuli favors the weaker evidence stream. Animals perform above chance on these trials, arguing against a burst detection algorithm14,15. Experimental choices about task design, including whether stimuli are presented continuously or in discrete pulses, dictate which analyses are most informative for resolving algorithmic strategies14–17. The psychometric function is necessary but not sufficient to infer the algorithm.

As another example, my lab developed a temporal wagering task in which rats reveal how much they value water rewards based on how long they wait for them18,19 (Figure 2a). Rats experience blocks of trials with different average rewards, and how long they are willing to wait reflects both the current reward and the block, such that rats wait less time for rewards in high reward blocks, when the opportunity cost is high18–20, consistent with foraging theories21,22 (Figure 2b–d). Rats could estimate the opportunity cost using degenerate algorithms. They might infer the hidden block and use a learned, block-specific estimate of opportunity cost, estimate the opportunity cost via reinforcement learning, treat the previous reward as the opportunity cost, or use divisive normalization23 (Figure 2e).

Figure 2: Degenerate strategies produce block sensitivity in the temporal wagering task.

a. Schematic of behavioral paradigm. b. Block structure of task. c. Mean wait times on catch trials by reward in each block for one representative rat. d. The foraging theory-inspired behavioral model compares the value of the reward offer to the opportunity cost, or what the rat might miss out on by continuing to wait, which differs in each block. The starting point of the value function reflects the reward offer. e. Simulations of models that use different strategies to estimate the opportunity cost of time. The inference model infers the most likely block using Bayes' rule. The model-free reinforcement learning model estimates the opportunity cost as a running average of past rewards via a delta rule (simulated with a learning rate of 0.3). The one-trial-back model treats the reward offer on the previous trial as the opportunity cost. The divisive normalization model divides the current offer by the average of past rewards. Units are arbitrary, as scaling factors determine the actual wait times in seconds.
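The four candidate estimators described in the caption can be sketched in a few lines. This is a sketch in arbitrary units; the unit-variance Gaussian likelihood, the flat prior, the reward values, and the learning rate are illustrative assumptions, not the fitted parameters of the published models.

```python
# Sketch of degenerate opportunity-cost estimators (arbitrary units;
# the Gaussian likelihood and learning rate are illustrative assumptions).
import numpy as np

def rl_estimate(rewards, alpha=0.3):
    """Model-free delta rule: running average of past rewards."""
    kappa = 0.0
    for r in rewards:
        kappa += alpha * (r - kappa)    # delta-rule update
    return kappa

def one_trial_back(rewards):
    """Treat the previous trial's reward offer as the opportunity cost."""
    return rewards[-1]

def divisive_normalization(current_offer, rewards):
    """Divide the current offer by the average of past rewards."""
    return current_offer / np.mean(rewards)

def infer_block(rewards, block_means):
    """Posterior over hidden blocks via Bayes' rule (flat prior)."""
    block_means = np.asarray(block_means, dtype=float)
    log_post = np.zeros(len(block_means))
    for r in rewards:
        log_post += -0.5 * (r - block_means) ** 2   # unit-variance Gaussian
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

rewards = [38, 41, 40]                   # recent offers near a high block
kappa = rl_estimate(rewards)             # drifts gradually toward recent rewards
posterior = infer_block(rewards, block_means=[10, 25, 40])
```

All four estimators are sensitive to the block in which rewards are delivered, which is why multi-dimensional measures such as transition dynamics and single-trial responses are needed to tell them apart.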

We have used multi-dimensional behavioral measures, including the dynamics of behavioral changes at different block transitions, sensitivity to previous rewards, responses to highly informative single trials, and sensitivity to sequences of trials within a session, among others, to resolve the algorithm by which rats estimate opportunity cost18,19. Across a range of metrics, rats’ behavior was most consistent with inferring the block and using a block-specific estimate of opportunity cost when deciding how long to wait. Bidirectional perturbation experiments and neural recordings have provided additional evidence for inferential strategies19,24.

Within-trial degeneracy for different actions

Even on single trials, different algorithms can support sequential actions or aspects of behavior. In the temporal wagering task, rats also modulated how quickly they initiated trials based on the blocks, but multi-dimensional behavioral analyses revealed that they used a reinforcement learning algorithm (instead of state inference) to estimate the value of the environment when deciding how quickly to initiate trials18,25–27. Initiation and wait times on the same trial, just seconds apart, reflected distinct algorithms that coarsely produced similar behavioral patterns18. Initiation and wait time decisions rely on distinct neural circuits: manipulating dopamine in the ventral striatum impacted initiation times but not wait times26, while inactivating the orbitofrontal cortex (OFC) impacted wait times but not initiation times19. Studies in non-human primates have found evidence for similar phenomena28. Multi-agent reinforcement learning models explicitly model behavior as reflecting combinations of different reinforcement learning algorithms29–32. Monkeys performing the two-step task exhibited different combinations of reinforcement learning algorithms for choice versus response vigor on the same trials33.

These results are also reminiscent of findings from the "Restaurant Row" task, in which humans and rodents make accept/reject decisions about whether to wait for rewards for a specified delay34,35. Both the decision to accept the offer and the decision to continue waiting were sensitive to the value of the offer. However, only the decision to continue waiting was sensitive to sunk costs, in which the amount of time subjects had already waited increased their willingness to continue to wait. These sequential choices (accepting an offer, then waiting for it) reflect distinct valuation processes that are differentially susceptible to sunk costs34,35. Previous studies have shown that motivation, attention, and exploration can drift within sessions, causing animals to employ different algorithms over time36–39, indicating that algorithmic strategies can also be dynamic.

Prescriptions for algorithmic degeneracy

Certain task designs can reduce the possibility of degeneracy by discouraging the use of simple heuristics that would otherwise yield comparable performance. Memory-guided tasks present a transient stimulus whose location or identity must be reported after a delay period40. While animals may keep the stimulus in working memory, they might instead plan the behavioral report without keeping the stimulus itself in memory. If one wants to study working memory separately from motor preparation, sequential delayed comparison paradigms, such as delayed match/non-match-to-sample tasks, separate the stimulus from the response and prevent the animal from anticipating the correct motor response during the delay period, although animals could still rely on embodied heuristics for working memory.

During task design, one should pre-emptively consider and design multiple, independent behavioral analyses that will be used to determine whether the animal has actually learned the task and is performing the desired algorithmic strategy. Multi-dimensional behavioral analyses that can adjudicate between degenerate strategies are only possible in tasks with a certain degree of richness. For instance, bandit tasks, which are widely used, provide low-dimensional measures: binary choices and binary reward outcomes. Choice probabilities can be conditioned on reward history, but there are few other ways to evaluate behavior, making it difficult to develop model-independent analyses that can resolve algorithmic strategies. However, richness should be targeted, meaning tasks should include diagnostic manipulations that test competing hypotheses, not simply add complexity. For instance, in accumulation of evidence tasks, experimenters can parametrically vary the strength and precise timing of sensory evidence, as well as the duration of the stimulus. Variants of these tasks allow subjects to opt out of the decision for a small but guaranteed reward, indicating their confidence in the perceptual judgment, which can be related to stimulus features41–43.

Once one has settled on a task design, it is productive to develop behavioral models that implement different algorithmic strategies for the task. Simulating the behavior of these models can identify trial types or conditions in which they make different predictions, and thus inspire iterative improvements in task design and diagnostic behavioral analyses14,15,17,44–46 (for instance, trials in which the first burst of sensory evidence favors the incorrect choice in accumulation of evidence tasks). A model's behavior depends on the parameter values that are used, so one might simulate the behavioral output of a model using the best parameters fit to data44,45, or across a range of plausible parameter values46. If parameters in a model trade off, this limits interpretability.
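This approach can be sketched as follows (all generative parameters are hypothetical): simulate trials, select the subset in which the first strong evidence pulse favors the incorrect side, and compare strategies on that subset, where burst detection fails by construction while an integrator remains above chance.

```python
# Diagnostic-trial analysis sketch (hypothetical generative parameters):
# on trials whose first strong pulse misleads, the strategies come apart.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_steps = 5000, 50
coherence = 0.3                                    # true answer: rightward
evidence = coherence + rng.standard_normal((n_trials, n_steps))

integrator_choice = evidence.sum(axis=1) > 0       # full integration

burst_choice = np.empty(n_trials, dtype=bool)
misleading = np.zeros(n_trials, dtype=bool)        # first burst favors left
for i in range(n_trials):
    strong = evidence[i][np.abs(evidence[i]) > 2.0]
    if strong.size:
        burst_choice[i] = strong[0] > 0
        misleading[i] = strong[0] < 0
    else:                                          # no burst: fall back to sum
        burst_choice[i] = evidence[i].sum() > 0

# By construction, burst detection is wrong on every misleading trial,
# while the integrator remains well above chance on the same trials.
burst_acc = burst_choice[misleading].mean()
integrator_acc = integrator_choice[misleading].mean()
```

Above-chance performance on such trials in real animals is the kind of evidence used to argue against burst detection14,15.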

How should one identify potential degenerate algorithms? If there are precedents in the literature, that is a good place to start. This is more likely if tasks are “deeply sampled” or well-studied across multiple experiments47. If not, one will have to brainstorm, simulate possible heuristic strategies, and evaluate their performance. Addressing algorithmic degeneracy can be an extended process that spans multiple studies.

Neural recordings can help resolve algorithmic degeneracy. For centuries, economists were agnostic about the mental processes driving economic choices (based on subjective preferences) because there was no way to know whether people were actually computing and comparing subjective values. One can only infer value from choices, so the behavioral definition of subjective value is circular48. The finding that neurons in some brain regions encode value provides a measure of subjective value that is independent of choice, and breaks this circularity49–54. As another example, the debate about whether neurons in monkey LIP exhibit ramping or stepping dynamics during evidence accumulation is a debate about LIP but also about algorithmic strategy: were the monkeys performing true integration, or a step-like algorithm such as burst detection55–57?

Degenerate neural implementations

Behavioral strategies can appear algorithmically identical, but differ in how they are implemented at the neural level. Work in mice and monkeys has shown that training history can change which cortical areas are required for behavior58–60, indicating degenerate combinations of brain areas capable of supporting task performance. Studies of accumulation of evidence in monkeys have documented within-session behavioral compensation following inactivation of individual brain regions61,62. The recruitment of unaffected circuits may contribute to such compensation. A study of accumulation of evidence in mice showed that behavior within a session fluctuated between states that appeared outwardly identical but differed in the causal requirement of the dorsomedial striatum63. As noted above, brains are remarkably robust to focal lesions or perturbations10,11, and adaptive recruitment of non-perturbed brain regions is thought to contribute to functional recovery following brain damage64,65. Therefore, the same algorithmic strategy may reflect degenerate neural implementations: in these cases, reliance on different combinations of brain regions.

Perturbation experiments are more powerful for resolving how brain regions support behavior than regressing activity patterns against task variables, even as the recent proliferation of large-scale recording methods facilitates the latter approach. The brain is a heavily interconnected dynamical system, and neural circuits can reflect task variables without causally contributing to behavior66–68. Invertebrate central pattern generator circuits exhibit similar dynamics across subjects under baseline conditions, but temperature69, pH70, or modulatory71 perturbations reveal degenerate circuit implementations. Recurrent neural networks that learn about action values via synaptic plasticity or neural integration exhibit identical foraging behavior, but their implementations are resolvable by perturbations72. Negative results of perturbation experiments can be difficult to interpret if compensation is rapid and difficult to resolve. Moreover, inactivating entire brain regions can produce null results that obscure a causal behavioral role for specific projection classes in those regions73,74. However, perturbations that produce algorithmic deficits generate hypotheses about the behavioral or cognitive operation performed by the circuit9,19,24,43,75–77, and distinguish whether a brain area's activity is permissive versus instructive for behavior78,79. Mice have become prominent models in neuroscience because of the availability of powerful experimental tools not only for monitoring but also for manipulating neural dynamics. Studies that exclusively rely on recordings fall short of realizing the potential of rodent models.

Implementational degeneracy was further highlighted by a recent study that leveraged high-throughput training to record from many rats performing a context-dependent accumulation of evidence task80. The authors engineered recurrent neural network (RNN) models that instantiated similar algorithmic strategies but different implementational mechanisms for context-dependent integration, and identified analyses of behavior and neural responses that distinguished these neural mechanisms. These analyses revealed that some rats achieved context-dependent accumulation by modulating the impact of sensory inputs on neural dynamics in a particular way, while others used different mechanisms, such as exhibiting distinct context-dependent recurrent dynamics. The authors thereby resolved different neural mechanisms supporting similar algorithmic strategies80.

Conclusions

Algorithmic strategies should be treated as hypotheses to be rigorously tested. Tasks designed with degeneracy in mind can support behavioral analyses that help adjudicate between degenerate algorithms. Behavioral models are powerful tools for identifying testable qualitative predictions about degenerate algorithms. Perturbation experiments should be used with behavioral modeling and thoughtful interpretation to identify deficits at the algorithmic level and resolve implementational degeneracy. Seriously contending with these issues is critical for characterizing neural mechanisms of cognition.

Acknowledgments

I thank Danique Jeurissen, Charles Kopec, Camillo Padoa-Schioppa, Marino Pagan, Lucas Pinto, Cristina Savin, two generous anonymous reviewers, and the members of my lab for thoughtful discussions and feedback.

Funding:

This work was supported by National Institutes of Health grants R01MH136272 and R01DA065418, and by an NSF CAREER Award (CMC).


Declaration of interests

The author declares that she has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  • 1.Edelman GM & Gally JA Degeneracy and complexity in biological systems. Proceedings of the national academy of sciences 98, 13763–13768 (2001). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Agris PF, Vendeix FA & Graham WD tRNA’s wobble decoding of the genome: 40 years of modification. Journal of molecular biology 366, 1–13 (2007). [DOI] [PubMed] [Google Scholar]
  • 3.Goldman MS, Golowasch J, Marder E & Abbott L Dependence of firing pattern on intrinsic ionic conductances: sensitive and insensitive combinations. Neurocomputing 32, 141–146 (2000). [Google Scholar]
  • 4.Goaillard J-M & Marder E Ion channel degeneracy, variability, and covariation in neuron and circuit resilience. Annual review of neuroscience 44, 335–357 (2021). [DOI] [PubMed] [Google Scholar]
  • 5.Prinz AA, Bucher D & Marder E Similar network activity from disparate circuit parameters. Nature neuroscience 7, 1345–1352 (2004). [DOI] [PubMed] [Google Scholar]
  • 6.Marr D Vision: A computational investigation into the human representation and processing of visual information (MIT press, 2010). [Google Scholar]
  • 7.Hernandez A, Salinas E, Garcıa R & Romo R Discrimination in the sense of flutter: new psychophysical measurements in monkeys. Journal of Neuroscience 17, 6391–6400 (1997). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Romo R & de Lafuente V Conversion of sensory signals into perceptual decisions. Progress in neurobiology 103, 41–75 (2013). [DOI] [PubMed] [Google Scholar]
  • 9.Akrami A, Kopec CD, Diamond ME & Brody CD Posterior parietal cortex represents sensory history and mediates its effects on behaviour. Nature 554, 368–372 (2018). [DOI] [PubMed] [Google Scholar]
  • 10.Cramer SC et al. A functional MRI study of subjects recovered from hemiparetic stroke. Stroke 28, 2518–2527 (1997). [DOI] [PubMed] [Google Scholar]
  • 11.Li N, Daie K, Svoboda K & Druckmann S Robust neuronal dynamics in premotor cortex during motor planning. Nature 532, 459–464 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Gold JI & Shadlen MN The neural basis of decision making. Annu. Rev. Neurosci 30, 535–574 (2007). [DOI] [PubMed] [Google Scholar]
  • 13.Brody CD & Hanks TD Neural underpinnings of the evidence accumulator. Current opinion in neurobiology 37, 149–157 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Hyafil A et al. Temporal integration is a robust feature of perceptual decisions. Elife 12, e84045 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]; **This study proposed and applied new analytical tools for testing integration from non-integration strategies during accumulation of evidence. It highlights the importance of using appropriate stimuli and task designs to rigorously rule out non-integration strategies.
  • 15.Kopec CD et al. To integrate or not to integrate: Testing degenerate strategies for solving an accumulation of perceptual evidence decision-making task. bioRxiv, 2024–08 (2024). [Google Scholar]
  • 16.Ditterich J Stochastic models of decisions about motion direction: behavior and physiology. Neural networks 19, 981–1012 (2006). [DOI] [PubMed] [Google Scholar]
  • 17.Stine GM, Zylberberg A, Ditterich J & Shadlen MN Differentiating between integration and non-integration strategies in perceptual decision making. Elife 9, e55365 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Mah A, Schiereck SS, Bossio V & Constantinople CM Distinct value computations support rapid sequential decisions. Nature communications 14, 7573 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]; **This study leveraged statistically powerful datasets from high-throughput training of hundreds of rats, multi-dimensional analyses of behavior, and modeling to characterize how different value computations support distinct actions on single trials.
  • 19.Schiereck SS et al. The orbitofrontal cortex updates beliefs for state inference. Neuron (2025). [DOI] [PMC free article] [PubMed] [Google Scholar]; **This study used a multitude of behavioral analyses, neural perturbations, and analysis of neural dynamics to test hypotheses about behavioral strategies in rats performing the temporal wagering task at different stages of training.
  • 20.Hocker D, Constantinople CM & Savin C Compositional pretraining improves computational efficiency and matches animal behaviour on complex tasks. Nature Machine Intelligence, 1–14 (2025). [Google Scholar]
  • 21.Charnov EL Optimal foraging, the marginal value theorem. Theoretical population biology 9, 129–136 (1976). [DOI] [PubMed] [Google Scholar]
  • 22.Stephens DW, Brown JS & Ydenberg RC Foraging: behavior and ecology (University of Chicago Press, 2008). [Google Scholar]
  • 23.Khaw MW, Glimcher PW & Louie K Normalized value coding explains dynamic adaptation in the human valuation process. Proceedings of the National Academy of Sciences 114, 12696–12701 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.DeMaegd MD et al. Corticostriatal pathways update beliefs about latent task states. Society for Neuroscience Meeting (2025). [Google Scholar]
  • 25.Mah A, Golden CE & Constantinople CM Dopamine transients encode reward prediction errors independent of learning rates. Cell reports 43 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Golden CE et al. Estrogenic control of reward prediction errors and reinforcement learning. bioRxiv, 2023–12 (2023). [Google Scholar]
  • 27.Jang H Acetylcholine demixes heterogeneous dopamine signals for learning and moving PhD thesis(New York University, 2025). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Bromberg-Martin ES, Matsumoto M, Nakahara H & Hikosaka O Multiple timescales of memory in lateral habenula and dopamine neurons. Neuron 67, 499–510 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Camerer C & Ho T-H Experience-weighted attraction learning in coordination games: Probability rules, heterogeneity, and time-variation. Journal of mathematical psychology 42, 305–326 (1998). [DOI] [PubMed] [Google Scholar]
  • 30.Doya K, Samejima K, Katagiri K. i. & Kawato M Multiple model-based reinforcement learning. Neural computation 14, 1347–1369 (2002). [DOI] [PubMed] [Google Scholar]
  • 31.Frank MJ & Badre D Mechanisms of hierarchical reinforcement learning in corticostriatal circuits 1: computational analysis. Cerebral cortex 22, 509–526 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Cruz BF et al. Action suppression reveals opponent parallel control via striatal circuits. Nature 607, 521–526 (2022). [DOI] [PubMed] [Google Scholar]
  • 33.Miranda B, Malalasekera WN, Behrens TE, Dayan P & Kennerley SW Combined model-free and model-sensitive reinforcement learning in non-human primates. PLoS computational biology 16, e1007944 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Sweis BM et al. Sensitivity to “sunk costs” in mice, rats, and humans. Science 361, 178–181 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Redish AD et al. Sunk cost sensitivity during change-of-mind decisions is informed by both the spent and remaining costs. Communications biology 5, 1337 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Wilson RC, Geana A, White JM, Ludvig EA & Cohen JD Humans use directed and random exploration to solve the explore–exploit dilemma. Journal of experimental psychology: General 143, 2074 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Ebitz RB, Albarran E & Moore T Exploration disrupts choice-predictive signals and alters dynamics in prefrontal cortex. Neuron 97, 450–461 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Cowley BR et al. Slow drift of neural activity as a signature of impulsivity in macaque visual and prefrontal cortex. Neuron 108, 551–567 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Ashwood ZC et al. Mice alternate between discrete strategies during perceptual decisionmaking. Nature Neuroscience 25, 201–212 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Goldman-Rakic PS, Bates JF & Chafee MV The prefrontal cortex and internally generated motor acts. Current opinion in neurobiology 2, 830–835 (1992). [DOI] [PubMed] [Google Scholar]
  • 41.Kepecs A, Uchida N, Zariwala HA & Mainen ZF Neural correlates, computation and behavioural impact of decision confidence. Nature 455, 227–231 (2008). [DOI] [PubMed] [Google Scholar]
  • 42.Kiani R & Shadlen MN Representation of confidence associated with a decision by neurons in the parietal cortex. science 324, 759–764 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Lak A et al. Orbitofrontal cortex is required for optimal waiting based on decision confidence. Neuron 84, 190–201 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Nassar MR & Frank MJ Taming the beast: extracting generalizable knowledge from computational models of cognition. Current opinion in behavioral sciences 11, 49–54 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Palminteri S, Wyart V & Koechlin E The importance of falsification in computational cognitive modeling. Trends in Cognitive Sciences 21, 425–433 (2017). *This review demonstrates that the best-fit model parameters can sometimes fail to capture key aspects of behavior, and argues for evaluating models on the basis of their “generative performance”: their ability to reproduce behavioral measures of interest.
  • 46.Wilson RC & Collins AG Ten simple rules for the computational modeling of behavioral data. eLife 8, e49547 (2019).
  • 47.Rosen MC & Freedman DJ How distributed is the brain-wide network that is recruited for cognition? Nature Reviews Neuroscience, 1–13 (2025).
  • 48.Padoa-Schioppa C Neurobiology of economic choice: a good-based model. Annual Review of Neuroscience 34, 333–359 (2011).
  • 49.Padoa-Schioppa C & Assad JA Neurons in the orbitofrontal cortex encode economic value. Nature 441, 223–226 (2006).
  • 50.Paton JJ, Belova MA, Morrison SE & Salzman CD The primate amygdala represents the positive and negative value of visual stimuli during learning. Nature 439, 865–870 (2006).
  • 51.Kable JW & Glimcher PW The neural correlates of subjective value during intertemporal choice. Nature Neuroscience 10, 1625–1633 (2007).
  • 52.Kim S, Hwang J & Lee D Prefrontal coding of temporally discounted values during intertemporal choice. Neuron 59, 161–172 (2008).
  • 53.Bari BA et al. Stable representations of decision variables for flexible behavior. Neuron 103, 922–933 (2019).
  • 54.Hirokawa J, Vaughan A, Masset P, Ott T & Kepecs A Frontal cortex neuron types categorically encode single decision variables. Nature 576, 446–451 (2019).
  • 55.Latimer KW, Yates JL, Meister ML, Huk AC & Pillow JW Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science 349, 184–187 (2015).
  • 56.Shadlen MN et al. Comment on “Single-trial spike trains in parietal cortex reveal discrete steps during decision-making”. Science 351, 1406 (2016).
  • 57.Latimer KW, Yates JL, Meister ML, Huk AC & Pillow JW Response to Comment on “Single-trial spike trains in parietal cortex reveal discrete steps during decision-making”. Science 351, 1406 (2016).
  • 58.Chowdhury SA & DeAngelis GC Fine discrimination training alters the causal contribution of macaque area MT to depth perception. Neuron 60, 367–377 (2008).
  • 59.Liu LD & Pack CC The contribution of area MT to visual motion perception depends on training. Neuron 95, 436–446 (2017).
  • 60.Arlt C et al. Cognitive experience alters cortical involvement in goal-directed navigation. eLife 11, e76051 (2022). *This study performed perturbations and recordings from several cortical regions in mice performing a navigation task, and found that the contribution and dynamics of cortical regions depended on training experience.
  • 61.Fetsch CR et al. Focal optogenetic suppression in macaque area MT biases direction discrimination and decision confidence, but only transiently. eLife 7, e36523 (2018).
  • 62.Jeurissen D, Shushruth S, El-Shamayleh Y, Horwitz GD & Shadlen MN Deficits in decision-making induced by parietal cortex inactivation are compensated at two timescales. Neuron 110, 1924–1931 (2022). **This study performed inactivations in LIP of non-human primates and documented behavioral compensation within and across sessions, suggesting compensation by unaffected areas over multiple timescales.
  • 63.Bolkan SS et al. Opponent control of behavior by dorsomedial striatal pathways depends on task demands and internal state. Nature Neuroscience 25, 345–357 (2022). *This study trained mice to perform an accumulation-of-evidence task, and found that mice alternated between strategies that produced similar psychometric performance but differed in their requirement of the dorsomedial striatum.
  • 64.Talwar SK, Musial PG & Gerstein GL Role of mammalian auditory cortex in the perception of elementary sound properties. Journal of Neurophysiology 85, 2350–2358 (2001).
  • 65.Newsome WT & Paré EB A selective impairment of motion perception following lesions of the middle temporal visual area (MT). Journal of Neuroscience 8, 2201–2211 (1988).
  • 66.Chen S et al. Brain-wide neural activity underlying memory-guided movement. Cell 187, 676–691 (2024).
  • 67.Guo ZV et al. Flow of cortical activity underlying a tactile decision in mice. Neuron 81, 179–194 (2014).
  • 68.Pinto L et al. Task-dependent changes in the large-scale dynamics and necessity of cortical regions. Neuron 104, 810–824 (2019).
  • 69.Alonso LM & Marder E Temperature compensation in a small rhythmic circuit. eLife 9, e55470 (2020).
  • 70.Hampton D, Kedia S & Marder E Alterations in network robustness upon simultaneous temperature and pH perturbations. Journal of Neurophysiology 131, 509–515 (2024).
  • 71.Hamood AW & Marder E Consequences of acute and long-term removal of neuromodulatory input on the episodic gastric rhythm of the crab Cancer borealis. Journal of Neurophysiology 114, 1677–1692 (2015).
  • 72.Pereira-Obilinovic U, Hou H, Svoboda K & Wang X-J Brain mechanism of foraging: Reward-dependent synaptic plasticity versus neural integration of values. Proceedings of the National Academy of Sciences 121, e2318521121 (2024).
  • 73.Warden MR et al. A prefrontal cortex–brainstem neuronal projection that controls response to behavioural challenge. Nature 492, 428–432 (2012).
  • 74.Gupta D et al. A multi-region recurrent circuit for evidence accumulation in rats. Neuron (2024).
  • 75.Hanks TD, Ditterich J & Shadlen MN Microstimulation of macaque area LIP affects decision-making in a motion discrimination task. Nature Neuroscience 9, 682–689 (2006).
  • 76.Erlich JC, Brunton BW, Duan CA, Hanks TD & Brody CD Distinct effects of prefrontal and parietal cortex inactivations on an accumulation of evidence task in the rat. eLife 4, e05457 (2015).
  • 77.Vertechi P et al. Inference-based decisions in a hidden state foraging task: differential contributions of prefrontal cortical areas. Neuron 106, 166–176 (2020).
  • 78.Otchy TM et al. Acute off-target effects of neural circuit manipulations. Nature 528, 358–363 (2015).
  • 79.Hong YK, Lacefield CO, Rodgers CC & Bruno RM Sensation, movement and learning in the absence of barrel cortex. Nature 561, 542–546 (2018).
  • 80.Pagan M et al. Individual variability of neural computations underlying flexible decisions. Nature 639, 421–429 (2025). **This study used an elegant task design, sophisticated neural network modeling, and combined analysis of neural dynamics and behavior to adjudicate between different (degenerate) neural implementations of context-dependent evidence accumulation in rats.