Abstract
Motivation is often thought to enhance adaptive decision-making by biasing actions toward rewards and away from punishment. Emerging evidence, however, points to a more nuanced view whereby motivation can both enhance and impair different aspects of decision-making. Model-based approaches have gained prominence over the past decade for developing more precise mechanistic explanations of how incentives impact goal-directed behavior. In this Special Focus, I highlight three studies that demonstrate how computational frameworks help decompose decision processes into constituent cognitive components and formalize when and how motivational factors (e.g., monetary rewards) influence specific cognitive processes, decision-making strategies, and self-report measures. I conclude with a provocative suggestion based on recent advances in the field: that organisms do not merely seek to maximize the expected value of extrinsic incentives. Instead, they may be optimizing decision-making to achieve a desired internal state (e.g., homeostasis, effort, affect). Future investigation into such internal processes will be a fruitful endeavor for unlocking the cognitive, computational, and neural mechanisms of motivated decision-making.
BACKGROUND
Over the past decade, a plethora of studies have examined interactions between motivation and goal-directed decision-making, which often recruits the process of cognitive control (Cubillo, Makwana, & Hare, 2019; Yee & Braver, 2018; Kool et al., 2017; Krebs & Woldorff, 2017; Chiew, Stanek, & Adcock, 2016; Fröber & Dreisbach, 2016; Botvinick & Braver, 2015; Braem, Hickey, Duthoo, & Notebaert, 2014; Braver et al., 2014; Dixon & Christoff, 2012; Padmala & Pessoa, 2010; Pessoa, 2009; Engelmann & Pessoa, 2007). This research has been highly influential for cognitive neuroscience, shedding light on variability in underlying cognitive and neural mechanisms of these interactions across the human lifespan (Insel, Charifson, & Somerville, 2019; Davidow, Insel, & Somerville, 2018; Ferdinand & Czernochowski, 2018; Samanez-Larkin & Knutson, 2015; Luna, Paulsen, Padmanabhan, & Geier, 2013) as well as in psychiatric and neurological disorders (Barch et al., 2023; Grahek, Shenhav, Musslick, Krebs, & Koster, 2019; Timmer, Aarts, Esselink, & Cools, 2018; Rosa, Schiff, Cagnolati, & Mapelli, 2015).
Researchers have leveraged powerful neuroimaging, electrophysiological, and pharmacological tools to uncover key neural signatures of motivational influences on control allocation. For instance, numerous fMRI studies reveal a consistent network of regions across prefrontal and mid-cingulate cortex related to the interaction between motivation and cognitive control (Yee, Crawford, Lamichhane, & Braver, 2021; Parro, Dixon, & Christoff, 2018; Bahlmann, Aarts, & D’Esposito, 2015; McGuire & Botvinick, 2010; Kouneiher, Charron, & Koechlin, 2009; Bush et al., 2002). Similarly, multiple EEG studies have identified various ERP components (e.g., P3a, P3b, CNV, FRN) modulated by reward and cognitive control (Grahek, Frömer, Fahey, & Shenhav, 2023; Overmeyer, Kirschner, Fischer, & Endrass, 2023; Fröber, Jurczyk, Mendl, & Dreisbach, 2021; Frömer, Lin, Wolf, Inzlicht, & Shenhav, 2021; Schevernels, Krebs, Santens, Woldorff, & Boehler, 2014), and recent intracranial EEG studies suggest that beta and theta oscillations may be involved in tracking reward learning and effort allocation (Xiao et al., 2024; Hoy et al., 2024). Finally, pharmacological and neurochemical PET studies have shown how striatal dopamine potentiates the sensitivity to the benefits (relative to the costs) of subjective effort, which subsequently biases the allocation of cognitive control (Westbrook et al., 2020; Cools et al., 2019; Cools, 2016).
However, despite the emerging evidence for the neural substrates underlying these motivation–cognition interactions, a significant limitation of cognitive neuroscience approaches is that measurement of neural activity alone does not necessarily specify how motivational value signals are represented and subsequently translated into strategic adjustments in cognitive control and goal-directed decision-making. Crucially, the inclusion of computational models, in conjunction with carefully designed experimental tasks and neural measurements, allows researchers to characterize how value information is represented. In this Special Focus, I highlight how computational models allow for greater insight into the mechanisms through which motivational and cognitive processes interact to bias adaptive goal-directed behavior.
ADVANCES FROM COMPUTATIONAL COGNITIVE NEUROSCIENCE: THE CASE FOR PROCESS MODELS
In recent years, the rise of computational cognitive neuroscience has transformed the field of reward and decision-making, fostering an innovative interdisciplinary approach for formally characterizing various motivational, affective, and cognitive processes in humans and animals (Kriegeskorte & Douglas, 2018; Forstmann & Wagenmakers, 2015; O’Doherty, Hampton, & Kim, 2007). Computational frameworks are especially powerful tools to test various normative assumptions about cognitive control and decision-making (Bogacz, Brown, Moehlis, Holmes, & Cohen, 2006; Botvinick, Braver, Barch, Carter, & Cohen, 2001; Braver & Cohen, 2000), and recent advances have illustrated how humans may adopt behavioral control strategies to maximize expected value (Prater Fahey, Yee, Leng, Tarlow, & Shenhav, 2025; Silvestrini, Musslick, Berry, & Vassena, 2023; Leng, Yee, Ritz, & Shenhav, 2021; Shenhav, Prater Fahey, & Grahek, 2021; Shenhav et al., 2017; Musslick, Shenhav, Botvinick, & Cohen, 2015; Shenhav, Botvinick, & Cohen, 2013).
Several of these normative assumptions have been implemented via sequential sampling models (SSMs) and reinforcement learning models (Fengler, Bera, Pedersen, & Frank, 2022; Ratcliff, Smith, Brown, & McKoon, 2016; Forstmann, Ratcliff, & Wagenmakers, 2015; Forstmann & Wagenmakers, 2015; Wiecki, Poland, & Frank, 2015; Wiecki, Sofer, & Frank, 2013), process models that decompose task performance (e.g., RT, accuracy) into mathematical parameters related to the underlying psychological processes of value-based decisions. For example, in an SSM, the drift rate may represent the quality of information processing, whereas the decision threshold may represent the level of response caution an individual chooses to exert.
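To make these parameters concrete, the following minimal simulation sketches how drift rate and decision threshold jointly shape RT and accuracy in a two-boundary diffusion process. It is purely illustrative, with arbitrary parameter values, and is not a reimplementation of any model from the studies discussed here.

```python
import numpy as np

def simulate_ddm(drift, threshold, ndt=0.3, noise=1.0, dt=0.001, max_t=5.0,
                 n_trials=2000, seed=0):
    """Simulate a two-boundary drift diffusion process.

    drift     : mean rate of evidence accumulation ("quality" of processing)
    threshold : boundary separation (response caution); evidence starts midway
                between the two boundaries
    ndt       : nondecision time (encoding + motor), added to every RT (s)
    """
    rng = np.random.default_rng(seed)
    rts, correct = np.empty(n_trials), np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold / 2 and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t + ndt
        correct[i] = x > 0  # upper boundary = correct response
    return rts, correct

# Higher drift -> faster and more accurate; higher threshold -> slower but more accurate.
for drift, threshold in [(0.5, 1.0), (1.5, 1.0), (0.5, 2.0)]:
    rts, correct = simulate_ddm(drift, threshold)
    print(f"drift={drift:.1f}, threshold={threshold:.1f}: "
          f"mean RT={rts.mean():.2f} s, accuracy={correct.mean():.2f}")
```

In this toy setting, raising the drift rate speeds responses and improves accuracy, whereas raising the threshold trades speed for accuracy, mirroring the interpretation of these parameters described above.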
Model-based approaches have generated important insight into how particular decision-making processes are implemented in specific brain areas or via specific neural signals (Rac-Lubashevsky & Frank, 2021; Collins, Ciullo, Frank, & Badre, 2017; Herz, Zavala, Bogacz, & Brown, 2016; Frank et al., 2015; Niv et al., 2015; Ahn, Krawitz, Kim, Busemeyer, & Brown, 2011; Cavanagh et al., 2011; Daw, Gershman, Seymour, Dayan, & Dolan, 2011; O’Doherty et al., 2007). For example, Rac-Lubashevsky and Frank (2021) conducted an EEG study to examine the neural dynamics of selective gating operations in working memory (e.g., input gating, response gating, and output gating). They identified distinct neural markers associated with the three gating effects (e.g., input gating was associated with an increased parietal P3b, and output switching with an enhanced N2 negativity), and they found evidence for trial-wise neural similarity following switches in the input, output, and response gates across various ERP components. Next, they used a drift diffusion model (a kind of SSM) to test the hypothesis that premotor conflict would increase the decision threshold, a parameter associated with more cautious and deliberate response selection. They found that switches at each independent level of gating (input, output, and response) were related to increased decision thresholds, consistent with prior work demonstrating that basal ganglia systems are associated with the impact of decision conflict and the associated adjustments in decision threshold (Wiecki & Frank, 2013; Cavanagh et al., 2011). Together, these data reveal neural and computational markers of how information is gated in working memory.
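For readers less familiar with how such condition-dependent threshold effects are typically estimated, the sketch below shows one way this kind of analysis might be set up in the HDDM toolbox (Wiecki, Sofer, & Frank, 2013), which supports hierarchical Bayesian estimation of drift diffusion parameters. The data file and column names are hypothetical placeholders, and this is not the analysis code used by Rac-Lubashevsky and Frank (2021).

```python
import hddm       # HDDM toolbox (Wiecki, Sofer, & Frank, 2013)
import pandas as pd

# Hypothetical trial-level data: one row per trial with reaction time (in seconds),
# accuracy-coded response, subject index, and a column coding whether the trial
# involved a gate switch. File and column names are placeholders.
data = pd.read_csv("gating_task_trials.csv")  # columns: rt, response, subj_idx, gate_switch

# Let the decision threshold ("a") vary with gate switching, while drift rate ("v")
# and nondecision time ("t") are shared across conditions.
model = hddm.HDDM(data, depends_on={"a": "gate_switch"})
model.sample(2000, burn=500)   # MCMC sampling of the hierarchical posterior
model.print_stats()            # inspect whether thresholds are higher on switch trials
```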
It is also worth noting that, although model-based approaches are incredibly powerful, they are not immune to misinterpretation of neural mechanisms if not implemented carefully (Frömer, Nassar, Ehinger, & Shenhav, 2024). Therefore, greater rigor is necessary to establish the validity and reliability of computational models fit to behavioral and neural data. In recent years, researchers have sought to develop robust analysis pipelines and workflows for systematically evaluating model fitting and validation (e.g., posterior predictive checks), which will determine the extent to which such model-based approaches can provide meaningful insight into the cognitive and neural mechanisms of value-based decisions (Schad, Betancourt, & Vasishth, 2021; Gelman et al., 2020; Pedersen & Frank, 2020; Wilson & Collins, 2019; Wilson & Niv, 2015).
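As one concrete example of such a check, the sketch below captures the general logic of a posterior predictive check: simulate new data from parameters drawn from the fitted posterior and compare a summary statistic against the observed data. The function names, the toy model in the usage example, and the choice of mean RT as the test statistic are illustrative assumptions rather than prescriptions from the workflow papers cited above.

```python
import numpy as np

def posterior_predictive_check(posterior_samples, simulate_fn, observed_rts,
                               n_draws=200, seed=1):
    """Generic posterior predictive check for a fitted decision model.

    posterior_samples : sequence of parameter dictionaries drawn from the posterior
    simulate_fn       : function(params) -> array of simulated RTs under the model
    observed_rts      : array of empirical reaction times
    Compares one summary statistic (here, mean RT) between data and simulations.
    """
    rng = np.random.default_rng(seed)
    idx = rng.integers(len(posterior_samples), size=n_draws)
    simulated_means = np.array([simulate_fn(posterior_samples[i]).mean() for i in idx])
    observed_mean = observed_rts.mean()
    # Posterior predictive p-value: how often simulated statistics exceed the observed one
    ppp = float(np.mean(simulated_means >= observed_mean))
    return observed_mean, simulated_means, ppp

# Toy usage: a "model" whose only parameter is the mean of an exponential RT distribution.
rng = np.random.default_rng(0)
posterior = [{"mean_rt": m} for m in rng.normal(0.55, 0.02, size=1000)]
simulate = lambda params: rng.exponential(params["mean_rt"], size=500)
observed = rng.exponential(0.6, size=500)
print(posterior_predictive_check(posterior, simulate, observed))
```

In practice, several complementary statistics (accuracy, RT quantiles, condition differences) would be examined, following the workflow recommendations cited above.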
COMPUTATIONAL MODELS REVEAL HOW MOTIVATION CAN BOTH ENHANCE AND IMPAIR GOAL-DIRECTED DECISION-MAKING
Motivation shapes adaptive decision-making by biasing action toward reward and away from punishment (Swart et al., 2017; Guitart-Masip, Duzel, Dolan, & Dayan, 2014). Yet, recent research suggests that motivation may play a more nuanced role, both enhancing and impairing goal-directed decision-making (Yee, Leng, Shenhav, & Braver, 2022; Millner, Gershman, Nock, & den Ouden, 2018; Pessiglione & Delgado, 2015). The three empirical studies in this Special Focus, which accompany this perspective, illustrate how computational modeling can be leveraged to formalize when and how motivational incentives impact decision-making. Together, this collection reveals how motivation plays a multifaceted role in goal-directed decision-making.
First, Adkins and Lee (2023) use an innovative forced-response paradigm and a complementary computational model to examine the effects of incentives on the preparation of goal-directed and habitual responses in a Simon task. In this task, response initiation time was fixed, but the onset of the target stimulus varied between trials, thereby limiting how much time was available for response preparation (Hardwick, Forrence, Krakauer, & Haith, 2019). Using a probabilistic model that separately parameterizes the speed of habit and goal processing (mean = μ, SD = σ), they demonstrate that monetary rewards mitigate conflict by accelerating the preparation of goal-directed actions. Importantly, the computational model provided a uniquely powerful approach for generating explicit predictions that differentiate between two distinct hypotheses for how rewards impact cognitive strategies in this cognitive control task (i.e., enhanced goal-directed response preparation vs. impaired habitual responding).
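To illustrate the logic of this model class, the sketch below computes predicted accuracy at a given preparation time under the simplifying assumption that the goal-directed and habitual processes finish at normally distributed times and that whichever relevant process has finished determines the response. It is a schematic of forced-response models in the spirit of Hardwick et al. (2019), not the exact likelihood used by Adkins and Lee (2023), and all parameter values are made up.

```python
import numpy as np
from scipy.stats import norm

def p_correct(prep_time, mu_goal, sigma_goal, mu_habit, sigma_habit,
              p_habit_correct=0.0):
    """Predicted accuracy at a given preparation time in a forced-response task.

    If the goal-directed process has finished by `prep_time`, the correct response
    is expressed. Otherwise, the habitual response is expressed if it is ready
    (correct with probability `p_habit_correct`, e.g., 0 on incongruent Simon
    trials); if neither process is ready, the response is a guess.
    """
    p_goal = norm.cdf(prep_time, loc=mu_goal, scale=sigma_goal)
    p_habit = norm.cdf(prep_time, loc=mu_habit, scale=sigma_habit)
    return (p_goal
            + (1 - p_goal) * p_habit * p_habit_correct
            + (1 - p_goal) * (1 - p_habit) * 0.5)

# Rewards shifting goal preparation earlier (smaller mu_goal) raise predicted accuracy
# at intermediate preparation times on incongruent trials (all values arbitrary).
for mu_goal in (0.45, 0.35):
    acc = [p_correct(t, mu_goal, 0.08, 0.25, 0.08) for t in (0.2, 0.3, 0.4, 0.5)]
    print(f"mu_goal={mu_goal}: accuracy by preparation time = {np.round(acc, 2)}")
```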
Second, Ballard, Waskom, Nix, and D’Esposito (2024) examine whether reward reinforcement can give rise to habitual goal selection (e.g., using contextual information to habitually select a specific stimulus–response rule), using a novel rewarded context-dependent perceptual decision-making task. Whereas prior work has shown that rewards promote habitual action selection (e.g., an automatic response to a specific stimulus), the authors reveal that rewards also reinforce abstract representations of task goals to adaptively guide action selection and, additionally, that these learned associations persist even when rewards are no longer present. Using drift diffusion modeling, a specific type of SSM, they characterize the key qualitative features of the behavioral data (i.e., rewards promoted faster and more accurate responding on more difficult trials), which were reflected in greater efficiency of evidence accumulation (drift rate), higher response caution (decision threshold), and slower initiation times (nondecision time). Here, the drift diffusion model provided additional insight into the underlying mechanisms of the decision process, allowing for a more nuanced understanding of the role of reward incentives in enhancing versus impairing habitual goal selection.
Third, Zhang, Leng, and Shenhav (2024) investigate whether and how reward incentives interact with expected challenge, which is inherently stressful, to determine cognitive control allocation. They develop a time-limited incentivized Stroop task in which individuals must meet a goal threshold (a number of correct trials) to receive the accumulated monetary reward, and they vary both the goal threshold (i.e., the “Easy” challenge level required only five correct responses to reach the goal, whereas the “Hard” challenge level required eight) and the rewards at stake (i.e., the “Low” reward was equivalent to one gem, whereas the “High” reward was equivalent to multiple gems). In other words, the goal threshold determines the difficulty of a given interval. They observe an interesting behavioral pattern in which individuals invest greater effort for higher rewards and for greater challenge, consistent with predictions from a reward rate optimization model, which assumes that reward promotes more efficient performance (Leng et al., 2021). Interestingly, although both manipulations improved overall task performance, they induced divergent affective states, with greater reward associated with higher stress and higher positive affect, and greater challenge associated with higher stress and lower positive affect; these affective ratings further interacted with whether the goal was completed. Finally, analyses of temporal dynamics across task intervals reveal initial speeding at the start of an interval and greater caution closer to goal completion, suggesting dynamic reconfiguration of control across information processing (Grahek et al., 2024; Ritz, Leng, & Shenhav, 2022). Together, these findings reveal an intriguing dissociation in how different motivational manipulations (reward vs. challenge) can similarly bias cognitive control yet contribute to dissociable affective experiences.
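To give a flavor of the reward rate optimization logic behind these predictions, the sketch below searches over candidate levels of control (operationalized here as the drift rate of a diffusion process) for the setting that maximizes expected reward rate net of an effort cost. The closed-form accuracy and decision-time expressions are standard for the drift diffusion model, but the quadratic effort cost and all parameter values are illustrative assumptions rather than the specification used by Leng et al. (2021) or Zhang et al. (2024).

```python
import numpy as np

def expected_reward_rate(drift, threshold, reward, iti=1.0, ndt=0.3, effort_cost=0.5):
    """Expected reward rate for a drift diffusion agent, minus an effort cost.

    Uses standard closed-form expressions for accuracy and mean decision time of a
    diffusion process with boundary separation `threshold`, drift rate `drift`,
    unbiased starting point, and unit diffusion noise. The quadratic cost on drift
    is an illustrative stand-in for the intrinsic cost of control.
    """
    a, v = threshold, drift
    p_correct = 1.0 / (1.0 + np.exp(-a * v))       # probability of a correct response
    mean_dt = (a / (2 * v)) * np.tanh(a * v / 2)   # mean decision time (s)
    rt = mean_dt + ndt
    return (p_correct * reward) / (rt + iti) - effort_cost * v**2

# Grid search over candidate levels of control (drift rates): raising the reward at
# stake shifts the reward-rate-optimal drift rate upward, i.e., toward more control.
drifts = np.linspace(0.1, 3.0, 60)
for reward in (1.0, 4.0):
    rates = [expected_reward_rate(v, threshold=1.5, reward=reward) for v in drifts]
    print(f"reward={reward}: optimal drift = {drifts[int(np.argmax(rates))]:.2f}")
```

With these illustrative settings, a larger reward at stake shifts the optimum toward a higher drift rate, that is, toward investing more control for the same expected challenge.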
BEYOND MONETARY INCENTIVES: CONSIDERING THE NORMATIVE ROLE OF HOMEOSTASIS, EFFORT, AND AFFECT IN DECISION-MAKING
This collection highlights how computational modeling can provide unique insight into the mechanisms that determine when and how motivational incentives bias distinct components of decision-making in well-defined cognitive tasks. Yet, although these studies demonstrate the power of monetary incentives in guiding adaptive behavior in the service of a goal, it remains unclear to what extent such incentives are sufficient to modulate more complex decision-making processes. Moreover, what happens when rewards or goals are delayed in time or more abstract?
A central assumption of many current influential models of reward and decision-making is that individuals behave to maximize expected (monetary) value, yet in the real world, we know that humans are motivated by a wide array of reinforcers (Krug & Braver, 2014; Bartra, McGuire, & Kable, 2013; Sescousse, Caldú, Segura, & Dreher, 2013). Although it is well known that extrinsic incentives and reinforcers (e.g., money, food/drink, social interactions) serve as rewards that bias internal states (e.g., homeostatic drives, motives) to support adaptive goal-directed behavior (Molinaro & Collins, 2023; O’Reilly, 2020; Juechems & Summerfield, 2019), another puzzling phenomenon is that humans and animals often behave in a manner that some normative models would consider “suboptimal” (Adkins, Lewis, & Lee, 2022; Farashahi, Donahue, Hayden, Lee, & Soltani, 2019; Rouault, Drugowitsch, & Koechlin, 2019; Herrnstein, 1997; Kahneman & Tversky, 1979). That is, precisely what humans and animals are optimizing their behavior for when faced with extrinsic rewards remains unclear (Cohen & Blum, 2002). Below, I briefly describe three recent developments that may provide insight into this open question.
One provocative idea that has gained traction in recent years is that humans and animals adaptively learn about and seek “rewards” to regulate the internal homeostasis of the organism (Keramati & Gutkin, 2014). Although the distinction between homeostatic and hedonic mechanisms in decision-making is well known (Saper, Chou, & Elmquist, 2002), how the peripheral and central nervous systems interact to control adaptive decision-making and influence behavior remains an active area of research (de Araujo, Schatzker, & Small, 2020; Rossi & Stuber, 2018; Lutter & Nestler, 2009). For example, recent evidence from the ingestive behavior literature has identified the postoral “primary reward signal” in the body as an interoceptive signal that represents key physiological resources essential for sustaining life, such as energy, nutrients, and hydration (Weber, Yee, Small, & Petzschner, 2024). Importantly, these primary reward signals depend on an organism’s internal state (e.g., hunger, thirst) as well as its goals (e.g., satiety, quenching thirst), providing a potential explanation that may help reconcile the paradoxical and subjective nature of food and drink rewards (e.g., a drop of juice may be appetitive when an animal is thirsty, yet aversive once its thirst is quenched). Neurocomputational frameworks that simultaneously consider homeostatic and reward-based mechanisms between the body and brain may facilitate a comprehensive understanding of how rewards bias adaptive decision-making (Plassmann, Schelski, Simon, & Koban, 2022; Petzschner, Garfinkel, Paulus, Koch, & Khalsa, 2021; Juechems & Summerfield, 2019; Hulme, Morville, & Gutkin, 2019; Paulus, 2007).
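To make this idea concrete, the sketch below implements the core quantity of homeostatic reinforcement learning (Keramati & Gutkin, 2014): drive is the distance of the internal state from its setpoint, and the reward of an outcome is the drive reduction it produces. The drive function follows the general form proposed in that paper, but the exponent values, states, and outcomes used here are illustrative.

```python
import numpy as np

def drive(state, setpoint, m=3.0, n=4.0):
    """Homeostatic drive: distance of the internal state from its setpoint,
    D(H) = (sum_i |h*_i - h_i| ** m) ** (1 / n)."""
    return np.sum(np.abs(setpoint - state) ** m) ** (1.0 / n)

def homeostatic_reward(state, outcome, setpoint):
    """Reward of an outcome = the drive reduction it produces: r = D(H) - D(H + k)."""
    return drive(state, setpoint) - drive(state + outcome, setpoint)

setpoint = np.array([100.0])   # desired level of an internal variable (arbitrary units)
juice = np.array([10.0])       # the same outcome...
print(homeostatic_reward(np.array([60.0]), juice, setpoint))    # ...is rewarding when depleted
print(homeostatic_reward(np.array([100.0]), juice, setpoint))   # ...and aversive at the setpoint
```

This single definition captures the juice example above: the identical outcome is rewarding when the organism is depleted and aversive once the setpoint has been reached.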
Second, an emerging body of research has shown that individuals seek to minimize the costs associated with exerting mental (or cognitive) effort in decision-making tasks (Kool & Botvinick, 2018; Shenhav et al., 2017; Westbrook & Braver, 2015). Although it is well established that mental effort is phenomenologically aversive and that humans and animals typically seek to avoid exerting effort because of its subjective internal cost (Vogel et al., 2020; Kurzban, 2016), the precise mechanism underlying this effort cost remains elusive. Recent computational frameworks have sought to quantify effort as a cost–benefit trade-off between reward and effort (Manohar et al., 2015) or as an opportunity cost (Otto & Daw, 2019). However, because “effort” is inherently subjective and intrinsic, a limitation of current empirical studies is that effort has been manipulated primarily by varying the magnitude of an extrinsic motivational incentive (e.g., monetary reward or loss). Moreover, an intriguing challenge in quantifying effort relates to its paradoxical nature: effort can either add to or subtract from value depending on an organism’s level of effort exertion (Inzlicht, Shenhav, & Olivola, 2018). Future work clarifying the computational mechanisms of when organisms exert mental effort, as well as how they experience it, can provide insight into an important latent factor of adaptive decision-making.
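As a rough schematic of how the cost–benefit and opportunity-cost formalizations mentioned above differ (the exact functional forms vary across papers; the expressions below are illustrative rather than taken from the cited studies):

```latex
% Cost-benefit framing (in the spirit of Manohar et al., 2015; Shenhav et al., 2017):
% the value of exerting effort e for a prospective reward R is the expected benefit
% minus an intrinsic, typically convex, effort cost C(e).
V_{\mathrm{cost\text{-}benefit}}(e) = R \cdot P(\mathrm{success} \mid e) - C(e),
  \qquad C'(e) > 0, \; C''(e) > 0.

% Opportunity-cost framing (in the spirit of Otto & Daw, 2019): effort expended now is
% costly in proportion to the average reward rate \bar{r} forgone over the time \tau(e)
% devoted to the effortful task.
V_{\mathrm{opportunity}}(e) = R \cdot P(\mathrm{success} \mid e) - \bar{r}\,\tau(e).
```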
Finally, growing evidence suggests that affect, in addition to reward, plays a prominent role in driving cognition and behavior (Schiller et al., 2024; Dukes et al., 2021; Lerner, Li, Valdesolo, & Kassam, 2015; Paulus & Yu, 2012). Researchers have shown dissociable influences of affect versus reward on cognitive control (Chiew, 2021; Grahek, Musslick, & Shenhav, 2020; Fröber & Dreisbach, 2014; Harlé, Shenoy, & Paulus, 2013; Chiew & Braver, 2011), and in this Special Focus, Zhang and colleagues demonstrate how different motivational influences that similarly improve performance can lead to distinct affective states. This notion that an organism's internal state determines the perception of the external rewarding stimulus is not new (Cabanac, 1971), and recent computational frameworks posit that humans make decisions in the service of their emotional and affective states (Shenhav, 2024; Emanuel & Eldar, 2023; Dayan, 2022). Importantly, these models assume that organisms act in accordance with their affective goals, which may help provide normative assumptions for when and how humans behave rationally (e.g., in the service of optimizing mood).
CONCLUSION
In conclusion, these recent developments suggest a necessary shift away from the implicit assumption that “motivation” can be manipulated in humans and animals simply by offering extrinsic incentives such as monetary rewards. Instead, given that motivation is an internal state that interacts with the physiological and psychological needs of the organism, it seems natural that computational models should account for neural and bodily signals, subjective effort costs, and internal affective states when describing the impact of motivation on adaptive goal-directed behavior. Computational frameworks that formalize interactions between motivation, affect, and decision-making would enable a richer understanding of the multifaceted ways in which motivation and decision-making processes interact, both in the laboratory and in the real world. Clarifying these interactions could lend crucial insight into how these mechanisms may become maladaptive in psychiatric disorders (Bishop & Gagne, 2018; Pessiglione, Vinckier, Bouret, Daunizeau, & Le Bouc, 2018), laying the groundwork for more effective translation from computation to the clinic (Yip et al., 2022).
Corresponding author: Debbie M. Yee, Cognitive and Psychological Sciences, Brown University, 190 Thayer Street, Metcalf Research Building, Providence, RI, or via e-mail: debbie_yee@brown.edu.
Funding Information
D. Y. was supported by the National Institute of Mental Health Training Program for Computational Psychiatry (https://dx.doi.org/10.13039/100000025; T32MH126388), and the Advancing Research Careers in Brain Science Award (R25NS124530). In addition, the NSF CAREER Award 204611 was awarded to Amitai Shenhav (PI). This perspective would not be possible without invigorating and thought-provoking discussions with members of the Shenhav Lab, as well as with Frederike Petzschner, Lilian Weber, and Dana Small.
Diversity in Citation Practices
Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance. The authors of this paper report its proportions of citations by gender category to be: M/M = .531; W/M = .208; M/W = .125; W/W = .135.
REFERENCES
- Adkins, T. J., & Lee, T. G. (2023). Reward accelerates the preparation of goal-directed actions under conflict. Journal of Cognitive Neuroscience, 36, 2831–2846. 10.1162/jocn_a_02072, [DOI] [PubMed] [Google Scholar]
- Adkins, T., Lewis, R., & Lee, T. (2022). Heuristics contribute to sensorimotor decision-making under risk. Psychonomic Bulletin & Review, 29, 145–158. 10.3758/s13423-021-01986-x, [DOI] [PubMed] [Google Scholar]
- Ahn, W.-Y., Krawitz, A., Kim, W., Busemeyer, J. R., & Brown, J. W. (2011). A model-based fMRI analysis with hierarchical Bayesian parameter estimation. Journal of Neuroscience, Psychology, and Economics, 4, 95–110. 10.1037/a0020684, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bahlmann, J., Aarts, E., & D’Esposito, M. D. (2015). Influence of motivation on control hierarchy in the human frontal cortex. Journal of Neuroscience, 35, 3207–3217. 10.1523/JNEUROSCI.2389-14.2015, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ballard, I. C., Waskom, M., Nix, K. C., & D’Esposito, M. (2024). Reward reinforcement creates enduring facilitation of goal-directed behavior. Journal of Cognitive Neuroscience, 36, 2847–2862. 10.1162/jocn_a_02150, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Barch, D. M., Culbreth, A. J., Ben-Zeev, D., Campbell, A., Nepal, S., & Moran, E. K. (2023). Dissociation of cognitive effort-based decision making and its associations with symptoms, cognition, and everyday life function across schizophrenia, bipolar disorder, and depression. Biological Psychiatry, 94, 501–510. 10.1016/j.biopsych.2023.04.007, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bartra, O., McGuire, J. T., & Kable, J. W. (2013). The valuation system: A coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. Neuroimage, 76, 412–427. 10.1016/j.neuroimage.2013.02.063, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bishop, S. J., & Gagne, C. (2018). Anxiety, depression, and decision making: A computational perspective. Annual Review of Neuroscience, 41, 371–388. 10.1146/annurev-neuro-080317-062007, [DOI] [PubMed] [Google Scholar]
- Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J. D. (2006). The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113, 700–765. 10.1037/0033-295X.113.4.700, [DOI] [PubMed] [Google Scholar]
- Botvinick, M., & Braver, T. (2015). Motivation and cognitive control: From behavior to neural mechanism. Annual Review of Psychology, 66, 83–113. 10.1146/annurev-psych-010814-015044, [DOI] [PubMed] [Google Scholar]
- Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., & Cohen, J. D. (2001). Conflict monitoring and cognitive control. Psychological Review, 108, 624–652. 10.1037/0033-295X.108.3.624, [DOI] [PubMed] [Google Scholar]
- Braem, S., Hickey, C., Duthoo, W., & Notebaert, W. (2014). Reward determines the context-sensitivity of cognitive control. Journal of Experimental Psychology: Human Perception and Performance, 40, 1769–1778. 10.1037/a0037554, [DOI] [PubMed] [Google Scholar]
- Braver, T. S., & Cohen, J. D. (2000). On the control of control: The role of dopamine in regulating prefrontal function and working memory. In Monsell S. & Driver J. (Eds.), Making working memory work (pp. 551–581). Cambridge, MA: MIT Press. [Google Scholar]
- Braver, T. S., Krug, M. K., Chiew, K. S., Kool, W., Westbrook, J. A., Clement, N. J., et al. (2014). Mechanisms of motivation–cognition interaction: Challenges and opportunities. Cognitive, Affective, & Behavioral Neuroscience, 14, 443–472. 10.3758/s13415-014-0300-0, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bush, G., Vogt, B. A., Holmes, J., Dale, A. M., Greve, D., Jenike, M. A., et al. (2002). Dorsal anterior cingulate cortex: A role in reward-based decision making. Proceedings of the National Academy of Sciences, U.S.A., 99, 523–528. 10.1073/pnas.012470999, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cabanac, M. (1971). Physiological role of pleasure. Science, 173, 1103–1107. 10.1126/science.173.4002.1103, [DOI] [PubMed] [Google Scholar]
- Cavanagh, J. F., Wiecki, T. V., Cohen, M. X., Figueroa, C. M., Samanta, J., Sherman, S. J., et al. (2011). Subthalamic nucleus stimulation reverses mediofrontal influence over decision threshold. Nature Neuroscience, 14, 1462–1467. 10.1038/nn.2925, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chiew, K. S. (2021). Revisiting positive affect and reward influences on cognitive control. Current Opinion in Behavioral Sciences, 39, 27–33. 10.1016/j.cobeha.2020.11.010, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chiew, K. S., & Braver, T. S. (2011). Positive affect versus reward: Emotional and motivational influences on cognitive control. Frontiers in Psychology, 2, 279. 10.3389/fpsyg.2011.00279, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chiew, K. S., Stanek, J. K., & Adcock, R. A. (2016). Reward anticipation dynamics during cognitive control and episodic encoding: Implications for dopamine. Frontiers in Human Neuroscience, 10, 555. 10.3389/fnhum.2016.00555, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cohen, J. D., & Blum, K. I. (2002). Reward and decision. Neuron, 36, 193–198. 10.1016/S0896-6273(02)00973-X [DOI] [PubMed] [Google Scholar]
- Collins, A. G. E., Ciullo, B., Frank, M. J., & Badre, D. (2017). Working memory load strengthens reward prediction errors. Journal of Neuroscience, 37, 4332–4342. 10.1523/JNEUROSCI.2700-16.2017, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cools, R. (2016). The costs and benefits of brain dopamine for cognitive control. Wiley Interdisciplinary Reviews: Cognitive Science, 7, 317–329. 10.1002/wcs.1401, [DOI] [PubMed] [Google Scholar]
- Cools, R., Froböse, M., Aarts, E., & Hofmans, L. (2019). Dopamine and the motivation of cognitive control. In D’Esposito M. & Grafman J. H. (Eds.), Handbook of Clinical Neurology (Vol. 163, pp. 123–143). Elsevier. 10.1016/B978-0-12-804281-6.00007-0 [DOI] [PubMed] [Google Scholar]
- Cubillo, A., Makwana, A. B., & Hare, T. A. (2019). Differential modulation of cognitive control networks by monetary reward and punishment. Social Cognitive and Affective Neuroscience, 14, 305–317. 10.1093/scan/nsz006, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Davidow, J. Y., Insel, C., & Somerville, L. H. (2018). Adolescent development of value-guided goal pursuit. Trends in Cognitive Sciences, 22, 725–736. 10.1016/j.tics.2018.05.003, [DOI] [PubMed] [Google Scholar]
- Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P., & Dolan, R. J. (2011). Model-based influences on humans’ choices and striatal prediction errors. Neuron, 69, 1204–1215. 10.1016/j.neuron.2011.02.027, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dayan, P. (2022). “Liking” as an early and editable draft of long-run affective value. PLoS Biology, 20, e3001476. 10.1371/journal.pbio.3001476, [DOI] [PMC free article] [PubMed] [Google Scholar]
- de Araujo, I. E., Schatzker, M., & Small, D. M. (2020). Rethinking food reward. Annual Review of Psychology, 71, 139–164. 10.1146/annurev-psych-122216-011643, [DOI] [PubMed] [Google Scholar]
- Dixon, M. L., & Christoff, K. (2012). The decision to engage cognitive control is driven by expected reward-value: Neural and behavioral evidence. PLoS One, 7, e51637. 10.1371/journal.pone.0051637, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dukes, D., Abrams, K., Adolphs, R., Ahmed, M. E., Beatty, A., Berridge, K. C., et al. (2021). The rise of affectivism. Nature Human Behaviour, 5, 816–820. 10.1038/s41562-021-01130-8, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Emanuel, A., & Eldar, E. (2023). Emotions as computations. Neuroscience & Biobehavioral Reviews, 144, 104977. 10.1016/j.neubiorev.2022.104977, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Engelmann, J. B., & Pessoa, L. (2007). Motivation sharpens exogenous spatial attention. Emotion, 7, 668–674. 10.1037/1528-3542.7.3.668, [DOI] [PubMed] [Google Scholar]
- Farashahi, S., Donahue, C. H., Hayden, B. Y., Lee, D., & Soltani, A. (2019). Flexible combination of reward information across primates. Nature Human Behaviour, 3, 1215–1224. 10.1038/s41562-019-0714-3, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fengler, A., Bera, K., Pedersen, M. L., & Frank, M. J. (2022). Beyond drift diffusion models: Fitting a broad class of decision and reinforcement learning models with HDDM. Journal of Cognitive Neuroscience, 34, 1780–1805. 10.1162/jocn_a_01902, [DOI] [PubMed] [Google Scholar]
- Ferdinand, N. K., & Czernochowski, D. (2018). Motivational influences on performance monitoring and cognitive control across the adult lifespan. Frontiers in Psychology, 9, 1018. 10.3389/fpsyg.2018.01018, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Forstmann, B. U., Ratcliff, R., & Wagenmakers, E.-J. (2015). Sequential sampling models in cognitive neuroscience: Advantages, applications, and extensions. Annual Review of Psychology, 67, 641–666. 10.1146/annurev-psych-122414-033645, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Forstmann, B. U., & Wagenmakers, E.-J. (2015). An introduction to model-based cognitive neuroscience. Cham: Springer. 10.1007/978-3-031-45271-0 [DOI] [Google Scholar]
- Frank, M. J., Gagne, C., Nyhus, E., Masters, S., Wiecki, T. V., Cavanagh, J. F., et al. (2015). fMRI and EEG predictors of dynamic decision parameters during human reinforcement learning. Journal of Neuroscience, 35, 485–494. 10.1523/JNEUROSCI.2036-14.2015, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fröber, K., & Dreisbach, G. (2014). The differential influences of positive affect, random reward, and performance-contingent reward on cognitive control. Cognitive, Affective, & Behavioral Neuroscience, 14, 530–547. 10.3758/s13415-014-0259-x, [DOI] [PubMed] [Google Scholar]
- Fröber, K., & Dreisbach, G. (2016). How performance (non-)contingent reward modulates cognitive control. Acta Psychologica, 168, 65–77. 10.1016/j.actpsy.2016.04.008, [DOI] [PubMed] [Google Scholar]
- Fröber, K., Jurczyk, V., Mendl, J., & Dreisbach, G. (2021). Investigating anticipatory processes during sequentially changing reward prospect: An ERP study. Brain and Cognition, 155, 105815. 10.1016/j.bandc.2021.105815, [DOI] [PubMed] [Google Scholar]
- Frömer, R., Lin, H., Wolf, C. K. D., Inzlicht, M., & Shenhav, A. (2021). Expectations of reward and efficacy guide cognitive control allocation. Nature Communications, 12, 1030. 10.1038/s41467-021-21315-z, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Frömer, R., Nassar, M. R., Ehinger, B. V., & Shenhav, A. (2024). Common neural choice signals can emerge artefactually amid multiple distinct value signals. Nature Human Behaviour. 10.1038/s41562-024-01971-z, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gelman, A., Vehtari, A., Simpson, D., Margossian, C. C., Carpenter, B., Yao, Y., et al. (2020). Bayesian workflow. arXiv. 10.48550/arXiv.2011.01808 [DOI] [Google Scholar]
- Grahek, I., Frömer, R., Fahey, M. P., & Shenhav, A. (2023). Learning when effort matters: Neural dynamics underlying updating and adaptation to changes in performance efficacy. Cerebral Cortex, 33, 2395–2411. 10.1093/cercor/bhac215, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grahek, I., Leng, X., Musslick, S., & Shenhav, A. (2024). Control adjustment costs limit goal flexibility: Empirical evidence and a computational account. bioRxiv. 10.1101/2023.08.22.554296, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grahek, I., Musslick, S., & Shenhav, A. (2020). A computational perspective on the roles of affect in cognitive control. International Journal of Psychophysiology, 151, 25–34. 10.1016/j.ijpsycho.2020.02.001, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grahek, I., Shenhav, A., Musslick, S., Krebs, R. M., & Koster, E. H. W. (2019). Motivation and cognitive control in depression. Neuroscience and Biobehavioral Reviews, 102, 371–381. 10.1016/j.neubiorev.2019.04.011, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Guitart-Masip, M., Duzel, E., Dolan, R., & Dayan, P. (2014). Action versus valence in decision making. Trends in Cognitive Sciences, 18, 194–202. 10.1016/j.tics.2014.01.003, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hardwick, R. M., Forrence, A. D., Krakauer, J. W., & Haith, A. M. (2019). Time-dependent competition between goal-directed and habitual response preparation. Nature Human Behaviour, 3, 1252–1262. 10.1038/s41562-019-0725-0, [DOI] [PubMed] [Google Scholar]
- Harlé, K. M., Shenoy, P., & Paulus, M. P. (2013). The influence of emotions on cognitive control: Feelings and beliefs—Where do they meet? Frontiers in Human Neuroscience, 7, 508. 10.3389/fnhum.2013.00508, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Herrnstein, R. J. (1997). The matching law: Papers in psychology and economics. Harvard University Press. [Google Scholar]
- Herz, D. M., Zavala, B. A., Bogacz, R., & Brown, P. (2016). Neural correlates of decision thresholds in the human subthalamic nucleus. Current Biology, 26, 916–920. 10.1016/j.cub.2016.01.051, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hoy, C. W., de Hemptinne, C., Wang, S. S., Harmer, C. J., Apps, M. A. J., Husain, M., et al. (2024). Beta and theta oscillations track effort and previous reward in human basal ganglia and prefrontal cortex during decision making. Proceedings of the National Academy of Sciences, U.S.A., 121, e2322869121. 10.1073/pnas.2322869121, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hulme, O. J., Morville, T., & Gutkin, B. (2019). Neurocomputational theories of homeostatic control. Physics of Life Reviews, 31, 214–232. 10.1016/j.plrev.2019.07.005, [DOI] [PubMed] [Google Scholar]
- Insel, C., Charifson, M., & Somerville, L. H. (2019). Neurodevelopmental shifts in learned value transfer on cognitive control during adolescence. Developmental Cognitive Neuroscience, 40, 100730. 10.1016/j.dcn.2019.100730, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Inzlicht, M., Shenhav, A., & Olivola, C. Y. (2018). The effort paradox: Effort is both costly and valued. Trends in Cognitive Sciences, 22, 337–349. 10.1016/j.tics.2018.01.007, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Juechems, K., & Summerfield, C. (2019). Where does value come from? Trends in Cognitive Sciences, 23, 836–850. 10.1016/j.tics.2019.07.012, [DOI] [PubMed] [Google Scholar]
- Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–292. 10.2307/1914185 [DOI] [Google Scholar]
- Keramati, M., & Gutkin, B. (2014). Homeostatic reinforcement learning for integrating reward collection and physiological stability. eLife, 3, e04811. 10.7554/eLife.04811, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kool, W., & Botvinick, M. (2018). Mental labour. Nature Human Behaviour, 2, 899–908. 10.1038/s41562-018-0401-9, [DOI] [PubMed] [Google Scholar]
- Kool, W., Shenhav, A., & Botvinick, M. M. (2017). Cognitive control as cost–benefit decision making. In Egner T. (Ed.), The Wiley handbook of cognitive control (pp. 167–189). Wiley. 10.1002/9781118920497.ch10 [DOI] [Google Scholar]
- Kouneiher, F., Charron, S., & Koechlin, E. (2009). Motivation and cognitive control in the human prefrontal cortex. Nature Neuroscience, 12, 939–945. 10.1038/nn.2321, [DOI] [PubMed] [Google Scholar]
- Krebs, R. M., & Woldorff, M. G. (2017). Cognitive control and reward. In Egner T. (Ed.), The Wiley handbook of cognitive control (pp. 422–439). Wiley. 10.1002/9781118920497.ch24 [DOI] [Google Scholar]
- Kriegeskorte, N., & Douglas, P. K. (2018). Cognitive computational neuroscience. Nature Neuroscience, 21, 1148–1160. 10.1038/s41593-018-0210-5, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Krug, M. K., & Braver, T. S. (2014). Motivation and cognitive control: Going beyond monetary incentives. In Bijleveld E. & Aarts H. (Eds.), The psychological science of money (pp. 137–158). New York, NY: Springer. 10.1007/978-1-4939-0959-9_7 [DOI] [Google Scholar]
- Kurzban, R. (2016). The sense of effort. Current Opinion in Psychology, 7, 67–70. 10.1016/j.copsyc.2015.08.003 [DOI] [Google Scholar]
- Leng, X., Yee, D., Ritz, H., & Shenhav, A. (2021). Dissociable influences of reward and punishment on adaptive cognitive control. PLoS Computational Biology, 17, e1009737. 10.1371/journal.pcbi.1009737, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lerner, J. S., Li, Y., Valdesolo, P., & Kassam, K. S. (2015). Emotion and decision making. Annual Review of Psychology, 66, 799–823. 10.1146/annurev-psych-010213-115043, [DOI] [PubMed] [Google Scholar]
- Luna, B., Paulsen, D. J., Padmanabhan, A., & Geier, C. (2013). The teenage brain: Cognitive control and motivation. Current Directions in Psychological Science, 22, 94–100. 10.1177/0963721413478416 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lutter, M., & Nestler, E. J. (2009). Homeostatic and hedonic signals interact in the regulation of food intake. Journal of Nutrition, 139, 629–632. 10.3945/jn.108.097618, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Manohar, S. G., Chong, T. T.-J., Apps, M. A. J., Batla, A., Stamelou, M., Jarman, P. R., et al. (2015). Reward pays the cost of noise reduction in motor and cognitive control. Current Biology, 25, 1707–1716. 10.1016/j.cub.2015.05.038, [DOI] [PMC free article] [PubMed] [Google Scholar]
- McGuire, J. T., & Botvinick, M. M. (2010). Prefrontal cortex, cognitive control, and the registration of decision costs. Proceedings of the National Academy of Sciences, U.S.A., 107, 7922–7926. 10.1073/pnas.0910662107, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Millner, A. J., Gershman, S. J., Nock, M. K., & den Ouden, H. E. M. (2018). Pavlovian control of escape and avoidance. Journal of Cognitive Neuroscience, 30, 1379–1390. 10.1162/jocn_a_01224, [DOI] [PubMed] [Google Scholar]
- Molinaro, G., & Collins, A. G. E. (2023). A goal-centric outlook on learning. Trends in Cognitive Sciences, 27, 1150–1164. 10.1016/j.tics.2023.08.011, [DOI] [PubMed] [Google Scholar]
- Musslick, S., Shenhav, A., Botvinick, M. M., & Cohen, J. D. (2015). A computational model of control allocation based on the expected value of control. Reinforcement Learning and Decision Making. [Google Scholar]
- Niv, Y., Daniel, R., Geana, A., Gershman, S. J., Leong, Y. C., Radulescu, A., et al. (2015). Reinforcement learning in multidimensional environments relies on attention mechanisms. Journal of Neuroscience, 35, 8145–8157. 10.1523/JNEUROSCI.2978-14.2015, [DOI] [PMC free article] [PubMed] [Google Scholar]
- O’Doherty, J. P., Hampton, A., & Kim, H. (2007). Model-based fMRI and its application to reward learning and decision making. Annals of the New York Academy of Sciences, 1104, 35–53. 10.1196/annals.1390.022, [DOI] [PubMed] [Google Scholar]
- O’Reilly, R. C. (2020). Unraveling the mysteries of motivation. Trends in Cognitive Sciences, 24, 425–434. 10.1016/j.tics.2020.03.001, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Otto, A. R., & Daw, N. D. (2019). The opportunity cost of time modulates cognitive effort. Neuropsychologia, 123, 92–105. 10.1016/j.neuropsychologia.2018.05.006, [DOI] [PubMed] [Google Scholar]
- Overmeyer, R., Kirschner, H., Fischer, A. G., & Endrass, T. (2023). Unraveling the influence of trial-based motivational changes on performance monitoring stages in a flanker task. Scientific Reports, 13, 19180. 10.1038/s41598-023-45526-0, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Padmala, S., & Pessoa, L. (2010). Interactions between cognition and motivation during response inhibition. Neuropsychologia, 48, 558–565. 10.1016/j.neuropsychologia.2009.10.017, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Parro, C., Dixon, M. L., & Christoff, K. (2018). The neural basis of motivational influences on cognitive control. Human Brain Mapping, 39, 5097–5111. 10.1002/hbm.24348, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Paulus, M. P. (2007). Neural basis of reward and craving—A homeostatic point of view. Dialogues in Clinical Neuroscience, 9, 379–387. 10.31887/DCNS.2007.9.4/mpaulus, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Paulus, M. P., & Yu, A. J. (2012). Emotion and decision-making: Affect-driven belief systems in anxiety and depression. Trends in Cognitive Sciences, 16, 476–483. 10.1016/j.tics.2012.07.009, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pedersen, M. L., & Frank, M. J. (2020). Simultaneous hierarchical Bayesian parameter estimation for reinforcement learning and drift diffusion models: A tutorial and links to neural data. Computational Brain & Behavior, 3, 458–471. 10.1007/s42113-020-00084-w, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pessiglione, M., & Delgado, M. R. (2015). The good, the bad and the brain: Neural correlates of appetitive and aversive values underlying decision making. Current Opinion in Behavioral Sciences, 5, 78–84. 10.1016/j.cobeha.2015.08.006, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pessiglione, M., Vinckier, F., Bouret, S., Daunizeau, J., & Le Bouc, R. (2018). Why not try harder? Computational approach to motivation deficits in neuro-psychiatric diseases. Brain, 141, 629–650. 10.1093/brain/awx278, [DOI] [PubMed] [Google Scholar]
- Pessoa, L. (2009). How do emotion and motivation direct executive control? Trends in Cognitive Sciences, 13, 160–166. 10.1016/j.tics.2009.01.006, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Petzschner, F. H., Garfinkel, S. N., Paulus, M. P., Koch, C., & Khalsa, S. S. (2021). Computational models of interoception and body regulation. Trends in Neurosciences, 44, 63–76. 10.1016/j.tins.2020.09.012, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Plassmann, H., Schelski, D. S., Simon, M.-C., & Koban, L. (2022). How we decide what to eat: Toward an interdisciplinary model of gut–brain interactions. Wiley Interdisciplinary Reviews: Cognitive Science, 13, e1562. 10.1002/wcs.1562, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Prater Fahey, M., Yee, D. M., Leng, X., Tarlow, M., & Shenhav, A. (2025). Motivational context determines the impact of aversive outcomes on mental effort allocation. Cognition, 254, 105973. 10.1016/j.cognition.2024.105973, [DOI] [PubMed] [Google Scholar]
- Rac-Lubashevsky, R., & Frank, M. J. (2021). Analogous computations in working memory input, output and motor gating: Electrophysiological and computational modeling evidence. PLoS Computational Biology, 17, e1008971. 10.1371/journal.pcbi.1008971, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ratcliff, R., Smith, P. L., Brown, S. D., & McKoon, G. (2016). Diffusion decision model: Current issues and history. Trends in Cognitive Sciences, 20, 260–281. 10.1016/j.tics.2016.01.007, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ritz, H., Leng, X., & Shenhav, A. (2022). Cognitive control as a multivariate optimization problem. Journal of Cognitive Neuroscience, 34, 569–591. 10.1162/jocn_a_01822, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rosa, E. D., Schiff, S., Cagnolati, F., & Mapelli, D. (2015). Motivation–cognition interaction: How feedback processing changes in healthy ageing and in Parkinson’s disease. Aging Clinical and Experimental Research, 27, 911–920. 10.1007/s40520-015-0358-8, [DOI] [PubMed] [Google Scholar]
- Rossi, M. A., & Stuber, G. D. (2018). Overlapping brain circuits for homeostatic and hedonic feeding. Cell Metabolism, 27, 42–56. 10.1016/j.cmet.2017.09.021, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rouault, M., Drugowitsch, J., & Koechlin, E. (2019). Prefrontal mechanisms combining rewards and beliefs in human decision-making. Nature Communications, 10, 301. 10.1038/s41467-018-08121-w, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Samanez-Larkin, G. R., & Knutson, B. (2015). Decision making in the ageing brain: Changes in affective and motivational circuits. Nature Reviews Neuroscience, 16, 278–289. 10.1038/nrn3917, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Saper, C. B., Chou, T. C., & Elmquist, J. K. (2002). The need to feed: Homeostatic and hedonic control of eating. Neuron, 36, 199–211. 10.1016/S0896-6273(02)00969-8, [DOI] [PubMed] [Google Scholar]
- Schad, D. J., Betancourt, M., & Vasishth, S. (2021). Toward a principled Bayesian workflow in cognitive science. Psychological Methods, 26, 103–126. 10.1037/met0000275, [DOI] [PubMed] [Google Scholar]
- Schevernels, H., Krebs, R. M., Santens, P., Woldorff, M. G., & Boehler, C. N. (2014). Task preparation processes related to reward prediction precede those related to task-difficulty expectation. Neuroimage, 84, 639–647. 10.1016/j.neuroimage.2013.09.039, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schiller, D., Yu, A. N. C., Alia-Klein, N., Becker, S., Cromwell, H. C., Dolcos, F., et al. (2024). The human affectome. Neuroscience & Biobehavioral Reviews, 158, 105450. 10.1016/j.neubiorev.2023.105450, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sescousse, G., Caldú, X., Segura, B., & Dreher, J.-C. (2013). Processing of primary and secondary rewards: A quantitative meta-analysis and review of human functional neuroimaging studies. Neuroscience & Biobehavioral Reviews, 37, 681–696. 10.1016/j.neubiorev.2013.02.002, [DOI] [PubMed] [Google Scholar]
- Shenhav, A. (2024). The affective gradient hypothesis: An affect-centered account of motivated behavior. Trends in Cognitive Sciences, S1364-6613(24)00202-X. 10.1016/j.tics.2024.08.003, [DOI] [PMC free article] [PubMed]
- Shenhav, A., Botvinick, M. M., & Cohen, J. D. (2013). The expected value of control: An integrative theory of anterior cingulate cortex function. Neuron, 79, 217–240. 10.1016/j.neuron.2013.07.007, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shenhav, A., Musslick, S., Lieder, F., Kool, W., Griffiths, T. L., Cohen, J. D., et al. (2017). Toward a rational and mechanistic account of mental effort. Annual Review of Neuroscience, 40, 99–124. 10.1146/annurev-neuro-072116-031526, [DOI] [PubMed] [Google Scholar]
- Shenhav, A., Prater Fahey, M., & Grahek, I. (2021). Decomposing the motivation to exert mental effort. Current Directions in Psychological Science, 30, 307–314. 10.1177/09637214211009510, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Silvestrini, N., Musslick, S., Berry, A. S., & Vassena, E. (2023). An integrative effort: Bridging motivational intensity theory and recent neurocomputational and neuronal models of effort and control allocation. Psychological Review, 130, 1081–1103. 10.1037/rev0000372, [DOI] [PubMed] [Google Scholar]
- Swart, J. C., Froböse, M. I., Cook, J. L., Geurts, D. E. M., Frank, M. J., Cools, R., et al. (2017). Catecholaminergic challenge uncovers distinct Pavlovian and instrumental mechanisms of motivated (in)action. eLife, 6, e22169. 10.7554/eLife.22169, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Timmer, M. H. M., Aarts, E., Esselink, R. A. J., & Cools, R. (2018). Enhanced motivation of cognitive control in Parkinson’s disease. European Journal of Neuroscience, 48, 2374–2384. 10.1111/ejn.14137, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Vogel, T. A., Savelson, Z. M., Otto, A. R., & Roy, M. (2020). Forced choices reveal a trade-off between cognitive effort and physical pain. eLife, 9, e59410. 10.7554/eLife.59410, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Weber, L. A., Yee, D. M., Small, D. M., & Petzschner, F. H. (2024). Rethinking reinforcement learning: The interoceptive origin of reward. PsyArXiv. 10.31234/osf.io/be6nv [DOI] [Google Scholar]
- Westbrook, A., & Braver, T. S. (2015). Cognitive effort: A neuroeconomic approach. Cognitive, Affective, & Behavioral Neuroscience, 15, 395–415. 10.3758/s13415-015-0334-y, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Westbrook, A., van den Bosch, R., Määttä, J. I., Hofmans, L., Papadopetraki, D., Cools, R., et al. (2020). Dopamine promotes cognitive effort by biasing the benefits versus costs of cognitive work. Science, 367, 1362–1366. 10.1126/science.aaz5891, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wiecki, T. V., & Frank, M. J. (2013). A computational model of inhibitory control in frontal cortex and basal ganglia. Psychological Review, 120, 329–355. 10.1037/a0031542, [DOI] [PubMed] [Google Scholar]
- Wiecki, T. V., Poland, J., & Frank, M. J. (2015). Model-based cognitive neuroscience approaches to computational psychiatry: Clustering and classification. Clinical Psychological Science, 3, 378–399. 10.1177/2167702614565359 [DOI] [Google Scholar]
- Wiecki, T. V., Sofer, I., & Frank, M. J. (2013). HDDM: Hierarchical Bayesian estimation of the drift-diffusion model in Python. Frontiers in Neuroinformatics, 7, 14. 10.3389/fninf.2013.00014, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wilson, R. C., & Collins, A. G. E. (2019). Ten simple rules for the computational modeling of behavioral data. eLife, 8, e49547. 10.7554/eLife.49547, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wilson, R. C., & Niv, Y. (2015). Is model fitting necessary for model-based fMRI? PLoS Computational Biology, 11, e1004237. 10.1371/journal.pcbi.1004237, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Xiao, J., Adkinson, J. A., Myers, J., Allawala, A. B., Mathura, R. K., Pirtle, V., et al. (2024). Beta activity in human anterior cingulate cortex mediates reward biases. Nature Communications, 15, 5528. 10.1038/s41467-024-49600-7, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yee, D. M., & Braver, T. S. (2018). Interactions of motivation and cognitive control. Current Opinion in Behavioral Sciences, 19, 83–90. 10.1016/j.cobeha.2017.11.009, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yee, D. M., Crawford, J. L., Lamichhane, B., & Braver, T. S. (2021). Dorsal anterior cingulate cortex encodes the integrated incentive motivational value of cognitive task performance. Journal of Neuroscience, 41, 3707–3720. 10.1523/JNEUROSCI.2550-20.2021, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yee, D. M., Leng, X., Shenhav, A., & Braver, T. S. (2022). Aversive motivation and cognitive control. Neuroscience & Biobehavioral Reviews, 133, 104493. 10.1016/j.neubiorev.2021.12.016, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yip, S. W., Barch, D. M., Chase, H. W., Flagel, S., Huys, Q. J. M., Konova, A. B., et al. (2022). From computation to clinic. Biological Psychiatry Global Open Science, 3, 319–328. 10.1016/j.bpsgos.2022.03.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhang, Y., Leng, X., & Shenhav, A. (2024). Make or break: The influence of expected challenges and rewards on the motivation and experience associated with cognitive effort exertion. Journal of Cognitive Neuroscience, 36, 2863–2885. 10.1162/jocn_a_02247, [DOI] [PMC free article] [PubMed] [Google Scholar]
