Author manuscript; available in PMC: 2023 Feb 1.
Published in final edited form as: Neurosci Biobehav Rev. 2021 Dec 12;133:104493. doi: 10.1016/j.neubiorev.2021.12.016

Aversive motivation and cognitive control

Debbie M Yee a,b,c,*, Xiamin Leng a,b, Amitai Shenhav a,b, Todd S Braver c
PMCID: PMC8792354  NIHMSID: NIHMS1767461  PMID: 34910931

Abstract

Aversive motivation plays a prominent role in driving individuals to exert cognitive control. However, the complexity of behavioral responses attributed to aversive incentives creates significant challenges for developing a clear understanding of the neural mechanisms of this motivation-control interaction. We review the animal learning, systems neuroscience, and computational literatures to highlight the importance of experimental paradigms that incorporate both motivational context manipulations and mixed motivational components (e.g., bundling of appetitive and aversive incentives). Specifically, we postulate that to understand aversive incentive effects on cognitive control allocation, a critical contextual factor is whether such incentives are associated with negative reinforcement or punishment. We further illustrate how the inclusion of mixed motivational components in experimental paradigms enables increased precision in the measurement of aversive influences on cognitive control. A sharpened experimental and theoretical focus regarding the manipulation and assessment of distinct motivational dimensions promises to advance understanding of the neural, monoaminergic, and computational mechanisms that underlie the interaction of motivation and cognitive control.

Keywords: Aversive motivation, Cognitive control, Mixed motivation, Negative reinforcement, Punishment, Serotonin, Habenula

1. Introduction

In daily life, individuals demonstrate an impressive ability to weigh the relevant incentives when deciding the amount and type of effort to invest when completing cognitively demanding tasks (Shenhav et al., 2017). These incentives can include both the potential positive outcomes obtained from task completion (e.g., bonus earned, social praise), as well as potential negative outcomes that can be avoided if the task is not completed (e.g., job termination, social admonishment). The ability to successfully adjust cognitive control based on diverse motivational incentives is highly significant for determining one’s future academic, career, and social goals (Bonner and Sprinkle, 2002; Duckworth et al., 2007; Mischel et al., 1989), as well as providing a necessary intermediary step for informing how motivational and cognitive deficits may arise in clinical disorders (Barch et al., 2015; Jean-Richard-Dit-Bressel et al., 2018).

Importantly, individuals often face a mixture – or “bundle” – of positive and negative incentives that may jointly occur in relation to their behavior (e.g., the motivation to earn a good salary and to avoid being fired may jointly drive a worker to allocate more effort to optimize their performance, relative to either incentive alone). A crucial factor often neglected in cognitive neuroscience studies of motivation and cognitive control is that the impact of a negative incentive on behavior may depend strongly on the context of how it is bundled (e.g., a good salary plus the fear of job termination may motivate an individual to increase their effort, whereas a good salary accompanied by frequent and harsh criticism from a supervisor may cause that same person to decrease their effort). In this review, we provide a detailed examination of how contextual factors moderate bundled incentive effects to better elucidate the mechanisms that underlie interactions of motivation and cognitive control.

Recent empirical research has shed some light on the neural mechanisms of motivation and cognitive control interactions (Botvinick and Braver, 2015; Braver et al., 2014; Yee and Braver, 2018). In particular, dopamine has been widely postulated as a key neurotransmitter (Cools, 2008, 2019; Westbrook and Braver, 2016), and a broad network of brain regions have been shown to underlie these interactions (Parro et al., 2018). Extant studies in this domain have almost exclusively focused on the impact of expected rewards (e.g., monetary bonuses, social praise) on higher-order cognition and cognitive control (Aarts et al., 2011; Bahlmann et al., 2015; Braem et al., 2014; Chiew and Braver, 2016; Duverne and Koechlin, 2017; Etzel et al., 2015; Fröber and Dreisbach, 2016; Frömer et al., 2021; Kouneiher et al., 2009; Locke and Braver, 2008; Small et al., 2005). In contrast, much less is known about the mechanisms through which negative outcomes (e.g., monetary losses, shocks) interact with cognitive control (Braem et al., 2013; Fröbose and Cools, 2018). Although this dissociation by motivational valence (e.g., rewarding vs. aversive) in decision-making is not new (Pessiglione and Delgado, 2015; Plassmann et al., 2010), it remains a significant challenge to determine whether rewarding and aversive motivational values are processed in common or separate neural circuits (Hu, 2016; Morrison and Salzman, 2009).

A recent theoretical framework that shows great promise for integrating the role of aversive motivation in cognitive control is the Expected Value of Control (EVC) model (Shenhav et al., 2013, 2017). The EVC model utilizes a computationally explicit formulation of cognitive control in terms of reinforcement learning and decision-making processes in order to characterize how diverse motivational incentives (e.g., rewards, penalties) impact cognitive control allocation. Critically, EVC reframes adjustments in cognitive control as a fundamentally motivated process, determined by weighing effort costs against potential benefits of control to yield the integrated, net expected value. Although the EVC model has been successfully applied to characterize how rewarding incentives offset the cost of exerting cognitive control, the current cost-benefit analysis needs to be expanded to account for the diversity of strategies for control allocation that arise from aversive motivational incentives.
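For readers less familiar with the EVC framework, its core quantity can be summarized compactly as follows. This is a simplified rendering of the general formulation in Shenhav et al. (2013); the notation is ours and omits several details of the full model (e.g., multiple simultaneous control signals).

```latex
% Simplified statement of the EVC objective (notation adapted; see Shenhav et al., 2013)
\mathrm{EVC}(\mathrm{signal}, \mathrm{state}) =
  \left[\sum_{i} \Pr(\mathrm{outcome}_i \mid \mathrm{signal}, \mathrm{state}) \cdot \mathrm{Value}(\mathrm{outcome}_i)\right]
  - \mathrm{Cost}(\mathrm{signal}),
\qquad
\mathrm{signal}^{*} = \arg\max_{\mathrm{signal}} \ \mathrm{EVC}(\mathrm{signal}, \mathrm{state})
```

Under the mixed-motivation extension emphasized in this review, the Value term would aggregate the signed values of bundled appetitive and aversive outcomes (e.g., expected rewards net of expected penalties).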

These important gaps in the literature highlight a ripe opportunity and unique challenge for expanding the investigation of motivation and cognitive control interactions. But why have researchers not yet made significant inroads into characterizing the mechanisms underlying aversive motivation effects on cognitive control? We argue that obstacles to progress can be attributed to two main factors. First, much of the contemporary neuroscience literature has neglected to consider the motivational context through which aversive incentives influence different strategies for allocating cognitive control; that is, the motivational context can be operationalized as the degree to which the motivation to attain or avoid an outcome will increase (i.e., reinforcement) or decrease (i.e., punishment) behavioral responding. For example, whereas rewarding incentives typically increase behavioral responding to approach the expected reward, aversive incentives can lead an organism to either invigorate or attenuate behavioral responses to avoid the aversive outcome, depending on the motivational context (e.g., see Levy and Schiller, 2020; Mobbs et al., 2020). Second, current experimental paradigms rarely include bundled incentives (i.e., mixed motivation, when both appetitive and aversive outcomes are associated with a behavior), despite the intuition that people likely integrate diverse motivational incentives when deciding how much cognitive control to allocate in mentally demanding tasks. A particular challenge is the lack of well-controlled experimental assays that can explicitly quantify the diverse effects of aversive incentives on cognitive control.

In this review, our primary objective is to identify and highlight critical motivational dimensions (e.g., motivational context and mixed motivation), which for the most part have been neglected in prior treatments. In our opinion, these dimensions have strong potential to advance understanding regarding the neural, monoaminergic, and computational mechanisms of aversive motivation and cognitive control. In particular, we demonstrate how incorporating these motivational dimensions, which have played a prominent role in animal learning experimental paradigms, into experimental studies with humans can improve the granularity and precision through which we can measure aversive incentive effects on cognitive control allocation. Specifically, we hypothesize that stronger consideration of the motivational context of aversive incentives can clarify the putative dissociable neural pathways and computational mechanisms through which aversive motivation may guide cognitive control allocation. Similarly, the inclusion of mixed motivational components in experimental paradigms will facilitate increased precision in measuring the aversive influences on cognitive control. In sum, we anticipate this review will invigorate greater appreciation for foundational learning and motivation theories that have guided the cornerstone discoveries over the past century, as well as catalyze innovative, groundbreaking research into the computations, brain networks, and neurotransmitter systems associated with aversive motivation and cognitive control.

2. Historical perspectives on aversively motivated behavior

2.1. Pavlovian vs. instrumental control of aversive outcomes

The dichotomy between Pavlovian and instrumental control of behavior has long played an influential role in our contemporary understanding of motivation (Guitart-Masip et al., 2014; Mowrer, 1947; Rescorla and Solomon, 1967). Here, Pavlovian control refers to when a conditioned stimulus (CS) elicits a conditioned response (CR) that is typically associated with an unconditioned stimulus (US) (Dickinson and Mackintosh, 1978; Pavlov, 1927; Rescorla, 1967, 1988). For example, a rat will learn to salivate when it hears a tone that predicts delivery of a food pellet, or alternatively will learn to produce defensive responses (e.g., freezing, panic, anxiety) when it hears a tone that predicts electric shocks. In contrast to Pavlovian control, instrumental control describes when an ongoing behavior is “controlled by its consequences,” such that the likelihood of a behavioral response increases when an organism receives a reinforcing outcome for performing that response (Skinner, 1937; Staddon and Cerutti, 2003). For example, a rat will increase its rate of lever pressing if that action is followed by a food reward (e.g., reinforcement), whereas a rat will decrease its rate of lever pressing if that action is followed by an electric shock. Importantly, although both examples illustrate how Pavlovian and instrumental control lead to changes in behavior, the key distinction is that in the former, the appetitive or aversive outcome (US) is presented independent of an organism’s behavior, whereas, in the latter, an organism must perform a specific action in order to successfully attain or avoid a certain outcome. This distinction is detailed in Table 1 and Fig. 1a.

Table 1.

Pavlovian vs. Instrumental Control. Detailed comparison of key differences between Pavlovian and Instrumental Control.

Pavlovian Control (e.g., Classical, Respondent) | Instrumental Control (e.g., Operant)
Behavior is controlled by the stimulus preceding the response | Behavior is controlled by the consequences of the response
Responses are elicited by neutral stimuli repeatedly associated with an appetitive or aversive unconditioned outcome | Responses are driven by the motivation to attain a rewarding outcome or to avoid/escape from an aversive outcome
Goal is to increase the probability of a response (CR) to an initially neutral stimulus (CS) by associating the neutral stimulus with an unconditioned stimulus (US) | Goal is to increase the probability of a response in the presence of a discriminative stimulus (SD) by following a desired response with a reinforcing outcome or following an undesired response with a punishing outcome
Stimulus-Stimulus contingencies | Response-Outcome contingencies
US follows CS during training regardless of whether or not the response (CR) occurs; the CR is brought under the control of a stimulus event (CS) that precedes the response, rather than the one that follows it | Reinforcer or punisher follows the response only if the organism performs the voluntary action

CR = conditioned response; CS = conditioned stimulus; US = unconditioned stimulus; SD = discriminative stimulus.

Fig. 1.

Pavlovian vs. Instrumental Control of Motivated Behavior. a) Schematic of how rewarding vs. aversive motivation may elicit various behavioral responses under Pavlovian vs. instrumental control paradigms. b) Instrumentally controlled responses by motivational valence (rewarding vs. aversive) and behavioral responding (activation vs. inhibition). Given that the same outcome may strengthen or weaken responses based on the context, consideration of both the motivational valence of the expected outcome and its impact on instrumental responding is of critical importance. In the case of aversive motivation, we highlight how avoidance motivation may lead to either behavioral activation (active avoidance) or behavioral inhibition (passive avoidance), depending on the context (negative reinforcement for the former, punishment for the latter).

Despite the utility of the Pavlovian-instrumental distinction in explaining the influence of rewarding and aversive incentives on behavior in the animal literature (e.g., conditioned responses), this distinction has largely been neglected in human cognitive neuroscience studies of motivation and cognitive control. This neglect may be a primary factor contributing to the contradictory findings associated with aversive motivation in the contemporary literature, which we describe in the subsequent sections.
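To make the contingency distinction summarized in Table 1 concrete, the toy simulation below contrasts a Pavlovian phase, in which the US follows the CS regardless of what the animal does, with an instrumental phase, in which the outcome is delivered only if the response is emitted. This is our own illustrative sketch, not a procedure from the cited studies; the function names, learning rule, response rule, and parameter values are all hypothetical.

```python
# Illustrative toy sketch (ours): stimulus-stimulus vs. response-outcome contingencies.
import random

random.seed(0)

def pavlovian_phase(n_trials=50, alpha=0.2, us_magnitude=1.0):
    """Pavlovian control: the US follows the CS on every trial, independent of behavior.
    A simple Rescorla-Wagner-style update tracks the CS->US associative strength V."""
    V = 0.0
    for _ in range(n_trials):
        V += alpha * (us_magnitude - V)  # update occurs regardless of any response
    return V

def instrumental_phase(n_trials=50, alpha=0.2, outcome_value=1.0):
    """Instrumental control: the outcome is delivered only when the response is made,
    so the action value Q is updated only on trials with a lever press."""
    Q = 0.0
    for _ in range(n_trials):
        p_press = 0.2 + 0.6 * max(min(Q, 1.0), -1.0)      # crude value-to-response rule
        if random.random() < min(max(p_press, 0.05), 0.95):
            Q += alpha * (outcome_value - Q)              # outcome contingent on responding
    return Q

print("Pavlovian CS-US strength:", round(pavlovian_phase(), 2))
print("Instrumental action value (food, +1):", round(instrumental_phase(outcome_value=+1.0), 2))
print("Instrumental action value (shock, -1):", round(instrumental_phase(outcome_value=-1.0), 2))
```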

2.2. Aversive outcomes may strengthen or weaken behavioral responses, depending on the motivational context

A large source of confusion regarding aversive motivation stems from imprecise terminology (i.e., the conflation of “aversive outcome” with “punishment”). This misunderstanding likely arises from insufficient clarity regarding reinforcement theory. Based on Thorndike’s “law of effect,” the theory posits that responses that produce a satisfying state will be strengthened, whereas responses that produce a discomforting state will be weakened (Thorndike, 1927, 1933). Formally, a reinforcer is anything that strengthens an immediately preceding instrumental response, whereas a punisher is anything that weakens an immediately preceding instrumental response (Premack, 1971; Skinner, 1953). Reinforcement is produced by denying the subject the opportunity to occupy a pleasant state as long as it would choose to, thus strengthening instrumental responding to approach or maintain that pleasant state; whereas punishment is produced by forcing the subject to occupy an unpleasant state longer than it would choose to, thus suppressing instrumental responding to avoid or escape from an unpleasant state (Estes, 1944; Solomon, 1964). A key insight arising from this distinction is that an aversive outcome can either reinforce (i.e., strengthen) or punish (i.e., weaken) an instrumentally conditioned response, depending on the context in which that outcome is presented (Crosbie, 1998; McConnell, 1990; Terhune and Premack, 1974). See Fig. 1b for illustration.

The distinction between negative reinforcement and punishment has great potential to provide insight into the interactions between aversive motivation and cognitive control. Typically, an individual will desire to avoid aversive outcomes. In these scenarios, Pavlovian conditioned stimuli can either strengthen or weaken instrumental responses to facilitate avoidance (Bull and Overmier, 1968; Overmier et al., 1971). Specifically, negative reinforcement refers to when successful escape from an aversive outcome strengthens instrumental responding in future trials (Masterson, 1970) and will produce a pleasant or rewarding affective response (H. Kim et al., 2006). Conversely, punishment refers to when the presence of aversive outcomes weakens instrumental responding in an approach motivation context (Dickinson and Pearce, 1976; Estes and Skinner, 1941) and will potentiate defensive responses such as anxiety, stress, arousal, vigilance, panic, or freezing (Hagenaars et al., 2012; Sege et al., 2017). Importantly, we suggest that the inclusion of aversive incentives can provide greater granularity into how distinct motivational factors can bias individuals to use different strategies for allocating cognitive control to accomplish behavioral goals. For example, a clear representation of the motivational context in which an aversive outcome will be encountered (e.g., punishment or negative reinforcement) can help individuals determine the amount of effort required to achieve their goal, as well as discern the strategy through which they will adjust their cognitive control allocation to meet that goal.
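The following minimal sketch (our illustration; the update rule and parameter values are hypothetical) makes the negative reinforcement vs. punishment distinction explicit: the same aversive outcome (a shock) strengthens an instrumental response when responding avoids or escapes the shock, but weakens it when responding produces the shock.

```python
# Toy sketch (ours): identical aversive outcome, opposite effects on instrumental responding.
def simulate(contingency, n_trials=100, alpha=0.1, shock=-1.0):
    Q = 0.0  # learned value of emitting the instrumental response
    for _ in range(n_trials):
        if contingency == "negative_reinforcement":
            # Responding avoids/escapes the shock: the obtained outcome (no shock, 0)
            # is better than the shock that would otherwise occur, so responding strengthens.
            Q += alpha * ((0.0 - shock) - Q)
        elif contingency == "punishment":
            # Responding produces the shock, so responding weakens.
            Q += alpha * (shock - Q)
    return Q

print("Negative reinforcement -> response value:", round(simulate("negative_reinforcement"), 2))  # ~ +1.0
print("Punishment -> response value:", round(simulate("punishment"), 2))                          # ~ -1.0
```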

2.3. Mixed motivation: a key ingredient for motivational conflict and mutual inhibition

One particular challenge for quantifying the effects of aversive motivation is that its influence on behavior is much less parsimonious than that of appetitive motivation. Whereas approach-related motivation typically produces purely appetitive or consummatory responses to pursue a rewarding outcome, avoidance-related motivation typically engenders a wide range of behavioral responses to avoid or escape from detected threats (Fanselow, 1994; Fanselow and Lester, 1988; Masterson and Crawford, 1982) (See Fig. 2a). For example, in order to avoid receiving an electric shock (e.g., active avoidance), an organism may freeze, run (e.g., escape), produce a stress or fear response, or engage in a combination of such behaviors (Church, 1963).

Fig. 2.

a) Approach and avoidance motivation elicit divergent behavioral responses, with the former associated with actions to approach the rewarding outcome and the latter associated with actions to avoid or escape from the aversive outcome. b) According to Reinforcement Sensitivity Theory (RST), three core systems underlie human emotion: 1) fight-flight-freeze system (FFFS), 2) behavioral approach system (BAS), and 3) behavioral inhibition system (BIS). Adapted from Gray, 1982 & Gray and McNaughton, 2000. Relevant to the current proposal, the BIS system mediates the resolution of goal conflict (e.g., approach-avoidance motivational conflict). The intensity of this conflict is associated with increased subjective anxiety. c) Recent extensions of RST have suggested defensive distance and defensive direction as two important dimensions that may help organize defensive responses to aversive motivation. Defensive distance describes the perceived distance from a threat (proximal to distal) that influences the intensity of a defensive response. Defensive direction describes the range of responses between actively avoiding or escaping a threat (defensive avoidance) to cautiously approaching a threat (defensive approach). Relevant to the current review, this delineation between defensive avoidance and defensive approach reveals how the critical distinction between negative reinforcement and punishment may underlie distinct fear-mediated and anxiety-mediated defensive responses to aversive motivation. Simplified adaptation from McNaughton and Corr, 2004.

Although approach and avoidance motivation have long been theorized to be mediated by distinct systems (Carver, 2006; Carver and White, 1994; Elliot and Covington, 2001), the extent to which individuals exert mental and physical effort to complete behavioral goals is almost certainly determined by mixed motivation, i.e., the combined or integrated net value of multiple incentives which potentially increase or decrease behavior depending on the motivational context (Aupperle et al., 2015; Corr and McNaughton, 2012; Yee and Braver, 2018). For example, an individual who is motivated both to increase their likelihood of attending a good medical school and to avoid the consequences of failing their course (e.g., academic probation) may exert more effort when studying for a final exam than they would under a single motivation (e.g., approach or avoidance motivation alone). Conversely, an individual who is motivated to perform well on their exam but finds the content aversive (e.g., a student who has strong disgust reactions when studying human anatomy) may be less motivated to study overall, relative to an exam on a less aversive topic (e.g., fly genetics).

Over the years, researchers have attempted to organize the diversity of defensive responses precipitated by aversive motivation (LeDoux and Pine, 2016; McNaughton, 2011). One well-established framework that has played an influential role is reinforcement sensitivity theory (RST) (Gray, 1982; Gray and McNaughton, 2000). According to RST, three core systems underlie human emotion: 1) a fight-flight-freeze system (FFFS) that is responsible for mediating behaviors in response to aversive stimuli (e.g., avoidance, escape, panic, phobia), 2) a behavioral approach system (BAS) that is responsible for mediating reactions to all appetitive stimuli (e.g., reward-orientation, impulsiveness), and 3) a behavioral inhibition system (BIS) which mediates the resolution of goal conflict (i.e., between FFFS and BAS, or even FFFS-FFFS and BAS-BAS conflict) (Pickering and Corr, 2008). Critically, the BIS is hypothesized to play a key role in generating anxiety during mixed motivational contexts. For example, during approach-avoidance conflict, activation of the BIS will increase attention to the environment and arousal, with the level of anxiety that is elicited tracking the intensity of conflict evoked by such attention (See Fig. 2b). Although approach-avoidance conflicts are more commonly observed, RST proposes that anxiety can also arise from approach-approach and avoidance-avoidance conflicts.

Recent extensions of the RST have postulated two relevant dimensions that may help organize the variety of defensive responses to aversive motivation (McNaughton, 2011; McNaughton and Corr, 2004). As illustrated in Fig. 2c, defensive direction describes the functional distinction between leaving a dangerous situation (active avoidance or escape; fear mediated by the FFFS system) and increasing caution to avoid a dangerous outcome (approach-avoidance conflict or passive avoidance; anxiety mediated by the BIS system). Conversely, defensive distance describes how an organism’s intensity of responding is associated with one’s perceived distance to the threat. For example, more proximal threats would elicit more overt reactive behavioral responses (e.g., panic, defensive quiescence). In contrast, distal threats may elicit more covert non-defensive behavior (e.g., obsessive attention towards the potential threat may drive compulsions to avoid that threat and increased anxiety).

The extended RST framework suggests the importance of mixed motivation for understanding incentive effects on behavior. In particular, the joint consideration of rewarding and aversive incentives associated with an outcome could have the effect of either further strengthening or competitively inhibiting motivational influences on instrumental or goal-directed behavior (Dickinson and Dearing, 1979; Dickinson and Pearce, 1977; Konorski, 1967). In addition to the impact on behavior, an important open question in this domain is whether and how the interaction between different motivational systems increases (Barker et al., 2019) or reduces (Solomon, 1980; Solomon and Corbit, 1974) affective or emotional responses. To further glean insight into the neural and computational mechanisms long associated with defensive responses to aversive motivation (Hofmann et al., 2012; Mobbs et al., 2009, 2020; Steimer, 2002), we argue that future work incorporating mixed motivation will help clarify how aversive motivation modulates the intensity or frequency of an individual’s effort allocation in mentally challenging tasks.

3. Experimental paradigms to investigate aversive motivation and cognitive control

The perspectives that arise from the animal learning literature suggest that a significant gap in characterizing the effects of aversive motivation on cognitive control is the lack of validated experimental paradigms to probe such interactions. Therefore, to make progress in this area of research, it is necessary to develop sensitive and specific task paradigms that allow researchers to systematically manipulate and measure how aversive outcomes influence goal-directed cognitive control. In the following sections, we first highlight several prominent experimental paradigms that have provided great insight into appetitive-aversive motivation interactions across animals and humans. We then describe several task paradigms that hold great promise for investigating aversive motivation and cognitive control interactions. In these paradigms, we show how the inclusion of both mixed motivation and motivational context can help quantify the extent to which aversive incentives may differentially guide cognitively controlled behavior, an important intermediary step for characterizing the engagement of underlying neural and computational mechanisms.

3.1. Experimental paradigms of aversive motivation on goal-directed behavior

Researchers in the animal learning domain have dedicated significant time and effort towards examining how aversive outcomes act as behavioral inhibitors of the response strength conditioned to appetitive outcomes (Dickinson and Dearing, 1979; Dickinson and Pearce, 1977; Nasser and McNally, 2012, 2013). Although the combination of aversive and appetitive motivational incentives is known to produce mutual inhibitory effects on instrumentally controlled responses (Dickinson and Pearce, 1977), researchers rarely consider the myriad ways through which this mutual inhibition occurs when manipulating aversive motivation in behavioral tasks. As illustrated in Fig. 3, we describe four classic experimental paradigms that utilize distinct approaches for examining how combined rewarding and aversive incentives mutually inhibit instrumental responding in animals and humans. Importantly, our brief synthesis of such foundational paradigms aims to inspire novel insight for future experimental research that probes how aversive motivation influences the cognitive control processes guiding incentive-modulated goal-directed behavior (as described in Section 3.2).

Fig. 3.

Experimental Paradigms of Appetitive-Aversive Interactions. Four established paradigms to investigate approach-avoidance motivational conflict are illustrated. These tasks highlight how including both appetitive and aversive incentives (e.g., mixed motivation) may produce mutual inhibition of instrumental behavior. A key distinction between these procedures is whether the aversive stimuli are conditioned in a Pavlovian or instrumental manner, as well as whether the presence of the aversive stimulus strengthens or weakens instrumental behavior (e.g., motivational context). The four paradigms are labeled as follows: a) Outcome devaluation. b) Conditioned Suppression. c) Pavlovian Instrumental Transfer. d) Counterconditioning. The black stimuli indicate a neutral stimulus, which is initially not paired with an unconditioned stimulus (e.g., food pellet or shock). The green stimuli indicate a rewarding incentive (e.g., food pellet) or a conditioned stimulus associated with a rewarding outcome (e.g., the rat learns that pulling the lever leads to a food pellet). In contrast, the red stimuli indicate an aversive incentive (e.g., shock) or a conditioned stimulus associated with an aversive outcome (e.g., the rat learns that a tone predicts the shock). The dashed rectangle indicates which incentives are bundled to facilitate mixed motivation.

3.1.1. Outcome devaluation

One classic approach for measuring aversive motivation effects on instrumental responding is outcome devaluation (also called reinforcer devaluation), a phenomenon in which the bundling of an aversive outcome (e.g., an electric shock) with a rewarding outcome will weaken instrumental responding towards the expected reward (e.g., a food pellet). Rachlin (1972) first demonstrated these punishment effects on pre-conditioned baseline excitatory responses in rats and pigeons. In these studies, the animals first learned to increase their rate of lever pressing to obtain food rewards (e.g., positive reinforcement). Next, the food rewards were paired with electric shocks (e.g., appetitive-aversive motivation), and the decreased overall value of the bundled incentives suppressed the animal’s rate of lever pressing. Critically, this paradigm demonstrates how approach motivated behavior can be inhibited by including an additional aversive incentive in a measurable and systematic manner (Dickinson and Pearce, 1977). For example, the strength of an additional aversive incentive will determine the degree to which that aversive incentive inhibits approach-related behavior (e.g., greater suppression of instrumental responding occurs when food rewards are paired with more frequent and/or more intense shocks). Although some prior studies in rodents found that overtraining reduced the effect of outcome devaluation (Adams and Dickinson, 1981), recent work suggests that the degree to which additional aversive incentives may inhibit behavioral responding may depend on the training duration (Araiba et al., 2018). Interestingly, however, although outcome devaluation paradigms have been found to robustly elicit behavioral inhibition effects in rodents and monkeys (Balleine et al., 2005; Izquierdo and Murray, 2010; Murray and Rudebeck, 2013), there is mixed evidence regarding the degree to which overtraining impacts outcome devaluation in humans (Tricomi et al., 2009; de Wit et al., 2018), suggesting a need to examine the degree to which the findings obtained from this paradigm in animals are transferable to human studies. Nevertheless, outcome devaluation is a well-established paradigm that provides a promising approach to investigate how the bundling of rewarding and aversive incentives can modulate the strength of action-outcome contingencies (See Fig. 3a).
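As a toy illustration of the devaluation logic (ours, not the original procedures; the value-to-response mapping and all numbers are hypothetical), bundling a shock with a food reward lowers the net value of the bundle, which in turn suppresses the predicted response rate:

```python
# Toy sketch (ours): additive bundling of appetitive and aversive values suppresses responding.
import math

def response_rate(net_value, max_rate=10.0, gain=2.0):
    # Hypothetical monotonic mapping from net incentive value to lever presses per minute.
    return max_rate / (1.0 + math.exp(-gain * net_value))

v_food = 1.0
for v_shock in (0.0, -0.5, -1.5):   # no shock, mild shock, intense/frequent shock
    bundle = v_food + v_shock       # simple additive bundling assumption
    print(f"shock value {v_shock:+.1f} -> bundle value {bundle:+.1f} -> "
          f"{response_rate(bundle):.1f} presses/min")
```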

3.1.2. Conditioned suppression

Another approach for manipulating aversive motivation is via Pavlovian mechanisms, such that the presence of an aversive Pavlovian conditioned stimulus (CS) will inhibit instrumental behavior (Dickinson and Balleine, 2002; Mowrer, 1947, 1956). One such type of paradigm is conditioned suppression, which describes how a Pavlovian CS (e.g., a tone) paired with a noncontingent aversive stimulus (e.g., electric shock) may suppress instrumental responding (e.g., lever pressing) for a food reward (Lyon, 1968). In this paradigm, animals receive Pavlovian and instrumental training in separate phases. In the first phase, they learn an association between the Pavlovian CS and an aversive outcome (e.g., tone that predicts an electric shock) and develop a Pavlovian conditioned response to the aversive Pavlovian CS. In the second phase, they learn an association between performing an action and receiving a rewarding outcome (e.g., pressing a lever will lead to a food reward), and receipt of the food reward reinforces instrumental responding (e.g., positive reinforcement). The key manipulation that evaluates the extent to which the aversive Pavlovian CS inhibits responding is the test phase, through which both conditioning procedures are combined. Specifically, when both the Pavlovian CS and the lever are present, one can measure the extent to which the presence of the aversive Pavlovian CS (e.g., tone that predicts the shock) may suppress an animal’s desire to press the lever to receive a food reinforcer (Bouton and Bolles, 1980; Estes and Skinner, 1941). Notably, although some versions of this paradigm superimpose the Pavlovian conditioned aversive stimulus on an instrumentally controlled response (Blackman, 1970; Dickinson, 1976), others have noted that these conditioned suppression effects may also arise even during extinction of the aversive stimulus, similar to the aversive Pavlovian instrumental transfer paradigms discussed in the next section. This paradigm is illustrated in Fig. 3b.
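Although not detailed above, conditioned suppression is conventionally quantified with a suppression ratio that compares responding during the CS with responding during an equal-length period immediately preceding CS onset; values near 0 indicate complete suppression and 0.5 indicates no suppression (the symbols below are ours):

```latex
% Conventional suppression ratio, where R_CS is the number of instrumental responses emitted
% during the CS and R_pre-CS is the number emitted in an equal-length pre-CS period
\text{suppression ratio} = \frac{R_{\mathrm{CS}}}{R_{\mathrm{CS}} + R_{\mathrm{pre\text{-}CS}}}
```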

3.1.3. Aversive Pavlovian Instrumental Transfer (PIT)

Aversive Pavlovian Instrumental Transfer (PIT; also referred to as transfer-of-control paradigms) is nearly identical to conditioned suppression, in that animals receive Pavlovian and instrumental training in separate phases. However, the main difference is that the test phase (transfer) occurs under extinction (Campese et al., 2013, 2020; Cartoni et al., 2016; Estes, 1943). Specifically, during the test (transfer) phase, the animal is presented with the Pavlovian CS in extinction (e.g., tone but no shock) while it performs the instrumental task (Scobie, 1972). This manipulation is important because the ‘transfer’ effect of the aversive motivation on instrumental responding is not conflated with the sensory properties of the aversive outcome. Some have even argued that because of this feature, aversive PIT is a ‘purer’ approach (than conditioned suppression) to study Pavlovian-instrumental interactions (Campese, 2021; Campese et al., 2013). The PIT paradigm is illustrated in Fig. 3c.

The aversive PIT paradigm has recently garnered much attention in human cognitive neuroscience, as it has been well documented to measure the effects of aversive motivation on instrumentally controlled behavior (Garofalo and Robbins, 2017; Geurts et al., 2013a; Lewis et al., 2013; Rigoli et al., 2012). In this human adaptation, participants first undergo instrumental training, through which they learn to push a button (approach-go) or do nothing (approach-no-go) to approach a rewarding stimulus (monetary gain), or to push a button (withdrawal-go) or do nothing (withdrawal-no-go) to avoid an aversive stimulus (monetary loss). Participants then undergo Pavlovian conditioning, through which unfamiliar audiovisual stimuli (pure tone and fractal) are paired with various appetitive or aversive liquids. Finally, during the testing phase (PIT), participants perform the same task as during instrumental training, except that now the Pavlovian stimuli are tiled in the background. Critically, these PIT trials are performed in extinction, such that the liquid incentives are not presented. Interestingly, prior findings have demonstrated that the aversive Pavlovian CSs inhibit approach-related instrumental responding and invigorate withdrawal-related instrumental responding, consistent with a successful PIT effect (Geurts et al., 2013a; Millner et al., 2018).
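One simple way to score such an aversive PIT effect is to compare instrumental responding when the aversive Pavlovian CS is present against responding during a neutral CS, separately for approach- and withdrawal-type actions. The sketch below is our own hypothetical example with made-up response counts, intended only to show the direction of the effect described above:

```python
# Hypothetical scoring sketch (ours): aversive PIT effect per action type.
presses = {
    # action type: {cue condition: button presses per test block}  (made-up numbers)
    "approach":   {"neutral_cs": 24, "aversive_cs": 15},
    "withdrawal": {"neutral_cs": 18, "aversive_cs": 27},
}

for action, counts in presses.items():
    pit_effect = counts["aversive_cs"] - counts["neutral_cs"]
    direction = "invigorated" if pit_effect > 0 else "suppressed"
    print(f"{action:10s}: PIT effect = {pit_effect:+d} presses ({direction} by the aversive CS)")
```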

3.1.4. Counterconditioning

One important consideration not yet discussed is that an aversive stimulus may counterintuitively become less effective in suppressing instrumental responding when it predicts a rewarding outcome (i.e., it may reinforce instrumental responding). In the counterconditioning procedure (Dickinson and Pearce, 1976), the animal first learns an association between lever pressing and a food reward, which results in positive reinforcement of the lever pressing response. Next, an aversive stimulus (e.g., electric shock) is introduced and always precedes the positive food reinforcer (e.g., pressing a lever for a food reward). When the animal learns the association between the aversive stimulus (e.g., shock) and the food reinforcer, the aversive stimulus becomes less effective in its ability to act as a punisher (compared to when no food reinforcement follows) because it predicts a food reward. Interestingly, a separate experiment in this study replaced the electric shocks with an aversive Pavlovian CS (e.g., a tone predicting a shock) and found the same counterconditioning effects, confirming that the inhibitory effects of the positive reinforcement on the aversive Pavlovian CS were not simply due to the stimulus properties of the shock (Nasser and McNally, 2013). The counterconditioning paradigm is illustrated in Fig. 3d.

3.2. Experimental paradigms of aversive motivation and cognitive control

Despite the extensive history and foundational establishment of well-designed animal learning paradigms to characterize appetitive-aversive motivation interactions, this work has primarily been carried out in rodents. Conversely, there is much less work adapting these paradigms to investigate how mixed motivation impacts decision making in primates (Amemori et al., 2015; Amemori and Graybiel, 2012; Leathers and Olson, 2012; Roesch and Olson, 2004) and humans (Aupperle et al., 2011; Kirlic et al., 2017). Even when such paradigms have been implemented to examine how animals and humans make decisions based on “bundles” of rewarding and aversive incentives, only a very few studies have explicitly examined how mixed motivation impacts the allocation of cognitive control. Moreover, to account for the variety of behavioral strategies that arise from aversive incentives (e.g., penalties can facilitate either the enhancement or the avoidance of cognitive control; Fröbose and Cools, 2018; Yee and Braver, 2018), it is imperative to design innovative experimental paradigms that can accurately characterize the full range of cognitively controlled behaviors that arise from these interactions.

Here, we draw inspiration from classical reinforcement theory and describe several recent paradigms that have examined the influence of aversive incentives on cognitive control. Similar to the classical paradigms previously described, these aversive motivation-control paradigms also incorporate mixed motivation, the combined influence of multiple incentives. In contrast to prior studies, which have only examined the effects of aversive incentives on conditioned behavioral responses (Bradshaw et al., 1979; Reynolds, 1968; Weiner, 1989), these paradigms explicitly manipulate how rewarding and aversive motivational incentives combine to impact cognitive control. Moreover, the inclusion of multiple diverse types of motivational incentives is crucial for studying these interactions by valence, as it enables us to precisely quantify the relative influence of aversive incentives (e.g., monetary losses, shocks, saltwater) on the recruitment and allocation of cognitive control in goal-directed tasks. Lastly, while we acknowledge that these paradigms are certainly not exhaustive, we hope that consideration of these motivational dimensions (e.g., motivational context, mixed motivation) will provide a broad foundation from which to drive future research that investigates the specific and nuanced ways through which aversive motivational value interacts with cognitive control.

3.2.1. Incentive integration and cognitive control

An experimental paradigm that holds great promise for investigating aversive motivation effects on cognitive control is the incentive integration and cognitive control paradigm (Yee et al., 2016, 2019, 2021). This paradigm, illustrated in Fig. 4a, parallels the outcome devaluation paradigm described earlier but replaces the instrumental conditioning procedure (e.g., lever pressing for a food reward) with a cognitive control task (cued task-switching). On each trial, a letter-number pair is visually presented (e.g., one letter and one number on the screen), and participants are tasked with categorizing the target symbol based on the task cue briefly presented at the beginning of each trial (e.g., a randomized cue indicates whether the participant should classify the letter as a vowel or consonant, or classify the number as odd or even). A monetary reward cue is also randomly presented on each trial and indicates whether participants can earn a low, medium, or high reward value (displayed as $, $$, or $$$$) for fast and accurate task performance (e.g., faster than a subjective RT criterion established during baseline blocks with no incentives present). Importantly, successful attainment of monetary reward is indicated by oral liquid delivery to the participant’s mouth as post-trial performance feedback. In contrast, participants receive neither money nor liquid if they are incorrect, too slow, or do not respond. Additionally, the type of liquid is blocked, such that the liquid feedback can be rewarding (apple juice), neutral (tasteless isotonic solution), or aversive (saltwater). However, as the symbolic meaning of the liquid is kept constant across conditions (i.e., always indicating performance success), any behavioral differences observed can be attributed to the differential subjective evaluation of the bundled monetary and liquid incentives. Previous results across multiple studies have consistently demonstrated that humans integrate the motivational value of monetary and liquid incentives to modulate cognitive task performance and self-reported motivation, such that greater performance improvements were observed for more rewarding bundled incentives (high monetary reward + juice) relative to less rewarding bundled incentives (low monetary reward + neutral), whereas impairments were found for the most aversive bundles (low monetary reward + saltwater).

Fig. 4.

Experimental Paradigms for Investigating Aversive Motivation and Cognitive Control. a) Incentive Integration and Cognitive Control Paradigm. Participants performed cued task-switching and could earn monetary rewards and liquid incentives for fast and accurate performance (Yee et al., 2016). Manipulating the motivational value of the monetary and liquid incentives across bundled incentive conditions ensured a clear comparison of how the relative motivational value of these incentives influenced cognitive control. b) Dissociable Influences of Reward and Penalty on Cognitive Control Allocation. Participants performed a self-paced incentivized mental effort task (Leng et al., 2020). They were rewarded with monetary gains for correct responses and were penalized with monetary losses for incorrect responses. The motivational value of the rewards and penalties were varied, which enabled clear dissociation between how expected rewards increased response rate (via faster response times while maintaining accuracy) and expected penalties decreased response rate (via slower response times and increased accuracy). Together, these paradigms demonstrate the utility of using mixed motivation to more precisely evaluate how aversive motivation influences cognitive control.

Importantly, by manipulating both monetary and liquid incentives across rewarding and aversive domains, this paradigm enables straightforward examination of how much the aversive motivational value of the saltwater impacts cognitive control task performance relative to the neutral solution or apple juice. Moreover, manipulating the valence of the liquid incentives across bundled incentive conditions ensures that the comparison is related to the motivational value of the liquids on cognitive control rather than salience properties that are commonly associated with primary incentives. Broadly, the incentive integration paradigm demonstrates the utility of using bundled primary and secondary incentives to evaluate how the mutual inhibition between rewarding and aversive motivational processes influences cognitive control task performance. Moreover, the motivational manipulations in this paradigm hold great promise for examining how aversive incentives of different categories (Crawford et al., 2020) may similarly or differently impact performance in other cognitive control tasks (e.g., Flanker, Stroop, Simon, AX-CPT) and across the lifespan (Yee et al., 2019).
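To illustrate the integration logic in a compact form (a sketch of our own; the weights, incentive values, and linear RT mapping are hypothetical rather than estimates from Yee et al.), monetary and liquid incentives can be combined into a single bundled value that modulates predicted task performance:

```python
# Illustrative sketch (ours): bundled monetary + liquid value modulating predicted RT.
money_value  = {"$": 1.0, "$$": 2.0, "$$$$": 4.0}
liquid_value = {"juice": +1.0, "neutral": 0.0, "saltwater": -1.5}

def predicted_rt(money, liquid, baseline_rt=800.0, w_money=20.0, w_liquid=40.0):
    # Larger bundled value -> faster predicted response time (ms), per the integration account.
    bundled_value = w_money * money_value[money] + w_liquid * liquid_value[liquid]
    return baseline_rt - bundled_value

for money in ("$", "$$$$"):
    for liquid in ("juice", "neutral", "saltwater"):
        print(f"{money:>4s} + {liquid:9s} -> predicted RT {predicted_rt(money, liquid):.0f} ms")
```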

3.2.2. Dissociable influences of reward and penalty on cognitive control

Another approach for investigating the effect of aversive incentives on cognitive control is to examine the dissociable (rather than the integrated) influence of multiple incentives on cognitive control. Our group recently developed a novel task that examines how expected rewards and penalties influence the allocation of cognitive control in a self-paced Stroop task (Leng et al., 2020). Specifically, in contrast to previous studies that have primarily measured motivation in terms of performance on a fixed number of obligatory task trials, this task contains fixed time intervals during which a person can choose how much effort to invest based upon the expected rewards for success and penalties for failure (Schmidt et al., 2012). Critically, in addition to estimating the amount of effort invested within a given interval, this task enables us to measure the extent to which different incentives influence different types of mental effort investment (e.g., attentional control, response caution). In this paradigm, subjects earn monetary rewards for each correct response (high or low) and are penalized with monetary losses for each incorrect response (high or low). Each interval is preceded by a cue that indicates the consequences for correct and incorrect responses and is followed by feedback with the total reward and loss incurred for that interval. See Fig. 4b for an illustration of the task.

Because individuals need to consider both the motivational value of rewards and penalties when deciding how much cognitive control to allocate within a given interval, this paradigm enables an explicit comparison of the dissociable influences of reward and penalty on cognitive control allocation. Behavioral results revealed that participants maximized their net reward per unit time (i.e., reward rate) based on the bundled expected value of rewards and penalties, with better performance for higher expected rewards and worse performance for higher expected penalties. Post-hoc analyses of speed and accuracy revealed dissociable strategies for allocating effort based on both incentives. Higher rewards resulted in faster response times without a change in accuracy, whereas higher penalties resulted in slower but more accurate responses. Importantly, these data suggest the promise of this paradigm as another approach for evaluating the influence of aversive incentives on cognitive control.
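The intuition behind these dissociable strategies can be conveyed with a small normative toy model (ours, not the model reported in Leng et al.; the speed-accuracy function, interval structure, and parameter values are all hypothetical). With a simple speed-accuracy tradeoff, larger penalties shift the reward-rate-maximizing response time toward slower, more accurate responding, whereas larger rewards favor faster responding:

```python
# Toy normative sketch (ours): reward-rate maximization under a speed-accuracy tradeoff.
import math

def accuracy(rt):
    # Hypothetical speed-accuracy tradeoff: accuracy rises from ~0.5 toward 1.0 as RT (s) increases.
    return 0.5 + 0.5 * (1.0 - math.exp(-2.0 * rt))

def reward_rate(rt, reward, penalty, iti=0.5):
    # Expected points per second: correct responses earn `reward`, errors cost `penalty`,
    # and each response consumes rt + iti seconds of the interval.
    acc = accuracy(rt)
    return (acc * reward - (1.0 - acc) * penalty) / (rt + iti)

def best_rt(reward, penalty):
    candidates = [0.2 + 0.02 * i for i in range(150)]   # search RTs from 0.2 s to ~3.2 s
    return max(candidates, key=lambda rt: reward_rate(rt, reward, penalty))

print("low reward, low penalty   -> optimal RT %.2f s" % best_rt(reward=1.0, penalty=1.0))
print("high reward, low penalty  -> optimal RT %.2f s" % best_rt(reward=4.0, penalty=1.0))
print("low reward, high penalty  -> optimal RT %.2f s" % best_rt(reward=1.0, penalty=4.0))
```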

4. Neural mechanisms of aversive motivation and cognitive control

In the next section, we propose that considering the motivational context of how aversive incentives influence behavior may help organize the wide range of neural processes underpinning aversive motivation and cognitive control. Although the neurobiological mechanisms of aversive motivation have been of longstanding interest (Campese et al., 2015; Jean-Richard-Dit-Bressel et al., 2018; Kobayashi, 2012; Levy and Schiller, 2020; Schiller et al., 2008; Seymour et al., 2007; Umberg and Pothos, 2011), and other regions such as orbitofrontal cortex, ventromedial PFC, insula, and amygdala have been broadly implicated in aversive processing (Atlas, 2019; Gehrlach et al., 2019; Kobayashi, 2012; Maren, 2001; Michely et al., 2020), we primarily focus on neural circuits implicated in motivation and cognitive control interactions. In particular, a significant challenge for developing a clear understanding of the mechanisms that underlie aversive motivation and cognitive control has been the perplexing spectrum of neural findings from extant studies involving aversive outcomes. Prior research has shown that active avoidance (e.g., increased behavioral responding to escape from the aversive outcome) is associated with increased dopamine (DA) release (Bromberg-Martin et al., 2010; Wenzel et al., 2018) as well as activation in the striatum and dorsal anterior cingulate cortex (Boeke et al., 2017; Delgado et al., 2009). In contrast, the anticipation of aversive incentives facilitates behavioral inhibition (e.g., decreased behavioral responding to avoid an aversive outcome) and is associated with increased serotonin (5-HT) release (Crockett et al., 2009, 2012; Geurts et al., 2013b) as well as activation in the lateral habenula (Jean-Richard-Dit-Bressel and McNally, 2014, 2015; Lawson et al., 2014; Webster et al., 2020) and dorsal anterior cingulate cortex (Fujiwara et al., 2009; Monosov, 2017). Importantly, we believe that greater emphasis on the distinction between how aversive incentives promote behavioral activation (e.g., negative reinforcement) versus behavioral inhibition (e.g., punishment) may help organize the diverse neural processes associated with aversive motivation and cognitive control. Below, we review the monoaminergic and neural mechanisms associated with negative reinforcement and punishment and present a novel framework (See Fig. 6a) that describes how the motivational context may delineate potential distinct neural pathways through which aversive incentives modulate cognitive control allocation.

Fig. 6.

Aversive Motivation and Cognitive Control. a) Neural mechanisms underlying aversive motivation and cognitive control. This framework considers the motivational context through which aversive incentives may facilitate either behavioral activation or behavioral inhibition. Dissociable monoaminergic mechanisms may underlie these two effort strategies (e.g., DA may promote negative reinforcement, 5-HT may promote punishment). The arrows represent information coding, such that reward-related information is passed along the green arrows to support reinforcement-related behavior. In contrast, aversive-related information is passed along the red arrows to support punishment-related behavior. Additionally, motivational opponency between DA and 5-HT (e.g., mutual inhibition; approach-avoidance motivational conflict) may help explain how “bundled incentive” (e.g., mixed motivation) signals are transmitted to the dorsal raphe nucleus, lateral habenula, and dorsal anterior cingulate cortex to promote divergent strategies for cognitive control allocation. b) Dorsal anterior cingulate cortex (dACC) integrates Expected Value of Control (EVC)-relevant information (e.g., expected positive and negative outcomes) to determine the allocation of cognitive control. Our current framework extends the EVC model from Shenhav et al., 2013 by including mixed motivation (e.g., the dotted rectangle indicates the summed value of bundled incentives) to determine the EVC and cognitive control allocation (e.g., how much effort to exert). Thus, the inclusion of multiple diverse types of incentives is crucial for studying these interactions by valence. Specifically, such incentives enable us to precisely quantify the relative influence of aversive incentives (e.g., monetary losses, shocks, saltwater) on the recruitment and allocation of cognitive control in goal-directed tasks.

4.1. Monoaminergic mechanisms of aversive motivation

4.1.1. Dopamine, behavioral activation, and negative reinforcement

It is unequivocal that dopamine (DA) is a key neurotransmitter involved in motivation-cognitive control interactions. Prior work has shown that the enhancement of cognitive performance (e.g., attentional processes, task-switching) by monetary rewards is specifically linked with increased dopamine release in the striatum and prefrontal cortex (Aarts et al., 2011; Braver and Cohen, 2000; Cools, 2008; Schouwenburg et al., 2010; Westbrook and Braver, 2016). However, while there is abundant evidence demonstrating the causal link between dopamine and exerting effort to obtain rewards (Hamid et al., 2016; Salamone, 2009; Walton and Bouret, 2019; Westbrook et al., 2020), there is also extensive literature on dopamine facilitating the avoidance of aversive outcomes (Lloyd and Dayan, 2016; Menegas et al., 2018; Nuland et al., 2020). Notably, although the role of dopamine in active avoidance seems somewhat counterintuitive, one plausible explanation may be that the successful avoidance of an aversive outcome may be intrinsically rewarding and thus drive active defensive strategies that increase effort to continually avoid the aversive outcome (McCullough et al., 1993; Sokolowski et al., 1994).

One compelling hypothesis that may reconcile these seemingly paradoxical results is that dopamine may modulate the reinforcement-related responses associated with motivational incentives (Dayan and Balleine, 2002; Wise, 2004). This idea is consistent with prior research, which has shown that dopamine modulates both positive reinforcement (Heymann et al., 2020; Steinberg et al., 2013, 2014) and negative reinforcement (Gentry et al., 2018; Navratilova et al., 2012; Pignatelli and Bonci, 2015). Others have observed that mesolimbic dopamine is associated with avoidance learning at the neural circuit level (Antunes et al., 2020; Ilango et al., 2012; Stelly et al., 2019; Wenzel et al., 2018), but there is not yet evidence that shows that dopamine modulates negative reinforcement in humans. Critically, validating this putative dopamine-reinforcement relationship in humans would provide an important stepping stone towards clarifying the putative role of dopamine in aversively motivated cognitive control.

4.1.2. Serotonin, behavioral inhibition, and punishment

Serotonin, also known as 5-Hydroxytryptamine (5-HT), has long been linked to aversive processes (Dayan and Huys, 2009; Deakin and Graeff, 1991; Soubrié, 1986), as well as a broad range of behavioral functions, including behavioral suppression, neuroendocrine function, feeding behavior, and aggression (Lucki, 1998). These diverse processes may be largely related to the numerous (at minimum 14) serotonin receptors in the brain (Carhart-Harris, 2018; Carhart-Harris and Nutt, 2017; Cools et al., 2008a; Cowen, 1991; Homberg, 2012), making it challenging to map serotonin’s specific role in motivational and cognitive processing. Prior work has shown that serotonin is linked to reward and punishment processing (Cohen et al., 2015; Hayes and Greenshaw, 2011; Kranz et al., 2010), coordinating defense mechanisms (Deakin and Graeff, 1991; Graeff, 2004), behavioral suppression (Soubrié, 1986), aversive learning (Cools et al., 2008b; Daw et al., 2002; Dayan and Huys, 2008; Ogren, 1982), cognitive flexibility (Clarke et al., 2004, 2005; Matias et al., 2017), impulsivity (Desrochers et al., 2020; Ranade et al., 2014), and motor control (Jacobs and Fornal, 1993; Wei et al., 2014), to name a few.

Perhaps one of the greatest challenges for developing a unified theory of 5-HT’s functional role relates to the observation that different 5-HT pathways mediate distinct adaptive responses to aversive outcomes (Deakin, 2013). For example, 5-HT projections from the dorsal raphe nucleus (DRN) to the amygdala facilitate anticipatory anxiety that can guide an organism away from the threat, whereas 5-HT projections to the periaqueductal gray (PAG) facilitate a reflexive fight/flight mechanism in response to unconditioned proximal threats (e.g., panic). It may initially seem paradoxical that 5-HT is engaged to facilitate both anticipatory anxiety and panic, behavioral responses that appear to be at odds with one another (e.g., anticipatory anxiety should inhibit panic). However, what is abundantly clear is that a functional topography underlies when and how 5-HT is released, and the adaptive behavioral response depends on the spatiotemporal distance of the anticipated or imminent aversive outcome or threat (Paul et al., 2014; Paul and Lowry, 2013).

Despite these neurobiological complexities associated with 5-HT, one promising motivational hypothesis that has gained traction over the years is that serotonin relates to aversive-related behavioral inhibition or punishment (Robinson and Roiser, 2016). Researchers have found evidence for this hypothesis in recent years using acute tryptophan depletion (ATD), a pharmacological challenge that reduces the availability of the essential amino acid and serotonin precursor tryptophan. ATD is hypothesized to selectively target the serotonin system (Fernstrom, 1979; Hood et al., 2005; Young, 2013; though see also Donkelaar et al., 2011). In particular, prior research has demonstrated that serotonin specifically modulates punishment-related behavioral inhibition in humans (Crockett et al., 2009, 2012) and attenuates the influence of aversive Pavlovian cues on instrumental behavior (Geurts et al., 2013b; den Ouden et al., 2015). Together, these human pharmacological studies demonstrate that serotonin plays a central role in punishment by linking Pavlovian-aversive predictions with behavioral inhibition (Crockett and Cools, 2015; Faulkner and Deakin, 2014), suggesting a potential mechanism through which aversive motivation may inhibit effort when allocating cognitive control.

4.1.3. Mutual inhibition between dopamine and serotonin in the dorsal raphé nucleus

The independent roles of dopamine and serotonin in modulating motivational valence and adaptive behavior (Hu, 2016; Rogers, 2011) are consistent with the idea that motivational opponency between the two systems modulates activation responses and higher cognitive functioning (Boureau and Dayan, 2011; Cools et al., 2011; Daw et al., 2002; Samanin and Garattini, 1975). However, empirical studies attempting to validate this opponency hypothesis have met with limited success (Fischer and Ullsperger, 2017; Seymour et al., 2012), and the neural mechanisms through which such mutual inhibition occurs remain an active area of research (Moran et al., 2018).

Recent evidence from the animal literature suggests that the dorsal raphe nucleus (DRN) may play a central role in modulating mutual inhibition between rewarding and aversive processes (Hayashi et al., 2015; Li et al., 2016; Nakamura, 2013; Nakamura et al., 2008). The DRN contains high concentrations of serotonin neurons (Huang et al., 2019; Kirby et al., 2003; Marinelli et al., 2004; Michelsen et al., 2008) as well as dopamine neurons (Cho et al., 2021; Lin et al., 2021; Matthews et al., 2016; Stratford and Wirtshafter, 1990; Yoshimoto and McBride, 1992). Some studies have shown that serotonergic DRN neurons play a key modulatory role in reward processing (Browne et al., 2019; Liu et al., 2020; Luo et al., 2015; Nagai et al., 2020; Ren et al., 2018), while dopaminergic DRN neurons appear to encode the motivational salience of incentives (Cho et al., 2021). Additionally, serotonergic DRN neurons project to the dopamine-rich ventral tegmental area (VTA) (Chang et al., 2021; Gervais and Rouillard, 2000), a pathway that may prove crucial for developing a more comprehensive understanding of the mutual inhibition between DA and 5-HT. Taken together, one possible interpretation of these findings is that the DRN represents the benefits and costs of motivational incentives (Luo et al., 2016), and that this signal may be relayed to cortical regions (e.g., frontal cortex) to drive behavioral control (Azmitia and Segal, 1978).

4.2. Neural circuit mechanisms of aversive motivation and cognitive control

4.2.1. Lateral habenula and aversive motivational value

The lateral habenula (LHb) has recently gained much attention as a promising candidate brain region involved in processing aversive motivational value (Hu et al., 2020; Lawson et al., 2014; Matsumoto and Hikosaka, 2009), due to its anatomical connections with motivational and emotional brain regions and its influence on dopamine and serotonin neurons (Hikosaka et al., 2008). In particular, the LHb has been found to inhibit dopamine neurons (Brown and Shepard, 2016; Hikosaka, 2010; Lammel et al., 2012), while its activity is in turn suppressed by serotonin neurons (Shabel et al., 2012; Xie et al., 2016). These findings present provocative evidence that the LHb serves as a critical functional hub for regulating how monoaminergic systems modulate motivated behavior and affective states (Namboodiri et al., 2016).

The LHb is part of a larger neural circuit, as illustrated in Fig. 5, and is highly connected to various subcortical structures (e.g., the septum, hypothalamus, basal ganglia, and globus pallidus) as well as to dopaminergic and serotonergic nuclei (Metzger et al., 2017). Within this putative aversive motivational value circuit, the LHb receives afferent projections from the ventral pallidum, globus pallidus internal segment (GPi), and ventral tegmental area (Haber and Knutson, 2010; Hong and Hikosaka, 2008; Root et al., 2014; Wulff et al., 2019). The LHb then sends efferent projections to brainstem nuclei, including the dorsal and median raphe nuclei, ventral tegmental area, substantia nigra, and locus coeruleus (Akagi and Powell, 1968; Quina et al., 2015; Sutherland, 1982; Wang and Aghajanian, 1977; Zahm and Root, 2017). Importantly, these connections suggest that the LHb likely serves an important regulatory role over dopamine and serotonin (as well as norepinephrine).

Fig. 5.

Lateral Habenula and Aversive Motivational Value. The lateral habenula (LHb) receives excitatory afferent projections from the globus pallidus internal segment (GPi). The GPi is located more laterally but is placed in the same slice for illustration purposes. The LHb sends efferent projections that target the substantia nigra, ventral tegmental area, dorsal and median raphé nuclei, and locus coeruleus, brainstem nuclei with high concentrations of dopamine, serotonin, and norepinephrine. These modulatory signals are mediated by the rostromedial tegmental nucleus (not pictured). Serotonin neurons send inhibitory projections to the GPi that suppress the excitatory projections from the GPi to the LHb.

Recent evidence from the animal neuroscience literature lends support to the putative role of the LHb in aversive motivational value, as LHb neurons in primates are strongly excited by aversive outcomes (e.g., the absence of a liquid reward or the presence of an air puff punisher). Interestingly, these "negative reward" signals from the LHb are mediated by the rostromedial tegmental nucleus (RMTg), a brain structure speculated to modulate both reward-related behaviors of DA neurons in the SNc/VTA and aversive-related behaviors of 5-HT neurons in the DRN (Hong et al., 2011; Jhou and Vento, 2019). Others have observed that presentation of aversive stimuli to rodents increased activity in LHb projections to RMTg neurons (Stamatakis and Stuber, 2012) and that stimulation of LHb-RMTg transmission in rodents reduced the motivation to exert effort to earn rewards (Proulx et al., 2018). Moreover, recent studies have shown that LHb inactivation alters both choice flexibility and willingness to exert physical effort, demonstrating that this region is likely a key contributor to guiding behavior during mentally and/or physically demanding tasks (Baker et al., 2015; Sevigny et al., 2021). Finally, the LHb both receives direct projections from the dorsal anterior cingulate cortex (dACC), a critical brain region for cognitive control described in the next section (Chiba et al., 2001; U. Kim and Lee, 2012), and indirectly influences dACC by inhibiting the activity of midbrain dopamine neurons that project to dACC (Haber, 2014; Lammel et al., 2012; Williams and Goldman-Rakic, 1995). Thus, it is highly likely that the two regions communicate with each other to support the transmission of aversive motivational value, in service of monitoring action outcomes and signaling necessary behavioral adjustments (Baker et al., 2016). Recent evidence from primates has shown that the LHb represents negative outcomes in ongoing trials, while the dACC represents outcome information from past trials and signals behavioral adjustments on subsequent trials (Kawai et al., 2015). Yet, despite these promising results suggesting complementary roles for the LHb and dACC in processing aversive outcomes for behavioral control, many open questions remain regarding how these neural signals jointly interact in the brain.

These studies demonstrate that the habenula plays a prominent role in the neural pathway through which aversive motivation interacts with cognitive control (Baker and Mizumori, 2017; Mizumori and Baker, 2017). However, a significant limitation for investigating the LHb in humans is its relatively small size, approximately 30 mm3 in volume (Boulos et al., 2017; Lawson et al., 2013; Strotmann et al., 2014). While some early fMRI studies suggested that the human habenula is activated by negative outcomes and negative reward prediction errors (Salas et al., 2010; Shepard et al., 2006; Ullsperger and Cramon, 2003), a potential limitation of this early work is the lack of spatial specificity afforded by the MRI methods available at the time. Fortunately, more recent developments in 7 T MRI have enabled researchers to delineate the human habenula and its associated functional networks with greater precision (Lawson et al., 2013; Torrisi et al., 2017). Although these high-resolution imaging techniques have provided preliminary evidence in humans that the habenula is activated by aversive stimuli more broadly (Hennigan et al., 2015; Lawson et al., 2014; Shelton et al., 2012; Weidacker et al., 2021), much remains to be elucidated regarding its specific functional role in motivation and cognitive control interactions.

Finally, while we have emphasized the role of the LHb in aversive motivational value, an important adjacent brain structure also relevant for aversive processing is the medial habenula (MHb), which some have argued is functionally distinct from the LHb (Namboodiri et al., 2016). Specifically, neuroanatomical studies suggest that the MHb sends projections to the amygdala, a region long implicated in representing Pavlovian conditioned values of threatening or noxious stimuli (Campese et al., 2015; Moscarello and LeDoux, 2013) and in conditioned approach and avoidance behavior (Choi et al., 2010; Fernando et al., 2013; Schlund and Cataldo, 2010). Although much less is known about the MHb's impact on aversive motivational processing relative to the LHb, we speculate that one critical factor contributing to these functional differences is the parallel pathways through which DA and 5-HT project from the dorsal and median raphé nuclei to distinct cortical regions (Azmitia and Segal, 1978). While many open questions remain regarding how these distinct pathways impact aversive processing, future work clarifying the neural circuitry linking the LHb and MHb may help elucidate the mechanisms by which organisms develop adaptive behavioral responses to aversive motivation.

4.2.2. Dorsal anterior cingulate cortex and the expected value of control

The dorsal anterior cingulate cortex (dACC) has long been implicated in cognitive control (Botvinick et al., 2001; Ridderinkhof et al., 2004; Sheth et al., 2012), as well as in various cognitive, motor, and affective functions (Heilbronner and Hayden, 2016; Vassena et al., 2020; Vega et al., 2016), including affect (Braem et al., 2017; Etkin et al., 2011) and emotion-control interactions (Inzlicht et al., 2015; Pessoa, 2008). In recent years, growing evidence suggests that dACC plays a central role in modulating the interaction between motivation and cognitive control (Botvinick and Braver, 2015; Parro et al., 2018). However, despite dACC's well-established role in motivation/affect and cognitive control, surprisingly few studies have investigated how aversive motivation and cognitive control interact in the brain (Cubillo et al., 2019). This presents both a challenge and an opportunity to develop a greater mechanistic understanding of exactly how aversive motivational value is transmitted to dACC to guide cognitive control (Yee and Braver, 2018).

As mentioned at the beginning of this review, the Expected Value of Control (EVC) model is a promising framework for addressing this crucial gap in the literature. In particular, the EVC model attempts to integrate these broad neuroscientific findings, positing that dACC serves as a central hub that integrates motivational values to modulate cognitive control (Shenhav et al., 2013, 2016). Recent evidence from the animal literature is consistent with the EVC account, as some studies have shown that rodent medial prefrontal cortex (one putative rodent analog of human dACC; but see Heukelum et al., 2020; Vogt et al., 2013) plays a central role in integrating rewarding and aversive motivational incentives to modulate effort and attention (Hosking et al., 2014; Schneider et al., 2020). Moreover, as illustrated in Fig. 6a, incorporating the motivational context through which these incentives are delivered may help clarify how aversive incentives promote dissociable strategies for cognitive control allocation (e.g., DA may promote behavioral activation/negative reinforcement, while 5-HT may promote behavioral inhibition/punishment). Although these neural pathways are still somewhat speculative and not yet validated in humans, future research combining innovative experimental tasks with high-resolution MRI or deep-brain stimulation could help fill this crucial gap in the literature (Boulos et al., 2017; Lawson et al., 2013).

Additionally, an important core assumption of the EVC model is that dACC "bundles" expected positive and negative outcomes into a net motivational value (e.g., mixed motivation) that modulates cognitive control signals in the brain (see Fig. 6b). Recent work from a human fMRI study provides evidence in support of dACC's role in value integration and cognitive control (Yee et al., 2021). In particular, we used the incentive integration and cognitive control task (see Fig. 4a) to explicitly test the hypothesis that bundled neural signals in dACC encode the motivational value of monetary and liquid incentives, in terms of their influence on cognitive performance and self-reported motivation ratings. We found that dACC selectively encoded the integrated subjective motivational value of bundled incentives and, more importantly, that this bundled neural signal was associated with motivated task performance (e.g., juice + monetary rewards increased dACC signals and boosted performance, whereas saltwater + monetary rewards decreased dACC signals and impaired performance). However, while these results lend support to the idea that mixed motivation modulates cognitive control in an instrumental manner, it remains unknown how this integrated value signal may differentially impact cognitive control when incentives are conditioned in a Pavlovian or combined (e.g., Pavlovian-instrumental transfer) manner. Future studies could explicitly examine the degree to which dACC activity reflects the integrated motivational value of different combinations of motivational incentives on cognitive control processes (e.g., does receiving monetary loss + saltwater as performance feedback elicit lower activation relative to monetary loss + juice?), or alternatively consider how the motivational context of incentives modulates cognitive control allocation (e.g., are there dissociable dACC neural signals underlying whether aversive motivation elicits negative reinforcement vs. punishment behavior?).

5. Computational mechanisms of aversive motivation and cognitive control

5.1. Dissociable influences of reinforcement and punishment on cognitive control allocation

In this section, we highlight recent theoretical work demonstrating how the inclusion of aversive motivational incentives enables us to reconceptualize cognitive control allocation, not as a one-dimensional problem – in which motivation monotonically influences cognitive control (e.g., higher or lower effort allocation) – but instead as a multidimensional one. That is, both the amount of effort (e.g., how much) and the type of effort strategy (e.g., what kind) used to allocate cognitive control must be computed. Specifically, we describe an instantiation of the EVC model that offers an account of 1) how mixed motivation may influence the interaction of motivation and cognitive control, and 2) how the motivational context of aversive incentives can elicit dissociable effort strategies for cognitive control allocation. Notably, while the motivation to avoid negative outcomes might engage control processes during mentally challenging tasks, the context of how that outcome can be avoided may drive different kinds of control signals. For example, whereas the motivation to avoid or escape from expected negative outcomes may boost effort allocation on a mentally challenging task (e.g., negative reinforcement) via increased attentional control, the motivation to avoid being penalized with negative outcomes may instead reduce effort allocation on a mentally challenging task (e.g., punishment) through increased response caution. This example illustrates how the motivational context through which aversive motivation facilitates behavioral activation (Evans et al., 2019) or behavioral inhibition (Verharen et al., 2019) has significant implications for understanding how aversive incentives might drive divergent effort strategies for cognitive control allocation.

Recent theoretical work has demonstrated how these different forms of control adjustment (e.g., attentional control vs. response caution) can be formalized within models of evidence accumulation (Danielmeier et al., 2011; Danielmeier and Ullsperger, 2011; Ritz et al., 2021). In particular, the drift diffusion model (DDM) provides a useful framework for explicitly quantifying how different types of incentives (e.g., reward, penalty) can guide distinct adjustments in cognitive control allocation (Ratcliff et al., 2016; Ratcliff and McKoon, 2008). Moreover, normative models have been developed that incorporate such DDM parameters into an objective function that putatively accounts for how individuals optimally vary the intensity of their physical or mental effort to maximize their expected reward rate (Bogacz et al., 2006; Niv et al., 2007). However, an important gap in this theoretical research concerns characterizing the degree to which various motivational incentives modulate similar or dissociable strategies for mental effort allocation.

The following implementation of the Expected Value of Control (EVC) model extends the existing reward rate framework to describe how individuals determine the appropriate amount of cognitive control to allocate in a given situation. A core assumption of the model is that individuals will allocate the amount and type of cognitive control that maximizes their expected reward rate while simultaneously minimizing the effort costs associated with exerting cognitive control (Lieder et al., 2018; Musslick et al., 2015). The difference between these two quantities, referred to as the Expected Value of Control (EVC; See Eq. 1), indexes the extent to which benefits outweigh the costs (Shenhav et al., 2013, 2017). The EVC model predicts that an individual will adjust control allocation based upon the expected benefits (e.g., the net value of monetary rewards and monetary losses earned for exerting control) and the expected costs (i.e., the mental effort required to exert control).

$$\text{EVC}(\text{control}, \text{state}) = \text{RewardRate}(\text{control}, \text{state}) - \text{Cost}(\text{control}) \tag{1}$$

In order to maximize EVC, cognitive control can be adjusted to modify specific parameters of the DDM, which govern how incentives influence predicted behavior. For example, increased attentional control from expected reinforcers may correspond to an increase in the rate of evidence accumulation (i.e., drift rate), whereas increased response caution from expected punishers may correspond to an increase in the response threshold. Importantly, changes in drift rate and threshold predict distinct patterns of change in response time (RT; a combination of decision-related and decision-unrelated factors, i.e., decision time [DT] and non-decision time [NDT]) and error likelihood. For example, increases in drift rate result in faster RTs and a reduced likelihood of error responses, whereas increases in threshold result in slower RTs and increased accuracy (Bogacz et al., 2006). As described in Eq. 2, reward rate can be estimated as a function of the resulting performance (i.e., error rate ER and response time RT), as well as the reinforcement for a correct response (R) and the punishment for an incorrect response (P). Critically, by integrating the influence of multiple incentives, this formulation accounts for contexts involving mixed motivation and thus has the potential to provide a more comprehensive picture of the explicit ways in which diverse forms of motivation can influence different strategies for allocating cognitive control.

$$\text{RewardRate} = \frac{R \times (1 - \text{ER}) - P \times \text{ER}}{\text{RT}} \tag{2}$$
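
To make this mapping concrete, the following minimal Python sketch (our own illustration, not code from any of the studies reviewed here) uses the closed-form expressions for error rate and mean decision time in an unbiased two-boundary DDM (Bogacz et al., 2006) and feeds the resulting performance into the reward rate of Eq. 2. The function names, noise, non-decision time, and incentive values are illustrative assumptions.

```python
import numpy as np

def ddm_performance(drift, threshold, noise=1.0, ndt=0.3):
    """Closed-form error rate and mean RT for an unbiased two-boundary
    drift diffusion model (Bogacz et al., 2006)."""
    # Probability of first reaching the incorrect boundary.
    error_rate = 1.0 / (1.0 + np.exp(2.0 * drift * threshold / noise**2))
    # Mean decision time plus non-decision time (encoding and motor processes).
    rt = (threshold / drift) * np.tanh(drift * threshold / noise**2) + ndt
    return error_rate, rt

def reward_rate(drift, threshold, R, P):
    """Eq. 2: expected payoff per unit time, combining reinforcement for
    correct responses (R) and punishment for errors (P)."""
    er, rt = ddm_performance(drift, threshold)
    return (R * (1.0 - er) - P * er) / rt

# Illustrative values only: raising drift speeds responses and lowers errors;
# raising threshold slows responses while also lowering errors.
print(ddm_performance(drift=1.0, threshold=1.0))   # slower, more errors
print(ddm_performance(drift=2.0, threshold=1.0))   # faster, fewer errors
print(reward_rate(drift=2.0, threshold=1.0, R=1.0, P=1.0))
```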

An exciting feature of this normative account is the ability to explicitly stipulate distinct parameters for positive reinforcement and negative reinforcement. For instance, in scenarios where accurate responses lead to obtaining rewarding incentives, this formulation can be used to explicitly estimate the effects of positive reinforcement [R_Pos] on reward rate. Conversely, in scenarios where accurate responses lead to successful avoidance of aversive incentives, the equation can instead account for the effects of negative reinforcement [R_Neg] on reward rate, which may be distinct from how positive reinforcement modulates drift rate and threshold parameters during reward rate optimization. This distinction allows us to delineate the motivational context of whether an aversive incentive should be treated as negative reinforcement or punishment. Importantly, this formulation yields divergent predictions for how an aversive incentive may modulate the intensity of mental effort allocated in a given cognitive control task, depending on this motivational context. Moreover, the model has the potential to elucidate the degree to which negative reinforcement produces patterns similar to those of positive reinforcement versus punishment in cognitive control allocation.
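
As a toy illustration of these framings, the short sketch below codes each motivational context directly into Eq. 2; the specific parameterization is our own assumption for exposition, not a formulation taken from the cited studies. Under this naive coding, positive and negative reinforcement are numerically equivalent, which makes explicit the empirical question of whether R_Neg is in fact weighted differently from R_Pos (e.g., via a distinct sensitivity parameter).

```python
def reward_rate(R, P, error_rate, rt):
    """Eq. 2 for a fixed level of task performance."""
    return (R * (1.0 - error_rate) - P * error_rate) / rt

er, rt = 0.10, 0.80  # illustrative performance: 10% errors, 800 ms mean RT

# Positive reinforcement: correct responses earn a reward (R_Pos); no penalty.
rr_positive = reward_rate(R=1.0, P=0.0, error_rate=er, rt=rt)

# Negative reinforcement: correct responses avoid an otherwise expected loss;
# here the avoided loss is coded as an effective reward (R_Neg) of equal size.
rr_negative = reward_rate(R=1.0, P=0.0, error_rate=er, rt=rt)

# Punishment: errors incur a penalty (P); correct responses earn nothing.
rr_punishment = reward_rate(R=0.0, P=1.0, error_rate=er, rt=rt)

print(rr_positive, rr_negative, rr_punishment)
```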

The other key component of the EVC model is the cost of cognitive control, which refers to the aversiveness of the mental effort required to exert cognitive control and successfully perform the task (Kool and Botvinick, 2018; Shenhav et al., 2017). This cost is assumed to be a monotonic but likely non-linear function (e.g., quadratic) of the intensity of control being allocated (Massar et al., 2020; Petitet et al., 2021; Soutschek and Tobler, 2020; Vogel et al., 2020). Because reward rate alone is maximized by an arbitrarily high drift rate, drift rate would be unconstrained without a cost function. Thus, including a cost function represented as the square of the drift rate, scaled by a parameter E, yields a more constrained set of optimal drift rate and threshold values during reward rate maximization (Leng et al., 2020), as shown in Eq. 3. For additional discussion of the potential forms and sources of this cost function, see Kool and Botvinick (2018), Ritz et al. (2021), and Westbrook and Braver (2015). Integrating across considerations of expected reward rates and effort costs, the model can estimate the EVC of each possible combination of drift rate and threshold (shown as a heatmap in Fig. 7a) and then determine the settings of these control signals that are optimal (i.e., that maximize EVC).

Fig. 7.

Dissociable Influences of Reinforcement and Punishment on Cognitive Control Allocation. a) A core assumption of the Expected Value of Control (EVC) model is that individuals will adjust control allocation to maximize their expected reward rate and minimize their expected costs for exerting control. Expected outcomes are determined by considering the likelihood of an error (ER), the reinforcement for responding correctly (R), and the punishment for responding incorrectly (P). These expected outcomes are normalized by the expected response time (RT; a combination of decision-related and decision-unrelated factors, i.e., decision time [DT] and non-decision time [NDT]) to determine the expected reward rate. EVC is determined by subtracting from this reward rate a cost function (here represented by a parameter E that scales the square of the drift rate), reflecting the non-linear effort cost associated with increased attention on a given trial (for discussions of alternative forms of effort functions, see Leng et al., 2020; Ritz et al., 2021). This formulation allows for a distinction between positive reinforcement and negative reinforcement. Critically, this enables us to delineate whether an aversive incentive should be treated as negative reinforcement or punishment. b) The EVC model predicts that individuals seek to configure drift rate and threshold to maximize their EVC and adjust this configuration as task incentives vary. Specifically, the model predicts that rewards for correct responses (e.g., positive reinforcement) will bias a strategic adjustment in attention (drift rate). In contrast, penalties for incorrect responses (e.g., punishment) will bias a strategic adjustment in response caution (threshold). c) Task performance from our behavioral study using the task in Fig. 4b (Leng et al., 2020) was consistent with these normative predictions. Upright triangles indicate a higher value (e.g., high reward, high penalty), while inverted triangles indicate a lower value (e.g., low reward, low penalty).

$$\text{CostFunction} = E \times \text{drift rate}^2 \tag{3}$$
$$\text{EVC} = \frac{R \times (1 - \text{ER}) - P \times \text{ER}}{\text{RT}} - E \times \text{drift rate}^2 \tag{4}$$
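
The objective in Eq. 4 can then be optimized numerically over candidate control signals. The self-contained Python sketch below (the grid ranges, noise, non-decision time, and effort-cost scaling are our illustrative assumptions, not the settings used by Leng et al., 2020) evaluates EVC over a grid of drift rates and thresholds, analogous to the heatmap in Fig. 7a, and returns the EVC-maximizing configuration. With these illustrative settings, raising the reward mainly increases the optimal drift rate, whereas raising the penalty mainly increases the optimal threshold, mirroring the qualitative prediction shown in Fig. 7b.

```python
import numpy as np

def ddm_performance(drift, threshold, noise=1.0, ndt=0.3):
    """Closed-form error rate and mean RT for an unbiased two-boundary DDM
    (Bogacz et al., 2006)."""
    er = 1.0 / (1.0 + np.exp(2.0 * drift * threshold / noise**2))
    rt = (threshold / drift) * np.tanh(drift * threshold / noise**2) + ndt
    return er, rt

def evc(drift, threshold, R, P, effort_cost=0.1):
    """Eq. 4: expected reward rate minus a quadratic effort cost on drift rate."""
    er, rt = ddm_performance(drift, threshold)
    return (R * (1.0 - er) - P * er) / rt - effort_cost * drift**2

def optimal_control(R, P):
    """Grid-search the EVC-maximizing drift rate and threshold (cf. Fig. 7a)."""
    drifts = np.arange(0.5, 6.01, 0.05)
    thresholds = np.arange(0.1, 2.01, 0.05)
    D, Z = np.meshgrid(drifts, thresholds, indexing="ij")
    values = evc(D, Z, R, P)
    i, j = np.unravel_index(np.argmax(values), values.shape)
    return drifts[i], thresholds[j]

# Illustrative incentive manipulations (all values are assumptions):
print("baseline       :", optimal_control(R=2.0, P=2.0))
print("higher reward  :", optimal_control(R=4.0, P=2.0))  # mainly raises optimal drift
print("higher penalty :", optimal_control(R=2.0, P=4.0))  # mainly raises optimal threshold
```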

Recent work from our group has adapted this formulation to estimate the optimal (i.e., EVC-maximizing; see Eq. 4) allocation of cognitive control across drift rate and threshold (Leng et al., 2020). The normative model predicts that individuals will seek to optimally combine drift rate and threshold to maximize their reward rate and will adapt this control configuration to match the current incentive structure in their environment. By estimating the optimal control configuration for different potential levels of reinforcement (R) and punishment (P), we showed that these two types of incentives should lead to distinct patterns of control adjustment (Fig. 7b). If participants optimize this formulation of EVC, they should adapt to higher levels of reinforcement primarily by increasing their drift rate, potentially through adjustments of attentional control (and/or by decreasing their threshold). Conversely, they should adapt to higher levels of punishment primarily by increasing their response threshold, potentially through adjustments of inhibitory control. We tested these predictions by estimating drift diffusion parameters from the behavioral performance of participants who completed the incentivized cognitive control allocation task in Fig. 4b. We found that these empirically derived estimates of control configuration were remarkably consistent with our normative predictions (see Fig. 7c). These results provide compelling evidence that incentives associated with reinforcement or punishment should, and do, lead to dissociable strategies for allocating cognitive control. Moreover, the findings from this study illustrate the critical importance of incorporating mixed motivation and motivational context into motivation-control studies, which we hope will provide theoretical and empirical tools to stimulate novel research into how aversive motivation can influence divergent types of effort allocation in cognitive control tasks.

5.2. Predicting individual differences in approach vs. avoidance motivation on cognitive control allocation

An additional exciting aspect of our EVC implementation is its ability to generate normative predictions about the degree to which individuals are sensitive to rewarding and aversive motivational incentives. Such a model-based approach for quantifying individual differences may be significant for advancing longstanding interest in approach vs. avoidance motivation (Atkinson, 1957) and Pavlovian biases (Beierholm and Dayan, 2010; Guitart-Masip et al., 2014). Whereas much of the classical work in this domain has used self-report measures to probe the extent to which approach and avoidance motivational systems may be linked to personality traits (e.g., the disposition to achieve success or avoid failure; Eder et al., 2013; Elliot and Thrash, 2002), our normative approach demonstrates the potential to reconcile the weak associations often observed between self-reported motivation and motivational influences on cognitive control task performance (Dang et al., 2020).

The current EVC model builds upon foundational achievement motivation theory (Atkinson, 1957) by integrating the assumption that the intensity of an individual's approach or avoidance motivation is weighted by their sensitivity to reinforcement or punishment relative to their effort cost (e.g., reinforcement sensitivity corresponds to the ratio [R/E], and punishment sensitivity to the ratio [P/E]). Specifically, because the normative model provides a mapping from incentives onto control configuration (e.g., via reward-rate optimization), we can 'reverse-engineer' this mapping, using each participant's estimated control configuration across conditions (i.e., joint estimates of DDM parameters) to infer R and P, which represent individual-specific weights for reward and penalty sensitivity (Leng et al., 2020). An important feature of this approach to parameterizing individual sensitivities to reinforcement and punishment is that it delineates how an individual's sensitivity to positive versus negative motivational incentives shapes how motivational influences impact cognitive control allocation (i.e., instrumental responding). Although additional theoretical and empirical work is required to validate this formal quantitative approach, our preliminary results demonstrate the promise of using the EVC model to estimate individual differences in sensitivity to rewarding and aversive motivational incentives in cognitive control tasks.
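
To illustrate the logic of this inversion, the sketch below searches over candidate reward and penalty weights for the pair whose EVC-optimal control configuration lies closest to a participant's fitted DDM parameters. This is only a schematic rendering of the reverse-engineering idea; the grid ranges, effort-cost scaling, and Euclidean matching criterion are our assumptions rather than the estimation procedure of Leng et al. (2020), and the participant values shown are hypothetical.

```python
import numpy as np

def ddm_performance(drift, threshold, noise=1.0, ndt=0.3):
    er = 1.0 / (1.0 + np.exp(2.0 * drift * threshold / noise**2))
    rt = (threshold / drift) * np.tanh(drift * threshold / noise**2) + ndt
    return er, rt

def optimal_control(R, P, effort_cost=0.1):
    """EVC-maximizing (drift, threshold) for a given incentive weighting (Eq. 4)."""
    drifts = np.arange(0.5, 6.01, 0.05)
    thresholds = np.arange(0.1, 2.01, 0.05)
    D, Z = np.meshgrid(drifts, thresholds, indexing="ij")
    er, rt = ddm_performance(D, Z)
    values = (R * (1.0 - er) - P * er) / rt - effort_cost * D**2
    i, j = np.unravel_index(np.argmax(values), values.shape)
    return drifts[i], thresholds[j]

def infer_sensitivities(observed_drift, observed_threshold):
    """'Reverse-engineer' reward (R) and penalty (P) weights: find the pair whose
    optimal control configuration lies closest to the fitted DDM parameters."""
    best, best_dist = None, np.inf
    for R in np.arange(0.5, 6.01, 0.25):
        for P in np.arange(0.5, 6.01, 0.25):
            d, z = optimal_control(R, P)
            dist = (d - observed_drift) ** 2 + (z - observed_threshold) ** 2
            if dist < best_dist:
                best, best_dist = (R, P), dist
    return best

# Hypothetical DDM estimates for one participant in one incentive condition.
print(infer_sensitivities(observed_drift=4.0, observed_threshold=0.5))
```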

The EVC model also provides a powerful framework for addressing open questions regarding the neural mechanisms that underlie motivation and cognitive control. For example, the model may help dissociate between motivational accounts of how dopamine (DA) versus serotonin (5-HT) impact mental effort allocation. Given DA’s known role in promoting the willingness to exert effort in value-based decisions (Westbrook et al., 2020, 2021), one compelling hypothesis is that DA may impact the degree to which expected rewards increase attentional control (e.g., facilitating increases in drift rates in task designs where incentives facilitate reinforcement). Conversely, in light of 5-HT’s established role in punishment-induced response inhibition (Crockett et al., 2009; Faulkner and Deakin, 2014), one potential hypothesis is that 5-HT may impact the degree to which expected penalties impact response caution (e.g., facilitating increases in response thresholds in task designs where aversive incentives facilitate punishment).

In neural terms, the EVC theory proposes that distinct sub-regions or sub-circuits play differential roles in evaluating potential outcomes; integrating these and other considerations into the evaluation of EVC (in particular, via dACC); and executing the set of control signals determined to be EVC-maximizing (Shenhav et al., 2013, 2017). The reinforcement-related enhancement of attentional control (i.e., increased drift rate) could be mediated by connectivity between dACC and dorsolateral prefrontal cortex, a region strongly implicated in motivation and cognitive control interactions (Duverne and Koechlin, 2017; Kouneiher et al., 2009). Conversely, the punishment-related increases in response inhibition (i.e., increased response threshold) could be mediated by connectivity between dACC and ventrolateral prefrontal cortex or subthalamic nucleus (STN), regions strongly implicated in inhibitory control (Cavanagh et al., 2011; Forstmann et al., 2010; Wiecki and Frank, 2013). These control adjustments may be determined by inputs to dACC from regions sensitive to expected positive outcomes (e.g., ventral striatum) versus aversive outcomes (e.g., anterior insula), depending on whether the incentives are positive (i.e., positive reinforcement) or negative (i.e., negative reinforcement or punishment). Though these hypotheses are somewhat speculative, our model provides important testable predictions to guide empirical investigations into how rewarding and aversive motivation dissociably influence cognitive control. Developing such a rigorous neuro-computational model would be highly significant for understanding how variability in incentive sensitivity and its interactions may lead to the motivational impairments often observed in clinical disorders such as depression, anxiety, schizophrenia, and addiction (Barch et al., 2018; Clery-Melin et al., 2011; Grahek et al., 2019; Husain and Roiser, 2018; Ironside et al., 2019; Verharen et al., 2020).

6. Conclusion

This review highlights the pressing need to incorporate motivational context and mixed motivation into current accounts of the neural and computational mechanisms underlying aversive motivation and cognitive control. While this is not the first review of the neural and computational mechanisms of aversive processes (Bissonette et al., 2014; Hayes and Northoff, 2011; Levy and Schiller, 2020), our interdisciplinary review cuts across cognitive/behavioral, neuroscience, and computational perspectives. Further, we highlight how incorporating these motivational dimensions will be critical for developing a more sophisticated understanding of the diverse ways in which aversive motivation influences cognitive control allocation. We hope that the topics covered here provide a key step toward stimulating novel research on these interactions, moving us closer to unlocking the mechanisms of motivation and cognitive control.

Acknowledgements

We would like to thank Ryan Bogdan, Len Green, Mahalia Prater Fahey, Katherine Conen, as well as the CCP and Shenhav Labs, for their invaluable discussions and feedback on various drafts of the manuscript, which have been instrumental in shaping the breadth and depth of this review.

Funding

This work was supported by National Institutes of Health Grants R21-AG058205 and R21-AG067295 to T.S.B., National Science Foundation CAREER Grant 2046111 to A.S., and Brown University Office of the Vice President Research Seed Award to A.S. and D.M.Y. DMY was supported by T32-NS073547, T32-AG000030, F31-DA042574, and T32-MH126388. X.L. was supported by T32-MH115895.

Footnotes

Declaration of Competing Interest

The authors report no declarations of interest.

Data availability

No data was used for the research described in the article.


References

  1. Aarts E, Holstein Mvan, Cools R, 2011. Striatal dopamine and the interface between motivation and cognition. Front. Psychol 2 (July), 1–11. 10.3389/fpsyg.2011.00163. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Adams CD, Dickinson A, 1981. Instrumental responding following reinforcer devaluation. Q. J. Exp. Psychol. B 33 (2), 109–121. 10.1080/14640748108400816. [DOI] [Google Scholar]
  3. Akagi K, Powell EW, 1968. Differential projections of habenular nuclei. J. Comp. Neurol 132 (2), 263–273. 10.1002/cne.901320204. [DOI] [PubMed] [Google Scholar]
  4. Amemori K, Graybiel AM, 2012. Localized microstimulation of primate pregenual cingulate cortex induces negative decision-making. Nat. Neurosci 15 (5), 776–785. 10.1038/nn.3088. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Amemori K, Amemori S, Graybiel AM, 2015. Motivation and affective judgments differentially recruit neurons in the primate dorsolateral prefrontal and anterior cingulate cortex. J. Neurosci 35 (5), 1939–1953. 10.1523/jneurosci.1731-14.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Antunes GF, Gouveia FV, Rezende FS, Seno M.D. de J., Carvalho M.C. de, Oliveira C.C. de, Santos L.C.T dos, Castro M.C de, Kuroki MA, Teixeira MJ, Otoch JP, Brandao ML, Fonoff ET, Martinez RCR, 2020. Dopamine modulates individual differences in avoidance behavior: a pharmacological, immunohistochemical, neurochemical and volumetric investigation. Neurobiol. Stress 12, 100219. 10.1016/j.ynstr.2020.100219. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Araiba S, Massioui NE, Brown BL, Doyère V, 2018. Duration-specific effects of outcome devaluation in temporal control are differentially sensitive to amount of training. Learn. Mem 25 (12), 629–633. 10.1101/1m.047878.118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Atkinson JW, 1957. Motivational determinants of risk-taking behavior. Psychol. Rev 64 (6 PART 1), 359–372. 10.1037/h0043445. [DOI] [PubMed] [Google Scholar]
  9. Atlas LY, 2019. How instructions shape aversive learning: higher order knowledge, reversal learning, and the role of the amygdala. Curr. Opin. Behav. Sci 26, 121–129. 10.1016/j.cobeha.2018.12.008. [DOI] [Google Scholar]
  10. Aupperle RL, Sullivan S, Melrose AJ, Paulus MP, Stein MB, 2011. A reverse translational approach to quantify approach-avoidance conflict in humans. Behav. Brain Res 225 (2), 455–463. 10.1016/j.bbr.2011.08.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Aupperle RL, Melrose AJ, Francisco A, Paulus MP, Stein MB, 2015. Neural substrates of approach-avoidance conflict decision-making. Hum. Brain Mapp 36 (2), 449–462. 10.1002/hbm.22639. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Azmitia EC, Segal M, 1978. An autoradiographic analysis of the differential ascending projections of the dorsal and median raphe nuclei in the rat. J. Comp. Neurol 179 (3), 641–667. 10.1002/cne.901790311. [DOI] [PubMed] [Google Scholar]
  13. Bahlmann J, Aarts E, D’Esposito M, 2015. Influence of motivation on control hierarchy in the human frontal cortex. J. Neurosci 35 (7), 3207–3217. 10.1523/jneurosci.2389-14.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Baker PM, Mizumori SJY, 2017. Control of behavioral flexibility by the lateral habenula. Pharmacol. Biochem. Behav 162, 62–68. 10.1016/j.pbb.2017.07.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Baker PM, Oh SE, Kidder KS, Mizumori SJY, 2015. Ongoing behavioral state information signaled in the lateral habenula guides choice flexibility in freely moving rats. Front. Behav. Neurosci 9, 295. 10.3389/fnbeh.2015.00295. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Baker PM, Jhou T, Li B, Matsumoto M, Mizumori SJY, Stephenson-Jones M, Vicentic A, 2016. The lateral habenula circuitry: reward processing and cognitive control. J. Neurosci 36 (45), 11482–11488. 10.1523/jneurosci.2350-16.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Balleine BW, Paredes-Olay C, Dickinson A, 2005. Effects of outcome devaluation on the performance of a heterogenous instrumental chain. Intern. J. Comp. Psychol 18 (4), 257–272. [Google Scholar]
  18. Barch DM, Pagliaccio D, Luking, 2015. Mechanisms underlying motivational deficits in psychopathology: similarities and differences in depression and schizophrenia. In: Simpson E, Balsam P (Eds.), Behavioral Neuroscience of Motivation. Current Topics in Behavioral Neurosciences Springer, Cham, pp. 411–449. 10.1007/78542011176. [DOI] [PubMed] [Google Scholar]
  19. Barch DM, Pagliaccio D, Luking KR, 2018. Neurobiology of abnormal emotion and motivated behaviors. PLoS One 5 (2), 278–304. 10.1016/b978-0-12-813693-5.00015-0, 2010. [DOI] [Google Scholar]
  20. Barker TV, Buzzell GA, Fox NA, 2019. Approach, avoidance, and the detection of conflict in the development of behavioral inhibition. New Ideas Psychol. 53, 2–12. 10.1016/j.newideapsych.2018.07.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Beierholm U, Dayan P, 2010. Pavlovian-instrumental interaction in ‘observing behavior’. PLoS Comput. Biol 6 (9), e1000903. 10.1371/journal.pcbi.1000903. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Bissonette GB, Gentry RN, Padmala S, Pessoa L, Roesch MR, 2014. Impact of appetitive and aversive outcomes on brain responses: linking the animal and human literatures. Front. Syst. Neurosci 8 (March), 24. 10.3389/fnsys.2014.00024. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Blackman DE, 1970. Conditioned suppression of avoidance behaviour in rats. Q. J. Exp. Psychol 22 (3), 547–553. 10.1080/14640747008401932. [DOI] [PubMed] [Google Scholar]
  24. Boeke EA, Moscarello JM, LeDoux JE, Phelps EA, Hartley CA, 2017. Active avoidance: neural mechanisms and attenuation of pavlovian conditioned responding. J. Neurosci 37 (18), 4808–4818. 10.1523/jneurosci.3261-16.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Bogacz R, Brown E, Moehlis J, Holmes P, Cohen JD, 2006. The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks. Psychol. Rev 113 (4), 700–765. 10.1037/0033-295x.113.4.700. [DOI] [PubMed] [Google Scholar]
  26. Bonner SE, Sprinkle GB, 2002. The effects of monetary incentives on effort and task performance: theories, evidence, and a framework for research. Account. organ. Soc 27 (4–5), 303–345. 10.1016/s0361-3682(01)00052-6. [DOI] [Google Scholar]
  27. Botvinick MM, Braver T, 2015. Motivation and cognitive control: from behavior to neural mechanism. Annu. Rev. Psychol 66 (1), 83–113. 10.1146/annurev-psych-010814-015044. [DOI] [PubMed] [Google Scholar]
  28. Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD, 2001. Conflict monitoring and cognitive control. Psychol. Rev 108 (3), 624–652. 10.1037/0033-295x.108.3.624. [DOI] [PubMed] [Google Scholar]
  29. Boulos L-J, Darcq E, Kieffer BL, 2017. Translating the habenula—from rodents to humans. Biol. Psychiatry 81 (4), 296–305. 10.1016/j.biopsych.2016.06.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Boureau Y-L, Dayan P, 2011. Opponency revisited: competition and cooperation between dopamine and serotonin. Neuropsychopharmacology 36 (1), 74–97. 10.1038/npp.2010.151. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Bouton ME, Bolles RC, 1980. Conditioned fear assessed by freezing and by the suppression of three different baselines. Anim. Learn. Behav 8 (3), 429–434. 10.3758/bf03199629. [DOI] [Google Scholar]
  32. Bradshaw CM, Szabadi E, Bevan P, 1979. The effect of punishment on free-operant choice behavior in humans. J. Exp. Anal. Behav 31 (1), 71–81. 10.1901/jeab.l979.31-71. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Braem S, Duthoo W, Notebaert W, 2013. Punishment sensitivity predicts the impact of punishment on cognitive control. PLoS One 8 (9), e74106. 10.1371/journal.pone.0074106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Braem S, Hickey C, Duthoo W, Notebaert W, 2014. Reward determines the context-sensitivity of cognitive control. J. Exp. Psychol. Hum. Percept. Perform 40 (5), 1769–1778. 10.1037/a0037554. [DOI] [PubMed] [Google Scholar]
  35. Braem S, King JA, Korb FM, Krebs RM, Notebaert W, Egner T, 2017. The role of anterior cingulate cortex in the affective evaluation of conflict. J. Cogn. Neurosci 29 (1), 137–149. 10.1162/jocn_a_01023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Braver TS, Cohen JD, 2000. On the control of control: the role of dopamine in regulating prefrontal function and working memory. In: Monsell S, Driver Jon (Eds.), Making Working Memory Work. MIT Press, pp. 551–581. 10.1016/s0165-0173(03)00143-7. [DOI] [Google Scholar]
  37. Braver TS, Krug MK, Chiew KS, Kool W, Westbrook JA, Clement NJ, Adcock RA, Barch DM, Botvinick MM, Carver CS, Cools R, Custers R, Dickinson A, Dweck CS, Fishbach A, Gollwitzer PM, Hess TM, Isaacowitz DM, Mather M, et al. , 2014. Mechanisms of motivation-cognition interaction: challenges and opportunities. Cogn. Affect. Behav. Neurosci 14 (2), 443–472. 10.3758/s13415-014-0300-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Bromberg-Martin ES, Matsumoto M, Hikosaka O, 2010. Dopamine in motivational control: rewarding, aversive, and alerting. Neuron 68 (5), 815–834. 10.1016/j.neuron.2010.11.022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Brown PL, Shepard PD, 2016. Functional evidence for a direct excitatory projection from the lateral habenula to the ventral tegmental area in the rat. J. Neurophysiol 116 (3), 1161–1174. 10.1152/jn.00305.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Browne CJ, Abela AR, Chu D, Li Z, Ji X, Lambe EK, Fletcher PJ, 2019. Dorsal raphe serotonin neurons inhibit operant responding for reward via inputs to the ventral tegmental area but not the nucleus accumbens: evidence from studies combining optogenetic stimulation and serotonin reuptake inhibition. Neuropsychopharmacology 44 (4), 793–804. 10.1038/s41386-018-0271-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Bull JA, Overmier JB, 1968. Additive and subtractive properties of excitation and inhibition. J. Comp. Physiol. Psychol 66 (2), 511–514. 10.1037/h0026362. [DOI] [PubMed] [Google Scholar]
  42. Campese VD, 2021. The lesser evil: pavlovian-instrumental transfer & aversive motivation. Behav. Brain Res 412, 113431 10.1016/j.bbr.2021.113431. [DOI] [PubMed] [Google Scholar]
  43. Campese VD, McCue M, Lazaro-Munoz G, Ledoux JE, Cain CK, 2013. Development of an aversive Pavlovian-to-instrumental transfer task in rat. Front. Behav. Neurosci 7 (November), 176. 10.3389/fnbeh.2013.00176. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Campese VD, Gonzaga R, Moscarello JM, LeDoux JE, 2015. Modulation of instrumental responding by a conditioned threat stimulus requires lateral and central amygdala. Front. Behav. Neurosci 9, 293. 10.3389/fnbeh.2015.00293. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Campese VD, Kim IT, Kurpas B, Branigan L, Draus C, LeDoux JE, 2020. Motivational factors underlying aversive Pavlovian-instrumental transfer. Learn. Mem 27 (11), 477–482. 10.1101/lm.052316.120. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Carhart-Harris RL, Nutt D, 2017. Serotonin and brain function: a tale of two receptors. J. Psychopharmacol 10.1177/0269881117725915, 026988111772591. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Carhart-Harris RL, 2018. Serotonin, psychedelics and psychiatry. World Psychiatry 17 (3), 358–359. 10.1002/wps.20555. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Cartoni E, Balleine B, Baldassarre G, 2016. Appetitive Pavlovian-instrumental transfer: a review. Neurosci. Biobehav. Rev 71, 829–848. 10.1016/j.neubiorev.2016.09.020. [DOI] [PubMed] [Google Scholar]
  49. Carver CS, 2006. Approach, avoidance, and the self-regulation of affect and action. Motiv. Emot 30 (2), 105–110. 10.1007/s11031-006-9044-7. [DOI] [Google Scholar]
  50. Carver CS, White TL, 1994. Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: the BIS/BAS Scales. J. Pers. Soc. Psychol 67 (2), 319–333. 10.1037/0022-3514.67.2.319. [DOI] [Google Scholar]
  51. Cavanagh JF, Wiecki TV, Cohen MX, Figueroa CM, Samanta J, Sherman SJ, Frank MJ, 2011. Subthalamic nucleus stimulation reverses mediofrontal influence over decision threshold. Nat. Neurosci 14 (11), 1462–1467. 10.1038/nn.2925. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Chang AJ, Wang L, Lucantonio F, Adams M, Lemire AL, Dudman JT, Cohen JY, 2021. Neuron-type specificity of dorsal raphe projections to ventral tegmental area. BioRxiv. 10.1101/2021.01.06.425641, 2021.01.06.425641. [DOI] [Google Scholar]
  53. Chiba T, Kayahara T, Nakano K, 2001. Efferent projections of infralimbic and prelimbic areas of the medial prefrontal cortex in the Japanese monkey, Macaca fuscata. Brain Res. 888 (1), 83–101. 10.1016/s0006-8993(00)03013-4. [DOI] [PubMed] [Google Scholar]
  54. Chiew KS, Braver TS, 2016. Reward favors the prepared: incentive and task-informative cues interact to enhance attentional control. J. Exp. Psychol. Hum. Percept. Perform 42 (1), 52–66. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Cho JR, Chen X, Kahan A, Robinson JE, Wagenaar DA, Gradinaru V, 2021. Dorsal raphe dopamine neurons signal motivational salience dependent on internal state, expectation, and behavioral context. J. Neurosci 41 (12), 2645–2655. 10.1523/jneurosci.2690-20.2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Choi J-S, Cain CK, LeDoux JE, 2010. The role of amygdala nuclei in the expression of auditory signaled two-way active avoidance in rats. Learn. Mem. (Cold Spring Harbor, N.Y.) 17 (3), 139–147. 10.1101/lm.1676610. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Church RM, 1963. The varied effects of punishment on behavior. Psychol. Rev 70 (5), 369–402. 10.1037/h0046499. [DOI] [PubMed] [Google Scholar]
  58. Clarke HF, Dalley JW, Crofts HS, Robbins TW, Roberts AC, 2004. Cognitive inflexibility after prefrontal serotonin depletion. Science 304 (5672), 878–880. 10.1126/science.1094987. [DOI] [PubMed] [Google Scholar]
  59. Clarke HF, Walker SC, Crofts HS, Dalley JW, Robbins TW, Roberts AC, 2005. Prefrontal serotonin depletion affects reversal learning but not attentional set shifting. J. Neurosci 25 (2), 532–538. 10.1523/jneurosci.3690-04.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Clery-Melin ML, Schmidt L, Lafargue G, Baup N, Fossati P, Pessiglione M, 2011. Why don’t you try harder? An investigation of effort production in major depression. PLoS One 6 (8), 1–8. 10.1371/journal.pone.0023178. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Cohen JY, Amoroso MW, Uchida N, 2015. Serotonergic neurons signal reward and punishment on multiple timescales. ELife 2015 (4), 1–25. 10.7554/elife.06346. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Cools R, 2008. Role of dopamine in the motivational and cognitive control of behavior. Neuroscientist 14 (4), 381–395. 10.1177/1073858408317009. [DOI] [PubMed] [Google Scholar]
  63. Cools R, 2019. Chemistry of the adaptive mind: lessons from dopamine. Neuron 104 (1), 113–131. 10.1016/j.neuron.2019.09.035. [DOI] [PubMed] [Google Scholar]
  64. Cools R, Roberts AC, Robbins TW, 2008a. Serotoninergic regulation of emotional and behavioural control processes. Trends Cogn. Sci 12 (1), 31–40. 10.1016/j.tics.2007.10.011. [DOI] [PubMed] [Google Scholar]
  65. Cools R, Robinson OJ, Sahakian B, 2008b. Acute tryptophan depletion in healthy volunteers enhances punishment prediction but does not affect reward prediction. Neuropsychopharmacology 33 (9), 2291–2299. 10.1038/sj.npp.1301598. [DOI] [PubMed] [Google Scholar]
  66. Cools R, Nakamura K, Daw ND, 2011. Serotonin and dopamine: unifying affective, activational, and decision functions. Neuropsychopharmacology 36 (1), 98–113. 10.1038/npp.2010.121. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Corr PJ, McNaughton N, 2012. Neuroscience and approach/avoidance personality traits: a two stage (valuation-motivation) approach. Neurosci. Biobehav. Rev 36 (10), 2339–2354. 10.1016/j.neubiorev.2012.09.013. [DOI] [PubMed] [Google Scholar]
  68. Cowen PJ, 1991. Serotonin receptor subtypes: implications for psychopharmacology. Br. J. Psychiatry 159 (S12), 7–14. 10.1192/s0007125000296190. [DOI] [PubMed] [Google Scholar]
  69. Crawford JL, Yee DM, Hallenbeck HW, Naumann A, Shapiro K, Thompson RJ, Braver TS, 2020. Dissociable effects of monetary, liquid, and social incentives on motivation and cognitive control. Front. Psychol 11, 2212. 10.3389/fpsyg.2020.02212. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Crockett MJ, Cools R, 2015. Serotonin and aversive processing in affective and social decision-making. Curr. Opin. Behav. Sci 5, 64–70. 10.1016/j.cobeha.2015.08.005. [DOI] [Google Scholar]
  71. Crockett MJ, Clark L, Robbins TW, 2009. Reconciling the role of serotonin in behavioral inhibition and aversion: acute tryptophan depletion abolishes punishment-induced inhibition in humans. J. Neurosci 29 (38), 11993–11999. 10.1523/jneurosci.2513-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Crockett MJ, Clark L, Apergis-Schoute AM, Morein-Zamir S, Robbins TW, 2012. Serotonin modulates the effects of pavlovian aversive predictions on response vigor. Neuropsychopharmacology 37 (10), 2244–2252. 10.1038/npp.2012.75. [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Crosbie J, 1998. Negative reinforcement and positive punishment. In: Lattal KA, Perone Michael (Eds.), Handbook of Research Methods in Human Operant Behavior, 1st ed. Springer Science+Business Media, pp. 163–189. 10.1007/978-1-4899-1947-2. [DOI] [Google Scholar]
  74. Cubillo A, Makwana AB, Hare TA, 2019. Differential modulation of cognitive control networks by monetary reward and punishment. Soc. Cogn. Affect. Neurosci 14 (3), nsz006. 10.1093/scan/nsz006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Dang J, King KM, Inzlicht M, 2020. Why Are Self-Report and Behavioral Measures Weakly Correlated? Trends Cogn. Sci. (Regul. Ed.) 24 (4), 267–269. 10.1016/j.tics.2020.01.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Danielmeier C, Ullsperger M, 2011. Post-error adjustments. Front. Psychol 2, 233. 10.3389/fpsyg.2011.00233. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Danielmeier C, Eichele T, Forstmann BU, Tittgemeyer M, Ullsperger M, 2011. Posterior medial frontal cortex activity predicts post-error adaptations in task-related visual and motor areas. J. Neurosci 31 (5), 1780–1789. 10.1523/jneurosci.4299-10.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]

Associated Data


Data Availability Statement

No data was used for the research described in the article.

