. 2019 Feb;42(2):79–91. doi: 10.1016/j.tins.2018.10.001

What Is the Relationship between Dopamine and Effort?

Mark E Walton 1, Sebastien Bouret 2
PMCID: PMC6352317  EMSID: EMS80528  PMID: 30391016

Abstract

The trade-off between reward and effort is at the heart of most behavioral theories, from ecology to economics. Compared to reward, however, effort remains poorly understood at both the behavioral and neurophysiological levels. This is important because unwillingness to overcome effort to gain reward is a common feature of many neuropsychiatric and neurological disorders. A recent surge in interest in the neurobiological basis of effort has led to seemingly conflicting results regarding the role of dopamine. We argue here that, upon closer examination, there is actually striking consensus across studies: dopamine primarily codes for future reward but is less sensitive to anticipated effort cost. This strong association between dopamine and the incentive effects of rewards places dopamine in a key position to promote reward-directed action.

Keywords: electrophysiology, voltammetry, cost–benefit decision making, motivation, midbrain, striatum

Highlights

Compared to reward, effort remains poorly understood at both the behavioral and neurophysiological levels.

Dopamine has been proposed as central to effort-related decision making, but its role is not clearly defined.

In fact, the activity of midbrain dopamine neurons and mesolimbic dopamine levels are modulated more strongly and more consistently by anticipated future reward than by effort, even when the weights of reward and effort on behavior are equated.

These signals may promote decisions to act based on the potential gain from a future reward.

Dopamine, Benefits, and Costs

Dopamine plays a central role in reward-guided learning and motivation. In general terms, there is broad agreement on its role in appetitive motivation, namely that dopamine mediates the positive influence of potential future rewards on behavior (action intensity, approach, learning, and decision making) 1, 2, 3, 4, 5, 6, 7, 8, 9. In particular, a recurring theme is that normal dopamine transmission is necessary to activate organisms, which in turn allows them to exert effort and gain positive outcomes [10].

However, beyond this initial consensus, there is much debate over the precise relationship between dopamine – particularly the rapid and transient (‘phasic’) changes in dopamine observed when animals are presented with or approach response options – and effortful choices. Part of the reason is that, to date, experiments that directly investigated effort-based decision making and dopamine have not always measured the separable influences of effort and reward. For instance, in many situations, effort has been studied in paradigms where the amount of force produced scaled conjointly with the amount of reward (e.g., [11]). In other experiments, subjects could earn more or better rewards by exerting more effort (e.g., 12, 13). In these experiments one can only assess the relative sensitivity to effort versus reward; effort cannot be disentangled from reward because increases in difficulty can be compensated by increases in reward. Moreover, to be able to draw conclusions about the effect of different economic parameters on dopamine, it is vital that these parameters are conjointly quantified and controlled 14, 15, 16, 17. This is particularly important when different studies require animals to overcome different effort demands (the varieties of effort cost are discussed in Box 1).

Box 1. Not All Effort Is Equal.

In neuroscientific experiments investigating effort-related choice, effort is usually treated as a cost which can diminish preference for rewards or other goals that an organism may find more desirable [75]. Perhaps not surprisingly, animals have been shown to be very sensitive to energy demands and metabolic state when foraging 52, 76, 77. In rodent studies of effort-related decision making, the two most common ways of loading an effort cost have been, on a maze, to place a physical barrier between the start point of the rodent and a reward or, in an operant chamber, to impose a lever-press requirement to obtain the reward 78, 79. In monkeys and humans, studies have tended to use either repeated actions, as in some rodent studies, or a hand-held grip that requires a particular force to be produced 11, 18, 30. More recently, there have also been attempts to examine other forms of effort that are independent of changes in reward rate, and which tend to fall under the general rubric of ‘cognitive’ effort 80, 81, 82. All these types of manipulation can systematically alter the choices of the animals, with higher costs resulting in lower preference for the respective option over a lower-effort alternative.

However, the precise nature of these effort costs is not straightforward to isolate, quantify, and compare across studies. For instance, preferences for scaling a physical barrier can be temporarily reduced by inducing physical fatigue [83], suggesting that this cost may be related to exertion. Perhaps mitigating this caveat, however, is the fact that the energy requirements of climbing and jumping in small animals such as rodents are likely negligible compared to other homeostatic needs ([84] and A. Kacelnik, personal communication), suggesting that a barrier imposes more than a simple physiological cost. The effort involved in repeated responding is even less clear, particularly because this usually involves responding over longer periods of time (note, however, that several studies have demonstrated that changes in effort-related choice cannot be simply explained by an increased delay to reward or change in reward rate 85, 86). Moreover, effort costs may differentially influence distinct elements of motivation. For instance, a high response schedule can change the likelihood of initiating an action but not influence simple Pavlovian responses such as appetitive lipping by a monkey in anticipation of the upcoming delivery of water 36, 87. Equally, pharmacological manipulations can leave patterns of schedule length-related choice relatively unaltered while modulating the amount of force that is produced [88].

These distinctions are important not only when considering the normative decision-making strategy in different model organisms but also because there is increasing evidence that the neural circuits necessary to assess and overcome different types of effort cost are partially distinct. For instance, whereas depletions of nucleus accumbens dopamine can cause animals to be effort-averse when choosing whether or not to scale a barrier, or when needing to make multiple lever presses, they have little effect on choices between options with different force requirements [89].


Another reason for the conflicting interpretations about the relationship between dopamine and effort is that it has not always been clearly specified at which points during an effort-based decision dopamine might be acting. The reduction in willingness to overcome effort constraints observed following pharmacological manipulations could result from at least three factors: (i) changes in the way costs and benefits are valued or compared, (ii) the willingness to initiate a choice, and/or (iii) the motivation to overcome the cost (of note, there might also be changes in how subjects learn about effort, but given that data on this issue are currently lacking, it will not be discussed further here).

To make matters even more complicated, there have been misunderstandings about how to interpret those existing data that do directly speak to the issue of the relationship between dopamine and effort. We have undertaken studies to systematically examine how both reward and effort influence dopamine activity [18] and release 19, 20. Despite using different techniques in different species at different levels of the dopamine system, we have obtained – in our opinion – strikingly convergent results. Nevertheless, we have found our data being used, respectively, to argue for effort coding and effort insensitivity of dopamine, sometimes even in the same article 21, 22.

Motivated by these discrepancies, controversies, and conflicting interpretations, we review here the evidence for a role for phasic changes in dopamine during effort-related decision making. We outline areas where there seems to be a consensus and areas where disagreements remain. Finally, we address the open questions that we see as being crucial for further progress in this area.

The Influence of Upcoming Effort on Dopamine Neuronal Activity

There is an extensive literature, both in primates and rodents, showing that the responses of many midbrain dopamine neurons elicited by predictive cues reflect the expected value of a future reward. Such signals are consistently larger when the benefits will be greater 18, 23, 24, 25, but can also be negatively influenced by costs that directly relate to a decrease in reward availability, such as the delay in that future reward or its probability 24, 26, 27, 28, 29. This coding also incorporates individual preferences for different reward types, and can be modulated by risk inclination, fatigue, and satiety 16, 18, 30, 31. Not surprisingly, this rich set of findings has been used to support a position that dopamine activity encodes a subjective value, or utility, signal that is appropriate for driving decision making [32]. Of note, these data were obtained in well-trained animals because variability in behavior in early phases of training makes the interpretation of the relationship between brain activity and behavior more difficult. Crucially, however, the animals in these experiments showed sufficient behavioral flexibility to rule out an interpretation in terms of habit or overtraining. Instead, they mastered the task rules well enough to use information appropriately to make decisions that minimize costs and maximize benefits.

However, as we all know from daily-life situations (for instance, deciding whether or not to run to catch a train), any decision relies not only on the anticipated value of future reward but also on predictions of the effort that must be exerted to gain this benefit (see Box 2 for consideration of how to define a cost). Therefore, if dopamine is to be considered a suitable signal to drive economic choice – in other words, if dopamine represents a net utility signal – it must also be modulated by anticipated effort costs. To date, however, the evidence for this is lacking.

Box 2. Effort: A Special Type of Cost?

The trade-off between costs and benefits is central to many disciplines interested in behavior. In economics, decision making is captured by a comparison between the utility of potential items (goods or services) on offer ([90] for review). The utility of an item increases with the expected benefits (the size of a reward, for instance), and decreases with the expected costs. In that frame, any feature that leads to a decrease in utility can be considered as a cost: the effort required to obtain it, the delay until it is obtained, or the risk of not obtaining it (if it is chosen). Optimal decision making occurs here when the choice is based on a perfect evaluation of the (expected) utility of the items to be chosen from. In other words, what matters for optimality is the precision with which the objective measures (reward size, effort, delay, risk) are mapped onto an individual's subjective values, as measured using choices.
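This economic framing can be made concrete with a toy computation. The sketch below is purely illustrative (the article does not specify a functional form, and all weights and values here are hypothetical): utility rises with expected reward and falls with each cost, and a standard softmax rule, assumed here only for illustration, converts utilities into choice probabilities.

```python
import math

def utility(reward, effort, delay, p_obtain, k_effort=0.5, k_delay=0.1):
    # Utility increases with expected benefit (probability x reward size)
    # and decreases with each cost; the weights are illustrative.
    return p_obtain * reward - k_effort * effort - k_delay * delay

def softmax_choice(u_a, u_b, beta=2.0):
    # Probability of choosing option A over option B given their
    # utilities; beta controls how deterministic the choice is.
    return 1.0 / (1.0 + math.exp(-beta * (u_a - u_b)))

# Example: high-reward/high-effort versus low-reward/low-effort option
u_hr = utility(reward=4.0, effort=4.0, delay=2.0, p_obtain=1.0)  # 1.8
u_lr = utility(reward=1.0, effort=1.0, delay=2.0, p_obtain=1.0)  # 0.3
p_hr = softmax_choice(u_hr, u_lr)
```

The point of the sketch is simply that effort, delay, and risk all enter as costs discounting the same scalar utility, which is what a 'common currency' account would require dopamine to represent.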

In biology, the trade-off between costs and benefits is also central, but what animals are thought to optimize in their behavior is the allocation of energy, rather than utility. This has been captured by the theory of optimal foraging, which predicts that the energy spent to obtain food should not exceed the energy provided by the food obtained 77, 91. Note that animals must also solve life-history trade-offs, for instance between reproductive costs (mating, parenting) and somatic efforts (growth, storage, maintenance) 92, 93. Over time, animals are thought to maximize the rate of energy intake and minimize their energy expenditure to maintain a positive energy balance. Crucially, this balance between energy costs and benefits could be achieved through two formally distinct (but not exclusive) strategies: (i) a selection of appropriate behaviors through evolution, which implies that such behaviors would be both relatively fixed for a given species and adaptive in their natural environment, and (ii) a cognitive evaluation of potential costs and benefits associated with a given action, which would allow finer adaptation to individual choices. In the latter case, effort costs correspond to the anticipated energy expenditure per unit time of the potential actions, allowing planning of upcoming behavior. Various species presumably display distinct combinations of these strategies, and therefore different ways to adjust their energy balance as a function of the ecological constraints in which they evolved (94, 95, 96, 97, 98, 99, 100 and Louail et al., unpublished).


One example comes from a reward schedule task in which monkeys had to complete a series of between one and four correct trials to gain reward, with a predictive cue signaling the number of trials remaining in the schedule. In these experiments, the majority of dopamine neurons showed no sensitivity to cued information about schedule length, even though this information had a strong influence on the motivation of the monkey to perform the task (i.e., monkeys were much more likely to engage in the task for short compared to long schedules) [33]. Notably, the strong but equivalent levels of dopamine activity to the first cue in each sequence were mirrored by an equivalent level of appetitive lipping, an appetitive Pavlovian response that has been shown to reflect information about outcome value 34, 35, following presentation of that cue [36]. In other words, the intensity of the dopamine responses appears to be driven by reward information (indexed by lipping) rather than by changes in effort level. In line with these data, Pasquereau and Turner [30] also found little effect of upcoming energy expenditure on the activity of dopamine neurons, with only 13% of dopamine neurons encoding effort (compared to 47% for reward) (Figure 1A).

Figure 1.

Figure 1

Relative Sensitivity of Dopamine (DA) Neurons to Reward Benefits and Effort Costs. Sensitivity of substantia nigra pars compacta dopamine neurons to information about upcoming reward benefits (blue) and effort costs (red) in two recent neurophysiological studies (A,B) in behaving monkeys. In both studies, monkeys were required to perform a given action to obtain a given reward. Reward sizes and physical difficulty (effort cost) were manipulated independently across trials, and each trial started with a visual cue indicating the upcoming effort and reward. Regression coefficients were calculated using a sliding-window procedure to evaluate the difference in firing across reward (blue) and effort (red) conditions at each time-point around stimulus onset. The firing of dopamine neurons shows reliable positive encoding of reward size (firing rates are greater for cues indicating large versus small rewards) within 200 ms after cue onset. At the same time, dopamine neurons also display negative modulation by effort level (firing rates are smaller for larger effort levels). Crucially, the magnitude of the reward modulation is greater than that of the effort modulation in both studies, even though the studies clearly differ in the way animals needed to cope with the expected difficulty. The difference in sensitivity cannot simply be due to a difference in subjective sensitivity to reward as compared to effort, because these two variables had an equivalent weight on the animal's willingness to work, at least in [18]. Panel (A) reproduced, with permission, from [30]; panel (B) adapted, with permission, from [18].

Although effort costs did influence performance parameters such as reaction times, it could be argued that this discrepancy between reward and effort coding arose because the weights of the cost and benefit parameters on value were not completely equated, particularly because the willingness of the monkeys to work was only influenced by the effort costs on a small minority of sessions. However, a subsequent study by Varazzani et al. [18] also found that dopamine neurons were significantly more sensitive to information about upcoming reward size than about upcoming energy expenditure (Figure 1B). This occurred even though here the weights of reward and effort on the monkeys' willingness to work were similar in magnitude (although opposite in direction), ruling out the possibility that the difference in sensitivity to reward versus effort in dopamine neurons is related to a lower behavioral sensitivity to effort. Because of this reduced sensitivity to effort costs relative to reward benefits, the responses of dopamine neurons in this task, in isolation, did not predict upcoming choices, in contrast to other tasks where the outcome value depended only upon reward information 16, 29.
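The sliding-window regression procedure described in the Figure 1 legend can be illustrated with a simplified sketch. This is not the authors' analysis code: the window sizes, synthetic firing rates, and condition levels below are hypothetical, and ordinary least squares is assumed.

```python
import numpy as np

def sliding_regression(firing, reward, effort, win=50, step=10):
    # firing: trials x timepoints array of spike counts.
    # reward, effort: per-trial condition levels (e.g., cued reward
    # size and force requirement). Returns the (reward, effort)
    # regression coefficients for each window position.
    reward = np.asarray(reward, dtype=float)
    effort = np.asarray(effort, dtype=float)
    X = np.column_stack([np.ones(len(reward)), reward, effort])
    betas = []
    for start in range(0, firing.shape[1] - win + 1, step):
        y = firing[:, start:start + win].mean(axis=1)  # windowed rate
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        betas.append((coef[1], coef[2]))
    return betas

# Synthetic check: firing built to depend positively on reward and
# negatively on effort should yield betas close to (+2.0, -0.5).
rng = np.random.default_rng(0)
reward_levels = rng.integers(1, 4, size=40)
effort_levels = rng.integers(1, 4, size=40)
rates = 5.0 + 2.0 * reward_levels - 0.5 * effort_levels
firing = np.tile(rates[:, None], (1, 100))
betas = sliding_regression(firing, reward_levels, effort_levels,
                           win=50, step=50)
```

In the real analyses, a reliably positive reward coefficient alongside a smaller-magnitude negative effort coefficient is what the asymmetry described above looks like in this kind of output.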

The Influence of Upcoming Effort on Dopamine Release

One possible reason for the limited evidence for effort encoding by dopamine, outlined above, is that the studies to date have mostly targeted neurons in the substantia nigra pars compacta, which projects to dorsal parts of the striatum. By contrast, manipulations of dopamine transmission in rodents have strongly implicated mesolimbic pathways from the ventral tegmental area (VTA), particularly to the core of the nucleus accumbens (NAcC), as being necessary to allow animals to select options requiring additional effort for greater reward over a lower-reward alternative [10]. At first glance, this might suggest that dopamine levels in the NAcC would be strongly sensitive to upcoming effort costs. However, an alternative, in accordance with the electrophysiological data, would be that dopamine levels might principally encode predictions of future reward to allow animals to determine what would be an appropriate level of effort to exert [5].

To test this, Gan and colleagues [19] used fast-scan cyclic voltammetry to measure subsecond fluctuations in NAcC dopamine levels in response to cues signaling the availability of one of two options. One option always required animals to pay a ‘reference’ effort cost for reward (16 lever presses for 1 food pellet), and the other either had a different associated effort cost (2 or 32 lever presses) or a different reward value (4 or 0 food pellets). Importantly, the reduced cost and increased reward levels were selected to confer on average equal utility, and therefore any differences in dopamine release between the cost and reward conditions cannot be attributed to differences in net value. As with the activity of dopamine neurons, cue-elicited dopamine release consistently scaled with anticipated future reward. Importantly, despite having an equivalent influence on response latency, learning rates, and overall preference, changes in effort costs had substantially less impact on dopamine levels (Figure 2A).

Figure 2.

Figure 2

Cue-Elicited Dopamine (DA) Levels Are Primarily Modulated by Expected Future Benefits and Not by Anticipated Costs. (A) Behavior (choice performance and response latencies, upper panels) and cue-elicited dopamine levels (lower panels), recorded with fast-scan cyclic voltammetry in rat nucleus accumbens core, in conditions where rats were presented with options signaling availability of a future reward (food pellets) after paying an effort cost (repeated lever presses). In each condition there was one reference option (16 presses/1 pellet, blue bar/line) and one alternative that was associated with a higher benefit (16 presses/4 pellets, purple bar/line, left panels), lower cost (2 presses/1 pellet, red bar/line, mid panels), or higher cost (32 presses/1 pellet, burgundy bar/line, right panels). Filled lines correspond to the preferred option of each pair. (B) Difference in peak cue-elicited dopamine on higher benefit/lower cost trials ([DA]HR/LC) compared to reference trials ([DA]REF) in individual rats as a function of the amount of experience they had with those particular cost–benefit contingencies (standard training, <10 sessions of experience, lighter colored dots; extended training, ≥10 sessions of experience, darker colored dots). Although a greater peak dopamine was consistently recorded on higher-benefit trials regardless of training experience (left panel), the difference in dopamine between lower-cost and reference trials reduced with increasing experience of this condition (right panel). Specifically, after extended training with the lower-cost contingencies, there was no reliable difference in cue-elicited dopamine on lower cost and reference trials, even though rats still exhibited a strong preference for the lower-cost option and responded faster on lower-cost trials. 
Note that, unlike in some studies (e.g., [60]), the cost/benefit contingencies in the experiments depicted here reversed each day, and that these data come only from trials after animals had achieved a stable preference for the higher-benefit or lower-cost option in each session. Therefore, these results neither reflect learning (in early sessions) nor habitual responding (in later sessions). Panels adapted, with permission, from [19]. (C) (Upper panel, left) dopamine responses to cues signaling availability of either a low-reward/low-cost option (LR/LC: 1 reward/4 presses) or a high-reward/mid-cost option (HR/MC, 8 presses/4 rewards; left panel). (Upper panel, right) As in the left panel, but comparing responses to the LR/LC option to a HR/high-cost option (HR/HC, ≥32 presses/4 rewards). The cost–benefit contingencies were again reversed every few sessions, and dopamine data were collected from trials after animals had achieved a stable preference for the HR/MC (versus LR/LC) or LR/LC (versus HR/HC). (Lower panel) A ‘dopamine discriminability index’ plotted against average preference across a session, quantified using a ‘choice index’ (HR−LR choices). The dopamine discriminability index was based on the area under the receiver operating characteristic (auROC) classifying dopamine release as discriminable on HR from LR trials in each session. As can be observed, although the choice index spans the full range of values, the dopamine index is strongly positively skewed, showing that it was more common to classify dopamine release as greater on HR than LR trials irrespective of preference (blue dots: LR/LC versus HR/MC; red dots: LR/LC versus HR/HC). Panels adapted, with permission, from [20].
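The auROC-based ‘dopamine discriminability index’ described in the legend can be illustrated with a minimal sketch. The peak dopamine values below are hypothetical; the index is simply the probability that a randomly drawn HR-trial value exceeds a randomly drawn LR-trial value (0.5 = indistinguishable distributions, 1.0 = dopamine always greater on HR trials).

```python
def auroc(hr_values, lr_values):
    # Probability that a randomly drawn HR-trial value exceeds a
    # randomly drawn LR-trial value (ties count as half), which is
    # equivalent to the area under the ROC curve.
    n_pairs = len(hr_values) * len(lr_values)
    wins = sum((h > l) + 0.5 * (h == l)
               for h in hr_values for l in lr_values)
    return wins / n_pairs

# Hypothetical peak cue-evoked [DA] values (nM) from one session
hr_trials = [4.1, 3.8, 4.5, 3.2, 4.0]
lr_trials = [2.9, 3.1, 3.6, 2.5, 3.0]
index = auroc(hr_trials, lr_trials)  # 0.96: almost fully separable
```

An index near 1 in a session whose choice index is near zero (or negative) is exactly the dissociation described above: dopamine discriminates reward size even when preference does not favor the high-reward option.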

However, it is not the case, as has sometimes been described, that effort costs had no influence on dopamine levels; modulation by effort was present, but it was weaker and transient (it was only present for costs lower than the reference, and in animals that had not undergone extensive training) (Figure 2B). As with neuronal activity, this meant that dopamine levels in isolation could not be used to predict effort-based choices.

A complementary picture emerges from a companion study by Hollon and colleagues [20]. Instead of one option being objectively of higher value than the alternative, here rats faced a decision between a low-effort/low-reward option and a higher-effort/high-reward alternative. In separate conditions, the effort cost associated with the high reward was altered such that in one condition the animals preferred the higher-effort/high-reward option, whereas in the other the cost was increased until the average preference was for the low-effort/low-reward option. Average cue-elicited dopamine levels were significantly greater on high-reward than on low-reward trials. As before, there was also a small, consistent influence of effort costs on cue-evoked dopamine, particularly on the initial cue-evoked response. Importantly, this influence was again much weaker than that of the expectation of future reward, meaning that, even in conditions where the rats preferred and responded faster to the low-reward option, average dopamine levels were still greater following presentation of the high-effort/high-reward option (Figure 2C).

What Role Might Rapid Changes in Dopamine Play in Effort-Related Choice?

The emerging picture from the studies discussed above is one of convergence: dopamine is reliably modulated by expectations of future reward, whereas the negative influence of effort costs is much more limited and is only observed in restricted task conditions (Figure 3, Key Figure). What is remarkable is that such a consistent picture emerges despite differences in species, task, technique, and dopamine subsystem (nigrostriatal dopamine neurons versus mesolimbic dopamine release) (see Box 3 for discussion of the homogeneity and diversity of dopamine signaling). Moreover, these results align with several other studies implicating dopamine more strongly in reward than in effort processing 37, 38, 39, 40. Thus, we believe that this feature is neither anecdotal nor due to an experimental artifact.

Figure 3.

Figure 3

Key Figure: Influence of Expected Reward and Effort on Behavior

Information about upcoming benefits and costs is separately integrated into incentives and effort, respectively. Incentives have a dual effect on behavior: first, they have a positive influence on action selection (animals select the most beneficial options) and, second, incentive processes stimulate action execution. Effort, defined as the amount of anticipated resources necessary for action, negatively affects decisions: animals tend to select actions that minimize energy expenditure (Box 2). Nonetheless, the influence of this information on action execution might be dualistic: although anticipated greater demand can retard action initiation, animals, once committed, may need to boost their motivation to overcome effort costs. However, to date, the processes involved in surmounting effortful challenges remain little explored. Abbreviation: DA, dopamine.

Box 3. Is Dopamine Signaling a Single Functional Entity?

For the sake of simplicity, we have chosen to treat the dopamine system as a functional unit. Indeed, our deliberate aim has been to focus on the concordance in the literature regarding dopamine and effort, highlighting the common features that were reliable enough to be described regardless of the species, the task, or the method used to measure activity. Nonetheless, we do not wish to underplay the widespread evidence for diversity of molecular features, anatomical connections, and coding across dopamine neurons. For instance:

(i) At the anatomical level, the projections of dopamine neurons vary as a function of their ventromedial to dorsolateral location in the midbrain and of their molecular properties 101, 102, 103. Note, however, that these are graded distinctions [104], and several studies point to interactions among these channels [105].

(ii) At a physiological level, a significant proportion of dopamine neurons within particular midbrain regions respond in a stereotyped way to rewards and their predictors 6, 28. However, dopamine neurons recorded in the distinct regions of the midbrain can also display distinct functional correlates, particularly concerning coding of reward and aversion (71, 106; but see also [107]). Moreover, there can even be heterogeneous activity patterns in subpopulations of dopamine neurons within midbrain regions 18, 30, 70. Such diversity is also apparent in terminal regions 72, 73, 108.

(iii) At a functional level, experiments have shown that artificial activation of dopamine neurons or their projections originating from distinct midbrain regions can have distinct functional consequences 109, 110, even if stimulation of both SNc and VTA can induce appetitive effects [111].

Although this evidence converges to suggest that the dopamine system can be broken down into several functional units, the exact boundaries between these units and the exact nature of their functions remain to be elucidated. Crucially, the level of description remains strongly dependent upon the question of interest and the scale at which it is studied (e.g., the information encoded by single dopamine neurons versus that communicated by diffusion-based volume transmission in terminal regions). A useful comparison may be with primary visual cortex which, for some purposes, can be considered as a functional entity even though it can be broken down into several patches based on ocular dominance or the position of the visual input, and is also part of an interconnected set of cortical regions involved in visual processing.


Crucially, this holds true even though, behaviorally, the choices of the animals were strongly modulated by both effort and reward to a degree that was quantitatively equivalent. In other words, there is a clear uncoupling between cue-elicited dopamine activity and trial-by-trial effort-related decisions. Therefore, we would contend that dopamine cannot act as a ‘common currency’ that integrates across all economic variables to signal the net utility of an available option (or, for that matter, deviations from average expectation, i.e., a utility prediction error).

How then can one reconcile these data with the careful, elegant experiments showing a strong relationship between the activity of dopamine neurons and quantitative utility prediction errors 32, 41? One way would be simply to assume that choices about whether or not to invest effort form a fundamentally distinct class of decisions, separate from other cost–benefit scenarios such as intertemporal or risky choice (Box 2). In contrast to costs such as delay or probability, effort is much more closely aligned with action systems. Indeed, several studies have shown that the evaluation of effort costs involves cortical–striatal circuits closely related to response selection, such as dorsomedial frontal cortex (anterior cingulate and supplementary motor cortex) and dorsal striatum/putamen 42, 43, 44, 45, 46, 47, 48, and not circuits centered on orbital and ventromedial prefrontal cortices and parts of ventral striatum, which are more commonly implicated in benefit-based decisions (e.g., 49, 50). Therefore, decisions relying on a cost–benefit analysis would result from a joint influence of these two systems on motor output.

However, although it is indisputable that dopamine encodes parameters about expected future outcomes relevant to a decision, there is also increasing evidence that its function may not be to directly guide action selection between simultaneously available options. Indeed, even when choices are defined solely by differences in outcomes, midbrain dopamine activity and NAcC dopamine levels in response to individual options are not necessarily a reliable predictor of preference because they often signal the value of an upcoming choice even when it is not reward-maximizing 29, 51. From this perspective, dopamine seems to signal the potential benefits of a decision that has already been finalized.

What then might be the functions of transient increases in dopamine before effort-related actions? One possibility is that dopamine does not signal predictions of future reward to guide what action to take, but instead provides a signal to shape whether (and possibly also when and how fast) to act given the potential benefits of taking a presented opportunity in a particular environment. In naturalistic settings, potential rewards are often encountered sequentially rather than simultaneously. This implies that a key computation, recurring across species, is whether or not to engage with a presented opportunity [52]. Thus, we would argue that dopamine activation reflects the incentive influence of a potential reward on behavior that could lead to obtaining it (Figure 3). While such signals will tend to be elicited by external stimuli, they can nonetheless be contextually regulated by afferent input 53, 54, allowing control over when it is beneficial to engage versus when it is better to display restraint.

For instance, NAcC dopamine levels are dynamically modulated not only by reward prediction errors but also by whether or not an action should be made to gain the reward 55, 56. A comparable phenomenon has been observed in the firing of dopamine neurons recorded from the substantia nigra pars compacta in monkeys 18, 36, indicating that it is reliable across species, task, recording technique, and dopamine subsystem. Put another way, dopamine activity would reflect an instantaneous estimate of the change in average reward rate – or potential net energy gain, based on the internal state of the animal 57, 58 – that would be achieved by pursuing an option. This signal could, in turn, motivate a new course of action to be initiated [59]. Indeed, because effort costs can cause animals to act more slowly or perform less reliably, it is worth considering that what may appear in some experimental situations as an influence of effort on dopamine may, without careful controls (such as equating trial rates irrespective of high- or low-effort choices: compare [19] with [60]), actually relate to changes in reward availability.

We would speculate that this signal incorporates effort costs only if the pursuit of the reward has an influence on the net income or reward rate of the animal. This close connection with rates of reward might also explain the important link between dopamine and time judgments [61], and between dopamine and action vigor 37, 62, 63, given that decisions about when and how fast to act will both influence the rate of events. Of note, effort costs in laboratory paradigms are typically of relatively low energy demand and also tend not to induce large fluctuations of reward rate within an individual session. According to the framework outlined above, a natural consequence of these characteristics would be that recorded dopamine signals come to be dominated by predictions of future reward.
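The reward-rate account sketched in the preceding paragraphs can be made concrete with a toy calculation. The code below is our own illustrative assumption, not a model drawn from any of the cited studies; the function names `rate_change` and `engage` and the parameter values are hypothetical. The key point it captures is that effort enters the decision only through its effect on the reward rate (e.g., by lengthening handling time), not as a cost that is encoded and subtracted directly.

```python
# Toy sketch (an assumption for illustration, not the authors' model):
# a dopamine-like quantity tracks the change in average reward rate
# that pursuing an option would produce. Effort matters only insofar
# as it adds handling time and so lowers that rate.

def rate_change(reward, handling_time, background_rate):
    """Change in average reward rate if the option is pursued.

    Pursuit yields `reward` over `handling_time`, during which the
    background rate is forgone (an opportunity cost of time).
    """
    return reward / handling_time - background_rate

def engage(reward, handling_time, background_rate):
    """Engage only when pursuit improves on the background rate."""
    return rate_change(reward, handling_time, background_rate) > 0

# A reward worth 10 units over 2 s beats a background rate of 3 units/s,
# so the animal should engage.
assert engage(reward=10.0, handling_time=2.0, background_rate=3.0)

# The same reward requiring 5 s of effortful handling is declined -- but
# only because the extra time drags the rate below background, not
# because the effort cost is represented per se.
assert not engage(reward=10.0, handling_time=5.0, background_rate=3.0)
```

On this sketch, typical laboratory effort manipulations (small changes in `handling_time` relative to session length) would barely move `rate_change`, consistent with the observation that recorded dopamine signals are dominated by predicted reward.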

Linking Recording and Manipulation Studies

The ideas discussed in earlier sections about the relationship between dopamine and effort emerged primarily from recent correlative recording studies in behaving animals. In addition, there is a history of studies examining the effects of pharmacological manipulations of dopamine transmission during effort-based decision making 2, 8, 10. However, it is not always straightforward to align the findings from recording studies with the pharmacological literature on this topic given the differences in timescale and approach. An important future step will be to combine recording and manipulation techniques, and to use manipulation techniques with better temporal resolution, to understand better how disrupting neurochemistry can in turn influence dopamine coding. Nonetheless, even given these caveats, we believe that the proposed role for dopamine outlined here can fit well alongside the existing literature on the effects of disrupting dopamine transmission.

A frequent observation, for instance, is that blocking dopamine receptors reduces the likelihood and speed of engagement as a function of future reward 9, 64, 65. This observation fits nicely into a picture where rapid increases in dopamine provide a unitary signal to different terminal regions about the potential net gain of a presented opportunity, and can thus help to authorize decisions about when to act and when not to act. The fact that we have observed such signals particularly clearly in situations when animals are required to switch from an inactive or disengaged state to an active one may also be related to a particular role for dopamine when needing to initiate a non-stereotyped response [3].

We have emphasized what we see as a consensus in the literature on the point that effort costs have a limited influence on dopamine activity. This may seem, at first glance, contradictory to the fact that pharmacological manipulations of dopamine often cause pronounced changes in the allocation of effort. There are some subtleties here, however, that can easily be missed, particularly when comparing correlative and interventional approaches. One subtle distinction is that, even if dopamine can promote energy expenditure, it only does so as a function of the upcoming reward, and not as a function of the upcoming (energy) cost itself. Crucially, the few recent experiments that examined the influence of dopamine treatments in tasks where effort costs and reward benefits were dissociated also found a stronger influence on reward-based than effort-based decisions 37, 40. Therefore, even if these manipulations lack the temporal and anatomical precision of recording studies, they reinforce the idea that the dopamine system as a whole is much more sensitive to potential benefits than to potential effort costs, and demonstrate the generality of this relation, as well as its causal nature.

If effort costs are not directly encoded by dopamine signals, an obvious question remains: which circuits signal effort costs, and how might these interact with dopamine? As described earlier, there is general agreement that anterior cingulate cortex and dorsomedial motor areas play some role in representing effort, and the former has recently been shown to modulate VTA activity during an effort task [53]. When it comes to the question of what neural pathways allow effort costs to be overcome, one intriguing possibility is that, despite all the attention on dopamine, other neurochemical systems are crucial in this context. For instance, locus coeruleus neurons are strongly active both immediately before and after an effortful action is initiated 18, 36, suggesting that noradrenaline might be crucial to mobilize energy to overcome an effort cost. Some striatal cholinergic interneurons also increase their activity when engaging in a high-effort or small-reward trial [66]. Although serotonin has tended to be linked more closely to delay-based decisions, recent evidence suggests it may also affect how effort costs accumulate over time, as well as the vigor of responding (the latter possibly via dopamine interactions) 67, 68.

Concluding Remarks

The function of dopamine has long generated a great deal of debate, and will likely continue to do so. We highlight here what we believe to be a conspicuous point of consensus. Crucially, this consensus is only arrived at when the data are considered for what they specifically show, putting aside any attempt to fit them into one or other established position. To us, it seems that the data converge to a picture where dopamine signals display strong and consistent reward encoding, but limited and sometimes transient effort encoding.

Clearly, there are many issues that remain to be addressed (see Outstanding Questions) and we would be surprised if there are no new points of divergence once various aspects of effort and different dopamine pathways are examined in more detail. For instance, our starting position is that the convergence we observe suggests there may be a unitary, or at least general, function for dopamine in promoting action based on predictions of future benefits (cf [69] and [9] for complementary ideas). However, given ongoing debates concerning the degree of diversity of coding in midbrain dopamine neurons (e.g., 18, 30, 70, 71), as well as clear variations in patterns of release in terminal regions 72, 73, we are cognizant that there may be finer-grained distinctions to be drawn. In particular, it will be important to understand better how the role of dopamine in spontaneous movement and timing dovetails with some of our viewpoints 61, 63, 70, 74. Nonetheless, we believe that the potential to find correspondences across species, technique, and task shows that it should be possible to harness the advantages that the different approaches confer to uncover common and conserved functions of dopamine.

Outstanding Questions.

What is the timescale of dopamine action? Does the average background (‘tonic’) dopamine level have a role distinct from that of the transient cue-evoked signals in motivating animals to overcome effort, or has the distinction between its action at fast and slow timescales been overstated?

Does dopamine also influence movement parameters? NAcC dopamine levels do not straightforwardly correlate with movement times. However, several studies have suggested a link between nigrostriatal dopamine, at least, and the vigor with which an action is performed.

What is the relationship between dopamine and other structures, such as anterior cingulate cortex, and other neurochemical systems during effort-related choices?

Acknowledgments

M.E.W. was supported by a Wellcome Trust Senior Research Fellowship (202831/Z/16/Z). S.B. was supported by the CNRS and a research grant from the Fondation de France. We would like to thank Rob Turner and Benjamin Pasquereau for allowing us to reproduce part of their data, and Nick Hollon for help with the adaptation of parts of Figure 2. M.E.W. would like to acknowledge the influence of Paul Phillips on shaping several of the ideas expressed here, as well as useful discussions with Alex Kacelnik.

Contributor Information

Mark E. Walton, Email: mark.walton@psy.ox.ac.uk.

Sebastien Bouret, Email: sebastien.bouret@icm-institute.org.

References

  • 1.Berridge K.C. The debate over dopamine’s role in reward: the case for incentive salience. Psychopharmacology (Berl.) 2007;191:391–431. doi: 10.1007/s00213-006-0578-x. [DOI] [PubMed] [Google Scholar]
  • 2.Chong T.T., Husain M. The role of dopamine in the pathophysiology and treatment of apathy. Prog. Brain Res. 2016;229:389–426. doi: 10.1016/bs.pbr.2016.05.007. [DOI] [PubMed] [Google Scholar]
  • 3.Nicola S.M. The flexible approach hypothesis: unification of effort and cue-responding hypotheses for the role of nucleus accumbens dopamine in the activation of reward-seeking behavior. J. Neurosci. 2010;30:16585–16600. doi: 10.1523/JNEUROSCI.3958-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Robbins T.W., Everitt B.J. Functions of dopamine in the dorsal and ventral striatum. Semin. Neurosci. 1992;4:119–127. [Google Scholar]
  • 5.Phillips P.E. Calculating utility: preclinical evidence for cost–benefit analysis by mesolimbic dopamine. Psychopharmacology (Berl.) 2007;191:483–495. doi: 10.1007/s00213-006-0626-6. [DOI] [PubMed] [Google Scholar]
  • 6.Schultz W. Getting formal with dopamine and reward. Neuron. 2002;36:241–263. doi: 10.1016/s0896-6273(02)00967-4. [DOI] [PubMed] [Google Scholar]
  • 7.Guitart-Masip M. Action versus valence in decision making. Trends Cogn. Sci. 2014;18:194–202. doi: 10.1016/j.tics.2014.01.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Pessiglione M. Why not try harder? Computational approach to motivation deficits in neuro-psychiatric diseases. Brain. 2017 doi: 10.1093/brain/awx278. Published online November 29, 2017. [DOI] [PubMed] [Google Scholar]
  • 9.Berke J.D. What does dopamine mean? Nat. Neurosci. 2018;21:787–793. doi: 10.1038/s41593-018-0152-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Salamone J.D., Correa M. The mysterious motivational functions of mesolimbic dopamine. Neuron. 2012;76:470–485. doi: 10.1016/j.neuron.2012.10.021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Schmidt L. Disconnecting force from money: effects of basal ganglia damage on incentive motivation. Brain. 2008;131:1303–1310. doi: 10.1093/brain/awn045. [DOI] [PubMed] [Google Scholar]
  • 12.Cousins M.S., Salamone J.D. Nucleus accumbens dopamine depletions in rats affect relative response allocation in a novel cost/benefit procedure. Pharmacol. Biochem. Behav. 1994;49:85–91. doi: 10.1016/0091-3057(94)90460-x. [DOI] [PubMed] [Google Scholar]
  • 13.Cousins M.S. Nucleus accumbens dopamine depletions alter relative response allocation in a T-maze cost/benefit task. Behav. Brain Res. 1996;74:189–197. doi: 10.1016/0166-4328(95)00151-4. [DOI] [PubMed] [Google Scholar]
  • 14.Fiorillo C.D. Multiphasic temporal dynamics in responses of midbrain dopamine neurons to appetitive and aversive stimuli. J. Neurosci. 2013;33:4710–4725. doi: 10.1523/JNEUROSCI.3883-12.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Fiorillo C.D. Two dimensions of value: dopamine neurons represent reward but not aversiveness. Science. 2013;341:546–549. doi: 10.1126/science.1238699. [DOI] [PubMed] [Google Scholar]
  • 16.Stauffer W.R. Dopamine reward prediction error responses reflect marginal utility. Curr. Biol. 2014;24:2491–2500. doi: 10.1016/j.cub.2014.08.064. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Matsumoto H. Midbrain dopamine neurons signal aversion in a reward-context-dependent manner. eLife. 2016;5 doi: 10.7554/eLife.17328. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Varazzani C. Noradrenaline and dopamine neurons in the reward/effort trade-off: a direct electrophysiological comparison in behaving monkeys. J. Neurosci. 2015;35:7866–7877. doi: 10.1523/JNEUROSCI.0454-15.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Gan J.O. Dissociable cost and benefit encoding of future rewards by mesolimbic dopamine. Nat. Neurosci. 2010;13:25–27. doi: 10.1038/nn.2460. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Hollon N.G. Dopamine-associated cached values are not sufficient as the basis for action selection. Proc. Natl. Acad. Sci. U. S. A. 2014;111:18357–18362. doi: 10.1073/pnas.1419770111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Schultz W. Phasic dopamine signals: from subjective reward value to formal economic utility. Curr. Opin. Behav. Sci. 2015;5:147–154. doi: 10.1016/j.cobeha.2015.09.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Shenhav A. Toward a rational and mechanistic account of mental effort. Annu. Rev. Neurosci. 2017;40:99–124. doi: 10.1146/annurev-neuro-072116-031526. [DOI] [PubMed] [Google Scholar]
  • 23.Tobler P.N. Adaptive coding of reward value by dopamine neurons. Science. 2005;307:1642–1645. doi: 10.1126/science.1105370. [DOI] [PubMed] [Google Scholar]
  • 24.Roesch M.R. Dopamine neurons encode the better option in rats deciding between differently delayed or sized rewards. Nat. Neurosci. 2007;10:1615–1624. doi: 10.1038/nn2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Nakamura K. Reward-dependent modulation of neuronal activity in the primate dorsal raphe nucleus. J. Neurosci. 2008;28:5331–5343. doi: 10.1523/JNEUROSCI.0021-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Kobayashi S., Schultz W. Influence of reward delays on responses of dopamine neurons. J. Neurosci. 2008;28:7837–7846. doi: 10.1523/JNEUROSCI.1600-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Fiorillo C.D. Discrete coding of reward probability and uncertainty by dopamine neurons. Science. 2003;299:1898–1902. doi: 10.1126/science.1077349. [DOI] [PubMed] [Google Scholar]
  • 28.Eshel N. Dopamine neurons share common response function for reward prediction error. Nat. Neurosci. 2016;19:479–486. doi: 10.1038/nn.4239. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Morris G. Midbrain dopamine neurons encode decisions for future action. Nat. Neurosci. 2006;9:1057–1063. doi: 10.1038/nn1743. [DOI] [PubMed] [Google Scholar]
  • 30.Pasquereau B., Turner R.S. Limited encoding of effort by dopamine neurons in a cost–benefit trade-off task. J. Neurosci. 2013;33:8288–8300. doi: 10.1523/JNEUROSCI.4619-12.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Satoh T. Correlated coding of motivation and outcome of decision by dopamine neurons. J. Neurosci. 2003;23:9913–9923. doi: 10.1523/JNEUROSCI.23-30-09913.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Stauffer W.R. Components and characteristics of the dopamine reward utility signal. J. Comp. Neurol. 2016;524:1699–1711. doi: 10.1002/cne.23880. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Ravel S., Richmond B.J. Dopamine neuronal responses in monkeys performing visually cued reward schedules. Eur. J. Neurosci. 2006;24:277–290. doi: 10.1111/j.1460-9568.2006.04905.x. [DOI] [PubMed] [Google Scholar]
  • 34.Bouret S., Richmond B.J. Ventromedial and orbital prefrontal neurons differentially encode internally and externally driven motivational values in monkeys. J. Neurosci. 2010;30:8591–8601. doi: 10.1523/JNEUROSCI.0049-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.San-Galli A., Bouret S. Assessing value representation in animals. J. Physiol. Paris. 2015;109:64–69. doi: 10.1016/j.jphysparis.2014.07.003. [DOI] [PubMed] [Google Scholar]
  • 36.Bouret S. Complementary neural correlates of motivation in dopaminergic and noradrenergic neurons of monkeys. Front. Behav. Neurosci. 2012;6:40. doi: 10.3389/fnbeh.2012.00040. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Le Bouc R. Computational dissection of dopamine motor and motivational functions in humans. J. Neurosci. 2016;36:6623–6633. doi: 10.1523/JNEUROSCI.3078-15.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Wanat M.J. Delays conferred by escalating costs modulate dopamine release to rewards but not their predictors. J. Neurosci. 2010;30:12020–12027. doi: 10.1523/JNEUROSCI.2691-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Skvortsova V. A selective role for dopamine in learning to maximize reward but not to minimize effort: evidence from patients with Parkinson’s disease. J. Neurosci. 2017;37:6087–6097. doi: 10.1523/JNEUROSCI.2081-16.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Chong T.T. Dissociation of reward and effort sensitivity in methcathinone-induced Parkinsonism. J. Neuropsychol. 2018;12:291–297. doi: 10.1111/jnp.12122. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Schultz W. The phasic dopamine signal maturing: from reward via behavioural activation to formal economic utility. Curr. Opin. Neurobiol. 2017;43:139–148. doi: 10.1016/j.conb.2017.03.013. [DOI] [PubMed] [Google Scholar]
  • 42.Croxson P.L. Effort-based cost–benefit valuation and the human brain. J. Neurosci. 2009;29:4531–4541. doi: 10.1523/JNEUROSCI.4515-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Kurniawan I.T. Choosing to make an effort: the role of striatum in signaling physical effort of a chosen action. J. Neurophysiol. 2010;104:313–321. doi: 10.1152/jn.00027.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Klein-Flugge M.C. Neural signatures of value comparison in human cingulate cortex during decisions requiring an effort-reward trade-off. J. Neurosci. 2016;36:10002–10015. doi: 10.1523/JNEUROSCI.0292-16.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Walton M.E. The role of rat medial frontal cortex in effort-based decision making. J. Neurosci. 2002;22:10996–11003. doi: 10.1523/JNEUROSCI.22-24-10996.2002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Prevost C. Separate valuation subsystems for delay and effort decision costs. J. Neurosci. 2010;30:14080–14090. doi: 10.1523/JNEUROSCI.2752-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Walton M.E. Functional specialization within medial frontal cortex of the anterior cingulate for evaluating effort-related decisions. J. Neurosci. 2003;23:6475–6479. doi: 10.1523/JNEUROSCI.23-16-06475.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Hosokawa T. Single-neuron mechanisms underlying cost–benefit analysis in frontal cortex. J. Neurosci. 2013;33:17385–17397. doi: 10.1523/JNEUROSCI.2221-13.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.San-Galli A. Primate ventromedial prefrontal cortex neurons continuously encode the willingness to engage in reward-directed behavior. Cereb. Cortex. 2018;28:73–89. doi: 10.1093/cercor/bhw351. [DOI] [PubMed] [Google Scholar]
  • 50.Strait C.E. Signatures of value comparison in ventral striatum neurons. PLoS Biol. 2015;13 doi: 10.1371/journal.pbio.1002173. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Walton M.E. The influence of dopamine in generating action from motivation. In: Mars R.B., editor. Neural Basis of Motivational and Cognitive Control. MIT Press; 2011. pp. 163–187. [Google Scholar]
  • 52.Aw J.M. How costs affect preferences: experiments on state dependence, hedonic state and within-trial contrast in starlings. Anim. Behav. 2011;81:1117–1128. [Google Scholar]
  • 53.Elston T.W., Bilkey D.K. Anterior cingulate cortex modulation of the ventral tegmental area in an effort task. Cell Rep. 2017;19:2220–2230. doi: 10.1016/j.celrep.2017.05.062. [DOI] [PubMed] [Google Scholar]
  • 54.Jones J.L. Basolateral amygdala modulates terminal dopamine release in the nucleus accumbens and conditioned responding. Biol. Psychiatry. 2010;67:737–744. doi: 10.1016/j.biopsych.2009.11.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Syed E.C. Action initiation shapes mesolimbic dopamine encoding of future rewards. Nat. Neurosci. 2016;19:34–36. doi: 10.1038/nn.4187. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Roitman M.F. Dopamine operates as a subsecond modulator of food seeking. J. Neurosci. 2004;24:1265–1271. doi: 10.1523/JNEUROSCI.3823-03.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.McCutcheon J.E. The role of dopamine in the pursuit of nutritional value. Physiol. Behav. 2015;152:408–415. doi: 10.1016/j.physbeh.2015.05.003. [DOI] [PubMed] [Google Scholar]
  • 58.Beeler J.A. Putting desire on a budget: dopamine and energy expenditure, reconciling reward and resources. Front. Integr. Neurosci. 2012;6:49. doi: 10.3389/fnint.2012.00049. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Hamid A.A. Mesolimbic dopamine signals the value of work. Nat. Neurosci. 2016;19:117–126. doi: 10.1038/nn.4173. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Day J.J. Phasic nucleus accumbens dopamine release encodes effort- and delay-related costs. Biol. Psychiatry. 2010;68:306–309. doi: 10.1016/j.biopsych.2010.03.026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Soares S. Midbrain dopamine neurons control judgment of time. Science. 2016;354:1273–1277. doi: 10.1126/science.aah5234. [DOI] [PubMed] [Google Scholar]
  • 62.Panigrahi B. Dopamine is required for the neural representation and control of movement vigor. Cell. 2015;162:1418–1430. doi: 10.1016/j.cell.2015.08.014. [DOI] [PubMed] [Google Scholar]
  • 63.da Silva J.A. Dopamine neuron activity before action initiation gates and invigorates future movements. Nature. 2018;554:244–248. doi: 10.1038/nature25457. [DOI] [PubMed] [Google Scholar]
  • 64.Nicola S.M. Nucleus accumbens dopamine release is necessary and sufficient to promote the behavioral response to reward-predictive cues. Neuroscience. 2005;135:1025–1033. doi: 10.1016/j.neuroscience.2005.06.088. [DOI] [PubMed] [Google Scholar]
  • 65.Choi W.Y. Dopamine D1 and D2 antagonist effects on response likelihood and duration. Behav. Neurosci. 2009;123:1279–1287. doi: 10.1037/a0017702. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Nougaret S., Ravel S. Modulation of tonically active neurons of the monkey striatum by events carrying different force and reward information. J. Neurosci. 2015;35:15214–15226. doi: 10.1523/JNEUROSCI.0039-15.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Bailey M.R. An interaction between serotonin receptor signaling and dopamine enhances goal-directed vigor and persistence in mice. J. Neurosci. 2018;38:2149–2162. doi: 10.1523/JNEUROSCI.2088-17.2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Meyniel F. A specific role for serotonin in overcoming effort cost. eLife. 2016;5 doi: 10.7554/eLife.17282. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69.Lau B. The many worlds hypothesis of dopamine prediction error: implications of a parallel circuit architecture in the basal ganglia. Curr. Opin. Neurobiol. 2017;46:241–247. doi: 10.1016/j.conb.2017.08.015. [DOI] [PubMed] [Google Scholar]
  • 70.Dodson P.D. Representation of spontaneous movement by dopaminergic neurons is cell-type selective and disrupted in parkinsonism. Proc. Natl. Acad. Sci. U. S. A. 2016;113:E2180–E2188. doi: 10.1073/pnas.1515941113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Matsumoto M., Hikosaka O. Two types of dopamine neuron distinctly convey positive and negative motivational signals. Nature. 2009;459:837–841. doi: 10.1038/nature08028. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Brown H.D. Primary food reward and reward-predictive stimuli evoke different patterns of phasic dopamine signaling throughout the striatum. Eur. J. Neurosci. 2011;34:1997–2006. doi: 10.1111/j.1460-9568.2011.07914.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Saddoris M.P. Differential dopamine release dynamics in the nucleus accumbens core and shell reveal complementary signals for error prediction and incentive motivation. J. Neurosci. 2015;35:11572–11582. doi: 10.1523/JNEUROSCI.2344-15.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Howe M.W., Dombeck D.A. Rapid signalling in distinct dopaminergic axons during locomotion and reward. Nature. 2016;535:505–510. doi: 10.1038/nature18942. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Hull C.L. Appleton-Century; 1943. Principles of Behavior. [Google Scholar]
  • 76.Bautista L.M. To walk or to fly? How birds choose among foraging modes. Proc. Natl. Acad. Sci. U. S. A. 2001;98:1089–1094. doi: 10.1073/pnas.98.3.1089. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Stephens D.W., Krebs J.R. Princeton University Press; 1986. Foraging Theory. [Google Scholar]
  • 78.Salamone J.D. Nucleus accumbens dopamine release increases during instrumental lever pressing for food but not free food consumption. Pharmacol. Biochem. Behav. 1994;49:25–31. doi: 10.1016/0091-3057(94)90452-9. [DOI] [PubMed] [Google Scholar]
  • 79.Walton M.E. Weighing up the benefits of work: behavioral and neural analyses of effort-related decision making. Neural Netw. 2006;19:1302–1314. doi: 10.1016/j.neunet.2006.03.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Cocker P.J. Sensitivity to cognitive effort mediates psychostimulant effects on a novel rodent cost/benefit decision-making task. Neuropsychopharmacology. 2012;37:1825–1837. doi: 10.1038/npp.2012.30. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81.Kool W. Decision making and the avoidance of cognitive demand. J. Exp. Psychol. Gen. 2010;139:665–682. doi: 10.1037/a0020198. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82.Schmidt L. Neural mechanisms underlying motivation of mental versus physical effort. PLoS Biol. 2012;10 doi: 10.1371/journal.pbio.1001266. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Iodice P. Fatigue modulates dopamine availability and promotes flexible choice reversals during decision making. Sci. Rep. 2017;7:535. doi: 10.1038/s41598-017-00561-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Taylor C.R. Scaling of energetic cost of running to body size in mammals. Am. J. Physiol. 1970;219:1104–1107. doi: 10.1152/ajplegacy.1970.219.4.1104. [DOI] [PubMed] [Google Scholar]
  • 85.Floresco S.B. Dopaminergic and glutamatergic regulation of effort- and delay-based decision making. Neuropsychopharmacology. 2008;33:1966–1979. doi: 10.1038/sj.npp.1301565. [DOI] [PubMed] [Google Scholar]
  • 86.Mingote S. Ratio and time requirements on operant schedules: effort-related effects of nucleus accumbens dopamine depletions. Eur. J. Neurosci. 2005;21:1749–1757. doi: 10.1111/j.1460-9568.2005.03972.x. [DOI] [PubMed] [Google Scholar]
  • 87.Bouret S., Richmond B.J. Relation of locus coeruleus neurons in monkeys to Pavlovian and operant behaviors. J. Neurophysiol. 2009;101:898–911. doi: 10.1152/jn.91048.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 88.Jahn C.I. Dual contributions of noradrenaline to behavioural flexibility and motivation. Psychopharmacology (Berl.) 2018;235:2687–2702. doi: 10.1007/s00213-018-4963-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Ishiwari K. Accumbens dopamine and the regulation of effort in food-seeking behavior: modulation of work output by different ratio or force requirements. Behav. Brain Res. 2004;151:83–91. doi: 10.1016/j.bbr.2003.08.007. [DOI] [PubMed] [Google Scholar]
  • 90.Rangel A. A framework for studying the neurobiology of value-based decision making. Nat. Rev. Neurosci. 2008;9:545–556. doi: 10.1038/nrn2357. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91.Milton K., May M.L. Body weight, diet and home range area in primates. Nature. 1976;259:459–462. doi: 10.1038/259459a0. [DOI] [PubMed] [Google Scholar]
  • 92.Stearns S.C. Oxford University Press; 1992. The Evolution of Life Histories. [Google Scholar]
  • 93.Rosetta L. Energetics during reproduction: a doubly labeled water study of lactating baboons. Am. J. Phys. Anthropol. 2011;144:661–668. doi: 10.1002/ajpa.21475. [DOI] [PubMed] [Google Scholar]
  • 94.Karasov W.H. Nutrient constraints in the feeding ecology of an omnivore in a seasonal environment. Oecologia. 1985;66:280–290. doi: 10.1007/BF00379866. [DOI] [PubMed] [Google Scholar]
  • 95.van Marken Lichtenbelt W.D. Optimal foraging of a herbivorous lizard, the green iguana in a seasonal environment. Oecologia. 1993;95:246–256. doi: 10.1007/BF00323497. [DOI] [PubMed] [Google Scholar]
  • 96.Hanya G. Seasonal variations in the activity budget of Japanese macaques in the coniferous forest of Yakushima: effects of food and temperature. Am. J. Primatol. 2004;63:165–177. doi: 10.1002/ajp.20049. [DOI] [PubMed] [Google Scholar]
  • 97.DeCasien A.R. Primate brain size is predicted by diet but not sociality. Nat. Ecol. Evol. 2017;1:112. doi: 10.1038/s41559-017-0112. [DOI] [PubMed] [Google Scholar]
  • 98.Stevens J.R. The ecology and evolution of patience in two New World monkeys. Biol. Lett. 2005;1:223–226. doi: 10.1098/rsbl.2004.0285. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 99.Stevens J.R. Will travel for food: spatial discounting in two new world monkeys. Curr. Biol. 2005;15:1855–1860. doi: 10.1016/j.cub.2005.09.016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 100.Evans T.A. Working and waiting for better rewards: self-control in two monkey species (Cebus apella and Macaca mulatta) Behav. Process. 2014;103:236–242. doi: 10.1016/j.beproc.2014.01.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 101.Roeper J. Dissecting the diversity of midbrain dopamine neurons. Trends Neurosci. 2013;36:336–342. doi: 10.1016/j.tins.2013.03.003. [DOI] [PubMed] [Google Scholar]
  • 102.Poulin J.F. Mapping projections of molecularly defined dopamine neuron subtypes using intersectional genetic approaches. Nat. Neurosci. 2018;21:1260–1271. doi: 10.1038/s41593-018-0203-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103.Haber S.N. Subsets of midbrain dopaminergic neurons in monkeys are distinguished by different levels of mRNA for the dopamine transporter: comparison with the mRNA for the D2 receptor, tyrosine hydroxylase and calbindin immunoreactivity. J. Comp. Neurol. 1995;362:400–410. doi: 10.1002/cne.903620308. [DOI] [PubMed] [Google Scholar]
  • 104.Ikemoto S. Dopamine reward circuitry: two projection systems from the ventral midbrain to the nucleus accumbens–olfactory tubercle complex. Brain Res. Rev. 2007;56:27–78. doi: 10.1016/j.brainresrev.2007.05.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 105.Haber S.N. Striatonigrostriatal pathways in primates form an ascending spiral from the shell to the dorsolateral striatum. J. Neurosci. 2000;20:2369–2382. doi: 10.1523/JNEUROSCI.20-06-02369.2000. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 106.Lammel S. Projection-specific modulation of dopamine neuron synapses by aversive and rewarding stimuli. Neuron. 2011;70:855–862. doi: 10.1016/j.neuron.2011.03.025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 107.Fiorillo C.D. Diversity and homogeneity in responses of midbrain dopamine neurons. J. Neurosci. 2013;33:4693–4709. doi: 10.1523/JNEUROSCI.3886-12.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 108.Badrinarayan A. Aversive stimuli differentially modulate real-time dopamine transmission dynamics within the nucleus accumbens core and shell. J. Neurosci. 2012;32:15779–15790. doi: 10.1523/JNEUROSCI.3557-12.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 109.Saunders B.T. Dopamine neurons create Pavlovian conditioned stimuli with circuit-defined motivational properties. Nat. Neurosci. 2018;21:1072–1083. doi: 10.1038/s41593-018-0191-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 110.Parker N.F. Reward and choice encoding in terminals of midbrain dopamine neurons depends on striatal target. Nat. Neurosci. 2016;19:845–854. doi: 10.1038/nn.4287. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 111.Ilango A. Similar roles of substantia nigra and ventral tegmental dopamine neurons in reward and aversion. J. Neurosci. 2014;34:817–822. doi: 10.1523/JNEUROSCI.1703-13.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
