. Author manuscript; available in PMC: 2014 Jan 21.
Published in final edited form as: Front Biosci (Elite Ed). 2013 Jan 1;5:273–288. doi: 10.2741/e615

Rapid dopamine dynamics in the accumbens core and shell: Learning and action

Michael P Saddoris 1, Jonathan A Sugam 1, Fabio Cacciapaglia 1,2, Regina M Carelli 1,3
PMCID: PMC3897221  NIHMSID: NIHMS546229  PMID: 23276989

Abstract

The catecholamine dopamine (DA) has been implicated in a host of neural processes as diverse as schizophrenia, parkinsonism and reward encoding. Importantly, these distinct features of DA function are due in large part to separate neural circuits, involving connections arising from different DA-releasing nuclei and projections to separate target regions. Emerging data have suggested that this same principle of separate neural circuits may apply within structural subregions, such as the core and shell of the nucleus accumbens (NAc). Further, DA may act selectively on smaller ensembles of cells (or microcircuits) via differential DA receptor density and the distinct inputs and outputs of those microcircuits, thus enabling new learning about Pavlovian cues, instrumental responses, subjective reward processing and decision-making. In this review, drawing on studies that use subsecond voltammetric techniques in behaving animals to track how rapid changes in DA levels affect behavior, we examine the spatial and temporal features of DA release and how they relate to normal learning, as well as to pathological learning in the form of addiction.

Keywords: Ventral Striatum, Associative Learning, Catecholamine, Decision Making, Reward, Microcircuit, Error Prediction, Review

2. INTRODUCTION

The central role of dopamine (DA) in a variety of biological processes has been well documented for decades, though the range of these functions is incredibly diverse. Dopaminergic dysfunction underlies debilitating illnesses such as Parkinson’s disease and has been implicated in mental disorders like schizophrenia. More recently, much research has focused on the role that DA may play in mediating the encoding of learning (1-3).

DA-expressing neurons in the ventral tegmental area (VTA) normally fire in a slow, tonic pattern but periodically discharge in short bursts (4) that result in relatively high concentrations of dopamine release in the nucleus accumbens (NAc) within hundreds of milliseconds (5-6). Burst firing, particularly observed during presentation of rewards and their associated cues (7), has been shown to be the origin of phasic discharges of DA (or transients) in the NAc (6). One powerful tool used to unravel DA dynamics in contemporary studies is fast-scan cyclic voltammetry (FSCV), a technique that allows for subsecond detection of DA release in highly specific areas. Using FSCV at the carbon-fiber microelectrode (100 ms resolution), we and others have extensively reported that rapid fluctuations in dopamine release are typically associated with aspects of goal-directed behaviors and learning processes implicated in stimulus-reward and action-reward associations. As such, phasic dopamine release can be detected within hundreds of milliseconds of motivationally significant events such as receipt of natural rewards like food, sucrose (8) or sex (9-10), brain stimulation (11-13), or drugs of abuse like cocaine (14-15).

Additionally, DA has traditionally been considered a “volume transmitter”, in that synchronous burst-firing of DAergic neurons is able to phasically modulate relatively large territories of neural tissue (12, 16-18). Convergent data from a variety of studies have confirmed this observation, but also shown that mechanisms of DA action are differential and highly specific across and within brain regions (19-21). Because of the high spatial precision of FSCV (the tip of the carbon recording fiber is ~100 μm long and 10 μm in diameter), DA release in the NAc can be observed within substructures (e.g., core and shell). Studies involving electrophysiological recording of DAergic neurons projecting to the NAc have subsecond temporal precision similar to FSCV (typically < 200 msec). However, the physiological tools used to identify putative DA neurons have had difficulty dissociating cell bodies in the VTA from those in the related substantia nigra, which have different projection pathways, and, more controversially, in establishing whether recorded midbrain neurons are DA-expressing at all (22). As such, none of these studies in awake behaving animals has to date dissociated differential DAergic input to the NAc subregions of the core and shell. This is an important distinction, as differences in DA release between these regions are proving to be critical for understanding the role of DA in learning and addiction. Given the anatomy and specificity of DA release within subregions of the NAc, FSCV allows for both the subsecond temporal and detailed spatial precision necessary for understanding the role of DA in discrete NAc “microcircuits” in many motivationally salient tasks (23).

3. ANATOMY OF THE DOPAMINE SYSTEM: GENERAL OVERVIEW

Though many portions of the forebrain receive dense dopaminergic innervation, the vast majority of the work done to date on rapid dopamine release dynamics has focused on the striatal regions, specifically the dorsal striatum and NAc. Generally, these regions are defined by strong midbrain dopamine input, as well as by separate circuits of afferent and efferent projections. The NAc receives largely glutamatergic inputs from limbic structures such as the basolateral amygdala [BLA] (24-26), prefrontal regions (24, 27) and hippocampus (25), as well as dopaminergic input from the VTA (28-29). In turn, the NAc sends efferent projections to nuclei that organize motor behavior, including the ventral pallidum and subthalamic nucleus (30). Detailed anatomical studies have shown that NAc neurons receive convergent information from dopaminergic and corticolimbic regions. For example, VTA terminals have been shown to synapse on medium spiny neurons (MSNs) in the NAc that also receive direct input from the hippocampus (31), while terminals from PFC afferents onto MSNs are modulated by VTA activity (32). Thus, NAc neurons are in a prime position to coordinate associative learning from afferents with action selection in downstream targets.

The NAc is further comprised of two important subregions: the core and shell (33). In addition to histochemical differences (34-36), these subregions differ in terms of afferent and efferent projections (25). For example, the NAc core receives the majority of its prefrontal input from the prelimbic region and lateral OFC (i.e., dorsal agranular insula), while the NAc shell receives input from the infralimbic cortex and more medial OFC (ventral agranular insula) (24, 27). Further, these regions are also dissociable by their pallidal projection outputs; notably, the core predominantly projects to the substantia nigra (SN) while the shell targets the VTA (26, 33, 37-39). However, core and shell may also have important intra-structural connections; afferents from the core to the shell (but little from shell to core) may be computationally important for allowing diverse limbic inputs to be integrated into a whole for later output (40-41). These differences in connectivity strongly suggest that core and shell are differentially engaged in the control of motivational behavior.

At the cellular level, neurons in both the core and shell are primarily (95%) MSNs, which act on downstream targets via GABAergic release (42-43). In the NAc, distinct populations of MSNs selectively express D1-like and D2-like receptors in the core and shell (34-35, 42, 44-45). Even within core and shell subregions, cellular organization in the NAc is compartmentalized (46), with a patch-matrix organization similar to that found in the dorsal striatum. Indeed, van Dongen and colleagues report that MSN dendritic arbors are largely contained within compartments, but also strongly respect core and shell boundaries by asymmetrically sending fibers in opposite directions at the core-shell border (41). Thus, processing and dopaminergic influence in the NAc are likely differentiated at both the regional (i.e., core and shell) and microcircuit (neural ensemble) level.

In contrast to the NAc, the rodent dorsal striatum [DS] is a relatively massive structure, but like the NAc it is comprised primarily (>90%) of GABAergic MSNs (43). Importantly, the DS receives dopaminergic input from the substantia nigra [SN]. This nigrostriatal circuit has been shown to be critical for volitional movement, and dysfunction due to loss of DA tone underlies the movement symptoms of parkinsonism. However, more recent research has suggested that the DS is the striatal component of a series of dorsally and laterally oriented “spirals” of connections between the DS, SN and related cortical and thalamic targets (30, 47-50), suggesting a possible role in memory and habit formation. Thus, the anatomical organization of the striatum strongly suggests putative functional dissociations in behavior and learning.

4. DOPAMINE AND NEURAL ENCODING IN THE NAC: CRITICAL COMPONENTS OF LEARNING

Animals are faced with uncertain outcomes in a highly complex environment. Thriving in that environment depends on the ability of animals to detect regularities in the world that are predictive of relevant outcomes, whether rewarding like food or mates, or aversive like predators. For obvious reasons, the ability to detect these predictive conditions provides a strong survival advantage.

Modeling this complex behavior has been at the heart of psychological research for at least a century. From the earliest components of Pavlov’s famous experiments to contemporary models of animal neuroeconomic strategies, the essential components of these learning processes are similar across paradigms. In a simple form: (1) animals experience a motivationally salient outcome (for example, food) and determine the value of that reinforcer, (2) cues predictive of those outcomes are detected and attended to, and (3) any actions necessary to produce and obtain the reinforcer are acquired and performed. In more complex settings when multiple outcomes compete for attention, animals must also determine the value of those different reinforcers and the costs associated with the actions to acquire them. These features have each been shown to be associated with neural encoding and phasic DA release in the NAc, and will be discussed in turn below.

5. REWARD PROCESSING AND THE NAC

The NAc has been repeatedly implicated in the processing of reward. For example, experimenter-delivered intraoral infusions of a palatable sucrose solution induced rapid changes in neural firing in naïve subjects at the time of reward receipt (51), while tastants experienced during simple Pavlovian (52-53) or instrumental (54-55) conditioning tasks similarly cause rapid neural deflections. Notably, multiple labs have carefully dissociated the electrophysiological characteristics of taste processing in the NAc and found that the direction of these responses is largely inhibitory during consumption of palatable tastants (56). Indeed, Wheeler et al. (57) have shown that this inhibitory encoding is likely involved in encoding the hedonic properties of those tastants. That is, when a palatable saccharin solution was delivered intra-orally to naïve rats, the majority (~75%) of neurons showed an inhibitory encoding pattern during receipt. However, when subjects learned that the same tastant was now predictive of a delay to self-administer cocaine (and thus induced a negative affective state), the encoding changed to predominantly (~75%) excitatory. This proportion of excitatory encoding was similar to that seen during unsignaled delivery of a bitter quinine solution (51) or when a tastant had developed a conditioned taste aversion through pairing with illness (58). Interestingly, no detectable differences in this taste encoding have been found between the NAc core and shell.

In contrast, neurochemical studies have revealed differences in DA release dynamics in the core and shell during reward receipt, suggesting differences in neural processing of reward between these regions. Microdialysis studies have shown that uncued presentations of a palatable food increase DA release in the shell but not the core (59-60). Similarly, FSCV data indicate that rewarding tastants (sucrose) induce phasic, time-locked increases in DA release in the NAc shell, while aversive tastants induce decreases (pauses) in shell DA release (61-62). Importantly, these dopamine release profiles were not observed in the core (62).

These findings are similar to those found with uncued drug exposure. Direct comparisons of the core and shell during cocaine administration revealed a more nuanced picture of this shell-specific encoding: while DA transmission increases in both core and shell following non-contingent cocaine delivery (63), DA transient increases were significantly greater in the shell than in the core (21).

Though less robust than for rewarding outcomes, DA release has also been implicated in aversive processing independent of tastants. Microdialysis findings indicate that DA may be involved in the processing of unsignaled aversive stimuli. While some studies indicated an increase in DA release in the shell but not the core during unsignaled footshocks (64), other studies that more strictly controlled for effects of associative learning (such as contextual conditioning) failed to show such differences (65). However, as discussed above, unsignaled delivery of an aversive quinine solution, or of a conditioned aversive tastant, induced significant decreases in DA release in the shell (61-62). These rapid (but not tonic) changes in DA release suggest that DA may play an essential role in signaling the value of stimuli.

Taken together, these findings indicate that regional differences in DA dynamics, but not neural processing, are present in the core and shell of the NAc relative to both appetitive rewards and aversive events. These findings suggest that DA transmission may have differing effects on distinct neural populations in the NAc. However, some theories have argued that DA release is both necessary and sufficient to elicit phasic neural firing in the NAc (66). Resolving this discrepancy in the literature will require the ability to study the simultaneous release of DA and its effects on neural processing, a technique used in our laboratory that will be described in greater detail below.

6. PAVLOVIAN CUE LEARNING

Pavlovian conditioning is generally understood as a model of learning in which animals associate predictive cues in their environment (conditioned stimuli, CS) with motivationally significant outcomes (unconditioned stimuli, US) (67-68). Following repeated contingent CS-US pairings, the animal exhibits a learned response during the CS, referred to as the conditioned response (CR). Importantly, under these conditions, stimuli are presented non-contingently to the animals such that, unlike instrumental actions, the animals’ behaviors are not required to produce the outcomes (for example, a reward). Despite this simple premise, the details of how Pavlovian (or classical) conditioning occurs have provided central insights into the fundamentals of learning and the organization of the brain. For example, for animals to form a Pavlovian association between a cue (CS) and an outcome (US), they must learn a host of critical information including: the identity and value of the US, the identity of the predictive CS, the contingency between the CS and US, the temporal relationship between CS and US, and so on. In this scenario, Pavlovian conditioning is not simply a reflex, but instead reflects a rich understanding of the relationship of the organism and its motivational state to distinct and important stimuli in its environment (69). Crucially, many of these aspects of learning are reflected in both the neural encoding and DA release dynamics in subregions of the NAc.

The NAc appears to play a role in encoding specific features of Pavlovian conditioning, though this role may be subtle. For example, very few studies have shown that lesions of the NAc abolish all aspects of Pavlovian conditioning. In one such study, electrolytic lesions of the NAc core and shell produced only modest effects on cue conditioning, despite chronically damaging cell bodies as well as fibers of passage (70). Similarly, neurotoxic lesions of the NAc failed to detectably disrupt simple Pavlovian conditioning (71). However, smaller lesions of the NAc core have been shown to produce deficits in autoshaping (72) and discriminated approach (73).

One possible explanation for these differences may be that Pavlovian learning in the absence of a functional NAc is not the same as in normal animals. Recently, Singh et al. (74) trained rats to associate a Pavlovian cue light (CS+) with delivery of food. Later, for some subjects, the food was devalued via conditioned taste aversion (CTA) by pairing consumption with illness, while for other subjects the food and illness were never paired. On test day, sham-lesioned animals that had received CTA showed significantly less approach behavior during presentations of the CS+ alone compared to controls that did not receive food-illness pairing. In contrast, animals with lesions of either the core or shell failed to show this spontaneous decrease in motivated behavior following CTA, suggesting that lesioned animals formed abnormal associations that did not have access to the current value of the reinforcer. In support of this theory, rats with lesions selective to the NAc core failed to show “unblocking” when compound cues predicted a change in the expected reinforcer, either by altering the expected value (1 vs 3 pellets) or by changing the expected outcome identity (banana vs grape sucrose pellets) (75). Collectively, these findings indicate that the core, and possibly the shell, are essential for allowing Pavlovian cues to enter into associations with the value and identity of the reinforcer.

In support, neural firing in the NAc has recently been analyzed in our laboratory while rats acquired Pavlovian associations. In one experiment (54), one auditory cue was predictive of delivery of food pellets (CS+) while another cue had no significance (CS-). After successfully learning to discriminate between cues, neurons in the NAc core were significantly more likely than those in the shell to selectively encode information about the cues. Further, the majority of cells in both the core and shell encoded information about the delivery of the reinforcer. These findings support those from a similar autoshaping task (76) and suggest that the core is more likely involved in making predictions during cue presentations, while both core and shell may be involved in assessing the value of the reinforcer.

The role of DA in the NAc has been the subject of great interest since the foundational discovery that midbrain DA neurons encode predictions and prediction errors during learning (7). The prediction error account holds that learning should be maximal when the difference between the animal’s expectation of an outcome and the actual outcome received is large. For example, when one food pellet is delivered unexpectedly, the animal is “expecting” no food, so the difference between the actual outcome (1) and the expected outcome (0) is +1, a positive prediction error. In contrast, if an outcome is well-predicted (expect 1 pellet) and the outcome is 1 pellet, there is no prediction error because the difference between expectation (1) and the actual outcome (1) is zero. Models of learning such as the Rescorla-Wagner model (77) and other similar models (78-80) hold that this error-driven term (ΔV in their nomenclature) is the amount of learning that occurs on any given trial. Thus, when there is a large discrepancy between the expected value (ΣV) and the actual value, such as the presentation of an unexpected reward, there should be more learning, while the more predicted the outcome (no matter how valuable), the less learning should occur.
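The update rule just described can be sketched in a few lines of illustrative Python. This is a minimal sketch of the standard Rescorla-Wagner formalism, not an analysis from any of the studies reviewed here; the function name and the learning-rate value are our own choices for illustration, with `lam` standing in for the actual outcome value (λ) and `V` for the current prediction (ΣV).

```python
# Minimal sketch of the Rescorla-Wagner update rule (77).
# V: current summed prediction (sigma-V); lam: value of the outcome
# actually received on this trial (lambda); alpha: learning rate.

def rescorla_wagner_update(V, lam, alpha=0.2):
    """Return (delta_V, new_V) for one conditioning trial."""
    delta_V = alpha * (lam - V)  # learning is proportional to the prediction error
    return delta_V, V + delta_V

# Unexpected reward (expect 0, receive 1): large error, strong learning.
V = 0.0
for trial in range(10):
    delta_V, V = rescorla_wagner_update(V, lam=1.0)
# As V approaches 1 over repeated pairings, delta_V (the amount learned
# per trial) shrinks toward zero: well-predicted outcomes teach little.
```

Note that the trial-by-trial shrinkage of `delta_V` is exactly the pattern described in the text: the first unexpected pellet produces the largest update, and fully predicted pellets produce none.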

In a series of experiments, Schultz and colleagues (7, 81) demonstrated that DA neurons in the VTA and SN exhibited neural activity that tracked the prediction error term at the time of reward receipt. For example, when a cue predicted a juice reward for a monkey but the association was not yet learned, putative DA neurons fired maximally at the time of reward receipt because the prediction error term (ΔV) was high. However, as the monkey learned that the cue predicted the juice, firing in these cells during juice receipt monotonically decreased as ΔV became minimal. These data indicated that DA neurons were encoding not the reward value itself, but rather the error between an expected outcome and the actual outcome. Notably, the learning model also includes a term for the prediction itself, ΣV. During initial presentations, the cue did not have any predictive association with the reward, so while ΣV was minimal there was little firing of DA neurons during the cue. However, as ΣV increased, providing a better prediction of the expected reinforcer, firing during the cue similarly increased. Thus, DA neurons encoded both the prediction (ΣV) and the prediction error (ΔV) over the course of a simple Pavlovian association. This pattern of activity provides a substrate for a teaching mechanism by which animals can attend to and learn about essential cues in their environment as those cues come to predict valued outcomes.

While this observation has guided much research in recent years, difficulties in identifying putative DA midbrain neurons and their projection pathways have made more precise functional descriptions difficult (82-83). Microdialysis findings during Pavlovian learning have suggested that cues predictive of rewards increase DA in the core but not the shell, though the temporal resolution of these studies makes it difficult to dissociate DA release to the cue from release to the reward (59-60). To address this explicitly, work in this lab measured DA release in the NAc core of rats during acquisition of a Pavlovian task (84). Similar to the findings of Schultz et al., DA release during the cue gradually increased across days, reaching a maximum when rats showed the best discrimination, while DA release to the food reward was greatest when the pellet was unexpected and lower once it became more fully predicted. Further, these FSCV findings indicate that the prediction error signal carried by the midbrain DA neurons recorded by Schultz and colleagues reaches the NAc core, implicating the DAergic projections to the NAc in stimulus-outcome learning. Thus, FSCV confirms and extends the hypothesis that DA release in Pavlovian learning tracks both the prediction and the prediction error during learning, factors essential for the assessment of expected value.
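The migration of the release signal from the reward to the cue across training can be made concrete with a minimal temporal-difference (TD) simulation. TD learning is a standard textbook formalization of this account (related to the models cited above), not the analysis used in the FSCV studies themselves; the two-state trial structure and parameter values below are illustrative assumptions.

```python
# Minimal TD sketch: a cue predicts reward one step later. Over training,
# the prediction error (a proxy for phasic DA) shifts from the time of
# reward delivery to the time of cue onset.

alpha = 0.1           # learning rate (illustrative value)
V = [0.0, 0.0]        # learned values of the cue state and the delay state

def run_trial(V, reward=1.0):
    """One trial (cue -> delay -> reward); returns (cue_error, reward_error)."""
    # The cue arrives unpredictably out of a zero-value intertrial state,
    # so the TD error at cue onset equals the cue's learned value.
    cue_error = V[0]
    # Transition from cue state to delay state (no reward yet).
    V[0] += alpha * (V[1] - V[0])
    # Transition from delay state to reward delivery (terminal).
    reward_error = reward - V[1]
    V[1] += alpha * reward_error
    return cue_error, reward_error

first_cue, first_rew = run_trial(V)   # early: error maximal at reward
for _ in range(500):
    last_cue, last_rew = run_trial(V) # late: error has shifted to the cue
```

Early in training the error is large at reward delivery and absent at the cue; after extended training the pattern reverses, mirroring both the Schultz electrophysiology and the FSCV release dynamics described above.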

7. DOPAMINE DETECTION DURING GOAL-DIRECTED ACTION

Because the NAc is the primary target of the mesolimbic DA system, its role in goal-directed (operant) behavior has been extensively studied. Neurotoxic lesions of the NAc core, for example, have been shown to modestly impair responding in a variety of instrumental tasks, though these effects were most profound when animals were required to select different actions after the value of the expected reinforcer was altered (71, 85-86). This finding indicates that the NAc, as the “limbic-motor interface” (87), does more than support action performance alone; it is essential for incorporating aspects of value into chosen courses of action.

Early studies of rapid DA release during operant performance showed a highly selective response to actions resulting in valued outcomes. In one early study, rats trained to press a lever to receive rewarding intra-cranial self-stimulation (ICSS) of DA-expressing neurons rapidly learned to press at a high rate. DA was released following stimulation as well as time-locked to the lever press (88), though DA transients to the press attenuated over time. However, in studies where animals pressed a lever to self-administer cocaine, DA transients time-locked to the press were reliably found in both the core and shell (14-15, 63).

Additional studies have examined the role that discriminative stimulus (DS) cues play in modulating instrumental performance. DS cues, unlike simple Pavlovian cues, inform animals that a response made during DS presentation will yield a reinforcer. It has reliably been shown that DS-evoked DA release in instrumental settings is regionally specific within the NAc, and that the dynamics and magnitude of the DA release encode critical aspects of the expected actions and outcome. For example, DA is released in the NAc shell during DS cues predictive of a valued ICSS stimulation (13). Further, when the value of the ICSS reinforcing stimulus is increased, DA release in the shell concomitantly increases during the DS and is correlated with faster operant response times for the higher stimulation (89).

Similar findings were observed during operant responding for natural rewards, as DA is reliably released in the core and the shell during DS cues predictive of food (8). However, more detailed analysis has revealed some reliable differences in this type of encoding across regions. For example, in one recent study, we observed DA release relative to a DS linked with impending food reward in both the core and shell. However, in the core, DA release appears to track the predicted value of the impending action since it is only phasically and transiently active during the DS presentation. In contrast, DA release in the shell was greater than in the core during the DS, but also showed a second peak in release at the time of the press, and remained elevated at the time of sucrose consumption (90). These differences suggest a possible separation in DA processing in goal-directed behavior, such that while both the core and shell may make predictions about impending outcome, only the shell may be monitoring the current value of a specific action.

8. HIGHER-ORDER LEARNING AND REINFORCEMENT IN THE NAC

Studies incorporating conditioning designs have reliably demonstrated that animals acquire critical information about the expected outcome (69). These expectant representations of the anticipated reinforcer include details such as reinforcer identity (e.g., grape flavor compared to orange) and hedonic value. Indeed, in multiple limbic regions that project to the NAc, associative cues are capable of reactivating the same cells as those activated by the reinforcer alone (91). One consequence of this feature is that motivationally salient cues are able to support new learning or modulate ongoing behavior. In one model of this process, Pavlovian second-order conditioning, animals first learn to associate a cue (S1) with a desirable outcome. After learning, animals then receive a new cue (S2) that predicts the occurrence of S1 in the absence of reinforcement. Despite the S2 never being directly paired with the outcome, its association with the value accrued to the S1 allows conditioning to the S2 to occur (92-95).

Further, it has been shown that the NAc plays an essential role in encoding this type of associative information. Contralateral lesions of the NAc and basolateral amygdala (BLA; a primary source of limbic input to the NAc) made prior to learning appear to leave S1 learning intact, but abolish the ability to learn S2 associations (96). Recently, neural activity was recorded in this task while rats learned the S1 and S2 associations (97). In normal animals, a subset of NAc neurons encoded information about the different cues, and the percent of cue-encoding neurons increased commensurate with learning. However, if rats had earlier modest but sustained access to self-administered cocaine (at least 8 days) they were unable to learn the S2 association. Neural recordings made during learning showed that despite apparently normal behavior during the S1, cocaine-treated animals showed virtually no evidence of cue-selective encoding over S1 training. Thus, impoverished S1 encoding in the NAc prevented animals from acquiring detailed associative information during initial learning, which subsequently prevented any S2 learning.

Numerous studies have also shown that Pavlovian cues can potentiate operant responding, as demonstrated in a task known as Pavlovian-to-instrumental transfer (PIT) (98-99). In a PIT task, animals first learn that a Pavlovian cue is predictive of reinforcement, and separately learn that an instrumental response will also lead to reinforcement. During the “transfer” portion of the task, the Pavlovian cue is presented while the animal is performing the instrumental behavior, and the cue typically increases, or ‘invigorates’, responding. This modulatory effect of a previously learned Pavlovian cue on the operant response in PIT appears to depend critically on the acquired motivational significance, and even the sensory identity, of the reinforcer. For example, in the presence of a food-paired cue, responses are invigorated (100-101), while cues paired with aversive events (e.g., mild foot shock) inhibit operant activity (102). Notably, lesions of the NAc fail to disrupt the simple conditioning aspects of either Pavlovian or instrumental performance alone, but selectively impair the “transfer” ability of the cue to modulate operant activity (71, 86, 103), while DA-enhancing infusions of amphetamine into the NAc shell enhance this effect (104). Attempts to dissociate the relative functions of the core and shell in PIT have been equivocal, as different studies have indicated a necessity for either the core (86, 103) or shell (71) in general PIT. Using electrophysiological techniques, we recently reported that NAc core neurons were more likely to encode cue-specific information, even during transfer, and that this information was correlated with the degree of transfer. In contrast, cells in the shell that responded to both the cue and the response accurately predicted transfer success. Further, a subset of animals was given access to self-administered cocaine after initial learning, but prior to transfer.
These animals showed greatly potentiated transfer compared to controls, and a significant increase in all task-related encoding that was specific to the shell (54). These findings suggest that the role of the NAc in higher-order learning involves the acquisition of the subjective value of cues in the core, while the shell is involved in more general aspects of hedonic encoding and its integration with action.

9. NAC DA AND DECISION-MAKING

Given its role in the encoding of value and goal-directed behavior, it is perhaps not surprising that the mesolimbic DA system, particularly its projections to the NAc, is also important in value-based decision making. At its root, decision making can be broken down into several component processes. First, animals must be aware of valued contingencies in their environment and, for each option, be able to estimate the subjective value of the outcome. However, the most valuable outcome (e.g., a mate) may come only after great costs (e.g., dangerous fights with competing suitors), while selecting particular options may permanently preclude other valuable ones (e.g., losing the fight for the desired mate may prevent any mating that season). Thus, assessing optimal choices depends critically not only on learning the behaviorally appropriate actions to obtain valued outcomes, but also on the various costs and benefits associated with each action.

Recent findings suggest an important role of the NAc in critical aspects of this complex process (105–108). In the laboratory, models of decision making are employed in which subjects are exposed to cues that predict outcomes of different value and are allowed to make choices between these different options. In these experiments, the expected value of one option is pitted against another of a different value by manipulating various features of the outcome or of the task costs (108). For example, the expected value may differ in features of the reinforcing outcome, such as magnitude (1 food pellet versus 5) or identity (banana pellet versus orange pellet), while keeping constant the task costs for each option (e.g., FR1 lever press). Another way to change expected value is to change the costs associated with the different actions needed to obtain them, e.g., by making one response more effortful, by imposing a delay to reward, or by decreasing the probability of obtaining reinforcement.

Damage to the NAc has repeatedly been shown to disrupt normal decision making in these experiments. While the NAc core does not appear to be necessary for simply choosing between a small and a large reward, animals with lesions or temporary inactivation of the core were unwilling to choose the better reward after an imposed delay or a decrease in reward probability, even though those options would yield more reward overall (109–113). There is limited evidence that inactivation of the NAc shell has similar effects on decision-making, as behavior following these manipulations often did not differ from controls (113). However, in a recent study, shell inactivation, or combined inactivation of both shell and core, made animals less sensitive to reward magnitude. Specifically, when these animals were forced to choose between a small (2 pellet) and a large (4 pellet) reward with the same response probability (100%), they chose the large reward significantly less often than controls. Indeed, though the loss of magnitude sensitivity following inactivation of the core or shell alone was relatively small, the effect was much larger following general NAc inactivation of both core and shell (114).

It is not immediately clear from these studies what the specific role of DA is with respect to appropriate action selection. For example, if DA blockade is associated with increased impulsivity (i.e., taking a smaller or less desirable outcome because it comes with less delay or less effort), then one interpretation is that greater DA is necessary to "override" the default selection of an impulsive action/outcome. Another related possibility is that DA could act as a working memory signal, holding the expected reward value over delays to reinforcement so that animals can avoid less desirable impulsive actions; in this case DA would be greater for longer delays but not for shorter ones. In contrast, consistent with the learning models described earlier, another option is that DA release acts as a prediction and teaching mechanism, indicating the predicted value of the various options available. Here, DA would increase for the more valuable option (e.g., the short delay) but not the less valuable (long delay) option. Notably, these hypotheses make unique predictions (e.g., greater DA release for more delayed or effortful options under the first hypothesis, compared to greater DA release for the easier or more desirable option under the last).

Recent evidence suggests that phasic DA release in decision-making is most consistent with the learning hypothesis. For example, dopaminergic midbrain neurons display differential activity based on several factors that reflect reward value, such as the cost of performing the task, the delay to reinforcement, and the probability and magnitude of reinforcement (115–117). Much of this activity appears to depend on modulating NAc neural functions, as blocking DA transmission with receptor antagonists biases animals toward emitting more "impulsive" responses that lead to less desirable rewards but require less effort (118–119), a shorter latency to reward (112), or a higher probability of reinforcement (120–121). Recent evidence from our laboratory and others (122–123) has shown that DA release within the NAc encodes predicted reward values when animals are actively making decisions. Specifically, reward-predictive cues that signal lower response costs, shorter delays (122), larger reward magnitudes (123), or higher reward probabilities (124) evoke greater DA release than cues that signal high response costs, long delays, smaller magnitudes, or lower-probability rewards.
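As a concrete illustration of the prediction-and-teaching account, the underlying delta-rule logic can be sketched in a few lines of Python. This is a minimal sketch, not the model used in the studies cited above; the learning rate, reward magnitudes, and probabilities are hypothetical, and the learned value V stands in only loosely for cue-evoked phasic DA.

```python
# Minimal Rescorla-Wagner-style sketch of the "learning" hypothesis:
# a cue's learned value V (a rough proxy for cue-evoked phasic DA)
# comes to track the expected value of the outcome it predicts.
# All parameter values here are hypothetical illustrations.

def learn_cue_value(magnitude, probability, alpha=0.1, n_trials=300):
    """Update V toward the expected reward: V += alpha * (E[r] - V)."""
    v = 0.0
    expected_reward = magnitude * probability
    for _ in range(n_trials):
        prediction_error = expected_reward - v  # the putative DA teaching signal
        v += alpha * prediction_error
    return v

# Cues predicting larger or likelier rewards acquire larger values,
# mirroring the graded cue-evoked DA release described in the text.
high_value_cue = learn_cue_value(magnitude=2, probability=1.0)
low_value_cue = learn_cue_value(magnitude=1, probability=0.5)
```

Under this rule the cue value converges to magnitude × probability, so a cue predicting a larger or likelier reward evokes the larger signal, consistent with the FSCV findings summarized above.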

Notably, these findings suggest the possibility that DA release may function to signal the expected value of any learned association. If this were the case, then DA release should be greater for an objectively better option (for example, 2 pellets compared to 1, or a reward obtained with minimal effort versus high exertion), but not when the expected values are the same. To test this, recent work in our laboratory used a risky decision making task (125). In this task, animals had the option of pressing one lever (FR1) for 1 sucrose pellet 100% of the time, or a second lever (FR1) for 2 pellets 50% of the time. As such, rats could 'play it safe' and press the lever with the guaranteed small reward, or 'take a risk' and potentially double their outcome on that trial. Because the expected value was the same across the session (1 pellet × p(1.0) = 1, versus 2 pellets × p(0.5) = 1), any differences in preference were due to subjective biases rather than differences in outcome value. Nearly all rats developed a reliable preference for one response-outcome contingency, and DA release scaled with these preferences. When faced with the choice between the two options, DA release was similar to that observed for the animal's preferred lever. Critically, DA release during choice trials was present at a high level regardless of the ultimate action selection, suggesting that DA was not tracking motor planning but was instead involved in biasing animals toward the best available option. Likewise, studies recording DA release in the NAc and from putative DA projection neurons in the VTA show a similar bias toward encoding the best available option rather than the chosen action (116, 122).
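The expected-value matching at the heart of this risk task is simple arithmetic, which can be written out in Python. The lever labels below are ours; the payoffs and probabilities are those described above.

```python
# Expected value of each lever in the risky decision making task
# described in the text: a 'safe' lever (1 pellet, p = 1.0) versus
# a 'risky' lever (2 pellets, p = 0.5).

def expected_value(pellets, probability):
    """Average number of pellets earned per choice of this lever."""
    return pellets * probability

safe = expected_value(pellets=1, probability=1.0)
risky = expected_value(pellets=2, probability=0.5)

# Both levers yield 1 pellet per trial on average, so a stable
# preference for one lever reflects subjective bias rather than
# objective outcome value.
assert safe == risky == 1.0
```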

DA release in both the NAc core and shell has been implicated in encoding cue-outcome associations during complex decision making tasks (122, 124–125). However, the NAc core and shell appear to be differentially involved in value-based decision making. FSCV recordings in the NAc core versus shell have shown that value encoding by the mesolimbic DA system is restricted to the core when animals are choosing between low-effort and high-effort options (122). Further, phasic DA is released in the NAc shell during cues that predict future rewards but, unlike in the core, fails to differentiate between the subjective values of the different options (125).

10. MICROCIRCUIT ANALYSIS OF DA INFLUENCE ON NAC FUNCTION: SIMULTANEOUS FSCV AND ELECTROPHYSIOLOGY DURING LEARNING

One important issue to consider is whether DA release has a direct impact on the encoding of neural activity during motivated behavior. While some theories have suggested that DA release in the NAc is both necessary and sufficient for phasic neural encoding (66), other findings show a dissociation between DA release and neural encoding in the core and shell. To address this question directly, work in our laboratory has employed combined electrophysiology and FSCV using the same carbon-fiber microelectrode, which allows simultaneous measurement of DA release and cellular activity at the same location during behavior (12, 14, 126).

In an initial study, simultaneous electrophysiological and electrochemical recordings were taken in rats during ICSS. The results revealed that rapid DA release in the NAc was associated with coincident changes in the activity of specific subsets of NAc neurons (12, 126). Significantly, NAc DA release was detected primarily at sites where neurons encoded information about ICSS (126). Similar results were observed during cocaine self-administration. Specifically, coincident changes in rapid dopamine release and NAc cell firing were observed within seconds of lever press responding for cocaine, but not at locations where neurons exhibited nonphasic activity (14). In addition to the coincident encoding of DA and neural activity in the NAc, the relative intensities of the two events were also coupled: the greater the strength of neuronal activity, the larger the DA release event (14). This same pattern of coincident activation occurs during goal-directed behavior for "natural" rewards such as sucrose self-administration (20). Here, discriminative stimuli or cues predictive of reward availability evoked increases in rapid DA signaling and simultaneous changes in NAc cell firing (both inhibitions and excitations). In contrast, at locations where neurons exhibited non-phasic activity (no change in firing rate relative to key events in the task), no significant changes in DA signaling were observed.

However, coincident activation does not necessarily indicate a causal link. For example, blockade of DA release using a vesicular monoamine transporter inhibitor failed to alter the inhibitory responses of NAc neurons during ICSS (12). Thus, although DA projections play a central role in modulating NAc neuronal output, rapid DA signaling operates within a complex microcircuit in which the NAc is embedded. In order to dissect this circuit, we recently coupled pharmacological inactivation of VTA neuron burst-firing, the origin of rapid DA signaling (6), with electrophysiological recording of NAc neurons during sucrose self-administration (20). Consistent with the above finding, blocking VTA burst firing (and thus phasic DA signaling) with the selective NMDA receptor antagonist AP-5 affected only excitatory neural activity, selectively dampening the time-locked activity during cues and goal-directed actions without affecting baseline firing rate. In contrast, blockade of phasic DA had no effect on inhibitory goal-directed NAc activity (20).

One important issue is how inactivation of burst firing of VTA DA neurons can selectively modulate subsets of NAc neurons while leaving other cells unaffected. This may depend in part on the cellular organization of the NAc, where distinct subsets of medium spiny neurons (MSNs, the primary cell type in the NAc) differentially express either D1 (low affinity for DA) or D2 (high affinity) receptors, while only about 25% of NAc neurons co-express both receptor subtypes (127). Thus, phasic and tonic dopamine release may differentially modulate subsets of NAc neurons depending on connectivity and the type of receptor expressed on those cells. For example, high-affinity D2 receptors are thought to be chronically occupied by DA even at tonic levels, and thus may be relatively unaffected by the loss of phasic DA. In contrast, bursts of phasic DA may have much more notable effects on low-affinity D1 receptors, where the relatively saturating DA levels occupy many more receptors than tonic levels do (128). Given this, differences between inhibitions and excitations may be explained in terms of the primary receptor type. Importantly, predominantly D1- and D2-expressing cells are thought to have distinct patterns of projections to downstream regions (35), suggesting that even similar levels of phasic DA release in the NAc may have distinct circuit-level effects on the organization of learning and behavior.
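The receptor-affinity argument can be illustrated with a simple equilibrium-binding (occupancy) calculation. This is an illustrative sketch only: the Kd and DA concentration values below are order-of-magnitude assumptions chosen to convey the logic, not measurements from the work reviewed.

```python
# Illustrative sketch of the D1/D2 occupancy argument using simple
# equilibrium binding: fractional occupancy = [DA] / ([DA] + Kd).
# All Kd and concentration values (in nM) are hypothetical,
# order-of-magnitude assumptions for illustration.

def occupancy(da_nM, kd_nM):
    """Fraction of receptors bound at a given DA concentration."""
    return da_nM / (da_nM + kd_nM)

KD_D2 = 2.0      # high-affinity D2 receptor (hypothetical)
KD_D1 = 1000.0   # low-affinity D1 receptor (hypothetical)
TONIC = 20.0     # tonic extracellular DA (hypothetical)
PHASIC = 1000.0  # peak DA during a phasic burst (hypothetical)

# D2 receptors are already largely occupied at tonic DA, so a phasic
# burst adds little; D1 occupancy jumps substantially during the burst.
d2_gain = occupancy(PHASIC, KD_D2) - occupancy(TONIC, KD_D2)
d1_gain = occupancy(PHASIC, KD_D1) - occupancy(TONIC, KD_D1)
```

With these assumed values, a phasic burst raises D1 occupancy from roughly 2% to 50%, while near-saturated D2 receptors change comparatively little, consistent with the idea that phasic DA preferentially engages D1-expressing MSNs.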

11. BUILDING CIRCUITS OF MOTIVATION AND LEARNING: BLA INFLUENCES ON NAC ACTIVITY DURING BEHAVIOR

Contemporary studies have explored the larger circuits in which the NAc is embedded in an attempt to isolate the essential functions of related structures. In particular, NAc inputs from regions such as the orbitofrontal cortex, medial prefrontal regions, BLA and hippocampus each play important roles in modulating learning about actions, outcomes and contexts (25, 31–32, 129–135). One well-explored part of this circuit involves BLA afferents to the NAc core and shell (136–137), as disconnection of this circuit through asymmetric lesions has profound effects on motivated learning (96, 138).

Using a combination of neurophysiological recordings in the NAc and reversible inactivation, the underlying mechanisms of NAc processing are starting to be more fully elucidated. In two similar studies (138–139), rats were trained on a basic discriminative operant task in which a DS cue signaled that a lever press would be rewarded with food, while a control stimulus (NS) signaled that presses would go unrewarded. Electrophysiological recordings in the NAc showed greater neural encoding in the core for the DS cue than for the control NS cue. However, when the BLA of these animals was transiently inactivated with baclofen and muscimol, task-related encoding of the DS was impaired, while NS encoding was unaffected. This was not a general effect, as inactivation limited to the BLA contralateral to the recording site had no effect on NAc encoding (139). Supporting these claims, stimulation of the BLA during cue presentations significantly increased excitatory DS encoding in the NAc (138). Interestingly, despite the presence of BLA projections to both core and shell, inactivation of the BLA dampened DS encoding in the core but had no effect in the shell (139).

While these inactivation and lesion studies show that the BLA and NAc are part of a circuit for learning about goal-directed behavior, it is not clear that the circuit necessarily involves a direct connection from BLA to NAc to effect these changes. A possible alternative might involve BLA projections to prefrontal areas, which in turn might have dramatic influence on NAc function. To differentiate these possibilities, optogenetic techniques were employed, first infusing a virus targeting excitatory (CaMKII-expressing) neurons into BLA cell bodies (140–141). Once the virus was taken up, these neurons expressed light-sensitive opsins along the membrane out to their terminals. When light of the appropriate wavelength was delivered by laser via a fiber-optic cable, the cells would either depolarize (channelrhodopsin) or hyperpolarize (halorhodopsin) during the light presentation. Because of the spatial specificity of the light and the anterograde transport of the opsins, only those fibers arising from the BLA and terminating in the NAc would be modulated by light delivered there. Using this optogenetic technique, excitatory stimulation of channelrhodopsin-expressing BLA afferents to the NAc was sufficient to support goal-directed behavior. Conversely, brief inhibition of these fibers during a Pavlovian cue reduced some measures of motivated behavior (142). These findings suggest that BLA activation of NAc neurons is both necessary and sufficient to support learning, consonant with the earlier inactivation and lesion work.

Interestingly, similar techniques have revealed that the BLA plays a role in modulating DA release in the NAc. Afferents from the BLA to midbrain DA nuclei are almost completely absent (143), and thus changes in BLA function should not directly affect DAergic function. However, inhibition of the BLA has been demonstrated to blunt rapid DA release in the NAc core during DS cues (144). The precise mechanism is not known, but this finding suggests that glutamatergic afferents from the BLA may be able to modulate DA release from terminals onto NAc neurons. Understanding these complex mechanisms of limbic inputs impinging on the NAc (including those arising from other regions such as the OFC or medial prefrontal areas (e.g., 145)) will be essential for understanding the microcircuitry of motivated learning.

12. CONCLUSIONS: TOWARDS A THEORY OF NAC CORE AND SHELL DIFFERENCES

Across multiple behaviors, the role of DA in the NAc in motivated action is broadly consistent: signaling predictions and expectations about motivationally salient stimuli for the putative purpose of guiding action. However, this general mechanism also reveals subtle but important distinctions across multiple levels, such as the valence of the reinforcer, the type of action or learning being undertaken, and the decisions being made to acquire those outcomes. Further, at the microcircuit level, the NAc appears to be organized anatomically and functionally into subregions capable of encoding discrete yet interdependent features of goal-directed action. No theory to date has satisfactorily shown a consistent segregation of function between core and shell. These discrepancies likely result from the vast array of tasks that have been employed to study these regions; the differing demands placed on the core and shell may thus be task-specific. Further, while anatomical projections from limbic structures to the core and shell are relatively clear (27), mesolimbic input from the VTA follows a more mediolateral organization, which may have important functional implications for how these regions process information (33). With these caveats, however, we propose here a hypothesis that can organize what is currently known about the separate functions of DA in the NAc.

Across the multiple paradigms discussed above, the NAc core appears to be critically involved in a host of "cognitive"-style computations. At the neural level, the NAc core is implicated not only in simple Pavlovian cue encoding, but also in goal-directed actions and in selecting between actions of differing value. The core is critically situated such that fibers arising from similarly "cognitive" limbic regions (the prelimbic cortex, OFC and BLA) impinge on this region, allowing for the monitoring and implementation of a wide array of value-driven decision-making options. Consistent with this, lesions of the NAc core affect the ability of animals to alter behavior when expected outcomes change, regardless of the type of change (value or identity), while OFC lesions affect only a subset of these decision types (75). Interestingly, it is not apparent that the core is essential for putting this information into action under all conditions. In our laboratory, potentiated PIT transfer following cocaine exposure did not alter NAc core encoding (54), nor was core encoding affected in rats that failed to learn Pavlovian second-order conditioning (97).

Similarly, DA release in the core was associated not with the action selected, but with the prediction made during associative cues and behaviors for salient stimuli (122). Critically, the value encoded in these DA signals did not reflect the costs involved in the decisions (e.g., more DA release to overcome increasing costs), but instead scaled with the subjective value and preference of the animal. Thus, the DA signal in the core is related to the neural response, specifically the cost-benefit computation that gives the animal access to the expected value of each option for the subsequent selection of action.

In contrast, the NAc shell presents a relatively simpler version of encoding reward value. Shell inactivation is associated with deficits in judging the reward magnitude of outcomes (146), but not necessarily the various costs associated with those outcomes (113). In addition, lesions limited to the shell prevented rats from preferring maze arms baited with larger rewards, though performance on learning the maze itself was unaffected (147). At the cellular level, shell neurons shift their firing properties (from inhibitory to excitatory) after taste cues are devalued with a conditioned taste aversion (57). Similarly, studies of DA release in the shell have indicated a role for this NAc subregion in reward encoding. For example, DA release scales with the intensity of ICSS stimulation just prior to lever presses (89), suggesting a key role in mediating reward value and seeking. In a different vein, data from our laboratory present the intriguing possibility that DA in the shell may track reward value independently of task costs. When animals chose between one lever requiring low effort (1 press) to obtain a pellet and another requiring high effort (16 presses) to obtain a pellet, they routinely chose the low-effort response. As mentioned above, DA in the core tracked the subjective preference among the available responses. However, while DA was clearly released in the shell, release levels did not change as a function of the different options, either in this task (148) or in similar tasks where probability or delay rather than effort was manipulated (122, 125). Notably, the outcome was the same in both cases (1 pellet), despite the different effort requirements.

Thus, one possible role for DA in the NAc shell is the maintenance of reward-specific encoding. In normal animals, then, coordinated activity between the core and shell can provide a specific cost and reward analysis of the different options for action selection. Anatomically, core neurons project to the shell, but there are only sparse connections from the shell back to the core (40). Thus, information originating in the core may optimally "pass through" to the shell, integrating expectations about the various options with the specific identity of the rewards on offer.

These putative functions of the core and shell provide a rough framework for understanding and organizing our current knowledge of NAc processing and function. By incorporating FSCV findings into these models, we are now beginning to grasp the complexity and importance of DA as a neuromodulator as well as a powerful signal for learning-related changes in this system. Future studies employing circuit-level manipulation of these individual aspects of reward, cost, action and error correction with tools like optogenetics should provide exciting insight into this complex system.

Acknowledgments

This work was supported by National Institute on Drug Abuse (Grant Nos. DA 017318 to RMC, DA 029156 to MPS, and DA 030307 to JAS).

Abbreviations

DA

dopamine

NAc

nucleus accumbens

VTA

ventral tegmental area

FSCV

fast-scan cyclic voltammetry

CS

conditioned stimulus

US

unconditioned stimulus

CR

conditioned response

CTA

conditioned taste aversion

ICSS

intracranial self-stimulation

BLA

basolateral amygdala

DS

discriminative stimulus

PIT

Pavlovian-to-instrumental transfer

References

1. Parr-Brownlie LC, Hyland BI. Bradykinesia induced by dopamine D2 receptor blockade is associated with reduced motor cortex activity in the rat. J Neurosci. 2005;25(24):5700–9. doi: 10.1523/JNEUROSCI.0523-05.2005.
2. Schultz W. Reward signaling by dopamine neurons. Neuroscientist. 2001;7(4):293–302. doi: 10.1177/107385840100700406.
3. Breier A, Su TP, Saunders R, Carson RE, Kolachana BS, de Bartolomeis A, Weinberger DR, Weisenfeld N, Malhotra AK, Eckelman WC, Pickar D. Schizophrenia is associated with elevated amphetamine-induced synaptic dopamine concentrations: evidence from a novel positron emission tomography method. Proc Natl Acad Sci U S A. 1997;94(6):2569–74. doi: 10.1073/pnas.94.6.2569.
4. Grace AA. Phasic versus tonic dopamine release and the modulation of dopamine system responsivity: a hypothesis for the etiology of schizophrenia. Neuroscience. 1991;41(1):1–24. doi: 10.1016/0306-4522(91)90196-u.
5. Robinson DL, Venton BJ, Heien ML, Wightman RM. Detecting subsecond dopamine release with fast-scan cyclic voltammetry in vivo. Clin Chem. 2003;49(10):1763–73. doi: 10.1373/49.10.1763.
6. Sombers LA, Beyene M, Carelli RM, Wightman RM. Synaptic overflow of dopamine in the nucleus accumbens arises from neuronal activity in the ventral tegmental area. J Neurosci. 2009;29(6):1735–42. doi: 10.1523/JNEUROSCI.5562-08.2009.
7. Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275(5306):1593–9. doi: 10.1126/science.275.5306.1593.
8. Roitman MF, Stuber GD, Phillips PE, Wightman RM, Carelli RM. Dopamine operates as a subsecond modulator of food seeking. J Neurosci. 2004;24(6):1265–71. doi: 10.1523/JNEUROSCI.3823-03.2004.
9. Robinson DL, Heien ML, Wightman RM. Frequency of dopamine concentration transients increases in dorsal and ventral striatum of male rats during introduction of conspecifics. J Neurosci. 2002;22(23):10477–86. doi: 10.1523/JNEUROSCI.22-23-10477.2002.
10. Robinson DL, Phillips PE, Budygin EA, Trafton BJ, Garris PA, Wightman RM. Sub-second changes in accumbal dopamine during sexual behavior in male rats. Neuroreport. 2001;12(11):2549–52. doi: 10.1097/00001756-200108080-00051.
11. Beyene M, Carelli RM, Wightman RM. Cue-evoked dopamine release in the nucleus accumbens shell tracks reinforcer magnitude during intracranial self-stimulation. Neuroscience. 2010;169(4):1682–1688. doi: 10.1016/j.neuroscience.2010.06.047.
12. Cheer JF, Heien ML, Garris PA, Carelli RM, Wightman RM. Simultaneous dopamine and single-unit recordings reveal accumbens GABAergic responses: implications for intracranial self-stimulation. Proc Natl Acad Sci U S A. 2005;102(52):19150–5. doi: 10.1073/pnas.0509607102.
13. Owesson-White CA, Cheer JF, Beyene M, Carelli RM, Wightman RM. Dynamic changes in accumbens dopamine correlate with learning during intracranial self-stimulation. Proc Natl Acad Sci U S A. 2008;105(33):11957–62. doi: 10.1073/pnas.0803896105.
14. Owesson-White CA, Ariansen J, Stuber GD, Cleaveland NA, Cheer JF, Wightman RM, Carelli RM. Neural encoding of cocaine-seeking behavior is coincident with phasic dopamine release in the accumbens core and shell. Eur J Neurosci. 2009;30(6):1117–27. doi: 10.1111/j.1460-9568.2009.06916.x.
15. Phillips PE, Stuber GD, Heien ML, Wightman RM, Carelli RM. Subsecond dopamine release promotes cocaine seeking. Nature. 2003;422(6932):614–8. doi: 10.1038/nature01476.
16. Gonon F. Prolonged and extrasynaptic excitatory action of dopamine mediated by D1 receptors in the rat striatum in vivo. J Neurosci. 1997;17(15):5972–8. doi: 10.1523/JNEUROSCI.17-15-05972.1997.
17. Venton BJ, Michael DJ, Wightman RM. Correlation of local changes in extracellular oxygen and pH that accompany dopaminergic terminal activity in the rat caudate-putamen. J Neurochem. 2003;84(2):373–81. doi: 10.1046/j.1471-4159.2003.01527.x.
18. Robinson DL, Hermans A, Seipel AT, Wightman RM. Monitoring rapid chemical communication in the brain. Chem Rev. 2008;108(7):2554–84. doi: 10.1021/cr068081q.
19. Kawagoe KT, Garris PA, Wiedemann DJ, Wightman RM. Regulation of transient dopamine concentration gradients in the microenvironment surrounding nerve terminals in the rat striatum. Neuroscience. 1992;51(1):55–64. doi: 10.1016/0306-4522(92)90470-m.
20. Cacciapaglia F, Wightman RM, Carelli RM. Rapid dopamine signaling differentially modulates distinct microcircuits within the nucleus accumbens during sucrose-directed behavior. J Neurosci. 2011;31(39):13860–13869. doi: 10.1523/JNEUROSCI.1340-11.2011.
21. Aragona BJ, Cleaveland NA, Stuber GD, Day JJ, Carelli RM, Wightman RM. Preferential enhancement of dopamine transmission within the nucleus accumbens shell by cocaine is attributable to a direct increase in phasic dopamine release events. J Neurosci. 2008;28(35):8821–31. doi: 10.1523/JNEUROSCI.2225-08.2008.
22. Margolis EB, Lock H, Hjelmstad GO, Fields HL. The ventral tegmental area revisited: is there an electrophysiological marker for dopaminergic neurons? J Physiol. 2006;577(Pt 3):907–24. doi: 10.1113/jphysiol.2006.117069.
23. Carelli RM, Wightman RM. Functional microcircuitry in the accumbens underlying drug addiction: insights from real-time signaling during behavior. Curr Opin Neurobiol. 2004;14(6):763–8. doi: 10.1016/j.conb.2004.10.001.
24. Wright CI, Groenewegen HJ. Patterns of overlap and segregation between insular cortical, intermediodorsal thalamic and basal amygdaloid afferents in the nucleus accumbens of the rat. Neuroscience. 1996;73(2):359–73. doi: 10.1016/0306-4522(95)00592-7.
25. Brog JS, Salyapongse A, Deutch AY, Zahm DS. The patterns of afferent innervation of the core and shell in the “accumbens” part of the rat ventral striatum: immunohistochemical detection of retrogradely transported fluoro-gold. J Comp Neurol. 1993;338(2):255–78. doi: 10.1002/cne.903380209.
26. Zahm DS, Brog JS. On the significance of subterritories in the “accumbens” part of the rat ventral striatum. Neuroscience. 1992;50(4):751–67. doi: 10.1016/0306-4522(92)90202-d.
27. Berendse HW, Galis-de Graaf Y, Groenewegen HJ. Topographical organization and relationship with ventral striatal compartments of prefrontal corticostriatal projections in the rat. J Comp Neurol. 1992;316(3):314–47. doi: 10.1002/cne.903160305.
28. Swanson LW. The projections of the ventral tegmental area and adjacent regions: a combined fluorescent retrograde tracer and immunofluorescence study in the rat. Brain Res Bull. 1982;9(1–6):321–53. doi: 10.1016/0361-9230(82)90145-9.
29. Ferreira JG, Del-Fava F, Hasue RH, Shammah-Lagnado SJ. Organization of ventral tegmental area projections to the ventral tegmental area-nigral complex in the rat. Neuroscience. 2008;153(1):196–213. doi: 10.1016/j.neuroscience.2008.02.003.
30. Nauta WJ, Smith GP, Faull RL, Domesick VB. Efferent connections and nigral afferents of the nucleus accumbens septi in the rat. Neuroscience. 1978;3(4–5):385–401. doi: 10.1016/0306-4522(78)90041-6.
31. Totterdell S, Smith AD. Convergence of hippocampal and dopaminergic input onto identified neurons in the nucleus accumbens of the rat. J Chem Neuroanat. 1989;2(5):285–98.
32. Brady AM, O’Donnell P. Dopaminergic modulation of prefrontal cortical input to nucleus accumbens neurons in vivo. J Neurosci. 2004;24(5):1040–9. doi: 10.1523/JNEUROSCI.4178-03.2004.
33. Ikemoto S. Dopamine reward circuitry: two projection systems from the ventral midbrain to the nucleus accumbens-olfactory tubercle complex. Brain Res Rev. 2007;56(1):27–78. doi: 10.1016/j.brainresrev.2007.05.004.
34. David HN, Ansseau M, Abraini JH. Dopamine-glutamate reciprocal modulation of release and motor responses in the rat caudate-putamen and nucleus accumbens of “intact” animals. Brain Res Brain Res Rev. 2005;50(2):336–60. doi: 10.1016/j.brainresrev.2005.09.002.
35. Lu XY, Ghasemzadeh MB, Kalivas PW. Expression of D1 receptor, D2 receptor, substance P and enkephalin messenger RNAs in the neurons projecting from the nucleus accumbens. Neuroscience. 1998;82(3):767–80. doi: 10.1016/s0306-4522(97)00327-8.
36. Zahm DS. An integrative neuroanatomical perspective on some subcortical substrates of adaptive responding with emphasis on the nucleus accumbens. Neurosci Biobehav Rev. 2000;24(1):85–105. doi: 10.1016/s0149-7634(99)00065-2.
37. Zahm DS, Heimer L. Specificity in the efferent projections of the nucleus accumbens in the rat: comparison of the rostral pole projection patterns with those of the core and shell. J Comp Neurol. 1993;327(2):220–32. doi: 10.1002/cne.903270205.
38. Usuda I, Tanaka K, Chiba T. Efferent projections of the nucleus accumbens in the rat with special reference to subdivision of the nucleus: biotinylated dextran amine study. Brain Res. 1998;797(1):73–93. doi: 10.1016/s0006-8993(98)00359-x.
39. Heimer L, Zahm DS, Churchill L, Kalivas PW, Wohltmann C. Specificity in the projection patterns of accumbal core and shell in the rat. Neuroscience. 1991;41(1):89–125. doi: 10.1016/0306-4522(91)90202-y.
40. van Dongen YC, Deniau JM, Pennartz CM, Galis-de Graaf Y, Voorn P, Thierry AM, Groenewegen HJ. Anatomical evidence for direct connections between the shell and core subregions of the rat nucleus accumbens. Neuroscience. 2005;136(4):1049–71. doi: 10.1016/j.neuroscience.2005.08.050.
41. van Dongen YC, Mailly P, Thierry AM, Groenewegen HJ, Deniau JM. Three-dimensional organization of dendrites and local axon collaterals of shell and core medium-sized spiny projection neurons of the rat nucleus accumbens. Brain Struct Funct. 2008;213(1–2):129–47. doi: 10.1007/s00429-008-0173-5.
42. Wong AC, Shetreat ME, Clarke JO, Rayport S. D1- and D2-like dopamine receptors are co-localized on the presynaptic varicosities of striatal and nucleus accumbens neurons in vitro. Neuroscience. 1999;89(1):221–33. doi: 10.1016/s0306-4522(98)00284-x.
  • 43.Groves PM. A theory of the functional organization of the neostriatum and the neostriatal control of voluntary movement. Brain Res. 1983;286(2):109–32. doi: 10.1016/0165-0173(83)90011-5. [DOI] [PubMed] [Google Scholar]
  • 44.Shen W, Flajolet M, Greengard P, Surmeier DJ. Dichotomous dopaminergic control of striatal synaptic plasticity. Science. 2008;321(5890):848–51. doi: 10.1126/science.1160575. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Bertran-Gonzalez J, Bosch C, Maroteaux M, Matamales M, Herve D, Valjent E, Girault JA. Opposing patterns of signaling activation in dopamine D1 and D2 receptor-expressing striatal neurons in response to cocaine and haloperidol. J Neurosci. 2008;28(22):5671–85. doi: 10.1523/JNEUROSCI.1039-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Arts MP, Groenewegen HJ. Relationships of the Dendritic Arborizations of Ventral Striatomesencephalic Projection Neurons With Boundaries of Striatal Compartments. An In vitro Intracellular Labelling Study in the Rat. Eur J Neurosci. 1992;4(6):574–588. doi: 10.1111/j.1460-9568.1992.tb00907.x. [DOI] [PubMed] [Google Scholar]
  • 47.Haber SN, Calzavara R. The cortico-basal ganglia integrative network: the role of the thalamus. Brain Res Bull. 2009;78(2–3):69–74. doi: 10.1016/j.brainresbull.2008.09.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Belin D, Everitt BJ. Cocaine seeking habits depend upon dopamine-dependent serial connectivity linking the ventral with the dorsal striatum. Neuron. 2008;57(3):432–41. doi: 10.1016/j.neuron.2007.12.019. [DOI] [PubMed] [Google Scholar]
  • 49.Belin D, Jonkman S, Dickinson A, Robbins TW, Everitt BJ. Parallel and interactive learning processes within the basal ganglia: relevance for the understanding of addiction. Behav Brain Res. 2009;199(1):89–102. doi: 10.1016/j.bbr.2008.09.027. [DOI] [PubMed] [Google Scholar]
  • 50.Robbins TW, Ersche KD, Everitt BJ. Drug addiction and the memory systems of the brain. Ann N Y Acad Sci. 2008;1141:1–21. doi: 10.1196/annals.1441.020. [DOI] [PubMed] [Google Scholar]
  • 51.Roitman MF, Wheeler RA, Carelli RM. Nucleus accumbens neurons are innately tuned for rewarding and aversive taste stimuli, encode their predictors, and are linked to motor output. Neuron. 2005;45(4):587–97. doi: 10.1016/j.neuron.2004.12.055. [DOI] [PubMed] [Google Scholar]
  • 52.Taha SA, Fields HL. Inhibitions of nucleus accumbens neurons encode a gating signal for reward-directed behavior. J Neurosci. 2006;26(1):217–22. doi: 10.1523/JNEUROSCI.3227-05.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Taha SA, Fields HL. Encoding of palatability and appetitive behaviors by distinct neuronal populations in the nucleus accumbens. J Neurosci. 2005;25(5):1193–202. doi: 10.1523/JNEUROSCI.3975-04.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Saddoris MP, Stamatakis A, Carelli RM. Neural correlates of Pavlovian-to-instrumental transfer in the nucleus accumbens shell are selectively potentiated following cocaine self-administration. Eur J Neurosci. 2011;33(12):2274–87. doi: 10.1111/j.1460-9568.2011.07683.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Setlow B, Schoenbaum G, Gallagher M. Neural encoding in ventral striatum during olfactory discrimination learning. Neuron. 2003;38(4):625–36. doi: 10.1016/s0896-6273(03)00264-2. [DOI] [PubMed] [Google Scholar]
  • 56.Wheeler RA, Carelli RM. Dissecting motivational circuitry to understand substance abuse. Neuropharmacology. 2009;56(Suppl 1):149–59. doi: 10.1016/j.neuropharm.2008.06.028. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Wheeler RA, Twining RC, Jones JL, Slater JM, Grigson PS, Carelli RM. Behavioral and electrophysiological indices of negative affect predict cocaine self-administration. Neuron. 2008;57(5):774–85. doi: 10.1016/j.neuron.2008.01.024. [DOI] [PubMed] [Google Scholar]
  • 58.Roitman MF, Wheeler RA, Tiesinga PH, Roitman JD, Carelli RM. Hedonic and nucleus accumbens neural responses to a natural reward are regulated by aversive conditioning. Learn Mem. 2010;17(11):539–46. doi: 10.1101/lm.1869710. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Bassareo V, Di Chiara G. Differential influence of associative and nonassociative learning mechanisms on the responsiveness of prefrontal and accumbal dopamine transmission to food stimuli in rats fed ad libitum. J Neurosci. 1997;17(2):851–61. doi: 10.1523/JNEUROSCI.17-02-00851.1997. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Bassareo V, Di Chiara G. Differential responsiveness of dopamine transmission to food-stimuli in nucleus accumbens shell/core compartments. Neuroscience. 1999;89(3):637–41. doi: 10.1016/s0306-4522(98)00583-1. [DOI] [PubMed] [Google Scholar]
  • 61.Roitman MF, Wheeler RA, Wightman RM, Carelli RM. Real-time chemical responses in the nucleus accumbens differentiate rewarding and aversive stimuli. Nat Neurosci. 2008;11(12):1376–7. doi: 10.1038/nn.2219. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Wheeler RA, Aragona BJ, Fuhrmann KA, Jones JL, Day JJ, Cacciapaglia F, Wightman RM, Carelli RM. Cocaine cues drive opposing context-dependent shifts in reward processing and emotional state. Biol Psychiatry. 2011;69(11):1067–74. doi: 10.1016/j.biopsych.2011.02.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Stuber GD, Roitman MF, Phillips PE, Carelli RM, Wightman RM. Rapid dopamine signaling in the nucleus accumbens during contingent and noncontingent cocaine administration. Neuropsychopharmacology. 2005;30(5):853–63. doi: 10.1038/sj.npp.1300619. [DOI] [PubMed] [Google Scholar]
  • 64.Kalivas PW, Duffy P. Selective activation of dopamine transmission in the shell of the nucleus accumbens by stress. Brain Res. 1995;675(1–2):325–8. doi: 10.1016/0006-8993(95)00013-g. [DOI] [PubMed] [Google Scholar]
  • 65.Levita L, Dalley JW, Robbins TW. Nucleus accumbens dopamine and learned fear revisited: a review and some new findings. Behav Brain Res. 2002;137(1–2):115–27. doi: 10.1016/s0166-4328(02)00287-5. [DOI] [PubMed] [Google Scholar]
  • 66.Nicola SM, Taha SA, Kim SW, Fields HL. Nucleus accumbens dopamine release is necessary and sufficient to promote the behavioral response to reward-predictive cues. Neuroscience. 2005;135(4):1025–33. doi: 10.1016/j.neuroscience.2005.06.088. [DOI] [PubMed] [Google Scholar]
  • 67.Rescorla RA. Pavlovian conditioning: It’s not what you think it is. American Psychologist. 1988;43(3):151–60. doi: 10.1037//0003-066x.43.3.151. [DOI] [PubMed] [Google Scholar]
  • 68.Dickinson A. Contemporary Animal Learning Theory. Cambridge University Press; Cambridge: 1980. [Google Scholar]
  • 69.Holland PC. Event representation in Pavlovian conditioning: image and action. Cognition. 1990;37(1–2):105–31. doi: 10.1016/0010-0277(90)90020-k. [DOI] [PubMed] [Google Scholar]
  • 70.Cassaday HJ, Horsley RR, Norman C. Electrolytic lesions to nucleus accumbens core and shell have dissociable effects on conditioning to discrete and contextual cues in aversive and appetitive procedures respectively. Behav Brain Res. 2005;160(2):222–35. doi: 10.1016/j.bbr.2004.12.012. [DOI] [PubMed] [Google Scholar]
  • 71.Corbit LH, Muir JL, Balleine BW. The role of the nucleus accumbens in instrumental conditioning: Evidence of a functional dissociation between accumbens core and shell. J Neurosci. 2001;21(9):3251–60. doi: 10.1523/JNEUROSCI.21-09-03251.2001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Parkinson JA, Cardinal RN, Everitt BJ. Limbic cortical-ventral striatal systems underlying appetitive conditioning. Progress in Brain Research. 2000;126:263–85. doi: 10.1016/S0079-6123(00)26019-6. [DOI] [PubMed] [Google Scholar]
  • 73.Blaiss CA, Janak PH. The nucleus accumbens core and shell are critical for the expression, but not the consolidation, of Pavlovian conditioned approach. Behav Brain Res. 2008 doi: 10.1016/j.bbr.2008.12.024. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Singh T, McDannald MA, Haney RZ, Cerri DH, Schoenbaum G. Nucleus Accumbens Core and Shell are Necessary for Reinforcer Devaluation Effects on Pavlovian Conditioned Responding. Front Integr Neurosci. 2010;4:126. doi: 10.3389/fnint.2010.00126. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.McDannald MA, Lucantonio F, Burke KA, Niv Y, Schoenbaum G. Ventral striatum and orbitofrontal cortex are both required for model-based, but not model-free, reinforcement learning. J Neurosci. 2011;31(7):2700–5. doi: 10.1523/JNEUROSCI.5499-10.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Day JJ, Wheeler RA, Roitman MF, Carelli RM. Nucleus accumbens neurons encode Pavlovian approach behaviors: evidence from an autoshaping paradigm. Eur J Neurosci. 2006;23(5):1341–51. doi: 10.1111/j.1460-9568.2006.04654.x. [DOI] [PubMed] [Google Scholar]
  • 77.Rescorla RA, Wagner AD. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy WF, editors. Classical Conditioning II: Theory and Research. Appleton-Century-Crofts; New York: 1972. [Google Scholar]
  • 78.Pearce JM, Hall G. A model for Pavlovian learning: variations in the effectiveness of conditioned but not of unconditioned stimuli. Psychological Review. 1980;87(6):532–52. [PubMed] [Google Scholar]
  • 79.Mackintosh NJ. The Psychology of Animal Learning. Academic Press; London: 1974. [Google Scholar]
  • 80.Esber GR, Haselgrove M. Reconciling the influence of predictiveness and uncertainty on stimulus salience: a model of attention in associative learning. Proc Biol Sci. 2011;278(1718):2553–61. doi: 10.1098/rspb.2011.0836. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81.Waelti P, Dickinson A, Schultz W. Dopamine responses comply with basic assumptions of formal learning theory. Nature. 2001;412(6842):43–8. doi: 10.1038/35083500. [DOI] [PubMed] [Google Scholar]
  • 82.Margolis EB, Mitchell JM, Ishikawa J, Hjelmstad GO, Fields HL. Midbrain dopamine neurons: projection target determines action potential duration and dopamine D(2) receptor inhibition. J Neurosci. 2008;28(36):8908–13. doi: 10.1523/JNEUROSCI.1526-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Margolis EB, Coker AR, Driscoll JR, Lemaitre AI, Fields HL. Reliability in the identification of midbrain dopamine neurons. PLoS One. 2010;5(12):e15222. doi: 10.1371/journal.pone.0015222. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Day JJ, Roitman MF, Wightman RM, Carelli RM. Associative learning mediates dynamic shifts in dopamine signaling in the nucleus accumbens. Nat Neurosci. 2007;10(8):1020–8. doi: 10.1038/nn1923. [DOI] [PubMed] [Google Scholar]
  • 85.Schoenbaum G, Setlow B. Lesions of nucleus accumbens disrupt learning about aversive outcomes. Journal of Neuroscience. 2003;23(30):9833–41. doi: 10.1523/JNEUROSCI.23-30-09833.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.de Borchgrave R, Rawlins JN, Dickinson A, Balleine BW. Effects of cytotoxic nucleus accumbens lesions on instrumental conditioning in rats. Exp Brain Res. 2002;144(1):50–68. doi: 10.1007/s00221-002-1031-y. [DOI] [PubMed] [Google Scholar]
  • 87.Mogenson GJ, Jones DL, Yim CY. From motivation to action: functional interface between the limbic system and the motor system. Prog Neurobiol. 1980;14(2–3):69–97. doi: 10.1016/0301-0082(80)90018-0. [DOI] [PubMed] [Google Scholar]
  • 88.Garris PA, Kilpatrick M, Bunin MA, Michael D, Walker QD, Wightman RM. Dissociation of dopamine release in the nucleus accumbens from intracranial self-stimulation. Nature. 1999;398(6722):67–9. doi: 10.1038/18019. [DOI] [PubMed] [Google Scholar]
  • 89.Beyene M, Carelli RM, Wightman RM. Cue-evoked dopamine release in the nucleus accumbens shell tracks reinforcer magnitude during intracranial self-stimulation. Neuroscience. 2010;169(4):1682–8. doi: 10.1016/j.neuroscience.2010.06.047. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 90.Cacciapaglia F, Saddoris MP, Wightman RM, Carelli RM. Differential dopamine release dynamics in the nucleus accumbens core and shell track distinct aspects of goal-directed behavior for sucrose. Neuropharmacology. doi: 10.1016/j.neuropharm.2011.12.027. (in press) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91.Saddoris MP, Holland PC, Gallagher M. Associatively learned representations of taste outcomes activate taste-encoding neural ensembles in gustatory cortex. J Neurosci. 2009;29(49):15386–96. doi: 10.1523/JNEUROSCI.3233-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92.Rizley RC, Rescorla RA. Associations in second-order conditioning and sensory preconditioning. J Comp Physiol Psychol. 1972;81(1):1–11. doi: 10.1037/h0033333. [DOI] [PubMed] [Google Scholar]
  • 93.Holland PC, Rescorla RA. The effect of two ways of devaluing the unconditioned stimulus after first- and second-order appetitive conditioning. Journal of Experimental Psychology: Animal Behavior Processes. 1975;1(4):355–363. doi: 10.1037//0097-7403.1.4.355. [DOI] [PubMed] [Google Scholar]
  • 94.Holland PC, Rescorla RA. Second-order conditioning with food unconditioned stimulus. J Comp Physiol Psychol. 1975;88(1):459–67. doi: 10.1037/h0076219. [DOI] [PubMed] [Google Scholar]
  • 95.Rescorla RA. Aspects of the reinforcer learned in second-order Pavlovian conditioning. J Exp Psychol Anim Behav Process. 1979;5(1):79–95. doi: 10.1037//0097-7403.5.1.79. [DOI] [PubMed] [Google Scholar]
  • 96.Setlow B, Holland PC, Gallagher M. Disconnection of the basolateral amygdala complex and nucleus accumbens impairs appetitive pavlovian second-order conditioned responses. Behav Neurosci. 2002;116(2):267–75. doi: 10.1037//0735-7044.116.2.267. [DOI] [PubMed] [Google Scholar]
  • 97.Saddoris MP, Cameron CM, Briley JD, Carelli RM. Long-term exposure to cocaine self-administration disrupts the behavioral and neural correlates of Pavlovian second-order conditioning in the nucleus accumbens of rats. Society for Neuroscience Annual Meeting; San Diego, CA. 2010. [Google Scholar]
  • 98.Estes WK. Discriminative conditioning; effects of a Pavlovian conditioned stimulus upon a subsequently established operant response. J Exp Psychol. 1948;38(2):173–7. doi: 10.1037/h0057525. [DOI] [PubMed] [Google Scholar]
  • 99.Rescorla RA, Solomon RL. Two-process learning theory: Relationships between Pavlovian conditioning and instrumental learning. Psychol Rev. 1967;74(3):151–82. doi: 10.1037/h0024475. [DOI] [PubMed] [Google Scholar]
  • 100.Holland PC. Relations between Pavlovian-instrumental transfer and reinforcer devaluation. Journal of Experimental Psychology: Animal Behavior Processes. 2004;30(2):104–17. doi: 10.1037/0097-7403.30.2.104. [DOI] [PubMed] [Google Scholar]
  • 101.Delamater AR, Holland PC. The influence of CS-US interval on several different indices of learning in appetitive conditioning. J Exp Psychol Anim Behav Process. 2008;34(2):202–22. doi: 10.1037/0097-7403.34.2.202. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 102.McDannald MA, Galarce EM. Measuring Pavlovian fear with conditioned freezing and conditioned suppression reveals different roles for the basolateral amygdala. Brain Res. 2011;1374:82–9. doi: 10.1016/j.brainres.2010.12.050. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103.Hall J, Parkinson JA, Connor TM, Dickinson A, Everitt BJ. Involvement of the central nucleus of the amygdala and nucleus accumbens core in mediating Pavlovian influences on instrumental behaviour. Eur J Neurosci. 2001;13(10):1984–92. doi: 10.1046/j.0953-816x.2001.01577.x. [DOI] [PubMed] [Google Scholar]
  • 104.Wyvell CL, Berridge KC. Intra-accumbens amphetamine increases the conditioned incentive salience of sucrose reward: enhancement of reward “wanting” without enhanced “liking” or response reinforcement. J Neurosci. 2000;20(21):8122–30. doi: 10.1523/JNEUROSCI.20-21-08122.2000. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 105.Cardinal RN. Neural systems implicated in delayed and probabilistic reinforcement. Neural Netw. 2006;19(8):1277–301. doi: 10.1016/j.neunet.2006.03.004. [DOI] [PubMed] [Google Scholar]
  • 106.Green L, Myerson J. A discounting framework for choice with delayed and probabilistic rewards. Psychol Bull. 2004;130(5):769–92. doi: 10.1037/0033-2909.130.5.769. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 107.Phillips PE, Walton ME, Jhou TC. Calculating utility: preclinical evidence for cost-benefit analysis by mesolimbic dopamine. Psychopharmacology (Berl) 2007;191(3):483–95. doi: 10.1007/s00213-006-0626-6. [DOI] [PubMed] [Google Scholar]
  • 108.Rangel A, Camerer C, Montague PR. A framework for studying the neurobiology of value-based decision making. Nat Rev Neurosci. 2008;9(7):545–56. doi: 10.1038/nrn2357. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 109.Cardinal RN, Cheung TH. Nucleus accumbens core lesions retard instrumental learning and performance with delayed reinforcement in the rat. BMC Neurosci. 2005;6:9. doi: 10.1186/1471-2202-6-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 110.Cardinal RN, Howes NJ. Effects of lesions of the nucleus accumbens core on choice between small certain rewards and large uncertain rewards in rats. BMC Neurosci. 2005;6:37. doi: 10.1186/1471-2202-6-37. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 111.Cardinal RN, Pennicott DR, Sugathapala CL, Robbins TW, Everitt BJ. Impulsive choice induced in rats by lesions of the nucleus accumbens core. Science. 2001;292(5526):2499–501. doi: 10.1126/science.1060818. [DOI] [PubMed] [Google Scholar]
  • 112.Floresco SB, Tse MT, Ghods-Sharifi S. Dopaminergic and glutamatergic regulation of effort- and delay-based decision making. Neuropsychopharmacology. 2008;33(8):1966–79. doi: 10.1038/sj.npp.1301565. [DOI] [PubMed] [Google Scholar]
  • 113.Ghods-Sharifi S, Floresco SB. Differential effects on effort discounting induced by inactivations of the nucleus accumbens core or shell. Behav Neurosci. 2010;124(2):179–91. doi: 10.1037/a0018932. [DOI] [PubMed] [Google Scholar]
  • 114.Stopper C, Floresco S. Contributions of the nucleus accumbens and its subregions to different aspects of risk-based decision making. Cognitive, Affective, & Behavioral Neuroscience. 2011;11(1):97–112. doi: 10.3758/s13415-010-0015-9. [DOI] [PubMed] [Google Scholar]
  • 115.Fiorillo CD, Tobler PN, Schultz W. Discrete coding of reward probability and uncertainty by dopamine neurons. Science. 2003;299(5614):1898–902. doi: 10.1126/science.1077349. [DOI] [PubMed] [Google Scholar]
  • 116.Roesch MR, Calu DJ, Schoenbaum G. Dopamine neurons encode the better option in rats deciding between differently delayed or sized rewards. Nat Neurosci. 2007;10(12):1615–24. doi: 10.1038/nn2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 117.Tobler PN, Fiorillo CD, Schultz W. Adaptive coding of reward value by dopamine neurons. Science. 2005;307(5715):1642–5. doi: 10.1126/science.1105370. [DOI] [PubMed] [Google Scholar]
  • 118.Ishiwari K, Weber SM, Mingote S, Correa M, Salamone JD. Accumbens dopamine and the regulation of effort in food-seeking behavior: modulation of work output by different ratio or force requirements. Behav Brain Res. 2004;151(1–2):83–91. doi: 10.1016/j.bbr.2003.08.007. [DOI] [PubMed] [Google Scholar]
  • 119.Mingote S, Weber SM, Ishiwari K, Correa M, Salamone JD. Ratio and time requirements on operant schedules: effort-related effects of nucleus accumbens dopamine depletions. Eur J Neurosci. 2005;21(6):1749–57. doi: 10.1111/j.1460-9568.2005.03972.x. [DOI] [PubMed] [Google Scholar]
  • 120.St Onge JR, Floresco SB. Dopaminergic modulation of risk-based decision making. Neuropsychopharmacology. 2009;34(3):681–97. doi: 10.1038/npp.2008.121. [DOI] [PubMed] [Google Scholar]
  • 121.St Onge J, Chiu Y, Floresco S. Differential effects of dopaminergic manipulations on risky choice. Psychopharmacology (Berl) 2010;211(2):209–221. doi: 10.1007/s00213-010-1883-y. [DOI] [PubMed] [Google Scholar]
  • 122.Day JJ, Jones JL, Wightman RM, Carelli RM. Phasic nucleus accumbens dopamine release encodes effort- and delay-related costs. Biol Psychiatry. 2010;68(3):306–9. doi: 10.1016/j.biopsych.2010.03.026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 123.Gan JO, Walton ME, Phillips PE. Dissociable cost and benefit encoding of future rewards by mesolimbic dopamine. Nat Neurosci. 2010;13(1):25–7. doi: 10.1038/nn.2460. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 124.Nasrallah NA, Clark JJ, Collins AL, Akers CA, Phillips PE, Bernstein IL. Risk preference following adolescent alcohol use is associated with corrupted encoding of costs but not rewards by mesolimbic dopamine. Proc Natl Acad Sci U S A. 2011;108(13):5466–71. doi: 10.1073/pnas.1017732108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 125.Sugam JA, Day JJ, Wightman RM, Carelli RM. Phasic nucleus accumbens dopamine encodes risk-based decision-making behavior. Biol Psychiatry. 2012;71:199–205. doi: 10.1016/j.biopsych.2011.09.029. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 126.Cheer JF, Aragona BJ, Heien ML, Seipel AT, Carelli RM, Wightman RM. Coordinated accumbal dopamine release and neural activity drive goal-directed behavior. Neuron. 2007;54(2):237–44. doi: 10.1016/j.neuron.2007.03.021. [DOI] [PubMed] [Google Scholar]
  • 127.Hasbi A, O’Dowd BF, George SR. Heteromerization of dopamine D2 receptors with dopamine D1 or D5 receptors generates intracellular calcium signaling by different mechanisms. Curr Opin Pharmacol. 2010;10(1):93–9. doi: 10.1016/j.coph.2009.09.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 128.Dreyer JK, Herrik KF, Berg RW, Hounsgaard JD. Influence of phasic and tonic dopamine release on receptor activation. J Neurosci. 2010;30(42):14273–83. doi: 10.1523/JNEUROSCI.1894-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 129.Kalivas PW, Volkow N, Seamans J. Unmanageable motivation in addiction: a pathology in prefrontal-accumbens glutamate transmission. Neuron. 2005;45(5):647–50. doi: 10.1016/j.neuron.2005.02.005. [DOI] [PubMed] [Google Scholar]
  • 130.Burns LH, Annett L, Kelley AE, Everitt BJ, Robbins TW. Effects of lesions to amygdala, ventral subiculum, medial prefrontal cortex, and nucleus accumbens on the reaction to novelty: implication for limbic-striatal interactions. Behav Neurosci. 1996;110(1):60–73. doi: 10.1037//0735-7044.110.1.60. [DOI] [PubMed] [Google Scholar]
  • 131.Burns LH, Robbins TW, Everitt BJ. Differential effects of excitotoxic lesions of the basolateral amygdala, ventral subiculum and medial prefrontal cortex on responding with conditioned reinforcement and locomotor activity potentiated by intra-accumbens infusions of D-amphetamine. Behavioral Brain Research. 1993;55(2):167–83. doi: 10.1016/0166-4328(93)90113-5. [DOI] [PubMed] [Google Scholar]
  • 132.Franklin TR, Druhan JP. Involvement of the nucleus accumbens and medial prefrontal cortex in the expression of conditioned hyperactivity to a cocaine-associated environment in rats. Neuropsychopharmacology. 2000;23(6):633–44. doi: 10.1016/S0893-133X(00)00162-7. [DOI] [PubMed] [Google Scholar]
  • 133.Goto Y, O’Donnell P. Prefrontal lesion reverses abnormal mesoaccumbens response in an animal model of schizophrenia. Biol Psychiatry. 2004;55(2):172–6. doi: 10.1016/s0006-3223(03)00783-2. [DOI] [PubMed] [Google Scholar]
  • 134.Krettek JE, Price JL. Amygdaloid projections to subcortical structures within the basal forebrain and brainstem in the rat and cat. Journal of Comparative Neurology. 1978;178(2):225–54. doi: 10.1002/cne.901780204. [DOI] [PubMed] [Google Scholar]
  • 135.Ramirez DR, Savage LM. Differential involvement of the basolateral amygdala, orbitofrontal cortex, and nucleus accumbens core in the acquisition and use of reward expectancies. Behav Neurosci. 2007;121(5):896–906. doi: 10.1037/0735-7044.121.5.896. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 136.McDonald AJ. Organization of amygdaloid projections to the prefrontal cortex and associated striatum in the rat. Neuroscience. 1991;44(1):1–14. doi: 10.1016/0306-4522(91)90247-l. [DOI] [PubMed] [Google Scholar]
  • 137.McDonald AJ. Topographical organization of amygdaloid projections to the caudatoputamen, nucleus accumbens, and related striatal-like areas of the rat brain. Neuroscience. 1991;44(1):15–33. doi: 10.1016/0306-4522(91)90248-m. [DOI] [PubMed] [Google Scholar]
  • 138.Ambroggi F, Ishikawa A, Fields HL, Nicola SM. Basolateral amygdala neurons facilitate reward-seeking behavior by exciting nucleus accumbens neurons. Neuron. 2008;59(4):648–61. doi: 10.1016/j.neuron.2008.07.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 139.Jones JL, Day JJ, Wheeler RA, Carelli RM. The basolateral amygdala differentially regulates conditioned neural responses within the nucleus accumbens core and shell. Neuroscience. 2010;169(3):1186–98. doi: 10.1016/j.neuroscience.2010.05.073. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 140.Deisseroth K. Optogenetics. Nat Methods. 2011;8(1):26–9. doi: 10.1038/nmeth.f.324. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 141.Zhang F, Gradinaru V, Adamantidis AR, Durand R, Airan RD, de Lecea L, Deisseroth K. Optogenetic interrogation of neural circuits: technology for probing mammalian brain structures. Nat Protoc. 2010;5(3):439–56. doi: 10.1038/nprot.2009.226. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 142.Stuber GD, Sparta DR, Stamatakis AM, van Leeuwen WA, Hardjoprajitno JE, Cho S, Tye KM, Kempadoo KA, Zhang F, Deisseroth K, Bonci A. Excitatory transmission from the amygdala to nucleus accumbens facilitates reward seeking. Nature. 2011;475(7356):377–80. doi: 10.1038/nature10194. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 143.Zahm DS, Cheng AY, Lee TJ, Ghobadi CW, Schwartz ZM, Geisler S, Parsely KP, Gruber C, Veh RW. Inputs to the midbrain dopaminergic complex in the rat, with emphasis on extended amygdala-recipient sectors. J Comp Neurol. 2011;519(16):3159–88. doi: 10.1002/cne.22670. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 144.Jones JL, Day JJ, Aragona BJ, Wheeler RA, Wightman RM, Carelli RM. Basolateral amygdala modulates terminal dopamine release in the nucleus accumbens and conditioned responding. Biol Psychiatry. 2010;67(8):737–44. doi: 10.1016/j.biopsych.2009.11.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 145.Ishikawa A, Ambroggi F, Nicola SM, Fields HL. Dorsomedial prefrontal cortex contribution to behavioral and nucleus accumbens neuronal responses to incentive cues. J Neurosci. 2008;28(19):5088–98. doi: 10.1523/JNEUROSCI.0253-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 146.Stopper CM, Floresco SB. Contributions of the nucleus accumbens and its subregions to different aspects of risk-based decision making. Cogn Affect Behav Neurosci. 2011;11(1):97–112. doi: 10.3758/s13415-010-0015-9. [DOI] [PubMed] [Google Scholar]
  • 147.Albertin SV, Mulder AB, Tabuchi E, Zugaro MB, Wiener SI. Lesions of the medial shell of the nucleus accumbens impair rats in finding larger rewards, but spare reward-seeking behavior. Behav Brain Res. 2000;117(1–2):173–83. doi: 10.1016/s0166-4328(00)00303-x. [DOI] [PubMed] [Google Scholar]
  • 148.Day JJ, Jones JL, Carelli RM. Nucleus accumbens neurons encode predicted and ongoing reward costs in rats. Eur J Neurosci. 2011;33(2):308–21. doi: 10.1111/j.1460-9568.2010.07531.x. [DOI] [PMC free article] [PubMed] [Google Scholar]