The Journal of Neuroscience. 2006 Sep 13;26(37):9530–9537. doi: 10.1523/JNEUROSCI.2915-06.2006

Dissociable Systems for Gain- and Loss-Related Value Predictions and Errors of Prediction in the Human Brain

Juliana Yacubian 1,*, Jan Gläscher 1,*, Katrin Schroeder 2, Tobias Sommer 1, Dieter F Braus 2, Christian Büchel 1
PMCID: PMC6674602  PMID: 16971537

Abstract

Midbrain dopaminergic neurons projecting to the ventral striatum code for reward magnitude and probability during reward anticipation and then indicate the difference between actual and predicted outcome. It has been questioned whether such a common system for the prediction and evaluation of reward exists in humans. Using functional magnetic resonance imaging and a guessing task in two large cohorts, we were able to confirm ventral striatal responses coding both reward probability and magnitude during anticipation, permitting the local computation of expected value (EV). However, the ventral striatum only represented the gain-related part of EV (EV+). At reward delivery, the same area showed a reward probability- and magnitude-dependent prediction error signal, best modeled as the difference between actual outcome and EV+. In contrast, loss-related expected value (EV−) and the associated prediction error were represented in the amygdala. Thus, the ventral striatum and the amygdala distinctively process the value of a prediction and subsequently compute a prediction error for gains and losses, respectively. Therefore, a homeostatic balance of both systems might be important for generating adequate expectations under uncertainty. Predominance of either system might render expectations more positive or negative, which could contribute to the pathophysiology of mood disorders like major depression.

Keywords: amygdala, punishment, ventral striatum, prediction error, fMRI, dopaminergic

Introduction

In nonhuman primates, mesolimbic dopaminergic neurons are involved in the representation of reward probability and reward magnitude (Schultz et al., 1997; Fiorillo et al., 2003; Tobler et al., 2005). In humans, these response properties have been observed in the ventral striatum (Pagnoni et al., 2002; McClure et al., 2003; O'Doherty et al., 2003; Ramnani et al., 2004), a region known to receive afferent input from midbrain dopaminergic neurons (Haber et al., 1995). The ventral striatum responds to a conditioned stimulus predicting reward delivery (McClure et al., 2003; O'Doherty et al., 2003) and shows a strong outcome-related response when a reward occurs unexpectedly or an activity decrease when an expected reward is omitted (Pagnoni et al., 2002; McClure et al., 2003). These findings suggest that ventral striatal activations resemble a prediction error signal similar to the dopaminergic midbrain signal in the primate (Schultz and Dickinson, 2000).

Reward processing in the human has also been investigated using other incentive tasks containing a guessing or gambling component (Rogers et al., 1999; Elliott et al., 2000; Knutson et al., 2000; Breiter et al., 2001; Delgado et al., 2003; Ernst et al., 2004; Matthews et al., 2004; Abler et al., 2006). However, in contrast to reinforcement learning, a proper model for the prediction signal that combines reward magnitude and probability has not been established. "Expected value" (EV), defined as the product of reward magnitude and reward probability (Machina, 1987), is a likely basis for such a model (Knutson et al., 2005). In a guessing task with two possible outcomes (i.e., gain or loss), the total EV is the sum of the gain-related EV (EV+) and the loss-related EV (EV−). The former is the probability of a gain times the magnitude of the gain, whereas the latter is the probability of a loss times the magnitude of the loss. Previous studies investigating the neuronal basis of EV have used tasks with gain versus no-gain outcomes or loss versus no-loss outcomes (Knutson et al., 2005; Dreher et al., 2006). In the former case, EV equals EV+, simply because no loss can occur (i.e., EV− = 0), and in the latter case EV equals EV− (i.e., EV+ = 0).

We used a factorial design in combination with functional magnetic resonance imaging (fMRI) in which volunteers could gain and lose different amounts of money with different probabilities in each trial. This allowed us to explicitly test whether EV+ and EV− are processed in the same or in different brain areas. Based on recent data showing a limited dynamic range of dopaminergic midbrain neurons (Bayer and Glimcher, 2005), we expected that in such a task the ventral striatum would only be able to signal EV+ but not EV−, and that an additional system exists that represents EV−. Because the amygdala has been implicated in the prediction of aversive events (Büchel et al., 1998; LaBar et al., 1998; Breiter et al., 2001; Kahn et al., 2002; Glascher and Buchel, 2005; Trepel et al., 2005), this structure is a possible candidate for such a system.

Materials and Methods

Subjects.

Forty-two healthy male volunteers, 27.3 ± 5.5 years of age (mean age ± SD), participated in the main study. A second cohort of 24 healthy male volunteers, 24.9 ± 4.9 years of age (mean age ± SD), was investigated for replication purposes. We concentrated our investigation on male volunteers to minimize the influence of differences in hormonal state during the menstrual cycle. Gonadal steroids have a regulatory influence on the reward system in female rats (Bless et al., 1994), and estradiol in particular has been shown to modulate dopamine (DA) release, synthesis, and receptor binding in the striatum (Pasqualini et al., 1996).

The local ethics committee approved the study and all participants gave written informed consent before participating. Volunteers were evaluated with a structured psychiatric interview (the Mini-International Neuropsychiatric Interview) (Sheehan et al., 1998) and with a gambling questionnaire (Kurzfragebogen Glücksspielverhalten) (Petry, 1996) to exclude psychiatric diseases and pathological gambling. All underwent a urine drug screening to exclude cocaine, amphetamine, cannabis, and opiate abuse.

Guessing task.

The paradigm used was a simple guessing task subdivided into two phases: anticipation and outcome. Each trial began with the presentation of the backside of eight playing cards (Fig. 1a,b, top). In the initial phase, volunteers had to place money on individual playing cards. In some trials, they could place the money on the corners of four adjacent cards (Fig. 1a) and in others on a single card (Fig. 1b). This manipulation allowed us to control reward probability (low for a single card and high for four cards). Altogether, volunteers played a series of 200 trials. Because of the trial randomization, the actual gain probability was 26% for the low-probability trials and 66% for the high-probability trials. This is a small deviation from the graphically expected probabilities of one-eighth (i.e., 0.125) and four-eighths (i.e., 0.5), which was necessary to avoid a rapid decrease in balance resulting from the unfavorable average gain/loss ratio of 31.25/68.75% when the individual gain probabilities are 12.5 and 50%. The inclusion of a third (i.e., very high) probability of seven-eighths (i.e., 0.875) as an alternative was dismissed, because it would have increased the total number of conditions from 8 to 12.
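To make the payout argument concrete, the following sketch (not part of the original analysis) contrasts the two probability settings; it assumes, purely for illustration, that the four probability × magnitude combinations occur equally often.

```python
# Sketch: why the realized gain probabilities (26%/66%) were raised above the
# graphically suggested ones (12.5%/50%). Assumes equal frequencies of the four
# probability x magnitude combinations (an illustration, not the actual design).

def expected_value(p_gain, magnitude):
    """EV of a single trial: win `magnitude` Euro with p_gain, lose it otherwise."""
    return p_gain * magnitude - (1.0 - p_gain) * magnitude

for label, probs in [("graphical", (0.125, 0.50)), ("realized", (0.26, 0.66))]:
    mean_gain_prob = sum(probs) / len(probs)             # 0.3125 vs. 0.46
    evs = [expected_value(p, x) for p in probs for x in (1, 5)]
    per_trial = sum(evs) / len(evs)
    print(f"{label:9s}: mean gain probability {mean_gain_prob:.4f}, "
          f"mean EV {per_trial:+.2f} Euro/trial, "
          f"~{200 * per_trial:+.0f} Euro over 200 trials")
```

Under the graphical probabilities, volunteers would on average gain on only 31.25% of trials and lose roughly one Euro per trial; the realized probabilities keep the expected per-trial loss much smaller.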

Figure 1.

Stimulus layout for the guessing task, expected values, and associated prediction errors. a, b, Each guessing trial began with the presentation of the backside of eight playing cards (top). Initially, volunteers had to place money on individual playing cards. In some trials, money could be placed on the corners of four adjacent cards (a) and in others on a single card only (b). This manipulation allowed us to control reward probability. The cards were flipped 4.2 s after placing the bet (a, b, bottom). Seven of eight cards were black, and the remaining one was a red ace (a, b, bottom). If the red ace was selected (a), the volunteer gained the amount of money, and otherwise lost the money (b). c–g, Expected values and associated prediction errors in this paradigm. The actual values taken from Table 1 are convolved with a Gaussian for a better visual comparison with BOLD responses. The total expected value is shown in c. The gain-related expected value is depicted in d with the associated prediction error (e). The loss-related expected value is depicted in f with the associated prediction error (g). €, Euro.

In summary, this can be seen as a 2 × 2 × 2 factorial design with the factors probability (high or low), magnitude (one or five Euro) and outcome (gain or loss), resulting in eight different conditions.
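As a minimal illustration of this factorial structure, the sketch below simply enumerates the eight conditions (numbered to match Table 1, using the realized gain probabilities); it is not taken from the original stimulus code.

```python
from itertools import product

# Sketch: the 2 x 2 x 2 factorial structure (outcome x magnitude x probability).
conditions = [
    {"gain_probability": p, "magnitude_euro": m, "outcome": o}
    for o, m, p in product(("gain", "loss"), (1, 5), (0.26, 0.66))
]
assert len(conditions) == 8
for number, condition in enumerate(conditions, start=1):
    print(number, condition)
```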

Initial credit was set to 20 Euro and continuously displayed on the screen. The money presented was either a one-Euro coin (Fig. 1a) or a five-Euro bill (Fig. 1b). Volunteers were able to place their bet using a magnetic resonance (MR)-compatible optical mouse for 3034 ms. After placing the bet, the display was kept constant during an additional anticipation period of 4207 ms, after which all cards were flipped, and the volunteers could immediately see the outcome of the trial. Another 2015 ms later, the continuously visible credit display was updated, and another 3006 ms (in 171 trials) or 12262 ms (in 29 trials) later, the next trial began. This resulted in 171 trials with an interstimulus interval (ISI) of 12.26 s and 29 trials with a longer ISI (21.46 s), introducing 14.6% null events.

Seven of eight cards were black, the remaining one was a red ace (Fig. 1a,b, bottom row). If the red ace was touched by the bet (Fig. 1a, bottom), the volunteer gained the amount of money and otherwise lost the money (Fig. 1b, bottom). The order of trials was pseudorandomized and predetermined (i.e., the volunteer had no influence on the probability and the magnitude of each individual trial).

Before entering the scanner, subjects received a standardized verbal description of the task and completed a practice session, including all possible combinations of probability, magnitude, and outcome.

Volunteers were told explicitly before the experiment that they would receive their balance in cash. In case of a negative balance, they were told that the amount would be deducted from the payment offered for participating in this study. Volunteers ended the game with a negative balance of eight Euro, which was waived.

MRI acquisition.

MR scanning was performed on a 3 T MR scanner (Siemens Trio; Siemens, Erlangen, Germany) with a standard head coil. Thirty-eight contiguous axial slices (slice thickness, 2 mm) were acquired using a gradient echo echo-planar T2*-sensitive sequence (repetition time, 2.22 s; echo time, 25 ms; flip angle, 80°; matrix, 64 × 64; field of view, 192 × 192 mm). High-resolution (1 × 1 × 1 mm voxel size) T1-weighted structural MRI was acquired for each volunteer using a three-dimensional FLASH sequence.

A liquid crystal display video-projector back-projected the stimuli on a screen positioned behind the head of the participant. Subjects lay on their backs within the bore of the magnet and viewed the stimuli comfortably via a 45° mirror placed on top of the head coil that reflected the images displayed on the screen. To minimize head movement, all subjects were stabilized with tightly packed foam padding surrounding the head.

The task presentation and the recording of behavioral responses were performed with Cogent 2000v1.24 (http://www.vislab.ucl.ac.uk/Cogent/index.html) and Matlab 6.5 (MathWorks, Natick, MA).

Image processing.

Image processing and statistical analyses were performed using SPM2 (www.fil.ion.ucl.ac.uk/spm). All volumes were realigned to the first volume, spatially normalized (Friston et al., 1995) to an echoplanar imaging template in a standard coordinate system (Evans et al., 1994), resampled to a voxel size of 3 × 3 × 3 mm and finally smoothed using a 10 mm full-width at half-maximum isotropic Gaussian kernel.

Statistical analysis.

All eight conditions of the paradigm were modeled separately in the context of the general linear model as implemented in SPM2. We used two different models to characterize the data. In the first model, the anticipation and the outcome phase were modeled as individual hemodynamic responses (beginning of a trial and 7241 ms after trial onset), leading to 16 regressors (2 × 2 × 2 conditions times 2 regressors). The anticipation-related response was modeled as a small box-car with a duration of 7241 ms, and the outcome-related response was modeled as a single hemodynamic response. An additional covariate was incorporated into the model, representing the anticipation response modulated by the total amount of mouse movements in the choice period of this trial. This ensured that movement-related activation during the early trial period is modeled independently from the regressors of interest (Knutson et al., 2005).
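For a concrete picture of this first model, the following sketch builds the two regressors per condition described above (a 7.241 s box-car for anticipation and a single event at outcome) plus a movement-modulated nuisance regressor. It is an illustration only, not SPM2 code: the repetition time comes from the acquisition parameters, while the scan count, trial onsets, mouse-movement values, and the double-gamma HRF approximation are assumptions.

```python
import numpy as np
from scipy.stats import gamma

# Illustration only (not SPM2): two regressors per condition -- a 7.241 s
# box-car for anticipation and a single event at outcome -- plus a
# movement-modulated nuisance regressor. Onsets and scan count are assumptions.
TR, n_scans, dt = 2.22, 400, 0.1
t_hr = np.arange(0, n_scans * TR, dt)               # high-resolution time grid (s)

def hrf(t):
    """Common double-gamma approximation of the canonical HRF (not SPM's exact one)."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def regressor(onsets, duration, amplitudes=None):
    """Convolve a (possibly modulated) box-car with the HRF and sample at each TR."""
    amplitudes = np.ones(len(onsets)) if amplitudes is None else amplitudes
    stick = np.zeros_like(t_hr)
    for onset, amp in zip(onsets, amplitudes):
        stick[(t_hr >= onset) & (t_hr < onset + max(duration, dt))] += amp
    convolved = np.convolve(stick, hrf(np.arange(0, 32, dt)))[: len(t_hr)]
    return convolved[np.searchsorted(t_hr, np.arange(n_scans) * TR)]

onsets = np.arange(0, n_scans * TR - 30, 21.46)      # hypothetical trial onsets
mouse_px = np.random.default_rng(0).uniform(200, 350, len(onsets))

anticipation = regressor(onsets, 7.241)
outcome      = regressor(onsets + 7.241, 0.0)
movement     = regressor(onsets, 7.241, mouse_px - mouse_px.mean())  # mean-centered
X = np.column_stack([anticipation, outcome, movement, np.ones(n_scans)])
print(X.shape)                                       # (400, 4): one condition's regressors plus a constant
```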

To average the poststimulus BOLD response for display purposes, we defined a second model using a finite impulse response (FIR) basis function with a bin width of 2 s, modeling a total of 10 bins from 0 to 20 s poststimulus. This results in 10 regressors for each condition and 80 regressors for all conditions. Intuitively, this basis set considers each time bin after stimulus onset individually to model the BOLD response and can capture any possible shape of response function up to a given frequency limit. In this model, the parameter estimate for each time bin represents the average BOLD response at that time. In Figures 2–5, we therefore labeled the y-axis as “parameter estimates a.u.” Importantly, these parameter estimates are directly proportional to the BOLD signal. This additional analysis was only conducted to display activation time courses.
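A minimal sketch of such an FIR basis, with an assumed scan grid and assumed trial onsets, follows; the analyses themselves were run in SPM2.

```python
import numpy as np

# Sketch of an FIR basis for one condition: 10 bins of 2 s covering 0-20 s
# post-stimulus. Scan count and trial onsets are assumptions for illustration.
TR, n_scans, bin_width, n_bins = 2.22, 400, 2.0, 10
scan_times = np.arange(n_scans) * TR

def fir_design(onsets):
    """One column per post-stimulus bin; a scan gets a 1 if its delay falls in that bin."""
    X = np.zeros((n_scans, n_bins))
    for onset in onsets:
        delay = scan_times - onset
        for b in range(n_bins):
            X[(delay >= b * bin_width) & (delay < (b + 1) * bin_width), b] = 1.0
    return X

onsets = np.arange(0, n_scans * TR - 30, 21.46)   # hypothetical trial onsets
X_fir = fir_design(onsets)
print(X_fir.shape)   # (400, 10); the full model stacks 10 columns for each of the 8 conditions
```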

Figure 2.

Activation during the anticipation of monetary rewards overlaid on a template T1-weighted MR image at p < 0.001 (uncorrected). Parameter estimates from the FIR model (i.e., averaged activation time courses) for each peak voxel are plotted in bins of 2 s for all 42 volunteers. The figure background shows a grayscale-coded representation of the envelope of both expected hemodynamic responses (i.e., anticipation and outcome; high values, white; low values, dark). The bright (i.e., white) area ∼12 s after stimulus onset represents the predicted peak of the response evoked by the outcome related response; the gray plateau from ∼6–10 s is related to the BOLD response elicited by reward anticipation. a, Main effect of magnitude showing stronger BOLD signal changes for trials with five Euro as opposed to one Euro. Parameter estimates from the FIR model (i.e., time courses) are averaged across outcomes and reward probability, showing a clear main effect of reward magnitude (left ventral striatum x, y, z, −12, 3, 0 mm). b, Main effect of probability showing stronger BOLD signal for trials with high reward probability. Parameter estimates from the FIR model (i.e., time courses) are averaged across reward magnitude and outcomes (left ventral striatum x, y, z, −12, 15, −3 mm). 5 €, Five Euro. R, Right side of the brain.

Data were analyzed for each subject individually (first-level analysis) and for the group (second-level analysis). At the single-subject level, we applied a high-pass filter with a cutoff of 120 s to remove baseline drifts. All 16 parameter estimate images from the first analysis and all 80 parameter estimate images from the second (FIR) analysis were subsequently entered into a random effects analysis. The problem of nonindependent data within subjects, as well as error variance heterogeneity, was addressed by performing a nonsphericity correction.

For all analyses, the threshold was set to p < 0.05, corrected for multiple comparisons. For reasons of brevity, we focus our report on subcortical and frontal areas. Guided by previous data, correction for hypothesized regions was based on volumes of interest. In particular, correction for the ventral striatum was based on an 18-mm-diameter sphere centered on x, y, z: ±15, 9, −9 mm (O'Doherty et al., 2004). Magnitude-dependent activation during the anticipation phase was expected in the orbital frontal cortex (Knutson et al., 2005), and correction was based on a 60-mm-diameter sphere centered on x, y, z: ±21, 42, −9 mm.

The involvement of the amygdala in predicting aversive events (i.e., losses) has been reported previously (Glascher and Buchel, 2005), and correction for multiple comparisons was based on the amygdala regions of interest provided by the Anatomical Automatic Labeling project at http://www.cyceron.fr/freeware/ (Tzourio-Mazoyer et al., 2002). Correction for hypothesized ventromedial prefrontal cortex activation (Knutson et al., 2003) was based on an anatomically defined 36-mm-diameter sphere centered between the genu of the corpus callosum and the anterior pole (center: x, y, z = 0, 52, −3).
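As an illustration of how such a spherical volume of interest can be constructed, the sketch below builds the 18-mm-diameter ventral striatal sphere on a stand-in 3 mm grid; it is not the SPM small-volume-correction machinery itself, and the grid bounds are assumptions.

```python
import numpy as np

# Sketch: a spherical volume of interest such as the 18-mm-diameter sphere
# centered on x, y, z = 15, 9, -9 mm. The grid bounds below are a stand-in for
# a 3 mm MNI-space voxel grid, not the actual template used in SPM2.
voxel_size = 3.0
grid = np.mgrid[-78:79:voxel_size, -112:77:voxel_size, -50:86:voxel_size]  # mm coordinates

def sphere_mask(center_mm, diameter_mm):
    center = np.asarray(center_mm, dtype=float).reshape(3, 1, 1, 1)
    distance = np.sqrt(((grid - center) ** 2).sum(axis=0))
    return distance <= diameter_mm / 2.0

ventral_striatum_right = sphere_mask((15, 9, -9), 18)
ventral_striatum_left  = sphere_mask((-15, 9, -9), 18)
print(int(ventral_striatum_right.sum()), "voxels in the right-sided sphere")
```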

We were interested in regions showing signal changes for prediction during the anticipation phase and a prediction error during the outcome phase. This commonality constraint was incorporated by using a conjunction analysis comprising the contrasts for prediction and prediction error. Intuitively, the ensuing conjunction analysis only shows areas in which both contrasts individually reach significance (Nichols et al., 2005).
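In essence, the minimum-statistic conjunction amounts to thresholding the voxelwise minimum of the two statistic maps; the toy sketch below illustrates this with random data standing in for the real contrast maps.

```python
import numpy as np

# Toy illustration of a minimum-statistic conjunction (Nichols et al., 2005):
# a voxel survives only if BOTH contrasts exceed the threshold individually,
# i.e., if the voxelwise minimum of the two maps exceeds it. Random data here.
rng = np.random.default_rng(1)
z_prediction       = rng.normal(size=(53, 63, 46))
z_prediction_error = rng.normal(size=(53, 63, 46))

z_threshold = 3.09                       # roughly p < 0.001, uncorrected, one-sided
conjunction = np.minimum(z_prediction, z_prediction_error) > z_threshold
print(int(conjunction.sum()), "voxels significant in both contrasts")
```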

Prediction error model.

In fMRI studies of reinforcement learning, predictions and prediction errors have been used to model fMRI data (O'Doherty et al., 2003). The prediction error represents the difference between the actual outcome and the prediction. In reinforcement learning, this prediction error is then used to update future predictions. Although in guessing tasks there is nothing to be learned per se, the concept of predictions and prediction errors can still be applied. Using a guessing task with fixed probabilities, we can express the prediction error δ as follows:

$$V = p$$
$$\delta = R - V$$

where V is the prediction, R is the actual outcome, and p is the gain probability. This model can now be extended to also incorporate reward magnitude x into the prediction term V, which then becomes the expected value as follows:

$$\mathrm{EV} = p \cdot x - (1 - p) \cdot x$$
$$\delta = R - \mathrm{EV}$$

where EV indicates the predicted outcome (i.e., expected value). The prediction error δ is now the difference between actual outcome R and expected value EV.

EV can be further divided into gain-related (EV+) and loss-related (EV−) components as follows:

$$\mathrm{EV}^{+} = p \cdot x$$
$$\mathrm{EV}^{-} = -(1 - p) \cdot x$$

It should be noted that the concept of expected value is unable to explain some phenomena in human choice behavior, and thus more general forms of the value function have been derived (Edwards, 1955; Kahneman and Tversky, 1991). In these models, x and p do not enter into the valuation directly but rather as nonlinear functions (Machina, 1987; Kahneman and Tversky, 2000; Trepel et al., 2005). However, the deviation from linearity of these functions is most pronounced at the extremes. Analogous to previous studies (Knutson et al., 2005), we therefore assumed local linearity and based the predictions on the expected value to explain BOLD responses in the human brain.
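A minimal sketch of these definitions, reproducing the entries of Table 1 for one condition, follows; it is purely illustrative.

```python
# Sketch of the definitions above: EV+ = p*x, EV- = -(1 - p)*x, EV = EV+ + EV-,
# and the prediction error delta = R - EV (likewise R - EV+ and R - EV-).
# With p = 0.26 or 0.66 this reproduces Table 1 up to floating-point rounding.

def value_terms(p, x, outcome):
    ev_gain = p * x                 # EV+
    ev_loss = -(1.0 - p) * x        # EV-
    ev_total = ev_gain + ev_loss    # EV
    return {
        "EV total": ev_total,  "PE (EV total)": outcome - ev_total,
        "EV+": ev_gain,        "PE (EV+)": outcome - ev_gain,
        "EV-": ev_loss,        "PE (EV-)": outcome - ev_loss,
    }

# Condition 1 of Table 1: p = 0.26, one Euro at stake, trial won (outcome +1).
for name, value in value_terms(0.26, 1, outcome=+1).items():
    print(f"{name:14s} {value:+.2f}")   # -0.48, +1.48, +0.26, +0.74, -0.74, +1.74
```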

Dynamic model using trial-based probabilities.

The true average probability of all trials was different from what could be inferred from the visual card layout. We therefore created a model that iteratively updates the probabilities for the high- and low-probability conditions on a trial-by-trial basis. At the beginning of the experiment (i.e., before the first gain trial), the graphically visible probabilities (12.5 and 50%) were used. Figure 6a shows the traces of both (high and low) probabilities over the course of the experiment. This dynamic probability trace was then used to calculate trial-specific gain- and loss-related expected values and prediction errors, which in turn were used to explain the fMRI data. The basis functions used for the anticipation and the outcome regressor were identical to the original model. However, in contrast to the original model, we entered gain- and loss-related expected value and the respective prediction errors as parametric modulations. In analogy to the first analysis, the parameter estimates for EV+, EV−, and the respective prediction errors were subsequently entered into a random effects analysis.
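The sketch below illustrates one way such a trial-by-trial update can be computed, namely as the running empirical gain frequency per condition, initialized at the graphical values. Both the exact update rule and the simulated trial sequence are assumptions, not the original implementation.

```python
import random

# Sketch: trial-by-trial gain-probability estimates per condition, initialized
# at the graphical values (12.5% and 50%) and updated as the empirical gain
# frequency of all preceding trials of that condition. The update rule and the
# simulated trial sequence are assumptions for illustration only.
random.seed(0)
graphical = {"low": 0.125, "high": 0.50}
realized  = {"low": 0.26,  "high": 0.66}
wins   = {"low": 0, "high": 0}
counts = {"low": 0, "high": 0}
traces = {"low": [], "high": []}

for trial in range(200):
    condition = random.choice(["low", "high"])
    # probability assumed *before* this trial's outcome is revealed
    p_hat = wins[condition] / counts[condition] if counts[condition] else graphical[condition]
    traces[condition].append(p_hat)
    won = random.random() < realized[condition]
    wins[condition] += won          # bool adds as 0/1
    counts[condition] += 1

print(round(traces["low"][-1], 3), round(traces["high"][-1], 3))  # drift toward ~0.26 / ~0.66
```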

Figure 6.

a, Dynamic probabilities during the experiment. Each point on the dashed line denotes the gain probability up to this trial for the low-probability (red; one card selected) and high-probability (green; four cards selected) trials. The solid line depicts the overall probability (i.e., as calculated over all trials). b, Ventral striatal activation showing EV+-related activation during anticipation and an EV+-related prediction error during outcome, overlaid on a coronal T1-weighted MR image at p < 0.001 (uncorrected). The left panel shows the activation pattern from Figure 5a, using the actual gain probabilities (26 and 66%) over the course of the experiment. The right panel shows the result obtained when using the time-varying probabilities from the model depicted in a. c, The same analysis for EV−-related activation patterns (compare Fig. 5). 5 €, Five Euro. R, Right side of the brain.

Results

Behavioral data

We continuously monitored all mouse movements during the choice period and could therefore compare the amount of mouse movement between conditions. We observed a negative main effect of reward magnitude (Z = 2.5; p < 0.05), i.e., more mouse movement for one-Euro trials (294.1 ± 18.0 pixels; mean ± SEM) than for five-Euro trials (276.1 ± 17.5 pixels; mean ± SEM). Not surprisingly, more mouse movement was also observed (Z = 9.1; p < 0.05) for low-probability trials (318.3 ± 17.3 pixels; mean ± SEM) compared with high-probability trials (251.9 ± 17.6 pixels; mean ± SEM), attributable to the greater degrees of freedom in placing the bet in low-probability trials. No significant interaction was observed.
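The report does not spell out which test produced these Z values; as one plausible reconstruction, the sketch below runs a Wilcoxon signed-rank test on simulated per-subject mean movement values. Both the test choice and the data are assumptions for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustration only: comparing per-subject mean mouse movement (pixels) between
# one-Euro and five-Euro trials. The data are simulated to roughly match the
# reported group means; the choice of a Wilcoxon signed-rank test is our assumption.
rng = np.random.default_rng(42)
one_euro  = rng.normal(294.1, 40.0, size=42)   # hypothetical per-subject means
five_euro = rng.normal(276.1, 40.0, size=42)

statistic, p_value = wilcoxon(one_euro, five_euro)
print(f"Wilcoxon statistic = {statistic:.1f}, p = {p_value:.3f}")
```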

Anticipation phase

All eight conditions (all possible combinations of two reward probabilities, two reward magnitudes, and two outcomes; i.e., gain/loss) of the paradigm were modeled separately. To test for signal differences during anticipation, parameter estimates for the first hemodynamic response (i.e., modeling the anticipation phase of each trial) were compared. In addition, the total amount of mouse movements was modeled as a condition-specific nuisance covariate removing movement-related signal changes.

Reward magnitude-related activation

Bilateral ventral striatum showed a main effect of magnitude (i.e., stronger BOLD signal for trials with five Euro as opposed to one Euro) (Fig. 2a). The peak of this activation was located in bilateral ventral striatum (peak: x, y, z: −12, 3, 0 mm, Z = 5.6; peak: x, y, z: 12, 6, 0 mm, Z = 5.2; both p < 0.05, corrected). Other cortical areas showing a main effect of magnitude during anticipation comprised bilateral anterior insula (peak x, y, z: −33, 21, −6 mm, Z = 5.5; peak: x, y, z: 33, 24, −6 mm, Z = 6.7; both p < 0.05, corrected) and bilateral anterior orbitofrontal cortex (peak: x, y, z: −39, 57, 3 mm, Z = 4.0; peak: x, y, z: 36, 60, −3 mm, Z = 4.6; both p < 0.05, corrected).

Reward probability-related activation

The bilateral ventral striatum showed a main effect of probability (i.e., stronger BOLD signal for more likely gains) (Fig. 2b). The peak of this activation was located in the anterior ventral striatum (peak: x, y, z: −12, 15, −3 mm, Z = 3.4; peak: x, y, z: 15, 15, −6 mm; Z = 4.2; both p < 0.05, corrected). Additional reward probability-related activation was observed in ventromedial prefrontal cortex (peak: x, y, z: 3, 51, −6 mm; Z = 3.3; p < 0.05, corrected).

Main effect of gain-related expected value

BOLD responses that strongly covaried with the linear model of EV+ (Fig. 1d), but not total EV (Fig. 1c), were observed in bilateral ventral striatum (peak: x, y, z: 12, 9, −3 mm, Z = 5.2; peak: x, y, z: −12, 6, −3 mm, Z = 5.2; both p < 0.05, corrected) (Fig. 3a) and the right orbitofrontal cortex (peak: x, y, z: 36, 63, 0 mm; Z = 4.8; p < 0.05, corrected). We replicated this important finding in an additional cohort of 24 volunteers. Peak signal changes that correlate with EV+ were observed in bilateral ventral striatum (peak: x, y, z: 12, 9, −3 mm, Z = 5.8; peak: x, y, z: −12, 6, −3 mm, Z = 5.3; both p < 0.05, corrected) (Fig. 3b).

Figure 3.

Activations expressing gain-related expected value overlaid on a template T1-weighted MR image at p < 0.001 (uncorrected). a, Bilateral ventral striatum shows BOLD signal changes covarying with a linear model of EV+. Parameter estimates from the FIR model (i.e., averaged activation time courses) are shown for the right-sided peak (x, y, z, 12, 9, −3 mm) in 42 volunteers. b, In addition, we replicated this analysis in a second cohort of volunteers (n = 24). As in the first cohort, bilateral ventral striatum shows BOLD signal changes covarying with a linear model of EV+. The parameter estimates of the FIR model (i.e., activation time courses) are shown for the right-sided peak (x, y, z, 12, 9, −3 mm) for the second cohort (n = 24). R, Right side of the brain.

Outcome phase

The outcome phase was defined as a BOLD response evoked by neuronal activity at the moment when the result of the trial was revealed (i.e., the cards were flipped).

Gain-related responses

Gain-related activation (i.e., gain > loss) was observed in bilateral ventral striatum (peak: x, y, z: 12, 9, −3 mm, Z = 11.8; peak: x, y, z: −12, 9, −3 mm; Z = 11.7; both p < 0.05, corrected) and in bilateral orbitofrontal cortex (peak: x, y, z: 48, 39, −18 mm, Z = 4.7; peak: x, y, z: −45, 45, −15 mm, Z = 5.7; both p < 0.05, corrected).

Prediction error-related responses

Because we observed ventral striatal responses during anticipation that were correlated with the linear model of gain-related expected value, we tested the hypothesis that a prediction error signal is computed as the difference between outcome and EV+ (see Materials and Methods) and therefore created a contrast according to mean corrected predictions from this model (Table 1). Most importantly, we were interested in identifying areas coexpressing both patterns, i.e., signal changes correlated with EV+ during the anticipation phase (Fig. 1d) and signal changes correlated with the prediction error based on EV+ during the outcome phase (Fig. 1e) as predicted by nonhuman primate data (Schultz et al., 1997). A conjunction analysis was used to identify such areas. Based on this conjunction analysis, we detected signal changes in the bilateral ventral striatum (peak: x, y, z, −12, 6, −3 mm, Z = 5.2; peak: x, y, z, 12, 9, −3 mm, Z = 5.2; both p < 0.05, corrected) that closely resemble EV+ during reward anticipation and an EV+-based prediction error signal during the outcome phase (Fig. 4a). We replicated this important finding in an additional cohort of 24 volunteers (Fig. 4b). Voxels that coexpress signal changes related to EV+ during anticipation and the related prediction error during outcome were observed in bilateral ventral striatum (peak: x, y, z, 12, 9, −3 mm, Z = 5.8; peak: x, y, z, −12, 6, −3 mm, Z = 5.3; both p < 0.05, corrected) (Fig. 4b). Interestingly, the time course in the left ventral striatum (Fig. 4c) shows more pronounced deactivations for loss trials (cyan) than activations for gain trials (magenta) in accordance with the EV+-based prediction error model (Fig. 1e). Because the actual gain probabilities (26 and 66%) were slightly higher compared with the graphically expected probabilities (12.5 and 50%), we replicated this result with an analysis using dynamic probabilities on a trial-by-trial basis (see Materials and Methods for details). This analysis showed activation patterns in the ventral striatum that were almost indistinguishable from the original analysis (peak: x, y, z, 12, 9, −3 mm, Z = 4.4; peak: x, y, z, −12, 3, −3 mm, Z = 4.7; both p < 0.05, corrected) (Fig. 6b).

Table 1.

Expected values and prediction errors for all different conditions

Condition                     1       2       3       4       5       6       7       8
Gain probability              0.26    0.66    0.26    0.66    0.26    0.66    0.26    0.66
Magnitude (Euro)              1       1       5       5       1       1       5       5
Outcome (Euro)                1       1       5       5       −1      −1      −5      −5
EV total                      −0.48   0.32    −2.40   1.60    −0.48   0.32    −2.40   1.60
Prediction error (EV total)   1.48    0.68    7.40    3.40    −0.52   −1.32   −2.60   −6.60
EV+                           0.26    0.66    1.30    3.30    0.26    0.66    1.30    3.30
Prediction error (EV+)        0.74    0.34    3.70    1.70    −1.26   −1.66   −6.30   −8.30
EV−                           −0.74   −0.34   −3.70   −1.70   −0.74   −0.34   −3.70   −1.70
Prediction error (EV−)        1.74    1.34    8.70    6.70    −0.26   −0.66   −1.30   −3.30

Mean-corrected versions of these vectors were used as linear contrasts in subsequent SPM analyses.
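As a small illustration of this step, the sketch below converts the EV+ prediction error row of Table 1 into a mean-corrected contrast vector across the eight outcome regressors; it is not the original analysis script.

```python
import numpy as np

# Sketch: turning one Table 1 row (prediction error based on EV+, conditions 1-8)
# into a mean-corrected contrast vector over the eight outcome regressors.
pe_ev_plus = np.array([0.74, 0.34, 3.70, 1.70, -1.26, -1.66, -6.30, -8.30])
contrast = pe_ev_plus - pe_ev_plus.mean()
print(np.round(contrast, 2))                      # weights used in the linear contrast
print(bool(np.isclose(contrast.sum(), 0.0)))      # mean correction makes the weights sum to zero
```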

Figure 4.

Ventral striatal activation showing EV+-related activation during anticipation and an EV+-related prediction error during outcome (conjunction analysis) overlaid on a coronal T1-weighted MR image at p < 0.001 (uncorrected). a, Bilateral ventral striatum coexpressed EV+-related predictions and the ensuing prediction error. b, This was replicated in a second cohort (n = 24). c, Parameter estimates from the FIR model (i.e., averaged activation time courses) for five-Euro trials are plotted in bins of 2 s from the peak voxel (x, y, z, −12, 6, −3 mm) for the first cohort (n = 42) (a). The gray shades in the background are pictorial representations of the expected hemodynamic responses for anticipation and outcome (compare Fig. 2). 5 €, Five Euro. R, Right side of the brain.

Loss-related expected value and the associated prediction error

Analogous to our model-driven analysis for EV+ and the associated prediction error, the same analysis was performed for the loss-related expected value, EV−. Areas showing both EV−-related signal changes during anticipation (Fig. 1f) and an EV−-associated prediction error response during the outcome phase (Fig. 1g) were again identified using a conjunction analysis. In contrast to EV+, EV−-related activations showed a maximum in bilateral amygdala (peak x, y, z, 30, −3, −12 mm, Z = 5.4; peak x, y, z, −27, −3, −18 mm, Z = 3.9; both p < 0.05, corrected) (Fig. 5a). Again, this finding was replicated in an independent cohort of 24 volunteers (amygdala peak x, y, z, 27, −3, −18 mm, Z = 4.3; peak x, y, z, −24, −3, −15 mm, Z = 4.1; both p < 0.05, corrected) (Fig. 5b). Compared with the prediction error based on EV+ in the ventral striatum, the time course in the amygdala (Fig. 5c) shows less pronounced or no deactivations for loss trials (cyan), in accordance with the EV−-based prediction error model (Fig. 1g). Analogous to the analysis of EV+-related responses, we repeated this analysis using dynamic probabilities on a trial-by-trial basis. This analysis showed activation patterns in the amygdala that were similar to those of the original analysis (peak x, y, z, 27, 0, −18 mm, Z = 4.8; peak x, y, z, −27, −3, −18 mm, Z = 3.3; both p < 0.05, corrected) (Fig. 6c).

Figure 5.

Responses in the amygdala in relationship to loss-related expected value and the associated prediction error (conjunction analysis) at p < 0.001. a, Bilateral amygdala showed BOLD signal changes that resembled EV− during reward anticipation and an EV−-based prediction error signal during the outcome phase. b, This was replicated in a second cohort (n = 24). c, Parameter estimates from the FIR model (i.e., averaged activation time courses) for five-Euro trials are shown for the right amygdala (x, y, z, 27, 0, −15) in bins of 2 s for 42 volunteers. The gray shades in the background are pictorial representations of the expected hemodynamic responses for anticipation and outcome (compare Fig. 2). 5 €, Five Euro. R, Right side of the brain.

Discussion

We systematically varied the characteristics of reward-related processing using a factorial design that allowed for all possible combinations of reward magnitude, reward probability, and outcome, in combination with fMRI. In two large cohorts of healthy volunteers, we were able to show ventral striatal responses coding expected value (i.e., the product of reward probability and magnitude) during anticipation. Importantly, ventral striatal responses did not express the full range of expected value but only gain-related expected value (EV+). At reward delivery, the same area showed a reward probability- and magnitude-dependent prediction error signal, parsimoniously modeled as the difference between actual outcome and EV+. Conversely, loss-related expected value (EV−) and the associated prediction error were identified in the amygdala.

Task

Most previous fMRI studies have either varied reward magnitude (Knutson et al., 2000, 2001a,b; Delgado et al., 2003) or reward predictability (Berns et al., 2001; Abler et al., 2006) or used a fixed combination of probability and magnitude (Rogers et al., 1999; Ernst et al., 2004; Matthews et al., 2004; Coricelli et al., 2005; Dreher et al., 2006). In most of these studies, volunteers chose between different gambles, and the designs therefore did not include the combination of low gain probability and low magnitude, because this combination is less lucrative than the others and normal volunteers would not choose such a gamble. More recent studies investigated different magnitudes and probabilities but restricted the analysis to the anticipation phase (Knutson et al., 2005) or did not use a full factorial design (Dreher et al., 2006). Based on these studies, we decided to independently manipulate anticipated reward magnitude and probability by presenting guessing scenarios with fixed probabilities and magnitudes. As in previous studies (Elliott et al., 2004; Zink et al., 2004; Knutson et al., 2005), volunteers were engaged in the task. Differences in motor behavior were included in the statistical model and are thus unlikely to confound the observed effects (Knutson et al., 2005).

Reward anticipation

During the anticipation phase, we were able to demonstrate a robust relationship between ventral striatal activation and reward magnitude. This finding is in accord with previous reports showing magnitude-dependent activation in the ventral striatum (Knutson et al., 2003). In addition, we observed a weaker main effect of probability showing more activation in the ventral striatum during the anticipation of more probable rewards consistent with a recent fMRI study (Abler et al., 2006). It is not surprising that these responses were observed in the ventral striatum rather than the midbrain, because the BOLD response reflects presynaptic input and processing (Logothetis et al., 2001). Therefore, spiking activity of dopaminergic midbrain neurons is expected to change the BOLD signal in areas to which these neurons project, such as the ventral striatum.

Prediction error-related responses

The observation that ventral striatal responses are stronger after delivery of a less likely gain is in agreement with the hypothesized role of the ventral striatum in encoding a reward-related prediction error. Previous studies have suggested that ventral striatal responses are correlated with a prediction error signal by either using Pavlovian or instrumental conditioning tasks (McClure et al., 2003; O'Doherty et al., 2003, 2004; Ramnani et al., 2004) or showing that the omission of reward leads to a deactivation in the ventral striatum (Pagnoni et al., 2002).

Our study confirms recent data (Abler et al., 2006) showing that, in the ventral striatum, the positive response after reward delivery (i.e., gain trials) was greater if the reward was less likely to occur. Importantly, our data also extend these findings in two ways: first, by showing a stronger deactivation in loss trials when the loss was less likely to occur; and second, by showing a decrease of the BOLD signal below baseline in loss trials. As a consequence of omitted but predicted rewards, a decrease in neuronal firing has been observed in dopaminergic midbrain neurons in nonhuman primates (Schultz et al., 1997). The ventral striatum is one target of those dopaminergic midbrain neurons, and one might expect less presynaptic input and processing in the ventral striatum after omitted rewards. This reduction of presynaptic input can lead to a negative BOLD signal, as has been shown recently (Shmuel et al., 2006).

Prediction error signal scaled by magnitude

Primate data have suggested that dopaminergic midbrain neurons should be able to signal the magnitude of a prediction error (Tobler et al., 2005). In agreement with these data, we observed a prediction error signal that was modulated not only by the probability of the reward but also by its magnitude. Intuitively, this modulation is biologically plausible, because it is important for an organism to register whether an error in prediction concerns a small or a large reward. We note that in a previous study, a magnitude-related outcome signal was observed in the dorsal rather than the ventral striatum (Delgado et al., 2003). However, the investigation of a prediction error signal was not the goal of that study.

Loss-related expected value and prediction error

We found a colocalization of EV− during anticipation and the associated prediction error during outcome in the amygdala, in accord with previous data (Breiter et al., 2001; Kahn et al., 2002; Glascher and Buchel, 2005) showing that the amygdala is involved in expressing predictions of aversive events.

Another study on classical conditioning using appetitive and aversive outcomes has shown the amygdala to play a role in signaling appetitive prediction errors and the lateral orbitofrontal and genual anterior cingulate cortex in prediction errors concerning aversive outcomes (Seymour et al., 2005), which seems to disagree with our findings.

However, this might be related to differences in the tasks used. In the study by Seymour et al. (2005), two specific conditioned stimuli (CSs) were each predictive of either an appetitive (i.e., pain relief) or an aversive (i.e., pain exacerbation) outcome; the alternative outcome was no change in state. In contrast, our paradigm used mixed gambles, i.e., a certain stimulus configuration could be considered a single CS that predicts both a possible appetitive (i.e., gain) and a possible aversive (i.e., loss) outcome. A gambling task analogous to the learning paradigm of Seymour et al. (2005) would have been one in which the outcome was either a gain versus nothing or a loss versus nothing. Such a task has been used previously (Knutson et al., 2005), and the ventral striatum was found to express expected value. However, it should be noted that in designs in which the alternative to an appetitive outcome is no change in state, total EV and EV+ are identical. Therefore, such a paradigm cannot be used to disentangle the two possibilities.

Model for prediction error signal

Our data show that the same parts of the ventral striatum that signal gain-related expected value during reward anticipation code the prediction error at outcome. The peak activations for EV+ and the related prediction error are almost identical, and the activated clusters overlap at p < 0.001. Moreover, our data lend support to the notion that not total EV but only EV+ represents the “prediction” against which outcomes are compared that generate the ventral striatal prediction error signal.

A recent primate study (Bayer and Glimcher, 2005), as well as a study on Parkinson's disease (PD) patients (Frank et al., 2004), has already hinted at the possibility that only gain-related predictions and the associated prediction errors might be expressed in the ventral striatum. The primate study showed that dopamine spike rates in the postreward interval seem to encode only positive reward prediction errors, and dopamine was therefore attributed to the positive reward prediction error term of reinforcement learning models (Bayer and Glimcher, 2005). In addition, it has been shown that PD patients, who have a dopaminergic deficit in the midbrain, are better at learning to avoid choices that lead to negative outcomes than at learning from positive outcomes. Dopamine medication reversed this bias and made patients more sensitive to positive than negative outcomes (Frank et al., 2004). This finding might be related to our observation that the ventral striatum, which receives dopaminergic input from the midbrain, predominantly expresses gain-related predictions.

With respect to the neurotransmitter system involved in loss-related predictions, it has recently been proposed that the serotonergic system, which projects directly to the ventral striatum, is involved in this effect (Daw et al., 2002). However, an indirect effect through the amygdala, as would be expected from our data, is equally likely, given the presence of 5-HT receptors in the amygdala (Aggleton, 2000).

In summary, our data represent evidence for two dissociable value systems for gains and losses. The ventral striatum generates value predictions based on possible gains against which actual outcomes are compared. Conversely, the amygdala makes predictions concerning possible losses and, similar to the ventral striatum, compares these predictions against actual outcomes.

Footnotes

J.Y. was supported by the National Council of Technological and Scientific Development–CNPq, Brazil. J.G. was supported by the Studienstiftung des Deutschen Volkes. C.B. was supported by Volkswagenstiftung, the German Bundesministerium für Bildung und Forschung, and the Deutsche Forschungsgemeinschaft. We thank Eszter Schoell for helpful suggestions on a previous draft of this manuscript. We declare that we do not have any competing financial interest.

References

• Abler B, Walter H, Erk S, Kammerer H, Spitzer M. Prediction error as a linear function of reward probability is coded in human nucleus accumbens. NeuroImage. 2006;31:790–795. doi: 10.1016/j.neuroimage.2006.01.001.
• Aggleton JP. The amygdala. A functional analysis. Oxford: Oxford UP; 2000.
• Bayer HM, Glimcher PW. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron. 2005;47:129–141. doi: 10.1016/j.neuron.2005.05.020.
• Berns GS, McClure SM, Pagnoni G, Montague PR. Predictability modulates human brain response to reward. J Neurosci. 2001;21:2793–2798. doi: 10.1523/JNEUROSCI.21-08-02793.2001.
• Bless EP, McGinnis KA, Mitchell AL, Hartwell A, Mitchell JB. The effects of gonadal steroids on brain stimulation reward in female rats. Behav Brain Res. 1994;82:235–244. doi: 10.1016/s0166-4328(96)00129-5.
• Breiter HC, Aharon I, Kahneman D, Dale A, Shizgal P. Functional imaging of neural responses to expectancy and experience of monetary gains and losses. Neuron. 2001;30:619–639. doi: 10.1016/s0896-6273(01)00303-8.
• Büchel C, Morris J, Dolan RJ, Friston KJ. Brain systems mediating aversive conditioning: an event-related fMRI study. Neuron. 1998;20:947–957. doi: 10.1016/s0896-6273(00)80476-6.
• Coricelli G, Critchley HD, Joffily M, O'Doherty JP, Sirigu A, Dolan RJ. Regret and its avoidance: a neuroimaging study of choice behavior. Nat Neurosci. 2005;8:1255–1262. doi: 10.1038/nn1514.
• Daw ND, Kakade S, Dayan P. Opponent interactions between serotonin and dopamine. Neural Netw. 2002;15:603–616. doi: 10.1016/s0893-6080(02)00052-7.
• Delgado MR, Locke HM, Stenger VA, Fiez JA. Dorsal striatum responses to reward and punishment: effects of valence and magnitude manipulations. Cogn Affect Behav Neurosci. 2003;3:27–38. doi: 10.3758/cabn.3.1.27.
• Dreher JC, Kohn P, Berman KF. Neural coding of distinct statistical properties of reward information in humans. Cereb Cortex. 2006;16:561–573. doi: 10.1093/cercor/bhj004.
• Edwards W. The prediction of decisions among bets. J Exp Psychol. 1955;50:201–214. doi: 10.1037/h0041692.
• Elliott R, Friston KJ, Dolan RJ. Dissociable neural responses in human reward systems. J Neurosci. 2000;20:6159–6165. doi: 10.1523/JNEUROSCI.20-16-06159.2000.
• Elliott R, Newman J, Longe O, Deakin J. Instrumental responding for rewards is associated with enhanced neuronal response in subcortical reward systems. NeuroImage. 2004;21:984–990. doi: 10.1016/j.neuroimage.2003.10.010.
• Ernst M, Nelson EE, McClure EB, Monk CS, Munson S, Eshel N, Zarahn E, Leibenluft E, Zametkin A, Towbin K, Blair J, Charney D, Pine DS. Choice selection and reward anticipation: an fMRI study. Neuropsychologia. 2004;42:1585–1597. doi: 10.1016/j.neuropsychologia.2004.05.011.
• Evans AC, Kamber M, Collins DL, Macdonald D. An MRI-based probabilistic atlas of neuroanatomy. In: Shorvon S, Fish D, Andermann F, Bydder GM, Stefan H, editors. Magnetic resonance scanning and epilepsy. New York: Plenum; 1994. pp. 263–274.
• Fiorillo CD, Tobler PN, Schultz W. Discrete coding of reward probability and uncertainty by dopamine neurons. Science. 2003;299:1898–1902. doi: 10.1126/science.1077349.
• Frank MJ, Seeberger LC, O'Reilly RC. By carrot or by stick: cognitive reinforcement learning in parkinsonism. Science. 2004;306:1940–1943. doi: 10.1126/science.1102941.
• Friston KJ, Ashburner J, Poline J-B, Frith CD, Heather JD, Frackowiak RSJ. Spatial registration and normalization of images. Hum Brain Mapp. 1995;2:1–25.
• Glascher J, Buchel C. Formal learning theory dissociates brain regions with different temporal integration. Neuron. 2005;47:295–306. doi: 10.1016/j.neuron.2005.06.008.
• Haber SN, Kunishio K, Mizobuchi M, Lynd-Balta E. The orbital and medial prefrontal circuit through the primate basal ganglia. J Neurosci. 1995;15:4851–4867. doi: 10.1523/JNEUROSCI.15-07-04851.1995.
• Kahn I, Yeshurun Y, Rotshtein P, Fried I, Ben-Bashat D, Hendler T. The role of the amygdala in signaling prospective outcome of choice. Neuron. 2002;33:983–994. doi: 10.1016/s0896-6273(02)00626-8.
• Kahneman D, Tversky A. Loss aversion in riskless choice: a reference dependent model. Q J Econ. 1991;106:1039–1061.
• Kahneman D, Tversky A. Choices, values, and frames. Cambridge: Cambridge UP; 2000.
• Knutson B, Westdorp A, Kaiser E, Hommer D. FMRI visualization of brain activity during a monetary incentive delay task. NeuroImage. 2000;12:20–27. doi: 10.1006/nimg.2000.0593.
• Knutson B, Adams CM, Fong GW, Hommer D. Anticipation of increasing monetary reward selectively recruits nucleus accumbens. J Neurosci. 2001a;21(RC159):1–5. doi: 10.1523/JNEUROSCI.21-16-j0002.2001.
• Knutson B, Fong GW, Adams CM, Varner JL, Hommer D. Dissociation of reward anticipation and outcome with event-related fMRI. NeuroReport. 2001b;12:3683–3687. doi: 10.1097/00001756-200112040-00016.
• Knutson B, Fong GW, Bennett SM, Adams CM, Hommer D. A region of mesial prefrontal cortex tracks monetarily rewarding outcomes: characterization with rapid event-related fMRI. NeuroImage. 2003;18:263–272. doi: 10.1016/s1053-8119(02)00057-5.
• Knutson B, Taylor J, Kaufman M, Peterson R, Glover G. Distributed neural representation of expected value. J Neurosci. 2005;25:4806–4812. doi: 10.1523/JNEUROSCI.0642-05.2005.
• LaBar KS, Gatenby JC, Gore JC, LeDoux JE, Phelps EA. Human amygdala activation during conditioned fear acquisition and extinction: a mixed-trial fMRI study. Neuron. 1998;20:937–945. doi: 10.1016/s0896-6273(00)80475-4.
• Logothetis NK, Pauls J, Augath M, Trinath T, Oeltermann A. Neurophysiological investigation of the basis of the fMRI signal. Nature. 2001;412:150–157. doi: 10.1038/35084005.
• Machina MJ. Choice under uncertainty: problems solved and unsolved. J Econ Perspect. 1987;1:121–154.
• Matthews SC, Simmons AN, Lane SD, Paulus MP. Selective activation of the nucleus accumbens during risk-taking decision making. NeuroReport. 2004;15:2123–2127. doi: 10.1097/00001756-200409150-00025.
• McClure SM, Berns GS, Montague PR. Temporal prediction errors in a passive learning task activate human striatum. Neuron. 2003;38:339–346. doi: 10.1016/s0896-6273(03)00154-5.
• Nichols T, Brett M, Andersson J, Wager T, Poline JB. Valid conjunction inference with the minimum statistic. NeuroImage. 2005;25:653–660. doi: 10.1016/j.neuroimage.2004.12.005.
• O'Doherty J, Dayan P, Schultz J, Deichmann R, Friston K, Dolan RJ. Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science. 2004;304:452–454. doi: 10.1126/science.1094285.
• O'Doherty JP, Dayan P, Friston K, Critchley H, Dolan RJ. Temporal difference models and reward-related learning in the human brain. Neuron. 2003;38:329–337. doi: 10.1016/s0896-6273(03)00169-7.
• Pagnoni G, Zink CF, Montague PR, Berns GS. Activity in human ventral striatum locked to errors of reward prediction. Nat Neurosci. 2002;5:97–98. doi: 10.1038/nn802.
• Pasqualini C, Olivier V, Guibert B, Frain O, Leviel V. Rapid stimulation of striatal dopamine synthesis by estradiol. Cell Mol Neurobiol. 1996;16:411–415. doi: 10.1007/BF02088105.
• Petry J. Psychotherapie der Glücksspielsucht. Weinheim: Beltz/Psychologie Verlags Union; 1996.
• Ramnani N, Elliott R, Athwal BS, Passinghama RE. Prediction error for free monetary reward in the human prefrontal cortex. NeuroImage. 2004;23:777–786. doi: 10.1016/j.neuroimage.2004.07.028.
• Rogers RD, Owen AM, Middleton HC, Williams EJ, Pickard JD, Sahakian BJ, Robbins TW. Choosing between small, likely rewards and large, unlikely rewards activates inferior and orbital prefrontal cortex. J Neurosci. 1999;19:9029–9038. doi: 10.1523/JNEUROSCI.19-20-09029.1999.
• Schultz W, Dickinson A. Neuronal coding of prediction errors. Annu Rev Neurosci. 2000;23:473–500. doi: 10.1146/annurev.neuro.23.1.473.
• Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275:1593–1599. doi: 10.1126/science.275.5306.1593.
• Seymour B, O'Doherty JP, Koltzenburg M, Wiech K, Frackowiak R, Friston K, Dolan R. Opponent appetitive-aversive neural processes underlie predictive learning of pain relief. Nat Neurosci. 2005;8:1234–1240. doi: 10.1038/nn1527.
• Sheehan DV, Lecrubier Y, Sheehan KH, Amorim P, Janavs J, Weiller E, Hergueta T, Baker R, Dunbar GC. The Mini-International Neuropsychiatric Interview (M.I.N.I.): the development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. J Clin Psychiatry. 1998;59(Suppl 20):22–33.
• Shmuel A, Augath M, Oeltermann A, Logothetis NK. Negative functional MRI response correlates with decreases in neuronal activity in monkey visual area V1. Nat Neurosci. 2006;9:569–577. doi: 10.1038/nn1675.
• Tobler PN, Fiorillo CD, Schultz W. Adaptive coding of reward value by dopamine neurons. Science. 2005;307:1642–1645. doi: 10.1126/science.1105370.
• Trepel C, Fox CR, Poldrack RA. Prospect theory on the brain? Toward a cognitive neuroscience of decision under risk. Cogn Brain Res. 2005;23:34–50. doi: 10.1016/j.cogbrainres.2005.01.016.
• Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage. 2002;15:273–289. doi: 10.1006/nimg.2001.0978.
• Zink CF, Pagnoni C, Martin-Skurski ME, Chappelow JC, Berns GS. Human striatal responses to monetary reward depend on saliency. Neuron. 2004;42:509–517. doi: 10.1016/s0896-6273(04)00183-7.
