Abstract
Few aspects of human cognition are more personal than the choices we make. Our decisions – from the mundane to the impossibly complex – continually shape the courses of our lives. In recent years, researchers have applied the tools of neuroscience to understand the mechanisms that underlie decision making, as part of the new discipline of decision neuroscience. A primary goal of this emerging field has been to identify the processes that underlie specific decision variables, including the value of rewards, the uncertainty associated with particular outcomes, and the consequences of social interactions. Recent work suggests neural substrates that integrate these variables, potentially reflecting a common neural currency for value, to facilitate value comparisons. Despite the successes of decision neuroscience research in elucidating brain mechanisms, significant challenges remain. These include building new conceptual frameworks for decision making, integrating research findings across disparate techniques and species, and extending results from neuroscience to shape economic theory. To overcome these challenges, future research will likely focus on interpersonal variability in decision making, with the eventual goal of creating biologically plausible models for individual choice.
Humans and other animals continually make decisions: Should I give up a sure immediate reward for a larger, but risky reward in the future? Should I take an aggressive or passive stance toward my competitor? Is this a fair trade? Poor decision making is a hallmark of many cognitive disorders, from addiction to schizophrenia. Over the past decade, there has been dramatic growth in the use of neuroscience methods to study the mechanisms of decision making. Here, we summarize some key insights and describe ongoing challenges from this new interdiscipline of “decision neuroscience” or “neuroeconomics”. Although these two terms have been used synonymously throughout the literature, we use the former term hereafter for clarity and breadth.
A. Decision Variables
The cardinal goal of decision neuroscience research has been to identify the neural mechanisms that shape individual choice behavior [1–6]. Most studies have adopted a “decision variable” approach: first identify an economic phenomenon of interest, then abstract that phenomenon into a format amenable to neuroscience research, next choose one or more variables that modulate decisions, and finally identify aspects of brain function that track changes in those decision variables. In this section, we focus on the three most common classes of decision variables: value, uncertainty, and social interactions.
A.1. Value: Dopamine and Reward Prediction Error
The fundamental elements of any decision are its potential outcomes, and specifically their values. An extensive literature implicates the neurotransmitter dopamine in assigning value based on environmental stimuli [7–10]. Dopaminergic neurons in the brainstem's ventral tegmental area (VTA) project to several subcortical and cortical targets, most notably to the nucleus accumbens in the ventral striatum (vSTR). It was originally believed that dopamine coded for the hedonic impact of rewards [11–15], and this viewpoint remains common within popular accounts of dopamine as the “pleasure chemical”. More recent work, however, emphasizes dopamine's role in motivated behavior, including altering the salience of incentives [16–18] and updating models of future rewards [19]. It should also be noted that some authors have questioned whether dopamine contributes specifically to reward processing at all [e.g., Ref 20].
Reward prediction errors (RPE) arise when a stimulus provides information that changes expectations of the timing, amount, or content of future rewards. Early studies by Schultz and colleagues [19,21–23] used electrophysiological methods to track changes in neuron firing rate to cues that predicted a reward (e.g., fruit juice) that was delivered a few seconds later. At the beginning of the experiment, before the monkey learned that a given cue predicted any subsequent reward, VTA neuronal activity increased only at the delivery of the rewarding fruit juice. As the monkeys learned that the cue predicted future rewards, the cue evoked increasing VTA activity but activity to the reward itself diminished. Once the cue-reward contingency was established, the researchers omitted some expected rewards and found that VTA activity decreased below baseline firing rates. Based on these results, Schultz and colleagues interpreted the firing rate of dopaminergic neurons as carrying RPE signals [19], which provides a computationally tractable method for tracking changes in value.
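The RPE account lends itself to a simple computational sketch. The following minimal Python example uses an arbitrary learning rate and a cue that fully predicts a unit reward (both illustrative assumptions, not parameters from the Schultz studies) to reproduce the qualitative pattern: large errors early in learning, vanishing errors once the cue is predictive, and a negative error when an expected reward is omitted.

```python
# Minimal sketch of a reward-prediction-error (RPE) learning rule.
# The learning rate, reward magnitude, and trial count are illustrative
# assumptions, not parameters from the electrophysiology studies.

def run_trials(n_trials, learning_rate=0.2, reward=1.0):
    """Learn the value of a cue that fully predicts a later reward."""
    cue_value = 0.0          # learned prediction attached to the cue
    errors = []
    for _ in range(n_trials):
        # RPE at reward delivery: actual reward minus current prediction
        rpe = reward - cue_value
        cue_value += learning_rate * rpe
        errors.append(rpe)
    return cue_value, errors

value, errors = run_trials(50)
# Early trials: large positive RPE at reward (reward is unexpected).
# Late trials: RPE approaches zero; the cue now carries the prediction.
print(f"learned cue value: {value:.3f}")
print(f"first RPE: {errors[0]:.3f}, last RPE: {errors[-1]:.5f}")

# Omitting an expected reward after learning yields a negative RPE,
# mirroring the below-baseline dip in dopaminergic firing.
omission_rpe = 0.0 - value
print(f"RPE on omitted reward: {omission_rpe:.3f}")
```

Under this scheme, a single scalar signal suffices both to build predictions from experience and to flag when those predictions fail.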
Signals consistent with reward prediction errors have since been identified in neurons in the vSTR [21] and ventromedial prefrontal cortex (vmPFC) [21,24]. Similar prediction errors have recently been reported in human dopaminergic neurons in the substantia nigra [25]. Studies using functional magnetic resonance imaging (fMRI) also have shown that reward predictability modulates the response to reward in the vSTR [26,27] and the VTA [28]. Collectively, these results have led to the common conclusion that key dopaminergic regions (i.e., VTA and vSTR) and their targets (e.g., vmPFC) constitute the brain's reward system (Figure 1). Nevertheless, the processing of reward is not limited to these regions. For example, an early primate electrophysiology study by Platt and Glimcher [29] demonstrated that the activity of neurons in posterior parietal cortex was highly correlated with the value of different response options. These and similar results from brain regions associated with response selection and motor output [30–34] indicate that value information can modulate processing at many stages of decision making.
Figure 1. Brain regions supporting reward processing and value computation.

Reward experience and evaluation evokes activation in several interconnected brain regions within the brain's dopaminergic system. Key regions include the ventral tegmental area (VTA), the ventral striatum (vSTR), and the ventromedial prefrontal cortex (vmPFC).
A.2. Value: Alternative Explanations
The role of the dopaminergic system in forming and updating predictions about rewards is now well-established [reviewed in Ref 9]. Yet, intriguing research points to other interpretations for the functions of these regions. One key focus of current research examines potentially separate signals associated with anticipation and receipt of rewards. Knutson and colleagues took the basic paradigms used in prior primate electrophysiology studies and created a novel response-time task suitable for fMRI [35]. At the beginning of each trial, participants viewed a single cue that indicated the potential monetary consequences of that trial (e.g., a gain or loss). Then, following a short and variable delay, a target appeared. If the participant pressed a button sufficiently quickly thereafter, then the monetary reward would be delivered (or a monetary punishment would be avoided). During the period in which participants anticipated potential rewards, Knutson and colleagues observed robust striatal and medial prefrontal activation, consistent with the role of these regions in reward anticipation. This basic paradigm, called the Monetary Incentive Delay (MID) task, has become a common approach for eliciting anticipation-related activation in reward-related regions [reviewed in Ref 36].
Similarly, early fMRI studies using gambling games revealed that receipt of monetary rewards evoked vSTR activation [37–39]. Delgado and colleagues [37] used a card-guessing task in which correct guesses were associated with monetary gains, but incorrect guesses were associated with monetary losses. They found that activation in the vSTR increased on winning trials compared to losing trials. Some evidence indicates, however, that reward receipt evokes activation specifically in the vmPFC [40,41], consistent with a role of that region in computing the expected value of a reward [42]. Considered generally, activation in vmPFC (and adjacent orbitofrontal cortex, OFC) may reflect the assessed value of rewards. Studies using single-unit recordings indicate that the responsiveness of vmPFC neurons to rewards depends on the monkey's satiation [43,44], a conclusion that has since been replicated in human participants using fMRI [45,46]. Value computations in vmPFC likely play an important role during active decision making, as considered later in this review.
While decision neuroscience research has most commonly used monetary rewards (in humans) and juice rewards (in monkeys), strong evidence indicates that reward-related responses generalize to a wide range of stimuli. Neuroimaging and single-unit experiments have observed vSTR and vmPFC activation in response to many sorts of sensory rewards, including tastes [26,47–49], smells [50,51], touch [52], attractive faces [53–56], and even sexual experience [57,58]. More abstract rewards also evoke activation within the reward system: beautiful art [59], humor [60], charitable giving [61,62], love [63–65], along with a wide range of social stimuli [reviewed in Ref 66].
Value learning requires consideration of both positive and negative outcomes. Electrophysiological studies have identified sets of neurons within the VTA that code for either aversive or appetitive events [67,68], which could potentially project into distinct regions of the striatum for separate processing of losses and gains [69]. Notably, aversive stimuli can also evoke activation in similar brain regions as rewarding stimuli depending on context [70–72].
Finally, some research suggests that the dopaminergic system may signal a broader class of environmental events than just reinforcers. In particular, research points to a potential role for the striatum, at least, in the response to salient but non-rewarding events [73–75]. For example, vSTR activation can be evoked by unexpected but meaningful auditory stimuli (e.g., sirens, dog barks) in the absence of any overt rewards [74]. Important evidence in support of a salience perspective would come from the demonstration of valence-independent changes (e.g., increases in activation to both positive and negative cues and/or outcomes). Recent attempts to dissociate reward salience from reward valence have led to equivocal results, at least within the vSTR, with evidence both for [76] and against [77] valence-independent activation. Future studies will be necessary to reconcile these disparate perspectives.
A.3. Uncertainty
A second important decision variable is uncertainty. Considered in a psychological [78] or economic [79] context, uncertainty reflects the absence of some desired information – such as information about the timing, content, value, or certainty of future rewards. Uncertainty pervades many real-world decisions, and organisms actively seek to reduce it in many contexts. Note that uncertainty is intimately connected to reward valuation; cues about future rewards, by definition, reduce uncertainty. As shown by Fiorillo and colleagues [80], the pattern of cue- and reward-related dopamine neuron activity described in the previous section scales with probability: as the probability of reward increases, cue-related activity increases but outcome-related activity decreases. The same study also indicates that uncertainty may lead to sustained activity of dopaminergic neurons during anticipation periods [80]. Moreover, valuation-related activation of the striatum tracks probability in a nonlinear manner, consistent with probability weighting functions identified behaviorally [81].
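The nonlinear probability weighting referenced above [81] is commonly modeled with a one-parameter weighting function. A sketch, assuming the Tversky–Kahneman form and a curvature value chosen purely for illustration:

```python
# One-parameter probability weighting function (Tversky-Kahneman form).
# gamma = 0.61 is a commonly cited behavioral estimate, used here only
# for illustration; the fMRI study in the text did not assume this value.

def weight(p, gamma=0.61):
    """w(p) = p^g / (p^g + (1-p)^g)^(1/g).

    Overweights small probabilities and underweights large ones.
    """
    num = p ** gamma
    denom = (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
    return num / denom

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:.2f}  ->  w(p) = {weight(p):.3f}")
```

The characteristic inverse-S shape (w(p) > p for small p, w(p) < p for moderate-to-large p) is the behavioral pattern that the striatal activation in [81] was reported to track.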
When uncertainty reflects known probabilities, decisions involve risk. Studies of risky choice typically ask participants to select between outcomes with different probabilities of reward or with different variances of potential reward distributions. Across numerous studies, key areas involved in risky decision making include lateral and orbital prefrontal cortex, anterior cingulate cortex, posterior parietal cortex, and insular cortex [31,82–86] (Figure 2). Given the complexity of risky choice, parsing the distinct contributions of these regions remains an active area of study. One important target for current research has been anterior insular cortex. Building upon prior research linking this region to representations of bodily states, Bechara, Damasio, and their colleagues have linked the insula (and vmPFC) to internal feedback signals that may shape behavior away from potential negative consequences [87,88]. Consistent with this idea, insular activation increases both in response to stimuli that signal increasing environmental risk [82,89] and during attempts to minimize risk [90]. Recent work by Preuschoff and colleagues [84] suggests that activation of the anterior insula represents a signal for a risk prediction error. Under their model, the anterior insula tracks unexpected changes in risk, based on new information or decision outcomes. This intriguing result may provide an important link to cognitive neuroscience studies of the role of insular cortex in cognitive control (see Section D for additional discussion).
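Risk in these studies is typically operationalized as the variance of the outcome distribution. A minimal sketch with invented payoffs shows how two gambles can share an expected value while differing in risk:

```python
# Two gambles with equal expected value but different risk (outcome
# variance), the kind of contrast used in risky-choice paradigms.
# The specific payoffs are invented for illustration.

def mean_and_variance(outcomes):
    """outcomes: list of (probability, payoff) pairs summing to p = 1."""
    mean = sum(p * x for p, x in outcomes)
    var = sum(p * (x - mean) ** 2 for p, x in outcomes)
    return mean, var

safe = [(1.0, 10.0)]                  # certain $10
risky = [(0.5, 0.0), (0.5, 20.0)]     # coin flip: $0 or $20

print(mean_and_variance(safe))    # (10.0, 0.0)
print(mean_and_variance(risky))   # (10.0, 100.0)
```

A risk-neutral chooser is indifferent between these options; any systematic preference between them reflects sensitivity to variance, which is the behavioral signature these paradigms exploit.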
Figure 2. Brain regions supporting decision uncertainty.

Several brain regions respond to uncertainty, or situations lacking desired information about the timing, content, value, or certainty of rewards. These include the insular cortex (Ins), anterior cingulate cortex (ACC), lateral prefrontal cortex (LPFC), and posterior parietal cortex (PPC).
A smaller set of studies have examined the effects of ambiguity, or unknown probabilities, upon decision making. Consider the following example, adapted from Ellsberg [91]. In front of you are two urns, each with 100 colored balls. The left urn has exactly 50 red balls and 50 blue balls, while the right urn has an unknown number of red balls and an unknown number of blue balls (and no other colors). You win a monetary prize if you declare a color, reach into the urn, and pull out a ball of your chosen color. What do you do? When faced with analogues of this decision in the laboratory, most individuals choose to pull a ball from the left, or risky, urn. But, examination of the decision problem reveals that the chances of winning are exactly the same in either case (i.e., 50%). When Hsu and colleagues [92] presented similar decision problems to participants in an fMRI session, they found that lateral orbitofrontal cortex and the amygdala exhibited significantly greater activation to decisions involving ambiguity, compared to decisions involving risk. A similar approach was used by Huettel and colleagues [83], who observed ambiguity-related activation in different regions: the insula, the posterior parietal cortex, and the lateral prefrontal cortex, with the last of these also tracking ambiguity preferences. These disparate results may reflect distinct aspects of ambiguity-related processing. The lateral orbitofrontal cortex, in particular, has been associated with aversion to negative events [e.g., Ref 93; for a review, see Ref 94]. This interpretation is supported by lesion data reported by Hsu and colleagues [92], who found that patients with orbitofrontal cortex damage exhibited decreased aversion to ambiguity (and risk). Conversely, regions of prefrontal and parietal cortex may be critical for forming representations of potentially knowable information, as recently shown by Bach and colleagues [95].
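The equivalence of the two urns can be verified directly. Assuming, purely for illustration, that every composition of the ambiguous urn (0 to 100 red balls) is equally likely a priori:

```python
# Why the Ellsberg urns offer identical winning chances. For the ambiguous
# urn, we assume only that every composition is equally likely a priori --
# an illustrative prior, not part of Ellsberg's original formulation.

from fractions import Fraction

# Risky urn: exactly 50 red balls out of 100.
p_risky = Fraction(50, 100)

# Ambiguous urn: average win probability over all equally likely
# compositions, for a bettor who declares "red".
p_ambiguous = sum(Fraction(red, 100) for red in range(101)) / 101

print(p_risky, p_ambiguous)  # both 1/2
```

By symmetry the same holds for a bettor who declares "blue", so the widespread preference for the risky urn cannot be explained by the probabilities alone; that is what makes ambiguity aversion a distinct decision variable.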
Uncertainty can also be induced by increasing the delay before a reward is received, which leads subjects to devalue potential rewards (i.e., temporal discounting). To account for known anomalies in intertemporal choice, researchers have proposed that temporal discounting reflects two processes: an impulsive system (β) that rapidly devalues rewards that are not immediately attainable, and a patient system (δ) that exhibits much more gradual discounting. Studies by McClure and colleagues [96,97] supported this two-system model, such that the β system comprises reward-related regions including the vSTR and the vmPFC, whereas the δ system includes cognitive regions like lateral parietal and lateral prefrontal cortices. Other research has cast doubt onto the β-δ model with evidence that intertemporal choices follow from activation of a single system for subjective value [98] comprising vSTR, posterior cingulate cortex, and vmPFC. Of particular relevance for resolving this debate will be paradigms that examine delay discounting as it occurs, as has been explored in a few recent studies [96,99,100].
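The contrast between the two accounts can be sketched numerically. The parameter values below are illustrative, not estimates from the cited studies:

```python
# Sketch of the two discounting models discussed above: the quasi-hyperbolic
# (beta-delta) form behind dual-system accounts, and the single-parameter
# hyperbolic form associated with single-system accounts. All parameter
# values are illustrative assumptions.

def beta_delta(amount, delay, beta=0.7, delta=0.99):
    """Quasi-hyperbolic: full value at delay 0, then beta * delta**delay."""
    if delay == 0:
        return amount
    return beta * (delta ** delay) * amount

def hyperbolic(amount, delay, k=0.02):
    """Hyperbolic: V = A / (1 + k * delay)."""
    return amount / (1.0 + k * delay)

for d in (0, 1, 30, 180):
    print(f"delay {d:>3} days: beta-delta = {beta_delta(100, d):6.2f}, "
          f"hyperbolic = {hyperbolic(100, d):6.2f}")
```

The β-δ model produces a sharp discontinuity between "now" and "any delay at all" (the immediacy premium attributed to the impulsive system), whereas the hyperbolic model devalues rewards smoothly, with a single per-subject parameter k capturing steepness.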
A.4. Social Interactions
Real-world decision making often involves social settings where individuals must not only consider the value and uncertainty of outcomes, but also incorporate information about other individuals [5]. Decision neuroscience research has investigated two main classes of information about other individuals: their actions (e.g., in competitive games) and their characteristics (e.g., facial features).
Studies of social interactions often first constrain behavior using interactive games with well-defined rules and payoffs [101], and then implement psychological parameters (e.g., guilt, envy, fairness, equity) into subsequent analyses. This general approach has provided valuable insights about the mechanisms underlying cooperation [102,103], fairness [104–107], altruism [61,62,108], and punishment [109–111].
Most such studies have used fMRI to scan one individual who interacts with other participants, whether real or computer-generated, outside of the scanner. For example, research by Sanfey and colleagues [104] using the Ultimatum Game demonstrated that activation in the anterior insula increased to others' unfair actions, whereas activation of lateral prefrontal cortex increased when ignoring unfairness and accepting an offered reward. Another important approach has been hyperscanning, which involves scanning multiple individuals simultaneously [112,113]. The power of this latter approach was first shown by King-Casas and colleagues, who scanned pairs of subjects while they interacted in an investment game [113]. This game requires one player to trust the other with some of their money, whereupon if the trust is reciprocated both players benefit. By identifying correlations between the activation patterns of two subjects' brains, these researchers provided evidence that activation of the caudate was consistent with the development of an intention to trust. Note that while most research on social interactions has used human participants, some primate electrophysiology research has set up decision scenarios modeled on competitive games [34,114]. For example, Barraclough and colleagues [114] demonstrated that neurons in dorsolateral prefrontal cortex (DLPFC) encoded decision variables critical for strategic choice (e.g., interactions between past decisions and opponent tendencies).
Social interactions can themselves be highly rewarding [66]. For example, Rilling and colleagues [103] reported activation in reward-related regions when individuals cooperated during a repeated Prisoner's Dilemma game. Reward can also be derived from punishing others; as shown by de Quervain and colleagues [109], punishing a non-cooperative counterpart evokes activation in the ventral caudate. Recently, Hsu and colleagues [106] examined fairness by allowing participants to distribute food donations to groups of Ugandan orphans. They found the striking result that the efficiency (i.e., overall amount of food) and inequity (i.e., imbalance of allocations across individuals) of food donations were tracked in distinct regions – the putamen and insula, respectively – with the tradeoff between these parameters expressed in caudate activation. As shown by these studies, the decision variables identified earlier in this review for individual choice behavior also modulate social interactions. Yet, despite these similarities, it remains unclear whether individual and social decisions involve similar coding at the neuronal level.
Social interactions also rely on gaining information about others' characteristics. Particularly relevant are faces, whose complexity and informational properties make them intrinsically rewarding [115]. Attractive faces reliably evoke activation in the vmPFC, as seen in a wide range of experimental paradigms [53–56]. However, activation in ventral striatum has been only infrequently observed [55,56]. Moreover, recent work has demonstrated that heterosexual males will trade small amounts of money to view photographs of attractive females [116], paralleling previous research showing that male monkeys will sacrifice juice rewards to view images of females' perinea [117,118]. Heterosexual males will also work (i.e., exert effort) to view photographs of attractive females [56,116]. Furthermore, the reward value of viewing an attractive face increases with increasing duration of presentation [116].
How is social information integrated with non-social information to guide behavior? Some brain systems may play roles specifically in social decision making. Research in both humans [119] and monkeys [120] has demonstrated that distinct regions of the medial prefrontal cortex compute social and non-social information: the anterior cingulate sulcus tracks changes in reward expectations, whereas the anterior cingulate gyrus responds to social information. Moreover, regions involved in social cognition also contribute to decision processes [121]. However, most regions that support decision making likely do so in both individual and social contexts (see Refs 119,122 for reviews). In the next section, we consider how the brain may integrate information from a variety of sources to reach decisions.
B. Value Comparison
Decision neuroscience research often seeks to understand how value computations lead to specific choices [1,4,6]. To facilitate value comparison, a variety of goods, experiences, and actions must be converted into some sort of “common currency” in which comparisons can be made quickly and efficiently on the same relative scale [123]. Elucidating the specific computations that underlie a common currency representation would have important implications for valuation and decision making [124]. Of note, however, relatively few decision neuroscience studies have used multiple reward modalities, as necessary for evaluating relative valuation.
Yet in recent years, there has been substantial interest in relative valuation, often in the context of economic exchanges like purchasing decisions. A common fMRI paradigm allows participants to trade money earned in the experimental setting for goods of greater or lesser value. When participants purchased inexpensive familiar objects, subjective value of those goods was correlated with activation in vSTR but not vmPFC, whereas subsequent information about prices modulated activation of vmPFC and insular cortex [125]. In contrast, when hungry participants placed bids on food items that could be consumed after the scanning session, activation in vmPFC was modulated by the subjective desirability of the food items [126]. Additional recent fMRI studies have demonstrated that a similar region of vmPFC is involved in computing the value of items during trading [127,128]. Neurons in ventral prefrontal cortex have also been shown to code for the relative value of juice rewards, across a variety of task parameters and decision contexts [129,130]. These studies thus converge on the idea that vmPFC represents a critical substrate for trading money for another good or service – and potentially, a more general computation of common currency.
However, many decisions may rely on more than just brain systems for value computation. Behavioral economics research has identified a variety of anomalies in preferences, including the endowment effect [131] and framing effects [132], that may reflect context-dependent contributions from specific brain regions [reviewed in Refs 133,134]. Recent neuroimaging studies of the endowment effect, or the tendency to overvalue goods that one already possesses, indicate that activation in the vSTR tracks value in a largely reference-dependent manner [135,136]. Framing effects occur when the manner of representing a decision problem (e.g., describing outcomes either as losses or as gains from different points of reference) biases individuals toward one choice or another. Decisions consistent with framing effects evoke increased activation in the amygdala, while decisions inconsistent with framing effects evoke increased activation in dorsomedial prefrontal cortex [137]; the latter effect may reflect the role of this region in implementing strategies for decision making [138].
Adaptive decision makers should also incorporate value information from decisions that are not made; i.e., from rewards that are observed, but not received, or “fictive” outcomes [139–141]. Extending prior neuroimaging work that suggested the vSTR responds to fictive outcomes [139,141], a recent electrophysiological recording study in monkeys indicated that single neurons in anterior cingulate cortex (ACC) track fictive outcomes [140]. Critically, neurons representing fictive outcomes utilized a similar coding scheme as neurons that represented experienced outcomes. These studies demonstrate that at least some brain regions incorporate unobtained outcomes into value computations, which may greatly facilitate decision making in dynamic, complex environments.
C. Individual Differences
Models of decision-making behavior have traditionally assumed that all individuals approach choices in a similar fashion. Yet, individuals vary, often dramatically, in the decision variables that contribute to their choices: uncertainty preferences, aversion to loss, delay discounting, inequity aversion, other-regarding preferences, among many others. Even the most fundamental and robust phenomena in decision making are subject to individual variability. As one example, in the seminal work by Kahneman and Tversky on heuristics and biases in decision making [142], substantial minorities of participants make choices contrary to canonical biases (e.g., opposite to framing effects).
Within decision neuroscience, like within cognitive neuroscience more generally, the study of individual differences has been a relatively recent development – and one limited to a subset of research methods. A primary challenge is sample size: the experiment must include enough participants to capture substantial variability. Studies using non-human animals or human lesion patients often include only a handful of participants. Similarly, many early neuroimaging studies had relatively small samples (e.g., ~10 participants), which precluded examination of individual variability. Moreover, neither behavioral economics nor cognitive psychology has a strong tradition of examining differences among individuals. Both disciplines have tended to collect data from large samples of subjects, typically drawn from a relatively homogeneous population of young adults, and then combine data across that population to extract general rules for behavior. Reflecting these limitations, nearly all neuroscience studies of individual differences in decision making have used fMRI in human participants.
An early example of individual difference effects was reported by Huettel and colleagues [83] who measured brain responses to decisions involving risky and ambiguous gambles. For each subject, the authors used the pattern of choices to estimate relative preferences for or against risk and ambiguity. Among regions exhibiting increased activity to ambiguity compared to risk, activation in lateral PFC tracked ambiguity preferences whereas activation in posterior parietal cortex tracked risk preferences. These authors also found that the lateral PFC activation tracked individual differences in impulsiveness, which suggested a link between the cognitive detection of ambiguity and the construction of rules for behavior.
Similar approaches have been used in other domains of choice, often to support the inference that the targeted brain regions contribute to the process of interest. In work by Tom and colleagues [143], subjects chose whether to accept mixed gambles consisting of one gain and one loss, with equal probability (e.g., a coin flip, such that heads wins $20 but tails loses $15). The loss-aversion parameter (λ) was defined as the multiplication factor necessary to make a potential gain and a potential loss subjectively equivalent. That is, an individual with λ of 2 would be willing to flip a coin if the win was $21 but the loss was $10, but would not flip that same coin when the win was reduced to $19. The authors found that activation in the vSTR and vmPFC was strongly correlated with subjects' relative loss aversion, supporting the hypothesis that individual differences in such choices reflect the relative neural sensitivity to rewards.
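The λ computation described above can be made concrete. The following sketch reproduces the worked example in the text, assuming, for simplicity, linear utility over money:

```python
# Loss-aversion sketch: a 50/50 mixed gamble is acceptable when the gain
# outweighs the loss scaled by lambda. Linear utility is assumed for
# simplicity; the numbers reproduce the worked example in the text.

def gamble_value(gain, loss, lam):
    """Subjective value of a 50/50 gamble (win `gain` or lose `loss`)."""
    return 0.5 * gain - 0.5 * lam * loss

def accepts(gain, loss, lam):
    """Accept the gamble only when its subjective value is positive."""
    return gamble_value(gain, loss, lam) > 0

lam = 2.0
print(accepts(21, 10, lam))  # True:  0.5*21 - 0.5*2*10 = +0.5
print(accepts(19, 10, lam))  # False: 0.5*19 - 0.5*2*10 = -0.5
```

Fitting λ to each subject's accept/reject choices yields the per-individual parameter that Tom and colleagues correlated with vSTR and vmPFC activation.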
Individual differences in parameter functions can also be critical for disambiguating competing theories. As described above, early research on intertemporal choice indicated that decisions involving immediately available outcomes evoked activation in regions critical for reward evaluation [97]. One interpretation of this activation is that it reflected the temptation of the immediate reward, consistent with the “hot” system in dual-system models (i.e., System I). More recent work [98] used a similar paradigm but additionally measured individual differences in the rate of intertemporal discounting (i.e., k in a hyperbolic model of discounted value). Kable and Glimcher found that activation in reward-related regions was well predicted by the utility of the decision options, as estimated individually for each subject from their discounting parameter. Based on this result, these authors concluded that subjective value, at least in this sort of choice paradigm, reflects the computations of a single system regardless of delay until reward delivery. While the debate about dual-systems models has not yet been resolved (see below for additional discussion), these results provide a clear demonstration of the additive power provided by individual difference measurements.
A second major application of individual difference analyses has been showing how activation during decision making is modulated by variation in relevant traits. Most such studies collect trait data using validated psychometric tests adopted from social or clinical psychology, or from behavioral economics, administered outside of the behavioral session. Then, the scores on one or more tests are included as covariates in across-subjects statistical analyses, to identify brain regions whose change in activation during some task correlates with the trait measure. Using this approach, decision neuroscientists have identified neural correlates of personality measures (e.g., harm avoidance [89,144]), manipulativeness [145], altruism [108], and reward sensitivity [146].
An important recent direction for decision neuroscience has been identifying genetic predictors of individual differences, using the methods of imaging genomics [147]. Using this approach, several decision neuroscience studies have successfully used genetic variability to predict differential brain activation during economic tasks [148–153]. For example, Boettiger and colleagues [149] found that impulsive choice behavior and activity levels in dorsal PFC and posterior parietal cortex were predicted by the Val158Met polymorphism of the catechol-O-methyltransferase (COMT) gene, which modulates the availability of synaptic dopamine. In addition, Yacubian and colleagues [150] found that reward sensitivity was blunted by an interaction between COMT and the dopamine transporter gene DAT. Recently, Roiser and colleagues [152] found that individuals who were homozygous for the short allele of the serotonin transporter gene (5-HTTLPR) were more susceptible to framing and exhibited greater activity in the amygdala, compared to individuals with the long allele. In another striking example of linking genetics and behavior, Frank and colleagues [148] demonstrated that genes that influence striatal dopamine function (i.e., DARPP-32 and DRD2) predict individual differences in exploitative learning, whereas a gene that influences dopamine function in the prefrontal cortex (i.e., COMT) predicted exploratory tendencies. Although linking psychological traits to genetic biomarkers poses many challenges [e.g., Refs 154,155], studies such as these have the potential to discover endophenotypes that may be useful for the treatment and diagnosis of disorders of decision making [147,156–159].
Despite these many apparent successes, the search for individual differences has not been without controversy. Any trait measurement is subject to imprecision. Personality questionnaires, in particular, are subject to at least two sorts of measurement error: limitations in the questionnaire's ability to assess the underlying trait, and fluctuations in the measurement with environmental changes (e.g., mood, time of day). As argued in a recent criticism of individual difference measures in social neuroscience [160], many reports of high correlations between traits and brain activation may be the result of statistical artifacts, with the true correlations considerably smaller. This controversy remains ongoing; see, for example, Ref 161 for replies and Refs 162,163 for related discussions. It is important to emphasize, however, that this criticism concerns the magnitude of brain-behavior correlations, not their statistical significance.
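The statistical artifact at issue can be illustrated with a small simulation (a hypothetical sketch, not a reanalysis of any cited study): when voxels are first selected by their correlation with a trait, and the correlation is then reported within that selected set, even pure noise yields impressively large "observed" correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox = 16, 5000

# True brain-behavior correlation is zero: trait and activation are pure noise.
trait = rng.standard_normal(n_subj)
voxels = rng.standard_normal((n_subj, n_vox))

# Pearson correlation of each voxel's "activation" with the trait across subjects.
trait_z = (trait - trait.mean()) / trait.std()
vox_z = (voxels - voxels.mean(0)) / voxels.std(0)
r = (vox_z * trait_z[:, None]).mean(0)

# Non-independent analysis: select voxels by their correlation with the trait,
# then summarize the correlation within the selected set.
selected = r[r > 0.5]
print(f"voxels passing r > 0.5: {selected.size}")
print(f"mean 'observed' correlation: {selected.mean():.2f}")  # well above the true value of 0
```

With only 16 simulated subjects, sampling noise alone pushes some of the 5000 null voxels past the threshold, and the reported within-set correlation is inflated by construction.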
A more damaging criticism reflects the tendency for reverse inference [164] in interpreting brain-behavior relationships. When observing that individual differences in some measure (e.g., loss aversion) predict a particular activation pattern (e.g., vSTR activation to losses), it is natural to draw a causal conclusion: that loss aversion during decision making reflects a change in the subjective weighting of different rewards. This example may seem straightforward, given how tightly activation of the vSTR has been coupled to reward learning, but other claims are more problematic. Activation in insular cortex or lateral prefrontal cortex, as just two examples, may be evoked in a wide variety of tasks, making it difficult to determine exactly what psychological processes differentiate individuals. Many biological and environmental factors may multiply determine individual differences in decision making, and a core problem for future research will be isolating specific contributors.
D. Conclusions and Future Directions
Decision neuroscience research has already provided many new insights into brain function, as evident from even the incomplete summary provided by this article. Yet, its cross-disciplinary nature poses considerable challenges: Does decision neuroscience research reflect an emerging and distinct discipline, or a tentative foray of cognitive neuroscience into a new topic area? What links will be built between decision neuroscience and other topics in neuroscience, from memory and perception to development and aging? And, will new models for decision making arise, or will neuroscientists simply create more precise maps of known decision processes? Meeting this last challenge, in particular, seems critical for the success of a new discipline of decision neuroscience.
D.1. Conceptual Challenges
Studies in decision neuroscience are often striking in their simplicity. Many laboratories have studied the neural underpinnings of economic phenomena (e.g., risk aversion, framing bias) that can be modeled using well-validated functions. If a variable or operation in those functions is correlated with the physiological or metabolic changes in a brain region (see Section A for examples) – or if computations change when that region is disabled [e.g., Ref 107] – then the researcher concludes that the brain region contributes to that economic phenomenon. The power of this approach comes from its operationalization of key processes: “risk aversion” can be defined as a parameter in a model, not via reference to some underlying psychological state. Accordingly, research on that parameter can be conducted in both humans and non-human primates, often using relatively simple tasks that are well-suited to decision modeling (see Refs 27,165–168 for elegant examples).
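As a hypothetical sketch of this operationalization, risk aversion can be reduced to a single curvature parameter in a power utility function; the functional form is standard in decision modeling, but the parameter values and dollar amounts below are illustrative, not drawn from any cited study.

```python
import numpy as np

def utility(x, rho):
    """Power utility u(x) = x**rho; rho < 1 yields risk aversion."""
    return x ** rho

def certainty_equivalent(outcomes, probs, rho):
    """Sure amount whose utility equals the gamble's expected utility."""
    eu = np.dot(probs, utility(np.asarray(outcomes, dtype=float), rho))
    return eu ** (1.0 / rho)

# A 50/50 gamble over $100 or $0 (expected value = $50).
outcomes, probs = [100.0, 0.0], [0.5, 0.5]
for rho in (1.0, 0.5):
    ce = certainty_equivalent(outcomes, probs, rho)
    print(f"rho = {rho}: certainty equivalent = ${ce:.2f}")
```

A risk-neutral agent (rho = 1) values the gamble at its expected value of $50, whereas a risk-averse agent (rho = 0.5) values it at only $25; "risk aversion" is thus defined entirely by the fitted parameter, with no reference to an underlying psychological state.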
The basic decision neuroscience approach shares conceptual underpinnings with early studies of learning [169,170], which derived model parameters only from the behavior of organisms. By eschewing interpretations in terms of cognitive or neural underpinnings, behaviorist researchers constructed robust, descriptive models for behavior, many of whose elements still pervade learning theory [171,172]. Operationalization carries a significant conceptual cost, however: it precludes extrapolation of behavior to new and complex environments. Early cognitive psychologists challenged the behaviorist dogma by introducing the concept of “converging operations”, which posited that complex concepts could be established through sets of experiments. As a quick example, one could model the choices of gamblers in a casino according to the reinforcement history of their decisions, without recourse to any underlying states. Or, one could postulate that the gamblers' behavior was modulated by cognitive states (e.g., regret, temptation) that are only indirectly measurable and that tend to be highly correlated with objective measures of reward outcome. By evaluating behavior across a series of experiments, each manipulating a different aspect of reinforcement, researchers could converge on the features of the decision that best predict choice – which might be parsimoniously described using a complex psychological term like “regret”.
The most common conceptual interpretations of decision neuroscience research have been variants of the “dual-systems” psychological model of choice [173,174]. Considered roughly, a dual-systems model contends that decisions (and often behavior, more generally) reflect the interaction between two distinct neural contributors. The emotional System I acts quickly and automatically, processes information only superficially using parallel mechanisms, overweights immediate or salient consequences, and emphasizes affective elements of decision making. The rational System II acts more slowly and consciously, processes information more deeply using sequential mechanisms, ascribes value to all outcomes, and downplays emotional content. When extended to neuroscience, this dichotomy becomes isomorphic with specific brain regions: System I regions include elements of the reward system, the amygdala, and medial prefrontal cortex (Figure 1), whereas System II regions include lateral prefrontal cortex, anterior cingulate cortex, and posterior parietal cortex (Figure 2). In many decision neuroscience studies, including seminal work, choices have been postulated to reflect the competitive interplay between these two systems [97,104,137].
While the dual-systems model makes intuitive sense – we all have experienced the temptation of a mouth-watering dessert, followed by an effort of will to decline – it fails as an account of the neural mechanisms of decision making. One major problem lies in a lack of converging evidence. Decision neuroscience has adopted an experimental approach more similar to neuroscience than to psychology (or to behavioral economics). Most publications in decision neuroscience comprise only a single experiment, usually with only one manipulation of the phenomenon of interest and one neuroscience technique for measurement. Accordingly, psychological interpretations of data from that experiment (e.g., activation of the prefrontal cortex leads to rational decision making) may not generalize to other experimental paradigms. Generalization may be particularly problematic for brain regions shown to be activated within a wide range of experiments; this is known as the problem of “reverse inference” [164]. Despite the popular interpretation that neuroscience provides direct access to hidden aspects of our mental lives [i.e., “neuroessentialism”, see Ref 175], the mapping between our intuitions (i.e., subjective feelings of conflict) and the underlying neural computations may be difficult to discern.
Moreover, considerable evidence indicates that many regions canonically associated with emotion also contribute to a host of cognitive processes; for example, activation of the anterior insula tracks the risk associated with a decision [84,90], but also can be evoked by relatively simple executive processing tasks in the absence of risk [176,177]. Conversely, activation of canonically rational regions like the prefrontal cortex is not a prerequisite for rationality. Consider the striking examples that individuals with lesions to prefrontal cortex sometimes make more rational decisions than neurologically normal individuals [178,179]. Finally, even when putative cognitive and affective regions interact, the outcome may be unexpected. In a recent study by Venkatraman and colleagues [138], increased activation of insular cortex and vmPFC predicted decisions consistent with economic models, whereas activation of lateral prefrontal cortex and parietal cortex predicted seemingly irrational heuristic choices.
Given these challenges, what sort of conceptual approaches might direct future decision neuroscience research? An important new direction lies in improved connections between decision neuroscience research and cutting-edge work in other areas within cognitive neuroscience (see related work throughout this volume). As functional neuroimaging methods, in particular, have matured, there has been an increasing recognition that brain regions support particular types of computations that may be called upon in a variety of task contexts. Returning to the above counterintuitive examples: recent work on functional connectivity shows that insular cortex signals the need for cognitive control [180], which may be a consequence of risk (or changes in risk) within a decision scenario [84]. Similarly, there has been substantial recent research that attempts to parse the functions of the prefrontal cortex according to the computations supported by its subregions [181]. Some regions within prefrontal cortex seem particularly critical for feedback-based learning, and lesions to those regions often lead to maladaptive real-world decision making. However, there are also conditions where feedback is meaningless, as when playing games of chance or investing in the stock market (i.e., the outcome at one point in time is not predictive of future outcomes). Under such conditions, ignoring feedback may prevent regret-driven or risk-averse mistakes, leading to better outcomes [178].
D.2. Methodological Challenges
Research in decision neuroscience, like that in cognitive neuroscience more broadly, has used a diverse set of methods. Most common, especially in recent years, have been studies using fMRI. Also prevalent are electrophysiological recording of single-neuron activity in non-human primates and scalp-recorded event-related potentials (ERPs) in human subjects. These techniques provide some complementarities of scale: fMRI provides breadth of coverage and spatial precision, single-neuron recordings give direct measures of action potentials, and ERP measurements allow characterization of collective dendritic activity with high temporal resolution. Even so, integrating conclusions across studies using different neuroscience methods poses several sorts of challenges. One well-recognized problem is that these techniques measure different aspects of neural function. Functional MRI measures relative changes in the local concentration of deoxygenated hemoglobin [182–184], which mirrors local energy consumption and thus tracks primarily the dendritic activity of neurons [185]. Thus, activation observed using fMRI may reflect primarily the input or integrative activity within a brain region, whereas single-unit activity better reflects the output or signaling activity. For additional consideration of the challenges of integrating across different measures of brain function, we refer the interested reader to recent reviews [186,187].
A second challenge lies in reconciling results across species, given differences in experimental paradigms. Decision neuroscience studies with human participants typically adopt the methodological conventions of behavioral economics: participants make decisions about real but abstract rewards (e.g., money), they receive full information about the decision scenario (i.e., no deception), and researchers strive to minimize external influences on choice (e.g., no communication with other participants, no other incentives). Over the course of a 1–2 hr testing session, a participant might make on the order of 100 independent decisions about information presented using words and numbers. Studies using non-human primates (usually the rhesus monkey, Macaca mulatta) adopt very different methods: liquid rewards are delivered during the experimental session, the tasks involve simple choices whose options are indicated symbolically, and the animals are trained extensively over thousands of trials. Moreover, decision-making behavior may differ dramatically across species. As one of many examples, humans exhibit a temporal discounting rate for monetary rewards of a few percent per year (e.g., in many real-world investments), whereas non-human primates discount primary rewards over intervals of seconds [188]. Some aspects of human social decision making, from altruistic donation to cooperation in multiplayer games, are difficult to study in animal models. And every human participant possesses a wealth of prior conceptual knowledge, which may violate the assumptions of the experimental setting. For example, when judging the trustworthiness of a game partner, people evaluate not just that partner's past actions but also superficial features like appearance, with concomitant implications for neuroscience data [189].
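The cross-species contrast in discounting can be made concrete with a hyperbolic discounting function, V = A / (1 + kD), a form commonly fit to intertemporal choice data; the discount rates below are illustrative placeholders, not estimates from Ref 188.

```python
def hyperbolic_value(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k * delay)."""
    return amount / (1.0 + k * delay)

# Illustrative (not fitted) discount rates: a human discounting money over
# years versus a monkey discounting juice over seconds. Note the units of k.
human_k_per_year = 0.05
monkey_k_per_sec = 0.2

# A reward of 10 units delayed by 10 of each species' natural time units.
human_value = hyperbolic_value(10.0, 10.0, human_k_per_year)   # 10-year delay
monkey_value = hyperbolic_value(10.0, 10.0, monkey_k_per_sec)  # 10-second delay
print(f"human: {human_value:.2f}  monkey: {monkey_value:.2f}")
```

The point is not the specific numbers but the difference in time scale: comparable devaluation occurs over years for human monetary rewards and over seconds for primate primary rewards, complicating direct cross-species comparison.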
To ameliorate these problems, some decision-neuroscience researchers have adopted experimental paradigms that can be readily completed by both human and non-human subjects. For instance, reward conditioning tasks involve the repeated presentation of one or more simple cues (e.g., abstract visual images) that predict subsequent outcomes (e.g., juice rewards). As described in Section A, this approach was first used in studies with non-human primates by Schultz and colleagues [19], and was subsequently extended to human paradigms by a number of investigators [e.g., Ref 49]. Tasks that adopt elements of foraging behavior [190] or of reinforcement learning [165] often translate well across species. Even more complex decisions can be incorporated into simple, species-general paradigms. Studies by Platt and colleagues demonstrated that monkeys would trade small amounts of juice to look at photos of other monkeys [117,118]; similar results were subsequently shown for human subjects who traded small amounts of money to view images of attractive human faces [116]. Yet, similarity of tasks, by itself, does not ensure similarity of processing. A human may arrive at a decision using a variety of simplifying heuristics, complex priors, and decision biases, whereas the same decision might be reached by a monkey based on the past history of reward. As such, an important direction for future decision neuroscience research will be to characterize the computations that lead to decisions, including potential differences in those computations across species.
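The reward-conditioning logic described above can be sketched with a minimal Rescorla-Wagner/temporal-difference-style update, in which a cue's predicted value is adjusted by the reward prediction error on each trial; the learning rate here is an arbitrary illustrative choice.

```python
# Minimal prediction-error learning sketch: the value V predicted by a cue
# is updated by the discrepancy between received and predicted reward.
alpha = 0.2       # learning rate (illustrative)
V = 0.0           # initial predicted value of the cue
errors = []

for trial in range(50):
    r = 1.0               # the cue is always followed by reward
    delta = r - V         # reward prediction error
    V += alpha * delta    # Rescorla-Wagner update
    errors.append(delta)

print(f"early error: {errors[0]:.3f}, late error: {errors[-1]:.6f}")
```

As learning proceeds, the prediction error shrinks toward zero, paralleling the finding that dopaminergic responses shift from the reward itself to the predictive cue once the outcome is fully expected [19].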
A final methodological challenge is moving from correlational to causal models of the mechanisms of decision making. Common approaches like fMRI and single-neuron recording measure brain function: they allow decision neuroscientists to show that an experimental variable (e.g., risk) alters the functioning of a particular brain region. However, the claim that a region plays a causal role in decision making requires manipulation of brain function, followed by demonstration of concomitant changes in decision-making behavior. Many such approaches are available for animal research, including very precise lesion methods [191], microstimulation [192], and in vivo manipulation of neurotransmitter levels [193].
Human researchers have a more limited repertoire. One powerful approach has been to examine the effects of disrupted brain function, either from naturally occurring lesions or via temporary lesions evoked using transcranial magnetic stimulation (TMS). The latter approach, which can be implemented in neurologically normal individuals, has been applied to several sorts of economic decisions. In one striking example, TMS applied to the prefrontal cortex disrupted judgments of unfairness in the Ultimatum Game [107], a result that stood in contrast to prior work using neuroimaging methods [104]. Another technique, transcranial direct current stimulation (tDCS) [reviewed in Ref 194], allows researchers to either increase or decrease cortical excitability depending on the polarity of the current flow. For example, increasing cortical excitability over right DLPFC while decreasing excitability over left DLPFC diminishes risk-taking behavior [195,196]. Because tDCS can be administered to large cohorts of individuals simultaneously, it affords a novel opportunity to examine the neural basis of social interactions. A recent study by Knoch and colleagues recruited large groups of subjects to play the Ultimatum Game while receiving tDCS [197]. Strikingly, they observed a reduction in subjects' propensity to punish unfair behavior following stimulation that decreased excitability over right DLPFC.
Another important new direction uses drug or dietary manipulations to modify the chemical milieu of the brain. As notable examples, the intranasal delivery of oxytocin increases trusting behavior in a cooperative game [198,199], a dietary depletion of brain serotonin increases the likelihood of rejecting unfair offers [200], and dopamine antagonists increase gambling in pathological gamblers but not in normal controls [201].
D.3. Practical Challenges
Over the past five years, decision neuroscience has developed from a curiosity to a vibrant interdiscipline. There are now dozens of laboratories studying the mechanisms of decision making using neuroscience methods, and many core findings have permeated both the academic literature and the popular press. Even neuroscientists who study other aspects of mental life now commonly incorporate decision neuroscience concepts into their research, as when studying the effects of reward on memory systems [202] or the social cognition of comparing one's own performance against another's success [203]. Without question, concepts from decision-making research, including economic theory, have sparked robust growth within neuroscience.
Yet, decision modeling has proven largely resistant to inroads from neuroscience. Many potential new applications of decision-neuroscience research involve extensions to the social sciences: marketing, law, political science, and even public policy. Neuromarketing, in particular, has become a cottage industry, with several companies promising a better understanding of consumers' decisions based upon fMRI or EEG data. Attitudes of economists (and other social scientists) toward neuroscience have ranged from guarded optimism [204] to constructive criticism [205]. Some critics contend that neuroscience, at least so far, has provided no unique insights about decision-making behavior. Others make a much stronger claim: results from neuroscience cannot refine or falsify models of economic phenomena, even in principle [206].
Two arguments support this claim (see Ref 207 for a more extensive consideration). First, because economic models are fundamentally about observable behavior, whether of specific individuals or aggregates (e.g., a stock market), only behavioral data can be used to evaluate those models. Consider a hypothetical example: an economic theorist creates a model that predicts bidding behavior in online auctions. A neuroscientist then shows that activation in the vmPFC increases as the auction progresses, which leads to the natural interpretation that the subjective value of a good increases as the bids get larger and larger. Yet, to the economic theorist, that information about brain function is simply irrelevant. The model, it is argued, could only be rejected based on data about people's decisions, not about their brains. This contention has been labeled the "Behavioral Sufficiency" argument.
Second, many important aspects of economic modeling involve complex collective phenomena, such as voting behavior, financial markets, and price bubbles. Each of these does involve, at root, individual decisions: the price of an asset in a market reflects all of the individual decisions to buy or sell that asset. Yet, it can be difficult to see how neuroscience data – which describe the functioning of an individual's brain – bear upon collective phenomena. Critics charge that important phenomena like markets can only be understood by abstracting away from the individual decision makers and focusing instead on measures of higher-level interactions (e.g., the ebb and flow of prices themselves). This criticism has been called the "Emergent Phenomenon" argument.
How, given these forceful criticisms, might decision neuroscience shape models of decision-making behavior? A model that fits existing data about choice behavior might fail in new circumstances (e.g., under stress, anger, or sleep deprivation). By understanding how such environmental factors affect brain function, researchers may identify avenues for new experiments that could falsify a seemingly workable model. Similarly, there has been substantial interest in understanding decision making in populations other than the young, healthy, college-educated adults who constitute the lion's share of laboratory participants. To consider just one example, older adults often report an increased focus on positive, compared to negative, consequences of their decisions [208]. Consistent with this psychological bias, work using standard decision neuroscience paradigms indicated that older adults show an attenuated ventral striatal response to anticipated negative outcomes [209]. This neuroscience result led to the hypothesis that older adults would show framing effects for monetary gains but not losses, as recently confirmed experimentally [210]. This series of experiments progressed from observations of behavior, to measures of brain function, to new and testable predictions about behavior.
Neuroscience will not render behavioral economics and cognitive psychology obsolete; instead, it will indicate new and unanticipated directions for research. One such direction will be particularly critical: understanding variability in decision making. The major models of decision-making behavior (e.g., cumulative prospect theory) are often elegant in their simplicity, but they cannot in themselves predict why different people make different decisions. Thus, the future of decision neuroscience lies in contributing to the development of new, robust, and biologically plausible models of behavior.
Cross-References
COGSCI-319: Decision Making
COGSCI-133: Neural Mechanisms of Reward
COGSCI-044: Reinforcement Learning, Psychology of
References
- 1.Glimcher PW, Rustichini A. Neuroeconomics: The consilience of brain and decision. Science. 2004;306:447–452. doi: 10.1126/science.1102566. [DOI] [PubMed] [Google Scholar]
- 2.Platt ML, Huettel SA. Risky business: the neuroeconomics of decision making under uncertainty. Nature Neuroscience. 2008;11:398–403. doi: 10.1038/nn2062. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Doya K. Modulators of decision making. Nature Neuroscience. 2008;11:410–416. doi: 10.1038/nn2077. [DOI] [PubMed] [Google Scholar]
- 4.Rangel A, Camerer C, Montague PR. A framework for studying the neurobiology of value-based decision making. Nature Reviews Neuroscience. 2008;9:545–556. doi: 10.1038/nrn2357. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Sanfey AG. Social decision-making: Insights from game theory and neuroscience. Science. 2007;318:598–602. doi: 10.1126/science.1142996. [DOI] [PubMed] [Google Scholar]
- 6.Sanfey AG, Loewenstein G, McClure SM, Cohen JD. Neuroeconomics: cross-currents in research on decision-making. Trends in Cognitive Sciences. 2006;10:108–116. doi: 10.1016/j.tics.2006.01.009. [DOI] [PubMed] [Google Scholar]
- 7.Wise RA, Rompre PP. Brain Dopamine and Reward. Annual Review of Psychology. 1989;40:191–225. doi: 10.1146/annurev.ps.40.020189.001203. [DOI] [PubMed] [Google Scholar]
- 8.Schultz W. Behavioral dopamine signals. Trends in Neurosciences. 2007;30:203–210. doi: 10.1016/j.tins.2007.03.007. [DOI] [PubMed] [Google Scholar]
- 9.Schultz W. Behavioral theories and the neurophysiology of reward. Annual Review of Psychology. 2006;57:87–115. doi: 10.1146/annurev.psych.56.091103.070229. [DOI] [PubMed] [Google Scholar]
- 10.Schultz W. Multiple reward signals in the brain. Nature Reviews Neuroscience. 2000;1:199–207. doi: 10.1038/35044563. [DOI] [PubMed] [Google Scholar]
- 11.Wise RA, Spindler J, deWit H, Gerberg GJ. Neuroleptic-induced “anhedonia” in rats: pimozide blocks reward quality of food. Science. 1978;201:262–264. doi: 10.1126/science.566469. [DOI] [PubMed] [Google Scholar]
- 12.Wise RA. Neuroleptics and operant behavior: the anehodonia hypothesis. Behavioral Brain Sciences. 1982;5:39–87. [Google Scholar]
- 13.Bishop MP, Elder ST, Heath RG. Intracranial Self-Stimulation in Man. Science. 1963;140:394–396. doi: 10.1126/science.140.3565.394. [DOI] [PubMed] [Google Scholar]
- 14.Olds ME, Fobes JL. The Central Basis of Motivation: Intracranial Self-Stimulation Studies. Annual Review of Psychology. 1981;32:523–574. doi: 10.1146/annurev.ps.32.020181.002515. [DOI] [PubMed] [Google Scholar]
- 15.Olds J, Milner P. Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. Journal of Comparative and Physiological Psychology. 1954;47:419–427. doi: 10.1037/h0058775. [DOI] [PubMed] [Google Scholar]
- 16.Berridge KC, Robinson TE. What is the role of dopamine in reward: hedonic impact, reward learning, or incentive salience? Brain Research Reviews. 1998;28:309–369. doi: 10.1016/s0165-0173(98)00019-8. [DOI] [PubMed] [Google Scholar]
- 17.Berridge KC. The debate over dopamine's role in reward: the case for incentive salience. Psychopharmacology (Berl) 2007;191:391–431. doi: 10.1007/s00213-006-0578-x. [DOI] [PubMed] [Google Scholar]
- 18.Berridge KC. Food reward: Brain substrates of wanting and liking. Neuroscience & Biobehavioral Reviews. 1996;20:1–25. doi: 10.1016/0149-7634(95)00033-b. [DOI] [PubMed] [Google Scholar]
- 19.Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275:1593–1599. doi: 10.1126/science.275.5306.1593. [DOI] [PubMed] [Google Scholar]
- 20.Cannon CM, Bseikri MR. Is dopamine required for natural reward? Physiology & Behavior. 2004;81:741–748. doi: 10.1016/j.physbeh.2004.04.020. [DOI] [PubMed] [Google Scholar]
- 21.Schultz W, Tremblay L, Hollerman JR. Reward prediction in primate basal ganglia and frontal cortex. Neuropharmacology. 1998;37:421–429. doi: 10.1016/s0028-3908(98)00071-9. [DOI] [PubMed] [Google Scholar]
- 22.Schultz W. The phasic reward signal of primate dopamine neurons. Advances in Pharmacology. 1998;42:686–690. doi: 10.1016/s1054-3589(08)60841-8. [DOI] [PubMed] [Google Scholar]
- 23.Schultz W. Predictive reward signal of dopamine neurons. Journal of Neurophysiology. 1998;80:1–27. doi: 10.1152/jn.1998.80.1.1. [DOI] [PubMed] [Google Scholar]
- 24.Tremblay L, Schultz W. Modifications of reward expectation-related neuronal activity during learning in primate orbitofrontal cortex. Journal of Neurophysiology. 2000;83:1877–1885. doi: 10.1152/jn.2000.83.4.1877. [DOI] [PubMed] [Google Scholar]
- 25.Zaghloul KA, Blanco JA, Weidemann CT, McGill K, Jaggi JL, Baltuch GH, Kahana MJ. Human Substantia Nigra Neurons Encode Unexpected Financial Rewards. Science. 2009;323:1496–1499. doi: 10.1126/science.1167342. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Berns GS, McClure SM, Pagnoni G, Montague PR. Predictability modulates human brain response to reward. Journal of Neuroscience. 2001;21:2793–2798. doi: 10.1523/JNEUROSCI.21-08-02793.2001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.McClure SM, Berns GS, Montague PR. Temporal prediction errors in a passive learning task activate human striatum. Neuron. 2003;38:339–346. doi: 10.1016/s0896-6273(03)00154-5. [DOI] [PubMed] [Google Scholar]
- 28.D'Ardenne K, McClure SM, Nystrom LE, Cohen JD. BOLD Responses Reflecting Dopaminergic Signals in the Human Ventral Tegmental Area. Science. 2008;319:1264–1267. doi: 10.1126/science.1150605. [DOI] [PubMed] [Google Scholar]
- 29.Platt ML, Glimcher PW. Neural correlates of decision variables in parietal cortex. Nature. 1999;400:233–238. doi: 10.1038/22268. [DOI] [PubMed] [Google Scholar]
- 30.Deaner RO, Platt ML. Reflexive social attention in monkeys and humans. Current Biology. 2003;13:1609–1613. doi: 10.1016/j.cub.2003.08.025. [DOI] [PubMed] [Google Scholar]
- 31.McCoy AN, Platt ML. Risk-sensitive neurons in macaque posterior cingulate cortex. Nature Neuroscience. 2005;8:1220–1227. doi: 10.1038/nn1523. [DOI] [PubMed] [Google Scholar]
- 32.Dean HL, Platt ML. Spatial representations in posterior cingulate cortex (Abstract) Journal of Vision. 2003;3:427. [Google Scholar]
- 33.McCoy AN, Crowley JC, Haghighian G, Dean HL, Platt ML. Saccade reward signals in posterior cingulate cortex. Neuron. 2003;40:1031–1040. doi: 10.1016/s0896-6273(03)00719-0. [DOI] [PubMed] [Google Scholar]
- 34.Dorris MC, Glimcher PW. Activity in posterior parietal cortex is correlated with the relative subjective desirability of action. Neuron. 2004;44:365–378. doi: 10.1016/j.neuron.2004.09.009. [DOI] [PubMed] [Google Scholar]
- 35.Knutson B, Westdorp A, Kaiser E, Hommer D. FMRI visualization of brain activity during a monetary incentive delay task. Neuroimage. 2000;12:20–27. doi: 10.1006/nimg.2000.0593. [DOI] [PubMed] [Google Scholar]
- 36.Knutson B, Greer SM. Anticipatory affect: neural correlates and consequences for choice. Philosophical Transactions of the Royal Society B: Biological Sciences. 2008;363:3771–3786. doi: 10.1098/rstb.2008.0155. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Delgado MR, Nystrom LE, Fissell C, Noll DC, Fiez JA. Tracking the hemodynamic responses to reward and punishment in the striatum. Journal of Neurophysiology. 2000;84:3072–3077. doi: 10.1152/jn.2000.84.6.3072. [DOI] [PubMed] [Google Scholar]
- 38.Elliott R, Friston KJ, Dolan RJ. Dissociable Neural Responses in Human Reward Systems. Journal of Neuroscience. 2000;20:6159–6165. doi: 10.1523/JNEUROSCI.20-16-06159.2000. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Breiter HC, Aharon I, Kahneman D, Dale A, Shizgal P. Functional imaging of neural responses to expectancy and experience of monetary gains and losses. Neuron. 2001;30:619–639. doi: 10.1016/s0896-6273(01)00303-8. [DOI] [PubMed] [Google Scholar]
- 40.Knutson B, Fong GW, Adams CM, Varner JL, Hommer D. Dissociation of reward anticipation and outcome with event-related fMRI. Neuroreport. 2001;12:3683–3687. doi: 10.1097/00001756-200112040-00016. [DOI] [PubMed] [Google Scholar]
- 41.Knutson B, Fong GW, Bennett SM, Adams CM, Homme D. A region of mesial prefrontal cortex tracks monetarily rewarding outcomes: characterization with rapid event-related fMRI. Neuroimage. 2003;18:263–272. doi: 10.1016/s1053-8119(02)00057-5. [DOI] [PubMed] [Google Scholar]
- 42.Knutson B, Taylor J, Kaufman M, Peterson R, Glover G. Distributed neural representation of expected value. Journal of Neuroscience. 2005;25:4806–4812. doi: 10.1523/JNEUROSCI.0642-05.2005.
- 43.Rolls ET, Sienkiewicz ZJ, Yaxley S. Hunger Modulates the Responses to Gustatory Stimuli of Single Neurons in the Caudolateral Orbitofrontal Cortex of the Macaque Monkey. European Journal of Neuroscience. 1989;1:53–60. doi: 10.1111/j.1460-9568.1989.tb00774.x.
- 44.Critchley HD, Rolls ET. Hunger and satiety modify the responses of olfactory and visual neurons in the primate orbitofrontal cortex. Journal of Neurophysiology. 1996;75:1673–1686. doi: 10.1152/jn.1996.75.4.1673.
- 45.O'Doherty J, Rolls ET, Francis S, Bowtell R, McGlone F, Kobal G, Renner B, Ahne G. Sensory-specific satiety-related olfactory activation of the human orbitofrontal cortex. NeuroReport. 2000;11:893–897. doi: 10.1097/00001756-200003200-00046.
- 46.Kringelbach ML, O'Doherty J, Rolls ET, Andrews C. Activation of the human orbitofrontal cortex to a liquid food stimulus is correlated with its subjective pleasantness. Cerebral Cortex. 2003;13:1064–1071. doi: 10.1093/cercor/13.10.1064.
- 47.Small DM, Gregory MD, Mak YE, Gitelman D, Mesulam MM, Parrish T. Dissociation of Neural Representation of Intensity and Affective Valuation in Human Gustation. Neuron. 2003;39:701–711. doi: 10.1016/s0896-6273(03)00467-7.
- 48.O'Doherty J, Rolls ET, Francis S, Bowtell R, McGlone F. Representation of pleasant and aversive taste in the human brain. Journal of Neurophysiology. 2001;85:1315–1321. doi: 10.1152/jn.2001.85.3.1315.
- 49.O'Doherty JP, Deichmann R, Critchley HD, Dolan RJ. Neural responses during anticipation of a primary taste reward. Neuron. 2002;33:815–826. doi: 10.1016/s0896-6273(02)00603-7.
- 50.Anderson AK, Christoff K, Stappen I, Panitz D, Ghahremani DG, Glover G, Gabrieli JDE, Sobel N. Dissociated neural representations of intensity and valence in human olfaction. Nature Neuroscience. 2003;6:196–202. doi: 10.1038/nn1001.
- 51.Gottfried JA, O'Doherty J, Dolan RJ. Appetitive and aversive olfactory learning in humans studied using event-related functional magnetic resonance imaging. Journal of Neuroscience. 2002;22:10829–10837. doi: 10.1523/JNEUROSCI.22-24-10829.2002.
- 52.Rolls ET, O'Doherty J, Kringelbach ML, Francis S, Bowtell R, McGlone F. Representations of pleasant and painful touch in the human orbitofrontal and cingulate cortices. Cerebral Cortex. 2003;13:308–317. doi: 10.1093/cercor/13.3.308.
- 53.Winston JS, O'Doherty J, Kilner JM, Perrett DI, Dolan RJ. Brain systems for assessing facial attractiveness. Neuropsychologia. 2007;45:195–206. doi: 10.1016/j.neuropsychologia.2006.05.009.
- 54.O'Doherty JP, Winston J, Critchley H, Perrett D, Burt DM, Dolan RJ. Beauty in a smile: the role of medial orbitofrontal cortex in facial attractiveness. Neuropsychologia. 2003;41:147–155. doi: 10.1016/s0028-3932(02)00145-8.
- 55.Cloutier J, Heatherton TF, Whalen PJ, Kelley WM. Are Attractive People Rewarding? Sex Differences in the Neural Substrates of Facial Attractiveness. Journal of Cognitive Neuroscience. 2008;20:941–951. doi: 10.1162/jocn.2008.20062.
- 56.Aharon I, Etcoff N, Ariely D, Chabris CF, O'Connor E, Breiter HC. Beautiful faces have variable reward value: fMRI and behavioral evidence. Neuron. 2001;32:537–551. doi: 10.1016/s0896-6273(01)00491-3.
- 57.Arnow BA, Desmond JE, Banner LL, Glover GH, Solomon A, Polan ML, Lue TF, Atlas SW. Brain activation and sexual arousal in healthy, heterosexual males. Brain. 2002;125:1014–1023. doi: 10.1093/brain/awf108.
- 58.Ferretti A, Caulo M, Del Gratta C, Di Matteo R, Merla A, Montorsi F, Pizzella V, Pompa P, Rigatti P, Rossini PM, Salonia A, Tartaro A, Romani GL. Dynamics of male sexual arousal: distinct components of brain activation revealed by fMRI. NeuroImage. 2005;26:1086–1096. doi: 10.1016/j.neuroimage.2005.03.025.
- 59.Vartanian O, Goel V. Neuroanatomical correlates of aesthetic preference for paintings. NeuroReport. 2004;15:893–897. doi: 10.1097/00001756-200404090-00032.
- 60.Mobbs D, Greicius MD, Abdel-Azim E, Menon V, Reiss AL. Humor modulates the mesolimbic reward centers. Neuron. 2003;40:1041–1048. doi: 10.1016/s0896-6273(03)00751-7.
- 61.Harbaugh WT, Mayr U, Burghart DR. Neural Responses to Taxation and Voluntary Giving Reveal Motives for Charitable Donations. Science. 2007;316:1622–1625. doi: 10.1126/science.1140738.
- 62.Moll J, Krueger F, Zahn R, Pardini M, de Oliveira-Souza R, Grafman J. Human fronto-mesolimbic networks guide decisions about charitable donation. Proceedings of the National Academy of Sciences of the USA. 2006;103:15623–15628. doi: 10.1073/pnas.0604475103.
- 63.Aron A, Fisher H, Mashek DJ, Strong G, Li H, Brown LL. Reward, motivation, and emotion systems associated with early-stage intense romantic love. Journal of Neurophysiology. 2005;94:327–337. doi: 10.1152/jn.00838.2004.
- 64.Bartels A, Zeki S. The neural correlates of maternal and romantic love. NeuroImage. 2004;21:1155–1166. doi: 10.1016/j.neuroimage.2003.11.003.
- 65.Bartels A, Zeki S. The neural basis of romantic love. NeuroReport. 2000;11:3829–3834. doi: 10.1097/00001756-200011270-00046.
- 66.Fehr E, Camerer CF. Social neuroeconomics: the neural circuitry of social preferences. Trends in Cognitive Sciences. 2007;11:419–427. doi: 10.1016/j.tics.2007.09.002.
- 67.Joshua M, Adler A, Mitelman R, Vaadia E, Bergman H. Midbrain Dopaminergic Neurons and Striatal Cholinergic Interneurons Encode the Difference between Reward and Aversive Events at Different Epochs of Probabilistic Classical Conditioning Trials. Journal of Neuroscience. 2008;28:11673–11684. doi: 10.1523/JNEUROSCI.3839-08.2008.
- 68.Brischoux F, Chakraborty S, Brierley DI, Ungless MA. Phasic excitation of dopamine neurons in ventral VTA by noxious stimuli. Proceedings of the National Academy of Sciences of the USA. 2009;106:4894–4899. doi: 10.1073/pnas.0811507106.
- 69.Seymour B, Daw N, Dayan P, Singer T, Dolan R. Differential encoding of losses and gains in the human striatum. Journal of Neuroscience. 2007;27:4826–4831. doi: 10.1523/JNEUROSCI.0400-07.2007.
- 70.Kim H, Shimojo S, O'Doherty JP. Is avoiding an aversive outcome rewarding? Neural substrates of avoidance learning in the human brain. PLoS Biology. 2006;4:1453–1461. doi: 10.1371/journal.pbio.0040233.
- 71.Jensen J, McIntosh AR, Crawley AP, Mikulis DJ, Remington G, Kapur S. Direct Activation of the Ventral Striatum in Anticipation of Aversive Stimuli. Neuron. 2003;40:1251–1257. doi: 10.1016/s0896-6273(03)00724-4.
- 72.Becerra L, Breiter HC, Wise R, Gonzalez RG, Borsook D. Reward circuitry activation by noxious thermal stimuli. Neuron. 2001;32:927–946. doi: 10.1016/s0896-6273(01)00533-5.
- 73.Zink CF, Pagnoni G, Martin ME, Dhamala M, Berns GS. Human striatal response to salient nonrewarding stimuli. Journal of Neuroscience. 2003;23:8092–8097. doi: 10.1523/JNEUROSCI.23-22-08092.2003.
- 74.Zink CF, Pagnoni G, Chappelow J, Martin-Skurski M, Berns GS. Human striatal activation reflects degree of stimulus saliency. Neuroimage. 2006;29:977–983. doi: 10.1016/j.neuroimage.2005.08.006.
- 75.Zink CF, Pagnoni G, Martin-Skurski ME, Chappelow JC, Berns GS. Human striatal responses to monetary reward depend on saliency. Neuron. 2004;42:509–517. doi: 10.1016/s0896-6273(04)00183-7.
- 76.Cooper JC, Knutson B. Valence and salience contribute to nucleus accumbens activation. NeuroImage. 2008;39:538–547. doi: 10.1016/j.neuroimage.2007.08.009.
- 77.Jensen J, Smith AJ, Willeit M, Crawley AP, Mikulis DJ, Vitcu I, Kapur S. Separate brain regions code for salience vs. valence during reward prediction in humans. Human Brain Mapping. 2007;28:294–302. doi: 10.1002/hbm.20274.
- 78.Garner WR. Uncertainty and structure as psychological concepts. Wiley; New York: 1962.
- 79.Knight FH. Risk, Uncertainty and Profit. Houghton Mifflin; New York: 1921.
- 80.Fiorillo CD, Tobler PN, Schultz W. Discrete coding of reward probability and uncertainty by dopamine neurons. Science. 2003;299:1898–1902. doi: 10.1126/science.1077349.
- 81.Hsu M, Krajbich I, Zhao C, Camerer CF. Neural Response to Reward Anticipation under Risk Is Nonlinear in Probabilities. Journal of Neuroscience. 2009;29:2231–2237. doi: 10.1523/JNEUROSCI.5296-08.2009.
- 82.Huettel SA, Song AW, McCarthy G. Decisions under uncertainty: Probabilistic context influences activation of prefrontal and parietal cortices. Journal of Neuroscience. 2005;25:3304–3311. doi: 10.1523/JNEUROSCI.5070-04.2005.
- 83.Huettel SA, Stowe CJ, Gordon EM, Warner BT, Platt ML. Neural signatures of economic preferences for risk and ambiguity. Neuron. 2006;49:765–775. doi: 10.1016/j.neuron.2006.01.024.
- 84.Preuschoff K, Quartz SR, Bossaerts P. Human insula activation reflects risk prediction errors as well as risk. Journal of Neuroscience. 2008;28:2745–2752. doi: 10.1523/JNEUROSCI.4286-07.2008.
- 85.Huettel SA. Behavioral, but not reward, risk modulates activation of prefrontal, parietal, and insular cortices. Cognitive Affective & Behavioral Neuroscience. 2006;6:141–151. doi: 10.3758/cabn.6.2.141.
- 86.Behrens TEJ, Woolrich MW, Walton ME, Rushworth MFS. Learning the value of information in an uncertain world. Nature Neuroscience. 2007;10:1214–1221. doi: 10.1038/nn1954.
- 87.Damasio AR, Everitt BJ, Bishop D. The Somatic Marker Hypothesis and the Possible Functions of the Prefrontal Cortex [and Discussion] Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences. 1996;351:1413–1420. doi: 10.1098/rstb.1996.0125.
- 88.Bechara A, Damasio H, Damasio AR. Emotion, decision making and the orbitofrontal cortex. Cerebral Cortex. 2000;10:295–307. doi: 10.1093/cercor/10.3.295.
- 89.Paulus MP, Rogalsky C, Simmons A, Feinstein JS, Stein MB. Increased activation in the right insula during risk-taking decision making is related to harm avoidance and neuroticism. Neuroimage. 2003;19:1439–1448. doi: 10.1016/s1053-8119(03)00251-9.
- 90.Kuhnen CM, Knutson B. The neural basis of financial risk taking. Neuron. 2005;47:763–770. doi: 10.1016/j.neuron.2005.08.008.
- 91.Ellsberg D. Risk, ambiguity, and the Savage axioms. The Quarterly Journal of Economics. 1961:643–669.
- 92.Hsu M, Bhatt M, Adolphs R, Tranel D, Camerer CF. Neural systems responding to degrees of uncertainty in human decision-making. Science. 2005;310:1680–1683. doi: 10.1126/science.1115327.
- 93.O'Doherty J, Kringelbach ML, Rolls ET, Hornak J, Andrews C. Abstract reward and punishment representations in the human orbitofrontal cortex. Nature Neuroscience. 2001;4:95–102. doi: 10.1038/82959.
- 94.Kringelbach ML, Rolls ET. The functional neuroanatomy of the human orbitofrontal cortex: evidence from neuroimaging and neuropsychology. Progress in Neurobiology. 2004;72:341–372. doi: 10.1016/j.pneurobio.2004.03.006.
- 95.Bach DR, Seymour B, Dolan RJ. Neural Activity Associated with the Passive Prediction of Ambiguity and Risk for Aversive Events. Journal of Neuroscience. 2009;29:1648–1656. doi: 10.1523/JNEUROSCI.4578-08.2009.
- 96.McClure SM, Ericson KM, Laibson DI, Loewenstein G, Cohen JD. Time discounting for primary rewards. Journal of Neuroscience. 2007;27:5796–5804. doi: 10.1523/JNEUROSCI.4246-06.2007.
- 97.McClure SM, Laibson DI, Loewenstein G, Cohen JD. Separate neural systems value immediate and delayed monetary rewards. Science. 2004;306:503–507. doi: 10.1126/science.1100907.
- 98.Kable JW, Glimcher PW. The neural correlates of subjective value during intertemporal choice. Nature Neuroscience. 2007;10:1625–1633. doi: 10.1038/nn2007.
- 99.Luhmann CC, Chun MM, Yi D-J, Lee D, Wang X-J. Neural Dissociation of Delay and Uncertainty in Intertemporal Choice. Journal of Neuroscience. 2008;28:14459–14466. doi: 10.1523/JNEUROSCI.5058-08.2008.
- 100.Gregorios-Pippas L, Tobler PN, Schultz W. Short-Term Temporal Discounting of Reward Value in Human Ventral Striatum. Journal of Neurophysiology. 2009;101:1507–1523. doi: 10.1152/jn.90730.2008.
- 101.Von Neumann J, Morgenstern O. Theory of games and economic behavior. Princeton University Press; Princeton: 1944.
- 102.Singer T, Kiebel SJ, Winston JS, Dolan RJ, Frith CD. Brain responses to the acquired moral status of faces. Neuron. 2004;41:653–662. doi: 10.1016/s0896-6273(04)00014-5.
- 103.Rilling J, Gutman D, Zeh T, Pagnoni G, Berns G, Kilts C. A neural basis for social cooperation. Neuron. 2002;35:395–405. doi: 10.1016/s0896-6273(02)00755-9.
- 104.Sanfey AG, Rilling JK, Aronson JA, Nystrom LE, Cohen JD. The neural basis of economic decision-making in the Ultimatum Game. Science. 2003;300:1755–1758. doi: 10.1126/science.1082976.
- 105.Tabibnia G, Satpute AB, Lieberman MD. The Sunny Side of Fairness: Preference for Fairness Activates Reward Circuitry (and Disregarding Unfairness Activates Self-Control Circuitry). Psychological Science. 2008;19:339–347. doi: 10.1111/j.1467-9280.2008.02091.x.
- 106.Hsu M, Anen C, Quartz SR. The Right and the Good: Distributive Justice and Neural Encoding of Equity and Efficiency. Science. 2008;320:1092–1095. doi: 10.1126/science.1153651.
- 107.Knoch D, Pascual-Leone A, Meyer K, Treyer V, Fehr E. Diminishing Reciprocal Fairness by Disrupting the Right Prefrontal Cortex. Science. 2006;314:829–832. doi: 10.1126/science.1129156.
- 108.Tankersley D, Stowe CJ, Huettel SA. Altruism is associated with an increased neural response to agency. Nature Neuroscience. 2007;10:150–151. doi: 10.1038/nn1833.
- 109.de Quervain DJ, Fischbacher U, Treyer V, Schellhammer M, Schnyder U, Buck A, Fehr E. The neural basis of altruistic punishment. Science. 2004;305:1254–1258. doi: 10.1126/science.1100735.
- 110.Seymour B, Singer T, Dolan R. The neurobiology of punishment. Nature Reviews Neuroscience. 2007;8:300–311. doi: 10.1038/nrn2119.
- 111.Buckholtz JW, Asplund CL, Dux PE, Zald DH, Gore JC, Jones OD, Marois R. The Neural Correlates of Third-Party Punishment. Neuron. 2008;60:930–940. doi: 10.1016/j.neuron.2008.10.016.
- 112.Montague PR, Berns GS, Cohen JD, McClure SM, Pagnoni G, Dhamala M, Wiest MC, Karpov I, King RD, Apple N. Hyperscanning: Simultaneous fMRI during linked social interactions. Neuroimage. 2002;16:1159–1164. doi: 10.1006/nimg.2002.1150.
- 113.King-Casas B, Tomlin D, Anen C, Camerer CF, Quartz SR, Montague PR. Getting to know you: Reputation and trust in a two-person economic exchange. Science. 2005;308:78–83. doi: 10.1126/science.1108062.
- 114.Barraclough DJ, Conroy ML, Lee D. Prefrontal cortex and decision making in a mixed-strategy game. Nature Neuroscience. 2004;7:404–410. doi: 10.1038/nn1209.
- 115.Little AC, Jones BC, Waitt C, Tiddeman BP, Feinberg DR, Perrett DI, Apicella CL, Marlowe FW. Symmetry Is Related to Sexual Dimorphism in Faces: Data Across Culture and Species. PLoS ONE. 2008;3. doi: 10.1371/journal.pone.0002106.
- 116.Hayden BY, Parikh PC, Deaner RO, Platt ML. Economic principles motivating social attention in humans. Proceedings of the Royal Society B. 2007;274:1751–1756. doi: 10.1098/rspb.2007.0368.
- 117.Deaner RO, Khera AV, Platt ML. Monkeys pay per view: adaptive valuation of social images by rhesus macaques. Current Biology. 2005;15:543–548. doi: 10.1016/j.cub.2005.01.044.
- 118.Klein JT, Deaner RO, Platt ML. Neural correlates of social target value in macaque parietal cortex. Current Biology. 2008;18:419–424. doi: 10.1016/j.cub.2008.02.047.
- 119.Behrens TEJ, Hunt LT, Rushworth MFS. The Computation of Social Behavior. Science. 2009;324:1160–1164. doi: 10.1126/science.1169694.
- 120.Rudebeck PH, Buckley MJ, Walton ME, Rushworth MFS. A Role for the Macaque Anterior Cingulate Gyrus in Social Valuation. Science. 2006;313:1310–1312. doi: 10.1126/science.1128197.
- 121.Behrens TEJ, Hunt LT, Woolrich MW, Rushworth MFS. Associative learning of social value. Nature. 2008;456:245–249. doi: 10.1038/nature07538.
- 122.Lee D. Game theory and neural basis of social decision making. Nature Neuroscience. 2008;11:404–409. doi: 10.1038/nn2065.
- 123.Montague PR, Berns GS. Neural economics and the biological substrates of valuation. Neuron. 2002;36:265–284. doi: 10.1016/s0896-6273(02)00974-1.
- 124.Montague PR, King-Casas B. Efficient statistics, common currencies and the problem of reward-harvesting. Trends in Cognitive Sciences. 2007;11:514–519. doi: 10.1016/j.tics.2007.10.002.
- 125.Knutson B, Rick S, Wimmer GE, Prelec D, Loewenstein G. Neural predictors of purchases. Neuron. 2007;53:147–156. doi: 10.1016/j.neuron.2006.11.010.
- 126.Plassmann H, O'Doherty J, Rangel A. Orbitofrontal Cortex Encodes Willingness to Pay in Everyday Economic Transactions. Journal of Neuroscience. 2007;27:9984. doi: 10.1523/JNEUROSCI.2131-07.2007.
- 127.Hare TA, O'Doherty J, Camerer CF, Schultz W, Rangel A. Dissociating the Role of the Orbitofrontal Cortex and the Striatum in the Computation of Goal Values and Prediction Errors. Journal of Neuroscience. 2008;28:5623. doi: 10.1523/JNEUROSCI.1309-08.2008.
- 128.Hare TA, Camerer CF, Rangel A. Self-Control in Decision-Making Involves Modulation of the vmPFC Valuation System. Science. 2009;324:646–648. doi: 10.1126/science.1168450.
- 129.Padoa-Schioppa C, Assad JA. Neurons in the orbitofrontal cortex encode economic value. Nature. 2006;441:223–226. doi: 10.1038/nature04676.
- 130.Padoa-Schioppa C, Assad JA. The representation of economic value in the orbitofrontal cortex is invariant for changes of menu. Nature Neuroscience. 2008;11:95–102. doi: 10.1038/nn2020.
- 131.Thaler R. Toward a positive theory of consumer choice. Journal of Economic Behavior and Organization. 1980;1:39–60.
- 132.Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211:453–458. doi: 10.1126/science.7455683.
- 133.Clithero JA, Smith DV. Reference and preference: how does the brain scale subjective value? Frontiers in Human Neuroscience. 2009;3. doi: 10.3389/neuro.09.011.2009.
- 134.Seymour B, McClure SM. Anchors, scales and the relative coding of value in the brain. Current Opinion in Neurobiology. 2008;18:173–178. doi: 10.1016/j.conb.2008.07.010.
- 135.Knutson B, Wimmer GE, Rick S, Hollon NG, Prelec D, Loewenstein G. Neural antecedents of the endowment effect. Neuron. 2008;58:814–822. doi: 10.1016/j.neuron.2008.05.018.
- 136.De Martino B, Kumaran D, Holt B, Dolan RJ. The Neurobiology of Reference-Dependent Value Computation. Journal of Neuroscience. 2009;29:3833–3842. doi: 10.1523/JNEUROSCI.4832-08.2009.
- 137.De Martino B, Kumaran D, Seymour B, Dolan RJ. Frames, Biases, and Rational Decision-Making in the Human Brain. Science. 2006;313:684–687. doi: 10.1126/science.1128356.
- 138.Venkatraman V, Payne JW, Bettman JR, Luce MF, Huettel SA. Separate Neural Mechanisms Underlie Choices and Strategic Preferences in Risky Decision Making. Neuron. 2009;62:593–602. doi: 10.1016/j.neuron.2009.04.007.
- 139.Chiu PH, Lohrenz TM, Montague PR. Smokers' brains compute, but ignore, a fictive error signal in a sequential investment task. Nature Neuroscience. 2008;11:514–520. doi: 10.1038/nn2067.
- 140.Hayden BY, Pearson JM, Platt ML. Fictive Reward Signals in the Anterior Cingulate Cortex. Science. 2009;324:948–950. doi: 10.1126/science.1168488.
- 141.Lohrenz T, McCabe K, Camerer CF, Montague PR. Neural signature of fictive learning signals in a sequential investment task. Proceedings of the National Academy of Sciences of the USA. 2007;104:9493–9498. doi: 10.1073/pnas.0608842104.
- 142.Tversky A, Kahneman D. Judgment under Uncertainty: Heuristics and Biases. Science. 1974;185:1124–1131. doi: 10.1126/science.185.4157.1124.
- 143.Tom SM, Fox CR, Trepel C, Poldrack RA. The neural basis of loss aversion in decision-making under risk. Science. 2007;315:515–518. doi: 10.1126/science.1134239.
- 144.Mobbs D, Petrovic P, Marchant JL, Hassabis D, Weiskopf N, Seymour B, Dolan RJ, Frith CD. When Fear Is Near: Threat Imminence Elicits Prefrontal-Periaqueductal Gray Shifts in Humans. Science. 2007;317:1079–1083. doi: 10.1126/science.1144298.
- 145.Spitzer M, Fischbacher U, Herrnberger B, Gron G, Fehr E. The neural signature of social norm compliance. Neuron. 2007;56:185–196. doi: 10.1016/j.neuron.2007.09.011.
- 146.Beaver JD, Lawrence AD, van Ditzhuijzen J, Davis MH, Woods A, Calder AJ. Individual differences in reward drive predict neural responses to images of food. Journal of Neuroscience. 2006;26:5160. doi: 10.1523/JNEUROSCI.0350-06.2006.
- 147.Hariri AR. The Neurobiology of Individual Differences in Complex Behavioral Traits. Annual Review of Neuroscience. 2009;32. doi: 10.1146/annurev.neuro.051508.135335.
- 148.Frank MJ, Doll BB, Oas-Terpstra J, Moreno F. Prefrontal and striatal dopaminergic genes predict individual differences in exploration and exploitation. Nature Neuroscience. 2009;12:1062–1068. doi: 10.1038/nn.2342.
- 149.Boettiger CA, Mitchell JM, Tavares VC, Robertson M, Joslyn G, D'Esposito M, Fields HL. Immediate reward bias in humans: Fronto-parietal networks and a role for the catechol-O-methyltransferase 158(Val/Val) genotype. Journal of Neuroscience. 2007;27:14383–14391. doi: 10.1523/JNEUROSCI.2551-07.2007.
- 150.Yacubian J, Sommer T, Schroeder K, Glascher J, Kalisch R, Leuenberger B, Braus DF, Büchel C. Gene-gene interaction associated with neural reward sensitivity. Proceedings of the National Academy of Sciences of the USA. 2007;104:8125–8130. doi: 10.1073/pnas.0702029104.
- 151.Klein TA, Neumann J, Reuter M, Hennig J, von Cramon DY, Ullsperger M. Genetically Determined Differences in Learning from Errors. Science. 2007;318:1642–1645. doi: 10.1126/science.1145044.
- 152.Roiser JP, de Martino B, Tan GCY, Kumaran D, Seymour B, Wood NW, Dolan RJ. A Genetically Mediated Bias in Decision Making Driven by Failure of Amygdala Control. Journal of Neuroscience. 2009;29:5985–5991. doi: 10.1523/JNEUROSCI.0407-09.2009.
- 153.Krugel LK, Biele G, Mohr PNC, Li S-C, Heekeren HR. Genetic variation in dopaminergic neuromodulation influences the ability to rapidly and flexibly adapt decisions. Proceedings of the National Academy of Sciences of the USA. 2009;106:17951–17956. doi: 10.1073/pnas.0905191106.
- 154.Mackay TFC, Stone EA, Ayroles JF. The genetics of quantitative traits: challenges and prospects. Nature Reviews Genetics. 2009;10:565–577. doi: 10.1038/nrg2612.
- 155.Meyer-Lindenberg A, Nicodemus KK, Egan MF, Callicott JH, Mattay V, Weinberger DR. False positives in imaging genetics. NeuroImage. 2008;40:655–661. doi: 10.1016/j.neuroimage.2007.11.058.
- 156.Meyer-Lindenberg A, Weinberger DR. Intermediate phenotypes and genetic mechanisms of psychiatric disorders. Nature Reviews Neuroscience. 2006;7:818–827. doi: 10.1038/nrn1993.
- 157.Meyer-Lindenberg A. Neural connectivity as an intermediate phenotype: Brain networks under genetic control. Human Brain Mapping. 2009;30:1938–1946. doi: 10.1002/hbm.20639.
- 158.Kendler KS, Fiske A, Gardner CO, Gatz M. Delineation of Two Genetic Pathways to Major Depression. Biological Psychiatry. 2009;65:808–811. doi: 10.1016/j.biopsych.2008.11.015.
- 159.Esslinger C, Walter H, Kirsch P, Erk S, Schnell K, Arnold C, Haddad L, Mier D, Opitz von Boberfeld C, Raab K, Witt SH, Rietschel M, Cichon S, Meyer-Lindenberg A. Neural Mechanisms of a Genome-Wide Supported Psychosis Variant. Science. 2009;324:605. doi: 10.1126/science.1167768.
- 160.Vul E, Harris C, Winkielman P, Pashler H. Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition. Perspectives on Psychological Science. 2009;4:274–290. doi: 10.1111/j.1745-6924.2009.01125.x.
- 161.Lieberman MD, Berkman ET, Wager TD. Correlations in Social Neuroscience Aren't Voodoo: Commentary on Vul et al. (2009). Perspectives on Psychological Science. 2009;4:299–307. doi: 10.1111/j.1745-6924.2009.01128.x.
- 162.Poldrack RA, Mumford JA. Independence in ROI analysis: where is the voodoo? Social Cognitive and Affective Neuroscience. 2009;4:208–213. doi: 10.1093/scan/nsp011.
- 163.Kriegeskorte N, Simmons WK, Bellgowan PSF, Baker CI. Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience. 2009;12:535–540. doi: 10.1038/nn.2303.
- 164.Poldrack RA. Can cognitive processes be inferred from neuroimaging data? Trends in Cognitive Sciences. 2006;10:59–63. doi: 10.1016/j.tics.2005.12.004.
- 165.O'Doherty JP, Dayan P, Schultz J, Deichmann R, Friston K, Dolan RJ. Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science. 2004;304:452–454. doi: 10.1126/science.1094285.
- 166.O'Doherty JP, Buchanan TW, Seymour B, Dolan RJ. Predictive neural coding of reward preference involves dissociable responses in human ventral midbrain and ventral striatum. Neuron. 2006;49:157–166. doi: 10.1016/j.neuron.2005.11.014.
- 167.O'Doherty JP, Hampton A, Kim H. Model-based fMRI and its application to reward learning and decision making. Reward and Decision Making in Corticobasal Ganglia Networks. 2007:35–53. doi: 10.1196/annals.1390.022.
- 168.Daw ND, Doya K. The computational neurobiology of learning and reward. Current Opinion in Neurobiology. 2006;16:199–204. doi: 10.1016/j.conb.2006.03.006.
- 169.Skinner BF. The Behavior of Organisms. Appleton-Century-Crofts; New York: 1938.
- 170.Watson JB. Behaviorism. University of Chicago Press; Chicago, IL: 1930.
- 171.Sutton RS, Barto AG. Reinforcement Learning. MIT Press; Cambridge, MA: 1998.
- 172.Rescorla RA, Wagner AR. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy WF, editors. Classical conditioning II: Current research and theory. Appleton-Century-Crofts; New York: 1972. pp. 64–99.
- 173.Kahneman D, Tversky A. Choices, Values, and Frames. American Psychologist. 1984;39:341–350.
- 174.Gilovich T, Griffin DW, Kahneman D. Heuristics and biases: The psychology of intuitive judgement. Cambridge University Press; 2002.
- 175.Racine E, Bar-Ilan O, Illes J. fMRI in the public eye. Nature Reviews Neuroscience. 2005;6:159–164. doi: 10.1038/nrn1609.
- 176.Huettel SA, Misiurek J, Jurkowski AJ, McCarthy G. Dynamic and strategic aspects of executive processing. Brain Research. 2004;1000:78–84. doi: 10.1016/j.brainres.2003.11.041.
- 177.Wu CT, Weissman DH, Roberts KC, Woldorff MG. The neural circuitry underlying the executive control of auditory spatial attention. Brain Research. 2007;1134:187–198. doi: 10.1016/j.brainres.2006.11.088.
- 178.Weller JA, Levin IP, Shiv B, Bechara A. Neural correlates of adaptive decision making for risky gains and losses. Psychological Science. 2007;18:958–964. doi: 10.1111/j.1467-9280.2007.02009.x.
- 179.Camille N, Coricelli G, Sallet J, Pradat-Diehl P, Duhamel J-R, Sirigu A. The Involvement of the Orbitofrontal Cortex in the Experience of Regret. Science. 2004;304:1167–1170. doi: 10.1126/science.1094550.
- 180.Sridharan D, Levitin DJ, Menon V. A critical role for the right fronto-insular cortex in switching between central-executive and default-mode networks. Proceedings of the National Academy of Sciences USA. 2008;105:12569–12574. doi: 10.1073/pnas.0800005105.
- 181.Koechlin E, Ody C, Kouneiher F. The Architecture of Cognitive Control in the Human Prefrontal Cortex. Science. 2003;302:1181–1185. doi: 10.1126/science.1088545.
- 182.Ogawa S, Lee TM, Kay AR, Tank DW. Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proceedings of the National Academy of Sciences of the USA. 1990;87:9868–9872. doi: 10.1073/pnas.87.24.9868.
- 183.Ogawa S, Tank DW, Menon R, Ellermann JM, Kim SG, Merkle H, Ugurbil K. Intrinsic signal changes accompanying sensory stimulation: functional brain mapping with magnetic resonance imaging. Proceedings of the National Academy of Sciences of the USA. 1992;89:5951–5955. doi: 10.1073/pnas.89.13.5951.
- 184.Huettel SA, Song AW, McCarthy G. Functional Magnetic Resonance Imaging. Sinauer Associates; Sunderland, Mass: 2009.
- 185.Logothetis NK, Pauls J, Augath M, Trinath T, Oeltermann A. Neurophysiological investigation of the basis of the fMRI signal. Nature. 2001;412:150–157. doi: 10.1038/35084005.
- 186.Logothetis NK. What we can do and what we cannot do with fMRI. Nature. 2008;453:869–878. doi: 10.1038/nature06976.
- 187.Logothetis NK, Wandell BA. Interpreting the BOLD signal. Annual Review of Physiology. 2004;66:735–769. doi: 10.1146/annurev.physiol.66.082602.092845.
- 188.Hayden BY, Platt ML. Temporal discounting predicts risk sensitivity in rhesus macaques. Current Biology. 2007;17:49–53. doi: 10.1016/j.cub.2006.10.055.
- 189.Delgado MR, Frank RH, Phelps EA. Perceptions of moral character modulate the neural systems of reward during the trust game. Nature Neuroscience. 2005;8:1611–1618. doi: 10.1038/nn1575.
- 190.Daw ND, O'Doherty JP, Dayan P, Seymour B, Dolan RJ. Cortical substrates for exploratory decisions in humans. Nature. 2006;441:876–879. doi: 10.1038/nature04766.
- 191.Kennerley SW, Walton ME, Behrens TEJ, Buckley MJ, Rushworth MFS. Optimal decision making and the anterior cingulate cortex. Nature Neuroscience. 2006;9:940–947. doi: 10.1038/nn1724.
- 192. Hayden BY, Nair AC, McCoy AN, Platt ML. Posterior Cingulate Cortex Mediates Outcome-Contingent Allocation of Behavior. Neuron. 2008;60:19–25. doi: 10.1016/j.neuron.2008.09.012.
- 193. Phillips PE, Stuber GD, Heien ML, Wightman RM, Carelli RM. Subsecond dopamine release promotes cocaine seeking. Nature. 2003;422:614–618. doi: 10.1038/nature01476.
- 194. Wagner T, Valero-Cabre A, Pascual-Leone A. Noninvasive Human Brain Stimulation. Annual Review of Biomedical Engineering. 2007;9:527–565. doi: 10.1146/annurev.bioeng.9.061206.133100.
- 195. Fecteau S, Pascual-Leone A, Zald DH, Liguori P, Theoret H, Boggio PS, Fregni F. Activation of Prefrontal Cortex by Transcranial Direct Current Stimulation Reduces Appetite for Risk during Ambiguous Decision Making. Journal of Neuroscience. 2007;27:6212–6218. doi: 10.1523/JNEUROSCI.0314-07.2007.
- 196. Fecteau S, Knoch D, Fregni F, Sultani N, Boggio P, Pascual-Leone A. Diminishing Risk-Taking Behavior by Modulating Activity in the Prefrontal Cortex: A Direct Current Stimulation Study. Journal of Neuroscience. 2007;27:12500–12505. doi: 10.1523/JNEUROSCI.3283-07.2007.
- 197. Knoch D, Nitsche MA, Fischbacher U, Eisenegger C, Pascual-Leone A, Fehr E. Studying the Neurobiology of Social Interaction with Transcranial Direct Current Stimulation--The Example of Punishing Unfairness. Cerebral Cortex. 2008;18:1987–1990. doi: 10.1093/cercor/bhm237.
- 198. Zak PJ, Kurzban R, Matzner WT. Oxytocin is associated with human trustworthiness. Hormones and Behavior. 2005;48:522–527. doi: 10.1016/j.yhbeh.2005.07.009.
- 199. Kosfeld M, Heinrichs M, Zak PJ, Fischbacher U, Fehr E. Oxytocin increases trust in humans. Nature. 2005;435:673–676. doi: 10.1038/nature03701.
- 200. Crockett MJ, Clark L, Tabibnia G, Lieberman MD, Robbins TW. Serotonin Modulates Behavioral Reactions to Unfairness. Science. 2008;320:1739. doi: 10.1126/science.1155577.
- 201. Zack M, Poulos CX. A D2 Antagonist Enhances the Rewarding and Priming Effects of a Gambling Episode in Pathological Gamblers. Neuropsychopharmacology. 2007;32:1678–1686. doi: 10.1038/sj.npp.1301295.
- 202. Adcock RA, Thangavel A, Whitfield-Gabrieli S, Knutson B, Gabrieli JDE. Reward-Motivated Learning: Mesolimbic Activation Precedes Memory Formation. Neuron. 2006;50:507–517. doi: 10.1016/j.neuron.2006.03.036.
- 203. Fliessbach K, Weber B, Trautner P, Dohmen T, Sunde U, Elger CE, Falk A. Social Comparison Affects Reward-Related Brain Activity in the Human Ventral Striatum. Science. 2007;318:1305–1308. doi: 10.1126/science.1145876.
- 204. Camerer CF. The Potential of Neuroeconomics. Economics and Philosophy. 2008;24:369–379.
- 205. Harrison GW. Neuroeconomics: A critical reconsideration. Economics and Philosophy. 2008;24:303–344.
- 206. Gul F, Pesendorfer W. The Case for Mindless Economics. In: Caplin A, Schotter A, editors. Foundations of Positive and Normative Economics. Oxford University Press; Oxford: 2008.
- 207. Clithero JA, Tankersley D, Huettel SA. Foundations of Neuroeconomics: From Philosophy to Practice. PLoS Biology. 2008;6:2348–2353. doi: 10.1371/journal.pbio.0060298.
- 208. Carstensen LL, Mikels JA. At the intersection of emotion and cognition. Current Directions in Psychological Science. 2005;14:117.
- 209. Samanez-Larkin GR, Gibbs SEB, Khanna K, Nielsen L, Carstensen LL, Knutson B. Anticipation of monetary gain but not loss in healthy older adults. Nature Neuroscience. 2007;10:787–791. doi: 10.1038/nn1894.
- 210. Mikels JA, Reed AE. Monetary Losses Do Not Loom Large in Later Life: Age Differences in the Framing Effect. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences. 2009;64B:457–460. doi: 10.1093/geronb/gbp043.
