Published in final edited form as: Neuron. 2009 Sep 24;63(6):733–745. doi: 10.1016/j.neuron.2009.09.003. Author manuscript; available in PMC 2010 Sep 24.

The Neurobiology of Decision: Consensus and Controversy

Joseph W. Kable and Paul W. Glimcher
PMCID: PMC2765926  NIHMSID: NIHMS150345  PMID: 19778504

Abstract

We review and synthesize recent neurophysiological studies of decision-making in humans and non-human primates. From these studies, the basic outline of the neurobiological mechanism for primate choice is beginning to emerge. The identified mechanism is now known to include a multi-component valuation stage, implemented in ventromedial prefrontal cortex and associated parts of striatum, and a choice stage, implemented in lateral prefrontal and parietal areas. Neurobiological studies of decision-making are beginning to enhance our understanding of economic and social behavior, as well as our understanding of significant health disorders where people’s behavior plays a key role.

Introduction

Only seven years have passed since Neuron published a special issue entitled “Reward and Decision,” an event which signaled a surge in interest in the neural mechanisms underlying decision-making that continues to this day (Cohen and Blum, 2002). At the time, many scholars were excited that quantitative formal models of choice behavior—from economics, evolutionary biology, computer science, and mathematical psychology—were beginning to provide a fruitful framework for new and more detailed investigations of the neural mechanisms of choice. To borrow David Marr’s (1982) famous typology for computational studies of the brain, decision scholars seemed for the first time poised to investigate decision-making at the theoretical, algorithmic, and implementation levels simultaneously.

Since that time, hundreds of research papers have been published on the neural mechanisms of decision-making, at least two new societies dedicated to the topic have been formed (the Society for Neuroeconomics and the Association for NeuroPsychoEconomics), and a basic textbook for the field has been introduced (Glimcher et al., 2009). In this review, we survey some of the scientific progress that has been made in these past seven years, focusing specifically on neurophysiological studies in primates and including closely related work in humans. In an effort to achieve brevity we have been selective. Our aim is to provide one synthesis of the neurophysiology of decision-making, as we understand it. While many issues remain to be resolved, our conviction is that the available data suggest the basic outlines of the neural systems that algorithmically produce choice. Although there are certainly vigorous controversies, we believe that most scientists in the field would exhibit consensus over (at least) the terrain of contemporary debate.

The Basic Mechanism

Any neural model of decision-making needs to answer two key questions. First, how are the subjective values of the various options under consideration learned, stored, and represented? Second, how is a single highly valued action chosen from amongst the options under consideration to be implemented by the motor circuitry? Below, we review evidence that we interpret as suggesting that valuation involves the ventromedial sectors of the prefrontal cortex and associated parts of the striatum (likely as a final common path funneling information from many antecedent areas), while choice involves lateral prefrontal and parietal areas traditionally viewed as intermediate regions in the sensory-motor hierarchy. Based on these data we argue that a “basic model” for primate decision-making is emerging from recent investigations, which involves the coordinated action of these two circuits in a two-stage algorithm.

Before proceeding, we should be clear about the relationship between this neurophysiological model of choice and the very similar theoretical models in economics from which it is derived. Traditional economic models aim only to predict (or explain) an individual’s observable choices. They do not seek to explain the (putatively unobservable) process by which those choices are generated. In the famous terminology of Milton Friedman, traditional economic models are conceived of as being “as if” models (Friedman, 1953). Classic proofs in utility theory (e.g., Samuelson, 1937), for example, demonstrate that any decision maker who chooses in a mathematically consistent fashion behaves as if they had first constructed and stored a single list of all possible options ordered from best to worst, and then in a second step had selected the highest-ordered of those available options. Friedman and nearly all of the neoclassical economists who followed him were explicit that the concept of utility was not meant to apply to anything at the algorithmic or implementation levels. For Friedman, and for many contemporary economists (e.g., Gul and Pesendorfer, 2008), whether or not there are “neurophysiological” correlates of utility is, by construction, irrelevant.

Neurophysiological models, of course, aim to explain the mechanisms by which choices are generated, as well as the choices themselves. These models seek to explain both behavior and its causes, and employ constraints at the algorithmic level to validate the plausibility of behavioral predictions. One might call these models, which are concerned with algorithm and implementation as well as with behavior, “because” models. Although a fierce debate rages in economic circles about the validity and usefulness of “because” models, we take as a given for the purposes of this review that describing the mechanism of primate (both human and non-human) decision-making will yield new insights into behavior, just as studies of the primate visual system have revolutionized our understanding of perception.

Stage 1: Valuation

Ventromedial Prefrontal Cortex and Striatum: A Final Common Path

Most decision theories—from expected utility theory in economics (von Neumann and Morgenstern, 1944), to prospect theory in psychology (Kahneman and Tversky, 1979), to reinforcement learning theories in computer science (Sutton and Barto, 1998)—share a core conclusion. Decision-makers integrate the various dimensions of an option into a single measure of its idiosyncratic subjective value, and then choose the option that is most valuable. Comparisons between different kinds of options rely on this abstract measure of subjective value, a kind of “common” currency for choice. That humans can in fact compare apples to oranges when they buy fruit is evidence for this abstract common scale.

At first blush, the notion that all options can be represented on a single scale of desirability might strike some as peculiar. Intuitively, it might seem that complicated choices amongst objects with many different attributes would resist reduction to a single dimension of desirability. However, as Samuelson showed over half a century ago (Samuelson, 1937), any individual whose choices can be described as internally consistent can be perfectly modeled by algorithms that employ a single common scale of desirability. If someone selects an apple when they could have had an orange, and an orange when they could have had a pear, then (assuming they are in the same state) they should not select a pear when they could have had an apple instead. This is the core notion of consistency, and when people behave in this manner, we can model their choices as arising from a single, consistent “utility” ordering over all possible options.
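To make the notion of consistency concrete, the following sketch (ours, purely illustrative) asks whether a set of observed pairwise choices could have been generated by any single transitive preference ordering; real revealed-preference tests are more sophisticated, but the logic is the same:

```python
# Minimal consistency check (illustrative only): can some strict ranking of the
# goods explain every observed pairwise choice?
from itertools import permutations

# Each pair (a, b) records that a was chosen when b was also available.
choices = [("apple", "orange"), ("orange", "pear"), ("apple", "pear")]

def consistent_with_some_ordering(choices):
    goods = {good for pair in choices for good in pair}
    for ranking in permutations(goods):
        rank = {good: i for i, good in enumerate(ranking)}  # lower = preferred
        if all(rank[a] < rank[b] for a, b in choices):
            return True
    return False

print(consistent_with_some_ordering(choices))                        # True
print(consistent_with_some_ordering(choices + [("pear", "apple")]))  # False
```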

For traditional economic theories, however, consistent decision makers only choose as if they employed a single hidden common currency for comparing options. There was no claim when these theories were first advanced that subjective representations of value were used at the algorithmic level during choice. However, there is now growing evidence that subjective value representations do in fact play a role at the neural algorithmic level, and that these representations are encoded primarily in ventromedial prefrontal cortex and striatum.

One set of studies has documented responses in orbitofrontal cortex related to the subjective values of different rewards, or in the language of economics, “goods.” Padoa-Schioppa and Assad (2006) recorded from Area 13 of the orbitofrontal cortex while monkeys chose between pairs of juices. The amount of each type of juice offered to the animals varied from trial-to-trial, and the types of juices offered changed across sessions. From each monkey’s actual choices, they estimated a subjective value for each juice reward, as a function of juice type and quantity, that could explain those choices as arising from a common value scale. They then searched for neurons that showed evidence of this hypothesized common scale for subjective value. They found three dominant patterns of responding, which accounted for 80% of the neuronal responses in this region. First, and most importantly, they identified offer value neurons, cells with firing rates that were linearly correlated with the subjective value of one of the offered rewards, as computed from behavior. Second, they observed chosen value neurons, which tracked the subjective value of the chosen reward in a single common currency that was independent of the type of juice. Finally, they observed taste neurons, which showed a categorical response when a particular juice was chosen. All of these responses were independent of the spatial arrangement of the stimuli and of the motor response produced by the animal to make its choice. Perhaps unsurprisingly, offer value and chosen value responses were prominent right after the options were presented and again at the time of juice receipt. Taste responses, in contrast, occurred primarily after the juice was received.

Based on their timing and properties, these different responses likely play different roles during choice. Offer value signals could serve as subjective values, in a single common neuronal currency, for comparing and deciding between offers. They are exactly the kind of value representation posited by most decision theories, and they could be analogous to what economists call “utilities” (or, if they also responded to probabilistically delivered rewards, to “expected utilities”) and to what psychologists call “decision utilities.” Chosen values, by contrast, can only be calculated after a choice has been made, and thus could not be the basis for a decision. As discussed in the section below, however, learning the value of an action from experience depends on being able to compare two quantities—the forecasted value of taking an action, and the actual value experienced when that action was taken. Chosen value responses in the orbitofrontal cortex may then signal the forecasted subjective value of that choice or, for neurons active very late in the choice process, its experienced subjective value (see also Takahashi et al., 2009, for discussion of this potential function of orbitofrontal value representations).

In a follow-up study, Padoa-Schioppa and Assad (2008) extended their conclusion that these neurons provide utility-like representations. They demonstrated that orbitofrontal responses were “menu-invariant” – that activity was internally consistent in the same way that the choices of the monkeys were internally consistent. In that study, choice pairs involving three different kinds of juice were interleaved from trial-to-trial. Behaviorally, the monkeys’ choices obeyed transitivity: if the animal preferred apple juice over grape juice, and grape juice over tea, then he also preferred apple juice over tea. They observed the same three kinds of neuronal responses as in their previous study, and these responses did not depend on the other option offered on that trial. For example, a neuron that encoded the offer value of grape juice did so in the same manner whether the other option was apple juice or tea. This independence can be shown to be required of utility-like representations (Houthakker, 1950) and thus strengthens the conclusion that these neurons may encode a common currency for choice.

Importantly, Padoa-Schioppa and Assad (2008) distinguished the “menu-invariance” that they observed, where neuronal responses do not change from trial-to-trial as the other juice offered changes, from a longer-term kind of stability they refer to as “condition-invariance.” Tremblay and Schultz’s (1999) data suggest that orbitofrontal responses may not be “condition-invariant,” since these responses seem to adjust to the range of rewards when this range is stable over long blocks of trials. As Padoa-Schioppa and Assad (2008) argued, such longer-term re-scaling would serve the adaptive function of allowing orbitofrontal neurons to adjust across conditions so as to encode value across their entire dynamic range. However, in discussing this study and the ones below, we focus primarily on the question of whether neuronal responses are “menu-invariant,” i.e., whether they adjust dynamically from trial-to-trial depending on what other options are offered.

Another set of studies from two labs has documented similar responses in the striatum, the second area that appears to represent the subjective values of choice options. Lau and Glimcher (2008) recorded from the caudate nucleus while monkeys performed an oculomotor choice task. The task was based on the concurrent variable-interval schedules used to study Herrnstein’s matching law (Herrnstein, 1961; Platt and Glimcher, 1999; Sugrue et al., 2004). Behaviorally, the monkeys dynamically adjusted the proportion of their responses to each target to match the relative magnitudes of the rewards earned for looking at those targets. Recording from phasically active striatal neurons (PANs), they found three kinds of task-related responses closely related to the orbitofrontal signals of Padoa-Schioppa and Assad (2006, 2008): action value neurons, which tracked the value of one of the actions, independent of whether it was chosen; chosen value neurons, which tracked the value of a chosen action; and choice neurons, which produced a categorical response when a particular action was taken. Action value responses occurred primarily early in the trial, at the time of the monkey’s choice, while chosen value responses occurred later in the trial, near the time of reward receipt.

Samejima and colleagues (2005) provided important impetus for all of these studies when they gathered some of the first evidence that the subjective value of actions was encoded on a common scale. In that study, monkeys performed a manual choice task, turning a lever leftward or rightward to obtain rewards. Across different blocks, the probability that each turn would be rewarded with a large (as opposed to a small) magnitude of juice was changed. Recording from the putamen, they found that one-third of all modulated neurons tracked action value. This was almost exactly the same percentage of action value neurons that Lau and Glimcher (2008) later found in the oculomotor caudate. Samejima and colleagues’ design also allowed them to show that these responses did not depend on the value associated with the other action. For example, a neuron that tracked the value of a right turn would always exhibit an intermediate response when that action yielded a large reward with 50% probability, independent of whether the left turn was more (i.e., 90% probability) or less (i.e., 10% probability) valuable. This is critical because it means that the striatal signals, like the signals in orbitofrontal cortex, likely show the kind of consistent representation required for transitive behavior.
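The analysis logic behind identifying such action value neurons can be sketched as a regression of trial-by-trial spike counts on the behaviorally estimated values of the two actions. The code below is our own illustration with simulated data and hypothetical variable names, not the authors' analysis pipeline:

```python
# Illustrative classification of an "action value" neuron: regress spike counts
# on the values of both available actions and ask which value the cell tracks.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
q_left = rng.uniform(0, 1, n_trials)   # behaviorally derived value, left action
q_right = rng.uniform(0, 1, n_trials)  # behaviorally derived value, right action

# Simulated neuron tuned to the left action's value only (plus Gaussian noise).
spikes = 5 + 20 * q_left + rng.normal(0, 2, n_trials)

X = np.column_stack([np.ones(n_trials), q_left, q_right])
beta, *_ = np.linalg.lstsq(X, spikes, rcond=None)
print(beta.round(1))  # ~[5, 20, 0]: loads on q_left, not q_right
```

The critical signature reported by Samejima and colleagues corresponds to the near-zero third coefficient: the cell's response does not depend on the value of the alternative action.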

Thus, the responses in the caudate and putamen in these two studies mirror those found in orbitofrontal cortex, except anchored to the actions produced by the animals, rather than to a more abstract goods-based framework as observed in orbitofrontal cortex. One key question raised by these findings is the relationship between the action-based value responses observed in the striatum and the goods-based value responses observed in the orbitofrontal cortex. The extent to which these representations are independent has received much attention recently. For example, Horwitz and colleagues (2004) have shown that asking monkeys to choose between ‘goods’ that map arbitrarily to different actions from trial-to-trial leads almost instantaneously to activity in action-based choice circuits. Findings such as these suggest that action-based and goods-based representations of value are profoundly interconnected, although we acknowledge that this view remains controversial (Padoa-Schioppa and Assad, 2006; Padoa-Schioppa and Assad, 2008).

Human imaging studies have provided strong converging evidence that ventromedial prefrontal cortex and the striatum encode the subjective value of goods and actions. While it is difficult to determine whether the single-unit neurophysiology and fMRI studies have identified directly homologous sub-regions of these larger anatomical structures in the two different species, there is surprising agreement across the two methods concerning the larger anatomical structures important in valuation. As reviewed elsewhere, dozens of studies have demonstrated reward responses in these regions that are consistent with tracking forecasted or experienced value, which might play a role in value learning (Delgado, 2007; Knutson and Cooper, 2005; O’Doherty, 2004). Here we will focus on several recent studies that identified subjective value signals specific to the decision process. Two key design aspects that allow this identification in these particular studies are: (1) no outcomes were experienced during the experiment, so that decision-related signals could be separated from learning-related signals as much as possible, and (2) there was a behavioral measure of the subject’s preference, which allowed subjective value to be distinguished from the objective characteristics of the options.

Plassmann and colleagues (2007) scanned hungry subjects bidding on various snack foods. They used an auction procedure in which subjects were strongly incentivized to report what each snack food was actually worth to them. They found that BOLD activity in medial orbitofrontal cortex was correlated with each subject’s subjective valuation of each item. Hare and colleagues have now replicated this finding twice, once in a different task where the subjective value of the good could be dissociated from other possible signals (Hare et al., 2008), and again in a set of dieting subjects where the subjective value of the snack foods was affected by both taste and health concerns (Hare et al., 2009).

In a related study, Kable and Glimcher (2007) examined participants choosing between immediate and delayed monetary rewards. The immediate reward was fixed, while both the magnitude and receipt time of the delayed reward varied across trials. From each subject’s choices, an idiosyncratic discount function was estimated that described how the subjective value of money declined with delay for that individual. In medial prefrontal cortex and ventral striatum (among other regions), BOLD activity was correlated with the subjective value of the delayed reward as it varied across trials. Furthermore, across subjects, the neurometric discount functions describing how neural activity in these regions declined with delay matched the psychometric discount functions describing how subjective value declined with delay. In other words, for more impulsive subjects, neural activity in these regions decreased steeply as delay increased, while for more patient subjects this decline was less pronounced. These results suggest that neural activity in these regions encodes the subjective value of both immediate and delayed rewards in a common neural currency that takes into account the time at which a reward will occur.
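For concreteness, discount functions in this literature are standardly written in hyperbolic form, which to our reading is the form fit in that study:

\[ SV = \frac{A}{1 + kD}, \]

where \(A\) is the amount of the delayed reward, \(D\) is its delay, and \(k\) is the subject-specific discount rate. The psychometric-neurometric match then amounts to the observation that the \(k\) estimated from choices agrees, subject by subject, with the \(k\) estimated from BOLD activity.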

Two recent studies have focused on decisions involving monetary gambles. These studies have demonstrated that modulation of a common value signal could also account for loss aversion and ambiguity aversion, two more recently identified choice-related behaviors which suggest important refinements to theoretical models of subjective value encoding (for a review of these issues see Fox and Poldrack, 2009). Tom and colleagues (2007) scanned subjects deciding whether to accept or reject monetary lotteries in which there was a 50–50 chance of gaining or losing money. They found that BOLD activity in ventromedial prefrontal cortex and striatum increased with the amount of the gain, and decreased with the amount of the loss. Furthermore, the size of the loss effect relative to the gain effect was correlated with the degree to which the potential loss affected the person’s choice more than the potential gain. Activity decreased faster in response to increasing losses for more loss-averse subjects. Levy and colleagues (2007) examined subjects choosing between a fixed certain amount and a gamble that was either risky (known probabilities) or ambiguous (unknown probabilities). They found that activity in ventromedial prefrontal cortex and striatum was correlated with the subjective value of both risky and ambiguous options.

Midbrain Dopamine: A Mechanism for Learning Subjective Value

The previous section reviewed evidence that ventromedial prefrontal cortex and striatum encode the subjective value of different goods or actions during decision-making in a way that could guide choice. But how do these subjective value signals arise? One of the most critical sources of value information is undoubtedly past experience. Indeed, in physiological experiments, animal subjects always have to learn the value of different actions over the course of the experiment—for these subjects the consequences of each action cannot be communicated linguistically. Although there are alternative viewpoints (Dommett et al., 2005; Redgrave and Gurney, 2006), unusually solid evidence now indicates that dopaminergic neurons in the midbrain encode a teaching signal that can be used to learn the subjective value of actions (for a detailed review, including a discussion of how nearly all of the findings often presented as discrepant with early versions of the dopaminergic teaching signal hypothesis have been reconciled with contemporary versions of the theory, see Niv and Montague, 2009). Indeed, these kinds of signals can be shown to be sufficient for learning the values of different actions from experience. Since these same dopaminergic neurons project primarily to prefrontal and striatal regions (Haber, 2003), it seems likely that these neurons play a critical role in subjective value learning.

The computational framework for these investigations of dopamine and learning comes from reinforcement learning theories developed in computer science and psychology over the past two decades (Niv and Montague, 2009). While several variants of these theories exist, in all of these models subjective values are learned through iterative updating based on experience. The theories rest on the idea that each time a subject experiences the outcome of her choice, an updated value estimate is calculated from the old value estimate and a reward prediction error—the difference between the experienced outcome of an action and the outcome that was forecast. This reward prediction error is scaled by a learning rate, which determines the weight given to recent versus remote experience.
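Written out, the core update shared by these models is

\[ V_{t+1} = V_t + \alpha\,\delta_t, \qquad \delta_t = r_t - V_t, \]

where \(V_t\) is the current value estimate, \(r_t\) is the experienced outcome, \(\delta_t\) is the reward prediction error, and \(\alpha\) is the learning rate.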

Pioneering studies by Schultz and colleagues (1997) provided the initial evidence that dopaminergic neurons encode a reward prediction error signal of the kind proposed by a class of theories called temporal-difference learning (TD-models, Sutton and Barto, 1998). These studies demonstrated that during conditioning tasks, dopaminergic neurons: (1) responded to the receipt of unexpected rewards; (2) responded to the first reliable predictor of reward after conditioning; (3) did not respond to the receipt of fully predicted rewards; and (4) showed a decrease in firing when a predicted reward was omitted. Montague, Dayan and Sejnowski (1996) were the first to propose that this pattern of results could be completely explained if the firing of dopamine neurons encoded a reward prediction error of the type required by TD-class models. Subsequent studies, examining different Pavlovian conditioning paradigms, demonstrated that the qualitative responses of dopaminergic neurons were entirely consistent with this hypothesis (Tobler et al., 2003; Waelti et al., 2001).
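A minimal simulation makes this correspondence explicit. The sketch below (our illustration, not the published models' code) implements tabular TD(0) learning over the time steps of a simple conditioning trial and reproduces the qualitative patterns listed above, as the printed prediction errors show:

```python
# TD(0) on a chain of within-trial states: cue at t=0, reward at t=T-1.
import numpy as np

T = 5            # within-trial time steps
alpha = 0.2      # learning rate
V = np.zeros(T + 1)  # V[t]: value of state t; V[T] is terminal and stays 0

def run_trial(V, rewarded=True):
    """One trial; returns the TD error at cue onset and at each later step."""
    deltas = np.zeros(T + 1)
    deltas[0] = V[0] - 0.0                     # cue onset is itself unpredicted
    for t in range(T):
        r = 1.0 if (rewarded and t == T - 1) else 0.0
        deltas[t + 1] = r + V[t + 1] - V[t]    # TD error (no discounting)
        V[t] += alpha * deltas[t + 1]
    return deltas

for _ in range(500):
    run_trial(V)  # conditioning: the cue reliably predicts reward

print(run_trial(V).round(2))                  # error at cue, ~0 at predicted reward
print(run_trial(V, rewarded=False).round(2))  # negative error when reward omitted
```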

Recent studies have provided more quantitative tests of the reward prediction error hypothesis. Bayer and Glimcher (2005) recorded from dopaminergic neurons during an oculomotor task, in which the reward received for the same movement varied in a continuous manner from trial-to-trial. As is required by theory, the response on the current trial was a function of an exponentially-weighted sum of previous rewards obtained by the monkey. Thus, dopaminergic firing rates were linearly related to a model-derived reward prediction error. Interestingly, though, this relationship broke down for the most negative prediction error signals, although the implications of this last finding have been controversial.
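The quantitative version of this test follows directly from iterating the delta-rule update above: the forecast against which the current reward is compared is an exponentially weighted average of past rewards, so that

\[ \delta_t \approx r_t - \alpha \sum_{i=1}^{\infty} (1-\alpha)^{i-1}\, r_{t-i}, \]

which is the linear relationship between dopaminergic firing rates and the weighted reward history that Bayer and Glimcher reported.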

Additional studies have demonstrated that, when conditioned cues predict rewards with different magnitudes or probabilities, the cue-elicited dopaminergic response scales with magnitude or probability, as expected if it represents a cue-elicited prediction error (Fiorillo et al., 2003; Tobler et al., 2005). In a similar manner, if different cues predict rewards after different delays, the cue-elicited response decreases as the delay-to-reward increases, consistent with a prediction that incorporates discounting of future rewards (Fiorillo et al., 2008; Kobayashi and Schultz, 2008; Roesch et al., 2007).

Until recently, direct evidence regarding the activity of dopaminergic neurons in humans has been scant. Imaging the midbrain with fMRI is difficult for several technical reasons, and the reward prediction error signals initially identified with fMRI were located in the presumed striatal targets of the dopaminergic neurons (McClure et al., 2003; O’Doherty et al., 2003). However, D’Ardenne and colleagues (2008) recently reported BOLD prediction error signals in the ventral tegmental area using fMRI. They used a combination of small voxel sizes, cardiac gating, and a specialized normalization procedure to detect these signals. Across two paradigms using primary and secondary reinforcers, they found that BOLD activity in the VTA was significantly correlated with positive, but not negative, reward prediction errors.

Zaghloul and colleagues (2009) reported the first electrophysiological recordings in human substantia nigra during learning. These investigators recorded neuronal activity while individuals with Parkinson’s disease underwent surgery to place electrodes for deep brain stimulation therapy. Subjects had to learn which of two options provided a greater probability of a hypothetical monetary reward, and their choices were fit with a reward prediction model. In the subset of neurons that were putatively dopaminergic, they found an increase in firing rate for unexpected positive outcomes, relative to unexpected negative outcomes, while the firing rates for expected outcomes did not differ. Such an encoding of unexpected rewards is again consistent with the reward prediction error hypothesis.

Pessiglione and colleagues (2006) demonstrated a causal role for dopaminergic signaling in both learning and striatal BOLD prediction error signals. During an instrumental learning paradigm, they tested subjects who had received L-DOPA (a dopamine precursor), haloperidol (a dopamine receptor antagonist), or placebo. Consistent with other findings from Parkinson’s patients (Frank et al., 2004), L-DOPA (compared to haloperidol) improved learning to select a more rewarding option, but did not affect learning to avoid a more punishing option. In addition, the BOLD reward prediction error in the striatum was larger for the L-DOPA group than for the haloperidol group, and differences in this response, when incorporated into a reinforcement learning model, could account for differences in the speed of learning across groups.
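The modeling approach in studies of this kind can be sketched as delta-rule learning of action values combined with a softmax choice rule; in such a model, a larger effective learning rate, like a larger striatal prediction error response, produces faster behavioral learning. The code below is our own hypothetical illustration, not the authors' model, and all parameter values are invented:

```python
# Q-learning with softmax choice on a two-armed bandit: a larger learning rate
# (cf. the L-DOPA group) yields faster acquisition of the better option.
import numpy as np

rng = np.random.default_rng(1)

def simulate(alpha, beta=5.0, n_trials=100, p_reward=(0.8, 0.2)):
    """Returns the fraction of trials on which the better option was chosen."""
    Q = np.zeros(2)
    n_best = 0
    for _ in range(n_trials):
        p = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax choice rule
        a = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[a])
        Q[a] += alpha * (r - Q[a])                     # prediction error update
        n_best += int(a == 0)
    return n_best / n_trials

print(simulate(alpha=0.4))  # faster learning
print(simulate(alpha=0.1))  # slower learning
```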

Stage 2: Choice

Lateral Prefrontal and Parietal Cortex: Choosing Based on Value

Learning and encoding subjective value in a common currency is not sufficient for decision-making—one action still needs to be chosen from amongst the set of alternatives and passed to the motor system for implementation. What is the process by which a highly valued option in a choice set is selected and implemented?

While we acknowledge that other proposals have been made regarding this process, we believe that the bulk of the available evidence implicates (at a minimum) the lateral prefrontal and parietal cortex in the process of selecting and implementing choices from amongst any set of available options. Some of the best evidence has come from studies of a well-understood model decision-making system: The visuo-saccadic system of the monkey. For largely technical reasons, the saccadic-control system has been intensively studied over the past three decades as a model for understanding sensory-motor control in general (Andersen and Buneo, 2002; Colby and Goldberg, 1999). The same has been true for studies of choice. The Lateral Intraparietal area (LIP), the frontal eye fields (FEF) and the superior colliculus (SC) comprise the core of a heavily interconnected network that plays a critical role in visuo-saccadic decision-making (Glimcher, 2003; Gold and Shadlen, 2007). The available data suggest a parallel role in non-saccadic decision-making for the motor cortex, the premotor cortex, the supplementary motor area and the areas in the parietal cortex adjacent to LIP.

At a theoretical level, the process of choice must involve a mechanism for comparing two or more options and identifying the most valuable of those options. Both behavioral evidence and theoretical models from economics make it clear that this process is also somewhat stochastic (McFadden, 1974). If two options have very similar subjective values, the less desirable option may be occasionally selected. Indeed, the probability that these “errors” will occur is a smooth function of the similarity in subjective value of the options under consideration. How then is this implemented in the brain?
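The classic formalization of this stochasticity is McFadden's logit (softmax) choice rule, under which the probability of choosing option \(i\) with subjective value \(v_i\) is

\[ P(i) = \frac{e^{v_i/\tau}}{\sum_j e^{v_j/\tau}}, \]

so that for two options the "error" rate falls smoothly as the value difference grows, with the temperature \(\tau\) setting the overall level of stochasticity.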

Any system that performs such a comparison must be able to represent the values of each option before a choice is made, and must then effectively pass information about the selected option, but not the unselected options, to downstream circuits. In the saccadic system, some of the first evidence for such a circuit came from the work of Glimcher and Sparks (1992), who essentially replicated in the superior colliculus Tanji and Evarts’ (1976) classic studies of motor area M1. The laminar structure of the superior colliculus employs a topographic map to represent the amplitude and direction of all possible saccades. Glimcher and Sparks showed that if two saccadic targets of roughly equal subjective value were presented to a monkey, then the two locations on this map corresponding to the two saccades became weakly active. If one of these targets was suddenly identified as having the higher value, this led almost immediately to a high-frequency burst of activity at the site associated with that movement and a concomitant suppression of activity at the other site. In light of preceding work (Van Gisbergen et al., 1981), this led to the suggestion that a winner-take-all computation occurred in the colliculus that effectively selected one movement from the two options for execution.

Subsequent studies (Basso and Wurtz, 1998; Dorris and Munoz, 1998) established that activity at the two candidate movement sites, during the period before the burst, was graded. If the probability that a saccade would yield a reward was increased, firing rates associated with that saccade increased, and if the probability that a saccade would yield a reward was decreased, then the firing rate was decreased. These observations led Platt and Glimcher (1999) to test the hypothesis, just upstream of the colliculus in area LIP, that these pre-movement signals encoded the subjective values of movements. To test that hypothesis, they systematically manipulated either the probability that a given saccadic target would yield a reward or the magnitude of reward yielded by that target. They found that firing rates in area LIP before the collicular burst occurred were a nearly linear function of both magnitude and probability of reward.
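A compact way to summarize this finding (our gloss, not the authors' exact specification) is that pre-burst LIP firing rates approximated a linear function of the determinants of expected reward:

\[ \mathrm{FR} \approx \beta_0 + \beta_1\, p + \beta_2\, m, \]

where \(p\) is the probability that the target will yield a reward and \(m\) is the magnitude of that reward.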

This naturally led to the suggestion that the fronto-parietal network of saccade control areas formed, in essence, a set of topographic maps of saccade value. Each location on these maps encodes a saccade of a particular amplitude and direction, and it was suggested that firing rates on these maps encoded the desirability of each of those saccades. The process of choice, then, could be reduced to a competitive neuronal mechanism that identified the saccade associated with the highest level of neuronal activity. (In fact, studies in brain slices have largely confirmed the existence of such a mechanism in the colliculus; see, for example, Isa et al., 2004, or Lee and Hall, 2006.) Many subsequent studies have bolstered this conclusion, demonstrating that various manipulations that increase (or decrease) the subjective value of a given saccade also increase (or decrease) the firing rate of neurons within the frontal-parietal maps associated with that saccade (Dorris and Glimcher, 2004; Janssen and Shadlen, 2005; Kim et al., 2008; Leon and Shadlen, 1999; Leon and Shadlen, 2003; Sugrue et al., 2004; Wallis and Miller, 2003; Yang and Shadlen, 2007). Some of these studies have discovered one notable caveat to this conclusion, though. Firing rates in these areas encode the subjective value of a particular saccade relative to the values of all other saccades under consideration (Dorris and Glimcher, 2004; Sugrue et al., 2004). Thus, unlike firing rates in orbitofrontal cortex and striatum, firing rates in LIP (and presumably other frontal-parietal regions involved in choice rather than valuation) are not “menu-invariant.” This suggests an important distinction between activity in the parietal cortex and activity in the orbitofrontal cortex and striatum. Orbitofrontal and striatal neurons appear to encode absolute (and hence transitive) subjective values. Parietal neurons, presumably using a normalization mechanism like the one studied in visual cortex (Heeger, 1992), rescale these absolute values so as to maximize the differences between the available options before choice is attempted.
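By analogy to the divisive normalization model of visual cortex (Heeger, 1992), one natural form for such rescaling, offered here as a hypothesis rather than an established parietal circuit equation, is

\[ R_i = R_{\max}\, \frac{v_i}{\sigma + \sum_j v_j}, \]

where \(v_i\) is the absolute subjective value of saccade \(i\), the sum runs over all options under consideration, and \(\sigma\) is a saturation constant. Under this scheme, adding or improving alternatives suppresses the response to any given option, which is exactly the menu dependence observed in LIP.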

At the same time that these studies were underway, a second line of evidence also suggested that the fronto-parietal networks participate in decision-making, but in this case decision-making of a slightly different kind. In these studies of perceptual decision-making, an ambiguous visual stimulus was used to indicate which of two saccades would yield a reward, and the monkey was reinforced if he made the indicated saccade. Shadlen and Newsome (Shadlen et al., 1996; Shadlen and Newsome, 2001) found that the activity of LIP neurons early in this decision-making process carried stochastic information about the likelihood that a given movement would yield a reward. Subsequent studies have revealed the dynamics of this process. During these kinds of perceptual decision-making tasks, the firing rates of LIP neurons increase as evidence accrues that a saccade into the response field will be rewarded. However, this increase is bounded: once firing rates cross a maximal threshold, a saccade is initiated (Churchland et al., 2008; Roitman and Shadlen, 2002). Closely related studies in the frontal eye fields lead to similar conclusions (Gold and Shadlen, 2000; Kim and Shadlen, 1999). Thus, both firing rates in LIP and FEF and behavioral responses in this kind of task can be captured by a race-to-barrier diffusion model (Ratcliff et al., 1999).

Several lines of evidence now suggest that this threshold represents a value (or evidence-based) threshold for movement selection (Kiani et al., 2008). When the value of any saccade crosses that pre-set minimum, the saccade is immediately initiated. Importantly, in the models derived from these data, the intrinsic stochasticity of the circuit gives rise to the stochasticity observed in actual choice behavior.

These two lines of evidence, one associated with the work of Shadlen, Newsome and their colleagues and the other associated with our research groups, describe two classes of models for understanding the choice mechanism. In reaction-time tasks, a race-to-barrier model describes a situation in which a choice is made as soon as the value of any action exceeds a pre-set threshold. In non-reaction-time, economics-style tasks, a winner-take-all model describes the process of selecting the option having the highest value from a set of candidates. Wang and colleagues (Lo and Wang, 2006; Wang, 2008; Wong and Wang, 2006) have recently shown that a single collicular or parietal circuit can be designed that performs both winner-take-all and thresholding operations in a stochastic fashion that depends on the inhibitory tone of the network. Their models suggest that the same mechanism can perform two kinds of choice: a slow competitive winner-take-all process that can identify the best of the available options, and a rapid thresholding process that selects a single movement once some pre-set threshold of value is crossed. LIP, the superior colliculus and the frontal eye fields therefore seem to be part of a circuit that receives as input the subjective values of different saccades and then, representing these as relative values, stochastically selects from all possible saccades a single one for implementation.
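The flavor of these models can be conveyed in a few lines of simulation. The sketch below is ours (published implementations, such as Lo and Wang's, use biophysically detailed spiking networks): two noisy accumulators drift at rates set by the options' values, and the first to reach threshold determines both the choice and the reaction time, so that the circuit's intrinsic noise yields stochastic choice with errors that become more common as the two values converge.

```python
# Race-to-barrier sketch: noisy accumulation to a fixed threshold.
import numpy as np

rng = np.random.default_rng(2)

def race(v1, v2, threshold=1.0, noise=0.1, dt=0.001):
    """Returns (chosen option, reaction time in seconds)."""
    x1 = x2 = 0.0
    t = 0.0
    while x1 < threshold and x2 < threshold:
        x1 += v1 * dt + noise * np.sqrt(dt) * rng.normal()
        x2 += v2 * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x1 >= threshold else 2), t

choices = [race(v1=1.2, v2=1.0)[0] for _ in range(200)]
print(np.mean(np.array(choices) == 1))  # option 1 wins most, but not all, races
```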

Open Questions and Current Controversies

Above, we have outlined our conclusion that, based on the data available today, a minimal model of primate decision-making includes valuation circuits in ventromedial prefrontal cortex and striatum and choice circuits in lateral prefrontal and parietal cortex. However, there are obviously many open questions about the details of this mechanism, as well as many vigorous debates that go beyond the general outline just presented. With regard to valuation, some of the important open questions concern what all of the many inputs to the final common path are (e.g., Hare et al., 2009), how the functions of ventromedial prefrontal cortex and striatum might differ (e.g., Hare et al., 2008), and how best to define and delineate more specific roles for subcomponents of these large multi-part structures. In terms of value learning, current work focuses on what precise algorithmic model of reinforcement learning best describes the dopaminergic signal (e.g., Morris et al., 2006), how sophisticated the expectation of future rewards intrinsic to this signal is, whether these signals adapt to volatility in the environment as necessary for optimal learning (e.g., Behrens et al., 2007), and how outcomes that are worse than expected are encoded (e.g., Bayer et al., 2007; Daw et al., 2002). In terms of choice, some of the important open questions concern what modulates the state of the choice network between the thresholding and winner-take-all mechanisms, what determines which particular options are passed to the choice circuitry, whether there are mechanisms for constructing and editing choice sets, how the time accorded to a decision is controlled, and whether this allocation adjusts in response to changes in the cost of “errors.”

While space does not permit us to review all the current work that addresses these questions, we do want to elaborate on one of these questions, which is perhaps the most hotly debated in the field at present. This is whether there are multiple valuation sub-systems, and if so, how these systems are defined, how independent or interactive they are, and how their valuations are combined into a final valuation that determines choice. Critically, this debate is not about whether different regions encode subjective value in a different manner—for example, the menu-invariant responses in orbitofrontal cortex compared to the relative value responses in LIP. Rather, the question is whether different systems encode different and inconsistent values for the same actions, such that these different valuations would lead to diverging conclusions about the best action to take. Many proposals along these lines have been made (Balleine et al., 2009; Balleine and Dickinson, 1998; Bossaerts et al., 2009; Daw et al., 2005; Dayan and Balleine, 2002; Rangel et al., 2008). One set builds upon a distinction made in the psychological literature between: Pavlovian systems, which learn a relationship between stimuli and outcomes and activate simple approach and withdrawal responses; habitual systems, which learn a relationship between stimuli and responses and therefore do not adjust quickly to changes in contingency or devaluation of rewards; and goal-directed systems, which learn a relationship between responses and outcomes and therefore do adjust quickly to changes in contingency or devaluation of rewards. Another related set builds upon a distinction between model-free reinforcement learning algorithms, which make minimal assumptions and work on “cached” action values, and more sophisticated model-based algorithms, which use more detailed information about the structure of the environment and can therefore adjust more quickly to changes in the environment. These proposals usually associate the different systems with different regions of the frontal cortex and striatum (Balleine et al., 2009), and raise the additional question of how these multiple valuations interact or combine to control behavior (Daw et al., 2005). It is important to note that most of the evidence we have reviewed here concerns decision-making by what would be characterized in much of this literature as the “goal-directed” system. This highlights the fact that our understanding of valuation circuitry is in its infancy. A critical question going forward is how multiple valuation circuits are integrated and how we can best account for the functional role of different sub-regions of ventromedial prefrontal cortex and striatum in valuation. While no one today knows how this debate will finally be resolved, we can identify the resolution of these issues as critical to the forward progress of decision studies.

Another significant area of research that we have neglected in this review concerns the function of dorsomedial prefrontal and medial parietal circuits in decision-making. Several recent reviews have focused specifically on the role of these structures in valuation and choice (Lee, 2008; Platt and Huettel, 2008; Rushworth and Behrens, 2008). Some of the most recently identified and interesting electrophysiological signals have been found in dorsal anterior cingulate (Hayden et al., 2009; Matsumoto et al., 2007; Quilodran et al., 2008; Seo and Lee, 2007, 2009) and the posterior cingulate (Hayden et al., 2008; McCoy and Platt, 2005). Decision-related signals in these areas have been found to occur after a choice has been made, in response to feedback about the result of that choice. One key function of these regions may therefore be in the monitoring of choice outcomes, and the subsequent adjustment of both choice behavior and sensory acuity in response to this monitoring. However, there is also evidence suggesting that parts of the anterior cingulate may encode action-based subjective values, in an analogous manner to the orbitofrontal encoding of goods-based subjective values (Rudebeck et al., 2008). This reiterates the need for work delineating the specific functional roles of different parts of ventromedial prefrontal cortex in valuation and decision-making.

Finally, a major topic area that we have not explicitly discussed in detail concerns decisions where multiple agents are involved or affected. These are situations that can be modeled using the formal frameworks of game theory and behavioral game theory. Many, and perhaps most, of the human neuroimaging studies of decision-making have involved social interactions and games, and several reviews have been dedicated to these studies (Fehr and Camerer, 2007; Lee, 2008; Montague and Lohrenz, 2007; Singer and Fehr, 2005). For our purposes, it is important to note that the same neural mechanisms we have described above are now known to operate during social decisions (Barraclough et al., 2004; de Quervain et al., 2004; Dorris and Glimcher, 2004; Hampton et al., 2008; Harbaugh et al., 2007; King-Casas et al., 2005; Moll et al., 2006). Of course, these decisions also require additional processes, such as the ability to model other people’s minds and make inferences about their beliefs (Adolphs, 2003; Saxe et al., 2004), and many ongoing investigations are aimed at understanding the role of particular brain regions in these social functions during decision-making (Hampton et al., 2008; Tankersley et al., 2007; Tomlin et al., 2006).

Conclusion

Neurophysiological investigations over the last seven years have begun to solidify the basic outlines of a neural mechanism for choice. This breakneck pace of discovery makes us optimistic that the field will soon be able to resolve many of the current controversies, and that it will also expand to address some of the questions that are now completely open.

Future neurophysiological models of decision-making should prove relevant beyond the domain of basic neuroscience. Since neurophysiological models share with economic ones the goal of explaining choices, ultimately there should prove to be links between concepts in the two kinds of models. For example, the neural noise in different brain circuits might correspond to the different kinds of stochasticity that are posited in different classes of economic choice models (McFadden, 1974; Selten, 1975). Similarly, rewards are experienced through sensory systems, with transducers that have both a shifting reference point and a finite dynamic range. These psychophysically characterized properties of sensory systems might contribute to both the decreasing sensitivity and the reference dependence of valuations, two key aspects of recent economic models (Koszegi and Rabin, 2006). Moving forward, we think the greatest promise lies in building models of choice that incorporate constraints from both the theoretical and mechanistic levels of analysis.

Ultimately, such models should prove useful for questions of human health and disease. There are already several elegant examples of how an understanding of the neurobiological mechanisms of decision-making has provided a foundation for understanding aberrant decision-making in addiction, psychiatric disorders, autism and Parkinson’s disease (Bernheim and Rangel, 2004; Chiu et al., 2008a; Chiu et al., 2008b; Frank et al., 2004; King-Casas et al., 2008; Redish, 2004). In the future, we feel confident that an understanding of the neurobiology of decision-making will also point the way towards improved treatments for these and other diseases in which people’s choices play a key role.

Figure 1. Valuation circuitry. Diagram of a macaque brain, highlighting in black the regions discussed as playing a role in valuation. Other regions are labeled in grey.

Figure 2. An example orbitofrontal neuron that encodes offer value, in a menu-invariant and therefore transitive manner. (a) In red is the firing rate of the neuron (± s.e.m.), as a function of the magnitude of the two juices offered, for three different choice pairs. In black is the percentage of time the monkey chose the first offer. (b) Replots firing rates as a function of the offer value of juice C, demonstrating that this neuron encodes this value in a common currency in a manner that is independent of the other reward offered. The different symbols and colors refer to data from the three different juice pairs, and each symbol represents one trial type. Reprinted with permission from Padoa-Schioppa and Assad (2008).

Figure 3. Two example striatal neurons that encode action value. (a) Caudate neuron that fires more when a contralateral saccade is more valuable (blue) compared to less valuable (yellow), independently of which saccade the animal eventually chooses. c denotes the average onset of the saccade cue. Reprinted with permission from Lau and Glimcher (2008). (b) Putamen neuron that encodes the value of a rightward arm movement (QR), independent of the value of a leftward arm movement (QL). Reprinted with permission from Samejima et al. (2005).

Figure 4. Orbitofrontal cortex encodes the subjective value of food rewards in humans. (a) Hungry subjects bid on snack foods, which were the only items they could eat for 30 minutes after the experiment. At the time of the decision, medial orbitofrontal cortex (b) tracked the subjective value that subjects placed on each food item. Activity here increased as the subject’s willingness-to-pay for the item increased (c). Reprinted with permission from Plassmann et al. (2007).

Figure 5. Match between psychometric and neurometric estimates of subjective value during intertemporal choice. (a) Regions-of-interest are shown for one subject, in striatum, medial prefrontal cortex and posterior cingulate cortex. (b) Activity in these ROIs (black) decreases as the delay to a reward increases, in a similar manner to the way that subjective value estimated behaviorally (red) decreases as a function of delay. This decline in value can be captured by estimating a discount rate (k). (c) Comparison between discount rates estimated separately from the behavioral and neural data across all subjects, showing that on average there is a psychometric-neurometric match. Reprinted with permission from Kable and Glimcher (2007).

Figure 6. Dopaminergic responses in monkeys and humans. (a) An example dopamine neuron recorded in a monkey, which responds more when the reward received was better than expected. (b) Firing rates of dopaminergic neurons track positive reward prediction errors. (c) Population average of dopaminergic responses (n=15) recorded in humans during deep brain stimulation (DBS) surgery for Parkinson’s disease, showing increased firing in response to unexpected gains. The red line indicates feedback onset. (d) Firing rates of dopaminergic neurons depend on the size and valence of the difference between the received and expected reward. All error bars represent standard errors. Panels (a–b) reprinted with permission from Bayer and Glimcher (2005), and panels (c–d) reprinted with permission from Zaghloul et al. (2009).

Figure 7. Choice circuitry for saccadic decision-making. Diagram of a macaque brain, highlighting in black the regions discussed as playing a role in choice. Other regions are labeled in grey.

Figure 8. LIP firing rates are greater when the larger magnitude reward is in the response field (n=30) (a), but are not affected when the magnitudes of all rewards are doubled (n=22) (b). Adapted with permission from Dorris and Glimcher (2004).

Figure 9. Schematic of the symmetric random walk (a) and race models (b) of choice and reaction time. (c) Schematic neural architecture and simulations of a computational model that replicates activation dynamics in LIP and superior colliculus during choice. Panels (a) and (b) reprinted with permission from Gold and Shadlen (2007). Panel (c) adapted with permission from Lo and Wang (2006).


References

  1. Adolphs R. Cognitive neuroscience of human social behaviour. Nat Rev Neurosci. 2003;4:165–178. doi: 10.1038/nrn1056. [DOI] [PubMed] [Google Scholar]
  2. Andersen RA, Buneo CA. Intentional maps in posterior parietal cortex. Annu Rev Neurosci. 2002;25:189–220. doi: 10.1146/annurev.neuro.25.112701.142922. [DOI] [PubMed] [Google Scholar]
  3. Balleine BW, Daw ND, O’Doherty JP. Multiple forms of value learning and the function of dopamine. In: Glimcher PW, Camerer CF, Fehr E, Poldrack RA, editors. Neuroeconomics: Decision Making and the Brain. New York, NY: Academic Press; 2009. [Google Scholar]
  4. Balleine BW, Dickinson A. Goal-directed instrumental action: Contingency and incentive learning and their cortical substrates. Neuropharmacology. 1998;37:407–419. doi: 10.1016/s0028-3908(98)00033-1. [DOI] [PubMed] [Google Scholar]
5. Barraclough DJ, Conroy ML, Lee D. Prefrontal cortex and decision making in a mixed-strategy game. Nat Neurosci. 2004;7:404–410. doi: 10.1038/nn1209.
6. Basso MA, Wurtz RH. Modulation of neuronal activity in superior colliculus by changes in target probability. J Neurosci. 1998;18:7519–7534. doi: 10.1523/JNEUROSCI.18-18-07519.1998.
7. Bayer HM, Glimcher PW. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron. 2005;47:129–141. doi: 10.1016/j.neuron.2005.05.020.
8. Bayer HM, Lau B, Glimcher PW. Statistics of midbrain dopamine neuron spike trains in the awake primate. J Neurophysiol. 2007;98:1428–1439. doi: 10.1152/jn.01140.2006.
9. Behrens TEJ, Woolrich MW, Walton ME, Rushworth MFS. Learning the value of information in an uncertain world. Nat Neurosci. 2007;10:1214–1221. doi: 10.1038/nn1954.
10. Bernheim BD, Rangel A. Addiction and cue-triggered decision processes. American Economic Review. 2004;94:1558–1590. doi: 10.1257/0002828043052222.
11. Bossaerts P, Preuschoff K, Hsu M. The neurobiological foundations of valuation in human decision-making under uncertainty. In: Glimcher PW, Camerer CF, Fehr E, Poldrack RA, editors. Neuroeconomics: Decision Making and the Brain. New York, NY: Academic Press; 2009.
12. Chiu PH, Kayali MA, Kishida KT, Tomlin D, Klinger LG, Klinger MR, Montague PR. Self responses along cingulate cortex reveal quantitative neural phenotype for high-functioning autism. Neuron. 2008a;57:463–473. doi: 10.1016/j.neuron.2007.12.020.
13. Chiu PH, Lohrenz TM, Montague PR. Smokers’ brains compute, but ignore, a fictive error signal in a sequential investment task. Nat Neurosci. 2008b;11:514–520. doi: 10.1038/nn2067.
14. Churchland AK, Kiani R, Shadlen MN. Decision-making with multiple alternatives. Nat Neurosci. 2008;11:693–702. doi: 10.1038/nn.2123.
15. Cohen JD, Blum KI. Reward and decision. Neuron. 2002;36:193–198. doi: 10.1016/s0896-6273(02)00973-x.
16. Colby CL, Goldberg ME. Space and attention in parietal cortex. Annu Rev Neurosci. 1999;22:319–349. doi: 10.1146/annurev.neuro.22.1.319.
17. D’Ardenne K, McClure SM, Nystrom LE, Cohen JD. BOLD responses reflecting dopaminergic signals in the human ventral tegmental area. Science. 2008;319:1264–1267. doi: 10.1126/science.1150605.
18. Daw ND, Kakade S, Dayan P. Opponent interactions between serotonin and dopamine. Neural Netw. 2002;15:617–634. doi: 10.1016/s0893-6080(02)00052-7.
19. Daw ND, Niv Y, Dayan P. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat Neurosci. 2005;8:1704–1711. doi: 10.1038/nn1560.
20. Dayan P, Balleine BW. Reward, motivation, and reinforcement learning. Neuron. 2002;36:285–298. doi: 10.1016/s0896-6273(02)00963-7.
21. de Quervain DJ, Fischbacher U, Treyer V, Schellhammer M, Schnyder U, Buck A, Fehr E. The neural basis of altruistic punishment. Science. 2004;305:1254–1258. doi: 10.1126/science.1100735.
22. Delgado MR. Reward-related responses in the human striatum. Ann N Y Acad Sci. 2007;1104:70–88. doi: 10.1196/annals.1390.002.
23. Dommett E, Coizet V, Blaha CD, Martindale J, Lefebvre V, Walton N, Mayhew JE, Overton PG, Redgrave P. How visual stimuli activate dopaminergic neurons at short latency. Science. 2005;307:1476–1479. doi: 10.1126/science.1107026.
24. Dorris MC, Glimcher PW. Activity in posterior parietal cortex is correlated with the relative subjective desirability of action. Neuron. 2004;44:365–378. doi: 10.1016/j.neuron.2004.09.009.
25. Dorris MC, Munoz DP. Saccadic probability influences motor preparation signals and time to saccadic initiation. J Neurosci. 1998;18:7015–7026. doi: 10.1523/JNEUROSCI.18-17-07015.1998.
26. Fehr E, Camerer CF. Social neuroeconomics: The neural circuitry of social preferences. Trends Cogn Sci. 2007;11:419–427. doi: 10.1016/j.tics.2007.09.002.
27. Fiorillo CD, Newsome WT, Schultz W. The temporal precision of reward prediction in dopamine neurons. Nat Neurosci. 2008;11:966–973. doi: 10.1038/nn.2159.
28. Fiorillo CD, Tobler PN, Schultz W. Discrete coding of reward probability and uncertainty by dopamine neurons. Science. 2003;299:1898–1902. doi: 10.1126/science.1077349.
29. Fox CR, Poldrack RA. Prospect theory and the brain. In: Glimcher PW, Camerer CF, Fehr E, Poldrack RA, editors. Neuroeconomics: Decision Making and the Brain. New York, NY: Academic Press; 2009.
30. Frank MJ, Seeberger LC, O’Reilly RC. By carrot or by stick: Cognitive reinforcement learning in parkinsonism. Science. 2004;306:1940–1943. doi: 10.1126/science.1102941.
31. Friedman M. Essays in Positive Economics. Chicago, IL: University of Chicago Press; 1953.
32. Glimcher PW. The neurobiology of visual-saccadic decision making. Annu Rev Neurosci. 2003;26:133–179. doi: 10.1146/annurev.neuro.26.010302.081134.
33. Glimcher PW, Camerer CF, Fehr E, Poldrack RA. Neuroeconomics: Decision Making and the Brain. New York, NY: Academic Press; 2009.
34. Glimcher PW, Sparks DL. Movement selection in advance of action in the superior colliculus. Nature. 1992;355:542–545. doi: 10.1038/355542a0.
35. Gold JI, Shadlen MN. Representation of a perceptual decision in developing oculomotor commands. Nature. 2000;404:390–394. doi: 10.1038/35006062.
36. Gold JI, Shadlen MN. The neural basis of decision making. Annu Rev Neurosci. 2007;30:535–574. doi: 10.1146/annurev.neuro.29.051605.113038.
37. Gul F, Pesendorfer W. The case for mindless economics. In: Caplin A, Schotter A, editors. The Foundations of Positive and Normative Economics: A Handbook. New York, NY: Oxford University Press; 2008.
38. Haber SN. The primate basal ganglia: Parallel and integrative networks. J Chem Neuroanat. 2003;26:317–330. doi: 10.1016/j.jchemneu.2003.10.003.
39. Hampton AN, Bossaerts P, O’Doherty JP. Neural correlates of mentalizing-related computations during strategic interactions in humans. Proc Natl Acad Sci U S A. 2008;105:6741–6746. doi: 10.1073/pnas.0711099105.
40. Harbaugh WT, Mayr U, Burghart DR. Neural responses to taxation and voluntary giving reveal motives for charitable donations. Science. 2007;316:1622–1625. doi: 10.1126/science.1140738.
41. Hare TA, Camerer CF, Rangel A. Self-control in decision-making involves modulation of the vmPFC valuation system. Science. 2009;324:646–648. doi: 10.1126/science.1168450.
42. Hare TA, O’Doherty J, Camerer CF, Schultz W, Rangel A. Dissociating the role of the orbitofrontal cortex and the striatum in the computation of goal values and prediction errors. J Neurosci. 2008;28:5623–5630. doi: 10.1523/JNEUROSCI.1309-08.2008.
43. Hayden BY, Nair AC, McCoy AN, Platt ML. Posterior cingulate cortex mediates outcome-contingent allocation of behavior. Neuron. 2008;60:19–25. doi: 10.1016/j.neuron.2008.09.012.
44. Hayden BY, Pearson JM, Platt ML. Fictive reward signals in the anterior cingulate cortex. Science. 2009;324:948–950. doi: 10.1126/science.1168488.
45. Heeger DJ. Normalization of cell responses in cat striate cortex. Vis Neurosci. 1992;9:181–197. doi: 10.1017/s0952523800009640.
46. Herrnstein RJ. Relative and absolute strength of response as a function of frequency of reinforcement. J Exp Anal Behav. 1961;4:267–272. doi: 10.1901/jeab.1961.4-267.
47. Horwitz GD, Batista AP, Newsome WT. Representation of an abstract perceptual decision in macaque superior colliculus. J Neurophysiol. 2004;91:2281–2296. doi: 10.1152/jn.00872.2003.
48. Houthakker HS. Revealed preference and the utility function. Economica. 1950;17:159–174.
49. Isa T, Kobayashi Y, Saito Y. Dynamic modulation of signal transmission through local circuits. In: Hall WC, Moschovakis A, editors. The Superior Colliculus: New Approaches for Studying Sensorimotor Integration. New York, NY: CRC Press; 2004.
50. Janssen P, Shadlen MN. A representation of the hazard rate of elapsed time in macaque area LIP. Nat Neurosci. 2005;8:234–241. doi: 10.1038/nn1386.
51. Kable JW, Glimcher PW. The neural correlates of subjective value during intertemporal choice. Nat Neurosci. 2007;10:1625–1633. doi: 10.1038/nn2007.
52. Kahneman D, Tversky A. Prospect theory: An analysis of decision under risk. Econometrica. 1979;47:263–291.
53. Kiani R, Hanks TD, Shadlen MN. Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment. J Neurosci. 2008;28:3017–3029. doi: 10.1523/JNEUROSCI.4761-07.2008.
54. Kim JN, Shadlen MN. Neural correlates of a decision in the dorsolateral prefrontal cortex of the macaque. Nat Neurosci. 1999;2:176–185. doi: 10.1038/5739.
55. Kim S, Hwang J, Lee D. Prefrontal coding of temporally discounted values during intertemporal choice. Neuron. 2008;59:161–172. doi: 10.1016/j.neuron.2008.05.010.
56. King-Casas B, Sharp C, Lomax-Bream L, Lohrenz T, Fonagy P, Montague PR. The rupture and repair of cooperation in borderline personality disorder. Science. 2008;321:806–810. doi: 10.1126/science.1156902.
57. King-Casas B, Tomlin D, Anen C, Camerer CF, Quartz SR, Montague PR. Getting to know you: Reputation and trust in a two-person economic exchange. Science. 2005;308:78–83. doi: 10.1126/science.1108062.
58. Knutson B, Cooper JC. Functional magnetic resonance imaging of reward prediction. Curr Opin Neurol. 2005;18:411–417. doi: 10.1097/01.wco.0000173463.24758.f6.
59. Kobayashi S, Schultz W. Influence of reward delays on responses of dopamine neurons. J Neurosci. 2008;28:7837–7846. doi: 10.1523/JNEUROSCI.1600-08.2008.
60. Koszegi B, Rabin M. A model of reference-dependent preferences. Quarterly Journal of Economics. 2006;121:1133–1166.
61. Lau B, Glimcher PW. Value representations in the primate striatum during matching behavior. Neuron. 2008;58:451–463. doi: 10.1016/j.neuron.2008.02.021.
62. Lee D. Game theory and neural basis of social decision making. Nat Neurosci. 2008;11:404–409. doi: 10.1038/nn2065.
63. Lee P, Hall WC. An in vitro study of horizontal connections in the intermediate layer of the superior colliculus. J Neurosci. 2006;26:4763–4768. doi: 10.1523/JNEUROSCI.0724-06.2006.
64. Leon MI, Shadlen MN. Effect of expected reward magnitude on the response of neurons in the dorsolateral prefrontal cortex of the macaque. Neuron. 1999;24:415–425. doi: 10.1016/s0896-6273(00)80854-5.
65. Leon MI, Shadlen MN. Representation of time by neurons in the posterior parietal cortex of the macaque. Neuron. 2003;38:317–327. doi: 10.1016/s0896-6273(03)00185-5.
66. Levy I, Rustichini A, Glimcher PW. A single system represents subjective value under both risky and ambiguous decision-making in humans. 37th Annual Society for Neuroscience Meeting; San Diego, CA. 2007.
67. Lo CC, Wang XJ. Cortico-basal ganglia circuit mechanism for a decision threshold in reaction time tasks. Nat Neurosci. 2006;9:956–963. doi: 10.1038/nn1722.
68. Marr D. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. New York, NY: Henry Holt and Co., Inc.; 1982.
69. Matsumoto M, Matsumoto K, Abe H, Tanaka K. Medial prefrontal cell activity signaling prediction errors of action values. Nat Neurosci. 2007;10:647–656. doi: 10.1038/nn1890.
70. McClure SM, Berns GS, Montague PR. Temporal prediction errors in a passive learning task activate human striatum. Neuron. 2003;38:339–346. doi: 10.1016/s0896-6273(03)00154-5.
71. McCoy AN, Platt ML. Risk-sensitive neurons in macaque posterior cingulate cortex. Nat Neurosci. 2005;8:1220–1227. doi: 10.1038/nn1523.
72. McFadden D. Conditional logit analysis of qualitative choice behavior. In: Zarembka P, editor. Frontiers in Econometrics. New York, NY: Academic Press; 1974. pp. 105–142.
73. Moll J, Krueger F, Zahn R, Pardini M, de Oliveira-Souza R, Grafman J. Human fronto-mesolimbic networks guide decisions about charitable donation. Proc Natl Acad Sci U S A. 2006;103:15623–15628. doi: 10.1073/pnas.0604475103.
74. Montague PR, Dayan P, Sejnowski TJ. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J Neurosci. 1996;16:1936–1947. doi: 10.1523/JNEUROSCI.16-05-01936.1996.
75. Montague PR, Lohrenz T. To detect and correct: Norm violations and their enforcement. Neuron. 2007;56:14–18. doi: 10.1016/j.neuron.2007.09.020.
76. Morris G, Nevet A, Arkadir D, Vaadia E, Bergman H. Midbrain dopamine neurons encode decisions for future action. Nat Neurosci. 2006;9:1057–1063. doi: 10.1038/nn1743.
77. Niv Y, Montague PR. Theoretical and empirical studies of learning. In: Glimcher PW, Camerer CF, Fehr E, Poldrack RA, editors. Neuroeconomics: Decision Making and the Brain. New York, NY: Academic Press; 2009.
78. O’Doherty JP. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Curr Opin Neurobiol. 2004;14:769–776. doi: 10.1016/j.conb.2004.10.016.
79. O’Doherty JP, Dayan P, Friston K, Critchley H, Dolan RJ. Temporal difference models and reward-related learning in the human brain. Neuron. 2003;38:329–337. doi: 10.1016/s0896-6273(03)00169-7.
80. Padoa-Schioppa C, Assad JA. Neurons in the orbitofrontal cortex encode economic value. Nature. 2006;441:223–226. doi: 10.1038/nature04676.
81. Padoa-Schioppa C, Assad JA. The representation of economic value in the orbitofrontal cortex is invariant for changes of menu. Nat Neurosci. 2008;11:95–102. doi: 10.1038/nn2020.
82. Pessiglione M, Seymour B, Flandin G, Dolan RJ, Frith CD. Dopamine-dependent prediction errors underpin reward-seeking behaviour in humans. Nature. 2006;442:1042–1045. doi: 10.1038/nature05051.
83. Plassmann H, O’Doherty J, Rangel A. Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. J Neurosci. 2007;27:9984–9988. doi: 10.1523/JNEUROSCI.2131-07.2007.
84. Platt ML, Glimcher PW. Neural correlates of decision variables in parietal cortex. Nature. 1999;400:233–238. doi: 10.1038/22268.
85. Platt ML, Huettel SA. Risky business: The neuroeconomics of decision making under uncertainty. Nat Neurosci. 2008;11:398–403. doi: 10.1038/nn2062.
86. Quilodran R, Rothe M, Procyk E. Behavioral shifts and action valuation in the anterior cingulate cortex. Neuron. 2008;57:314–325. doi: 10.1016/j.neuron.2007.11.031.
87. Rangel A, Camerer C, Montague PR. A framework for studying the neurobiology of value-based decision making. Nat Rev Neurosci. 2008;9:545–556. doi: 10.1038/nrn2357.
88. Ratcliff R, Van Zandt T, McKoon G. Connectionist and diffusion models of reaction time. Psychol Rev. 1999;106:261–300. doi: 10.1037/0033-295x.106.2.261.
89. Redgrave P, Gurney K. The short-latency dopamine signal: A role in discovering novel actions? Nat Rev Neurosci. 2006;7:967–975. doi: 10.1038/nrn2022.
90. Redish AD. Addiction as a computational process gone awry. Science. 2004;306:1944–1947. doi: 10.1126/science.1102384.
91. Roesch MR, Calu DJ, Schoenbaum G. Dopamine neurons encode the better option in rats deciding between differently delayed or sized rewards. Nat Neurosci. 2007;10:1615–1624. doi: 10.1038/nn2013.
92. Roitman JD, Shadlen MN. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. J Neurosci. 2002;22:9475–9489. doi: 10.1523/JNEUROSCI.22-21-09475.2002.
93. Rudebeck PH, Behrens TE, Kennerley SW, Baxter MG, Buckley MJ, Walton ME, Rushworth MF. Frontal cortex subregions play distinct roles in choices between actions and stimuli. J Neurosci. 2008;28:13775–13785. doi: 10.1523/JNEUROSCI.3541-08.2008.
94. Rushworth MF, Behrens TE. Choice, uncertainty and value in prefrontal and cingulate cortex. Nat Neurosci. 2008;11:389–397. doi: 10.1038/nn2066.
95. Samejima K, Ueda Y, Doya K, Kimura M. Representation of action-specific reward values in the striatum. Science. 2005;310:1337–1340. doi: 10.1126/science.1115270.
96. Samuelson P. A note on the measurement of utility. Review of Economic Studies. 1937;4:155–161.
97. Saxe R, Carey S, Kanwisher N. Understanding other minds: Linking developmental psychology and functional neuroimaging. Annu Rev Psychol. 2004;55:87–124. doi: 10.1146/annurev.psych.55.090902.142044.
98. Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275:1593–1599. doi: 10.1126/science.275.5306.1593.
99. Selten R. A reexamination of the perfectness concept for equilibrium points in extensive games. Intl J Game Theory. 1975;4:25–55.
100. Seo H, Lee D. Temporal filtering of reward signals in the dorsal anterior cingulate cortex during a mixed-strategy game. J Neurosci. 2007;27:8366–8377. doi: 10.1523/JNEUROSCI.2369-07.2007.
101. Seo H, Lee D. Behavioral and neural changes after gains and losses of conditioned reinforcers. J Neurosci. 2009;29:3627–3641. doi: 10.1523/JNEUROSCI.4726-08.2009.
102. Shadlen MN, Britten KH, Newsome WT, Movshon JA. A computational analysis of the relationship between neuronal and behavioral responses to visual motion. J Neurosci. 1996;16:1486–1510. doi: 10.1523/JNEUROSCI.16-04-01486.1996.
103. Shadlen MN, Newsome WT. Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. J Neurophysiol. 2001;86:1916–1936. doi: 10.1152/jn.2001.86.4.1916.
104. Singer T, Fehr E. The neuroeconomics of mind reading and empathy. American Economic Review. 2005;95:340–345. doi: 10.1257/000282805774670103.
105. Sugrue LP, Corrado GS, Newsome WT. Matching behavior and the representation of value in the parietal cortex. Science. 2004;304:1782–1787. doi: 10.1126/science.1094765.
106. Sutton RS, Barto AG. Reinforcement Learning: An Introduction. Cambridge, MA: The MIT Press; 1998.
107. Takahashi YK, Roesch MR, Stalnaker TA, Haney RZ, Calu DJ, Taylor AR, Burke KA, Schoenbaum G. The orbitofrontal cortex and ventral tegmental area are necessary for learning from unexpected outcomes. Neuron. 2009;62:269–280. doi: 10.1016/j.neuron.2009.03.005.
108. Tanji J, Evarts EV. Anticipatory activity of motor cortex neurons in relation to direction of an intended movement. J Neurophysiol. 1976;39:1062–1068. doi: 10.1152/jn.1976.39.5.1062.
109. Tankersley D, Stowe CJ, Huettel SA. Altruism is associated with an increased neural response to agency. Nat Neurosci. 2007;10:150–151. doi: 10.1038/nn1833.
110. Tobler PN, Dickinson A, Schultz W. Coding of predicted reward omission by dopamine neurons in a conditioned inhibition paradigm. J Neurosci. 2003;23:10402–10410. doi: 10.1523/JNEUROSCI.23-32-10402.2003.
111. Tobler PN, Fiorillo CD, Schultz W. Adaptive coding of reward value by dopamine neurons. Science. 2005;307:1642–1645. doi: 10.1126/science.1105370.
112. Tom SM, Fox CR, Trepel C, Poldrack RA. The neural basis of loss aversion in decision-making under risk. Science. 2007;315:515–518. doi: 10.1126/science.1134239.
113. Tomlin D, Kayali MA, King-Casas B, Anen C, Camerer CF, Quartz SR, Montague PR. Agent-specific responses in the cingulate cortex during economic exchanges. Science. 2006;312:1047–1050. doi: 10.1126/science.1125596.
114. Tremblay L, Schultz W. Relative reward preference in primate orbitofrontal cortex. Nature. 1999;398:704–708. doi: 10.1038/19525.
115. Van Gisbergen JA, Robinson DA, Gielen S. A quantitative analysis of generation of saccadic eye movements by burst neurons. J Neurophysiol. 1981;45:417–442. doi: 10.1152/jn.1981.45.3.417.
116. von Neumann J, Morgenstern O. The Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press; 1944.
117. Waelti P, Dickinson A, Schultz W. Dopamine responses comply with basic assumptions of formal learning theory. Nature. 2001;412:43–48. doi: 10.1038/35083500.
118. Wallis JD, Miller EK. Neuronal activity in primate dorsolateral and orbital prefrontal cortex during performance of a reward preference task. Eur J Neurosci. 2003;18:2069–2081. doi: 10.1046/j.1460-9568.2003.02922.x.
119. Wang XJ. Decision making in recurrent neuronal circuits. Neuron. 2008;60:215–234. doi: 10.1016/j.neuron.2008.09.034.
120. Wong KF, Wang XJ. A recurrent network mechanism of time integration in perceptual decisions. J Neurosci. 2006;26:1314–1328. doi: 10.1523/JNEUROSCI.3733-05.2006.
121. Yang T, Shadlen MN. Probabilistic reasoning by neurons. Nature. 2007;447:1075–1080. doi: 10.1038/nature05852.
122. Zaghloul KA, Blanco JA, Weidemann CT, McGill K, Jaggi JL, Baltuch GH, Kahana MJ. Human substantia nigra neurons encode unexpected financial rewards. Science. 2009;323:1496–1499. doi: 10.1126/science.1167342.
