Author manuscript; available in PMC: 2013 Jul 9.
Published in final edited form as: Science. 2012 Oct 5;338(6103):132–135. doi: 10.1126/science.1226405

In Monkeys Making Value-Based Decisions, LIP Neurons Encode Cue Salience and Not Action Value

Marvin L. Leathers*, Carl R. Olson
PMCID: PMC3705639  NIHMSID: NIHMS481296  PMID: 23042897

In monkeys deciding between alternative saccadic eye movements, lateral intraparietal (LIP) neurons representing each saccade fire at a rate proportional to the value of the reward expected upon its completion. This observation has been interpreted as indicating that LIP neurons encode saccadic value and that they mediate value-based decisions between saccades. Here we show that LIP neurons representing a given saccade fire strongly not only if it will yield a large reward but also if it will incur a large penalty. This finding indicates that LIP neurons are sensitive to the motivational salience of cues. It is compatible neither with the idea that LIP neurons represent action value nor with the idea that value-based decisions take place in LIP.

Each of us makes hundreds of value-based decisions every day. Deciding whether to take a drink of juice or a sip of coffee at breakfast is a process driven by the subjective value of each outcome. So is deciding whether to go to graduate school or medical school. There has been debate in recent years concerning the neural mechanisms of such decisions. This debate has pitted a goods-based account against an action-based account. The goods-based account holds that limbic areas such as orbitofrontal cortex mediate a choice between goods (juice or coffee) based on their respective values and that the choice is translated into an appropriate motor command (reach for the glass or the cup) in parietal and dorsal frontal cortex (1-3). The action-based account holds that the values of juice and coffee are computed in orbitofrontal cortex and transmitted to parietal and dorsal frontal cortex, where neurons involved in planning to reach for the glass and the cup become active in proportion to the values of juice and coffee. The decision then evolves through competition between neuronal populations representing the opposed action plans (4-9). The goods-based model is intuitive and fits with the assumption underlying classic behavioral economics that decisions concern anticipated outcomes as distinct from motor plans. However, the action-based model has received apparent support from single-neuron recording studies of LIP. In monkeys making value-based decisions between saccade targets, LIP neurons representing each saccade fire early in the trial at a rate proportional to the reward expected upon its completion (10-14). This observation is compatible with the idea that LIP neurons represent action value in the service of an action-based decision process. However, there is another possible interpretation. Emotionally potent stimuli are salient in the sense that they automatically capture attention (15, 16). 
This is true in particular of stimuli associated with rewards and penalties (17-19). LIP neurons fire at an enhanced rate when attention is directed into their response fields (20). Thus LIP neurons might fire strongly when a valued target is in the response field simply because the target is motivationally salient (21, 22).

To distinguish between value-based and salience-based signals in LIP, we employed a task incorporating rewards and penalties (23). On each trial, the monkey chose between cues placed in and opposite the neuronal response field (Fig. 1A). Each cue indicated that if the monkey made an eye movement to its location at the end of the trial a particular outcome would ensue (Fig. 1B). The possible outcomes were large reward (several drops of water), small reward (one drop of water), small penalty (short period of eccentric fixation) and large penalty (long period of eccentric fixation). Confronted with two offers of different value, the monkeys consistently chose the better offer (Fig. 1C, Table S2). A large penalty possesses lower value than a small penalty but is emotionally more potent. Consequently, value-based and salience-based models make opposite predictions with regard to the impact of predicted penalty size on neuronal activity (3, 20, 24). To test the predictions, we collected data from 28 neurons in the right LIP of monkey 1 and 39 neurons in the left LIP of monkey 2.
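The opposed predictions of the two accounts can be made concrete with a toy ordinal coding. The numbers below are illustrative assumptions, not measured quantities; they serve only to show the orderings each account implies for predicted firing.

```python
# Illustrative ordinal scales for the four outcomes. "Value" orders outcomes
# from worst to best; "salience" tracks emotional potency regardless of sign.
value = {"large_reward": 2, "small_reward": 1,
         "small_penalty": -1, "large_penalty": -2}
salience = {"large_reward": 2, "small_reward": 1,
            "small_penalty": 1, "large_penalty": 2}

# For reward cues the accounts agree: larger reward, stronger predicted firing.
assert value["large_reward"] > value["small_reward"]
assert salience["large_reward"] > salience["small_reward"]

# For penalty cues they diverge: value coding predicts weaker firing for the
# large penalty, salience coding predicts stronger firing.
assert value["large_penalty"] < value["small_penalty"]
assert salience["large_penalty"] > salience["small_penalty"]
```

This divergence for penalty cues is what makes the large-versus-small penalty comparison the critical test.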

Fig. 1.

Fig. 1

(A) Sequence of events in a single trial. The items in each panel were visible to the monkey with the exception of those depicting the response field (RF), gaze location and saccade. (B) The eight cues possessed different significance in the two monkeys. Large and small water drops: large and small reward. Large and small lightning bolts: large and small penalty. (C) The monkeys nearly always chose optimally. The numbers indicate the percentage of trials on which they chose the better outcome – either into the response field (blue) or away from it (red). (D) Population firing rate as a function of time during trials with a large-reward cue in the RF and a small-reward cue opposite (blue) or vice versa (red). The monkey chose large reward. (E) Population firing rate as a function of time during trials with a large-penalty cue in the RF and a small-penalty cue opposite (red) or vice versa (blue). The monkey chose small penalty. Tick marks indicate 10 ms bins in which the difference between red and blue curves crossed an arbitrary statistical threshold (two-tailed paired t-test, n = 67, α = 0.01). In D-E, each pair of red and blue curves is based on conditions colored red and blue in the accompanying inset and each p value indicates the outcome of a two-tailed paired t-test (n = 67) applied to firing rates under red and blue conditions (25).
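The tick-mark analysis in the caption, a paired t-test across neurons within each 10 ms bin, can be sketched as follows. The firing rates here are simulated (the simulation parameters are assumptions), but the test structure mirrors the one described: n = 67 paired observations per bin, two-tailed, α = 0.01.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_neurons, n_bins = 67, 50          # e.g. 500 ms of data in 10 ms bins

# Simulated per-neuron firing rates (Hz) per bin for the two conditions:
# preferred cue in the response field ("blue") vs. opposite ("red").
blue = rng.normal(20.0, 4.0, size=(n_neurons, n_bins))
red = rng.normal(15.0, 4.0, size=(n_neurons, n_bins))

# Bin-wise two-tailed paired t-test across the 67 neurons; bins crossing
# the alpha = 0.01 threshold would receive a tick mark in the figure.
alpha = 0.01
ticked = [b for b in range(n_bins)
          if ttest_rel(blue[:, b], red[:, b]).pvalue < alpha]
```

With the large simulated condition difference nearly every bin crosses threshold; with real data the ticked bins trace out when the condition difference emerges after cue onset.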

We first examined trials in which the monkey, offered a choice between a large and a small reward, chose the large reward. Population activity during the cue period (25) was significantly stronger when the cue in the response field predicted a large reward (Fig. 1D, blue fill). The enhancement could have depended on the cue's higher value, its greater motivational salience or the monkey's decision to look toward the response field. We next examined trials in which the monkey, offered a choice between a large penalty and a small penalty, chose the small penalty. Firing during the cue period was significantly stronger when the cue in the response field predicted a large penalty (Fig. 1E, red fill). This enhancement must have depended on the greater motivational salience of the cue because its value was low and the monkey decided to look away from it.

To factor out completely any effect of the saccadic decision, we constructed population plots based on subsets of conditions in which cue value varied but saccade direction did not. Population activity was stronger when a cue in the response field predicted a large as compared to a small reward (Fig. 2A, blue fill) and also when it predicted a large as compared to a small penalty (Fig. 2B, blue fill). The value of the cue opposite the response field exerted only a weak and inconsistent effect (26). This pattern of location specificity could arise from enhanced spatial attention but not enhanced arousal, vigilance or general motivation.

Fig. 2.

Fig. 2

Population firing rate under selected conditions. (A) The cue in the response field predicted large (blue) or small (red) reward. (B) The cue in the response field predicted large (blue) or small (red) penalty.

Neurons might have responded strongly to large-penalty cues because, by chance, they possessed a high degree of physical salience. Therefore we gave the images different significance in the two monkeys. Cues predicting a large penalty in monkey 1 predicted a small reward in monkey 2 and vice versa (Fig. 1B). Nevertheless, in each monkey, cues predicting a large penalty elicited stronger firing than cues predicting a small reward (Fig. 3A-B, blue fill). Thus neuronal activity depended on the images' associated outcomes and not their physical attributes.

Fig. 3.

Fig. 3

Population activity when a cue appearing in the response field predicted a large penalty (blue) or a small reward (red). On all trials, the cue opposite the response field predicted a large reward and the monkey chose it. (A) In monkey 1, images A4 and B4 predicted a large penalty and images A1 and B1 predicted small reward. (B) In monkey 2, the contingencies were reversed.

To characterize reward- and penalty-related activity at the level of individual neurons, we carried out two analyses. One analysis assessed the impact of reward cues inside and penalty cues opposite the response field (Fig. 2A). Main effects of reward size in the response field predominated (red sector of Fig. 4A). A second analysis assessed the impact of penalty cues inside and reward cues opposite the response field (Fig. 2B). Main effects of penalty size in the response field predominated (green sector of Fig. 4B). We next generated for each neuron a reward index and a penalty index based on responses to cues in the response field. Each index ranged from -1 (fired only in response to “small” cue) to +1 (fired only in response to “large” cue). Plotting the penalty index against the reward index revealed that they were positively correlated across neurons (27) and that most points fell in the upper right quadrant (Fig. 4C). These points represent neurons whose activity is best explained as encoding the motivational salience of the cues.
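Indices with the stated range of -1 to +1 can be realized with a standard contrast index, (large − small)/(large + small); the paper's exact formula is in its supplement, so this particular form is an assumption. The quadrant labels follow Fig. 4C.

```python
def contrast_index(rate_large, rate_small):
    """Index in [-1, +1]: +1 if the neuron responds only to the "large"
    cue, -1 if only to the "small" cue, 0 if equally to both."""
    total = rate_large + rate_small
    return 0.0 if total == 0 else (rate_large - rate_small) / total

def quadrant(reward_index, penalty_index):
    """Fig. 4C quadrant labels. Salience coding predicts stronger firing
    for both larger rewards and larger penalties (upper right); value
    coding predicts stronger firing for larger rewards but weaker firing
    for larger penalties (lower right)."""
    if reward_index >= 0 and penalty_index >= 0:
        return "+salience"
    if reward_index < 0 and penalty_index < 0:
        return "-salience"
    if reward_index >= 0:
        return "+value"
    return "-value"

# A hypothetical neuron firing at 22 Hz for large-reward cues, 14 Hz for
# small-reward cues, 20 Hz for large-penalty cues and 12 Hz for
# small-penalty cues lands in the +salience quadrant.
ri = contrast_index(22, 14)
pi = contrast_index(20, 12)
```

A neuron coding action value would instead fire more for the large reward but less for the large penalty, landing in the +value quadrant.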

Fig. 4.

Fig. 4

Individual neurons exhibited significant sensitivity to reward size and penalty size for cues in the response field. (A) Counts of neurons exhibiting significant effects of reward size in the response field and penalty size opposite the response field (two factor ANOVA, α = 0.05). The predominant outcome was a main effect of reward in the response field (27 neurons: red tinting). (B) Counts of neurons exhibiting significant effects of penalty size in the response field and reward size opposite the response field (two factor ANOVA, α = 0.05). The predominant outcome was a main effect of penalty in the response field (28 neurons: green tinting). In A-B, “+” and “−” counts indicate numbers of neurons firing more strongly for the larger (+) or smaller (−) predicted outcome; counts in italic boldface are significantly greater than expected by chance (χ2 test, Yates correction, α = 0.05). (C) Penalty index vs. reward index for all 67 neurons. The quadrants are labeled according to the hypothesis most concordant with the corresponding pattern of activity: +salience = responds more strongly to more salient stimuli; -salience = responds more strongly to less salient stimuli; + value = responds more strongly to more valuable stimuli; -value = responds more strongly to less valuable stimuli.
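The chance-level comparison for the counts in panels A-B can be sketched as a goodness-of-fit chi-square with the Yates continuity correction, testing whether the number of significant neurons exceeds the 5% expected under the ANOVA's α. This construction is an assumption about the paper's exact test; for df = 1 the p-value equals erfc(√(χ²/2)), so only the standard library is needed.

```python
import math

def yates_chi_square(n_significant, n_neurons, chance_rate=0.05):
    """Goodness-of-fit chi-square (df = 1) with Yates continuity
    correction: observed significant/non-significant counts against
    the counts expected if each neuron reached significance only at
    the chance rate. Returns (statistic, p-value)."""
    observed = (n_significant, n_neurons - n_significant)
    expected = (n_neurons * chance_rate, n_neurons * (1 - chance_rate))
    stat = sum(max(abs(o - e) - 0.5, 0.0) ** 2 / e
               for o, e in zip(observed, expected))
    # Exact chi-square survival function for df = 1.
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# 27 of 67 neurons showing a main effect of reward size, against the
# roughly 3.35 expected by chance at alpha = 0.05:
stat, p = yates_chi_square(27, 67)
```

Here the statistic far exceeds the df = 1 criterion of 3.84, so a count like 27 of 67 is significantly above chance, consistent with the boldface counts in the figure.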

That images with greater motivational salience elicit stronger activity in LIP fits with the idea that LIP constitutes a priority map in which neurons representing a given visual-field location fire at a rate proportional to that location's current draw on attention (20). Neurons in LIP respond strongly to physically salient images (28, 29). Whether they respond strongly to motivationally salient images has been unclear. Cues with greater incentive salience elicit stronger firing (22) but this effect is difficult to disentangle from the encoding of value (30, 31). The current finding that cues with greater aversive salience likewise elicit stronger firing establishes that LIP neurons are indeed sensitive to motivational salience (32).

We have shown that monkeys can make value-based saccadic decisions under circumstances in which the early firing of LIP neurons does not signal action value. This finding jibes with prior reports indicating that neuronal activity in LIP is poorly correlated with target value in a visual foraging task (33), that inactivation of LIP leaves intact the ability of monkeys to make value-based saccadic decisions (34) and that neural activity indicating which reward a subject will choose can develop even if the subject does not yet know which action will be required to obtain it (35, 36). These observations provide collective support for the idea that value-based decisions do not depend on LIP and may instead depend on limbic areas, including orbitofrontal cortex, where neurons signal goods value beginning around 100 ms after cue onset (2, 3).

Supplementary Material

supplementary_materials_NIHMSID481296

Acknowledgments

We thank Douglas Ruff for comments and Karen McCracken for technical assistance. Support: NIH R01 EY018620 and NIH P50 MH45156. Technical support: NIH P30 EY08098 and P41 RR03631.

References and Notes

1. Padoa-Schioppa C. Annual Review of Neuroscience. 2011;34:333. doi:10.1146/annurev-neuro-061010-113648.
2. Padoa-Schioppa C, Assad JA. Nature. 2006;441:223. doi:10.1038/nature04676.
3. Roesch MR, Olson CR. Science. 2004;304:307. doi:10.1126/science.1093223.
4. Gold JI, Shadlen MN. Annual Review of Neuroscience. 2007;30:535. doi:10.1146/annurev.neuro.29.051605.113038.
5. Kable JW, Glimcher PW. Neuron. 2009;63:733. doi:10.1016/j.neuron.2009.09.003.
6. Platt ML. Current Opinion in Neurobiology. 2002;12:141. doi:10.1016/s0959-4388(02)00302-1.
7. Rangel A, Hare T. Current Opinion in Neurobiology. 2010;20:262. doi:10.1016/j.conb.2010.03.001.
8. Sugrue LP, Corrado GS, Newsome WT. Nature Reviews Neuroscience. 2005;6:363. doi:10.1038/nrn1666.
9. “At the first stage, a value transformation takes the input… and abstracts from it a representation of the value of available options. At the second stage, a decision transformation maps this value representation onto the probability of alternative courses of action. A final processing stage transforms this continuous probability into discrete choice among these alternatives” (8).
10. Platt ML, Glimcher PW. Nature. 1999;400:233. doi:10.1038/22268.
11. Coe B, Tomihara K, Matsuzawa M, Hikosaka O. The Journal of Neuroscience. 2002;22:5081. doi:10.1523/JNEUROSCI.22-12-05081.2002.
12. Sugrue LP, Corrado GS, Newsome WT. Science. 2004;304:1782. doi:10.1126/science.1094765.
13. Klein JT, Deaner RO, Platt ML. Current Biology. 2008;18:419. doi:10.1016/j.cub.2008.02.047.
14. Louie K, Glimcher PW. The Journal of Neuroscience. 2010;30:5498. doi:10.1523/JNEUROSCI.5742-09.2010.
15. Brosch T, Pourtois G, Sander D, Vuilleumier P. Neuropsychologia. 2011;49:1779. doi:10.1016/j.neuropsychologia.2011.02.056.
16. Berridge KC. In: Kahneman D, Diener E, Schwarz N, editors. Well-Being: Foundations of Hedonic Psychology. New York: Russell Sage Foundation; 2003. pp. 525–557.
17. Lim SL, Padmala S, Pessoa L. Proceedings of the National Academy of Sciences. 2009;106:16841. doi:10.1073/pnas.0904551106.
18. O'Brien JL, Raymond JE. Psychological Science. 2012;23:359. doi:10.1177/0956797611429800.
19. Anderson BA, Laurent PA, Yantis S. PLoS ONE. 2011;6:e27926. doi:10.1371/journal.pone.0027926.
20. Bisley JW, Goldberg ME. Annual Review of Neuroscience. 2010;33:1. doi:10.1146/annurev-neuro-060909-152823.
21. Maunsell JHR. Trends in Cognitive Sciences. 2004;8:261. doi:10.1016/j.tics.2004.04.003.
22. Peck CJ, Jangraw DC, Suzuki M, Efem R, Gottlieb J. The Journal of Neuroscience. 2009;29:11182. doi:10.1523/JNEUROSCI.1929-09.2009.
23. Other distinguishing features of the task included associating outcomes with cues regardless of their location, which ruled out the development of motor biases, and giving the cues fixed significance, which ruled out uncertainty.
24. Wallis JD, Rich EL. Frontiers in Neuroscience. 2011;5. doi:10.3389/fnins.2011.00124.
25. All analyses concern 0–250 ms after cue onset unless otherwise stated.
26. Further analyses, including results from individual monkeys: Figs. S1–S8 and Table S1.
27. Effect driven by monkey 1 (Figs. S6–S7).
28. Arcizet F, Mirpour K, Bisley JW. Cerebral Cortex. 2011;21:2498. doi:10.1093/cercor/bhr035.
29. Buschman TJ, Miller EK. Science. 2007;315:1860. doi:10.1126/science.1138071.
30. Louie K, Grattan LE, Glimcher PW. The Journal of Neuroscience. 2011;31:10627. doi:10.1523/JNEUROSCI.1237-11.2011.
31. Rorie AE, Gao J, McClelland JL, Newsome WT. PLoS ONE. 2010;5:e9308. doi:10.1371/journal.pone.0009308.
32. Incentive and aversive salience refer to the control of cues over behavior and do not necessarily denote an ability to capture attention (15, 16). Note, however, that the monkeys were faster to look toward locations marked by large-reward than by small-reward cues and slower to look away from locations marked by large-penalty than by small-penalty cues (Table S1), as expected from attentional capture.
33. Mirpour K, Bisley JW. Proceedings of the National Academy of Sciences. 2012 Jun 5.
34. Balan PF, Gottlieb J. The Journal of Neuroscience. 2009;29:8166. doi:10.1523/JNEUROSCI.0243-09.2009.
35. Cai X, Padoa-Schioppa C. Society for Neuroscience Meeting Planner; 2010.
36. Wunderlich K, Rangel A, O'Doherty JP. Proceedings of the National Academy of Sciences. 2010. p. 15005.
