J Neurosci. 2017 Jul 12;37(28):6601–6602. doi: 10.1523/JNEUROSCI.1073-17.2017

The Role of the Amygdala in Value-Based Learning

Cameron D. Hassall and Chad C. Williams

Central to the modern study of decision-making is an attempt to understand the ways in which value is represented and used in the brain. Decision-making depends on the learning of value, which has been consistently shown to involve dopamine projections (Kable and Glimcher, 2009). As learning occurs, value-based information is computed in the orbitofrontal cortex (OFC), the medial frontal cortex (MFC), and the striatum, then integrated within the posterior parietal cortex (PPC). Specifically, the OFC relays information about the value of stimuli, the MFC delivers information about the value of actions, and the striatum encodes information about actual and predicted rewards (Rushworth et al., 2007; Kable and Glimcher, 2009). The amygdala, traditionally associated with emotion and motivation, is also heavily involved in representing value (Jenison et al., 2011; Janak and Tye, 2015).

The amygdala's involvement in learning and decision-making is perhaps unsurprising given the bidirectional pathways that exist between it, the OFC, and the MFC (Ghashghaei et al., 2007). Complicating efforts to distinguish the roles of these three regions in decision-making, their neural activity is often correlated during decision-making tasks (Doya, 2008). In one attempt to examine the relationship between these regions while animals learned the value of stimuli, Rudebeck et al. (2013) combined neural recording and lesioning. Monkeys performed a simple decision-making task, both before and after bilateral lesioning of the amygdala, while neural activity was recorded. The monkeys were equally adept at the task before and after lesioning, but the way that the OFC and the MFC represented stimulus value (the amount of reward predicted by a stimulus) was altered. Before lesioning, a greater proportion of OFC neurons than MFC neurons encoded stimulus value, and OFC neurons tended to signal stimulus value earlier than MFC neurons. After lesioning, stimulus-value encoding in the OFC (both the proportion of encoding neurons and the latency of their firing) dropped closer to MFC levels. These results suggest that stimulus-value encoding in the OFC (more so than in the MFC) depends, in part, on input from the amygdala. Furthermore, choice performance did not decline after amygdalectomy, suggesting that continued input from the amygdala to the OFC and the MFC is not needed to make decisions about familiar stimuli.

To address whether a similar story holds for novel stimuli, Rudebeck et al. (2017) followed up Rudebeck et al. (2013) by analyzing and discussing additional data from the same monkeys. Here, the monkeys learned to associate three levels of reward with three stimuli. On each trial, the monkeys selected one of two visual stimuli. After a short delay, the monkeys received a quantity of water corresponding to their choice. Outcomes were deterministic: the same level of reward was always received for a given stimulus choice, and the monkeys quickly learned to choose the stimulus associated with the greatest reward. The monkeys completed this task both before and after lesioning of the amygdala. Importantly, the postoperative stimuli differed from the preoperative stimuli, allowing Rudebeck et al. (2017) to address the impact of amygdalectomy on learning novel reward contingencies.
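
To make the task structure concrete, here is a minimal sketch in Python; the stimulus labels, reward magnitudes, and pairing rule are illustrative assumptions rather than the exact parameters of the study.

```python
import random

# Illustrative sketch of the task: three stimuli, each deterministically
# paired with one of three reward levels. Labels and magnitudes are our
# assumptions, not the exact parameters used by Rudebeck et al. (2017).
REWARD = {"A": 1, "B": 2, "C": 3}  # stimulus -> units of water

def run_trial(choose):
    """Present two randomly drawn stimuli; return (choice, reward)."""
    left, right = random.sample(sorted(REWARD), 2)
    choice = choose(left, right)
    return choice, REWARD[choice]  # a given stimulus always pays the same

# A monkey that has learned the contingencies picks the higher-valued option.
optimal = lambda a, b: a if REWARD[a] >= REWARD[b] else b
print(run_trial(optimal))
```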

Unlike with familiar stimuli (Rudebeck et al., 2013), the monkeys were worse at learning about novel stimuli after amygdalectomy (Rudebeck et al., 2017). In particular, the monkeys took longer to learn the correct response (i.e., to select the higher-valued stimulus). Additionally, the monkeys were more likely to make two errors in a row after amygdalectomy, suggesting a deficit in learning from negative feedback. No such deficit was observed following correct trials; that is, the monkeys were no more likely to make an error after a correct response following amygdalectomy, suggesting that their ability to learn from positive feedback was intact. This is in line with human neuroimaging data: Yacubian et al. (2006) concluded that expectations of gain are encoded in the ventral striatum, whereas expectations of loss are encoded in the amygdala.
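
The feedback analysis can be sketched as follows, assuming a hypothetical per-trial record of correct and incorrect choices (the data and function names here are ours, for illustration only):

```python
# Hypothetical sketch of the consecutive-error analysis: compare the
# error rate following an error with the error rate following a correct
# choice, to separate learning from negative vs. positive feedback.
def post_outcome_error_rates(correct):
    """correct: per-trial booleans (True = chose the higher-valued stimulus).
    Returns (P(error | previous error), P(error | previous correct))."""
    after_error = [not cur for prev, cur in zip(correct, correct[1:]) if not prev]
    after_correct = [not cur for prev, cur in zip(correct, correct[1:]) if prev]
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(after_error), rate(after_correct)

# Invented example in which errors tend to follow errors, as in the
# lesioned monkeys: P(error | error) > P(error | correct).
print(post_outcome_error_rates([True, True, False, False, True, True, False, False]))
```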

In addition to behavioral data, Rudebeck et al. (2017) measured the proportion of neurons in the OFC and the MFC encoding reward level at the time of stimulus presentation. First, and unlike in Rudebeck et al. (2013), they fit a reinforcement learning model to the trial-to-trial decisions of each monkey. For each monkey and trial, the model predicted the value of each stimulus. These predictions (unique to each monkey and trial) were used to identify neurons encoding stimulus value. The use of a reinforcement learning model was necessary because, presumably, stimulus values were not learned instantly, but rather developed across several trials. In line with Rudebeck et al. (2013), both the OFC and the MFC contained neurons encoding stimulus value, which fired slightly earlier in the OFC than in the MFC. Lesioning the amygdala resulted in diminished stimulus-value encoding in the OFC and the MFC, but with no change in encoding latency. Thus, maintaining existing value representations in the OFC relies on the amygdala (Rudebeck et al., 2013), as does acquiring new value representations in the OFC and the MFC (Rudebeck et al., 2017), representations that appear critical for optimal performance.
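
As a rough illustration of this modeling approach, a simple prediction-error learner with a softmax choice rule shows how per-trial value estimates evolve. This is our sketch, with illustrative parameter values, not the authors' exact model; in practice, the learning rate and temperature would be fit to each monkey's choices (e.g., by maximizing the likelihood of the observed decisions).

```python
import math

ALPHA = 0.2  # learning rate (illustrative value, normally fit to choices)
BETA = 3.0   # softmax inverse temperature (illustrative value)

def choice_prob(v_a, v_b):
    """Softmax probability of choosing option a over option b."""
    return 1.0 / (1.0 + math.exp(-BETA * (v_a - v_b)))

def update(values, stimulus, reward):
    """Rescorla-Wagner update: nudge the chosen stimulus's value toward
    the obtained reward by a fraction ALPHA of the prediction error."""
    delta = reward - values[stimulus]  # prediction error
    values[stimulus] += ALPHA * delta
    return delta

# Values develop gradually across trials rather than being learned at
# once, which is why per-trial model estimates (not final values) are
# needed to identify value-encoding neurons.
values = {"A": 0.0, "B": 0.0, "C": 0.0}
for stim, reward in [("A", 1), ("B", 2), ("C", 3), ("C", 3), ("B", 2)]:
    update(values, stim, reward)
print(values)
print(choice_prob(values["C"], values["B"]))  # predicted P(choose C over B)
```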

Although provocative, the fact that postoperative performance suffered in Rudebeck et al. (2017) is potentially problematic. Ideally, choice behavior would not have changed, and neural differences in stimulus-value encoding could be solely attributed to the amygdalectomy. However, it is unlikely that the change in OFC stimulus-value encoding was due to a change in performance; similar changes within the OFC were observed in Rudebeck et al. (2013), but with no associated decrease in performance. Furthermore, although Rudebeck et al. (2013) did not observe the same changes in MFC stimulus-value encoding as Rudebeck et al. (2017), it is difficult to conceive of a reason why slightly impaired (but still excellent) performance alone would explain a dramatic change in stimulus-value encoding. However, the preoperative and postoperative models in Rudebeck et al. (2017) were not compared, leaving the mechanisms underlying amygdalar influence unclear.

Reinforcement learning models have proven useful in bridging neural recording data with learning and decision-making theory. For example, Schultz et al. (1997) showed that midbrain dopamine neurons generate a signal analogous to model-generated prediction errors. Similarly, Rudebeck et al. (2017) sought OFC, MFC, and amygdala neurons whose activity resembled the trial-to-trial stimulus values computed by a reinforcement learning model. Future studies involving neural recordings in these regions might also benefit from this strategy, especially if models involving the amygdala are considered. For example, Costa et al. (2016) recently reported that both the amygdala and the ventral striatum contribute to reinforcement learning, albeit in somewhat different ways. In particular, they observed that when outcomes were deterministic, monkeys with amygdala lesions performed worse than control monkeys and monkeys with ventral striatum lesions. Costa et al. (2016) explained this performance deficit as a reduction in learning rate, a reinforcement learning model parameter. This observation is particularly relevant to the Rudebeck et al. (2017) study, in which a reinforcement learning model was used to predict value signals but was not discussed with regard to lesion-related performance deficits. Importantly, such discussions may help to uncover the as-yet-unknown role of the amygdala in reinforcement learning (for a recent review, see Averbeck and Costa, 2017).
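
To see how a smaller learning rate alone can slow learning under deterministic outcomes, consider a minimal sketch of this account (parameter values assumed for illustration):

```python
def trials_to_learn(alpha, reward=1.0, criterion=0.9):
    """Trials for a Rescorla-Wagner estimate of a fixed reward to climb
    from zero to `criterion` * reward."""
    v, n = 0.0, 0
    while v < criterion * reward:
        v += alpha * (reward - v)  # prediction-error update
        n += 1
    return n

print(trials_to_learn(0.3))  # intact-like learning rate: 7 trials
print(trials_to_learn(0.1))  # reduced learning rate: 22 trials
```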

It is informative that lesions of the amygdala impair learning about new stimuli (Rudebeck et al., 2017) but do not impair choices about familiar stimuli (Rudebeck et al., 2013). This suggests that a working amygdala is required for learning in stationary environments (in which reward contingencies do not change), but not for decision-making based on already-learned values. In nonstationary environments, such as reversal-learning tasks, the amygdala may have an even greater impact on decision-making because the agent must regularly unlearn old reward associations to learn new ones, as the sketch below illustrates. Interestingly, monkeys with amygdala lesions perform better than controls in reversal-learning tasks, possibly because they switch to a response strategy that is independent of value representations (Rudebeck and Murray, 2008; Izquierdo et al., 2017).
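
A minimal sketch (again with assumed parameters) of why reversals are demanding for a pure value learner: the old association must be extinguished before the newly rewarded option can dominate.

```python
ALPHA = 0.2  # illustrative learning rate

def rw_update(v, reward):
    """One Rescorla-Wagner step toward the obtained reward."""
    return v + ALPHA * (reward - v)

# Stimulus X is rewarded for 30 trials; then the contingency reverses to Y.
v_x, v_y = 0.0, 0.0
for _ in range(30):
    v_x = rw_update(v_x, 1.0)

trials_after_reversal = 0
while v_y <= v_x:                  # until the newly rewarded option wins
    v_x = rw_update(v_x, 0.0)      # old association must be unlearned
    v_y = rw_update(v_y, 1.0)      # new association must be acquired
    trials_after_reversal += 1
print(trials_after_reversal)       # extra trials spent on unlearning
```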

The results of Rudebeck et al. (2017) suggest that the amygdala not only is important for maintaining existing value representations in the OFC (Rudebeck et al., 2013), but also is involved in learning new value representations in the OFC and the MFC. Unlike with familiar stimuli, impoverished value representations for novel stimuli can lead to performance deficits: slower learning rates, and inappropriate responses following errors. These results further obscure the distinction between the roles of the amygdala, the OFC, and the MFC in decision-making, and open the door for future studies designed to investigate the relationship between these neural regions.

Footnotes

Editor's Note: These short reviews of recent JNeurosci articles, written exclusively by students or postdoctoral fellows, summarize the important findings of the paper and provide additional insight and commentary. If the authors of the highlighted article have written a response to the Journal Club, the response can be found by viewing the Journal Club at www.jneurosci.org. For more information on the format, review process, and purpose of Journal Club articles, please see http://jneurosci.org/content/preparing-manuscript#journalclub.

This work was supported by the Neureducation Network.

The authors declare no competing financial interests.

References

1. Averbeck BB, Costa VD (2017) Motivational neural circuits underlying reinforcement learning. Nat Neurosci 20:505–512. doi: 10.1038/nn.4506
2. Costa VD, Dal Monte O, Lucas DR, Murray EA, Averbeck BB (2016) Amygdala and ventral striatum make distinct contributions to reinforcement learning. Neuron 92:505–517. doi: 10.1016/j.neuron.2016.09.025
3. Doya K (2008) Modulators of decision making. Nat Neurosci 11:410–416. doi: 10.1038/nn2077
4. Ghashghaei HT, Hilgetag CC, Barbas H (2007) Sequence of information processing for emotions based on the anatomic dialogue between prefrontal cortex and amygdala. Neuroimage 34:905–923. doi: 10.1016/j.neuroimage.2006.09.046
5. Izquierdo A, Brigman JL, Radke AK, Rudebeck PH, Holmes A (2017) The neural basis of reversal learning: an updated perspective. Neuroscience 345:12–26. doi: 10.1016/j.neuroscience.2016.03.021
6. Janak PH, Tye KM (2015) From circuits to behaviour in the amygdala. Nature 517:284–292. doi: 10.1038/nature14188
7. Jenison RL, Rangel A, Oya H, Kawasaki H, Howard MA (2011) Value encoding in single neurons in the human amygdala during decision making. J Neurosci 31:331–338. doi: 10.1523/JNEUROSCI.4461-10.2011
8. Kable JW, Glimcher PW (2009) The neurobiology of decision: consensus and controversy. Neuron 63:733–745. doi: 10.1016/j.neuron.2009.09.003
9. Rudebeck PH, Murray EA (2008) Amygdala and orbitofrontal cortex lesions differentially influence choices during object reversal learning. J Neurosci 28:8338–8343. doi: 10.1523/JNEUROSCI.2272-08.2008
10. Rudebeck PH, Mitz AR, Chacko RV, Murray EA (2013) Effects of amygdala lesions on reward-value coding in orbital and medial prefrontal cortex. Neuron 80:1519–1531. doi: 10.1016/j.neuron.2013.09.036
11. Rudebeck PH, Ripple JA, Mitz AR, Averbeck BB, Murray EA (2017) Amygdala contributions to stimulus–reward encoding in the macaque medial and orbital frontal cortex during learning. J Neurosci 37:2186–2202. doi: 10.1523/JNEUROSCI.0933-16.2017
12. Rushworth MF, Behrens TE, Rudebeck PH, Walton ME (2007) Contrasting roles for cingulate and orbitofrontal cortex in decisions and social behaviour. Trends Cogn Sci 11:168–176. doi: 10.1016/j.tics.2007.01.004
13. Schultz W, Dayan P, Montague PR (1997) A neural substrate of prediction and reward. Science 275:1593–1599. doi: 10.1126/science.275.5306.1593
14. Yacubian J, Gläscher J, Schroeder K, Sommer T, Braus DF, Büchel C (2006) Dissociable systems for gain- and loss-related value predictions and errors of prediction in the human brain. J Neurosci 26:9530–9537. doi: 10.1523/JNEUROSCI.2915-06.2006
