Published in final edited form as: Neuron. 2013 Dec 4;80(5):1109–1111. doi: 10.1016/j.neuron.2013.11.014

A surprised amygdala looks to the cortex for meaning

Ekaterina Likhtik 1,2, Joshua A Gordon 1,2,*

Abstract

Learning models propose a role for both signed and unsigned prediction errors in updating associations between cues and aversive outcomes. Here, Klavir et al. show how these errors arise from the interplay between the amygdala and anterior cingulate cortex.


Learning is a lifelong endeavor, demanding that we constantly track and update the outcomes we associate with specific stimuli, so that we quickly learn not to respond to stimuli we encounter today with reactions that were appropriate yesterday. Indeed, a lack of cognitive flexibility can be a setback in daily life and, if serious enough, a sign of neurological distress. A critical driving force behind learning is prediction error, the discrepancy between expected and actual outcome. Influential models from learning theory propose different means for updating stimulus-outcome associations to correct for prediction error, bringing predictions in line with current conditions. For example, in the Rescorla-Wagner model, associations are updated when the unconditioned stimulus (US) violates previously established expectations: the cue-US association strengthens if the US is bigger than expected and weakens if it is smaller than expected (Rescorla and Wagner, 1972). Thus, in this model the prediction error is signed, driving the strength of cue-outcome associations upward (positive error) if expectations are exceeded, and downward (negative error) if expectations are unmet. Conversely, in the Pearce-Hall and Mackintosh models, the prediction error is unsigned. If a cue produces an outcome that is different (whether more or less) from what was expected, more attention is given to the conditioned stimulus (CS), thereby strengthening the cue-outcome association regardless of the sign of the prediction error (Pearce and Hall, 1980; Mackintosh, 1975). Appetitive learning studies, where reward is unexpectedly delivered or omitted, support the coexistence of both models and demonstrate that signed and unsigned errors are represented across a distributed network of brain regions (Schultz et al., 1997; Roesch et al., 2010). However, the neural substrates of prediction error during aversive learning, where a "positive" error (the surprising presence of a stimulus) results in the delivery of an aversive experience, have not been as well explored.
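To make the distinction concrete, a minimal sketch of the two update rules is given below, applied to a toy aversive cue whose outcome is reversed midway through training. The code is not from Klavir et al.; the learning-rate and associability parameters are arbitrary, and the unsigned-error variant shown is the commonly used hybrid in which a Pearce-Hall associability term gates a simple delta-rule update.

    # Minimal sketch (not from Klavir et al.): signed (Rescorla-Wagner) versus
    # unsigned (Pearce-Hall-style hybrid) prediction-error updating across a
    # contingency reversal. All parameter values are arbitrary illustrations.

    def rescorla_wagner(v, us, lr=0.3):
        """Signed error: associative strength V moves toward the US value."""
        delta = us - v                      # signed prediction error
        return v + lr * delta

    def pearce_hall_hybrid(v, alpha, us, lr=0.3, gamma=0.7):
        """Unsigned error: associability alpha tracks |US - V| (surprise) and
        gates a delta-rule update of V (a common hybrid formulation)."""
        delta = us - v
        v = v + lr * alpha * delta          # learning scaled by attention/associability
        alpha = gamma * abs(delta) + (1 - gamma) * alpha  # this trial's surprise sets the next trial's alpha
        return v, alpha

    v_rw, v_ph, alpha = 0.0, 0.0, 0.5
    for trial in range(40):
        us = 1.0 if trial < 20 else 0.0     # the cue's air-puff contingency reverses at trial 20
        v_rw = rescorla_wagner(v_rw, us)
        v_ph, alpha = pearce_hall_hybrid(v_ph, alpha, us)
        print(f"trial {trial:2d}  RW V={v_rw:.2f}  hybrid V={v_ph:.2f}  alpha={alpha:.2f}")

At the reversal, the Rescorla-Wagner error flips sign, whereas alpha jumps regardless of whether the outcome got better or worse; the latter is the sense in which the Pearce-Hall error is unsigned.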

To investigate the temporal development of neural coding associated with aversive learning, Klavir et al. (in this issue of Neuron) simultaneously recorded from two areas known to support this task: the amygdala and the anterior cingulate cortex. Critically, the authors identify differential sequences of information processing in this circuit depending on whether neurons respond to signed or unsigned prediction errors. The group recorded units in the dorsal anterior cingulate cortex (dACC) and in distributed locations of the basolateral nuclei of the amygdala (BLA) while monkeys performed a trace conditioning task in which an auditory or a visual stimulus predicted either an aversive air puff to the eye (CS+) or no air puff (CS−). Once the association was learned, as evidenced by selective preparatory blinking to the CS+, the contingencies were reversed. The authors show that successfully learning the reversal increases the functional connectivity between the dACC and BLA and imposes an error sign-dependent temporal order on information processing (Figure). Amygdala cells that increase their firing rate in response to both positive and negative prediction errors (as in the unsigned errors of the Pearce-Hall model) precede activity in the dACC. Conversely, amygdala cells that change their firing rate in opposite directions for the two error types (and therefore encode signed errors, as proposed by the Rescorla-Wagner model) fire after those in the dACC. These findings suggest that unsigned errors first propagate from the amygdala to the dACC, where they are given a sign and returned to the amygdala. Thus, the dense reciprocal connectivity of this circuit allows for processing of both signed and unsigned prediction errors by differentially controlling the directionality of information transfer. Given data from depressed patients indicating significantly compromised prediction error updating (Gradin et al., 2011), characterizing the strengths and vulnerabilities of this circuit would be of potential clinical relevance.
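The claim that unsigned-error amygdala cells fire before dACC cells, while signed-error amygdala cells fire after them, is a statement about temporal precedence between simultaneously recorded areas. As a generic illustration only (the authors' own analyses of spike timing and functional connectivity are more sophisticated), the sketch below estimates a lead-lag relationship by cross-correlating binned firing rates; the function name, bin size, and simulated data are hypothetical.

    # Generic lead-lag illustration (not the authors' analysis): cross-correlate
    # binned firing rates from two simultaneously recorded areas and report the
    # lag at which they align best. Names and parameters are hypothetical.
    import numpy as np

    def peak_lag_ms(rate_a, rate_b, bin_ms=10):
        """Lag (ms) at which rate_b best matches rate_a; a positive value
        suggests that area A leads area B under this toy metric."""
        a = rate_a - rate_a.mean()
        b = rate_b - rate_b.mean()
        xcorr = np.correlate(b, a, mode="full")           # c[k] ~ sum_n b[n + k] * a[n]
        lags = np.arange(-len(a) + 1, len(a)) * bin_ms
        return int(lags[np.argmax(xcorr)])

    # Toy data: area B is a noisy copy of area A delayed by 3 bins, so A leads.
    rng = np.random.default_rng(0)
    rate_a = rng.poisson(5, size=200).astype(float)
    rate_b = np.roll(rate_a, 3) + rng.normal(0, 0.5, size=200)
    print(peak_lag_ms(rate_a, rate_b))                    # expect ~30 ms (A leads B)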

Figure.

Amygdala neurons representing unsigned prediction errors (top) increase their firing rates to both positive (CS− to CS+) and negative (CS+ to CS−) prediction errors during reversal learning. Activity in these "unsigned error" neurons precedes that in dACC neurons. Amygdala neurons representing signed prediction errors (bottom) change their firing rates in opposite directions for the two types of error. Activity in these "signed error" neurons follows that in dACC neurons. These findings suggest that unsigned errors arise in the amygdala and are sent to the dACC, where the error signal is given a sign before being returned to the amygdala.

Early amygdala firing to the surprising presence or absence of the aversive US, as shown in the Klavir et al. study, is similar to amygdala activity during processing of unsigned prediction errors in appetitive tasks that use reward as a US (Roesch et al., 2010). Collectively, these data suggest that cells in the amygdala are highly tuned to any changes in the environment, constituting an ideal neural substrate for sign-free, attention-related error processing as described by the Pearce-Hall model. Given that the amygdala is an evolutionarily conserved structure with a central role in processing threat and orchestrating defensive reactions, it is fitting that the amygdala is an effective detector of unexpected changes related to aversive stimuli. In support of this idea, the amygdala has been identified as a critical site for processing unpredictable stimuli that are not associated with a US of any value (Herry et al., 2007). Moreover, amygdala activity induced by surprising stimuli in humans was followed by enhanced attention toward upcoming threatening stimuli (Herry et al., 2007). An amygdala-mediated increase in attention to and salience of the CS, as suggested by the Pearce-Hall model, is consistent with the known role of this structure in emotional arousal and enhanced memory consolidation (McGaugh, 2013). Indeed, substantial evidence indicates that emotional arousal is accompanied by the release of glucocorticoids and activation of β-adrenergic receptors in the amygdala, processes that are essential for long-term memory consolidation in humans and in rodents (McGaugh, 2013). Simply put, emotional events are remembered better. Given that memory consolidation is a time-dependent process that occurs after training, it would be interesting to see whether, after the surprise of a reversal in the aversive contingencies of a given CS, the learning rate for the new stimulus-outcome associations depends on memory consolidation in the amygdala. If so, it would suggest that unsigned error-related increases in amygdala firing can lead to longer-lasting changes in intracellular signaling cascades that ultimately alter the state of amygdala cells during learning.

Seminal work has shown that signed errors in reward processing, as predicted by the Rescorla-Wagner model, are mediated by dopaminergic neurons (Schultz et al., 1997). Furthermore, inhibitory neurons in the ventral tegmental area are active throughout the CS-US delay and, in contrast to the dopaminergic cells, fire during the delivery of aversive as well as rewarding stimuli (Cohen et al., 2012). Thus, the interaction between excitatory drive onto dopaminergic cells and local inhibition along their dendrites and somata likely participates both in updating CS-US associational strength during prediction error learning and in silencing dopaminergic cells during aversive outcomes. Notably, it has also been demonstrated that normal appetitive error signaling in the striatum depends on input from the orbitofrontal cortex (OFC; Takahashi et al., 2011). In this work, the orbitofrontal signal was proposed to send contextual information to striatal cells, sculpting the striatal response as a function of the external context. Similarly, in the present study, Klavir et al. found that the activity of signed dACC cells precedes that of signed amygdala cells, suggesting that once the unsigned element of surprise is processed in the amygdala, dACC cells become active and could in turn exert a top-down influence on the amygdala to convey signed prediction errors. An important goal for the future is to understand how amygdala and dACC inputs use the available local networks to shape the sign of an error by regulating the excitatory-inhibitory balance.

These intriguing similarities between frontal-striatal and frontal-amygdala circuits in updating prediction errors for rewarding and aversive conditioned stimuli, respectively, suggest that these networks rely on similar processing strategies. What may unite the two circuits is amygdala activity. The early amygdala activation to unsigned errors shown by Klavir et al. and others (Herry et al., 2007; Roesch et al., 2010) should influence systems that are learning about both rewarding and aversive stimuli. In line with this idea, presentation of fearful faces to human subjects activated the amygdala, sped up the rate of cue-reward association learning, and enhanced amygdala-striatal interactions (Watanabe et al., 2013). Thus, arousal via amygdala activation has the hallmark attributes of the Pearce-Hall model and can influence prediction error learning rates in cue-reward (Watanabe et al., 2013) and cue-aversive (Herry et al., 2007) associations, where signature elements of the Rescorla-Wagner model are enacted in parallel systems.

One important task remaining is to integrate these findings into the growing body of data linking amygdala activity to value, particularly through interactions with the OFC. Recordings from similar reversal tasks demonstrate that amygdala neurons represent the value of learned stimuli, including the sign of that value (Paton et al., 2006). Simultaneous amygdala and OFC recordings have demonstrated a complex dance between the two structures during reversal learning, one in which amygdala neurons more quickly acquire negative value representations, while OFC neurons more quickly acquire positive value representations (Morrison et al., 2011). It is not immediately clear how these signed value representations compare to the signed and unsigned prediction error representations described by Klavir et al. Also remaining to be puzzled out is how the dance between the amygdala and OFC compares to that between the amygdala and the dACC, and whether these circuits are activated serially or in parallel. Presumably, prediction errors are used to update value representations, and these updates require signed errors. It is therefore intriguing to imagine that inputs from the dACC and OFC converge within the amygdala, where signed errors from the dACC update stimulus-value encoding at OFC-amygdala synapses. Such computations occurring at multiple stages of top-down processing would create a pliable network capable of flexibly assigning meaning to the ever-surprising world around us.


References

  1. Cohen JY, Haesler S, Vong L, Lowell BB, Uchida N. Neuron-type-specific signals for reward and punishment in the ventral tegmental area. Nature. 2012;482:85–88. doi: 10.1038/nature10754.
  2. Gradin VB, Kumar P, Waiter G, Ahearn T, Stickle C, Milders M, Reid I, Hall J, Steele JD. Expected value and prediction error abnormalities in depression and schizophrenia. Brain. 2011;134:1751–1764. doi: 10.1093/brain/awr059.
  3. Herry C, Bach DR, Esposito F, Di Salle F, Perrig WJ, Scheffler K, Lüthi A, Seifritz E. Processing of temporal unpredictability in human and animal amygdala. J Neurosci. 2007;27:5958–5966. doi: 10.1523/JNEUROSCI.5218-06.2007.
  4. Klavir O, Genud-Gabai R, Paz R. Functional connectivity between amygdala and cingulate cortex for adaptive aversive learning. Neuron. 2013; this issue. doi: 10.1016/j.neuron.2013.09.035.
  5. Mackintosh NJ. A theory of attention: variations in the associability of stimuli with reinforcement. Psychological Review. 1975;82:276–298.
  6. McGaugh JL. Making lasting memories: remembering the significant. Proc Natl Acad Sci USA. 2013;110(Suppl 2). doi: 10.1073/pnas.1301209110.
  7. Morrison SE, Saez A, Lau B, Salzman CD. Different time courses for learning-related changes in amygdala and orbitofrontal cortex. Neuron. 2011;71:1127–1140. doi: 10.1016/j.neuron.2011.07.016.
  8. Paton JJ, Belova MA, Morrison SE, Salzman CD. The primate amygdala represents the positive and negative value of visual stimuli during learning. Nature. 2006;439:865–870. doi: 10.1038/nature04490.
  9. Pearce JM, Hall G. A model for Pavlovian learning: variations in the effectiveness of conditioned but not of unconditioned stimuli. Psychological Review. 1980;87:532–552.
  10. Rescorla RA, Wagner AR. A theory of Pavlovian conditioning: variations in the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy WF, editors. Classical Conditioning II: Current Research and Theory. New York: Appleton-Century-Crofts; 1972. pp. 64–99.
  11. Roesch MR, Calu DJ, Esber GR, Schoenbaum G. Neural correlates of variations in event processing during learning in basolateral amygdala. J Neurosci. 2010;30:2464–2471. doi: 10.1523/JNEUROSCI.5781-09.2010.
  12. Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275:1593–1599. doi: 10.1126/science.275.5306.1593.
  13. Takahashi YK, Roesch MR, Wilson RC, Toreson K, O'Donnell P, Niv Y, Schoenbaum G. Expectancy-related changes in firing of dopamine neurons depend on orbitofrontal cortex. Nat Neurosci. 2011;14:1590–1597. doi: 10.1038/nn.2957.
  14. Watanabe N, Sakagami M, Haruno M. Reward prediction error signal enhanced by striatum-amygdala interaction explains the acceleration of probabilistic reward learning by emotion. J Neurosci. 2013;33:4487–4493. doi: 10.1523/JNEUROSCI.3400-12.2013.
