Author manuscript; available in PMC: 2016 Jan 1.
Published in final edited form as: Neurobiol Learn Mem. 2014 Sep 1;117:1–3. doi: 10.1016/j.nlm.2014.08.014

Memory and decision making

A. David Redish, Sheri J.Y. Mizumori
PMCID: PMC4428655  NIHMSID: NIHMS680818  PMID: 25192867

The only reason we remember things is to make better decisions.

What is memory? Memory can be defined as any physical change that carries information about the historical past. Typically, in animal systems, memory is stored as physical changes within and between neurons (Engert & Bonhoeffer, 1999; Kandel, 2006; Malinow & Malenka, 2002; Silva, Kogan, Frankland, & Kida, 1998). How these physical changes affect information processing depends on how the underlying systems carry out their computations. In practice, memory needs to be encoded in a representational form that specific computational processes can access efficiently. These representational forms entail tradeoffs: between generalization and specificity, between detail and accessibility, and between storage size and these other factors (Cormen, Leiserson, & Rivest, 1992; McClelland & Rumelhart, 1986; O’Reilly & McClelland, 1994). These tradeoffs suggest that there should be multiple memory systems, each with representational forms optimized for different aspects of these tradeoffs (O’Keefe & Nadel, 1978; Redish, 1999, 2013; Schacter & Tulving, 1994).
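
To make the specificity/generalization tradeoff concrete, consider the following toy sketch (our illustration; it does not implement any of the cited models, and all names in it are hypothetical). An exact lookup table recalls stored experiences perfectly but returns nothing for novel queries, while a similarity-based store generalizes to novel queries at the cost of blurring detail:

```python
# Two toy "memory" stores at opposite ends of the tradeoff.
from math import dist

class ExactMemory:
    """High specificity: perfect recall, no generalization."""
    def __init__(self):
        self.store = {}
    def write(self, key, value):
        self.store[key] = value
    def read(self, key):
        return self.store.get(key)          # None for any novel cue

class GeneralizingMemory:
    """High generalization: answers novel queries by similarity."""
    def __init__(self):
        self.store = []                      # list of (cue_vector, value)
    def write(self, key, value):
        self.store.append((key, value))
    def read(self, key):
        # Recall the value of the most similar stored experience.
        nearest, value = min(self.store, key=lambda kv: dist(kv[0], key))
        return value

exact, fuzzy = ExactMemory(), GeneralizingMemory()
exact.write((0.0, 1.0), "safe"); fuzzy.write((0.0, 1.0), "safe")
print(exact.read((0.1, 0.9)))   # None -- never saw this exact cue
print(fuzzy.read((0.1, 0.9)))   # "safe" -- generalizes from a similar cue
```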

Similarly, we can ask: What is a decision? Following the definitions in Redish (2013), and so that decision-making can be operationally defined, easily recognized, and observed, we define decision-making as the process of selecting an action. At its most general, an action is anything that physically affects the world; thus muscle movements (Grillner, 2003; Llinas, 2001) and social speech acts (Searle, 1965) are both decisions, as are physiological processes such as salivation (Pavlov, 1927). Because we are physical beings, a decision that changes one’s internal (computational) state can also be considered an action. And, of course, choosing not to act is also a decision.

This means that any process that leads to the selection of an action from a set of possible actions is a decision. As with memory, decisions depend on tradeoffs between factors such as generalization and specificity, and between computational speed and flexibility.

Therefore, as has been found to be the case with memory, there are likely to be multiple decision-making systems, each with computational processes optimized for different aspects of these trade-offs (Cisek & Kalaska, 2010; Daw, Niv, & Dayan, 2005; Keramati, Dezfouli, & Piray, 2011; O’Keefe & Nadel, 1978; Redish, 1999, 2013). These computational processes select actions that reflect an interaction between one’s needs and desires (goals, motivation), external cues (information about the current state of the world), and internal representations of one’s historical experience (i.e., memory).
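
A minimal sketch of this view (our illustration of the definition above, not a model from the cited literature; all names and numbers are hypothetical): action selection as a function that combines motivation (needs), the current state (cues), and learned values (memory):

```python
import random
from math import exp

def decide(actions, state, need, value, temperature=1.0):
    """Softmax action selection over need-weighted learned values."""
    utilities = [need * value[(state, a)] for a in actions]
    weights = [exp(u / temperature) for u in utilities]
    return random.choices(actions, weights=weights, k=1)[0]

# Values learned from past experience (the "memory" component).
value = {("lever_box", "press"): 2.0, ("lever_box", "groom"): 0.5}
print(decide(["press", "groom"], "lever_box", need=1.0, value=value))
```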

These two definitions imply a close relationship between memory and decision-making systems, particularly in their multiplicity of computational components. Where decision-making processes fall in terms of their tradeoffs depends in large part on the computational availability of memory representations: a memory representation that provides quick generalization but little specificity will produce decisions that are fast but inflexible, while a memory representation that provides many details, but requires extensive processing to unpack and reconstitute those details into a usable memory, will produce decisions that are slow but flexible. It follows, then, that the same underlying neural systems that are critical for memory are going to be critical for decision-making.

The idea that memory is not unitary traces itself back to the declarative versus procedural distinction first seen in the late 1970s and early 1980s (Cohen & Eichenbaum, 1993; Cohen & Squire, 1980; O’Keefe & Nadel, 1978; Redish, 1999; Squire, 1987). It was observed that quickly-learned, factual information (such that it could be “declared”) depended on one set of structures (such as the hippocampus), while slowly-learned procedural information depended on other structures (particularly specific dorsal and lateral aspects of the striatum). Over time, it was recognized that declarative memory did not depend on language itself, but rather on a ubiquitously-learned cognitive model of the world (a “cognitive map”) (Johnson & Crowe, 2009; O’Keefe & Nadel, 1978; Redish, 1999; Tolman, 1948). In contrast, procedural memories depended on a learning algorithm that only learned the cues that were important to predict outcomes (Berke, Breck, & Eichenbaum, 2009; Jog, Kubota, Connolly, Hillegaart, & Graybiel, 1999; Schmitzer-Torbert & Redish, 2008; Sutton & Barto, 1998).

Similarly, the idea that decision-making is not unitary traces itself in the animal learning literature back several decades to different effects of training on decision-making processes, particularly differences in latent learning and devaluation processes (Balleine & Dickinson, 1998; Bouton, 2007; Mackintosh, 1974). In latent learning, pre-exposure to a condition enables very fast changes in action selection when that condition affects the decision (such as adding a new goal location once one knows the structure of a maze) (Tolman, 1932; Tse et al., 2007). In devaluation, changing the value of one of two rewards (for example, by pairing it with a negative stimulus in another context) changes the response to that reward immediately on re-exposure (Adams & Dickinson, 1981; Balleine & Dickinson, 1998; Schoenbaum, Roesch, & Stalnaker, 2006). In contrast, slow, regular experiences led to decision-making processes that were insensitive to devaluation or to changes in the contingencies of cue-reward interaction (Balleine & Dickinson, 1998; Coutureau & Killcross, 2003). Changing the training presumably led to differences in memory-storage representations, which led to differences in decision-making behaviors. These two processes depended on the same brain structure differences as the non-unitary memory processes reviewed above (Yin, Knowlton, & Balleine, 2004).

In the 1990s, a similar set of differences appeared in the computational literature. Building on the ubiquitous temporal-difference reinforcement-learning (TDRL) model (Sutton & Barto, 1998), computational analyses showed that there were fundamental differences between algorithms that searched through potential futures and algorithms that selected actions based on recognition of the current state of the world. An algorithm that searched through models of the world to construct hypothetical states, which could then be evaluated in the context of the animal’s current situation, depended on knowing the structure of the world; it was flexible, but computationally slow (Daw et al., 2005; Johnson, van der Meer, & Redish, 2007; Keramati et al., 2011; Niv, Joel, & Dayan, 2006; van der Meer, Kurth-Nelson, & Redish, 2012). In contrast, an algorithm that categorized the situation and recalled a single generalized action learned to be optimal within that situation would be inflexible, but computationally fast to execute (Johnson et al., 2007; Niv et al., 2006; van der Meer et al., 2012; Yang & Shadlen, 2007). More recently, it has become clear that a full description of memory and decision-making will require additional components, including affective memory systems, Pavlovian action-selection systems, and reflexive systems, as well as cognitive and cue-recognition components (Dayan, 2012; Gershman, Blei, & Niv, 2010; Montague, Dolan, Friston, & Dayan, 2012; Phelps, Lempert, & Sokol-Hessner, 2014; Redish, 2013; Redish, Jensen, & Johnson, 2008; Redish, Jensen, Johnson, & Kurth-Nelson, 2007).
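
The dichotomy can be made concrete with a toy example (our sketch in the spirit of Daw et al., 2005; not code from any paper in this issue, and all task details are hypothetical). Both agents learn that pressing a lever yields food; when the food is then devalued, the model-based agent replans immediately while the model-free agent keeps responding on its cached value, reproducing the devaluation insensitivity described earlier:

```python
ALPHA = 0.5  # model-free learning rate

# A one-step world model: transitions[(state, action)] = next state.
transitions = {("start", "press"): "food", ("start", "wait"): "start"}
reward_of = {"food": 1.0, "start": 0.0}   # the current reward function

class ModelFreeAgent:
    """Caches one scalar value per state-action pair (TD/Q-learning style)."""
    def __init__(self):
        self.q = {("start", "press"): 0.0, ("start", "wait"): 0.0}

    def update(self, state, action, reward):
        # Nudge the cached value toward the reward actually received.
        self.q[(state, action)] += ALPHA * (reward - self.q[(state, action)])

    def choose(self, state):
        # Fast: a single table lookup, but blind to changes in reward.
        return max(("press", "wait"), key=lambda a: self.q[(state, a)])

class ModelBasedAgent:
    """Searches the world model at choice time: slow but flexible."""
    def choose(self, state):
        # Simulate each action's outcome and evaluate it under the
        # *current* reward function.
        return max(("press", "wait"),
                   key=lambda a: reward_of[transitions[(state, a)]])

mf, mb = ModelFreeAgent(), ModelBasedAgent()

# Training: pressing reliably yields food.
for _ in range(20):
    mf.update("start", "press", reward_of["food"])

# Devaluation: the food is paired with illness and becomes aversive.
reward_of["food"] = -1.0

print("model-based:", mb.choose("start"))  # "wait" -- adapts immediately
print("model-free:", mf.choose("start"))   # "press" -- still runs the habit
```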

Excellent reviews already exist of the similarities and differences between these multiple memory systems, multiple decision-making systems, and multiple computational components. In this special issue, therefore, eleven papers delve into specific issues related to these relationships, showing that decision-making abilities correlate with measures of memory abilities, and identifying the computational and neurophysiological processes that underlie these parallel memory and decision-making abilities.

The first set of papers examines the computational and neurophysiological processes that underlie the two primary systems: one slow, flexible, and based on searching through potential futures (termed model-based, because it requires a search through a model of the world), the other fast, inflexible, and based on recall of learned situation–action pairs (termed model-free, because it requires only categorization of the current state of the world). Doll, Shohamy, and Daw (this issue) review this key dichotomy from a computational perspective and argue that the memory-process distinction underlies the decision-making differences. They report experiments finding that flexible, relational memory correlates with model-based strategies but not with model-free strategies. Schacter, Benoit, De Brigard, and Szpunar (this issue) review the concepts of episodic future thinking that are critical to searching through models of the world, and suggest that episodic future thinking depends on the ability to construct counterfactual and hypothetical scenarios through imagination. They suggest that these abilities depend on a common neural network involving the hippocampus and prefrontal cortex. Wang, Cohen, and Voss (this issue) propose a conceptual framework in which the prefrontal cortex covertly and rapidly polls the hippocampus for hypothetical scenarios, suggesting that the necessary simulation cycles explain the slower speed of some decisions, even those that occur without explicit awareness. While Wang et al. concentrate on interactions between prefrontal cortex and hippocampus in humans, Yu and Frank (this issue) examine those interactions in the other species in which they have been most studied, the rat.

The second set of papers delves deeper into those interactions. Dahmani and Bohbot (this issue) examine a task that differentiates spatial (model-based) and stimulus–response (model-free) strategies, which are known to activate hippocampal and caudate systems, respectively. They find that different aspects of prefrontal cortex are involved in these two systems, suggesting a dichotomy within prefrontal cortex itself. Similarly, Burton, Nakamura, and Roesch (this issue) review data on striatal subdivisions, finding a similar heterogeneity within the striatum.

The third set of papers examines interactions between memory and decision-making systems beyond this single dichotomy. In particular, Shimp, Mitchell, Beas, Bizon, and Setlow (this issue) examine sensitivity to risk and find correlations between working-memory abilities (memory) and discounting rates (decision), suggesting shared underlying functional components, which likely relate to those discussed in the Schacter et al. and Wang et al. papers. Hutchinson, Uncapher, and Wagner (this issue) show that these representations of risk are carried by subregions of posterior parietal cortex, and note that retrieval of memory is itself a decision process. Hart, Clark, and Phillips (this issue) examine the role of dopamine in risk-taking behavior, finding that dopamine signals in the rat correlate with both reward prediction errors and the expected variance of reward itself (similar to what has been found in monkeys). Their data suggest that these informational components change with experience on the task. Redila, Kinzel, Jo, Puryear, and Mizumori (this issue) take the computation one step further back, identifying the role and firing patterns of lateral dorsal tegmental (LDTg) neurons, which drive dopamine bursting in the ventral tegmental area (VTA), and comparing them to other VTA afferents such as the pedunculopontine tegmental nucleus (PPTg). The LDTg monitors ongoing behaviors, perhaps to increase the accuracy of predictions about future reward encounters, while the PPTg provides the current sensory information that the VTA needs to calculate reward prediction error signals.
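
For reference, the reward prediction error discussed here is conventionally formalized in the TDRL framework cited above (Sutton & Barto, 1998) as the temporal-difference error (a textbook definition, not a result from the Hart et al. paper):

```latex
\delta_t = r_t + \gamma\, V(s_{t+1}) - V(s_t)
```

where r_t is the reward received at time t, V(s) is the learned value of state s, and γ ∈ [0, 1] discounts delayed rewards; phasic dopamine firing increases when δ_t > 0 (outcomes better than predicted) and pauses when δ_t < 0.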

Finally, Erdem, Milford, and Hasselmo (this issue) show that, in a robot model capable of navigating in the world, memory is critical to the correct formation of the cognitive map, particularly for the recognition of situations, allowing the map to correctly reflect the world. Maps that better reflect the world provide a better substrate for decision-making processes. This brings the themes of the issue full circle, showing how memory and decision-making necessarily interact to produce successful behavior.

These eleven papers provide new insights into the relationship between memory and decision-making. We hope you enjoy this special issue.

Contributor Information

A. David Redish, Email: redish@umn.edu, Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA.

Sheri J.Y. Mizumori, Email: mizumori@u.washington.edu, Department of Psychology, University of Washington, Seattle, WA 98195, USA.

References

1. Adams CD, Dickinson A. Instrumental responding following reinforcer devaluation. Quarterly Journal of Experimental Psychology: Comparative and Physiological Psychology, B. 1981;33:109–122.
2. Balleine BW, Dickinson A. Goal-directed instrumental action: Contingency and incentive learning and their cortical substrates. Neuropharmacology. 1998;37(4–5):407–419. doi: 10.1016/s0028-3908(98)00033-1.
3. Berke JD, Breck JT, Eichenbaum H. Striatal versus hippocampal representations during win-stay maze performance. Journal of Neurophysiology. 2009;101(3):1575–1587. doi: 10.1152/jn.91106.2008.
4. Bouton ME. Learning and behavior: A contemporary synthesis. Sinauer Associates; 2007.
5. Burton AC, Nakamura K, Roesch M. From ventral-medial to dorsal-lateral striatum: Neural correlates of reward-guided decision-making. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2014.05.003.
6. Cisek P, Kalaska JF. Neural mechanisms for interacting with a world full of action choices. Annual Review of Neuroscience. 2010;33:269–298. doi: 10.1146/annurev.neuro.051508.135409.
7. Cohen NJ, Eichenbaum H. Memory, amnesia, and the hippocampal system. MIT Press; 1993.
8. Cohen NJ, Squire LR. Preserved learning and retention of pattern-analyzing skill in amnesia: Dissociation of knowing how and knowing that. Science. 1980;210:207–210. doi: 10.1126/science.7414331.
9. Cormen TH, Leiserson CE, Rivest RL. Introduction to algorithms. MIT Press; 1992.
10. Coutureau E, Killcross S. Inactivation of the infralimbic prefrontal cortex reinstates goal-directed responding in overtrained rats. Behavioural Brain Research. 2003;146:167–174. doi: 10.1016/j.bbr.2003.09.025.
11. Dahmani L, Bohbot VD. Dissociable contributions of the prefrontal cortex to hippocampus- and caudate nucleus-dependent virtual navigation strategies. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2014.07.002.
12. Daw ND, Niv Y, Dayan P. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience. 2005;8:1704–1711. doi: 10.1038/nn1560.
13. Dayan P. Twenty-five lessons from computational neuromodulation. Neuron. 2012;76(1):240–256. doi: 10.1016/j.neuron.2012.09.027.
14. Doll B, Shohamy D, Daw N. Multiple memory systems as substrates for multiple decision systems. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2014.04.014.
15. Engert F, Bonhoeffer T. Dendritic spine changes associated with hippocampal long-term synaptic plasticity. Nature. 1999;399:66–70. doi: 10.1038/19978.
16. Erdem UM, Milford MJ, Hasselmo ME. A hierarchical model of goal directed navigation selects trajectories in a visual environment. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2014.07.003.
17. Gershman SJ, Blei D, Niv Y. Context, learning and extinction. Psychological Review. 2010;117(1):197–209. doi: 10.1037/a0017808.
18. Grillner S. The motor infrastructure: From ion channels to neuronal networks. Nature Reviews Neuroscience. 2003;4(7):573–586. doi: 10.1038/nrn1137.
19. Hart AS, Clark JD, Phillips PEM. Dynamic shaping of dopamine signals during probabilistic Pavlovian conditioning. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2014.07.010.
20. Hutchinson JB, Uncapher MR, Wagner AD. Increased functional connectivity between dorsal posterior parietal and ventral occipitotemporal cortex during uncertain memory decisions. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2014.04.015.
21. Jog MS, Kubota Y, Connolly CI, Hillegaart V, Graybiel AM. Building neural representations of habits. Science. 1999;286:1745–1749. doi: 10.1126/science.286.5445.1745.
22. Johnson A, Crowe DA. Revisiting Tolman, his theories and cognitive maps. Cognitive Critique. 2009;1:43–72.
23. Johnson A, van der Meer MAA, Redish AD. Integrating hippocampus and striatum in decision-making. Current Opinion in Neurobiology. 2007;17(6):692–697. doi: 10.1016/j.conb.2008.01.003.
24. Kandel E. In search of memory: The emergence of a new science of mind. Norton; 2006.
25. Keramati M, Dezfouli A, Piray P. Speed/accuracy trade-off between the habitual and the goal-directed processes. PLoS Computational Biology. 2011;7(5):e1002055. doi: 10.1371/journal.pcbi.1002055.
26. Llinas RR. I of the Vortex. MIT Press; 2001.
27. Mackintosh NJ. The psychology of animal learning. Academic Press; 1974.
28. Malinow R, Malenka RC. AMPA receptor trafficking and synaptic plasticity. Annual Review of Neuroscience. 2002;25:103–126. doi: 10.1146/annurev.neuro.25.112701.142758.
29. McClelland JL, Rumelhart DE, editors. Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 2: Psychological and biological models. MIT Press; 1986.
30. Montague PR, Dolan RJ, Friston KJ, Dayan P. Computational psychiatry. Trends in Cognitive Sciences. 2012;16(1):72–80. doi: 10.1016/j.tics.2011.11.018.
31. Niv Y, Joel D, Dayan P. A normative perspective on motivation. Trends in Cognitive Sciences. 2006;10(8):375–381. doi: 10.1016/j.tics.2006.06.010.
32. O’Keefe J, Nadel L. The hippocampus as a cognitive map. Clarendon Press; 1978.
33. O’Reilly RC, McClelland JL. Hippocampal conjunctive encoding, storage, and recall: Avoiding a trade-off. Hippocampus. 1994;4(6):661–682. doi: 10.1002/hipo.450040605.
34. Pavlov I. Conditioned reflexes. Oxford University Press; 1927.
35. Phelps E, Lempert KM, Sokol-Hessner P. Emotion and decision making: Multiple modulatory circuits. Annual Review of Neuroscience. 2014;37:263–287. doi: 10.1146/annurev-neuro-071013-014119.
36. Redila V, Kinzel C, Jo YS, Puryear CB, Mizumori SJY. A role for the lateral dorsal tegmentum in memory and decision neural circuitry. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2014.05.009.
37. Redish AD. Beyond the cognitive map: From place cells to episodic memory. Cambridge, MA: MIT Press; 1999.
38. Redish AD. The mind within the brain: How we make decisions and how those decisions go wrong. Oxford University Press; 2013.
39. Redish AD, Jensen S, Johnson A. A unified framework for addiction: Vulnerabilities in the decision process. Behavioral and Brain Sciences. 2008;31:415–487. doi: 10.1017/S0140525X0800472X.
40. Redish AD, Jensen S, Johnson A, Kurth-Nelson Z. Reconciling reinforcement learning models with behavioral extinction and renewal: Implications for addiction, relapse, and problem gambling. Psychological Review. 2007;114(3):784–805. doi: 10.1037/0033-295X.114.3.784.
41. Schacter D, Benoit R, De Brigard F, Szpunar K. Episodic future thinking and episodic counterfactual thinking: Intersections between memory and decisions. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2013.12.008.
42. Schacter DL, Tulving E, editors. Memory systems 1994. MIT Press; 1994.
43. Schmitzer-Torbert NC, Redish AD. Task-dependent encoding of space and events by striatal neurons is dependent on neural subtype. Neuroscience. 2008;153(2):349–360. doi: 10.1016/j.neuroscience.2008.01.081.
44. Schoenbaum G, Roesch M, Stalnaker TA. Orbitofrontal cortex, decision making, and drug addiction. Trends in Neurosciences. 2006;29(2):116–124. doi: 10.1016/j.tins.2005.12.006.
45. Searle J. What is a speech act? In: Philosophy in America. Cornell University Press; 1965. pp. 221–239.
46. Shimp KG, Mitchell MR, Beas BS, Bizon JL, Setlow B. Affective and cognitive mechanisms of risky decision making. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2014.03.002.
47. Silva AJ, Kogan JH, Frankland PW, Kida S. CREB and memory. Annual Review of Neuroscience. 1998;21:127–148. doi: 10.1146/annurev.neuro.21.1.127.
48. Squire LR. Memory and brain. New York: Oxford University Press; 1987.
49. Sutton RS, Barto AG. Reinforcement learning: An introduction. Cambridge, MA: MIT Press; 1998.
50. Tolman EC. Purposive behavior in animals and men. New York: Appleton-Century-Crofts; 1932.
51. Tolman EC. Cognitive maps in rats and men. Psychological Review. 1948;55:189–208. doi: 10.1037/h0061626.
52. Tse D, Langston RF, Kakeyama M, Bethus I, Spooner PA, Wood ER, et al. Schemas and memory consolidation. Science. 2007;316(5821):76–82. doi: 10.1126/science.1135935.
53. van der Meer MAA, Kurth-Nelson Z, Redish AD. Information processing in decision-making systems. The Neuroscientist. 2012;18(4):342–359. doi: 10.1177/1073858411435128.
54. Wang JX, Cohen NJ, Voss JL. Covert rapid action-memory simulation (CRAMS): A hypothesis of hippocampal-prefrontal interactions for adaptive behavior. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2014.04.003.
55. Yang T, Shadlen MN. Probabilistic reasoning by neurons. Nature. 2007;447:1075–1080. doi: 10.1038/nature05852.
56. Yin HH, Knowlton B, Balleine BW. Lesions of dorsolateral striatum preserve outcome expectancy but disrupt habit formation in instrumental learning. European Journal of Neuroscience. 2004;19:181–189. doi: 10.1111/j.1460-9568.2004.03095.x.
57. Yu J, Frank L. Hippocampal-cortical interaction in decision making. Neurobiology of Learning and Memory. (this issue). doi: 10.1016/j.nlm.2014.02.002.
