Summary
Every day we make decisions critical for adaptation and survival. We repeat actions with known consequences. But we also draw on loosely related events to infer and imagine the outcome of entirely novel choices. These inferential decisions are thought to engage a number of brain regions; however, the underlying neuronal computation remains unknown. Here, we use a multi-day cross-species approach in humans and mice to report the functional anatomy and neuronal computation underlying inferential decisions. We show that during successful inference, the mammalian brain uses a hippocampal prospective code to forecast temporally structured learned associations. Moreover, during resting behavior, coactivation of hippocampal cells in sharp-wave/ripples represents inferred relationships that include reward, thereby “joining-the-dots” between events that have not been observed together but lead to profitable outcomes. Computing mnemonic links in this manner may provide an important mechanism to build a cognitive map that stretches beyond direct experience, thus supporting flexible behavior.
Keywords: inference, memory, hippocampus, humans, mice, sharp-wave ripple, prospective code, cognitive short-cut, cognitive map
Graphical Abstract
Highlights
• Inferential decisions engage the hippocampus in humans and mice
• During inference, a hippocampal prospective code draws on associative memories
• This hippocampal prospective code preserves the learned temporal statistics
• During rest, hippocampal ripples nest cognitive short-cuts for inferred relations
In humans and mice, the hippocampus supports inferential reasoning by computing a prospective code to predict upcoming events, before extracting logical links between discrete events during rest to form a mnemonic short cut for inferred relationships.
Introduction
When making decisions, we often draw on previous experience. We repeat actions that were profitable in the past and avoid those that led to unwanted consequences. However, we can also make decisions using information we have not directly experienced, by combining knowledge from multiple discrete items or events to infer new relationships. This ability to infer previously unobserved relationships is thought to be critical for flexible and adaptive behavior.
Anatomical lesions in rodents and functional imaging in humans have started to uncover the macroscopic network of brain regions supporting inferential decisions (Bunsey and Eichenbaum, 1996; Hampton et al., 2006; Jones et al., 2012; Nicholson and Freeman, 2000; Preston et al., 2004; Robinson et al., 2014; Wimmer and Shohamy, 2012; Zeithamova et al., 2012a), highlighting the involvement of orbitofrontal, medial prefrontal, perirhinal, and retrosplenial cortices, along with the hippocampus. However, the mechanistic contribution of these regions and the neuronal computation underpinning inference remain unclear.
A potential mechanism for inference involves chaining together memories for discrete events at the time of choice. In this scenario, an inferred outcome is predicted by internally simulating the short-term consequences of each memory in the chain. Retrieval mechanisms of this kind may be described by a family of theories known as model-based reinforcement learning (Daw et al., 2005) that involve a learned model of the world. By constructing predictions for decision outcomes on the fly, such mechanisms capture a hallmark of flexible decision-making. However, this comes with the computational cost of searching through a potentially large number of memories.
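To make this chaining account concrete, the following minimal sketch (in MATLAB) rolls a learned one-step model forward to estimate the value of a preconditioned cue; the transition table and outcome values are illustrative assumptions, not parameters taken from the present study.

```matlab
% Minimal sketch: infer the value of a preconditioned cue X by chaining
% learned one-step associations (X->Y from observational learning,
% Y->Z from conditioning). All values are illustrative.

% Learned transition probabilities (rows: auditory cues X1, X2; columns: visual cues Y1, Y2)
P_Y_given_X = [1 0;    % X1 was always followed by Y1
               0 1];   % X2 was always followed by Y2

% Learned outcome values associated with Y1 (reward) and Y2 (neutral)
R_given_Y = [1; 0];

% Model-based "look ahead": expected outcome value for each auditory cue
V_X = P_Y_given_X * R_given_Y;   % -> [1; 0], favoring reward-seeking to X1
```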
To reduce the computational demand associated with inference, events that have not been encountered together in space or time may be linked to form cognitive “short-cuts.” Together with prior memories, such higher-order relationships may form a “relational” or “cognitive map” of the world (Cohen and Eichenbaum, 1993; O’Reilly and Rudy, 2001; Tolman, 1948). The hippocampus has long been credited with holding a cognitive map (O’Keefe and Nadel, 1978), with neuronal representations observed in the spatially tuned activity of pyramidal cells during exploration (Ekstrom et al., 2003; O’Keefe and Dostrovsky, 1971). In addition to representing space, the hippocampus supports memory for past experience (Squire, 1992) and mediates associations between sequential events (Fortin et al., 2002; Schendan et al., 2003). However, while the hippocampus is a suitable candidate to hold internal maps, it remains unclear whether this brain region represents or computes cognitive short-cuts to support inference.
One possibility is that memories for distinct experiences are linked together or even fundamentally restructured during awake rest and sleep (Buckner, 2010; Buzsáki, 2015; Diekelmann and Born, 2010; Foster, 2017; Joo and Frank, 2018). During these quiet periods, hippocampal local-field potentials (LFPs) are characterized by sharp-wave/ripples (SWRs): short-lived, large-amplitude deflections accompanied by high-frequency oscillations (Buzsáki, 2015; Csicsvari et al., 1999). During SWRs, hippocampal cells fire synchronously and their temporally structured spiking can “replay” previous waking experience (Louie and Wilson, 2001; Nádasdy et al., 1999; Wilson and McNaughton, 1994) to support memory and planning (Buzsáki, 2015; Foster, 2017; Joo and Frank, 2018). Growing evidence suggests SWR activity also extends beyond replay of directly experienced information. For instance, hippocampal SWR spiking can anticipate upcoming experience (Dragoi and Tonegawa, 2011; Ólafsdóttir et al., 2015), reorder events according to a trained rule (Liu et al., 2019), or even stitch together spatial trajectories (Gupta et al., 2010; Wu and Foster, 2014). In this manner, we hypothesize that hippocampal SWR activity generates spiking motifs that provide a cellular basis for novel higher-order relationships, thus breaking the constraints imposed by direct experience.
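For readers unfamiliar with how such events are typically isolated, the sketch below illustrates a conventional SWR detection approach: band-pass filtering the LFP in the ripple band and thresholding its envelope. The filter band, threshold, and duration criterion shown are generic assumptions for illustration, not the parameters used in this study.

```matlab
% Generic SWR detection sketch (all parameters are illustrative assumptions).
fs  = 1000;                         % LFP sampling rate (Hz)
lfp = randn(60*fs, 1);              % placeholder LFP trace (1 min)

[b, a]  = butter(4, [100 250]/(fs/2), 'bandpass');  % ripple-band filter
ripple  = filtfilt(b, a, lfp);                      % zero-phase filtering
envlp   = abs(hilbert(ripple));                     % ripple-band envelope

thr     = mean(envlp) + 3*std(envlp);               % detection threshold
isSWR   = envlp > thr;                              % supra-threshold samples
onsets  = find(diff([0; isSWR]) == 1);              % candidate event onsets
offsets = find(diff([isSWR; 0]) == -1);             % candidate event offsets
keep    = (offsets - onsets)/fs >= 0.015;           % keep events lasting >= 15 ms
swrWindows = [onsets(keep) offsets(keep)]/fs;       % event start/end times (s)
```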
Here, we investigate the neuronal computation underlying inference in the mammalian brain using a cross-species approach. We implement a multi-day inference task and deploy brain recording technologies in both humans and mice to synergize insights gained across species. Namely, we acquire near-whole brain ultra-high field (7T) functional magnetic resonance imaging (fMRI) in humans to identify where inference is computed, before using this finding to inform optogenetic manipulations in mice to test causality.
Using human 7T fMRI and mouse in vivo multichannel electrophysiology, we then obtain complementary signatures of inference at the macroscopic and cellular resolution, respectively. By implementing the same analytical framework across species, we show that during inferential choice the hippocampus forecasts mnemonic, temporally structured associations “on-the-fly.” While this prospective code draws on learned experience, in humans the inferred outcome is represented in the medial prefrontal cortex (mPFC) and the putative dopaminergic midbrain. Next, during rest/sleep in mice, neuronal coactivations during hippocampal SWRs increasingly represent inferred relationships that include reward, thus “joining-the-dots” between discrete events. These findings show that the hippocampus supports inference by computing a prospective code to “look ahead” and predict upcoming experience, before extracting “logical” links between events in SWRs. In this manner, the hippocampus may construct a cognitive map that stretches beyond direct experience (O’Keefe and Nadel, 1978; Tolman, 1948), creating new knowledge to facilitate flexible future decisions.
Results
Cross-Species Task Design and Behavioral Performance
We designed a three-stage task (Figure 1A) that leveraged a sensory preconditioning paradigm (Brogden, 1939) while permitting recordings of brain activity in humans and mice. To match the paradigm across species, we trained human participants in a virtual-reality environment simulating the open-field arena used with mice (Figure 1B). The inference task was performed across multiple days (Figures 1C and 1D). In the first stage, we exposed subjects to pairs of sensory stimuli, with each pair n including an auditory cue Xn that signaled contiguous presentation of a visual cue Yn (Figure 1A; Xn→Yn “observational learning”). In the second stage, we re-exposed subjects to the visual cues Yn, each of which now predicted delivery of either a rewarding (set 1 stimuli) or neutral (set 2 stimuli) outcome Zn (Figure 1A; Yn→Zn “conditioning”). Rewarding outcomes were virtual silver coins for humans (exchangeable for a real monetary sum) and drops of sucrose for mice. Neutral outcomes were (non-exchangeable) woodchips for humans and drops of water for mice. In humans, we included a many-to-one mapping between task cues (Figures S1C and S1D), to further dissociate cue-specific representations. Importantly, auditory cues Xn were never paired with outcomes Zn, providing an opportunity to assess evidence for an inferred relationship between these indirectly related stimuli. Accordingly, in the final stage, we presented auditory cues Xn in isolation, without visual cues Yn or outcomes Zn, and we measured evidence for inference from Xn to Zn by quantifying reward-seeking behavior (Figures 1A, Xn→? “inference test”, 1C, and 1D).
During the conditioning, both humans and mice were trained to show higher levels of reward-seeking behavior during visual cues Yn in set 1 relative to set 2. As expected, in response to Yn, subjects successfully anticipated the relevant outcome Zn prior to its delivery (Figures 1C–1F, S1A–S1F, and S2A–S2F).
During the inference test, both humans and mice showed significantly greater reward-seeking behavior in response to auditory cues Xn in set 1 relative to set 2 (Figures 1C–1F, S1A, S1B, S2G, and S2H). Therefore, despite never directly experiencing outcome Zn in response to auditory cues Xn, both species showed behavioral evidence for an inferred relationship between these stimuli.
The Hippocampus Is Engaged during Inference: Macroscopic Network in Humans
To identify where inference is computed, we took advantage of near-whole brain imaging in humans using 7T fMRI (Figure 2A) to measure the blood oxygen level dependent (BOLD) signal during the inference test and conditioning trials. We used two independent analyses. First, by comparing correct and incorrect trials in the inference test, we observed significantly higher BOLD signal in the hippocampus during correct trials (Figures 2B and S3A; Table S1), consistent with animal lesion studies and previous human imaging (Bunsey and Eichenbaum, 1996; Gilboa et al., 2014; Preston et al., 2004). Second, by taking the auditory cortex as a seed, a region showing elevated BOLD signal across all inference test trials (Figures 2C and S3B), we identified brain regions that co-activate with auditory cortex differentially across correct and incorrect trials. Again, we observed a significant effect in the hippocampus, along with a broader network including retrosplenial and visual cortices (Figure 2D; Table S2). These results suggest hippocampal activity is modulated during correct inference, together with brain regions important for memory and the processing of relevant sensory cues.
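The logic of this seed-based analysis can be sketched as a simple interaction regression, in the spirit of a psychophysiological interaction; the sketch below is a simplified illustration with placeholder data and variable names, and the study's exact implementation (in SPM) may differ.

```matlab
% Sketch: does coupling between an auditory-cortex seed and a target region
% differ between correct and incorrect inference trials? (Illustrative only.)
nVolumes = 400;
seedTS   = randn(nVolumes, 1);                 % auditory cortex seed time course
correct  = double(rand(nVolumes, 1) > 0.5);    % accuracy regressor (expanded to volumes)
targetTS = randn(nVolumes, 1);                 % candidate target region time course

X = [seedTS, correct, seedTS .* correct];      % main effects + interaction
[beta, ~, stats] = glmfit(X, targetTS);        % glmfit adds a constant term
couplingDiff = beta(4);                        % seed x accuracy interaction weight
pValue       = stats.p(4);                     % differential coactivation with the seed
```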
The Hippocampus Is Required for Inferential Choice: Optogenetic Silencing in Mice
We next used these findings in humans to guide neuronal silencing in mice, leveraging the cellular and temporal precision of optogenetic tools (Deisseroth, 2011) to test the causal contribution of hippocampal activity at the time of inferential choice. We transduced pyramidal cells of the dorsal hippocampal CA1 (dCA1) with the yellow light-driven neural silencer Archaerhodopsin-T fused with the green fluorescent protein reporter (ArchT-GFP). Optic fibers were subsequently implanted bilaterally over dCA1 to deliver light and suppress neuronal spiking during sensory cue presentation (Figures 2E–2H). Suppressing dCA1 spiking impaired inference: light delivery during auditory cues Xn in the inference test (50% of test trials for both set 1 and 2) prevented ArchT-GFP mice from expressing significant reward-seeking bias to Xn cues in set 1 relative to set 2 (Figure 2I). dCA1 light delivery did not impair the reward-seeking bias in GFP control mice (Figure 2I). Furthermore, light delivery during the visual cues Yn, presented after the inference task was complete, did not impair anticipatory reward-seeking behavior of ArchT-GFP mice (Figure 2J). Thus, dCA1 is necessary for inference while dispensable for visual discrimination and first-order conditioning.
Selective Hippocampal Spiking Response to Task Cues: in Mice
Using in vivo electrophysiology to record dCA1 ensembles in mice, we observed neurons with increased spiking during either the auditory, visual, or outcome cue in both set 1 and set 2 (Figure 3A). To identify neurons showing preferential firing to Xn, Yn, or Zn, we applied a general linear model (GLM) to spiking activity monitored during each task cue, with the obtained regression weights indicating the response magnitude of each neuron (Figures 3B and 3C). We observed largely non-overlapping neuronal ensembles representing the different task cues (Figures 3D–3F). This suggests dCA1 has the capacity to selectively represent each of the discrete sensory cues and outcomes experienced in the inference task.
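As an illustration of this approach, the sketch below fits a Poisson GLM to trial-wise spike counts of a single neuron, with one indicator regressor per task cue; the trial numbers and simulated data are placeholders rather than values from the study.

```matlab
% Sketch: cue-selective regression weights for one dCA1 neuron (illustrative).
nTrials     = 120;
cueID       = randi(3, nTrials, 1);            % cue presented on each trial (1=X, 2=Y, 3=Z)
spikeCounts = poissrnd(2, nTrials, 1);         % spikes emitted during each cue presentation

X = double([cueID == 1, cueID == 2, cueID == 3]);   % one indicator regressor per cue
b = glmfit(X, spikeCounts, 'poisson', 'constant', 'off');
% b(1), b(2), b(3): response magnitude (log firing rate) to the X, Y and Z cues
```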
The Hippocampus Computes a Prospective Code during Inference: in Humans and Mice
We asked whether the hippocampus represents the learned and inferred relationships between task cues. First, we assessed evidence for modulation of hippocampal activity during inference in both humans and mice. As reported above, in humans, we observed an increase in the hippocampal BOLD signal during correct versus incorrect inference (Figures 2B and 4A). Similarly, in mice, we observed significant modulation of dCA1 spiking on correct versus incorrect inference trials, after controlling for variance attributed to speed and set (Figure 4B). These findings show modulation of neuronal activity in the mammalian hippocampus during correct inferential choice.
We next investigated the hippocampal computation that serves inference. In the spatial domain, spiking activity in the medial temporal lobe can sweep ahead of an animal’s location (Gupta et al., 2013; Johnson and Redish, 2007; Mehta et al., 2000) and predict subsequent behavior (Ferbinteanu and Shapiro, 2003; Pastalkova et al., 2008; Pfeiffer and Foster, 2013; Singer et al., 2013; Wood et al., 2000). We reasoned that if similar predictive activity serves inferential choice in the cognitive domain, the hippocampus should draw on mnemonic relationships to prospectively represent visual cues Yn in response to auditory cues Xn in the inference test, thereby chaining Xn to outcome Zn.
To test this, we measured hippocampal representations in humans and mice during each auditory cue Xn in the inference test and during each visual cue Yn in the conditioning. We then deployed the same analytical framework across species, applying representational similarity analysis (RSA) (Kriegeskorte et al., 2008; McKenzie et al., 2014; Nili et al., 2014) to both voxels (humans) and neurons (mice) (Figures 4C and 4D). We observed similar results in humans and mice: when the correct outcome was inferred behaviorally, hippocampal activity during the current auditory cue Xn showed higher representational similarity with the associated visual cue Yn, compared to the non-associated (cross-set) visual cues Ym (Figures 4E–4H, S3C, and S3D). This set-selective discrimination in the hippocampal code was not explained by the temporal proximity between inference test trials and re-conditioning trials (Figures S1A and S1B). Therefore, at the time of inferential choice, presentation of Xn elicited representations of the expected Yn cues in a set-specific manner. This suggests hippocampal activity represents learned associations to predict the short-term future, thereby engaging a prospective code to “look ahead” within the current spatial context.
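The core comparison can be sketched as follows for the mouse data, where each activity pattern is a population vector across simultaneously recorded neurons (for the human data, the same logic applies to voxel patterns); the data and variable names below are placeholders.

```matlab
% Sketch: prospective-code RSA for one inference-test trial (illustrative).
nCells = 50;
pattX1 = randn(nCells, 1);   % population vector during auditory cue X1 (inference test)
pattY1 = randn(nCells, 1);   % population vector during associated visual cue Y1 (conditioning)
pattY2 = randn(nCells, 1);   % population vector during cross-set visual cue Y2

withinSet = corr(pattX1, pattY1);              % similarity: X1 versus associated Y1
crossSet  = corr(pattX1, pattY2);              % similarity: X1 versus non-associated Y2
prospectiveIndex = withinSet - crossSet;       % > 0 indicates a set-selective prospective code
```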
Notably, in mice, these results did not reflect the animal’s location (Figure S4). In humans, where we controlled for value by including multiple visual cues Yn for each outcome Zn (Figure S1D), hippocampal activity during Xn was not explained by the associated value of Yn (Figures S3G–S3I). During inference, the hippocampus therefore appears to draw on memories to forecast the learned consequence of sensory cues (Xn→Yn).
The Hippocampal Prospective Code Preserves Learned Temporal Statistics: in Mice
To assess whether this prospective code preserves the statistics inherent to the observational learning, we took advantage of the high temporal resolution of in vivo electrophysiology. Taking the neuronal ensembles that selectively represent either Xn or Yn cues (Figures 3C–3F), we assessed the temporal order of spiking activity for pairs of Xn and Yn neurons upon presentation of Xn during the inference test. For within-set XnYn neuronal pairs, neurons representing Yn were significantly more likely to spike after neurons representing Xn (Figures 5 and S5), preserving the temporal relationship between cues Xn and Yn experienced during observational learning despite no further presentation of Yn cues at this stage. Thus, during inference, hippocampal activity represents a prospective code that reflects mnemonic recall for learned temporal statistics.
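A minimal sketch of this kind of order analysis, for a single within-set X/Y cell pair during one auditory cue presentation, is given below; the pairwise bias index is a simplification for illustration and not necessarily the exact statistic used in the study.

```matlab
% Sketch: temporal-order bias for one within-set X/Y cell pair (illustrative).
spikesX = sort(rand(20,1) * 10);   % spike times (s) of the X-representing cell during cue Xn
spikesY = sort(rand(15,1) * 10);   % spike times (s) of the Y-representing cell during cue Xn

dt        = spikesY' - spikesX;    % matrix of time lags (Y spike time minus X spike time)
nForward  = sum(dt(:) > 0);        % Y after X (consistent with the learned order X -> Y)
nBackward = sum(dt(:) < 0);        % Y before X
orderBias = (nForward - nBackward) / (nForward + nBackward);   % > 0: Y tends to follow X
```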
The mPFC and Midbrain Represent the Inferred Outcome: in Humans
Having shown that both the human and mouse hippocampi represent the intermediary visual cue Yn in response to the auditory cue Xn during inference (Figures 4 and 5), we asked where in the brain this prospective memory code (Xn → Yn) is translated into a representation of the inferred outcome Zn. We capitalized on the many-to-one mapping between task cues in humans, where representations of Yn and Zn could be dissociated (Figures S1C and S1D). In response to Xn, there was no evidence for representation of the inferred outcome Zn in the human hippocampus (Figures 6A, 6B, and S6A–S6C), with analogous findings in mice (Figures 6C and 6D). This result contrasts with our data recorded in mice during the conditioning stage where dCA1 ensembles show robust responses to the experienced outcome Zn (Figure 3). This suggests that inferential decisions are not supported by a mechanism whereby Xn acquires the value of Yn during encoding (Figures S3G–S3I, 6A, and 6B), or by a mechanism whereby mnemonic information is recirculated within the medial temporal lobe to represent the inferred outcome within the hippocampus (Kumaran, 2012; Kumaran and McClelland, 2012) (Figures 6A, 6B, and S6C). Instead, during inference, the hippocampus uses a prospective code to forecast the learned consequences of sensory cues (Xn→Yn).
To identify where this prospective code is translated into a representation of the inferred outcome, we took advantage of near-whole brain imaging in humans. Using an RSA searchlight to sweep through the entire imaged brain volume (Figures S6D and S6E), we identified regions showing significant representational similarity between auditory cues Xn and inferred outcomes Zn. When the correct outcome was inferred during the inference test, activity patterns in both mPFC and midbrain showed significant representational similarity with the inferred outcome Zn (Figures 6E–6G), even when restricting analyses to neutral cues alone (Figures 6H–6J). Notably, representation of Zn was conditional on the cues that predicted Zn (Figures 6E and S6B), suggesting the inferred outcome is computed online according to a model of the task. Moreover, by representing value-neutral sensory features, the mPFC and putative dopaminergic midbrain regions appear to process information that goes beyond reward or direct reinforcement, consistent with recordings in rodents (Sadacca et al., 2016; Stalnaker et al., 2019; Takahashi et al., 2017). To detail the interaction between mPFC, midbrain, and dCA1 ensembles during inference, further multi-brain-site recordings in animal models will be required, as illustrated during exposure to outcome Z1 (Figure S6H).
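Schematically, the searchlight repeats a local Xn-versus-Zn pattern comparison within small voxel neighborhoods swept across the brain; in the sketch below the neighborhoods are placeholders rather than true spherical searchlights, and the similarity measure is simplified.

```matlab
% Sketch: RSA searchlight for the Xn -> Zn (inferred outcome) comparison (illustrative).
nVoxels  = 5000;
brainX   = randn(nVoxels, 1);               % whole-brain pattern evoked by auditory cue Xn
brainZ   = randn(nVoxels, 1);               % whole-brain pattern evoked by outcome Zn
spheres  = {1:50, 51:100, 101:150};         % placeholder voxel neighborhoods (searchlights)

simMap = zeros(numel(spheres), 1);
for s = 1:numel(spheres)
    v = spheres{s};                          % voxels inside this searchlight
    simMap(s) = corr(brainX(v), brainZ(v));  % local Xn-Zn representational similarity
end
% simMap values are then mapped back to each searchlight center and tested across subjects.
```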
Hippocampal SWRs Nest Mnemonic Short-Cuts for Inferred Relationships: in Mice
Using data acquired during inferential choice in both humans and mice, we showed evidence for a prospective retrieval mechanism that forecasts learned associations, thus indirectly relating cues Xn and Zn (Figures 4, 5, and 6). However, complementary mechanisms may directly link Xn to Zn. One candidate mechanism involves using multiple memories to internally simulate and cache statistics of the environment (Sutton, 1991). In the spatial domain, temporally compressed simulations of previous experience occur in hippocampal SWRs during periods of awake immobility (rest) and sleep (Buzsáki, 2015; Foster, 2017; Joo and Frank, 2018). SWR-related activity could extend beyond direct experience by recombining and recoding mnemonic information (Buzsáki, 2015; Diekelmann and Born, 2010; Lewis and Durrant, 2011; Shohamy and Daw, 2015; Zeithamova et al., 2012b). Accordingly, we tested whether hippocampal SWR activity effectively “autocompletes” the firing associations representing unobserved (yet logical) relationships between cues Xn and Zn. While non-invasive methods can provide a macroscopic index for memory reactivation, accessing the unique electrophysiological profile of SWRs (Buzsáki, 2015) requires invasive methods. The following analyses were therefore restricted to electrophysiological recordings in mice.
For each recording day (Figures 1D and S1B), we calculated the probability that SWRs nest spikes from neuronal triplets, where each cell provides a (non-overlapping) representation of one of the task cues Xn, Yn, or Zn (Figures 7A and 7B). When comparing early versus late days, the probability that awake SWRs jointly represent all three cues (Xn, Yn, and Zn) significantly increased for set 1 but not for set 2 (Figure 7C). This result suggests that reward-related activity is prioritized in hippocampal SWRs, consistent with work investigating replay of previous experience in SWRs (Singer and Frank, 2009).
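This triplet measure can be sketched as follows, scoring each SWR for whether it contains at least one spike from an Xn-, a Yn-, and a Zn-representing cell; the event counts and firing probabilities below are placeholders.

```matlab
% Sketch: probability that SWRs nest X/Y/Z cell triplets (illustrative).
% swrSpikes(i,j) is true if SWR i contained at least one spike from a cell
% representing cue j (column 1 = Xn, column 2 = Yn, column 3 = Zn).
nSWR      = 500;
swrSpikes = rand(nSWR, 3) > 0.8;

pTriplet  = mean(all(swrSpikes, 2));                                  % Xn, Yn and Zn jointly represented
pShortcut = mean(swrSpikes(:,1) & swrSpikes(:,3) & ~swrSpikes(:,2));  % Xn-Zn without the intermediary Yn
% These per-day probabilities are then compared across early versus late recording days.
```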
This result did not merely reflect simulation of an internal model of the inference task (X1→ Y1→ Z1), because the probability that SWRs co-represent X1 and Z1 together with the visual cue from the alternative set, Y2, similarly increased (Figure 7D). Indeed, regardless of the intermediary visual cue Yn, the probability that SWRs co-represent the auditory cue X1 together with the logically associated outcome Z1 increased with behavioral experience of the task (Figures 7E and S7B). Furthermore, within the recorded neuronal ensembles, the probability that a given awake SWR represents the inferred relationship (X1, Z1) in the absence of the intermediary cue (Y1) also increased with experience (Figure 7F). These results suggest SWRs represent a mnemonic short-cut for inferred relationships that include reward.
These findings cannot be explained by a mere increase in SWRs representing reward (Z1), because the observed increase in representation of set 1 cell pairs, X1Z1, was significantly greater than equivalent changes in the cross-set cell pairs, X2Z1 (Figures 7E and 7F). Moreover, unlike cross-set cell pairs, the increase in probability that a given awake SWR represents co-activity for the inferred relationship (X1, Z1) occurred over and above any change in activity for cells representing either X1 or Z1 cues (Figures 7G and S7C). Comparable results were observed during offline periods of sleep but with lower fidelity (Figures S1B, S7D, and S7E), as reported for replay of spatial firing patterns during awake rest versus sleep (Karlsson and Frank, 2009). Together, these findings suggest that the hippocampal representation of profitable (rewarding) yet unobserved relationships increases in SWRs, thus supporting a direct mnemonic short-cut for inferred relationships.
During awake SWRs, we further assessed the spike time relationships between Xn and Zn cells. Past studies investigating representation of spatial trajectories in hippocampal SWRs report evidence for replay in a temporally reversed order (Csicsvari et al., 2007; Diba and Buzsáki, 2007; Foster and Wilson, 2006; Gupta et al., 2010). Despite cues Xn and Zn never being directly experienced together, here we found that cells representing Z1 fired significantly earlier than cells representing X1 (Figure 7H). Cell pairs representing cues from neutral set 2 (X2 and Z2) showed no such temporal bias (Figure 7H). Thus, cell pairs that included reward representation (X1 and Z1) exhibited reverse firing (Z1→ X1) relative to the inferred direction (X1→ Z1). Consistent with evidence suggesting that hippocampal replay coordinates reward responsive neurons with the dopaminergic midbrain during quiet wakefulness but not sleep (Gomperts et al., 2015), we did not observe offline reverse firing of the inferred relationship (Figures S7F and S7G). This suggests that reverse sequential firing in awake SWRs may serve to assign credit to, or update the value of, environmental cues that are logically linked to reward but never directly experienced with it.
Discussion
Here, we use a cross-species approach to uncover how the mammalian brain computes inference, a cognitive operation central to adaptive behavior. Across a multi-day inference task, we reveal a cellular-level description of the underlying computation, alongside a macroscopic readout of this process.
Our study shows that during inference, the hippocampus engages a prospective code that preserves the learned temporal statistics of the task. In addition, during rest/sleep in mice, hippocampal SWRs show increased coactivation of neurons representing inferred relationships that include reward. Thus, during rest/sleep the hippocampus appears to “join-the-dots” between discrete items that may be profitable. We propose this mechanism provides a means to build a cognitive map that stretches beyond direct experience, creating new knowledge to facilitate future decisions.
This process of “joining-the-dots” between logically related events is consistent with evidence that SWR spiking is not only determined by prior experience (Buzsáki, 2015; Foster, 2017; Joo and Frank, 2018). Rather, the intrinsic connectivity of the hippocampus (Somogyi and Klausberger, 2017), self-generated sequences (Dragoi and Tonegawa, 2011), forward planning (Ólafsdóttir et al., 2015; Pfeiffer and Foster, 2013), structural knowledge (Liu et al., 2019), and stitching together of spatial trajectories (Gupta et al., 2010; Wu and Foster, 2014) all play a role. Here, hippocampal SWR spiking represents a non-spatial, second-order mnemonic link between items not experienced together, over and above simulating an internal model that draws on direct experience. The reported increase in SWRs nesting inferred relationships suggests hippocampal spiking activity during SWRs may build higher-order relationships to integrate knowledge into a coherent schema (Tse et al., 2007). This new understanding of hippocampal SWRs may explain why sleep/rest facilitates behavioral readouts of insight and inferential reasoning in humans (Coutanche et al., 2013; Ellenbogen et al., 2007; van Kesteren et al., 2010; Lau et al., 2011; Wagner et al., 2004; Werchan and Gómez, 2013).
Consistent with studies showing that reward-related activity influences the spatial content of SWRs (Pfeiffer and Foster, 2013; Singer and Frank, 2009), our findings suggest SWR content can be skewed toward events that are more salient, have greater future utility, and/or generate larger reward-prediction errors. The spiking content reported here may further be prioritized toward active inferential choice, because correct inference in response to X1, but not X2, requires mice to elicit a purposeful action toward the dispenser. While distinct memories encoded close in time are represented by overlapping ensembles (Cai et al., 2016), here, we controlled for this by presenting cues from set 1 and 2 in a randomly interleaved manner during the reconditioning and inference test, thus matching the temporal proximity of within- and between-set cues.
Changes in neuronal coactivation in hippocampal SWRs are well placed to influence widespread cortical and subcortical targets, directly or via intermediate relay regions (Battaglia et al., 2011; Buzsáki, 2015; Joo and Frank, 2018). This may explain how the putative dopaminergic midbrain acquires a representation of the inferred outcome (Zn) in response to a preconditioned cue (Xn), which cannot be accounted for by temporal difference learning algorithms (Sadacca et al., 2016). Specifically, hippocampal SWR spiking may broadcast value information to relate reward information received at the end of a sequence to earlier events (Foster et al., 2000; Sutton, 1988). Consistent with this hypothesis, reverse replay in awake SWRs occurs during reward-motivated spatial behavior (Diba and Buzsáki, 2007; Foster and Wilson, 2006; Gupta et al., 2010), while our data show an inverted temporal order in non-spatial inferred relationships. SWR-nested spiking may therefore facilitate retrospective credit assignment or value updating of sensory cues represented by the mPFC and midbrain, even if those cues are not directly paired with an outcome. Such cross-region coordination may explain why functional coupling observed between hippocampus and mPFC during post-encoding rest predicts measures of memory integration in humans (Schlichting and Preston, 2016). In this manner, SWR-related hippocampal training signals may alleviate the computational cost of inference by building a model or “cognitive map” of the external world that spans multiple brain regions.
In addition to this SWR-related mechanism during rest/sleep, we show in mice that dCA1 pyramidal neurons are necessary for inferential choice. Moreover, during inference in both humans and mice, the hippocampus represents a veridical copy of learned associations in temporal sequence (Xn→Yn). These findings were not explained by mere spatial location, yet these temporally structured mnemonic associations may be analogous to spatial sequences of place cells (e.g., McNaughton et al., 1983; Mehta et al., 2000). Sequential firing of this kind may be a necessary requirement for a brain region evolved to support memory (Buzsáki and Moser, 2013).
Previous studies suggest that during learning, memories for past overlapping events can be evoked and associated with newly encountered information to link memories across experiences (Nagode and Pardo, 2002; Schlichting et al., 2014; Shohamy and Wagner, 2008; Zeithamova and Preston, 2010; Zeithamova et al., 2012a). This integrative encoding may even assign value to stimuli not directly paired with an outcome (Wimmer and Shohamy, 2012), alleviating the need to recall intermediary cues at the time of choice. However, previous studies have not differentiated between representation of the intermediary (Yn) and inferred cues (Zn) during inferential choice, leaving the underlying mechanism ambiguous. Here, in humans, we dissociate representations of the intermediary (Yn) and inferred cues (Zn) by using a many-to-one mapping between cues. At the time of choice, this paradigm shows evidence for hippocampal representation of the intermediary cue (Yn), but not the inferred outcome (Zn) or value associated with Yn. We also show that mouse hippocampal dCA1 is necessary for inference. Together, these results suggest inferential choice is supported by a hippocampal mechanism where mnemonic sequences are recalled “on-the-fly.” This mechanism may further depend upon extra-hippocampal regions representing the relevant sensory cues. Thus, while our findings are not contradictory to previous human fMRI studies, by dissociating representations of Yn from Zn at the time of inference, we propose the hippocampus draws on learned experience, while other downstream circuits may use the hippocampal output to reinstate an integrated or overlapping neural code.
Using near-whole brain imaging in humans, we show that during inferential choice the inferred outcome (Zn) is represented in mPFC and midbrain, even when the corresponding outcome is neutral. This highlights a division of mnemonic labor between the hippocampus on the one hand, and the mPFC and (putative dopaminergic) midbrain on the other: whereas the hippocampus draws on learned sequences (Xn→Yn), the hypothetical inferred outcome (Zn), rewarding or neutral, is represented in the mPFC and midbrain, potentially inherited via integrative encoding or SWR-related spiking activity. Inference therefore involves a memory recall mechanism that spans multiple brain regions. This differs from computational models that propose associative information is integrated locally within the medial temporal lobe via recurrent loops (Kumaran, 2012), but is consistent with evidence showing representation of intermediary cues in the medial temporal lobe at the time of choice (Koster et al., 2018). Moreover, our findings support evidence suggesting the mPFC uses an abstract model of the environment to guide behavior (Hampton et al., 2006), while the midbrain supports learning of relationships that extend beyond those associated with direct reinforcement (Langdon et al., 2018; Sadacca et al., 2016; Sharpe et al., 2017; Stalnaker et al., 2019; Takahashi et al., 2017). Retaining veridical mnemonic recall while allowing inference of higher-order relationships provides the cognitive flexibility necessary for adaptive mammalian behavior in an ever-changing environment.
The inference task was implemented across multiple days and may therefore generalize to everyday examples of inference where individuals draw upon information learned across days, weeks, or even years. While training demands in rodents made this multi-day paradigm inevitable, we note that our results could, in part, reflect consequences of this schedule. For example, by conducting each task stage on a separate day, we mitigated against the formation of overlapping neuronal codes for distinct memories encoded close in time. Our task design also favors the use of more mature or consolidated memories: evidence in rodents suggests memories are rapidly generated in both hippocampus and mPFC, gradually becoming quiescent in hippocampus with consolidation in mPFC (Kitamura et al., 2017; Preston and Eichenbaum, 2013; Squire et al., 2015). If training demands allowed the inference task to be performed within 1 day, the inferred outcome might be represented in both the hippocampus and mPFC, rather than mPFC and midbrain, as observed here. Notably, our paradigm differs from several studies investigating inferential reasoning in humans within one day (Koster et al., 2018; Preston et al., 2004; Schlichting et al., 2014; Wimmer and Shohamy, 2012; Zeithamova et al., 2012a). Unveiling the precise temporal dependency of the computation supporting inference will require further work.
Recording and manipulating neural dynamics will help establish an understanding of the mechanisms underlying adaptive and maladaptive behavior (Deisseroth, 2014). However, cellular recordings and causal manipulations are normally performed using invasive methods in animal models, where it is difficult to translate the identified mechanisms into an understanding of human behavior. In an attempt to overcome this explanatory gap, here we use a cross-species approach to take advantage of complementary tools available in humans and mice. Despite differences between the mouse and the human brain, a cross-species approach remains justified by the preserved homology of mammalian neural circuits. However, there are inevitable limitations associated with comparing data across species. When investigating aspects of higher-order cognition, perhaps the greatest limitation resides in our inability to verbally communicate with animals. While humans received explicit instructions and comprehension was monitored throughout, mice had to discover the task rules via trial and error, with no social obligation to cooperate. Although the experimental paradigm was kept the same across mice, their behavior was more variable. The difference in our ability to instruct/train humans and mice also led to differences in task design, where humans were able to learn a many-to-one mapping between cues to permit dissociation of neuronal representations. Nevertheless, by implementing a comparable task in humans and mice, and acquiring data in an iterative manner, we show how results from one species can guide the course of investigation in the other. We propose this cross-species approach provides a foundation for innovative multidisciplinary investigation of brain functions, in both physiological and pathophysiological conditions.
In summary, our study reveals the functional anatomy and neuronal computation underlying inferential reasoning in the mammalian brain. We implement a cross-species approach in humans and mice to integrate data from whole-brain imaging, cellular-level electrophysiology, and optogenetic manipulations of the same behavior. In doing so, we reveal a holistic description of the neural computation underlying inference in the mammalian brain. We identify a critical role for the hippocampus, which engages a prospective memory code during inferential choice and represents a cognitive short-cut for inferred relationships that include reward in rest/sleep. By unveiling these neuronal mechanisms, we show how the brain can generate new knowledge beyond direct experience, thus supporting high-level cognition.
STAR★Methods
Key Resources Table
| REAGENT or RESOURCE | SOURCE | IDENTIFIER |
| --- | --- | --- |
| Bacterial and Virus Strains | | |
| rAAV2-CAG-flex-ArchT-GFP | UNC Vector Core | N/A |
| rAAV2-CamKII-ArchT-GFP | UNC Vector Core | N/A |
| rAAV2-CAG-flex-GFP | UNC Vector Core | N/A |
| rAAV2-CamKII-GFP | UNC Vector Core | N/A |
| Experimental Models: Organisms/Strains | | |
| CaMKII-Cre mice | The Jackson Laboratory | https://www.jax.org; Stock #005359; RRID: IMSR_JAX:005359 |
| C57BL/6J mice | Charles River, UK | https://www.criver.com; Strain code: 632 |
| Software and Algorithms | | |
| MATLAB | Mathworks | https://www.mathworks.com; Version: 2016b |
| Psychtoolbox-3 | Psychtoolbox developers | http://psychtoolbox.org; Version: 3.0.13 |
| SPM | FIL Methods group, University College London (UCL) | https://www.fil.ion.ucl.ac.uk/spm; Version: SPM 12 |
| RSA Toolbox | Nili et al., 2014 | http://www.mrc-cbu.cam.ac.uk/methods-and-resources/toolboxes/ |
| Unity | Unity Technologies, CA, United States | https://unity.com; Version: 5.5.4 |
| Intan RHD2000 | Intan Technologies, Los Angeles | http://intantech.com/products_RHD2000.html; Version: RHD2164 |
| KlustaKwik | K. Harris | https://github.com/klusta-team/klustakwik/ |
| Kilosort via SpikeForest | Magland et al., 2020; Pachitariu et al., 2016 | https://github.com/cortex-lab/KiloSort via SpikeForest https://github.com/flatironinstitute/spikeforest |
| Imetronic POLYtrack | Imetronic | http://www.imetronic.com/ |
| Other | | |
| MR compatible headphones | Sensimetrics | https://www.sens.com; Type: S14 insert earphones |
| 7 Tesla Magnetom MRI scanner | Siemens | N/A |
| 1-channel transmit and a 32-channel phased-array head coil | Nova Medical, USA | N/A |
| Arduino microcontroller development board | Arduino online store | https://store.arduino.cc/usa/; Product: Arduino Mega 2560 Rev3 |
| Adafruit Motor/Stepper/Servo Shield for Arduino v2 | Adafruit online store | https://www.adafruit.com/; Product: Adafruit Motorshield V2 |
| 12 μm tungsten wires | California Fine Wire | https://calfinewire.com/; Product: M294520 |
| Optic fibers | Doric Lenses, Québec, Canada | http://doriclenses.com/; Product: MFC_200/230-0.37_10mm_RM3_FLT |
| Head-stage amplifier | Intan Technologies, Los Angeles | https://www.intantech.com; Product: RHD2164 |
| 561 nm diode-pumped solid-state laser | Laser 2000, Ringstead, UK | Product: CL561-100-0 |
Resource Availability
Lead Contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, David Dupret (david.dupret@bndu.ox.ac.uk).
Materials Availability
This study did not generate new unique reagents.
Data and Code Availability
The data and code used in this study will be made available via the MRC BNDU Data Sharing Platform (https://data.mrc.ox.ac.uk/) upon reasonable request.
Experimental Model and Subject Details
Mouse subjects
24 male mice (4-6 months old) were included in the study (Table S4). Mice were either heterozygous for the transgene expressing the Cre recombinase under the control of the CaMKIIa promoter and maintained on a C57BL/6J background (Jackson Laboratories; CaMKIIa-Cre, B6.Cg-Tg(Camk2a-cre)T29-1Stl/J, stock number 005359, RRID: IMSR_JAX:005359) or wild-types on a C57BL/6J background (Charles River, UK). Animals had free access to water in a dedicated housing facility with a 12/12 h light/dark cycle (lights on at 07:00 h). Animals were housed with their littermates until the start of the experiment, after which they were housed alone. Food was available ad libitum before the experiments (see below). All experiments involving mice were conducted according to the UK Animals (Scientific Procedures) Act 1986 under personal and project licenses issued by the Home Office following ethical review.
Human subjects
22 healthy human volunteers participated in the study (mean age of 22.8 ± 0.74 years, 4 males). All experiments involving humans were approved by the University of Oxford ethics committee (reference number R43594/RE001). All participants gave informed written consent.
Method Details
Mouse inference task environment
During both the pre-training and the inference test protocols (see below), mice were allowed to explore a square-walled open-field enclosure (46 cm wide, 38 cm high walls) within which they were presented with a range of different sensory stimuli controlled via a single-board microcontroller (Arduino Mega 2560 Rev3). Two speakers placed above the open-field were used to deliver the auditory cues (Xn), which constituted pure tones (frequency: 10 kHz and 2 kHz). Two LED panels were affixed to walls of the open-field which, when illuminated, served as visual cues (Yn): an L-shaped set of green LEDs with the main strip spanning the width of one wall, and a circular set of orange LEDs affixed to a different wall (Figure 1B). A liquid dispenser/aspirator fitted with an infra-red beam detecting lick events was used to deliver/remove the outcome (Zn), which constituted either a drop of 15% sucrose solution (reward; set 1) or a drop of water (neutral; set 2). The outcome cues (Zn) were only available for 10 s before being automatically aspirated by the dispenser.
Human inference task environment
The virtual reality (VR) environment simulating the open-field enclosure used for mice was coded using Unity 5.5.4f1 software (Unity Technologies, CA United States) and included a square-walled room with no roof. To help evoke the experience of 3D space and aid orientation within the VR environment, each wall of the environment was distinguished by color (dark green, light green, dark gray or light gray), illumination (two walls were illuminated while the other two were in shadow) and by the presence of permanent visual cues. The permanent visual cues included clouds in the sky, a vertical black stripe in the middle of the light green wall, a horizontal black strip across the light gray wall, and a wooden box situated in one corner of the environment. A first-person perspective was implemented and participants could control their movement through the virtual space using the keyboard arrows (2D translational motion) and the mouse-pad (head tilt). Movement through the environment elicited the sound of footsteps.
Within the VR environment participants were exposed to a range of different sensory stimuli. Auditory cues (Xn) constituted 80 different complex sounds (for example, natural sounds or those produced by musical instruments) that were played over headphones. Four different visual cues (Yn) could appear on the walls of the environment, each of which had a unique color and pattern. Two of the visual cues were always presented on the same wall, the assignment of which was randomized for each participant. The two remaining visual cues were ‘nomadic’, meaning that with each presentation they were randomly assigned to one of the four walls. A wooden box situated in the corner of the environment served to deliver the outcome cues (Zn), which constituted either a rewarding silver coin or a neutral wood-chip. To harvest the value of a silver coin (20 pence; reward) or a wood-chip (0 pence; neutral), participants were required first to collide with the wooden box, which caused the wooden walls to disappear, and then to collide with the coin or wood-chip, which was accompanied by a ‘collision’ sound. The outcome cues (Zn) were only available for 10 s. The cumulative total value of harvested reward was displayed in the upper left corner of the computer screen.
Humans and mice: inference task overview
In the respective environments described above, both humans and mice performed an inference task. The task was adapted from associative inference and sensory preconditioning tasks described elsewhere (Brogden, 1939; Preston et al., 2004; Robinson et al., 2014) and involved 3 stages (Figures 1A, S1A, and S1B). First, in the ‘observational learning’ stage, subjects learned a set of associations between auditory and visual cues via mere exposure. On each trial, an auditory and visual cue (Xn and Yn) were presented serially and contiguously: auditory cue (mice: 10 s, humans: 8 s) followed by associated visual cue (mice: 8 s; humans: 8 s). Second, in the ‘conditioning’ stage, subjects learned that half the visual cues predicted delivery of a rewarding outcome (‘set 1’), while the other half predicted delivery of a neutral outcome (‘set 2’). On each trial a visual cue and outcome (Yn and Zn) were presented serially and contiguously: visual cue (mice: 8 s; humans: 8 s) followed by outcome delivery (mice: 10 s available from liquid dispenser; humans: 6 s available in wooden box). Finally, in the ‘inference test’ stage subjects were exposed to the auditory cues (Xn) in isolation. In response to each auditory cue, subjects could infer the appropriate outcome using the learned structure of the task. This test thus provided an opportunity to investigate inferential choice. In both species the 3 stages of the task were performed across at least 3 consecutive days (see below).
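For reference, the cue timings described above can be collected into a small parameter structure; this is simply a restatement of the numbers given in the text, not code or field names from the study.

```matlab
% Cue durations (s) for each task stage, per species, as stated in the text.
task.observationalLearning.auditoryXn = struct('mouse', 10, 'human', 8);
task.observationalLearning.visualYn   = struct('mouse',  8, 'human', 8);
task.conditioning.visualYn            = struct('mouse',  8, 'human', 8);
task.conditioning.outcomeZn           = struct('mouse', 10, 'human', 6);   % availability of the outcome
```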
To match task difficulty and avoid ceiling effects in human task performance, in the observational learning stage we scaled the number of associative memories learned by human subjects relative to mice. Consequently, between auditory and visual cues there was a one-to-one mapping in mice and a many-to-one mapping in humans (Figure S1C). In addition, in mice we included one visual cue per set, while in humans we included two visual cues per set (Figure S1D). Therefore, in total mice learned two auditory-visual associations in the observational learning stage and two visual-outcome associations in the conditioning stage, while humans learned eighty auditory-visual associations in the observational learning stage and four visual-outcome associations in the conditioning stage. At the start of the experiment the pairings between auditory, visual and outcome cues were randomly assigned for each human participant and for each mouse.
Mouse pre-training protocol
Mice were preselected by assessing their propensity to lick drops of sucrose solution delivered at a liquid dispenser when food restricted to 90% of their free-feeding body weight. Selected mice were then fed ad libitum up until day 4 of the pre-training.
During the pre-training, mice first completed the observational learning stage, conducted across 6 consecutive days (Figures S1B and S1F). Each day the mice were placed in the open-field environment for 20 separate sessions, each lasting ∼8-10 minutes. Each session included 6 trials where an auditory cue (Xn) was followed by presentation of the associated visual cue (Yn), from either set 1 or 2. The inter-trial interval (ITI) was ∼1.5 minutes. On day 1, each session included cues from either set 1 or set 2 (‘blocked’), while on days 2-6, sessions could include cues from either set 1 or 2 (‘blocked’), or both set 1 and 2 presented in a pseudo-random order (‘mixed’). On each day of training, cues from set 1 and 2 were presented equally often. Trials were triggered only when the animal was moving. To prepare for the next stage of the task (conditioning), across the final 3 days of the observational learning stage mice were food restricted to reach 90% of their free-feeding body weight.
After the observational learning stage, the conditioning was conducted across 4-5 consecutive days (Figures S1B and S1F). Each day the mice were placed in the open-field environment for 20 separate sessions, each lasting ∼8-10 minutes. Each session included 6 trials where a visual cue (Yn) was presented followed by the associated outcome (Zn), a drop of sucrose for set 1, or a drop of water for set 2. The ITI was ∼1.5 minutes. On day 1 of the conditioning stage, each session included cues from either set 1 or set 2 (‘blocked’), while on day 2-4/5 of the conditioning stage, each session included cues from either set 1 or 2 (‘blocked’), or from both set 1 and 2, presented in a pseudo-random order (‘mixed’). Trials were triggered only when the animal was moving. The animal’s propensity to visit the dispenser was assessed and the number of cues presented from set 1 and 2 adjusted accordingly. Thus, mice that were prone to approach the dispenser received additional set 2 trials, while mice that were less inclined to approach the dispenser received additional set 1 trials. Across mice, the ratio of set 1 to set 2 trials delivered during conditioning did not predict subsequent performance on the inference test (r22 = −0.06, p = 0.816).
Human pre-training protocol
On the first day of the experiment, participants performed the observational learning, during which they were required to learn at least 40 (out of 80 total) auditory-visual associations (Figures S1C and S1E). This training occurred within the VR environment and was divided into 8 sub-sessions. In each sub-session, participants controlled their movement within the VR environment and were presented with 20 trials in which 10 different auditory-visual associations were each presented twice, in a random order. The ITI was 5 s. Participants were given the choice to repeat the sub-session if they so wished. After the sub-session, learning of auditory-visual associations was monitored outside the VR environment, using an observational learning test coded in MATLAB 2016b using Psychtoolbox (version 3.0.13). On each trial of the observational learning test, 1 auditory cue (Xn) from the sub-session was presented, followed by presentation of 4 different visual cues. Participants were instructed to select the visual cue (Yn) associated with the auditory cue (Xn), using a button press response within 3 s, and were only given feedback on their average performance at the end of the test. Each auditory cue in the sub-session was presented 2 times. Participants were required to repeat this training in the VR environment (including the observational learning test) until they obtained at least 50% accuracy for auditory-visual associations in the sub-session. In total, 3 participants were unable to reach this learning criterion of 50% accuracy and were excluded after the first training day.
After obtaining at least 50% accuracy on the observational learning test for each sub-session, participants were given a master test. The master test had the same format as the observational learning test, except that it included all 80 auditory cues, each of which was presented 3 times. Training on the observational learning stage was terminated when participants reached 50% accuracy on the master test. If participants failed to reach 50% accuracy, training in the VR environment was repeated for those sub-sessions with poor performance.
On the second day of the experiment participants underwent the conditioning, where they learned that 2 of the 4 visual cues were associated with a virtual silver coin (later converted to a monetary reward of 20p per coin) on 80% of trials. The remaining 2 visual cues were associated with a virtual wood-chip, a neutral outcome of no value (0p per chip), on 100% of trials. Training occurred within the VR environment and on each trial, participants were presented with a visual cue (Yn) followed by delivery of the rewarding monetary coin or neutral wood-chip (Zn) to a wooden box. Participants were instructed to only look in the wooden box after the visual cue was presented and instructed to leave the wooden box before the next trial. The ITI was 2 s.
Performance during the VR conditioning training was monitored using a conditioning test coded in MATLAB 2016b using Psychtoolbox (version 3.0.13). On each trial of the conditioning test, participants were presented with a still image of a visual cue before being asked to indicate the probability of reward using a number line. Participants were given 3 s to respond and were only given feedback on their average performance at the end of the test. Participants were required to repeat the VR conditioning training and conditioning test until they performed the test with 100% accuracy (Figure S1E).
On day 3 of the experiment, before entering the MRI scanner, participants repeated the conditioning test. Participants then entered the 7T MRI scanner and performed the fMRI scan task (see below and Figure S1A). Immediately after exiting the scanner, participants were given a surprise observational learning test, equivalent to the master test performed on day 1.
Mouse inference test protocol
After mice completed the training protocol (observational learning and conditioning stages) we implemented an inference test on up to 8 consecutive recording days (Figure S1B). On each test day we first reconditioned mice to show a reward-seeking bias to visual cues (Yn) in set 1 relative to those in set 2, across two consecutive sessions of at least 12 trials presented in a pseudo-random order. During this reconditioning, reward-seeking behavior was quantified as time spent in the outcome area during the visual cues (Yn), prior to outcome delivery (Figure 1D), for set 1 (Y1) minus set 2 (Y2) (i.e., difference from zero). All mice included in the analysis exhibited a reward-seeking bias during the reconditioning, requiring a number of reconditioning sessions within 2 standard deviations of the group average. In total, two mice were excluded from the study because the number of trials they required to achieve a reward-seeking bias during reconditioning exceeded 2 standard deviations above the group mean.
Mice then proceeded to the inference test where auditory cues (Xn) were presented in isolation for a total of 10 s, followed by an ITI of at least 30 s. Auditory cues (Xn) from set 1 and set 2 were presented in a pseudo-random order, with 26 trials per day. During the inference test, reward-seeking behavior was quantified as the time spent in the outcome area in the 20 s period after the offset of the auditory cues (Figure 1D). The reward-seeking bias was quantified as the difference in reward-seeking behavior between set 1 and set 2 (X1 − X2), tested against zero. After each block of inference test trials (8-10 trials), mice were removed from the open-field to rest, before being given a brief block of reconditioning trials (Figure S1B). This interleaved paradigm (reconditioning-test-reconditioning-test, etc.) was designed to minimize extinction effects in response to the auditory cues. Finally, at the end of each test day, mice were re-exposed to the observational learning stage (Figures 1A and S1B).
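A minimal sketch of how this bias could be computed from the tracked position and cue offset times is given below; the zone occupancy data and trial windows are placeholders, and the exact implementation used in the study may differ.

```matlab
% Sketch: inference-test reward-seeking bias for one mouse (illustrative).
fps    = 25;                                   % camera frame rate (Hz)
inZone = rand(1, fps*60*20) > 0.9;             % placeholder: true when mouse is in the outcome area
framesSet1 = {1:fps*20, 5001:5000+fps*20};     % placeholder 20 s windows after set 1 cue offsets
framesSet2 = {10001:10000+fps*20, 20001:20000+fps*20};  % same for set 2 cue offsets

timeInZone = @(wins) mean(cellfun(@(w) sum(inZone(w))/fps, wins));   % mean seconds per trial
rewardSeekingBias = timeInZone(framesSet1) - timeInZone(framesSet2); % X1 - X2, tested against zero
```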
The location of mice implanted with tetrodes was tracked using 3 LED clusters attached to the Intan board (see below) connected to the microdrive (McNamara et al., 2014); mice implanted with optic fibers only were tracked by contrast against the floor of the open-field (Imetronic, France). For both approaches the location of the animal was captured at 25 frames per second using an overhead camera.
Human inference test protocol
The inference test was incorporated into the fMRI scan task which included two different trial types: inference test trials and reconditioning trials (Figures 1C and S1A). For both types of trials, participants viewed a short video taken from the VR training environment. The videos were presented via a computer monitor and projected onto a screen inside the scanner bore. On each trial the duration of the video was determined using a truncated gamma distribution with a mean of 7 s, minimum of 4 s and maximum of 14 s. During reconditioning trials, the video of the VR environment orientated toward a visual stimulus displayed on one of the four walls. At the end of the video, participants were presented with a still image of the associated outcome for that visual cue (Figure 1C). During the inference test trials, the video of the VR environment was accompanied by an auditory cue, played over MR compatible headphones (S14 insert earphones, Sensimetrics). Visual cues were not displayed during these trials. At the end of the video, participants were presented with a question asking ‘Would you like to look in the box?’, with the options ‘yes’ or ‘no’ (Figure 1C). Participants were required to make a response within 3 s using an MR compatible button box and their right index or middle fingers. No feedback was given. To control for potential confounding effects of space, each video involved a trajectory constrained to a 1/16 quadrant of the VR environment, evenly distributed across the different visual and auditory cues. Across conditioning trials, each visual cue was presented 16 times, once in each possible spatial quadrant. Across inference test trials, each of the 80 possible auditory cues was presented once, and for each set of auditory cues Xn (determined by the associated visual cues Yn) the spatial quadrants of the accompanying videos were evenly distributed across all quadrants. The fMRI scan task was evenly divided across 2 scan blocks, each of which lasted 15 minutes.
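Such truncated durations can be drawn by rejection sampling from a gamma distribution; in the sketch below the shape and scale parameters are assumptions chosen only to give a pre-truncation mean near 7 s, and are not the parameters used in the study.

```matlab
% Sketch: draw one video duration from a gamma distribution truncated to [4, 14] s.
shape  = 7;  scale = 1;        % illustrative assumption: mean = shape*scale = 7 s before truncation
minDur = 4;  maxDur = 14;

dur = gamrnd(shape, scale);
while dur < minDur || dur > maxDur     % rejection sampling into the allowed range
    dur = gamrnd(shape, scale);
end
```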
Mouse surgical procedures
All surgical procedures were performed under deep anesthesia using isoflurane (0.5%–2%) and oxygen (2 l/min), with analgesia provided before (0.1 mg/kg vetergesic) and after (5 mg/kg metacam) surgery.
For optogenetic manipulations (n = 15 mice), viral injections were targeted bilaterally to dCA1 using stereotaxic coordinates (−1.7 mm and −2.3 mm anteroposterior from bregma, ± 1.7 mm lateral from bregma, and −1.1 mm ventral from the brain surface; 400 nl per site). The adeno-associated viral (AAV) vectors were delivered at a rate of 100 nl/min using a glass micropipette, which was lowered to the target site and held in place for 4 min after virus delivery before being withdrawn (McNamara et al., 2014). rAAV2-CAG-FLEX-ArchT-GFP was used for the Cre-dependent expression of ArchT-GFP under the CAG promoter in the dCA1 pyramidal neurons of CamKII-Cre mice, and rAAV2-CamKII-ArchT-GFP was used for the expression of ArchT-GFP under the CamKII promoter in the dCA1 neurons of C57BL/6J mice. For the control experiments we used corresponding AAVs encoding GFP only: rAAV2-CAG-FLEX-GFP in CamKII-Cre mice or rAAV2-CamKII-GFP in C57BL/6J mice. All viruses were obtained from E.S. Boyden (available at the UNC Vector Core).
For electrophysiological recordings (n = 10 mice), mice were implanted with a microdrive containing 12-14 independently movable tetrodes that either all targeted the pyramidal layer of bilateral CA1 in the hippocampus (n = 6) (van de Ven et al., 2016) or targeted three brain regions: the pyramidal layer of hippocampal CA1, the medial prefrontal cortex and the ventral tegmental area (triple-site dCA1-mPFC-VTA recordings; Figure S6H). The distance between neighboring tetrodes inserted in a given brain region of each hemisphere was 0.4 mm. Tetrodes were constructed by twisting together four insulated tungsten wires (12 μm diameter, California Fine Wire), which were briefly heated to bind them together into a single bundle. Each tetrode was attached to a 6 mm long M1.0 screw to enable independent manipulation of its depth. The microdrive was implanted under stereotaxic control with reference to bregma. For hippocampal dCA1, tetrodes were implanted by first identifying central coordinates −2.0 mm anteroposterior from bregma and ± 1.7 mm lateral from bregma as references to position each individual tetrode contained in the microdrive, and initially implanting tetrodes above the pyramidal layer (∼−1 mm ventral from the brain surface). A similar approach was used for tetrodes aimed at the medial prefrontal cortex, using central coordinates +1.7 mm anteroposterior from bregma, ± 0.3 mm lateral from bregma, and an initial −1.5 mm ventral from the brain surface; and at the ventral tegmental area, using central coordinates −3.2 mm anteroposterior from bregma, ± 0.5 mm lateral from bregma, and an initial −3.8 mm ventral from the brain surface. Following the implantation, the exposed parts of the tetrodes were covered with paraffin wax, after which the drive was secured to the skull using dental cement. For extra stability, four stainless-steel anchor screws were inserted into the skull before the drive was implanted. Two of the anchor screws, inserted above the cerebellum, were attached to 50 μm tungsten wires (California Fine Wire) and further served as ground and reference electrodes during the recordings.
For optogenetic manipulations, optic fibers (230 μm diameter, Doric Lenses, Canada) were incorporated into a microdrive designed to bilaterally deliver light to the dCA1 pyramidal cell layer using stereotaxic coordinates (−2.0 mm anteroposterior from bregma, ± 1.7 mm lateral from bregma, and −1.1 mm ventral from the brain surface). Implantation occurred 2 weeks after dCA1 viral injections.
Mouse in vivo light delivery
Optical dCA1 stimulation was performed in both ArchT-transduced mice (to optogenetically silence pyramidal neurons) and GFP-transduced control mice using a diode-pumped solid-state laser (Laser 2000, Ringstead) delivering yellow light (561 nm; ∼5-7 mW output power) to the optic fibers implanted above the dCA1 pyramidal cell layer. Using synchronous transistor-transistor logic (TTL) pulses, light was delivered to dCA1 simultaneously with the presentation of either the auditory cues in the inference test (10 s duration; Figure 2I) or the additional visual cues presented after both training and the inference test were complete (8 s duration; Figure 2J). Notably, optogenetic suppression of dCA1 neuronal spiking during presentation of these additional visual cues could not affect learning or performance on the inference test.
Mouse in vivo multichannel data acquisition
During the training protocol, mice were gradually accustomed to being connected to the recording system. On the morning of each recording day, single-unit spiking activity, together with the electrophysiological profile of the local field potentials (LFPs), was used to adjust the position of each tetrode relative to either the dCA1 pyramidal cell layer, the mPFC or the VTA. Tetrodes were then left in position for ∼1.5-2 h before recordings started. At the end of each recording day, tetrodes were gently raised by ∼500 μm to avoid possible mechanical damage to their target structure overnight.
Multichannel ensemble recordings were conducted during the inference test protocol. The signals from the electrodes were amplified, multiplexed and digitized using a single integrated circuit located on the head of the animal (RHD2164, Intan Technologies; gain ×1000) (McNamara et al., 2014). The amplified and filtered (0.09 Hz to 7.60 kHz) electrophysiological signals were digitized at 20 kHz and saved to disk along with the synchronization signals from the animal’s position tracking, the presentation of each type of sensory cue, the lick events, and the laser activation. To track the location of the animal, three LED clusters were attached to the headstage and captured at 25 frames per second by an overhead color camera.
Mouse spike detection and unit isolation
The electrophysiological data were subsequently bandpass filtered (800 Hz to 5 kHz) and single extracellular spikes detected by thresholding the root-mean-square (RMS) power computed in a 0.2 ms sliding window. Detected spikes from the individual electrodes were combined per tetrode. To isolate spikes putatively belonging to the same neuron, spike waveforms were first up-sampled to 40 kHz and aligned to their maximum trough. Principal component analysis was applied to these waveforms ± 0.5 ms from the trough to extract the first 3–4 principal components per channel, such that each individual spike was represented by 12 waveform parameters. For all main analyses, an automatic clustering program (KlustaKwik, http://klusta-team.github.io) was run on the principal component space and the resulting clusters were manually recombined and further isolated based on cloud shape in the principal component space, cross-channel spike waveforms, auto-correlation histograms and cross-correlation histograms (Harris et al., 2000; van de Ven et al., 2016). All sessions recorded on the same day were concatenated and clustered together. Clusters were only included for further analysis if they showed stable cross-channel spike waveforms across the entire recording day, a clear refractory period in the auto-correlation histogram, and well-defined cluster boundaries. For a small subset of our data (Figures 2G, 2H, and S6H), we applied an automated clustering pipeline using Kilosort (https://github.com/cortex-lab/KiloSort) via the SpikeForest sorting framework (https://github.com/flatironinstitute/spikeforest) (Magland et al., 2020; Pachitariu et al., 2016). To apply Kilosort to data acquired with tetrodes, templates were restricted to the channels within a given tetrode bundle, effectively masking all other recording channels. The resulting clusters were manually curated using metrics derived from the waveforms and spike times to remove spurious clusters. This automated procedure was cross-validated against manual curation on several datasets by computing confusion matrices, confirming that the clusters obtained automatically matched those obtained with the manual method. In total, 1586 neurons were included in the analyses.
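A minimal sketch of the detection step is shown below for a single channel: band-pass filter, sliding-window RMS power, threshold. The filter order and the threshold level (expressed in standard deviations above the mean RMS) are assumptions chosen for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_spikes(raw, fs=20000.0, band=(800.0, 5000.0), win_ms=0.2, n_sd=5.0):
    """Detect putative extracellular spikes on one channel.

    Band-pass filters the signal, computes RMS power in a short sliding
    window, and thresholds it at n_sd standard deviations above the mean
    RMS (threshold choice is an illustrative assumption).
    """
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw)

    win = max(1, int(round(win_ms * 1e-3 * fs)))        # samples per 0.2 ms window
    kernel = np.ones(win) / win
    rms = np.sqrt(np.convolve(filtered ** 2, kernel, mode="same"))

    threshold = rms.mean() + n_sd * rms.std()
    above = rms > threshold
    onsets = np.flatnonzero(above & ~np.roll(above, 1))  # first sample of each epoch
    return onsets / fs                                   # spike times in seconds

# Hypothetical usage on synthetic noise with one large deflection added:
rng = np.random.default_rng(1)
raw = rng.standard_normal(20000)
raw[5000:5010] += 8.0                                    # crude spike-like transient
print(detect_spikes(raw))
```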
Human fMRI data acquisition
The fMRI scan task was performed inside a 7 Tesla Magnetom MRI scanner (Siemens) using a 1-channel transmit and a 32-channel phased-array head coil (Nova Medical, USA) at the Wellcome Centre for Integrative Neuroimaging (University of Oxford). fMRI data were acquired using a multiband echo planar imaging (EPI) sequence with 50 transverse slices of 1.5 mm thickness and no interslice gap, giving isotropic voxels of 1.5 × 1.5 × 1.5 mm3; repetition time (TR) = 1.512 s, echo time (TE) = 20 ms, flip angle = 85°, field of view = 192 mm, and acceleration factor of 2. To increase SNR in brain regions for which we had prior hypotheses, we restricted the fMRI sequence to a partial volume, thus increasing the number of measurements acquired. The partial volume covered occipital and temporal cortices. For each participant, a T1-weighted structural image was acquired to correct for geometric distortions and perform co-registration between EPIs, consisting of 176 axial slices of 0.7 mm thickness, in-plane resolution of 0.7 × 0.7 mm2, TR = 2.2 s, TE = 2.96 ms, and field of view = 224 mm. A field map with dual echo-time images was also acquired (TE1 = 4.08 ms, TE2 = 5.1 ms, whole-brain coverage, voxel size 2 × 2 × 2 mm3).
Quantification and Statistical Analysis
Mouse electrophysiology analysis: inference task
To identify ensembles of neurons representing the six different cues included in the task (Xn, Yn, Zn), we first filtered the data by the “decision point” of the mouse. The “decision point” was defined as the latest time bin in the trial of interest where the speed of the mouse was below 5 cm/s prior to visiting the outcome area (Figure S4). By excluding data acquired in time bins occurring after the “decision point”, we eliminated epochs when the mouse was located at, or approaching, the liquid dispenser. In this manner, we controlled for the spatial location of the mice on each trial.
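The sketch below illustrates this filtering step for one trial, returning a mask over time bins up to the decision point. The input conventions (per-bin speed and a boolean flag for occupancy of the outcome area) are assumptions made for illustration.

```python
import numpy as np

def decision_point_mask(speed, in_outcome_area, speed_thresh=5.0):
    """Boolean mask over time bins up to the 'decision point'.

    The decision point is taken as the last bin where speed falls below
    speed_thresh (cm/s) before the first visit to the outcome area; bins
    after it are excluded from subsequent analyses.
    """
    speed = np.asarray(speed, dtype=float)
    in_outcome_area = np.asarray(in_outcome_area, dtype=bool)

    first_visit = np.argmax(in_outcome_area) if in_outcome_area.any() else len(speed)
    slow_bins = np.flatnonzero(speed[:first_visit] < speed_thresh)
    if slow_bins.size == 0:
        return np.zeros(len(speed), dtype=bool)      # no decision point found
    decision_point = slow_bins[-1]

    mask = np.zeros(len(speed), dtype=bool)
    mask[:decision_point + 1] = True                  # keep bins up to the decision point
    return mask

# Hypothetical trial: the mouse slows twice, then runs to the dispenser.
speed = np.array([2.0, 8.0, 3.0, 12.0, 15.0, 10.0])
in_outcome = np.array([False, False, False, False, True, True])
print(decision_point_mask(speed, in_outcome))         # keeps bins 0-2 only
```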
After filtering the data by the “decision point”, we then visualized the firing response of different dCA1 neurons to each of the task cues (Xn, Yn, Zn; Figure 3A). The instantaneous spike discharge of each neuron was assessed within time bins that spanned a ± 10 s window from onset of each cue. For the peristimulus time histograms, the time bin for estimating the firing rate (Hz) for each neuron was 150 ms. For the heatmap, the time bin for estimating the average Z-scored firing rate for each neuron was 100 ms (Figure 3A).
Across all recorded neurons, within each trial we filtered the data by the “decision point” before estimating the Z-scored firing rate of each neuron during each 100 ms time bin spanning each trial. For each neuron, the Z-scored firing rate across time bins was then averaged for each trial, and the responses across all trials stacked and regressed onto a GLM indicating the identity of the sensory cues presented on each trial (Figure 3B). To control for differences in running speed across trials, we included a dummy variable in the model, indicating the standardized average speed per trial, again filtered by the “decision point” (Figure 3B). For each neuron, this analysis provided a regression weight indicating the extent to which the firing rate of the neuron in question changed in response to each sensory cue. To identify ensembles of neurons representing a given task cue, we selected neurons with a positive beta weight above 2 standard deviations from the mean regression coefficient, calculated across all recorded neurons for that cue (Figures 3C and 3D).
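The sketch below illustrates this two-step procedure: a per-neuron regression of trial-averaged firing rates onto cue identity with a speed covariate, followed by selection of neurons whose cue weight exceeds the across-neuron mean by 2 standard deviations. The exact design matrix, variable names and synthetic data are assumptions for illustration, not the authors' model specification.

```python
import numpy as np

def cue_betas(trial_rates, cue_ids, trial_speed, n_cues=6):
    """Per-neuron regression weights for each task cue.

    trial_rates : (n_trials, n_neurons) trial-averaged Z-scored firing rates
    cue_ids     : (n_trials,) integer cue identity (0..n_cues-1)
    trial_speed : (n_trials,) standardized average running speed per trial
    Returns betas of shape (n_neurons, n_cues).
    """
    n_trials = trial_rates.shape[0]
    cues = np.zeros((n_trials, n_cues))
    cues[np.arange(n_trials), cue_ids] = 1.0          # one indicator column per cue
    X = np.column_stack([cues, trial_speed])           # speed as a nuisance covariate
    betas, *_ = np.linalg.lstsq(X, trial_rates, rcond=None)
    return betas[:n_cues].T

def cue_ensembles(betas, n_sd=2.0):
    """Neurons whose weight for a cue exceeds the across-neuron mean + n_sd SD."""
    thresh = betas.mean(axis=0) + n_sd * betas.std(axis=0)
    return betas > thresh                               # boolean (n_neurons, n_cues)

# Hypothetical data: 60 trials, 25 neurons, 6 cues.
rng = np.random.default_rng(2)
rates = rng.standard_normal((60, 25))
cue_ids = rng.integers(0, 6, 60)
speed = rng.standard_normal(60)
print(cue_ensembles(cue_betas(rates, cue_ids, speed)).sum(axis=0))
```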
To assess whether successful inferential choice was associated with modulation of hippocampal dCA1 spiking activity, for each recorded neuron we estimated the average Z-scored spike discharge in each 100 ms time bin spanning a 30 s period peristimulus to presentation of each auditory cue Xn in the inference test. For each 100ms time bin, we then regressed the firing rate vector onto behavioral performance (‘1’ for correct inference and ‘0’ for incorrect inference), while accounting for both the speed of the mouse and the set of the auditory cues (set 1 or 2) across trials. Each general linear model (GLM) thus yielded a regression weight reflecting the difference in Z-scored firing rate for correct versus incorrect trials, for a given neuron. For each temporal bin, we then estimated the average regression weight across all recorded neurons, indicating the extent to which dCA1 spiking activity was modulated by behavioral performance (correct versus incorrect inference) through time (Figure 4B).
To assess the representational similarity of dCA1 ensemble firing patterns in response to the six cues included in the inference task, we first established the average Z-scored firing rate of each neuron in 100 ms time bins spanning all trials. For each cue, we then averaged the response of each neuron, before stacking the responses across neurons to generate a population vector (Figure 4D). For each task cue, separate population vectors were generated for correct and incorrect trials. A representational similarity matrix (RSM) was then generated for both correct and incorrect trials, using the Pearson correlation coefficient obtained by correlating the population vector for each cue with the population vector for all other cues (Figure 4F). To estimate the representational similarity between auditory and visual cues on each recording day, the average between-association correlation coefficient (RSM off-diagonals: X1 versus Y2, and X2 versus Y1) was subtracted from the average within-association correlation coefficient (RSM main-diagonal: X1 versus Y1, and X2 versus Y2) (Figure 4H). Summary statistics were tested at the group level using two approaches: (1) a one-sided Wilcoxon signed-rank test across recording days; (2) a one-sided permutation test where the null distribution was generated by estimating the group average 10,000 times, after permuting the identity of all auditory cues in the RSM on each iteration. Correct and incorrect trials were kept separate for this permutation test. For visualization of the group average RSM (Figure 4F), the average correlation coefficient was estimated for each auditory-visual pair for each recording day to give a 4x4 matrix.
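A compact sketch of the within-minus-between summary statistic and the two group-level tests is given below. The data layout (one pair of population vectors per set and per day) and the synthetic data are assumptions for illustration; the permutation here shuffles the auditory-cue labels within each day, mirroring the logic described above.

```python
import numpy as np
from scipy.stats import wilcoxon

def within_minus_between(pop_x, pop_y):
    """Within- minus between-association similarity of population vectors.

    pop_x, pop_y : (2, n_neurons) average Z-scored responses to the auditory
    (X1, X2) and visual (Y1, Y2) cues. Returns the mean within-pair
    correlation (X1-Y1, X2-Y2) minus the mean cross-pair correlation
    (X1-Y2, X2-Y1).
    """
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    within = 0.5 * (r(pop_x[0], pop_y[0]) + r(pop_x[1], pop_y[1]))
    between = 0.5 * (r(pop_x[0], pop_y[1]) + r(pop_x[1], pop_y[0]))
    return within - between

def permutation_p(pop_xs, pop_ys, n_perm=10000, seed=3):
    """One-sided permutation test, shuffling auditory-cue identity per day."""
    rng = np.random.default_rng(seed)
    observed = np.mean([within_minus_between(px, py)
                        for px, py in zip(pop_xs, pop_ys)])
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = np.mean([within_minus_between(px[rng.permutation(2)], py)
                           for px, py in zip(pop_xs, pop_ys)])
    return np.mean(null >= observed)

# Hypothetical data: 8 recording days, 40 neurons per day.
rng = np.random.default_rng(4)
pop_xs = [rng.standard_normal((2, 40)) for _ in range(8)]
pop_ys = [px + 0.5 * rng.standard_normal((2, 40)) for px in pop_xs]
daily = [within_minus_between(px, py) for px, py in zip(pop_xs, pop_ys)]
print(wilcoxon(daily, alternative="greater"))
print(permutation_p(pop_xs, pop_ys, n_perm=1000))
```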
During presentation of auditory cues Xn in the inference test, we used a spike-triggered average to assess the temporal relationship in spike discharge between pairs of neurons representing Xn and Yn cues (Figures 5C and 5D). Taking ensembles of neurons representing Xn and Yn cues (as defined in Figures 3C and 3D), a 200 ms window was defined around each Xn spike during the within-set auditory cue. For each pair of Xn and Yn neurons, spike counts for each Yn neuron (Yn, within set: Figure 5C; Ym, cross-set: Figure 5D) were summed within each 1 ms bin of the 200 ms window, before estimating the Z-scored average firing rate for each 1 ms bin across all possible Xn spikes in the pair. Pairs of Xn and Yn neurons where the Yn neuron fired fewer than 20 spikes across all Xn spikes were excluded from the analysis. For visualization purposes only (Figures 5C and 5D), a moving average was applied to the spike-triggered average, using a bin size of 5 ms. An equivalent analysis was performed to assess spike counts in Xn neurons in response to spikes in Yn neurons (Figure S5).
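A minimal sketch of this cross-neuron spike-triggered average is given below for one Xn-Yn pair. The synthetic spike trains, the imposed 20 ms lag and the exact binning edges are illustrative assumptions.

```python
import numpy as np

def spike_triggered_average(x_spikes, y_spikes, window=0.1, bin_size=0.001,
                            min_y_spikes=20):
    """Z-scored spike-triggered average of Yn spikes around Xn spikes.

    For each Xn spike (seconds), counts Yn spikes in 1 ms bins of a +/-100 ms
    window, averages across trigger spikes, and Z-scores across bins.
    Returns None if fewer than min_y_spikes Yn spikes fall inside the windows.
    """
    edges = np.arange(-window, window + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    total_y = 0
    for t in x_spikes:
        rel = y_spikes[(y_spikes >= t - window) & (y_spikes <= t + window)] - t
        counts += np.histogram(rel, bins=edges)[0]
        total_y += rel.size
    if total_y < min_y_spikes:
        return None
    avg = counts / len(x_spikes)                    # mean count per trigger spike
    return (avg - avg.mean()) / avg.std()

# Hypothetical spike trains with a ~20 ms lag between the pair:
rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 100, 300))
y = np.sort(np.concatenate([x + 0.02 + 0.002 * rng.standard_normal(300),
                            rng.uniform(0, 100, 200)]))
sta = spike_triggered_average(x, y)
print(np.argmax(sta))                               # peak bin near +20 ms
```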
Mouse electrophysiology analysis: rest/sleep
SWR events were detected as described previously (McNamara et al., 2014; van de Ven et al., 2016). The LFP signal from the tetrode with the highest number of recorded dCA1 pyramidal neurons was band-pass filtered (135-250 Hz), and the signal from a ripple-free reference tetrode was subtracted to eliminate common-mode noise (such as muscle artifacts). Next, the power (root mean square) of the processed signal was calculated. SWR detection was applied to periods of immobility (instantaneous speed below 1.5 cm/s), and the threshold for SWR event detection was set to 7 standard deviations above the background mean power.
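The sketch below reproduces the main steps of this detection on synthetic signals. The LFP sampling rate, filter order, 10 ms RMS window, and the assumption that speed is available at the same sampling rate are illustrative choices and not taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_swr(lfp, ref_lfp, speed, fs=1250.0, band=(135.0, 250.0),
               speed_thresh=1.5, n_sd=7.0):
    """Detect candidate sharp-wave/ripple (SWR) events.

    Band-pass filters the CA1 LFP, subtracts a ripple-free reference channel
    to remove common-mode noise, computes RMS power, and thresholds it at
    n_sd standard deviations above the mean power during immobility
    (speed < speed_thresh cm/s). Returns event onset times in seconds.
    """
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    ripple = filtfilt(b, a, lfp) - filtfilt(b, a, ref_lfp)

    win = int(0.010 * fs)                            # 10 ms RMS window (assumed)
    rms = np.sqrt(np.convolve(ripple ** 2, np.ones(win) / win, mode="same"))

    immobile = speed < speed_thresh                  # per-sample speed (assumed)
    threshold = rms[immobile].mean() + n_sd * rms[immobile].std()
    above = immobile & (rms > threshold)
    onsets = np.flatnonzero(above & ~np.roll(above, 1))
    return onsets / fs

# Hypothetical usage with a synthetic 180 Hz ripple embedded in noise:
rng = np.random.default_rng(6)
n = 12500
lfp = rng.standard_normal(n)
lfp[6000:6100] += 5 * np.sin(2 * np.pi * 180 * np.arange(100) / 1250)
print(detect_swr(lfp, rng.standard_normal(n), np.full(n, 0.5)))
```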
To determine whether triplets of neurons were coactive during SWRs, we estimated the joint-firing probability during SWRs recorded during periods of quiet wakefulness in the inference test (Figure 7) or during periods of long immobility in the sleep/rest session (Figures S7A and S7D–S7G). To control for differences in firing rate across triplets, the joint-firing probability was normalized by the average firing rate of the triplet. Across recording days, we computed the difference in joint-firing probabilities during SWRs that occurred early (recording days 1:4) and late (recording days 5:8) in the inference test (Figure S1B). To assess joint-firing of neurons across Xn, Yn and Zn ensembles (as defined in Figures 3C and 3D), we estimated all possible triplets, and then computed the coactivation probability as follows:
$\hat{p}_{\mathrm{early}} = \dfrac{n_{\mathrm{early}}}{N_{\mathrm{early}} \cdot f_{\mathrm{early}}}$ and $\hat{p}_{\mathrm{late}} = \dfrac{n_{\mathrm{late}}}{N_{\mathrm{late}} \cdot f_{\mathrm{late}}}$, where $n_{\mathrm{early}}$ ($n_{\mathrm{late}}$) is the number of SWRs during the inference test (Figure 7) or during the rest session (Figures S7D and S7E) during early (late) recording days in which all neurons in the triplet were active; $N_{\mathrm{early}}$ ($N_{\mathrm{late}}$) is the total number of SWRs during the inference test (Figure 7) or during the rest session (Figures S7D and S7E) during early (late) recording days; and $f_{\mathrm{early}}$ ($f_{\mathrm{late}}$) is the average of the mean firing rates of the neurons in the triplet during early (late) recording days. We then tested whether the difference in these probabilities, $\hat{p}_{\mathrm{diff}} = \hat{p}_{\mathrm{late}} - \hat{p}_{\mathrm{early}}$, was consistently different from zero, estimating the effect size for the difference by computing 10,000 bootstrapped resamples. Triplets of neurons that were not coactive in any SWRs were not included in the analysis. To assess evidence for increased representation of a cognitive short-cut in SWRs (Xn, Zn), the above analysis was applied either to doublets of neurons regardless of neurons representing Yn (Figure 7E), or to triplets of neurons where the joint-firing of neurons Xn and Zn was considered only in the absence of spiking activity in neurons representing Yn (Figures 7F and 7G). To estimate the inter-spike interval between pairs of neurons (Figures 7H, S7F, and S7G), we took only the first spike in the ripple for each neuron, before taking the difference in spike time across both neurons in the pair (i.e., Zn - Xn).
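As a sketch of this computation, the code below evaluates the normalized coactivation probability for one triplet and bootstraps the across-triplet difference between late and early days. The data structures (a Boolean SWR-by-neuron participation matrix) and the confidence-interval summary are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def coactivation_probability(swr_active, firing_rates, triplet):
    """Normalized joint-firing probability of a neuron triplet during SWRs.

    swr_active   : boolean array (n_swr, n_neurons); True if the neuron fired
                   at least one spike in that SWR
    firing_rates : (n_neurons,) mean firing rate of each neuron (Hz)
    triplet      : indices of the three neurons
    Implements p_hat = n / (N * f): n SWRs with all three neurons coactive,
    N total SWRs, f the mean firing rate of the triplet (formula above).
    """
    joint = swr_active[:, list(triplet)].all(axis=1)
    n, N = joint.sum(), swr_active.shape[0]
    f = firing_rates[list(triplet)].mean()
    return n / (N * f)

def bootstrap_diff(p_early, p_late, n_boot=10000, seed=7):
    """Bootstrap the across-triplet mean of p_late - p_early (95% CI)."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(p_late) - np.asarray(p_early)
    boots = [rng.choice(diff, size=diff.size, replace=True).mean()
             for _ in range(n_boot)]
    return diff.mean(), np.percentile(boots, [2.5, 97.5])

# Hypothetical example: 200 SWRs, 30 neurons, then 40 triplet probabilities.
rng = np.random.default_rng(8)
swr_active = rng.random((200, 30)) < 0.15
rates = rng.uniform(0.5, 5.0, 30)
print(coactivation_probability(swr_active, rates, triplet=(0, 1, 2)))
p_early = rng.uniform(0.0, 0.02, 40)
p_late = p_early + rng.normal(0.005, 0.01, 40)
print(bootstrap_diff(p_early, p_late))
```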
To further illustrate coactivation of neuronal pairs during SWRs (Figure S7B), we used a second approach reported previously (McNamara et al., 2014). In brief, this involved first estimating the instantaneous firing rate counts within each SWR, before calculating the correlation coefficient between these counts for each cell pair. We then tested the difference in correlation coefficients between early and late recording days against zero, estimating the effect size for the difference by computing 10,000 bootstrapped resamples. Notably, this approach did not allow for analysis of triplets, nor permit control for spiking activity in neurons representing Yn.
Human fMRI preprocessing and GLMs
Pre-processing of MRI data was carried out using SPM12 (https://www.fil.ion.ucl.ac.uk/spm/). Images were realigned to the first volume, corrected for distortion using field maps, normalized to a standard EPI template and smoothed using an 8-mm full-width at half maximum Gaussian kernel. To remove low-frequency noise from the pre-processed data, a high-pass filter was applied using SPM12's default settings. For each participant and for each scanning block, the resulting fMRI data were analyzed in an event-related manner using four different GLMs: one designed for univariate analyses, a second designed for assessing functional connectivity using a psychophysiological interaction (PPI), and a third and fourth designed for multivariate analyses. All GLMs were applied to data from both scan task blocks. In addition to the explanatory variables (EVs) of interest (described below), all GLMs included 6 additional scan-to-scan motion parameters produced during realignment as nuisance regressors to account for motion-related artifacts in each task block.
The first GLM, used to analyze univariate BOLD effects (Figures 2B and 2C), included 14 EVs per block. Of the 14 EVs, 8 accounted for trials in the inference test, divided according to the performance of the subject (correct or incorrect inference) and further divided according to the 4 possible auditory-visual associations. An additional 4 EVs accounted for conditioning trials, divided by the 4 different visual cues. The onsets of events within these first 12 EVs were locked to the onset of the video presented in each trial. The 2 final EVs accounted for the onset of questions presented during inference test trials, and the onset of outcomes presented during conditioning trials. To decorrelate the first 12 EVs from the final 2, the duration of the first 12 EVs was set using a box-car function to 4 s, the minimum duration of the video, whereas the duration of the final 2 EVs was set to the duration of the outcome/question. All EVs were then convolved with the hemodynamic response function.
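To make the EV construction concrete, the sketch below builds box-car regressors convolved with a canonical-style HRF. In the study this was done in SPM12; the double-gamma HRF parameters, trial timings and number of scans used here are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF sampled at the TR (an approximation of a canonical
    HRF; the exact parameters here are assumptions)."""
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

def boxcar_ev(onsets, durations, n_scans, tr):
    """One explanatory variable: a box-car at stimulus onsets (seconds),
    convolved with the HRF and truncated to the scan length."""
    frame_times = np.arange(n_scans) * tr
    ev = np.zeros(n_scans)
    for onset, dur in zip(onsets, durations):
        ev[(frame_times >= onset) & (frame_times < onset + dur)] = 1.0
    return np.convolve(ev, canonical_hrf(tr))[:n_scans]

# Hypothetical block: 600 scans at TR = 1.512 s; one EV with 4 s box-cars
# locked to video onsets, a second EV locked to question onsets.
tr, n_scans = 1.512, 600
video_ev = boxcar_ev([10, 60, 110], [4, 4, 4], n_scans, tr)
question_ev = boxcar_ev([18, 69, 121], [3, 3, 3], n_scans, tr)
design = np.column_stack([video_ev, question_ev, np.ones(n_scans)])  # + motion EVs in practice
print(design.shape)
```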
The second GLM, used to assess functional connectivity using a PPI (Figure 2D), included 3 EVs per block, describing physiological, psychological and PPI regressors. The physiological regressor was defined from the fMRI time-course extracted from a seed region in the auditory cortex (see ROI definition below). The psychological regressor contrasted trials with correct versus incorrect inference during the inference test. The PPI regressor was constructed by extracting and deconvolving the time-course from the auditory cortex, multiplying it by the psychological regressor and then convolving the output with the hemodynamic response function (HRF). To account for additional unwanted variance, task relevant EVs included in the first GLM described above were also included.
The third GLM, used to assess representational similarity between auditory and visual cues (Figures 4 and S3C–S3I), included a unique EV for each trial included in EVs 1-12 from the first GLM. To maximize cross-voxel sensitivity in the BOLD response to different cues, each unique EV was described by a delta function locked to the end of the video, 4 s after video onset to ensure adequate decoupling from the response to the question or outcome. 2 additional EVs were included to account for the onset of all questions (inference test trials) and the onset of all outcome presentations (conditioning trials), modeled in the same way as in the first GLM. The delta function for each EV was then convolved with the hemodynamic response function.
The fourth GLM, used to assess representational similarity between auditory and outcome cues (Figures 6 and S6A–S6G), included a unique EV for each trial included in EVs 1-8 from the first GLM, and each trial included in the EV accounting for the onset of all outcomes. All auditory cue unique EVs were modeled in the same way as in the third GLM. All outcome cue unique EVs were described by a delta function locked to the onset of the outcome presentation. 2 additional EVs were included to account for the onset of all questions (inference test trials) and the onset of all visual cues (conditioning trials), modeled in the same way as in the first GLM. The delta function for each EV was then convolved with the hemodynamic response function.
Human univariate fMRI analysis
Using the output of the first GLM for univariate analysis, the following contrasts were assessed. First, to measure the univariate BOLD response to all inference test trials, the fMRI BOLD signal during inference test trials (EVs 1-8) was contrasted against the fMRI BOLD signal during conditioning trials (EVs 9-12) (Figure 2C). Second, to measure the univariate BOLD response to correct versus incorrect inference, inference test trials where participants made the correct inference (EVs 1-4) were contrasted against those where participants made the incorrect inference (EVs 5-8) (Figure 2B). This second contrast was also used to define the psychological regressor implemented in the PPI described above (Figure 2D). The resulting contrast images for all participants were entered into a second-level random effects ‘group’ analysis.
To visualize the time-course of the hippocampal response to inference (Figure 4A), we extracted the BOLD time series from the preprocessed data of each participant using the hippocampal ROI (Figure S3A). The extracted signal was resampled at a resolution of 300 ms and divided into inference test trials, and at each time bin the signal was regressed against an EV indicating whether the participant made the correct or incorrect inference on each trial, while accounting for the delay in the haemodynamic response. For each participant, the resulting regression weights were estimated at each time bin, and the average across all participants is displayed in Figure 4A.
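A bare-bones version of this time-resolved regression for a single participant is sketched below. Handling of the haemodynamic delay and the group-level averaging are omitted; the array shapes and synthetic data are assumptions for illustration.

```python
import numpy as np

def timecourse_betas(trial_bold, correct):
    """Time-resolved regression of the ROI BOLD signal on inference accuracy.

    trial_bold : (n_trials, n_bins) ROI BOLD signal per trial, resampled to a
                 common bin width (e.g., 300 ms)
    correct    : (n_trials,) 1 for correct, 0 for incorrect inference
    Returns one regression weight per time bin (correct minus incorrect).
    """
    n_trials, _ = trial_bold.shape
    X = np.column_stack([correct, np.ones(n_trials)])   # accuracy EV + intercept
    betas, *_ = np.linalg.lstsq(X, trial_bold, rcond=None)
    return betas[0]

# Hypothetical participant: 80 trials, 40 bins of 300 ms.
rng = np.random.default_rng(9)
correct = rng.integers(0, 2, 80)
bold = rng.standard_normal((80, 40)) + 0.3 * correct[:, None] * np.hanning(40)
print(timecourse_betas(bold, correct).round(2)[:10])
```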
Human multivariate fMRI analysis
The outputs of the third and fourth GLMs were used to estimate the representational similarity in the BOLD response to different trials, using the representational similarity analysis (RSA) toolbox (Nili et al., 2014). For each trial, a t-statistic map for the relevant EV was estimated (comparing the response to that trial against baseline).
Using the output t-statistic maps from the third and fourth GLMs, activity patterns were extracted from a hippocampal ROI (Figure S3F) and the relative similarity between the response patterns elicited in different trials was assessed using Pearson correlations, expressed as a correlation coefficient (r). For each participant, the response patterns from trials during the inference test were compared with the response patterns from trials during the conditioning phase, before being represented in a trial-by-trial cross-stimulus representational similarity matrix (RSM) [response to inference test trials by response to conditioning trials] (Figure 4C). Note that, unlike a distance or correlation matrix, this is not a symmetric matrix, and the diagonal quantifies the similarity of response patterns between the conditioning and inference test. To test evidence for representation of auditory-visual or auditory-outcome associations, 2 RSMs were estimated for each participant: one using trials where the correct inference was made during the inference test, and a second using trials where the incorrect inference was made (e.g., Figure 4E). Both ‘correct’ and ‘incorrect’ RSMs were then used to estimate the following summary statistics. First, the mean ‘within’ versus mean ‘between’ auditory-visual association was estimated (Figure 4G). Second, we fitted a GLM with 2 EVs to the RSM, to obtain parameter estimates for auditory-visual associations dependent on (EV 1) or independent of (EV 2) the value of the associated outcome (Figures S3G–S3I), or to obtain parameter estimates for auditory-outcome mappings conditional on (EV 1) or unconditional on (EV 2) the sensory cues that predicted the outcome (Figures 6 and S6A–S6C). In both cases, summary statistics were tested at the group level using two approaches: (1) a one-sided Wilcoxon signed-rank test across participants; (2) a one-sided permutation test where the null distribution was generated by estimating the group average 10,000 times, after permuting the identity of all auditory cues in the RSM on each iteration. Correct and incorrect trials were kept separate for this permutation test. For visualization of the group average RSM (Figure 4E), the average correlation coefficient was estimated for each auditory-visual pair for each participant to give a 4x4 matrix.
Using the output t-statistic maps from the third and fourth GLMs, we implemented searchlight RSA (Figures S3C, S6D, and S6E) using a spherical searchlight defined using default settings (Nili et al., 2014): fixed volume of 100 nearest neighbor voxels relative to the center voxel; variable radius with upper limit set to 15 mm to accommodate brain boundaries. The searchlight was swept across each brain volume. Across t-statistic maps (trials), the extracted voxels were correlated using Pearson correlations, and expressed as a correlation coefficient (r). The RSM was then constructed using correctly inferred trials as described for the hippocampal ROI RSA analysis above, and the resulting correlation coefficients were Fisher transformed. A summary statistic was then generated for each searchlight sphere, using the RSM to estimate ‘within’ versus ‘between’ auditory-visual associations (Figures S3C–S3E), ‘within’ versus ‘between’ auditory-outcome associations conditional on the visual cue (Figures 6H–6J), or multiple regression (GLM; see below) to compare different model RSMs (Figures 6E–6G). The summary statistic of interest was then mapped back to the central voxel in the searchlight sphere and saved. The sphere was then shifted and the entire procedure repeated until complete for the entire imaged volume. Across all spheres, this yielded a descriptive map per subject. Across the group, these subject maps were then entered into a second-level random effects analysis.
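A stripped-down version of this searchlight sweep is sketched below, using a k-nearest-neighbor sphere around each voxel and a user-supplied summary function applied to the trial-by-trial correlation matrix. The radius cap, Fisher transform, brain-mask handling and the particular summary statistic in the example are simplifications and assumptions relative to the full pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def searchlight_summary(t_maps, voxel_xyz, summary_fn, n_neighbors=100):
    """Minimal searchlight sweep.

    t_maps     : (n_trials, n_voxels) trial-wise t-statistics for in-mask voxels
    voxel_xyz  : (n_voxels, 3) voxel coordinates (mm)
    summary_fn : maps a trial-by-trial correlation matrix to one statistic
    Returns one summary statistic per voxel, taken from the sphere of the
    n_neighbors voxels nearest to each centre voxel.
    """
    tree = cKDTree(voxel_xyz)
    stats = np.empty(voxel_xyz.shape[0])
    for v in range(voxel_xyz.shape[0]):
        _, idx = tree.query(voxel_xyz[v], k=min(n_neighbors, voxel_xyz.shape[0]))
        rsm = np.corrcoef(t_maps[:, idx])             # trial-by-trial RSM
        stats[v] = summary_fn(rsm)
    return stats

# Hypothetical example: 16 trials (8 auditory, 8 visual), 500 voxels.
rng = np.random.default_rng(10)
t_maps = rng.standard_normal((16, 500))
xyz = rng.uniform(0, 50, (500, 3))
# Toy 'within minus between' statistic over the auditory-by-visual block:
within = lambda rsm: rsm[:8, 8:].diagonal().mean() - rsm[:8, 8:].mean()
print(searchlight_summary(t_maps, xyz, within, n_neighbors=50)[:5].round(3))
```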
To compare different model RSMs within the same searchlight sphere (Figures 6E–6G), we used multiple regression (GLM) to compare the Z-scored RSMs across voxels. The GLM included 2 EVs to obtain parameter estimates for auditory-outcome associations that reflected the learned task structure (EV 1) or task-independent value (EV 2) (Figure S6B). For each EV, the regression weight was used as the summary statistic and across all spheres this yielded two descriptive maps per subject, one for each EV.
Human fMRI statistics and ROI specification
From the first and second GLMs, we report results at the group level using whole-brain family-wise error (FWE) corrected statistical significance. The cluster-defining threshold was p < 0.01 uncorrected and the corrected significance level was defined as p < 0.05. For univariate effects in the hippocampus, we used an anatomical hippocampal mask (Figure S3A) to extract the raw hippocampal BOLD signal (Figure 4A) and to perform small-volume correction (SVC) for multiple comparisons with FWE peak-level correction at p < 0.05 (Figures 2B and S3D).
All ROIs were defined from contrasts that were orthogonal to the contrasts of interest. To define the seed region for the PPI (Figure 2D), we defined an ROI in bilateral auditory cortex using the contrast between inference test trials and conditioning trials, thresholded at p < 0.001 uncorrected (Figures 2C and S3B). To define an independent hippocampal ROI for RSA, the univariate contrast between correctly inferred and incorrectly inferred trials on the inference test (Figure 2B) was thresholded at p < 0.01 uncorrected (Figure S3F). To define independent masks in medial prefrontal cortex and putative dopaminergic midbrain we used two previous fMRI datasets, reporting functional maps for novel conjunctive representations in medial prefrontal cortex (Barron et al., 2013) and reward prediction error signals in ventral tegmental area (Klein-Flügge et al., 2011) respectively (Figures S6F and S6G). These functional masks were used to perform small-volume correction (SVC) for multiple comparisons with FWE peak-level correction at p < 0.05 (Figures 6H and 6J).
Acknowledgments
We would like to thank B. Micklem, P. Magill, A. Morley, S. Trouche, J. Westcott, and S. Rieger for advice and technical support, and the Dupret lab for discussions at all stages of the project. H.C.B. is supported by the John Fell Oxford University Press Research Fund (153/046) and a Junior Research Fellowship (Merton College, University of Oxford). H.C.B., H.M.R., P.P.V., and N.C.-U. are supported by the Medical Research Council (MRC) (MC_UU_12024/3 and MC_UU_00003/4). R.S.K. is supported by an EPSRC/MRC studentship (EP/L016052/1). A.S. and R.R. are supported by Wellcome Trust studentships (203836/Z/16/Z and 203964/Z/16/Z). T.E.J.B. is supported by a Wellcome Trust Senior Research Fellowship (WT104765MA). D.D. is supported by the Biotechnology and Biological Sciences Research Council UK (BB/N0059TX/1) and the MRC (programs MC_UU_12024/3 and MC_UU_00003/4). The Wellcome Centre for Integrative Neuroimaging is supported by core funding from the Wellcome Trust (203139/Z/16/Z).
Author Contributions
All authors contributed to the preparation of the manuscript. H.C.B., T.E.J.B., D.M.B., and D.D. designed the study. H.C.B., P.V.P., and D.D. developed the methodology for electrophysiology and optogenetic acquisition. H.C.B. made the VR environment. H.C.B., H.M.R., R.R., and D.D. acquired the electrophysiology and optogenetic data. H.C.B., R.S.K., and A.S. acquired the MRI data. H.C.B., R.S.K., A.S., J.X.O., H.N., and D.D. analyzed the data. H.C.B., T.E.J.B., and D.D. acquired funding.
Declaration of Interests
The authors declare no competing interests.
Published: September 17, 2020
Footnotes
Supplemental Information can be found online at https://doi.org/10.1016/j.cell.2020.08.035.
Contributor Information
Helen C. Barron, Email: helen.barron@merton.ox.ac.uk.
David Dupret, Email: david.dupret@bndu.ox.ac.uk.
References
- Barron H.C., Dolan R.J., Behrens T.E.J. Online evaluation of novel choices by simultaneous representation of multiple memories. Nat. Neurosci. 2013;16:1492–1498. doi: 10.1038/nn.3515. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Battaglia F.P., Benchenane K., Sirota A., Pennartz C.M.A., Wiener S.I. The hippocampus: hub of brain network communication for memory. Trends Cogn. Sci. 2011;15:310–318. doi: 10.1016/j.tics.2011.05.008. [DOI] [PubMed] [Google Scholar]
- Brogden W.J. Sensory pre-conditioning. J. Exp. Psychol. 1939;25:323–332. doi: 10.1037/h0058465. [DOI] [PubMed] [Google Scholar]
- Buckner R.L. The role of the hippocampus in prediction and imagination. Annu. Rev. Psychol. 2010;61:27–48. doi: 10.1146/annurev.psych.60.110707.163508. [DOI] [PubMed] [Google Scholar]
- Bunsey M., Eichenbaum H. Conservation of hippocampal memory function in rats and humans. Nature. 1996;379:255–257. doi: 10.1038/379255a0. [DOI] [PubMed] [Google Scholar]
- Buzsáki G. Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus. 2015;25:1073–1188. doi: 10.1002/hipo.22488. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Buzsáki G., Moser E.I. Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nat. Neurosci. 2013;16:130–138. doi: 10.1038/nn.3304. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cai D.J., Aharoni D., Shuman T., Shobe J., Biane J., Song W., Wei B., Veshkini M., La-Vu M., Lou J. A shared neural ensemble links distinct contextual memories encoded close in time. Nature. 2016;534:115–118. doi: 10.1038/nature17955. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cohen N.J., Eichenbaum H. MIT Press; 1993. Memory, Amnesia, and the Hippocampal System. [Google Scholar]
- Coutanche M.N., Gianessi C.A., Chanales A.J.H., Willison K.W., Thompson-Schill S.L. The role of sleep in forming a memory representation of a two-dimensional space. Hippocampus. 2013;23:1189–1197. doi: 10.1002/hipo.22157. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Csicsvari J., Hirase H., Czurkó A., Mamiya A., Buzsáki G. Fast network oscillations in the hippocampal CA1 region of the behaving rat. J. Neurosci. 1999;19:RC20. doi: 10.1523/JNEUROSCI.19-16-j0001.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Csicsvari J., O’Neill J., Allen K., Senior T. Place-selective firing contributes to the reverse-order reactivation of CA1 pyramidal cells during sharp waves in open-field exploration. Eur. J. Neurosci. 2007;26:704–716. doi: 10.1111/j.1460-9568.2007.05684.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Daw N.D., Niv Y., Dayan P. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat. Neurosci. 2005;8:1704–1711. doi: 10.1038/nn1560. [DOI] [PubMed] [Google Scholar]
- Deisseroth K. Optogenetics. Nat. Methods. 2011;8:26–29. doi: 10.1038/nmeth.f.324. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Deisseroth K. Circuit dynamics of adaptive and maladaptive behaviour. Nature. 2014;505:309–317. doi: 10.1038/nature12982. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Diba K., Buzsáki G. Forward and reverse hippocampal place-cell sequences during ripples. Nat. Neurosci. 2007;10:1241–1242. doi: 10.1038/nn1961. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Diekelmann S., Born J. The memory function of sleep. Nat. Rev. Neurosci. 2010;11:114–126. doi: 10.1038/nrn2762. [DOI] [PubMed] [Google Scholar]
- Dragoi G., Tonegawa S. Preplay of future place cell sequences by hippocampal cellular assemblies. Nature. 2011;469:397–401. doi: 10.1038/nature09633. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dupret D., O’Neill J., Pleydell-Bouverie B., Csicsvari J. The reorganization and reactivation of hippocampal maps predict spatial memory performance. Nat. Neurosci. 2010;13:995–1002. doi: 10.1038/nn.2599. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Efron B. The Bootstrap and Modern Statistics. J. Am. Stat. Assoc. 2000;95:1293–1296. [Google Scholar]
- Ekstrom A.D., Kahana M.J., Caplan J.B., Fields T.A., Isham E.A., Newman E.L., Fried I. Cellular networks underlying human spatial navigation. Nature. 2003;425:184–188. doi: 10.1038/nature01964. [DOI] [PubMed] [Google Scholar]
- Ellenbogen J.M., Hu P.T., Payne J.D., Titone D., Walker M.P. Human relational memory requires time and sleep. Proc. Natl. Acad. Sci. USA. 2007;104:7723–7728. doi: 10.1073/pnas.0700094104. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ferbinteanu J., Shapiro M.L. Prospective and retrospective memory coding in the hippocampus. Neuron. 2003;40:1227–1239. doi: 10.1016/s0896-6273(03)00752-9. [DOI] [PubMed] [Google Scholar]
- Fortin N.J., Agster K.L., Eichenbaum H.B. Critical role of the hippocampus in memory for sequences of events. Nat. Neurosci. 2002;5:458–462. doi: 10.1038/nn834. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Foster D.J. Replay Comes of Age. Annu. Rev. Neurosci. 2017;40:581–602. doi: 10.1146/annurev-neuro-072116-031538. [DOI] [PubMed] [Google Scholar]
- Foster D.J., Wilson M.A. Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature. 2006;440:680–683. doi: 10.1038/nature04587. [DOI] [PubMed] [Google Scholar]
- Foster D.J., Morris R.G.M., Dayan P. A model of hippocampally dependent navigation, using the temporal difference learning rule. Hippocampus. 2000;10:1–16. doi: 10.1002/(SICI)1098-1063(2000)10:1<1::AID-HIPO1>3.0.CO;2-1. [DOI] [PubMed] [Google Scholar]
- Gilboa A., Sekeres M., Moscovitch M., Winocur G. Higher-order conditioning is impaired by hippocampal lesions. Curr. Biol. 2014;24:2202–2207. doi: 10.1016/j.cub.2014.07.078. [DOI] [PubMed] [Google Scholar]
- Gomperts S.N., Kloosterman F., Wilson M.A. VTA neurons coordinate with the hippocampal reactivation of spatial experience. eLife. 2015;4:e05360. doi: 10.7554/eLife.05360. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gupta A.S., van der Meer M.A.A., Touretzky D.S., Redish A.D. Hippocampal replay is not a simple function of experience. Neuron. 2010;65:695–705. doi: 10.1016/j.neuron.2010.01.034. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gupta K., Erdem U.M., Hasselmo M.E. Modeling of grid cell activity demonstrates in vivo entorhinal ‘look-ahead’ properties. Neuroscience. 2013;247:395–411. doi: 10.1016/j.neuroscience.2013.04.056. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hampton A.N., Bossaerts P., O’Doherty J.P. The role of the ventromedial prefrontal cortex in abstract state-based inference during decision making in humans. J. Neurosci. 2006;26:8360–8367. doi: 10.1523/JNEUROSCI.1010-06.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Harris K.D., Henze D.A., Csicsvari J., Hirase H., Buzsáki G. Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. J. Neurophysiol. 2000;84:401–414. doi: 10.1152/jn.2000.84.1.401. [DOI] [PubMed] [Google Scholar]
- Ho J., Tumkaya T., Aryal S., Choi H., Claridge-Chang A. Moving beyond P values: data analysis with estimation graphics. Nat. Methods. 2019;16:565–566. doi: 10.1038/s41592-019-0470-3. [DOI] [PubMed] [Google Scholar]
- Johnson A., Redish A.D. Neural ensembles in CA3 transiently encode paths forward of the animal at a decision point. J. Neurosci. 2007;27:12176–12189. doi: 10.1523/JNEUROSCI.3761-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jones J.L., Esber G.R., McDannald M.A., Gruber A.J., Hernandez A., Mirenzi A., Schoenbaum G. Orbitofrontal cortex supports behavior and learning using inferred but not cached values. Science. 2012;338:953–956. doi: 10.1126/science.1227489. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Joo H.R., Frank L.M. The hippocampal sharp wave-ripple in memory retrieval for immediate use and consolidation. Nat. Rev. Neurosci. 2018;19:744–757. doi: 10.1038/s41583-018-0077-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Karlsson M.P., Frank L.M. Awake replay of remote experiences in the hippocampus. Nat. Neurosci. 2009;12:913–918. doi: 10.1038/nn.2344. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kitamura T., Ogawa S.K., Roy D.S., Okuyama T., Morrissey M.D., Smith L.M., Redondo R.L., Tonegawa S. Engrams and circuits crucial for systems consolidation of a memory. Science. 2017;356:73–78. doi: 10.1126/science.aam6808. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Klein-Flügge M.C., Hunt L.T., Bach D.R., Dolan R.J., Behrens T.E.J. Dissociable reward and timing signals in human midbrain and ventral striatum. Neuron. 2011;72:654–664. doi: 10.1016/j.neuron.2011.08.024. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Koster R., Chadwick M.J., Chen Y., Berron D., Banino A., Düzel E., Hassabis D., Kumaran D. Big-Loop Recurrence within the Hippocampal System Supports Integration of Information across Episodes. Neuron. 2018;99:1342–1354. doi: 10.1016/j.neuron.2018.08.009. [DOI] [PubMed] [Google Scholar]
- Kriegeskorte N., Mur M., Bandettini P. Representational similarity analysis - connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2008;2:4. doi: 10.3389/neuro.06.004.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kumaran D. What representations and computations underpin the contribution of the hippocampus to generalization and inference? Front. Hum. Neurosci. 2012;6:157. doi: 10.3389/fnhum.2012.00157. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kumaran D., McClelland J.L. Generalization through the recurrent interaction of episodic memories: a model of the hippocampal system. Psychol. Rev. 2012;119:573–616. doi: 10.1037/a0028681. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Langdon A.J., Sharpe M.J., Schoenbaum G., Niv Y. Model-based predictions for dopamine. Curr. Opin. Neurobiol. 2018;49:1–7. doi: 10.1016/j.conb.2017.10.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lau H., Alger S.E., Fishbein W. Relational memory: a daytime nap facilitates the abstraction of general concepts. PLoS ONE. 2011;6:e27139. doi: 10.1371/journal.pone.0027139. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lewis P.A., Durrant S.J. Overlapping memory replay during sleep builds cognitive schemata. Trends Cogn. Sci. 2011;15:343–351. doi: 10.1016/j.tics.2011.06.004. [DOI] [PubMed] [Google Scholar]
- Lex A., Gehlenborg N., Strobelt H., Vuillemot R., Pfister H. UpSet: Visualization of Intersecting Sets. IEEE Trans. Vis. Comput. Graph. 2014;20:1983–1992. doi: 10.1109/TVCG.2014.2346248. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Liu Y., Dolan R.J., Kurth-Nelson Z., Behrens T.E.J. Human Replay Spontaneously Reorganizes Experience. Cell. 2019;178:640–652. doi: 10.1016/j.cell.2019.06.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Louie K., Wilson M.A. Temporally structured replay of awake hippocampal ensemble activity during rapid eye movement sleep. Neuron. 2001;29:145–156. doi: 10.1016/s0896-6273(01)00186-6. [DOI] [PubMed] [Google Scholar]
- Magland J.F., Jun J.J., Lovero E., Morley A.J., Hurwitz C.L., Buccino A.P., Garcia S., Barnett A.H. SpikeForest: reproducible web-facing ground-truth validation of automated neural spike sorters. Elife. 2020;9:e55167. doi: 10.7554/eLife.55167. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McKenzie S., Frank A.J., Kinsky N.R., Porter B., Rivière P.D., Eichenbaum H. Hippocampal representation of related and opposing memories develop within distinct, hierarchically organized neural schemas. Neuron. 2014;83:202–215. doi: 10.1016/j.neuron.2014.05.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McNamara C.G., Tejero-Cantero Á., Trouche S., Campo-Urriza N., Dupret D. Dopaminergic neurons promote hippocampal reactivation and spatial memory persistence. Nat. Neurosci. 2014;17:1658–1660. doi: 10.1038/nn.3843. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McNaughton B.L., Barnes C.A., O’Keefe J. The contributions of position, direction, and velocity to single unit activity in the hippocampus of freely-moving rats. Exp. Brain Res. 1983;52:41–49. doi: 10.1007/BF00237147. [DOI] [PubMed] [Google Scholar]
- Mehta M.R., Quirk M.C., Wilson M.A. Experience-dependent asymmetric shape of hippocampal receptive fields. Neuron. 2000;25:707–715. doi: 10.1016/s0896-6273(00)81072-7. [DOI] [PubMed] [Google Scholar]
- Nádasdy Z., Hirase H., Czurkó A., Csicsvari J., Buzsáki G. Replay and time compression of recurring spike sequences in the hippocampus. J. Neurosci. 1999;19:9497–9507. doi: 10.1523/JNEUROSCI.19-21-09497.1999. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nagode J.C., Pardo J.V. Human hippocampal activation during transitive inference. Neuroreport. 2002;13:939–944. doi: 10.1097/00001756-200205240-00008. [DOI] [PubMed] [Google Scholar]
- Nicholson D.A., Freeman J.H., Jr. Lesions of the perirhinal cortex impair sensory preconditioning in rats. Behav. Brain Res. 2000;112:69–75. doi: 10.1016/s0166-4328(00)00168-6. [DOI] [PubMed] [Google Scholar]
- Nili H., Wingfield C., Walther A., Su L., Marslen-Wilson W., Kriegeskorte N. A toolbox for representational similarity analysis. PLoS Comput. Biol. 2014;10:e1003553. doi: 10.1371/journal.pcbi.1003553. [DOI] [PMC free article] [PubMed] [Google Scholar]
- O’Keefe J., Dostrovsky J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res. 1971;34:171–175. doi: 10.1016/0006-8993(71)90358-1. [DOI] [PubMed] [Google Scholar]
- O’Keefe J., Nadel L. Clarendon Press; 1978. The Hippocampus as a Cognitive Map. [Google Scholar]
- O’Reilly R.C., Rudy J.W. Conjunctive representations in learning and memory: principles of cortical and hippocampal function. Psychol. Rev. 2001;108:311–345. doi: 10.1037/0033-295x.108.2.311. [DOI] [PubMed] [Google Scholar]
- Ólafsdóttir H.F., Barry C., Saleem A.B., Hassabis D., Spiers H.J. Hippocampal place cells construct reward related sequences through unexplored space. eLife. 2015;4:e06063. doi: 10.7554/eLife.06063. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pachitariu M., Steinmetz N.A., Kadir S.N., Carandini M., Harris K.D. Fast and accurate spike sorting of high-channel count probes with KiloSort. In: Lee D.D., Sugiyama M., Luxburg U.V., Guyon I., Garnett R., editors. Advances in Neural Information Processing Systems 29 (NIPS 2016). NIPS Proceedings: Barcelona; Spain: 2016. [Google Scholar]
- Pastalkova E., Itskov V., Amarasingham A., Buzsáki G. Internally generated cell assembly sequences in the rat hippocampus. Science. 2008;321:1322–1327. doi: 10.1126/science.1159775. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pfeiffer B.E., Foster D.J. Hippocampal place-cell sequences depict future paths to remembered goals. Nature. 2013;497:74–79. doi: 10.1038/nature12112. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Preston A.R., Eichenbaum H. Interplay of hippocampus and prefrontal cortex in memory. Curr. Biol. 2013;23:R764–R773. doi: 10.1016/j.cub.2013.05.041. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Preston A.R., Shrager Y., Dudukovic N.M., Gabrieli J.D.E. Hippocampal contribution to the novel use of relational information in declarative memory. Hippocampus. 2004;14:148–152. doi: 10.1002/hipo.20009. [DOI] [PubMed] [Google Scholar]
- Robinson S., Todd T.P., Pasternak A.R., Luikart B.W., Skelton P.D., Urban D.J., Bucci D.J. Chemogenetic silencing of neurons in retrosplenial cortex disrupts sensory preconditioning. J. Neurosci. 2014;34:10982–10988. doi: 10.1523/JNEUROSCI.1349-14.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sadacca B.F., Jones J.L., Schoenbaum G. Midbrain dopamine neurons compute inferred and cached value prediction errors in a common framework. eLife. 2016;5:e13665. doi: 10.7554/eLife.13665. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schendan H.E., Searl M.M., Melrose R.J., Stern C.E. An FMRI study of the role of the medial temporal lobe in implicit and explicit sequence learning. Neuron. 2003;37:1013–1025. doi: 10.1016/s0896-6273(03)00123-5. [DOI] [PubMed] [Google Scholar]
- Schlichting M.L., Preston A.R. Hippocampal-medial prefrontal circuit supports memory updating during learning and post-encoding rest. Neurobiol. Learn. Mem. 2016;134(Pt A):91–106. doi: 10.1016/j.nlm.2015.11.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schlichting M.L., Zeithamova D., Preston A.R. CA1 subfield contributions to memory integration and inference. Hippocampus. 2014;24:1248–1260. doi: 10.1002/hipo.22310. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sharpe M.J., Chang C.Y., Liu M.A., Batchelor H.M., Mueller L.E., Jones J.L., Niv Y., Schoenbaum G. Dopamine transients are sufficient and necessary for acquisition of model-based associations. Nat. Neurosci. 2017;20:735–742. doi: 10.1038/nn.4538. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shohamy D., Daw N.D. Integrating memories to guide decisions. Curr. Opin. Behav. Sci. 2015;5:85–90. [Google Scholar]
- Shohamy D., Wagner A.D. Integrating memories in the human brain: hippocampal-midbrain encoding of overlapping events. Neuron. 2008;60:378–389. doi: 10.1016/j.neuron.2008.09.023. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Singer A.C., Frank L.M. Rewarded outcomes enhance reactivation of experience in the hippocampus. Neuron. 2009;64:910–921. doi: 10.1016/j.neuron.2009.11.016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Singer A.C., Carr M.F., Karlsson M.P., Frank L.M. Hippocampal SWR activity predicts correct decisions during the initial learning of an alternation task. Neuron. 2013;77:1163–1173. doi: 10.1016/j.neuron.2013.01.027. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Somogyi P., Klausberger T. Hippocampus: Intrinsic Organisation. In: Shepherd G.M., Grillne S., editors. Handbook of Brain Microcircuits. Oxford University Press; 2017. pp. 199–216. [Google Scholar]
- Squire L.R. Memory and the hippocampus: a synthesis from findings with rats, monkeys, and humans. Psychol. Rev. 1992;99:195–231. doi: 10.1037/0033-295x.99.2.195. [DOI] [PubMed] [Google Scholar]
- Squire L.R., Genzel L., Wixted J.T., Morris R.G. Memory consolidation. Cold Spring Harb. Perspect. Biol. 2015;7:a021766. doi: 10.1101/cshperspect.a021766. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Stalnaker T.A., Howard J.D., Takahashi Y.K., Gershman S.J., Kahnt T., Schoenbaum G. Dopamine neuron ensembles signal the content of sensory prediction errors. eLife. 2019;8:e49315. doi: 10.7554/eLife.49315. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sutton R.S. Learning to predict by the methods of temporal differences. Mach. Learn. 1988;3:9–44. [Google Scholar]
- Sutton R.S. Dyna, an Integrated Architecture for Learning, Planning, and Reacting. ACM SIGART Bull. 1991;2:160–163. [Google Scholar]
- Takahashi Y.K., Batchelor H.M., Liu B., Khanna A., Morales M., Schoenbaum G. Dopamine Neurons Respond to Errors in the Prediction of Sensory Features of Expected Rewards. Neuron. 2017;95:1395–1405. doi: 10.1016/j.neuron.2017.08.025. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tolman E.C. Cognitive maps in rats and men. Psychol. Rev. 1948;55:189–208. doi: 10.1037/h0061626. [DOI] [PubMed] [Google Scholar]
- Tse D., Langston R.F., Kakeyama M., Bethus I., Spooner P.A., Wood E.R., Witter M.P., Morris R.G.M. Schemas and memory consolidation. Science. 2007;316:76–82. doi: 10.1126/science.1135935. [DOI] [PubMed] [Google Scholar]
- van de Ven G.M., Trouche S., McNamara C.G., Allen K., Dupret D. Hippocampal Offline Reactivation Consolidates Recently Formed Cell Assembly Patterns during Sharp Wave-Ripples. Neuron. 2016;92:968–974. doi: 10.1016/j.neuron.2016.10.020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- van Kesteren M.T.R., Fernández G., Norris D.G., Hermans E.J. Persistent schema-dependent hippocampal-neocortical connectivity during memory encoding and postencoding rest in humans. Proc. Natl. Acad. Sci. USA. 2010;107:7550–7555. doi: 10.1073/pnas.0914892107. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wagner U., Gais S., Haider H., Verleger R., Born J. Sleep inspires insight. Nature. 2004;427:352–355. doi: 10.1038/nature02223. [DOI] [PubMed] [Google Scholar]
- Werchan D.M., Gómez R.L. Generalizing memories over time: sleep and reinforcement facilitate transitive inference. Neurobiol. Learn. Mem. 2013;100:70–76. doi: 10.1016/j.nlm.2012.12.006. [DOI] [PubMed] [Google Scholar]
- Wilson M.A., McNaughton B.L. Reactivation of hippocampal ensemble memories during sleep. Science. 1994;265:676–679. doi: 10.1126/science.8036517. [DOI] [PubMed] [Google Scholar]
- Wimmer G.E., Shohamy D. Preference by association: how memory mechanisms in the hippocampus bias decisions. Science. 2012;338:270–273. doi: 10.1126/science.1223252. [DOI] [PubMed] [Google Scholar]
- Wood E.R., Dudchenko P.A., Robitsek R.J., Eichenbaum H. Hippocampal neurons encode information about different types of memory episodes occurring in the same location. Neuron. 2000;27:623–633. doi: 10.1016/s0896-6273(00)00071-4. [DOI] [PubMed] [Google Scholar]
- Wu X., Foster D.J. Hippocampal replay captures the unique topological structure of a novel environment. J. Neurosci. 2014;34:6459–6469. doi: 10.1523/JNEUROSCI.3414-13.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zeithamova D., Preston A.R. Flexible memories: differential roles for medial temporal lobe and prefrontal cortex in cross-episode binding. J. Neurosci. 2010;30:14676–14684. doi: 10.1523/JNEUROSCI.3250-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zeithamova D., Dominick A.L., Preston A.R. Hippocampal and ventral medial prefrontal activation during retrieval-mediated learning supports novel inference. Neuron. 2012;75:168–179. doi: 10.1016/j.neuron.2012.05.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zeithamova D., Schlichting M.L., Preston A.R. The hippocampus and inferential reasoning: building memories to navigate future decisions. Front. Hum. Neurosci. 2012;6:70. doi: 10.3389/fnhum.2012.00070. [DOI] [PMC free article] [PubMed] [Google Scholar]
Data Availability Statement
The data and code used in this study will be made available via the MRC BNDU Data Sharing Platform (https://data.mrc.ox.ac.uk/) upon reasonable request.