eLife. 2019 Aug 6;8:e45333. doi: 10.7554/eLife.45333

Mapping sequence structure in the human lateral entorhinal cortex

Jacob LS Bellmund 1,2,3, Lorena Deuker 4, Christian F Doeller 1,2
Editors: Ida Momennejad5, Timothy E Behrens6
PMCID: PMC6684227  PMID: 31383256

Abstract

Remembering event sequences is central to episodic memory and presumably supported by the hippocampal-entorhinal region. We previously demonstrated that the hippocampus maps spatial and temporal distances between events encountered along a route through a virtual city (Deuker et al., 2016), but the content of entorhinal mnemonic representations remains unclear. Here, we demonstrate that multi-voxel representations in the anterior-lateral entorhinal cortex (alEC) — the human homologue of the rodent lateral entorhinal cortex — specifically reflect the temporal event structure after learning. Holistic representations of the sequence structure were related to memory recall, and the timeline of events could be reconstructed from entorhinal multi-voxel patterns. Our findings demonstrate representations of temporal structure in the alEC, dovetailing with temporal information carried by population signals in the lateral entorhinal cortex of navigating rodents and with alEC activations during temporal memory retrieval. Our results provide novel evidence for the role of the alEC in representing time for episodic memory.

Research organism: Human

Introduction

Knowledge of the temporal structure of events is central to our experience. We remember how a sequence of events unfolded and can recall when in time events occurred. Emphasizing both when in time and where in space events came to pass, episodic memories typically comprise event information linked to a spatiotemporal context. Space and time have been suggested to constitute fundamental dimensions along which our experience is organized (Konkel and Cohen, 2009; Ekstrom and Ranganath, 2018; Bellmund et al., 2018a). Consistently, the role of the hippocampus — a core structure for episodic memory (Scoville and Milner, 1957; Squire, 1982) — in coding locations in space (O'Keefe and Dostrovsky, 1971; Moser et al., 2017; Epstein et al., 2017) and moments in time (Pastalkova et al., 2008; MacDonald et al., 2011; Eichenbaum, 2014; Ranganath, 2019; Howard, 2018) is well-established. Human memory research has highlighted the role of the hippocampus in the encoding, representation and retrieval of temporal relations (Tubridy and Davachi, 2011; DuBrow and Davachi, 2014; Ezzyat and Davachi, 2014; Hsieh et al., 2014; Jenkins and Ranganath, 2010; Jenkins and Ranganath, 2016; Kyle et al., 2015; Copara et al., 2014). The similarity patterns of mnemonic representations suggest that the hippocampus forms integrated maps reflecting the temporal and spatial structure of event memories (Deuker et al., 2016; Nielson et al., 2015). Consistently, activity in the hippocampal-entorhinal region has been demonstrated to be sensitive to Euclidean distances as well as the lengths of shortest paths to goals during navigation (Spiers and Maguire, 2007; Viard et al., 2011; Sherrill et al., 2013; Howard et al., 2014; Chrastil et al., 2015; Spiers and Barry, 2015).

How do representations of temporal structure arise in the hippocampus? Evidence suggests that neural ensembles in the lateral entorhinal cortex (EC), which is strongly connected to the hippocampus (Witter et al., 2017), carry temporal information in freely moving rodents (Tsao et al., 2018). Specifically, temporal information could be decoded from population activity with high accuracy (Tsao et al., 2018). This temporal information was suggested to arise from the integration of experience rather than an explicit clocking signal (Tsao et al., 2018). Recently, the human anterior-lateral entorhinal cortex (alEC), the homologue region of the rodent lateral entorhinal cortex (Navarro Schröder et al., 2015; Maass et al., 2015), as well as the perirhinal cortex and a network of brain regions including the hippocampus, the medial prefrontal cortex, posterior cingulate cortex and angular gyrus have been implicated in the recall of temporal information (Montchal et al., 2019). These regions responded more strongly during high- compared to low-accuracy retrieval of when in time snapshots from a sitcom had appeared over the course of the episode viewed in the experiment (Montchal et al., 2019). Together, these findings demonstrate that entorhinal population activity carries temporal information in navigating rodents and that its human homologue is activated during temporal memory recall. However, the contents of mnemonic representations in the alEC remain unclear.

We used representational similarity analysis of fMRI multi-voxel patterns in the entorhinal cortex to address the question of how learning the structure of an event sequence shapes mnemonic representations in the alEC. Using this paradigm and data, we previously demonstrated that participants can successfully recall spatial and temporal relations of events defined by object encounters in a virtual city and that the change of hippocampal representations reflects an integrated event map of the remembered distance structure (Deuker et al., 2016). Here, we show that the change of multi-voxel pattern similarity through learning in the alEC specifically reflects the temporal structure of the event sequence.

Results

We examined the effect of learning on object representations in the human entorhinal cortex using fMRI. In between two picture viewing tasks during which fMRI data were collected, participants acquired knowledge of temporal and spatial positions of objects in a familiar virtual city. Participants navigated repeated laps of a route along which they encountered chests containing different objects (Figure 1; Figure 1—figure supplement 1). We aimed to test whether entorhinal pattern similarity change from before to after learning related to experienced object relationships. Specifically, we presented object images twelve times in the picture viewing tasks before and after learning, using the same random order in both scanning runs. For both runs, we calculated the similarity of multi-voxel patterns for all object pairs and correlated changes in representational similarity with the temporal and spatial object relationships. The temporal distance structure of the object sequence can be quantified as the elapsed time between object encounters or as ordinal differences between their sequence positions, which are closely related in our task. Spatial distances on the other hand can be captured by Euclidean distances or geodesic distances between positions based on the shortest navigable paths between object positions. Importantly, we dissociated temporal from Euclidean and geodesic spatial object relationships through the use of teleporters along the route (Figure 1—figure supplement 2). Further, object relationships can be quantified by the distance traveled along the section of the route separating their positions (Figure 1—figure supplement 2). To assess whether entorhinal object representations change from before to after learning to map experienced object relationships, we compared changes in neural pattern similarity to the temporal and spatial structure of the task.

Figure 1. Design and analysis logic.

(A) During the spatio-temporal learning task, which took place in between two identical runs of a picture viewing task (Figure 1—figure supplement 1), participants repeatedly navigated a fixed route (blue line, mean ± standard deviation of median time per lap 264.6 ± 47.8 s) through the virtual city along which they encountered objects hidden in chests (numbered circles) (Deuker et al., 2016). Temporal (median time elapsed) and spatial (Euclidean and geodesic) distances between objects were dissociated through the use of three teleporters (lettered circles) along the route (Figure 1—figure supplement 2), which instantaneously changed the participant’s location to a different part of the city. (B) In the picture viewing tasks, participants viewed randomly ordered images of the objects encountered along the route while fMRI data were acquired. We quantified multi-voxel pattern similarity change between pairwise object comparisons from before to after learning the temporal and spatial relationships between objects in subregions of the entorhinal cortex. We tested whether pattern similarity change reflected the structure of the event sequence by correlating it with the time elapsed between object pairs (top right matrix shows median elapsed time between object encounters along the route averaged across participants). For each participant, we compared the correlation between pattern similarity change and the prediction matrix to a surrogate distribution obtained via bootstrapping and used the resulting z-statistic for group-level analysis (see Materials and methods).


Figure 1—figure supplement 1. Overview of experimental design.


Participants viewed object images in random order while undergoing fMRI before and after learning the temporal and spatial relationships between these objects. The order and timing of picture presentations was held identical in both sessions to assess changes in the similarity of object representations as measured by the difference in similarity of multi-voxel activity patterns (see Materials and methods). In between the two picture viewing tasks, participants acquired knowledge about the spatial and temporal positions of objects along a route through the virtual city. Initially, the route was marked by traffic cones, but in later laps participants navigated the route without guidance. Participants encountered chests along the route and were instructed to open the chests by walking into them. Each chest contained a different object, which was displayed on a black screen upon opening the chest. Crucially, the route featured three teleporters that instantly moved participants to a different part of the city where the route continued (Figure 1). This manipulation enabled us to dissociate the temporal and Euclidean spatial distances between object pairs (Figure 1—figure supplement 2). After the second picture viewing task, participants were asked to freely recall all objects encountered along the route in the order in which they came to mind. Further, participants’ memory for temporal and spatial relationships between object pairs was assessed. Here, participants adjusted a slider to indicate whether they remembered object pairs to be close together or far apart. Temporal and spatial relations were judged in separate trials. The results of these memory tests are reported in detail in Deuker et al. (2016).

Figure 1—figure supplement 2. Temporal distances are not correlated with Euclidean or geodesic spatial distances.


(A) Pairwise temporal and Euclidean spatial distances between objects are uncorrelated (Pearson r = −0.068; bootstrapped 95% confidence interval: −0.24, 0.12; p=0.462). Median times elapsed between object encounters were z-scored and then averaged across participants. Spatial distances were defined as z-scored Euclidean distances between object positions. When correlating individual median times elapsed with spatial distances, the correlation between the dimensions was not significant in any of the participants (mean ± standard deviation of Pearson correlation coefficients r = −0.068 ± 0.006, all p≥0.378). (B, C) Likewise, temporal distances were not correlated with geodesic distances between object positions. Geodesic distances were quantified based on the lengths of the shortest paths between object positions allowing navigation of all locations not obstructed by buildings and other objects (B, Pearson r = −0.061, p=0.505; CI: −0.23, 0.14; individual Pearson r = −0.061 ± 0.006, all p≥0.414) or on the city’s street network only (C, Pearson r = −0.041, p=0.653; CI: −0.22, 0.15; individual Pearson r = −0.041 ± 0.006, all p≥0.552). (D, E) Illustrations of geodesic distances based on shortest paths (blue lines) from three object positions (white circles) to all other object positions (blue circles). Shortest paths between positions were calculated using all unobstructed positions (D) or the street network (E), respectively. (F) Because both temporal distances and traveled-route distances increase monotonically with the progression of the route, ordinal temporal distances and traveled-route distances between object pairs were closely related (Spearman r = 0.986, p<0.001; CI: 0.98, 0.99; individual Spearman r = 0.986 ± 0.003, all p<0.001). Circles in (A), (B), (C and F) indicate pairwise object comparisons; solid line shows least squares line; dashed lines and shaded region highlight bootstrapped confidence intervals.

The change in multi-voxel pattern similarity in alEC between pre- and post-learning scans was negatively correlated with the sequence structure (Figure 2A and Figure 2B, T(25)=-3.75, p=0.001, alpha-level of 0.0125, Bonferroni-corrected for four comparisons), which was quantified as the median elapsed time between object pairs along the route. After relative to before learning, objects encountered in temporal proximity were represented more similarly compared to object pairs further separated in time (Figure 2C). Pattern similarity change was negatively correlated with temporal distances after excluding comparisons of objects encountered in direct succession from the analysis (T(25)=-2.00, p=0.029, one-sided test, Figure 2—figure supplement 1A), consistent with holistic representations of temporal structure in the alEC and ruling out that the effect we observed was largely driven by increased similarity of temporally adjacent objects. Importantly, the strength of this effect was strongly related to behavior in the post-scan free recall test, where participants retrieved the objects from memory. Specifically, participants with stronger correlations between alEC pattern similarity change and the temporal task structure tended to recall objects together that were encountered in temporal proximity along the route (Pearson r = −0.53, p=0.006, CI: −0.76, −0.19, Figure 2D).

Figure 2. Temporal mapping in alEC.

(A) Entorhinal cortex subregion masks from Navarro Schröder et al. (2015) were moved into subject-space and intersected with participant-specific Freesurfer parcellations of entorhinal cortex. Color indicates probability of voxels to belong to the alEC (blue) or pmEC (green) subregion mask after subject-specific masks were transformed back to MNI template space for visualization. (B) Pattern similarity change in the alEC correlated with the temporal structure of object relationships, defined by the median time elapsed between object encounters, as indicated by z-statistics significantly below 0. A permutation-based two-way repeated measures ANOVA further revealed a significant interaction highlighting a difference in mapping temporal and Euclidean spatial distances between alEC and pmEC. (C) To break down the negative correlation of alEC pattern similarity change and temporal distance shown in (B), pattern similarity change is plotted separately for object pairs close together or far apart in time along the route based on a median split of elapsed time between object encounters. (D) Pattern similarity change in alEC was negatively related to temporal relationships independent of objects encountered in direct succession (Figure 2—figure supplement 1A). The magnitude of this effect correlated significantly with participants’ free recall behavior. The temporal organization of freely recalled objects was assessed by calculating the absolute difference in position for all recalled objects and correlating this difference with the time elapsed between encounters of object pairs along the route. Solid line shows least squares line; dashed lines and shaded region highlight bootstrapped confidence intervals. (E) To illustrate the interaction effect shown in (B), the difference in the relationship between temporal and spatial distances to pattern similarity change is shown for alEC and pmEC. Negative values indicate stronger correlations with temporal compared to spatial distances. Bars show mean and S.E.M with lines connecting data points from the same participant in (C and E). **p<0.01.

Figure 2—source data 1. Z-values of correlations between pattern similarity change in the entorhinal subregions and temporal and Euclidean spatial distances as shown in panel B.
DOI: 10.7554/eLife.45333.010
Figure 2—source data 2. Pattern similarity changes in alEC for object pairs separated by low and high temporal distances as shown in panel C.
DOI: 10.7554/eLife.45333.011
Figure 2—source data 3. Z-values of correlations between alEC pattern similarity change and temporal distances without comparisons of objects encountered in direct succession along the route and Pearson correlation coefficients quantifying temporal clustering during the free recall task (panel D).
DOI: 10.7554/eLife.45333.012
Figure 2—source data 4. Z-value differences quantifying the difference in temporal and spatial mapping in alEC and pmEC as shown in panel E.
DOI: 10.7554/eLife.45333.013


Figure 2—figure supplement 1. Entorhinal pattern similarity change reflects temporal structure beyond direct adjacency and stimulus presentation times from the pre-learning scan.


(A) To rule out that the effect was driven by objects at temporally adjacent positions along the route we excluded these comparisons from the analysis. The effect of temporal mapping in the alEC remained significant (T(25)=-2.00, p=0.029, one-sided test) and the result of this analysis did not differ significantly from the original results obtained from the analysis including all comparisons (T(25)=1.40, p=0.190). (B) Pattern similarity change in alEC was not correlated with temporal distances from the first picture viewing task. Correlations with pattern similarity change were more negative for elapsed time during the learning task than for presentation times during the first picture viewing task (see Materials and methods). Bars show mean and S.E.M with lines connecting data points from the same participant. *p<0.05.

Figure 2—figure supplement 2. Geodesic spatial distances do not correlate with entorhinal pattern similarity change.


(A, B) Two-way repeated measures ANOVAs with the factors entorhinal subregion and distance type (elapsed time and geodesic spatial distances) yielded comparable results to analyses based on Euclidean spatial distances (see main text); irrespective of whether geodesic distances were quantified as the lengths of shortest paths using all unobstructed locations (A) or restricting shortest paths to the city’s street network (B). Post hoc tests revealed more negative correlations of alEC pattern similarity change with temporal compared to geodesic distances (non-obstructed locations: T(25)=-2.88, p=0.009; street network only: T(25)=-2.51, p=0.019). Bars reflect mean and S.E.M with circles showing data points of individual participants. *p<0.05.

Figure 2—figure supplement 3. Signal-to-noise ratio in the entorhinal cortex.


(A) The temporal signal-to-noise ratio did not differ between the entorhinal subregions. (B) Similarly, the spatial signal-to-noise ratio did not differ between entorhinal subregions. Bars show mean and S.E.M with lines connecting data points from the same participant.

Figure 2—figure supplement 4. No evidence for reactivation of object representations in the entorhinal cortex.


(A) Group-level visualization of the region of interest used for the lateral occipital cortex (LOC). (B) Classification accuracies observed when testing the classifier trained on the pre-learning scan on the post-learning scan. Data are shown for different lags. Lag 0 corresponds to the same object presented on the screen in a given trial of the two picture viewing tasks. At lag 0, decoding accuracies were above chance levels in the LOC, but not in alEC or pmEC. Negative and positive lags show classifier predictions for objects at preceding and upcoming sequence positions, respectively. Classification accuracies were not above chance levels for preceding or upcoming sequence positions. Bars reflect mean and S.E.M with circles showing data points of individual participants. Red line and shaded area show mean and standard deviation of participant-specific chance levels determined via random permutations of trial labels (see Materials and methods). ***p<0.001; **p<0.01; *p<0.05.

Pattern similarity change in alEC did not correlate significantly with Euclidean spatial distances (T(25)=0.81, p=0.420), and pattern similarity change in posterior-medial EC (pmEC) did not correlate with Euclidean (T(25)=0.58, p=0.583) or temporal (T(25)=1.73, p=0.089) distances. Temporal distances between objects during the first picture viewing task were not related to alEC pattern similarity change (Figure 2—figure supplement 1B; T(24)=-0.29, p=0.776, one outlier excluded, see Materials and methods) and correlations with elapsed time between objects during navigation were significantly more negative (T(24)=-1.76, p=0.045; one-sided test), strengthening our interpretation that pattern similarity changes reflected relationships experienced in the virtual city.

Can we reconstruct the timeline of events from pattern similarity change in alEC? Here, we used multidimensional scaling to extract coordinates along one dimension from pattern similarity change averaged across participants (Figure 3A–D). The reconstructed temporal coordinates, transformed into the original value range using Procrustes analysis (Figure 3A), mirrored the time points at which objects were encountered during the task (Figure 3B, Pearson correlation between reconstructed and true time points, r = 0.56, p=0.023, bootstrapped 95% confidence interval: 0.21, 0.79). Further, we contrasted the fit of the coordinates from multidimensional scaling between the true and randomly shuffled timelines (Figure 3C). Specifically, we compared the standardized sum of squared errors of the fit between the reconstructed and the true timeline, the Procrustes distance, to a surrogate distribution of deviance values. This surrogate distribution was obtained by fitting the coordinates from multidimensional scaling to randomly shuffled timelines of events. The Procrustes distance from fitting to the true timeline was smaller than the 5th percentile of the surrogate distribution generated via 10000 random shuffles (Figure 3D, p=0.026). Taken together, these findings indicate that alEC representations change through learning to reflect the temporal structure of the acquired event memories and that we can recover the timeline of events from this representational change.
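As an illustration of this reconstruction procedure, a minimal Python sketch is given below. The original analyses were run in Matlab; the MDS settings, the sign convention for converting similarity change into dissimilarities, and all variable names are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.spatial import procrustes
from sklearn.manifold import MDS

def reconstruct_timeline(ps_change, true_times, n_shuffles=10000, seed=0):
    """Recover a one-dimensional timeline from a matrix of pattern similarity change.

    ps_change  : (16, 16) symmetric matrix of alEC pattern similarity change,
                 averaged across participants
    true_times : (16,) time points of object encounters along the route (in s)
    """
    rng = np.random.default_rng(seed)

    # Larger similarity change means closer in time, so flip the sign and shift
    # the off-diagonal entries to obtain non-negative dissimilarities for MDS.
    dissim = -np.asarray(ps_change, dtype=float)
    np.fill_diagonal(dissim, np.nan)
    dissim = dissim - np.nanmin(dissim)
    np.fill_diagonal(dissim, 0)

    mds = MDS(n_components=1, dissimilarity='precomputed', random_state=seed)
    coords = mds.fit_transform(dissim)                       # (16, 1)

    # Procrustes analysis superimposes the MDS coordinates on the true time
    # points; the disparity is the standardized sum of squared errors.
    _, _, observed = procrustes(true_times.reshape(-1, 1), coords)

    # Surrogate distribution: fit the same coordinates to shuffled timelines.
    surrogate = np.array([
        procrustes(rng.permutation(true_times).reshape(-1, 1), coords)[2]
        for _ in range(n_shuffles)
    ])
    p_value = np.mean(surrogate <= observed)                 # smaller disparity = better fit
    return coords, observed, p_value
```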

Figure 3. Reconstructing the timeline of events from entorhinal pattern similarity change.


(A) To recover the temporal structure of events we performed multidimensional scaling on the average pattern similarity change matrix in alEC. The resulting coordinates, one for each object along the route, were subjected to Procrustes analysis, which applies translations, rotations and uniform scaling to superimpose the coordinates from multidimensional scaling on the true temporal coordinates along the route (see Materials and methods). For visualization, we varied the positions resulting from multidimensional scaling and Procrustes analysis along the y-axis. (B) The temporal coordinates of this reconstructed timeline were significantly correlated with the true temporal coordinates of object encounters along the route. Circles indicate time points of object encounters; solid line shows least squares line; dashed lines and shaded region highlight bootstrapped confidence intervals. (C) The goodness of fit of the reconstruction (the Procrustes distance) was quantified as the standardized sum of squared errors and compared to a surrogate distribution of Procrustes distances. This surrogate distribution was obtained from randomly shuffling the true coordinates against the coordinates obtained from multidimensional scaling and then performing Procrustes analysis for each of 10000 shuffles (left shows one randomly shuffled timeline for illustration). (D) The Procrustes distance obtained from fitting to the true timeline of events (dotted line) was smaller than the 5th percentile (dashed line) of the surrogate distribution (solid line), which constitutes the significance threshold at an alpha level of 0.05.

Figure 3—source data 1. True and reconstructed temporal coordinates of object positions as shown in panel B.
DOI: 10.7554/eLife.45333.015
Figure 3—source data 2. Procrustes distance from mapping coordinates from multidimensional scaling based on alEC pattern similarity change to true temporal coordinates and surrogate distribution obtained from fitting to shuffled temporal coordinates (panel D).
DOI: 10.7554/eLife.45333.016

What is the nature of regional specificity within entorhinal cortex? In a next step, we compared temporal and spatial mapping between the subregions of the entorhinal cortex. We conducted a permutation-based two-by-two repeated measures ANOVA (see Materials and methods) with the factors entorhinal subregion (alEC vs. pmEC) and relationship type (temporal vs. Euclidean spatial distance between events). Crucially, we observed a significant interaction between EC subregion and distance type (F(1,25)=7.40, p=0.011, Figure 2B and E). Further, the main effect of EC subregion was significant (F(1,25)=5.18, p=0.029), while the main effect of distance type was not (F(1,25)=0.84, p=0.367). Based on the significant interaction, we conducted planned post-hoc comparisons, which revealed significant differences (Bonferroni-corrected alpha-level of 0.025) between the mapping of temporal and spatial distances in alEC (T(25)=-2.91, p=0.007) and a significant difference between temporal mapping in alEC compared to pmEC (T(25)=-3.52, p=0.001).
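For readers interested in the logic of the permutation-based interaction test, the sketch below illustrates one simple permutation analogue for a two-by-two within-subject design: the interaction reduces to a per-participant difference of differences, whose sign is exchangeable under the null hypothesis. This is an illustrative Python sketch under that assumption, not the authors' Matlab implementation, and all variable names are hypothetical:

```python
import numpy as np

def permutation_interaction_test(alec_time, alec_space, pmec_time, pmec_space,
                                 n_perm=10000, seed=0):
    """Sign-flip permutation test of the subregion x distance-type interaction.

    Each argument is a (n_participants,) array of z-statistics quantifying the
    fit between pattern similarity change and one distance type in one subregion.
    """
    rng = np.random.default_rng(seed)
    # Per-participant interaction contrast:
    # (temporal - spatial) in alEC minus (temporal - spatial) in pmEC.
    contrast = (alec_time - alec_space) - (pmec_time - pmec_space)
    observed = contrast.mean()

    # Under the null hypothesis of no interaction, the sign of each
    # participant's contrast is exchangeable.
    flips = rng.choice([-1, 1], size=(n_perm, contrast.size))
    null = (flips * contrast).mean(axis=1)

    p_value = np.mean(np.abs(null) >= np.abs(observed))
    return observed, p_value
```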

Operationalizing the temporal structure in terms of the ordinal distances between object positions in the sequence yielded comparable results, since our design did not disentangle elapsed time from ordinal position: objects were encountered at regular intervals along the route. Pattern similarity change in alEC correlated significantly with ordinal temporal distances (Figure 4, T(25)=-3.37, p=0.002), an effect further qualified by a significant interaction in the two-by-two repeated measures ANOVA contrasting the effects of ordinal temporal and Euclidean spatial distances in the entorhinal subregions (interaction: F(1,25)=7.11, p=0.012; main effect of EC subregion: F(1,25)=4.97, p=0.033; main effect of distance type: F(1,25)=0.84, p=0.365). As an alternative to quantifying spatial relationships as Euclidean distances, we calculated geodesic distances between object positions (Figure 1—figure supplement 2B–E). Entorhinal pattern similarity change was not correlated with geodesic distances based on shortest paths between locations using all positions not obstructed by buildings or other obstacles (Figure 2—figure supplement 2A, alEC: T(25)=0.82, p=0.436, pmEC: T(25)=0.73, p=0.479) or based on shortest paths using only the street network (Figure 2—figure supplement 2B, alEC: T(25)=0.36, p=0.715, pmEC: T(25)=0.92, p=0.375). Furthermore, the interaction of the two-by-two repeated measures ANOVA with the factors entorhinal subregion and distance type remained significant when using geodesic spatial distances based on shortest paths using all non-obstructed positions (interaction: F(1,25)=6.96, p=0.014; main effect of EC subregion: F(1,25)=5.18, p=0.031; main effect of distance type: F(1,25)=0.99, p=0.330) or the street network only (interaction: F(1,25)=4.30, p=0.048; main effect of EC subregion: F(1,25)=6.68, p=0.017; main effect of distance type: F(1,25)=0.81, p=0.376). Spatial and temporal signal-to-noise ratios did not differ between alEC and pmEC (Figure 2—figure supplement 3), ruling out that differences in signal quality might explain the observed effects.

Figure 4. Ordinal temporal distances correlate with pattern similarity change in alEC.


Repeating the two-way repeated measures ANOVA using ordinal distances as the measure of sequence structure yielded results comparable to the analyses presented in Figure 2. We observed a significant interaction (see main text) highlighting a difference in temporal and spatial mapping between alEC and pmEC. Post hoc tests comparing mapping of ordinal temporal distances and Euclidean spatial distances in the alEC (T(25)=-2.81, p=0.008) and comparing mapping of ordinal temporal distances between alEC and pmEC (T(25)=-3.53, p=0.002) are significant at the Bonferroni-corrected alpha-level of 0.025. Bars reflect mean and S.E.M with circles showing data points of individual participants. **p<0.01.

Figure 4—source data 1. Z-values of correlations between pattern similarity change in the entorhinal subregions and ordinal temporal and Euclidean spatial distances.
DOI: 10.7554/eLife.45333.018

Does the presentation of object images during the post-learning picture viewing task elicit reactivation of representations from the pre-learning scan? For example, associations might be formed between objects encountered in succession along the route, which might result in the reactivation of neighboring objects after learning. To test this notion, we trained pattern classifiers to distinguish object representations on the pre-learning scan and tested these classifiers on the post-learning scan (see Materials and methods). We observed classifier accuracies exceeding chance levels in the lateral occipital cortex (LOC, Figure 2—figure supplement 4, T(25)=7.54, p<0.001, see Materials and methods) — known to be involved in visual object processing (Grill-Spector et al., 2001). In the entorhinal cortex, classification accuracies did not exceed chance levels (alEC: T(25)=-0.08, p=0.941; pmEC: T(25)=0.53, p=0.621). Next, we examined classifier predictions as a function of lag along the sequence. If effects in the alEC were driven by the reactivation of objects at neighboring sequence positions, then one might expect systematic classification errors, such that an object would likely be confused with preceding or successive objects. In the entorhinal cortex, classifier evidence did not exceed chance levels for the three objects preceding (Figure 2—figure supplement 4B, alEC: most extreme T(25)=1.13; min. p=0.270; pmEC: most extreme T(25)=1.00; min. p=0.332) or following (alEC: most extreme T(25)=-2.07; min. p=0.055; pmEC: most extreme T(25)=-0.83; min. p=0.414) an object. We also did not observe above-chance classifier evidence for nearby objects in the LOC, but rather classifier evidence was below chance levels for some lags, potentially due to high classification accuracies at no lag (preceding objects: most extreme T(25)=-2.51; min. p=0.018; successive objects: most extreme T(25)=-4.09; min. p<0.001).
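The following Python sketch illustrates the logic of this cross-session decoding analysis, including the scoring of classifier predictions at different sequence lags. The choice of classifier, the handling of edge positions and the omission of the permutation-based chance levels described in the Materials and methods are simplifying assumptions; variable names are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_session_lag_decoding(X_pre, y_pre, X_post, y_post, route_order,
                               lags=range(-3, 4)):
    """Train on pre-learning patterns, test on post-learning patterns, and score
    classifier predictions at different sequence lags.

    X_pre, X_post : (n_trials, n_voxels) multi-voxel patterns of one ROI
    y_pre, y_post : (n_trials,) object labels
    route_order   : the 16 object labels in the order encountered along the route
    """
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_pre, y_pre)
    predictions = clf.predict(X_post)

    seq_pos = {obj: pos for pos, obj in enumerate(route_order)}
    accuracies = {}
    for lag in lags:
        hits = []
        for true_obj, pred_obj in zip(y_post, predictions):
            if true_obj not in seq_pos:              # e.g. the target object
                continue
            target_pos = seq_pos[true_obj] + lag
            if 0 <= target_pos < len(route_order):   # ignore trials falling off the sequence
                hits.append(pred_obj == route_order[target_pos])
        accuracies[lag] = np.mean(hits)              # lag 0: conventional decoding accuracy
    return accuracies
```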

Collectively, our findings demonstrate that, within the EC, only representations in the anterior-lateral subregion changed to resemble the temporal structure of the event sequence and that this mapping was specific to the temporal rather than the spatial dimension.

Discussion

We examined the similarity of multi-voxel patterns to demonstrate that alEC representations reflect the experienced temporal event structure. Although objects were cued in random order after learning, their representations reflected a holistic temporal map of the sequence structure. Moreover, entorhinal pattern similarity change correlated with participants’ recall behavior, and we recovered the timeline of events during learning from these representations.

Our hypothesis for temporal mapping in the alEC specifically was based on a recent finding demonstrating that population activity in the rodent lateral EC carries information from which time can be decoded at different scales ranging from seconds to days (Tsao et al., 2018). Time could be decoded with higher accuracies from the lateral EC than the medial EC and hippocampal subfield CA3. During a structured task in which the animal ran repeated laps on a maze separated into different trials, neural trajectories through population activity space were similar across trials, illustrating that the dynamics of lateral EC neural signals were more stable than during free foraging (Tsao et al., 2018). Consistently, temporal coding was improved for time within a trial during the structured task compared to episodes of free foraging. These findings support the notion that temporal information in the lateral EC might inherently arise from the encoding of experience (Tsao et al., 2018). In our task, relevant factors contributing to a similar experience of the route on each lap are not only the encounters of objects in a specific order at their respective positions, but also recognizing and passing salient landmarks as well as traveled distance and navigational demands in general. Changes in metabolic states and arousal presumably varied more linearly over time. Slowly drifting activity patterns have been observed also in the human medial temporal lobe (Folkerts et al., 2018) and EC specifically (Lositsky et al., 2016). A representation of time within a known trajectory in the alEC could underlie the encoding of temporal relationships between events in our task, where participants repeatedly navigated along the route to learn the positions of objects. Hence, temporal mapping in the alEC as we report here might help integrate hippocampal spatio-temporal event maps (Deuker et al., 2016).

Our findings demonstrate that alEC representations reflect the temporal structure of events after learning. This finding further dovetails with a recent fMRI study (Montchal et al., 2019), in which participants indicated when a still frame was encountered over the course of an episode of a sitcom. The alEC activated more strongly for the third of trials in which participants recalled temporal information most accurately compared to the third of trials in which temporal precision was lowest (Montchal et al., 2019). Going beyond the relationship of univariate activation differences to the precision of temporal memory recall, we focused on the content of alEC activation patterns and demonstrate that the alEC represents the temporal structure of events after learning.

One possibility for why alEC multi-voxel patterns resemble a holistic temporal map of the event structure in our task is the reactivation of temporal context information. If alEC neural populations traverse similar population state trajectories on each lap, they would carry information about time within a lap. A given object would be associated with a similar alEC population state on each lap. Associations with temporally drifting signals during the learning task would result in representational changes relative to the baseline scan that, if reactivated in the post-learning picture viewing task, reflect the experienced temporal structure of object encounters. This might explain the observed pattern similarity structure with relatively increased similarity for objects encountered in temporal proximity during learning and decreased similarity for items encountered after longer delays. While this interpretation is in line with data from rodent electrophysiology (Tsao et al., 2018) and the framework proposed by the temporal context model (Howard and Kahana, 2002; Howard et al., 2005) as well as evidence for neural contiguity effects in image recognition tasks (Howard et al., 2012; Folkerts et al., 2018), we cannot test the reinstatement of specific activity patterns from the learning phase directly since fMRI data were only collected during the picture viewing tasks in this study.

An alternative explanation for how the observed effects might arise is through associations between the objects. During learning, an object might become associated with preceding and successive objects, with stronger associations for objects closer in the sequence (Metcalfe and Murdock, 1981; Lewandowsky and Murdock, 1989; Jensen and Lisman, 2005). In this framework, the reactivation of associated objects during the post-learning picture viewing task could drive similarity increases for objects close together in the sequence. We tested for stable object representations from before to after learning and assessed classifier predictions to test the hypothesis that — if object reactivations underlie the effects — we might observe biased classifier evidence for the objects preceding or following a given object in the sequence. However, using classifiers trained on the picture viewing task before learning, we did not observe evidence for stable object representations in the entorhinal cortex or above-chance classifier evidence for objects nearby in the sequence after learning. Object representations in lateral occipital cortex (LOC) were stable between the picture viewing tasks. Previous studies have observed evidence for cortical reinstatement during memory retrieval (Nyberg et al., 2000; Wheeler et al., 2000; Polyn et al., 2005), modulated by hippocampal-entorhinal activity (Bosch et al., 2014). We did not observe classification accuracies exceeding chance levels for objects from nearby sequence positions in LOC, which one would expect if associative retrieval of objects accompanied by cortical reinstatement were to underlie our effects. Hence, these results fail to provide evidence for the notion that the reactivation of object representations drove our effects.

Importantly, the highly controlled design of our study supports the interpretation that alEC representations change through learning to map the temporal event structure. The order of object presentations during the scanning sessions was randomized and thus did not reflect the order in which objects were encountered during the learning task. Since the assignment of objects to positions was randomized across participants and we analyzed pattern similarity change from a baseline scan, our findings cannot be attributed to prior associations between the objects, but reflect information learned over the course of the experiment. Further, we presented the object images during the scanning sessions not only in the same random order, but also with the same presentation times and inter-stimulus intervals, thereby ruling out that the effects we observed relate to temporal autocorrelation of the BOLD signal. Taken together, the high degree of experimental control of our study supports the conclusion that alEC representations change to reflect the temporal structure of acquired memories.

The long time scales of lateral EC temporal codes differ from the observation of time cells in the hippocampus and medial EC, which fire during temporal delays in highly trained tasks (Pastalkova et al., 2008; MacDonald et al., 2011; Eichenbaum, 2014; Kraus et al., 2015; Mau et al., 2018; Heys and Dombeck, 2018). Time cell ensembles change over minutes and days (Mau et al., 2018), but their firing has been investigated predominantly in the context of short delays in the range of seconds. One recent study did not find evidence for time cell sequences during a 60s-delay (Sabariego et al., 2019). In our task, one lap of the route took approximately 4.5 min on average; comparable to the 250s-duration of a trial in Tsao et al. (2018). How memories are represented at different temporal scales, which might be integrated in hierarchically nested sequences such as different days within a week, remains a question for future research.

Our assessment of temporal representations in the antero-lateral and posterior-medial subdivision of the EC was inspired by a recent report of temporal coding during free foraging and repetitive behavior in the rodent EC, which was most pronounced in the lateral EC (Tsao et al., 2018). In humans, local and global functional connectivity patterns suggest a preserved bipartite division of the EC, but along not only its medial-lateral, but also its anterior-posterior axis (Navarro Schröder et al., 2015; Maass et al., 2015). Via these entorhinal subdivisions, cortical inputs from the anterior-temporal and posterior-medial memory systems might converge onto the hippocampus (Ranganath and Ritchey, 2012; Ritchey et al., 2015). In line with hexadirectional signals in pmEC during imagination (Bellmund et al., 2016), putatively related to grid-cell population activity (Doeller et al., 2010), one might expect the pmEC to map spatial distances between object positions in our task. However, we did not observe an association of pattern similarity change in the entorhinal cortex with Euclidean or geodesic distances based on shortest paths between object positions. One potential explanation for the absence of evidence for a spatial distance signal in pmEC might be the way in which we cued participants’ memory during the picture viewing task. The presentation of isolated object images probed locations in their stored representation of the virtual city. Due to the periodic nature of grid-cell firing, different locations might not result in diverging patterns of grid-cell population activity. Hence, the design here was not optimized for the analysis of spatial representations in pmEC, if the object positions were encoded in grid-cell firing patterns as suggested by models of grid-cell function (Fiete et al., 2008; Mathis et al., 2012; Bush et al., 2015).

Our findings are in line with the role of the hippocampus in the retrieval of temporal information from memory (Copara et al., 2014; DuBrow and Davachi, 2014; Kyle et al., 2015; Nielson et al., 2015). Hippocampal pattern similarity has been shown to scale with temporal distances between events (Deuker et al., 2016; Nielson et al., 2015) and evidence for the reinstatement of temporally associated items from memory has been reported in the hippocampus (DuBrow and Davachi, 2014). Already at the stage of encoding, hippocampal and entorhinal activity have been related to later temporal memory (DuBrow and Davachi, 2014; DuBrow and Davachi, 2016; Ezzyat and Davachi, 2014; Jenkins and Ranganath, 2010; Jenkins and Ranganath, 2016; Lositsky et al., 2016; Tubridy and Davachi, 2011). For example, increased pattern similarity has been reported for items remembered to be close together compared to items remembered to be far apart in time, despite the same time having elapsed between these items (Ezzyat and Davachi, 2014). Similarly, changes in EC pattern similarity during the encoding of a narrative correlated with later duration estimates between events (Lositsky et al., 2016). Complementing these reports, our findings demonstrate that entorhinal activity patterns carry information about the temporal structure of memories at retrieval. Furthermore, the degree to which EC patterns reflected holistic representations of temporal relationships was related to recall behavior, which was characterized by the consecutive reproduction of objects encountered in temporal proximity, potentially through mental traversals of the route during memory recall. The central role of the hippocampus and entorhinal cortex in temporal memory (for review see Davachi and DuBrow, 2015; Howard, 2018; Ranganath, 2019; Wang and Diana, 2017) dovetails with the involvement of these regions in learning sequences and statistical regularities in general (Barnett et al., 2014; Garvert et al., 2017; Hsieh et al., 2014; Kumaran and Maguire, 2006; Schapiro et al., 2012; Schapiro et al., 2016; Thavabalasingam et al., 2018).

In this experiment, the paradigm was designed to disentangle temporal distances from Euclidean spatial distances between objects (Deuker et al., 2016), resulting also in a decorrelation of temporal distances and geodesic distances based on shortest paths between object positions. Since objects were encountered at regular intervals along the route, temporal distances quantified as elapsed time in seconds or, on an ordinal level of measurement, as the difference in sequence position were highly correlated measures of the sequence structure. To partially decorrelate elapsed time from ordinal temporal distances and distance traveled along the route, future studies could vary the speed of movement between different sections of the route. This might allow the investigation of the level of precision at which the hippocampal-entorhinal region stores temporal relations, in line with evidence for the integration of duration information in the representations of short sequences (Thavabalasingam et al., 2018; Thavabalasingam et al., 2019). Interestingly, a multi-scale ensemble of successor representations was recently suggested to estimate sequences of anticipated future states, including the order and distances between states (Stachenfeld et al., 2017; Momennejad and Howard, 2018); consistent with the sensitivity of neurons (Sarel et al., 2017; Gauthier and Tank, 2018; Qasim et al., 2018) and BOLD-responses (Spiers and Maguire, 2007; Viard et al., 2011; Sherrill et al., 2013; Howard et al., 2014; Chrastil et al., 2015) in the hippocampal-entorhinal region to distances and directions to navigational goals. Related to the effects of circumnavigation on travel time and Euclidean distance estimates (Brunec et al., 2017), experimental manipulations of route tortuosity could shed additional light on how, in the context of navigation, spatio-temporal event structures shape episodic memory.

In conclusion, our findings demonstrate that activity patterns in alEC, the human homologue region of the rodent lateral EC, carry information about the temporal structure of experienced events. The observed effects might be related to the reactivation of temporal contextual tags, in line with the recent report of temporal information available in rodent lateral EC population activity and models of episodic memory.

Materials and methods

Participants

26 participants (mean ± std. 24.88 ± 2.21 years of age, 42.3% female) were recruited via the university’s online recruitment system and participated in the study. As described in the original publication using this dataset (Deuker et al., 2016), this sample size was based on a power-calculation (alpha-level of 0.001, power of 0.95, estimated effect size of d = 1.03 based on a prior study; Milivojevic et al., 2015) using G*Power (http://www.gpower.hhu.de/). Participants with prior knowledge of the virtual city (see Deuker et al., 2016) were recruited for the study. All procedures were approved by the local ethics committee (CMO Regio Arnhem Nijmegen, CMO2001/095, version 6.2) and all participants gave written informed consent prior to commencement of the study.
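As a rough illustration of this power calculation, an analogous computation can be run in Python with statsmodels; treating the design as a two-sided one-sample t-test is an assumption here, and the exact G*Power settings may have differed:

```python
from statsmodels.stats.power import TTestPower

# Required sample size for a one-sample t-test with d = 1.03, alpha = 0.001 and
# power = 0.95 (two-sided test assumed for illustration).
n_required = TTestPower().solve_power(effect_size=1.03, alpha=0.001, power=0.95)
print(f"required sample size: {n_required:.1f}")
```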

Design

Overview

The experiment began with a 10 min session during which participants freely navigated the virtual city (Bellmund et al., 2018b) on a computer screen to re-familiarize themselves with its layout. Afterwards, participants were moved into the scanner and completed the first run of the picture viewing task during which they viewed pictures of everyday objects as described below (Figure 1—figure supplement 1). After this baseline scan, participants learned a fixed route through the virtual city along which they encountered the objects at predefined positions (Figure 1 and Figure 1—figure supplement 1). The use of teleporters, which instantaneously moved participants to a different part of the city, enabled us to dissociate temporal from Euclidean and geodesic spatial distances between object positions (Figure 1—figure supplement 2). Subsequent to the spatio-temporal learning task, participants again underwent fMRI and completed the second run of the picture viewing task. Lastly, participants’ memory was probed outside of the MRI scanner. Specifically, participants freely recalled the objects they encountered, estimated spatial and temporal distances between them on a subjective scale, and indicated their knowledge of the positions of the objects in the virtual city on a top-down map (Deuker et al., 2016).

Spatio-temporal learning task

Participants learned the positions of everyday objects along a trajectory through the virtual city Donderstown (Bellmund et al., 2018b). This urban environment, surrounded by a range of mountains, consists of a complex street network, parks and buildings. Participants with prior knowledge of the virtual city (see Deuker et al., 2016) were recruited for the study. After the baseline scan, participants navigated the fixed route through the city along which they encountered 16 wooden chests at specified positions (Figure 1A). During the initial six laps the route was marked by traffic cones. In later laps, participants had to rely on their memory to navigate the route, but guidance in the form of traffic cones was available upon button press for laps 7–11. Participants completed 14 laps of the route in total (mean ± standard deviation of duration 71.63 ± 13.75 min), which were separated by a black screen displayed for 15 s before commencement of the next lap from the start position.

Participants were instructed to open the chests they encountered along the route by walking into them. They were then shown the object contained in that chest for 2 s on a black screen. A given chest always contained the same object for a participant, with the assignment of objects to chests randomized across participants. Therefore, each object was associated with a spatial position defined by its location in the virtual city and a temporal position described by its occurrence along the progression of the route. Importantly, we dissociated temporal relationships between object pairs (measured by time elapsed between their encounter) from the Euclidean distance between their positions in the city through the use of teleporters. Specifically, at three locations along the route participants encountered teleporters, which immediately transported them to a different position in the city where the route continued (Figure 1A). This manipulation allows the otherwise impossible encounter of objects after only a short temporal delay, but with a large Euclidean distance between them in the virtual city (Deuker et al., 2016). Indeed, temporal distances across all comparisons of object pairs were not correlated with spatial relationships measured as Euclidean distances (Figure 1—figure supplement 2A).

An alternative way of capturing the spatial structure of the task is via geodesic distances. We quantified geodesic distances as the lengths of the shortest paths between object locations. Shortest paths were calculated using a Matlab implementation of the A* search algorithm (https://mathworks.com/matlabcentral/fileexchange/56877). First, we calculated shortest paths that were allowed to cross all positions not obstructed by buildings or other obstacles (see Figure 1—figure supplement 2D for example paths). Second, because participants were instructed to only navigate on the streets during the learning task, we found shortest paths restricted to the city’s street network (example paths are shown in Figure 1—figure supplement 2E). Neither form of geodesic distances between object positions was correlated with temporal distances (Figure 1—figure supplement 2B,C). Traveled-route distances were quantified as the median across laps of the distances participants traveled between the object positions when following the route.
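The sketch below illustrates how such geodesic distances could be computed in Python on a discretized map of the city. The authors used a Matlab A* implementation, whereas this example assumes a 4-connected grid with unit step costs and mutually reachable object positions, so it is a simplified illustration rather than the original procedure:

```python
import networkx as nx
import numpy as np

def geodesic_distances(navigable, object_cells):
    """Shortest-path (geodesic) distances between object positions on a grid map.

    navigable    : 2D boolean array, True where the city can be traversed
                   (all unobstructed cells, or only cells on the street network)
    object_cells : list of (row, col) grid cells of the 16 object positions;
                   assumed to lie on navigable, mutually reachable cells
    """
    # Graph of navigable cells with 4-connected neighbours and unit step cost.
    grid = nx.grid_2d_graph(*navigable.shape)
    grid.remove_nodes_from([cell for cell in list(grid.nodes) if not navigable[cell]])

    n = len(object_cells)
    dist = np.zeros((n, n))
    for i, source in enumerate(object_cells):
        lengths = nx.single_source_shortest_path_length(grid, source)
        for j, target in enumerate(object_cells):
            dist[i, j] = lengths[target]
    return dist
```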

Picture viewing tasks

Before and after the spatio-temporal learning task, participants completed the picture viewing tasks while undergoing fMRI (Deuker et al., 2016). During these picture viewing tasks, the 16 objects from the learning task as well as an additional target object were presented. Participants were instructed to attend to the objects and to respond via button press when the target object was presented. Every object was shown 12 times in 12 blocks, with every object being shown once in every block. In each block, the order of objects was randomized. Blocks were separated by a 30 s break without object presentation. Objects were presented for 2.5 s on a black background in each trial and trials were separated by two or three TRs. These intertrial intervals occurred equally often and were randomly assigned to the object presentations. The presentation of object images was locked to the onset of the new fMRI volume. For each participant, we generated a trial order adhering to the above constraints and used the identical trial order for the picture viewing tasks before and after learning the spatio-temporal arrangement of objects along the route. Using the exact same temporal structure of object presentations in both runs rules out potential effects of temporal autocorrelation of the BOLD signal on the results, since such a spurious influence on the representational structure would be present in both tasks similarly and therefore cannot drive the pattern similarity change that we focused our analysis on (Deuker et al., 2016).
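A minimal Python sketch of how such a constrained trial order could be generated is shown below; the 30 s breaks between blocks and the locking of onsets to fMRI volumes are not modeled, and the same generated order would be reused for the pre- and post-learning runs:

```python
import numpy as np

def make_trial_order(n_route_objects=16, n_blocks=12, seed=0):
    """Generate one randomized trial order for the picture viewing task.

    The 16 route objects plus one target object each appear once per block in
    random order; inter-trial intervals of two or three TRs occur equally often
    and are assigned to trials at random.
    """
    rng = np.random.default_rng(seed)
    objects = np.arange(n_route_objects + 1)          # last object = target object
    order = np.concatenate([rng.permutation(objects) for _ in range(n_blocks)])

    n_trials = order.size
    itis = np.array([2, 3] * (n_trials // 2) + [2] * (n_trials % 2))
    rng.shuffle(itis)                                  # inter-trial intervals in TRs
    return order, itis
```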

MRI acquisition

All MRI data were collected using a 3T Siemens Skyra scanner (Siemens, Erlangen, Germany). Functional images during the picture viewing tasks were acquired with a 2D EPI sequence (voxel size 1.5 mm isotropic, TR = 2270 ms, TE = 24 ms, 40 slices, distance factor 13%, flip angle 85°, field of view (FOV) 210 × 210 × 68 mm). The FOV was oriented to fully cover the medial temporal lobes and if possible the calcarine sulcus (Deuker et al., 2016). To improve the registration of the functional images with partial coverage of the brain, 10 volumes of the same functional sequence with an increased number of slices (120 slices, TR = 6804.1 ms) were acquired (see fMRI preprocessing). Additionally, gradient field maps were acquired (for 21 participants) with a gradient echo sequence (TR = 1020 ms; TE1 = 10 ms; TE2 = 12.46 ms; flip angle = 90°; volume resolution = 3.5 × 3.5 × 2 mm; FOV = 224 × 224 mm). Further, a structural image was acquired for each participant (voxel size = 0.8 × 0.8 × 0.8 mm, TR = 2300 ms; TE = 315 ms; flip angle = 8°; in-plane resolution = 256 × 256 mm; 224 slices).

Quantification and statistical analysis

Behavioral data

Results from in-depth analysis of the behavioral data obtained during the spatio-temporal learning task as well as the memory tests conducted after fMRI scanning are reported in detail in Deuker et al. (2016). Here, we used data from the spatio-temporal learning task as predictions for multi-voxel pattern similarity (see below). Specifically, we defined the temporal structure of pairwise relationships between object pairs as the median time elapsed between object encounters across the 14 laps of the route. These times differed between participants due to differences in navigation speed (Deuker et al., 2016). Figure 1B shows the temporal distance matrix averaged across participants for illustration. In our task, chests containing objects were spread evenly along the route and hence ordinal distances between objects provide a closely related measure of temporal structure (mean ± standard deviation Pearson r = 0.993 ± 0.0014). For details of the analysis quantifying the relationship between entorhinal pattern similarity change and recall behavior see the corresponding section below.
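For illustration, the participant-specific temporal distance matrix described above could be computed as follows (a Python sketch; the input format for object encounter times is an assumption):

```python
import numpy as np

def temporal_distance_matrix(encounter_times):
    """Median time elapsed between object encounters across laps.

    encounter_times : (n_laps, n_objects) array of the times (in s, relative to
                      lap onset) at which each object was encountered on each lap
    """
    # Pairwise elapsed time within each lap ...
    per_lap = np.abs(encounter_times[:, :, None] - encounter_times[:, None, :])
    # ... and the median across the 14 laps yields one 16 x 16 distance matrix.
    return np.median(per_lap, axis=0)
```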

fMRI preprocessing

Preprocessing of fMRI data was carried out using FEAT (FMRI Expert Analysis Tool, version 6.00), part of FSL (FMRIB's Software Library, www.fmrib.ox.ac.uk/fsl, version 5.0.8), as described in Deuker et al. (2016). Functional images were submitted to motion correction and high-pass filtering (cutoff 100 s). Images were not smoothed. When available, distortion correction using the fieldmaps was applied. Using FLIRT (Jenkinson and Smith, 2001; Jenkinson et al., 2002), the functional images acquired during the picture viewing tasks were registered to the preprocessed whole-brain mean functional images, which were in turn registered to the participant’s structural scan. The linear registration from this high-resolution structural to standard MNI space (1 mm resolution) was then further refined using FNIRT nonlinear registration (Anderson et al., 2007). Representational similarity analysis of the functional images acquired during the picture viewing tasks was carried out in regions of interest co-registered to the space of the whole-brain functional images.

ROI definition

Based on functional connectivity patterns, the anterior-lateral and posterior-medial portions of human EC were identified as human homologue regions of the rodent lateral and medial EC in two independent studies (Navarro Schröder et al., 2015; Maass et al., 2015). Here, we focused on temporal coding in the alEC, building upon a recent report of temporal signals in rodent lateral EC during navigation (Tsao et al., 2018). Therefore, we used masks from Navarro Schröder et al. (2015) to perform ROI-based representational similarity analysis on our data. For each ROI, the mask was co-registered from standard MNI space (1 mm) to each participant’s functional space (number of voxels: alEC 126.7 ± 46.3; pmEC 69.0 ± 32.9). To improve anatomical precision for the EC masks, the subregion masks from Navarro Schröder et al. (2015) were each intersected with participant-specific EC masks obtained from their structural scan using the automated segmentation implemented in Freesurfer (version 5.3). ROI masks for the bilateral lateral occipital cortex were defined based on the Freesurfer segmentation and intersected with the combined brain masks from the two fMRI runs since this ROI was located at the edge of our field of view.
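A simple Python sketch of the mask intersection step is shown below; the file names and the 0.5 threshold for binarizing the subregion mask are illustrative assumptions, and the registration steps themselves (FLIRT/FNIRT, Freesurfer) are assumed to have been run beforehand:

```python
import nibabel as nib
import numpy as np

# Intersect a subregion mask (already registered to the participant's functional
# space) with that participant's Freesurfer entorhinal parcellation.
subregion = nib.load('alEC_mask_funcspace.nii.gz')          # placeholder file names
freesurfer_ec = nib.load('freesurfer_EC_funcspace.nii.gz')

roi = (subregion.get_fdata() > 0.5) & (freesurfer_ec.get_fdata() > 0)
nib.save(nib.Nifti1Image(roi.astype(np.uint8), subregion.affine), 'alEC_roi.nii.gz')
```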

Representational similarity analysis

As described in Deuker et al. (2016), we implemented representational similarity analysis (RSA; Kriegeskorte et al., 2008a; Kriegeskorte et al., 2008b) for the two picture viewing tasks individually and then analyzed changes in pattern similarity between the two picture viewing tasks, which were separated by the spatio-temporal learning phase. After preprocessing, analyses were conducted in Matlab (version 2017b, MathWorks). In a general linear model, we used the motion parameters obtained during preprocessing as predictors for the time series of each voxel in the respective ROI. Only the residuals of this GLM, that is, the part of the data that could not be explained by head motion, were used for further analysis. Stimulus presentations during the picture viewing tasks were locked to the onset of fMRI volumes, and the third volume after the onset of each picture presentation, corresponding to 4.54 to 6.81 s after stimulus onset, was extracted for RSA.
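
A minimal Matlab sketch of this nuisance regression and volume extraction follows. It is not the original code; roiData, motionParams and onsetVols are hypothetical variable names, and onsetVols is assumed to index the volume at which each stimulus presentation started.

    % Remove head-motion effects from the ROI time series and extract one
    % volume per trial for RSA.
    % roiData: nVolumes x nVoxels; motionParams: nVolumes x 6 realignment parameters.
    X = [ones(size(motionParams, 1), 1), motionParams];   % design matrix: intercept + motion
    beta = X \ roiData;                                    % GLM fit per voxel
    residuals = roiData - X * beta;                        % data not explained by head motion

    % third volume after each stimulus onset (4.54-6.81 s given TR = 2.27 s)
    trialPatterns = residuals(onsetVols + 2, :);           % nTrials x nVoxels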

For each ROI, we calculated Pearson correlation coefficients between all object presentations, excluding comparisons of presentations from the same block (each picture viewing task consisted of 12 blocks). For each pair of objects, we averaged the resulting correlation coefficients across comparisons, yielding a 16 × 16 matrix reflecting the average representational similarity of objects for each picture viewing task (Deuker et al., 2016). These matrices were Fisher z-transformed. Since the picture viewing task was conducted before and after spatio-temporal learning, the two cross-correlation matrices reflected representational similarity without and with knowledge of the spatial and temporal relationships between objects, respectively. Thus, the difference between the two matrices corresponds to the change in pattern similarity due to learning. Specifically, we subtracted the pattern similarity matrix obtained prior to learning from the pattern similarity matrix obtained after learning, resulting in a matrix of pattern similarity change for each ROI from each participant. This change in similarity of object representations was then compared to different predictions of how this effect of learning might be explained (Figure 1B).
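
The following Matlab sketch illustrates how the pairwise similarity matrices and their difference could be computed. Variable names (trialPatterns, objID, blockID, simPre, simPost) are hypothetical and the code is a simplified stand-in for the original analysis.

    % Average pattern similarity per object pair, excluding within-block comparisons.
    % trialPatterns: nTrials x nVoxels; objID (1-16) and blockID (1-12): nTrials x 1.
    nObjects = 16;
    allCorr = corr(trialPatterns');                 % trial-by-trial pattern correlations
    simMat = nan(nObjects);
    for i = 1:nObjects
        for j = 1:nObjects
            valid = (objID == i) & (objID' == j) & (blockID ~= blockID');  % exclude same-block pairs
            simMat(i, j) = mean(allCorr(valid));
        end
    end
    simMat = atanh(simMat);                         % Fisher z-transform

    % Repeating these steps for the pre- and post-learning task yields simPre and
    % simPost; pattern similarity change is their difference.
    psChange = simPost - simPre;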

To test the hypothesis that multi-voxel pattern similarity change reflects the temporal structure of the object encounters along the route, we correlated pattern similarity change with the temporal relationships between object pairs, defined by the participant-specific median time elapsed between object encounters while navigating the route. Likewise, we compared pattern similarity change to the Euclidean distances between object positions in the virtual city. We calculated Spearman correlation coefficients to quantify the fit between pattern similarity change and each prediction. We expected negative correlations because relative increases in pattern similarity are expected for objects separated by small distances compared to objects separated by large distances (Deuker et al., 2016). We compared these correlation coefficients to a surrogate distribution obtained by shuffling pattern similarity change against the respective prediction. For each of 10000 shuffles, the Spearman correlation coefficient between the two variables was calculated, yielding a surrogate distribution of correlation coefficients (Figure 1B). We quantified the size of the original correlation coefficient in comparison to the surrogate distribution. Specifically, we assessed the proportion of larger or equal correlation coefficients in the surrogate distribution and converted the resulting p-value into a z-statistic using the inverse of the normal cumulative distribution function (Deuker et al., 2016; Stelzer et al., 2013; Schlichting et al., 2015). Thus, for each participant, we obtained a z-statistic reflecting the fit of the prediction to pattern similarity change in that ROI. For visualization (Figure 2C), we averaged correlation coefficients quantifying pattern similarity change in alEC separately for comparisons of objects encountered close together or far apart in time, based on the median elapsed time between object pairs.
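
As an illustration of this shuffling procedure, a Matlab sketch could look as follows (hypothetical variable names; psChange and temporalDist as in the sketches above):

    % Compare pattern similarity change to a prediction (here: elapsed time) and
    % express the fit as a z-statistic relative to a surrogate distribution.
    pairMask = tril(true(size(psChange)), -1);      % unique object pairs
    ps   = psChange(pairMask);
    pred = temporalDist(pairMask);

    rObs = corr(ps, pred, 'Type', 'Spearman');      % observed fit (expected to be negative)

    nShuffles = 10000;
    rNull = zeros(nShuffles, 1);
    for s = 1:nShuffles
        rNull(s) = corr(ps(randperm(numel(ps))), pred, 'Type', 'Spearman');
    end

    % proportion of surrogate correlations larger than or equal to the observed one,
    % converted to a z-statistic (proportions of exactly 0 or 1 would need to be
    % handled in practice, e.g. by clipping)
    p = mean(rNull >= rObs);
    z = norminv(p);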

The z-statistics were tested on the group level using permutation-based procedures (10000 permutations) implemented in the Resampling Statistical Toolkit for Matlab (https://mathworks.com/matlabcentral/fileexchange/27960-resampling-statistical-toolkit). To test whether pattern similarity change in alEC reflected the temporal structure of object encounters, we tested the respective z-statistic against 0 using a permutation-based t-test and compared the resulting p-value against an alpha of 0.0125 (Bonferroni-corrected for four comparisons, Figure 2). Respecting within-subject dependencies, differences between the fit of temporal and spatial relationships between objects and pattern similarity change in the EC subregions were assessed using a permutation-based two-way repeated measures ANOVA with the factors EC subregion (alEC vs. pmEC) and relationship type (elapsed time vs. Euclidean distance). Planned post-hoc comparisons then included permutation-based t-tests of temporal against spatial mapping in alEC and temporal mapping between alEC and pmEC (Bonferroni-corrected alpha-level of 0.025).
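
The group-level tests relied on the external toolbox named above; the sketch below merely illustrates the logic of a sign-flipping permutation test of participant-wise z-statistics against 0 and is not the toolbox's implementation.

    % One-sample permutation test via random sign flips under the null hypothesis.
    % z: vector of participant-wise z-statistics.
    nPerm = 10000;
    n = numel(z);
    tObs  = mean(z) / (std(z) / sqrt(n));           % observed one-sample t-value
    tNull = zeros(nPerm, 1);
    for s = 1:nPerm
        flips = 2 * (rand(n, 1) > 0.5) - 1;         % randomly flip signs
        zPerm = z(:) .* flips;
        tNull(s) = mean(zPerm) / (std(zPerm) / sqrt(n));
    end
    pPerm = mean(tNull >= tObs);                    % one-sided p-value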

Accounting for adjacency effects

To rule out that increased pattern similarity for object pairs encountered at adjacent temporal positions along the route was solely driving the effect, we excluded these comparisons from the analysis when testing whether pattern similarity change in alEC reflected temporal relationships. We tested the resulting z-values, reflective of holistic temporal maps independent of direct adjacency, against 0 (Figure 2—figure supplement 1A, one-sided permutation-based t-test). The z-values of this analysis were used for the correlation with recall behavior described below and shown in Figure 2D.

Relationship between pattern similarity change and recall behavior

We assessed participants’ tendency to reproduce objects encountered closely in time along the route at nearby positions during free recall. In this task, conducted after the post-learning picture viewing task, participants had two minutes to name as many of the objects encountered in the virtual city as possible and to speak the names in the order in which they came to mind into a microphone (Deuker et al., 2016). For each pair of recalled objects, we calculated the absolute difference of their positions in the recall order and correlated these recall distances with the time elapsed between the encounters of the two objects during learning. This yields high Pearson correlation coefficients for participants who tended to recall objects encountered close together in time at nearby recall positions and objects encountered far apart in time at distant recall positions. Such a temporally organized recall order would result, for example, from mentally traversing the route during free recall. The temporal organization of participants’ free recall was significantly correlated with the strength of the relationship between elapsed time and pattern similarity change after excluding comparisons of objects encountered at directly adjacent temporal positions (Figure 2D).
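
A minimal Matlab sketch of this recall-organization measure follows (hypothetical variable names; recalledIDs lists the recalled object identities in recall order, temporalDist is the pairwise elapsed-time matrix):

    % Correlate recall distances with elapsed time between object encounters.
    n = numel(recalledIDs);
    nPairs = n * (n - 1) / 2;
    recallDist = zeros(nPairs, 1);
    timeDist   = zeros(nPairs, 1);
    k = 0;
    for a = 1:n-1
        for b = a+1:n
            k = k + 1;
            recallDist(k) = b - a;                                        % positional difference in recall
            timeDist(k)   = temporalDist(recalledIDs(a), recalledIDs(b)); % elapsed time during learning
        end
    end
    recallOrganization = corr(recallDist, timeDist);                      % Pearson correlation per participant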

Temporal intervals during the baseline scan

We interpret pattern similarity change between the picture viewing tasks as being induced by the learning task. To rule out effects of temporal intervals between objects experienced outside of the virtual city, we correlated pattern similarity change in the alEC with temporal relationships during the pre-learning baseline scan. Specifically, we calculated the average temporal distance between the presentations of each pair of objects during the first picture viewing task. Analogous to the time elapsed during the learning task, we correlated these temporal distances with pattern similarity change in the alEC. One participant was excluded from this analysis due to a z-value more than 1.5 times the interquartile range below the lower quartile. We tested whether pattern similarity change differed from zero and whether correlations with elapsed time during the learning task were more negative than correlations with temporal distance during the first picture viewing task (one-sided test) using permutation-based t-tests (Figure 2—figure supplement 1B).

Timeline reconstruction

To reconstruct the timeline of events from alEC pattern similarity change we combined multidimensional scaling with Procrustes analysis (Figure 2A). We first rescaled the pattern similarity matrix to a range from 0 to 1 and then converted it to a distance matrix (distance = 1 − similarity). We averaged the distance matrices across participants and subjected the resulting matrix to classical multidimensional scaling. Since we were aiming to recover the timeline of events, we extracted coordinates underlying the averaged pattern distance matrix along one dimension. In a next step, we fitted the resulting coordinates to the times of object encounters along the route, which were also averaged across participants, using Procrustes analysis. This analysis finds the linear transformation, allowing scaling and reflections, that minimizes the sum of squared errors between the two sets of temporal coordinates. To assess whether the reconstruction of the temporal relationships between memories was above chance, we correlated the reconstructed temporal coordinates with the true temporal coordinates using Pearson correlation (Figure 2B). 95% confidence intervals were bootstrapped using the Robust Correlation Toolbox (Pernet et al., 2012). Additionally, we compared the goodness of fit of the Procrustes transform—the Procrustes distance, which measures the deviance between true and reconstructed coordinates—to a surrogate distribution. Specifically, we randomly shuffled the true temporal coordinates and then mapped the coordinates from multidimensional scaling onto these shuffled timelines. We computed the Procrustes distance for each of 10000 iterations. We quantified the proportion of random fits in the surrogate distribution better than the fit to the true timeline (i.e. smaller Procrustes distances) and expressed it as a p-value to demonstrate that our reconstruction exceeds chance level (Figure 2C–D).
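
The reconstruction can be sketched using Matlab's cmdscale and procrustes functions, as below. distMat and trueTimes are hypothetical names for the group-averaged pattern distance matrix (assumed symmetric with zeros on the diagonal) and the averaged encounter times of the 16 objects.

    % Reconstruct the event timeline from pattern distances.
    coordsAll = cmdscale(distMat);                       % classical multidimensional scaling
    coords = coordsAll(:, 1);                            % keep one dimension (timeline)

    % Procrustes fit; Matlab's implementation allows scaling and reflection by default.
    [procDist, reconTimes] = procrustes(trueTimes, coords);

    r = corr(reconTimes, trueTimes);                     % quality of the reconstruction

    % surrogate distribution: fit the MDS coordinates to shuffled timelines
    nShuffles = 10000;
    nullDist = zeros(nShuffles, 1);
    for s = 1:nShuffles
        nullDist(s) = procrustes(trueTimes(randperm(numel(trueTimes))), coords);
    end
    p = mean(nullDist <= procDist);                      % proportion of better random fits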

Signal-to-noise ratio

We quantified the temporal and spatial signal-to-noise ratio for each ROI. Temporal signal-to-noise was calculated for each voxel as the temporal mean divided by the temporal standard deviation for both runs of the picture viewing task separately. Values were averaged across the two runs and across voxels in the ROIs. Spatial signal-to-noise ratio was calculated for each volume as the mean signal divided by the standard deviation across voxels in the ROI. The resulting values were averaged across volumes of the time series and averaged across the two runs. Signal-to-noise ratios were compared between ROIs using permutation-based t-tests.
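
For clarity, these two quantities correspond to the following computations for a single run (sketch; roiData is a hypothetical nVolumes × nVoxels matrix):

    % Temporal SNR: per-voxel mean over time divided by its standard deviation,
    % averaged across voxels.
    tSNR = mean(mean(roiData, 1) ./ std(roiData, 0, 1));

    % Spatial SNR: per-volume mean across voxels divided by the standard deviation
    % across voxels, averaged across volumes.
    sSNR = mean(mean(roiData, 2) ./ std(roiData, 0, 2));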

Classification analysis

To examine whether object representations were stable between the pre- and the post-learning scan, we turned to pattern classification techniques and examined whether classifiers trained on the pre-learning scan exhibited systematic errors when tested on the post-learning data. Using the same time window as for the representational similarity analysis described above, we used data corresponding to the activation patterns evoked by individual object presentations during the picture viewing tasks from the LOC, alEC and pmEC. Data for each voxel within an ROI were z-scored separately for the pre- and post-learning scan. For the pre-learning data of each ROI, we trained support vector machines on the binary classification of object identities in a one-versus-one coding design using the Matlab (version 2018b) function fitcecoc. Then, we tested the resulting classifiers on the independent data from the post-learning picture viewing task. We tested for stable object representations by comparing the percentages of correctly predicted object labels against chance with permutation-based one-sample t-tests (lag 0 in Figure 2—figure supplement 4B). Participant-specific chance levels were determined as average classifier accuracies when comparing classifier predictions to randomly permuted trial labels (1000 permutations).
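
A simplified Matlab sketch of this classification analysis follows (hypothetical variable names; prePatterns/postPatterns are trials × voxels matrices, preLabels/postLabels hold the object identities 1–16):

    % Train one-vs-one SVMs on pre-learning patterns, test on post-learning patterns.
    prePatterns  = zscore(prePatterns);                 % z-score each voxel per scan
    postPatterns = zscore(postPatterns);

    mdl = fitcecoc(prePatterns, preLabels, 'Coding', 'onevsone');
    predicted = predict(mdl, postPatterns);
    accuracy = mean(predicted == postLabels);

    % participant-specific chance level from permuted trial labels
    nPerm = 1000;
    chanceAcc = zeros(nPerm, 1);
    for s = 1:nPerm
        chanceAcc(s) = mean(predicted == postLabels(randperm(numel(postLabels))));
    end
    chanceLevel = mean(chanceAcc);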

In a second step, we examined classifier evidence as a function of the objects’ positions along the route. If learned associations between objects lead to the reactivation of representations corresponding to objects from neighboring sequence positions, one might expect systematic classifier errors. We calculated classifier evidence for the three objects preceding and following a given object by shifting the true labels for each lag. At each lag, we excluded trials where shifted labels were invalid, that is, not in the range of 1–16 for the 16 objects along the route, when calculating the percentage of hits. Chance levels were determined by randomly permuting the true labels for each lag. Classifier performance was tested against chance levels using permutation-based t-tests at each lag (Figure 2—figure supplement 4B). Note that classifier performance is below chance for some preceding and upcoming sequence positions due to high accuracy at lag 0.

Acknowledgements

The authors would like to thank Raphael Kaplan for comments on a previous version of this manuscript. CFD’s research is supported by the Max Planck Society; the European Research Council (ERCCoG GEOCOG 724836); the Kavli Foundation, the Centre of Excellence scheme of the Research Council of Norway – Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, the National Infrastructure scheme of the Research Council of Norway – NORBRAIN; and the Netherlands Organisation for Scientific Research (NWO-Vidi 452-12-009; NWO-Gravitation 024-001-006; NWO-MaGW 406-14-114; NWO-MaGW 406-15-291). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Jacob LS Bellmund, Email: bellmund@cbs.mpg.de.

Christian F Doeller, Email: doeller@cbs.mpg.de.

Ida Momennejad, Princeton University, United States.

Timothy E Behrens, University of Oxford, United Kingdom.

Funding Information

This paper was supported by the following grants:

  • Max-Planck-Gesellschaft to Christian F Doeller.

  • European Research Council ERCCoG GEOCOG 724836 to Christian F Doeller.

  • Nederlandse Organisatie voor Wetenschappelijk Onderzoek NWO‐Vidi 452‐12‐009 to Christian F Doeller.

  • Nederlandse Organisatie voor Wetenschappelijk Onderzoek NWO‐Gravitation 024‐001‐006 to Christian F Doeller.

  • Nederlandse Organisatie voor Wetenschappelijk Onderzoek NWO‐MaGW 406‐14‐114 to Christian F Doeller.

  • Nederlandse Organisatie voor Wetenschappelijk Onderzoek NWO‐MaGW 406‐15‐291 to Christian F Doeller.

  • Kavli Foundation to Christian F Doeller.

  • Research Council of Norway 223262 to Christian F Doeller.

  • The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits to Christian F Doeller.

  • NORBRAIN – National Infrastructure scheme of the Research Council of Norway to Christian F Doeller.

Additional information

Competing interests

No competing interests declared.

Author contributions

Conceptualization, Formal analysis, Investigation, Visualization, Writing—original draft, Writing—review and editing.

Conceptualization, Formal analysis, Investigation, Writing—review and editing.

Conceptualization, Supervision, Funding acquisition, Writing—review and editing.

Ethics

Human subjects: All procedures were approved by the local ethics committee (CMO Regio Arnhem Nijmegen, CMO2001/095, version 6.2) and all participants gave written informed consent prior to commencement of the study.

Additional files

Transparent reporting form
DOI: 10.7554/eLife.45333.019

Data availability

Source data files have been provided for Figures 2, 3 and 4. The virtual city Donderstown is available at https://osf.io/78uph/.

References

  1. Anderson J, Jenkinson M, Smith S. Non-Linear Registration, Aka Spatial Normalisation. FMRIB Technical Report TR07JA. Oxford, United Kingdom: FMRIB Centre; 2007.
  2. Barnett AJ, O'Neil EB, Watson HC, Lee AC. The human Hippocampus is sensitive to the durations of events and intervals within a sequence. Neuropsychologia. 2014;64:1–12. doi: 10.1016/j.neuropsychologia.2014.09.011.
  3. Bellmund JLS, Deuker L, Navarro Schröder T, Doeller CF. Grid-cell representations in mental simulation. eLife. 2016;5:e17089. doi: 10.7554/eLife.17089.
  4. Bellmund JLS, Gärdenfors P, Moser EI, Doeller CF. Navigating cognition: spatial codes for human thinking. Science. 2018a;362:eaat6766. doi: 10.1126/science.aat6766.
  5. Bellmund JLS, Deuker L, Doeller CF. Donderstown. Open Science Framework. 2018b https://osf.io/78uph/
  6. Bosch SE, Jehee JF, Fernández G, Doeller CF. Reinstatement of associative memories in early visual cortex is signaled by the Hippocampus. Journal of Neuroscience. 2014;34:7493–7500. doi: 10.1523/JNEUROSCI.0805-14.2014.
  7. Brunec IK, Javadi AH, Zisch FEL, Spiers HJ. Contracted time and expanded space: the impact of circumnavigation on judgements of space and time. Cognition. 2017;166:425–432. doi: 10.1016/j.cognition.2017.06.004.
  8. Bush D, Barry C, Manson D, Burgess N. Using grid cells for navigation. Neuron. 2015;87:507–520. doi: 10.1016/j.neuron.2015.07.006.
  9. Chrastil ER, Sherrill KR, Hasselmo ME, Stern CE. There and back again: hippocampus and retrosplenial cortex track homing distance during human path integration. Journal of Neuroscience. 2015;35:15442–15452. doi: 10.1523/JNEUROSCI.1209-15.2015.
  10. Copara MS, Hassan AS, Kyle CT, Libby LA, Ranganath C, Ekstrom AD. Complementary roles of human hippocampal subregions during retrieval of spatiotemporal context. Journal of Neuroscience. 2014;34:6834–6842. doi: 10.1523/JNEUROSCI.5341-13.2014.
  11. Davachi L, DuBrow S. How the Hippocampus preserves order: the role of prediction and context. Trends in Cognitive Sciences. 2015;19:92–99. doi: 10.1016/j.tics.2014.12.004.
  12. Deuker L, Bellmund JLS, Navarro Schröder T, Doeller CF. An event map of memory space in the Hippocampus. eLife. 2016;5:e16534. doi: 10.7554/eLife.16534.
  13. Doeller CF, Barry C, Burgess N. Evidence for grid cells in a human memory network. Nature. 2010;463:657–661. doi: 10.1038/nature08704.
  14. DuBrow S, Davachi L. Temporal memory is shaped by encoding stability and intervening item reactivation. Journal of Neuroscience. 2014;34:13998–14005. doi: 10.1523/JNEUROSCI.2535-14.2014.
  15. DuBrow S, Davachi L. Temporal binding within and across events. Neurobiology of Learning and Memory. 2016;134:107–114. doi: 10.1016/j.nlm.2016.07.011.
  16. Eichenbaum H. Time cells in the Hippocampus: a new dimension for mapping memories. Nature Reviews Neuroscience. 2014;15:732–744. doi: 10.1038/nrn3827.
  17. Ekstrom AD, Ranganath C. Space, time, and episodic memory: the Hippocampus is all over the cognitive map. Hippocampus. 2018;28:680–687. doi: 10.1002/hipo.22750.
  18. Epstein RA, Patai EZ, Julian JB, Spiers HJ. The cognitive map in humans: spatial navigation and beyond. Nature Neuroscience. 2017;20:1504–1513. doi: 10.1038/nn.4656.
  19. Ezzyat Y, Davachi L. Similarity breeds proximity: pattern similarity within and across contexts is related to later mnemonic judgments of temporal proximity. Neuron. 2014;81:1179–1189. doi: 10.1016/j.neuron.2014.01.042.
  20. Fiete IR, Burak Y, Brookings T. What grid cells convey about rat location. Journal of Neuroscience. 2008;28:6858–6871. doi: 10.1523/JNEUROSCI.5684-07.2008.
  21. Folkerts S, Rutishauser U, Howard MW. Human episodic memory retrieval is accompanied by a neural contiguity effect. The Journal of Neuroscience. 2018;38:4200–4211. doi: 10.1523/JNEUROSCI.2312-17.2018.
  22. Garvert MM, Dolan RJ, Behrens TE. A map of abstract relational knowledge in the human hippocampal-entorhinal cortex. eLife. 2017;6:e17086. doi: 10.7554/eLife.17086.
  23. Gauthier JL, Tank DW. A dedicated population for reward coding in the Hippocampus. Neuron. 2018;99:179–193. doi: 10.1016/j.neuron.2018.06.008.
  24. Grill-Spector K, Kourtzi Z, Kanwisher N. The lateral occipital complex and its role in object recognition. Vision Research. 2001;41:1409–1422. doi: 10.1016/S0042-6989(01)00073-6.
  25. Heys JG, Dombeck DA. Evidence for a subcircuit in medial entorhinal cortex representing elapsed time during immobility. Nature Neuroscience. 2018;21:1574–1582. doi: 10.1038/s41593-018-0252-8.
  26. Howard MW, Fotedar MS, Datey AV, Hasselmo ME. The temporal context model in spatial navigation and relational learning: toward a common explanation of medial temporal lobe function across domains. Psychological Review. 2005;112:75–116. doi: 10.1037/0033-295X.112.1.75.
  27. Howard MW, Viskontas IV, Shankar KH, Fried I. Ensembles of human MTL neurons "jump back in time" in response to a repeated stimulus. Hippocampus. 2012;22:1833–1847. doi: 10.1002/hipo.22018.
  28. Howard LR, Javadi AH, Yu Y, Mill RD, Morrison LC, Knight R, Loftus MM, Staskute L, Spiers HJ. The Hippocampus and entorhinal cortex encode the path and Euclidean distances to goals during navigation. Current Biology. 2014;24:1331–1340. doi: 10.1016/j.cub.2014.05.001.
  29. Howard MW. Memory as perception of the past: compressed time in mind and brain. Trends in Cognitive Sciences. 2018;22:124–136. doi: 10.1016/j.tics.2017.11.004.
  30. Howard MW, Kahana MJ. A distributed representation of temporal context. Journal of Mathematical Psychology. 2002;46:269–299. doi: 10.1006/jmps.2001.1388.
  31. Hsieh LT, Gruber MJ, Jenkins LJ, Ranganath C. Hippocampal activity patterns carry information about objects in temporal context. Neuron. 2014;81:1165–1178. doi: 10.1016/j.neuron.2014.01.015.
  32. Jenkins LJ, Ranganath C. Prefrontal and medial temporal lobe activity at encoding predicts temporal context memory. Journal of Neuroscience. 2010;30:15558–15565. doi: 10.1523/JNEUROSCI.1337-10.2010.
  33. Jenkins LJ, Ranganath C. Distinct neural mechanisms for remembering when an event occurred. Hippocampus. 2016;26:554–559. doi: 10.1002/hipo.22571.
  34. Jenkinson M, Bannister P, Brady M, Smith S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage. 2002;17:825–841. doi: 10.1006/nimg.2002.1132.
  35. Jenkinson M, Smith S. A global optimisation method for robust affine registration of brain images. Medical Image Analysis. 2001;5:143–156. doi: 10.1016/S1361-8415(01)00036-6.
  36. Jensen O, Lisman JE. Hippocampal sequence-encoding driven by a cortical multi-item working memory buffer. Trends in Neurosciences. 2005;28:67–72. doi: 10.1016/j.tins.2004.12.001.
  37. Konkel A, Cohen NJ. Relational memory and the Hippocampus: representations and methods. Frontiers in Neuroscience. 2009;3:166–174. doi: 10.3389/neuro.01.023.2009.
  38. Kraus BJ, Brandon MP, Robinson RJ, Connerney MA, Hasselmo ME, Eichenbaum H. During running in place, grid cells integrate elapsed time and distance run. Neuron. 2015;88:578–589. doi: 10.1016/j.neuron.2015.09.031.
  39. Kriegeskorte N, Mur M, Ruff DA, Kiani R, Bodurka J, Esteky H, Tanaka K, Bandettini PA. Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron. 2008a;60:1126–1141. doi: 10.1016/j.neuron.2008.10.043.
  40. Kriegeskorte N, Mur M, Bandettini P. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience. 2008b;2:4. doi: 10.3389/neuro.06.004.2008.
  41. Kumaran D, Maguire EA. An unexpected sequence of events: mismatch detection in the human Hippocampus. PLOS Biology. 2006;4:e424. doi: 10.1371/journal.pbio.0040424.
  42. Kyle CT, Smuda DN, Hassan AS, Ekstrom AD. Roles of human hippocampal subfields in retrieval of spatial and temporal context. Behavioural Brain Research. 2015;278:549–558. doi: 10.1016/j.bbr.2014.10.034.
  43. Lewandowsky S, Murdock BB. Memory for serial order. Psychological Review. 1989;96:25–57. doi: 10.1037/0033-295X.96.1.25.
  44. Lositsky O, Chen J, Toker D, Honey CJ, Shvartsman M, Poppenk JL, Hasson U, Norman KA. Neural pattern change during encoding of a narrative predicts retrospective duration estimates. eLife. 2016;5:e16070. doi: 10.7554/eLife.16070.
  45. Maass A, Berron D, Libby LA, Ranganath C, Düzel E. Functional subregions of the human entorhinal cortex. eLife. 2015;4:e06426. doi: 10.7554/eLife.06426.
  46. MacDonald CJ, Lepage KQ, Eden UT, Eichenbaum H. Hippocampal "time cells" bridge the gap in memory for discontiguous events. Neuron. 2011;71:737–749. doi: 10.1016/j.neuron.2011.07.012.
  47. Mathis A, Herz AV, Stemmler M. Optimal population codes for space: grid cells outperform place cells. Neural Computation. 2012;24:2280–2317. doi: 10.1162/NECO_a_00319.
  48. Mau W, Sullivan DW, Kinsky NR, Hasselmo ME, Howard MW, Eichenbaum H. The same hippocampal CA1 population simultaneously codes temporal information over multiple timescales. Current Biology. 2018;28:1499–1508. doi: 10.1016/j.cub.2018.03.051.
  49. Metcalfe J, Murdock BB. An encoding and retrieval model of single-trial free recall. Journal of Verbal Learning and Verbal Behavior. 1981;20:161–189. doi: 10.1016/S0022-5371(81)90365-0.
  50. Milivojevic B, Vicente-Grabovetsky A, Doeller CF. Insight reconfigures hippocampal-prefrontal memories. Current Biology. 2015;25:821–830. doi: 10.1016/j.cub.2015.01.033.
  51. Momennejad I, Howard MW. Predicting the future with multi-scale successor representations. bioRxiv. 2018 doi: 10.1101/449470.
  52. Montchal ME, Reagh ZM, Yassa MA. Precise temporal memories are supported by the lateral entorhinal cortex in humans. Nature Neuroscience. 2019;22:284–288. doi: 10.1038/s41593-018-0303-1.
  53. Moser EI, Moser MB, McNaughton BL. Spatial representation in the hippocampal formation: a history. Nature Neuroscience. 2017;20:1448–1464. doi: 10.1038/nn.4653.
  54. Navarro Schröder T, Haak KV, Zaragoza Jimenez NI, Beckmann CF, Doeller CF. Functional topography of the human entorhinal cortex. eLife. 2015;4:e06738. doi: 10.7554/eLife.06738.
  55. Nielson DM, Smith TA, Sreekumar V, Dennis S, Sederberg PB. Human Hippocampus represents space and time during retrieval of real-world memories. PNAS. 2015;112:11078–11083. doi: 10.1073/pnas.1507104112.
  56. Nyberg L, Habib R, McIntosh AR, Tulving E. Reactivation of encoding-related brain activity during memory retrieval. PNAS. 2000;97:11120–11124. doi: 10.1073/pnas.97.20.11120.
  57. O'Keefe J, Dostrovsky J. The Hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research. 1971;34:171–175. doi: 10.1016/0006-8993(71)90358-1.
  58. Pastalkova E, Itskov V, Amarasingham A, Buzsáki G. Internally generated cell assembly sequences in the rat Hippocampus. Science. 2008;321:1322–1327. doi: 10.1126/science.1159775.
  59. Pernet CR, Wilcox R, Rousselet GA. Robust correlation analyses: false positive and power validation using a new open source Matlab toolbox. Frontiers in Psychology. 2012;3. doi: 10.3389/fpsyg.2012.00606.
  60. Polyn SM, Natu VS, Cohen JD, Norman KA. Category-specific cortical activity precedes retrieval during memory search. Science. 2005;310:1963–1966. doi: 10.1126/science.1117645.
  61. Qasim SE, Miller J, Inman CS, Gross R, Willie JT, Lega B, Lin J-J, Sharan A, Wu C, Sperling MR. Single neurons in the human entorhinal cortex remap to distinguish individual spatial memories. bioRxiv. 2018 doi: 10.1101/433862.
  62. Ranganath C. Time, memory, and the legacy of Howard Eichenbaum. Hippocampus. 2019;29:146–161. doi: 10.1002/hipo.23007.
  63. Ranganath C, Ritchey M. Two cortical systems for memory-guided behaviour. Nature Reviews Neuroscience. 2012;13:713–726. doi: 10.1038/nrn3338.
  64. Ritchey M, Libby LA, Ranganath C. Cortico-hippocampal systems involved in memory and cognition: the PMAT framework. Progress in Brain Research. 2015;219:45–64. doi: 10.1016/bs.pbr.2015.04.001.
  65. Sabariego M, Schönwald A, Boublil BL, Zimmerman DT, Ahmadi S, Gonzalez N, Leibold C, Clark RE, Leutgeb JK, Leutgeb S. Time cells in the Hippocampus are neither dependent on medial entorhinal cortex inputs nor necessary for spatial working memory. Neuron. 2019;102:1235–1248. doi: 10.1016/j.neuron.2019.04.005.
  66. Sarel A, Finkelstein A, Las L, Ulanovsky N. Vectorial representation of spatial goals in the Hippocampus of bats. Science. 2017;355:176–180. doi: 10.1126/science.aak9589.
  67. Schapiro AC, Kustner LV, Turk-Browne NB. Shaping of object representations in the human medial temporal lobe based on temporal regularities. Current Biology. 2012;22:1622–1627. doi: 10.1016/j.cub.2012.06.056.
  68. Schapiro AC, Turk-Browne NB, Norman KA, Botvinick MM. Statistical learning of temporal community structure in the Hippocampus. Hippocampus. 2016;26:3–8. doi: 10.1002/hipo.22523.
  69. Schlichting ML, Mumford JA, Preston AR. Learning-related representational changes reveal dissociable integration and separation signatures in the Hippocampus and prefrontal cortex. Nature Communications. 2015;6:8151. doi: 10.1038/ncomms9151.
  70. Scoville WB, Milner B. Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery & Psychiatry. 1957;20:11–21. doi: 10.1136/jnnp.20.1.11.
  71. Sherrill KR, Erdem UM, Ross RS, Brown TI, Hasselmo ME, Stern CE. Hippocampus and retrosplenial cortex combine path integration signals for successful navigation. Journal of Neuroscience. 2013;33:19304–19313. doi: 10.1523/JNEUROSCI.1825-13.2013.
  72. Spiers HJ, Barry C. Neural systems supporting navigation. Current Opinion in Behavioral Sciences. 2015;1:47–55. doi: 10.1016/j.cobeha.2014.08.005.
  73. Spiers HJ, Maguire EA. A navigational guidance system in the human brain. Hippocampus. 2007;17:618–626. doi: 10.1002/hipo.20298.
  74. Squire LR. The neuropsychology of human memory. Annual Review of Neuroscience. 1982;5:241–273. doi: 10.1146/annurev.ne.05.030182.001325.
  75. Stachenfeld KL, Botvinick MM, Gershman SJ. The Hippocampus as a predictive map. Nature Neuroscience. 2017;20:1643–1653. doi: 10.1038/nn.4650.
  76. Stelzer J, Chen Y, Turner R. Statistical inference and multiple testing correction in classification-based multi-voxel pattern analysis (MVPA): random permutations and cluster size control. NeuroImage. 2013;65:69–82. doi: 10.1016/j.neuroimage.2012.09.063.
  77. Thavabalasingam S, O'Neil EB, Lee ACH. Multivoxel pattern similarity suggests the integration of temporal duration in hippocampal event sequence representations. NeuroImage. 2018;178:136–146. doi: 10.1016/j.neuroimage.2018.05.036.
  78. Thavabalasingam S, O'Neil EB, Tay J, Nestor A, Lee ACH. Evidence for the incorporation of temporal duration information in human hippocampal long-term memory sequence representations. PNAS. 2019;116:6407–6414. doi: 10.1073/pnas.1819993116.
  79. Tsao A, Sugar J, Lu L, Wang C, Knierim JJ, Moser MB, Moser EI. Integrating time from experience in the lateral entorhinal cortex. Nature. 2018;561:57–62. doi: 10.1038/s41586-018-0459-6.
  80. Tubridy S, Davachi L. Medial temporal lobe contributions to episodic sequence encoding. Cerebral Cortex. 2011;21:272–280. doi: 10.1093/cercor/bhq092.
  81. Viard A, Doeller CF, Hartley T, Bird CM, Burgess N. Anterior Hippocampus and goal-directed spatial decision making. Journal of Neuroscience. 2011;31:4613–4621. doi: 10.1523/JNEUROSCI.4640-10.2011.
  82. Wang F, Diana RA. Temporal context in human fMRI. Current Opinion in Behavioral Sciences. 2017;17:57–64. doi: 10.1016/j.cobeha.2017.06.004.
  83. Wheeler ME, Petersen SE, Buckner RL. Memory's echo: vivid remembering reactivates sensory-specific cortex. PNAS. 2000;97:11125–11129. doi: 10.1073/pnas.97.20.11125.
  84. Witter MP, Doan TP, Jacobsen B, Nilssen ES, Ohara S. Architecture of the entorhinal cortex: a review of entorhinal anatomy in rodents with some comparative notes. Frontiers in Systems Neuroscience. 2017;11:46. doi: 10.3389/fnsys.2017.00046.

Decision letter

Editor: Ida Momennejad1
Reviewed by: Marc Howard2, H Freyja Ólafsdóttir3

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your article "Structuring time in human lateral entorhinal cortex" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Ida Momennejad as the Reviewing Editor and Reviewer #2, and the evaluation has been overseen by Timothy Behrens as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Marc Howard (Reviewer #1); H Freyja Ólafsdóttir (Reviewer #3).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

Bellmund and colleagues describe interesting findings pointing to a role of the alEC in temporal coding. Specifically, using an fMRI RSA analysis, the authors show that objects encountered in close temporal proximity during a spatiotemporal task exhibit increased pattern similarity. Importantly, the degree of temporal coding in the alEC correlated with behavioural performance. This paper examines changes in the representation of stimuli presented in a virtual world as a function of the time between and the (virtual) distance between their presentations. Previous work from the same group (using the same dataset as a matter of fact) and others has shown that learning causes representations of stimuli presented close together in time to become more similar to one another. This paper reexamines these data to specifically examine the role of subregions of the entorhinal cortex. This is inspired by a recent rodent study from the Moser group (Tsao et al., 2018) that has received a great deal of attention. The primary novel result is that there is an interaction of distance accounted for by time and space in the anterior-lateral entorhinal cortex (alEC) as compared to the posterior-medial entorhinal cortex (pmEC).

This paper is an advance on previous work, an fMRI paradigm published in eLife by the same group. The authors ask a timely question about the representation of temporal structures in the human anterior lateral entorhinal cortex. The paper is very well written and the reasoning is clear. All reviewers agree that the manuscript presents a novel finding, which adds to the nascent body of research dedicated to studying the coding of space and time in the brain. Moreover, the work agrees with findings recently reported in rodents (Tsao et al., 2018). Overall, the reviewers find the manuscript timely and of broad interest and would like to see it published. Comments and revision requests are summarized below.

Essential Revisions:

Broadly: Is the finding clearly about time or something else (e.g. ordinal information, object reactivation, distance, etc.)?

Different reviewers had questions about the temporal nature of the findings and other interpretations. One of the reviewers notes that unlike the Tsao paper, this manuscript does not show direct evidence for a temporal representation in alEC. Here temporal (or related) information is incorporated into the response to a repeated stimulus perhaps due to recovery of temporal information – a jump back in time. The contrast to spatial information in pmEC is quite meaningful and validates the relevance of a large body of rodent work for studies of human memory. These results are just as well predicted by models of human memory and localize the temporal vs. spatial aspect to the alEC vs. pmEC. Many models of human memory (including the temporal context model as well as a Laplace transform of time) hypothesize gradually-decaying activation of preceding information. This is also just what the Tsao Nature paper found. While one can choose to call this a decaying trace of an object representation or a temporal code, the authors are advised to clarify that the temporal nature is not exclusively established early on, throughout the manuscript and by including the ordinal figure in the main results, and in the Discussion.

The comments are listed below. Unless the authors are inclined to conduct control experiments that verify the 'temporal' nature of the findings, they are advised to revise the title as the present findings do not clearly delineate a temporal code. Detailed comments are listed below.

– The authors state their findings corroborate those of Tsao and colleagues (2018), which show temporal coding in the EC in rats. Specifically, Tsao et al. say they find EC codes for moment-to-moment changes in experience and explain the temporal signal may be less of a clocking signal and more of an experience timestamp.

Do the authors interpret their finding along the same lines? Moreover, the authors show their effect is replicated when they do an ordinal analysis. Thus, do the authors claim the alEC signals temporal order specifically rather than time or experience? It would be useful for the authors to elaborate on the interpretation of their effect especially in light of the interpretation given by Tsao et al. for the EC temporal coding.

– Regarding the analysis which included the pmEC (which is meant to be the human homologue to the MEC in rats), should one not expect to see a big change in pattern similarity for objects spatially close together? i.e. pmEC is meant to contain grid cells which code for space, rather than time (or at least more robustly than time). Could the authors comment on why they think they do not observe the reverse effect in pmEC compared to alEC?

– Another reason this is important regards the time scale of temporal distances in the Tsao experiment compared to the present study. Given the large time scales here, and the fact that the rodent electrophysiology data related to smaller time scales, it is possible that this result has to do with other measures/scales of distance. Please discuss.

– Do the authors believe that short/long time scales are coded by the same region? Do they think there's a gradient in alEC that allows computational of long and short temporal/ordinal distances?

– The authors consider "spatial distance" to only denote Euclidean distance. While the teleport condition clearly can distinguish between Euclidean spatial distance and other distances, it is not sufficient to infer temporal distance. For instance, path distance and geodesic distance are relevant here. This is a crucial point, since other sorts of distance such as path distance and communicability distance could be correlated with representational similarity results that the authors take to be uniquely indicative of time distance – depending on the sequence the participants observed.

This is a major concern: dissociating both Euclidean and non-Euclidean spatial distances from temporal distance requires control experiments and a bit of computational work.

Figure 2—figure supplement 1: This figure regarding the relevance of ordinal distances should be in the main text. In addition, please discuss the relationship between time and ordinal distance and compare them to other types of distance (path/geodesic, communicability, etc.). Specifically, if the correlation to other types of distance turned out to be significant, that should be shown in the main text as well. Do other distance measures not previously discussed in the paper also capture the patterns in the alEC?

On a related note: in the Materials and methods, the analysis does not consider spatial distance per se, but merely Euclidean distance. While the analysis can dissociate Euclidean distance from ordinal/temporal distance, the authors thus cannot conclude that they have dissociated space and time. This section can be revised depending on the results of the analyses/simulations the authors would conduct involving other forms of distance mentioned above.

As a potential future experiment, if the speed of motion was controlled (e.g. participants were taken from A to B with fixed geodesic or ordinal or path or goal or communicability distances with controlled varying speeds) then it would be easier to decide whether the distance denotes time or some form of non-Euclidean spatial distance such as path distance (which will probably correlate with ordinal distance). Changing speed would make it clear whether path distance or geodesic or communicability is relevant here or time distance can explain the variance observed in the region of interest.

– While it is not necessary that the authors run the following analyses, here are some thoughts. It is possible that the authors have the required data to test the demands of the speed-controlled experiment. Specifically, since the experiments are self-paced, it is possible that the authors can compare pattern similarity in temporal distances separated by geodesic distance x speed variations. If the present experiment does not include sufficient number of trials per subject to make this analysis work, perhaps a super-group analysis may be possible (which the authors also are not required to perform). For instance, one could use SRM or other methods of pooling individual data into a group space and explore the nature of the distance x speed x time interaction.

Whether the authors run the previous analysis or not, this is an important point also because they report similarity to ordinal results, which will also be related to path distance, communicability distance, and other forms of distance measures that can be derived from a graph of the sequence or successor predictive representations (and can still be dissociated from Euclidean distance).

The authors could discuss their reasoning about why the potential object trace, ordinal, or distance code finding is temporal in nature in the Discussion. They could discuss future experiments and analyses that can discern whether this is an exclusively temporal representation or whether it could reflect an object trace, path distance, or another spatially relevant distance in their future directions (e.g. time can be discerned by varying speed across similar distances in a controlled fashion). However, making a strong claim about "time" in the title given the present findings and the role of ordinal information seems unwarranted.

eLife. 2019 Aug 6;8:e45333. doi: 10.7554/eLife.45333.023

Author response


Essential Revisions

Broadly: Is the finding clearly about time or something else (e.g. ordinal information, object reactivation, distance, etc.)?

Different reviewers had questions about the temporal nature of the findings and other interpretations. One of the reviewers notes that unlike the Tsao paper, this manuscript does not show direct evidence for a temporal representation in alEC. Here temporal (or related) information is incorporated into the response to a repeated stimulus perhaps due to recovery of temporal information – a jump back in time. The contrast to spatial information in pmEC is quite meaningful and validates the relevance of a large body of rodent work for studies of human memory. These results are just as well predicted by models of human memory and localize the temporal vs. spatial aspect to the alEC vs. pmEC. Many models of human memory (including the temporal context model as well as a Laplace transform of time) hypothesize gradually-decaying activation of preceding information. This is also just what the Tsao Nature paper found. While one can choose to call this a decaying trace of an object representation or a temporal code, the authors are advised to clarify that the temporal nature is not exclusively established early on, throughout the manuscript and by including the ordinal figure in the main results, and in the Discussion.

The comments are listed below. Unless the authors are inclined to conduct control experiments that verify the 'temporal' nature of the findings, they are advised to revise the title as the present findings do not clearly delineate a temporal code. Detailed comments are listed below.

We would like to thank the reviewers for their positive evaluation of our manuscript and the constructive feedback. We appreciate the helpful comments and suggestions and have taken the opportunity to clarify our views on the points raised by the reviewers. In line with the reviewers’ suggestions we have refined the interpretation of our results in the revised manuscript. As pointed out correctly, we do not measure temporal information during the learning task. Rather, our analysis capitalizes on changes in multi-voxel activity patterns elicited by the object cues during the picture viewing tasks before and after learning. In our design, we present stimuli in random order during the picture viewing task and yet observe activity patterns that reflect the temporal structure experienced during learning. Indeed, this is consistent with the notion of a “jump back in time” elicited by the object representations. For example, our results could be explained by a reinstatement of temporal contextual tags or by a reactivation of objects encountered nearby in time. We agree that these explanations are in line with models of human memory. The findings by Tsao et al. (2018), who decode temporal information from neural signals during ongoing navigation behavior, but do not focus on memory per se, are of relevance by emphasizing the role of the lateral entorhinal cortex, specifically. This served as our motivation to analyze pattern similarity change in the entorhinal subdivisions separately.

We have followed the suggestion to incorporate the analysis using ordinal temporal distances in the main manuscript. Further, we more prominently discuss this finding and describe that ordinal distances also reflect the temporal structure of the event sequence, but that with the current design we cannot disentangle the precise scale of temporal representations. We have further adapted the title as suggested by the reviewers. The title of the manuscript is now “Mapping sequence structure in the human lateral entorhinal cortex”. As we describe below and throughout the manuscript, both elapsed time and ordinal temporal distances capture the temporal structure of the sequence participants experienced during the learning task. Please find our detailed responses to the individual comments below.

– The authors state their findings corroborate those of Tsao and colleagues (2018), which show temporal coding in the EC in rats. Specifically, Tsao et al. say they find EC codes for moment-to-moment changes in experience and explain the temporal signal may be less of a clocking signal and more of an experience timestamp.

Do the authors interpret their finding along the same lines? Moreover, the authors show their effect is replicated when they do an ordinal analysis. Thus, do the authors claim the alEC signals temporal order specifically rather than time or experience? It would be useful for the authors to elaborate on the interpretation of their effect especially in light of the interpretation given by Tsao et al. for the EC temporal coding.

The reviewers ask the important question of which role specifically the alEC might play in temporal coding. We appreciate the opportunity to offer our views on this issue. Tsao et al. (2018) emphasize that neural activity in the lateral subdivision of the entorhinal cortex in particular carries temporal information and suggest that this is due to the uniqueness of experience at every moment in time. In terms of the anatomical location of the effect, our data are in line with these findings in rodents, as the anterior-lateral entorhinal cortex (alEC) is considered the human homologue region of the rodent LEC (Navarro Schröder et al., 2015; Maass et al., 2015). Further, they report cells with activity profiles that vary linearly over time. As alluded to above, no neural data were collected during the learning phase of our study. This precludes strong interpretations about how specifically the effect we observe in the alEC relates to activity during the learning task. One explanation would indeed be that a slowly drifting signal, which might vary similarly on each lap due to the experience of navigating the same route (c.f. Tsao et al., 2018), provides timestamps for the object encounters during learning. These might in turn be reactivated in the alEC during the picture viewing task after learning, which could give rise to activity patterns reflecting the temporal distances of the task. This would be consistent with neural “jumps back in time” that have previously been observed in the human brain during image recognition tasks (Howard et al., 2012; Folkerts et al., 2018). Given the evidence for gradually drifting population activity in the human medial temporal lobe, such an explanation seems more parsimonious than assuming an explicit clocking signal.

Our main analysis uses the median elapsed time between object encounters to show that alEC pattern similarity change correlates with the temporal structure of object relationships. We observe comparable results when quantifying the temporal structure of object relationships on an ordinal level of measurement using the difference in sequence position. We would like to stress that, in our view, such a representation of the serial order likely reflects a representation of the temporal structure of the encountered event sequence. One way in which ordinal representations of the object sequence could arise is through object-to-object associations. For example, chaining models (e.g. Lewandowsky and Murdock, 1989; Jensen and Lisman, 2005) would predict that, during learning, an object is associated with preceding and successive objects in the sequence. Positions closer together in the sequence could result in stronger associations. During the picture viewing task after learning, object cues could result in the reactivation of associated objects. The strength of this reactivation might vary with the strength of the association. This might give rise to activation patterns reflecting the serial order of events in a holistic fashion, as observed in the alEC. Related to this question, we test in a new analysis in response to Comment 6 whether we can decode object representations in the entorhinal cortex and the lateral occipital cortex (LOC) using support vector machines. In brief, we do not observe stable object representations from the pre- to the post-scan picture viewing task in the entorhinal cortex.

While we could decode object identity from voxel patterns in the LOC, we did not observe evidence for the cortical reinstatement of temporally contiguous objects, which might be expected if the reactivation of object representations would underlie the effects we observed (see our response to Comment 6 for a detailed description of the analysis and results).

To dissociate whether temporal structure is represented on an ordinal as opposed to an interval or logarithmic level, one would need an experimental design in which order and time/experience are at least partly decorrelated. Our experiment was designed to dissociate temporal and Euclidean spatial distances between objects; therefore, the objects were spread fairly evenly along the route and participants’ movement speed was constant throughout the environment, which is why we cannot disentangle elapsed time from ordinal positions. In the revised manuscript, we have expanded our considerations of how a representation of the sequence structure might arise and how future studies might be able to dissociate ordinal temporal distances from time elapsed.

Please see below for the revised sections of the manuscript.

Introduction section

“This temporal information was suggested to arise from the integration of experience rather than an explicit clocking signal (Tsao et al., 2018).”

Discussion section

“While this interpretation is in line with data from rodent electrophysiology (Tsao et al., 2018) and the framework proposed by the temporal context model (Howard and Kahana, 2002; Howard et al., 2005) as well as evidence for neural contiguity effects in image recognition tasks (Howard et al., 2012; Folkerts et al., 2018), we cannot test the reinstatement of specific activity patterns from the learning phase directly since fMRI data were only collected during the picture viewing tasks in this study.”

“An alternative explanation for how the observed effects might arise is through associations between the objects. […] Hence, these results fail to provide evidence for the notion that the reactivation of object representations drove our effects.”

“In this experiment, the paradigm was designed to disentangle temporal distances from Euclidean spatial distances between objects (Deuker et al., 2016). […] This might allow the investigation of the level of precision at which the hippocampal-entorhinal region stores temporal relations, in line with evidence for the integration of duration information in the representations of short sequences (Thavabalasingam et al., 2018, 2019).”

– Regarding the analysis which included the pmEC (which is meant to be the human homologue to the MEC in rats), should one not expect to see a big change in pattern similarity for objects spatially close together? i.e. pmEC is meant to contain grid cells which code for space, rather than time (or at least more robustly than time). Could the authors comment on why they think they do not observe the reverse effect in pmEC compared to alEC?

The reviewers here ask the question why we did not observe reliable pattern similarity changes scaling with spatial distances in the posterior-medial subdivision of the entorhinal cortex in this task. This is a very interesting question given the wealth of literature describing spatial coding in the medial entorhinal cortex. Perhaps most prominently, grid cells have been discovered in the MEC (Hafting et al., Nature, 2005). Consistently, we have previously observed hexadirectional signals in the human pmEC (Bellmund et al., 2016), which are thought of as a proxy measure for activity in the entorhinal grid system in fMRI (Doeller et al., 2010). Models of grid-cell function suggest positions to be encoded by grid-cell population vectors in pmEC (e.g. Fiete et al., 2008; Mathis et al., 2012; Bush et al., 2015). Such spatial representations could be reactivated during the picture viewing task after learning in our study. However, cueing different positions might not result in BOLD activity patterns reflecting spatial distances. The analyses designed to detect grid-like entorhinal signals with fMRI are directional in nature, i.e. they contrast activity as a function of directions sampled in different trials (c.f. Doeller et al., 2010 for the univariate approach and Bellmund et al., 2016 for an adaptation of the analysis to multivoxel patterns). By presenting participants with isolated object images during picture viewing, we did not sample trajectories of different directions in this task and hence might not have been sensitive to spatial maps in pmEC if these were based on grid cell population codes.

We have made this notion explicit in the revised manuscript. The new section of the Discussion reads as follows:

“In line with hexadirectional signals in pmEC during imagination (Bellmund et al., 2016), putatively related to grid-cell population activity (Doeller et al., 2010), one might expect the pmEC to map spatial distances between object positions in our task. […] Hence, the design here was not optimized for the analysis of spatial representations in pmEC, if the object positions were encoded in grid-cell firing patterns as suggested by models of grid-cell function (Fiete et al., 2008; Mathis et al., 2012; Bush et al., 2015).”

– Another reason this is important regards the time scale of temporal distances in the Tsao experiment compared to the present study. Given the large time scales here, and the fact that the rodent electrophysiology data related to smaller time scales, it is possible that this result has to do with other measures/scales of distance. Please discuss.

The reviewers here ask how the time scale of our experiment compares to the experiment by Tsao et al. (2018), which decoded temporal information from LEC population signals. In our design, participants took around 264.6 ± 47.8s (mean ± standard deviation of median time per lap; see caption of Figure 1) to complete one lap of the route through the virtual city along which the 16 objects were encountered. In the rodent experiment, animals foraged for food in sessions consisting of a sequence of 12 trials with a trial length of 250s each. Tsao et al. show that they can decode not only trial identity, but also shorter within-trial epochs. In their Figure 3F and Figure 3G, the authors show above-chance decoding for epoch lengths of 20s, 10s and 1s. In our view, the trial length of 250s is comparable to the length of a lap of the route in our study. Further, epochs with a length of 10s and 20s constitute a similar temporal scale in comparison to our experiment, where objects were encountered on average every 16.6 ± 5.0s (mean ± standard deviation) on each lap. Therefore, we believe that the temporal scales can indeed be regarded as comparable. Yet, of course, a key difference that remains is that Tsao et al. base their analyses on ongoing activity during task performance, whereas our analyses focus on representations after learning. Nonetheless, the question of temporal scales in memory is intriguing, as also discussed in response to the following comment. Further, in our view, this does not preclude other temporal scales in human memory, where temporal relations might also be represented on different levels or chunked into superordinate hierarchical structures such as different days or weeks.

We have made the match in the length of one lap in our design and the length of a trial in the paper by Tsao et al. explicit in the revised version of the manuscript.

Discussion section

“In our task, one lap of the route took approximately 4.5 minutes on average; comparable to the 250s-duration of a trial in Tsao et al. (2018).”

– Do the authors believe that short/long time scales are coded by the same region? Do they think there's a gradient in alEC that allows computational of long and short temporal/ordinal distances?

The reviewers here ask whether different time scales are represented by the same brain regions and whether there might be a gradient of temporal representation in the alEC. The time cell literature is relevant to the question of how different time scales are encoded in the subregions of the hippocampal formation. Time cells have been observed in the hippocampus (e.g. Pastalkova et al., 2008; MacDonald et al., 2011; Kraus et al., 2013; Mau et al., 2018) and medial entorhinal cortex (Kraus et al., Neuron, 2015) of the rodent brain during running in place. The lengths of the temporal intervals tiled by the sequential firing of time cells are typically in the range of seconds, constituting a neural code on a shorter temporal scale. Notably, the firing patterns of time cells, with elevated firing at specific moments during the delay, differ from firing patterns in the LEC, where some cells’ activity varied linearly over time (Tsao et al., 2018).

Of note, studies investigating time cells typically employ highly trained tasks including a repeated temporal delay, in contrast to temporal information derived from decaying traces of prior experience. Conceptually, we believe that slowly drifting population signals carrying temporal information arising from decaying traces of prior experience could more easily provide temporal context information for naturalistic episodic memory, where temporal intervals are typically not repeated. Temporal information based on timestamps from a slowly varying signal might be a way to encode temporal structure inherently, even without long training procedures, a property desirable for episodic memory. While time cell ensembles change over different sessions (Mau et al., 2018), it remains unclear whether time cells also encode longer intervals. One recent study failed to detect time cells for a delay with a length of 60s (Sabariego et al., 2019).

A question that arises from this consideration is how time cells would behave in our task where one lap of the route is characterized by multiple intervals between the different events. One possibility might be that a long sequence of time cells could encode the temporal progression along the entire route. Alternatively, each interval between object encounters could be encoded by the same firing sequence since object encounters occurred at fairly regular temporal intervals along the route.

With respect to different granularities of representation, there is evidence from rodent studies for a granularity increase along the dorsoventral hippocampal axis. Place field size in the hippocampus and grid scale in the medial entorhinal cortex increase from more dorsal to ventral recording sites in the rodent brain (Kjelstrup et al., 2008; Barry et al., 2007; Stensola et al., 2012). Consistently, gradients in the scale of mnemonic networks (Collin et al., 2015) and fMRI voxel dynamics (Brunec, Bellana et al., 2018) have been documented along the hippocampal long axis in the human brain. To the best of our knowledge, there is no evidence for a gradient of representations in the lateral subdivision of the entorhinal cortex. One possibility to investigate this question in future fMRI studies could be to scrutinize the particularly slow fluctuations of the BOLD signal in the EC (Lositsky et al., 2016) in more detail. Specifically, one could apply the analysis approach developed by Brunec et al. (Curr. Bio., 2018) and contrast time courses and temporal autocorrelation of voxels at different anatomical positions within the alEC. Given the small size of this region of interest, the increased spatial resolution of fMRI at 7T might be required. A gradient within the alEC could then be measured by greater similarity among voxel time courses and higher temporal autocorrelation in anterior compared to more posterior voxels (c.f. Brunec et al., 2018).
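
A minimal sketch of such an analysis is given below; boldTS and voxPosAP are hypothetical placeholders for an alEC time-by-voxel matrix and the voxels' anterior-posterior coordinates, and this is not an analysis we ran.

    % Lag-1 temporal autocorrelation per alEC voxel, related to its
    % position along the anterior-posterior axis (c.f. Brunec et al., 2018)
    % boldTS:   time points x voxels matrix of preprocessed alEC time courses
    % voxPosAP: voxels x 1 vector of anterior-posterior coordinates (e.g. MNI y)
    nVox = size(boldTS, 2);
    ac1  = zeros(nVox, 1);
    for v = 1:nVox
        ts     = zscore(boldTS(:, v));
        ac1(v) = corr(ts(1:end-1), ts(2:end));       % lag-1 autocorrelation
    end
    % a positive relationship would indicate slower voxel dynamics more anteriorly
    [rGrad, pGrad] = corr(voxPosAP, ac1, 'type', 'Spearman');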

Discussion section

“Time cell ensembles change over minutes and days (Mau et al., 2018), but their firing has been investigated predominantly in the context of short delays in the range of seconds. […] How memories are represented at different temporal scales, which might be integrated in hierarchically nested sequences such as different days within a week, remains a question for future research.”

– The authors consider "spatial distance" to only denote Euclidean distance. While the teleport condition clearly can distinguish between Euclidean spatial distance and other distances, it is not sufficient to infer temporal distance. For instance, path distance and geodesic distance are relevant here. This is a crucial point, since other sorts of distance such as path distance and communicability distance could be correlated with representational similarity results that the authors take to be uniquely indicative of time distance – depending on the sequence the participants observed.

This is a major concern: dissociating both Euclidean and non-Euclidean spatial distances from temporal distance requires control experiments and a bit of computational work.

Figure 2—figure supplement 1: This figure regarding the relevance of ordinal distances should be in the main text. In addition, please discuss the relationship between time and ordinal distance and compare them to other types of distance (path/geodesic, communicability, etc.). Specifically, if the correlation to other types of distance turned out to be significant that should be shown in the main text as well. Do other distance measure not previously discussed in the paper also capture the patterns in the alEC?

On a related note: in the Materials and methods, the analysis does not consider spatial distance per se, but merely Euclidean distance. While the analysis can dissociate Euclidean distance from ordinal/temporal distance, the authors thus cannot conclude that they have dissociated space and time. This section can be revised depending on the results of the analyses/simulations the authors would conduct involving other forms of distance mentioned above.

As a potential future experiment, if the speed of motion was controlled (e.g. participants were taken from A to B with fixed geodesic or ordinal or path or goal or communicability distances with controlled varying speeds) then it would be easier to decide whether the distance denotes time or some form of non-Euclidean spatial distance such as path distance (which will probably correlate with ordinal distance). Changing speed would make it clear whether path distance or geodesic or communicability is relevant here or time distance can explain the variance observed in the region of interest.

The reviewers raise an important point and comment on different distance measures that might describe pattern similarity changes in the entorhinal cortex. Our paradigm was designed to dissociate Euclidean spatial distances and elapsed time between objects. We agree that there are other distance measures that can be used to quantify spatial relationships. In terms of the geodesic distance, we implemented two approaches of finding the shortest path between all pairs of object positions. First, we focused on all locations in the virtual city that were not obstructed by the collision volumes of virtual buildings, trees, or other objects distributed throughout the city and created a corresponding map of valid locations. Alternatively, one might consider only the streets as valid locations for navigation. Indeed, participants were instructed to stay on the streets during the learning phase and a prompt appearing on the screen reminded them to do so whenever they left the street network. Consequently, we created a second map in which only the streets of the virtual city were navigable positions. For either approach, we used a Matlab implementation of the A* search algorithm (https://mathworks.com/matlabcentral/fileexchange/56877) to find the shortest paths between all pairs of object positions. Examples of the resulting shortest paths are shown in Figure 1—figure supplement 2D and E. Geodesic distances were then quantified as the lengths of these shortest paths. Importantly, geodesic distances were not correlated with temporal distances measured as median elapsed time (Figure 1—figure supplement 2B and C; based on obstacles: mean and standard deviation of individual Pearson r=-0.061 ± 0.006, minimum p=0.414, correlation with averaged temporal distance: Pearson r=-0.061, p=0.505; based on streets: mean and standard deviation of individual Pearson r=-0.041 ± 0.006, minimum p=0.552, correlation with averaged temporal distance: Pearson r=-0.041 p=0.653). Entorhinal pattern similarity change was not significantly correlated with the geodesic distances based on obstacles in the virtual city (alEC: T(25)=0.82, p=0.436; pmEC: T(25)=0.73, p=0.479, Figure 2—figure supplement 2A) or the street network (alEC: T(25)=0.36, p=0.715; pmEC: T(25)=0.92, p=0.375, Figure 2—figure supplement 2B). Pattern similarity change in alEC was more strongly related to temporal than geodesic distances: akin to the main analyses, we conducted a 2x2 repeated measures ANOVA with the factors EC subregion and temporal vs. geodesic distances based on the obstacle map, revealing a significant interaction (main effect subregion: F(1,25)=5.18, p=0.031, main effect distance type:
F(1,25)=0.99, p=0.330, interaction: F(1,25)=6.96, p=0.014, post hoc comparison of temporal and geodesic distance in alEC: T(25)=-2.88, p=0.009). Likewise, we observed comparable results when using geodesic distances based on the street network (main effect subregion: F(1,25)=6.68, p=0.017, main effect distance type: F(1,25)=0.81, p=0.376, interaction: F(1,25)=4.30, p=0.048, post hoc comparison of temporal and geodesic distance in alEC: T(25)=-2.51, p=0.019). Taken together, these data highlight that pattern similarity change in alEC was not related to geodesic spatial distances between object positions.
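
For completeness, the following is a simplified sketch of the geodesic-distance computation using MATLAB's built-in graph functions rather than the File Exchange A* implementation we actually used. Here, streetMap, objIdx and timeDist are placeholders for a logical map of navigable positions, the linear indices of the 16 object positions, and the pairwise temporal distance matrix; 4-connected movement is assumed, so path lengths are in grid units.

    % Build an unweighted grid graph of navigable positions and compute
    % pairwise shortest-path (geodesic) distances between object positions
    [nRow, nCol] = size(streetMap);
    valid = find(streetMap);                      % linear indices of navigable positions
    node  = zeros(nRow*nCol, 1);
    node(valid) = 1:numel(valid);                 % grid position -> graph node
    [r, c] = ind2sub([nRow nCol], valid);
    src = []; dst = [];
    for shift = [0 1; 1 0]                        % columns give [dr; dc] for right and down neighbors
        r2 = r + shift(1);  c2 = c + shift(2);
        inb  = r2 <= nRow & c2 <= nCol;           % neighbor inside the map
        nbr  = sub2ind([nRow nCol], r2(inb), c2(inb));
        keep = streetMap(nbr);                    % neighbor must also be navigable
        here = valid(inb);
        src  = [src; node(here(keep))];           %#ok<AGROW>
        dst  = [dst; node(nbr(keep))];            %#ok<AGROW>
    end
    G = graph(src, dst, ones(numel(src), 1), numel(valid));
    D = distances(G, node(objIdx), node(objIdx)); % geodesic distances between objects
    pairs = tril(true(numel(objIdx)), -1);
    rGeoTime = corr(D(pairs), timeDist(pairs));   % relation to temporal distances

On an unweighted grid, shortest-path lengths from such a graph search and from A* coincide, so this sketch conveys the same quantity as the analysis reported above.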

The reviewers further point towards the path and communicability distance as a measure of spatial distances. In our analyses, the lengths of the paths between objects are almost perfectly correlated with the time elapsing between object encounters. This is due to the fact that travelled distances and elapsed time are identical unless participants take breaks from navigating. Since our measures of representational similarity are acquired before and after the navigation of the route, we are not sensitive to stops on individual laps of the route because we have to rely on measures describing central tendencies of participants’ behavior during the learning task. Because participants navigate the route repeatedly, consistent biases in stopping behavior across laps are unlikely. If one follows the notion that correlations between pattern similarity change in alEC and the temporal structure of the task arise not from a ticking clock, but from the association of objects with a slowly drifting contextual signal, e.g. through the decaying trace of prior experience, the travelled distances can be conceived of as an additional proxy measure of past experience that is closely related to elapsed time (mean and standard deviation of individual Pearson r=0.98 ± 0.005, all p<0.001; correlation with averaged temporal distance: Pearson r=0.98, p<0.001). However, there seems to be, to the best of our knowledge, little evidence in the literature that the human alEC or rodent LEC specifically would be involved in the mapping of distances travelled along a path in a spatial sense. Rather, keeping track of travelled distances is closely related to path integration for which grid cells found in the (posterior-) medial subdivision of the EC are thought to be of central importance in rodents and humans (Hafting et al., Nature, 2005; Gil et al., Nat. Neurosci. 2018; Chen et al., Curr. Bio., 2015; Stangl et al., Curr. Bio., 2018). In sum, the path distance along the route in our experiment and, more generally, travelled distances might be important contributors to the accumulation of experience in the context of spatial navigation. The reviewers discuss the experimental idea to dissociate the path distance from temporal distances (both elapsed time or ordinal distances) by controlling participants’ walking speed. As also discussed in response to Comments 2 and 7, we agree that variations of movement speed would be a suitable manipulation to disentangle the path distance from temporal relationships between positions in a future experiment.

In contrast, we do not think that the communicability distance offers a plausible measure for the object relationships in this task. While it would be possible to convert the street network and object positions into a graph structure, the communicability distance provides a suboptimal measure to quantify participants’ learning experience in this experiment in our view. This is because participants experienced the objects in a deterministic sequence by navigating along a fixed route that was designed to have little overlap. In fact, only a short section of one street was traversed twice on one lap of the route (see Figure 1, paths from chest 1 to 2 and chest 13 to 14) and no object was encountered on this stretch. BOLD-signals in the entorhinal cortex have been shown to be sensitive to communicability distances between nodes on a graph when stimulus sequences during learning reflect random walks along the underlying graph (Garvert et al., 2017). However, the ambiguity of the stimulus sequence constitutes a marked difference to the deterministic structure of the object sequence encountered along a route consisting almost exclusively of unique paths through the city. Therefore, we have not correlated communicability distances with pattern similarity change.

We have revised the manuscript according to the suggestions by the reviewers. We have incorporated the results of the analysis using ordinal temporal distances into the main manuscript. Further, we have included the analysis of geodesic distances in Figure 1—figure supplement 2 and Figure 2—figure supplement 2. We have carefully gone through the manuscript to specify where spatial distances are operationalized as Euclidean distances. Please see below for the revised sections of the manuscript.

See Figure 4, Figure 1—figure supplement 2B-E and Figure 2—figure supplement 2.

Introduction section

“We used representational similarity analysis of fMRI multi-voxel patterns in the entorhinal cortex to address the question how learning the structure of an event sequence shapes mnemonic representations in the alEC.”

Results section

“The temporal distance structure of the object sequence can be quantified as the elapsed time between object encounters or as ordinal differences between their sequence positions, which are closely related in our task. Spatial distances on the other hand can be captured by Euclidean or geodesic distances between positions. Importantly, we dissociated temporal from Euclidean and geodesic spatial object relationships through the use of teleporters along the route (Figure 1—figure supplement 2).”

“Pattern similarity change in alEC did not correlate significantly with Euclidean spatial distances (T(25)=0.81, p=0.420) and pattern similarity change in posterior-medial EC (pmEC) did not correlate with Euclidean (T(25)=0.58, p=0.583) or temporal (T(25)=1.73, p=0.089) distances.”

“Operationalizing the temporal structure in terms of the ordinal distances between object positions in the sequence yielded comparable results since our design did not disentangle time elapsed from ordinal positions as objects were encountered at regular intervals along the route. […] Furthermore, the interaction of the two-by-two repeated measures ANOVA with the factors entorhinal subregion and distance type remained significant when using geodesic spatial distances based on shortest paths using all non-obstructed positions (interaction: F(1,25)=6.96, p=0.014; main effect of EC subregion: F(1,25)=5.18, p=0.031; main effect of distance type: F(1,25)=0.99, p=0.330) or the street network only (interaction: F(1,25)=4.30, p=0.048; main effect of EC subregion: F(1,25)=6.68, p=0.017; main effect of distance type: F(1,25)=0.81, p=0.376).”

Discussion

“In our task, relevant factors contributing to a similar experience of the route on each lap are not only the encounters of objects in a specific order at their respective positions, but also recognizing and passing salient landmarks as well as travelled distance and navigational demands in general.”

Materials and methods

“The use of teleporters, which instantaneously moved participants to a different part of the city, enabled us to dissociate temporal from Euclidean and geodesic spatial distances between object positions (Figure 1—figure supplement 2).”

“Indeed, temporal distances across all comparisons of object pairs were not correlated with spatial relationships measured as Euclidean distances (Figure 1—figure supplement 2A).”

“An alternative way of capturing the spatial structure of the task is via geodesic distances. We quantified geodesic distances as the lengths of the shortest paths between object locations. Shortest paths were calculated using a Matlab implementation of the A* search algorithm (https://mathworks.com/matlabcentral/fileexchange/56877). First, we calculated shortest paths that were allowed to cross all positions not obstructed by buildings or other obstacles (see Figure 1—figure supplement 2D for example paths). Second, because participants were instructed to only navigate on the streets during the learning task, we found shortest paths restricted to the city’s street network (example paths are shown in Figure 1—figure supplement 2E). Neither form of geodesic distances between object positions was correlated with temporal distances (Figure 1—figure supplement 2BC).”

– While it is not necessary that the authors run the following analyses, here are some thoughts. It is possible that the authors have the required data to test the demands of the speed-controlled experiment. Specifically, since the experiments are self-paced, it is possible that the authors can compare pattern similarity in temporal distances separated by geodesic distance x speed variations. If the present experiment does not include sufficient number of trials per subject to make this analysis work, perhaps a super-group analysis may be possible (which the authors also are not required to perform). For instance, one could use SRM or other methods of pooling individual data into a group space and explore the nature of the distance x speed x time interaction.

Whether the authors run the previous analysis or not, this is an important point also because they report similarity to ordinal results, which will also be related to path distance, communicability distance, and other forms of distance measures that can be derived from a graph of the sequence or successor predictive representations (and can still be dissociated from Euclidean distance).

The authors could discuss their reasoning about why the potentially object trace, ordinal, or distance code finding is temporal in nature in the Discussion. They could discuss future experiments and analyses that can discern whether this is an exclusively temporal representation or it could be object race or path distance or other spatially relevant distance in their future directions (e.g. time can be discerned by varying speed across similar distances in a controlled fashion). However, making a strong claim about "time" in the title given the present findings and the role of ordinal information seems unwarranted.

The reviewers here offer interesting suggestions for additional analyses to attempt to dissociate elapsed time from spatial distances. Unfortunately, the data from this experiment do not allow us to analyze the relationship of pattern similarity change and temporal distances for different walking speeds. The learning task was indeed self-paced, so there are variations in navigation efficiency and potentially in walking speed across laps. However, representational change is only assessed after the learning task. Hence, we do not have a lap-by-lap measure of object representations that would allow a more fine-grained analysis. Rather, we have to select one variable from the learning task and relate representational change to it. Performing an analysis in which temporal distances are assessed as a function of speed would require data in which speed variations are consistent across a participant’s laps. This would require the explicit experimental manipulation of walking speeds between different objects in a new experiment. We describe this idea for a future study in the Discussion.

The reviewers suggest that we additionally discuss the potential role of object traces or predictive representations as possible explanations for the results. Since objects were presented in random order during the picture viewing tasks, memory-based reactivation of objects preceding or following a cued object would be required to explain the observed effect. In new analyses, we aimed to test predictions that can be derived from the idea that the observed effects reflect the, potentially predictive, reactivation of neighboring object representations. If object cues during the post-learning picture viewing task were to reactivate neighboring objects from a learned sequence, we should be able to detect such object representations. To test this idea, we turned to a classification analysis. Using the Matlab function fitcecoc, we trained support vector machines in a one-vs.-one coding design to distinguish activity patterns evoked by the different object cues in different regions of interest. In addition to the entorhinal cortex, these included the lateral occipital cortex (LOC), which is known to be involved in the visual processing of objects (for review see Grill-Spector et al., 2001), to test for cortical reinstatement of object representations. We trained the classifiers on the data from the pre-learning scan to test for similar representations during the post-learning scan.

Using this procedure, we were able to classify object identities in the LOC significantly above chance levels determined through random permutations of trial labels (T(25)=7.54, p<0.001, Figure 2—figure supplement 4). However, decoding accuracies were at chance level for both entorhinal ROIs (alEC: T(25)=-0.08, p=0.941; pmEC: T(25)=0.53, p=0.621). If object images during the post-learning scan serve as cues for the reactivation of neighboring objects, objects n-1 and n+1 might be activated when viewing object n. This might result in systematic errors of the classifier. Hence, we analyzed the classifier evidence as a function of sequence position lag to test for above-chance classifier confusion using one-sided t-tests. Focusing on lags ± 3, we did not observe any above-chance classifier evidence for preceding (alEC: most extreme T(25)=1.13; minimum p=0.270; pmEC: most extreme T(25)=1.00; min. p=0.332) or upcoming (alEC: most extreme T(25)=-2.07; min. p=0.055; pmEC: most extreme T(25)=-0.83; min. p=0.414) objects in the entorhinal cortex. Based on evidence for the reinstatement of cortical activity patterns during retrieval (Nyberg et al., 2000; Wheeler et al., 2000; Polyn et al., 2005), which is modulated by hippocampal-entorhinal activity (Bosch et al., 2014), we also performed this analysis in the LOC, but again failed to observe above-chance evidence for preceding or following objects. Rather, classifier evidence was below chance levels, potentially due to the high accuracy at no lag (preceding objects: most extreme T(25)=-2.51; min. p=0.018; successive objects: most extreme T(25)=-4.09; min. p<0.001). Taken together, the lack of stable object representations in the entorhinal cortex makes it unlikely that the effects we observed go back to a decaying object trace or to the reactivation of objects preceding or following a presented object along the route.
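
A condensed sketch of this classification pipeline is shown below. The variables prePatterns, postPatterns and the corresponding numeric object labels (1-16) are placeholders, and the permutation-based chance level and the handling of lags at the ends of the sequence are simplified here.

    % Train one-vs.-one SVMs on pre-learning patterns, test on post-learning
    % patterns, and summarize classifier evidence by sequence-position lag
    mdl = fitcecoc(prePatterns, preLabels, 'Coding', 'onevsone');
    [predLabels, negLoss] = predict(mdl, postPatterns);  % negLoss columns ordered as in mdl.ClassNames (objects 1-16)
    accuracy = mean(predLabels == postLabels);

    nObj = 16;
    lags = -3:3;
    evidence = nan(numel(lags), 1);
    for iLag = 1:numel(lags)
        probe = postLabels + lags(iLag);                  % object at this lag from the cued object
        ok    = probe >= 1 & probe <= nObj;               % drop lags beyond the sequence
        idx   = sub2ind(size(negLoss), find(ok), probe(ok));
        evidence(iLag) = mean(negLoss(idx));              % mean classifier evidence at this lag
    end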

As noted above already, we have followed the suggestion to revise the title of the manuscript, which now reads “Mapping sequence structure in the human lateral entorhinal cortex”. Throughout the manuscript, we describe that the temporal structure of the object sequence can be quantified at different levels of measurement. Again, we would like to emphasize that, in our view, ordinal-level representations of the sequence also reflect the temporal structure of the object sequence.

In the revised manuscript, we have included the classification analyses and discuss potential future experiments dissociating elapsed time from ordinal temporal distances and spatial distances such as the path distance. The new figures and revised sections of the manuscript are:

Title: Mapping sequence structure in the human lateral entorhinal cortex

Figure 2—figure supplement 4

Results section

“Does the presentation of object images during the post-learning picture viewing task elicit reactivations of similar representations from the pre-scan? […] We also did not observe above-chance classifier evidence for nearby objects in the LOC, but rather classifier evidence was below chance levels for some lags, potentially due to high classification accuracies at no lag (preceding objects: most extreme T(25)=-2.51; min. p=0.018; successive objects: most extreme T(25)=-4.09; min. p<0.001).”

“An alternative explanation for how the observed effects might arise is through associations between the objects. […] Hence, these results fail to provide evidence for the notion that the reactivation of object representations drove our effects.”

Materials and methods

“ROI masks for the bilateral lateral occipital cortex were defined based on the Freesurfer segmentation and intersected with the combined brain masks from the two fMRI runs since this ROI was located at the edge of our field of view.”

“Classification analysis

To examine whether object representations were stable between the pre- and the post-learning scan, we turned to pattern classification techniques and examined whether classifiers trained on the pre-learning scan exhibited systematic errors when tested on the post-learning data. […] Note that classifier performance is below chance for some preceding and upcoming sequence positions due to high accuracy at lag 0.”

Associated Data


    Supplementary Materials

    Figure 2—source data 1. Z-values of correlations between pattern similarity change in the entorhinal subregions and temporal and Euclidean spatial distances as shown in panel B.
    DOI: 10.7554/eLife.45333.010
    Figure 2—source data 2. Pattern similarity changes in alEC for object pairs separated by low and high temporal distances as shown in panel C.
    DOI: 10.7554/eLife.45333.011
    Figure 2—source data 3. Z-values of correlations between alEC pattern similarity change and temporal distances without comparisons of objects encountered in direct succession along the route and Pearson correlation coefficients quantifying temporal clustering during the free recall task (panel D).
    DOI: 10.7554/eLife.45333.012
    Figure 2—source data 4. Z-value differences quantifying the difference in temporal and spatial mapping in alEC and pmEC as shown in panel E.
    DOI: 10.7554/eLife.45333.013
    Figure 3—source data 1. True and reconstructed temporal coordinates of object positions as shown in panel B.
    DOI: 10.7554/eLife.45333.015
    Figure 3—source data 2. Procrustes distance from mapping coordinates from multidimensional scaling based on alEC pattern similarity change to true temporal coordinates and surrogate distribution obtained from fitting to shuffled temporal coordinates (panel D).
    DOI: 10.7554/eLife.45333.016
    Figure 4—source data 1. Z-values of correlations between pattern similarity change in the entorhinal subregions and ordinal temporal and Euclidean spatial distances.
    DOI: 10.7554/eLife.45333.018
    Transparent reporting form
    DOI: 10.7554/eLife.45333.019

    Data Availability Statement

    Source data files have been provided for Figures 2, 3 and 4. The virtual city Donderstown is available at https://osf.io/78uph/.

