Published in final edited form as: Cognition. 2024 Apr 29;248:105807. doi: 10.1016/j.cognition.2024.105807

Eye movements reinstate remembered locations during episodic simulation

Jordana S Wynn 1, Daniel L Schacter 2
PMCID: PMC11875530  NIHMSID: NIHMS2049553  PMID: 38688077

Abstract

Imagining the future, like recalling the past, relies on the ability to retrieve and imagine a spatial context. Research suggests that eye movements support this process by reactivating spatial contextual details from memory, a process termed gaze reinstatement. While gaze reinstatement has been linked to successful memory retrieval, it remains unclear whether it supports the related process of future simulation. In the present study, we recorded both eye movements and audio while participants described familiar locations from memory and subsequently imagined future events occurring in those locations while either freely moving their eyes or maintaining central fixation. Restricting viewing during simulation significantly reduced self-reported vividness ratings, supporting a critical role for eye movements in simulation. When viewing was unrestricted, participants spontaneously reinstated gaze patterns specific to the simulated location, replicating findings of gaze reinstatement during memory retrieval. Finally, gaze-based location reinstatement was predictive of simulation success, indexed by the number of internal (episodic) details produced, with both measures peaking early and co-varying over time. Together, these findings suggest that the same oculomotor processes that support episodic memory retrieval – that is, gaze-based reinstatement of spatial context – also support episodic simulation.

Keywords: memory, simulation, eye movements, imagination


Imagining the future is a constructive process that relies heavily on retrieving and restructuring the past (Schacter & Addis, 2007). This process invariably requires the reinstatement of spatial context from memory – that is, the location in which a simulated event might occur. Spatial context is thought to serve as a foundation upon which past and future episodes can be constructed and elaborated (Burgess et al., 2002; Hassabis, Kumaran, & Maguire, 2007; Maguire & Mullally, 2013; for review, see Robin, 2018). So fundamental is spatial context to the construction process that participants tasked with imagining a future event will spontaneously situate that event in a familiar spatial context if none is provided (Robin et al., 2016).

Given that imagined (like real) spatial contexts overwhelmingly invoke visual details (Aydin, 2018; Rubin & Umanath, 2015), it is perhaps not surprising that eye movements, the primary means by which we sample the visual environment, play an important role in the retrieval and reinstatement of those details from memory. Research using eye movement monitoring suggests that when tasked to retrieve an encoded stimulus from memory, participants spontaneously move their eyes to empty regions of the screen that were previously occupied by, or associated with, salient features of that stimulus (Bochynska & Laeng, 2015; Foulsham & Kingstone, 2013; Johansson & Johansson, 2013; Laeng & Teodorescu, 2002; for review, see Ferreira et al., 2008; Wynn et al., 2019). Notably, gaze-based reinstatement of spatial context occurs whether that context was encoded visually or auditorily (Johansson et al., 2006; Spivey & Geng, 2001) and even when retrieval occurs in darkness (Johansson et al., 2006). These findings suggest that eye movements may themselves be part of the event representation, connecting external and internal contexts (Noton & Stark, 1971a, 1971b; Wynn et al., 2019). Indeed, reinstatement of encoding-related eye movements during retrieval has been correlated with both behavioral performance (Holm & Mäntylä, 2007; Laeng et al., 2014; Wynn et al., 2020) and subjective feelings of vividness (Johansson et al., 2022, Lenoble et al., 2019), suggesting that eye movements play a functional role in memory retrieval. Based on this work, we proposed that eye movements support the reactivation of the spatial and/or temporal encoding context, which acts as a scaffold for retrieving more detailed elements of a remembered stimulus or event (Wynn et al., 2019).

Although there is now substantial evidence suggesting that eye movements play an active role in memory retrieval, it remains unclear whether this role extends to imagining the future. Converging evidence from behavioral and neuroimaging studies shows that recall of autobiographical events from one’s past and imagination of future events share several phenomenological similarities and recruit similar neural networks (Benoit & Schacter, 2015; Schacter & Addis, 2007, 2020). It has been hypothesized that this overlap results from a shared reliance on constructive episodic simulation, whereby details from one’s past are reactivated (in the case of recall) and flexibly recombined (in the case of simulation) (Schacter & Addis, 2007, 2020). Notably, the core network underlying constructive episodic simulation, including frontal, temporal, and parietal brain regions, is anatomically and functionally linked to both the visual and oculomotor systems (Ryan et al., 2019; Shen et al., 2016; for review, see Conti & Irish, 2021; Ryan et al., 2020). These connections may allow for the flexible retrieval and updating of visual/oculomotor information in the service of memory/simulation.

Given that both recall of past events and imagination of future events rely on the ability to reactivate details from memory, and that reactivation of encoded details has been linked to eye movements, we propose that eye movements support future simulation by reinstating relevant contextual details from memory. In support of this proposal, prior work using eye movement monitoring has shown that both the specificity and vividness of autobiographical recall (Lenoble et al., 2019) and the number of internal details produced during future imagining (de Vito et al., 2015) are impaired when participants’ eye movements are constrained relative to free-viewing, indicating that eye movements play a functional role in constructive episodic simulation. Indeed, the rate at which fixations are made has been positively correlated with autobiographical memory retrieval, and specifically, the number of internal (episodic) details produced (Armson et al., 2019). Yet, other work has revealed a negative correlation between fixation rate and internal details during future imagining (Sheldon et al., 2019; Wynn et al., 2022). Thus, how eye movements support future simulation remains unclear.

To test our prediction that eye movements support future imagining by reinstating spatial context from memory, we asked participants to first recall and describe highly familiar locations and subsequently imagine events occurring in those locations, while looking at a blank screen. Critically, on half of the simulation trials, participants were restricted from executing eye movements (i.e., fixed-viewing), a manipulation that we predicted would impair future imagining relative to free-viewing. Specifically, we expected that restricting viewing would reduce both the number of internal details produced and the subjective vividness of simulation. To further probe whether voluntarily executed eye movements spontaneously reinstate spatial contextual details from memory, we measured the spatial overlap between gaze patterns corresponding to recall of familiar locations and simulation of events in those locations. We predicted that imagining an event in a particular location would evoke a gaze pattern specific to that location. That is, eye movements when imagining an event in a particular location should be more similar to the eye movements executed when recalling that location than to the eye movements executed when recalling other locations. Finally, we predicted that if gaze reinstatement facilitates future simulation, then the degree to which context-specific gaze patterns are reinstated should be associated with simulation success, indexed by both the number of internal details produced and subjective vividness ratings.

Methods

Participants

Forty-four healthy young adults (ages 18–35, M = 21 years, SD = 3.4 years; 12 male, 31 female, 1 nonbinary) from Harvard University participated in the study for either cash or course credit. Participants were pre-screened to ensure that they had normal or corrected-to-normal vision, English fluency, and no history of neurological or psychiatric disorders. All participants provided written informed consent prior to participating in the study in compliance with the Harvard University Institutional Review Board. Four participants were excluded from all analyses because of missing (>50%) trial data and/or failure to follow instructions. One additional participant was excluded from analyses of gaze data because of calibration errors (resulting in missing >50% of trial gaze data) and 2 participants were excluded from analyses of narrative data because of audio recording errors.

Apparatus

Monocular eye movements were recorded using a desk-mounted EyeLink 1000 Plus eye tracker (SR Research Ltd.) at a sampling rate of 1000 Hz. Stimuli were presented on a 24-inch monitor (1366 × 768-pixel resolution) positioned 70 cm from the participant. Saccades were defined by the EyeLink parser as eye movements greater than 0.5° of visual angle, and blinks as periods in which the saccade signal was missing for 3 or more consecutive samples; all remaining samples were classified as fixations. To maximize data quality, calibration was performed prior to the start of the experiment and drift correction was performed between trials. To minimize motion, head stabilization was achieved using a chin rest. Audio was recorded throughout the experiment using the Otter AI application.

Procedure

Prior to the experimental session, participants submitted (via email) a list of 35 highly familiar people and places. As per the instructions, “a familiar person is any person who you know well (i.e., more than an acquaintance). This can include friends, family members, teachers… etc. A familiar location is any place you know well and have visited at least several times.” Participants were instructed to be specific, for example, by listing individual rooms rather than an entire house (multiple rooms or spaces within the same house or general area were permitted). Lists were reviewed by the experimenter and approved prior to testing.

During the experimental testing session, participants first completed 6 practice trials (3 recall and 3 simulation trials) to familiarize themselves with the tasks. Participants then completed 4 experimental blocks, separated by short breaks. Within each block, participants completed 8 trials of the recall task (recall and describe a familiar location from memory) and 8 trials of the simulation task (imagine a future event occurring in a familiar location and involving a familiar person), using the same location cues. Location and person cues were taken from the previously provided lists and were randomly assigned to blocks prior to the testing session. During 2 of the 4 simulation blocks (16 trials), participants were permitted to freely move their eyes around a blank screen (free-viewing); during the other 2 simulation blocks, participants were required to maintain fixation on a centrally presented cross (fixed-viewing). The order of free- and fixed-viewing blocks was counterbalanced across participants. All recall trials were completed under free-viewing conditions. Eye movements and audio were recorded throughout the experiment.

Recall Task.

On the recall task, participants were cued with a familiar location (randomly selected from the list they provided) and were instructed to describe the location from memory. Specifically, participants were instructed to “imagine yourself standing in the entryway of the respective location and describe the location as you would see it from that vantage point.” Participants were instructed to visualize the location from the perceived entryway to maximize the spatial correspondence between recollections and simulations. For all trials, the location cue was presented visually in the center of the screen for 2s, after which participants were given up to 1 minute to describe the location from memory out loud while keeping their eyes on the screen. After 1 minute (or a key press indicating completion), participants were given 4s to rate the perceived vividness of the location as they imagined it via key press (1 = not vivid at all or “like a blurry picture”, 4 = extremely vivid or “like a movie in my head”).

Simulation Task.

On the simulation task, participants were cued with a familiar location (randomly selected from the locations used in the recall task within the same block), and just below it, a familiar person (randomly selected from the list they provided) and were instructed to simulate a future event (i.e., one that could plausibly occur in the next 5 years) occurring in that location and involving that person. Specifically, participants were instructed to “imagine yourself standing in the entryway of the respective location and describe the event as you would see it from that vantage point in as much detail as possible.” After 1 minute (or a key press indicating completion), participants were again given 4s to rate the perceived vividness of their simulated event via key press (using the same scale as the recall task).

Data Analysis

Behavioral and gaze measures were analyzed with paired-samples t-tests and linear mixed effects models (LMEMs) using the lme4 package in R (Bates et al., 2012), with p values approximated using the lmerTest package (Kuznetsova et al., 2017). Complete model results are reported in the Supplementary Materials, with tables created using the sjPlot package (Lüdecke, 2023).
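For illustration, the following is a minimal R sketch of the analysis tooling described above; the data frames (dat, dat_wide) and column names (similarity, vividness, participant) are hypothetical placeholders rather than the study's actual variable names.

```r
# Minimal sketch of the statistical tooling described above. All object and
# column names are hypothetical placeholders.
library(lme4)      # linear mixed-effects models (Bates et al., 2012)
library(lmerTest)  # approximated p values for lmer fits (Kuznetsova et al., 2017)
library(sjPlot)    # model summary tables (Lüdecke, 2023)

# Paired-samples t-test comparing a measure across viewing conditions
with(dat_wide, t.test(vividness_free, vividness_fixed, paired = TRUE))

# LMEM with a trial-level predictor and a random intercept per participant
m <- lmer(similarity ~ vividness + (1 | participant), data = dat)
summary(m)      # fixed-effect estimates with lmerTest-approximated p values
tab_model(m)    # sjPlot table, as used for the supplementary model tables
```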

Eye movement data analysis.

To test for location-based reinstatement during simulation, we used a gaze similarity analysis implemented in the eyesim package (https://github.com/bbuchsbaum/eyesim; for complete details, see Wynn et al., 2020). Fixations from each recall and simulation trial were converted into density maps using a duration-weighted Gaussian smoothing function (σ = 80). Simulation-specific density maps were then correlated (using a Fisher-z transformed correlation) with the density map for the corresponding location (e.g., “mom’s kitchen” – “mom’s kitchen, grandma”). To ensure that similarity between recall and simulation density maps was not driven by generic or idiosyncratic viewing biases (e.g., center bias), we additionally correlated each simulation-specific density map with density maps for every other non-corresponding location (e.g., “dorm bathroom” – “mom’s kitchen, grandma”). To test our hypothesis that simulation is supported by location-specific reinstatement, we compared the similarity score for the corresponding location to the average of all non-corresponding similarity scores, with a significant difference (corresponding > non-corresponding) indicating location-specific reinstatement.
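To make the logic of this analysis concrete, below is a standalone R sketch of duration-weighted Gaussian density maps and Fisher-z-transformed map correlations. It is not the eyesim implementation; the fixation data frame, its columns (x, y in pixels, duration in ms), and the grid resolution are assumptions made for illustration.

```r
# Standalone illustration of the density-map similarity logic (not the eyesim
# implementation). Assumes a fixation data frame with columns x, y (pixels)
# and duration (ms), and the 1366 x 768 screen used in the study.
make_density_map <- function(fix, width = 1366, height = 768, sigma = 80, bin = 8) {
  xs <- seq(bin / 2, width,  by = bin)   # grid centres (coarse grid for speed)
  ys <- seq(bin / 2, height, by = bin)
  map <- matrix(0, nrow = length(ys), ncol = length(xs))
  for (i in seq_len(nrow(fix))) {        # one duration-weighted Gaussian per fixation
    gx <- dnorm(xs, mean = fix$x[i], sd = sigma)
    gy <- dnorm(ys, mean = fix$y[i], sd = sigma)
    map <- map + fix$duration[i] * outer(gy, gx)
  }
  map / sum(map)                         # normalise to a density
}

# Fisher-z transformed correlation between two density maps
map_similarity <- function(m1, m2) atanh(cor(as.vector(m1), as.vector(m2)))

# Location-specific reinstatement is then indexed by comparing each simulation
# map's similarity to the recall map of the corresponding location against its
# mean similarity to all non-corresponding recall maps.
```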

Text data analysis.

All simulations were recorded and transcribed for the purposes of quantifying the number of internal and external details. In short, internal details are classified as informational chunks that are relevant to the current memory or simulation (e.g., sights, sounds, feelings…), while external details refer to informational chunks that are not specific to the current memory or simulation (e.g., general facts, personal semantics…) (Levine et al., 2002). The number of internal details produced is thought to reflect the success of a given memory or simulation – that is, the degree to which the remembered or imagined event is vividly and episodically (re)experienced. Indeed, one of the hallmarks of cognitive aging is a decline in the number of internal details produced during both memory (Levine et al., 2002) and simulation (Addis et al., 2008), reflecting a decline in episodic processes. To quantify internal and external details, we used a validated automated version of the described scoring procedure (https://github.com/rubenvangenugten/autobiographical_interview_scoring). Briefly, this method utilizes a modified language model (distilBERT) to identify the amount of event-specific and non-specific content in each sentence (for complete details, see van Genugten & Schacter, 2024).

Results

Behavioral results

On average, participants spent 39.33s (SD = 14.3s) generating and describing simulated events. Simulation time did not significantly differ across conditions (M(SD)Free = 39.03(11.36), M(SD)Fixed = 39.59(12.24), t(39) = −.81, p = .42, 95% CI [−1.95, 0.84], d = .13). Contrary to our predictions, there was no difference across conditions in the number of internal details (M(SD)Free = 77.89(24.74), M(SD)Fixed = 79.14(27.3), t(37) = −.65, p = .52, 95% CI [−5.14, 2.63], d = .11) or external details produced (M(SD)Free = 14.85(14.22), M(SD)Fixed = 16.45(15.9), t(37) = −1.19, p = .24, 95% CI [−4.33, 1.14], d = .19) during simulation, Fig 1 A, B. However, vividness was rated as significantly higher in the free-viewing condition relative to the fixed-viewing condition (M(SD)Free = 2.68(.56), M(SD)Fixed = 2.49(.6), t(39) = −2.53, p = .02, 95% CI [−0.35, −0.04], d = −.4), Fig 1C. Thus, while restricting viewing did not change the objective quality of simulations, it significantly decreased subjective feelings of vividness.

Fig 1.


(A) Mean internal detail counts for the fixed- and free-viewing conditions. (B) Mean external detail counts for the fixed- and free-viewing conditions. (C) Mean vividness ratings for the fixed- and free-viewing conditions (1 = not vivid at all, 4 = extremely vivid). Black circles represent condition means; error bars represent 95% within-participant confidence intervals (Morey, 2008) computed with the Rmisc package (https://rdrr.io/cran/Rmisc/man/summarySEwithin.html); * = p < .05, ** = p < .01, *** = p < .001.

Eye movement results

To probe the effects of restricted viewing on eye movements, we compared both the rate and dispersion of fixations across conditions. One participant was removed from the analysis of fixation rate and 2 participants from the analysis of fixation dispersion because of values >3 SD from the mean. While the rate of fixations (number of fixations per minute) did not differ significantly across conditions (M(SD)Free = 105.26(31.99), M(SD)Fixed = 103.67(28.05), t(37) = .51, p = .62, 95% CI [−4.75, 7.92], d = −.08), fixations in the free-viewing condition were significantly more dispersed (pixel SD) than in the fixed-viewing condition (M(SD)Free = 7425.49(8292.81), M(SD)Fixed = 1665.06(1432.12), t(36) = 4.85, p < .001, 95% CI [3353.09, 8167.78], d = −.8), confirming that participants changed their gaze behavior in response to the restricted-viewing manipulation.
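As a rough sketch of how such gaze summaries and the outlier screen can be computed (column names and the dispersion formula are assumptions; the study's exact "pixel SD" computation is not specified in detail):

```r
# Sketch of the gaze summary measures and outlier screening described above.
# Column names (x, y in pixels, trial_min = trial duration in minutes) are
# assumed, and the dispersion formula is only one plausible operationalisation
# of spatial spread.
fixation_rate <- function(fix) nrow(fix) / fix$trial_min[1]   # fixations per minute

fixation_spread <- function(fix) {
  # spread of fixation coordinates around their centroid, in pixels
  sqrt(mean((fix$x - mean(fix$x))^2 + (fix$y - mean(fix$y))^2))
}

# Flag participant means lying more than 3 SDs from the group mean
is_outlier <- function(v) abs(v - mean(v)) > 3 * sd(v)
```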

To test our hypothesis that imagining an event in a familiar location would evoke a gaze pattern specific to that location, we compared eye movements during simulation to eye movements during recall of the same or different locations. One participant was removed from this analysis because of values >3 SD from the mean. Consistent with our prediction, we observed significant reinstatement of relevant locations during (free-viewing) simulation of events in those locations (t(37) = 5.22, p < .001, 95% CI [0.04, 0.09], d = .85), Fig 2A. That is, gaze patterns during simulation were significantly more similar to gaze patterns from recall of the cued location (M(SD) = .83(.2)) than to gaze patterns from recall of other, non-cued locations (M(SD) = .77(.19)).

Fig 2.


(A) Mean similarity scores for same and different locations. Black circles represent condition means; error bars represent 95% within-participant confidence intervals (Morey, 2008) computed with the Rmisc package (https://rdrr.io/cran/Rmisc/man/summarySEwithin.html). (B) Visualization of results from the LMEM relating internal detail count to gaze similarity. (C) Visualization of the change in gaze similarity scores and internal detail counts (scaled within participants) over time. * = p < .05, ** = p < .01, *** = p < .001.

To test for the predicted relationship between gaze similarity and simulation success, we ran an LMEM on gaze similarity with vividness, number of internal details, and number of external details as fixed effects (scaled within participant) and a random intercept for participant. Model comparison proceeded backwards using likelihood ratio tests (α = 0.05), with nonsignificant fixed effects removed from the model in a stepwise fashion. Removing the fixed effects for vividness and number of external details did not significantly reduce model fit (vividness: χ2 = 0.93, p = .33; number of external details: χ2 = 0.73, p = .39), so both were dropped from the final model. The most parsimonious model revealed a significant positive relationship between internal details and gaze similarity (β = 0.04, SE = 0.02, t = 2.34, p = .02, 95% CI [0.006, 0.069], Table S1), indicating that greater gaze-based reinstatement of spatial context was associated with more successful simulation, Fig 2B. Notably, a separate model found that internal details were significantly negatively predictive of fixation rate (β = −1.34, SE = 0.63, t = −2.13, p = .03, 95% CI [−2.577, −0.104], Table S2), consistent with previous research (Sheldon et al., 2019; Wynn et al., 2022).
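A minimal R sketch of this backward model comparison, assuming a trial-level data frame (sim_trials) with hypothetical column names (similarity, vividness, internal, external, participant) and predictors already scaled within participant:

```r
# Backward model comparison via likelihood ratio tests (models fit with ML so
# that nested fits can be compared); column names are hypothetical placeholders.
library(lme4)
library(lmerTest)

full   <- lmer(similarity ~ vividness + internal + external + (1 | participant),
               data = sim_trials, REML = FALSE)
no_viv <- update(full, . ~ . - vividness)
anova(full, no_viv)      # chi-square test for dropping vividness

no_ext <- update(no_viv, . ~ . - external)
anova(no_viv, no_ext)    # chi-square test for dropping external details

summary(no_ext)          # most parsimonious model: internal details only
```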

Finally, to further probe the relationship between gaze similarity and internal details, we examined how both measures changed over time. Because the narrative data were not time-stamped, we divided the text (words; M(SD) = 94.03(37.82) per trial) and gaze (fixations; M(SD) = 68.04(33.29) per trial) data for each trial into 10 bins based on the total number of words and fixations, respectively (see Knoff et al., 2022). As shown in Fig 2C, both gaze similarity and internal details followed a similar temporal trajectory, peaking early and then declining. An LMEM on temporally binned gaze similarity, with the number of internal details (averaged within each time bin) as a fixed effect and participant as a random effect, revealed a significant positive effect of internal details (β = 0.01, SE = 0.003, t = 2.3, p = .02, 95% CI [0.001, 0.015], Table S3), indicating that gaze-based reinstatement of spatial location and internal detail production co-fluctuate over the course of simulation. A second model including the interaction of internal details and time further indicated that the relationship between gaze similarity and internal details strengthened over time (β = 0.001, SE = 0.0004, t = 2.4, p = .02, 95% CI [0.0002, 0.002], Table S3), suggesting that oculomotor and narrative processes may become increasingly coupled during simulation.
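For concreteness, a small sketch of the ordinal binning that aligns the two data streams without time stamps (the 10-bin scheme follows the description above; helper and variable names are illustrative):

```r
# Split each trial's ordered words and fixations into 10 ordinal bins so that
# bin-level internal-detail counts and gaze similarity can be compared.
assign_bins <- function(n_items, n_bins = 10) {
  ceiling(seq_len(n_items) / n_items * n_bins)   # bin index (1..n_bins) per item
}

# Example for a trial with 94 words and 68 fixations (close to the reported means)
word_bins     <- assign_bins(94)   # bin label for each word, in order of production
fixation_bins <- assign_bins(68)   # bin label for each fixation, in chronological order

# Bin-level measures can then be aggregated (e.g., with tapply or dplyr) and
# entered into the LMEM relating binned gaze similarity to internal details.
```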

Discussion

Converging evidence indicates that the ability to recall past events and the ability to simulate future events are closely related, sharing both cognitive and neural underpinnings (Schacter & Addis, 2007, 2020). In the present study, we sought to extend this work by examining whether oculomotor processes, which have been previously linked to memory retrieval (for review, see Hannula et al., 2010; Ryan et al., 2020), play a similar supporting role in future simulation. Based on widespread evidence of gaze-based reinstatement, we previously proposed that eye movements facilitate memory retrieval by reactivating the spatial and/or temporal context in which a stimulus or event was encoded (Wynn et al., 2019). This hypothesis is consistent with models of scene construction, which propose a central role for scenes or spatial contextual elements in the retrieval of autobiographical memories (for review, see Robin, 2018). Importantly, although scene construction is thought to extend to the closely related process of episodic simulation, there has been little work investigating the relationship between eye movements and the construction of future events.

Based on the available research, we hypothesized that if eye movements support contextual reinstatement necessary for successful simulation, then (1) restricting viewing should impair the ability to successfully simulate a future event, (2) unrestricted viewing should be accompanied by spontaneous reactivation of the gaze pattern associated with the cued location, and (3) such gaze reinstatement should be predictive of simulation success.

Surprisingly, we did not observe a significant difference in the number of internal details produced during free-viewing compared to fixed-viewing simulation. One possible explanation for this result is that fixed-viewing specifically impairs the production of visuospatial details, while other details (e.g., emotions, thoughts) are unaffected. However, previous work (de Vito et al., 2015) has demonstrated that fewer internal details are produced during restricted-viewing compared to free-viewing simulation, suggesting that this effect cannot be fully attributed to the types of internal details produced. The discrepancy between those findings and the current findings may instead be due to the differing nature of the viewing manipulations; whereas de Vito and colleagues restricted viewing with a moving dot, the present study used a stationary fixation cross. The moving-dot manipulation may be especially effective at disrupting simulation because it precludes micro-saccades, which may facilitate reinstatement even during fixed-viewing (van Ede et al., 2019). Although our manipulation did not impair internal detail generation, fixed-viewing was associated with a decline in subjective feelings of vividness, in line with prior work (Johansson et al., 2022; see also Lenoble et al., 2019). Extending this work, the present findings suggest that precluding the execution of overt eye movements may qualitatively affect the process of simulating a vivid future event.

While the current findings augment previous work illustrating the importance of freely and voluntarily executed fixations for memory retrieval and future simulation, how these eye movements support constructive episodic processes, particularly during future simulation, has until now remained unclear. In a previous study (Wynn et al., 2022), we showed that when imagining a future scenario (e.g., lying on a white sand beach), participants spontaneously activated a location-specific schema (e.g., beach), indexed by the similarity of their eye movements to a gaze template aggregated across other participants imagining the same scenario. These findings point to a central role for eye movements in the reinstatement of schematic contexts. Extending this work, here we recorded eye movements during both an initial recall task and subsequent simulation, allowing the recall task to serve as a proxy for encoding of each spatial location so that gaze patterns could be compared in the manner of past studies of retrieval-related gaze reinstatement (in which eye movements during encoding and retrieval are correlated; Wynn et al., 2019).

In line with our predictions, eye movements during simulation reinstated the cued location, as evidenced by greater gaze similarity to the corresponding location cue compared to non-corresponding location cues. In other words, when imagining a future event occurring in a familiar location, participants spontaneously looked to empty regions of the screen which they had looked at when previously recalling that location. This finding is consistent with previous work demonstrating that when retrieving an encoded stimulus from memory, the eyes spontaneously reproduce the pattern of fixations produced during encoding (for review, see Ferreira et al., 2008; Wynn et al., 2019). Given that our recall task focused primarily on spatial details, the observed gaze reinstatement likely reflects reactivation of spatial contextual details specific to the cued location. Future work can examine this effect further by probing for particular detail types, either during memory or simulation, or by indexing gaze locations as participants describe location features (see Johansson et al., 2006).

Importantly, location-specific gaze reinstatement was not only observed during simulation, but was also predictive of the number of internal details produced. Consistent with the functional gaze reinstatement model (Wynn et al., 2019) and prior evidence of gaze reinstatement predicting mnemonic performance (for review, see Ferreira et al., 2008; Wynn et al., 2019), this finding suggests that eye movements support the simulation of future events via reactivation of a related spatial context from memory. This reinstated context may act as a scaffold upon which further episodic (internal) details can be constructed and elaborated (Hassabis, Kumaran, & Maguire, 2007; Hassabis et al., 2007; Maguire & Mullally, 2013). Supporting this view, a temporally binned analysis indicated that gaze reinstatement and internal details co-fluctuated over the course of simulation, with both measures peaking early. This finding is consistent with the proposed early role of scene construction in facilitating constructive episodic simulation (Hebscher et al., 2018; see also Miller et al., 2013) and with evidence of gaze reinstatement in initial fixations (Brandt & Stark, 1997; Wynn et al., 2016, 2020). As this analysis was novel and somewhat exploratory, further evidence, ideally using time-stamped narrative and gaze data, will help to firmly establish a temporal link between reinstatement of spatial context and simulation. Nonetheless, the current findings suggest that eye movements might facilitate scene construction by reactivating encoded or stored spatial contextual elements, providing a foundation on which internal details can be generated.

Supplementary Material

WynnSchacter_SupplementaryMaterial

Acknowledgements

The authors would like to thank Leticia Sefia, Tomas Winegar, and Aylin Tanriverdi for assistance with recruitment and testing, Ruben van Genugten for assistance with analysis of text data, and Tarek Amer for feedback on earlier drafts of the manuscript. This work was supported by a National Institute on Aging Grant (Grant Number: R01 AG008441) awarded to Daniel Schacter.

References

1. Addis DR, Wong AT, & Schacter DL (2008). Age-related changes in the episodic simulation of future events. Psychological Science, 19(1), 33–41.
2. Armson MJ, Diamond NB, Levesque L, Ryan JD, & Levine B (2019). Vividness of recollection is supported by eye movements in individuals with high, but not low trait autobiographical memory. Memory, 206(March 2020), 1–37.
3. Aydin C (2018). The differential contributions of visual imagery constructs on autobiographical thinking. Memory, 26(2), 189–200.
4. Bates D, Maechler M, & Bolker B (2012). lme4: Linear mixed-effects models using S4 classes. R package version.
5. Benoit RG, & Schacter DL (2015). Specifying the core network supporting episodic simulation and episodic memory by activation likelihood estimation. Neuropsychologia, 75(3), 450–457.
6. Bochynska A, & Laeng B (2015). Tracking down the path of memory: eye scanpaths facilitate retrieval of visuospatial information. Cognitive Processing, 16(1), 159–163.
7. Brandt S, & Stark L (1997). Spontaneous eye movements during visual imagery reflect the content of the visual scene. Journal of Cognitive Neuroscience, 9(1).
8. Burgess N, Maguire EA, & O’Keefe J (2002). The human hippocampus and spatial and episodic memory. Neuron, 35(4), 625–641.
9. Conti F, & Irish M (2021). Harnessing Visual Imagery and Oculomotor Behaviour to Understand Prospection. Trends in Cognitive Sciences, 25(4), 272–283.
10. de Vito S, Buonocore A, Bonnefon JF, & Della Sala S (2015). Eye movements disrupt episodic future thinking. Memory, 23(6), 796–805.
11. Ferreira F, Apel J, & Henderson JM (2008). Taking a new look at looking at nothing. Trends in Cognitive Sciences, 12(11), 405–410.
12. Foulsham T, & Kingstone A (2013). Fixation-dependent memory for natural scenes: an experimental test of scanpath theory. Journal of Experimental Psychology: General, 142(1), 41–56.
13. Hannula DE, Althoff RR, Warren DE, Riggs L, & Cohen NJ (2010). Worth a glance: using eye movements to investigate the cognitive neuroscience of memory. Frontiers in Human Neuroscience, 4, 1–16.
14. Hassabis D, Kumaran D, & Maguire EA (2007). Using imagination to understand the neural basis of episodic memory. The Journal of Neuroscience, 27(52), 14365–14374.
15. Hassabis D, Kumaran D, Vann SD, & Maguire EA (2007). Patients with hippocampal amnesia cannot imagine new experiences. Proceedings of the National Academy of Sciences of the United States of America, 104(5), 1726–1731.
16. Hebscher M, Levine B, & Gilboa A (2018). The precuneus and hippocampus contribute to individual differences in the unfolding of spatial representations during episodic autobiographical memory. Neuropsychologia, 110, 123–133.
17. Holm L, & Mäntylä T (2007). Memory for scenes: refixations reflect retrieval. Memory & Cognition, 35(7), 1664–1674.
18. Johansson R, Holsanova J, & Holmqvist K (2006). Pictures and spoken descriptions elicit similar eye movements during mental imagery, both in light and in complete darkness. Cognitive Science, 30(6), 1053–1079.
19. Johansson R, & Johansson M (2013). Look here, eye movements play a functional role in memory retrieval. Psychological Science, 25(1), 236–242.
20. Johansson R, Nyström M, Dewhurst R, & Johansson M (2022). Eye-movement replay supports episodic remembering. Proceedings of the Royal Society B: Biological Sciences, 289(1976), 20220964.
21. Knoff AW, Andrews-Hanna JR, & Grilli MD (2022). Shape of the past: Revealing detail arcs while narrating memories of autobiographical life events across the lifespan. 10.31234/osf.io/raz4w
22. Kuznetsova A, Brockhoff PB, & Christensen RHB (2017). lmerTest Package: Tests in Linear Mixed Effects Models. Journal of Statistical Software, 82(13). 10.18637/jss.v082.i13
23. Laeng B, Bloem IM, D’Ascenzo S, & Tommasi L (2014). Scrutinizing visual images: The role of gaze in mental imagery and memory. Cognition, 131(2), 263–283.
24. Laeng B, & Teodorescu D-S (2002). Eye scanpaths during visual imagery reenact those of perception of the same visual scene. Cognitive Science, 26(2), 207–231.
25. Lenoble Q, Janssen SMJ, & El Haj M (2019). Don’t stare, unless you don’t want to remember: Maintaining fixation compromises autobiographical memory retrieval. Memory, 27(2), 231–238.
26. Levine B, Svoboda E, Hay JF, Winocur G, & Moscovitch M (2002). Aging and autobiographical memory: Dissociating episodic from semantic retrieval. Psychology and Aging, 17(4), 677–689.
27. Lüdecke D (2023). sjPlot: Data Visualization for Statistics in Social Science. R package version 2.8.15. https://CRAN.R-project.org/package=sjPlot
28. Maguire EA, & Mullally SL (2013). The hippocampus: a manifesto for change. Journal of Experimental Psychology: General, 142(4), 1180–1189.
29. Miller JF, Neufang M, Solway A, Brandt A, Trippel M, Mader I, Hefft S, Merkow M, Polyn SM, Jacobs J, Kahana MJ, & Schulze-Bonhage A (2013). Neural activity in human hippocampal formation reveals the spatial context of retrieved memories. Science, 342(6162), 1111–1114.
30. Morey RD (2008). Confidence Intervals from Normalized Data: A correction to Cousineau (2005). Tutorials in Quantitative Methods for Psychology, 4(2), 61–64.
31. Noton D, & Stark L (1971a). Scanpaths in eye movements during pattern perception. Science, 171(3968), 308–311.
32. Noton D, & Stark L (1971b). Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vision Research, 11(9), 929–IN8.
33. Robin J (2018). Spatial scaffold effects in event memory and imagination. Wiley Interdisciplinary Reviews: Cognitive Science, 9(4), e1462.
34. Robin J, Wynn J, & Moscovitch M (2016). The spatial scaffold: The effects of spatial context on memory for events. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 308–315.
35. Rubin DC, & Umanath S (2015). Event memory: A theory of memory for laboratory, autobiographical, and fictional events. Psychological Review, 122(1), 1–23.
36. Ryan JD, Shen K, Kacollja A, Tian H, Griffiths J, Bezgin G, & McIntosh AR (2019). Modeling the influence of the hippocampal memory system on the oculomotor system. Network Neuroscience, 1–17.
37. Ryan JD, Shen K, & Liu Z-X (2020). The intersection between the oculomotor and hippocampal memory systems: empirical developments and clinical implications. Annals of the New York Academy of Sciences, 1464(1), 115–141.
38. Schacter DL, & Addis DR (2007). The cognitive neuroscience of constructive memory: Remembering the past and imagining the future. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 362(1481), 773–786.
39. Schacter DL, & Addis DR (2020). Memory and imagination: Perspectives on constructive episodic simulation. The Cambridge Handbook of the Imagination, 111–131.
40. Sheldon S, Cool K, & El-Asmar N (2019). The processes involved in mentally constructing event- and scene-based autobiographical representations. Journal of Cognitive Psychology, 31(3), 261–275.
41. Shen K, Bezgin G, Selvam R, McIntosh AR, & Ryan JD (2016). An Anatomical Interface between Memory and Oculomotor Systems. Journal of Cognitive Neuroscience, 28(11), 1772–1783.
42. Spivey MJ, & Geng JJ (2001). Oculomotor mechanisms activated by imagery and memory: Eye movements to absent objects. Psychological Research, 65(4), 235–241.
43. van Ede F, Chekroud SR, & Nobre AC (2019). Human gaze tracks attentional focusing in memorized visual space. Nature Human Behaviour, 27–29.
44. van Genugten RDI, & Schacter DL (2024). Automated scoring of the Autobiographical Interview with natural language processing. Behavior Research Methods. 10.3758/s13428-023-02145-x
45. Wynn JS, Bone MB, Dragan MC, Hoffman KL, Buchsbaum BR, & Ryan JD (2016). Selective scanpath repetition during memory-guided visual search. Visual Cognition, 24(1), 15–37.
46. Wynn JS, Ryan JD, & Buchsbaum BR (2020). Eye movements support behavioral pattern completion. Proceedings of the National Academy of Sciences of the United States of America, 53(9), 1689–1699.
47. Wynn JS, Shen K, & Ryan JD (2019). Eye Movements Actively Reinstate Spatiotemporal Mnemonic Content. Vision, 3(2), 21.
48. Wynn JS, van Genugten RDI, Sheldon S, & Schacter DL (2022). Schema-related eye movements support episodic simulation. Consciousness and Cognition, 100, 103302.
